Lecture Notes in Networks and Systems 852
Pandian Vasant · Mohammad Shamsul Arefin · Vladimir Panchenko · J. Joshua Thomas · Elias Munapo · Gerhard-Wilhelm Weber · Roman Rodriguez-Aguilar Editors
Intelligent Computing and Optimization Proceedings of the 6th International Conference on Intelligent Computing and Optimization 2023 (ICO2023), Volume 2
Lecture Notes in Networks and Systems
852
Series Editor Janusz Kacprzyk , Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Advisory Editors Fernando Gomide, Department of Computer Engineering and Automation—DCA, School of Electrical and Computer Engineering—FEEC, University of Campinas—UNICAMP, São Paulo, Brazil Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University, Istanbul, Türkiye Derong Liu, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, USA Institute of Automation, Chinese Academy of Sciences, Beijing, China Witold Pedrycz, Department of Electrical and Computer Engineering, University of Alberta, Alberta, Canada Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland Marios M. Polycarpou, Department of Electrical and Computer Engineering, KIOS Research Center for Intelligent Systems and Networks, University of Cyprus, Nicosia, Cyprus Imre J. Rudas, Óbuda University, Budapest, Hungary Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong
The series “Lecture Notes in Networks and Systems” publishes the latest developments in Networks and Systems—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNNS. Volumes published in LNNS embrace all aspects and subfields of, as well as new challenges in, Networks and Systems. The series contains proceedings and edited volumes in systems and networks, spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and other. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution and exposure which enable both a wide and rapid dissemination of research output. The series covers the theory, applications, and perspectives on the state of the art and future developments relevant to systems and networks, decision making, control, complex processes and related areas, as embedded in the fields of interdisciplinary and applied sciences, engineering, computer science, physics, economics, social, and life sciences, as well as the paradigms and methodologies behind them. Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science. For proposals from Asia please contact Aninda Bose ([email protected]).
Editors Pandian Vasant Faculty of Electrical and Electronics Engineering, Modeling Evolutionary Algorithms Simulation and Artificial Intelligence Ton Duc Thang University Ho Chi Minh City, Vietnam Vladimir Panchenko Laboratory of Non-traditional Energy Systems, Department of Theoretical and Applied Mechanics, Federal Scientific Agroengineering Center VIM Russian University of Transport Moscow, Russia
Mohammad Shamsul Arefin Department of Computer Science Chittagong University of Engineering and Technology Chittagong, Bangladesh J. Joshua Thomas Department of Computer Science UOW Malaysia KDU Penang University College George Town, Malaysia Gerhard-Wilhelm Weber Faculty of Engineering Management Poznań University of Technology Poznan, Poland
Elias Munapo School of Economics and Decision Sciences North West University Mmabatho, South Africa Roman Rodriguez-Aguilar Facultad de Ciencias Económicas y Empresariales, School of Economic and Business Sciences Universidad Panamericana Mexico City, Mexico
ISSN 2367-3370 ISSN 2367-3389 (electronic) Lecture Notes in Networks and Systems ISBN 978-3-031-50329-0 ISBN 978-3-031-50330-6 (eBook) https://doi.org/10.1007/978-3-031-50330-6 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland Paper in this product is recyclable.
Preface
The sixth edition of the International Conference on Intelligent Computing and Optimization (ICO'2023) was held during April 27–28, 2023, at G Hua Hin Resort and Mall, Hua Hin, Thailand. The objective of the international conference is to bring together global research scholars, experts and scientists in the research areas of intelligent computing and optimization from all over the world to share their knowledge and experiences on the current research achievements in these fields. This conference provides a golden opportunity for the global research community to interact and share their novel research results, findings and innovative discoveries among their colleagues and friends. The proceedings of ICO'2023 is published by SPRINGER (in the book series Lecture Notes in Networks and Systems) and indexed by SCOPUS. Almost 70 authors submitted their full papers for the 6th ICO'2023. They represent more than 30 countries, such as Australia, Bangladesh, Bhutan, Botswana, Brazil, Canada, China, Germany, Ghana, Hong Kong, India, Indonesia, Japan, Malaysia, Mauritius, Mexico, Nepal, the Philippines, Russia, Saudi Arabia, South Africa, Sri Lanka, Thailand, Turkey, Ukraine, UK, USA, Vietnam, Zimbabwe and others. This worldwide representation clearly demonstrates the growing interest of the global research community in our conference series. The organizing committee would like to sincerely thank all the authors and the reviewers for their wonderful contribution to this conference. The best high-quality papers will be selected and reviewed by the International Program Committee in order to publish extended versions in international journals indexed by SCOPUS and ISI WoS. This conference could not have been organized without the strong support and help from LNNS SPRINGER NATURE, EasyChair, IFORS and the Committee of ICO'2023. We would like to sincerely thank Prof. Roman Rodriguez-Aguilar (Universidad Panamericana, Mexico), Prof. Mohammad Shamsul Arefin (Daffodil International University, Bangladesh), Prof. Elias Munapo (North West University, South Africa) and Prof. José Antonio Marmolejo Saucedo (National Autonomous University of Mexico, Mexico) for their great help and support for this conference. We also appreciate the wonderful guidance and support from Dr. Sinan Melih Nigdeli (Istanbul University—Cerrahpaşa, Turkey), Dr. Marife Rosales (Polytechnic University of the Philippines, Philippines), Prof. Rustem Popa (Dunarea de Jos University, Romania), Prof. Igor Litvinchev (Nuevo Leon State University, Mexico), Dr. Alexander Setiawan (Petra Christian University, Indonesia), Dr. Kreangkri Ratchagit (Maejo University, Thailand), Dr. Ravindra Boojhawon (University of Mauritius, Mauritius), Prof. Mohammed Moshiul Hoque (CUET, Bangladesh), Er. Aditya Singh (Lovely Professional University, India), Dr. Dmitry Budnikov (Federal Scientific Agroengineering Center VIM, Russia), Dr. Deepanjal Shrestha (Pokhara University, Nepal), Dr. Nguyen Tan Cam (University of Information Technology, Vietnam) and Dr. Thanh Dang Trung (Thu Dau Mot University, Vietnam). The ICO'2023 committee would like to sincerely thank all the authors, reviewers, keynote speakers (Prof. Roman Rodriguez-Aguilar,
Prof. Kaushik Deb, Prof. Rolly Intan, Prof. Francis Miranda, Dr. Deepanjal Shrestha, Prof. Sunarin Chanta), plenary speakers (Prof. Celso C. Ribeiro, Prof. José Antonio Marmolejo, Dr. Tien Anh Tran), session chairs and participants for their outstanding contribution to the success of the 6th ICO'2023 in Hua Hin, Thailand. Finally, we would like to sincerely thank Prof. Dr. Janusz Kacprzyk, Dr. Thomas Ditzinger, Dr. Holger Schaepe and Ms. Varsha Prabakaran of LNNS SPRINGER NATURE for their great support, motivation and encouragement in making this event successful on the global stage. April 2023
Dr. Pandian Vasant (Chair) Prof. Dr. Gerhard-Wilhelm Weber Prof. Dr. Mohammad Shamsul Arefin Prof. Dr. Roman Rodriguez-Aguilar Dr. Vladimir Panchenko Prof. Dr. Elias Munapo Dr. J. Joshua Thomas
Contents
Artificial Intelligence, Metaheuristics, and Optimization

Application of Metaheuristic Algorithms and Their Combinations to Travelling Salesman Problem (Yinhao Liu, Xu Chen, and Omar Dib) ..... 3

Evolutionary Optimization of Entanglement Distillation Using Chialvo Maps (Timothy Ganesan, Roman Rodriguez-Aguilar, José Antonio Marmolejo-Saucedo, and Pandian Vasant) ..... 19

Optimization of Seismic Isolator Systems via Teaching-Learning Based Optimization (Ayla Ocak, Gebrail Bekdaş, and Sinan Melih Nigdeli) ..... 27

TOPSIS Based Optimization of Laser Surface Texturing Process Parameters (Satish Pradhan, Ishwer Shivakoti, Manish Kumar Roy, and Ranjan Kumar Ghadai) ..... 37

PCBA Solder Vision Inspection Using Machine Vision Algorithm and Optimization Process (Aries Dayrit, Robert De Luna, and Marife Rosales) ..... 43

AI-Based Air Cooling System in Data Center (Shamsun Nahar Zaman, Nadia Hannan Sharme, Rehnuma Naher Sumona, Md. Jekrul Islam, Ahmed Wasif Reza, and Mohammad Shamsul Arefin) ..... 53

Reinforced Concrete Beam Optimization via Flower Pollination Algorithm by Changing Switch Probability Parameter (Yaren Aydın, Gebrail Bekdaş, and Sinan Melih Nigdeli) ..... 66

Cost Prediction Model Based on Hybridization of Artificial Neural Network with Nature Inspired Simulated Annealing Algorithm (Vijay Kumar, Sandeep Singla, and Aarti Bansal) ..... 75

Optimum Design of Reinforced Concrete Footings Using Jaya Algorithm (Hani Kerem Türkoğlu, Gebrail Bekdaş, and Sinan Melih Nigdeli) ..... 86

AI Models for Spot Electricity Price Forecasting—A Review (G. P. Girish, Rahul Bhagat, S. H. Preeti, and Sweta Singh) ..... 97

Comparison of Various Weight Allocation Methods for the Optimization of EDM Process Parameters Using TOPSIS (Sunil Mintri, Gaurav Sapkota, Nameer Khan, Soham Das, Ishwer Shivakoti, and Ranjan Kumar Ghadai) ..... 104

Assessment of the Outriggers and Their Stiffness in a Tall Building Using Multiple Response Spectrum (Shashank Dwivedi, Ashish Kumar, and Sandeep Singla) ..... 114

The Role of Artificial Intelligence in Art: A Comprehensive Review of a Generative Adversarial Network Portrait Painting (Sunanda Rani, Dong Jining, Dhaneshwar Shah, Siyanda Xaba, and Prabhat Ranjan Singh) ..... 126

Introducing Set-Based Regret for Online Multiobjective Optimization (Kristen Savary and Margaret M. Wiecek) ..... 136

The Concept of Optimizing the Operation of a Multimodel Real-Time Federated Learning System (Elizaveta Tarasova) ..... 147

Ambulance Priority Dispatch Under Multi-Tiered Response by Simulation (Kanchala Sudtachat and Nitirut Phongsirimethi) ..... 157

Seeds Classification Using Deep Neural Network: A Review (Hafiz Al Fahim, Md. Abid Hasan, Md. Hasan Imam Bijoy, Ahmed Wasif Reza, and Mohammad Shamsul Arefin) ..... 168

Agrophytocenosis Development Analysis and Computer Monitoring Software Complex Based on Microprocessor Hardware Platforms (K. Tokarev, N. Lebed, Yu. Daus, and V. Panchenko) ..... 183

Reducing Fish Ball's Setting Process Time by Considering the Quality of the Product (Maria Shappira Joever Pranata and Debora Anne Yang Aysia) ..... 192

Optimal Fire Stations for Industrial Plants (Ornurai Sangsawang and Sunarin Chanta) ..... 201

Optimisation of a Sustainable Biogas Production from Oleochemical Industrial Wastewater (Mohd Faizan Jamaluddin, Kenneth Tiong Kim Yeoh, Chee Ming Choo, Marie Laurina Emmanuelle Laurel-Angel Guillaume, Lik Yin Ng, and Vui Soon Chok) ..... 209

Ensemble Approach for Optimizing Variable Rigidity Joints in Robotic Manipulators Using MOALO-MODA (G. Shanmugasundar, Subham Pal, Jasgurpeet Singh Chohan, and Kanak Kalita) ..... 216

Future Design: An Analysis of the Impact of AI on Designers' Workflow and Skill Sets (Kshetrimayum Dideshwor Singh and Yi Xi Duo) ..... 225

Clean Energy, Agro-Farming, and Smart Transportation

Application of Remote Sensing to Assess the Ability to Absorb Carbon Dioxide of Green Areas in Thu Dau Mot City, Binh Duong Province, Vietnam (Nguyen Huynh Anh Tuyet, Nguyen Hien Than, and Dang Trung Thanh) ..... 237

Efficient Cooling System of Cloud Data Center by Reducing Energy Consumption (Nazia Tabassum Natasha, Imran Fakir, Sabiha Afsana Falguni, Faria Tabassum Mim, Syed Ridwan Ahmed Fahim, Ahmed Wasif Reza, and Mohammad Shamsul Arefin) ..... 248

Reducing Energy Usage and Cost for Environmentally Conscious Cooling Infrastructure (Md. Jakir Hossain, Fardin Rahman Akash, Sabbir Ahmed, Mohammad Rifat Sarker, Ahmed Wasif Reza, and Mohammad Shamsul Arefin) ..... 262

Investigation of the Stimulating Effect of the EHF Range Electromagnetic Field on the Sowing Qualities of Vegetable Seeds (P. Ishkin, D. Klyuev, O. Osipov, Yu. Sokolova, A. Kuzmenko, Yu. Daus, and V. Panchenko) ..... 272

Prototype Development of Electric Vehicle Database Platform Using Serverless and Microservice Architecture with Intelligent Data Analytics (Somporn Sirisumrannukul, Nattavit Piamvilai, Sirawich Limprapassorn, Tanachot Wattanakitkarn, and Touchakorn Loedchayakan) ..... 282

A Cost-Effective and Energy-Efficient Approach to Workload Distribution in an Integrated Data Center (Obaida Jahan, Nighat Zerin, Nahian Nigar Siddiqua, Ahmed Wasif Reza, and Mohammad Shamsul Arefin) ..... 295

Optimization of the Collection and Concentration of Allelopathically Active Plants Root Secretions (A. N. Skorokhodova, Alexander A. Anisimov, Julia Larikova, D. M. Skorokhodov, and O. M. Mel'nikov) ..... 308

Potential Application of Phosphorus-Containing Micronutrient Complexates in Hydroponic System Nutrient Solutions (E. A. Nikulina, N. V. Tsirulnikova, N. A. Semenova, M. M. Godyaeva, and S. V. Akimova) ..... 317

Impact Analysis of Rooftop Solar Photovoltaic Systems in Academic Buildings (Pranta Nath Nayan, Amir Khabbab Ahammed, Abdur Rahman, Fatema Tuj Johora, Ahmed Wasif Reza, and Mohammad Shamsul Arefin) ..... 325

A Policy Framework for Cost Effective Production of Electricity Using Renewable Energy (Sazzad Hossen, Rabeya Islam Dola, Tohidul Haque Sagar, Sharmin Islam, Ahmed Wasif Reza, and Mohammad Shamsul Arefin) ..... 338

Technology of Forced Ventilation of Livestock Premises Based on Flexible PVC Ducts (Igor M. Dovlatov, Sergey S. Yurochka, Dmitry Y. Pavkin, and Alexandra A. Polikanova) ..... 353

Author Index ..... 361
About the Editors
Pandian Vasant is a Research Associate at Modeling Evolutionary Algorithms Simulation and Artificial Intelligence, Faculty of Electrical and Electronics Engineering, Ton Duc Thang University, Ho Chi Minh City, Vietnam, and Editor in Chief of the International Journal of Energy Optimization and Engineering (IJEOE). He holds a Ph.D. in Computational Intelligence (UNEM, Costa Rica), an M.Sc. in Engineering Mathematics (University Malaysia Sabah, Malaysia) and a B.Sc. (Hons, Second Class Upper) in Mathematics (University of Malaya, Malaysia). His research interests include soft computing, hybrid optimization, innovative computing and applications. He has co-authored research articles in journals, conference proceedings and book chapters (312 publications indexed in ResearchGate), served as Guest Editor of special issues, and was General Chair of the EAI International Conference on Computer Science and Engineering in Penang, Malaysia (2016) and Bangkok, Thailand (2018). In the years 2009 and 2015, he was named a top reviewer and an outstanding reviewer for the journal Applied Soft Computing (Elsevier). He has 35 years of working experience at universities. Currently, Dr. Pandian Vasant is General Chair of the International Conference on Intelligent Computing and Optimization (https://www.icico.info/) and Research Associate at Modeling Evolutionary Algorithms Simulation and Artificial Intelligence, Faculty of Electrical and Electronics Engineering, Ton Duc Thang University, HCMC, Vietnam. Professor Dr. Mohammad Shamsul Arefin is on lien from Chittagong University of Engineering and Technology (CUET), Bangladesh and currently affiliated with the Department of Computer Science and Engineering (CSE), Daffodil International University (DIU), Dhaka, Bangladesh. Earlier he was the head of the CSE Department, CUET. Prof. Arefin received his Doctor of Engineering Degree in Information Engineering from Hiroshima University, Japan, with the support of a MEXT scholarship. As a part of his doctoral research, Dr. Arefin was with IBM Yamato Software Laboratory, Japan. His research includes data privacy and mining, big data management,
IoT, cloud computing, natural language processing, image information processing, social network analysis, recommendation systems, and IT for agriculture, education and the environment. Prof. Arefin is the Editor in Chief of the Computer Science and Engineering Research Journal (ISSN: 1990-4010), was the Associate Editor of the BCS Journal of Computer and Information Technology (ISSN: 2664-4592), and is a reviewer as well as TPC member of many international journals and conferences. Dr. Arefin has more than 120 refereed publications in international journals, book series and conference proceedings. He delivered more than 30 keynote speeches/invited talks. He also received a good number of research grants/funds from home and abroad. Dr. Arefin is a senior member of IEEE, Member of ACM, Fellow of IEB and BCS. Prof. Arefin is or has been involved in many professional activities, such as Chairman of Bangladesh Computer Society (BCS) Chittagong Branch; Vice-President (Academic) of BCS National Committee; Executive Committee Member of IEB Computer Engineering Division; Advisor, Bangladesh Robotic Foundation. He was also a member of the pre-feasibility study team of the CUET IT Business Incubator, the first campus-based IT Business Incubator in Bangladesh. Prof. Arefin is a Principal Editor of the Lecture Notes on Data Engineering and Communications Technologies book series (LNDECT, Volume 95) published by Springer and an editor of the books on Applied Informatics for Industry 4.0, Applied Intelligence for Industry 4.0 and Computer Vision and Image Analysis for Industry 4.0, to be published by Taylor and Francis. Prof. Arefin is the Vice-Chair (Technical) of IEEE CS BDC for the year 2022. He was the Vice-Chair (Activity) of IEEE CS BDC for the year 2021 and the Conference Co-Coordinator of IEEE CS BDC for two consecutive years, 2018 and 2019. He is acting as a TPC Chair of MIET 2022 and the TPC Chair of IEEE Summer Symposium 2022. He was the Organizing Chair of the International Conference on Big Data, IoT and Machine Learning (BIM 2021) and the National Workshop on Big Data and Machine Learning (BDML 2020). He served as the TPC Chair, International Conference on Electrical, Computer and Communication Engineering (ECCE 2017); Organizing Co-chair, ECCE 2019; Technical Co-chair, IEEE CS BDC Winter Symposium 2020; and Technical Secretary, IEEE CS BDC Winter Symposium 2021. Dr. Arefin has helped different international conferences
in the form of track chair, TPC member, reviewer and/or session chair. He is a reviewer of many reputed journals, including IEEE Access, Computing and Informatics, ICT Express, and Cognitive Computation. Dr. Arefin visited Japan, Indonesia, Malaysia, Bhutan, Singapore, South Korea, Egypt, India, Saudi Arabia and China for different professional and social activities. Vladimir Panchenko is an Associate Professor of the Department of Theoretical and Applied Mechanics of the Russian University of Transport, Senior Researcher of the Laboratory of Non-traditional Energy Systems of the Federal Scientific Agroengineering Center VIM, and a teacher of additional education. He graduated from the Bauman Moscow State Technical University in 2009 with the qualification of an engineer and defended his Ph.D. thesis in the specialty "Power plants based on renewable energy" in 2013. From 2014 to 2016 he was Chairman of the Council of Young Scientists and Member of the Academic Council of the All-Russian Institute for Electrification of Agriculture; he is a Member of the Council of Young Scientists of the Russian University of Transport, Member of the International Solar Energy Society, individual supporter of Greenpeace and the World Wildlife Fund, Member of the Russian Geographical Society, Member of the Youth section of the Council "Science and Innovations of the Caspian Sea", and Member of the Committee on the use of renewable energy sources of the Russian Union of Scientific and Engineering Public Associations. His distinctions include diplomas as winner of the competition of works of young scientists of the All-Russian Scientific Youth School with international participation "Renewable Energy Sources" at Lomonosov Moscow State University in 2012, 2014, 2018 and 2020; a Diploma with a bronze medal of the 15th Russian agro-industrial exhibition "Golden Autumn—2013"; a Diploma with a gold medal of the 18th Russian agro-industrial exhibition "Golden Autumn—2016"; a Diploma with a silver medal of the XIX Moscow International Salon of Inventions and Innovative Technologies "Archimedes—2016"; and a Diploma for guiding schoolchildren who achieved high results in significant events of the Department of Education and Science of the City of Moscow (2020–2021, School No. 2045). He was the scientific adviser of schoolchildren winners and prize-winners of the Project and Research Competition "Engineers of the Future" at NUST MISiS 2021 and
RTU MIREA 2022. He was an invited expert of the projects of the final stages of the "Engineers of the Future" (2021, 2022) and the projects of the "Transport of the Future" (2022, Russian University of Transport). He received the grant "Young teacher of MIIT" after competitive selection in accordance with the Regulations on grants for young teachers of MIIT (2016–2019), the Scholarship of the President of the Russian Federation for 2018–2020 for young scientists and graduate students carrying out promising research and development in priority areas of modernization of the Russian economy, and a Grant of the Russian Science Foundation 2021 "Conducting fundamental scientific research and exploratory scientific research by international research teams". He is a reviewer of articles, chapters and books for IGI, Elsevier, Institute of Physics Publishing, the International Journal of Energy Optimization and Engineering, Advances in Intelligent Systems and Computing, the Journal of the Operations Research Society of China, Applied Sciences, Energies, Sustainability, AgriEngineering, Ain Shams Engineering Journal, and Concurrency and Computation: Practice and Experience. He was a presenter of the sections of the Innovations in Agriculture conference, keynote speaker of the ICO 2019 conference session, and keynote speaker of the special session of the ICO 2020 conference. He has been Assistant Editor since 2019 of the International Journal of Energy Optimization and Engineering, Guest Editor since 2019 of Special Issues of the MDPI (Switzerland) journal Applied Sciences, and Editor of a book for IGI Global (USA) as well as a book for Nova Science Publishers (USA). He has participated in more than 100 exhibitions and conferences of various levels and published more than 250 scientific papers, including 14 patents, 1 international patent, 6 educational publications, and 4 monographs. J. Joshua Thomas has been an Associate Professor at UOW Malaysia KDU Penang University College, Malaysia since 2008. He obtained his Ph.D. (Intelligent Systems Techniques) in 2015 from Universiti Sains Malaysia, Penang, and his Master's degree in 1999 from Madurai Kamaraj University, India. From July to September 2005, he worked as a research assistant at the Artificial Intelligence Lab in Universiti Sains Malaysia. From March 2008 to March 2010, he worked as a research associate at the same university. Currently, he is working with Machine Learning, Big Data, Data Analytics, and Deep Learning, especially targeting
Convolutional Neural Networks (CNN) and Bi-directional Recurrent Neural Networks (RNN) for image tagging with embedded natural language processing, end-to-end steering learning systems, and GANs. His work involves experimental research with software prototypes and mathematical modelling and design. He is an editorial board member for the International Journal of Energy Optimization and Engineering (IJEOE), and invited guest editor for the Journal of Visual Languages and Computing (JVLC, Elsevier) and, more recently, for Computer Methods and Programs in Biomedicine (Elsevier). He has published more than 40 papers in leading international conference proceedings and peer-reviewed journals. Elias Munapo has a Ph.D. obtained in 2010 from the National University of Science and Technology (Zimbabwe) and is a Professor of Operations Research at the North West University, Mafikeng Campus, in South Africa. He is a Guest Editor of the Applied Sciences journal and has co-published two books. The first book is titled Some Innovations in OR Methodology: Linear Optimization and was published by Lambert Academic Publishing in 2018. The second book is titled Linear Integer Programming: Theory, Applications, and Recent Developments and was published by De Gruyter in 2021. Professor Munapo has co-edited a number of books, is currently a reviewer of a number of journals, and has published over 100 journal articles and book chapters. In addition, Prof. Munapo is a recipient of the North West University Institutional Research Excellence award and is a member of the Operations Research Society of South Africa (ORSSA), EURO, and IFORS. He has presented at both local and international conferences and has supervised more than 10 doctoral students to completion. His research interests are in the broad area of operations research.
Gerhard-Wilhelm Weber is a Professor at Poznań University of Technology, Poznan, Poland, at the Faculty of Engineering Management. His research is on mathematics, statistics, operational research, data science, machine learning, finance, economics, optimization, optimal control, management, neuro-, bio- and earth-sciences, medicine, logistics, development, cosmology and generalized spacetime research. He is involved in the organization of scientific life internationally. He received his Diploma and Doctorate in Mathematics and Economics/Business Administration at RWTH Aachen, and his Habilitation at TU Darmstadt (Germany). He held substitute Professorships at the University of Cologne and TU Chemnitz, Germany. At the Institute of Applied Mathematics, Middle East Technical University, Ankara, Turkey, he was a Professor in Financial Mathematics and Scientific Computing and Assistant to the Director, and has been a member of five further graduate schools, institutes and departments of METU. G.-W. Weber has affiliations at the Universities of Siegen (Germany), Federation University (Ballarat, Australia), University of Aveiro (Portugal), University of North Sumatra (Medan, Indonesia), Malaysia University of Technology, Chinese University of Hong Kong, KTO Karatay University (Konya, Turkey), Vidyasagar University (Midnapore, India), Mazandaran University of Science and Technology (Babol, Iran), Istinye University (Istanbul, Turkey), and the Georgian International Academy of Sciences, as well as at EURO (Association of European OR Societies), where he is "Advisor to EURO Conferences", and IFORS (International Federation of OR Societies). He is a member of many national OR societies, honorary chair of some EURO working groups, subeditor of the IFORS Newsletter, and member of the IFORS Developing Countries Committee and of the Pacific Optimization Research Activity Group. G.-W. Weber has supervised many M.Sc. and Ph.D. students, authored and edited numerous books and articles, and given many presentations from a diversity of areas, in theory, methods and practice. He has been a member of many international editorial, special issue and award boards; he has participated in numerous research projects; and he has received various recognitions from students, universities, conferences and scientific organizations. G.-W. Weber is an IFORS Fellow.
Roman Rodriguez-Aguilar is a professor in the School of Economic and Business Sciences of the Universidad Panamericana in Mexico. His research is on large-scale mathematical optimization, evolutionary computation, data science, statistical modeling, health economics, energy, competition, and market regulation. He is particularly interested in topics related to artificial intelligence, digital transformation, and Industry 4.0. He received his Ph.D. at the School of Economics at the National Polytechnic Institute, Mexico. He also has a master's degree in Engineering from the School of Engineering at the National University of Mexico (UNAM), a master's degree in Administration and Public Policy from the School of Government and Public Policy at Monterrey Institute of Technology and Higher Education, a postgraduate degree in applied statistics from the Research Institute in Applied Mathematics and Systems of the UNAM, and his degree in Economics from the UNAM. Prior to joining Panamericana University, he worked as a specialist in economics, statistics, simulation, finance, and optimization, occupying different management positions in various public entities such as the Ministry of Energy, Ministry of Finance, and Ministry of Health. At present, he has the second-highest country-wide distinction granted by the Mexican National System of Research Scientists for scientific merit (SNI Fellow, Level 2). He has co-authored research articles in Science Citation Index journals, conference proceedings, presentations, and book chapters.
Artificial Intelligence, Metaheuristics, and Optimization
Application of Metaheuristic Algorithms and Their Combinations to Travelling Salesman Problem

Yinhao Liu, Xu Chen, and Omar Dib
Department of Computer Science, Wenzhou-Kean University, Wenzhou, China
{liuyinh,chexu,odib}@kean.edu
Abstract. This paper presents four classical metaheuristic algorithms: Genetic Algorithm (GA), Ant Colony Optimization (ACO), Simulated Annealing (SA), and Tabu Search (TS) for solving the Travelling Salesman Problem (TSP). In addition, this paper introduces two novel hybrid approaches: SAACO, based on Ant Colony Optimization and Simulated Annealing, and TSACO, based on Ant Colony Optimization and Tabu Search. To compare the efficiency of the considered algorithms and to verify the effectiveness of the novel hybrid algorithms, this paper uses ten well-known benchmark instances from TSPLIB; the instances are of variable difficulty and size, ranging from 70 to 783 nodes with different node topologies. Assessment criteria involve computational time, fitness value, convergence speed, and robustness of algorithms. The experimental results show that the hybrid algorithms overcome the limitations of the individual algorithms, namely slow convergence and entrapment in local optima. Moreover, the hybrid algorithms achieve better fitness values than the standalone GA, ACO, and TS in most simulations. Keywords: Traveling salesman problem · Ant colony optimization · Simulated annealing · Tabu search · Hybrid metaheuristics
1 Introduction

The Traveling Salesman Problem (TSP) is one of the most famous problems in combinatorial optimization due to its easy formulation, theoretical significance, and many applications. The problem can be formulated as follows: "Given a list of nodes (e.g., cities) and a cost metric (e.g., the distance) between each pair of nodes, find the shortest possible route that visits each node exactly once and returns to the origin node." This problem has significant implications for theoretical computer science and operations research as it is an NP-hard problem. Furthermore, solving the TSP substantially reduces costs in various areas such as chip manufacturing, task scheduling, DNA sequencing, and path planning [12]. The worst-case time complexity of an optimal TSP algorithm is known to grow exponentially with the number of cities [28]. Despite the easy formulation of the problem, so far there is no algorithm with polynomial time complexity to solve the TSP optimally.
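For concreteness, this statement admits the standard permutation formulation (the notation below is ours, not printed in the original):

```latex
\min_{\pi \in \Pi_n} \; \sum_{i=1}^{n-1} d\left(c_{\pi(i)}, c_{\pi(i+1)}\right) \;+\; d\left(c_{\pi(n)}, c_{\pi(1)}\right),
```

where c_1, ..., c_n are the nodes, d(·,·) is the cost metric between a pair of nodes, and Π_n is the set of all permutations of {1, ..., n}.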
Due to its NP-hard nature, solving the TSP is computationally challenging, especially when the problem's input size is large. Hence, due to the problem's requirements, such as the large input size or the real-time decision-making of some industrial applications, relying on exact algorithms has not been a feasible option. Instead, there have been unceasing efforts to apply approximate approaches such as heuristic or metaheuristic algorithms to compute near-optimal solutions for complex TSP instances in a reasonable amount of running time. Metaheuristics are powerful and versatile; however, ensuring their high performance requires careful design and cautious selection of hyperparameters [8]. From a design perspective, metaheuristics can be classified into population-based and single solution-based algorithms. The former follows a collective approach, with a set of solutions working together to tackle the problem's search space [7]. The latter, oppositely, evolves and improves one initial solution via a local search strategy until convergence or a stopping criterion is met. Neither of the two categories is superior for solving all types of optimization problems or handling all instances. Intrinsically, a single-solution algorithm often has a better exploitation ability, while a population-based one has a superior exploration feature. In this article, we consider the Genetic Algorithm (GA) and Ant Colony Optimization (ACO) among the population-based algorithms and Simulated Annealing (SA) and Tabu Search (TS) among the single-solution methods. Furthermore, we introduce two novel hybrid approaches combining ACO with SA and TS. We exploit the prominent exploration feature of collective search and the exploitation characteristic of individual-based methods. We aim to demonstrate that hybrid algorithms can effectively improve the accuracy of the ant colony optimization algorithm for the TSP. The rest of the paper is organized as follows. In Sect. 2, we present the state of the art related to the studied problem and the applied algorithms. In Sect. 3, we present the novel hybrid approaches SAACO and TSACO. We discuss the experimental studies, including the experimental setup, test instances, performance metrics, and results analysis, in Sect. 4. Finally, we highlight conclusions and future works in Sect. 5.
2 Related Work

The TSP has received significant attention from the exact and heuristic communities. Many algorithms have been proposed, such as linear programming techniques, which have been extensively exploited to deal with small and medium size instances. For example, the Concorde solver [3] has successfully solved a substantial TSP instance, including 85,900 cities, available in the well-known TSPLIB [25]. Due to its NP-hard nature, many researchers have applied heuristic and metaheuristic approaches to solving TSP instances. For instance, the authors of [13] surveyed six heuristic algorithms, including Nearest Neighbor (NN), Genetic Algorithm (GA), Simulated Annealing (SA), Tabu Search (TS), Ant Colony Optimization (ACO), and Tree Physiology Optimization (TPO), for solving several small TSPLIB instances. Through
simulations, the authors indicated that the computation times of TS and ACO are six and three times longer than those of the other algorithms, respectively. Among the fastest algorithms are NN, followed by TPO and GA. In the statistical comparison, the authors indicated that TS, TPO, and GA could obtain higher-quality solutions than the other algorithms. In [6], the authors introduced SCGA, a hybrid Cellular Genetic Algorithm with Simulated Annealing (SA), to solve thirteen small instances from TSPLIB. SCGA is motivated by the GA's fast convergence and insufficient optimization precision. Compared to a classical GA and SA, results show that SCGA shortens the distance by a mean of 7% and has good robustness. Similarly, the authors of [9] introduced a new optimization model named the multiple bottlenecks TSP (MBTSP) to handle various salesmen and tasks, and they proposed a novel hybrid genetic algorithm (VNSGA) with variable neighborhood search (VNS) for multi-scale MBTSP. The experiments show that VNSGA can achieve better solution quality than the state-of-the-art algorithms for MBTSP problems, demonstrating hybrid methods' superiority. Differential evolution techniques have substantially impacted the quality of solutions compared to other evolutionary methods. Those methods have been essentially used to deal with continuous optimization problems. However, there have been many pertinent attempts to devise differential evolution methods for combinatorial optimization problems, such as the TSP. For example, the authors in [2] introduced a novel discrete differential evolution algorithm for improving the performance of the standard differential evolution algorithm for TSP. The authors used a mapping mechanism between continuous and discrete variables, a k-means method to enhance the initial population, and an ensemble of mutation strategies to increase diversity. Interestingly, the approach was compared with 27 state-of-the-art algorithms for solving 45 TSP instances of different sizes. The experimental results demonstrated the efficiency of the approach in terms of the average error to the optimal solution. In [4], the authors studied a new TSP variant called the profitable tour problem (PTP), which maximizes the total profit minus the total travel cost. The paper proposed three methods, including a multi-start hyper-heuristic (MSHH), a multi-start iterated local search (MS-ILS), and a multi-start general VNS (MS-GVNS), to solve the PTP. MSHH uses eight low-level heuristics, whereas MS-ILS and MS-GVNS use five different neighborhoods for local search. A set of TSPLIB instances was solved to prove the effectiveness of the various combinations. Nature-based metaheuristics have been very popular recently in dealing with large-scale optimization problems [24]. For example, the authors of [15] proposed an improved Artificial Bee Colony algorithm with multiple update rules and a K-opt operation to solve the TSP. The authors used eight rules to update solutions in the algorithm via an employed bee or an onlooker bee. The proposed method was tested on benchmark problems from TSPLIB, and it is observed that the algorithm's efficiency is adequate concerning the accuracy and consistency of solving standard TSPs. In addition, the authors of [30] presented a discrete Pigeon-inspired optimization (PIO) algorithm (DPIO), which uses the
Metropolis acceptance criterion of simulated annealing for the TSP. To enhance exploration and exploitation ability, the authors proposed a new map-and-compass operator with comprehensive learning ability and a landmark operator with cooperative learning ability to learn from the heuristic information. Systematic experiments were performed on 33 large-scale TSP instances from TSPLIB, and the results validated the advantage of DPIO compared with most state-of-the-art meta-heuristic algorithms. Moreover, the authors of [20] solved the rich vehicle routing problem (RVRP) using a Discrete and Improved Bat Algorithm (DaIBA). Two neighborhood structures were used and explored depending on the bat's distance with respect to the best individual of the swarm. DaIBA was compared with evolutionary simulated annealing and a firefly algorithm. Based on statistical analysis and a benchmark of 24 datasets from 60 to 1000 customers, the authors concluded that the proposed DaIBA is a promising technique for addressing the RVRP. Ant Colony Optimization (ACO) has been a prevalent method for solving many variants of optimization problems. However, like GAs, ACO tends to fall into local minima prematurely. Therefore, there have been many attempts to improve ACO through various hybridization techniques. In [27], a rank-based ant system (ASrank) was proposed for the TSP. In ASrank, the ant agents that have found elite solutions are selected to update the pheromone on a specific route. As a result, the computational time is reduced, but the algorithm becomes more prone to falling into a local solution due to the concentration of pheromones on a specific route. The authors proposed a new ant system based on individual memories (ASIM) to improve diversity. Another ACO-based hybrid approach, the Slime Mold-Ant Colony Fusion Algorithm (SMACFA), was proposed in [17]: an optimized path is first obtained by the Slime Mold Algorithm (SMA) for the TSP; then, the high-quality pipelines are selected, and their ends are directly applied to the ACO by the principle of fixed selection. Several techniques have been proposed in [29], such as entropy-weighted learning, the nucleolus game, and mean filtering techniques, to diversify the population and avoid early convergence. To reduce the search space in ACO, some attempts have been made to restrain the candidate set of nodes to the k nearest cities [14]. Despite its local nature, this assumption works well in practice, as it is driven by the observation that in the TSP, reasonable solutions are often found via local transitions. Likewise, the authors of [21] proposed a restricted variant of the ACO pheromone matrix with linear memory complexity to reduce the ACO memory complexity. Motivated by the need to efficiently visit the search space in medium and large-scale optimization problems, we study several combinations of hybrid metaheuristics for the traveling salesman problem in this paper. We aim to balance the exploitation and exploration of metaheuristics through careful hybridization of single-solution and population-based metaheuristics [18, 19]. We consider ACO, GA, SA, and TS and several cooperation techniques between them to improve the convergence, reduce the computation time, and decrease the sensitivity toward initial settings.
3 Hybrid Metaheuristics

Marco Dorigo, the proposer of ACO, and many studies have shown that ACO exhibits better capabilities than genetic algorithms when applied to the Traveling Salesman Problem (TSP) [10, 11, 23]. Therefore, this paper focuses on combining ACO with single-solution metaheuristics.

3.1 ACO + TS

Due to the existence of the pheromone mechanism, during the iteration process the ants have a high probability of choosing the route with a high pheromone concentration, which allows the algorithm to converge at a faster rate. However, it also makes the algorithm lose its ability to jump out of local optima. After a certain number of iterations, the pheromone matrix will likely promote one route due to the pheromone concentration on specific edges. With that, the ant agents will select one route with almost 100% probability and ignore other possibly better paths. Therefore, to improve the exploitation and exploration ability of ACO and to deal with premature convergence toward local optima, we integrate Tabu Search into the ACO search process. Adding TS improves the exploration ability of the algorithm without compromising the exploitation, as TS relies heavily on intense local search and short-term and long-term tabu memories. Combining ACO and TS would inevitably increase the computational time of the hybrid approach, so we follow a selective process to reduce the algorithm's time consumption. Whenever the ACO goes through k iterations (e.g., 5), the best route S0 found via the ACO search mechanism is used as the initial solution for the Tabu Search. The Tabu Search is used to check whether the neighbor region of the incumbent solution contains more promising solutions, which helps the ACO jump out of the local optima's area. While performing the TS process, information about the search space is progressively retrieved and shared with the ACO ant agents by updating the pheromone matrix. Therefore, both ACO and TS collectively tackle the search space and successively orchestrate the exploitation and exploration of the algorithm. Performing TS on the ant with the best fitness is not the only strategy; an alternative option is to let TS help the ant with the worst fitness or, more interestingly, follow a dynamic approach and select a random ant every time. However, that might significantly delay the convergence and increase the computation time.
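A minimal sketch of this cooperation scheme follows (our own Python illustration; the paper's experiments were implemented in MATLAB, and the truncated 2-opt neighborhood and the default values here are illustrative assumptions, with k = 5 as in the example above):

```python
import random

def tour_length(tour, d):
    """Total length of a closed tour, given a positive distance matrix d."""
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def ant_tour(d, tau, alpha=1.0, beta=2.0):
    """Build one tour with the standard ACO roulette-wheel transition rule."""
    n = len(d)
    tour = [random.randrange(n)]
    unvisited = set(range(n)) - set(tour)
    while unvisited:
        i = tour[-1]
        w = [(j, (tau[i][j] ** alpha) * (1.0 / d[i][j]) ** beta) for j in unvisited]
        r, acc = random.uniform(0.0, sum(x for _, x in w)), 0.0
        for j, wj in w:
            acc += wj
            if acc >= r:
                break
        tour.append(j)          # j is the selected city (last one as fp fallback)
        unvisited.remove(j)
    return tour

def tabu_search(tour, d, iters=40, tabu_len=25, width=30):
    """Short Tabu Search over a truncated 2-opt neighbourhood of the tour."""
    best, cur, tabu = tour[:], tour[:], []
    n = len(tour)
    for _ in range(iters):
        move, cand, cand_len = None, None, float("inf")
        for a in range(n - 1):
            for b in range(a + 2, min(n, a + width)):
                if (a, b) in tabu:
                    continue                  # move is tabu-active, skip it
                nb = cur[:a + 1] + cur[a + 1:b + 1][::-1] + cur[b + 1:]
                length = tour_length(nb, d)
                if length < cand_len:
                    move, cand, cand_len = (a, b), nb, length
        if cand is None:
            break
        cur = cand
        tabu.append(move)                     # short-term memory of moves
        if len(tabu) > tabu_len:
            tabu.pop(0)
        if cand_len < tour_length(best, d):
            best = cur[:]
    return best

def deposit(tau, tour, length, Q=20.0):
    """Reinforce the pheromone on every edge of a (symmetric) tour."""
    n = len(tour)
    for i in range(n):
        a, b = tour[i], tour[(i + 1) % n]
        tau[a][b] += Q / length
        tau[b][a] += Q / length

def tsaco(d, n_ants=20, n_iters=100, k=5, rho=0.1):
    """TSACO sketch: every k ACO iterations, the incumbent best tour seeds
    a Tabu Search, and the refined tour reinforces the pheromone matrix."""
    n = len(d)
    tau = [[1.0] * n for _ in range(n)]
    best, best_len = None, float("inf")
    for it in range(n_iters):
        tours = [ant_tour(d, tau) for _ in range(n_ants)]
        for row in tau:                       # evaporation
            for j in range(n):
                row[j] *= 1.0 - rho
        for t in tours:                       # collective deposit
            length = tour_length(t, d)
            if length < best_len:
                best, best_len = t[:], length
            deposit(tau, t, length)
        if (it + 1) % k == 0:                 # periodic TS refinement
            refined = tabu_search(best, d)
            length = tour_length(refined, d)
            if length < best_len:
                best, best_len = refined[:], length
            deposit(tau, refined, length)
    return best, best_len
```

On a symmetric, positive distance matrix this skeleton reproduces the division of labor described above: the colony supplies exploration, while the periodic TS call supplies intensification around the incumbent and feeds what it learns back through the pheromone matrix.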
3.2 ACO + SA

The ACO used in this paper adopts a primitive pheromone update strategy, i.e., all ants in the colony release pheromones along their visited paths based on the length of the route. Alternatively, the ant with the best fitness alone could control the pheromone concentration. Compared with keeping only the pheromone of the best ant, the collective strategy reduces the probability of falling into local minima, effectively improving the algorithm's global search capability. However, this strategy also makes the pheromone distribution so diffuse that the algorithm's convergence is slow. Therefore, to balance the convergence speed and fitness of the algorithm, we introduce a simulated annealing mechanism into the ACO so that the individuals in the ant colony converge to the optimal solution faster while maintaining better fitness. The hybrid SAACO starts with an initial solution S0 of length L0 computed via the simulated annealing algorithm. Then, the pheromones associated with S0 are used to initialize the pheromone matrix of the ACO. With that, the algorithm will likely avoid premature convergence due to the uniform distribution of pheromones in the earlier optimization stage. When the m ants complete one iteration, the distance set L and path set S of the obtained solutions are examined. Notably, we denote by Sk and Lk the route and distance traveled by an ant k. The best solution in the candidate set is characterized by Smin and Lmin. According to the simulated annealing mechanism, the candidate set is filtered based on Lk, Lmin, a random number ζ ∈ [0, 1), and the current temperature value. The solutions fulfilling the SA requirements are eventually used to update the pheromone matrix. After computing the acceptance probability Pk, the following is applied. If Pk = 1, then Sk and Lk are added to the update set. Otherwise, the algorithm generates a random ζ in the interval [0, 1) following a uniform distribution. If Pk > ζ, then Sk and Lk are allowed to join the update set. Otherwise, Smin and Lmin join the update set instead of Sk and Lk. By following the above process, all solutions can participate in the definition of the search direction. That is, the pheromone update and evaporation are not controlled solely by the behavior of the ant with the best fitness. Instead, the SA mechanism involves all ants, taking into consideration their quality, the stage of the search, and a random factor. As in ACO with TS, the rate of exploration and exploitation is dynamic and automatically controlled, taking advantage of the various features of ACO and SA.
4 Experiments and Results

In this section, we present the computational results of the four classical metaheuristics Ant Colony Optimization (ACO), Genetic Algorithm (GA), Tabu Search (TS), and Simulated Annealing (SA), as well as the two novel hybrid metaheuristics TSACO and SAACO. For each algorithm, we repeat each simulation 15 times to address any bias related to the structure of the initial state or any random steps in the algorithm's search process. A total of 10 benchmark instances of different sizes and difficulties have been solved, and the average of the 15 simulations is used to compare the algorithms. The hardware and software specifications of the assessment platform are as follows: Operating System: Windows 10; CPU: Intel(R) Core(TM) i7-8750H; RAM: 16 GB; Platform: MATLAB2022a; Network: Gigabit Ethernet.
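The reporting protocol can be summarized by the sketch below (our own illustration; run_solver is a placeholder for any of the six algorithms and is assumed to return a tour length):

```python
import statistics

def evaluate(run_solver, instance, optimum, repeats=15):
    """Run one solver 15 times on one instance and compute the statistics
    reported in the result tables: average, best, worst, and the relative
    error of the average with respect to the known optimum."""
    lengths = [run_solver(instance) for _ in range(repeats)]
    avg = statistics.mean(lengths)
    return {
        "average": avg,
        "best": min(lengths),
        "worst": max(lengths),
        "error": (avg - optimum) / optimum,  # the sigma column
    }
```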
4.1 Benchmark Instances

The ten instances used to assess the algorithms were taken from the well-known TSP benchmark library TSPLIB [25]. The instances were classified as Small (n ≤ 150), Medium (150 < n < 400) and Large (400 ≤ n) according to their complexity and size [26]. Based on previous research studies [1, 5, 16, 22] and repeated experimental tests, the parameters of the six algorithms considered in this paper are shown in Tables 2 and 5.

Table 1. Benchmark instances

| Id | Instance | Number of nodes | Complexity | Optimal solution |
|----|----------|-----------------|------------|------------------|
| 1 | st70 | 70 | Small | 675 |
| 2 | lin105 | 105 | Small | 14,379 |
| 3 | xqf131 | 131 | Small | 564 |
| 4 | kroA150 | 150 | Small | 26,524 |
| 5 | kroA200 | 200 | Medium | 29,368 |
| 6 | pr226 | 226 | Medium | 80,369 |
| 7 | lin318 | 318 | Medium | 42,029 |
| 8 | pr439 | 439 | Large | 107,217 |
| 9 | rat575 | 575 | Large | 36,905 |
| 10 | rat783 | 783 | Large | 8806 |
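These instances use two-dimensional Euclidean (EUC_2D) edge weights, for which TSPLIB defines each edge cost as the Euclidean distance rounded to the nearest integer. A minimal evaluation sketch (the coordinate handling is our own, not part of the paper):

```python
import math

def euc_2d(a, b):
    """TSPLIB EUC_2D edge weight: Euclidean distance rounded to nearest integer."""
    return int(math.sqrt((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) + 0.5)

def tsplib_tour_length(tour, coords):
    """Length of a closed tour over city indices, e.g. for st70 or rat783."""
    return sum(
        euc_2d(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )
```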
4.2 Local Optimal Solution Criterion

The experimental results indicate that all algorithms tend to converge within the first fifty percent of the fixed number of iterations. After that, it is very difficult for them to jump out of the local optima region; hence the convergence point is reached. Due to that, and to reduce the computational overhead, the stopping criterion of an algorithm is adjusted based on its convergence state.

- Small and Medium Instances: when the same best solution is repeated for Cmax × 1/5 iterations (Cmax denotes the maximum number of iterations), the algorithm is judged to have converged and is consequently stopped.
- Large Instances: when the same best solution is repeated for Cmax × 1/4 iterations, the algorithm is judged to have converged; hence the execution halts.

4.3 Result of Small and Medium Instances

The results for the Small and Medium instances are shown in Tables 3 and 4. For each algorithm, the average of the 15 simulations and the best and worst fitness values are reported. Also, the error ratio σ with respect to the optimal value of each instance is indicated. In bold, we also highlight the best algorithm based on the average value indicator. The parameter settings of each algorithm for Small and Medium instances are tabulated in Table 2.

Table 2. Parameter setting for small and medium instances

| Algorithm | Parameters |
|-----------|------------|
| GA | m = 20,000; pc = 0.8; pm = 0.5; n = city_size; c = n × 0.8 |
| ACO | m = 100; alpha = 1; beta = 2; rho = 0.1; Q = 20; Iteration = 500 |
| TS | n = city_size; tabusize = n × 0.8; c = 400; Iteration = 2000 |
| SA | T0 = 10,000; maxgen = 2000; Lk = 500; alpha = 0.99 |
| TSACO | m = 100; alpha = 1; beta = 2; rho = 0.1; Q = 20; tabusize = 200; c = 200; Iteration = 500 |
| SAACO | m = 100; alpha = 1; beta = 2; rho = 0.1; Q = 20; T0 = 10,000; maxgen = 500; Lk = 500; alpha (cooling) = 0.99; Iteration = 500 |
4.4 Result of Large Instances

The experimental results for large instances are shown in Table 6. For each dataset, the performance of the best algorithm is highlighted in bold. The parameter settings in the context of large instances are tabulated in Table 5. Compared with the parameters for small and medium instances, the candidate set sizes (i.e., the number of inner loops) of TS, SA, TSACO, and SAACO are increased along with the maximum number of iterations to enable the algorithms to handle a larger objective space and potentially find better solutions.
Table 3. Test results for small instances

| TSP (optimum) | Algorithm | Average | Error (σ) | Best | Worst |
|---------------|-----------|---------|-----------|------|-------|
| st70 (675) | GA | 740.09 | 0.096 | 706.95 | 787.84 |
| | ACO | 731.32 | 0.083 | 709.01 | 739.68 |
| | TS | 760.32 | 0.126 | 699.66 | 798.3 |
| | SA | 688.65 | 0.02 | 678.99 | 698.67 |
| | SAACO | 684.68 | 0.014 | 682.13 | 706.09 |
| | TSACO | **680.12** | 0.008 | 677.11 | 684.80 |
| lin105 (14,379) | GA | 16,125.47 | 0.121 | 15,190.32 | 17,273.57 |
| | ACO | 15,463.97 | 0.075 | 15,274.2 | 15,589.79 |
| | TS | 16,956.76 | 0.179 | 16,071.02 | 18,284.58 |
| | SA | 14,688.54 | 0.022 | 14,480.15 | 15,178.9 |
| | SAACO | 15,227.08 | 0.059 | 14,753.43 | 15,768.96 |
| | TSACO | **14,625.49** | 0.017 | 14,416.78 | 14,757.79 |
| xqf131 (564) | GA | 649.64 | 0.152 | 614.35 | 688.04 |
| | ACO | 621.76 | 0.102 | 607.4 | 626.58 |
| | TS | 668.66 | 0.186 | 642.98 | 727.31 |
| | SA | 593.79 | 0.053 | 581.25 | 611.66 |
| | SAACO | 588.31 | 0.043 | 583.85 | 595.56 |
| | TSACO | **588.10** | 0.043 | 580.48 | 596.51 |
| kroA150 (26,524) | GA | 29,919.56 | 0.128 | 29,273.41 | 30,695.34 |
| | ACO | 30,205.43 | 0.139 | 29,667.71 | 30,321.02 |
| | TS | 28,996.22 | 0.093 | 27,591.59 | 32,060.59 |
| | SA | **27,908.03** | 0.052 | 27,310.08 | 28,949.48 |
| | SAACO | 28,984.27 | 0.093 | 27,076.44 | 30,908.1 |
| | TSACO | 28,123.09 | 0.06 | 27,983.06 | 28,498.19 |
4.5 Time Comparison

Table 7 shows the average running time of each algorithm as the instance size increases, while Fig. 1 shows the corresponding trend. Due to the different implementation principles of the algorithms, the average running time required by the population-based algorithms is significantly higher than that of the single-solution metaheuristics. That is because, at each iteration, a population-based algorithm needs to calculate the fitness of each solution (ant or individual) and perform the required updates. Based on Table 7, the average running time of GA is about 4–10 times longer than that of ACO because the population (genes) that needs to be updated by GA (20,000) is much larger than the population (ants) that needs to be updated by ACO (500).
Table 4. Test results for medium instances (columns: Algorithm | Average | Error (σ) | Best | Worst)

kroA200 (optimum 29,368)
GA | 32,671.36 | 0.112 | 31,621.18 | 34,562.99
ACO | 34,222.21 | 0.165 | 32,504.02 | 35,402.15
TS | 34,241.16 | 0.166 | 31,140.80 | 35,685.53
SA | 31,282.57 | 0.065 | 30,587.07 | 32,217.01
SAACO | 32,347.06 | 0.101 | 29,731.31 | 35,600.10
TSACO | 32,328.56 | 0.101 | 31,760.47 | 32,977.41

pr226 (optimum 80,369)
GA | 92,829.63 | 0.155 | 87,591.71 | 102,585.88
ACO | 87,019.13 | 0.083 | 86,125.39 | 87,804.54
TS | 103,249.01 | 0.285 | 91,750.62 | 113,964.43
SA | 85,013.91 | 0.058 | 81,902.16 | 89,923.26
SAACO | 97,075.98 | 0.208 | 89,682.37 | 109,963.67
TSACO | 84,962.42 | 0.057 | 84,554.22 | 85,639.83

lin318 (optimum 42,029)
GA | 47,588.08 | 0.132 | 46,063.69 | 49,552.08
ACO | 48,679.24 | 0.158 | 47,710.60 | 49,889.48
TS | 51,013.77 | 0.214 | 48,889.31 | 53,017.77
SA | 46,014.72 | 0.095 | 44,880.77 | 47,255.70
SAACO | 52,499.88 | 0.249 | 49,988.75 | 56,533.60
TSACO | 47,857.31 | 0.139 | 47,341.72 | 48,464.82
In terms of running time, the single-solution metaheuristics (TS and SA) have a definite advantage over the population-based metaheuristics (GA, ACO, SAACO, TSACO). On a given instance, a single-solution metaheuristic often takes only one-fifth to one-hundredth of the time required by a population-based one, and this advantage becomes more apparent as the instance size grows. Of the two single-solution metaheuristics covered in this paper, TS requires slightly less running time than SA on small and medium-sized instances. In contrast, on large instances TS takes nearly twice as long as SA, because TS uses twice as many candidate solutions for large instances as for small and medium-sized ones in order to improve its fitness. For the two hybrid metaheuristics proposed in this paper (SAACO and TSACO), SAACO and ACO are almost identical in running time. However, SAACO shows unusually high efficiency on the medium-sized instances, which indicates that supplying an initial solution to ACO can, to some extent, guide the algorithm to converge to the optimal solution faster and thus improve its efficiency. Like SAACO, TSACO outperforms ACO in running time on two medium-sized instances, pr226 and lin318, demonstrating that introducing TS into ACO can also speed up the algorithm's
Table 5. Parameter setting for large instances

Algorithm | Parameters
GA | m = 20,000; pc = 0.8; pm = 0.5; Iteration = 500
ACO | m = 100; alpha = 1; beta = 2; rho = 0.1; Q = 20; n = city_size; c = n × 0.8; Iteration = 500
TS | n = city_size; tabusize = 200; c = 800; Iteration = 2000
SA | T0 = 10,000; Lk = 700; alpha = 0.99; maxgen = 2000
TSACO | m = 100; alpha = 1; beta = 2; rho = 0.1; Q = 20; tabusize = 200; c = 500; Iteration = 500
SAACO | m = 100; alpha = 1; beta = 2; rho = 0.1; Q = 20; T0 = 10,000; Lk = 700; alpha = 0.99; maxgen = 700; Iteration = 500
convergence to the optimal solution to some extent. However, for the small instances and two large instances (pr439 and rat575), TSACO requires significantly more runtime than ACO. TSACO requires significantly less runtime than ACO on rat783, which may be related to the distribution of that instance's nodes. Based on Fig. 1, the larger the instance, the more average running time each algorithm demands. Moreover, each algorithm exhibits a distinct pattern of computing time with respect to the input size: GA shows a strong exponential characteristic, while ACO, TSACO, and SAACO show weak exponential behavior.

4.6 Fitness Comparison

The results of each algorithm on each instance are tabulated in Tables 3, 4, and 6. The values highlighted in bold represent the best results for each instance and category. The relative error metric is used to calculate the gap between the average result of each algorithm and the optimal solution (listed under the name of each instance).
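The relative error σ reported in Tables 3, 4, and 6 is simply the gap between the average fitness and the instance optimum, normalized by the optimum; the one-line helper below reproduces the tabulated values.

```python
def relative_error(average_fitness, optimal_fitness):
    """Relative error sigma with respect to the known optimum."""
    return (average_fitness - optimal_fitness) / optimal_fitness

# Example from Table 3 (st70, GA): (740.09 - 675) / 675 ~= 0.096
```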
Table 6. Test results for large instances (columns: Algorithm | Average | Error (σ) | Best | Worst)

pr439 (optimum 107,217)
GA | 123,033.75 | 0.148 | 118,447.09 | 126,804.93
ACO | 122,898.18 | 0.146 | 118,518.98 | 127,453.46
TS | 125,885.78 | 0.174 | 123,172.55 | 136,693.29
SA | 120,151.70 | 0.121 | 116,287.44 | 127,247.69
SAACO | 125,886.58 | 0.174 | 117,936.83 | 131,929.77
TSACO | 119,153.14 | 0.111 | 117,926.09 | 120,592.73

rat575 (optimum 6673)
GA | 7898.60 | 0.184 | 7750.28 | 8023.15
ACO | 8212.86 | 0.231 | 8070.97 | 8275.03
TS | 8225.53 | 0.233 | 7967.14 | 8519.40
SA | 7665.27 | 0.149 | 7491.13 | 7831.27
SAACO | 7762.32 | 0.163 | 7382.30 | 8311.21
TSACO | 7800.31 | 0.169 | 7711.68 | 7894.68

rat783 (optimum 8806)
GA | 11,389.90 | 0.293 | 11,192.16 | 11,651.58
ACO | 10,772.51 | 0.223 | 10,571.67 | 10,864.04
TS | 11,618.94 | 0.319 | 11,224.82 | 12,003.48
SA | 10,435.87 | 0.185 | 10,201.14 | 10,646.98
SAACO | 9956.25 | 0.131 | 9737.03 | 10,956.38
TSACO | 10,630.28 | 0.207 | 10,457.53 | 10,766.62
Table 7. Computation time for ten instances (in seconds)

Instance | GA | ACO | TS | SA | SAACO | TSACO
st70 | 39.57 | 11.62 | 2.56 | 4.49 | 11.94 | 29.88
lin105 | 80.75 | 19.30 | 2.70 | 3.94 | 12.75 | 28.51
xqf131 | 138.65 | 37.57 | 3.81 | 5.08 | 28.66 | 40.16
kroa150 | 183.94 | 42.47 | 4.36 | 4.69 | 14.44 | 66.54
kroA200 | 386.74 | 54.13 | 4.19 | 5.15 | 17.62 | 56.29
pr226 | 694.74 | 83.36 | 4.52 | 6.17 | 24.79 | 76.15
lin318 | 1368.44 | 166.23 | 5.90 | 7.20 | 35.94 | 88.77
pr439 | 2582.78 | 383.38 | 24.07 | 10.12 | 273.42 | 432.86
rat575 | 4208.81 | 538.02 | 34.86 | 17.96 | 567.66 | 800.64
rat783 | 6039.92 | 1364.25 | 43.59 | 22.48 | 1387.78 | 1028.09
Fig. 1. Running time by increasing size of instances
Based on the experimental results in the tables above, TSACO shows better average results on the small instances, proving that introducing dynamic tabu lists into ACO can help it jump out of local optima regions and find better solutions. However, for kroa150 and the medium instances, SA generally outperforms TSACO, even though the average fitness of TSACO on pr226 is slightly better than SA's. This shows that TSACO does not perform as well as SA on medium-sized instances. It is nevertheless clear that TSACO, with the introduction of the tabu search mechanism, still outperforms the original ACO in its ability to search for the global optimum. On the large instances, TSACO, SA, and SAACO each achieve the best performance once; it is therefore difficult to conclude which algorithm is more suitable for large instances in terms of accuracy when the average case is considered. Nonetheless, it is verified that ACO performs better than GA on instances of all sizes, and that SAACO with an annealing mechanism and TSACO with a tabu search mechanism both outperform the traditional ACO.

4.7 Robustness

In this paper, a robustness metric is also assessed. It can be interpreted as the stability of an algorithm's results, i.e., the magnitude of the error between solutions obtained for the same instance over multiple experiments: the smaller the error between solutions, the more robust the algorithm, and vice versa. Figure 2 shows the robustness exhibited by the four metaheuristics and two hybrid metaheuristics on the ten instances of different sizes. The length of each box represents the algorithm's robustness; the longer the box, the weaker the robustness of the corresponding algorithm. Based on Fig. 2, TS shows weak robustness on all ten instances, indicating that TS has difficulty consistently obtaining good solutions. The robustness of GA is generally better than that of TS, but still weak compared with the other four algorithms. The robustness of ACO and SA is stronger, although on kroA200 ACO shows unusually weak robustness. Of the two hybrid metaheuristics, the robustness of TSACO outperforms SAACO overall. Both exhibit strong robustness when computing small-scale instances of size less than 150. However, when facing instances of size
greater than or equal to 150, the robustness of SAACO sharply decreases, while TSACO consistently maintains excellent robustness.

Fig. 2. Distribution of fitness of each algorithm (box plots of the fitness values of GA, ACO, TS, SA, SAACO, and TSACO on each of the ten instances; each panel is labeled with the instance and its optimal fitness, e.g., st70 (675), lin105 (14,379), xqf131 (564))
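The box plots of Fig. 2 can be regenerated from the stored per-run results; the sketch below assumes a `results` dictionary mapping each algorithm name to its list of final fitness values over the repeated runs, which is not shown in the paper.

```python
import matplotlib.pyplot as plt

def plot_robustness(results, instance_name, optimal_fitness):
    # results: algorithm name -> list of final fitness values (15 runs each here)
    labels = list(results)  # e.g., GA, ACO, TS, SA, SAACO, TSACO
    plt.boxplot([results[a] for a in labels])
    plt.xticks(range(1, len(labels) + 1), labels)
    plt.title(f"{instance_name} (Opt. Fitness = {optimal_fitness})")
    plt.ylabel("Fitness")
    plt.show()
```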
5 Conclusions

This paper considers six algorithms, including two novel hybrid approaches, for solving ten TSP instances of different sizes and difficulties. Each algorithm is presented and assessed in terms of its average, best, and worst fitness accuracy relative to the optimal solution, its average running time, and its robustness across multiple simulations. The results show that TS and SA outperform the other four algorithms in running time. Comparing the fitness accuracy of the algorithms, the average performance of both TSACO and SAACO exceeds that of ACO, demonstrating that introducing a tabu search mechanism into ACO for local search can effectively compensate for ACO's limited ability to jump out of
local optima. Meanwhile, SAACO outperforms ACO in average running time while its average fitness is also better than ACO's, which indicates that introducing a simulated annealing mechanism into ACO can effectively accelerate the algorithm's convergence and obtain similar or even better fitness. A hybrid metaheuristic thus yields more accurate results on small and large instances; for medium-sized instances, however, SA is the better choice. Weighing running time, adaptability, and robustness together, SA is a suitable trade-off method for solving traveling salesman problems of arbitrary size.
References

1. Aarts, E.H., Korst, J.H., van Laarhoven, P.J.: A quantitative analysis of the simulated annealing algorithm: a case study for the traveling salesman problem. J. Stat. Phys. 50(1), 187–206 (1988)
2. Ali, I.M., Essam, D., Kasmarik, K.: A novel design of differential evolution for solving discrete traveling salesman problems. Swarm Evol. Comput. 52, 100607 (2020)
3. Applegate, D., Bixby, R., Chvatal, V., Cook, W.: Concorde TSP solver (2006). http://www.tsp.gatech.edu/concorde
4. Dasari, K.V., Pandiri, V., Singh, A.: Multi-start heuristics for the profitable tour problem. Swarm Evol. Comput. 64, 100897 (2021)
5. Deb, K., Agrawal, S., et al.: Understanding interactions among genetic algorithm parameters. Found. Genetic Alg. 5(5), 265–286 (1999)
6. Deng, Y., Xiong, J., Wang, Q.: A hybrid cellular genetic algorithm for the traveling salesman problem. Math. Probl. Eng. 2021 (2021)
7. Dib, O.: Novel hybrid evolutionary algorithm for bi-objective optimization problems. Sci. Rep. 13(1), 4267 (2023)
8. Dib, O., Moalic, L., Manier, M.A., Caminada, A.: An advanced GA-VNS combination for multicriteria route planning in public transit networks. Expert Syst. Appl. 72, 67–82 (2017)
9. Dong, X., Zhang, H., Xu, M., Shen, F.: Hybrid genetic algorithm with variable neighborhood search for multi-scale multiple bottleneck traveling salesmen problem. Future Gener. Comput. Syst. 114, 229–242 (2021)
10. Dorigo, M., Gambardella, L.M.: Ant colony system: a cooperative learning approach to the traveling salesman problem. IEEE Trans. Evol. Comput. 1(1), 53–66 (1997)
11. Dorigo, M.: The ant system: optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. Part B 26(1), 1–13 (1996)
12. Erol, M.H., Bulut, F.: Real-time application of travelling salesman problem using Google Maps API. In: 2017 Electric Electronics, Computer Science, Biomedical Engineerings' Meeting (EBBT), pp. 1–5. IEEE (2017)
13. Halim, A.H., Ismail, I.: Combinatorial optimization: comparison of heuristic algorithms in travelling salesman problem. Arch. Comput. Methods Eng. 26(2), 367–380 (2019)
14. Ismkhan, H.: Effective heuristics for ant colony optimization to handle large-scale problems. Swarm Evol. Comput. 32, 140–149 (2017)
15. Khan, I., Maiti, M.K.: A swap sequence based artificial bee colony algorithm for traveling salesman problem. Swarm Evol. Comput. 44, 428–438 (2019)
16. Knox, J.: Tabu search performance on the symmetric traveling salesman problem. Comput. Oper. Res. 21(8), 867–876 (1994)
17. Liu, M., Li, Y., Li, A., Huo, Q., Zhang, N., Qu, N., Zhu, M., Chen, L.: A slime mold-ant colony fusion algorithm for solving traveling salesman problem. IEEE Access 8, 202508–202521 (2020)
18. Luo, Y., Dib, O., Zian, J., Bingxu, H.: A new memetic algorithm to solve the stochastic TSP. In: 2021 12th International Symposium on Parallel Architectures, Algorithms and Programming (PAAP), pp. 69–75. IEEE (2021)
19. Nan, Z., Wang, X., Dib, O.: Metaheuristic enhancement with identified elite genes by machine learning. In: Knowledge and Systems Sciences, pp. 34–49. Springer, Singapore (2022)
20. Osaba, E., Yang, X.S., Fister, I., Jr., Del Ser, J., Lopez-Garcia, P., Vazquez-Pardavila, A.J.: A discrete and improved bat algorithm for solving a medical goods distribution problem with pharmacological waste collection. Swarm Evol. Comput. 44, 273–286 (2019)
21. Peake, J., Amos, M., Yiapanis, P., Lloyd, H.: Scaling techniques for parallel ant colony optimization on large problem instances. In: Proceedings of the Genetic and Evolutionary Computation Conference, pp. 47–54 (2019)
22. Peker, M., Şen, B., Kumru, P.Y.: An efficient solving of the traveling salesman problem: the ant colony system having parameters optimized by the Taguchi method. Turk. J. Electr. Eng. Comput. Sci. 21(7), 2015–2036 (2013)
23. Putha, R., Quadrifoglio, L., Zechman, E.: Comparing ant colony optimization and genetic algorithm approaches for solving traffic signal coordination under oversaturation conditions. Comput.-Aided Civ. Infrastruct. Eng. 27(1), 14–28 (2012)
24. Qiu, Y., Li, H., Wang, X., Dib, O.: On the adoption of metaheuristics for solving 0–1 knapsack problems. In: 2021 12th International Symposium on Parallel Architectures, Algorithms and Programming (PAAP), pp. 56–61. IEEE (2021)
25. Reinelt, G.: TSPLIB: a library of sample instances for the TSP (and related problems) from various sources and of various types. http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95 (2014)
26. Stodola, P., Otřísal, P., Hasilová, K.: Adaptive ant colony optimization with node clustering applied to the travelling salesman problem. Swarm Evol. Comput. 70, 101056 (2022)
27. Tamura, Y., Sakiyama, T., Arizono, I.: Ant colony optimization using common social information and self-memory. Complexity 2021 (2021)
28. Tang, Z., van Hoeve, W.J., Shaw, P.: A study on the traveling salesman problem with a drone. In: International Conference on Integration of Constraint Programming, Artificial Intelligence, and Operations Research, pp. 557–564. Springer (2019)
29. Yang, K., You, X., Liu, S., Pan, H.: A novel ant colony optimization based on game for traveling salesman problem. Appl. Intell. 50(12), 4529–4542 (2020)
30. Zhong, Y., Wang, L., Lin, M., Zhang, H.: Discrete pigeon-inspired optimization algorithm with metropolis acceptance criterion for large-scale traveling salesman problem. Swarm Evol. Comput. 48, 134–144 (2019)
Evolutionary Optimization of Entanglement Distillation Using Chialvo Maps Timothy Ganesan1(B) , Roman Rodriguez-Aguilar2 , José Antonio Marmolejo-Saucedo3 , and Pandian Vasant4 1 Department of Physics and Astronomy, University of Calgary, Calgary, AB, Canada
[email protected]
2 Facultad de Ciencias Económicas y Empresariales, Universidad Panamericana, Mexico City, Mexico
3 Facultad de Ingeniería, Universidad Nacional Autónoma de México, Mexico City, Mexico
4 Modeling Evolutionary Algorithms Simulation and Artificial Intelligence (MERLIN), Ton Duc Thang University, Ho Chi Minh City, Viet Nam
Abstract. In quantum information theory, entanglement distillation is a key component for designing quantum computer networks and quantum repeaters. In this work, the practical entanglement distillation problem is re-designed in a bilevel optimization framework. The primary goal of this work is to propose and test an effective optimization technique that combines evolutionary algorithms (differential evolution) and the Chialvo map for solving the bilevel practical entanglement distillation problem. The primary idea is to leverage the complex dynamical behavior of Chialvo maps to improve the optimization capabilities of the evolutionary algorithm. Analysis of the computational results and comparisons with a standard evolutionary algorithm implementation are presented. Keywords: Quantum information theory · Practical entanglement distillation · Bilevel optimization · Evolutionary algorithm · Chialvo maps
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
P. Vasant et al. (Eds.): ICO 2023, LNNS 852, pp. 19–26, 2023. https://doi.org/10.1007/978-3-031-50330-6_2

1 Introduction

In quantum computing and quantum information theory, entanglement distillation is a critical component for designing quantum computer networks as well as quantum repeaters. The central idea of entanglement distillation is to restore the quality of diluted entanglement states of photons transmitted over long distances; this dilution of entanglement states is a direct result of inevitable decoherence effects. Many theoretical and empirical research works have focused on investigating quantum distillation frameworks [1, 2]. In the recent work [3], the authors experimentally employed pairs of single photons entangled in multiple degrees of freedom to determine the domain of distillable states (and their relative fidelity). In that work, comparative analyses and benchmarking studies were also carried out on different distillation schemes to understand the design of resilient quantum networks. Similarly, in [4] the authors designed a proof-of-concept experiment to investigate the implementation of filtering protocols (in atomic ensembles) for constructing quantum repeater nodes. The experiment was conducted in a rare-earth-ion-doped crystal, where the entanglement was prepared. In [5], the authors theoretically investigated the relations between entanglement distillation, bit thread, and entanglement purification within the holographic framework. They provided a bit thread interpretation of the one-shot entanglement distillation tensor network, demonstrating that the holographic entanglement purification process can be thought of as a special case of a type of surface growth scheme; the aim in [5] was to provide an effective framework that accurately describes physical entanglement structures. Recent works show that many optimization-based research efforts have also been directed towards quantum information systems. For instance, in [6], the authors showed that the read-out procedure of local unitaries of a high-retrieval-efficiency quantum memory could be optimized in an unsupervised manner; the signal-to-noise ratio and the retrieval efficiency of the memory were examined. This work reformulates the practical entanglement distillation problem in a bilevel optimization framework. The objective is to propose an effective optimization technique that combines evolutionary algorithms and the Chialvo map for solving the bilevel practical entanglement distillation problem. The primary idea is to leverage the complex behavior of Chialvo maps to improve the optimization capabilities of the evolutionary algorithm, in this case the differential evolution (DE) algorithm [7]. This paper is organized as follows: the second section describes the model formulation for the bilevel practical entanglement distillation problem; the third section provides details on the Chialvo map and its integration with the evolutionary algorithm; the fourth section presents analysis of the results generated by the computational experiments. The paper ends with some concluding remarks and recommendations for future research.
2 Methods

2.1 Model Formulation: Entanglement Distillation

In this work, a bipartite entanglement distillation model is considered, where the central idea is to convert a state ρAB (in density matrix form) into a state which is close to a maximally entangled state, utilizing only local operations and classical communication. This communication takes place between two nodes A and B of a communication network. This is represented mathematically as follows:

$F = \langle \Phi_d \,|\, \eta_{\hat{A}\hat{B}} \,|\, \Phi_d \rangle \quad \text{such that} \quad |\Phi_d\rangle = \frac{1}{\sqrt{d}} \sum_{i=0}^{d-1} |i\rangle_{\hat{A}} |i\rangle_{\hat{B}}$    (1)
where F ∈ (0, 1) is the fidelity, i.e., the closeness of the converted state to the maximally entangled state. A and B are the input registers, while Â and B̂ are the output registers; d is the dimension of the quantum state and $\eta_{\hat{A}\hat{B}}$ is the converted state ($\rho_{AB} \rightarrow \eta_{\hat{A}\hat{B}}$). $|\Phi_d\rangle$ is the maximally entangled state across the output registers Â and B̂. As an example, if the dimension d = |A| = |B| = 2, then ρAB would be as follows:

$\rho_{AB} = (1-p)\,|01\rangle\langle 01| + p\,|\Phi_2\rangle\langle \Phi_2|$    (2)
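Eq. (1) is straightforward to evaluate numerically. The minimal numpy sketch below builds $|\Phi_d\rangle$ and computes the fidelity of an assumed converted state `eta` (a d²×d² density matrix); `eta` is an illustrative input, not anything computed in the paper.

```python
import numpy as np

def max_entangled_state(d):
    """|Phi_d> = (1/sqrt(d)) * sum_i |i>_A |i>_B as a length-d^2 vector."""
    phi = np.zeros(d * d)
    for i in range(d):
        phi[i * d + i] = 1.0  # the |i>|i> component of the composite basis
    return phi / np.sqrt(d)

def fidelity(eta, d):
    """Eq. (1): F = <Phi_d| eta |Phi_d> for a d^2 x d^2 density matrix eta."""
    phi = max_entangled_state(d)
    return float(np.real(phi @ eta @ phi))
```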
In contrast to theoretical entanglement distillation, practical entanglement distillation frameworks allow for the possibility of failure. Therefore, the fidelity parameter is only relevant to the analysis if the entanglement distillation succeeds. This creates a multilevel scenario in which the probability of distillation success, P(δ) ∈ (0, 1), is cascaded with the fidelity parameter F. Since the local quantum memory used to store the quantum entanglement is imperfect, the entanglement cannot be preserved for an arbitrary amount of time; hence the probability of distillation success P(δ) controls the rate at which high-fidelity entanglement between the different nodes in the network can be targeted. Practical entanglement distillation uses schemes involving the application of local operations and measurement on the A and B registers, followed by a single exchange of the measurement outcome using classical communication to determine distillation success or failure. In this work, the practical quantum distillation scheme presented in [8] is considered. The central idea is to search for the following optimal parameters: (i) the output dimension d, (ii) the input state ρAB, and (iii) the quantum channels (i.e., quantum operations). To optimize quantum operations, the Choi isomorphism is utilized, whereby a one-to-one correspondence between quantum channels and quantum states (with certain properties) can be established [9]; this way, the isomorphism carries all the information from the original channel over to the Choi state. Following the problem formulation in [8], the bilevel optimization formulation of entanglement distillation is as follows:

$\text{Maximize} \; F = \frac{|A||B|}{P(\delta)} \operatorname{Tr}\!\left[\left(|\Phi_d\rangle\langle\Phi_d|_{\hat{A}\hat{B}} \otimes \rho_{AB}^{T}\right)\left(\hat{C}_{1,A\hat{A}} \otimes \hat{C}_{1,B\hat{B}}\right)\right]$    (3)

subject to

$\text{Maximize} \; P(\delta) = |A||B| \operatorname{Tr}\!\left[\rho_{AB}^{T}\left(\hat{C}_{1,A} \otimes \hat{C}_{1,B}\right)\right]$

such that

$\hat{C}_{1,A\hat{A}} \geq 0, \quad \hat{C}_{1,B\hat{B}} \geq 0, \quad \hat{C}_{1,A} \leq \frac{I_A}{|A|}, \quad \hat{C}_{1,B} \leq \frac{I_B}{|B|}, \quad |A| = |B| = d \geq 0 : d \in \mathbb{N}$

where Â and B̂ are the Choi-state equivalents for the output registers A and B. Similarly, $\hat{C}_{1,A\hat{A}}$, $\hat{C}_{1,B\hat{B}}$, $\hat{C}_{1,A}$, and $\hat{C}_{1,B}$ are matrices depicting Choi states which correspond to quantum channels. The symbol ⊗ represents the Kronecker product, and the dimensions of the identity matrices $I_A$ and $I_B$ depend on the dimensions of the registers A and B.

2.2 Combined Approach: Chialvo Map & Evolutionary Optimization

Since the early nineties, significant progress has been made on developing formulations and techniques to model the human brain [10]. An effective formulation in this regard is the Hodgkin–Huxley (HH) class of models, or conductance-based neuron models. These models are based on coupled nonlinear differential equations, which are simplified versions of the complete partial differential equations used to describe the neuronal membrane. However, the HH class of models has a high computational cost, as it suffers from the 'curse of dimensionality' and requires numerous parameters. These setbacks
provided the motivation for researchers to explore simpler models that are equally accurate. One such model is the coupled map lattice (CML) formulation, which maps a dynamical system with continuous state variables in discrete space and time. Map-based neuronal models like coupled map lattices have been observed to be robust, computationally efficient, and effective. The Chialvo map is a two-dimensional map-based model employed for modeling neurons as well as the dynamics of excitable systems [11, 12]. The Chialvo map has been shown to mimic single as well as interconnected neuronal networks using only three (or four) parameters, and has been proven to show diverse behavior, from oscillatory to chaotic dynamics. The iterative Chialvo map for a single neuron is as follows:

$y_{n+1} = y_n^2 \exp(z_n - y_n) + k$
$z_{n+1} = a_1 z_n - a_2 y_n + a_3, \quad a_1, a_2 < 1$    (4)
where z is the recovery variable, y is the activation variable, k is the bias constant, $a_1$ is the constant of recovery, $a_2$ describes the activation-dependence of the recovery process, and $a_3$ serves as an offset constant. Inspired by the development of evolutionary algorithms, differential evolution (DE) is a meta-heuristic algorithm introduced to solve multidimensional, nonlinear, nonconvex, real-valued optimization problems [7, 13]. The DE algorithm results from the assimilation of perturbative techniques into evolutionary/meta-heuristic algorithms. DE begins by spawning a population of candidate solutions (a minimum of four), which are real-coded N-dimensional vectors. From these candidate solutions, one is randomly designated as the principal parent and three others as auxiliary parents. A generic mutated vector (obtained via differential mutation) is recombined with the principal parent to generate child trial vectors; the survival of a child trial vector depends on its performance with respect to the fitness criterion. At the next iteration, the principal parent is selected anew, and the process is repeated until no further improvement of the fitness function occurs. The bilevel optimization problem in Eq. (3) is solved within a Stackelberg game-theoretic framework [14]. The upper level is the fidelity objective function F (follower), while the lower level is the probability of success P(δ) (leader). In this sense, the strategy played by the leader involves optimization of the sub-problem P(δ), which then influences the strategy played by the follower: optimization of the overall problem F. Thus, the computational techniques employed in this work iteratively solve each level of the optimization problem as a Stackelberg game until the most optimal solution is attained. In this work, the entanglement distillation problem was solved using two approaches: (i) a combined Chialvo-map and differential evolution strategy (Chialvo-DE), and (ii) a differential evolution technique using pseudo-random number generators (PRNG-DE). In the PRNG-DE approach, the sub-problem is solved by searching for the optimal dimension d that maximizes P(δ). The quantum state $\rho_{AB}^{T}$ and the Choi states $\hat{C}_{1,A}$ and $\hat{C}_{1,B}$ are generated using the PRNG. Consequently, using the obtained dimension d, the density state $\rho_{AB}^{T}$, and P(δ), the upper-level problem is solved by
searching for the best Choi states $\hat{C}_{1,A\hat{A}}$ and $\hat{C}_{1,B\hat{B}}$ using the PRNG. As for the proposed
Chialvo-DE approach, a similar Stackelberg framework is utilized where the leader–follower problems are solved iteratively. However, in the Chialvo-DE approach, the quantum state of the qubit before conversion ($\rho_{AB}^{T}$) is approximated using a hybrid approach combining the Chialvo map and a PRNG. The algorithm for the Chialvo-DE approach is as follows:

Algorithm: Chialvo-DE approach
Step 1: Define the Chialvo and DE parameters.
Step 2: Solve P(δ) using DE to find the optimal quantum state $\rho_{AB}^{T}$ and Choi states $\hat{C}_{1,A}$ and $\hat{C}_{1,B}$.
Step 3: Initiate and run a simulation of the Chialvo single-neuron model.
Step 4: Determine the statistical moments (mean and variance) of the two-dimensional simulation data.
Step 5: Using these statistical moments in a Gaussian distribution, simulate random values for the quantum state $\rho_{AB}^{T}$.
Step 6: Using a standard PRNG, simulate random values for the Choi states $\hat{C}_{1,A\hat{A}}$ and $\hat{C}_{1,B\hat{B}}$.
Step 7: Solve for F in the upper-level problem.
Step 8: Re-initialize the Stackelberg game framework until the fitness function cannot be further improved.
The parameter settings used in this work for the DE segment are: population size = 15, mutation factor = 0.7, recombination factor = 0.7, and maximum iterations = 300. The parameter settings for the Chialvo map are: lattice length = 25, maximum iterations = 100, constant of recovery (a1) = 0.5, activation-dependence (a2) = 0.5, offset constant (a3) = 0.8, and bias constant (k) = 0.02.
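Steps 3–5 of the Chialvo-DE algorithm can be sketched as below with the parameter values just stated. The initial conditions (y0, z0) and the Gaussian sampling of a d = 4 state are assumptions made for illustration; the paper does not spell out these details.

```python
import numpy as np

def chialvo_trajectory(y0=0.1, z0=0.1, a1=0.5, a2=0.5, a3=0.8, k=0.02, steps=100):
    """Iterate the single-neuron Chialvo map of Eq. (4)."""
    y, z = y0, z0
    traj = np.empty((steps, 2))
    for n in range(steps):
        y, z = y**2 * np.exp(z - y) + k, a1 * z - a2 * y + a3
        traj[n] = (y, z)
    return traj

traj = chialvo_trajectory()                        # Step 3
mu, sigma = traj.mean(), traj.std()                # Step 4: statistical moments
rho = np.abs(np.random.normal(mu, sigma, size=4))  # Step 5: assumed d = 4 state
rho /= rho.sum()                                   # normalize to unit trace
```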
3 Results

In this work, the Stackelberg game-theoretic framework was employed to solve the entanglement distillation problem using the combined Chialvo-map and DE strategy (Chialvo-DE) and the standard DE with pseudo-random number generators (PRNG-DE). The computational experiments were conducted in the Python programming language on the Google Colaboratory platform using a Python 3 Google Compute Engine (RAM: 12.68 GB, disk space: 107.72 GB). Each approach was executed 40 times, where each execution consisted of 3 runs of the technique with the best solution retained; therefore, each technique was run a total of 120 times. The computational results obtained using both techniques were measured
using the weighted hypervolume indicator: $wHVI = w_1(x^* - \underline{x}) + w_2(x_o^* - \underline{x}_o)$. The optimal solution candidate is denoted $(x^*, x_o^*)$ and the nadir point is $(\underline{x}, \underline{x}_o)$. The weights $w_1$ and $w_2$ set the relative importance of the contributions of the upper-level and lower-level problems; in these experiments the weights were $w_1 = 0.7$ and $w_2 = 0.3$. The nadir point for the upper-level problem (the fidelity objective) and the lower-level subproblem (the probability of success) is $\underline{x} = 1$ and $\underline{x}_o = 1$. The larger the value of the wHVI metric, the better the optimization performance. The graphical depiction of the optimal quantum state of the best solution obtained using the Chialvo-DE and PRNG-DE approaches is shown in Fig. 1(a) and (b), respectively.
Fig. 1. The quantum state $\rho_{AB}^{T}$ for the best individual solution produced using the Chialvo-DE (a) and PRNG-DE (b) approaches
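A helper for the wHVI metric is sketched below. Note that the wHVI values tabulated in Tables 1 and 2 are reproduced by the weighted sum $w_1 F + w_2 P(\delta)$, i.e., as if the reference point were the origin (e.g., 0.7 × 0.9892 + 0.3 × 0.9998 ≈ 0.9924, the PRNG-DE best); the reference point is therefore left as a parameter here.

```python
def weighted_hvi(F, P_delta, w1=0.7, w2=0.3, ref=(0.0, 0.0)):
    """wHVI = w1*(F - ref[0]) + w2*(P_delta - ref[1])."""
    return w1 * (F - ref[0]) + w2 * (P_delta - ref[1])

# weighted_hvi(0.9892, 0.9998) -> ~0.9924, matching Table 1 (best, PRNG-DE)
```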
The ranked individual solutions obtained using the PRNG-DE and Chialvo-DE approaches are given in Tables 1 and 2.

Table 1. Ranked individual solutions obtained using the PRNG-DE approach

Parameters | Best | Median | Worst
d | 9 | 9 | 4
P(δ) | 0.9998 | 0.9988 | 0.9994
F | 0.9892 | 0.5762 | 0.0465
Iterations | 4517 | 4565 | 4549
ρ_AB^T | [0.03901916 0.10701331 0.0121292 0.22934241 0.10388438 0.15669025 0.21988115 0.12159139 0.01044875] | [0.21452542 0.00297664 0.02077339 0.18057944 0.3920281 0.37941831] | [0.11978318 0.10877041 0.22082836 0.02932727 0.19601912 0.06966524 0.06530511]
wHVI | 0.9924 | 0.703 | 0.3323
It can be observed in Fig. 1 that the best individual solutions reached by the PRNG-DE and Chialvo-DE approaches have quantum states $\rho_{AB}^{T}$ with dimensions d = 9 and d = 4, respectively. In addition, the optimality of the best individual solutions achieved by the two approaches differs only minimally (by about 0.673%) when measured with the wHVI metric. The overall optimization performance across all generated solutions was measured by summing the wHVI values of the individual solutions: the totals for the PRNG-DE and Chialvo-DE approaches are 28.64 and 30.42, respectively. Therefore, in terms of overall optimization performance,
Table 2. Ranked individual solutions obtained using the Chialvo-DE approach

Parameters | Best | Median | Worst
d | 4 | 4 | 4
P(δ) | 0.9997 | 0.9996 | 0.9997
F | 0.9988 | 0.6669 | 0.318
Iterations | 4561 | 4569 | 4571
ρ_AB^T | [0.50292047 0.38502632 0.09549422 0.01655899] | [0.42229104 0.13014768 0.29187764 0.15568364] | [0.29478596 0.12376416 0.35264505 0.22880483]
wHVI | 0.9991 | 0.7667 | 0.5225
the Chialvo-DE approach outperforms the PRNG-DE approach by about 6%. This is due to the wide-ranging dynamical behavior of the Chialvo map, which enables Chialvo-DE to explore larger regions of the objective space and hence obtain better candidate solutions while avoiding stagnation in local optima. In terms of computational time, the Chialvo-DE implementation was 163.16% more costly than the PRNG-DE approach, because additional computational complexity is introduced to simulate the Chialvo neuronal segment of the Chialvo-DE algorithm. Both approaches were robust, performed stable computations, and generated results for the bilevel quantum entanglement distillation problem.
4 Conclusions and Recommendations

The practical entanglement distillation problem was re-formulated within a bilevel optimization framework and solved using a combination of an evolutionary algorithm (DE) and the Chialvo map. The performance of the proposed approach was compared with the standard DE technique, which uses PRNGs instead of the Chialvo map to determine the quantum state $\rho_{AB}^{T}$. Due to the Chialvo neuron's capacity for complex dynamical behavior, the overall optimization results of Chialvo-DE outperformed those of the standard DE approach (PRNG-DE). Future work could be directed toward testing the performance of other novel meta-heuristics or evolutionary algorithms for practical entanglement distillation. In addition, research efforts could be directed toward reformulating the entanglement distillation problem in a multi-objective framework, such that identification of the Pareto front would lead to deeper insights.
References

1. Li, M., Fei, S., Li-Jost, X.: Bell inequality, separability and entanglement distillation. Chin. Sci. Bull. 56(10), 945–954 (2011)
2. Ruan, L., Dai, W., Win, M.Z.: Adaptive recurrence quantum entanglement distillation for two-Kraus-operator channels. Phys. Rev. A 97(5), 052332 (2018)
3. Ecker, S., Sohr, P., Bulla, L., Huber, M., Bohmann, M., Ursin, R.: Experimental single-copy entanglement distillation. Phys. Rev. Lett. 127(4), 040506 (2021)
4. Liu, C., et al.: Towards entanglement distillation between atomic ensembles using high-fidelity spin operations. Commun. Phys. 5(1), 1–9 (2022)
5. Lin, Y.Y., Sun, J.R., Sun, Y.: Bit thread, entanglement distillation, and entanglement of purification. Phys. Rev. D 103(12), 126002 (2021)
6. Gyongyosi, L., Imre, S.: Optimizing high-efficiency quantum memory with quantum machine learning for near-term quantum devices. Sci. Rep. 10(1), 1–24 (2020)
7. Raghul, S., Jeyakumar, G.: Investigations on distributed differential evolution framework with fault tolerance mechanisms. In: Kumar, B.V., Oliva, D., Suganthan, P.N. (eds.) Differential Evolution: From Theory to Practice. SCI, vol. 1009, pp. 175–196. Springer, Singapore (2022). https://doi.org/10.1007/978-981-16-8082-3_6
8. Rozpędek, F., Schiet, T., Elkouss, D., Doherty, A.C., Wehner, S.: Optimizing practical entanglement distillation. Phys. Rev. A 97(6), 062333 (2018)
9. Jiang, M., Luo, S., Fu, S.: Channel-state duality. Phys. Rev. A 87(2), 022310 (2013)
10. Girardi-Schappo, M., Tragtenberg, M.H.R., Kinouchi, O.: A brief history of excitable map-based neurons and neural networks. J. Neurosci. Methods 220(2), 116–130 (2013)
11. Muni, S.S., Fatoyinbo, H.O., Ghosh, I.: Dynamical effects of electromagnetic flux on Chialvo neuron map: nodal and network behaviors. Int. J. Bifurc. Chaos 32(09), 2230020 (2022)
12. Bashkirtseva, I., Tsvetkov, I.: Noise-induced excitement and mixed-mode oscillatory regimes in the Chialvo model of neural activity. In: AIP Conference Proceedings, vol. 2522, no. 1, p. 050002. AIP Publishing LLC (2022)
13. Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.): Intelligent Computing & Optimization: Proceedings of the 5th International Conference on Intelligent Computing and Optimization 2022 (ICO2022), vol. 569. Springer (2022)
14. Ganesan, T., Vasant, P., Litvinchev, I.: Chaotic simulator for bilevel optimization of virtual machine placements in cloud computing. J. Oper. Res. Soc. China 10(4), 703–723 (2022)
Optimization of Seismic Isolator Systems via Teaching-Learning Based Optimization

Ayla Ocak, Gebrail Bekdaş, and Sinan Melih Nigdeli(B)

Department of Civil Engineering, Istanbul University-Cerrahpaşa, 34320 Avcılar, Istanbul, Turkey {bekdas,melihnig}@iuc.edu.tr
Abstract. Seismic base isolators act as an intermediary in breaking the relationship between the base and any dynamic effect that would be transmitted to the structure. Against seismic loads, isolators rapidly reduce the acceleration from the earthquake load through their distinctive mobility and provide effective displacement control. The correct selection of parameters is important for isolators with given damping limits when evaluating control performance. In this study, various far-fault earthquake records were applied as excitation to a single-degree-of-freedom structure using an isolator with a damping limit of 30%. The damping and period of the isolated structure under seismic excitation were optimized to minimize the system acceleration, and the performance in reducing the structural motion under the critical earthquake record was recorded. For this purpose, the optimization was carried out with the Teaching-Learning Based Optimization (TLBO) algorithm, a metaheuristic algorithm, and its performance was compared with a previous isolator optimization study of the structure. Examination of the results shows that increasing isolator mobility provides good acceleration-reduction optimization compared to similar metaheuristic algorithms. Keywords: Base isolation · Seismic isolator · Metaheuristic algorithm · Teaching-Learning Based Optimization
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
P. Vasant et al. (Eds.): ICO 2023, LNNS 852, pp. 27–36, 2023. https://doi.org/10.1007/978-3-031-50330-6_3

1 Introduction

Seismic base isolators are control devices that act as an intermediary in breaking the relationship between the soil and the energy that would be transmitted from the base to the structure. The working method of isolators is based on rapidly reducing the acceleration reaching the structure by exhibiting flexible movement against a load transmitted from the base. These systems, which are generally rigid in the vertical direction and flexible in the horizontal direction, come in rubber- and slip-based types, mixed types combining these two features, and spring types that allow vertical movement [1]. In general, control is provided depending on the flexibility of the isolator. One way to adjust the stiffness of the isolator is to increase the mass; another option is to adjust the period of the isolator. The ductility level can be balanced by the correct adjustment of the damping ratio of the isolator system. Although ductile behavior is desired of an isolator, highly ductile designs cause excessive damage to the architectural elements of the building even in small earthquakes. Therefore, correctly setting the isolator's movement capability, and thus the parameters affecting it, requires an optimization process to increase the efficiency obtained from the control. Optimization is a method that provides the best suitable result for a problem via various algorithms. In recent years, metaheuristic algorithms, with their easy-to-apply structure and inspiration from the balance of nature and the instinctive behavior of living things, have frequently been used in optimization. These algorithms draw on natural events and animal behavior, such as the path bees follow when searching for food or the echolocation bats use to navigate. There are many types, such as the Genetic Algorithm, Flower Pollination Algorithm, Harmony Search Algorithm, Ant Colony Optimization, Bat Algorithm, Artificial Bee Colony Algorithm, and Teaching-Learning Based Optimization Algorithm [2–8]. There are many studies on their use in civil engineering [9–13]. The Teaching-Learning Based Optimization (TLBO) algorithm is a two-stage metaheuristic algorithm, developed by Rao et al. [8], that models the teaching and learning process. Studies have demonstrated its effectiveness for predicting concrete compressive strength, size and shape optimization of structures, sizing of truss structures, and minimizing the construction cost of shallow foundations [14–18]. Studies have also been carried out on control systems for the vibration control of structures in which structure-soil interaction is taken into account in the design optimization [19–22]. In this study, a single-degree-of-freedom (SDOF) structure model with an isolator with a 30% damping limit placed at the base is optimized under the FEMA P-695 far-fault earthquake records [23]. Using TLBO, the damping ratio and period of the rigid SDOF system with isolators were optimized within the given limits. The FEMA P-695 earthquake records were applied to the structure via Matlab Simulink, and optimum parameters were determined to minimize the total acceleration of the structure [24]. For the earthquake record that is critical for the structure without an isolator, the displacement and total acceleration values of the isolated system are presented, and the results are compared according to isolator mobility.
2 Methodology

In this section, the equations of motion of the structure with seismic isolators and the equations of TLBO, the optimization algorithm used in the study, are given.

2.1 Seismic Isolator Parameters and Equations of Motion

Seismic isolators are control systems that break the bond between the structure and the vibrations transmitted to it from the ground. An isolator is usually placed at the base of the building, with a weight of about one story of the building. For an SDOF system with
isolators, the total mass ($m_{total}$) is obtained by adding the isolator mass ($m_b$) and the structure mass ($m_{structure}$), as shown in Eq. (1):

$m_{total} = m_b + m_{structure}$    (1)
For a single-degree-of-freedom (SDOF) system isolated at the base, the structure and the isolator act together, exhibiting a common period, damping, and rigidity. The system period $T_b$, the system stiffness $k_b$, and the system damping coefficient $c_b$ are calculated as in Eqs. (2), (3), and (4), respectively:

$T_b = \frac{2\pi}{w_b}$    (2)

$k_b = m_{total} \times w_b^2$    (3)

$c_b = 2 \times \zeta_b \times m_{total} \times w_b$    (4)
In these equations, $w_b$ denotes the natural angular frequency of the system and $\zeta_b$ the damping ratio of the system. The basic equation of motion of the SDOF system is shown in Eq. (5):

$m_{total}\ddot{X} + c_b\dot{X} + k_b X = -m_{total}\ddot{X}_g$    (5)
In the equation of motion of the SDOF system, X is the displacement of the system, $\dot{X}$ its velocity, $\ddot{X}$ its acceleration, and $\ddot{X}_g$ the ground acceleration.

2.2 Teaching-Learning-Based Optimization of Seismic Base Isolators

The basic working principle of isolators is acceleration reduction. In optimization processes for isolators, the acceleration of the system under dynamic effects is minimized in order to optimize the design parameters. While the acceleration is minimized, the movement constraint of the isolator is also checked. The objective function used for minimizing the acceleration is given in Eq. (6), and the function used to keep the displacement under control is given in Eq. (7):

$f(x) = \max(|\ddot{X}|)$    (6)

$g(x) = \max(|X|)$    (7)
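As a compact illustration of Eqs. (1)–(7), the sketch below derives the isolator system properties from a candidate period and damping ratio and evaluates the objective and constraint from simulated response histories. The response arrays are assumed to come from the Simulink model; this is not the authors' code.

```python
import numpy as np

def isolator_parameters(m_structure, m_b, T_b, zeta_b):
    m_total = m_b + m_structure         # Eq. (1)
    w_b = 2 * np.pi / T_b               # natural angular frequency, from Eq. (2)
    k_b = m_total * w_b**2              # Eq. (3)
    c_b = 2 * zeta_b * m_total * w_b    # Eq. (4)
    return m_total, k_b, c_b

def acceleration_objective(x_ddot_total):
    return np.max(np.abs(x_ddot_total))     # Eq. (6)

def displacement_ok(x, limit):
    return np.max(np.abs(x)) <= limit       # Eq. (7) checked against the mobility limit
```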
In optimization with metaheuristic algorithms, the necessary design constants and algorithm parameters are first introduced to the system together with the objective function. Solution matrices are then produced according to the generation equation of the algorithm, respecting the given design constraints. Each solution matrix produced over the iterations is compared with the previous one, and the optimum
result is reached by updating old solutions with better ones according to the objective function. The teaching-learning-based optimization algorithm (TLBO) is a metaheuristic algorithm inspired by the teaching and learning process, developed by Rao et al. [8]. It includes the process of sharing knowledge among a group of students after they are trained by a teacher. The algorithm consists of two phases: the first phase refers to the education of the students, and the second to the interaction among the students. In optimization with TLBO, initial values are generated randomly within limits, taking into account the design constants and constraints introduced to the system. Substituting the generated values into the objective function yields the objective function vector, whose length equals the population number "pn" (Eq. 8):

$f(x) = \left[f(x_1),\, f(x_2),\, \ldots,\, f(x_{pn-1}),\, f(x_{pn})\right]^T$    (8)

The $X_{mean}$ value, the average of the population, is calculated by averaging the random values substituted into the objective function, completing the first phase. The random value that gives the minimum of the objective function vector is called $X_{teacher}$ (Eq. 9):

$X_{teacher} = X_{\min f(x)}$    (9)
In the first phase the teacher trains the group, while the second phase covers the interaction among the students. Each value produced in the first phase depends on the teaching factor $T_F$, which takes the value 1 or 2. The random assignment of $T_F$ is given in Eq. (10) and the solution generation equation in Eq. (11):

$T_F = \text{round}[1 + \text{rand}(0,1)] \in \{1, 2\}$    (10)

$X_{new} = X_{old} + \text{rand}(0,1)\,(X_{teacher} - T_F \cdot X_{mean})$    (11)
In the equations, $X_{old}$ denotes the old solutions and $X_{new}$ the new solutions. In the second phase, knowledge is transferred among the students within the group. As seen in Eq. (12), old solutions are updated based on a comparison of the objective-function values of two randomly selected students from the produced solution matrix:

$X_{new} = \begin{cases} X_{old} + \text{rand}(0,1)\,(X_j - X_k), & f(X_j) > f(X_k) \\ X_{old} + \text{rand}(0,1)\,(X_k - X_j), & f(X_k) > f(X_j) \end{cases}$    (12)

In Eq. (12), $X_j$ and $X_k$ represent any two students selected from the educated class. The solution production process is repeated over the iterations, solutions better than previous ones replace them, and the optimum result is reached.
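A compact sketch of the TLBO loop in Eqs. (8)–(12) is given below, minimizing an objective f (here, the peak acceleration returned by the simulation) over bounds on the design variables $T_b$ and $\zeta_b$. This is illustrative rather than the authors' code: the learner phase is written in the usual move-toward-the-better-student form for minimization, and `simulate_peak_acceleration` is a hypothetical wrapper around the structural model.

```python
import numpy as np

def tlbo(f, low, high, pn=10, iterations=250):
    low, high = np.asarray(low, float), np.asarray(high, float)
    X = np.random.uniform(low, high, size=(pn, low.size))
    fx = np.array([f(x) for x in X])
    for _ in range(iterations):
        teacher = X[np.argmin(fx)]                       # Eq. (9)
        for i in range(pn):
            TF = np.random.randint(1, 3)                 # Eq. (10): 1 or 2
            # Teacher phase, Eq. (11)
            cand = np.clip(X[i] + np.random.rand(low.size)
                           * (teacher - TF * X.mean(axis=0)), low, high)
            fc = f(cand)
            if fc < fx[i]:
                X[i], fx[i] = cand, fc
            # Learner phase, Eq. (12): move toward the better of two students
            j = np.random.randint(pn)
            step = X[i] - X[j] if fx[i] < fx[j] else X[j] - X[i]
            cand = np.clip(X[i] + np.random.rand(low.size) * step, low, high)
            fc = f(cand)
            if fc < fx[i]:
                X[i], fx[i] = cand, fc
    best = int(np.argmin(fx))
    return X[best], fx[best]

# e.g., tlbo(simulate_peak_acceleration, low=[1.0, 0.01], high=[5.0, 0.30])
```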
3 The Numerical Example

In this study, an SDOF structure was isolated at the base and its behavior under various FEMA P-695 far-fault earthquake excitations was investigated [23]. As the rigid SDOF model, a 10-story structure was used, with a story mass of 360 tons per floor and story stiffness and damping coefficient of 650 MN/m and 6.2 MNs/m, respectively [24]. By transmitting the seismic records to the structure via Matlab Simulink, the damping ratio and period of the system were optimized so that the acceleration of the structure is minimized [25]. The model used in the study is shown in Fig. 1 in two and three dimensions.
Fig. 1. (a) SDOF structure model with base isolator (b) 3D view of the model.
Teaching-learning based optimization (TLBO) algorithm was used in the optimization process. The design constraints and constants used in the optimization are shown in Table 1. The damping ratio and period of the SDOF system with seismic isolators are optimized for the medium-damped (30%) isolator limit for 30, 40, and 50 cm isolator mobility limits. The optimum results obtained are given in Table 2. Critical earthquake analysis was carried out with the optimum isolator system damping ratio and period. Among the FEMA far fault earthquakes, the most critical earthquake for the uncontrolled structure before being isolated from the base was selected and the change in the damping of the system was observed by adding an isolator. The results of the critical earthquake analysis of the isolator-added system are given in Table 3.
Table 1. Isolator system design limits and constants

Explanation | Design parameter
Displacement limit | 30, 40, and 50 cm
Damping ratio | 1–30%
Minimum–maximum period | 1–5 s
Population number | 10
Iteration number | 250
Table 2. Optimum results of the system with isolators (damping ratio limit: 30%)

Displacement limit | Tb (s) | ζb (%)
30 cm | 1.9797 | 30.00
40 cm | 2.8976 | 30.00
50 cm | 3.3952 | 30.00
Table 3. Displacement and total acceleration values of the system for the critical earthquake record

Case | Displacement limit (m) | Damping ratio | Displacement (m) | Total acceleration (m/s²)
With isolator | 0.3 | 30% | 0.138631 | 2.1016463
With isolator | 0.4 | 30% | 0.1295327 | 1.1945409
With isolator | 0.5 | 30% | 0.1273559 | 0.9340001
Without isolator | – | – | 0.4101091 | 19.283306
The graphs of the system displacement and total acceleration values obtained in the critical earthquake recording are given in Figs. 2, 3, and 4, respectively, for the isolator system with 30, 40, and 50 cm mobility.
4 Discussion and Conclusions

In this study, a 30% damped isolator was placed at the base of a 10-story SDOF structure, and its behavior under far-fault earthquakes was investigated for different mobility limits. At the 30% damping limit and the 30, 40, and 50 cm displacement limits, the common period and damping ratio of the isolator system were optimized. The TLBO performance was compared with optimization results obtained with the adaptive harmony search algorithm (AHS) [26]. Table 4 shows the percentage reductions in the displacement and total acceleration of the structure under the critical earthquake, relative to the uncontrolled structure, for the AHS and TLBO algorithms.
Fig. 2. Displacement and total acceleration graphs under critical earthquake analysis for a 30% damping ratio and 30 cm displacement limit.
Fig. 3. Displacement and total acceleration graphs under critical earthquake analysis for a 30% damping ratio and 40 cm displacement limit.
Fig. 4. Displacement and total acceleration graphs under critical earthquake analysis for a 30% damping ratio and 50 cm displacement limit.
Table 4. Structure displacement and total acceleration reduction percentages with isolator for a 10-story structure [26]

Displacement limit (m) | Displ. reduction TLBO (%) | Displ. reduction AHS (%) | Accel. reduction TLBO (%) | Accel. reduction AHS (%)
0.30 | 66.20 | 66.54 | 89.10 | 91.64
0.40 | 68.42 | 66.19 | 93.81 | 89.39
0.50 | 68.95 | 68.95 | 95.16 | 95.16
According to Table 4, for the lower 30 cm displacement limit, AHS shows almost the same success as TLBO in displacement reduction, while AHS is more successful in acceleration reduction by a margin of about 2.5%. For the 40 cm range of motion, TLBO gives a better result by about 2% in displacement reduction and by about 4.5% in acceleration reduction. For the 50 cm range of motion, both optimizations achieve the same level of success. The results of the design optimization for acceleration reduction of the isolator system can be summarized as follows:

i. Both algorithms are successful in systems with low isolator movement limits, with no notable performance difference. As the isolator flexibility increased, TLBO optimization was more successful, especially in reducing acceleration, and a further increase in flexibility yielded the same performance for both algorithms. Considering this, the TLBO algorithm comes to the forefront for acceleration minimization in isolators with medium mobility, such as 40 cm.

ii. When the movement of the isolator is either tightly limited or very mobile, the convergence of both algorithms to the optimum result is very close. In addition, the best optimization results for the 30% damping limit were obtained at the highest defined mobility. Within the given limit values, it can be said that increasing the flexibility of the isolator system will increase the optimization efficiency.

iii. As the allowable movement limit for the isolator increases, the optimum system period grows longer. An increase of approximately 1 s was observed between the 30 and 40 cm limits, but only about 0.5 s between 40 and 50 cm. Although the period is a parameter proportional to the stiffness, the pattern of the optimum period increment deviated by 0.5 s as the movement limit increased; this suggests that systems with high ductility can provide successful control with a smaller increase in period.

In light of all the data, the optimization of systems with isolators can provide acceleration control at a level of over 95%, and the two-phase TLBO algorithm gives successful isolated-system parameters among similar heuristic algorithms.
References

1. Bakkaloğlu, E.: Seismic isolator systems in hospital buildings: the effects of the use decision on the building manufacturing process. Master's thesis, Istanbul Technical University, Institute of Science, Istanbul, Turkey (2018)
2. Holland, J.H.: Adaptation in Natural and Artificial Systems. The University of Michigan Press, Ann Arbor, MI (1975)
3. Yang, X.S.: Flower pollination algorithm for global optimization. In: Durand-Lose, J., Jonoska, N. (eds.) Lecture Notes in Computer Science, vol. 7445, pp. 240–249. Springer, London (2012)
4. Geem, Z.W., Kim, J.H., Loganathan, G.V.: A new heuristic optimization algorithm: harmony search. SIMULATION 76(2), 60–68 (2001)
5. Dorigo, M., Maniezzo, V., Colorni, A.: The ant system: an autocatalytic optimizing process. IEEE Trans. Syst. Man Cybern. B 26, 29–41 (1996)
6. Yang, X.S.: A new metaheuristic bat-inspired algorithm. In: Nature-Inspired Cooperative Strategies for Optimization (NICSO 2010), pp. 65–74. Springer, Berlin, Heidelberg (2010)
7. Karaboğa, D.: An idea based on honey bee swarm for numerical optimization, vol. 200, pp. 1–10. Technical report tr06, Erciyes University, Engineering Faculty, Computer Engineering Department (2005)
8. Rao, R.V., Savsani, V.J., Vakharia, D.P.: Teaching-learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput. Aided Des. 43, 303–315 (2011)
9. Atmaca, B.: Determination of proper post-tensioning cable force of cable-stayed footbridge with TLBO algorithm. Steel Compos. Struct. 40(6), 805–816 (2021)
10. Kaveh, A., Hosseini, S.M., Zaerreza, A.: Improved shuffled Jaya algorithm for sizing optimization of skeletal structures with discrete variables. In: Structures, vol. 29, pp. 107–128. Elsevier (2021)
11. Zhang, H.Y., Zhang, L.J.: Tuned mass damper system of high-rise intake towers optimized by the improved harmony search algorithm. Eng. Struct. 138, 270–282 (2017)
12. Yahya, M., Saka, M.P.: Construction site layout planning using multi-objective artificial bee colony algorithm with Levy flights. Autom. Constr. 38, 14–29 (2014)
13. Bekdaş, G., Niğdeli, S.M., Aydın, A.: Optimization of tuned mass damper for multi-storey structures by using impulsive motions. In: 2nd International Conference on Civil and Environmental Engineering (ICOCEE 2017), Cappadocia, Turkey (2017)
14. Öztürk, H.T.: Modeling of concrete compressive strength with Jaya and teaching-learning based optimization (TLBO) algorithms. J. Invest. Eng. Technol. 1(2), 24–29 (2018)
15. Zhao, Y., Moayedi, H., Bahiraei, M., Foong, L.K.: Employing TLBO and SCE for optimal prediction of the compressive strength of concrete. Smart Struct. Syst. 26(6), 753–763 (2020)
16. Degertekin, S.O., Hayalioglu, M.S.: Sizing truss structures using teaching-learning-based optimization. Comput. Struct. 119, 177–188 (2013)
17. Dede, T., Ayvaz, Y.: Combined size and shape optimization of structures with a new meta-heuristic algorithm. Appl. Soft Comput. 28, 250–258 (2015)
18. Gandomi, A.H., Kashani, A.R.: Construction cost minimization of shallow foundation using recent swarm intelligence techniques. IEEE Trans. Ind. Inf. 14(3), 1099–1106 (2017)
19. Bekdaş, G., Kayabekir, A.E., Nigdeli, S.M., Toklu, Y.C.: Transfer function amplitude minimization for structures with tuned mass dampers considering soil-structure interaction. Soil Dyn. Earthq. Eng. 116, 552–562 (2019)
20. Ocak, A., Bekdaş, G., Nigdeli, S.M.: A metaheuristic-based optimum tuning approach for tuned liquid dampers for structures. Struct. Des. Tall Spec. Build. 31(3), e1907 (2022)
36
A. Ocak et al.
21. Kaveh, A., Ardebili, S.R.: A comparative study of the optimum tuned mass damper for high-rise structures considering soil-structure interaction. Period. Polytech. Civ. Eng. 65(4), 1036–1049 (2021) 22. Bekda¸s, G., Nigdeli, S.M., Yang, X.S.: Metaheuristic-based optimization for tuned mass dampers using frequency domain responses. In: International Conference on Harmony Search Algorithm, pp. 271–279. Springer, Singapore (2017) 23. FEMA P-695: Quantification of Building Seismic Performance Factors. Washington 24. Singh, M.P., Singh, S., Moreschi, L.M.: Tuned mass dampers for response control of torsional buildings. Earthq. Eng. Struct. Dynam. 31(4), 749–769 (2002) 25. The MathWorks, Matlab R2018a. Natick, MA (2018) 26. Ocak, A., Nigdeli, S.M., Bekda¸s, G., Kim, S., Geem, Z.W.: Optimization of seismic base isolation system using adaptive harmony search algorithm. Sustainability 14(12), 7456 (2022)
TOPSIS Based Optimization of Laser Surface Texturing Process Parameters

Satish Pradhan1, Ishwer Shivakoti2(B), Manish Kumar Roy2, and Ranjan Kumar Ghadai2

1 Advanced Technical Training Centre (ATTC), Bardang, Sikkim, India
2 Sikkim Manipal Institute of Technology, Sikkim Manipal University, Sikkim, India
[email protected]
Abstract. Laser surface texturing (LST) is proving to be a promising technique for surface texturing of different work materials. The selection of suitable parameters for LST has become one of the vital criteria for texture quality. In this work, textures were made on zirconia ceramic using a fibre laser set at different parametric combinations to modify the surface of the material. Average power, scanning speed, pulse frequency, and transverse feed have been considered as control variables, whereas surface roughness (Ra and Rz) has been considered as the process response. Furthermore, the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), a Multi Criteria Decision Making (MCDM) method, was adopted to determine the suitable parametric combination for improving process efficiency. The mean weight method was utilized to assign weights in TOPSIS. The results show that the TOPSIS method is relatively simple to use for determining the suitable combination.

Keywords: MCDM · TOPSIS · LST · Laser
1 Introduction

Laser beam machining is widely used in today's manufacturing processes for various operations such as drilling, cutting, welding, marking, and surface texturing. Various methods have been utilized to produce a texture on the surface of a material, and laser surface texturing is regarded as the most advanced among them [1]. Owing to its greater flexibility, good accuracy, and controllability, researchers have shown keen interest in LST [2]. LST can be performed in several ways, such as texturing by ablation and by laser interference [3]. LST can modify the surface topography of a material by altering various surface properties, for example optical and tribological behaviour [4]. The selection of suitable parameters is important and has become a thriving area in precision manufacturing. Multi criteria decision making (MCDM) methods show potential in determining the appropriate parametric combination for improving process efficiency, and researchers across the globe have utilized various MCDM methods for this purpose. In the present paper, the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) was employed to identify
the finest possible parametric combination during laser surface texturing. TOPSIS is one of the most extensively utilized MCDM techniques and is therefore receiving considerable attention from researchers [5]. Chodha et al. utilized the TOPSIS and entropy MCDM methods to select an industrial arc welding robot and found the TOPSIS method to be simple and accurate for this purpose [6]. Kannan et al. successfully employed the TOPSIS method for finding the optimal machining parameters in LBM for generating elliptical profiles [7]. Rao et al. demonstrated a hybrid AHP-TOPSIS approach to select the optimal level of EDM parameters while machining AISI D2 steel [8]. Tran et al., in their experimental research, utilized GRA-based TOPSIS with weights from the entropy method to find suitable parameters and concluded that the hybrid approach of grey theory and TOPSIS is useful for MCDM problems involving vagueness and uncertainty [9]. Banerjee et al. demonstrated the usefulness of a hybrid AHP-TOPSIS approach to select USM parameters for producing quality zirconia bio-ceramic parts with high productivity [10]. Based on the literature review, a systematic experimentation plan was prepared using a Taguchi L25 design of experiments. The experiments were conducted according to this design, and the TOPSIS-based MCDM method was employed for selecting the suitable parameters to achieve excellent process capability.
2 Experiment Details

The experiment was conducted with a multi-diode-pumped fibre laser with a wavelength of 1060 ± 10 nm and an average power of 100 W. The experimental setup for LST used zirconia ceramic specimens of size 10 mm × 10 mm × 3 mm manufactured by the powder metallurgy process. Based on a literature survey and sufficient pilot experiments, the process parameters, namely transverse feed, pulse frequency, average power, and scanning speed, and their levels were selected as presented in Table 1. Surface roughness (Ra and Rz) has been considered as the process response.

Table 1. LST parameters and their levels

Sl. No. | Process parameters    | Level 1 | Level 2 | Level 3 | Level 4 | Level 5
1       | Average power (W)     | 5       | 10      | 15      | 20      | 25
2       | Pulse frequency (kHz) | 5       | 10      | 15      | 20      | 25
3       | Scanning speed (mm/s) | 2       | 5       | 8       | 11      | 14
4       | Transverse feed (mm)  | 0.01    | 0.02    | 0.03    | 0.04    | 0.05
Based on the design, the experiments were executed with several different combinations of parameters, and the performance criteria Ra and Rz were measured with a Mitutoyo 178-561-02A Surftest SJ-210 surface roughness tester. The experiments were conducted at room temperature under a normal atmosphere. Single-pass horizontal laser scanning was executed to produce a micro-grooved pattern over an area of 5 × 5 mm at various parametric combinations on a zirconia ceramic specimen of dimensions 10 × 10 × 3 mm. Figure 1 shows the schematic representation of LST on zirconia ceramic.

Fig. 1. Schematic representation of LST
3 TOPSIS Based Selection of Optimal Parametric Combination

As discussed in the literature review, TOPSIS is one of the most frequently employed MCDM techniques among researchers because of its simplicity and the ease with which it can be integrated with other methods. The underlying philosophy of TOPSIS is that the selected alternative should be nearest to the positive ideal solution (PIS) and farthest from the negative ideal solution (NIS). The Euclidean distance is used to assess how close an alternative is to the ideal solution [11]. The closeness coefficient is calculated to identify the best and worst solutions. In this method, the decision matrix is formulated and then normalized, weights are assigned according to importance, and the weighted normalized matrix is created. In this work, the mean weight method has been employed to assign weights to the performance criteria. Based on the weighted normalized values, the positive and negative ideal solutions are determined. Finally, the closeness coefficient is calculated, and the alternatives are ranked on this coefficient to determine suitable parametric combinations. Table 2 shows the weighted normalized matrix, where X1, X2, X3, and X4 are average power (W), pulse frequency (kHz), scanning speed (mm/s), and transverse feed (mm), respectively. As per the mean weight method, equal importance has been given to both criteria. Table 3 shows the closeness coefficient values and the resulting ranks, where Si+ and Si− are the Euclidean distances from the ideal best and ideal worst solutions and CC represents the closeness coefficient.
Table 2. Weighted normalized matrix based on mean weight method

Expt No. | X1 | X2 | X3 | X4   | Ra (µm) | Rz (µm)
1        | 5  | 5  | 2  | 0.01 | 0.006   | 0.009
2        | 5  | 10 | 5  | 0.02 | 0.006   | 0.007
3        | 5  | 15 | 8  | 0.03 | 0.011   | 0.016
4        | 5  | 20 | 11 | 0.04 | 0.008   | 0.013
5        | 5  | 25 | 14 | 0.05 | 0.006   | 0.008
6        | 10 | 5  | 5  | 0.03 | 0.066   | 0.070
7        | 10 | 10 | 8  | 0.04 | 0.053   | 0.057
8        | 10 | 15 | 11 | 0.05 | 0.041   | 0.045
9        | 10 | 20 | 14 | 0.01 | 0.126   | 0.120
10       | 10 | 25 | 2  | 0.02 | 0.111   | 0.113
11       | 15 | 5  | 8  | 0.05 | 0.089   | 0.105
12       | 15 | 10 | 11 | 0.01 | 0.083   | 0.087
13       | 15 | 15 | 14 | 0.02 | 0.101   | 0.105
14       | 15 | 20 | 2  | 0.03 | 0.176   | 0.157
15       | 15 | 25 | 5  | 0.04 | 0.122   | 0.125
16       | 20 | 5  | 11 | 0.02 | 0.096   | 0.104
17       | 20 | 10 | 14 | 0.03 | 0.135   | 0.132
18       | 20 | 15 | 2  | 0.04 | 0.203   | 0.199
19       | 20 | 20 | 5  | 0.05 | 0.067   | 0.074
20       | 20 | 25 | 8  | 0.01 | 0.085   | 0.090
21       | 25 | 5  | 14 | 0.04 | 0.093   | 0.097
22       | 25 | 10 | 2  | 0.05 | 0.104   | 0.104
23       | 25 | 15 | 5  | 0.01 | 0.115   | 0.110
24       | 25 | 20 | 8  | 0.02 | 0.115   | 0.110
25       | 25 | 25 | 11 | 0.03 | 0.116   | 0.118
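To make the procedure of Sect. 3 concrete, the following is a minimal sketch of the TOPSIS calculation in Python, not the authors' implementation: it uses vector normalization, the equal weights of the mean weight method, and treats both responses as benefit criteria, which matches the ranking direction of Table 3 (run 18, the run with the largest responses, takes rank 1). The three illustrative rows are taken from Table 2; with a subset the normalization constants differ from the full 25-run matrix, so the CC values are not expected to match Table 3 numerically.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by the TOPSIS closeness coefficient CC."""
    m = np.asarray(matrix, dtype=float)
    # Vector (Euclidean) normalization of each criterion column
    r = m / np.sqrt((m ** 2).sum(axis=0))
    # Weighted normalized decision matrix (mean weight method -> equal weights)
    v = r * np.asarray(weights, dtype=float)
    # Positive and negative ideal solutions per criterion
    pis = np.where(benefit, v.max(axis=0), v.min(axis=0))
    nis = np.where(benefit, v.min(axis=0), v.max(axis=0))
    # Euclidean separations Si+ and Si- from the two ideal solutions
    s_plus = np.sqrt(((v - pis) ** 2).sum(axis=1))
    s_minus = np.sqrt(((v - nis) ** 2).sum(axis=1))
    # Closeness coefficient; rank 1 goes to the largest CC
    return s_minus / (s_plus + s_minus)

# Three illustrative (Ra, Rz) rows taken from runs 1, 14 and 18 of Table 2
responses = [[0.006, 0.009],
             [0.176, 0.157],
             [0.203, 0.199]]
cc = topsis(responses, weights=[0.5, 0.5], benefit=[True, True])
print(cc)  # the last row (run 18) gets CC = 1, i.e. rank 1
```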
4 Results and Discussions

From TOPSIS, the optimal parametric combination is determined based on the ranking presented in Table 3. It can be noted from the table that experimental run no. 18 has the highest rank. The mean weight method assigns equal weightage to both criteria, i.e. Ra and Rz. The parametric combination of experimental run 18, which is the most desirable combination for improving process efficiency, is an average power of 20 W, a pulse frequency of 15 kHz, a scanning speed of 2 mm/s, and a transverse feed of 0.04 mm. Table 4 shows the optimal parametric combination obtained by TOPSIS.
Table 3. Closeness coefficient and ranking

Expt. No. | Ra (µm) | Rz (µm) | Si+     | Si−     | CC      | Rank
1         | 0.006   | 0.0099  | 0.27408 | 0.00262 | 0.00946 | 23
2         | 0.006   | 0.0072  | 0.27593 | 0       | 0       | 25
3         | 0.0115  | 0.0169  | 0.26528 | 0.0111  | 0.04015 | 21
4         | 0.0081  | 0.0133  | 0.27019 | 0.00645 | 0.02331 | 22
5         | 0.0065  | 0.0089  | 0.27438 | 0.00175 | 0.00634 | 24
6         | 0.0661  | 0.0703  | 0.18897 | 0.08705 | 0.31538 | 18
7         | 0.0538  | 0.058   | 0.20632 | 0.0697  | 0.25252 | 19
8         | 0.0416  | 0.0457  | 0.22366 | 0.05235 | 0.18967 | 20
9         | 0.1267  | 0.121   | 0.11015 | 0.16582 | 0.60086 | 5
10        | 0.111   | 0.1135  | 0.12663 | 0.14936 | 0.54118 | 9
11        | 0.0895  | 0.1054  | 0.14813 | 0.12888 | 0.46525 | 13
12        | 0.0834  | 0.0876  | 0.16448 | 0.11154 | 0.40411 | 16
13        | 0.1011  | 0.1058  | 0.13914 | 0.13692 | 0.49598 | 11
14        | 0.176   | 0.157   | 0.05085 | 0.22656 | 0.81671 | 2
15        | 0.1228  | 0.1256  | 0.10975 | 0.16627 | 0.60239 | 4
16        | 0.0963  | 0.1048  | 0.14334 | 0.13295 | 0.4812  | 12
17        | 0.1355  | 0.1328  | 0.09558 | 0.18034 | 0.65359 | 3
18        | 0.2039  | 0.1995  | 0       | 0.27593 | 1       | 1
19        | 0.0676  | 0.0747  | 0.18478 | 0.09139 | 0.33092 | 17
20        | 0.0859  | 0.0907  | 0.16053 | 0.11553 | 0.4185  | 15
21        | 0.0936  | 0.0977  | 0.15018 | 0.12585 | 0.45593 | 14
22        | 0.1044  | 0.1041  | 0.13785 | 0.13808 | 0.50043 | 10
23        | 0.1153  | 0.1101  | 0.12587 | 0.15009 | 0.54388 | 8
24        | 0.1153  | 0.1105  | 0.12557 | 0.15038 | 0.54496 | 7
25        | 0.1169  | 0.1185  | 0.11886 | 0.15712 | 0.56932 | 6
Table 4. Optimal parametric combination

Sl. No. | Average power (W) | Pulse frequency (kHz) | Scanning speed (mm/s) | Transverse feed (mm)
01      | 20                | 15                    | 2                     | 0.04
5 Conclusion

In the present research work, an attempt has been made to create textures on zirconia ceramic with the help of a fibre laser. Four parameters, namely average power, pulse frequency, scanning speed, and transverse feed, and the performance criteria of surface roughness (Ra and Rz) have been considered. A Taguchi-based L25 orthogonal array was adopted for the experimental design, and the TOPSIS-based MCDM method was utilized for the selection of suitable parametric combinations. Based on this work, it was found that TOPSIS may be utilized for determining the optimal levels of parameters. According to TOPSIS, experimental run 18 is the best choice to obtain the desired roughness on the surface of the zirconia ceramic. The corresponding parametric combination is an average power of 20 W, a pulse frequency of 15 kHz, a scanning speed of 2 mm/s, and a transverse feed of 0.04 mm. Furthermore, the present results may provide a guideline for researchers working in LST to use TOPSIS-based MCDM in their research work.
References

1. Etsion, I.: State of the art in laser surface texturing. J. Tribol. 127(1), 248–253 (2005). https://doi.org/10.1115/1.1828070
2. Mao, B., Siddaiah, A., Liao, Y., Menezes, P.L.: Laser surface texturing and related techniques for enhancing tribological performance of engineering materials: a review. J. Manuf. Process. 53, 153–173 (2020). https://doi.org/10.1016/j.jmapro.2020.02.009
3. Shivakoti, I., Kibria, G., Cep, R., Pradhan, B.B., Sharma, A.: Laser surface texturing for biomedical applications: a review. Coatings 11(2), 124 (2021). https://doi.org/10.3390/coatings11020124
4. Han, J., Zhang, F., Van Meerbeek, B., Vleugels, J., Braem, A., Castagne, S.: Laser surface texturing of zirconia-based ceramics for dental applications: a review. Mater. Sci. Eng.: C 123, 112034 (2021). https://doi.org/10.1016/j.msec.2021.112034
5. Çelikbilek, Y., Tüysüz, F.: An in-depth review of theory of the TOPSIS method: an experimental analysis. J. Manag. Anal. 7(2), 281–300 (2020). https://doi.org/10.1080/23270012.2020.1748528
6. Chodha, V., Dubey, R., Kumar, R., Singh, S., Kaur, S.: Selection of industrial arc welding robot with TOPSIS and entropy MCDM techniques. Mater. Today: Proc. 50, 709–715 (2022). https://doi.org/10.1016/j.matpr.2021.04.487
7. Kannan, V.S., Navneethakrishnan, P.: Machining parameters optimization in laser beam machining for micro elliptical profiles using TOPSIS method. Mater. Today: Proc. 21, 727–730 (2020). https://doi.org/10.1016/j.matpr.2019.06.747
8. Rao, K.M., Kumar, D.V., Shekar, K.C., Singaravel, B.: Optimization of EDM process parameters using TOPSIS for machining AISI D2 steel material. Mater. Today: Proc. 46, 701–706 (2021). https://doi.org/10.1016/j.matpr.2020.12.067
9. Tran, Q.P., Nguyen, V.N., Huang, S.C.: Drilling process on CFRP: multi-criteria decision-making with entropy weight using grey-TOPSIS method. Appl. Sci. 10(20), 7207 (2020). https://doi.org/10.3390/app10207207
10. Banerjee, B., Mondal, K., Adhikary, S., Paul, S.N., Pramanik, S., Chatterjee, S.: Optimization of process parameters in ultrasonic machining using integrated AHP-TOPSIS method. Mater. Today: Proc. 62, 2857–2864 (2022). https://doi.org/10.1016/j.matpr.2022.02.419
11. Odu, G.O.: Weighting methods for multi-criteria decision-making technique. J. Appl. Sci. Environ. Manag. 23(8), 1449–1457 (2019). https://doi.org/10.4314/jasem.v23i8.7
PCBA Solder Vision Inspection Using Machine Vision Algorithm and Optimization Process

Aries Dayrit(B), Robert De Luna, and Marife Rosales

Graduate School, Polytechnic University of the Philippines, Valencia St. Near Ramon Magsaysay Blvd., Sta. Mesa, Manila, Philippines
[email protected], {rgdeluna, marosales}@pup.edu.ph
Abstract. Inspection in the manufacturing industry is commonly used to detect any abnormality or non-conformance across entire processes. In human visual inspection, an operator conducts a 100% inspection of the product under a magnifying lamp. This kind of inspection affects product quality because the operator's judgment is inconsistent and highly dependent on experience and skill. In this paper, Automated Optical Inspection (AOI) image capturing is introduced to improve monthly production output. MATLAB machine vision algorithms such as image masking and segmentation process the captured image, and image boundaries are used to calculate the solder area and determine whether the product meets the defined specification. A 95% accuracy result was obtained on cross and hold-out validation using the K Nearest Neighbor (KNN) supervised learning classifier algorithm, and a 30–40% monthly productivity improvement was achieved after implementation of the AOI machine vision system.

Keywords: Automated optical inspection (AOI) · Machine vision · MATLAB · K nearest neighbor (KNN)
1 Introduction

In-process inspection plays a very critical role in producing high-quality products that meet the standards set by the company. Implementing in-process inspection systems for all critical processes in production lines helps to improve production productivity, quality, and customer satisfaction [1]. It also helps to contain a problem within the specific affected process rather than letting the issue flow down to subsequent processes. The operator randomly collects samples from all identified critical processes, which undergo 100% quality inspection to check whether any abnormalities occurred in the product after being processed in the machine. The disadvantage of human inspection is that product judgment is very subjective and depends on the operator's experience and physical condition [2]. The judgment of a highly skilled, experienced operator may differ from that of a newly trained operator who is not yet familiar with some of the critical inspection criteria. Unstable judgment may result in product defects escaping and leading to product failure at the customer's end.
To ensure the consistency of image inspection judgment, an automated inspection was developed to complement conventional manual human visual inspection. The product is inspected using a vision camera that captures an image of the region of interest, i.e. the critical parts of the product, to ensure that all the defined inspection criteria are met. One of the vision camera inspection systems that is widely used in the manufacturing industry is the automated optical inspection (AOI) system (Fig. 1).
Fig. 1. Prototype machine vision using AOI system
2 Automated Optical Inspection

Automatic visual inspection systems were developed to reduce cost, inconsistent judgment, and dependency on human expertise or experience [3]. Automated optical inspection (AOI) systems are a technology that is widely used in PCB manufacturing. The use of built-in camera sensors provides highly efficient image capturing, process improvements, and quality achievements [4].

2.1 System Configuration

The Automated Optical Inspection (AOI) system interface is composed of the sensor head, sensor amplifier, Ethernet cable, control panel, and USB memory stick (or SD card). The sensor head, or vision camera, is the heart of the inspection system, where
the image is detected, compared, and judged as to whether the captured image is acceptable by the defined standards. The sensor amplifier serves as the central processing unit responsible for monitoring the screen operation and the information from the camera setup parameters to output the image result. The image history is stored in a limited memory capacity and can be retrieved from the sensor amplifier using USB memory or an SD card. All images are directly projected onto the control panel to display the output image.

2.2 Camera Hardware Installation

In order to achieve optimum image capturing and avoid false vision rejection judgments (either over-rejecting or under-rejecting), the inspection camera position must be optimized with respect to the target object of inspection, i.e. the region of interest. The X (horizontal), Y (vertical), and Z (focus angle) positions are manually adjusted during installation to move and position the camera. This is one of the critical parts of installing an automated optical inspection system, as the sensor head is very sensitive to the X, Y, and Z positions; a wrong position on any axis will result in wrong image capturing, such as an unfocused image of the target object, a blurred image, or failure to capture the target at all.

2.3 Camera Sensor Head Optimization

Once the camera hardware configuration has been installed, camera sensor head optimization is the next step to capture a good quality and accurate image. The field of view (FOV) is one factor that needs to be considered, as ambient light will affect image quality. Ambient light, including solar light, lights of other devices, and photoelectric sensors, is among the factors that may affect the image during capture [5]. The light intensity from nearby ambient light may interfere with the light produced by the camera sensor head. A light shield around the camera is recommended if there are too many ambient light sources near where the camera is installed (Fig. 2).
Fig. 2. AOI camera head X, Y & Z position configuration
3 Methodology

Fourth industrial revolution (IR4) automated optical inspection (AOI) technology is commonly used to provide high-quality image inspection [6]. It helps the manufacturing industry to improve overall equipment efficiency (OEE) measures such as productivity, process yield, and product quality. Image segmentation is one of the machine vision methodologies widely used in digital image processing, in which the target image is analyzed into multiple layers or regions based on the characteristics of the pixels in the image. Image segmentation can separate the image foreground from the background or cluster regions of pixels based on similarities in color or shape [7]. The proposed system captures all the image data collected from automated optical inspection using sensor image detection. All captured images undergo a machine vision MATLAB programming algorithm in which image segmentation and boundary tracing are executed. The region of interest of the image is measured to classify whether the image is within the specification, in which case it is considered a "Good" sample, or does not meet the required specs, in which case it is classified as a "No Good" sample. All image measurement data are trained and validated using the supervised machine learning K Nearest Neighbor classifier algorithm to ensure the accuracy of the image judgment (Fig. 3).
Fig. 3. Flow diagram of proposed machine vision system
3.1 Data Collection

The image is captured by the camera head when an imaging startup signal, synchronized to the target position, is received from a photoelectric switch of the programmable logic controller; the imaging sensor then triggers at regular intervals and uses a built-in light to obtain an image of the target with the CMOS image sensor. Once the image has been captured, it is saved in allocated machine memory storage and processed by the MATLAB machine vision image segmentation program. The image area is measured, and the image is judged on this measurement as meeting the criteria (passed) or not (failed) (Fig. 4; Table 1).
Fig. 4. Machine vision MATLAB proposed flowchart
Table 1. Descriptive statistics of solder vision diameter measurement

Statistics     | Mfg DC  | PCBA SN | Diameter
Count          | 510.00  | 510.00  | 510.00
Average        | 6316.50 | 214.12  | 6.19
Std. deviation | 147.37  | 4.90    | 1.25
Min            | 6062.00 | 201.00  | 3.27
25%            | 6189.25 | 210.00  | 4.93
50%            | 6316.50 | 211.00  | 6.43
75%            | 6443.75 | 220.00  | 6.94
Max            | 6571.00 | 223.00  | 9.23
3.2 Image Masking

A set of data points is created within the identified target area to specify the region of interest (ROI) [8]. roipoly is a popular MATLAB function in which a binary image serves as a mask for the masked filtering process [9].

3.3 Image Segmentation

Image segmentation is a critical component of visual image representation, in which the image is divided into multiple segments [10]. The MATLAB activecontour function is commonly used as an image segmentation algorithm. The curve formulated during the masking process serves as the active region during image segmentation.
3.4 Image Boundary

Image boundaries are the traces of the outer boundaries of the active region of the image after the image has been segmented. bwboundaries is the MATLAB function that returns the row and column coordinates of the border pixels of the object. Binary pixels with a value of 1 (one) belong to the object, while all pixels with a value of 0 (zero) represent the background [11].

3.5 Image Area Calculation

Once the image has been segmented and its boundaries obtained using the bwboundaries function, the area of the target image is calculated using the MATLAB function regionprops. This function measures different kinds of image quantities and features in a black-and-white image. It automatically determines the properties of each contiguous white region connected to data in the masking process and measures the centroid, or center of mass, from the given image boundaries [12].

3.6 Machine Vision Judgment

The MATLAB machine vision algorithm calculates the solder diameter. The upper specification limit is 8 mm and the lower specification limit is 5 mm. The judgment is "Good" if the measured solder diameter is within 5–8 mm; otherwise, measurements outside the given specification are judged "No Good" (Fig. 5).
Fig. 5. Machine vision judgment based on the area calculation
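The pipeline of Sects. 3.2–3.6 is MATLAB-based (roipoly, activecontour, bwboundaries, regionprops). As a rough illustration of the same segment-trace-measure-judge flow, the following is a minimal sketch in Python with scikit-image, not the authors' implementation; the image path, the Otsu threshold standing in for the ROI mask plus active contour, and the px_per_mm calibration factor are all assumptions.

```python
import numpy as np
from skimage import io, filters, measure, segmentation

# Load the captured image as grayscale (the path is hypothetical)
img = io.imread("solder_joint.png", as_gray=True)

# Segmentation: an Otsu threshold stands in for the ROI-mask plus
# active-contour step of the MATLAB pipeline
mask = img > filters.threshold_otsu(img)

# Label connected regions and keep the largest one as the solder joint
labels = measure.label(mask)
solder = max(measure.regionprops(labels), key=lambda r: r.area)

# Outer boundary of the segmented region (analog of bwboundaries)
boundary = segmentation.find_boundaries(labels == solder.label)

# Equivalent circular diameter from the pixel area; px_per_mm is a
# hypothetical camera calibration factor
px_per_mm = 25.0
diameter_mm = 2.0 * np.sqrt(solder.area / np.pi) / px_per_mm

# Judgment against the 5-8 mm specification of Sect. 3.6
verdict = "Good" if 5.0 <= diameter_mm <= 8.0 else "No Good"
print(f"diameter = {diameter_mm:.2f} mm -> {verdict}")
```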
3.7 Image Training and Testing

The measured image data from machine vision undergo training and validation using different machine learning classifier algorithms, namely Logistic Regression (LR), K Nearest Neighbor (KNN), and Support Vector Machine (SVM). The data set was divided in an 80:20 ratio to provide the required data sets for training and testing, respectively (Tables 2 and 3).
Table 2. Supervised learning classifier cross-validation accuracy

Model                        | Setting               | Accuracy mean | Accuracy deviation
Logistic regression (LR)     | Solver = lbfgs        | 0.800000      | 0.478680
K Nearest neighbour (KNN)    | Default, k = 5        | 0.954901      | 0.017647
Support vector machine (SVM) | Default, kernel = rbf | 0.949019      | 0.019996
Table 3. Supervised learning classifier hold-out validation accuracy

Model                        | Setting               | Accuracy mean
Logistic regression (LR)     | Solver = lbfgs        | 0.843100
K Nearest neighbour (KNN)    | Default, k = 5        | 0.951000
Support vector machine (SVM) | Default, kernel = rbf | 0.947700
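As an illustration of the Sect. 3.7 flow, the following is a minimal scikit-learn sketch of the 80:20 split with cross and hold-out validation of the three classifiers; the CSV file and column names are hypothetical, so the printed numbers will not reproduce Tables 2 and 3 exactly.

```python
import pandas as pd
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Hypothetical CSV of measured diameters with a Good / No Good label
df = pd.read_csv("solder_measurements.csv")
X = df[["diameter"]].values
y = df["judgment"].values

# 80:20 train/test split as described in Sect. 3.7
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

models = {
    "LR (solver=lbfgs)": LogisticRegression(solver="lbfgs"),
    "KNN (k=5)": KNeighborsClassifier(n_neighbors=5),
    "SVM (kernel=rbf)": SVC(kernel="rbf"),
}
for name, model in models.items():
    # Cross-validation accuracy (mean and deviation, cf. Table 2)
    scores = cross_val_score(model, X_train, y_train, cv=5)
    # Hold-out accuracy on the 20% test split (cf. Table 3)
    holdout = model.fit(X_train, y_train).score(X_test, y_test)
    print(f"{name}: cv={scores.mean():.4f}±{scores.std():.4f}, "
          f"hold-out={holdout:.4f}")
```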
3.8 Feature Selection Before Optimization

Feature selection is the method of reducing the input variables to the model so as to identify the relevant data and remove noise. It is also the process that automatically chooses the potential attributing factors to be used in the optimization process. The Univariate Selection, Recursive Feature Elimination, and Feature Importance feature selection methods were used in this paper (Fig. 6).
Fig. 6. Feature selection model result
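For illustration, a minimal scikit-learn sketch of the three named feature selection methods is given below, not the authors' code; it assumes the X_train and y_train arrays from the previous sketch and that X_train holds several candidate feature columns.

```python
from sklearn.feature_selection import SelectKBest, f_classif, RFE
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# Univariate selection: score each feature independently
scores = SelectKBest(f_classif, k="all").fit(X_train, y_train).scores_

# Recursive feature elimination with a logistic regression estimator
rfe = RFE(LogisticRegression(max_iter=5000), n_features_to_select=1)
ranking = rfe.fit(X_train, y_train).ranking_

# Feature importance from a tree ensemble
importances = RandomForestClassifier().fit(X_train, y_train).feature_importances_

print(scores, ranking, importances)
```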
3.9 Optimization Process

Since logistic regression had the lowest accuracy during the training and testing of the dataset, it underwent an optimization process to improve its accuracy. The initial mean accuracies from cross and hold-out validation were 0.8000 and 0.8431, respectively. Logistic regression was optimized over its parameters using different solvers, namely "lbfgs", "newton-cg", and "saga" (Table 4).
Table 4. Logistic regression optimization accuracy result

Solver    | C    | Penalty | Accuracy
lbfgs     | 1000 | l2      | 0.822600
newton-cg | 1000 | l2      | 0.806500
saga      | 1000 | l2      | 0.774200
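A minimal sketch of such a solver search using scikit-learn's GridSearchCV is shown below, with the parameter values mirroring Table 4; the training arrays are assumed from the earlier sketch, so this is an illustration rather than the authors' exact procedure.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Search over the solvers and settings reported in Table 4
param_grid = {
    "solver": ["lbfgs", "newton-cg", "saga"],
    "C": [1000],
    "penalty": ["l2"],
}
search = GridSearchCV(LogisticRegression(max_iter=5000), param_grid, cv=5)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```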
4 Result and Discussion

4.1 Image Diameter Judgment Result

Based on the quality standard specs defined by the company to ensure the continuity of the electronic components attached to the PCB, the solder diameter must lie within the 5–8 mm specification of Sect. 3.6 to be considered a "Good" solder condition; diameters outside this range are judged "No Good" or reject samples. 510 PCBA samples, comprising a mix of good and no-good conditions, were run through the prototype machine vision system using an AOI-type camera. All captured images were processed with the machine vision MATLAB programming algorithm, in which image segmentation and boundaries were defined to calculate the diameter of the PCBA solder. The machine vision MATLAB algorithm was able to classify the image samples as either good or no good based on the calculated diameter. Out of 510 samples, 291 were judged as good samples, while 219 samples were judged as no good.

4.2 Image Training and Testing Result

All image samples were split in an 80:20 ratio into training and testing samples, respectively. The K Nearest Neighbor (KNN) classifier is recommended for the proposed system, as its accuracy reaches the 95% level in both cross and hold-out validation. The results also show little variation in the process, as the standard deviation value is 0.0176.

4.3 Optimization Result

Logistic regression optimization using "lbfgs" as the solver gives the best optimization accuracy of 0.8226, which improves the accuracy by about 2%.

4.4 Projected Improved Productivity

The implementation of machine vision in an automated optical inspection system helps to improve the in-process vision inspection yield. The historical average vision inspection yield is 97%, which is below the target yield of 99%. The projected yield can reach up to 99%, which corresponds to an approximately 30–40% increase in production vision inspection output after the full implementation of the MATLAB machine vision program into the machine-automated optical inspection system.
5 Conclusion

In this research paper, automated optical inspection (AOI) is highly recommended to ensure clear image capture. The machine vision programming algorithm ensures correct image classification into "Good" or "No Good" samples based on the calculated solder diameter measurement. Judgment accuracy was ensured using the K Nearest Neighbor (KNN) algorithm, which achieved a result of 95% with 0.0176 variation in the process for both cross and hold-out validation. The proposed machine vision system using AOI with a machine vision algorithm can improve the overall machine vision yield by 30–40%. Product quality will also improve because of the higher accuracy of vision judgment from the AOI compared to human inspection.

Acknowledgments. I would like to express my sincere gratitude to my professor Engr. Marife Rosales and Dr. Robert de Luna for their continuous support while I was writing this paper. Besides my professors, I would also like to thank my co-workers, Process Technician Gregorio Masacupan and Equipment Engineer Michael Arce, for helping me to set up and test the prototype AOI machine. Finally, I would like to thank my wife Eleina Dayrit, and my kids Andrea, Allen, and Adrian Dayrit for all the inspiration, motivation, and support while I was working on this paper.
References

1. Martinez, P.A., Ahmad, R.: Quantifying the impact of inspection processes on production lines through stochastic discrete-event simulation modeling. Modelling 2(4), 406–424 (2021). https://doi.org/10.3390/modelling2040022
2. Yang, F., Ho, C., Chen, L.: Automated optical inspection system for O-ring based on photometric stereo and machine vision. Appl. Sci. 11(6), 2601 (2021). https://doi.org/10.3390/app11062601
3. Jalayer, M., Jalayer, R., Kaboli, A., Orsenigo, C., Vercellis, C.: Automatic visual inspection of rare defects: a framework based on GP-WGAN and enhanced faster R-CNN. In: 2021 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT) (2021). https://doi.org/10.1109/iaict52856.2021.9532584
4. Vitoriano, P., Amaral, T.: 3D solder joint reconstruction on SMD based on 2D images. SMT Surface Mount Technol. Mag. 31 (2016)
5. Keyence IV2 Series User Manual, "Mounting the Sensor", p. 34
6. Copyright (C) 2023 Keyence Corporation [Online]
7. Moru, D.K., Borro, D.: A machine vision algorithm for quality control inspection of gears. Int. J. Adv. Manufact. Technol. 106(1–2), 105–123 (2019). https://doi.org/10.1007/s00170-019-04426-2
8. The MathWorks, Inc.: 1994–2021, Image Processing Toolbox Documentation, Image Segmentation
9. Brinkmann, R.: The art and science of digital compositing. Choice Rev. Online 37(07), 37–3935 (2000). https://doi.org/10.5860/choice.37-3935
10. Roipoly (Image Processing Toolbox). http://www.ece.northwestern.edu [Online]
11. Minaee, S., Boykov, Y., Porikli, F., Plaza, A., Kehtarnavaz, N., Terzopoulos, D.: Image segmentation using deep learning: a survey. IEEE Trans. Pattern Anal. Mach. Intell. 1 (2021). https://doi.org/10.1109/tpami.2021.3059968
12. The MathWorks, Inc.: 1994–2021, Image Processing Toolbox Documentation, Boundary Tracing in Images
13. Stack Overflow: Explanation of Matlab's bwlabel, regionprops and centroid functions [Online]
AI-Based Air Cooling System in Data Center

Shamsun Nahar Zaman1, Nadia Hannan Sharme1, Rehnuma Naher Sumona1, Md. Jekrul Islam1, Ahmed Wasif Reza1(B), and Mohammad Shamsul Arefin2(B)

1 Department of Computer Science and Engineering, East West University, Dhaka 1212, Bangladesh
[email protected]
2 Department of Computer Science and Engineering, Chittagong University of Engineering and Technology, Chattogram 4349, Bangladesh
[email protected]
Abstract. The increasing demand for storage, networking, and computing power has increased the number, size, complexity, and power density of data centers, creating significant energy challenges. Cooling energy consumption accounts for the majority of total data center consumption and can account for as much as 40% in inefficient cooling systems. The objective of our research is to highlight the effectiveness, as well as the pros and cons, of Artificial Intelligence (AI)-based cooling systems in a data center that is currently manually controlled and monitored. We focus mainly on the effectiveness of AI-based cooling systems and how they can be helpful for monitoring as well as being much greener in terms of cost, power, and human resources. To carry out the aims of this research, a Support Vector Machine (SVM) was implemented. This SVM provides a more precise and accurate result, which helps the cooling system regulate much more efficiently in terms of power and cost. The accuracy of the implemented model is 82%. This paper is composed with the aim of converting currently manually controlled and less efficient data centers of similar condition, dimensional size, and equipment.

Keywords: Data center · Cooling system · Power consumption · Efficient cooling system · SVM model · AI cooling system · Green data center
1 Introduction

Data centers are among the largest energy consumers, especially in many developed countries. Rapid advances in information technology have created massive data centers around the world, due to the increased use of cloud systems and the exchange of information between users. It is recorded that the number has increased year by year, reaching more than 100 million by 2014 [1]. This massive data management has increased power consumption. The trend in data center development is leading to the densification of data centers and of the IT devices that contribute to excessive energy consumption and costs. Temperature is a critical factor for data center devices: there is a possibility of malfunction or
failure due to overheating. At the same time, these devices carry out mission-critical tasks and must operate continuously (24 hours a day, 7 days a week). As a result, controlling the airflow and the cooling system is vital, as is the task of supplying cold air to the data center and ensuring that all equipment is properly cooled. IT devices consume 50% of total energy consumption and cooling systems about 37% [1, 2]. The target of this paper is to focus on efficiency, because energy efficiency programs are meant to reduce the energy consumption of the data center cooling system. There are numerous methods to improve cooling system efficiency, which can result in energy savings of up to 45% [2]. We introduce intelligent data center cooling as a dynamic cooling system that coordinates cooling and computing distributions through modeling and measurement, recording the actual and expected temperature distributions in the data center in real time [3]. The optimization of the cooling system is a compromise between these two goals [4, 5]. The primary contributions of this paper are: (i) Formulas are customized to suit the conditions of the data center, which include the size, equipment, models, and planning of the entire data center. (ii) An SVM model was implemented for these conditions, and data were collected from the data center in a much greener way, providing higher accuracy and precision; this makes the installed cooling system more sensitive to changes and may thus decrease power consumption and improve cost efficiency. The biggest challenge in data center O&M is figuring out which components of the system need to be changed and then figuring out the best combination after doing so. In current O&M practice, there are no formulas or algorithms to refer to. A prediction model (the SVM model) is sent to the framework for inference, and potential cooling settings are scanned and simulated with an evolutionary algorithm using the inference platform's powerful inference and computational capabilities. Moreover, researchers have covered a wide range of effective cooling aspects in data centers and have figured out ways to make cooling more efficient in terms of power and energy [11]. Additionally, researchers have often focused not only on graphical presentations but also on the confusion matrix [18]. Besides that, it is important to highlight how data center efficiency is measured, which can be determined by observing the Power Usage Effectiveness (PUE) [7–9, 16, 17].
2 Literature Review

In this section, we focus on current related work, discussing the analyses, observations, contributions, and an overview of the works. In [1], the authors explained and analyzed why airflow is important for data centers cooled with air-cooling systems: the thermal environment and energy efficiency of a data center with a raised floor are affected by the flow path and distribution. However, that research was done in a state-of-the-art data center, so a proper understanding of how a small-scale data center might operate is difficult to obtain from it. In [2], the authors discussed optimizing the power consumption of the cooling system in the data center. They provided a detailed and in-depth analysis of Power Usage Effectiveness (PUE), used to determine suitable methods for cooling systems.
In [4], the authors shared their in-depth research on established systems and how they can be re-implemented into better, smarter systems by redesigning the layout and aisle positions. However, the data center in that paper was also state of the art, so similar conclusions cannot be drawn for a smaller-scale data center. On the other hand, we have covered the shortcomings found in that paper by implementing much greener models. In [5], the authors considered the primary factors in the data centers' room-level thermal environment: layout, raised floor plenum and ceiling heights, and perforated tiles. Two major air distribution problems have been identified in data centers: bypass air and recirculated air. Recirculated air occurs when there is insufficient airflow to the equipment and some of the hot air is recirculated, which can lead to large inlet temperature differences between the bottom and top of the rack. Cold bypass is caused by high flow or leakage from the cold air path. In [11], the authors covered a wide range of aspects by thoroughly analyzing cooling methods, configurations, models, airflow, IT load, equipment, power consumption, etc. Mathematical formulas and their uses are clearly given, and readers are encouraged to save energy and use power efficiently. In [18], the authors discussed the importance of the confusion matrix as a metric for Machine Learning and Deep Learning models, how important it is for researchers to understand and use it to evaluate their models, and its importance in the Data Science field and in practical work. In [19], the authors performed a very detailed analysis of a specific aspect of cooling systems: how natural cold resources can be included to reduce power consumption and make cooling more cost-effective. Different seasons in different regions were also covered; however, tropical regions like Bangladesh, India, etc. were not taken into account. Researchers have tried to address many issues related to green computing [20–29] to make our planet safe for living. In this research paper, we have considered and analyzed the above-mentioned papers, as well as other similar papers, to construct the proposed methodology. We propose an AI-based cooling system that is much more efficient for a small-scale data center and can be implemented for data centers in every country and region.
3 Materials and Methods

In this section, we provide detailed explanations and reasoning for the aim and purpose of our research. A flowchart of our contribution and its process is clearly labeled and further explained.

3.1 Data Collection and Dataset

For the research, we collected all the datasets required to proceed with our aims from the officers in charge of the data center at
East West University via interviews over several days. We did not use any datasets from outside the university due to various limitations that come with the terms and conditions of data centers. The collected datasets contain various types of data, including the following: (i) Power room and server room temperature and humidity status: this includes the temperature status inside the power room, with a statistical plot showing the change in temperature, and a graphical representation of the temperature/humidity status. The power room humidity status was also collected along with its graph. All of the mentioned data were collected from all the connectors of the server room. Table 1 shows the temperature and humidity readings of both the server and power rooms, as provided by the data center officer:

Table 1. Dataset of power room and server room of EWU data center

                          | Temperature (°C) | Humidity (%)
Power room, lowest reading  | 18.5             | 31.8
Power room, highest reading | 40.2             | 82.6
Server room, lowest reading | 8.9              | 31.5
Server room, highest reading| 33.8             | 92.6
After doing so, the dataset was constructed as a CSV file so that it could be used for the AI models later on. (ii) Current model dataset: Table 2 is the dataset chart provided by the officers of the EWU data center. The dataset includes the floor area, number of servers and racks, server power consumption, UPS with its battery power consumption, and lighting. All the information within the columns was provided, not calculated by us.

3.2 Design and Progress Flowchart

First, all the datasets necessary for the research were collected and understood with the help of the data center officers in charge. Then the formulas were derived according to the dimensions, size, condition, and current equipment and its performance, as considered in the flowchart in Fig. 1.

3.3 Algorithm of Model

We have implemented two machine learning models: one is a Support Vector Machine (SVM), and the other is a recurrent neural network called Long Short-Term Memory (LSTM).
Table 2. Dataset chart of EWU data center

Item                               | Calculation                                                         | Total
Floor area                         | 300 ft2 (300 × 20)                                                  | 4.5 kW/15363 BTU
Servers and racks                  | 6 racks with 5 servers each (6 × 5)                                 | 30 servers
Server power consumption           | 15 W each (30 × 15)                                                 | 0.45 kW/1536.3 BTU
Lighting                           | 10 W × 15                                                           | 0.15 kW/512.1 BTU
UPS with battery power consumption | 2 UPS 40 KVA (capacity 32 kW) with 12% power consumption (3.84 × 2) | 7.68 kW/26219.52 BTU
Grand total                        | 4.5 + 0.45 + 7.68 + 0.15                                            | 12.78 kW/43630.92 BTU
Fig. 1. Flow chart of the proposed model
Both models are basic, and they were chosen because of the lack of usage of either model in small-scale data centers globally.

• LSTM Model: Long Short-Term Memory is a special kind of recurrent neural network that is able to learn dependence over time in data. We have implemented an LSTM model for prediction and classification purposes.

Proposed model's procedure:
1: Load the dataset in the project repository by mounting Google Drive.
2: Read the datasets.
3: Check the dataset by printing it and checking for null values.
4: Extract all the time-based data and convert them to datetime indices.
5: Print the number of unique years.
6: Plot the power consumption graph (Energy in week versus Date).
7: Plot the energy distribution graph.
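A minimal pandas/matplotlib sketch of steps 1–7 is given below; the CSV file and the Date/Energy column names are hypothetical assumptions, and the LSTM network itself is not shown here, only the data preparation and plotting steps the procedure lists.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Steps 1-4: load the dataset and index it by date
# (file and column names are hypothetical)
df = pd.read_csv("datacenter_power.csv", parse_dates=["Date"])
print(df.isnull().sum())           # step 3: check for null values
df = df.set_index("Date")
print(df.index.year.unique())      # step 5: unique years

# Step 6: power consumption graph, energy per week versus date
df["Energy"].resample("W").sum().plot(title="Energy per week vs Date")
plt.show()

# Step 7: energy distribution graph
plt.hist(df["Energy"], bins=20, density=True)
plt.title("Energy distribution")
plt.show()
```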
Algorithm of the model:
Input: Datasets.
Output: Power consumption graph and energy distribution graph.
1: Import the libraries.
2: Read the dataset.
3: Extract the data, store them in DateTime format, and take the data from all date-based columns.
4: Print the unique years.
5: Import the style from the libraries.
6: Plot the power consumption graph.
7: Generate the subplot of Energy in week versus Date.
8: Plot the energy distribution graph.
Both graphs are explained and provided in the results section in Figs. 2 and 3.

• SVM Model: The Support Vector Machine is a machine learning model that is widely used for classification and regression analysis. For that reason, we have implemented this model to aid the LSTM model in providing a more accurate and efficient output for operating the data center.

Proposed model's procedure:
1: Load the dataset by mounting Google Drive.
2: Create NumPy arrays and store the datasets in the arrays.
3: Import LabelEncoder and encode the datasets to convert them into numeric values.
4: Split the dataset for training and testing.
5: Import the LinearSVC model and fit the training data sets to the model.
6: Check the prediction after training the model and reshape the output.
7: Import the confusion matrix and fit the testing dataset and prediction to it.
8: Generate the confusion matrix using heatmap plotting.
9: Check the accuracy score of the model.

Algorithm of the model:
Input: Datasets.
Output: Confusion matrix as a heatmap plot.
1: Import the libraries Pandas, seaborn, matplotlib.pyplot, and NumPy.
2: Read the dataset.
3: Preprocess and encode the dataset.
4: Split the dataset into training and testing sets.
5: Print the lengths of the x training and testing sets and the y training and testing sets.
6: Import LinearSVC and fit the training sets.
7: Predict y by calling predict() on the model using the x testing dataset.
8: Convert the predicted values to vertical form using reshape() with parameters (length of y test, 1).
9: Convert y_true to vertical form by using reshape() on the y test.
10: Use NumPy concatenate and store the vertical y_true and vertical y test.
11: Predict with the SVM using the x test dataset and store the result.
12: Reshape the predicted y and store it.
13: Import the confusion matrix from the library.
14: Use a seaborn heatmap to plot the confusion matrix.
15: Generate the accuracy by using the accuracy score function with the y test and y predicted values.
16: Print the SVM prediction.
The confusion matrix is explained and provided in the Results section in Fig. 4.
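A minimal scikit-learn sketch of the SVM algorithm above is given below; the dataset file and the feature/label column names are hypothetical assumptions, not the authors' exact code.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import confusion_matrix, accuracy_score

# Steps 1-3: read the dataset and encode labels to numeric values
# (file and column names are hypothetical)
df = pd.read_csv("datacenter_readings.csv")
X = df[["temperature", "humidity"]].values
y = LabelEncoder().fit_transform(df["status"])

# Step 4: 80:20 train/test split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Steps 6-7 and 11: fit LinearSVC and predict on the test set
model = LinearSVC().fit(X_train, y_train)
y_pred = model.predict(X_test)

# Steps 13-14: confusion matrix plotted as a seaborn heatmap
sns.heatmap(confusion_matrix(y_test, y_pred), annot=True, fmt="d")
plt.show()

# Step 15: accuracy score
print(accuracy_score(y_test, y_pred))
```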
4 Results

In this section, the plots simulated from both the LSTM model and the SVM model are provided, with a description below each of them.

4.1 Figures of Models and Descriptions
Fig. 2. LSTM model’s energy per week versus date graph
Figure 2 was simulated via the LSTM model as a graphical representation of how energy is consumed on a weekly basis. It can be seen that there is a linear increase in power consumption from the first of one month to the first day of the next month. The weekly basis of the calculation can be recognized from the curve returning to zero after 4 intervals, which correspond to the 4 weeks in a month. The line then has a constant gradient for the next 4 weeks. It can also be seen that the power increases from 33.00 to 34.00 kWh.
Fig. 3. Energy distribution graph from LSTM model, density versus week
Fig. 4. Confusion matrix generated by the SVM model
Figure 3 plots the energy distribution throughout the week. It indicates how the cooling system, as well as the other equipment in the data center, consumes power on a weekly basis. The bars indicate how much power is consumed in each slot, and the curve shows the consumption in continuous terms.

Figure 4 is the confusion matrix generated by the SVM model we implemented. The confusion matrix is a very popular metric for classification problems; both binary and multiclass classification issues can be evaluated with it [18]. The confusion matrix was utilized for the performance evaluation of the methods after classification was carried out. Table 3 is the dataset obtained from the proposed models as if launched in the data center. We can see that the power consumption for the data center floor area and the use of far fewer energy-saving lights have reduced the total power consumption. This reduction in power consumption has put the PUE of the proposed model on an efficient path compared to that of the current model.

4.2 Formatting of Mathematical Components

In this section, the formulas and numeric methods used for the proposed models and methods are provided, with the reasons behind each included below. Equation (1) is the formula to calculate the Power Usage Effectiveness (PUE) of a data center, applied here to the current model:

PUE = Total facility power / IT equipment energy = 12.78 kW / 8.13 kW   (1)

PUE of current model = 1.57   (2)
Table 3. Proposed models table obtained after model implementation

Item                               | Calculation                                                         | Total
Floor area                         | 300 ft (300 × 20)                                                   | 1.76 kW/6008.64 BTU
Server and racks                   | 6 racks with 5 servers each (6 × 5)                                 | 30 servers
UPS with battery power consumption | 2 UPS 40 KVA (capacity 32 kW) with 12% power consumption (3.84 × 2) | 7.68 kW/26219.52 BTU
Lighting                           | 5 W × 8                                                             | 0.04 kW/136.56 BTU
Server power consumption           | 15 W each (30 × 15)                                                 | 0.45 kW/1536.3 BTU
Grand total                        | 1.76 + 0.45 + 7.68 + 0.08                                           | 9.898 kW/33791.772 BTU for max cooling
The result of Eq. (2) above is the PUE of the current model being used in the data center of East West University (EWU). Next, the Data Center Infrastructure Efficiency (DCIE) is calculated from Eq. (3); this is used to determine the efficiency of the systems used in the data center:

DCIE = (1/PUE) × 100 = (1/1.57) × 100 = 63.7%   (3)
Now, calculating the PUE of the proposed model for the data center (data taken from Table 3):

PUE = Total facility power / IT equipment energy = 9.898 kW / 8.13 kW
PUE of proposed model = 1.22   (4)
Now, calculating the Data Center Infrastructure Efficiency (DCIE) of the proposed model:

DCIE = (1/PUE) × 100 = (1/1.22) × 100 = 82.0%   (5)
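As a small worked companion to Eqs. (1)–(5), the following sketch computes PUE and DCIE for both models from the totals in Tables 2 and 3, assuming 8.13 kW of IT equipment energy in both cases; it is an illustration, not part of the deployed system.

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness, Eq. (1)."""
    return total_facility_kw / it_equipment_kw

def dcie(pue_value):
    """Data Center Infrastructure Efficiency in percent, Eq. (3)."""
    return 100.0 / pue_value

# Totals taken from Table 2 (current) and Table 3 (proposed)
for label, total_kw in [("current", 12.78), ("proposed", 9.898)]:
    p = pue(total_kw, 8.13)
    print(f"{label}: PUE = {p:.2f}, DCIE = {dcie(p):.1f}%")
```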
From the two DCIE percentages alone, we can conclude that our proposal will be much more efficient. Furthermore, from Table 4, which gives the general measurement used to determine the efficiency level of a data center based on PUE and DCIE, we can see that the current DCIE rates as average while ours rates as efficient. Table 5 lists the unit conversions required for the values in Table 3 and provides an easier understanding of the conversions used in this paper. Additionally, we have plotted a pie chart as a statistical representation of the PUE of both the current and proposed models in Fig. 5.
Table 4. Table of efficiency based on DCIE and PUE

DCIE   | Efficiency level
33–39% | Very inefficient
40–49% | Inefficient
50–66% | Average
67–82% | Efficient
83–99% | Very efficient
Table 5. Conversion table for units required

To convert              | Multiply by
BTU/hour into Watts     | 0.293
Watts into BTU/hour     | 3.41
Watts to kiloWatts      | 0.001
BTU/hour into kiloWatts | 0.000293
kiloWatts into BTU/hour | 3414
Fig. 5. PUE of the current model and proposed model
5 Discussion

So far, we have seen that authors like Capozzoli [1] have pointed out the power and cost efficiency of cooling systems, mostly based on state-of-the-art data centers around the world. In their papers, they covered a wide range of consumption and cost analyses, described how they approached and utilized the data they collected, and finally provided an analysis of the systems the data centers use. Some authors, like R. Kumar [3], Patel [4], and Chaoqiang Jin [6], covered the design aspect of data center cooling in papers containing complete analyses by the authors and their co-authors. However, from what we have found, almost none of the papers so far contained a broader analysis of data centers that use air cooling systems together with a green AI model that can be used to make the
system much more efficient. In [21], the author performed detailed analysis and research on air cooling systems in data centers and provided a well-explained account of the observations and approach. Additionally, there has not been much research on how small-scale data centers operate, what cooling systems they have, and how to make small-scale data centers more efficient without installing a chilled water cooling system. The research we have done covers this gap left by other researchers, and we were able to provide a much wider perspective on a small-scale data center, how it operates, and what can be done to make it greener. However, there are a few limitations in our data collection: the datasets used have limitations of their own, and they do not contain a large amount of data, which makes them harder to use in the models. Due to the limitations of the datasets provided by the data center, the accuracy was below 90%. Figures 2 and 3 in Results Sect. 4.1 are examples of how the dataset performed and contributed to the capabilities and reliability of the model. The accuracy obtained from the model was made possible by the efficient power calculations conducted using the mathematical formulas in Sect. 4.2, Eq. 5. The applied models provided efficiently controlled outputs, which later produced efficient results. The models on their own are among the best for classification, regression, and accuracy prediction; thus, the limitation lies with the datasets, and the models would be more reliable if more standard datasets were used.
6 Conclusion

This research was carried out as planned, and we were able to meet the goals and aims set at the beginning by providing sufficient data analysis and detailed explanations of which models were implemented, how and why they were used, what outcomes they provide, and how they are greener than the current setup. Both the SVM model and the LSTM model were successfully implemented, and they were able to generate the required graphs. Therefore, this study shows that by using a combination of machine learning and deep learning models, an efficient solution for the power consumption of cooling systems in data centers can be obtained, and it can be utilized and implemented to make data centers more efficient and greener overall. Additionally, this study can be used as a guide to convert manually controlled and monitored data centers of a size similar to the one studied into automatically controlled, power-efficient, and fast-cooling data centers. Overall, the study provides a creditable contribution by addressing the energy-related challenges faced by data centers. To conclude, the gap we focused on was to conduct a study and create a model based on small-scale data centers. As the authors of this paper, we recommend collecting more and better datasets with more features and precision so that they can be used for further efficient studies and implementations.
References

1. Capozzoli, A., Primiceri, G.: Cooling systems in data centers: state of art and emerging technologies. Energy Procedia 83, 484–493 (2015)
2. Kumar, R., Khatri, S.K., José Diván, M.: Effect of cooling systems on the energy efficiency of data centers: machine learning optimisation. In: 2020 International Conference on Computational Performance Evaluation (ComPE), pp. 596–600 (2020)
3. Patel, C., Bash, C., Sharma, R., Beitelmal, M., Friedrich, R.: Smart cooling of data centers, pp. 4–5 (2003). https://doi.org/10.1115/IPACK2003-35059
4. Fei, Z., Song, X.: Deep neural networks can reverse spiraling energy use in data centers & cut PUE, pp. 5–9 (2020)
5. Jin, C., Bai, X., Yang, C.: Effects of airflow on the thermal environment and energy efficiency in raised-floor data centers. Sci. Total Environ. 695, 133801 (2019). ISSN 0048-9697
6. Mukaffi, A.R.I., Arief, R.S., Hendradjit, W., Romadhon, R.: Optimization of cooling system for data center case study: PAU ITB data center. Procedia Eng. 170, 552–557 (2017). ISSN 1877-7058
7. Zuo, W., Wetter, M., Van Gilder, J., Han, X., Fu, Y., Faulkner, C., Hu, J., Tian, W., Condor, M.: Improving data center energy efficiency through end-to-end cooling modeling and optimization, pp. 1–109. Report for US Department of Energy, DOE-CUBoulder-07688 (2021)
8. Bhatia, A.: HVAC cooling systems for data centers, pp. 15–23 (2016)
9. Sharma, M., Arunachalam, K., Sharma, D.: Analyzing the data center efficiency by using PUE to make data centers more energy efficient by reducing the electrical consumption and exploring new strategies. Procedia Comput. Sci. 48, 142–148 (2016). https://doi.org/10.1016/j.procs.2015.04.163
10. Tschudi, W., Xu, T., Sartor, D., Stein, J.: High-Performance Data Centers: A Research Roadmap, pp. 25–37 (2004)
11. Zhang, Y., Liu, C., Wang, L., Yang, A.: Information Computing and Applications—Third International Conference, ICICA 2012, Chengde, China, September 14–16. Proceedings, Part II. Communications in Computer and Information Science, vol. 308, pp. 179–186. Springer, Berlin (2012)
12. Van Houdt, G., Mosquera, C., Nápoles, G.: A review on the long short-term memory model. Artif. Intell. Rev. (2020)
13. Zhang, Y.: Support vector machine classification algorithm and its application. In: Liu, C., Wang, L., Yang, A. (eds.) Information Computing and Applications. ICICA, Communications in Computer and Information Science, vol. 308. Springer, Berlin (2012)
14. Pries, R., Jarschel, M., Schlosser, D., Klopf, M., Tran-Gia, P.: Power consumption analysis of data center architectures, vol. 51 (2012). https://doi.org/10.1007/978-3-642-33368-2_10
15. Dai, J., Ohadi, M.M., Das, D., Pecht, M.: Optimum Cooling of Data Centers, pp. 47–90 (2014). ISBN 978-1-4614-5602-5
16. Borgini, J.: How to calculate data center cooling requirements (2022)
17. Deng, X., Liu, Q., Deng, Y., Mahadevan, S.: An improved method to construct basic probability assignment based on the confusion matrix for classification problem. Inf. Sci. 340–341, 250–261 (2016). ISSN 0020-0255
18. Kannan, R., Roy, M.S., Pathuri, S.H.: Artificial intelligence based air conditioner energy saving using a novel preference map. IEEE Access 8, 206622–206637 (2020). https://doi.org/10.1109/ACCESS.2020.3037970
19. Lucchese, R.: Cooling control strategies in data centers for energy efficiency and heat recovery, pp. 79–97 (2019). ISBN 978-91-7790-438-0
20. Yeasmin, S., Afrin, N., Saif, K., Reza, A.W., Arefin, M.S.: Towards building a sustainable system of data center cooling and power management utilizing renewable energy. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_67
21. Liza, M.A., Suny, A., Shahjahan, R.M.B., Reza, A.W., Arefin, M.S.: Minimizing e-waste through improved virtualization. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_97
22. Das, K., Saha, S., Chowdhury, S., Reza, A.W., Paul, S., Arefin, M.S.: A sustainable e-waste management system and recycling trade for Bangladesh in green IT. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_33
23. Rahman, M.A., Asif, S., Hossain, M.S., Alam, T., Reza, A.W., Arefin, M.S.: A sustainable approach to reduce power consumption and harmful effects of cellular base stations. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_66
24. Ahsan, M., Yousuf, M., Rahman, M., Proma, F.I., Reza, A.W., Arefin, M.S.: Designing a sustainable e-waste management framework for Bangladesh. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_104
25. Mukto, M.M., Al Mahmud, M.M., Ahmed, M.A., Haque, I., Reza, A.W., Arefin, M.S.: A sustainable approach between satellite and traditional broadband transmission technologies based on green IT. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_26
26. Meharaj-Ul-Mahmmud, Laskar, M.S., Arafin, M., Molla, M.S., Reza, A.W., Arefin, M.S.: Improved virtualization to reduce e-waste in green computing. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_35
27. Banik, P., Rahat, M.S.A., Rafe, M.A.H., Reza, A.W., Arefin, M.S.: Developing an energy cost calculator for solar. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_75
28. Ahmed, F., Basak, B., Chakraborty, S., Karmokar, T., Reza, A.W., Arefin, M.S.: Sustainable and profitable IT infrastructure of Bangladesh using green IT. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_18
29. Ananna, S.S., Supty, N.S., Shorna, I.J., Reza, A.W., Arefin, M.S.: A policy framework for improving e-waste management in Bangladesh. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_95
Reinforced Concrete Beam Optimization via Flower Pollination Algorithm by Changing Switch Probability Parameter

Yaren Aydın, Gebrail Bekdaş, and Sinan Melih Nigdeli(B)

Department of Civil Engineering, Istanbul University-Cerrahpaşa, 34320 Avcılar, Istanbul, Turkey
[email protected], {bekdas,melihnig}@iuc.edu.tr
Abstract. Reinforced concrete (RC) is one of the most widely used and preferred types of construction. Therefore, the optimum design of RC load-bearing elements is an important design problem, and metaheuristic algorithms are suitable for the optimum design of RC beams. In this study, the problem of minimizing the cost of reinforced concrete beams is solved with the Flower Pollination Algorithm (FPA). For this purpose, the dimensions of a single-span rectangular RC beam are optimized to give minimum cost. In the analysis, the effects of defining different values of the switch probability (sp) and of defining the sp value randomly on the optimum result were determined. Results showed that using a random sp performs nearly as well as the best fixed sp cases.

Keywords: Optimization · Metaheuristic · Reinforced concrete beam · Flower pollination algorithm · Cost optimization
1 Introduction

Optimization is the technique of finding the best solution among the set of all possible solutions, and it has been widely used in engineering for a long time. Cost is one of the most important parameters in the construction of structures; a good engineering design must be economic and safe at the same time. Reinforced concrete (RC) is one of the most widely used and preferred types of construction because it is rigid and economical. It consists of a combination of concrete and steel, and it has high fire resistance, providing a long service life. In the design of reinforced concrete (RC) structures, the optimum design problem that minimizes cost can only be solved by iterative methods. The best and most effective methodology for the iterative discovery of different design-variable combinations is provided by metaheuristic methods, and there is a need for an improved method for frame members [1]. Since reinforced concrete is widely used in civil engineering applications, the optimum design of reinforced concrete structures is very important, and metaheuristic algorithms are effective in the optimization of structural engineering problems. Different methods
have been developed and used to find the optimum design for engineering design problems. Optimization enables the efficient finding of purpose-based designs for complex problems in structural engineering [2]. In the literature, many studies have addressed cost optimization. Bekdaş et al. [1] proposed a modified harmony search methodology for the cost and CO2 optimization of RC frames including dynamic forces resulting from earthquake motion; the study showed that the use of recycled elements provides a sustainable design. Camp and Huq [3] used a big bang-big crunch (BB-BC) algorithm for CO2 and cost optimization of reinforced concrete frames; the BB-BC algorithm produced frame designs that reduce costs and carbon footprint. Jelušič [4] presented the cost optimization of a reinforced concrete section using mixed-integer nonlinear programming (MINLP) and observed that using direct crack width calculation instead of restricting the bar spacing reduces material costs in concrete sections. Lee et al. [5] employed a Genetic Algorithm (GA) to minimize the cost and CO2 emissions of RC frames; according to the results, the design with the lowest carbon dioxide emissions has a relatively large amount of reinforcement bars. Nigdeli et al. [6] proposed a Harmony Search-based methodology for biaxially loaded RC columns; the proposed method is effective in finding a detailed optimum result by using optimum bars with different sizes. Kayabekir et al. [7] used the Jaya algorithm to optimize T-beams and found that CO2 emission optimization is more effective than cost optimization in reducing the CO2 emission value. Ulusoy et al. [8] used various metaheuristic algorithms in the design of several RC beams for cost minimization; the results showed that using high-strength materials for high bending moment capacity is less costly than low-strength concrete, as doubly-reinforced sections are not the most suitable choice. Bekdaş and Nigdeli [9] used an education-based metaheuristic algorithm called Teaching-Learning-Based Optimization to investigate the optimum design of reinforced concrete columns and concluded that the cost changed with the increase in the dimension and quality of the reinforcements. Nigdeli et al. [10] proposed a metaheuristic-based methodology for the cost optimization of RC footings, employing several algorithms to deal with non-linear optimization problems; according to the results, detailed optimization and location optimization using a truncated pyramid shape reduce the optimum cost.

In this study, the effect of the switch probability parameter on the optimum RC beam design is investigated, informing researchers about the effects of this parameter on the search for the optimum design. The optimization method used is the Flower Pollination Algorithm (FPA), a population-based metaheuristic inspired by the natural pollination behavior of flowering plants, developed by Yang [11]. The unique character of the algorithm is one of the reasons for its selection, and FPA is efficient in terms of computational performance [12]. Yang et al. also extended FPA to multiobjective optimization problems [13]. The effectiveness of FPA in structural design optimization has been supported by several studies [14–17]. The present study applies the FPA with different
switch probability (sp) values to the structural design of an RC beam for minimum economic cost.
2 Methodology

In this study, the cost optimization of RC beams is carried out in the Matlab [18] environment. FPA has several parameters; the FPA-specific parameter examined here is the switch probability. The cost of the RC beam is optimized with the flower pollination algorithm for different chosen values of the switch probability (sp). In addition, sp is chosen randomly in each iteration, which makes the algorithm free of parameter setting.

2.1 Cost Optimization of RC Beam via Flower Pollination Algorithm

The flower pollination algorithm is a nature-inspired metaheuristic algorithm developed by Yang [11] in 2012. Four characteristics and idealization rules were used by Yang, which can be summarized as follows [12]:

Rule 1: Global pollination represents biotic and cross-pollination processes, where pollinators fly according to the rules of Lévy flights.
Rule 2: Local pollination represents abiotic and self-pollination that does not require any pollinators.
Rule 3: Flower constancy represents a reproduction probability. This probability is proportional to the similarity of two flowers.
Rule 4: The switch probability sp ∈ [0, 1] controls the choice between local pollination and global pollination due to other factors including wind.

Pollinators can travel over long distances during biotic pollination, and their flight behaves like a Lévy flight [19]. Thus, Rule 1 and Rule 3 can be mathematically formulated as Eq. 1:

$X_i^{j,t+1} = X_i^{j,t} + L\,(X_i^{j,t} - X_{best}^{i,*}) \quad \text{if } sp > r_1 \; (i = 1, 2, \ldots, m)$  (1)

where $X_i^{j,t}$ is the solution vector at iteration t, $X_i^{j,t+1}$ is the solution of the (t + 1)th iteration, and $X_{best}^{i,*}$ is the current best solution. Because pollinators travel over long distances with various distance intervals, the Lévy flight step can be drawn from a Lévy distribution as in Eq. 2:

$L = \sqrt{\dfrac{1}{2\pi}} \times r^{-1.5} \times e^{-\frac{1}{2r}}$  (2)

Abiotic pollination occurs through other factors, without any pollinators. The local pollination (Rule 2) and flower constancy (Rule 3) can be represented as Eq. 3:

$X_i^{j,t+1} = X_i^{j,t} + \varepsilon\,(X_i^{a,t} - X_i^{b,t}) \quad \text{if } sp \le r_1$  (3)

Two randomly chosen existing solutions (a and b) are used in the modification due to self-pollination. $X_i^{a,t}$ and $X_i^{b,t}$ are two random flowers, and ε is drawn from a linear (uniform) distribution.
This equation essentially mimics flower constancy in a limited neighborhood. For a local random walk, if $X_i^{a,t}$ and $X_i^{b,t}$ come from the same species, then ε is drawn from a uniform distribution on [0, 1]. Rule 4 sets the probability of using global versus local pollination. At the beginning of the algorithm, the initial values are randomly chosen within a solution range defined for the design variables. Existing and generated values are compared according to the optimization objective, and the solution matrix is updated; results are updated only if new solutions are better than existing ones [19]. FPA has its specific parameters, and changing them affects the search abilities of the algorithm. Flower constancy concerns the tendency of specific pollinators toward specific flowers. This relationship is combined with local pollination and global pollination to define two types of optimization processes, which are selected via an algorithm parameter called the switch probability (sp ∈ [0, 1]). The switch probability is used to switch between broad global pollination and intensive local pollination (Fig. 1).
Fig. 1. Flowchart of the optimization process.
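To make the switch mechanism concrete, the following minimal Python sketch implements one FPA generation following Rules 1–4 and Eqs. 1–3 above. It is an illustrative reconstruction, not the authors' Matlab code; in particular, the way the Lévy step length is sampled and the greedy replacement policy are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def levy_step(dim):
    # Step lengths following Eq. 2: L = sqrt(1/(2*pi)) * r^(-1.5) * exp(-1/(2r));
    # sampling r uniformly on a small positive range is an assumed simplification
    r = rng.uniform(0.05, 1.0, dim)
    return np.sqrt(1.0 / (2.0 * np.pi)) * r ** -1.5 * np.exp(-1.0 / (2.0 * r))

def fpa_generation(pop, best, sp, lo, hi, objective):
    """One FPA generation: Eq. 1 (global) or Eq. 3 (local), chosen by Rule 4."""
    m, dim = pop.shape
    for i in range(m):
        if sp > rng.random():                    # Rule 4: global pollination, Eq. 1
            cand = pop[i] + levy_step(dim) * (pop[i] - best)
        else:                                    # local pollination, Eq. 3
            a, b = rng.integers(m), rng.integers(m)
            cand = pop[i] + rng.random() * (pop[a] - pop[b])
        cand = np.clip(cand, lo, hi)             # respect design-variable bounds
        if objective(cand) < objective(pop[i]):  # keep the flower only if it improves
            pop[i] = cand
    return pop
```

A full run would repeat `fpa_generation` for the desired number of iterations while tracking the best flower found so far.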
In the analyses made for the example in question, the effects of defining different values of the switch probability and of defining the switch probability value as random (rand)
on the change of the optimum result were determined. As a result of the research, the performances of the algorithm were evaluated according to the different switch probability values.
3 The Numerical Example

In this study, reinforced concrete beam design is considered as an optimization problem. FPA is used as the optimization method, and the aim is to determine the effects of the switch probability parameter on the optimum solution of the beam problem. For this purpose, the optimization problem of the dimensions of a single-span, singly-reinforced, rectangular-section reinforced concrete beam is considered to obtain the minimum cost. The calculations are based on TS 500-Requirements for Design and Construction of Reinforced Concrete Structures [20]. In order to evaluate the effect of the switch probability, the cost minimization of the RC beam with the design constraint is considered as an engineering problem. The beam model used in the RC beam optimization application has an 8 m span length and a 30 kN/m uniformly distributed load. The b and h values are the beam width and height, respectively, as seen in Fig. 2; As is the reinforcement area.
Fig. 2. RC beam model and cross-section.
The objective function expressing the minimization of the cost in the optimization problem is given in Eq. 4, where $C_c$ is the cost of the concrete per m³ ($), $C_s$ is the cost of the steel per ton ($), and $\gamma_s$ is the specific gravity of steel (t/m³):

$\min f(x) = C_c \times b \times d \times L + C_s \times A_s \times L \times \gamma_s$  (4)

The variation ranges for the width and depth dimensions of the RC beam section are given in Eqs. 5 and 6; these are the lower and upper limits of the design variables:

$250 \le b \le 1000$  (5)

$250 \le h \le 1000$  (6)
The constraint of the optimization problem is given in Eq. 7:

$g_1 = d^2 - \dfrac{2 \times M \times 10^6}{0.85 \times f_{cd} \times b}$  (7)
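Before turning to the results, it may help to see how the objective (Eq. 4) and the constraint (Eq. 7) can be folded into a single fitness value that a routine like the `fpa_generation` sketch above can minimize. The sketch below is hedged: the unit costs, material strengths, effective-depth rule, steel-area approximation, and penalty magnitude are all illustrative assumptions, since the paper does not state these values.

```python
# Illustrative values only -- the paper does not list its unit costs or material grades
Cc, Cs, gamma_s = 40.0, 400.0, 7.86                # $/m3 concrete, $/ton steel, t/m3
L_span, M_d, f_cd, f_yd = 8.0, 240.0, 20.0, 365.0  # m, kNm (30 kN/m * 8 m^2 / 8), MPa, MPa

def fitness(x):
    b, h = x                           # design variables in mm, 250 <= b, h <= 1000
    d = h - 50.0                       # assumed effective depth (50 mm cover)
    # Approximate required steel area with a lever arm of ~0.9d -- an assumption
    As = M_d * 1e6 / (0.9 * d * f_yd)  # mm^2
    # Objective, Eq. 4 (mm converted to m, mm^2 to m^2)
    cost = Cc * (b / 1e3) * (d / 1e3) * L_span + Cs * (As / 1e6) * L_span * gamma_s
    # Constraint, Eq. 7; treated as feasible when g1 >= 0 (assumed convention)
    g1 = d ** 2 - 2.0 * M_d * 1e6 / (0.85 * f_cd * b)
    return cost if g1 >= 0 else cost + 1e6  # assumed penalty for infeasible designs
```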
In the rest of the present study, the following two cases are examined for the switch probability. Case 1: sp = (0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1). Case 2: Random switch probability values in all iterations.
4 Results

The performance of the FPA was compared on the basis of the switch probability. In this section, the main results are obtained for the different switch probability values on the RC beam cost minimization problem. In this optimization process, where the parameter effect is observed, the population size is 20. For Cases 1 and 2, the algorithm was run 100 times with 10,000 iterations. For Case 1, the switch probability is increased from 0 to 1 in steps of 0.1. For Case 2, random switch probability values are generated. The minimum values, means, and standard deviations of the optimized total cost for different switch probabilities, as well as the number of iterations at which the optimum value is reached, are shown in Table 1.

Table 1. Results of the different switch probability values.
| Case | Switch probability | Minimum | Mean | Standard deviation | Iteration number |
|---|---|---|---|---|---|
| Case 1 | 0 | 163.3447 | 163.3447 | 8.5695e−14 | 736.83 |
| Case 1 | 0.1 | 163.3447 | 163.3447 | 8.5695e−14 | 620.28 |
| Case 1 | 0.2 | 163.3447 | 163.3447 | 8.5695e−14 | 606.33 |
| Case 1 | 0.3 | 163.3447 | 163.3447 | 8.5695e−14 | 487.32 |
| Case 1 | 0.4 | 163.3447 | 163.3447 | 8.5695e−14 | 481.72 |
| Case 1 | 0.5 | 163.3447 | 163.3447 | 8.5695e−14 | 442.32 |
| Case 1 | 0.6 | 163.3447 | 163.3447 | 8.5695e−14 | 489.28 |
| Case 1 | 0.7 | 163.3447 | 163.3447 | 8.5695e−14 | 548.01 |
| Case 1 | 0.8 | 163.3447 | 163.6695 | 3.2477 | 598.03 |
| Case 1 | 0.9 | 163.3447 | 164.6438 | 6.3962 | 572.37 |
| Case 1 | 1 | 163.3447 | 165.9429 | 8.8551 | 562.28 |
| Case 2 | sp = rand | 163.3447 | 163.3447 | 8.5695e−14 | 489.99 |
Global and local pollination preferences change as the switch probability changes in the FPA algorithm. When the sp value is between 0–0.4, the dominant type of pollination
is local pollination, while between 0.6 and 1 the dominant type is global pollination. When the sp value is 0.5, the probabilities of local search and global search are equal. In Case 1, in the analyses where the sp value is between 0 and 0.7, the minimum value, mean, and standard deviation are the same, so speed can be compared through the number of iterations. As the sp value increases from 0 to 0.5, i.e., local pollination decreases and global pollination increases, the number of iterations decreases steadily. This means that, as the weight of global pollination increases, the algorithm finds the minimum value in fewer iterations. The minimum value was found for all switch probability values; however, the mean value and standard deviation increased for switch probability values of 0.8–1. The range where the result is found fastest is where the switch probability is 0.4–0.6; the smallest number of iterations, hence the fastest analysis, was realized at a switch probability of 0.5. There is more than a 1.5-fold difference between switch probability values of 0 and 0.5; that is, the optimum value was found 1.5 times faster. In Case 2, analyses were made with random sp values, and the averages of the minimum value, mean, standard deviation, and number of iterations at the optimum were found. Since random values are used for the switch probability, the random switch values average close to 0.5. The minimum value, mean, and standard deviation for a switch probability of 0.5 in Case 1 and for Case 2 are the same, but their speeds differ: when the sp value is fixed at 0.5, a lower iteration number is obtained than with randomly generated values.
5 Conclusions

The problem of optimizing the dimensions of a single-span rectangular RC beam for minimum cost was solved with the flower pollination algorithm. FPA is a frequently preferred optimization method because it is fast and has few parameters. There are two types of pollination in FPA, and the FPA-specific switch probability parameter decides which of them occurs. In this study, different switch probability values and their effects on the performance of the algorithm are investigated. The analysis results are divided into two cases: the switch probability increased in steps of 0.1 between 0 and 1, and a randomly generated switch probability value. When comparing the performance of the algorithm, the minimum value, mean, standard deviation, and number of iterations at the optimum are taken into account. The results of the optimization process can be summarized as follows:

• Between sp values of 0 and 0.5, the algorithm gives faster results, because as the switch probability increases the number of iterations decreases steadily.
• In Case 1, the minimum value was found for all switch probability values. However, the mean value and standard deviation increased for switch probability values of 0.8–1, which shows that the minimum value is not reached in every run for these sp values.
• In Case 1, the fastest analysis was realized at a switch probability of 0.5.
• When Cases 1 and 2 are examined, it is seen that an sp value of 0.5 yields a lower iteration number than randomly generated values. However, a randomly selected sp is nearly as effective as the best case and is competitive, so it may be advantageous for eliminating the only algorithm parameter that requires tuning.

Thus, the performance analysis proves the effect of the switch probability changing technique on the optimization of the RC beam.
References

1. Bekdaş, G., Nigdeli, S.M., Kim, S., Geem, Z.W.: Modified harmony search algorithm-based optimization for eco-friendly reinforced concrete frames. Sustainability 14(6), 3361 (2022)
2. Arora, J.: Introduction to Optimum Design, 3rd edn. Academic Press, Waltham, MA, USA (2012). ISBN 978-0-12-381375-6
3. Camp, C.V., Huq, F.: CO2 and cost optimization of reinforced concrete frames using a big bang-big crunch algorithm. Eng. Struct. 48, 363–372 (2013)
4. Jelušič, P.: Cost optimization of reinforced concrete section according to flexural cracking. Modelling 3(2), 243–254 (2022)
5. Lee, M.S., Hong, K., Choi, S.W.: Genetic algorithm based optimal structural design method for cost and CO2 emissions of reinforced concrete frames. J. Comput. Struct. Eng. Inst. Korea 29, 429–436 (2016)
6. Nigdeli, S.M., Bekdas, G., Kim, S., Geem, Z.W.: A novel harmony search based optimization of reinforced concrete biaxially loaded columns. Struct. Eng. Mech. 54(6), 1097–1109 (2015)
7. Kayabekir, A.E., Bekdaş, G., Nigdeli, S.M.: Optimum design of T-beams using Jaya algorithm. In: 3rd International Conference on Engineering Technology and Innovation (ICETI), Belgrade, Serbia (2019)
8. Ulusoy, S., Kayabekir, A.E., Bekdaş, G., Niğdeli, S.M.: Metaheuristic algorithms in optimum design of reinforced concrete beam by investigating strength of concrete (2020)
9. Bekdaş, G., Nigdeli, S.M.: Optimum design of reinforced concrete columns employing teaching learning based optimization. Challenge J. Struct. Mech. 2(4), 216–219 (2016)
10. Nigdeli, S.M., Bekdaş, G., Yang, X.S.: Metaheuristic optimization of reinforced concrete footings. KSCE J. Civ. Eng. 22(11), 4555–4563 (2018)
11. Yang, X.S.: Flower pollination algorithm for global optimization. In: International Conference on Unconventional Computing and Natural Computation, pp. 240–249. Springer, Berlin (2012)
12. Alyasseri, Z.A.A., Khader, A.T., Al-Betar, M.A., Awadallah, M.A., Yang, X.S.: Variants of the flower pollination algorithm: a review. In: Nature-Inspired Algorithms and Applied Optimization, pp. 91–118 (2018)
13. Yang, X.S., Karamanoglu, M., He, X.: Flower pollination algorithm: a novel approach for multiobjective optimization. Eng. Optim. 46(9), 1222–1237 (2014)
14. Mergos, P.E.: Optimum design of 3D reinforced concrete building frames with the flower pollination algorithm. J. Build. Eng. 44, 102935 (2021)
15. Mergos, P.E., Mantoglou, F.: Optimum design of reinforced concrete retaining walls with the flower pollination algorithm. Struct. Multidiscip. Optim. 61(2), 575–585 (2019). https://doi.org/10.1007/s00158-019-02380-x
16. Nigdeli, S.M., Bekdaş, G., Yang, X.S.: Application of the flower pollination algorithm in structural engineering. In: Metaheuristics and Optimization in Civil Engineering, pp. 28–42. Springer, Berlin (2016)
17. Bekdaş, G.: New improved metaheuristic approaches for optimum design of posttensioned axially symmetric cylindrical reinforced concrete walls. Struct. Design Tall Spec. Build. 27(7), e1461 (2018)
18. The MathWorks, Matlab R2022a. Natick, Massachusetts (2022)
19. Kayabekir, A.E., Bekdaş, G., Nigdeli, S.M., Yang, X.S.: A comprehensive review of the flower pollination algorithm for solving engineering problems. In: Nature-Inspired Algorithms and Applied Optimization, pp. 171–188 (2018)
20. TS500: Betonarme Yapıların Tasarım ve Yapım Kuralları. Türk Standartları Enstitüsü, Ankara (2000)
Cost Prediction Model Based on Hybridization of Artificial Neural Network with Nature Inspired Simulated Annealing Algorithm

Vijay Kumar1, Sandeep Singla1(B), and Aarti Bansal2

1 Department of Civil Engineering, RIMT University, Mandi Gobindgarh, Punjab, India
[email protected]
2 Chitkara University Institute of Engineering and Technology, Chitkara University, Chandigarh, Punjab, India
[email protected]
Abstract. This paper proposes a cost prediction model for construction projects. The main implication of this research is to reduce the error between the actual and predicted cost values of the cost prediction model. This is achieved by deploying an artificial neural network (ANN) and determining its optimal weight values using the nature-inspired simulated annealing algorithm. Performance parameters, namely root mean square error (RMSE), normalized mean absolute error (NMAE), and mean absolute percentage error (MAPE), are measured on a standard dataset to verify the performance of the cost prediction model. Besides that, the convergence rate is determined for the simulated annealing algorithm to show how quickly it finds optimal weight values. The results show that the proposed model achieves lower values of the performance parameters than the existing ANN model. Thus, the proposed model is very useful in construction projects for predicting the cost parameter.

Keywords: Artificial neural network ANN · Cost prediction model · Construction projects · Machine learning · Simulated annealing SA
1 Introduction

Although the construction industry is booming, it still faces the same old problems: high risks, low quality, excessive costs, and delays in completion [1]. Time, money, and quality are the "triple constraint," and everyone in the business world agrees they are what ultimately decide whether or not a project is successful [2–4]. It is a well-known fact, however, that the building sector is less productive and efficient than the service and industrial sectors. The discrepancy between the planned and actual costs of major building initiatives is one such issue. Due to the one-of-a-kind character of building projects, the gap between their planned and actual budgets expands [5]. Differences between observed and predicted values can be as high as 150%, as noted by Alzara et al. [6]. As a result, methods for predicting future costs are required to bring this gap down to a more manageable level.
In the literature, machine learning algorithms are deployed to design cost prediction models. Support vector machines [7], decision trees [8], random forests [9], and neural networks [10] are among the successfully used machine learning algorithms in cost prediction models. Of these, the neural network is the most preferred algorithm [11, 12]. The weight and bias values of the neural network affect its performance, and determining optimal weight values is a complex problem. To overcome this problem, nature-inspired algorithms have been successfully combined with neural networks in a number of applications. The term "nature inspired algorithm" describes a class of algorithms that take cues from natural surroundings, such as swarm intelligence, biological systems, and physical and chemical systems [13]. A nature-inspired algorithm searches for the optimal solution based on an objective function, which is either maximized or minimized according to the given problem; in cost prediction models, the objective is to minimize the error between actual and predicted cost values. Initially, a random population is generated within lower and upper bounds. After that, the objective function is used to evaluate population fitness and determine the fittest member. Next, new candidates are generated based on the current population, their fitness is evaluated, and the best solution is updated if required. The whole operation is iterated for a fixed number of iterations, as sketched below. In the literature, genetic algorithms [14], particle swarm optimization [15], and cuckoo search algorithms [16] have been used with neural networks. This paper uses the simulated annealing algorithm because of its simple structure and its ability to handle noisy data and highly non-linear models.

The main contribution of this paper is to design an efficient cost prediction model using an artificial neural network. The performance of the ANN algorithm depends on the weight values which connect the neurons; determining optimal weight values adjusts the strength of the connections between neurons. Therefore, the nature-inspired simulated annealing algorithm is employed in the proposed model to determine the weight values based on an objective function, for which the root mean square error (RMSE) is taken. Further, for simulation evaluation, a dataset from the construction industry in the Gaza Strip, containing 169 building projects, is considered. Performance metrics such as RMSE, NMAE, and MAPE are determined for the proposed model and compared to the existing ANN algorithm; the results show that the proposed cost estimation model performs well over the ANN-based cost estimation model.

The remaining sections are organized as follows. Section 2 reviews existing cost prediction models. Section 3 explains the ANN and SA algorithms considered in the proposed model. Section 4 explains the proposed cost estimation model. Section 5 presents the results and discussion. Finally, Sect. 6 concludes.
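The generic population-based loop described above can be summarized in a few lines of Python. This is a neutral sketch of the pattern shared by such algorithms, not any one specific method from the literature cited here, and the candidate-generation rule shown is a placeholder assumption.

```python
import numpy as np

def nature_inspired_search(objective, lo, hi, pop_size=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    pop = rng.uniform(lo, hi, (pop_size, lo.size))  # random start within the bounds
    fit = np.array([objective(p) for p in pop])     # evaluate fitness of each member
    for _ in range(iters):
        # Candidate generation is algorithm-specific; a bounded random move stands in here
        cand = np.clip(pop + rng.normal(0.0, 0.1, pop.shape) * (hi - lo), lo, hi)
        cfit = np.array([objective(c) for c in cand])
        improved = cfit < fit                       # minimization: keep improvements only
        pop[improved], fit[improved] = cand[improved], cfit[improved]
    return pop[fit.argmin()], fit.min()             # best solution and its fitness
```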
2 Related Work

In this section, we investigate and analyse the existing cost prediction models designed by various researchers.
Sanni-Anibire et al. [1] present a methodical approach to developing a model to predict the initial cost of tall building projects using machine learning methods. Methods like MLRA, KNN, ANN, SVM, and MCs (Multi Classifier Systems) are considered, and twelve different variants were evaluated with standardized success measures. Their model achieves CC (0.81%), MAPE (80.95%), and RMSE (6.09%), and a Multi Classifier System with KNN as the merging classifier performed best. The building cost estimation methods of neural networks and multilayer perceptrons have also been explored by Sumita K. Magdum and Amol C. Adamuthe [11]; different configurations of hidden layers and hidden nodes in each of the 12 MLP models and 4 NN models are evaluated. There is a large performance gap between the training and testing datasets when using a NN with a distinct set of hidden nodes, while there is little noticeable performance variation between MLP models on the training data. For optimal training outcomes, MLP with either ten or eight hidden nodes is used; on the training dataset, however, MLP models do not outperform the NN with 8 hidden nodes. These techniques are evaluated against the statistical multiple regression approach, and both NN and MLP models give very small RMSE values on the training data. Then, using a collection of 78 building construction projects, Viren Chandanshive and Ajay R. Kambekar [12] developed a cost forecasting model. Statistics for this work were collected in and around the megacity of Bombay. The neural network models took as input the most important design factors of the structural cost of buildings and produced as output the overall cost of the structural framework in Indian National Rupees (INR). The purpose of their study is to design and implement a multilayer feed-forward neural network model for estimating construction expenses using a backpropagation method. To improve the networks' learning abilities and prevent overfitting, techniques like early stopping and Bayesian regularisation are used; it has been noted that the Bayesian regularisation method performs better than early stopping in building cost projection. The developed neural network model's final findings demonstrate its performance in predicting early-stage construction costs. As a result of this research, property owners and financial investors will have a better grasp of the construction industry's unpredictable financial climate, which improves construction projects. Miao Fan and Ashutosh Sharma [17] used the least-squares SVM method and the standard SVM to forecast project budgets. The first step is to reduce the number of dimensions in the initial material. SVM and LSSVM models are used for training and forecasting, respectively, using the processed data; the predictions of these models are then contrasted and evaluated so that the most plausible one can be chosen. Parameter tuning improves the forecast further, and the forecast model's percentage inaccuracy is less than 7%, which is both accurate and steady.
3 Overview of Artificial Neural Network and Nature Inspired Simulated Annealing Algorithm

3.1 Artificial Neural Network (ANN)

An ANN is a type of numerical model used to describe complicated connections among inputs and outcomes using non-linear statistical evidence. The high accuracy with which neural networks describe the connections among inputs and outputs makes them useful instruments for developing cost prediction models [11, 18, 19]. A NN has an input layer, a hidden layer, and an output layer. Except for the output layer, which has only a single neuron representing the training process's final result, each of the other layers includes multiple neurons. The construction cost estimation model uses an artificial neural network trained to provide more accurate estimates of building expenses. The NN can be written as in Eq. (1):

$y = f(net) = f\left(\sum_{i=1}^{n} w_i x_i + b\right)$  (1)

where n represents the total number of inputs, $x_i$ represents the data sent to each individual processing element, $w_i$ represents its weights, and b represents a bias. The activation function is denoted by the letter f; the outgoing signal undergoes an algebraic operation as a result of the activation function. Additionally, MLP networks are feed-forward neural networks with numerous hidden layers. The single-layer perceptron can solve linearly separable problems; the multilayer perceptron adds more layers to it so that it can solve problems that cannot be expressed as a linear combination of simpler ones. In a multi-layer perceptron (MLP) neural network, multiple artificial neurons, the fundamental working parts of a neural network, are linked with one another; each consists of a linear combiner followed by a transfer function. Figure 1 shows the multi-layer perceptron architecture of an artificial neural network.

$y_k^{(n)} = f^{(n)}\left(\sum_{j} y_j^{(n-1)} \cdot w_{jk}^{(n)}\right)$  (2)

$y_k^{(n-1)} = \sum_{i} x_i^{(n-1)} w_{ij}^{(n)} + b_{ij}^{(n)}$  (3)

Every unit j in layer n takes activations $y_j^{(n-1)} \cdot w_{jk}^{(n)}$ from the preceding layer of processing units and directs activations $y_k^{(n)}$. Here, $y_k^{(n)}$ represents the predicted rate, $x_i^{(n-1)}$ represents the rate of materials, $f^{(n)}$ represents the activation function, $w_{ij}^{(n)}$ shows the connection weights between the material rate and the hidden neuron and between the hidden neuron and the predicted cost, $b_{ij}^{(n)}$ represents the bias terms, and i, j, and k represent the number of neurons in every layer. In this research, we make use of the unique characteristics of an exponential linear unit transfer function.
Fig. 1. Multi-layer perception architecture of artificial neural network [11]
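A minimal forward pass corresponding to Eqs. (1)–(3), from the input attributes through one hidden layer to a single cost output, can be sketched as follows. The layer sizes match the proposed model (eight input attributes, five hidden neurons), while the use of an ELU activation and a linear output neuron are assumptions consistent with, but not copied from, the paper.

```python
import numpy as np

def elu(z, alpha=1.0):
    # Exponential linear unit, matching the exponential linear transfer function mentioned above
    return np.where(z > 0, z, alpha * (np.exp(z) - 1.0))

def forward(x, W1, b1, W2, b2):
    """y = f(sum_i w_i x_i + b), applied layer by layer (Eqs. 1-3)."""
    hidden = elu(W1 @ x + b1)   # hidden-layer activations y_j
    return W2 @ hidden + b2     # predicted cost (linear output neuron, an assumption)

# Example shapes: 8 input attributes, 5 hidden neurons (as in the proposed model)
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(5, 8)), np.zeros(5)
W2, b2 = rng.normal(size=(1, 5)), np.zeros(1)
y = forward(rng.normal(size=8), W1, b1, W2, b2)
```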
3.2 Simulated Annealing Algorithm

Simulated annealing is a local search method (metaheuristic) that can be used to break out of local optima. Its popularity over the past two decades can be attributed to its simplicity, its convergence, and its use of hill-climbing moves to overcome the confines of local optimum solutions [20]. The term "simulated annealing" comes from the physical annealing of solids: a crystalline substance is subjected to a controlled heating and cooling cycle until its crystal structure is optimized and free of defects in its molecular arrangement. If the cooling rate is low enough, the final arrangement can produce a solid with high structural stability. Simulated annealing relates this thermal behavior to the search for global minima in a finite optimization problem [21] and offers a computational method for capitalizing on that connection. At each step of a simulated annealing method applied to a discrete optimization problem, the values of two solutions (the present solution and a freshly chosen solution) are compared. All solutions that improve the objective are accepted, while some solutions that do not are also accepted, in the hope of escaping a local optimum and finding a better one globally. The likelihood of adopting non-improving alternatives is determined by a temperature parameter, which typically does not increase as the algorithm runs.
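The acceptance rule just described (always accept improvements, sometimes accept worse solutions with a temperature-dependent probability) is commonly written as the Metropolis criterion. The sketch below assumes a geometric cooling schedule and a user-supplied neighbour function; both are illustrative choices, not details taken from the paper.

```python
import math, random

def simulated_annealing(objective, neighbour, x0, T0=1.0, cooling=0.95, iters=1000):
    x, fx = x0, objective(x0)
    best, fbest, T = x, fx, T0
    for _ in range(iters):
        y = neighbour(x)                  # sample a nearby candidate solution
        fy = objective(y)
        # Accept improvements always; accept worse moves with probability exp(-delta/T)
        if fy < fx or random.random() < math.exp(-(fy - fx) / max(T, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        T *= cooling                      # geometric cooling (assumed schedule)
    return best, fbest
```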
4 Proposed Cost Estimation Model

In the proposed model, the ANN and SA algorithms are used to design the cost estimation model. Figure 2 is a flowchart depicting the proposed model.
Initially, the standard dataset is read and split into input and output targets. In the cost estimation model, the input attributes are the area of a typical floor, number of floors, footing type, slab type, elevator type, air conditioning, electrical type, and mechanical type, whereas the output target is the cost. Next, the ANN algorithm is applied to the input and output targets for cost prediction. A feedforward network is chosen; it consists of a series of layers, the first connected to the input target, the last to the output target, and the internal layers each connected to the previous layer. The hidden layer size is taken as 5 in the proposed method. The artificial neural network is then trained using the input, output, and hidden layers. The weight values of the ANN are randomly initialized, after which the optimal weight values are determined using the simulated annealing algorithm. Based on the objective function, the SA algorithm computes the best possible solutions. The root mean square error (RMSE) is taken as the objective function: it calculates the error between the actual and predicted cost values, and the weight values giving the minimum error are taken as the optimal values for the ANN. After the optimal weight values are determined, they are assigned to the artificial neural network and the final cost prediction is done. Finally, the proposed method is evaluated using performance metrics such as MAPE, NMAE, and RMSE.
Fig. 2. Flowchart of the proposed cost estimation model (Start → Read the Dataset → Data Split into Input and Output Targets → Train the Artificial Neural Network using Input, Output, and Hidden Layer Size → Determine Optimal Weight Values using Simulated Annealing Algorithm → Cost Prediction using Artificial Neural Network Algorithm → Performance Analysis → End).
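Putting the pieces together, the weight-optimization step can be sketched as follows: SA searches over a flattened weight vector, and the objective is the RMSE between actual and ANN-predicted costs. The `forward` and `simulated_annealing` functions refer to the earlier sketches; the 8-5-1 layout follows the proposed model, while the perturbation scale is an assumption.

```python
import numpy as np

def unpack(theta):
    # Flattened weight vector -> (W1, b1, W2, b2) for an 8-5-1 network (51 parameters)
    W1 = theta[:40].reshape(5, 8); b1 = theta[40:45]
    W2 = theta[45:50].reshape(1, 5); b2 = theta[50:51]
    return W1, b1, W2, b2

def rmse_objective(theta, X, y):
    W1, b1, W2, b2 = unpack(theta)
    pred = np.array([forward(x, W1, b1, W2, b2)[0] for x in X])
    return np.sqrt(np.mean((y - pred) ** 2))  # RMSE, the paper's objective function

def neighbour(theta, scale=0.1):
    return theta + np.random.normal(0.0, scale, theta.shape)  # assumed perturbation

# Illustrative driver (X_train, y_train are the dataset's inputs and costs):
# theta0 = np.random.normal(size=51)
# best_theta, best_rmse = simulated_annealing(
#     lambda t: rmse_objective(t, X_train, y_train), neighbour, theta0)
```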
5 Result and Discussion

This section discusses the proposed model's simulation outcomes and compares them to the existing model. The model uses Gaza Strip construction data; the dataset contains 169 building projects with a number of attributes such as cost, area of typical floor (less than 1200 m²), number of floors {1–8}, footing type, slab type {solid, ribbed, drop beams}, elevator type {0–2}, air conditioning {none-central-split}, electrical {basic-luxury}, and mechanical type {basic-luxury}. MATLAB software is used for simulation purposes because it contains a huge inbuilt library and many functions. The system configuration used to execute the model is an Intel(R) Core(TM) i7-7500 CPU @ 2.70 GHz, 8 GB RAM, and a 64-bit operating system. Table 1 shows the parameter values of the ANN and SA algorithms.

Table 1. Parameter values of ANN and SA

| Parameter | Values |
|---|---|
| Iterations | 100 |
| Objective function | RMSE |
| Population weight value | 1 |
| Tolerance | 4–10 |
| Hidden layer size | 5 |
The actual cost values are compared with the values predicted by the proposed model (SA-ANN) in Fig. 3. The result shows that the proposed model efficiently predicts the cost. Next, we measure the various performance metrics and, based on them, compare with the existing model [19–21].

• Mean Absolute Percentage Error (MAPE): The mean absolute percentage error is determined using Eq. (4):

$MAPE = \dfrac{100}{N} \sum_{i=1}^{N} \left| \dfrac{C_a - C_p}{C_a} \right|$  (4)

In Eq. (4), $C_a$ and $C_p$ denote the actual and predicted cost, whereas N denotes the total number of samples. Table 2 shows the comparative analysis based on the MAPE parameter. The proposed model achieves a MAPE value of 15.922 versus the existing ANN model's MAPE value of 21.081, reflecting the fact that the proposed model achieves a lower value than the existing model.

• Normalized Mean Absolute Error (NMAE): This parameter measures the normalized mean absolute error between the actual and predicted cost of the proposed model using Eq. (5):

$NMAE = \dfrac{1}{N} \sum_{i=1}^{N} \dfrac{\left| C_a - C_p \right|}{C_{Peak}}$  (5)
Fig. 3. Comparative analysis of actual and predicted cost by proposed model
Table 2. Comparative analysis based on MAPE parameter

| Parameter | ANN model | Proposed model |
|---|---|---|
| MAPE | 21.081 | 15.922 |
Table 3 shows the comparative analysis based on the NMAE parameter. The proposed model achieves an NMAE value of 7.0647e−05 versus the existing ANN model's NMAE value of 0.5151, reflecting the fact that the proposed model achieves a lower value than the existing model.

Table 3. Comparative analysis based on NMAE parameter

| Parameter | ANN model | Proposed model |
|---|---|---|
| NMAE | 0.5151 | 7.0647e−05 |
• Root Mean Square Error (RMSE): This parameter measures the root mean square error between the actual and predicted cost of the proposed model using Eq. (6):

$RMSE = \sqrt{\dfrac{1}{N} \sum_{i=1}^{N} \left( C_a - C_p \right)^2}$  (6)
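Eqs. (4)–(6) translate directly into code. In the sketch below, `C_Peak` is taken as the maximum actual cost, which is an assumption about how the NMAE is normalized, since the paper does not define it explicitly.

```python
import numpy as np

def mape(actual, pred):
    return 100.0 / len(actual) * np.sum(np.abs((actual - pred) / actual))  # Eq. (4)

def nmae(actual, pred):
    c_peak = np.max(actual)  # assumed definition of C_Peak
    return np.mean(np.abs(actual - pred) / c_peak)                         # Eq. (5)

def rmse(actual, pred):
    return np.sqrt(np.mean((actual - pred) ** 2))                          # Eq. (6)
```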
Table 4 shows the comparative analysis based on the RMSE parameter. The proposed model achieves an RMSE value of 51332 versus the existing ANN model's RMSE value of 1.43e+05, reflecting the fact that the proposed model achieves a lower value than the existing model.

Table 4. Comparative analysis based on RMSE parameter

| Parameter | ANN model | Proposed model |
|---|---|---|
| RMSE | 1.43e+05 | 51332 |
• Convergence Rate: The convergence rate defines how quickly the nature-inspired algorithm finds the optimal solution. It is plotted as iterations vs. the fitness function. Figure 4 shows the convergence rate of the simulated annealing algorithm when determining the optimal weight values. The result shows that the simulated annealing algorithm finds the optimal weight values according to the fitness function in the early iterations (approximately at the 38th iteration in the graph).
Fig. 4. Convergence rate
5.1 Discussion

From the simulation results, we observe that the ANN algorithm's performance is enhanced when optimal weight values, rather than random values, are assigned to the network. Besides
that, simulated annealing efficiently determines the optimal weight values based on the objective function and gives the minimum error between the actual and predicted cost values.
6 Conclusion

Cost prediction in the earlier phases of construction projects reduces overhead costs and increases revenue. To achieve this goal, in this paper we have hybridized the artificial neural network (ANN) and simulated annealing (SA) algorithms to design a cost prediction model. In the proposed model, the simulated annealing algorithm determines the optimal weight values of the ANN based on an objective function, for which we have taken the RMSE. Further, a standard dataset has been taken under consideration, and parameters such as MAPE, NMAE, RMSE, and convergence rate are measured and compared with the ANN algorithm. The proposed model provides lower values of the MAPE, NMAE, and RMSE parameters and a better convergence rate. In the future, we will explore enhancing the proposed model using the following approaches:

• In the proposed model, a single objective function is taken under consideration. In the future, we will work on a multi-objective function to enhance the prediction model.
• In the proposed model, the nature-inspired simulated annealing algorithm is taken under consideration. In the future, we will explore recent nature-inspired algorithms that provide better performance and lower complexity than existing algorithms.
References

1. Sanni-Anibire, M.O., Mohamad Zin, R., Olatunji, S.O.: Developing a preliminary cost estimation model for tall buildings based on machine learning. Int. J. Manag. Sci. Eng. Manag. 16(2), 134–142 (2021)
2. Gunduz, M., Nielsen, Y., Ozdemir, M.: Fuzzy assessment model to estimate the probability of delay in Turkish construction projects. J. Manag. Eng. 31(4), 04014055 (2015)
3. Ghosh, M., Kabir, G., Hasin, M.A.A.: Project time–cost trade-off: a Bayesian approach to update project time and cost estimates. Int. J. Manag. Sci. Eng. Manag. 12(3), 206–215 (2017)
4. Sacks, R.: Modern Construction: Lean Project Delivery and Integrated Practices (2013)
5. Ahiaga-Dagbui, D.D., Smith, S.D.: Dealing with construction cost overruns using data mining. Constr. Manag. Econ. 32(7–8), 682–694 (2014)
6. Alzara, M., Kashiwagi, J., Kashiwagi, D., Al-Tassan, A.: Using PIPS to minimize causes of delay in Saudi Arabian construction projects: university case study. Procedia Eng. 145, 932–939 (2016)
7. Wang, Y.R., Yu, C.Y., Chan, H.H.: Predicting construction cost and schedule success using artificial neural networks ensemble and support vector machines classification models. Int. J. Project Manage. 30(4), 470–478 (2012)
8. Doğan, S.Z., Arditi, D., Murat Günaydin, H.: Using decision trees for determining attribute weights in a case-based model of early cost prediction. J. Constr. Eng. Manag. 134(2), 146–152 (2008)
9. Meharie, M.G., Shaik, N.: Predicting highway construction costs: comparison of the performance of random forest, neural network and support vector machine models. J. Soft Comput. Civ. Eng. 4(2), 103–112 (2020)
10. Kumar, A., Singla, S., Kumar, A., Bansal, A., Kaur, A.: Efficient prediction of bridge conditions using modified convolutional neural network. Wirel. Pers. Commun. 125(1), 29–43 (2022)
11. Magdum, S.K., Adamuthe, A.C.: Construction cost prediction using neural networks. ICTACT J. Soft Comput. 8(1) (2017)
12. Chandanshive, V., Kambekar, A.R.: Estimation of building construction cost using artificial neural networks. J. Soft Comput. Civ. Eng. 3(1), 91–107 (2019)
13. Soni, V., Sharma, A., Singh, V.: A critical review on nature inspired optimization algorithms. IOP Conf. Ser. Mater. Sci. Eng. 1099(1), 012055 (2021)
14. Feng, G.L., Li, L.: Application of genetic algorithm and neural network in construction cost estimate. In: Advanced Materials Research, vol. 756, pp. 3194–3198 (2013)
15. Alsarraf, J., Moayedi, H., Rashid, A.S.A., Muazu, M.A., Shahsavar, A.: Application of PSO–ANN modelling for predicting the exergetic performance of a building integrated photovoltaic/thermal system. Eng. Comput. 36(2), 633–646 (2019). https://doi.org/10.1007/s00366-019-00721-4
16. Yuan, Z., Wang, W., Wang, H., Mizzi, S.: Combination of cuckoo search and wavelet neural network for midterm building energy forecast. Energy 202, 117728 (2020)
17. Fan, M., Sharma, A.: Design and implementation of construction cost prediction model based on SVM and LSSVM in industries 4.0. Int. J. Intell. Comput. Cybern. (2021)
18. Anand, V., Gupta, S., Koundal, D., Nayak, S.R., Barsocchi, P., Bhoi, A.K.: Modified U-net architecture for segmentation of skin lesion. Sensors 22(3), 867 (2022)
19. Anand, V., Gupta, S., Koundal, D., Singh, K.: Fusion of U-Net and CNN model for segmentation and classification of skin lesion from dermoscopy images. Expert Syst. Appl. 213, 119230 (2023)
20. Aarts, E., Korst, J., Michiels, W.: Simulated annealing. In: Search Methodologies: Introductory Tutorials in Optimization and Decision Support Techniques, pp. 187–210 (2005)
21. Chicco, D., Warrens, M.J., Jurman, G.: The coefficient of determination R-squared is more informative than SMAPE, MAE, MAPE, MSE and RMSE in regression analysis evaluation. PeerJ Comput. Sci. 7, e623 (2021)
Optimum Design of Reinforced Concrete Footings Using Jaya Algorithm

Hani Kerem Türkoğlu, Gebrail Bekdaş, and Sinan Melih Nigdeli(B)

Department of Civil Engineering, Istanbul University-Cerrahpaşa, 34320 Avcılar, Istanbul, Turkey
{bekdas,melihnig}@iuc.edu.tr
Abstract. Reinforced concrete (RC) footings, which are more suitable where there is no risk of non-uniform settlements in the ground, involve many factors to be considered in the design phase. In RC footing design, considering the soil and the structural properties, it is necessary to make a design in which safety, cost, and resources are handled optimally. In this study, cost optimization of an RC footing is performed with a metaheuristic, the Jaya algorithm. An objective function was formed by synthesizing the formulas used in RC footing calculations. It is seen that the Jaya algorithm is successful in RC footing design and gives consistent results.

Keywords: Jaya algorithm · Metaheuristic algorithms · RC footing · Optimization · Objective function
1 Introduction

Creating an efficient, safe, and cost-effective design without compromising the integrity of the system is a challenge for engineers [1]. In civil engineering, once the safety criterion has been met, it is important to carry out an optimum design that ensures the efficient use of cost, time, and resources. The optimum design of reinforced concrete (RC) footings is discussed within the scope of this study: reinforced concrete is the most widely used building material in the world [2], and its carbon emission is higher than that of steel and wood [3]. Therefore, a design model that considers the optimum use of resources, cost, and safety together is a subject worth investigating. At the design stage, design loads, geometric constraints, soil properties, and cost should be considered together. After minimizing the additional effects that may occur in the structure by preventing differential settlements and meeting the required soil strength, the internal section effects of the foundation element should be determined, and the reinforced concrete section and reinforcement arrangement should be designed [4]. When considering different parameters such as bending moment, shear force, normal force, soil properties, required geometry, required reinforcement ratio, and cost for the optimum design of RC footings, iterative solutions are made, and this can cause a loss of time [5]. With metaheuristic optimization algorithms, in engineering designs, the loss of
time arising from the search for an optimum solution can be avoided by pre-sizing the problem before starting the design and by gradual, repetitive processes. In the optimum design of RC structural members, metaheuristic algorithms play an important role in finding the best design, which is not possible via mathematical methods alone because a decision-making process on element dimensions is included. These element dimensions also need to be checked against several design constraints for safety. To make these safety checks automatically and find the best optimum solution through iterative trials, metaheuristic algorithms have been used in the optimum design of RC members such as beams [6–11], columns [12–18], retaining walls [19–23] and frames [24–29]. In the present study, RC footings under bending moment and axial force are optimized via the Jaya algorithm. The numerical investigation was done for several cases of loading conditions.
2 Jaya Algorithm

This algorithm, which aims to move away from the worst solution in order to reach the best solution, is a method developed by Rao in 2016 [30]. The algorithm evaluates fewer functions than other types of metaheuristic algorithms and requires relatively little processing in the solution phase [31]. The formula of the Jaya algorithm, in which the optimum solution is approached by moving toward the best and away from the worst solution in each iteration, is as follows:

$$X_{i,\mathrm{new}} = X_{i,j} + \mathrm{rand}()\left(X_{i,g(\mathrm{best})} - |X_{i,j}|\right) - \mathrm{rand}()\left(X_{i,g(\mathrm{worst})} - |X_{i,j}|\right) \tag{1}$$

The expression X_{i,new} denotes the new value of the design parameter to be optimized, while rand() denotes a random number between 0 and 1. X_{i,g(best)} represents the best value in the solution matrix, while X_{i,g(worst)} is the most undesirable result in the solution matrix. Therefore, in the current solution matrix, steps are taken toward the optimum value by using the best result, the worst result and the value itself.
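As a minimal, hedged illustration of Eq. (1), the Python sketch below applies the Jaya update to a population of candidate solutions for a generic minimization problem. The sphere function stands in for the footing cost objective, and the variable bounds, population size and iteration count are illustrative assumptions rather than the exact settings of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(X):
    # Placeholder objective; the paper minimizes the footing cost instead.
    return np.sum(X**2, axis=1)

pop, dim, iters = 15, 3, 500            # illustrative settings
lb, ub = -10.0, 10.0                    # hypothetical variable bounds
X = rng.uniform(lb, ub, (pop, dim))     # initial candidate solutions

for _ in range(iters):
    f = sphere(X)
    best, worst = X[np.argmin(f)], X[np.argmax(f)]
    r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
    # Eq. (1): move toward the best and away from the worst solution
    X_new = X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X))
    X_new = np.clip(X_new, lb, ub)
    # Greedy selection: keep a new candidate only if it improves
    improved = sphere(X_new) < f
    X[improved] = X_new[improved]

print(X[np.argmin(sphere(X))])          # best solution found
```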
3 Design of RC Footings

Reinforced concrete elements that are designed to spread the loads transferred from the superstructure through the columns over wider bases in the ground are called RC footings [32]. In this study, RC footings carrying a uniaxial moment and a normal force are taken into account (Fig. 1).

Fig. 1. Moment and normal force carrying RC footings

3.1 Design Constraints

In the optimization process carried out with the Jaya algorithm, the base pressure at the foundation base, due to the external loads together with the foundation and soil weights, must not exceed the safe bearing capacity of the soil. Accordingly, the following equations, used as design constraints in the optimization and related to the soil-bearing capacity checks, have been obtained:

$$g_1 = N_d\,B_x + 6M_d - \left(\frac{q_t}{\gamma_{Rv}} - \gamma_{avg}\,h_t\right)B_x^2\,B_y \le 0 \tag{2}$$

$$g_2 = N_d\,B_x - 6M_d - \left(\frac{q_t}{\gamma_{Rv}} - \gamma_{avg}\,h_t\right)B_x^2\,B_y \le 0 \tag{3}$$

$$g_3 = 6M_d - N_d\,B_x \le 0 \tag{4}$$

In these formulas, N_d denotes the normal force considered in the design and M_d the design moment. B_x is the length of the foundation in the x direction, B_y the length of the foundation in the y direction, q_t the soil bearing capacity, h_t the foundation height, γ_Rv the foundation bearing-strength reduction coefficient, and γ_avg the average unit weight of the foundation and the soil above it. The design constraint obtained from the shear check is given in Eq. 5, where q_of is the base pressure at the column face, q_max the maximum value of the base pressure, h the width of the column, d the effective depth of the foundation, and f_ctd the design tensile strength of concrete:

$$g_4 = \frac{q_{max} + q_{of}}{2}\cdot\frac{B_x - h}{2}\cdot B_y - 0.65\,f_{ctd}\,B_y\,d < 0 \tag{5}$$

The final design constraint is established from the punching verification and is expressed by Eq. 6:

$$g_5 = N_d - A_p\,q_{oo} - \gamma\,f_{ctd}\,U_p\,d \cdot 1000 \le 0 \tag{6}$$
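For illustration, the bearing-capacity checks of Eqs. (2)–(4) can be sketched as follows. The grouping of the capacity term follows the reconstruction above, and the safety factor and unit weight are assumed values, so this is a sketch of the form of the constraints rather than the paper's exact implementation.

```python
def bearing_constraints(Bx, By, ht, Nd, Md, qt,
                        gamma_Rv=1.0, gamma_avg=20.0):
    # gamma_Rv (bearing-strength reduction) and gamma_avg (average unit
    # weight, kN/m^3) are assumed illustrative values, not the paper's.
    cap = (qt / gamma_Rv - gamma_avg * ht) * Bx**2 * By  # net capacity term
    g1 = Nd * Bx + 6 * Md - cap   # base pressure check, Eq. (2)
    g2 = Nd * Bx - 6 * Md - cap   # base pressure check at the other edge, Eq. (3)
    g3 = 6 * Md - Nd * Bx         # eccentricity limit, Eq. (4)
    return g1, g2, g3             # a design is feasible when all values are <= 0

# Optimum dimensions reported in Table 1 (Md = 250 kNm, Nd = 1500 kN)
print(bearing_constraints(Bx=3.33, By=2.56, ht=0.39, Nd=1500, Md=250, qt=240))
```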
3.2 Objective Function

For use in the Jaya algorithm, an objective function for the RC footing cost optimization, which is the goal of this study, must be obtained. The objective function is expressed as the sum of the concrete cost and the steel cost of the single footing, as shown in the following equation. C_F in Eq. 7 indicates the total cost of the single footing, C_C the cost of the concrete used in its construction, and C_s the reinforcement cost. These costs are calculated from the reinforcement area and the concrete area, which are in turn calculated from the design variables, namely the base dimensions (B_{x,f}, B_{y,f}) and the height (H_f) of the footing.

$$C_F = C_C + C_s \tag{7}$$
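A sketch of the cost objective of Eq. (7) in code follows; the unit prices and the reinforcement-ratio shortcut are placeholder assumptions for illustration only, since the paper computes the concrete and reinforcement quantities from the detailed RC design.

```python
def footing_cost(Bx, By, Hf, rho_s=0.0015,
                 c_concrete=600.0, c_steel=8.0):
    """Total cost C_F = C_C + C_s, Eq. (7).

    rho_s       assumed reinforcement ratio (steel volume / concrete volume)
    c_concrete  assumed unit price of concrete (TL per m^3)
    c_steel     assumed unit price of steel (TL per kg)
    """
    volume = Bx * By * Hf               # concrete volume, m^3
    C_C = c_concrete * volume           # concrete cost
    steel_kg = rho_s * volume * 7850.0  # steel mass at 7850 kg/m^3
    C_s = c_steel * steel_kg            # reinforcement cost
    return C_C + C_s

print(footing_cost(3.33, 2.56, 0.39))   # optimum dimensions from Table 1
```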
3.3 Optimum Design of RC Footing

First of all, in order to check the consistency of the optimization algorithm on this problem, suitable values of the algorithm parameters were found with a few trial runs. In the example, the moment is 250 kNm, the normal force is 1500 kN, the concrete class is C30, the column dimensions are 55 × 55 cm and the soil bearing capacity is 240 kN/m². The optimum results for different numbers of iterations are given in Table 1; consistent results were obtained with 500 iterations. Since the developed code will be run for many scenarios, the population number (pn) is taken as 15 and the number of iterations as 20000. After testing the consistency of the optimization algorithm, the effect of moment variation on the optimum results was investigated for different scenarios. The values found are given in Table 2.

Table 1. Iteration number and optimum results relation

Iteration | Population | Bx,f (m) | By,f (m) | Hf (m) | Cf (TL)
50    | 10 | 3.33 | 2.56 | 0.39 | 4859
500   | 10 | 3.33 | 2.56 | 0.39 | 4777
5000  | 10 | 3.33 | 2.56 | 0.39 | 4777
50000 | 10 | 3.33 | 2.56 | 0.39 | 4777
For several normal force values, the optimization results are given in Table 3. Table 4 includes the results of the optimum design of an RC footing carrying a 150 kNm moment and a 750 kN normal force under different soil bearing capacities. Table 5 gives the optimum results of RC footings designed with different concrete classes from C16 to C50, showing the effect of the concrete grade on the optimum design. This design optimization was made under a 250 kNm moment and a 1500 kN normal force, the column size on the RC footing is 55 × 55 cm, and the soil bearing capacity is chosen as 240 kN/m².
Table 2. Moment – optimum values relation (design constants: Nd = 1500 kN, fctd = 1.26 MPa, qt = 240 kN/m², Bx,c = By,c = 0.55 m)

No. | Md (kNm) | Bx,f (m) | By,f (m) | Hf (m) | Cf (TL)
1  | 100  | 3.00 | 2.46 | 0.37 | 3965
2  | 200  | 3.22 | 2.53 | 0.38 | 4514
3  | 300  | 3.42 | 2.58 | 0.39 | 5034
4  | 400  | 3.61 | 2.62 | 0.40 | 5532
5  | 500  | 3.78 | 2.66 | 0.41 | 6015
6  | 600  | 3.94 | 2.68 | 0.42 | 6485
7  | 700  | 4.09 | 2.71 | 0.43 | 6945
8  | 800  | 4.24 | 2.73 | 0.44 | 7397
9  | 900  | 4.38 | 2.74 | 0.45 | 7813
10 | 1000 | 4.52 | 2.76 | 0.46 | 8279
11 | 1100 | 4.65 | 2.77 | 0.47 | 8711
12 | 1200 | 4.80 | 2.76 | 0.48 | 9178
Table 3. Normal force – optimum values relation (design constants: fctd = 1.26 MPa, qt = 240 kN/m², Bx,c = By,c = 0.55 m)

No. | Md (kNm) | Nd (kN) | Bx,f (m) | By,f (m) | Hf (m) | Cf (TL)
1  | 100  | 500  | 2.29 | 1.43 | 0.25 | 1162
2  | 200  | 650  | 2.23 | 1.77 | 0.25 | 1396
3  | 300  | 800  | 2.21 | 2.08 | 0.26 | 1645
4  | 400  | 950  | 2.39 | 2.16 | 0.28 | 2062
5  | 500  | 1100 | 2.56 | 2.25 | 0.31 | 2524
6  | 600  | 1250 | 2.73 | 2.33 | 0.33 | 3030
7  | 700  | 1400 | 2.90 | 2.41 | 0.35 | 3578
8  | 800  | 1550 | 3.05 | 2.49 | 0.38 | 4166
9  | 900  | 1700 | 3.21 | 2.57 | 0.40 | 4794
10 | 1000 | 1850 | 3.36 | 2.64 | 0.42 | 5461
11 | 1100 | 2000 | 3.51 | 2.72 | 0.44 | 6166
12 | 1200 | 2150 | 3.66 | 2.78 | 0.46 | 6908
Table 4. Soil bearing capacity – optimum values relation (design constants: Md = 150 kNm, Nd = 750 kN, fctd = 1.26 MPa, Bx,c = By,c = 0.55 m)

No. | qt (kN/m²) | Bx,f (m) | By,f (m) | Hf (m) | Cf (TL)
1  | 80  | 5.00 | 2.57 | 0.27 | 4750
2  | 95  | 4.99 | 2.13 | 0.26 | 3923
3  | 110 | 4.36 | 2.13 | 0.26 | 3426
4  | 125 | 3.90 | 2.13 | 0.26 | 3053
5  | 140 | 3.54 | 2.13 | 0.26 | 2762
6  | 155 | 3.25 | 2.13 | 0.26 | 2527
7  | 170 | 3.02 | 2.13 | 0.26 | 2334
8  | 185 | 2.82 | 2.13 | 0.26 | 2171
9  | 200 | 2.65 | 2.13 | 0.26 | 2033
10 | 215 | 2.51 | 2.12 | 0.25 | 1913
11 | 230 | 2.38 | 2.12 | 0.25 | 1808
12 | 245 | 2.27 | 2.12 | 0.25 | 1716
Table 5. Concrete grade (fctd) – optimum values relation (design constants: Md = 250 kNm, Nd = 1500 kN, qt = 240 kN/m², Bx,c = By,c = 0.55 m)

No. | fctd (MPa) | Bx,f (m) | By,f (m) | Hf (m) | Cf (TL)
1 | 0.90 | 3.08 | 2.84 | 0.47 | 6075
2 | 0.95 | 3.12 | 2.79 | 0.46 | 5844
3 | 1.00 | 3.16 | 2.75 | 0.44 | 5634
4 | 1.15 | 3.26 | 2.63 | 0.41 | 5098
5 | 1.26 | 3.33 | 2.56 | 0.39 | 4777
6 | 1.35 | 3.38 | 2.51 | 0.37 | 4548
7 | 1.45 | 3.43 | 2.46 | 0.36 | 4323
8 | 1.55 | 3.48 | 2.41 | 0.34 | 4124
4 Discussion

In the optimum design of an RC footing under the influence of uniaxial external forces, the Jaya algorithm yielded consistent results with 500 iterations and a population of 15. As expected, since a case with moment about one axis (the x-axis) is considered within the scope of the research, an increase in moment increases the foundation dimension in the x direction more than the other foundation dimensions. An increase in soil-bearing capacity reduces the dimensions of the foundation and the cost: because the single footing carries an x-directed moment, higher soil strength allows the length of the footing in the x direction, and hence the foundation cost, to be reduced. Therefore, using the Jaya algorithm to design RC footings gives consistent results.
5 Conclusion

As expected, an increase in moment increases the optimum dimensions of the single foundation and the optimum cost, and an increase in the normal force has the same effect. When the tables showing the effects of the moment and normal force increases on the cost are examined, for the same rate of increase in external load, the rate of cost increase is higher for the normal force than for the moment. This is a further confirmation that the optimization gives valid results consistent with the design constraints, which are written on the condition that the soil-bearing capacity is not exceeded. The effect of a moment increase on the footing dimensions is smaller than that of a normal force increase, because in the maximum stress formula the normal force is divided by the product of the footing lengths in the two directions (Bx·By), whereas the moment term is divided by the square of the x-direction length times the y-direction length (Bx²·By). Therefore, the expected results have been achieved. In short, increases in normal force and moment increase the optimum foundation dimensions, and the normal force is more effective than the moment on the rate of increase of the foundation height. Since the moment acts in one direction only, as the moment increases, the length of the foundation in the x direction increases, while the length in the y direction increases relatively less. As the soil-bearing capacity and the concrete class increase, the foundation sections get smaller and the optimum cost decreases. When different scenarios with the same external loads, soil properties and concrete class were investigated, it was determined that an increase in column size reduced the cost and the foundation dimensions. As a result, the Jaya algorithm is successful in the optimum design of single footings and gives consistent results.
References 1. Arora, J.: Introduction to Optimum Design. Elsevier (2004)
2. European Ready Mixed Concrete Organization (ERMCO). ERMCO Statistics 2015. Available online: www.ermco.eu/document/ermco-statistics-2015-final-pdf/. Accessed 24 Apr 2022
3. Maas, G.P.: Comparison of quay wall designs in concrete, steel, wood and composites with regard to the CO2-emission and the life cycle analysis (2011)
4. Celep, Z.: Betonarme Yapılar. Beta Basım Yayım Dağıtım, İstanbul (2017)
5. Nigdeli, S.M., Bekdaş, G., Yang, X.-S.: Metaheuristic optimization of reinforced concrete footings. KSCE J. Civ. Eng. 22(11), 4555–4563 (2018). https://doi.org/10.1007/s12205-018-2010-6
6. Nigdeli, S.M., Bekdaş, G.: Optimum design of RC continuous beams considering unfavourable live-load distributions. KSCE J. Civ. Eng. 21(4), 1410–1416 (2017). https://doi.org/10.1007/s12205-016-2045-5
7. Cakiroglu, C., Islam, K., Bekdas, G., Apak, S.: Cost and CO2 emission-based optimisation of reinforced concrete deep beams using Jaya algorithm. J. Environ. Prot. Ecol. 23(6), 1–10 (2022)
8. Kayabekir, A.E., Bekdaş, G., Nigdeli, S.M.: Evaluation of metaheuristic algorithm on optimum design of T-beams. In: Proceedings of 6th International Conference on Harmony Search, Soft Computing and Applications: ICHSA 2020, Istanbul, pp. 155–169. Springer, Singapore (2021)
9. Kayabekir, A.E., Bekdaş, G., Nigdeli, S.M.: Optimum design of reinforced concrete T-beam considering environmental factors via flower pollination algorithm. Int. J. Eng. Appl. Sci. 13(4), 166–178 (2021)
10. Yücel, M., Nigdeli, S.M., Bekdaş, G.: Minimization of the CO2 emission for optimum design of T-shape reinforced concrete (RC) beam. In: Proceedings of 7th International Conference on Harmony Search, Soft Computing and Applications: ICHSA 2022, pp. 127–138. Springer Nature, Singapore (2022)
11. Çoşut, M., Bekdaş, G., Niğdeli, S.M.: Cost optimization and comparison of rectangular cross-section reinforced concrete beams using TS500, Eurocode 2, and ACI 318 code. In: Proceedings of 7th International Conference on Harmony Search, Soft Computing and Applications: ICHSA 2022, pp. 83–91. Springer Nature, Singapore (2022)
12. Bekdaş, G., Cakiroglu, C., Kim, S., Geem, Z.W.: Optimization and predictive modeling of reinforced concrete circular columns. Materials 15(19), 6624 (2022)
13. Nigdeli, S.M., Yücel, M., Bekdaş, G.: A hybrid artificial intelligence model for design of reinforced concrete columns. Neural Comput. Appl. 1–9 (2022)
14. Kayabekir, A., Bekdaş, G., Nigdeli, S., Apak, S.: Cost and environmental friendly multiobjective optimum design of reinforced concrete columns. J. Environ. Prot. Ecol. 23(2), 1–10 (2022)
15. Kayabekir, A.E., Nigdeli, S.M., Bekdaş, G.: Adaptive harmony search for cost optimization of reinforced concrete columns. In: Intelligent Computing & Optimization: Proceedings of the 4th International Conference on Intelligent Computing and Optimization 2021 (ICO2021), vol. 3, pp. 35–44. Springer International Publishing, Berlin (2022)
16. Cakiroglu, C., Islam, K., Bekdaş, G., Kim, S., Geem, Z.W.: Interpretable machine learning algorithms to predict the axial capacity of FRP-reinforced concrete columns. Materials 15(8), 2742 (2022)
17. Bekdaş, G., Nigdeli, S.M.: Optimum design of reinforced concrete columns employing teaching learning based optimization. Challenge J. Struct. Mech. 2(4), 216–219 (2016)
18. Nigdeli, S.M., Bekdas, G., Kim, S., Geem, Z.W.: A novel harmony search based optimization of reinforced concrete biaxially loaded columns. Struct. Eng. Mech. Int. J. 54(6), 1097–1109 (2015)
19. Bekdaş, G., Cakiroglu, C., Kim, S., Geem, Z.W.: Optimal dimensioning of retaining walls using explainable ensemble learning algorithms. Materials 15(14), 4993 (2022)
20. Yücel, M., Bekdaş, G., Nigdeli, S.M., Kayabekir, A.E.: An artificial intelligence-based prediction model for optimum design variables of reinforced concrete retaining walls. Int. J. Geomech. 21(12), 04021244 (2021)
21. Yücel, M., Kayabekir, A.E., Bekdaş, G., Nigdeli, S.M., Kim, S., Geem, Z.W.: Adaptive-hybrid harmony search algorithm for multi-constrained optimum eco-design of reinforced concrete retaining walls. Sustainability 13(4), 1639 (2021)
22. Kayabekir, A.E., Yücel, M., Bekdaş, G., Nigdeli, S.M.: Comparative study of optimum cost design of reinforced concrete retaining wall via metaheuristics. Chall. J. Concr. Res. Lett. 11, 75–81 (2020)
23. Kayabekir, A.E., Arama, Z.A., Bekdas, G.: Effect of application factors on optimum design of reinforced concrete retaining systems. Struct. Eng. Mech. 80(2), 113–127 (2021)
24. Bekdaş, G., Nigdeli, S.M., Kim, S., Geem, Z.W.: Modified harmony search algorithm-based optimization for eco-friendly reinforced concrete frames. Sustainability 14(6), 3361 (2022)
25. Rakıcı, E., Bekdaş, G., Nigdeli, S.M.: Optimal cost design of single-story reinforced concrete frames using Jaya algorithm. In: Proceedings of 6th International Conference on Harmony Search, Soft Computing and Applications: ICHSA 2020, Istanbul, pp. 179–186. Springer, Singapore (2021)
26. Ulusoy, S., Kayabekir, A.E., Bekdaş, G., Nigdeli, S.M.: Optimum design of reinforced concrete multi-story multi-span frame structures under static loads. Int. J. Eng. Technol. 10(5), 403–407 (2018)
27. Bekdaş, G., Nigdeli, S.M.: Modified harmony search for optimization of reinforced concrete frames. In: Harmony Search Algorithm: Proceedings of the 3rd International Conference on Harmony Search Algorithm (ICHSA 2017), vol. 3, pp. 213–221. Springer, Singapore (2017)
28. Bekdaş, G., Nigdeli, S.M.: Optimization of RC frame structures subjected to static loading. In: 11th World Congress on Computational Mechanics, pp. 20–25 (2014)
29. Kayabekir, A.E., Bekdaş, G., Nigdeli, S.M.: Control of reinforced concrete frame structures via active tuned mass dampers. In: Proceedings of 7th International Conference on Harmony Search, Soft Computing and Applications: ICHSA 2022, pp. 271–277. Springer Nature, Singapore (2022)
30. Rao, R.V.: Jaya: a simple and new optimization algorithm for solving constrained and unconstrained optimization problems. Int. J. Ind. Eng. Comput. 7, 19–34 (2016)
31. Bekdaş, G., Nigdeli, S.M., Yücel, M., Kayabekir, A.E.: Yapay Zeka Optimizasyon Algoritmaları ve Mühendislik Uygulamaları. Seçkin Yayıncılık, Ankara (2021)
32. Doğangün, A.: Betonarme Yapıların Hesap ve Tasarımı. Birsen Yayın Dağıtım, İstanbul (2021)
AI Models for Spot Electricity Price Forecasting—A Review G. P. Girish1(B) , Rahul Bhagat2 , S. H. Preeti3 , and Sweta Singh4 1 Department of Finance, IBS Bangalore (Off-Campus Centre), IFHE University (a Deemed
to-be-University under Sec 3 of UGC Act 1956), Hyderabad, India [email protected] 2 Prestige Institute of Management and Research, DAVV Indore, Indore, Madhya Pradesh, India 3 GITAM School of Business, GITAM University (a Deemed to-be-University under Sec 3 of UGC Act 1956), Hyderabad, India 4 Department of Marketing, IBS Hyderabad, IFHE University (a Deemed to-be-University under Sec 3 of UGC Act 1956), Hyderabad, India [email protected]
Abstract. Electricity is a unique commodity that requires special attention due to its non-storable and perishable nature, limited transmission constraints and indispensability to modern life. Power exchanges and spot electricity markets facilitate the trading of electrical power between multiple stakeholders, providing an effective platform for electricity trading and promoting transparency and competition. Major power exchanges that offer spot markets include EPEX, Nord Pool, AEMO, NYMEX, and IEX. Artificial intelligence is increasingly playing a crucial role in spot electricity price forecasting due to its capability to process massive amounts of data and generate insights. This study presents and reviews state-of-the-art AI inspired models in spot electricity price forecasting literature, highlighting the way forward for stakeholders, policy makers, power industry participants and researchers. Keywords: Machine learning · Finance · Deep learning · Application
1 Introduction

Electricity is viewed as a distinct commodity due to several reasons. Firstly, it has a non-storable attribute, which implies that it cannot be stocked in large quantities like other commodities, such as oil or gas, and must be generated on-demand to fulfill current requirements. Secondly, electricity is a perishable commodity, which means that it must be used instantly as it cannot be stored for later use, making it crucial to manage supply and demand in real-time. Thirdly, electricity is limited by transmission constraints, as it must be carried from the point of production to the point of consumption, and these constraints, such as the limited capacity of transmission lines, can hinder the transfer of electricity from one location to another. Fourthly, electricity is considered an indispensable commodity as it is used in almost all aspects of modern life, and its affordability and
dependability are vital for the functioning of modern economies. Finally, the regulation of electricity markets is necessary to guarantee fair prices and consumer protection, and it can vary depending on the country or region, which makes it a regulated commodity. These aspects make electricity a unique and exceptional commodity that needs special attention concerning pricing, trading, and regulation [1, 2, 5]. Power exchanges and spot electricity markets play a pivotal role in the electricity industry by facilitating the bilateral trading of electrical power between multiple stakeholders. Power exchanges serve as sophisticated electronic trading platforms that effectively match the supply and demand of electricity among electricity producers, suppliers, and consumers. These platforms operate seamlessly and offer a transparent, open, and highly competitive marketplace that ensures equitable pricing of electricity. Spot electricity markets are a specialized subset of power exchanges where electricity is traded for immediate delivery, typically on a day-ahead or hour-ahead basis. These markets empower electricity producers and suppliers to adjust their output based on real-time demand and supply conditions, thus optimizing grid balance and ensuring a reliable and stable supply of electricity. By providing an effective platform for electricity trading, power exchanges and spot markets foster a highly competitive environment, thereby promoting transparency and reducing the market power of dominant players. Additionally, they provide renewable energy sources with an opportunity to participate in the energy market, thus encouraging investment in new generation capacity. Holistically power exchanges and spot markets are instrumental in creating an efficient and dependable electricity market, thereby benefiting consumers, producers, and suppliers alike [3, 4, 6]. The trading of electricity is facilitated by various power exchanges globally, including those that offer spot electricity markets. Some notable examples of major power exchanges that provide such markets are: (a) The European Power Exchange (EPEX) operates across multiple European countries and ranks among the largest power exchanges worldwide. It operates both day-ahead and intra-day markets, catering to power product trades in several currencies. (b) Nord Pool, the world’s biggest power exchange, operates across several Northern European countries, and is known for its trading activities in the Nordic and Baltic regions. It provides both day-ahead and intraday markets, offering products specifically for the Nordic, Baltic, and German markets. (c) The Australian Energy Market Operator (AEMO) runs a spot market in Australia, catering to the eastern and southern states. The exchange operates on a 24-h basis, providing both day-ahead and intra-day markets. (d) The New York Mercantile Exchange (NYMEX) manages a spot market for electricity, natural gas, and other energy products in the United States. It offers multiple trading contracts and operates on a 24-h basis. (e) The Indian Energy Exchange (IEX) is a leading power exchange in India and operates a day-ahead market specifically for electricity trading. The exchange offers multiple contracts for trading and operates on a 24-h basis [7, 8, 9]. Artificial Intelligence (AI) is increasingly playing a crucial role in the spot electricity price forecasting process, owing to its capability to process massive amounts of data and generate insights to facilitate more precise price forecasting. 
In this study, we present and review state-of-the-art AI-inspired models in the spot electricity
price forecasting literature. The insights provided by the review will be useful for all stakeholders, policy makers, power industry participants and researchers. The remainder of the paper is structured as follows. In Sect. 2 we review spot electricity price forecasting literature with emphasis on AI inspired models. In Sect. 3 we focus on stylized facts of spot electricity prices, how AI models can contribute in optimizing spot electricity price forecasting and the role of regulators. In Sect. 4 we summarize and conclude our study highlighting the way forward.
2 Literature Review

Short-term electricity price contracts are typically characterized as agreements for the exchange of electricity for a duration ranging from one day up to one year. These contracts are utilized to meet the immediate requirements for electricity or to exploit transient price oscillations within the spot market. Compared to their longer-term counterparts, short-term contracts offer greater flexibility, providing parties with the ability to adjust their electricity procurement or selling strategies more frequently. Medium-term electricity price contracts, which span between one and three years, are employed to manage the supply and demand of electricity over an extended period, while also offering a greater level of price stability in comparison to short-term contracts. They may be utilized to hedge against price volatility or to secure a consistent supply of electricity for a specified duration. Long-term electricity price contracts, on the other hand, typically refer to agreements that involve the purchase or sale of electricity for a period of three years or more. These contracts are frequently employed in large-scale electricity projects, such as the development of new power plants or renewable energy facilities. Long-term contracts offer greater certainty over future electricity prices and supply, thus enabling effective long-term planning and investment [1, 4, 6, 9]. AI is utilized in several ways to enhance spot electricity price forecasting: (a) Data processing: AI can analyze voluminous data from various sources, such as historical price data, weather forecasts, and electricity market data, to recognize patterns and relationships that can inform price forecasting. (b) Machine learning algorithms: Historical price data can train machine learning algorithms to predict future prices, with continuous improvements in accuracy over time as new data becomes available. (c) Neural networks: By modeling intricate connections between different variables that can affect electricity prices, neural networks can learn to predict patterns that humans may find too complex to detect. Large datasets can be used to train these networks. (d) Predictive analytics: Predictive analytics techniques can identify trends and patterns in electricity market data, facilitating price forecasting. These techniques can also be used to forecast future trends and identify drivers of price changes, such as variations in supply or demand. Integrating AI in spot electricity price forecasting augments the accuracy of forecasts and facilitates more informed decision-making by market participants. This, in turn, ensures the efficiency of the electricity market, with prices that reflect the actual value of electricity at any given time [2, 5, 8, 9]. There exists a multitude of AI models for spot electricity price forecasting, such as Artificial Neural Networks, Support Vector Machines, Random Forests, Long Short-Term Memory Networks and Convolutional Neural Networks. ANNs simulate the biological neural network structure to predict the intricate relationships between diverse
variables that impact electricity prices. SVMs, a supervised learning algorithm, learn from historical price data to make future price predictions, which can be refined over time. Random Forests, an ensemble learning method, generate a multitude of decision trees and output the mode of the individual trees' predictions for classification or their mean prediction for regression. LSTMs, a type of recurrent neural network, are well suited to analyzing time series data and can identify patterns in historical data to forecast spot electricity prices. CNNs, typically used for image recognition, can be utilized in time series forecasting, such as spot electricity price forecasting, by interpreting historical price data as an image and recognizing patterns in the data. Overall, these AI models utilize advanced techniques to analyze vast datasets and improve the accuracy of spot electricity price forecasting [6, 8, 9]. Artificial neural network models can be categorized based on their architecture and learning algorithm. The architecture, also known as topology, refers to the neural connections, while the learning algorithm describes how the ANN adjusts its weights for each training vector. In the context of electricity price forecasting, ANN models can also be classified based on the number of output nodes they have. The first group includes ANN models with a single output node used to forecast various electricity prices, such as the next hour's price, the price h hours ahead, the next day's peak price, the next day's average on-peak price, or the next day's average baseload price. Several studies have been conducted using these models, such as [10–18]. The second, less common group includes ANN models with multiple output nodes that forecast a vector of prices, typically with 24 or 48 nodes, to predict the complete price profile of the following day [19]. Feed-forward networks are commonly favored for forecasting, while recurrent networks are particularly skilled in pattern classification and categorization, as highlighted by studies conducted by [20, 21]. The Levenberg–Marquardt algorithm is the second most commonly used training method, as demonstrated by its application in the electricity price forecasting studies of [22, 23]. [24] posits that this algorithm can train a network 10–100 times more quickly than back-propagation. The Multi-Layer Perceptron architecture has been employed in various studies such as those conducted by [25, 26]. On the other hand, the Radial Basis Function architecture, which is less popular, has been utilized in studies such as those conducted by [27].
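As a minimal, hedged illustration of the single-output feed-forward ANN forecasters discussed above, the sketch below trains a small multi-layer perceptron on 24 lagged hourly prices to predict the next hour's price. The synthetic data, lag depth and network size are illustrative assumptions, not a reproduction of any cited study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic hourly spot prices with a daily cycle plus noise (placeholder data)
t = np.arange(24 * 200)
price = 50 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)

# Build a supervised dataset: 24 lagged prices -> next hour's price
lags = 24
X = np.array([price[i:i + lags] for i in range(t.size - lags)])
y = price[lags:]

split = int(0.8 * len(y))
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X[:split], y[:split])

mae = np.abs(model.predict(X[split:]) - y[split:]).mean()
print(f"test MAE: {mae:.2f}")
```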
3 Stylized Facts of Spot Electricity Prices and Role of AI Models

Spot electricity prices are the real-time prices of electricity that are bought and sold in the wholesale electricity market. Several stylized facts about spot electricity prices are worth noting. These include high volatility, with significant changes in prices within a day, week or year, which can be attributed to changes in demand and supply conditions, weather patterns, fuel prices, and other factors. Despite this, electricity prices exhibit mean-reverting behavior, which implies that prices tend to return to their long-term average over time, even after experiencing sharp increases or decreases. Spot electricity prices also show significant seasonal patterns, with higher prices during periods of high demand such as summer or winter and lower prices during periods of low demand. Moreover, there is a significant autocorrelation in spot electricity prices, which indicates that current prices are correlated with past prices, implying that past prices can provide useful information
for forecasting future prices. Finally, there is the possibility of price spikes, which are sudden extreme changes in electricity prices caused by unexpected changes in demand or supply conditions such as plant outages, weather events, or transmission line failures, which can have significant economic impacts on electricity consumers and producers [1–9]. Regulators are instrumental in guaranteeing efficient, equitable, and transparent operation of spot electricity markets. They perform various crucial roles in these markets such as market design and structure, oversight and enforcement, market monitoring and analysis, and consumer protection. Regulators collaborate in designing and structuring spot electricity markets, creating rules and procedures for trading, and formulating pricing mechanisms that reflect the genuine value of electricity to facilitate equitable and efficient operation of the markets. They also monitor and enforce the rules and procedures of these markets to forestall potential abuses like insider trading or market manipulation. Through market monitoring and analysis, regulators identify market inefficiencies, instabilities, or trends, which inform their decision-making. Regulators ensure consumer protection by verifying that market participants operate transparently, ethically, and fairly, and that consumers are shielded from anti-competitive or fraudulent behavior. By promoting competition, transparency, and consumer protection in spot electricity markets, regulators ensure their efficient and fair operation for all stakeholders involved [1–9]. Artificial intelligence inspired models hold the potential to play a consequential role in the future of spot electricity price forecasting. These models can scrutinize large volumes of data, discovering patterns and trends that might be elusive to human analysts. Some of the ways in which AI models can enhance spot electricity price forecasting include: (a) Increased precision: AI models can scrutinize extensive data and detect patterns and trends that may not be perceivable to human analysts. This can lead to more precise forecasts and more informed decision-making. (b) Rapid forecasting: AI models can analyze data expeditiously and make predictions in real-time, which is essential in a swiftly changing market such as spot electricity prices. (c) Improved risk management: AI models can assist in identifying and managing risk in the electricity market. For instance, AI models can be used to anticipate extreme price spikes or recognize possible supply disruptions, enabling electricity companies to take measures to minimize their vulnerability to risk. (d) Streamlined efficiency: AI models can mechanize many of the activities associated with spot electricity price forecasting, enabling analysts to concentrate on more strategic and value-added operations. AI models for electricity price forecasting are relevant to stakeholders, policy makers, power industry participants, and researchers in several ways. Firstly, accurate electricity price forecasting is crucial for stakeholders, such as electricity producers, suppliers, and consumers, as it helps them make informed decisions about production, consumption, and investment. Accurate forecasting can help them optimize their operations, manage risk, and reduce costs. Secondly, policymakers can use accurate forecasting to develop effective energy policies that promote sustainability, efficiency, and affordability. 
Thirdly, power industry participants can use accurate forecasting to improve the efficiency of electricity trading, reduce price volatility, and ensure a reliable and stable
supply of electricity. Fourthly, researchers can use AI models to advance the knowledge and understanding of electricity markets, identify patterns and trends in electricity prices, and develop new forecasting methodologies.
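To make the mean-reversion and spike stylized facts concrete, the following sketch estimates the lag-1 autocorrelation of a synthetic price series (a simple AR(1) proxy for the speed of mean reversion) and flags spikes as deviations from a rolling mean beyond an assumed threshold. Both the data and the threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic spot-price series: mean-reverting noise plus a few injected spikes
n = 2000
p = np.empty(n)
p[0] = 50.0
for i in range(1, n):
    p[i] = 50 + 0.8 * (p[i - 1] - 50) + rng.normal(0, 3)   # AR(1) dynamics
p[rng.choice(n, 5, replace=False)] += 120                  # artificial spikes

# Lag-1 autocorrelation: values close to 1 imply slow mean reversion
rho = np.corrcoef(p[:-1], p[1:])[0, 1]

# Spike detection: deviation from a rolling mean beyond k standard deviations
k, w = 4.0, 48
roll = np.convolve(p, np.ones(w) / w, mode="same")
spikes = np.where(np.abs(p - roll) > k * p.std())[0]

print(f"lag-1 autocorrelation: {rho:.2f}, spikes flagged: {len(spikes)}")
```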
4 The Way Forward

Artificial intelligence tools have the ability to handle complexity and non-linearity, making them better suited for modeling electricity prices than purely statistical techniques. However, this flexibility can also be a weakness, as it may not result in better point forecasts. Non-linear models have the potential to provide better interval and density forecasts than linear models, but this has not been thoroughly investigated. The diversity of available tools makes it hard to find an optimal solution and to compare different methods. Additionally, the performance of a given implementation of a method can only be evaluated under certain initial conditions and for a certain calibration dataset, limiting the ability to draw general conclusions about a method's efficiency. Nonetheless, AI-inspired models hold the potential to play a consequential role in the future of spot electricity price forecasting owing to increased precision, rapid forecasting, improved risk management and streamlined efficiency. In this study we presented and reviewed state-of-the-art AI-inspired models in the spot electricity price forecasting literature, highlighting the way forward for stakeholders, policy makers, power industry participants and researchers.
References 1. Girish, G.P., Badri, N.R., Vaseem, A.: Spot electricity price discovery in Indian electricity market. Renew. Sust. Energ. Rev. 82, 73–79 (2018) 2. Weron, R., Misiorek, A.: Forecasting spot electricity prices: a comparison of parametric and semiparametric time series models. Int. J. Forecast. 24, 744–763 (2008) 3. Girish, G.P.: Spot electricity price forecasting in Indian electricity market using autoregressive-GARCH models. Energy Strategy Rev. 11–12, 52–57 (2016) 4. Aggarwal, S.K., Saini, L.M., Kumar, A.: Electricity price forecasting in deregulated markets: a review and evaluation. Electr. Power Energy Syst. 31, 13–22 (2009) 5. Amjady, N., Daraeepour, A.: Design of input vector for day-ahead price forecasting of electricity markets. Expert Syst. Appl. 36, 12281–12294 (2009) 6. Bowden, N., Payne, J.E.: Short term forecasting of electricity prices for MISO hubs: evidence from ARIMA-EGARCH models. Energy Econ. 30, 3186–3197 (2008) 7. Girish, G.P., Vijayalakshmi, S.: Spot electricity price dynamics of Indian electricity market. In: Lecture Notes in Electrical Engineering, vol. 279, pp. 1129-1135 (2014) 8. Girish, G.P., Vijayalakshmi, S., Panda, A.K., Rath, B.N.: Forecasting electricity prices in deregulated wholesale spot electricity market—a review. Int. J. Energy Econ. Policy. 4, 32–42 (2014) 9. Weron, R.: Electricity price forecasting: a review of the state-of-the-art with a look into the future. Int. J. Forecast. 30, 1030–1081 (2014) 10. Gonzalez, V., Contreras, J., Bunn, D.W.: Forecasting power prices using a hybrid fundamentaleconometric model. IEEE Trans. Power Syst. 27, 363–372 (2012)
11. Mandal, P., Senjyu, T., Funabashi, T.: Neural networks approach to forecast several hour ahead electricity prices and loads in deregulated market. Energy Convers. Manag. 47, 2128–2142 (2006) 12. Amjady, N.: Day-ahead price forecasting of electricity markets by a new fuzzy neural network. IEEE Trans. Power Syst. 21, 887–996 (2006) 13. Hu, Z., Yang, L., Wang, Z., Gan, D., Sun, W., Wang, K.: A game-theoretic model for electricity markets with tight capacity constraints. Int. J. Electr. Power Energy Syst. 30, 207–215 (2008) 14. Rodriguez, C.P., Anders, G.J.: Energy price forecasting in the Ontario competitive power system market. IEEE Trans. Power Syst. 19, 366–374 (2004) 15. Areekul, P., Senju, T., Toyama, H., Chakraborty, S., Yona, A., Urasaki, N.: A new method for next-day price forecasting for PJM electricity market. Int. J. Emerg. Electr. Power Syst. 11 (2010) 16. Guo, J.-J., Luh, P.B.: Improving market clearing price prediction by using a committee machine of neural networks. IEEE Trans. Power Syst. 19, 1867–1876 (2004) 17. Zhang, G., Patuwo, B.E., Hu, M.Y.: Forecasting with artificial neural networks: the state of the art. Int. J. Forecast. 14, 35–62 (1998) 18. Pao, H.-T.: A neural network approach to m-daily-ahead electricity price prediction. In: Lecture Notes in Computer Science, vol. 3972, pp. 1284–1289 (2006) 19. Yamin, H.Y., Shahidehpour, S.M., Li, Z.: Adaptive short-term electricity price forecasting using artificial neural networks in the restructured power markets. Int. J. Electr. Power Energy Syst. 26, 571–581 (2004) 20. Jain, A.K., Mao, J., Mohiuddin, K.M.: Artificial neural networks: a tutorial. Computer 29, 31–44 (1996) 21. Rutkowski, L.: Computational Intelligence: Methods and Techniques. Springer (2008) 22. Catalão, J.P.S., Pousinho, H.M.I., Mendes, V.M.F.: Hybrid wavelet-PSO-ANFIS approach for short-term electricity prices forecasting. IEEE Trans. Power Syst. 26, 137–144 (2011) 23. Pindoriya, N.M., Singh, S.N., Singh, S.K.: An adaptive wavelet neural network-based energy price forecasting in electricity markets. IEEE Trans. Power Syst. 23, 1423–1432 (2008) 24. Amjady, N.: Short-term electricity price forecasting. In: Catalão J.P.S. (ed.) Electric Power Systems: Advanced Forecasting Techniques and Optimal Generation Scheduling. CRC Press (2012) 25. Dong, Y., Wang, J., Jiang, H., Wu, J.: Short-term electricity price forecast based on the improved hybrid model. Energy Conv. Manag. 52, 2987–2995 (2011) 26. Garcia-Martos, C., Rodriguez, J., Sanchez, M.J.: Forecasting electricity prices by extracting dynamic common factors: application to the Iberian market. IET Gen. Transm. 6, 11–20 (2012) 27. Lin, W.-M., Gow, H.-J., Tsai, M.-T.: An enhanced radial basis function network for short-term electricity price forecasting. Appl. Energy 87, 3226–3234 (2010)
Comparison of Various Weight Allocation Methods for the Optimization of EDM Process Parameters Using TOPSIS Sunil Mintri, Gaurav Sapkota, Nameer Khan, Soham Das, Ishwer Shivakoti, and Ranjan Kumar Ghadai(B) Sikkim Manipal Institute of Technology, Sikkim Manipal University, Manipal, Sikkim, India [email protected]
Abstract. Multi-Criteria Decision Making (MCDM) techniques are widely used for the optimization of process parameters in various engineering problems. The weight allotted to each criterion plays a crucial role in effectively implementing MCDM techniques. In this work, we use the TOPSIS technique with five different subjective and objective weight allocation methods to select the best operating conditions for the machining of SKD11 tool steel using the Electro Discharge Machining (EDM) process. Our results indicate that experimental runs no. 22 and 25 are the best alternatives among all the options tested. The rank plot also suggests that an increase in peak current is better for the overall performance of the EDM process. We observe that the TOPSIS method is not very sensitive to the criteria weights for the current dataset, as evidenced by the correlation between the ranks obtained using the different weight determination methods. Moreover, we find that the weights allotted have little effect on the predicted optimum process parameters in the case of the TOPSIS method. Keywords: MCDM · TOPSIS · Electro discharge machining
1 Introduction

Electro Discharge Machining (EDM) is a non-traditional material removal process that is widely used to machine well-conducting materials, regardless of their hardness [1]. It finds extensive applications in various industries such as nuclear energy, aircraft, molding, surgical instruments, sports, jewelry, automobile, and development areas [2]. The approach can machine any electrically conductive material, irrespective of its mechanical characteristics [3]. EDM is preferred over other non-traditional methods as it is a contactless machining method based primarily on the erosive effect of electrical discharge. Owing to its electro-thermal nature, a wide range of conductive materials can be machined regardless of their hardness and toughness. Its contactless nature has resulted in a satisfactory level of accuracy and surface texture [4]. In the literature, various statistical and non-statistical decision-making techniques have been proposed to model complicated engineering processes. Multi-Criteria Decision Making (MCDM) strategies are among the techniques that have been gaining considerable popularity and wide application in recent years [5]. MCDM is a method for aiding
decision-makers in selecting the most suitable alternative(s) based on their preferences and goals, rather than identifying a single mathematically optimal solution [5, 6]. Weight allocation plays a crucial role in MCDM methods such as MOORA, PROMETHEE, and TOPSIS [7]. Weight allocation can be objective, using fixed formulas to compute the weights, or subjective, where decision-makers determine the best and worst possibilities using methods like the Analytic Hierarchy Process and the Best Worst Method. The decision-maker's preferences are represented by the choice criteria, and the outcomes of those criteria represent the decision-maker's values [8]. Weight allocation thus expresses the value each decision-maker places on different aspects of the product, which can be used to determine the most appropriate decision from a set of options. TOPSIS is a widely used MCDM technique that seeks the alternative whose final rating is closest to the ideal solution [9]. The approach is characterized by its simplicity, comprehensibility, and ability to assess the relative performance of every option in a simple mathematical fashion [7]. TOPSIS accounts for trade-offs between criteria, allowing poor performance in one criterion to be offset by good performance in another, which makes it a more practical form of modeling than methods that exclude candidate solutions based on arbitrary cut-offs [8]. TOPSIS has numerous applications, including financial ratio performance, evaluation of organizational performance, and financial investment in advanced production systems [9]. Several MCDM techniques have been used to optimize EDM process parameters such as pulse-off time (Toff), gap voltage (U), pulse-on time (Ton), and peak current (I) [10, 11]. In addition to these process parameters, performance measures like average white layer thickness (WLT), micro hardness (HV), material removal rate (MRR), and average surface roughness (SR) have been considered to evaluate the EDM system [12]. The discharge energy during the sparking phenomenon determines MRR and SR: the higher the discharge energy, the higher the material removal rate, but the poorer the surface finish (higher average surface roughness) [13, 14]. Increasing the values of the input parameters improves MRR but decreases the quality of the machined surface texture [15]. Nguyen et al. [11] considered the Taguchi–grey relational analysis (TGRA) technique for optimizing the EDM process parameters for the machining of SKD11 high-chromium tool steel. The overall performance of the machining was analyzed through several response factors, i.e. WLT, MRR, SR and HV. From the results, it was determined that the surface performance of the machining process can be improved with a gap voltage of 50 V, a Toff of 37 μs, a Ton of 18 μs and a current of 5 A. Rao and Kambagowni [16] used the DEAR technique, where the effect of the EDM process parameters (Toff, V, I, Ton) on surface roughness in the machining of EN41 material was considered. Verma and Sahu [17] used a full factorial design approach for the machining of titanium alloy by the EDM process and observed micro-cracks on the surface when the Ton duration was increased. Rezaei [18] proposed a new technique named the Best Worst Method (BWM), which resolves real-world MCDM problems, since it requires fewer judgments than matrix-based MCDM techniques and yields more consistent comparisons, making the final weights derived from BWM highly reliable. Manivannan et al.
[19] used AISI 304 steel, as it is widely employed in industrial applications because of its mechanical and physical properties, corrosion resistance, good weldability and appearance.
The present study aims to optimize the performance of EDM process parameters using TOPSIS, a Multi-Criteria Decision-Making (MCDM) technique. The study addresses a research gap concerning the effect of weight allocation on the optimization of EDM process parameters and provides valuable insights for choosing the best weight allocation strategy. The research investigates the impact of weight allocation on the TOPSIS method in order to study the sensitivity of the MCDM method to different weight allocations. Additionally, correlation analysis is conducted to evaluate the similarity of the obtained ranks.
2 Experimental Details

The experimental data in the present research work have been taken from Nguyen et al. [11]. In their work, the specimen used was SKD11 high-chromium tool steel; complex die shapes can be produced from this material, but its hardness makes the workpiece difficult to machine by traditional methods, so the EDM process is used as the machining process in the current work. Gap voltage (U), peak current (I), pulse-off time (Toff) and pulse-on time (Ton) were chosen as the input parameters. Microhardness (HV), white layer thickness (WLT), surface roughness (SR) and material removal rate (MRR) were considered as the performance responses for the EDM process. MRR is described by the ratio of the reduction in workpiece weight during machining to its final weight:

$$\mathrm{MRR} = \frac{W_1 - W_2}{W_2} \tag{1}$$

where W1 is the initial weight and W2 is the final weight. The criteria weights are computed using five different methods: the standard deviation method, the CRITIC method, the entropy method, the SWARA method and the FUCOM method. The WLT, owing to the varying thickness of the recast layer over the machined surface, was calculated using the following equation:

$$\mathrm{WLT} = \frac{\text{Area of recast sheet}}{\text{Length of recast sheet}} \tag{2}$$

The white layer, together with a polyline, was sketched to determine the WLT region after EDM. The experimental research was carried out with an L25 orthogonal array (OA), since there were four input process parameters, each at five distinct levels, in this study.
3 Methodology

3.1 Standard Deviation (SDV) Method

The SDV method produces an impartial estimate of the weights, as it strengthens the MCDM process and reduces the influence of any particular weight [20]. The steps inherent in SDV use the following relations to convert the various criteria scales into a structured, quantifiable measure from which the criteria weights are determined:

$$B_{ij} = \frac{x_{ij} - \min_i(x_{ij})}{\max_i(x_{ij}) - \min_i(x_{ij})} \tag{3}$$

$$SDV_j = \sqrt{\frac{\sum_{i=1}^{m}\left(B_{ij} - \bar{B}_j\right)^2}{m}} \tag{4}$$

where B_ij is the normalized value of the ith alternative for the jth criterion (j = 1, 2, 3, 4). The weight can then be calculated by Eq. (5) as

$$W_j = \frac{SDV_j}{\sum_{j=1}^{n} SDV_j} \tag{5}$$

The weights for the responses MRR, SR, HV and WLT are W = [0.276, 0.253, 0.236, 0.235].
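A sketch of Eqs. (3)–(5) in Python follows. The decision matrix is a small placeholder (rows are experimental runs; columns are MRR, SR, HV, WLT), since the full L25 data set is not reproduced here.

```python
import numpy as np

def sdv_weights(X):
    # Eq. (3): min-max normalization of each criterion column
    B = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    # Eq. (4): population standard deviation of each normalized column
    sdv = np.sqrt(((B - B.mean(axis=0)) ** 2).mean(axis=0))
    # Eq. (5): normalize so that the weights sum to one
    return sdv / sdv.sum()

X = np.array([[0.12, 5.1, 620.0, 18.0],   # placeholder responses
              [0.20, 6.3, 640.0, 22.0],   # (MRR, SR, HV, WLT)
              [0.31, 7.8, 655.0, 27.0]])
print(sdv_weights(X))
```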
3.2 Entropy Method

The entropy method is a fast and practical process for determining the weights objectively. It uses objective information to derive suitable weight vectors and removes subjective judgment from the computed ranking [21]. Here the weight function is based on a discrete probability distribution:

$$e_j = \frac{-1}{\ln(m)} \sum_{i=1}^{m} n_{ij} \ln n_{ij} \tag{6}$$

The degree of diversity (d) is estimated as

$$d_j = 1 - e_j, \quad j = 1, 2, \ldots, n \tag{7}$$

The weights for the responses MRR, SR, HV and WLT are W = [0.632, 0.14, 0.017, 0.211].
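The entropy weights can be sketched in the same way. The final normalization of the diversity values into weights (each d_j divided by their sum) is the standard entropy-method step and is assumed here, as the text stops at Eq. (7); the decision matrix is again a placeholder.

```python
import numpy as np

def entropy_weights(X):
    # Normalize each column into a discrete probability distribution
    P = X / X.sum(axis=0)
    m = X.shape[0]
    # Eq. (6): entropy of each criterion
    e = -(P * np.log(P)).sum(axis=0) / np.log(m)
    d = 1.0 - e            # Eq. (7): degree of diversity
    return d / d.sum()     # assumed final normalization step

X = np.array([[0.12, 5.1, 620.0, 18.0],
              [0.20, 6.3, 640.0, 22.0],
              [0.31, 7.8, 655.0, 27.0]])
print(entropy_weights(X))
```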
3.3 CRITIC Method

The CRITIC method, also known as criteria importance through inter-criteria correlation, is applied to estimate unbiased weights of the criteria [22]. This method is used when the decision makers are unable to compare criteria or hold conflicting views on the weights. The steps involved in the CRITIC process are elucidated below.

Step 1: Normalize the decision matrix using the following formula, where x_b and x_w are the best and worst values of the criterion:

$$n_{ij} = \frac{x_{ij} - x_w}{x_b - x_w} \tag{8}$$

Step 2: The information conveyed by the jth indicator is then calculated as

$$c_j = \sigma_j \sum_{k=1}^{m} \left(1 - r_{jk}\right) \tag{9}$$

where r_jk is the correlation between the jth and kth indicators and σ_j is the standard deviation of the jth indicator.

Step 3: The objective weights are calculated using Eq. 10:

$$W_j = \frac{c_j}{\sum_{k=1}^{m} c_k} \tag{10}$$

For the current problem, the CRITIC weight vector for MRR, SR, HV and WLT is W = [0.358, 0.222, 0.212, 0.208].
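A corresponding sketch of the CRITIC steps follows; the benefit/cost split of the criteria (MRR and HV beneficial, SR and WLT non-beneficial) follows Sect. 4, and the decision matrix is again a placeholder.

```python
import numpy as np

def critic_weights(X, benefit):
    # Eq. (8): normalize toward the best value of each criterion;
    # `benefit` marks criteria to maximize (True) or minimize (False).
    best = np.where(benefit, X.max(axis=0), X.min(axis=0))
    worst = np.where(benefit, X.min(axis=0), X.max(axis=0))
    N = (X - worst) / (best - worst)
    sigma = N.std(axis=0)                 # contrast intensity of each criterion
    R = np.corrcoef(N, rowvar=False)      # inter-criteria correlations r_jk
    c = sigma * (1.0 - R).sum(axis=0)     # Eq. (9): information content
    return c / c.sum()                    # Eq. (10)

X = np.array([[0.12, 5.1, 620.0, 18.0],
              [0.20, 6.3, 640.0, 22.0],
              [0.31, 7.8, 655.0, 27.0]])
print(critic_weights(X, benefit=np.array([True, False, True, False])))
```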
3.4 SWARA

In the field of multiple attribute decision making (MADM), the SWARA approach was proposed in 2010 as a new paradigm [23]. It was created for use in decision-making procedures where expert policy-making is more prominent than in traditional decision-making processes. In the classic SWARA algorithm, the criteria should be prioritized first; the importance of this stage is unavoidable, since the accuracy of this prioritization appears to be near perfect. Compared to other weighting techniques, SWARA has the following advantages: (1) it captures the experts' opinions about the importance of the criteria in the process, (2) it aids in coordinating and gathering data from experts, and (3) it is simple, user-friendly and straightforward, so experts can easily collaborate. Pairwise comparison of the ranked criteria is performed by the decision makers to calculate s_j, the relative importance of the jth criterion with respect to the (j − 1)th criterion. The coefficient values are then calculated as

$$k_j = \begin{cases} 1, & j = 1 \\ s_j + 1, & j > 1 \end{cases}$$

and the recalculated weights are obtained recursively as q_1 = 1 and q_j = q_{j-1}/k_j. The final weight is calculated using Eq. 11:

$$W_j = \frac{q_j}{\sum_{j=1}^{n} q_j} \tag{11}$$

For the current problem, the SWARA weight vector for MRR, SR, HV and WLT is W = [0.3396, 0.1887, 0.2830, 0.1887].
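SWARA depends on expert judgments, so the s_j values in the sketch below are hypothetical; the code only illustrates the mechanics of the recurrence and of Eq. (11).

```python
import numpy as np

def swara_weights(s):
    # s[j] is the expert-judged relative importance of the (j+1)th ranked
    # criterion with respect to the jth (criteria already sorted by rank).
    k = np.concatenate(([1.0], np.asarray(s) + 1.0))   # k_1 = 1, k_j = s_j + 1
    q = np.cumprod(1.0 / k)                            # q_1 = 1, q_j = q_{j-1} / k_j
    return q / q.sum()                                 # Eq. (11)

# Hypothetical expert judgments for a four-criteria problem
print(swara_weights([0.8, 0.2, 0.5]))
```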
3.5 FUCOM

The FUCOM algorithm is based on pairwise criteria comparisons, with the model requiring only n − 1 comparisons [24]. The model employs a simple technique for validation by calculating the comparison's deviation from full consistency (DFC). The consistency of the model is determined by the fulfilment of mathematical transitivity conditions. One feature of the approach is that it lowers the decision-makers' subjectivity, resulting in symmetry in the weight values of the criteria. The method requires (1) only a modest number of pairwise criteria comparisons, (2) the capacity to specify the DFC, ξ, of the comparison, and (3) transitive pairwise comparisons. The weights are calculated from the following model:

$$\left| \frac{w_{j(k)}}{w_{j(k+1)}} - \varphi_{k/(k+1)} \right| \le \xi \tag{12}$$

$$\left| \frac{w_{j(k)}}{w_{j(k+2)}} - \varphi_{k/(k+1)} \otimes \varphi_{(k+1)/(k+2)} \right| \le \xi \tag{13}$$

$$\sum_{j=1}^{n} w_j = 1, \quad w_j \ge 0 \ \text{for all } j \tag{14}$$

Here ξ signifies the consistency of the model and the function ϕ signifies the priority among criteria as assessed by the decision maker. For the current problem, the FUCOM weight vector for MRR, SR, HV and WLT is W = [0.4569, 0.1714, 0.2741, 0.0977].
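FUCOM requires solving a small constrained optimization problem for ξ. The sketch below does this with scipy.optimize.minimize under hypothetical priorities φ; it illustrates the structure of Eqs. (12)–(14) rather than reproducing the weights reported above.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical priorities phi_{k/(k+1)} between consecutively ranked criteria
phi = np.array([1.5, 1.2, 1.8])          # four criteria -> n - 1 = 3 comparisons
n = len(phi) + 1

def objective(z):                         # z = [w_1..w_n, xi]; minimize xi
    return z[-1]

def cons(z):                              # each entry must be >= 0
    w, xi = z[:n], z[-1]
    c = []
    for k in range(n - 1):                # Eq. (12)
        c.append(xi - abs(w[k] / w[k + 1] - phi[k]))
    for k in range(n - 2):                # Eq. (13), chained priorities
        c.append(xi - abs(w[k] / w[k + 2] - phi[k] * phi[k + 1]))
    return np.array(c)

z0 = np.full(n + 1, 1.0 / n)              # uniform starting point
res = minimize(objective, z0,
               constraints=[{"type": "ineq", "fun": cons},
                            {"type": "eq", "fun": lambda z: z[:n].sum() - 1}],
               bounds=[(1e-6, 1.0)] * n + [(0.0, None)])
print(res.x[:n], "DFC:", res.x[-1])       # weights, Eq. (14), and deviation
```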
3.6 TOPSIS

TOPSIS is a well-established and widely used MCDM technique in operations research and manufacturing engineering. In this method, the preferred alternative is sought based on its closeness coefficient with respect to the ideal best and worst solutions. The underlying principle is to evaluate the options on a Euclidean distance scale, seeking the shortest distance from the best solution and the greatest distance from the worst. The options are then prioritized based on their rank. To incorporate the preferences of multiple decision-makers, the separation measures for TOPSIS can be computed using the arithmetic or geometric mean over the individuals. Normalization procedures and distance metrics are also taken into account. In comparison to the original TOPSIS technique, the model used here provides a generalized view of TOPSIS with group preference aggregation. The procedural steps involved in TOPSIS are adapted from Diyaley et al. [25].
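A compact sketch of the TOPSIS steps described above (vector normalization, weighting, ideal solutions, separation measures and closeness coefficient) follows. The three-row decision matrix is a placeholder, while the weight vector reuses the SDV weights from Sect. 3.1; for the full problem, X would be the 25 × 4 response matrix and any weight vector from Sects. 3.1–3.5 could be passed in.

```python
import numpy as np

def topsis(X, w, benefit):
    # Vector normalization of the decision matrix
    R = X / np.sqrt((X ** 2).sum(axis=0))
    V = R * w                                          # weighted normalized matrix
    # Positive/negative ideal solutions per criterion type
    pis = np.where(benefit, V.max(axis=0), V.min(axis=0))
    nis = np.where(benefit, V.min(axis=0), V.max(axis=0))
    s_plus = np.sqrt(((V - pis) ** 2).sum(axis=1))     # distance to PIS
    s_minus = np.sqrt(((V - nis) ** 2).sum(axis=1))    # distance to NIS
    cc = s_minus / (s_plus + s_minus)                  # closeness coefficient
    return cc, cc.argsort()[::-1] + 1                  # alternatives, best first

X = np.array([[0.12, 5.1, 620.0, 18.0],    # placeholder (MRR, SR, HV, WLT)
              [0.20, 6.3, 640.0, 22.0],
              [0.31, 7.8, 655.0, 27.0]])
w = np.array([0.276, 0.253, 0.236, 0.235])             # SDV weights, Sect. 3.1
cc, order = topsis(X, w, benefit=np.array([True, False, True, False]))
print(cc, order)
```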
4 Results & Discussion

Figure 1 depicts the Euclidean distances of the alternatives, for the different weight sets, from the positive and negative ideal solutions (PIS and NIS) obtained with TOPSIS. The zero line in the figure represents the ideal solution; Si− on the lower side of the zero line represents the distance of each alternative from the negative ideal solution, while Si+ on the upper side represents its distance from the positive ideal solution. If Si+ is close to the zero line, the alternative has a good closeness coefficient; similarly, a larger distance of Si− from the zero line indicates a strong closeness coefficient.
Fig. 1. Euclidean distances of each alternative from PIS and NIS for various employed weight determination methods.
In this study, the input parameters are I, U, Ton, and Toff, whereas the output parameters are MRR, SR, HV, and WLT. The recorded experimental data and the input parameters are shown in Table 1. The TOPSIS technique determines the best solution by taking into account both non-beneficial and beneficial aims. The beneficial objectives in our situation are MRR and HV, while the non-beneficial ones are SR and WLT. The method uses basic mathematical formulas and does not necessitate any complicated software. Figure 2 shows that weight calculation using the CRITIC and standard deviation techniques ranks experiment no. 25 as the best choice. Entropy, SWARA, and FUCOM, on the other hand, rank experiment 22 as the best alternative. Experiment 9 is regarded as the worst TOPSIS alternative based on standard deviation and CRITIC, but with Entropy, SWARA and FUCOM, experiment 5 is the worst. Better ranks towards the end of Fig. 2 show that increasing the current has a positive effect on the quality of the machining. Looking at the rank trends, it can also be seen that a high gap voltage at low current ratings results in decreased machining quality. TOPSIS does not appear to be highly sensitive to the weights, as a significant level of correlation can be established between the ranks obtained using the various weight determination methods.

4.1 Correlation Analysis

The correlation matrix for TOPSIS is shown in Table 2. The Pearson correlation coefficient is used in the current work to assess the similarity between two different weight determination methods. The correlation matrix for TOPSIS shows that the method is not very sensitive to the weights and that the ranks assigned to the different experiments are very similar to each other. SWARA and CRITIC are two subjective and objective
Fig. 2. The graphical representation of different weights using TOPSIS
weight determination techniques, respectively, which have the highest correlation between them, as suggested by the correlation coefficient.

Table 2. Rank correlation for TOPSIS technique
         | ST DEV   | ENTROPY  | CRITIC   | SWARA    | FUCOM
ST DEV   | 1        |          |          |          |
ENTROPY  | 0.813846 | 1        |          |          |
CRITIC   | 0.936923 | 0.945385 | 1        |          |
SWARA    | 0.906154 | 0.962308 | 0.988462 | 1        |
FUCOM    | 0.822308 | 0.997692 | 0.952308 | 0.968462 | 1
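The entries of Table 2 are plain Pearson coefficients computed between pairs of rank vectors. A minimal sketch, with made-up rank vectors standing in for the 25-run rankings of Fig. 2, is:

```python
import numpy as np

# Hypothetical rank vectors produced by TOPSIS under two weighting
# schemes (the paper's full 25-run rankings appear only in Fig. 2).
ranks_swara  = np.array([3, 1, 5, 2, 4])
ranks_critic = np.array([2, 1, 5, 3, 4])

# Pearson correlation coefficient between the two rank vectors,
# as tabulated in Table 2.
r = np.corrcoef(ranks_swara, ranks_critic)[0, 1]
print(round(r, 6))
```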
5 Conclusion

In the current work an attempt has been made to study the effect of weight allocation methods on the working of an MCDM technique, namely TOPSIS. The experimental data considered is an L-25 Taguchi orthogonal array for machining of hard high-chromium tool material using the EDM process. MRR, HV, SR and WLT were the response parameters, i.e. the criteria based on which the ranking of alternatives was done. The following conclusions can be drawn from the current work.
• The TOPSIS method suggests that experimental runs no. 22 and 25 are the best among all the alternatives. The rank plot also suggests that an increase in peak current is better for the overall performance of the EDM process.
• From the current work, it can also be deduced that the TOPSIS method is not very sensitive to the criteria weights for the current dataset. This can also be seen from the correlation between the ranks obtained using any two methods.
References

1. Jung, J.H., Kwon, W.T.: Optimization of EDM process for multiple performance characteristics using Taguchi method and Grey relational analysis. J. Mech. Sci. Technol. 24, 1083–1090 (2010). https://doi.org/10.1007/s12206-010-0305-8
2. Kumar, P., Gupta, M., Kumar, V.: Surface integrity analysis of WEDMed specimen of Inconel 825 superalloy. Int. J. Data Network Sci. 2, 79–88 (2018). https://doi.org/10.5267/j.ijdns.2018.8.001
3. Świercz, R., Oniszczuk-Świercz, D., Chmielewski, T.: Multi-response optimization of electrical discharge machining using the desirability function. Micromachines 10, 72 (2019). https://doi.org/10.3390/mi10010072
4. Joshi, A.Y., Joshi, A.Y.: A systematic review on powder mixed electrical discharge machining. Heliyon 5, e02963 (2019). https://doi.org/10.1016/j.heliyon.2019.e02963
5. Mardani, A., Jusoh, A., MD Nor, K., Khalifah, Z., Zakwan, N., Valipour, A.: Multiple criteria decision-making techniques and their applications—a review of the literature from 2000 to 2014. Econ. Res. [Ekonomska Istraživanja] 28, 516–571 (2015). https://doi.org/10.1080/1331677X.2015.1075139
6. Dooley, A.E., Smeaton, D.C., Sheath, G.W., Ledgard, S.F.: Application of multiple criteria decision analysis in the New Zealand agricultural industry. J. Multi-Criteria Decis. Anal. 16, 39–53 (2009). https://doi.org/10.1002/mcda.437
7. Hwang, C.-L., Yoon, K.: Multiple Attribute Decision Making. Springer, Berlin (1981). https://doi.org/10.1007/978-3-642-48318-9
8. Wang, Y., Sun, Z.: Development of the comprehensive evaluation methods in medicine. [Zhong nan da xue xue bao. Yi xue ban] J. Cent. South Univ. Med. Sci. 30, 228–232 (2005)
9. Triantaphyllou, E.: Multi-criteria Decision Making Methods: A Comparative Study. Springer, USA (2000). https://doi.org/10.1007/978-1-4757-3157-6
10. Singh, A., Ghadai, R., Kalita, K., Chatterjee, P., Pamucar, D.: EDM process parameter optimization for efficient machining of INCONEL-718. Facta Univ. Ser. Mech. Eng. 18, 473 (2020). https://doi.org/10.22190/FUME200406035S
11. Nguyen, P.H., et al.: Application of TGRA-based optimisation for machinability of high-chromium tool steel in the EDM process. Arab. J. Sci. Eng. 45(7), 5555–5562 (2020). https://doi.org/10.1007/s13369-020-04456-z
12. Ghosh, A., Mallik, A.: Manufacturing Science (2015)
13. Goldberg, D.E.: Genetic Algorithms. Pearson Education, India (2013)
14. Ghadai, R.K., Kalita, K., Gao, X.-Z.: Symbolic regression metamodel based multi-response optimization of EDM process. FME Trans. 48, 404–410 (2020). https://doi.org/10.5937/fme2002404G
15. Ragavendran, U., Ghadai, R.K., Bhoi, A.K., Ramachandran, M., Kalita, K.: Sensitivity analysis and optimization of EDM process parameters. Trans. Can. Soc. Mech. Eng. 43, 13–25 (2019). https://doi.org/10.1139/tcsme-2018-0021
16. Ch, M.R., Kambagowni, V.: Optimization of wire EDM process parameters in machining SS316 using DEAR method 5 (2021)
17. Verma, V., Sahu, R.: Process parameter optimization of die-sinking EDM on titanium grade—V alloy (Ti6Al4V) using full factorial design approach. Mater. Today Proc. 4, 1893–1899 (2017). https://doi.org/10.1016/j.matpr.2017.02.034
18. Rezaei, J.: Best-worst multi-criteria decision-making method. Omega 53, 49–57 (2015). https://doi.org/10.1016/j.omega.2014.11.009
19. Manivannan, R., Kumar, M.P.: Multi-response optimization of micro-EDM process parameters on AISI304 steel using TOPSIS. J. Mech. Sci. Technol. 30(1), 137–144 (2016). https://doi.org/10.1007/s12206-015-1217-4
20. Mukhametzyanov, I.: Specific character of objective methods for determining weights of criteria in MCDM problems: entropy, CRITIC and SD. Decis. Making Appl. Manag. Eng. 4, 76–105 (2021). https://doi.org/10.31181/dmame210402076i
21. Chodha, V., Dubey, R., Kumar, R., Singh, S., Kaur, S.: Selection of industrial arc welding robot with TOPSIS and entropy MCDM techniques. Mater. Today Proc. 50, 709–715 (2022). https://doi.org/10.1016/j.matpr.2021.04.487
22. Diakoulaki, D., Mavrotas, G., Papayannakis, L.: Determining objective weights in multiple criteria problems: the CRITIC method. Comput. Oper. Res. 22, 763–770 (1995). https://doi.org/10.1016/0305-0548(94)00059-H
23. Keršuliene, V., Zavadskas, E.K., Turskis, Z.: Selection of rational dispute resolution method by applying new step-wise weight assessment ratio analysis (SWARA). J. Bus. Econ. Manag. 11, 243–258 (2010). https://doi.org/10.3846/jbem.2010.12
24. Pamučar, D., Stević, Ž., Sremac, S.: A new model for determining weight coefficients of criteria in MCDM models: full consistency method (FUCOM). Symmetry 10, 393 (2018). https://doi.org/10.3390/sym10090393
25. Diyaley, S., Shilal, P., Shivakoti, I., Ghadai, R.K., Kalita, K.: PSI and TOPSIS based selection of process parameters in WEDM. Periodica Polytech. Mech. Eng. 61, 255–260 (2017). https://doi.org/10.3311/PPme.10431
Assessment of the Outriggers and Their Stiffness in a Tall Building Using Multiple Response Spectrum Shashank Dwivedi(B) , Ashish Kumar, and Sandeep Singla Department of Civil Engineering, School of Engineering, RIMT University, Gobindgarh, Punjab 147301, India [email protected], [email protected]
Abstract. The increasing population and its accumulation in cities raise the need for tall buildings, which can accommodate more families in a comparatively smaller area. The construction of tall buildings needs to be safe and serviceable. The primary concern in a tall building is the lateral displacement due to horizontal loads arising from earthquakes and wind. The present study focuses on the model-based assessment of the outriggers' location in a 50-story tall building. The overall height of the building is 175 m; it has a square cross-section with each side of 28 m. The building has 3 bays of 9 m, 10 m and 9 m respectively, and a slenderness ratio of 6.75. A base model of a beam-column framed structure is used for comparison with models having a shear wall at the core and outriggers at different locations. The study analyzed the lateral displacement of each story in the varying models. The inter-story displacement of each story is compared across all models with varying outrigger locations and stiffness. It was established that the optimum location of the outrigger is at zone 3 (85 m height from the top), giving the least lateral displacement of the top story. The outrigger stiffness is compared at the optimum outrigger location by increasing the cross-sectional area of the outriggers, for both lateral displacement and inter-story displacement. The increase in the outrigger stiffness has a small impact on the lateral displacement of the top story of the building, while the variation of inter-story displacement along the height of the building is considerably stabilized with the increase of the stiffness of the outriggers. The placement of outriggers at two locations results in a decrease in top-story lateral displacement, but no change is observed in inter-story displacement up to zone 4.

Keywords: Outriggers · Optimum location · Inter-story displacement · Lateral displacement · Finite element analysis
1 Introduction

The construction of buildings has changed its face in the recent two to three decades. The growing population and the desire to live in cities have continuously increased urban density. The resettlement of the increasing population in cities needs
more accommodation in the cities. The horizontal expansion of cities would result in diminishing agricultural land and land for natural habitats. The construction of tall buildings to accommodate large populations while leaving free space is an effective solution, since tall buildings provide greater accommodation on a smaller land area. Along with high-rise and tall buildings, however, comes a greater risk of higher lateral displacement and stresses, which is a matter of concern [1]. The tall building phenomenon will continue on a greater scale to meet the needs of the growing population in future large cities [2]. The advancement of concrete technology with high-strength concretes has made it feasible to construct tall buildings [3]. As the slenderness ratio of a building increases, the overturning moment due to lateral loads also increases. In tall buildings, the lateral loads due to seismic and wind action have a more significant impact than superimposed and dead loads. One of the solutions to resist the lateral loads in tall buildings is the use of outriggers. Outriggers are placed as members connecting the core to the peripheral columns, which increases the moment resistance of the structure. The outrigger structural system has been popular in construction since the 1980s due to its unique combination of architectural flexibility and structural efficiency [4, 5]. A structure without outriggers behaves as a cantilever leg; outriggers provided in opposite directions act as a couple, which lengthens the arm for moment resistance [6]. The literature consists of studies on the location of the outriggers and their impact on the effectiveness of the outriggers, measured in terms of the lateral displacement of the top story of the building [7]. Many studies have suggested various locations of the outriggers along the height of the building, in different types and at different positions. Lee et al. [3] studied the nonlinear behavior due to geometric nonlinearity to derive an equation for the structure. Lee and Tovar [8] proposed a different method, found to be highly accurate, for the simulation of tall buildings using finite element analysis; their study used a topology-based assessment for the position of the outriggers in a 201 m high-rise building. The inter-story drift of tall building structures can be efficiently controlled using outriggers. The authors of [9, 10] utilised a theoretical method, implemented in MATLAB, to identify the inter-story displacement and overall lateral displacement of a 240 m high-rise building. Outriggers are easy to modify and are used with dampers, reducing the wind effect in the building [11]. The axial shortening of the length between the outer columns and the core of the building cannot be restricted [12, 13]. The optimum number of outriggers was also worked out by placing outriggers at various stories of the building [14–17]. Comparisons of tall buildings with conventional outriggers and energy-dissipation outriggers using different methods of assessment have been studied [18–21]. For outriggers, the maximum lateral displacement and inter-story displacement are important aspects, along with the differential shortening of the columns [22]. The assessment of outriggers has been studied for many decades, yet it remains a tedious task with changing geometries, aspect ratios, and slenderness ratios.
Along with the optimum location of the outriggers, the stiffness of the outriggers also impacts the structural behaviour. In the present research, a finite element analysis is used to find the optimum location of the outriggers and their effects on the lateral displacement of the top of the building and the inter-story displacement of all storeys. The same effects are also observed for the building having outriggers at the
Fig. 1. Geometry of the model
optimum location with increasing stiffness of the outriggers. In order to resist lateral loading in tall buildings, the lateral displacement needs to be restricted, and for economy an appropriate structural system should be utilised.
2 Research Program

2.1 Geometry Used

In the present research, a 50-story building is used for the analytical study. The aspect ratio of the building is one, with 3 bays in each perpendicular direction. Of the 3 bays, the central bay is 10 m and both outer bays are 9 m each, as shown in Fig. 1. The height of the building used for the study is 175 m, each story being 3.5 m high. The size of the columns is 1000 mm × 1000 mm, considering the practicality of the tall building. The cross-section of the columns is kept constant throughout the height of the building so as to observe the impact of the outriggers only. The cross-section of the beams is assumed to be 500 mm × 850 mm, as mentioned in Table 1. Double-story outriggers are used for the analysis at various locations as per the designed study.

2.2 Material Used

The material used for the modelling of the structure is reinforced cement concrete. A concrete mix of characteristic strength 40 N/mm2 is used for the model, and the grade of concrete used for slabs, beams, columns and outriggers is kept the same. The steel used for the modelling is assumed to be Fe650 high-strength steel. The material used for the peripheral and partition walls of the model is assumed to be AAC blocks weighing 5.82 kN/m. To analyse the maximum intensity due to earthquakes and wind, the walls are modelled on all floors of the building.
Table 1. Description of models and sectional area of the members

Models                                                                | Nomenclature         | Columns (m2) | Beam (m2) | Shear wall (m2) | Outriggers (m2)
Framed model                                                          | BC                   | 1            | 0.425     | –               | –
Framed with shear wall                                                | BC-S                 | 1            | 0.425     | 12              | –
Framed with shear wall and outriggers at different zones              | BC-S-OZi, i = 1 to 5 | 1            | 0.425     | 12              | 0.16
Framed with shear wall and outriggers at zone 3 of 600 × 600 mm       | BC-S-OZ3-6-6         | 1            | 0.425     | 12              | 0.36
Framed with shear wall and outriggers at zone 3 of 800 × 800 mm       | BC-S-OZ3-8-8         | 1            | 0.425     | 12              | 0.64
Framed with shear wall and outriggers at zone 3 of 1000 × 1000 mm     | BC-S-OZ3-10-10       | 1            | 0.425     | 12              | 1
Framed with shear wall and outriggers at zone 3 & 5 of 1000 × 1000 mm | BC-S-OZ3&5-10-10     | 1            | 0.425     | 12              | 1
3 Modelling and Analytical Investigation

The present study is conducted using the analysis software ETABS, which is based on finite element analysis. The 3-dimensional modelling is first done in the software, and the analysis is run using multiple response spectrum analysis. The tall building is first modelled as a framed structure only, and the impact on the lateral displacement of the building due to wind as well as earthquake, along with superimposed and dead loads, is computed. This framed model is used as the base model for comparison with the other models. The further models were designed with the inclusion of a shear wall at the core of the building having a cross-sectional area of 12 m2 on four sides of the core. The variation of models is given in Table 1.
3.1 Modelling

The model is designed as a 50-story concrete building with a height of 175 m. The method of response spectrum analysis is used for the results. The concrete is modelled as per IS 456:2000 with a characteristic strength of 40 N/mm2. The foundation soil is assumed to be hard soil. The tall building is located in a plain area, and the place is assumed to lie in Zone-IV as per IS 1893. The loading is applied as per IS 875 and is assumed to act on all floors of the tall building. The impacts of earthquake and wind load are applied separately; the wind load is applied along all four sides at angles of 0, 90, 180 and 270 degrees.

3.2 Positioning Outriggers

The base model, a beam-and-column framed structure, was modelled first. The model was then modified for the optimization of the outriggers in the tall building. The tall building was divided into 5 zones: Zone-I was assigned to storeys 0 to 10, Zone-II to storeys 11 to 20, Zone-III to storeys 21 to 30, and similarly Zone-IV and Zone-V to storeys 31 to 40 and 41 to 50 respectively, as shown in Table 2. The first modification was the placement of a shear wall at the core of the framed building. Thereafter, an outrigger of size 400 mm × 400 mm was placed in succession at the different zones, and from these placements the most suitable location was identified. The outriggers at the most suitable location were then given increasing cross-sections to increase their stiffness. A model having outriggers at two locations (zones 3 & 5) was also worked out for comparison of the lateral displacement of the top story.

Table 2. Defined zones of the models

Zone     | Story range
Zone-I   | 0 to 10
Zone-II  | 11 to 20
Zone-III | 21 to 30
Zone-IV  | 31 to 40
Zone-V   | 41 to 50
4 Results & Discussion

The outriggers are a significant part of the tall building structure. The major problem in tall buildings is the lateral displacement of the building; as per various standards for structures, it should not be greater than 1/500 of the total height of the building. For the 175 m building considered here, this corresponds to a permissible top displacement of 175/500 = 0.35 m, i.e. 350 mm. The results discussed in this section are based on a comparison of the lateral displacements of the different tall building models and their inter-story displacements.
4.1 Lateral Displacement

The stability of a tall building is assessed from the lateral displacement of its top. The impacts of both earthquake and wind are considered for comparison in this study; the impact of wind was found to be greater on the lateral displacement of the building, so the results compared in the study are based on the lateral displacement due to wind. The lateral displacement for the base model of the framed building (BC) is shown in Fig. 2. The displacement can be seen to increase with the height of the building, following a curve, and is maximum at the top story due to the maximum wind effect at that height. The lateral displacements of all the models studied are compared with the lateral displacement of the base model. As shown in Fig. 2, the lateral displacement of the building is large for the base model. The inclusion of the shear wall and outriggers resulted in a considerable decrease of 60% in the lateral displacement of the top story of the building. The comparison of the lateral displacement of the models with shear walls and outriggers at various locations is shown in Fig. 3. The nomenclature used is as mentioned in Table 1.
Fig. 2. Lateral displacement of the building subjected to wind load
The detailed relation of the change in lateral displacement of a particular story with the variation of models is shown in Fig. 4. The x-axis of the graph shows the serial number of the model type. The serial numbers are marked as 1 = BC, 2 = BC-S, 3 = BC-S-OZ1, 4 = BC-S-OZ2, 5 = BC-S-OZ3, 6 = BC-S-OZ4, 7 = BC-S-OZ5, 8 = BC-S-OZ3-6-6, 9 = BC-S-OZ3-8-8, 10 = BC-S-OZ3-10-10 and 11 = BC-S-OZ3&5-10-10. The y-axis of the graph shows the lateral displacement. S0, S5, S15, S20, S25, S30, S35, S40, S45 and S50 are the graph lines for the lateral displacement of the different models at that particular story. It is observed that, when comparing any particular story, the change is significant when outriggers are used in the tall building. In all cases the displacement is minimum
Fig. 3. Comparison of the lateral displacement with varying outrigger locations
with the use of outriggers, and least in model BC-S-OZ3&5-10-10. It can be concluded that the use of outriggers significantly affects the lateral displacement of the building, while the increased stiffness resists the displacement with only a small change. The use of nominal outriggers at an increased number of locations improves the lateral displacement more than using high-stiffness outriggers at a single location.
Fig. 4. Change of lateral displacement at different stories with varying models
4.2 Inter-story Displacement

The inter-story displacement is the displacement of a storey with reference to the adjacent story below. In this study, the inter-story displacement of the zones for all models is shown in Fig. 5. The inter-storey displacement in the BC model is very large, starting from the lower stories near the base: it is about 20 mm in zone-I and 55 mm in zone-IV. As shown in the figure, a vast difference is observed in the story displacements, making an irregular sinusoidal curve for the base model. This is decreased to a large extent when the shear wall and outriggers are placed in the models (Fig. 5).
Fig. 5. Inter-storey displacement of the models
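Computationally, the inter-story displacement is simply the first difference of the story lateral displacements; the short sketch below illustrates this with made-up values rather than the ETABS output.

```python
import numpy as np

# Inter-story displacement: displacement of each story relative to
# the story below. Illustrative values in mm, not the paper's results.
story_disp = np.array([0.0, 4.0, 9.5, 16.0, 23.0])   # stories 0..4
inter_story = np.diff(story_disp)                     # u_i - u_(i-1)
print(inter_story)   # [4.  5.5 6.5 7. ]
```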
Considering the models with the basic cross-sectional outriggers, the least fluctuation in inter-story displacement is observed when the outrigger is placed at zone-III. Figure 6 compares the inter-story displacement of the models BC, BC-S-OZ3 and BC-S-OZ3-6-6, having no outriggers, outriggers of 400 × 400 mm and outriggers of 600 × 600 mm, respectively. It is observed that when the stiffness is increased with the increase in the cross-section of the outrigger, the displacement between adjacent stories decreases. Figure 7 compares the models BC, BC-S-OZ3 and BC-S-OZ3-10-10, having no outriggers, outriggers of 400 × 400 mm and outriggers of 1000 × 1000 mm, respectively; the fluctuations in the inter-story displacement curve of the former model are higher, while the curve of the latter is smoother. It can be concluded that increasing the stiffness of the outriggers reduces the inter-story displacements and improves the stability of the structure. The outriggers can also be placed at more than one location. In Fig. 8, outriggers of the same stiffness placed at more than one location are compared with the model having an outrigger at only one location: the model BC-S-OZ3-10-10, with an
Fig. 6. Inter-storey displacement of the zone 3 outriggers of BC, BC-S-OZ3 & BC-S-OZ3-6-6
Fig. 7. Inter-storey displacement of the BC, BC-S-OZ3 and BC-S-OZ3-10-10
outrigger at zone 3 with a cross-section of 1000 mm × 1000 mm, is compared with BC-S-OZ3&5-10-10, having outriggers at zones 3 & 5 with a cross-section of 1000 mm × 1000 mm. It is observed that the displacement between adjacent stories in both models varies by a very small difference up to zone 4. The major difference observed is the decrease in the displacement of the top story only. It can be concluded that the placement of outriggers at more than one location decreases the lateral displacement of the top story of the building, but the inter-story displacement is not improved to a large extent.
Fig. 8. Inter-storey displacement of the BC, BC-S-OZ3, BC-S-OZ3-10-10 and BC-S-OZ3&510-10.
5 Conclusion

The present study focuses on assessing the outriggers using multiple response spectrum analysis. The study utilised the finite element analysis-based ETABS software for making models of a 50-story tall building. The tall building is first modelled as a framed structure only, and the impact on the lateral displacement of the building due to wind and earthquake, along with superimposed and dead loads, is computed. The framed model is used as the base model for comparison with the other models having shear walls and outriggers at different building zones. The following conclusions can be drawn from the present study. The framed structure with shear walls at the core significantly decreases the lateral displacement of the building. The lateral displacement of the tall building was computed for outrigger locations in each zone to assess the optimum location of the outriggers, which is found to be in zone 3, near 90 m of the height of the building, i.e. at half the height of the building. Increasing the stiffness of the outriggers decreases the lateral displacement of the building by a small extent. The inter-story displacement of the building is very large in the base model and is reduced with the use of the outriggers. The inter-story displacement at each story was analyzed in all the models, and it was observed that at the location of the outrigger the inter-story displacement between adjacent stories was small, increasing with the height of the building away from the outrigger location. The increased stiffness of the outriggers helps in better stability of the building by reducing the range of inter-story displacement in each zone. The comparison of the model having an outrigger at the optimum location with the model having outriggers at the optimum location as well as the top story of the tall building concludes that the increased
number of outrigger locations impacts only the top-story lateral displacement, with a small improvement in inter-story displacement.

Future Research
The future research recommendation is to study the effect of outriggers with dampers. The outriggers with dampers can be studied in direct placement and as a truss belt. This will increase the overall efficiency of the research.

Acknowledgement. The authors acknowledge the support provided by the Department of Civil Engineering, School of Engineering, RIMT University, Punjab, India.
References

1. Pullmann, T., et al.: Structural design of reinforced concrete tall buildings: evolutionary computation approach using fuzzy sets. In: Proceedings of the 10th European Group for Intelligent Computing in Engineering EG-ICE, Delft, The Netherlands (2003)
2. Fawzia, S., Nasir, A., Fatima, T.: Study of the effectiveness of outrigger system for high-rise composite buildings for cyclonic region. World Acad. Sci. Eng. Technol. 60(1), 937–945 (2011)
3. Lee, J., Bang, M., Kim, J.-Y.: An analytical model for high-rise wall-frame structures with outriggers. Struct. Design Tall Spec. Build. 17(4), 839–851 (2008)
4. Tan, P., et al.: Dynamic characteristics of novel energy dissipation systems with damped outriggers. Eng. Struct. 98, 128–140 (2015)
5. Amoussou, C.P.D., et al.: Performance-based seismic design methodology for tall buildings with outrigger and ladder systems. Structures 34 (2021)
6. Choi, H.S., Joseph, L.: Outrigger system design considerations. Int. J. High-Rise Build. 1(3), 237–246 (2012)
7. Ding, J.M.: Optimum belt truss location for high-rise structures and top level drift coefficient. J. Build. Struct. 4, 10–13 (1991)
8. Lee, S., Tovar, A.: Outrigger placement in tall buildings using topology optimization. Eng. Struct. 74, 122–129 (2014)
9. Zhou, Y., Zhang, C., Lu, X.: An inter-story drift-based parameter analysis of the optimal location of outriggers in tall buildings. Struct. Design Tall Spec. Build. 25(5), 215–231 (2016)
10. Soltani, O., Layeb, S.B.: Evolutionary reinforcement learning for solving a transportation problem. In: Intelligent Computing & Optimization: Proceedings of the 5th International Conference on Intelligent Computing and Optimization 2022 (ICO2022), pp. 429–438. Springer International Publishing, Cham (2022)
11. Ho, G.W.: The evolution of outrigger system in tall buildings. Int. J. High-Rise Build. 5(1), 21–30 (2016)
12. Kim, H.S.: Effect of outriggers on differential column shortening in tall buildings. Int. J. High-Rise Build. 6(1), 91–99 (2017)
13. Kim, H.S.: Optimum locations of outriggers in a concrete tall building to reduce differential axial shortening. Int. J. Concr. Struct. Mater. 12(1), 77 (2018)
14. Yang, Q., Lu, X., Yu, C., Gu, D.: Experimental study and finite element analysis of energy dissipating outriggers. Adv. Struct. Eng. 20(8), 1196–1209 (2017)
15. Baygi, S., Khazaee, A.: The optimal number of outriggers in a structure under different lateral loadings. J. Inst. Eng. (India) Ser. A 100(4), 753–761 (2019)
16. Jiang, H.J., et al.: Performance-based seismic design principles and structural analysis of Shanghai Tower. Adv. Struct. Eng. 17(4), 513–527 (2014)
17. Nigdeli, S.M., Bekdaş, G., Yücel, M., Kayabekir, A.E., Toklu, Y.C.: Analysis of non-linear structural systems via hybrid algorithms. In: Intelligent Computing & Optimization: Proceedings of the 4th International Conference on Intelligent Computing and Optimization 2021 (ICO2021), vol. 3, pp. 536–545. Springer International Publishing, Berlin (2022)
18. Jiang, H., Li, S., Zhu, Y.: Seismic performance of high-rise buildings with energy-dissipation outriggers. J. Constr. Steel Res. 134, 80–91 (2017)
19. Kamgar, R., Rahgozar, P.: Reducing static roof displacement and axial forces of columns in tall buildings based on obtaining the best locations for multi-rigid belt truss outrigger systems. Asian J. Civ. Eng. 20(6), 759–768 (2019). https://doi.org/10.1007/s42107-019-00142-0
20. Kim, H.S., Lim, Y.J., Lee, H.L.: Optimum location of outrigger in tall buildings using finite element analysis and gradient-based optimization method. J. Build. Eng. 31, 101379 (2020)
21. Kim, H.S., Lee, H.L., Lim, Y.J.: Multi-objective optimization of dual-purpose outriggers in tall buildings to reduce lateral displacement and differential axial shortening. Eng. Struct. 189, 296–308 (2019)
22. Lee, J., Park, D., Lee, K., Ahn, N.: Geometric nonlinear analysis of tall building structures with outriggers. Struct. Design Tall Spec. Build. 22(5), 454–470 (2013)
The Role of Artificial Intelligence in Art: A Comprehensive Review of a Generative Adversarial Network Portrait Painting Sunanda Rani1 , Dong Jining1 , Dhaneshwar Shah1(B) , Siyanda Xaba1 , and Prabhat Ranjan Singh2 1 School of Art Design, Wuhan University of Technology, Wuhan, China
[email protected] 2 Amity Institute of Information Technology, Amity University, Patna, Bihar, India
Abstract. Artificial intelligence (AI) technology is an emerging force in digital art, paving the way for more efficient, effective ways to transfer visual information. This manuscript examines the current influence of AI on art and creativity. This research provides an overview of the artwork "Edmond de Belamy," created by the Paris-based art collective "Obvious" using Generative Adversarial Networks (GANs). GANs are a type of deep learning architecture for generative modeling, which uses two neural networks competing against each other in a zero-sum game framework. The manuscript approaches this topic using a descriptive, qualitative approach to discuss the theoretical and practical aspects of AI art and its effects on the creation of art, with additional works providing further insight. Additionally, we analyze the possibility of computational creativity through the examination of computer programs that display creative artistic behavior. With this analysis, the manuscript provides valuable insight into how AI is transforming the current definition and practice of art and creativity.

Keywords: Artificial intelligence · Deep learning · Generative adversarial networks · Computer vision · Digital technology · Digital art
1 Introduction

Artificial Intelligence is blurring the definition of an artist and art. As AI becomes more present in our daily lives, it is only natural for artists to explore and experiment with it. Technology is becoming increasingly advanced and sophisticated with each passing year, with a multitude of devices being designed to make life easier for people. The arts have also been impacted by such developments in technology; advancements in AI, for instance, have had a notable influence on the range of artistic expressions. It has increased the number of techniques available to artists and presented them with a wide range of opportunities. Artists can now create works of art using a paintbrush and an iPad just as well as they can with traditional art supplies. Art has been created since long before the invention of basic art supplies or technology; cave art, for example, dates back to around 17,000 years ago, when people in France's Lascaux caverns created
lifelike drawings of bulls, bison, stags, horses, and other creatures on the walls. They also created stencils of their hands, illustrating the significant impact technology has had on art when compared to the methods and technologies available in ancient times [1]. We now have access to a plethora of knowledge and innovative practices thanks to technology. A few examples include graphic design, computer-generated artwork, Photoshop, digitally created music, e-books, and 3D printing, which show just how much art in all its forms has been impacted by technology [2]. A brand-new genre of digital art is pushing the boundaries of inventiveness and upending traditional methods of producing art. Artists create autonomous robots for collaboration, feed data through algorithms, and program machines to produce original visual creations. They use computer systems that incorporate artificial intelligence (AI) to simulate the human mind and create an endless stream of original works of art. The use of AI as a creative partner has gained popularity, revolutionizing creative disciplines such as music, architecture, fine arts, and science. Computers are playing a significant role in transforming the way people think and create. Additionally, AI-generated works of art have led to an evolution in what is considered acceptable under the umbrella of art. As artificial intelligence technology continues to develop, its impact on the arts will become increasingly present, creating an exciting new landscape for the arts [3]. The fact is that the computer already functions as a canvas, a brush, a musical instrument, etc. However, we think that the connections between creativity and computers need to be more ambitious. We might consider the computer to be a creative force in and of itself rather than just a tool to assist human creators. Computational creativity is a new branch of artificial intelligence (AI) that has been inspired by this viewpoint [4]. This manuscript addresses the question: what does AI art mean for artists, and how does artificial intelligence (AI) impact art? It utilizes a practice-led technique, with descriptive qualitative data sourced from secondary sources such as publications, online news sources, and pertinent references. Through documentation techniques such as synthesizing data from written, visual, and other records of historical events, this study aims to analyze the characteristics of digital technology use, particularly artificial intelligence, in creative and cultural processes. Its scientific uniqueness lies in its initial analysis of the impact of AI on art and artists.
2 Literature Review

2.1 A History of Artistic Expression Through Technology

The history of AI art dates back to the early 1950s, when scientists started to explore the potential of computers to create artwork. Early experiments focused on rendering images with simple programming. The first AI-generated art was created by John Whitney in 1955. His program, called "Orchestration," used a random number generator to produce abstract patterns and colors. In the 1960s, computer art became more sophisticated, with researchers such as Harold Cohen and Jean-Paul Bailly exploring what could be done with interactive programs. By the late 1970s, the first AI-generated paintings were being created. In the 1980s, Hal Lasko developed the earliest AI painting program, Painterbot [5]. In the mid-2000s, AI art became increasingly sophisticated with the introduction of deep learning, and today, AI art is becoming increasingly popular, with many galleries and museums hosting exhibitions of AI-generated artwork. AARON (Artificial Artists
Robotic Operative Network), created by Harold Cohen in 1980, is credited as the first true AI art program and continues to inspire many modern AI-generated artworks. The rise of deep learning in the mid-2000s enabled computers to create even more complex, realistic images. In 2013, Google released Deep Dream, an algorithm that could generate abstract images by interpreting existing photos. Since then, AI art has gained popularity, with artists like Anna Ridler and Mario Klingemann leading the way [6]. The experimental project "Artificial Natural History" (2020) examines speculative, artificial life through the prism of a "natural history book that never was." This intriguing example of modern AI being utilized to create art raises a number of philosophical questions about the nature of AI art, which are historically based rather than unique to the current moment in art [7]. In recent years, AI art has become increasingly popular, with many galleries and museums hosting exhibitions of AI-generated artwork. As the technology continues to evolve and improve, AI art is sure to become increasingly sophisticated and integrated into our lives.

2.2 Types of AI for Generating Art

There are three main types of artificial intelligence that can create art: Neural Style Transfer (NST), Generative Adversarial Networks (GANs), and Convolutional Neural Networks (CNNs). NST is a type of AI that takes the content and style of one image and remaps it onto a target image, creating a unique piece of artwork. GANs are a type of AI that enables machines to learn and generate data, similar to how humans might create art. Lastly, CNNs are an AI technique used to classify images and identify objects within them. All three of these types of AI can be used to create art in different ways, with the aim of either mimicking human art or creating something entirely new. Neural Style Transfer (NST) is a type of Artificial Intelligence (AI) algorithm which can transfer the style from one image to another. It does this by using convolutional neural networks (CNNs) to look at image patterns and extract features from them. The CNNs take the content and style of the source image and remap it onto the target image in a process called stylization [8]. This process yields a new, unique piece of artwork that combines elements of both images. NST can be used to create art by converting a photograph into a painting or adding a particular artistic filter to a digital image. NST can also be used to create more complex pieces of art. For example, it can be used to take the colors from a sunset image and apply them to a portrait picture, creating a completely new artwork. Additionally, the AI can be programmed to blend multiple images together, such as combining a landscape with a portrait to create something entirely unique [9]. NST can also be used to generate random art, by having the AI create artwork based on predetermined parameters or rules. As such, with the help of NST and other AI techniques, machines can create artwork that mimics or even surpasses the art created by humans. Figure 1 provides an in-depth illustration of the basic structure of neural style transfer. Generative Adversarial Networks (GANs) are a type of Artificial Intelligence (AI) that enables machines to learn and generate data in a similar way to how humans create art [10]. GANs integrate two neural networks: a generator and a discriminator.
The generator creates samples of data, such as images or music, based on predetermined parameters and rules, while the discriminator evaluates the generated data and tries to determine whether it was created by a machine or a human. With this approach, GANs can generate
Fig. 1. Neural style transfer basic structure
realistic artwork that can mimic or surpass the quality of art created by humans. GANs can be used to create works of art in a variety of mediums, such as images, videos, or music. In this way, GANs have the potential to revolutionize art and lead to the creation of entirely new forms of art. For example, GANs can generate new types of artwork, such as digital sculptures or interactive virtual reality art. Additionally, GANs can be used to create art with specific styles or aesthetics, such as abstract art or minimalist art. As such, GANs are an exciting tool for creating innovative and unique artwork. Indeed, these three types of Artificial Intelligence (AI) are all powerful tools for artists. By using these AI tools, artists can expand their creative repertoire and explore new possibilities in their art.

2.3 Tools for Creating AI Art

Artificial Intelligence is advancing the field of image generation and manipulation. Deep Dream produces dream-like imagery, generated by a deep learning neural network that uses image style transfer. WOMBO Dream is another deep learning neural network for image style transfer. GauGAN is an AI-based generative technology that creates photorealistic landscapes from simple drawings. Developed by NVIDIA, it applies generative adversarial networks to create photorealistic landscape images. DALL-E is an artificial intelligence image synthesis model developed by OpenAI. It enables users to generate novel images based on textual descriptions of desired objects or scenes. Finally, Fotor is an online photo editor powered by advanced AI technology. It offers a variety of editing tools, such as auto photo enhancing, image retouching, and creative effects. All of these technologies involve using artificial intelligence to generate or manipulate images. They have enabled users to create amazing works of art and even change the way photos are looked at. In addition, AI-based image editing techniques can be used beyond personal enjoyment and creativity - they can also be used for a variety of practical applications such as medical imaging, forensic research, remote sensing, and more. By harnessing the power of AI-generated imagery, scientists and professionals in a variety of fields are able to make great leaps forward. In short, AI-based image
generation and manipulation technologies such as Deep Dream, WOMBO Dream, GauGAN, DALL-E, and Fotor are revolutionizing the way images are created, viewed, and utilized. AI-based image generation and manipulation technologies are paving the way for more efficient, effective ways to transfer visual information. For example, medical imaging applications such as dental X-rays or MRI scans can be further analyzed with AI-assisted algorithms. Similarly, remote sensing and mapping tasks can be completed in a fraction of the time it would take without AI assistance. Additionally, AI-based image synthesis can add another layer of authenticity to film production, providing producers with precise computer-generated visual effects that look incredibly realistic [11]. These technologies have opened up a world of possibilities for both amateur photographers and professionals alike. By utilizing these AI-based tools, users can create amazing works of art in a fraction of the time it would take to do so without them. It is without a doubt that AI-based image generation and editing technologies will continue to be refined and improved upon as AI technology advances.
3 Methodology This manuscript utilizes a practice-based methodology predominantly utilizing descriptive and qualitative approaches to investigate the responses to its queries. AI has advanced significantly over the past few decades and its powers have been employed to produce artwork that is more complex and realistic than ever before. One example of AI-created art is the painting “Portrait of Edmond de Belamy” by the French collective Obvious. A neural network trained on a dataset of 15,000 portraits produced the artwork, which brought in an incredible $432,000 at auction. This groundbreaking work has raised many questions on creativity that need to be answered: How can AI create art? What serves as its source of inspiration? How does AI gain access to muses’ power? AI is emotionless and has no feelings; it is purely scientific and is supported by big data, machine learning, and algorithms. AI-based tools are rapidly changing the art industry by enabling artists to create more complex works and experiment with new styles and techniques. To investigate the responses to these queries, this manuscript employs a practice-based approach with descriptive qualitative methods to discuss the theoretical and applied elements of artificial intelligence art, as well as how AI influences the creative process.
4 Results

4.1 Edmond de Belamy: An AI-Created Portrait

Edmond de Belamy is an AI-generated portrait created through a process called Generative Adversarial Networks (GANs). The portrait was created by the Paris-based art collective Obvious and was put up for auction in October 2018 for an estimated $10,000–$12,000 [12]. The portrait features a mustachioed gentleman wearing a black jacket and a white collar, with the title "Portrait of Edmond de Belamy, from La Famille de Belamy" written underneath it. The artwork was created using a machine learning algorithm which was given a data set of 15,000 portraits from the 14th to 20th centuries. The machine learning algorithm was then trained to create a new piece based on the data set
it was given. Through the algorithm, Obvious was able to create a unique and original artistic work unlike anything it had seen before. One of the most interesting aspects of this artwork is its combination of art and technology. It is a perfect example of how AI can be used to produce something which has never been seen before, showing that art and technology can truly go hand in hand. For example, the artist did not have to manually create each element in the painting, as the machine learning algorithm did that for them. The portrait of Edmond de Belamy has sparked a debate over the creative process and whether or not machines can truly be creative. While some people argue that AI can never replicate the human creative process, others argue that machines can produce unique works of art which are as valid as those created by humans. Ultimately, it is up to the individual to decide whether or not they consider Edmond de Belamy a work of art. Regardless of one's opinion on the validity of AI-created artwork, Edmond de Belamy has certainly sparked an interesting debate and has served as a reminder that technology and art can coexist. As AI develops and more AI-generated artwork is seen, the debate will only become more intense as people attempt to define what constitutes a work of art. Obvious has signed the painting as a nod to traditional artworks and their signatures. In this artwork, the artists used the model's minimax objective, $\min_G \max_D \; \mathbb{E}_x[\log(D(x))] + \mathbb{E}_z[\log(1 - D(G(z)))]$, as the signature on the portrait [13]. This is because AI-generated artwork typically would not have a signature, due to it being created by an algorithm. By signing the painting, Obvious wanted to acknowledge the fact that although a machine created it, the artwork is still the product of their hard work and creativity. As Edmond de Belamy has been sold at auction and is gradually becoming more widely appreciated, it shows that AI-generated artwork is increasingly being accepted and appreciated as a form of art. The artwork serves as an example that there are no rules or restrictions when it comes to creativity, and that anything is possible with AI (Fig. 2).
Fig. 2. Portrait of Edmond Belamy, 2018, created by GAN (Generative Adversarial Network). Image © Obvious
4.2 Method for Generating the AI-Created Portrait of Edmond de Belamy

The Belamy series was created using a GAN, a type of machine learning system. This system essentially uses two algorithms which 'compete' with one another to produce better
outcomes. One algorithm produces data, while the other tries to distinguish between true and false data. Figure 3 illustrates how the Generative Adversarial Network functions.
Fig. 3. Generative adversarial network functions
A generator and discriminator are both present in GANs. The Generator produces phony samples of data (such as images, audio, etc.) to try to fool the Discriminator. The Discriminator is then tasked with distinguishing genuine from fraudulent samples. Both the Generator and Discriminator are neural networks, which compete with one another during the training phase. By repeating the procedure multiple times, both the Generator and Discriminator become better at their jobs. During this process, the Discriminator tries to maximize the value function V(D, G) of the minimax game for which GANs are designed, while the Generator attempts to minimize it. A GAN consists of two networks: the generator and the discriminator. During training, the generator runs once and the discriminator runs twice, once for real and once for fake inputs. The losses calculated from these runs are then used to independently calculate gradients that are propagated through each network. This process is shown in Fig. 4, provided by Goodfellow et al. in their 2014 paper on GANs. Hugo Caselles-Dupré, Pierre Fautrel, and Gauthier Vernier used a generative adversarial network, a type of AI system employing machine learning, to generate a painting based on 15,000 historical portraits painted between the years 1300 and 2000. The Generator generated an image while the Discriminator tried to differentiate human-created images from those created by the Generator. As the Discriminator had difficulty distinguishing between human-made images and computer-generated ones, the output had a distorted appearance. According to Caselles-Dupré, this was due to the Discriminator being more susceptible to deception than a human eye, as it looks for specific features in the image, such as facial features and shoulder shape [12]. As a result, the artwork created had a unique look that blended elements of both human and machine-generated art. It is an interesting example of how AI technology can be applied to create something completely new from existing works. This could have many applications in the world of art, from creating paintings to creating digital works of art.
Fig. 4. Summary of the generative adversarial network training algorithm. Taken from: Generative Adversarial Networks.
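To make the training procedure of Fig. 4 concrete, the following is a minimal PyTorch sketch of the minimax game described above. The network sizes, learning rates and data here are placeholder assumptions for illustration only; this is not the code used by Obvious.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())        # Generator
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())           # Discriminator
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_imgs):
    b = real_imgs.size(0)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)

    # Discriminator step: push D(x) toward 1 for real samples and
    # D(G(z)) toward 0 for fakes, i.e. maximize log D(x) + log(1 - D(G(z))).
    z = torch.randn(b, latent_dim)
    fake_imgs = G(z).detach()            # do not backprop into G here
    loss_D = bce(D(real_imgs), ones) + bce(D(fake_imgs), zeros)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator step: try to fool the discriminator.
    z = torch.randn(b, latent_dim)
    loss_G = bce(D(G(z)), ones)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()

# Placeholder "real" batch; a portrait dataset would be used in practice.
print(train_step(torch.randn(16, img_dim)))
```

Repeating this step over many batches is the iterative competition described in the text: the discriminator runs twice per step (real and fake inputs) and each network's gradients are computed independently.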
5 Discussion

AI art can never fully replace human creativity, since a human artist's involvement is always necessary to select the subject and its context to achieve the desired outcome. Generative AI art may be aesthetically pleasing, but it still requires the skills of a human artist to complete it. Since AI art is still a relatively new phenomenon, it is difficult to determine how it will impact the traditional art industry. However, given the current state of the market, AI art appears to present conventional artists with more of an opportunity than a threat. A new kind of art is being produced with the development of artificial intelligence, and it fetches high prices. There is no doubt that AI-generated art is growing in popularity and value, despite the skepticism of some who object to the idea of purchasing works created by machines. For instance, "Edmond de Belamy," a piece of art by the art collective "Obvious," was sold for US$432,000 in October 2018. The artwork presents itself with a visually pleasing aesthetic, featuring warm and neutral colors. The portrait features a face composed of brush strokes and geometric shapes, subtly blending shades of brown and beige, and dark hair drawn with a single curved line. The composition is framed with a white background, allowing the focus to be solely on the portrait. Overall, the painting is executed in a pastel palette that reflects a classical aesthetic with a modern touch. The portrait exudes a sense of familiarity and resembles a classic European Renaissance painting, which evokes a sense of nostalgia yet still stands out as something unique and contemporary. The portrait has been printed on canvas in the manner of oil painting as a nod to traditional portraiture and as a signifier of its relationship to the past. The combination of the medium, the colors, and the form creates a memorable work of art that stands out in its own right. AI is a technological revolution that has taken the world by storm. It has made it possible to complete complex tasks with less cost and effort, and has enabled multitasking, reducing the burden on available resources. AI is always active, never experiences downtime, and can be paused or stopped whenever necessary. Furthermore, it offers people with disabilities improved capabilities, and decision-making processes are accelerated and improved. Being extremely versatile, AI may be used across industries and has a large market potential. AI has been an
invaluable help for artists in various aspects. For example, AI-powered algorithms can be used to generate new artwork based on existing visual data. AI can also help in video editing and image manipulation, allowing for more creative functionality. AI can even be used for 3D modeling, creating models with higher levels of detail in a fraction of the time. AI-powered tools can also offer suggestions to a composer while creating new music, providing them with a larger range of creative possibilities. Finally, AI can be used to enhance digital marketing campaigns and create engaging content. All of these capabilities unlock new possibilities for artists, helping them create new and unique works of art.
6 Conclusion

The recent advances in AI have opened up new opportunities for artists to explore production and introspection through the use of high-performance technology and newly developed algorithms. Generative models, which are able to generate unique data when fed abundant training data, are often employed in these applications. AI acts as a tool, similar to a brush or a piano, in the creation of art; its creative potential depends on how artists use it. The impact of AI on the art world is significant. It enables artists to explore more complex and intricate works of art and supports experimentation with new techniques and approaches. AI is being used to create new forms of art, such as by constructing algorithms that "learn" an aesthetic by studying a large number of images and then generating images with the same aesthetic. This reduces the workload involved in creating art, allowing artists more time to focus on their creative abilities. While some fear that AI may eventually replace artists, this is not likely to happen soon: AI is capable of producing technically proficient works of art, but it lacks the ability to create truly innovative and unique designs. The potential of AI art is both exhilarating and unsettling. Although it cannot yet produce truly original and unique works, it has the power to generate texts, movies, and images that humans can mistake for genuine. As digital technology continues to evolve, artists are using it to expand their creative options and explore new territory.
References 1. Joshua Thomas, J., Pillai, N.: A deep learning framework on generation of image descriptions with bidirectional recurrent neural networks. In: Advances in Intelligent Systems and Computing, vol. 866, pp. 219–230 (2019) 2. Karagianni, A., Geropanta, V.: Smart homes: methodology of IoT integration in the architectural and interior design process—a case study in the historical center of Athens. In: Advances in Intelligent Systems and Computing, vol. 1072, pp. 222–230 (2020) 3. Hossain, M.A., Hasan, M.A.F.M.R.: Activity identification from natural images using deep CNN, pp. 693–707 (2021) 4. Jahan, T., Hasan, S.B., Nafisa, N., Chowdhury, A.A., Uddin, R., Arefin, M.S.: Big data for smart cities and smart villages: a review. In: Lecture Notes in Networks and Systems, vol. 371, pp. 427–439 (2022) 5. Nake, F.: Paragraphs on Computer Art, Past and Present, Feb 2010
6. Seshia, S.A., Sadigh, D., Sastry, S.S.: Toward verified artificial intelligence. Commun. ACM 65(7), 46–55 (2022) 7. Hossain, S.M.M., Sumon, J.A., Alam, M.I., Kamal, K.M.A., Sen, A., Sarker, I.H.: Classifying sentiments from movie reviews using deep neural networks. In: Lecture Notes in Networks and Systems, vol. 569, pp. 399–409 (2023) 8. Gatys, L.A., Ecker, A.S., Bethge, M.: Texture and art with deep neural networks. Curr. Opin. Neurobiol. 46, 178–186 (2017) 9. Jing, Y., Yang, Y., Feng, Z., Ye, J., Yu, Y., Song, M.: Neural style transfer: a review. IEEE Trans. Vis. Comput. Graph. 26(11), 3365–3385 (2020) 10. Hertzmann, A.: Visual indeterminacy in GAN art. Leonardo 53(4), 424–428 (2020) 11. Nti, I.K., Adekoya, A.F., Weyori, B.A., Nyarko-Boateng, O.: Applications of artificial intelligence in engineering and manufacturing: a systematic review. J. Intell. Manuf. 33(6), 1581–1601 (2022) 12. Goenaga, M.A.: A critique of contemporary artificial intelligence art: who is 'Edmond de Belamy'? AusArt (2020). ojs.ehu.eus 13. Zhang, M., Kreiman, G.: Beauty is in the eye of the machine. Nat. Human Behav. (2021). nature.com
Introducing Set-Based Regret for Online Multiobjective Optimization Kristen Savary and Margaret M. Wiecek(B) School of Mathematical and Statistical Sciences, Clemson University, Clemson, SC 29634, USA {ksavary,wmalgor}@clemson.edu
Abstract. Interest in online optimization (OO) has grown as learning algorithms have become more prominent. While OO has been studied extensively for single-objective optimization problems, very few studies have addressed online multiobjective optimization (OMO). A scalar regret and its bound are extended to a weighted-sum regret in an OMO setting. A general concept of set-based regret is introduced for OMO to assess the regret for not computing some, or all, elements of the Pareto set of the offline problem. The set-based regret is computed using the hypervolume and is shown to achieve an upper bound that reduces to the bound for the scalar regret. Numerical examples are included. Keywords: Online convex optimization · Online gradient descent · Weighted-sum method · Hypervolume indicator
1 Introduction

Multiobjective optimization (MO) considers problems with at least two objective functions conflicting with each other. Finding an optimal solution to these conflicting objectives requires the decision maker (DM) to balance tradeoffs, as an improvement in one objective results in a degradation of other objectives. MO has been studied extensively, with many methodologies and algorithms for finding a partial or full solution set, called the efficient or Pareto set [3, 12]. Online optimization (OO) addresses iterative and dynamic decision processes under uncertainty and can be viewed as an extension of stochastic optimization. While the latter assumes a priori knowledge of the probability distributions of the variables modeling uncertainty, the fundamental assumption in OO is that the decision's outcome is unknown when the decision is being made. For example, in agriculture, farmers are unaware of the market price of their crops at the time of planting, so it is difficult to maximize revenue with unknown sale prices. This decision situation illustrates an online single-objective optimization (OSO) problem. However, in addition to the future market price, farmers may also consider whether a specific crop replenishes nutrients in the soil or requires higher maintenance costs for water or fertilizer. This now becomes an online multiobjective optimization (OMO) problem, as there are multiple conflicting, unknown objectives when the decision to plant crops needs to be made.
OO algorithms and other learning algorithms are important tools for machine learning methodologies, but they have mainly been studied in a variety of single-objective settings (e.g., [7, 11, 14]). There are limited studies of OO in MO settings. An OMO problem is first introduced in [13], along with concepts from competitive analysis, such as c-competitiveness, that are extended to an MO setting. OMO problems also arise in game theory with vector-valued payoff functions [10]. Algorithms for online stochastic MO are proposed in [8, 9]. While the concept of regret is a fundamental notion in OSO, its multiobjective counterpart has not been defined, with the exception of a multiobjective robust regret that is introduced in MO under uncertainty rather than OMO [5]. In this paper, we introduce a general concept of regret for OMO problems to recognize that the solutions to MO problems come in the form of efficient sets rather than a single optimal vector, and the solution values come in the form of Pareto sets rather than a single number. Thus, this new concept utilizes sets rather than numbers to account for MO algorithms that output a set [1], as opposed to the algorithms proposed in [13] that output single solutions. Additionally, we propose an approach to measuring this set-based regret and observing its behavior. The theoretical results rely only on convexity, with no assumption that the objective functions follow known distributions or scenarios. The OMO problem is formulated in Sect. 2. The useful results from OSO, which are reviewed in Sect. 3, are extended in Sect. 4 to a multiobjective setting. In Sect. 5, the set-based regret is generally defined and measured using the concept of hypervolume. Section 6 contains numerical results, while Sect. 7 concludes the paper.
2 Problem Formulation

Let $\mathbb{R}^n$ and $\mathbb{R}^p$ denote Euclidean vector spaces serving as the decision and the objective space, respectively. Let $u, v \in \mathbb{R}^p$. We write $u < v$ if $u_i < v_i$ for each $i = 1, \dots, p$; $u \le v$ if $u_i \le v_i$ for each $i = 1, \dots, p$ with at least one $i$ such that $u_i < v_i$; and $u \leqq v$ if $u_i \le v_i$ for each $i = 1, \dots, p$. Furthermore, let $\mathbb{R}^p_{\ge/>} = \{u \in \mathbb{R}^p : u \ge (>)\, 0\}$.

Let $\{1, 2, \dots\}$ denote an infinite sequence of iterations. The general online convex multiobjective optimization problem (MOP) solved at $t \in \{1, 2, \dots\}$ is

$$\min_{x \in X} f^t(x), \qquad (\mathrm{OMOP}(t))$$

where $X \subseteq \mathbb{R}^n$ is a convex set of feasible solutions, and $f^t : \mathbb{R}^n \to \mathbb{R}^p$ is a vector-valued function composed of $p$ convex scalar-valued functions $f_i^t$, $i = 1, \dots, p$, that are revealed after a decision is made at iteration $t$. Without loss of generality, we assume $f^t(x) \ge 0$ for all $x \in X$. The outcome set $Y^t \subset \mathbb{R}^p$ at iteration $t$ is defined as $Y^t = \{y \in \mathbb{R}^p : y = f^t(x),\ x \in X\}$. Solving a standard MOP is defined as computing its efficient solutions and Pareto outcomes.

Definition 1 A feasible solution $x \in X$ is (weakly) efficient to (OMOP($t$)) if there does not exist a solution $x' \in X$ such that $f^t(x') \le f^t(x)$ ($f^t(x') < f^t(x)$).

Given a weight vector $w^t \in \mathbb{R}^p_{\ge}$, the weighted-sum scalarization of (OMOP($t$)) yields the online single-objective problem (OSOP($w^t$)) with objective $(w^t)^\top f^t(x)$, while (SOP($w^t$, $T$)) denotes the offline problem $\min_{x \in X} \sum_{t=1}^T (w^t)^\top f^t(x)$.

3 Regret in Online Single-Objective Optimization

For a scalar-valued online problem, the regret after $T$ iterations is the accumulated online loss minus the loss of the best fixed offline solution, $\mathrm{regret}(T) := \sum_{t=1}^T f^t(x^t) - \min_{x \in X} \sum_{t=1}^T f^t(x)$. The online gradient descent (OGD) algorithm [14] computes

$$x^{t+1} = \mathrm{Proj}_X\big(x^t - \eta_t \nabla f^t(x^t)\big),$$

where $\eta_t > 0$ are learning rates, $\nabla f^t(x^t)$ denotes the gradient of $f^t$ at $x^t$, and $\mathrm{Proj}_X(\cdot)$ is the projection of the solution onto the feasible set $X$. Let $\|X\| := \max_{x_1, x_2 \in X} \|x_1 - x_2\|_2$ and $\|\nabla f\| := \max_{x \in X} \|\nabla f^t(x)\|_2$ for $t = 1, 2, \dots, T$.
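As an illustration of the OGD update, the sketch below runs projected OGD with $\eta_t = t^{-1/2}$ on a made-up sequence of quadratic losses over the Euclidean unit ball; the loss family and the feasible set are assumptions chosen so that the projection is a one-line rescaling, not the test problem used later in the paper.

```python
# Projected online gradient descent (sketch):
# x_{t+1} = Proj_X(x_t - eta_t * grad f_t(x_t)), with eta_t = t^{-1/2}.
import numpy as np

rng = np.random.default_rng(0)
n, T = 5, 1000

def proj_unit_ball(x):
    norm = np.linalg.norm(x)
    return x if norm <= 1.0 else x / norm

targets = rng.normal(size=(T, n))            # f_t(x) = ||x - a_t||^2, revealed at step t
x = np.zeros(n)
online_loss = 0.0

for t in range(1, T + 1):
    a = targets[t - 1]
    online_loss += np.sum((x - a) ** 2)       # loss is revealed, then we update
    grad = 2.0 * (x - a)
    x = proj_unit_ball(x - t ** -0.5 * grad)  # OGD step with eta_t = t^{-1/2}

# Offline comparator: sum_t ||x - a_t||^2 is minimized over the ball by
# projecting the mean of the targets onto the ball.
x_star = proj_unit_ball(targets.mean(axis=0))
offline_loss = np.sum((targets - x_star) ** 2)
print("regret(T) =", online_loss - offline_loss)
```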
The following results hold.

Theorem 2 [14] If $\eta_t = t^{-1/2}$, then the regret of the OGD algorithm is

$$\mathrm{regret}(T) \le \frac{\|X\|^2 \sqrt{T}}{2} + \left(\sqrt{T} - \frac{1}{2}\right)\|\nabla f\|^2 \approx O(\sqrt{T}).$$

Stated below is a useful bound derived in the proof of Theorem 2.

Corollary 1 [14] Let $x^T$ be an optimal solution of $\min_{x \in X} \sum_{t=1}^T f^t(x)$. For any $t$, when using the OGD algorithm to compute $x^t$, the following holds:

$$\nabla f^t(x^t)^\top (x^t - x^T) \le \frac{1}{2\eta_t}\|x^t - x^T\|_2^2 - \frac{1}{2\eta_t}\|x^{t+1} - x^T\|_2^2 + \frac{\eta_t}{2}\|\nabla f\|_2^2.$$
In the next section, we apply these results to (OSOP($w^t$, $T$)) to obtain a weighted-sum regret for (OMOP($T$)).
4 Weighted-Sum Regret

The OGD algorithm is applied to (OSOP($w^t$, $T$)), and $x_w^t$ denotes the online solution computed at each $t$, $t = 1, \dots, T$. We define the weighted-sum regret as

$$\mathrm{regret}_{WS}\big(T, \{w^t\}_{t=1}^T\big) := \sum_{t=1}^T (w^t)^\top f^t(x_w^t) - \min_{x \in X} \sum_{t=1}^T (w^t)^\top f^t(x),$$

where $\{w^t\}_{t=1}^T$ denotes the sequence of weight vectors $w^t \in \mathbb{R}^p_{\ge}$, $t = 1, \dots, T$. Let $\|\nabla F\| := \max_{t=1,\dots,T} \max_{i=1,\dots,p} \max_{x \in X} \|\nabla f_i^t(x)\|_2$ and let $x_w^T$ be an optimal solution to (SOP($w^t$, $T$)).
Applying Theorem 2, we obtain the following result.

Theorem 3 Let $\{w^t\}_{t=1}^T$ be a weight sequence where $w^t \in \mathbb{R}^p_{\ge}$ for all $t$, and let $w^{\max} := \max_{t=1,\dots,T} \max_{i=1,\dots,p} w_i^t$. If the OGD algorithm is applied to (OSOP($w^t$, $T$)) with learning rates $\eta_t = t^{-1/2}$, then

$$\mathrm{regret}_{WS}\big(T, \{w^t\}_{t=1}^T\big) \le \frac{\|X\|^2\sqrt{T}}{2} + \left(\sqrt{T} - \frac{1}{2}\right)\big(p\, w^{\max} \|\nabla F\|\big)^2 \approx O(\sqrt{T}).$$

Proof For any $x \in X$, we have

$$\big\|\nabla\big((w^t)^\top f^t\big)(x)\big\| \le w_1^t \|\nabla f_1^t(x)\|_2 + \dots + w_p^t \|\nabla f_p^t(x)\|_2 \le p\, w^{\max} \|\nabla F\|.$$

Corollary 1's bound can be extended to the weighted-sum problem as follows:

$$\nabla\big((w^t)^\top f^t\big)(x_w^t)^\top (x_w^t - x_w^T) \le \frac{1}{2\eta_t}\|x_w^t - x_w^T\|_2^2 - \frac{1}{2\eta_t}\|x_w^{t+1} - x_w^T\|_2^2 + \frac{\eta_t}{2}\big(p\, w^{\max}\|\nabla F\|\big)^2.$$

The remainder of this proof follows directly from the proof of Theorem 2 by utilizing this modified bound instead of the bound in Corollary 1.
As expected, the weighted-sum regret bound extends the single-objective regret bound in Theorem 2 by accounting for the weight vector used in the weighted-sum formulation. However, because this sublinear regret bound has no restriction on $w$, it may not be tight in general. To choose $w$ to possibly obtain a smaller average regret, in the OGD algorithm we may replace $-w^t \nabla f^t(x^t)$ with a multiobjective steepest descent direction that, based on [2], can be computed in every iteration $t$ as

$$-\sum_{i=1}^p w_i^{t*} \nabla f_i^t(x^t), \qquad (1)$$

where $w^{t*}$ is an optimal solution to

$$\min_{w^t \in \mathbb{R}^p_{\ge},\ \sum_{i=1}^p w_i^t = 1} \left\| \sum_{i=1}^p w_i^t \nabla f_i^t(x^t) \right\|_2^2. \qquad (2)$$

We call this optimal weight adaptive and present its effects on regret computation in Sect. 6. Based on Theorem 3, we obtain the following result.

Corollary 2 If the OGD algorithm is applied to (OSOP($w^t$, $T$)) with learning rates $\eta_t = t^{-1/2}$, then

$$\sum_{t=1}^T \big(f_i^t(x_w^t) - f_i^t(x_w^T)\big) \le O(\sqrt{T}) \quad \text{for each } i = 1, \dots, p,$$

and the modified weighted-sum regret is bounded:

$$\mathrm{regret}\big(T, \{w^t\}_{t=1}^T\big) = \sum_{i=1}^p \sum_{t=1}^T \big(f_i^t(x_w^t) - f_i^t(x_w^T)\big) \le O(\sqrt{T}).$$
Proof Recall that by assumption, solutions of (OSOP($w^t$, $T$)) are finite with $f^t(x) \ge 0$ for all $t = 1, \dots, T$ and $x \in X$. Thus, $0 \le \sum_{t=1}^T (w^t)^\top f^t(x_w^T) \le O(C)$. Then, by Theorem 3,

$$\sum_{t=1}^T (w^t)^\top f^t(x_w^t) - \sum_{t=1}^T (w^t)^\top f^t(x_w^T) \le O(\sqrt{T})$$

implies

$$\sum_{t=1}^T (w^t)^\top f^t(x_w^t) \le O(\sqrt{T}) + \sum_{t=1}^T (w^t)^\top f^t(x_w^T) \le O(\sqrt{T}) + O(C) = O(\sqrt{T}).$$

Thus,

$$\sum_{t=1}^T (w^t)^\top f^t(x_w^t) = \sum_{t=1}^T \sum_{i=1}^p w_i^t f_i^t(x_w^t) = \sum_{i=1}^p \sum_{t=1}^T w_i^t f_i^t(x_w^t) \le O(\sqrt{T}).$$

Since $w_i^t f_i^t(x) \ge 0$ for all $x \in X$, $i = 1, \dots, p$, and $t = 1, 2, \dots, T$,

$$\sum_{t=1}^T w_i^t f_i^t(x_w^t) \le O(\sqrt{T}).$$

For each $i$, letting $w_i^{\min} = \min_{t=1,\dots,T} w_i^t$, we have

$$\sum_{t=1}^T w_i^{\min} f_i^t(x_w^t) \le \sum_{t=1}^T w_i^t f_i^t(x_w^t) \le O(\sqrt{T}).$$

Thus, $\sum_{t=1}^T w_i^{\min} f_i^t(x_w^t) = w_i^{\min} \sum_{t=1}^T f_i^t(x_w^t) \le O(\sqrt{T})$ implies

$$\sum_{t=1}^T f_i^t(x_w^t) \le \frac{1}{w_i^{\min}}\, O(\sqrt{T}) = O(\sqrt{T}).$$

Lastly, we have $\sum_{t=1}^T f_i^t(x_w^t) - \sum_{t=1}^T f_i^t(x_w^T) \le \sum_{t=1}^T f_i^t(x_w^t) \le O(\sqrt{T})$. The final claim holds as

$$\sum_{i=1}^p \sum_{t=1}^T \big(f_i^t(x_w^t) - f_i^t(x_w^T)\big) \le \sum_{i=1}^p O(\sqrt{T}) = p\, O(\sqrt{T}) = O(\sqrt{T}). \qquad \square$$
Corollary 2 indicates that the weighted-sum regret bound remains the same if the weights are not included in the computation. Thus, when we recompute the regret as the sum of the contributions of each individual objective function, using only the online weighted-sum solutions, we obtain the same sublinear bound. For two objectives, the adaptive weight defined by (2) admits a simple closed form, sketched below; we then turn our attention to a regret computation in which sets of online and Pareto solutions are considered.
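As a computational note on the adaptive weight, for $p = 2$ problem (2) is a one-dimensional quadratic in $w$ and admits the closed form below; for general $p$ a quadratic program must be solved. The gradients in the example are made up.

```python
# Adaptive weight of (2) for p = 2: minimize ||w*g1 + (1-w)*g2||^2 over w in [0,1].
# Setting the derivative to zero gives w = <g2 - g1, g2> / ||g1 - g2||^2,
# clipped to [0, 1].
import numpy as np

def adaptive_weights(g1, g2):
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:                   # identical gradients: any convex weight works
        return np.array([0.5, 0.5])
    w = np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0)
    return np.array([w, 1.0 - w])

g1, g2 = np.array([1.0, 0.0]), np.array([0.0, 2.0])
w = adaptive_weights(g1, g2)
direction = -(w[0] * g1 + w[1] * g2)   # multiobjective steepest descent direction (1)
print(w, direction)                    # w = [0.8, 0.2]
```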
5 Set-Based Regret

We recognize that the weighted-sum regret does not consider the multiobjective nature of (OMOP($T$)), where every iteration yields a solution set of vectors rather than a single vector. Before we propose a definition of the multiobjective, or set-based, regret that relies on the notion of the Pareto set, we introduce the concept of nondominated points for an arbitrary set.

Definition 2 Let $S \subset \mathbb{R}^p$. A point $y' \in S$ is said to dominate a point $y \in S$ provided $y' \le y$. A point $y \in S$ is said to be nondominated provided there exists no $y' \in S$ such that $y' \le y$. Let $S_N$ denote the set of nondominated points in $S$ and $N(\cdot)$ denote the operator on a set such that $S_N = N(S)$.

Given the set $Y^T \subset \mathbb{R}^p$ at iteration $T$, we can extract the most significant information from this set by applying the operator $N$. We have $Y_N^T = N(Y^T)$.

Definition 3 The set-based regret at iteration $T$ is a measure, $|\cdot|$, of the region (weakly) dominated by the Pareto set $P_{w/\cdot}^T$ of (MOP($T$)) but bounded above by the nondominated points of the online accumulated outcome set $Y_N^T$. That is,

$$\mathrm{regret}\big(T, Y_N^T, P_{w/\cdot}^T\big) := \Big| \big(Y_N^T - \mathbb{R}^p_{\ge}\big) \cap \big(P_{w/\cdot}^T + \mathbb{R}^p_{\ge}\big) \Big|.$$
Fig. 1 depicts the offline Pareto set, the accumulated online outcome set, and the regret region. Since various mathematical tools can be used to measure this region, below we present such a tool.
Fig. 1. a. Pareto set and accumulated online outcome set. b. Region for set-based regret (shaded area).
5.1 Set-Based Regret via Hypervolume

The hypervolume indicator has been widely used in evolutionary multiobjective optimization algorithms to assess the quality of an approximation to the Pareto set by its closeness to the true Pareto set.

Definition 4 [6] Given a set $S \subseteq \mathbb{R}^p$ and a reference point $r \in \mathbb{R}^p$, the hypervolume indicator of $S$ is the measure of the region weakly dominated by $S$ and bounded above by $r$.

In this derivation, we relax Definition 3 and use the complete accumulated online outcome set as a set of reference points. Given $p$ objectives, the hypervolume between a single accumulated outcome point $y \in Y^T$ and a Pareto point $p \in P_{w/\cdot}^T$, such that $p \le y$, is defined as

$$HV(y, p) := \prod_{j=1}^p (y_j - p_j).$$

Given a set $W$ of weight sequences, let $y_w \in Y_w^T$ and $p_w \in P_{w,w/\cdot}^T$ be obtained for the same $w \in W$. The set-based regret via hypervolume is defined as

$$\mathrm{regret}_{HV}\big(T, Y_w^T, P_{w,w/\cdot}^T\big) := \sum_{w \in W} HV(y_w, p_w).$$
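The definition is straightforward to evaluate; the sketch below computes $HV$ and the set-based regret via hypervolume for made-up point pairs $(y_w, p_w)$.

```python
# Set-based regret via hypervolume: HV(y, p) = prod_j (y_j - p_j), summed over
# the weight set W. The point pairs below are made-up illustrations.
import numpy as np

def hv(y, p):
    assert np.all(p <= y), "requires p <= y componentwise"
    return float(np.prod(y - p))

pairs = {  # weight -> (accumulated online outcome y_w, Pareto point p_w)
    (0.5, 0.5): (np.array([4.0, 3.0]), np.array([1.0, 1.0])),
    (0.9, 0.1): (np.array([5.0, 2.5]), np.array([2.0, 0.5])),
}

T, p_dim = 100, 2
regret_hv = sum(hv(y, p) for (y, p) in pairs.values())
print("regret_HV     =", regret_hv)
print("avg regret_HV =", regret_hv / T ** p_dim)  # averaged form used below
```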
We note that this definition may account for some regions between the sets $Y_w^T$ and $P_{w,w/\cdot}^T$ more than once, while other regions may be left out. For $p$ objective functions, the average set-based regret via hypervolume is

$$\mathrm{avg\ regret}_{HV}\big(T, Y_w^T, P_{w,w/\cdot}^T\big) = \frac{\mathrm{regret}_{HV}\big(T, Y_w^T, P_{w,w/\cdot}^T\big)}{T^p},$$

which takes the multidimensional aspect of the hypervolume into account. Solving (OSOP($w^t$)) with the OGD algorithm and solving (SOP($w^t$, $T$)), it is possible to compute an accumulated outcome point and a corresponding weak Pareto point, respectively, resulting in a pair of points for each weight in a set of weight sequences $W$.

Theorem 4 If the OGD algorithm is applied to (OSOP($w^t$, $T$)) for each $w$ in a set of weight sequences $W$ and with learning rates $\eta_t = t^{-1/2}$, then

$$\mathrm{regret}_{HV}\big(T, Y_w^T, P_{w,w/\cdot}^T\big) \le O(T^{p/2}).$$

Proof Let $x_w^T \in E_{w,w/\cdot}^T$. We have

$$\mathrm{regret}_{HV}\big(T, Y_w^T, P_{w,w/\cdot}^T\big) = \sum_{w \in W} HV(y_w, p_w) := \sum_{w \in W} \prod_{j=1}^p \sum_{t=1}^T \big(f_j^t(x_w^t) - f_j^t(x_w^T)\big).$$

Utilizing Corollary 2, we can conclude

$$\sum_{w \in W} \prod_{j=1}^p \sum_{t=1}^T \big(f_j^t(x_w^t) - f_j^t(x_w^T)\big) \le \sum_{w \in W} \prod_{j=1}^p O(\sqrt{T}) = \sum_{w \in W} O(T^{p/2}) = |W|\, O(T^{p/2}) = O(T^{p/2}). \qquad \square$$
Thus, the bound in Theorem 4 results in an average set-based regret via hypervolume approaching 0 as $T$ tends to infinity, regardless of the precision carried in the definition of this regret. Because $Y_{w,N}^T \subseteq Y_w^T$, this bound holds when using only the nondominated accumulated online points as in Definition 3.
6 Numerical Results

We demonstrate the online multiobjective optimization process on the strictly convex biobjective problem of the form

$$\min_{x \in \mathbb{R}^n} \left( \frac{1}{2} x^\top Q_1^t x + (p_1^t)^\top x,\ \frac{1}{2} x^\top Q_2^t x + (p_2^t)^\top x \right) \qquad (\mathrm{OBOP}(t))$$

$$\text{s.t. } \|x\| = 1,$$

and consider two cases for the matrices $Q_1^t$ and $Q_2^t$: they are (i) diagonal with elements in $[-5, 5]$, and (ii) dense with elements in $[0, 1]$. We scalarize (OBOP($t$)) into
(OSOP($w^t$)) and run $T$ iterations of the OGD algorithm with different choices of weights satisfying $w_1^t, w_2^t \in [0, 1]$ and $w_1^t + w_2^t = 1$. At every iteration, we use either a collection of fixed weights or the adaptive weight, i.e., an optimal solution to (2). Table 1 shows the average regret for (OBOP($T$)) in cases (i) and (ii) with these weight choices.
Table 1. Average regret for (OBOP($T$))(i) and (OBOP($T$))(ii); the minimum average regret in each row is marked with an asterisk

| Case | Iter. | (0, 1)   | (1/10, 9/10) | (1/2, 1/2) | (9/10, 1/10) | (1, 0)   | Adaptive  |
|------|-------|----------|--------------|------------|--------------|----------|-----------|
| (i)  | 100   | 1.154    | 1.069        | 0.8566*    | 1.032        | 1.106    | 0.9912    |
| (i)  | 1000  | 0.6102   | 0.5253       | 0.3315*    | 0.458        | 0.5239   | 0.3919    |
| (i)  | 5000  | 0.289    | 0.2422       | 0.1527*    | 0.2353       | 0.2797   | 0.1818    |
| (ii) | 100   | 0.3114   | 0.3043       | 0.2878     | 0.2459       | 0.2312*  | 0.2479    |
| (ii) | 1000  | 0.03693  | 0.03516      | 0.03192    | 0.02995      | 0.02964  | 0.02785*  |
| (ii) | 5000  | 0.009553 | 0.008823     | 0.007512   | 0.007791     | 0.008108 | 0.006742* |
For case (i), the minimum average regret occurs for $w^t = (1/2, 1/2)$. For case (ii), the minimum average regret occurs with the adaptive weight as the algorithm progresses. In general, it is difficult to determine which weight choice results in the smallest (average) regret. However, the results in Table 1 agree with Theorem 3 and show that the average regret does tend to 0 as the iterations increase, regardless of the weight choice. Figure 2 plots the adaptive weight computed for each iteration. The thicker band in the middle of the plot corresponds to $w^t = (1/2, 1/2)$, implying the computed adaptive weight is frequently close to this vector. Note that this $w^t$ corresponds to the smallest average regret in case (i).
Fig. 2. Iteration versus adaptive weight plot for (OBOP($T$))(i)
7 Future Work

We have shown that the regret for OSO computed with the OGD algorithm can naturally be extended to the multiobjective case as the weighted-sum regret when this algorithm is applied to the online weighted-sum problem scalarizing the OMO problem. We also proposed a general notion of set-based regret and computed it with the hypervolume by solving appropriate weighted-sum problems. Future research directions include alternative ways to compute the set-based regret, addressing unconstrained OMO, and designing other OMO algorithms.
References 1. Deb, K.: Multi-Objective Optimization Using Evolutionary Algorithms. Wiley, USA (2001). https://doi.org/10.5555/559152 2. Désidéri, J.A.: Multiple-gradient descent algorithm for multiobjective optimization. C. R. Math. 350(5–6), 313–318 (2012). https://doi.org/10.1016/j.crma.2012.03.014 3. Ehrgott, M.: Multicriteria Optimization. Springer, Berlin (2005). https://doi.org/10.5555/1121732 4. Geoffrion, A.: Proper efficiency and the theory of vector maximization. J. Math. Anal. Appl. 22, 618–630 (1968). https://doi.org/10.1016/0022-247X(68)90201-1 5. Groetzner, P., Werner, R.: Multiobjective optimization under uncertainty: a multiobjective robust (relative) regret approach. Eur. J. Oper. Res. 296(1), 101–115 (2022). https://doi.org/10.1016/j.ejor.2021.03.068 6. Guerreiro, A., Fonseca, C., Paquete, L.: The hypervolume indicator: computational problems and algorithms. ACM Comput. Surv. 54(6), Article 119 (2022). https://doi.org/10.1145/3453474 7. Hazan, E.: Introduction to online convex optimization. Found. Trends Optim. 2(3–4), 157–325 (2016). https://doi.org/10.1561/2400000013 8. Liu, S., Vicente, L.N.: The stochastic multi-gradient algorithm for multi-objective optimization and its application to supervised machine learning. Ann. Oper. Res. 1–30 (2021). https://doi.org/10.1007/s10479-021-04033-z 9. Mahdavi, M., Yang, T., Jin, R.: Stochastic convex optimization with multiple objectives. In: Proceedings of the 26th International Conference on NIPS '13, vol. 1, pp. 1115–1123. Curran Associates Inc., USA (2013). https://doi.org/10.5555/2999611.2999736 10. Mannor, S., Perchet, V., Stoltz, G.: Approachability in unknown games: online learning meets multi-objective optimization. In: Conference on Learning Theory, pp. 339–355. PMLR (2014) 11. Shalev-Shwartz, S.: Online learning and online convex optimization. Found. Trends Mach. Learn. 4(2), 107–194 (2012). https://doi.org/10.1561/2200000018 12. Sumpunsri, S., Thammarat, Ch., Puangdownreong, D.: Multiobjective Lévy-flight firefly algorithm for multiobjective optimization. In: Vasant, P., Zelinka, I., Weber, G.-W. (eds.) Intelligent Computing and Optimization, ICO 2020. Advances in Intelligent Systems and Computing, vol. 1324, pp. 145–153. Springer, Berlin (2021). https://doi.org/10.1007/978-3-030-68154-8_15 13. Tiedemann, M., Ide, J., Schöbel, A.: Competitive analysis for multi-objective online algorithms. In: Rahman, M.S., Tomita, E. (eds.) WALCOM: Algorithms and Computation, WALCOM 2015. Lecture Notes in Computer Science, vol. 8973. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-15612-5_19 14. Zinkevich, M.: Online convex programming and generalized infinitesimal gradient ascent. In: Proceedings of the Twentieth International Conference on ICML '03, pp. 928–935. AAAI Press (2003). https://doi.org/10.5555/3041838.3041955
The Concept of Optimizing the Operation of a Multimodel Real-Time Federated Learning System Elizaveta Tarasova(B) ITMO University, St. Petersburg, Russian Federation [email protected]
Abstract. The paper considers the optimization of federated learning for a real-time system with independent devices and centralized aggregation. A multimodel setting is considered: various machine learning models participate in the system, and each device can work with any number of them. A new method of managing this system, together with operation algorithms for its elements, is proposed in order to improve the quality of global models and minimize delays between updates on local devices. The proposed approach is based on the idea of introducing a control center that checks the correctness of updates, as well as on treating the aggregation station as a single-processor scheduling model with due dates. In this case, the due dates are the predicted points in time of the next local update. The proposed approach is tested on synthetic data, with its variants compared in various combinations, as well as with the basic method, in which aggregation starts on a first-come-first-served basis for models that have received updates from all devices. Testing has shown that the new approach in the maximum configuration and with the SF algorithm reduces, for more than 90% of examples, the delay on local devices by an average of more than 19% compared to the rest. Keywords: Federated learning optimization · Online model planning · Real-time system
1 Introduction

The quality of machine learning (ML) modeling results directly depends on the set of incoming data. With a small volume or low representativeness of data, the results of the models may be inaccurate or even incorrect. The problem of a lack or imbalance of data is faced by many researchers, as well as by companies that use ML models in their work. Federated learning (FL) offers a solution to this problem while maintaining data privacy. This approach is based on two principles: data confidentiality and collaborative learning. That is, on the one hand, organizations can enrich their models with data from other similar sources, and on the other hand, they preserve their customers' data. FL is not applicable to all ML models. For example, models based on trees cannot be
considered in the framework of FL, since these trees cannot be retrained. Also, k-nearest neighbors (KNN) methods or similar methods may not benefit from the FL concept, as they store the training data itself during training. FL is mainly suitable for parameterized learning, such as neural networks. When building a model based on federated learning, many management and planning questions arise. It is required to determine the frequency of model recalculation on local devices, which determines the frequency of updates sent to the station, taking the transmission time into account. At the same time, it is necessary to control the frequency of update aggregation at the station. Also, local devices can have different computing power and different intensities of incoming data, which affects the completion time of calculations. On the one hand, all updates and recalculations on devices could be performed upon the arrival of new data (for example, of a fixed size), with aggregation carried out at the station whenever new data arrive. However, with this approach, updates arriving at the station may not be correct to apply on their own. It is also possible to introduce additional parameters that affect the efficiency of the system, for example, the readiness of devices to wait for the aggregation to complete and receive an updated model. FL planning and optimization is used to resolve such issues. In this work, a multimodel federated learning system is considered, with independent devices on which local updates of various machine learning models occur, and with centralized aggregation. Several unrelated models can be used on each device. The system works in real time with ML models that require constant updating with new incoming data. The paper proposes a new control method for this FL system and an online FL scheduling algorithm that optimize its operation in order to improve the quality of global models and minimize delays on local devices. Improving the quality of global models is achieved through control of the start of local updates on the devices, control of the start of aggregation, and control of the aggregation order for different models. The choice of the aggregation method itself remains optional.
2 Related Work

As mentioned earlier, FL is based on two main ideas: privacy protection and collaborative learning, the latter of which requires the aggregation of model updates received from different devices. Consider the approaches for these two ideas. There are various aggregation methods. The basic approach, calculating the arithmetic average of all updates coming from devices, is not robust to corruption: even one incorrect update in a cycle can greatly degrade the global model for all devices. An alternative approach is the median, due to its robustness to outliers. The paper [4] considers the application of the classical multidimensional generalization of the median (the geometric, spatial, or L1-median [2]) to FL. Other approaches are the weighted average, where a significance (weight) is determined for each local device; the Bayesian non-parametric structure for FL neural networks, in which the global model is formed by matching neurons across local models [9]; as well as various variations of methods that sum and average the local and global models [3].
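For reference, the sketch below implements the two simplest of these aggregation rules, a weighted average and a coordinate-wise median, for local updates stacked as rows of a matrix; it is an illustration of the general idea, not the exact schemes of the cited works.

```python
# Two basic aggregation rules for local updates stacked as rows of a matrix:
# a weighted average (FedAvg-style) and a coordinate-wise median.
import numpy as np

def weighted_average(updates, weights):
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                       # normalize device weights
    return w @ updates                 # convex combination of parameter vectors

def coordinatewise_median(updates):
    return np.median(updates, axis=0)  # robust to a single corrupted row

updates = np.array([[1.0, 2.0], [1.1, 2.1], [9.0, -5.0]])  # third row corrupted
print(weighted_average(updates, [1, 1, 1]))   # pulled toward the outlier
print(coordinatewise_median(updates))         # largely ignores it
```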
There are various studies aimed at managing FL in order to optimize various criteria. For example, [8] considers client scheduling for FL, without information about the state of the wireless channel or the statistical characteristics of clients, based on multi-armed bandits, with the goal of minimizing the overall time spent on FL training, including transfer time and local computation time. The paper [8] proposes a client-scheduling algorithm based on the upper-confidence-bound policy (CS-UCB). Another goal of FL planning might be to minimize the number of training cycles. The training cycle for FL includes updating on local devices, pushing updates, and aggregation at the station [4]. In [4], an approach was proposed to reduce the number of training cycles along with minimizing the time interval per communication round, by jointly considering the significance of local updates and the latency of each client. The proposed scheme is based on the ε-greedy algorithm. Federated learning can also be considered in a multi-job setting (Multi-Job Federated Learning, MJ-FL): several models can be updated on local devices and aggregated at the station. There are two possible cases: with a parallel learning process on local devices, and with a sequential one. The direct approach is to train each job separately using the mechanism described in [3] while reusing existing single-job FL scheduling (for example, FedAvg [3]). This amounts to simple parallelism, in which the devices are not fully utilized and the system efficiency is low. In addition, direct adaptation of existing scheduling methods to multi-job FL cannot provide efficiency and accuracy at the same time. For parallel learning, the research in [10] proposed two approaches to improve the efficiency of the learning process and the accuracy of the global model. The effectiveness of training is determined by a cost model based on the training time and data validity of different devices in the learning process of different jobs. The first approach proposed in [10] is based on reinforcement learning and is more suitable for complex jobs; the second uses Bayesian optimization and is suited for simple jobs. Thus, when optimizing to reduce the time of the learning process, one should take into account not only the computing and communication capabilities but also the balance of the data.
3 Problems

A system based on federated learning is considered, with independent devices on which models of the same types are trained. The general model includes devices with unrelated customer databases and K groups of machine learning models. Each device can host a different number of machine learning models from different groups: {M_nk}, where n is the device number and k is the group number. A station performs centralized aggregation of global updates. The federated approach preserves confidentiality: data remain on the local devices, training occurs locally, and only the parameters of the model
are sent for aggregation. The system is considered over time. It is required to define a system control algorithm that optimizes the selected objective functions. The general structure of the model under consideration is shown in Fig. 1. At the input, the system accepts data (for example, customer transactions), under the assumption that the data arrive with different frequencies and that different devices have different customers. The input data arrive at the devices in the Unconnected Devices area, where local updates occur. After that, the parameters of the models are transferred to the optimized federated learning system (OFL System) for the global update (aggregation). The results produced by the system are updated parameters (call them global) that are returned to the devices.
Fig. 1. General structure of the federated learning system.
The problems that must be solved to optimize the system under consideration are:

1. Control of local devices: at what point in time a local update should occur, and how the frequency of data arriving at a device should regulate the aggregation of the model in the system.
2. Control of update correctness: how to determine whether aggregation is possible on the basis of the local updates received, given that updates have not arrived from all devices; at what point in time to send updates for aggregation; and from how many devices an update must arrive for a correct aggregation.
3. Control of the update sequence: in what order aggregation should be performed when updates for different models have been received.

Solving the problems described above improves the quality of the global model (problems 1 and 2) and increases the efficiency of the system in terms of reducing the delays on local devices waiting for updates (problems 1 and 3). To solve the posed problems, a concept for optimizing the system's operation is proposed, based on predicting the frequency of data arrival, scheduling algorithms, and game theory.
4 Methods and Algorithms

This section presents the new system control method, the general operation algorithm, and the operation algorithm of each level of the system. The section also presents several scheduling-theory algorithms for the operation of one of the levels of the system and their adaptation to it.
4.1 System Control Method and General Algorithm

From the point of view of FL, the system contains a set of devices (see Fig. 2: Device 1, Device 2, Device 3), each of which has an isolated set of customers and data about them, and an update center, the Aggregation Station (AS), at which aggregation is carried out. The AS can aggregate sequentially for different models.
Fig. 2. The new concept of the federated learning system.
To solve the problems described in the previous section, the model was divided into three levels (see Fig. 2): local devices (Level 1, problem 1), the Hub control center (Level 2, problem 2), and the Aggregation Station, AS (Level 3, problem 3). The general algorithm for the operation of the system is:

1. The operating rule of the local devices is set, which determines under what conditions a local update begins.
2. After a local update is completed, the device transfers the obtained model parameters.
3. An abstract Hub device is introduced, which collects models after local updates and decides, based on its given operating rule, at what point to send the update for each group of models to the aggregation station as a job. The Hub is also the link between the federated learning concept and the operation of the aggregation station as a single-processor scheduling model, as it forms, from the many locally updated parameters, jobs with the set of parameters required by the station.
4. At the moment the aggregation station becomes free, the selected scheduling algorithm is used to decide which of the available jobs to start aggregating. After the aggregation, the updated model is sent back to all the devices that participated in the aggregation.
The described sequence of actions (a local update and its dispatch, collection of updates at the Hub, transmission to the AS, and broadcast back to the devices) constitutes one iteration of the system's operation. Some devices may not participate in a particular iteration: if the update from device n was not received before the job was sent to the AS, then this device does not participate in the aggregation and does not receive the subsequent global update. Next, the operating rules of each level are considered.

4.2 Local Device: Level 1

For each local device, a rule is proposed by which local updates of individual models occur. Figure 3 displays the operation of the local device. Red points are the moments in time at which data arrive.
Fig. 3. Example of local device.
The following designations are introduced. The parameter c_nk is the number of data units related to model M_nk received since the beginning of the last local update of this model on device n, that is, data that have not yet participated in an update. F_ink is the set of parameters obtained after the local update of M_nk in iteration i, and F'_ink is the set of parameters received after the global update of M_nk in iteration i. C_nk is a parameter, calculated empirically, that determines what data volume is optimal for starting the next update; it is a constant during the system's operation. D_ink is the due date of the current iteration of the global update of model k on device n: the point in time at which the next C_nk units of new data will have accumulated (the blue point in Fig. 3). α_ink is the "weight" parameter of the device, reflecting the quality of the resulting update.

Local device operation algorithm:

1. If the device is free, then for model M_nk of device n from group k, after the start of iteration i:
   1.1. If c_nk > C_nk and F'_ink has arrived at the device, the (i + 1)-th local update iteration starts. Else: wait.
   1.2. At the start of a local update, the parameter D_ink is predicted for each model.
2. If several models are ready for updating (by condition 1), then the model with the minimum D_ink is given priority.
3. After the local update is completed, F_ink, D_ink, and α_ink are sent to the OFL System.

4.3 Hub (Control Center): Level 2

The Hub performs a control function: it collects local updates (the sent parameters F_ink, D_ink, and α_ink) and determines when to send them for aggregation as a job, interpreting incoming updates in terms of scheduling theory. The Hub's operating rule is based on the way a parliament votes on a new law: each party holds a certain percentage of the total votes, and the law is adopted if the percentage of votes in favor exceeds a given threshold A. In the case of the Hub, the devices are the "parties" and their α_ink parameters are the votes; what is put to the vote is whether to send the collected model parameters for aggregation. A code sketch of this rule follows the algorithm.

Work algorithm:

1. If the Hub has received an update F_ink, then I_k += 1, where I_k is the number of local updates received by the current moment for model group k, and:
   1.1. If Σ_{i=1}^{I_k} α_ink ≥ A, then:
        1.1.1. A "job" v_j is formed to be sent to the aggregation station (the processor, in scheduling-theory terms): D_j = min D_ink; p_j is the predicted aggregation time; q_j = max_{i∈[0,I_k]} {q_ink} is the delivery time, where q_ink is the time needed to send the update for model k from the aggregation station AS back to local device n.
        1.1.2. I_k = 0.
   1.2. Else: return to step 1.
2. Else: wait.
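A minimal sketch of this voting rule, assuming updates arrive one at a time and that the threshold A and the predicted aggregation time are given as inputs:

```python
# Hub voting rule (sketch): accumulate device weights alpha for model group k;
# once their sum reaches the threshold A, emit a job (D_j, p_j, q_j) for the AS.
from dataclasses import dataclass, field

@dataclass
class GroupState:
    alphas: list = field(default_factory=list)
    due_dates: list = field(default_factory=list)
    send_times: list = field(default_factory=list)  # q_ink values

def on_update(state, alpha, due_date, send_time, A, predicted_agg_time):
    state.alphas.append(alpha)
    state.due_dates.append(due_date)
    state.send_times.append(send_time)
    if sum(state.alphas) >= A:                       # the vote passes
        job = {"D": min(state.due_dates),            # earliest local due date
               "p": predicted_agg_time,              # predicted aggregation time
               "q": max(state.send_times)}           # delivery time
        state.alphas.clear(); state.due_dates.clear(); state.send_times.clear()
        return job                                   # send to the aggregation station
    return None                                      # keep waiting

s = GroupState()
print(on_update(s, 0.4, 10.0, 1.0, A=1.0, predicted_agg_time=2.0))  # None
print(on_update(s, 0.7, 8.0, 1.5, A=1.0, predicted_agg_time=2.0))   # job emitted
```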
4.4 Aggregation Station (AS): Level 3

The operation of the aggregation station is based on a single-processor online scheduling model with due dates. The AS receives the collected local updates from the Hub for a specific group of models as a job $v_j$ with a set of parameters; processing on the processor means aggregation. The work of the aggregation station is thus considered as a scheduling-theory model: one processor, a set of jobs $V$ arriving at the processor, and preemption disabled. For each job $v_j \in V$ the following are specified: the arrival time at the processor $r_j$, the execution time $p_j$, the due date $D_j$, and the delivery time $q_j$. At each moment of time, only the jobs of the set $V^*$ are known:

$$V^* = \{v_j \in V : r_j \le time\},$$

where $time$ is the current time on the processor. It is required to define a processor operating rule that minimizes the maximum lateness of the final schedule $S$:

$$L(S) = \max_{v_i \in V} \max(\tau_i + p_i - D_i,\ 0) \to \min, \qquad (1)$$
where $\tau_i$ is the start time of processing job $v_i$. The general idea of algorithms for the online model is to form job-selection rules applied at the moment the processor is released. Since preemption is disabled, the problem reduces to deciding which job to process next at the moment the processor becomes free; the choice is made from the set $V^*$. There are different approaches to this online scheduling model with due dates. Studies [6, 7] consider existing algorithms: MINDL (based on minimum lead time), WSPT (Smith's algorithm [5], based on job size), and the algorithms SF [6] and LJSF [7]. According to this research, for this setting the best objective function values are obtained using LJSF; however, in the absence of jobs that are much longer than average, the SF and LJSF algorithms obtain close results. In this regard, the SF algorithm was chosen for the operation of the considered FL control model. For the SF algorithm, the safety factor formula has been changed to $K_j = \frac{D_j - time}{p_j + q_j}$.
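A sketch of the resulting selection rule at the moment the processor is freed; it assumes, as the name "safety factor" suggests, that the job with the smallest $K_j$ is the most urgent and is chosen first, which is an interpretation rather than a statement taken from [6].

```python
# SF rule at the instant the processor is freed (sketch): among released jobs,
# compute K_j = (D_j - time) / (p_j + q_j) and pick the smallest (most urgent).
def pick_next_job(released_jobs, time):
    # released_jobs: dicts with keys "D", "p", "q" (all with r_j <= time already)
    if not released_jobs:
        return None
    return min(released_jobs, key=lambda j: (j["D"] - time) / (j["p"] + j["q"]))

jobs = [{"D": 20.0, "p": 3.0, "q": 1.0},
        {"D": 12.0, "p": 5.0, "q": 2.0}]
print(pick_next_job(jobs, time=4.0))  # second job: K = 8/7 < 16/4
```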
5 Results and Discussions

The proposed concept and system algorithms were tested on synthetic data generated by adapting the Carlier [1] method to models with delivery, as used in testing and comparing the scheduling algorithms MINDL, WSPT, and SF in [6]. Testing was carried out in a simulation format, without considering the application of this system to specific machine learning models. Groups of examples were generated for each device, and each group was divided into subgroups (imitating the presence of different models on the devices). For each job of each example, the following were given: the time of arrival at the Hub from the local device, $r_j$; the execution time (time of aggregation and transfer back to the local device), $P_j = p_j + q_j$; and the due date, with $r_j \in [1, Kn]$, $P_j \in [1, 50]$, $D_i \in [0, Kn - 1]$, where $n$ is the number of jobs and the coefficient $K$ was chosen from the range $[10, 25]$, whose values Carlier noted as the most difficult for the problem under consideration. Consider the individual levels of the system. The operation of Level 1 (local devices) controls the frequency of local updates. The approach in which updates are performed when new data of a given amount $C_{nk}$ have been received is quite common. The parameter $C_{nk}$ depends on the specific machine learning model and the frequency of data arrival. The weights $\alpha_{ink}$ should also depend on the ML model as well as on the quality of the processed data in terms of quantity and variety. All auxiliary parameters were calculated empirically. The efficiency of the system was evaluated by two criteria: the quality of the system and the objective function $L(S)$ specified in Sect. 4.4 (see Eq. (1)).
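Under the stated ranges, the instance generator can be sketched as follows; uniform sampling and the way $P_j$ is split into $p_j$ and $q_j$ are assumptions, since the text does not specify them.

```python
# Synthetic instance generator following the stated ranges (sketch); uniform
# sampling is an assumption, as the distributions are not specified in the text.
import random

def generate_instance(n_jobs, K, seed=0):
    rnd = random.Random(seed)
    jobs = []
    for _ in range(n_jobs):
        P = rnd.randint(1, 50)                 # P_j = p_j + q_j in [1, 50]
        q = rnd.randint(0, P - 1)              # assumed split into service/delivery
        jobs.append({"r": rnd.randint(1, K * n_jobs),      # r_j in [1, Kn]
                     "p": P - q, "q": q,
                     "D": rnd.randint(0, K * n_jobs - 1)}) # D_j in [0, Kn - 1]
    return jobs

print(generate_instance(n_jobs=5, K=10)[:2])
```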
Because, at this stage of the research, the proposed concept was not applied to machine learning models, the quality of the system is assessed by theoretical considerations. The quality of the entire system directly depends on the quality of the local models on the local devices, which in turn depends on the quality and volume of the data, as well as on the quality of previous local and global updates. At the same time, adding a Hub to the system makes it possible to control the impact of local updates on the global model, which leads to better global updates. For comparison by the second criterion, the system was tested in the following variations:

1. The system does not use the proposed ideas. Model aggregation occurs when all devices are ready, that is, at the moment when all local updates are completed. The sequence of aggregation at the AS is determined by the first-come-first-served principle. The objective function is defined as in the other cases.
2. The system works without the second level: aggregation occurs when all devices for a specific model are ready, and the aggregation order is determined by the MINDL, WSPT, or SF algorithms.
3. The system works on the basis of the entire proposed concept, with different algorithms for the third level: MINDL, WSPT, or SF.

Using option 3 with the SF algorithm gives the best performance in terms of the objective function: an improvement was observed for 92% of the examples in each group, by 25% on average compared with option 1, and for 90% of the examples, by 19% on average compared with option 2 with any of the algorithms. For the remaining examples, the objective functions received close values when comparing options 3 and 2. At the same time, when compared with the option without the Hub, in some cases option 3 gives objective function values that are more than 38% lower (better, since minimization is considered). This is because the Hub reduces the delay caused by waiting for updates from different devices, as it allows updates that have not been received from all devices to be sent for aggregation. Analysis of the system's operation using different algorithms for the aggregation station confirms the results obtained in [6]: for all groups of examples, the new algorithms generated, in more than half of the cases, schedules with objective function values better than those of the MINDL and WSPT algorithms, with improvements averaging from 4 to 43%.
6 Conclusion

This research was devoted to the optimization of multimodel federated learning. The paper proposes a new approach to managing a system with independent devices and centralized aggregation. In the proposed method, the system is divided into three levels of control, and an operation algorithm was developed for each of them. The new method solves the three posed problems: sequencing updates on local devices, ensuring the correctness of global updates, and sequencing aggregation when multiple models are ready for it. Along with improving the quality of global models, the new method minimizes delays between updates on local devices. The proposed approach was tested on synthetic data. Various combinations of the proposed method were tested against each other, as well
as with the basic method, in which aggregation starts according to the first-come-first-served rule for models that have received updates from all devices. Testing has shown that the new approach in the maximum configuration and with the SF algorithm reduces, for more than 90% of the examples, the delay on local devices by more than 19% on average. This article is devoted to the description of the new concept, its theoretical justification, and confirmation of its effectiveness by testing on synthetic data. Further research will be devoted to applying this approach to systems with specific machine learning models and to developing rules for calculating various parameters, such as the device weights or the send rate for the Hub.

Acknowledgements. This research is financially supported by the Russian Science Foundation, Agreement 17-71-30029 (https://rscf.ru/en/project/17-7130029/), with co-financing from Bank Saint Petersburg.
References 1. Carlier, J.: The one machine sequencing problem. Eur. J. Oper. Res. 11, 42–47 (1982) 2. Maronna, R., Martin, D., Yohai, V.: Robust Statistics: Theory and Methods. Wiley (2006). https://doi.org/10.1002/0470010940 3. McMahan, H.B., Moore, E., Ramage, D., Hampson, S., Arcas, B.A.Y.: Communication-efficient learning of deep networks from decentralized data (2016). https://doi.org/10.48550/ARXIV.1602.05629 4. Pillutla, K., Kakade, S.M., Harchaoui, Z.: Robust aggregation for federated learning. IEEE Trans. Signal Process. 70, 1142–1154 (2022). https://doi.org/10.1109/TSP.2022.3153135 5. Smith, W.: Various optimizers for single-stage production. Naval Res. Logist. Q. 3, 59–66 (1956) 6. Tarasova, E.: Online algorithm for single machine scheduling problem with deadlines. Actual Sci. Res. Mod. World 7(63)(2), 177–181 (2020). https://www.elibrary.ru/item.asp?id=43808166 7. Tarasova, E., Grigoreva, N.: Accounting for large jobs for a single-processor online model. In: 2022 8th International Conference on Optimization and Applications (ICOA), pp. 1–5 (2022). https://doi.org/10.1109/ICOA55659.2022.9934593 8. Xia, W., Quek, T.Q.S., Guo, K., Wen, W., Yang, H.H., Zhu, H.: Multi-armed bandit-based client scheduling for federated learning. IEEE Trans. Wirel. Commun. 19(11), 7108–7123 (2020). https://doi.org/10.1109/TWC.2020.3008091 9. Yurochkin, M., Agarwal, M., Ghosh, S., Greenewald, K., Hoang, T.N., Khazaeni, Y.: Bayesian nonparametric federated learning of neural networks (2019). https://doi.org/10.48550/ARXIV.1905.12022 10. Zhou, C., Liu, J., Jia, J., Zhou, J., Zhou, Y., Dai, H., Dou, D.: Efficient device scheduling with multi-job federated learning (2021). https://doi.org/10.48550/ARXIV.2112.05928
Ambulance Priority Dispatch Under Multi-Tiered Response by Simulation Kanchala Sudtachat(B) and Nitirut Phongsirimethi School of Manufacturing Engineering, Institute of Engineering, Suranaree University of Technology, 111 University Avenue, Muang, Nakhon Ratchasima 30000, Thailand [email protected]
Abstract. Emergency medical service (EMS) systems are health care systems that provide medical care and transportation of patients to hospitals when needed, thus potentially saving lives. We determine an optimal policy for multiple-unit dispatch with call priorities to increase the overall patient survival probability; our model emphasizes the study of the priority2 call policy. In addition, we present some extensions to the model by considering real on-scene conditions, such as the fact that dispatch decisions for priority2 calls can be changed. We study models of dispatching emergency vehicles under multi-tiered response by considering a specific alternative policy for priority2 calls. Simulation models for multiple-unit dispatch with multiple call priorities are used to investigate the performance of possible ambulance dispatching policies. Simulation is used to investigate the models and obtain the optimal dispatching policy for priority2 calls based on a real-world problem. The results show that the better policy provides an improvement over the closest-dispatch policy of 42 lives saved per 10,000 calls. Keywords: Optimal policy · EMS systems · Patient survival · Multiple unit dispatch · Call priorities · Simulation models
1 Introduction

Medical priority dispatching is used to improve the efficiency of EMS systems. The strategy of medical priority dispatching is to provide a faster response time to life-threatening patients. The study of pre-hospital mortality in EMS systems by Kuisma et al. [1] showed that dispatching a farther ambulance to low-priority patients does not negatively impact pre-hospital mortality rates. Therefore, decisions regarding how to dispatch ambulances do not adversely affect low-priority patients in terms of survival rates, since these patients are non-critical. Medical priority dispatching may make the closest ambulances unavailable to non-serious patients. Emergency calls are classified into three priority levels upon dispatch, and their classification may be updated once a responder reaches the scene and makes a further assessment, based on the research paper of Nicholl et al. [2]. For example, BRAVO calls (priority2) are potentially life-threatening calls that could be upgraded to life-threatening (priority1). In this case, priority2 calls need a paramedic unit and rapid transport.
Considering the decision regarding how to dispatch ambulances based on the assumptions of that work, the optimal policy tended to dispatch nearby ambulances to priority1 calls and a farther ambulance to priority2 calls. However, in this work we allow for priority2 calls that could be upgraded to life-threatening; dispatching a farther ambulance to priority2 calls may then affect the death rate of priority2 calls that are later upgraded. In this work, we extend the model of multiple-unit dispatch from Sudtachat et al. [3]. The proposed simulation model determines how to dispatch a basic life support (BLS) unit, i.e., a non-paramedic unit, for priority2 calls by considering alternative policies based on Sudtachat [4]. In the case of using a single dispatch in response to priority2 calls, we consider the changing situations on scene based on information from the first arriving ambulance. We examine the condition of a BLS upgrade, in which the patients need care from an advanced life support (ALS) unit, i.e., a paramedic unit. In this situation, the BLS unit provides initial care and waits for the arrival of the next available ALS unit. In addition, we examine an ambulance dispatch policy for the ALS unit for priority2 calls that affects the death rate for priority2 calls, and we determine the better dispatch policy for the ALS unit for priority2 calls while balancing the average waiting time between patient priorities. The simulation model of multi-tiered responses is formulated by incorporating equity constraints into the model. We consider EMS systems with multiple-unit dispatch, multiple call priorities, and a zero queue, and we assume that once ambulances complete their service, they return to their original ("home") station. The proposed simulation model determines how to dispatch the BLS unit for priority2 calls by considering two alternative policies, and we extend the model by considering how to dispatch the ALS unit for priority2 calls to improve the outcome of EMS systems. The simulation model of multi-tiered responses is formulated and investigated using real-world data, which makes it applicable to realistic EMS situations.
2 Literature Review

Since the late 1960s, rapid US population growth generated an increasing demand for ambulance services. In 1967, the study of EMS systems began with determining the distribution and workload of the existing systems. King and Sox [5] were the first to conduct a study evaluating the workload of EMS systems to improve performance. In 1972, EMS systems were analyzed in a study of dispatching models to minimize average response time, as seen in Carter et al. [6]. This study considered two ambulance units that were dispatched to each call, given the different locations of the units; the study then determined the district boundary within which each unit responds to calls. EMS planners then studied the number and types of ambulances to deploy to certain locations, as seen in Eaton et al. [7]. This study researched how to design the EMS
systems to reduce cost. The idea of studying dispatching policies was proposed by Repede and Bernardo [8], who evaluated two alternative dispatch policies, the first of which always sends the closest ambulance to the call. Lim et al. [9] studied the impact of dispatch policies on the performance of EMS systems; the effect of dispatch strategies on performance is based on a code determining the urgency of calls. Recent studies considering fairness among demand zones were presented by Chanta et al. [10]. Several previous works relevant to fairness analyzed models without taking the real on-scene conditions of accidents into account. Zarkeshzadeh et al. [11] considered the improvement of ambulance dispatching by using a novel hybrid method based on a network centrality measure and the nearest-neighbor method. Nicoletta et al. [12] studied ambulance location and dispatching models; they formulated the model and validated its robustness. Enayati et al. [13] formulated an integrated redeployment and dispatching model and studied it under a personnel workload limitation. In this work, we extend the multiple-unit dispatch with multiple call priorities proposed in Sudtachat et al. [3] and [4]. However, our work differs in that we consider the realistic on-scene condition that potentially life-threatening calls might need the paramedic unit.
3 Model Description
In this section, we discuss the EMS systems, which extend the original model in Sudtachat et al. [3]. This paper proposes multiple-unit dispatch for EMS systems while considering on-scene conditions. The systems have three call priorities and two types of ambulances (ALS and BLS units). The response area is partitioned into demand zones, each with a distinct dispatch preference list. When a call arrives at the dispatch center, the dispatch planners decide which ambulances to assign to the call according to the preference lists; if all ambulances in the preference list are busy, the call is transferred to another dispatch center. The classification of call priorities is also considered in this paper, and the dispatching of different types of ambulances depends on the call priority. The characteristics of the EMS systems described in Sudtachat et al. [3] are that priority1 calls require a double dispatch of an ALS unit and a BLS unit, while a single dispatch assigns a BLS unit to respond to priority2 or 3 calls. In this paper, we consider the dispatching policy for priority2 calls. The configuration of the EMS system process with BLS upgrades of priority2 calls is described in Fig. 1. The main assumptions for priority1 and 3 calls are the same as in the original study in Sudtachat et al. [3], except for the on-scene upgrades/downgrades of priority2 calls. The adapted models of possible on-scene situations for priority2 calls are:
• Vehicle dispatch decisions: Priority2 calls require a single dispatch (a BLS unit). We dispatch the available BLS unit for priority2 calls according to two possible policies, those of priority1 (closest policy) or priority3 (heuristic policy) calls, where the inputs for the dispatching policies of priority1 or 3 calls are based on the results of Sudtachat et al. [3]. To obtain high efficiency of the EMS systems, we compare the two alternative policies and dispatch the BLS unit using the policy that provides the better overall expected survival probability for life-threatening patients.
If the first ambulance in the ordered preference list is busy, the next one is dispatched; if all BLS units are busy, the call is transferred to another dispatch center (a minimal sketch of this dispatch logic is given after this list).
• On-scene: If the patient requires BLS care on scene at a priority2 call, the BLS unit serves the call and then returns to its home station. However, if the patient requires ALS care, as judged by the BLS personnel, the BLS unit provides initial care, waits for the ALS unit to determine whether the patient needs transportation to a hospital, and then heads back to its original station. The ALS unit is dispatched according to the available ALS units in the ordered preference list for priority2 calls. We refer to this as a BLS upgrade.
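The following minimal sketch illustrates this preference-list dispatch rule; the data representation and names are our illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of preference-list dispatch (assumed representation).
IDLE = 0  # integer status codes follow Table 2 below

def dispatch_unit(preference_list, status):
    """Return the first idle unit in preference order, or None to transfer."""
    for unit in preference_list:      # scan units in the zone's ranked order
        if status[unit] == IDLE:
            return unit
    return None                       # all listed units busy -> transfer call

# Example: a zone whose BLS preference list ranks units 5, 4, 6
status = {4: 3, 5: 0, 6: 1}           # units 4 and 6 busy, unit 5 idle
assert dispatch_unit([5, 4, 6], status) == 5
```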
Fig. 1. The EMS system process with BLS upgrade for priority2 calls
Table 1. Types of calls, types of ambulances and their corresponding dispatching policies

Types of call | ALS unit        | BLS unit
Priority1     | Closest policy  | Closest policy
Priority2     | Closest policy  | Policy of priority1 (closest) or priority3 (heuristic)
Priority3     | Not needed      | Heuristic policy
The dispatch of different types of ambulances depends on the severity level of the calls. We introduce a specific dispatching preference list for each call priority and each required ambulance type. Table 1 shows the policies for dispatching ambulances for each call priority. Different performance measures are considered when we decide how to dispatch the BLS unit for priority2 calls that should be treated like priority1 or 3 calls. The objective is to maximize the expected survival probability by comparing the two alternative policies. We consider the expected survival probability of patients as a function of the response time to priority1 calls, as discussed in Sudtachat et al. [3].
3.1 A Simulation Model
The simulation models are implemented using Arena Version 14. The designed simulation models are then used to investigate the performance of a given dispatching policy. The status of the EMS systems is described by the state of each ambulance.
The states can be "idle" (at the station base), "busy" (on the way to respond to a call), or "busy" (serving a call and providing transportation). Table 2 shows the state space of the EMS systems; integer numbers represent the status of the ambulances. We generate different modules to dispatch ambulances according to the attributes of the calls (priority and location). When a call arrives, we assign it a call priority and a demand zone. The dispatch centers then decide which units to dispatch depending on the call priority, and we assign a status to the dispatched ambulances according to the states shown in Table 2. A double dispatch assigns a pair of ambulances. When the first ambulance arrives on the scene, we calculate the survival rate using the response time of the first ambulance to the priority1 call; the survival probability is then calculated using Eq. (1) in Sudtachat et al. [3]. For a single dispatch to a priority2 call (status 3), when the BLS unit arrives on the scene of the accident and the patient needs ALS care, its state becomes "waiting" for another unit (status 4). When the ALS unit arrives on scene, the status of both ambulances changes to "busy" (offering service to patients). After the ambulances provide service to the patients and go back to their original stations, their state becomes "idle" again. We investigate the better dispatch policy once the EMS systems reach steady state.

Table 2. The status of ambulances in EMS systems

Indicator σj | Status of ambulance
j ∈ [1, …, J] (ALS):
  0 | Idle at base
  1 | Double dispatch of ALS for priority1 calls
  2 | Only the ALS unit dispatched to respond to priority1 calls
  5 | ALS unit dispatched to priority1 or 2 calls following a BLS unit which was sent when no ALS units were available
j ∈ [J+1, …, J+K] (BLS):
  0 | Idle at base
  1 | Double dispatch of BLS for priority1 calls
  3 | BLS unit dispatched to respond to priority2 or 3 calls
  4 | Only the BLS unit dispatched and waiting for an ALS unit to respond to priority1 calls; waiting for an ALS unit to respond to priority2 calls
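The survival computation referred to above uses Eq. (1) of [3], which we do not reproduce here; purely as an illustration, a logistic decay of the kind common in the EMS literature can stand in for it (the coefficients below are placeholders, not the values from [3]).

```python
# Hedged illustration: survival probability decreasing in response time.
import math

def survival_probability(response_time_min, a=0.679, b=0.262):
    # a, b are illustrative placeholders of the kind used in the general
    # EMS literature, not the actual coefficients of Eq. (1) in [3].
    return 1.0 / (1.0 + math.exp(a + b * response_time_min))

print(round(survival_probability(7.0), 3))  # shorter response -> higher survival
```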
The simulation models analyze different dispatching policies and evaluate the patient survival probability. When a priority2 call arrives in the system, we dispatch the BLS unit according to a dispatching policy like that for priority1 or priority3 calls. When the BLS unit arrives on the scene of an accident, the dispatchers decide whether to upgrade. If there is no upgrade, the BLS unit provides care to the patient and then returns to its home station. If a BLS upgrade occurs, we dispatch the ALS unit according to the policy where the closest ALS unit is always sent; if all ALS units are busy, the BLS unit provides initial care and waits for the next available ALS unit. The simulation models are assumed to operate 24 h per day.
In this study, we investigate the better policy for dispatching the BLS unit for priority2 calls, treating it like the policy for either priority1 or priority3 calls. The Process Analyzer in Arena Version 14 is used to identify the better policy. Each simulation runs 1800 replications, yielding a 95% confidence-interval half-width of 0.0001 around the survival probability, and each replication runs 10 weeks so that results reach steady state. The performance of the two alternative policies is compared to obtain the better policy.
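As a minimal sketch of the precision criterion just stated (the function names are ours, not Arena's), the 95% confidence-interval half-width across replication outputs can be checked as follows.

```python
# Sketch: check the 95% CI half-width of the estimated survival probability.
import math
import statistics

def ci_half_width(samples, z=1.96):
    """Half-width of the 95% CI of the mean (normal approximation)."""
    return z * statistics.stdev(samples) / math.sqrt(len(samples))

def precise_enough(samples, target=0.0001):
    return ci_half_width(samples) <= target  # e.g., over 1800 replications
```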
4 Computational Results for Dispatching of Priority2 Calls
In this section, we investigate the alternative policies using real-world data collected at the Hanover Fire and EMS department. The system operates 24 hours per day. The data set contains the response times and transportation times from 4 stations to 12 demand zones, shown in Table 3; the service times are shown in Table 4. We study the performance of systems in which the numbers of ALS and BLS units are each fixed at three. The units are located as follows: ALS1 at Station4, ALS2 at Station1, and ALS3 at Station3; BLS4 at Station4, BLS5 at Station1, and BLS6 at Station1. There are three priorities, and the proportion of call priorities depends on the demand zone. Following Sudtachat et al. [3], we fixed the closest policy for priority1 calls and the heuristic policy for priority3 calls; note that the heuristic policy was obtained from the results in Sudtachat et al. [3]. We study the policy for priority2 calls, which can be treated like priority1 or priority3 calls, by varying the percentage of BLS upgrades for priority2 calls. As in the previous study, the objective is to maximize the patient survival probability. Table 5 shows the comparison of the two alternative policies with the closest policy. Underlined values indicate the expected survival probability under the closest policy, and bold values indicate the expected survival probability under the better dispatching policy for each case. When the proportions of priority1 and 3 calls were close to balanced, the better policy for priority2 calls was to treat them like priority3 calls; however, when the system served a higher demand rate, such as 1.25 calls per hour, the better policy was to treat them like priority1 calls. In Fig. 2 we compare the two alternative policies and the closest policy. There were only slight differences in the performance for priority2 calls when they were treated like priority1 or 3 calls, and the results indicated only a slight impact on the number of lives saved. When the percentage of BLS upgrades was changed, the trends in the graphs showed no difference between upgrades at 20 and 30 percent. We observed that multiple-unit dispatch for priority2 calls according to the heuristic policy could increase the patient survival probability compared with the closest policy. We also observed the busy probability of each server: the better policy for priority2 calls changed from treating them like priority3 calls to treating them like priority1 calls when the busy probability of each server approached one, in which case there was no difference between the closest policy and the heuristic policy.
5 Conclusion
We analyzed a simulation model for multiple-unit dispatch in EMS systems. We consider classifications of call priorities and two types of ambulances. The simulation model is formulated for a given dispatching policy and can be implemented for realistic EMS system situations.
Table 3. Response times (Lognormal distribution) and proportion of calls

Demand zone | Call proportion | Station 1       | Station 2       | Station 3       | Station 4
Zone1       | 0.226034        | (25.71, 23.89)  | (14.904, 13.85) | (25.712, 23.89) | (15.776, 14.66)
Zone2       | 0.019513        | (18.38, 17.08)  | (14.624, 13.59) | (15.816, 14.70) | (15.752, 14.63)
Zone3       | 0.060281        | (10.08, 9.36)   | (15.696, 14.59) | (11.704, 10.87) | (10.16, 9.44)
Zone4       | 0.043914        | (25.71, 23.89)  | (25.712, 23.89) | (21.872, 20.32) | (16.72, 15.53)
Zone5       | 0.02657         | (15.06, 13.99)  | (25.712, 23.89) | (10.992, 10.21) | (20.632, 19.17)
Zone6       | 0.09327         | (20.02, 18.61)  | (7.88, 7.32)    | (11.344, 10.54) | (12.032, 11.18)
Zone7       | 0.326744        | (8.06, 7.48)    | (13.056, 12.13) | (25.712, 23.89) | (12.472, 11.59)
Zone8       | 0.065128        | (13.51, 12.56)  | (19.648, 18.26) | (13.728, 12.76) | (22.752, 21.14)
Zone9       | 0.007525        | (21.01, 19.52)  | (25.712, 23.89) | (20.856, 19.38) | (12.312, 11.44)
Zone10      | 0.077626        | (18.98, 17.64)  | (7.936, 7.38)   | (10.736, 9.97)  | (15.072, 14.01)
Zone11      | 0.029886        | (25.71, 23.89)  | (25.712, 23.89) | (15.896, 14.77) | (25.712, 23.89)
Zone12      | 0.023509        | (13.42, 12.47)  | (12.344, 11.47) | (10.704, 9.95)  | (6.424, 5.97)
Table 4. Service times (Exponential distribution) and proportion of priority1, 2 and 3 calls

Demand zone | Proportion of priority1 calls | Proportion of priority2 calls | Proportion of priority3 calls | Service time (priority1) | Service time (priority2, 3)
Zone1       | 0.394 | 0.098 | 0.508 | 67.07  | 60.24
Zone2       | 0.452 | 0.113 | 0.435 | 100.32 | 90.29
Zone3       | 0.394 | 0.098 | 0.508 | 62.44  | 55.86
Zone4       | 0.425 | 0.106 | 0.469 | 66.90  | 59.42
Zone5       | 0.409 | 0.102 | 0.489 | 65.25  | 57.76
Zone6       | 0.404 | 0.101 | 0.495 | 56.32  | 49.78
Zone7       | 0.443 | 0.111 | 0.446 | 54.18  | 48.36
Zone8       | 0.438 | 0.109 | 0.453 | 84.42  | 75.5
Zone9       | 0.417 | 0.104 | 0.479 | 104.31 | 92.93
Zone10      | 0.442 | 0.111 | 0.447 | 58.27  | 51.82
Zone11      | 0.434 | 0.109 | 0.457 | 81.38  | 72.32
Zone12      | 0.446 | 0.112 | 0.442 | 59.60  | 52.49
We consider the dispatching policy of BLS units based on possible on-scene conditions that can change for priority2 calls, and we compare two alternative dispatching policies of BLS units for priority2 calls. Numerical results showed that dispatching the BLS unit for priority2 calls as if they were priority3 calls (the heuristic policy) provided an improvement over the closest dispatching policy. When the average busy probability of the servers was over 80 percent, there was no difference between the heuristic and the closest dispatching policy for priority2 calls. This suggests a managerial insight for EMS administrators: the better priority2 policies can be pre-installed in the dispatch system and, integrated with GPS-based monitoring of the ambulances, used to choose the proper ambulance for each situation. In future work, we will expand the dispatching model to consider the location of ambulances, which can further increase the expected survival probability of EMS systems.
Table 5. Comparison of the two alternative policies (priority2 treated like priority1 or like priority3) with the closest policy, for arrival rates of 0.25, 0.50, 0.75, 1.00, and 1.25 calls/hr (scenario IDs 1–5). For each policy, the recoverable columns report the average response times to priority1 calls (about 7.2–11.8 mins), priority2 calls (about 12.9–17.7 mins), and priority3 calls (about 13.3–17.7 mins), the fraction of priority1 calls covered within 9 mins (0.60–0.74), and the fraction of priority2 calls covered (0.58–0.73).
…Wheat) [AND] 'Classification' < OR > 'Sorting'

In order to find important papers using machine learning approaches, we lastly utilized the following search term:

(Machine learning < OR > Deep learning < OR > Neural network) [AND] Image processing

The process of choosing studies was done in two stages: the first is primary selection, and the second is the final selection.

3.1 Primary Selection
The primary sources were first selected based on their titles, keywords, and abstracts; when these three criteria proved insufficient, the evaluation was extended to the conclusions section. This phase yielded 222 publications, including conference papers, journal articles, summaries, books, symposium papers, and other pieces of literature.

3.2 Final Choice
The potential of each research paper was evaluated using several criteria, such as the breadth, significance, and future research influence of the work. Table 1 displays the inclusion/exclusion criteria that were applied to accept or reject papers.

3.3 Publication Distribution
Choosing reliable research articles is a crucial step in preparing a survey report; not every article published in a given field is of high caliber. So that our survey covers both the most recent research and earlier research efforts in the fields of seed categorization, DNN, CNN, and image processing, we chose noteworthy research pieces from reputable journals published in five different time frames. Table 2 provides a chronological overview of the work carried out across these periods for the readers' perusal. We chose six publications on seed classification, three papers on DNN/CNN, and two papers on image processing systems from a variety of publishers, including ACM, ResearchGate, MDPI, Hindawi, ScienceDirect, IEEE, and others. Figure 2 depicts the distribution of the 11 finally selected papers by data source.
Table 1. Evolution of seed classification using DNNs.

Publication year | Publications
2018 | Qiu et al. [1], Kurtulmuş et al. [2], Parnian et al. [3]
2019 | Eldem [5], Sheng et al. [6], Nindam et al. [7]
2020 | Yonis et al. [8], Medeiros et al. [9], Ahmed et al. [10], Javanmardi et al. [11]
2021 | Ebrahimi et al. [12], Koklu et al. [13], Sun et al. [14], Tuğrul et al. [15]
2022 | Onmankhong et al. [16], Dyrmann et al. [4]
Fig. 2. Publication distribution based on data source.
4 Detailed Review of Papers
The discussion in this section is based primarily on the previously published research papers and focuses on the contribution, methodology, implementation, and evaluation of each paper.

A deep neural network (DNN) [5] with deep layers performs better for big-data classification. In order to classify wheat seeds, this research uses the wheat seed dataset from the UCI Machine Learning Repository; the Kama, Rosa, and Canadian varieties together yield a grand total of 210 instances. The model uses 70% of the data for training and 30% for testing. Classification with the developed model was 100% successful: the proposed model classifies 100% of the UCI wheat seed data and a synthetically created dataset. In [1], deep learning distinguishes between four different types of single rice seeds. The four types of rice seeds were photographed using hyperspectral imaging in the 380–1030 nm and 874–1734 nm spectral regions. Three different models were available for training (KNN, SVM, CNN).
KNN, a widely used pattern-recognition technique, was employed first; it classifies an unknown sample by its distances to the predetermined training set of samples. SVM was then used for comparison with KNN and CNN. Support vector machines are likewise a popular pattern-recognition technique: the data are mapped into a feature space in which a separating boundary is constructed to maximize the margin between the nearest examples of the distinct classes. In addition to KNN and SVM, convolutional neural networks were employed for pattern recognition on the same datasets. Applying all three approaches to this dataset showed that CNN outperformed the other two, correctly categorizing all four varieties. In [8], powerful deep-learning methods were applied to the classification of 14 well-known seed types. The technique comprises model checkpointing, a decayed learning rate, and hybrid weight adjustment. Seed classification can offer further information for impurity detection, seed quality control, and quality production. To determine which method performs better on this dataset of 14 common seeds, deep-learning algorithms based on the CNN method were deployed. The classification accuracy on the training set was approximately 99%. With the proposed model, a provided image of a mustard seed was correctly classified as a mustard seed, establishing a true-positive basis for the prototype; in the second stage, examples of other positive, negative, and correct-negative findings were also examined, and the model was continuously monitored. After all 14 seed types were classified with the suggested model, it was evident that the model classifies all seeds correctly with an accuracy of 99%, matching the roughly 99% training accuracy of the early phase. In [11], a deep convolutional neural network (CNN) is employed as a general feature extractor, presenting a novel technique: the features extracted by the CNN are classified with techniques such as artificial neural networks (ANN), cubic SVM, quadratic SVM, and weighted k-nearest neighbors. Through numerous tests, this research shows that the CNN-ANN combination is the most effective, achieving 98.1% classification accuracy with comparable figures for speed, memory use, and false-positive rate. The results of the investigation show that the CNN-ANN classifier can be effectively used to sort the many available varieties of corn (maize) seeds. According to [2], a classification method for differentiating pepper seeds has been put forward on the basis of neural networks and computer vision. Images with a 1200 dpi resolution were recorded using an office scanner, and color, shape, and texture features of the pepper seeds were gathered in order to categorize them.
The features were calculated from different color components and stored in a feature database. A neural network approach was applied for classification, and the number of features was reduced from 257 to 10 so that the procedures could accurately classify this pepper seed dataset. Cross-validation rules were also utilized.
The number of nodes varies from implementation to implementation, as described in the methodology. Multiple training procedures were applied to choose the optimal multilayer perceptron; the best performance was obtained by a network with 30 neurons in its hidden layer, yielding an accuracy rate of 84.94%, i.e., it classifies correctly about 85% of the time. Notably, this method was applied to classify eight different kinds of pepper seeds. In Medeiros et al. [9], machine learning is used to group soybean seeds by appearance and to predict the vigor of soybean seedlings; there is a strong relationship between how soybean seeds look and how well they grow. Seed quality depends greatly on the weather and on post-harvest handling, such as hand threshing or machine drying, and seed performance can be hurt by mechanical damage, pathogen attacks, and other factors. Much has been done to prevent this damage and improve the seeds, and it is especially important to identify low-quality seed lots. Interactive machine learning (IML) refers to a family of approaches, used here with computer vision, that encourages human-machine collaboration; IML can be useful when traditional machine-learning methods do not work well for a problem, especially with small or complex datasets. An HP Scanjet 200 scanner, placed upside down in an aluminum box, was used to take pictures of the soybean seedlings. The seeds were put into seven groups based on their appearance, health, and performance, and image analysis of each seedling picture was used to assess seed vigor. Image enhancement and classification were performed using the pixel classifier in the Ilastik software. Two pixel classes were described, "grain or sapling" and "background"; color, boundary, and texture feature parameters were computed with filter sigma values from 0.3 to 1.0 to identify the characteristics of the plants. Sensitivities higher than 0.97 were achieved only by the BRS and SCT classes, while the other classes were less reliable on their own; an estimated 21% of one vigor class was mislabeled as the MDS class, rising to 30% when the confusion went in the opposite direction. Low physiological quality is revealed by changes in the seeds' chlorophyll, fungal stains, and handling damage. Soybeans can thus be accurately classified by appearance using interactive supervised learning, which works well for finding damaged seeds and sorting seedlings by vigor. In [10], both classic ML and pattern recognition were used to categorize watermelon seeds, with X-ray methods employed to examine the interior characteristics of the seeds. The dataset comprises numerous images of watermelon seeds, which underwent various preprocessing procedures. Three ML/DNN techniques were employed to classify the seeds (LDA, QDA, and KNN), and sequential forward selection (SFS) was utilized to optimize the features. To determine which algorithm works better on this dataset, comparisons were made among LDA, QDA, and KNN.
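As an illustration of this kind of comparison, the sketch below cross-validates LDA, QDA, and KNN with scikit-learn; the feature matrix X and labels y are assumed to be prepared elsewhere and are not from the reviewed paper.

```python
# Sketch of the LDA vs. QDA vs. KNN comparison (data preparation assumed).
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def compare_classifiers(X, y):
    candidates = [("LDA", LinearDiscriminantAnalysis()),
                  ("QDA", QuadraticDiscriminantAnalysis()),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]
    for name, clf in candidates:
        acc = cross_val_score(clf, X, y, cv=5).mean()  # 5-fold CV accuracy
        print(f"{name}: {acc:.3f}")
```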
After the system finished classifying the input, the findings were reported: the accuracy of LDA was 83.6%, of QDA 80.8%, and of KNN 63.7%. For the purpose of seed quality inspection based on morphology, the accuracies of plain VGG-19, a ConvNet, ResNet-101, AlexNet, and ResNet-50 transfer-learning architectures were also compared.
With an accuracy of 87.3%, ResNet-50 outperformed the other transfer-learning architectures, so it is safe to say that X-ray scanning has promise as an imaging method for identifying and categorizing seeds based on morphological characteristics. In [14], multispectral imaging techniques were used to perform discriminant analysis of 15 different cultivars of eggplant seeds; 78 features of individual eggplant seeds were obtained from the multispectral images. The overall approach in this paper is to classify eggplant seeds using SVM and a 1D CNN. The support vector machine model achieves an accuracy of 90.12–100%, while the one-dimensional CNN achieves 94.80%. A 2D CNN was also used to distinguish seed varieties, achieving 90.67% accuracy. All results suggest that genetic and environmental factors may cause the seed coat to differ from that of the female parent. According to [6], deep learning has recently achieved great success in the field of image recognition. The purpose of that study is to evaluate the efficacy of CNNs in grading seed quality in comparison with more conventional ML techniques. A quality categorization of seeds was established and compared with a conventional pipeline: on the dataset connected to this paper, deep learning (CNN) performs with an accuracy of 95% immediately after training, while SVM with SURF features performs at 79.2%. Visualizations of the features extracted at each CNN layer were generated, and a scatter plot was used to display the probabilities of the decisions. From here, CNNs can be used to automate seed production. In [13], ANN, DNN, and CNN classification results are presented. The study's dataset includes 75,000 rice grain photos of five varieties: Ipsala, Karacadag, Arborio, Basmati, and Jasmine. Extracted features were fed to the ANN and DNN, while the CNN was fed the dataset images directly. Sensitivity, specificity, precision, F1 measure, accuracy, false-positive rate, and false-negative rate, together with the network designs, were supplied in tables; the runtimes of these methods depend on the hardware used. ANN, DNN, and CNN all achieved 99.87% classification accuracy, so the study's algorithms for classifying rice varieties may be successfully utilized. Image processing and computer vision are non-destructive and cheaper in agriculture, and image-processing-based computer vision applications beat manual methods (Barbedo 2016). Manually classifying grains is time-consuming and costly, manual procedures rely on professionals' experience, and large-scale manual evaluations can be slow. Rice quality is rated using digital picture attributes, for example measurements of length, breadth, brightness, and the frequency with which rice grains break; such grain characteristics may be extracted from images. Classification uses ANN, SVM, DNN, and CNN. This effort intends to enhance rice classification non-destructively.
The ANN and DNN classified 106 rice pictures based on morphological and color features, while the CNN classified 75,000 rice pictures from 5 classes without preprocessing, demonstrating the classification success of ANN, DNN, and CNN.
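The metrics named in the review above follow directly from a confusion matrix; the sketch below shows the standard definitions with made-up counts.

```python
# Standard classification metrics from a binary confusion matrix.
def classification_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)          # recall / true-positive rate
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    false_alarm = fp / (fp + tn)          # false-positive rate
    false_negative = fn / (fn + tp)
    return dict(sensitivity=sensitivity, specificity=specificity,
                precision=precision, accuracy=accuracy, f1=f1,
                false_alarm=false_alarm, false_negative=false_negative)

print(classification_metrics(tp=95, fp=5, tn=90, fn=10))  # made-up counts
```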
In [15], the five varieties of rice most prevalent in Turkey were used to perform this work. To achieve the best results, a CNN model was created; the collection contained photographs of the five types of rice seeds. Residual Network (ResNet), EfficientNet, and Visual Geometry Group (VGG) architectures were employed for additional comparison, and when the models were compared it became clear that the VGG model performed best, with 97% accuracy. In [12], classification at the wheat-variety level is done using a standard deep-learning method: CNNs were utilized to categorize single-grain images of wheat seeds into four varieties (HD, Arz, Vitron, and Simeto). Pretrained CNNs were fine-tuned, which boosts the classification performance. The model used a dataset of 31,606 single-grain images, collected from different regions of Algeria and captured with different scanners. After preprocessing, the results reveal that DenseNet201 achieves the best accuracy of 95.68%, with the evaluated models spanning 85.36–95.68%, so the proposed model gives a reliable result. In [7], convolutional neural networks were used in the classification stage, seven days after fertilization, to separate abnormal from normal jasmine rice seedlings. To evaluate the suggested model, 1562 sample photos of jasmine rice seeds were used as the dataset, containing about 76 mixtures of regular and atypical seeds; 75% of the images were used for the training set and the remaining 25% were kept aside for testing. Six CNN hidden layers were constructed after preprocessing, with label 0 denoting a normal and 1 an abnormal seedling. Following all the procedures, the CNN achieves a very respectable accuracy of 99.57%. In [3], after wheat is harvested, the seeds must be sorted by quality, size, variety, and so on, and measuring and analyzing wheat seeds manually is time-consuming and error-prone. The system leverages the well-known K-means clustering technique to assess and categorize wheat seeds more quickly and with better confidence. K-means is based on squared errors: given a set of data points, it divides them into k groups according to their proximity to the cluster centers, placing each point in the group whose center is geographically nearest. K-means clustering is applied to the wheat sample from the UCI Machine Learning Repository. Smart systems sense, act, and control: they can evaluate a situation, make decisions based on facts, and perform suitable tasks, and they include sensors, central control units, information transmitters, and actuators. Agriculture uses smart systems, for example to categorize wheat seeds by quality and other characteristics. As each item is described by certain attributes, attribute differences may be used to categorize objects; each attribute counts as a dimension, so an object is a multi-dimensional attribute vector, and the aim is to place an object in the group with the most comparable attribute values. K-means is the most popular, recognized, and commonly used technique for clustering a dataset of objects into a specified number of groups, distributing the data into k clusters without supervision.
Each cluster is represented by a centroid, and each data point is assigned to the cluster whose centroid is closest to it. Results of testing the system on the UCI Machine Learning Repository seeds dataset are reported: the dataset is a 210 × 7 matrix grouped into 3 clusters. K-means takes the dataset and the number of clusters as inputs (optional parameters also exist); as there are three clusters, three centroids are chosen at random.
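A minimal sketch of this K-means workflow follows; the file name and loading details are our assumptions for illustration, matching the 210 × 7 layout of the UCI seeds data.

```python
# Sketch of K-means on the UCI seeds layout (210 samples x 7 features).
import numpy as np
from sklearn.cluster import KMeans

X = np.loadtxt("seeds_dataset.txt", usecols=range(7))  # assumed file name

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)        # assign each seed to nearest centroid

print(kmeans.cluster_centers_.shape)  # (3, 7): one centroid per cluster
```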
A technique for grouping wheat seeds is thus proposed, and experiments with the UCI machine learning repository wheat dataset indicate high accuracy and success: the system clusters the seeds nearly perfectly, and K-means is quick, efficient, and cheap. In [4], the network was trained and evaluated on 10,413 images of early-stage weeds and crops drawn from six separate data sets that vary in illumination, resolution, and soil type, including images shot with handheld smartphones in areas with varying material and lighting conditions; the network classifies these 22 species with 86.2% accuracy. Mechanical weed-control techniques require the precise position of crop plants, whereas herbicide-optimized approaches require knowledge of the weed species; by employing accurate information about weed species, development phases, and plant densities, pesticide use may be decreased by roughly 40% on average. Image processing was utilized to identify weeds and crops, and this study demonstrates how to teach a DNN to distinguish between various plant species. A CNN builds a hierarchy of features from less abstract components in the earlier layers; thanks to these learned features, the CNN is less sensitive to environmental conditions such as illumination, shadows, crooked leaves, and obscured plants. Segmenting soil and plants is not necessary for this categorization strategy, and CNNs quickly learn different plant species since they can discover visual properties by themselves. In [16], a system was developed for examining different rice kinds using convolutional neural networks; without any data preparation, a deep CNN extracted spatio-spectral features. Whereas the existing common classification approaches based on SVM give lower accuracy, the proposed ResNetB model gives greater accuracy: 91.09%, versus 79.23% for SVM.
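To make the CNN pipeline that recurs in these reviews concrete, the following is a minimal sketch of a seed-image classifier; the architecture, input size, and class count are illustrative assumptions rather than any reviewed paper's exact network.

```python
# Minimal illustrative CNN for seed-image classification (Keras).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # e.g., five rice varieties

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),       # RGB seed image (assumed size)
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_split=0.25, epochs=10)
```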
5 Discussions
The global population increase has coincided with strong growth in the agriculture industry, so a greater variety of products and higher-quality goods are expected to be produced. Under this assumption, classifying these products becomes the most pressing issue. Using features such as product class, size, clarity, brightness, texture, product images, and product colors, the growing number of products can be classified. In this regard, goods have been categorized based on the detection of diseased products, detection of freshness, detection of weeds, counting of products, edge characteristics, and textures. In the past few years, applications using deep artificial neural networks have multiplied. Deep learning is frequently utilized for numerous tasks, such as categorization, analysis, image processing, image captioning, sound processing, question answering, and language translation. In addition, DNNs permit the interpretation of additional samples using previously gathered data and compare favorably with conventional neural network techniques. Deep neural network topologies and supervised and unsupervised machine learning algorithms can generate accurate categorization outcomes.

5.1 Answering the Research Questions
This section contains the answers to our research questions:
1. What kinds of deep learning or machine learning algorithms were employed?
Answer: A few well-known machine learning and deep learning techniques were used in the research we evaluated, essentially all serving classification; consequently, our problem is likewise classification-related. Support vector machines (SVM) [17, 18], artificial neural networks (ANN) [3, 4, 16, 19], convolutional neural networks (CNN) [13, 20–23], deep neural networks (DNN) [24, 25], k-nearest neighbor (KNN), Gaussian Naive Bayes, linear regression, and decision trees [26] are some examples of such methods [27–38].
2. How can these algorithms be used to improve the models?
Answer: All of these models have advantages and disadvantages. Studying the drawbacks and applying the advantages more effectively can improve a model and achieve more accurate results; improvement over the current state is sometimes, but not always, attainable (it varies by model).
Table 2. Summary of the reviewed papers: major contribution, dataset, and implemented model.

Paper | Major Contribution | Dataset | Implemented Model
Eldem [5] | Classification of wheat seeds | 210 instances of wheat seed | Deep neural network, deep learning, linear regression, Naïve Bayes
Qiu et al. [1] | Classification of four rice seed varieties | 4 types of rice seeds photographed using hyperspectral imaging | K-nearest neighbor, support vector machine, convolutional neural network
Yonis et al. [8] | Classification of 14 types of common seeds using powerful deep learning methods | 14 types of common seeds | Convolutional neural network
Javanmardi et al. [11] | Classification of corn seed varieties | Corn seeds | SVM, ANN, KNN, CNN, weighted K-nearest neighbor, cubic support vector, deep neural network
Kurtulmuş et al. [2] | Differentiating varieties of pepper seeds | 257 instances of pepper seed | Artificial neural network, deep learning
Medeiros et al. [9] | Classification of major varieties of soybean seedlings | Soybean seedlings dataset | Pixel classification tools (Ilastik), interactive machine learning
Ahmed et al. [10] | Categorization of watermelon seeds | Images of watermelon seeds | LDA, QDA, ANN, DNN
Sun et al. [14] | Multispectral image classification of eggplant seeds | Fifteen different types of eggplant seeds | SVM, 1D-CNN, 2D-CNN
Sheng et al. [6] | Traditional machine learning and deep learning used to categorize seed lots by quality | Maize seed | Deep learning, convolutional neural network, support vector machine, SURF
Koklu et al. [13] | Classification of rice varieties | Dataset containing 75,000 rice grain photos | ANN, DNN, CNN
Tuğrul et al. [15] | Classification of five different Turkish rice seeds | Photographs of five different types of rice seeds | Convolutional neural network, ResNet, EfficientNet, VGG
Ebrahimi et al. [12] | Deep learning applied to identifying wheat varieties | Single-grain images | Convolutional neural network, DenseNet201
Nindam et al. [7] | Classification and data collection of jasmine rice | 1562 sample photos of jasmine rice seed | Convolutional neural network
Parnian et al. [3] | Autonomous classification of wheat seed | Clustered seeds dataset | K-means, UCI machine learning repository
Dyrmann et al. [4] | Plant species classification | Early-stage weeds and crops | Convolutional neural network
Jiraporn et al. [16] | Classification of rice varieties using cognitive spectroscopy | Rice seeds dataset | CNN, ResNetB, SVM
6 Conclusion
The goals of seed classification include preserving seed quality, providing excellent seeds to the general population, and accelerating growth. Classification also guarantees the quality and purity of the components found inside seeds and provides great assistance for seed growth. Accordingly, 40 papers dated from before 2018 through 2022 are referenced in this study; based on each author's contribution to the field of seed categorization, we selected 17 of those 40 publications as our top picks.
References
1. Qiu, Z., et al.: Variety identification of single rice seed using hyperspectral imaging combined with convolutional neural network. Appl. Sci. 8(2), 212, MDPI AG, Jan 2018. https://doi.org/10.3390/app8020212
2. Kurtulmuş, F., et al.: Classification of pepper seeds using machine vision based on neural network. Int. J. Agric. Biol. Eng. 9(1) (2016). https://doi.org/10.25165/ijabe.v9i1.1790
3. Parnian, A.R., Javidan, R.: Autonomous wheat seed type classifier system. Int. J. Comput. Appl. 96(12), 14–17, June 2014, Foundation of Computer Science. https://doi.org/10.5120/16845-6702
4. Dyrmann, M., et al.: Plant species classification using deep convolutional neural network. Biosyst. Eng. 151, 72–80, Nov 2016, Elsevier BV. https://doi.org/10.1016/j.biosystemseng.2016.08.024
5. Eldem, A.: An application of deep neural network for classification of wheat seeds. Avrupa Bilim ve Teknoloji Dergisi 19 (2020). https://doi.org/10.31590/ejosat.719048
6. Huang, S., et al.: Research on classification method of maize seed defect based on machine vision. J. Sens. 2019, 1–9, Nov 2019, Hindawi Limited. https://doi.org/10.1155/2019/2716975
7. Nindam, S., et al.: Collection and classification of jasmine rice germination using convolutional neural networks. In: Proceedings of the International Symposium on Information Technology Convergence (ISITC 2019) (2019)
8. Gulzar, Y., et al.: A convolution neural network-based seed classification system. Symmetry 12(12), 2018, MDPI AG, Dec 2020. https://doi.org/10.3390/sym12122018
9. de Medeiros, A.D., et al.: Interactive machine learning for soybean seed and seedling quality classification. Sci. Rep. 10(1), Springer Science and Business Media LLC, July 2020. https://doi.org/10.1038/s41598-020-68273-y
10. Ahmed, M.R., et al.: Classification of watermelon seeds using morphological patterns of X-ray imaging: a comparison of conventional machine learning and deep learning. Sensors 20(23), 6753, MDPI AG, Nov 2020. https://doi.org/10.3390/s20236753
11. Javanmardi, S., et al.: Computer-vision classification of corn seed varieties using deep convolutional neural network. J. Stored Prod. Res. 92, 101800, Elsevier BV, May 2021. https://doi.org/10.1016/j.jspr.2021.101800
12. Ebrahimi, E., Mollazade, K., Babaei, S.: Toward an automatic wheat purity measuring device: a machine vision-based neural networks-assisted imperialist competitive algorithm approach. Measurement 55, 196–205 (2014)
13. Koklu, M., et al.: Classification of rice varieties with deep learning methods. Comput. Electron. Agric. 187, 106285, Aug 2021, Elsevier BV. https://doi.org/10.1016/j.compag.2021.106285
14. Sun, L., et al.: Research on classification method of eggplant seeds based on machine learning and multispectral imaging. J. Sens. 2021, 1–9, Sept 2021, edited by Eduard Llobet, Hindawi Limited. https://doi.org/10.1155/2021/8857931
15. Tuğrul, B.: Classification of five different rice seeds grown in Turkey with deep learning methods. Communications Faculty of Sciences University of Ankara Series A2-A3 Phys. Sci. Eng. 64(1), 40–50 (2022); Laabassi, K., et al.: Wheat varieties identification based on a deep learning approach. J. Saudi Soc. Agric. Sci. 20(5), 281–289, Elsevier BV, July 2021. https://doi.org/10.1016/j.jssas.2021.02.008
16. Onmankhong, J., et al.: Cognitive spectroscopy for the classification of rice varieties: a comparison of machine learning and deep learning approaches in analysing long-wave near-infrared hyperspectral images of brown and milled samples. Infrared Phys. Technol. 123, 104100, June 2022, Elsevier BV. https://doi.org/10.1016/j.infrared.2022.104100
17. Bakhshipour, A., Jafari, A.: Evaluation of support vector machine and artificial neural networks in weed detection using shape features. Comput. Electron. Agric. 145, 153–160, Feb 2018, Elsevier BV. https://doi.org/10.1016/j.compag.2017.12.032
18. Bishop, C.M., Nasrabadi, N.M.: Pattern Recognition and Machine Learning, Vol. 4, No. 4. Springer, New York (2006)
19. Schmidhuber, J.: Deep learning in neural networks: an overview. Neural Netw. 61, 85–117 (2015). https://doi.org/10.1016/j.neunet.2014.09.003
20. Dharani, M.K., et al.: Review on crop prediction using deep learning techniques. J. Phys. Conf. Ser. 1767(1), 012026, Feb 2021, IOP Publishing. https://doi.org/10.1088/1742-6596/1767/1/012026
21. Yu, Z., et al.: Hyperspectral imaging technology combined with deep learning for hybrid okra seed identification. Biosyst. Eng. 212, 46–61, Dec 2021, Elsevier BV. https://doi.org/10.1016/j.biosystemseng.2021.09.010
22. Sabanci, K., et al.: A convolutional neural network-based comparative study for pepper seed classification: analysis of selected deep features with support vector machine. J. Food Process Eng. 45(6), Dec 2021, Wiley. https://doi.org/10.1111/jfpe.13955
23. Zhao, L., et al.: Automated seed identification with computer vision: challenges and opportunities. Seed Sci. Technol. 50(2), 75–102, Oct 2022, International Seed Testing Association. https://doi.org/10.15258/sst.2022.50.1.s.05
24. Loddo, A., et al.: A novel deep learning based approach for seed image classification and retrieval. Comput. Electron. Agric. 187, 106269, Aug 2021, Elsevier BV. https://doi.org/10.1016/j.compag.2021.106269
25. Xu, P., et al.: Research on maize seed classification and recognition based on machine vision and deep learning. Agriculture 12(2), 232, Feb 2022, MDPI AG. https://doi.org/10.3390/agriculture12020232
26. Cristin, R., Kumar, B.S., Priya, C., Karthick, K.: Deep neural network based Rider-Cuckoo search algorithm for plant disease detection. Artif. Intell. Rev. 53(7), 4993–5018 (2020). https://doi.org/10.1007/s10462-020-09813-w
27. Huang, Z., et al.: Deep learning based soybean seed classification. Comput. Electron. Agric. 202, 107393, Nov 2022, Elsevier BV. https://doi.org/10.1016/j.compag.2022.107393
28. Dietrich, F.: Track seed classification with deep neural networks. arXiv preprint arXiv:1910.06779 (2019)
29. Bakumenko, A., et al.: Crop seed classification based on a real-time convolutional neural network. SPIE Future Sensing Technologies 11525, SPIE (2020). https://doi.org/10.1117/12.2587426
30. Liu, J., et al.: EEG-based emotion classification using a deep neural network and sparse autoencoder. Front. Syst. Neurosci. 14, Frontiers Media SA, Sept 2020. https://doi.org/10.3389/fnsys.2020.00043
31. Rakhmatulin, I., et al.: Deep neural networks to detect weeds from crops in agricultural environments in real-time: a review. Remote Sens. 13(21), 4486, MDPI AG, Nov 2021. https://doi.org/10.3390/rs13214486
32. Vlasov, A.V., Fadeev, A.S.: A machine learning approach for grain crop's seed classification in purifying separation. J. Phys. Conf. Ser. 803, 012177, IOP Publishing, Jan 2017. https://doi.org/10.1088/1742-6596/803/1/012177
33. Wei, Y., et al.: Nondestructive classification of soybean seed varieties by hyperspectral imaging and ensemble machine learning algorithms. Sensors 20(23), 6980, MDPI AG, Dec 2020. https://doi.org/10.3390/s20236980
34. Khatri, A., et al.: Wheat seed classification: utilizing ensemble machine learning approach. Sci. Program. 2022, 1–9, Feb 2022, edited by Punit Gupta, Hindawi Limited. https://doi.org/10.1155/2022/2626868
35. Kundu, N., et al.: Seeds classification and quality testing using deep learning and YOLO V5. In: Proceedings of the International Conference on Data Science, Machine Learning and Artificial Intelligence, USA, ACM, Aug 2021. https://doi.org/10.1145/3484824.3484913
36. Gao, H., Zhen, T., Li, Z.: Detection of wheat unsound kernels based on improved ResNet. IEEE Access 10, 20092–20101 (2022). https://doi.org/10.1109/LRA.2018.2849513
37. Taheri-Garavand, A., et al.: Automated in situ seed variety identification via deep learning: a case study in chickpea. Plants 10(7), 1406, MDPI AG, July 2021. https://doi.org/10.3390/plants10071406
38. Ebrahimi, E., Mollazade, K., Babaei, S.: Toward an automatic wheat purity measuring device: a machine vision-based neural networks assisted imperialist competitive algorithm approach. Measurement 55, 196–205 (2014). https://doi.org/10.1016/j.measurement.2014.05.003
Agrophytocenosis Development Analysis and Computer Monitoring Software Complex Based on Microprocessor Hardware Platforms K. Tokarev1
, N. Lebed1 , Yu Daus2(B)
, and V. Panchenko3,4
1 Volgograd State Agricultural University, Universitetskiy Ave., 26, 400002 Volgograd, Russia 2 Kuban State Agrarian University, Kalinina St. 13, 350044 Krasnodar, Russia
[email protected]
3 Russian University of Transport, Obraztsova St. 9, 127994 Moscow, Russia 4 Federal Scientific Agroengineering Center VIM, 1st Institutsky Passage 5, 109428 Moscow,
Russia
Abstract. The authors propose a software package for the analysis, visualization, and computer monitoring of agrophytocenoses propagated by biotechnological methods, which detects contamination of nutrient media and plant explants by means of machine vision and digital sensors, analyzing digital images with machine learning libraries for subsequent processing by artificial neural networks. Serial communication is provided by the RS-485 data transmission standard. The software and hardware complex is built on the modern ATmega-328/2560 microprocessor hardware platform and the high-performance ESP32 microprocessor with the Tensilica Xtensa LX6 core in single-core and dual-core versions. The controlled parameters of visual assessment include: drying of the nutrient medium; detachment of the dense nutrient medium from the surface of the test tubes; the presence of crystals on the surface or in the depth of the medium; uneven filling of the dense nutrient medium; an insufficient amount of medium in the test tube (volume less than 35 ml); a change in the color of the medium compared to the regulated one (possibly caused by deviations in the pH of the medium or the growth of contaminants), where color change and the appearance of turbidity may indicate bacterial or fungal infection (contamination); obvious contamination of the explant itself by microorganisms (a color different from the various shades of green of growing meristematic micro-plants); and the presence of foreign inclusions in the medium. A promising direction for further research into increasing the productivity of agrophytocenoses based on automated analysis, visualization, and computer monitoring using microprocessor hardware platforms is the development of intelligent algorithms, and software implementing them, to control the growth and development of plants, forming image datasets from video series for subsequent expert markup and processing by deep-learning artificial neural networks. Keywords: Software package · Artificial intelligence · Agrophytocenosis · Microprocessor platform · Deep learning neural networks
1 Introduction
Increasing the productivity of agrophytocenoses is of great importance for solving the global problem of improving food security and for import-substitution policy, particularly for potato, the most important food crop. Currently, there is a shortage of high-quality seed and planting material in Russia. Despite the presence in the country of a number of state and private enterprises supplying seed and planting material, the share of imported material is about 70%, and for some strategically important crops up to 99.4% (sugar beet). In addition, there is no zoned fund of domestic varieties of seed and planting material. The use of non-zoned imported varieties, obtained under soil and climatic conditions different from domestic ones and not adapted to local conditions, does not allow high yields to be obtained. Much attention is also paid to freeing cultivated plants from the viral diseases that reduce their yield. The main route to obtaining healthy planting material is microclonal propagation, whose advantages also include the ability to obtain high-quality seed potatoes in the laboratory all year round in much larger volumes. Data on the microclonal propagation of cultivated plants exist in the literature, but the search for an effective method of sterilizing explants is still under way. Compliance with sterility conditions during the microclonal propagation of crops, including potato, is the most important factor in its success. Plants are highly susceptible to microorganisms that inhibit their growth and are not able to resist infections. Since the nutrient medium on which explants are grown is an ideal medium for the growth of microorganisms, it is necessary not only to exclude the possibility of contamination from the external environment, but also to ensure effective sterilization of the plant material, since extraneous microflora is almost always present on its surface and inside its tissues. A wide range of chemicals is used for these purposes, but sterilizing agents are very species-specific, and their choice depends on many factors: the size of the explant, the presence of a peel, the density of the integumentary tissues, etc. It is necessary to choose treatment methods and modes that ensure complete sterilization of the explant without damaging the cells and tissues of the plant. It is advisable to choose preparations that have not only bactericidal but also fungicidal and sporicidal activity, which increases the effectiveness of sterilization [1–4]. Currently, the problem of the sterility of media and plant explants in biotechnological laboratories is quite acute. There are many causes of bacterial and mycotic infections, but the two most common are microorganisms that arrive together with a plant introduced into culture for the first time, and microorganisms introduced during any kind of work with the tissue culture (most often when the culture is replanted) [5–7]. As a rule, contamination of the nutrient medium or explant is identified visually by trained personnel responsible for cultivating the micro-plants and monitoring the state of growth and development of the crops under regulated microclimate conditions.
However, reliable visual identification of contamination is complicated by individual and subjective factors, such as low qualification or its absence, insufficient work experience and fatigue of the performers. In addition, the
shortage of personnel and the optimization of production oblige the development and implementation of automated digital systems for the effective cultivation of plants based on reliable data. The solution lies in the development and implementation of a contamination detection system based on technical vision and the analysis of digital images using machine learning libraries, with subsequent processing by artificial neural networks [8–10].
2 Materials and Methods In the course of studying the problems of increasing the productivity of agrophytocenoses, an intelligent system has been developed for detecting the visual parameters of contamination during the microclonal propagation of plants, using technical vision with subsequent data processing by artificial neural networks of various architectures. In a first approximation, the monitoring of more than one sample, and of the samples' positions relative to each other, is not considered; we consider the monitoring of a single in vitro sample in one test tube mounted in a tripod. Figure 1 shows the general functional scheme of the proposed system.
Fig. 1. General functional scheme of contamination detection using computer vision: 1 – phytochamber, 2 – digital video camera, 3 – tripod, 4 – test tube with micro-growth
The object is observed by three digital video cameras: two are located inside the phytochamber at the sides, at the level of the object of observation, and one is below, directly under the object. In addition, the tripod with the test tube periodically makes an incomplete rotation around its axis so that all sides of the object can be monitored. The image of the object is transmitted through an optical device to a light-signal converter, and the electrical signal is amplified and stored in the primary image processing device.
The image analysis (secondary processing) device is used to extract and recognize the object and to determine its coordinates and position. If necessary, the processed information about the object is displayed on the visual control device and can also be duplicated by an audio signal. In addition, the proposed device can record the results of image analysis on data carriers. The functions of the control unit include control of the parameters of the processing units and synchronization of the processes running in the system. The algorithm for automated control of contamination in the nutrient medium/plant object and of the characteristics of the nutrient medium (deviations from the regulated indicators) is shown in Fig. 2.
Fig. 2. Algorithm for automated control of contamination in the nutrient medium/plant object, characteristics of the nutrient medium (deviations from the regulated indicators)
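The secondary-processing step in Fig. 2 amounts to conventional color segmentation. The following is a minimal sketch assuming OpenCV is used for the image analysis; the HSV bounds for a healthy green explant and the minimum region size are illustrative assumptions, not values from the study.

```python
import cv2
import numpy as np

def find_suspect_regions(frame_bgr):
    """Return bounding boxes of regions whose color falls outside the
    regulated green range of a healthy explant (assumed HSV window)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Shades of green typical of growing meristematic micro-plants (assumed).
    healthy = cv2.inRange(hsv, np.array([35, 40, 40]), np.array([85, 255, 255]))
    suspect = cv2.bitwise_not(healthy)
    contours, _ = cv2.findContours(suspect, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Ignore small specks; the 500-pixel threshold is a placeholder.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
```

Regions flagged in this way would be the candidates passed on to the neural-network stage for the final contamination decision.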
According to its regulated appearance, the MS (Murashige–Skoog) nutrient medium, the one most often used in microclonal reproduction, is a transparent liquid without opalescence or sediment. It can, however, be used in liquid, semi-liquid and dense forms, the consistency depending on the content of the thickening agent agar (regulated appearance: a white powder with a yellowish tinge). Both the liquid and the dense agarized MS medium (without additional modifications) have a white translucent color.
3 Results and Discussion The developed algorithm of the automated intelligent system for controlling contamination in the nutrient medium/plant object and the characteristics of the nutrient medium (deviations from the regulated indicators) provides highly accurate identification of
biological contamination during the microclonal propagation of plants by analyzing the color characteristics of the object and its digital images with specialized deep machine learning libraries, with subsequent processing by artificial neural networks of various architectures. Using this algorithm in the design of automation systems for climatic plant-growing chambers contributes to the timely detection of contamination of the nutrient medium and explants, which allows not only infected samples to be removed from the sterile chamber in time but also the contents to be transplanted, thereby preventing production losses [11–15]. Figure 3 shows a diagram of the device for a full cycle of production of cultivated plants using biotechnological methods of reproduction. The right part of the device (the climate chamber) creates and maintains conditions of high biological purity by ozonating the incoming air and converting the ozone back to the oxygen required by the plants by passing it through a carbon filter. This degree of air purification makes it possible to use the climate chamber widely where sterility with a high purity class must be maintained, such as in the clonal micro-propagation of plants in vitro. The left part of the device is used for growing the plants and adapting them to real growing conditions [16–19].
Fig. 3. Combined electro-mechanical circuits of the device: 1 – independent sections of the climate chamber, 2 – air ozonator, 3 – air filter, 4 – carbon filter, 5 – section control panel, 6 – LED lighting, 7 – tripod-platform for controlled samples in vitro, 8 – rotary table, 9 – servo, 10 – rotary sleeve, 11 – in vitro controlled samples, 12 – tripod, 13 – tripod positioning ball bearing, 14 – color sensor positioning ball bearing, 15 – intelligent color analyzer, 16 – plant samples in the zone of adaptation to real growing conditions, 17 – control and automation unit, 18 – computer communication interface
To control contamination of the samples in the test tubes, a software and hardware complex for the analysis and visualization of computer monitoring of agrophytocenoses under biotechnological reproduction methods is proposed (Fig. 4). As the analyzing device, the scheme includes the intelligent digital sensor TCS-230 (3 in Fig. 4), which determines the color of an object with a high degree of accuracy in the adaptive
RGB color space (8 bits, i.e., a numerical value from 0 to 255 for each component of the spectrum). Sensor 3 is mounted on tripod 12 (Fig. 3) with two ball bearings 13 and 14 to increase the degrees of freedom of position variation. The controlled samples 11 (Fig. 3) are mounted on tripod-platform 7, fixed by rotary sleeve 10 on the shaft of servo 9.
Fig. 4. Electronic circuit of the device (built using the Fritzing CAD): 1 – ATmega 2560 programmable microcontroller, 2 – SG-90 servo, 3 – TCS-230 color recognition sensor, 4 – layout board, 5 – feedback indication of automatic positioning mode, 6 – feedback indication of manual positioning mode, 7 – feedback indication of precise positioning mode, 8 – feedback indication of the relay of the biological purity maintenance device, 9 – push buttons for controlling the operating modes of the device, 10 – potentiometer for controlling the rotation angles of the servo, 11 – LCD1602 display, 12 – electromechanical relay of the biological purity maintenance device, 13 – DS1302 real-time clock, 14 – RS-485 to TTL converter, 15 – USB to RS-485 converter
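Before turning to the software, the color-deviation check that the TCS-230 (item 3 in Fig. 4) enables can be sketched as follows. This is a minimal sketch in plain Python; the white-reference reading and the regulated RGB window for the MS medium are hypothetical values standing in for the calibration described below.

```python
WHITE_REF = (198, 204, 201)   # raw sensor reading of a white target (assumed)
MEDIUM_WINDOW = {"r": (180, 230), "g": (180, 230), "b": (170, 225)}  # assumed

def calibrate(raw):
    """Scale a raw (r, g, b) reading to 0-255 against the white reference."""
    return tuple(min(255, round(c * 255 / w)) for c, w in zip(raw, WHITE_REF))

def medium_deviates(raw):
    """True if the calibrated color leaves the regulated window, e.g. the
    turbidity or color shift that signals contamination or a pH deviation."""
    r, g, b = calibrate(raw)
    lo_r, hi_r = MEDIUM_WINDOW["r"]
    lo_g, hi_g = MEDIUM_WINDOW["g"]
    lo_b, hi_b = MEDIUM_WINDOW["b"]
    return not (lo_r <= r <= hi_r and lo_g <= g <= hi_g and lo_b <= b <= hi_b)
```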
The software part of the development allows the TCS-230 sensor to be calibrated for specific color deviations and for the light mode of the working chamber. In addition, the controlled samples can be positioned both automatically (with a choice of the number of sectors of the tripod-platform) and manually, including fine-tuning of the platform rotation angle (down to 1 degree). The creation and maintenance of biological purity conditions is controlled via an electromechanical relay, either switched on forcibly or in automated mode at a set response time by means of the DS1302 real-time clock. Serial communication 18 (Fig. 3) between device 17 (Fig. 3) and the computer is provided by the RS-485 data transmission standard. The software and hardware complex is built on the ATmega-2560 microprocessor hardware platform [9] and the high-performance ESP-32 microprocessor with the Tensilica Xtensa LX6 core in single-core and dual-core versions. Figure 5 shows the interface (mimic diagram) of the developed SCADA system and the distribution of Modbus tags (converted into OPC server channels) among its display and control elements.
Fig. 5. Interface of the developed SCADA system and the distribution of Modbus tags among its display and control elements: 1 – automatic positioning mode, 2 – manual positioning mode enable, 3 – manual positioning mode feedback indication, 4 – precise positioning mode enable, 5 – controlled sample position number selection, 6 – tripod-platform position angle selection for controlled samples in vitro, 7 – input of the number of tripod-platform sectors used, 8 – manual adjustment of the "Red" spectrum intensity range of the color analyzer, 9 – received actual "Red" spectrum intensity data, 10 – manual adjustment of the "Green" spectrum intensity range, 11 – received actual "Green" spectrum intensity data, 12 – manual adjustment of the "Blue" spectrum intensity range, 13 – received actual "Blue" spectrum intensity data, 14 – contamination detection indicator (according to the object colors defined by the control system: red, blue, green, azure, pink, etc.), 15 – IP camera, 16 – automatic positioning mode enable
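On the computer side, the Modbus tags shown in Fig. 5 can be polled over the RS-485 link and deviations pushed to the Telegram chat-bot mentioned in the conclusion. The sketch below assumes the pymodbus library, a USB–RS-485 converter on /dev/ttyUSB0, and a hypothetical register layout (R, G, B in holding registers 0–2 of slave 1); the bot token and chat id are placeholders.

```python
import requests
from pymodbus.client import ModbusSerialClient

BOT_TOKEN, CHAT_ID = "<token>", "<chat-id>"          # placeholders

client = ModbusSerialClient(port="/dev/ttyUSB0", baudrate=9600, timeout=1)
client.connect()
rr = client.read_holding_registers(address=0, count=3, slave=1)
r, g, b = rr.registers
# Regulated window for the medium color (assumed values, as in the sketch above).
if not (180 <= r <= 230 and 180 <= g <= 230 and 170 <= b <= 225):
    requests.post(f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
                  data={"chat_id": CHAT_ID,
                        "text": f"Contamination alarm: RGB = ({r}, {g}, {b})"})
client.close()
```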
4 Conclusion The authors propose a software and hardware complex for the analysis and visualization of computer monitoring of agrophytocenoses under biotechnological reproduction methods, which detects contamination of nutrient media and plant explants through technical vision and digital sensors, analyzing digital images with machine learning libraries for subsequent processing by artificial neural networks [20–25]. The proposed complex includes an intelligent digital sensor as the analyzing device, which determines the color of an object with a high degree of accuracy in the adaptive RGB color model (8 bits, a numerical value from 0 to 255 for each component). The software allows the sensor to be calibrated for specific color deviations and for the light mode of the working chamber. In addition, the controlled samples can be positioned both automatically and manually, including fine-tuning of the platform rotation angle (down to 1 degree). The creation and maintenance of biological purity conditions is implemented by forced activation or in automated mode at a set response time. Serial communication between the device and the computer is provided by the RS-485 data transmission standard. The software and hardware complex is built on
the ATmega-328/2560 microprocessor hardware platform and the high-performance ESP-32 microprocessor with the Tensilica Xtensa LX6 core in single-core and dual-core versions. Software (a SCADA system) has been developed for indicating, configuring and controlling all parameters when monitoring the research objects. When a deviation is detected, an alert is shown on the operator's screen, an entry is made in the system's alarm table, and a message is sent to the Telegram messenger chat-bot created for the system. A promising direction for further research on increasing the productivity of agrophytocenoses through automated analysis, visualization and computer monitoring on microprocessor hardware platforms is the development of intelligent algorithms, and software implementing them, to control plant growth and development, forming image datasets from video for subsequent expert markup and processing by deep-learning artificial neural networks. Acknowledgements. The article was prepared with the financial support of the Russian Science Foundation, project № 22-21-20041, and Volgograd region.
References
1. Atkinson, P.M., Tatnall, A.R.L.: Neural networks in remote sensing. Int. J. Remote Sens. 18(4), 699–709 (1997)
2. Walker, W.R.: Integrating strategies for improving irrigation system design and management. Water Management Synthesis Project, WMS Report 70 (1990)
3. Ceballos, J.C., Bottino, M.J.: Technical note: The discrimination of scenes by principal components analysis of multi-spectral imagery. Int. J. Remote Sens. 18(11), 2437–2449 (1997)
4. Huete, A., Justice, C., Van Leeuwen, W.: MODIS vegetation index (MOD13): Algorithm theoretical basis document. USGS Land Process Distributed Active Archive Center, 129 (1999)
5. Garge, N.R., Bobashev, G., Eggleston, B.: Random forest methodology for model-based recursive partitioning: the mobForest package for R. BMC Bioinform. 14, 125 (2013)
6. Chang, D.-H., Islam, S.: Estimation of soil physical properties using remote sensing and artificial neural network. Remote Sens. Environ. 74(3), 534–544 (2000)
7. Mair, C., et al.: An investigation of machine learning based prediction systems. J. Syst. Softw. 53(1), 23–29 (2000)
8. Osborne, S.L., Schepers, J.S., Francis, D.D., Schlemmer, M.R.: Use of spectral radiance to estimate in-season biomass and grain yield in nitrogen- and water-stressed corn. Crop Sci. 42, 165–171 (2002)
9. Tokarev, K.E.: Agricultural crops programmed cultivation using intelligent system of irrigated agrocoenoses productivity analyzing. J. Phys.: Conf. Ser. 1801, 012030 (2021)
10. Plant, R.E., et al.: Relationship between remotely sensed reflectance data and cotton growth and yield. Trans. ASAE 43(3), 535–546 (2000)
11. Tokarev, K., et al.: Monitoring and intelligent management of agrophytocenosis productivity based on deep neural network algorithms. Lect. Notes Netw. Syst. 569, 686–694 (2023)
12. Tokarev, K.E.: Raising bio-productivity of agroecosystems using intelligent decision-making procedures for optimization their state management. J. Phys.: Conf. Ser. 1801, 012031 (2021)
13. Petrukhin, V., et al.: Modeling of the device operating principle for electrical stimulation of grafting establishment of woody plants. Lect. Notes Netw. Syst. 569, 667–673 (2023)
14. Isaev, R.A., Podvesovskii, A.G.: Application of time series analysis for structural and parametric identification of fuzzy cognitive models. CEUR Workshop Proc. 2212, 119–125 (2021)
15. Churchland, P.S.: Neurophilosophy: Toward a Unified Science of the Mind/Brain. MIT Press, Cambridge (1986)
16. Aleksander, I., Morton, H.: An Introduction to Neural Computing. Chapman & Hall, London (1990)
17. McCulloch, W.S., Pitts, W.: A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 5, 115–133 (1943)
18. Ivushkin, D., et al.: Modeling the influence of quasi-monochrome phytoirradiators on the development of woody plants in order to optimize the parameters of small-sized LED irradiation chamber. Lect. Notes Netw. Syst. 569, 632–641 (2023)
19. Yudaev, I., Eviev, V., Sumyanova, E., Romanyuk, N., Daus, Y., Panchenko, V.: Methodology and modeling of the application of electrophysical methods for locust pest control. Lect. Notes Netw. Syst. 569, 781–788 (2023)
20. Rosenblatt, F.: The perceptron: a probabilistic model for information storage and organization in the brain. Psychol. Rev. 65, 386–408 (1958)
21. Cheng, G., Li, Z., Yao, X., Guo, L., Wei, V.: Remote sensing image scene classification using bag of convolutional features. IEEE Geosci. Remote Sens. Lett. 14(10), 1735–1739 (2017)
22. Bian, X., Chen, C., Tian, L., Du, Q.: Fusing local and global features for high-resolution scene classification. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 10(6), 2889–2900 (2017)
23. Mohammed, A.K., Mohammed, H.A.: Convolutional neural network for satellite image classification. Stud. Comput. Intell. 165–178 (2020)
24. Tokarev, K.E.: Overview of intelligent technologies for ecosystem bioproductivity management based on neural network algorithms. IOP Conf. Ser.: Earth Environ. Sci. 1069, 012002 (2022)
25. Lebed, N.I., Makarov, A.M., Volkov, I.V., Kukhtik, M.P., Lebed, M.B.: Mathematical modeling of the process of sterilizing potato explants and obtaining viable potato microclones. IOP Conf. Ser.: Earth Environ. Sci. 786, 012035 (2021)
Reducing Fish Ball’s Setting Process Time by Considering the Quality of the Product Maria Shappira Joever Pranata and Debora Anne Yang Aysia(B) Petra Christian University, Surabaya 60236, Indonesia [email protected]
Abstract. The demand for fish ball products in a frozen food company has increased drastically. The company needs to increase production output in line with the increase in demand. However, it wants to avoid paying for additional equipment or employees and wants to minimize overtime hours. The suggestion is to reduce the setting time of the fish balls without reducing product quality, which is assessed by a gel strength test, an organoleptic test, and a microbiological test. Trials were carried out to reduce the fish balls' setting time, in which several proposed setting times were tried at temperatures of 41–45 °C and 46–50 °C for 10 g and 15 g fish balls. Each trial was carried out with three replications, with 20 pieces sampled per replication. The quality test results of the product samples were compared to obtain products whose quality follows company standards. Based on the trials, the most appropriate solution is to decrease the setting time to 20 min for 10 g fish balls and 25 min for 15 g fish balls. The products made with the decreased time have quality according to the company standard and can meet the increasing demand. Keywords: Production capacity · Product quality · Gel strength · Organoleptic · Microbiological
1 Introduction Surimi is the primary raw material of fish balls. Surimi is a fish paste made from deboned fish and used to make simulated crab legs and other seafood. The paste is blended with cryoprotectants, such as sucrose, sorbitol, and phosphates, and frozen for preservation. To make the final product, the frozen paste is thawed, blended with starch, and extruded [1]. The quality of surimi can be seen from its color, taste, and strong gel-forming ability. The ability of surimi to form a gel affects the elasticity of the downstream products that use surimi as a raw material. The mechanism of surimi gel formation is divided into three stages: suwari, modori, and ashi. Suwari gel is formed when heating at temperatures up to 50 °C; this gel slowly forms into an elastic gel paste. However, if heating is increased above 50 °C, the gel structure is destroyed [2]; this event is referred to as modori. Meanwhile, ashi gel is the gel formed after passing through the two temperature zones. A strong and elastic gel will form if surimi is kept long in the suwari phase and passes quickly through the modori stage [3].
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Vasant et al. (Eds.): ICO 2023, LNNS 852, pp. 192–200, 2023. https://doi.org/10.1007/978-3-031-50330-6_19
The demand for fish ball products in a frozen food company in Surabaya, Indonesia, has increased drastically: demand for 10 g fish balls increased by 85%, while demand for 15 g fish balls increased by 78%. The company needs to increase production output in line with the increase in demand, but wants to avoid additional equipment or employees and to minimize overtime hours. One way to increase production output is to reduce the production time of the fish balls without lowering the quality of the product itself, which is assessed by a gel strength test, an organoleptic test, and a microbiological test. A gel strength test measures the gel strength of a product. Previous studies state that the individual gel-forming ability of minced products varies greatly due to their compositional differences [4]. Organoleptic tests measure a product's quality (texture, taste, smell, and appearance) so that it complies with company standards and is acceptable to customers. The fish ball's texture is critical for the quality and acceptability of seafood substitute products, but most companies prioritize imitating seafood's appearance, smell, and flavor rather than its texture attributes [5]. A microbiological test is an examination performed to detect the presence of microorganisms in a food product. This study aims to reduce fish ball production time so that output can increase without lowering product quality.
2 Research Method This research begins by identifying the longest process on the production floor and finding each process's standard time. Standard time is the time required to carry out an activity by a normal workforce under normal situations and conditions [6]. Performance rating and allowance are essential in determining the standard time. Determining the performance rating means assessing the speed of operators in completing their products. Since the actual time required to perform each work element depends to a high degree on the skill and effort of the operator, it is necessary to adjust the good operator's time upward and the poor operator's time downward to a standard level. Assigning a performance rating is the most important step and also the most criticized one, because it is based entirely on the experience, training, and judgment of the work-measurement analyst [7]. Allowance is the amount of time added to cover personal needs, unavoidable waiting times, and fatigue. The maximum production output can be seen from the production capacity, which can be defined as the maximum amount of output produced by a production process in a certain period. Production capacity planning aims to optimize the resources owned to get maximum output. Some trials of the longest process were carried out to reduce the fish balls' production time. Several proposed times were tried at 41–45 °C and 46–50 °C for 10 g and 15 g fish balls. Previous studies state that maximal production of round-shaped fish balls is achieved when the paste is extruded into a setting tank filled with a 10% salt solution and held for 20 min [8]. In this study, for 10 g fish ball products the setting time was reduced to 20, 15, and 10 min, while for 15 g fish balls the setting time was reduced to 25, 20, 15, and 10 min. Each trial was carried out with three replications, with 20 pieces sampled per replication. The next stage is quality testing, so that the quality of fish ball products that have undergone the trial process remains within company standards. Testing the quality of fish
balls was carried out using three tests: gel strength, organoleptic, and microbiological. The gel strength test was conducted using a Rheo Tex type SD-700II machine. The product to be tested is placed in the device so that the pusher is at the center of the product. The machine detects the gel strength of the product and stops when it reaches the result; the gel strength is recorded on the paper connected to the thermal printer. The organoleptic test uses the human senses as the primary tool: sight, touch, smell, and taste. The sense of sight is used to judge the product's color, size, and shape. The sense of touch is used to assess the texture and consistency of the product. The sense of smell is used to detect signs of damage to the product, such as a foul odor indicating that the product is spoiled. Finally, the sense of taste is used to assess the taste of the product against predetermined standards. According to previous studies, several factors affect the fish balls' texture. Two-step cooking obtains a better texture and mouthfeel and reduces cook loss of fish balls compared to other cooking processes [4]. Texture force increases in fried and boiled canned fish balls when the processing temperature increases; fish muscle's toughness significantly increases as the heating temperature increases [9]. Frozen storage decreases the quality (texture, flavor, and color) of the fish flesh, affecting the quality of the final fish ball product, and washing of fish mince causes decreases in the color and taste of the fish ball products [10]. The microbiological test is divided into two: a qualitative test, used to determine the type of microorganism, and a quantitative test, used to determine how many microorganisms are present in the product. Five test parameters are used: Total Plate Count (TPC), Coliform, Escherichia coli, Salmonella, and Vibrio cholerae. Each parameter has a maximum number of microorganisms allowed to grow in a product. TPC is a method used to count the number of microorganisms in a sample on a medium. The TPC method is tested using Plate Count Agar (PCA) media as the medium for bacterial growth, and aims to show the total number of microorganisms in the tested sample. The TPC method is the most frequently used because it does not require a microscope to count the number of growing microorganisms. The maximum limit for the number of microorganisms growing in a precooked product (fish balls, shrimp balls, squid balls, fish rolls, etc.) is 100,000. Coliform bacteria are microorganisms used to assess the quality of contaminated water sources. Coliform bacteria can be divided into two groups: fecal coliforms, originating from human feces, and non-fecal coliforms, originating from dead animals or plants. The company tests for coliform bacteria to determine the presence of coliform microorganisms growing in fish ball products; the maximum limit for the number of coliform bacteria that may grow in a product is 100. Escherichia coli (E. coli) belongs to the fecal coliform group and is one of the most common bacterial contaminants in food. The growth of E. coli in the product can cause diarrhea in people who consume it. E.
coli bacteria are not allowed to grow in the product, so a separate test is carried out to see whether this bacterium grows in the fish ball product. Salmonella test results must be negative, because the growth of Salmonella in products can disturb the digestive tract, causing diarrhea, abdominal pain, and stomach cramps.
Vibrio cholerae can grow in food through poor food processing. Vibrio cholerae test results must be negative because these bacteria can cause disturbances in human digestion, such as diarrhea.
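Closing this section: the standard-time computation described at its start follows the textbook work-measurement formula, sketched below with made-up numbers (observed time is leveled by the performance rating and then inflated by the allowance).

```python
def standard_time(observed_s: float, rating: float, allowance: float) -> float:
    """Normal time = observed x rating; standard time = normal x (1 + allowance).
    A rating of 1.0 means a normal pace; allowance is a fraction of normal time."""
    return observed_s * rating * (1.0 + allowance)

# e.g. 600 s observed, operator rated 5% above normal pace, 10% allowance:
print(standard_time(600, 1.05, 0.10))  # 693.0 s
```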
3 Result and Discussion The fish ball production process starts with mixing, then proceeds with molding and setting, cooking, aging, freezing, polybag checking, polybag coding, polybag packing, metal detecting, and stamping, and ends with packing. Table 1 summarizes the standard times for producing 10 g and 15 g fish balls.

Table 1. Standard time.

Process          | Production time for 10 g fish balls (s) | Production time for 15 g fish balls (s)
Mixing           | 660      | 660
Setting          | 2,880    | 3,000
Cooking          | 690.82   | 670.07
Aging            | 600      | 600
Freezing         | 14,400   | 14,400
Polybag checking | 373.49   | 405.07
Polybag coding   | 326.32   | 344.75
Polybag packing  | 1,160.40 | 1,034.07
Metal detecting  | 393.05   | 401.4
Stamping         | 311.28   | 311.73
Packing          | 646.09   | 642.09
3.1 Reduce Setting Time Without Changing the Temperature The longest process in fish ball production is freezing. The freezing process uses an Air Blast Freezer (ABF) machine, a room filled with cold air that is distributed by convection [11]. The ABF machine freezes the surface at a rate of 0.5–3 cm per hour, requiring a minimum freezing time of four hours [12]; in other words, the freezing process time cannot be shortened. The company has four ABF machines. In one run, one ABF machine can accommodate 12 trolleys, and one trolley can hold 144 kg of product. ABF machines can run twice a day, for 8 h per day. When product demand increases, the ABF capacity is still sufficient to accommodate all products: one ABF machine can accommodate 24 trolleys per day, for a total of 96 trolleys across the four machines, while the highest product demand requires 70 trolleys in one day. Thus, the available freezing capacity can accommodate the demand.
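A quick check of the capacity argument above, using only the figures in the text:

```python
machines, runs_per_day, trolleys_per_run, kg_per_trolley = 4, 2, 12, 144

trolleys_per_day = machines * runs_per_day * trolleys_per_run   # 96 trolleys
capacity_kg_per_day = trolleys_per_day * kg_per_trolley         # 13824 kg

peak_demand_trolleys = 70
print(trolleys_per_day >= peak_demand_trolleys)  # True: demand fits
```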
Therefore, the amount of product that enters the ABF machine depends on the second-longest process, namely the setting process. The setting process aims to form the texture so that the product is supple. In the setting process, the fish balls are held in warm water for 30 min for 10 g products and 35 min for 15 g products; the water temperature in the setting tub is 41–45 °C. Trials were carried out to reduce the setting process time: for 10 g fish ball products the setting time was reduced to 20, 15, and 10 min, and for 15 g fish balls to 25, 20, 15, and 10 min. The trials were carried out with three replications on three different production batches, with 20 pieces sampled per replication. The samples were then quality-tested to find out whether the trial product had the same quality as the company's standard.

Gel Strength Test. This test was carried out by the same operator on five replicates for each piece of fish ball product to obtain optimal results. Gel strength is considered good if it has a value above 150 g.cm. The gel strength test was carried out on 30 pieces of 10-g fish balls over three replications in each trial. The results of the gel strength test can be seen in Table 2. For the 10-g fish ball product, the trial considered to have passed the test was the 20-min setting time; for the 15-g product, the 20- and 25-min setting times passed.

Table 2. The results of the gel strength test and organoleptic test at a temperature of 41–45 °C.

Product | Setting time (min) | Gel strength test (g.cm) | Organoleptic test
10-g    | 10 | 92.93  | 12
10-g    | 15 | 143.83 | 14.67
10-g    | 20 | 230.15 | 16
15-g    | 10 | 95.85  | 11
15-g    | 15 | 129.59 | 14.67
15-g    | 20 | 207.4  | 16
15-g    | 25 | 248.94 | 16
Organoleptic Test. Organoleptic tests were carried out on a sample of five pieces of fish balls for each replication. Organoleptic results are considered acceptable if they have a minimum total score of 15 across the four categories. When carrying out the product taste test, the testers neutralized their palates by drinking water after tasting the product in each trial, to make the scores more accurate. The organoleptic test results can be seen in Table 2. The trial that passed the test for the 10-g fish ball product was the 20-min setting time; for the 15-g product, the 20- and 25-min trials passed. Microbiological Test. This test was carried out on five pieces of fish balls for each replication, 15 pieces in total over three replications. The test was carried out by cutting the sample of 15 fish balls from three different replications.
The samples, cut into small pieces, were then weighed and given different treatments according to the test methods for the five parameters used. After that, the treated samples were left to stand and tested according to each test parameter's provisions. The results of the microbiological tests can be seen in Table 3. The trials that passed the test were the 20-min setting time for 10 g fish balls and the 25-min setting time for 15 g fish balls.

Table 3. Microbiological test results at 41–45 °C.

Product | Setting time (min) | Total Plate Count | Coliform | Escherichia coli | Salmonella | Vibrio cholerae
10-g | 10 | 136,000 | 270 | Negative | Negative | Negative
10-g | 15 | 109,000 | 190 | Negative | Negative | Negative
10-g | 20 | 63,000  | 60  | Negative | Negative | Negative
15-g | 10 | 143,000 | 210 | Negative | Negative | Negative
15-g | 15 | 123,000 | 180 | Negative | Negative | Negative
15-g | 20 | 101,000 | 80  | Negative | Negative | Negative
15-g | 25 | 56,000  | 70  | Negative | Negative | Negative
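Taken together, the three tests define a simple acceptance rule. The sketch below encodes the thresholds stated above (gel strength above 150 g.cm, organoleptic total of at least 15, TPC at most 100,000, Coliform at most 100, and the three pathogens absent):

```python
def passes(gel, organoleptic, tpc, coliform, ecoli, salmonella, vibrio):
    """True if a trial meets all company quality standards."""
    return (gel > 150 and organoleptic >= 15
            and tpc <= 100_000 and coliform <= 100
            and not (ecoli or salmonella or vibrio))

# 10-g fish balls, 20-min setting at 41-45 C (values from Tables 2 and 3):
print(passes(230.15, 16, 63_000, 60, False, False, False))  # True
```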
3.2 Reduce Setting Time with Increasing the Temperature Efforts to reduce the setting process time at a temperature of 41–45 °C gave results of 20 min for 10-g fish balls and 25 min for 15-g fish balls. After finding these results, further trials were conducted with an increased water temperature in the setting process, to determine whether the setting time could be shortened further while the fish balls still met company quality standards. Accordingly, the temperature was increased to 46–50 °C. The gel strength, organoleptic, and microbiological test results showed that for 10 g fish balls the products still met the quality standard at the 15- and 20-min setting times, while for 15 g fish balls the products still met the quality standard at the 20- and 25-min setting times.

3.3 Comparison of the Result When compared with the results of the tests for decreasing the processing time at 41–45 °C, the experiment of increasing the temperature to 46–50 °C succeeded in shortening the setting time from 20 min at 41–45 °C to 15 min for 10-g fish balls, and from 25 min at 41–45 °C to 20 min for 15-g fish balls. In addition to the shorter processing time, the quality of the fish balls also increased, as can be seen from
the gel strength test values at 46–50 °C, which were better than those in the experiments at 41–45 °C. The microbiological tests show that fewer bacteria grew in the trials at 46–50 °C than in those at 41–45 °C. One essential point is maintaining the water temperature during the setting process. In the current process, the water temperature is checked manually and periodically during setting using a thermometer; if the temperature exceeds the standard, water is added to keep it stable below the standard of 45 °C or 50 °C. Water temperature plays a vital role in maintaining the quality of the fish balls: as previously mentioned, if the temperature exceeds 50 °C, the gel structure is destroyed [2]. An improvement would be to add a sensor for detecting the water temperature and increasing the water flow, automatically reducing the heat in the water when the temperature exceeds the standard. With such a sensor, manual periodic temperature checks can be eliminated, avoiding human error. The working method of the proposed tool can be adapted from the research "Design of Pond Water Temperature Monitoring Built Using NodeMCU ESP8266" [13], which aimed to build a prototype that detects pond water temperature and increases the pond water discharge, reducing the water temperature and the risk of fish death. The prototype comprises a NodeMCU ESP8266 microcontroller, a water temperature sensor, a relay as a switch, and a mini pump; a sketch in this spirit is given after Sect. 3.5. The setting time for the 10-g fish ball product was thus reduced from 30 to 20 min at 41–45 °C and to 15 min at 46–50 °C, while the setting time for the 15-g fish ball product was reduced from 35 to 25 min at 41–45 °C and to 20 min at 46–50 °C. Decreasing the setting process time in fish ball production has several impacts, including increased production output and yield.

Increased Production Capacity. The decrease in setting process time impacts production capacity (Table 4). With the initial setting times, the company must add employee overtime hours to fulfill product demand. If the setting time is reduced to 15 or 20 min for 10-g fish balls and to 20 or 25 min for 15-g fish balls, the company does not need additional overtime hours, because product demand can be fulfilled.

Table 4. Increased production capacity.

Product | Setting time (min) | Temperature (°C) | Production capacity (kg)
10-g | 30 | 41–45 | 32,234.40
10-g | 20 | 41–45 | 40,717.14
10-g | 15 | 46–50 | 46,886.40
15-g | 35 | 41–45 | 19,236.10
15-g | 25 | 41–45 | 24,045.12
15-g | 20 | 46–50 | 27,480.14
Improved Yields. The second impact of reducing the setting process time is that it can increase production yields. Yield is measured based on a process's input and output weight and is calculated using the following formula:

\text{yield} = \frac{\text{output weight}}{\text{input weight}} \times 100\%    (1)
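As a small illustration, Eq. (1) written as a function; the batch weights here are made up:

```python
def yield_pct(output_weight_kg: float, input_weight_kg: float) -> float:
    """Eq. (1): share of the input weight retained in the output, in percent."""
    return output_weight_kg / input_weight_kg * 100.0

print(yield_pct(96.5, 100.0))  # 96.5
```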
Some data are needed to calculate yield, such as the raw material weight, the dough weight after mixing, the product weight after cooking, the product weight after aging, and the finished goods weight. Table 5 shows that for the 10-g fish ball product, the average increase in yield in the 20-min trial at 41–45 °C was 1.17%, and in the 15-min trial at 46–50 °C the production yield increased by 1.50%. For 15-g fish balls, decreasing the setting time to 25 min at 41–45 °C increased the production yield by an average of 1.31%, and the 20-min trial at 46–50 °C increased it by an average of 1.21%.

Table 5. Increased yield.

Product | Setting time (min) | Temperature (°C) | Yield increase (%)
10-g | 20 | 41–45 | 1.17
10-g | 15 | 46–50 | 1.50
15-g | 25 | 41–45 | 1.31
15-g | 20 | 46–50 | 1.21
3.4 Conclusion The increasing demand for fish ball products led the company to try to reduce the production time of fish balls without reducing product quality. Three tests were carried out for product quality assessment: gel strength, organoleptic, and microbiological. Product quality is considered good if all three tests have values according to company standards. In producing 10 g fish balls, trials were conducted to reduce the setting time to 10, 15, and 20 min at temperatures of 41–45 °C and 46–50 °C; in producing 15 g fish balls, trials were conducted to reduce the setting time to 10, 15, 20, and 25 min at the same two temperatures. Each trial was carried out with three replications. For 10-g fish balls, product quality complying with company standards was obtained with a setting time of 20 min at 41–45 °C and of 15 min at 46–50 °C; for 15-g fish balls, with a setting time of 25 min at 41–45 °C and of 20 min at 46–50 °C. A decrease in setting time increases both production capacity and production yield. Based on the calculations, the production capacity of 10 g fish balls with the setting time decreased to 20 min is
40,717.14 kg per month, and the yield increase is 1.17%. In the 15-min trial, the production capacity of 10 g fish balls was 46,886.40 kg per month, with a yield increase of 1.50%. In the production of 15 g fish balls, the production capacity was 24,045.12 kg per month and the yield increase 1.31% for the trial reducing the time to 25 min, while the trial reducing the time to 20 min increased the production capacity to 27,480.14 kg per month and the yield by 1.21%. Therefore, the most advantageous solution for the company is to reduce the setting time to 20 min for producing 10-g fish balls and to 25 min for producing 15-g fish balls.

3.5 Future Research Works A tool/sensor for detecting the water temperature in the fish ball setting process can be developed in future research. This tool can be modified to automatically increase the water flow to reduce the heat when the temperature exceeds the standard.
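The temperature monitor proposed in Sects. 3.3 and 3.5 could be prototyped along the following lines. This is a minimal MicroPython sketch, assuming a DS18B20 probe on GPIO4 of the NodeMCU and a relay-driven water valve on GPIO5; the pins, setpoint, and polling interval are assumptions, not tested values.

```python
import time
from machine import Pin
import onewire, ds18x20

SETPOINT_C = 45.0                 # or 50.0 for the higher-temperature recipe

ow = ds18x20.DS18X20(onewire.OneWire(Pin(4)))
rom = ow.scan()[0]                # first DS18B20 found on the bus
valve = Pin(5, Pin.OUT)           # relay opening extra water flow

while True:
    ow.convert_temp()
    time.sleep_ms(750)            # DS18B20 conversion time
    t = ow.read_temp(rom)
    valve.value(1 if t > SETPOINT_C else 0)  # cool only above the setpoint
    time.sleep(5)
```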
References
1. Mason, W.R.: Starch: Chemistry and Technology, chapter 20, 3rd edn. Elsevier Inc. (2009)
2. Moniharapon, A.: Surimi technology and its processing products. BIAM Mag. 10(1), 16–30 (2014)
3. Iqbal, M., Ma'aruf, W.F., Sumardianto: The impact of microalgae Spirulina platensis and microalgae Skeletonema costatum addition on the quality of milkfish sausages (Chanos chanos Forsk). J. Pengolahan dan Bioteknologi Hasil Perikanan 5(1), 56–63 (2016)
4. Hoque, M.S., Nowsad, A.A.K.M., Hossain, M.I., Shikha, F.H.: Improved methods for the preparation of fish ball from the unwashed mixed minces of low-cost marine fish. Progressive Agric. 18(2), 189–197 (2007)
5. Ran, X., Lou, X., Zheng, H., Gu, Q., Yang, H.: Improving the texture and rheological qualities of a plant-based fishball analogue by using konjac glucomannan to enhance crosslinks with soy protein. Innovative Food Sci. Emerg. Technol. 75 (2022)
6. Sutalaksana, I.Z., Anggawisastra, R., Tjakraatmadja, J.H.: Teknik Perancangan Sistem Kerja, 2nd edn. ITB, Bandung (2006)
7. Freivalds, A., Niebel, B.W.: Niebel's Methods, Standards, and Work Design, 13th edn. McGraw-Hill Higher Education, New York (2013)
8. Kok, T.N., Park, J.W.: Elucidating factors affecting floatation of fish ball. J. Food Sci. 71(6), E297–E302 (2006)
9. Lazos, E.S.: Utilization of freshwater bream for canned fish ball manufacture. J. Aquat. Food Prod. Technol. 5(2), 47–64 (1996)
10. Akter, M., Islami, S.N., Reza, M.S., Shikha, F.H., Kamal, M.: Quality evaluation of fish ball prepared from frozen stored striped catfish (Pangasianodon hypophthalmus). J. Agrofor. Environ. 7(1), 7–10 (2013)
11. Cold Storage Indonesia: https://coldstorageindonesia.co.id/air-blast-freezer-abf/. Last accessed 23 Feb 2023
12. Kustyawati, M.E.: Mikrobiologi Hasil Pertanian. Pusaka Media, Bandarlampung (2020)
13. Muhammad, S.A., Haryono: Design of pond water temperature monitoring built using NodeMCU ESP8266. Sinkron: J. Penelitian Teknik Informatika 7(2), 579–585 (2022)
Optimal Fire Stations for Industrial Plants Ornurai Sangsawang(B) and Sunarin Chanta King Mongkut’s University of Technology North Bangkok, 129 M.21 Noenhom Muang Prachinburi, Bangkok 25230, Thailand {ornurai.s,sunarin.c}@itm.kmutnb.ac.th
Abstract. The ability to resolve fire incidents in time is very important for safety and reliability in industry. This research proposes the establishment of disaster management centers for industrial plants in Chachoengsao Province, one of the provinces in the Eastern Economic Corridor (EEC). The research considers the risk level of industrial plants causing disasters by applying the Maximal Covering Location Problem (MCLP) with the workload capacity of fire stations to determine the optimal locations of fire stations. According to the study, the locations of the current main fire stations cover 74.35% of the plants' risk scores; the optimal locations of fire stations are then determined for the cases where one fire station can accommodate up to 50 and up to 100 factories. Keywords: MCLP · Industrial estates · EEC · Fire stations
1 Introduction When a fire breaks out, especially in industry, it causes millions of dollars in damage. Failure to reach the scene in time results in a long wait to control the fire and hazardous chemicals; some incidents have taken more than 20 h to put out. There are also side effects caused by chemicals that affect the people living nearby and their long-term health. In Thailand, the cabinet approved a project to develop the country, the "Eastern Economic Corridor" (EEC). The EEC focuses on three eastern provinces, namely Chachoengsao, Chonburi, and Rayong, intended to be the country's main industrial production base, to strengthen industry, to lead in manufacturing, and to be an export center to Asian countries. To support the upcoming economic expansion, there is a need to develop infrastructure, transport, and logistics systems connected by road, air, and water; to create effective personnel and labor for the industrial sector; and to develop communication systems, standardized international rules, and systems of work and control. All aspects of development must be integrated and implemented in tandem. To build confidence for foreign investment, an emergency and public disaster prevention and mitigation system is of great importance for the development of the country. The industrial sector and urban communities with a high density of population are particularly affected.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Vasant et al. (Eds.): ICO 2023, LNNS 852, pp. 201–208, 2023. https://doi.org/10.1007/978-3-031-50330-6_20
When an incident occurs, the severity and damage are high compared to areas outside the city. The Eastern Economic Corridor is expected to house several establishments, including five large oil refineries, three petrochemical complexes, 20 power plants, and 29 industrial estates, which are production bases for 3786 industrial plants. Chachoengsao Province, one of the three provinces under the EEC project, is located in the east of Thailand and has long hosted investment in medium- and large-sized industrial plants. Laem Chabang Port was constructed for exports and imports in the Eastern Seaboard development project. There is also a new Bangkok–Chonburi highway (motorway), and Chachoengsao Province is close to Suvarnabhumi Airport, making it convenient to transport, export, and import goods. When there is an emergency or public disaster, the most important thing for the authorities is to help the victims and control the situation as quickly as possible. It is therefore important to determine the locations of service stations for emergencies and public disasters: with appropriately located stations, the authorities can assist victims in a timely manner. This research proposes determining the optimal locations of fire stations for industry, locating fire service stations based on relevant factors such as the risk level of the industrial type and the spatial distribution of industry.
2 Literature Review Mathematical models have been developed for determining the locations of service units in various ways, along with methods for solving the resulting problems. Research on the optimal location of fire stations has used mathematical models such as the covering location problem (Badri et al. [1], Huang [2], Naji-Azimi et al. [3]) and integer programming (Shitai et al. [4], Jian et al. [5]). Several studies have applied metaheuristics, including ant colony optimization (Huang [2]), scatter search (Naji-Azimi et al. [3]), genetic algorithms (Shitai et al. [4]), variable neighborhood search (Davari et al. [6]), particle swarm optimization (Elkady and Abdelsalam [7], Drakulić et al. [8]), differential evolution, the artificial bee colony algorithm and the firefly algorithm (Soumen et al. [9]), and the whale optimization algorithm (Toloueiashtian et al. [10]), in cases where the problem size is large. Badri et al. [1] proposed multi-objective optimization to determine the locations of 31 fire stations in Dubai, considering several objectives such as travel time, distance, and cost to find the best solution. Huang [2] introduced a solution to the fire station location problem in Singapore, applying a multi-objective optimization model, including a linear feature covering problem (LFCP), and an ant algorithm with two-phase local search over a defined cell space, proposing to reduce the time to arrive at the scene from 8 min to 5 min. Yin and Mu [11] developed the Modular Capacitated Maximal Covering Location Problem (MCMCLP), which extends the MCLP model, to determine the optimal locations of ambulances in Athens, Georgia, aiming to cover as many service areas as possible while reducing the total distance between areas outside coverage and the nearest ambulance location. Naji-Azimi et al. [3] proposed a model for setting up satellite distribution centers to assist victims in disaster areas using the MCLP and a scatter search
algorithm; the objective is to keep the total distance as low as possible, and the scatter search method solved large-scale problems well and was more effective than the mathematical models. Shitai et al. [4] proposed a set covering model using integer programming and a multi-objective GA to select watchtower locations; the purpose of the set covering model in that study was to minimize construction costs while covering the areas most in need of wildfire surveillance (cost and coverage), using GIS data analysis, and the results showed that for the bi-objective case the GA found the most appropriate solutions on the Pareto front. Jian et al. [5] offer a mixed-integer nonlinear programming (MINLP) model to select fire station locations to cover the needs of the local population in Harbin, China; the objective is to reach the population quickly with a sufficient number of vehicles to meet demand, applying the maximal covering location model and testing it with GAM algorithms. In summary, mathematical models have been used to solve numerous service-unit location problems, but most consider only the amount of demand for services and often address case studies in residential communities; the locations of industrial plants and their risk of incidents by plant type in industrial zones are rarely considered.
3 Material and Methods 3.1 Data Collection of Industrial Plants in Chachoengsao Province 1. Data collection. To collect data, face-to-face interviews with stakeholders are used, as well as questionnaires to gather evidence, with additional data collected over the phone. 2. Inquiries. Responses are collected from the work-safety officers of the Chachoengsao Provincial Industrial Estate Office to determine the risk points that may cause disasters and to inquire about the types of industrial plants where disasters occur frequently. There are three phases of fire duration: the initial, intermediate, and severe stages. (1) The initial stage, from the moment the flame is seen until 4 min, can be extinguished by a first-response fire extinguisher. (2) The intermediate-to-severe stage, from 4 to 8 min, during which the fire is established and the temperature reaches about 400 degrees Celsius; this is the critical window for help to arrive, because if the rescue comes too late the fire becomes difficult to control. (3) The severe stage, when the fire has burned for more than 8 min and there is still a lot of fuel; the temperature rises above 800 degrees Celsius, and the fire spreads violently and quickly in all directions, making it difficult to fight. The time to reach burn victims from a disaster assistance center for an effective response is within 5 min [12]. The vehicle's standard
speed is 60 km/h; therefore, a disaster management center must cover accident sites within a radius of 5 km, leaving three minutes to control the fire before it reaches the severe stage. In the objective of this research, factory types with different levels of disaster risk are weighted differently. The factories in the industrial estates are divided into three types according to risk level, as follows:
• High-risk factories (red) are hazardous spots, such as factories that use flammable or explosive materials or fuel, in accordance with the plant types specified in the notification of the Ministry of Industry on fire prevention and suppression in factories, such as energy, petrochemicals, electronic components, etc.
• Medium-risk factories (yellow) are areas to be monitored, i.e., factories operating businesses other than those specified in the Ministry of Industry's notification on fire prevention and suppression in factories, such as the food industry, steel, etc.
• Low-risk factories (green) are low-risk areas with related types of businesses, such as the service industry, building materials, etc.

3.2 The Maximal Covering Location Problem Disaster assistance centers inside and outside the industrial estates of Chachoengsao Province are located according to the risk scores of potential incident sites. The problem, referred to as the Maximal Covering Location Problem, covers as much weighted demand as possible with a limited number of service units, as formulated by Church and ReVelle [13]. The mathematical model is as follows:

Maximize
\sum_{i=1}^{n} \sum_{j=1}^{m} r_i z_{ij}    (1)

Subject to
\sum_{j=1}^{m} x_j = P    (2)
\sum_{j=1}^{m} k_j a_{ij} x_j \ge \sum_{j=1}^{m} w_i z_{ij}, \quad \forall i    (3)
\sum_{j=1}^{m} z_{ij} \le 1, \quad \forall i    (4)
x_j \in \{0, 1\}, \quad \forall j    (5)
z_{ij} \in \{0, 1\}, \quad \forall i, j    (6)
Indices: i = index of demand nodes, i = 1, …, n; j = index of fire station locations, j = 1, …, m; n = number of demand nodes;
m = number of fire station locations. Parameters: w_i = number of industrial plants at node i; r_i = risk score of the area at demand node i; k_j = workload capacity of the fire station at location j; d_ij = distance from demand node i to the fire station at location j; a_ij = 1 if the fire station at location j can serve demand node i within the standard response time (d_ij ≤ S), and 0 otherwise; P = maximum number of fire stations to be opened; S = distance within which a call can be served in the standard response time. Decision variables: x_j = 1 if location j is selected to be a fire station, and 0 otherwise; z_ij = 1 if the industrial plants at demand node i are covered by the fire station at location j, and 0 otherwise. The objective in Eq. (1) is to maximize the total risk score covered, where the risk score is calculated by multiplying the risk level of area i by the number of industrial plants in the area. Note that in this study there are three levels of severity: high risk, medium risk, and low risk. The constraint in Eq. (2) limits the number of fire stations to be opened to at most P. The constraint in Eq. (3) forces demand node i to be covered only if there is a fire station at a location j that can serve it within the standard response time; note that the workload of the fire station at location j cannot exceed k_j. The constraint in Eq. (4) prevents multiple coverage counting, so a demand node i is covered by at most one fire station. The constraints in Eqs. (5)–(6) define the domains of the decision variables. The data were collected from the locations of industrial plants in the three main industrial estates, which are the Gateway City, Wellgrow, and TFD industrial estates. Table 1 shows the 1699 factories in Chachoengsao Province classified into three fire-risk levels.
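A sketch of model (1)–(6) in PuLP with a tiny made-up instance is given below; the distances, risks, and capacities are illustrative placeholders, not the study's data.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

n, m, P, S = 4, 3, 2, 8.0                    # nodes, sites, stations, km
w = [30, 50, 20, 40]                          # plants per node (assumed)
r = [3, 2, 1, 3]                              # risk scores (assumed)
k = [100, 100, 100]                           # workload capacities (assumed)
d = [[5, 9, 12], [7, 4, 10], [11, 6, 3], [2, 9, 7]]
a = [[1 if d[i][j] <= S else 0 for j in range(m)] for i in range(n)]

prob = LpProblem("MCLP", LpMaximize)
x = [LpVariable(f"x{j}", cat=LpBinary) for j in range(m)]
z = [[LpVariable(f"z{i}_{j}", cat=LpBinary) for j in range(m)]
     for i in range(n)]

prob += lpSum(r[i] * z[i][j] for i in range(n) for j in range(m))   # (1)
prob += lpSum(x) == P                                               # (2)
for i in range(n):
    prob += (lpSum(k[j] * a[i][j] * x[j] for j in range(m))
             >= lpSum(w[i] * z[i][j] for j in range(m)))            # (3)
    prob += lpSum(z[i][j] for j in range(m)) <= 1                   # (4)

prob.solve()
print("open sites:", [j for j in range(m) if x[j].value() == 1])
```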
4 Results and Discussion In the computational experiment, we examine the coverage percentage of the current fire stations and the coverage percentage when each fire station can service up to 50 or up to 100 factories.
Table 1. Number of factories classified by fire risk level.

Risk level  | Number of factories | Percentage
High-risk   | 1037 | 61.04
Medium-risk | 535  | 31.49
Low-risk    | 127  | 7.47
To determine the location of the fire stations, we analyzed the service coverage of the current fire stations in Chachoengsao Province. According to the survey, there are currently 20 fire stations in the province. Determining the importance or risk of fire at each plant differently depends on the level of risk. In determining the proper location of the fire stations, MCLP is applied for the maximal demand coverage issues. Designate the coverage of the assistance as 8 min or 8 km. According to the analysis in Table 2, the current fire stations in Chachoengsao province, 20 locations cover a risk score of 74.35%. In Table 3, when designated one fire station to serve up to 50 factories, we varied number of fire stations P: 10, 15, 20, 21 and 22. From the result, P is increased, the coverage percentage is also increased. The optimal number of fire stations is 22. In Table 4, we defined each fire station to serve up to 100 factories, we varied number of fire stations P: 10, 15, 20, 21 and 22. From the result, the optimal number of fire stations is 22. Table 2. The current fire stations and the coverage percentage. P
P | Stations | % Coverage
20 | 5 11 16 18 19 57 70 77 79 86 92 93 100 124 126 150 152 185 209 210 | 74.35
Table 3. Results of the coverage percentage according to the risk score with capacity of 50 factories.

P | Stations | % Coverage
10 | 8 16 43 66 70 122 151 154 175 210 | 85.69
15 | 8 16 43 57 66 69 92 121 137 144 150 188 194 200 216 | 98.23
20 | 1 8 16 36 50 59 60 65 107 110 134 137 140 144 156 183 187 200 205 216 | 99.88
21 | 1 8 16 24 51 56 59 60 65 106 110 116 137 140 144 156 183 187 200 205 216 | 99.94
22 | 1 8 16 24 36 51 59 60 65 110 116 128 137 140 144 156 182 186 200 205 211 216 | 100.00
Table 4. Results of the coverage percentage according to the risk score with capacity of 100 factories.

P | Stations | % Coverage
10 | 9 42 47 70 84 122 151 154 175 210 | 90.34
15 | 9 17 50 55 63 73 95 104 141 148 156 169 192 205 216 | 98.70
20 | 1 9 17 36 50 59 62 65 107 110 134 137 144 156 162 183 187 200 205 216 | 99.88
21 | 1 9 17 24 36 50 59 63 95 107 110 116 123 144 156 159 183 187 200 205 216 | 99.94
22 | 1 9 17 24 36 44 59 66 69 89 116 119 140 144 156 165 170 182 200 205 211 216 | 100.00

Fig. 1. Comparison of the coverage for capacity of 50 and 100 factories

As shown in Fig. 1, where the workload capacity of each fire station is limited to 50 or 100 factories, the coverage percentage with a workload capacity of 100 factories is higher than that with 50 factories when the number of fire stations P is 10. As P increases, the difference in maximum coverage percentage decreases, and at P = 22 both workload capacities, 50 and 100, reach the maximum coverage.
5 Conclusion and Future Work In this study, the proposed research determined the appropriate fire station locations for 1699 factories in Chachoengsao, one of the three provinces in the Eastern Economic Corridor, Thailand. By determining the risk level of industrial plants causing disasters and using the MCLP, the results showed that the current locations of the main fire stations can cover 74.35% of the plants' risk within an 8-min response time. We then defined each fire station to be able to accommodate up to 50 or 100 factories; the optimal number of fire stations is 22 in both cases. For future research, the fire station location problem will be solved using metaheuristics such as multi-objective whale optimization, and extensions will be examined, such as the impact of the spread of hazardous chemicals on public health and environmental conditions.
Optimisation of a Sustainable Biogas Production from Oleochemical Industrial Wastewater Mohd Faizan Jamaluddin1(B) , Kenneth Tiong Kim Yeoh1 , Chee Ming Choo1 , Marie Laurina Emmanuelle Laurel-Angel Guillaume1 , Lik Yin Ng2 , and Vui Soon Chok3 1 Centre for Water Research, Faculty of Engineering, Built Environment and Information
Technology, SEGi University, Petaling Jaya, Malaysia [email protected] 2 Department of Chemical and Petroleum Engineering, Faculty of Engineering, Technology and Built Environment, UCSI University, Kuala Lumpur, Malaysia 3 KL-Kepong (KLK) Oleomas Sdn. Bhd, 42920, Pulau Indah, Malaysia
Abstract. The purpose of this study is to develop an optimisation model, using mathematical optimisation via the LINGO version 18 software, to optimise the biogas production process from oleochemical wastewater. Improper disposal of oleochemical wastewater can have significant environmental consequences; therefore, recovering biogas from wastewater can offer both environmental and economic benefits. This optimisation is achieved by evaluating various biodigester types and pre-treatment methods. Two objective functions are formulated: maximisation of economic performance and maximisation of biogas yield. A multi-objective optimisation model is developed utilizing a mathematical model that maximises both economic performance and biogas yield. This optimisation method serves as a guide for biogas and wastewater treatment plants to achieve sustainable biogas production. The results of this study provide the most economically and environmentally sustainable pathway for biogas production. Keywords: Biogas · Mathematical optimisation model · Oleochemical wastewater · Sustainability
1 Introduction The global energy demand is rapidly increasing, with fossil fuels accounting for about 88% of this demand [1]. Fossil fuel-derived carbon dioxide emissions are the main contributor to the rapidly increasing concentrations of greenhouse gases [2]. In order to minimise the environmental impacts of the energy production industry, the quest for sustainable and environmentally friendly sources of energy has become urgent in recent years. Consequently, an interest in the production and use of fuels from organic waste can be observed. In this context, biogas from wastes, residues, and energy crops can play an important role as an energy source in the future. Generally, biogas consists of 50–75% methane,
which is flammable, 25–50% carbon dioxide, 2–3% hydrogen sulphide, and about 1% hydrogen and water vapor [1, 3, 4]. The oleochemical industry is a major industry in Malaysia due to the high supply of palm oil. Its wastewater is characterised by a high chemical oxygen demand (COD) above 10,000 mg/L and a biochemical oxygen demand (BOD) above 1000 mg/L [5]. The wastewater also has a high oil and grease content, above 1000 mg/L, which is an abundant source of energy. Oleochemical industrial wastewater therefore represents an alternative energy source and, at the same time, can reduce the adverse effects of industrial wastewater on the environment [6]. The benefits of using oleochemical wastewater as a feedstock for biogas production include a reduction in production cost, the production of renewable energy, a reduction in the organic waste disposed of, and a reduction in the adverse effects of improper wastewater disposal [7, 8].
2 Methodology 2.1 Optimisation Process Flow Diagram The flow diagram depicts the individual steps of the process in sequential order. This study requires the collection of data for the construction of the optimisation model. The generic structure of the biogas production process is also developed before formulating the mathematical optimisation equations. The simulation is then run, using all the data collected, for two scenarios, Pathway A and Pathway B, which aim to maximise biogas yield and to maximise economic performance, respectively (Fig. 1).
Fig. 1. Methodology process flow diagram
Pathway A can show the digester responsible for the highest biogas yield, while Pathway B provides the economic upper and lower limits for use in the multi-objective
optimisation simulation. By integrating fuzzy logic, the optimisation method can provide the production plant with the highest achievable economic performance and biogas yield, as well as the optimal type of pre-treatment and digester. Consequently, an optimised and sustainable biogas production plant using oleochemical wastewater as feedstock can be obtained. 2.2 General Superstructure In this study, the objective is to synthesise a sustainable biogas production process from oleochemical industrial wastewater with the economics taken into consideration.
Fig. 2. General superstructure for the optimisation of biogas from oleochemical wastewater
Figure 2 shows the generic superstructure, with a given flowrate, Fa, of oleochemical wastewater supply a as the main feedstock for the anaerobic digesters j ∈ J. Before entering the digesters, the feed goes through a series of physical pre-treatments p ∈ P and chemical pre-treatments c ∈ C. A final product, biogas g, is produced from the different types of biogas production technologies j. As shown in Fig. 3, this study takes into consideration several types of digesters j, including CSTR, CL, single-stage stirred digester, AF, UASB, UASFF, and IAAB, together with physical pre-treatments p and chemical pre-treatments c that involve ultrasonic pre-treatment, microwave irradiation, alkaline pre-treatment, and oxidation with Fenton's reagent. This superstructure is used for the mathematical model formulation to optimise the biogas production process. 2.3 Mathematical Formulation Mathematical equations for the material balance and economic performance can be formulated based on Fig. 3. The objectives of this study are conflicting; therefore, a multi-objective optimisation is used to solve this model and to trade off economic performance against biogas yield.

F_a = \sum_{p=1}^{P} f_{ap}, \quad \forall a \quad (1)
Fig. 3. Superstructure for the optimisation of biogas from oleochemical wastewater
F_p = \sum_{p=1}^{P} \sum_{a=1}^{A} f_{ap} \, COD_p \, X_{pg}, \quad \forall p \quad (2)

F_c = \sum_{c=1}^{C} \sum_{p=1}^{P} f_{pc} \, COD_c \, X_{cg}, \quad \forall c \quad (3)

F_g = \sum_{j=1}^{J} \sum_{c=1}^{C} f_{cj} \, COD_j \, X_{jg}, \quad \forall g \quad (4)
The generic equations for the volumetric flowrate balance of oleochemical wastewater are shown in Eqs. (1) and (2). A given flowrate of oleochemical wastewater a, Fa (m3/day), is fed into physical pre-treatment technology p with a flowrate of f_ap, as shown in Eq. (1). The flowrate of the pre-treated stream p, F_p (m3/day), from the physical pre-treatment technology p can be determined by Eq. (2), given the conversion rate (X_pg) of biogas g from wastewater, which is based on the COD removal efficiency. The COD removed by physical pre-treatment technology p is denoted by COD_p. Similarly, the flowrate of the stream from chemical pre-treatment c, F_c (m3/day), given the conversion rate (X_cg), where the COD removal from chemical pre-treatment c is denoted by COD_c, can be obtained from Eq. (3), and the flowrate of biogas g, F_g, from the anaerobic digester j can be obtained from Eq. (4), given the conversion rate (X_jg). 2.4 Economic Performance The sustainability of the integrated system must be analysed in order to develop a sustainable biogas production process using oleochemical wastewater. The economic impacts are evaluated using the following equations. The economic performance (EP) is evaluated by subtracting the total annual capital cost (AC_TOT) and the annual operating cost (OC_TOT) from the annual total revenue (REV), as shown in Eq. (5).

EP = REV - (AC_{TOT} + OC_{TOT}) \quad (5)
AC_{TOT} = CAP_{TOT} \times AF \quad (6)

AF = \frac{r(1+r)^y}{(1+r)^y - 1} \quad (7)
The annual capital cost is determined by multiplying the capital cost, CAP_TOT, by the annualised factor, AF, as shown in Eq. (6), where the annualised factor, AF, can be obtained using Eq. (7). The capital cost, CAP_TOT, can be obtained using the following equation.

CAP_{TOT} = \sum_{p=1}^{P} \sum_{a=1}^{A} f_{ap} \times CC_p + \sum_{c=1}^{C} \sum_{p=1}^{P} f_{pc} \times CC_c + \sum_{j=1}^{J} \sum_{c=1}^{C} f_{cj} \times CC_j \quad (8)
where CC_p is the capital cost of physical pre-treatment technologies p, CC_c is the capital cost of chemical pre-treatment technologies c, and CC_j is the capital cost of digesters j. The total annual operating cost, OC_TOT, is estimated based on the flowrate and operating cost of each piece of equipment and technology. The operating costs of the physical pre-treatment technologies, chemical pre-treatment technologies, and digesters j are given by OC_p, OC_c, and OC_j, respectively.

OC_{TOT} = \sum_{p=1}^{P} \sum_{a=1}^{A} f_{ap} \times OC_p + \sum_{c=1}^{C} \sum_{p=1}^{P} f_{pc} \times OC_c + \sum_{j=1}^{J} \sum_{c=1}^{C} f_{cj} \times OC_j \quad (9)
The annual total revenue, REV, generated by the production of biogas is associated with its selling price, P_g.

REV = \sum_{g=1}^{G} F_g \times P_g \quad (10)
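A small numeric sketch of Eqs. (5)–(10) in Python; all flowrates, costs, prices, and the interest rate below are placeholder assumptions for illustration, not data from the study:

```python
# Illustrative evaluation of Eqs. (5)-(10); all numbers are placeholder assumptions.

def annualised_factor(r: float, y: int) -> float:
    """Eq. (7): annualised factor for interest rate r and plant lifetime y years."""
    return r * (1 + r) ** y / ((1 + r) ** y - 1)

# Placeholder flowrates through one pre-treatment chain and one digester (m3/day)
f_ap, f_pc, f_cj = 100.0, 100.0, 100.0
CC_p, CC_c, CC_j = 50.0, 80.0, 300.0      # capital cost per unit flow (USD)
OC_p, OC_c, OC_j = 2.0, 3.0, 5.0          # operating cost per unit flow (USD/yr)
F_g, P_g = 4000.0, 2.5                    # biogas flow and selling price

CAP_TOT = f_ap * CC_p + f_pc * CC_c + f_cj * CC_j          # Eq. (8)
AC_TOT = CAP_TOT * annualised_factor(r=0.05, y=20)         # Eq. (6)
OC_TOT = f_ap * OC_p + f_pc * OC_c + f_cj * OC_j           # Eq. (9)
REV = F_g * P_g                                            # Eq. (10)
EP = REV - (AC_TOT + OC_TOT)                               # Eq. (5)
print(f"EP = {EP:.2f} USD/year")
```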
3 Fuzzy Optimisation The objective of a general multi-objective optimisation is to minimise or maximise a set of objective functions subject to constraints. Fuzzy optimisation can determine the ideal alternatives in decision-making problems by solving an objective function over a set of alternatives given by the constraints. In this study, fuzzy optimisation is used due to its flexibility and reliability. A degree of satisfaction, λ, is required for the fuzzy model. The value of λ ranges from 0 to 1, whereby 0 indicates that the economic performance approaches its upper limit, while 1 indicates that the optimisation objective is approaching its lower limit. These objective functions are assumed to be linear functions bounded by their limits. The upper limit is represented as UL and the lower limit as LL. The economic performance EP (USD/year), which needs to be maximised, is constrained by the following equation.

\frac{EP^{UL} - EP}{EP^{UL} - EP^{LL}} \ge \lambda \quad (11)
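A minimal sketch of the max-λ step with PuLP, assuming placeholder EP limits taken from the single-objective runs; note that the membership constraint is written here in the more common fuzzy-maximisation form λ ≤ (EP − EP^LL)/(EP^UL − EP^LL), so that maximising λ drives EP toward its upper limit:

```python
# Sketch of the fuzzy max-lambda step using PuLP; EP bounds are assumed
# placeholders obtained from the single-objective runs (Pathways A and B).
import pulp

EP_UL, EP_LL = 250_000.0, 150_000.0   # assumed upper/lower limits (USD/year)

prob = pulp.LpProblem("fuzzy_tradeoff", pulp.LpMaximize)
lam = pulp.LpVariable("lambda", lowBound=0, upBound=1)
EP = pulp.LpVariable("EP", lowBound=EP_LL, upBound=EP_UL)

prob += lam                                   # maximise the degree of satisfaction
# Membership constraint in the usual max form; larger lambda pushes EP upward.
prob += EP - EP_LL >= lam * (EP_UL - EP_LL)
# The real model would add the material-balance and cost constraints here, plus
# an analogous membership constraint for the biogas-yield objective.

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(lam.value(), EP.value())
```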
4 Case Study 4.1 Maximising Biogas Production In this scenario, the optimisation objective is set to maximise the production of biogas.

\text{Maximise} \sum_{g=1}^{G} F_g \quad (12)
4.2 Maximising Economic Performance In this scenario, the optimisation objective is set to maximise the economic performance EP. A high economic performance value indicates that the business is profitable and the process is economically feasible.

\text{Maximise } EP \quad (13)
4.3 Multi-objective Optimisation In this scenario, a multi-objective optimisation problem is solved by maximising economic performance and maximising the biogas yield using the fuzzy optimisation approach. The degree of satisfaction, λ, is maximised by using fuzzy optimisation (Eq. (11)). The optimised pathway indicates the selection of processes for the production of biogas that maximises both the economic performance of the process and the biogas yield.
5 Conclusion The results of this optimisation can therefore provide the most appropriate pathway for an economically feasible biogas production process with the highest achievable biogas yield. The pathway obtained provides the pre-treatment and bioreactor technologies that give the maximum yield of biogas and, at the same time, the maximum economic performance. The use of fuzzy logic optimisation allows the representation and manipulation of uncertain data to provide an accurate model to improve the biogas production system using oleochemical wastewater. The main advantage of this model is that both objectives, economic performance and biogas yield, are taken into consideration to provide a sustainable biogas production process. This model can be further extended by also taking into consideration environmental factors such as carbon footprint, greenhouse gas emissions, and energy consumption. For future research, a larger selection of pre-treatment technologies can be included in the model, such as preliminary treatment and upgrading technologies, in order to produce bio-methane. The biogas would be upgraded to bio-methane with a 95% methane content, which has a higher energy content. The environmental impact in the form of greenhouse gas emissions will also be included in the new model.
Acknowledgement. This study was supported in part by the RMIC SEGi University (Award No: SEGiIRF/2022-Q1/FoEBEIT/005). KLK Oleomas Sdn Bhd deserves special thanks for providing access to their archives and technical advice throughout the project.
References 1. Weiland, P.: Biogas production: current state and perspectives. Appl. Microbiol. Biotechnol. 85(4), 849–860 (2009). https://doi.org/10.1007/s00253-009-2246-7 2. Naik, S., Goud, V.V., Rout, P.K., Dalai, A.K.: Production of first and second generation biofuels: a comprehensive review. Renew. Sustain. Energy Rev. 14(2), 578–597 (2010). https://doi.org/10.1016/j.rser.2009.10.003 3. Isa, M.H., et al.: Improved anaerobic digestion of palm oil mill effluent and biogas production by ultrasonication pretreatment. Sci. Total Environ. 722, 137833 (2020). https://doi.org/10.1016/j.scitotenv.2020.137833 4. Schnurer, A., Jarvis, A.: Microbiological handbook for biogas plants. Swed. Waste Manage. 1–74 (2010) 5. Ismail, Z., Mahmood, N.A., Ghafar, U.S., Umor, N.A., Muhammad, S.A.: Preliminary studies on oleochemical wastewater treatment using submerged bed biofilm reactor (SBBR). IOP Conf. Ser. Mater. Sci. Eng. 206, 012087 (2017). https://doi.org/10.1088/1757-899x/206/1/012087 6. Ohimain, E.I., Izah, S.C.: A review of biogas production from palm oil mill effluents using different configurations of bioreactors. Renew. Sustain. Energy Rev. 70, 242–253 (2017). https://doi.org/10.1016/j.rser.2016.11.221 7. Makisha, N., Semenova, D.: Production of biogas at wastewater treatment plants and its further application. MATEC Web Conf. 144, 04016 (2018). https://doi.org/10.1051/matecconf/201814404016 8. Bennich, T., Belyazid, S.: The route to sustainability – prospects and challenges of the bio-based economy. Sustainability 9(6), 887 (2017). https://doi.org/10.3390/su9060887
Ensemble Approach for Optimizing Variable Rigidity Joints in Robotic Manipulators Using MOALO-MODA G. Shanmugasundar1 , Subham Pal2 , Jasgurpeet Singh Chohan3 , and Kanak Kalita4(B) 1 Department of Mechanical Engineering, Sri Sairam Institute of Technology, Chennai, India 2 Department of Aerospace Engineering and Applied Mechanics, Indian Institute of
Engineering Science and Technology, Shibpur, India 3 Department of Mechanical Engineering and University Centre for Research and Development,
Chandigarh University, Mohali, India 4 Department of Mechanical Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of
Science and Technology, Avadi, India [email protected]
Abstract. A novel ensemble-based optimization technique is proposed for designing a robot manipulator variable stiffness joint. A multi-objective optimization is performed by considering the maximization of generated torque and the minimization of the weight of the joints. This investigation considers three design parameters (i.e., inner stator width, outer stator width, and magnet height) to maximize torque and reduce weight. Multi-objective ant lion optimization (MOALO) and multi-objective dragonfly algorithm (MODA) optimization techniques are employed to generate Pareto fronts. These multiple Pareto fronts from the same algorithm are collated and condensed using non-dominated sorting (NDS) to reject the dominated solutions. These new and improved Pareto fronts are called the MOALO and MODA ensembles. Next, the MOALO and MODA ensemble Pareto front solutions are collated together, and NDS is performed again. The resulting Pareto front, containing solutions of both MOALO and MODA, is referred to as the MOALO-MODA ensemble. Keywords: Pareto front · Optimal design · Robot joint · Ensemble · Multi-objective
1 Introduction Lately, there has been a surge in academic attention towards sophisticated robots that necessitate direct collaboration and engagement with users. Traditional industrial robotics technology faces considerable hurdles in meeting these demands due to inherent system constraints. Classic robots function in segregated settings, utilizing highly stiff joints to attain exceptional positional accuracy. As a solution, researchers have proposed and designed compliant joints, which integrate mechanical flexibility into the robotic joint structure. Nonetheless, ensuring human safety in proximity to robotic systems remains crucial. The incorporation of compliance in joints has been extensively
researched as a means to enhance safety measures. One method to achieve this is active compliance, which emulates mechanical flexibility by employing sensors and actuators. However, these sensors and actuators sometimes become unreliable due to sensor failure. Li et al. [1] used variable stiffness mechanisms to design a novel cable-driven joint for robotic components. Memar and Esfahani [2] designed a robot gripper with variable stiffness actuation to reduce the damage due to collision. Bilancia et al. [3] presented a virtual and physical prototype of a beam-based variable stiffness actuator, with the aim of enhancing safety in human-machine interaction. Nelson et al. [4] used a variable stiffness mechanism to develop a redundant rehabilitation robot. Similarly, Hu et al. [5] designed a novel antagonistic variable stiffness dexterous finger. Yun et al. [6] utilized rotary permanent magnets to accomplish variable rigidity. Joints with variable rigidity must have adequate stiffness during manipulator movement so that the robotic system can bear the load. Tonietti et al. [7] demonstrated a robotic arm with variable rigidity facilitated by a linear actuator. Within a standard human living space, Yoo et al. [8] suggest that variable rigidity joints must produce over 10 Nm of torque to support a 1 kg payload on a 1 m robotic arm. Nevertheless, employing traditional electric motors could result in excessively heavy variable stiffness joints. To address this issue, Yoo et al. [8] introduced neodymium-iron-boron ring-shaped permanent magnets in conjunction with variable stiffness joints. Magnets are sometimes utilized alongside direct-current actuators in robotic systems. Optimizing such variable stiffness joints is very important to maximize the performance and minimize the risk of any robot manipulator. Researchers have proposed many techniques to perform single- and multi-objective optimization, and several have combined two optimization algorithms to build hybrid algorithms: because some optimization techniques are good at exploration and others at exploitation, a technique strong in exploration is merged with one strong in exploitation to obtain a hybrid that performs well in both. The above literature study reveals that the optimization of variable stiffness joints in robot manipulators is essential. However, most research on this considers only straightforward single-objective and multi-objective optimization techniques, and hybrid optimization techniques have not been used to design the variable stiffness joint. So, in this paper, a novel MOALO-MODA ensemble-based optimization technique is proposed for designing the robot manipulator variable stiffness joint. The following contributions are made in this paper: • A novel MOALO-MODA ensemble is proposed to design variable stiffness joints in robot manipulators. • The advantage of the MOALO-MODA ensemble is established by comparing it with its individual counterparts.
2 Methodology 2.1 Multi-objective Antlion Optimization (MOALO) Multi-objective Antlion Optimization (MOALO) is a nature-inspired, population-based optimization algorithm inspired by antlions’ hunting mechanism [9]. The algorithm simulates the behaviour of antlions and ants, where antlions are considered as the candidate
218
G. Shanmugasundar et al.
solutions, and ants represent the search agents. MOALO is particularly effective for solving multi-response optimization problems, owing to its efficient exploration and exploitation capabilities. The pseudocode for MOALO is detailed below:

Algorithm 1: MOALO
  Initialize the population of antlions and ants
  Evaluate the fitness of each antlion
  Initialize the archive of non-dominated solutions
  While stopping criteria are not met:
    For each ant:
      Select two antlions using binary tournament selection
      Generate a new solution by random walk and update the ant's position
      Update the archive with the new solution if it is non-dominated
    Update the antlion population with the solutions from the archive
    Evaluate the fitness of each antlion
    Update the archive using new NDS solutions
  Return the final archive of NDS solutions
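A compact sketch of the random walk at the core of Algorithm 1, following the ant lion optimizer's min-max normalization of a cumulative ±1 walk into the variable bounds; the walk length and bounds below are illustrative:

```python
# Random walk around a guiding antlion (after the ALO random-walk model);
# the walk is min-max normalized into the variable bounds [lo, hi].
import numpy as np

def random_walk(steps, lo, hi, rng):
    """One normalized random walk of `steps` moves inside [lo, hi]."""
    walk = np.cumsum(2 * (rng.random(steps) > 0.5) - 1).astype(float)
    w_min, w_max = walk.min(), walk.max()
    if w_max == w_min:                       # degenerate walk: stay at midpoint
        return np.full(steps, (lo + hi) / 2)
    return (walk - w_min) / (w_max - w_min) * (hi - lo) + lo

rng = np.random.default_rng(1)
print(random_walk(10, 6.0, 10.0, rng))   # e.g., a walk within x1's bounds in mm
```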
2.2 Multi-objective Dragonfly Algorithm (MODA) The multi-objective dragonfly algorithm (MODA) is a swarm-based, nature-inspired optimization technique inspired by the swarming behaviours of dragonflies [10]. The algorithm mimics the movement of dragonflies in nature, where each dragonfly is considered a candidate solution. MODA is highly effective for solving multi-response optimization problems due to its adaptability and exploration capabilities. The pseudocode for MODA is detailed below, followed by a sketch of the position/velocity update:

Algorithm 2: MODA
  Initialize the population of dragonflies
  Evaluate the fitness of each dragonfly
  Initialize the archive of non-dominated solutions
  Calculate the initial velocities of the dragonflies
  While stopping criteria are not met:
    Update the position and velocity of each dragonfly
    Evaluate the fitness of each dragonfly
    Update the archive with the new NDS solutions
    Update the neighbourhood of each dragonfly
    Update the global best solution
  Return the final archive of NDS solutions
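A minimal sketch of the position/velocity update named in Algorithm 2, following Mirjalili's dragonfly formulation; the swarming weights, the use of the whole swarm as the neighbourhood, and the food/enemy points are simplifying assumptions for illustration:

```python
# Simplified one-step dragonfly update (after Mirjalili [10]); the weights and
# all-neighbours simplification are illustrative assumptions, not MODA's settings.
import numpy as np

def dragonfly_step(X, dX, food, enemy, s=0.1, a=0.1, c=0.7, f=1.0, e=1.0, w=0.9):
    """X: (N, d) positions; dX: (N, d) step vectors; returns updated (X, dX)."""
    new_dX = np.empty_like(dX)
    for i in range(len(X)):
        others = np.delete(X, i, axis=0)
        S = -np.sum(others - X[i], axis=0)              # separation
        A = np.mean(np.delete(dX, i, axis=0), axis=0)   # alignment
        C = np.mean(others, axis=0) - X[i]              # cohesion
        F = food - X[i]                                 # attraction to food (best)
        E = enemy + X[i]                                # distraction from enemy
        new_dX[i] = s * S + a * A + c * C + f * F + e * E + w * dX[i]
    return X + new_dX, new_dX

# One illustrative step for 5 dragonflies in a 3-variable coded design space
rng = np.random.default_rng(0)
X, dX = rng.uniform(-1, 1, (5, 3)), np.zeros((5, 3))
X, dX = dragonfly_step(X, dX, food=np.zeros(3), enemy=np.ones(3))
```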
2.3 MOALO-MODA Ensemble The MOALO-MODA ensemble is a novel approach that combines the strengths of both MOALO and MODA to solve multi-objective optimization problems [11]. Inspired by
ensemble machine learning models, this approach aims to build a robust optimization technique by integrating the Pareto fronts generated from multiple independent trials of both the MOALO and MODA algorithms. The MOALO-MODA ensemble comprises three stages: primary non-dominated sorting (NDS) within each trial, secondary NDS for the MOALO and MODA ensembles separately, and tertiary NDS for the combined MOALO-MODA ensemble. This method allows the ensemble to exploit potentially better NDS solutions at different segments of the Pareto front, enhancing the overall performance of the optimization process.

Algorithm 3: MOALO-MODA ensemble
  Specify the objective function, design variables, and their bounds
  Set the number of independent trials (n)
  For each trial in n trials:
    Perform MOALO optimization and store the Pareto front
    Perform MODA optimization and store the Pareto front
  Combine the n Pareto fronts from MOALO trials and apply secondary NDS
  The resulting Pareto optimal front is the MOALO ensemble
  Combine the n Pareto fronts from MODA trials and apply secondary NDS
  The resulting Pareto optimal front is the MODA ensemble
  Combine the MOALO ensemble and MODA ensemble Pareto fronts
  Apply tertiary NDS to obtain the MOALO-MODA ensemble Pareto front
  Return the final MOALO-MODA ensemble Pareto front
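The secondary and tertiary stages reduce to repeated non-dominated sorting over collated fronts. A self-contained sketch, assuming each solution is a (torque, weight) pair with torque maximized and weight minimized, and toy front data:

```python
# Three-stage ensembling via non-dominated sorting (NDS); fronts are toy data.
def dominates(p, q):
    """p dominates q: torque no worse and weight no worse, one strictly better."""
    return p[0] >= q[0] and p[1] <= q[1] and (p[0] > q[0] or p[1] < q[1])

def nds(points):
    """Keep only the non-dominated points of a collated set."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Toy Pareto fronts from n = 2 independent trials of each algorithm
moalo_fronts = [[(11.2, 0.52), (12.5, 0.60)], [(11.0, 0.50), (12.8, 0.63)]]
moda_fronts = [[(11.5, 0.54), (12.6, 0.61)], [(11.1, 0.51)]]

# Secondary NDS: collate each algorithm's trial fronts into its ensemble
moalo_ensemble = nds([p for front in moalo_fronts for p in front])
moda_ensemble = nds([p for front in moda_fronts for p in front])
# Tertiary NDS: combine both ensembles into the MOALO-MODA ensemble
moalo_moda_ensemble = nds(moalo_ensemble + moda_ensemble)
print(sorted(moalo_moda_ensemble))
```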
3 Problem Description The objective is to design a robot manipulator with variable stiffness joints. Three design parameters, namely the inner stator width (x1), the outer stator width (x2), and the magnet height (x3), are used. The study's objective is to simultaneously minimize the variable stiffness joint's overall weight while maximizing the torque. This study uses the MOALO-MODA ensemble technique to perform the multi-objective optimization; MOALO-MODA is a nature-inspired hybrid of multi-objective ant lion optimization (MOALO) and the multi-objective dragonfly algorithm (MODA), combined through an ensemble technique borrowed from machine learning. The case study is from the works of Yoo et al. [8]. The design of a variable rigidity joint for a robotic manipulator aims to accommodate a robotic arm with a payload capacity of 1 kg. Furthermore, it is presumed that the rotational stiffness of such a joint should possess a value of 10 Nm. Therefore, this problem is treated as a constrained optimization problem, with a constraint function that rejects solutions whose rotational stiffness is less than 10 Nm. This ensures that the non-dominated solution archive contains only those solutions that satisfy the rotational rigidity constraint. The upper and lower limits of the design parameters in mm are given as follows:

6 ≤ x_1 ≤ 10; 5 ≤ x_2 ≤ 15; 6 ≤ x_3 ≤ 10 \quad (1)
A central composite design (CCD) is performed using the upper and lower limits of the design variables to get the design points and to perform the experiment. Based on
the experimental studies, Yoo et al. [8] modelled second-order polynomial equations for torque and weight, as shown in Eqs. (2) and (3). Yoo et al. [8] reported very good coefficients of determination (R^2) of 0.97 and 1 for torque and weight, respectively. The design variables' upper and lower limits are further coded into the range ±1.

T = 8.8949 + 0.9555x_1 + 0.8100x_2 + 5.7616x_3 - 0.0703x_1^2 - 0.1246x_2^2 + 0.2499x_3^2 + 0.0292x_1x_2 + 0.6722x_1x_3 + 0.6028x_2x_3 \quad (2)

W = 0.4954 + 0.0483x_1 + 0.0520x_2 + 0.2483x_3 + 0.0009x_1^2 + 0.0009x_2^2 + 0.0019x_1x_2 + 0.0242x_1x_3 + 0.0260x_2x_3 \quad (3)
The mathematical models of torque and weight are used as the objective functions in the MOALO and MODA optimizers to find the Pareto-optimal points.
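Eqs. (2) and (3) translate directly into objective functions; the sketch below also includes an assumed helper for mapping physical values (in mm) from the bounds of Eq. (1) to the coded ±1 range:

```python
# Torque (Eq. 2) and weight (Eq. 3) in coded variables; the `code` helper and the
# example evaluation are illustrative assumptions, not part of the source model.
def torque(x1, x2, x3):
    return (8.8949 + 0.9555 * x1 + 0.8100 * x2 + 5.7616 * x3
            - 0.0703 * x1**2 - 0.1246 * x2**2 + 0.2499 * x3**2
            + 0.0292 * x1 * x2 + 0.6722 * x1 * x3 + 0.6028 * x2 * x3)

def weight(x1, x2, x3):
    return (0.4954 + 0.0483 * x1 + 0.0520 * x2 + 0.2483 * x3
            + 0.0009 * x1**2 + 0.0009 * x2**2
            + 0.0019 * x1 * x2 + 0.0242 * x1 * x3 + 0.0260 * x2 * x3)

def code(value, lo, hi):
    """Map a physical value in [lo, hi] (mm) to the coded range [-1, 1]."""
    return 2 * (value - lo) / (hi - lo) - 1

# Centre of the design space: x1 = 8, x2 = 10, x3 = 8 mm -> coded (0, 0, 0)
x = (code(8, 6, 10), code(10, 5, 15), code(8, 6, 10))
print(torque(*x), weight(*x))   # 8.8949 and 0.4954 at the centre point
# The 10 Nm requirement acts as a feasibility filter on candidate solutions.
```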
4 Results and Discussion In this study, two nature-inspired multi-objective optimizers (i.e., MOALO and MODA) are used to find the Pareto front, considering the maximization of torque and the minimization of weight as the two objectives of the problem. Every point on the Pareto front represents an optimal trade-off. Figure 1a illustrates the Pareto fronts obtained from three MOALO trials, while Fig. 1b illustrates the Pareto fronts obtained from three MODA trials. As shown in Fig. 1, the Pareto fronts obtained using MOALO and MODA are quite similar.
Fig. 1. Pareto front generated in three different trials by (a) MOALO (b) MODA.
The box plot of three trials for torque and weight utilizing the MOALO optimizer is depicted in Fig. 2, and it can be seen that for trial two, there are outliers for both cases.
Figure 2 demonstrates that the first trial produced the largest number of high torque values, whereas the second trial produced the largest number of low weight values.
Fig. 2. Box plot of Pareto fronts generated by MOALO in three different trials.
Figure 3 displays the box plot of all function evaluations for torque and weight using the MODA optimizer. The ensemble method is used to further condense the Pareto fronts obtained from the MOALO and MODA optimizers by re-evaluating the non-dominated solutions generated by the two optimization strategies. Figure 4a depicts the Pareto front derived from the MOALO ensemble method, whereas Fig. 4b depicts the Pareto front derived from the MODA ensemble method. In addition, the MOALO-MODA ensemble technique is used to condense the Pareto fronts derived from the MOALO and MODA ensemble techniques; Fig. 5 depicts the resulting Pareto front. Consequently, the Pareto front derived from the MOALO-MODA ensemble can be used to design robot manipulator joints with variable stiffness according to the design requirements. It is important to state here that during the initial n independent runs of the MOALO and MODA algorithms, the archive size is limited to 500. This ensures that sufficient Pareto solutions are recorded on the Pareto fronts. In this study, all three MOALO runs yielded 500 Pareto solutions, whereas for MODA, 500 Pareto solutions were obtained in 2 out of 3 trials and the third trial recorded 499. In contrast, the MOALO and MODA ensembles contain only 114 and 263 Pareto solutions, respectively. Thus, reductions of approximately 77% and 47% are achieved by the MOALO and MODA ensembles compared to a typical Pareto front of MOALO or MODA. The final MOALO-MODA ensemble contains only 340 Pareto solutions, a reduction of about 32% compared to a typical Pareto front of MOALO or MODA.
Fig. 3. Box plot of Pareto fronts generated by MODA in three different trials.
Fig. 4. Pareto fronts (a) MOALO ensemble (b) MODA ensemble.
Out of the 340 Pareto solutions in the MOALO-MODA ensemble, 87 are from MOALO and 253 are from MODA. This corresponds to 25.6% of the MOALO-MODA ensemble being made up of MOALO solutions, whereas the remaining 74.4% of the solutions were originally generated by MODA. This highlights the importance of the heterogeneous mixing of Pareto solutions from multiple multi-objective algorithms in forming an ensemble Pareto front. In terms of computational time, the MOALO and MODA algorithms are found to be comparable, with averages of 99 s (± 21 s) and 91 s (± 33 s), respectively.
Fig. 5. Pareto front by MOALO-MODA ensemble.
The ensemble process using non-dominated sorting takes approximately 3.67 s (± 0.47 s).
5 Conclusions This study presents a novel MOALO-MODA ensemble technique for optimizing a robot manipulator's variable stiffness joint. The objective of the study is to maximize the generated torque and minimize the weight. The design parameters for this investigation are the inner stator width (x1), the outer stator width (x2), and the magnet height (x3). Initially, the MOALO and MODA optimization techniques are used to generate Pareto fronts. The obtained Pareto solutions are then collated and condensed using NDS to form the MOALO ensemble and the MODA ensemble. Subsequently, the Pareto fronts derived from both the MOALO and MODA ensembles are combined, followed by another application of NDS. The emerging Pareto front is referred to as the MOALO-MODA ensemble. This study demonstrates that the Pareto front generated by the MODA optimizer contributes a larger proportion of the non-dominated outcomes.
References 1. Li, Z., et al.: A novel cable-driven antagonistic joint designed with variable stiffness mechanisms. Mech. Mach. Theory 171, 104716 (2022) 2. Memar, A.H., Esfahani, E.T.: A robot gripper with variable stiffness actuation for enhancing collision safety. IEEE Trans. Industr. Electron. 67, 6607–6616 (2019) 3. Bilancia, P., Berselli, G., Palli, G.: Virtual and physical prototyping of a beam-based variable stiffness actuator for safe human-machine interaction. Rob. Comput.-Integr. Manuf. 65, 101886 (2020) 4. Nelson, C.A., Nouaille, L., Poisson, G.: A redundant rehabilitation robot with a variable stiffness mechanism. Mech. Mach. Theory 150, 103862 (2020)
5. Hu, H., Liu, Y., Xie, Z., Yao, J., Liu, H.: Mechanical design, modeling, and identification for a novel antagonistic variable stiffness dexterous finger. Front. Mech. Eng. 17, 35 (2022) 6. Yun, S., Kang, S., Kim, M., Hyun, M., Yoo, J., Kim, S.: A novel design of high responsive variable stiffness joints for dependable manipulator. Proc. ACMD (2006) 7. Tonietti, G., Schiavi, R., Bicchi, A.: Design and control of a variable stiffness actuator for safe and fast physical human/robot interaction. In: Proceedings of the 2005 IEEE International Conference on Robotics and Automation (2005) 8. Yoo, J., Hyun, M.W., Choi, J.H., Kang, S., Kim, S.-J.: Optimal design of a variable stiffness joint in a robot manipulator using the response surface method. J. Mech. Sci. Technol. 23, 2236–2243 (2009) 9. Mirjalili, S., Jangir, P., Saremi, S.: Multi-objective ant lion optimizer: a multi-objective optimization algorithm for solving engineering problems. Appl. Intell. 46, 79–95 (2017) 10. Mirjalili, S.: Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 27, 1053– 1073 (2016) 11. Kalita, K., Kumar, V., Chakraborty, S.: A novel MOALO-MODA ensemble approach for multi-objective optimization of machining parameters for metal matrix composites. Multiscale Multidisciplinary Model Exp. Des. 1–19 (2023)
Future Design: An Analysis of the Impact of AI on Designers’ Workflow and Skill Sets Kshetrimayum Dideshwor Singh(B) and Yi Xi Duo School of Art and Design, Wuhan University of Technology, Hubei, China [email protected]
Abstract. This research examines the current state of artificial intelligence (AI) in design, its potential impact, and the skills that will be required for future designers to work effectively with AI. The article explores the ethical and responsible application of AI in design, analyzing and discussing case studies of AI-driven design projects from a variety of industries, including architecture and fashion. Although artificial intelligence has the ability to improve and optimize the design process, it is not the end of human invention, and designers need to be aware of the potential biases and limitations in AI algorithms. In its conclusion, the study offers several thoughts on the trajectory of design in conjunction with AI as well as some directions for future research. Keywords: Artificial intelligence · Future design · AI-driven tools · Skill · Ethic
1 Introduction Since the beginning of human civilization, design has played an important role in the development of many aspects of society. Design is an essential component in the production of anything from tools and structures to works of art and clothing, and it plays an important part in the formation of our environment [1]. Recent years have seen tremendous breakthroughs in the field of design, brought about by the rise of artificial intelligence (AI). These advancements have had an impact on a variety of industries and changed the way that we approach design. It is impossible to overestimate the significance of design because of the countless ways in which it affects our day-to-day lives. Design plays an essential role in the process of defining the physical world around us, from the structures in which we live and work to the objects that we use on a daily basis [2]. It is also an essential component in the process of producing unforgettable experiences, like in the entertainment and hotel industries. The way things are designed has an effect on everything from our mental and emotional health to our level of productivity and efficiency [3]. The function of design is undergoing a period of rapid transition as a result of the development of AI technologies. AI is already being utilized in a variety of facets of design, including product design, graphic design, and architectural design, to name just a few of these specializations. AI is revolutionizing the process that designers follow, from the time they first conceive of a concept until they ship the finished product [4–6].
This study will investigate the relationship between AI and designers, focusing on the effects of AI technology on the design industry as well as the potential of design in conjunction with AI. Additionally, it will investigate the necessary skill set for future designers to have in order to work effectively with AI, as well as the ethical implications that surround AI-driven design. The purpose of this paper is to provide insight into the ways in which AI is influencing the area of design as well as the potential and problems that lie ahead through the exploration of the aforementioned themes [7]. The field of computer science known as AI focuses on the development of intelligent machines that are capable of doing activities that would normally require the intelligence of a human being. The beginning of AI can be dated back to the 1950s, and ever since that time, it has been utilized in many different areas, including design [4, 7]. Since its earliest uses in computer-aided design (CAD), AI has seen significant development in the design industry, giving rise to more sophisticated tools and methods that make use of machine learning and natural language processing [8]. Generative design is one application of AI in the design industry. The use of algorithms and artificial intelligence in generative design allows for the creation and exploration of design choices depending on specified parameters and limitations. When adopting this methodology, designers are able to explore more design choices than they would be able to using traditional methods, which ultimately results in solutions that are more innovative and efficient [9]. Another illustration of this is image recognition, which is powered by AI and can examine pictures in order to recognize objects, colors, and patterns. This technology can be used to automate repetitive chores for designers, such as the choosing of colors and the categorization of images. As a result, designers are freed up to focus on more complex and creative jobs. In spite of the numerous benefits that artificial intelligence brings to the design process, there are also some constraints to take into consideration. For instance, it is possible that AI will not be able to mimic the creative and intuitive parts of human design thinking. Additionally, there’s a chance that AI-driven designs won’t have the same emotional resonance and human touch that humans bring to the design process. In addition, the quality of the data that is used to train the AI models and the potential biases that may be present in that data may both be factors that prevent AI-driven design from reaching its full potential [10]. AI is playing an increasingly crucial role in the design process, bringing new prospects for creativity and efficiency. However, it is essential to take into account both the benefits and drawbacks of employing AI in design and to look for ways to strike a balance between the advantages offered by AI and the distinct advantages offered by human designers.
2 Research Methodology The research for this article was conducted using a variety of approaches. First, the relevant literature was examined to acquire a complete understanding of the current state of the field, as well as of emerging trends and problems. Second, case studies of real-world AI-driven design projects in architecture and product design were investigated in order to
comprehend the influence that AI has on the design process and its results. Third, interviews with subject matter experts in the fields of AI research, design, and industry were carried out to acquire insights into the future of design with AI, as well as into the essential skills and training for future designers [9]. A data analysis of design projects developed using AI-driven design tools was carried out to gain an understanding of the efficacy and efficiency of these tools, in addition to any limitations or biases that they may have. Finally, a comparative study of AI-driven design tools and traditional design processes was carried out to comprehend their distinctions, as well as their respective advantages and limitations. Overall, the methodology of the article entails a mix of qualitative and quantitative research approaches to obtain an all-encompassing understanding of the role AI plays in the design process.
3 Future of Design with Artificial Intelligence The future of design using AI will be a landscape that is constantly shifting, with new tools and methods being developed every year. The application of AI in design holds a tremendous amount of promise, and it is anticipated that its impact will increase in the years to come. In this chapter, we will investigate some of the developing trends in design with artificial intelligence, the implications of AI on the work of designers, as well as the potential benefits and problems of using AI in design. 3.1 Innovation in Design Made Possible by AI One of the developing trends in design using AI is the usage of AI-powered virtual assistants, which may assist designers with tasks such as picture identification, color selection, and layout creation. This is one of the rising trends in design using AI. These assistants are able to provide designers with comments and suggestions in real time, which enables the designers to work in a manner that is both efficient and productive [5].The usage of AI-powered generative design, which can produce a large number of design alternatives in a manner that is both speedy and efficient, is another trend that has emerged. This approach is very helpful for solving complicated design challenges that involve a number of different needs and constraints. 3.2 AI’s Impact on Designer The increasingly widespread application of AI in design raises critical concerns for the position of designers in the decades to come. Drafting and layout design are two examples of the types of mundane operations that could potentially be automated as AI technology continues to progress [7]. It’s possible that this will free up designers to concentrate more on being creative and coming up with new ideas. However, the job of designers will continue to be essential in order to guarantee that AI-driven designs are both visually beautiful and functional, and that they satisfy the requirements of consumers. Additionally, designers will need to have a solid understanding of AI technology and the ability to work effectively with design tools that are powered by AI [11].
3.3 Potential Benefits and Challenges of AI in Design The application of AI in design could result in a multitude of benefits, including higher productivity, accelerated prototyping, and novel approaches to design problems. AI may also assist designers in recognizing patterns and trends, which can be challenging to do using more conventional design methodologies [4]. Having said that, one must also take the difficulties into consideration. One drawback of AI-driven design could be that it lacks the human element, the emotional and personal dimension that humans provide to the design process [8]. It is possible that AI will have trouble replicating the intricate and intuitive quality of human design thinking, which could result in ideas that are less innovative or less effective. The future of design will involve AI in some capacity, and it promises to be a fascinating and rapidly changing field, with new tools and methods appearing all the time. It is crucial to explore the implications of AI for the work of designers and to discover ways to strike a balance between the benefits of AI and the particular skills of human designers.
4 AI-Powered Design Tools AI-powered design tools are software programs that designers can use to assist them with their work. These tools can automate a great deal of routine design work, produce new design possibilities, and provide real-time feedback and ideas. In this section, we investigate some of the AI-driven design tools that are now available, as well as their applications and the effectiveness and efficiency of using them. Image recognition and generative design software, as well as virtual assistants and design collaboration platforms, are among the numerous AI-powered design tools presently available on the market. These tools are applicable to a variety of design applications, including graphic design, product design, architectural design, and fashion design [3].
Future Design: An Analysis of the Impact of AI
229
4.2 Generative Design Tools for generative design use algorithms and machine learning to generate multiple design options based on the user’s specified constraints and parameters. Dreamcatcher by Autodesk is an example of a generative design tool for architecture, while DesignSpark Mechanical and Solidworks are generative design tools for product design [12]. Augmented reality (AR) enables designers to visualize their designs in real-world settings and make adjustments in real-time. In architecture and product design, AR design tools such as SketchAR and Morpholio Trace are used. Virtual Reality (VR) design tools such as IrisVR and Enscape enable designers to immerse themselves in their designs and make modifications in a 3D virtual environment [13]. 4.3 Computational Design In architecture and product design, computational design tools such as Grasshopper and Rhino are used to construct complex geometric forms and patterns using algorithms and data [14]. Material intelligence tools such as Autodessys FormZ and Solidworks Plastics simulate material properties and performance, allowing designers to make informed decisions regarding material selection and design optimization. These are just a few examples of the AI-driven architecture and product design tools available on the market. As technology advances, we can anticipate the emergence of more sophisticated and innovative tools, which will further improve the design process in these disciplines. 4.4 The Effectiveness and Efficiency of AI-Driven Design Tools Depending on the design tool and application, the effectiveness and efficacy of AIpowered design tools can vary. In certain instances, AI-powered design tools can substantially increase productivity by automating routine tasks and rapidly generating numerous design options. In other instances, however, the limitations of AI technology may result in less effective or inventive designs. A potential advantage of AI-driven design tools is their capacity to analyze vast quantities of data and identify patterns and trends that may be difficult to recognize using conventional design methods [14]. In addition, AI-driven design tools enable designers to work more collaboratively and in real time, resulting in more rapid and effective design processes. Notably, AI-driven design tools may not be able to entirely replace the creative and intuitive aspects of human design thinking. Moreover, AI-driven design tools are only as effective as the data used to train the AI models, and there is a hazard of bias and limitations in the data affecting the results. AI-driven design tools are swiftly evolving and offer designers numerous advantages in terms of productivity and creativity. Nonetheless, it is essential to carefully consider the efficacy and limitations of these tools and to find methods to balance the advantages of AI with the distinct advantages of human designers.
5 Design Expertise for the Long Term As AI-powered design tools become more pervasive in the industry, designers will need to acquire new skills and knowledge in order to utilize these tools effectively. In this chapter, we examined the skills required for designers to work with artificial intelligence,
230
K. D. Singh and Y. X. Duo
the training and education necessary to prepare future designers, and the balance between creativity and automation in design. Competencies Required for Designers Working with AI, Designers utilizing AI-powered design tools will need to possess both technical and creative skills [7, 14]. Technical abilities include proficiency with software applications associated with AI development, data analysis, and programming languages. Creative abilities include imagination, critical reasoning, and problem-solving [8, 10]. In addition, designers must be able to effectively collaborate and communicate with clients, engineers, and data scientists, among others. To integrate these tools into the design process effectively, it will be necessary to understand the capabilities and limitations of AI technology. Required Education and Training for the Preparation of Future Designers To prepare future designers to work with AI-powered design tools, design colleges and programs must incorporate AI education into their curricula [15]. This education should include both technical and creative skills, in addition to a comprehensive understanding of the ethical and social implications of AI in design. Continuing education and professional development will also be necessary for designers to stay apprised of the most recent advancements in AI technology and best practices for employing these tools. Design with a Balance of Creativity and Automation [5, 12, 14]. A discussion of how creativity in the design process may be impacted by the increasing use of AI-driven design tools is a potential cause for concern. AI may lack the creativity and intuition of human designers, despite its ability to automate commonplace tasks and generate numerous design options. To achieve a balance between creativity and automation in design, designers should view AI-powered design tools as a supplement to their own creative process, as opposed to a replacement. By combining their own distinct perspectives and creative abilities with AI’s capabilities, designers can produce innovative and effective designs [8]. The incorporation of AI-driven design tools into the design process will necessitate the development of new skills and knowledge among designers. Design schools and programs will be required to integrate AI education into their curricula, and designers will be required to remain abreast of the most recent AI technological advancements. By balancing creativity and automation in design, designers can capitalize on the assets of both human and AI design thinking to produce effective, innovative designs.
6 Case Study By examine several case studies of AI-driven design projects in a variety of industries, analyze their success and limitations, and address the impact of AI on the design process and its outcomes. The case studies illustrate the potential and limitations of AI-driven design in various industries. AI has shown promise in improving design efficiency and outcomes, but it is essential to consider the potential impact on creativity and the need for human designers to add their own unique perspective and touch to the final product. By harmonizing automation and originality, designers can utilize AI-powered design tools to create innovative and effective designs.
Future Design: An Analysis of the Impact of AI
231
6.1 Automotive Sector and Graphic Design
In the development of autonomous vehicles, the automotive industry has also incorporated AI-driven design. Improving safety and efficiency, AI algorithms can analyze sensor data in real time to make decisions regarding steering, braking, and acceleration. However, limitations include the requirement for extensive testing and validation, as well as ethical concerns regarding the potential loss of employment for human drivers [5, 12]. In the graphic design industry, AI-driven design tools have been developed for the construction of logos, business cards, and websites. These tools can generate multiple design options rapidly, saving designers time and increasing their productivity.
6.2 Architectural Industry
In 2018, the Japanese architectural firm Nikken Sekkei collaborated with the technology company NVIDIA to create an AI-generated design for a Tokyo office building [6]. The team trained a neural network on a dataset of buildings and then used the network to generate 10,000 building design options. The final design was then chosen based on a variety of factors, including structural efficiency and aesthetic appeal [13, 14]. The architectural firm Gensler designed the new terminal at Los Angeles International Airport using AI. They analyzed data on passenger flows, luggage handling, and other variables using machine learning algorithms to optimize the terminal's layout for maximum efficiency and passenger comfort. Beijing-based MAD Architects used an AI program to generate the design for the new China Philharmonic Hall in 2019 [16]. The program analyzed data on the surrounding area, including topography, wind patterns, and sunlight levels, in order to generate a design optimized for the site's particular conditions [5]. The architectural firm COOKFOX utilized AI to optimize the design of a residential high-rise in New York City in 2020 [17]. The team generated thousands of design options using a generative design algorithm and then identified the most efficient and structurally sound options using machine learning. This enabled them to design a building that maximizes natural light and ventilation while minimizing energy consumption. These examples illustrate how AI can assist architects and designers in creating designs that are more efficient, optimized, and aesthetically pleasing. AI cannot yet completely replace human creativity and intuition, but it can improve the design process and assist architects and designers in making more informed and data-driven decisions.
7 Ethical Considerations in AI-Driven Design
This section discusses the significance of ethics in AI-driven design, the responsibility of designers to ensure the ethical use of AI, and the potential biases and limitations of AI in design.
The Importance of Ethics in AI-Driven Design. AI algorithms are only as effective as the data used to train them, and if they are not closely monitored, there is a risk of perpetuating bias and discrimination in their design. Therefore, it is imperative that designers consider the ethical implications of AI in design, such as privacy, data security, and impartiality.
Designers' Obligation to Ensure the Ethical Use of AI. It is the responsibility of designers to use AI in an ethical, transparent, and inclusive manner [10]. This includes designing AI algorithms that are impartial and equitable, and being transparent about the data sources used to train the algorithms. In addition, designers must prioritize user privacy and security and ensure that their work does not perpetuate detrimental stereotypes or discrimination [11].
Accounting for Potential Biases and Limitations of AI in Design. AI algorithms may be subject to biases and limitations, which can have a substantial impact on design outcomes [6]. For example, due to biases in the data sets used to train the algorithms, facial recognition technology has been criticized for being less accurate when identifying individuals with darker skin tones. Designers must be aware of such constraints and biases and take measures to mitigate them in their work. The incorporation of AI into design has the potential to revolutionize the industry, but designers are responsible for ensuring AI is used ethically and inclusively. By considering the potential biases and limitations of AI algorithms, designers can create designs that are transparent, equitable, and inclusive [10]. As AI technology continues to advance, it is crucial for designers to remain current on ethical considerations and prioritize the wellbeing of their users.
8 AI User Designer Versus Old-School Designer
There are a number of significant distinctions between an AI user designer and a conventional designer. An AI user designer must possess the knowledge and abilities necessary to work with AI-driven design tools, such as data analysis, machine learning, and programming. A traditional designer, on the other hand, typically possesses skills related to creativity, design principles, and aesthetics [14]. An AI user designer may have a different methodology than a conventional designer, as they may be required to generate designs using data sets and algorithms; this may necessitate a more data-driven and analytical design strategy. Because of their ability to automate certain tasks and rapidly generate multiple design options, AI-driven design tools can make design processes faster and more efficient, whereas traditional designers may need more time to manually construct designs and iterate through various options [13, 16]. AI-driven design tools have numerous advantages, but they also have limitations. AI algorithms can be biased or constrained by the data sets on which they are trained, and they may lack the creativity and intuition of human designers. Traditional designers may be better suited to push the boundaries of design and come up with innovative solutions that AI alone cannot accomplish. The primary distinctions between an AI user designer and a conventional designer are their approach to design, the tools they employ, and their abilities. Nonetheless, both types of designers can complement one another and collaborate to create innovative and effective designs.
9 Conclusion
This article examined the intersection of design and AI, analyzing the history and advancements of AI in design, the current landscape of AI-driven design tools, the skillset necessary for future designers to work with AI, case studies of AI-driven design projects, and the ethics
and responsibilities of designers when using AI. The most important topics covered in this article were the potential benefits and limitations of using AI in design, the importance of designers maintaining a healthy balance between creativity and automation in design, the need for designers to be aware of the potential biases and limitations of AI algorithms, and the responsibility of designers to ensure that the use of AI is both ethical and inclusive. With its potential to streamline repetitive tasks and foster greater innovation, AI holds great promise for the future of the design industry. However, designers must be cautious not to perpetuate prejudices and discrimination, and they should prioritize the wellbeing of their consumers in their work. Future designers will need to possess a deeper understanding of AI and how it influences design. To investigate the effects that AI will have on the design industry and to develop new methods, tools, and strategies that will facilitate the incorporation of AI into design practices, additional research is required. As AI technology evolves, designers must remain current with ethical considerations and continue to prioritize user needs and the welfare of their communities. In general, AI holds great promise for the future of design, and if designers collaborate with AI technology, they will be able to create designs that are forward-thinking, inclusive, and efficient, all of which will benefit society as a whole.
References
1. Alfrink, K., Keller, I., Kortuem, G., Doorn, N.: Contestable AI by design: towards a framework. Minds Mach. 1–27 (2022)
2. Dhanorkar, S., Wolf, C.T., Qian, K., Xu, A., Popa, L., Li, Y.: Who needs to know what, when? Broadening the explainable AI (XAI) design space by looking at explanations across the AI lifecycle. In: DIS 2021—Proceedings of the 2021 ACM Designing Interactive Systems Conference, pp. 1591–1602 (2021)
3. Zhao, D., et al.: Overview on artificial intelligence in design of organic Rankine cycle. Energy AI 1, 100011 (2020)
4. Zhu, J., Liapis, A., Risi, S., Bidarra, R., Youngblood, G.M.: Explainable AI for designers: a human-centered perspective on mixed-initiative co-creation. In: IEEE Conference on Computational Intelligence and Games (CIG) (2018)
5. Usage of artificial intelligence in today's graphic design. Online J. Art Design 6(4) (2018)
6. Endo, J., Ikaga, T.: Nikken Sekkei building: Tokyo, Japan. Sustain. Build. Pract. What Users Think, 183–191 (2010)
7. Castro Pena, M.L., Carballal, A., Rodríguez-Fernández, N., Santos, I., Romero, J.: Artificial intelligence applied to conceptual design: a review of its use in architecture. Autom. Constr. 124, 103550 (2021)
8. Chaillou, S.: ArchiGAN: artificial intelligence x architecture. Archit. Intell. 117–127 (2020)
9. Olsson, T., Väänänen, K.: How does AI challenge design practice? Interactions 28(4), 62–64 (2021)
10. Hagendorff, T.: The ethics of AI ethics: an evaluation of guidelines. Mind. Mach. 30(1), 99–120 (2020). https://doi.org/10.1007/s11023-020-09517-8
11. Smithers, T., Conkie, A., Doheny, J., Logan, B., Millington, K., Tang, M.X.: Design as intelligent behaviour: an AI in design research programme. Artif. Intell. Eng. 5(2), 78–109 (1990)
12. Candeloro, D.: Towards sustainable fashion: the role of artificial intelligence—H&M, Stella McCartney, Farfetch, Moosejaw: a multiple case study. ZoneModa J. 10(2), 91–105 (2020)
13. Rauterberg, M., Fjeld, M., Krueger, H., Bichsel, M., Leonhardt, U., Meier, M.: BUILD-IT: a planning tool for construction and design
14. Irbite, A., Strode, A.: Artificial intelligence vs designer: the impact of artificial intelligence on design practice. Soc. Integr. Educ. Proc. Int. Sci. Conf. 4, 539–549 (2021)
15. Gero, J.S.: Ten problems for AI in design. Position paper, IJCAI-91 Workshop on AI in Design
16. Yansong, M.: Designing the realizable Utopia (2011)
17. Orhon, A.V., Altin, M.: Utilization of alternative building materials for sustainable construction. Green Energy Technol. 727–750 (2020)
Clean Energy, Agro-Farming, and Smart Transportation
Application of Remote Sensing to Assess the Ability to Absorb Carbon Dioxide of Green Areas in Thu Dau Mot City, Binh Duong Province, Vietnam
Nguyen Huynh Anh Tuyet(B), Nguyen Hien Than, and Dang Trung Thanh
Thu Dau Mot University, Thu Dau Mot, Binh Duong Province, Vietnam
[email protected]
Abstract. Climate change has been causing certain impacts on the environment and public health, especially in large urban areas, where the population density is high and the green area is insufficient. Trees play an important role in climate regulation because of their ability to absorb the CO2 generated by city activities. This study was conducted to evaluate the CO2 absorption of green areas in Thu Dau Mot City, Binh Duong Province, which has many large urban and residential areas, by using remote sensing technology and field investigation. The total load of CO2 absorbed by the green space of Thu Dau Mot City is about 874 thousand tons, including 698 thousand tons from high vegetation areas (accounting for 80%) and 176 thousand tons from low vegetation areas (making up 20%). Hoa Phu is the suburban ward with the highest load of CO2 absorbed by green space, while the central ward of Phu Cuong has the lowest. The economic value of the CO2 stored in the urban green area of Thu Dau Mot City is about 15 million USD, equivalent to 346 billion VND. This value will increase every year due to the growth of tree biomass. This research proves that trees have a great role in absorbing the CO2 arising from human activities, and it is therefore necessary for the authorities of Thu Dau Mot City, as well as of other places, to promote the planting, care, and protection of green spaces as much as possible to protect the environment and the community's health. Keywords: Green area · Carbon absorption · Remote sensing
1 Introduction
Climate change is now one of the top environmental challenges of many countries around the world because of its immeasurable direct dangers to the ecological environment, biodiversity, and human life, especially in big cities where green areas are scarce and populations are dense. Urban green space therefore plays an important role in the sustainable development of urban ecosystems and people's quality of life. Green spaces have a direct impact on microclimate conditions, air quality, and the recreational and aesthetic value of urban areas [1]. In addition, urban green space plays an important role in supporting the physical, psychological, and public health of urban residents [2]. Therefore, the study
of distribution, species composition, and the carbon sequestration and storage capacity of urban trees plays an important role in many fields, such as urban planning, climate change response, environmental protection, and public health [3]. With high spatial resolution, a short repeat cycle, low cost, large coverage, and the ability to reach remote areas, remote sensing technology has made great contributions to the management of urban green space in many countries around the world, for example monitoring urban green space and landscape fragmentation in Osmaniye, Turkey [2], mapping urban green space distribution in Bulgaria and Slovakia [1], developing an urban green space development plan in Colombo, Sri Lanka [4], assessing the quantity and quality of urban green space in Boussaada, Algeria [5], assessing the carbon dioxide absorption of a Mediterranean forest [6], and assessing the carbon dioxide sequestration capability of green spaces in Bogor City, Indonesia [7]. In Vietnam, remote sensing has also been applied to the calculation of carbon stocks in many localities, for example estimating the carbon stock of mangrove land in the north of Vietnam [8], estimating the carbon content of the coastal mangroves of Hai Phong [9] and of Bach Ma National Park, Thua Thien Hue [10], assessing carbon stock and CO2 absorption in the mixed broadleaf and coniferous forests of Bidoup-Nui Ba National Park, Lam Dong Province [11], evaluating the carbon storage capacity of seagrasses through biomass in Thi Nai lagoon, Binh Dinh Province [12], and determining the distribution and carbon sequestration capacity of the forest in Cam My commune, Cam Xuyen district, Ha Tinh Province [13]. Binh Duong is one of the economic centers of the Southeast region, located in the Southern key economic region, and is one of the localities with the highest urbanization rate in the country (82%), with 3 cities, 2 towns, and synchronous, modern transport infrastructure. Binh Duong is one of the leading localities in the country in industrial production, attracting foreign investment capital with 38 industrial parks in operation. Thu Dau Mot City (TDMC) is a class-I urban area and the economic, cultural, educational, and political center of Binh Duong Province. Thu Dau Mot City currently has 07 industrial parks: VSIP II, Song Than 3, Phu Tan, Kim Huy, Dai Dang, Dong An 2, and Mapletree Binh Duong. Thu Dau Mot City has also been establishing many large urban and residential areas with complete infrastructure, such as the Binh Duong New City urban area (with a scale of 1,000 ha), the Becamex City Center urban area, the Phu My Commercial-Service housing area, the Chanh Nghia Residential Area, the Phu Hoa 1 Residential Area, the Hiep Thanh III Residential Area, the Phu Thuan Residential Area, and the resettlement areas in Hoa Phu and Phu Tan wards belonging to the Binh Duong Industrial-Service-Urban Complex. The urban green space in Thu Dau Mot City is therefore very meaningful for environmental protection, climate improvement, and public health. Although urban trees play an important role in environmental protection, there has been no study evaluating the ability of the urban green area in Thu Dau Mot City to store and absorb carbon. Given these issues, the study "Application of remote sensing to assess the ability to absorb carbon dioxide in Thu Dau Mot city, Binh Duong Province" is very meaningful for the development planning of the environmental protection program and the green space development policies of Thu Dau Mot City.
2 Research Data and Methods
2.1 Research Data
This study used a remote sensing image at path 125, row 052, captured by Landsat 8 on 23rd December 2021. The image has a good quality score of 9 and a low cloud cover of 0.09%. Its spatial resolution is 30 m. This data is freely available from the US Geological Survey at https://earthexplorer.usgs.gov/.
2.2 Research Methods
The process of performing this research and the research methods are presented in Fig. 1.
[Fig. 1 shows the research workflow: the collected remote sensing image is resized; the vegetation index (NDVI) is calculated and land-use types are classified from it; the areas of high and low vegetation are then computed. For high vegetation areas, tree parameters are measured in 100 m² plots and the average biomass (above- and below-ground) and average carbon storage load are calculated; for low vegetation areas, the average carbon storage is taken from similar studies. Finally, the load of CO2 absorption is calculated, a thematic map of CO2 absorption is established, and the value of the CO2 absorbed in TDMC is estimated.]
Fig. 1. Diagram of performing this research.
Field survey method: this method was used for the two following purposes:
Field survey for image classification: This survey was used to evaluate the accuracy of the classification result. It was carried out in December 2021 with a total of 70 samples divided into 4 different types of objects: water surface, non-vegetation area, low vegetation area, and high vegetation area.
Field survey for calculation of carbon stocks: Eight standard plots with a size of 10 m × 10 m were chosen to represent the main kinds of high vegetation areas in TDMC, and the following information was collected: the plant name and the trunk diameter at a height of 1.3 m (D1.3) of all trees. This information was used to calculate the tree biomass.
Method of image classification: The remote sensing image is classified into 4 objects—water surface, non-vegetation area (bare land, houses, rock, sand, roads, yards, etc.), low vegetation area (shrubs and grassland, annual plants with height < 2 m), and high vegetation area (perennial plants, forest, etc. with height > 2 m)—based on the results of the vegetation index calculation. A vegetation index is an indicator of the plant greenness captured by a satellite image. There are several vegetation indices, but the most frequently used is the Normalized Difference Vegetation Index (NDVI) [14], calculated as:

NDVI = (NIR − RED) / (NIR + RED)

where RED is the spectral reflectance measured in the red (visible) region, band 4 of the Landsat 8 image, and NIR is the spectral reflectance measured in the near-infrared region, band 5 of the Landsat 8 image. The NDVI varies between −1 and +1. Negative values of NDVI correspond to water. Values close to zero (−0.1 to 0.1) generally correspond to barren areas of rock, sand, or snow. Low positive values represent shrubs and grassland (approximately 0.2 to 0.3), while high values indicate temperate and tropical rainforests (0.6–0.8).
Method of assessing the accuracy of image classification: This study used an error matrix to evaluate the accuracy of the classification results. The confusion matrix is established from field survey data combined with the classified image: the columns of the matrix represent the reference data, and the rows represent the classification applied to this data. The Kappa index is used to evaluate the accuracy of the classification results: a Kappa coefficient close to zero means the accuracy is poor, whereas a value equal to 1 means the accuracy is perfect. The formula is as follows [15]:

κ = (T − E) / (1 − E)

where T is the observed accuracy and E is the chance agreement, computed from the row and column totals of the confusion matrix.
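The classification and accuracy assessment above can be reproduced in a few lines. The sketch below (Python with NumPy) is illustrative: the band arrays are hypothetical inputs, and the 0.0 and 0.2 thresholds are assumptions based on the general NDVI ranges quoted above, while 0.34 is the high-vegetation boundary of Table 2; only the kappa computation and the confusion matrix come directly from this paper.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - RED) / (NIR + RED), guarding against division by zero."""
    return (nir - red) / np.maximum(nir + red, 1e-9)

def classify(v: np.ndarray) -> np.ndarray:
    """0 = water, 1 = non-vegetation, 2 = low vegetation, 3 = high vegetation."""
    c = np.ones_like(v, dtype=np.uint8)      # default: non-vegetation
    c[v < 0.0] = 0                           # negative NDVI: water
    c[(v >= 0.2) & (v <= 0.34)] = 2          # shrubs and grassland
    c[v > 0.34] = 3                          # high vegetation
    return c

def kappa(cm: np.ndarray) -> float:
    """Cohen's kappa = (T - E) / (1 - E) for a confusion matrix cm."""
    n = cm.sum()
    t = np.trace(cm) / n                                  # observed accuracy T
    e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2    # chance agreement E
    return (t - e) / (1 - e)

# The confusion matrix of Table 3 (rows = classification, columns = reference):
cm = np.array([[7, 3, 0, 0], [0, 18, 2, 0], [0, 1, 17, 2], [0, 0, 1, 19]])
print(np.trace(cm) / cm.sum(), kappa(cm))   # -> 0.8714... and 0.8235... (≈ 0.82)
```

Running this reproduces the overall accuracy of 87.14% and the kappa of about 0.82 reported below.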
Method of calculating the biomass of green space area: The amount of carbon stored in the TDMC green space is estimated from two components: the C accumulated in the high vegetation and that in the low vegetation cover. Therefore, the study applied a combination of the following rapid assessment methods to quantify the amount of C accumulated in the green areas of TDMC:
The biomass and C absorption of high vegetation areas: The biomass of a tree in the surveyed plots is calculated according to the formula

Y = 0.11 · r · D^(2+c) [16]

where Y = biomass per tree (kg/plant), D = diameter at a height of 1.3 m (cm), r = wood density = 0.6 g/cm³ (the standard average density of wood), and c = 0.62. Root biomass is taken as one quarter of the tree biomass. The biomass of the whole plot is calculated according to the formula

W = Σ_{i=1}^{n} Y_i

where W = total biomass of the whole plot (kg/plot area), n = the number of trees in a standard plot, and i = the index of the tree, from 1 to n. The carbon storage is calculated as

MC = Y × C (kg/plant)

where MC = the amount of stored carbon (kg/plant) and C = the percentage of carbon in the wood = 46% [17].
The biomass of low vegetation areas: The calculation for the low vegetation areas of TDMC is based on published biomass results for shrubs and green grass, as follows (Table 1):
Table 1. Load of biomass and C storage in various kinds of shrubs and green grass [18].

| No | Kind of low vegetation | Fresh biomass (tons/ha) | Dry biomass (tons/ha) | C storage (tons/ha) |
|----|------------------------|-------------------------|-----------------------|---------------------|
| 1 | Mascarene grass | 21.94 | 7.92 | 3.96 |
| 2 | Cow grass | 30.92 | 13.17 | 6.58 |
| 3 | Bedding grass | 29.49 | 9.84 | 4.92 |
| 4 | Reed | 103.72 | 40.40 | 20.20 |
| 5 | Gleicheniaceae | 43.69 | 20.21 | 10.10 |
| 6 | Shrub less than 2 m high | 45.50 | 20.48 | 10.24 |
| 7 | Shrub 2–3 m high | 60.57 | 27.19 | 13.59 |
Calculation of CO2 absorption: The load of CO2 absorbed by the green spaces in TDMC is calculated according to the formula CO2 = C × 44/12 [17].
Calculation of the value of the CO2 absorbed by green areas: To estimate the value of the CO2 absorbed by green areas, the research referenced the price provided by the Institute of Strategy and Policy for Natural Resources and Environment (Ministry of Science and Technology): the cost of CO2 is 17 USD/ton of CO2, with an exchange rate of 1 USD = 23,300 VND.
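Putting the formulas of this section together, the sketch below traces a single plot from tree diameters to the economic value of the absorbed CO2. It is a minimal illustration: the plot size and the diameter list are hypothetical inputs, and only the constants (r = 0.6, c = 0.62, C = 46%, the 44/12 ratio, 17 USD/ton, and 23,300 VND/USD) are taken from the text above.

```python
WOOD_DENSITY = 0.6        # r, g/cm^3
EXPONENT_C = 0.62         # c, so biomass scales with D^(2 + c)
CARBON_FRACTION = 0.46    # C, share of carbon in dry wood
CO2_PER_C = 44.0 / 12.0   # molecular-weight ratio CO2/C
PRICE_USD_PER_TON = 17.0
VND_PER_USD = 23_300

def tree_biomass_kg(d13_cm: float) -> float:
    """Per-tree biomass Y = 0.11 * r * D^(2+c), in kg/plant."""
    return 0.11 * WOOD_DENSITY * d13_cm ** (2 + EXPONENT_C)

def plot_co2_tons_per_ha(diameters_cm: list, plot_m2: float = 100.0) -> float:
    above = sum(tree_biomass_kg(d) for d in diameters_cm)   # kg per plot
    total = above * 1.25                # root biomass = 1/4 of tree biomass
    tons_per_ha = total / 1000.0 * (10_000.0 / plot_m2)
    return tons_per_ha * CARBON_FRACTION * CO2_PER_C

co2 = plot_co2_tons_per_ha([17.8, 25.0, 29.5, 37.0])        # hypothetical plot
value_vnd = co2 * PRICE_USD_PER_TON * VND_PER_USD
print(f"{co2:.1f} t CO2/ha, worth about {value_vnd:,.0f} VND/ha")
```

The same chain (biomass to carbon to CO2 to value), applied per hectare and multiplied by the classified vegetation areas, yields the city-wide totals reported in Sect. 3.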
3 Research Results and Discussion
3.1 Result of Image Classification
The land-use types in the area are classified based on the NDVI calculation. NDVI values in the research area range from −0.43 to 0.58. The NDVI image was then classified into 4 kinds of land use based on the value ranges of NDVI presented in Table 2. These ranges are consistent with other similar studies [19, 20] and gave the highest accuracy in the assessment of the classification, with an overall accuracy of 87.14% and a kappa coefficient of 0.82. The confusion matrix is as follows (Table 3):
Table 2. NDVI classification.

| No | NDVI values | Classification |
|----|-------------|----------------|
| … | … | Water |
| … | … | Non-vegetation area |
| … | … | Low vegetation area |
| … | > 0.34 | High vegetation area |
Table 3. Confusion matrix of classification.

| Landuse types | Water | Non-vegetation | Low vegetation | High vegetation | Total |
|---------------|-------|----------------|----------------|-----------------|-------|
| Water | 7 | 3 | 0 | 0 | 10 |
| Non-vegetation | 0 | 18 | 2 | 0 | 20 |
| Low vegetation | 0 | 1 | 17 | 2 | 20 |
| High vegetation | 0 | 0 | 1 | 19 | 20 |
| Total | 7 | 22 | 20 | 21 | 70 |

Overall accuracy: 87.14%; Kappa statistic: 0.82 (the classification is almost perfect).
3.2 Establishing the Map and Calculating the Area of Green Spaces in TDMC
The map of green space distribution and the area statistics of land-use types in Thu Dau Mot City are presented in Figs. 2 and 3. The total area of green space in TDMC is about 73.05 km², accounting for about 61.68% of the total area. The area of low vegetation in TDMC is about 46.96 km², accounting for 39.65% of the total natural area, and that of high vegetation is about 26.09 km², making up 22.03%. This result is consistent with the statistical data of TDMC on land-use types in 2020, according to which the area of perennial agricultural land is 26.98 km², accounting for 22.7% of the natural land area, which is equivalent to the area of high vegetation given by this study. The area of non-vegetation given by this study is about 43.52 km², accounting for 36.75%, much lower than the non-agricultural land area of TDMC, which is 91.92 km² (77.3%). This can be explained by the fact that the non-agricultural land area is counted from the type of land use recorded on the Land Use Right Certificate; in reality, the actual land use may differ, and the land may be used for agricultural purposes or left unused with shrubs and grassland growing on it. Therefore, the area of low vegetation is much larger than the total area of annual agricultural land and unused land. Among the 14 wards of TDMC, Dinh Hoa, Phu Tan, Phu My, and Tuong Binh Hiep are the units with the highest rates of low vegetation area, at over 45% of the natural land area of each ward. Tan An and Chanh My have the highest rates of high vegetation area, with more than 44%. The wards in the center of TDMC, including Phu Cuong, Hiep Thanh, Chanh Nghia, and Phu Hoa, have the lowest rates of vegetation.
Table 4. Parameters of trees in surveyed plots.

| No | D1.3 (cm) | Above-ground biomass (tons/ha) | Below-ground biomass (tons/ha) | Total biomass (tons/ha) | Carbon storage (tons/ha) | CO2 absorption (tons/ha) |
|----|-----------|--------------------------------|--------------------------------|-------------------------|--------------------------|--------------------------|
| Plot 1 | 17.8 | 95.9 | 24.0 | 119.9 | 55.2 | 202.5 |
| Plot 2 | 18.7 | 80.3 | 20.1 | 100.4 | 46.2 | 169.5 |
| Plot 3 | 19.0 | 88.6 | 22.2 | 110.8 | 51.0 | 187.1 |
| Plot 4 | 25.0 | 90.1 | 22.5 | 112.6 | 51.8 | 190.1 |
| Plot 5 | 29.7 | 139.6 | 34.9 | 174.4 | 80.2 | 294.5 |
| Plot 6 | 37.0 | 158.7 | 39.7 | 198.4 | 91.2 | 334.9 |
| Plot 7 | 29.5 | 183.1 | 45.8 | 228.9 | 105.3 | 386.5 |
| Plot 8 | 29.5 | 177.2 | 44.3 | 221.5 | 101.9 | 374.0 |
| Average | 25.8 | 126.7 | 31.7 | 158.4 | 72.9 | 267.4 |
Fig. 2. Map of green area in TDMC.
Fig. 3. Ratio of landuse types (water: 1.58%; non-vegetation: 36.75%; low vegetation: 39.65%; high vegetation: 22.02%).
3.3 Calculation Result of the Carbon Storage of Green Areas in TDMC
Carbon storage in high vegetation areas: The results show that the average biomass of the high vegetation areas of TDMC is about 158.4 tons/ha and the average load of C storage in trees is about 72.9 tons/ha; therefore, the average load of CO2 absorption through which green trees contribute to reducing greenhouse gas emissions is about 267.4 tons/ha (Table 4). This is consistent with the similar research results of Nguyen Hai Hoa in 2016, where the average load of CO2 absorption of Acacia hybrid plantation forest in Yen Lap District, Phu Tho Province was 296.64 tons/ha, slightly higher than the load of CO2 absorption found by this study, due to the greater diversity of forest ecosystems compared with perennial tree ecosystems [21].
Carbon storage in low vegetation areas: According to the field survey, the low vegetation in TDMC is mainly shrub less than 2 m high, for which the load of dry biomass is about 20.48 tons/ha and the load of C storage is about 10.24 tons/ha (Table 1). The load of CO2 absorption of the low vegetation areas is therefore about 37.6 tons/ha.
Total carbon storage of TDMC and its wards: The total load of CO2 absorption of the green areas in each administrative unit of TDMC and in the whole of TDMC is presented in Figs. 4 and 5. The total load of CO2 absorption in TDMC is about 874 thousand tons, including 698 thousand tons from high vegetation areas (accounting for 80%) and 176 thousand tons from low vegetation areas (making up 20%). Hoa Phu has the highest load of CO2 absorption, with over 210 thousand tons (24%), while Phu Cuong has the lowest, with less than 1%. This result shows that the larger the area of green space, the more CO2 can be absorbed.
Fig. 4. Map of CO2 absorption in wards of TDMC.
3.4 Estimation of the Value of CO2 Absorbed by Green Areas in TDMC
The total economic value of the CO2 stored in the green areas of TDMC is about 15 million USD, equivalent to 346 billion VND. This value is high and continues to increase every year due to the growing biomass of trees. This result only partly reflects the great benefits of trees as a CO2 absorption tank, contributing to reducing the greenhouse effect.
Fig. 5. Ratio of CO2 absorption in wards of TDMC.
In addition to this role, trees also have many other social, economic, and environmental benefits that the study has not mentioned. Therefore, it is necessary for local authorities to plant and protect trees, not only to contribute to environmental protection but also to create regional landscapes and protect the community's health.
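As a quick arithmetic cross-check of the totals reported in Sect. 3.3, the sketch below multiplies the classified areas by the per-hectare absorption loads; all numbers are taken from the text above.

```python
high_veg_ha = 26.09 * 100       # 26.09 km^2 of high vegetation, in hectares
low_veg_ha = 46.96 * 100        # 46.96 km^2 of low vegetation, in hectares

high_co2 = high_veg_ha * 267.4  # t CO2 from high vegetation (~698,000)
low_co2 = low_veg_ha * 37.6     # t CO2 from low vegetation  (~177,000)
total = high_co2 + low_co2      # ~874,000 t CO2 in total

value_usd = total * 17.0        # ~15 million USD at 17 USD/ton
print(round(high_co2), round(low_co2), round(total), round(value_usd))
```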
4 Conclusion
The urban green tree system plays an important role in storing CO2, participating in climate regulation, creating landscape, and mitigating climate change. This study was carried out to evaluate the CO2 absorption capacity of the green spaces in TDMC by applying GIS, remote sensing, and field investigation. The results show that the total load of CO2 absorbed by green spaces is about 874 thousand tons, including 698 thousand tons from high vegetation areas (80%) and 176 thousand tons from low vegetation areas (20%). Hoa Phu, a suburban ward with the largest green space, has the highest load of CO2 absorption, and Phu Cuong has the lowest because of its high population density and smallest green space. The economic value of the CO2 stored in the urban green areas of Thu Dau Mot City is about 15 million USD, equivalent to 346 billion VND. These results prove that trees have a great role in reducing CO2, one of the main greenhouse gases arising from human activities and causing climate change. Therefore, the authorities of TDMC, as well as those of other places, should pay attention to planting, caring for, and protecting more and more green spaces for their communities.
References
1. Vatseva, R., et al.: Mapping urban green spaces based on remote sensing data: case studies in Bulgaria and Slovakia. In: 6th International Conference on Cartography and GIS, Albena, Bulgaria (2016)
2. Atasoy, M.: Monitoring the urban green spaces and landscape fragmentation using remote sensing: a case study in Osmaniye, Turkey. Environ. Monit. Assess. 190(12), 1–8 (2018). https://doi.org/10.1007/s10661-018-7109-1 3. Shahtahmassebi, A.R., et al.: Remote sensing of urban green spaces: a review. Urban Forestry Urban Greening 2020 4. Senanayake, I.P., Welivitiya, W.D.D.P., Nadeeka, P.M.: Urban green spaces analysis for development planning in Colombo, Sri Lanka, utilizing THEOS satellite imagery—A remote sensing and GIS approach. Urban For. Urban Greening 12(3), 307–314 (2013) 5. Malika, O., et al.: Quantitative and qualitative assessment of urban green spaces in Boussaada City, Algeria using remote sensing techniques. Geogr. Reg Planning 14(3), 123–133 (2021) 6. Garbulsky, M.F., et al.: Remote estimation of carbon dioxide uptake by a Mediterranean forest. Glob. Change Biol. 14(12), 2860–2867 (2008) 7. Hazarin, A.Q., Rokhmatuloh, Shidiq, I.P.A.: Carbon dioxide sequestration capability of green spaces in bogor city. In: IOP conference series: earth and environmental science (2019) 8. Pham, T.D., et al.: Improvement of mangrove soil carbon stocks estimation in north Vietnam using sentinel-2 data and machine learning approach. GIS Sci. Remote Sens. 58(1) (2020) 9. Nguyen, H.H., et al.: Biomass and carbon stock estimation of mangrove forests using remote sensing and field investigation- based data on Hai Phong coast. Vietnam J. Sci. Technol. 59(5), 560–579 (2021) 10. Nguyen, H.K.L., Nguyen, B.N.: Mapping biomass and carbon stock of forest by remote sensing and GIS technology at Bach Ma National Park, Thua Thien Hue province. J. Vietnamese Environ. 8(2) (2016) 11. Do, H.T.H., et al.: Forest carbon stocks and evaluating CO2 sequestration in the mixed broadleaf and coniferous of Bidoup—Nui Ba National Park, Lam Dong province. Sci. Technol. Dev. J. Sci. Earth Environ. 5(2), 95–105 (2021) 12. Van Luong, C., Nga, N.T.: Initial assessment of carbon storage capacity of seagrasses through biomass in Thi Nai lagoon, Binh Dinh province. J. Mar. Sci. Technol. 17(1), 63–71 (2017) 13. Bao, T.Q., Son, L.T.: Research and application of high-resolution satellite images to determine the distribution and carbon absorption capacity of forests. J. Agric. Rural Dev. (2012) 14. Taufik, A., et al.: Classification of Landsat 8 satellite data using NDVI thresholds. J. Commun. Electron. Comput. Eng. 8(4), 37–40 (2016) 15. Adam, A.H.M., et al.: Accuracy assessment of land use & land cover classification (LU/LC) “Case study of Shomadi area-Renk County-Upper Nile State, South Sudan. Int. J. Sci. Res. Publ. 3(5) (2013) 16. Da, T.B.: Estimating the CO2 absorption capacity of the forest recovered from cultivation land in the Thuong Tien nature reserve, Hoa Binh District. J. Agric. Rural Dev. 10(154), 85–89 (2010) 17. Kiran, S., Kinnary, S.: Carbon sequestration by urban trees on roadsides of Vadodara City. http://www.ijest.info/docs/IJEST11-03-04-075.pdf. Last accessed 22 May 2022 18. Phuong, V.T.: Research on carbon stocks and shrubs—Basis for determining baseline carbon in afforestation/reforestation projects under the clean development mechanism in Vietnam. J. Agric. Rural Dev. 2 (2006) 19. Akbar, T.A., Hassan, Q.K., Ishaq, S., Batool, M., Butt, H.J., Jabbar, H.: Investigative spatial distribution and modelling of existing and future urban land changes and its impact on urbanization and economy. Remote Sens. 11(105) (2019) 20. 
Rizvi, R.H., Yadav, R.S., Singh, R., Datt, K., Khan, I.A., Dhyani, S.K.: Spectral analysis of remote sensing image for assessment of agroforestry areas in Yamunanagar district of Haryana. Precis. Farming Agro-meteorol. (2009) 21. Hoa, N.H., An, N.H.: Application of Landsat 8 remote sensing images and GIS to build a map of biomass and carbon reserves of acacia hybrid plantations in Yen Lap District, Phu Tho Province. J. Forest. Sci. Technol. (2016)
Efficient Cooling System of Cloud Data Center by Reducing Energy Consumption Nazia Tabassum Natasha1 , Imran Fakir1 , Sabiha Afsana Falguni1 , Faria Tabassum Mim1 , Syed Ridwan Ahmed Fahim1 , Ahmed Wasif Reza1(B) , and Mohammad Shamsul Arefin2,3(B) 1 Department of Computer Science and Engineering, East West University, Dhaka, Bangladesh
[email protected]
2 Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh
[email protected]
3 Department of Computer Science and Engineering, Chittagong University of Engineering and Technology, Chattogram, Bangladesh
Abstract. Cloud computing makes computing a utility and enables scientific, consumer, and corporate applications. This growth raises energy, CO2, and economic concerns, and cloud computing companies are increasingly worried about the energy used by cloud data centers. Work on Green Cloud Environments (GCE) has provided formulations, solutions, and models to reduce environmental impact as well as energy consumption under recent models, considering components for both static and dynamic clouds. Our technique models cloud computing data centers; accomplishing this requires understanding trends in cloud energy usage. We analyze energy consumption trends and show that, by using appropriate optimization techniques guided by our energy consumption models, cloud data centers may save 20% of their energy. Our study incorporates energy monitoring into cloud computing and helps optimize at the system level. Keywords: Green cloud environment · Resource management · Cloud computing · Energy efficiency · Cooling system
1 Introduction
Recent advances in cloud computing enable the sharing of computing resources, including web applications, processing power, memory, networking infrastructure, etc. [1]. The prevalent utility computing paradigm followed by the majority of cloud computing service providers offers attractive features for clients whose need for virtual resources fluctuates over time. The tremendous scalability of e-commerce, online banking systems, social media and networking, e-government, and information processing has resulted in diverse and extensive workloads. In the meantime, the computing and information processing capabilities of a variety of commercial corporations and public organizations, ranging from banking and transportation to manufacturing and housing,
have been rapidly expanding. Such a dramatic rise in computing demand necessitates a scalable and efficient information technology (IT) infrastructure, consisting of data centers, power plants, storage facilities, network capacity, and staff, with enormous capital expenditures and operating costs. Power consumption is a limiting factor in the architecture of today's data centers and cloud computing infrastructures. The electricity and energy used by computers and their associated cooling systems account for a significant portion of these high energy expenditures and greenhouse gas emissions. With a predicted annual growth rate of 12% [2, 3], the global energy consumption of data centers is expected to reach 26 GW, or around 1.4% of global electrical energy consumption. In Barcelona, a data center with a medium-sized supercomputer has an annual energy expenditure of about 11 million dollars, since its consumption of 1.2 megawatts [4] is almost equivalent to that of 1200 homes [5]. The United States Environmental Protection Agency (EPA) reported to Congress [6] that in 2006, data centers in the United States used 61 billion kWh of electricity, or 1.5 percent of all electricity consumed in the country, at a cost of $4.5 billion. While servers, cooling, connectivity, backup storage, and power distribution units (PDUs) account for between 1.71% and 2.21% of total data center electricity consumption [3], US data centers, which host 40% of the world's cloud-based data center servers, increased their electrical consumption by approximately 40% during the economic crisis. From 2000 to 2005, data centers' share of total US energy consumption increased from 0.8% to 1.5% [7]. In 2006, cloud data centers caused 116.2 million tons of CO2 to be released into the atmosphere [6]. In 2010, Google's data centers used approximately 2.26 million MWh of power, leaving a carbon footprint of 1.47 million metric tons of CO2 [8]. The Intergovernmental Panel on Climate Change has recommended a reduction of 60 to 80% by 2050 to prevent catastrophic environmental damage. A cloud computing system consists of hundreds to tens of thousands of server computers providing client services [9, 10]. Existing servers are not energy proportional: even at 20% usage, servers require 80% of their maximum power [11]. In a cloud computing context, this non-proportionality of server energy is a major source of inefficiency. Servers are often operated at 10–50% of their maximum load while undergoing frequent periods of inactivity [12]. This suggests that servers are, most of the time, not used at their optimal power-performance trade-off points; even when they are not actively processing data, they consume a considerable amount of power. The cost of running air conditioning and refrigeration units (CACU), which accounts for approximately 30 percent of the cloud environment's total energy cost [13], is one of the main contributors to power inefficiency in cloud computing environments. The cooling of cloud data centers thus continues to be a significant contributor to inefficiency due to excessive energy consumption. The majority of the UPS modules in cloud data centers run between 10 and 40% of their maximum capacity [14], where, unfortunately, the power conversion efficiency of a UPS is quite low. The power usage effectiveness (PUE) metric shows how much energy is lost in power conversion, cooling, and air distribution [15].
The PUE measure has gradually decreased over the past decade. A typical cloud-based data center PUE estimation would be around 2.6 in 2003 [16]. One analysis conducted by Koomey in 2010 indicated that the average PUE
during that year ranged from 1.83 to 1.92 [3]. The PUEs of the latest cloud data centers owned by Google, Microsoft, and Facebook are reported to be exceptionally low, with values ranging from below 1.20 to as low as 1.10 [8]. The measured energy efficiency of a cloud-based data center, CDCEE (Cloud-based Data Center Energy Efficiency), is defined as follows:

CDCEE = ITU × ITE / PUE (1)

where ITE, the information technology efficiency, is the amount of productive work per unit of energy (joule), and ITU, the cloud data center IT utilization, is the ratio of typical IT use to peak IT capacity.
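A small sketch of Eq. (1) makes the metric concrete; the utilization, efficiency, and PUE values below are illustrative assumptions, not measurements from this paper.

```python
def cdcee(itu: float, ite: float, pue: float) -> float:
    """Cloud-based Data Center Energy Efficiency: CDCEE = ITU * ITE / PUE."""
    return itu * ite / pue

# The same IT equipment looks very different under a 2003-era PUE of 2.6,
# the 2010 industry average of about 1.83, and a modern facility at 1.1:
for pue in (2.6, 1.83, 1.1):
    print(pue, round(cdcee(itu=0.25, ite=0.9, pue=pue), 3))
```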
With dynamic workload distribution and server consolidation, energy proportionality can be achieved at the server-pool or cloud data center level, flattening the server power-usage curve toward a straight line through the origin [11]. In addition, coordinating low-power modes for inactive servers has been shown to enable energy-proportional operation [17]: the authors claim that a 50% reduction in energy consumption can be achieved by switching to energy-proportional servers that consume only 10% of peak power in their idle state. The authors also demonstrated that one method of creating energy-efficient servers is to reduce the power consumption of the hard drive, RAM, network cards, and central processing unit. Combining air handling units and computer room air conditioning (CRAC) units with variable frequency drive (VFD) fans in heat exchangers can help cool a cloud data center while using less energy, allowing varying airflow rates under a variety of heat loads. Lastly, reducing this energy use may have additional financial benefits: the high cost of energy is only one downside of using more power, as the increased heat produced also increases the likelihood that hardware systems will fail [18]. For these reasons, cutting back on energy consumption is good for the environment. The work in [19–29] shows good contributions and guidelines. To solve this problem and ensure that cloud computing and data centers can continue to grow in an energy-efficient way, especially when cloud resources need to meet Quality of Service (QoS) requirements set by users through Service Level Agreements (SLAs), energy use needs to go down. The main goal of this study is to develop a new energy consumption model that gives a full picture of how much energy is used in virtualized data centers, with the aim of making cloud computing more eco-friendly and sustainable in the forthcoming years. The remaining sections are organized as follows. Section 2 introduces related research on cloud data center settings; in Sect. 3, we detail the formulas and patterns behind energy use and formulate energy consumption models for cloud computing environments. Section 4 contains the results and a discussion of our energy consumption architecture and the assessment of our models; Sect. 5 concludes the study, and Sect. 6 reviews the main difficulties and ideas for further research. This study is about how much energy a cloud data center consumes and how cooling can be made efficient and energy-saving. It explores and analyzes the energy consumption and cooling models of cloud-based data centers and describes how they can be modeled to obtain efficient cloud data centers. In addition, the study uses a representative cloud data center as a case study, examining how a cloud-based data
center as a case study. The study will be based on how a cloud-based data center can be efficient by reducing energy consumption.
2 Related Work
Table 1 shows a summary of related work.

Table 1. A review of the literature.

| Power modeling approach | Goal | Technique | Drawback |
|-------------------------|------|-----------|----------|
| [30] | Decrease power consumption | Dynamic voltage scaling technique (DVS) | There will be a delay and a slowdown in data transfer |
| [31] | Propose architectural components for energy efficiency to reduce electricity usage | Modified best-fit decreasing approach | For energy profiling, only CPU power consumption is considered |
| [32] | Cut operational costs and offer eco-friendly alternatives for cloud computing | Model energy consumption more precisely | Energy usage during migration is not considered |
| [33] | Reduce power consumption | Virtualization clique star cover number method | Energy modeling is not performed |
| [34] | Reduce power consumption | Energy usage is determined by the application, system components, and software application | No new modeling for energy is proposed |
The authors of [35] proposed an economical approach for computing the overall expenses of owning and operating cloud-based systems, and created quantifiable measures for this computation. Their granularity, however, is that of a single piece of hardware; this can be enabled by keeping track of the energy profiles of individual system elements such as the CPU, RAM, disk, and cache. Anne et al. observed that nodes continued to use energy even after being turned off, due to the built-in card controllers that are used to wake up remote nodes [36]. Sarji et al. suggested two energy models for varying a server's operational modes [37]. They measure the server's AC input to determine how much energy is used in the idle, sleep, and off states and in switching between these states, calculating the real power consumption. However, if the load increases suddenly, switching between power modes could result in poor performance.
Other research initiatives have also been conducted, focusing mainly on virtualization to reduce energy use in cloud systems. Cloud virtualization was suggested by Yamini et al. [33] as a viable means of reducing energy use and global warming; instead of employing many servers to provide services for various devices, their method uses fewer servers. For us, the most relevant power modeling methodologies are those put forward by Pelley et al. [38] for physical infrastructure, namely cooling and power systems: they developed the first model designed to represent a data center fully. This research study provides methods for assessing and examining the overall energy consumption in cloud computing settings. Our strategy targets both analytical models and cloud data center simulations. We show how putting energy optimization rules into place and using energy consumption models can save a significant amount of energy in cloud data centers.
3 Materials and Methods
3.1 Data Collection Methods
This section specifies the methods and procedures used to achieve the research objective; the research design and data sources are described here. We collected data from various online resources on how cloud-based data centers consume energy and how efficient their cooling is, based on six cases.
3.2 Energy Consumption Formulas
During a given time interval, a server's overall energy consumption, denoted by E_Total, is the sum of the energy consumed in both the static and dynamic states of the system, as formulated below:

E_Total = E_Fix + E_Dyn (2)
The energy used by the server when idle, together with the energy used for cooling, computation, storage, communication, and scheduling, are the main components considered in this paper:

E_Idle = energy consumption of the idle server
E_Cool = energy consumption of the cooling system
E_Compu = energy consumption of computation resources
E_Store = energy consumption of storage resources
E_Commu = energy consumption of communication resources
E_Sched = energy generated by scheduling overhead
Therefore, the above formula (2) can be expanded into:

E_Total = E_Idle + E_Cool + E_Commu + E_Store + E_Compu + E_Sched (3)
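The decomposition in Eqs. (2)–(3) maps directly onto a simple data structure. The sketch below is illustrative; the component values are placeholders, not measurements.

```python
from dataclasses import dataclass

@dataclass
class ServerEnergy:
    """Energy components over one time interval (all in the same unit)."""
    idle: float
    cool: float
    commu: float
    store: float
    compu: float
    sched: float

    @property
    def total(self) -> float:
        # E_Total = E_Idle + E_Cool + E_Commu + E_Store + E_Compu + E_Sched
        return (self.idle + self.cool + self.commu +
                self.store + self.compu + self.sched)

e = ServerEnergy(idle=120.0, cool=80.0, commu=15.0, store=10.0, compu=200.0, sched=5.0)
print(e.total)  # 430.0
```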
3.3 Modeling Energy Consumption of Idle State
The well-known relation (4), from Joule's and Ohm's laws [39], can be used to determine how much energy a server uses when it is not being used:

E = I × V (4)

E_i = I_i × V_i (5)

Here E_i, I_i, and V_i are the power, current, and voltage of the i-th (idle) core, respectively. Also, from the relationship between I_i and V_i, the leakage current can be modeled by the following second-order polynomial:

I_i = α·V_i² − β·V_i + γ (6)

where α = 0.114 (V·Ω)⁻¹, β = 0.228 Ω⁻¹, and γ = 0.139 V/Ω are the coefficients [40]. With energy-saving techniques installed, a core's (processor's) idle energy usage drops dramatically; this is achieved by lowering a core's voltage and frequency via Dynamic Voltage and Frequency Scaling (DVFS):

E_ri = β_i · E_i (7)
where β_i = the reduction factor, E_i = the power consumption, and E_ri = the lowered power consumption of the i-th core. β_i varies according to the energy-saving method employed. Given a processor with n cores and a certain energy-saving strategy, its energy consumption in the idle state can be calculated as follows:

E_Idle = Σ_{i=1}^{n} E_ri (8)

For different types of Intel processors, the value of the reduction factor β_i also differs.
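The idle-state model of Eqs. (4)–(8) is easy to evaluate numerically. In the sketch below, the per-core voltage and the reduction factor are hypothetical inputs; only the leakage coefficients come from Eq. (6).

```python
ALPHA, BETA, GAMMA = 0.114, 0.228, 0.139   # leakage coefficients of Eq. (6)

def leakage_current(v: float) -> float:
    """I_i = alpha * V_i^2 - beta * V_i + gamma."""
    return ALPHA * v ** 2 - BETA * v + GAMMA

def idle_power(core_voltages, beta_reduction: float) -> float:
    """E_Idle = sum over cores of beta_i * (I_i * V_i), per Eqs. (4)-(8)."""
    return sum(beta_reduction * leakage_current(v) * v for v in core_voltages)

# Four cores at 1.1 V with an assumed DVFS reduction factor of 0.6:
print(idle_power([1.1, 1.1, 1.1, 1.1], beta_reduction=0.6))
```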
3.4 Modeling the Energy Consumption of the Cooling System
The material used for the chiller, the speed of the air leaving the CRAH unit, and other factors affect the effectiveness of the cooling process. According to thermodynamic principles, the heat transmitted between two bodies is generally given by:

ρ = ω · C_hc · (T_ha − T_ca) (9)

where ρ = the power transferred from a single device to the fluid, ω = the fluid mass flow rate, C_hc = the fluid's specific heat capacity, T_ha = the hot temperature, and T_ca = the cold temperature. The values of ω, T_ha, and T_ca depend on the air recirculation and airflow throughout the data center.
We use a confinement index, κ, based on earlier measurements of recirculation [41, 42]: the fraction of the air consumed by a server that is supplied directly by a CRAH. If κ is 1, there is no recirculation within the system. Our model represents typical behavior by a single global confinement index, leading to the following:

ρ = κ · ω_air · C_hc · (T_aha − T_aca) (10)

where ρ = the heat transferred by the server or CRAH, ω_air = the total volume of air passing through the equipment, T_aha = the temperature of the air exhausted by the server, and T_aca = the temperature of the cold air supplied by the CRAH. We utilize an adapted effectiveness-NTU approach to model the CRAH unit, as in Eq. (11) [43]:

ρ_CRAH = E · κ · ω_air · C_hc · f^0.7 · (κ·T_aha + (1 − κ)·T_aca − T_wt) (11)
Here ρ_CRAH is the heat removed by the CRAH, E is the transfer efficiency at maximum mass flow, f denotes the volume flow rate (0 to 1, relative to the maximum), and T_wt is the chilled water temperature. The Coefficient of Performance (COP), the ratio between the heat removed by the CRAH unit (Q) and the total amount of energy used by the CRAH unit to chill the air (E), is used to evaluate the efficiency of a CRAH unit [44]:

COP = Q / E (12)
The cold air temperature (T_S) that a CRAH unit supplies to the cloud facility affects its coefficient of performance. The cloud environment's overall power dissipation is equal to the sum of the energy consumed by the CRAH (E_CRAH) and IT (E_IT) devices [45]. The CRAH energy usage can be stated as:

E_CRAH = E_IT / COP(T_S) (13)
In the CRAH unit, the power required by the fans is the primary contributor to the total energy budget. This power grows proportionally to the cube of the mass flow rate (f) and plateaus at a maximum threshold, while the sensors and control systems consume a fixed amount of energy. Consequently, the energy spent by the CRAH unit equals the sum of its fixed and dynamic activity:

E_cool = E_CRAH,Idle + E_CRAH,Dyn · f³ (14)
At constant outdoor and chilled water supply temperatures, the energy needed by the chillers grows quadratically with the amount of heat they remove. The HVAC industry has created several modeling techniques to evaluate chiller performance. Although physics-based models exist, we chose the DOE-2 chiller model [46]. Fitting the DOE-2 model to a specific chiller requires numerous physical measurements; we used a set of regression curves provided by the California Energy Commission [19].
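The CRAH relations of Eqs. (10), (11), and (14) can be sketched as follows; every numeric input here (temperatures, flow fraction, efficiencies) is an illustrative assumption, not a measured value.

```python
def crah_heat_removed(eff: float, kappa: float, omega_c: float, f: float,
                      t_hot: float, t_cold: float, t_water: float) -> float:
    """Eq. (11): rho_CRAH = E*kappa*omega*C_hc*f^0.7*(kappa*T_aha + (1-kappa)*T_aca - T_wt).
    omega_c bundles the air mass flow and specific heat (omega_air * C_hc)."""
    return eff * kappa * omega_c * f ** 0.7 * (
        kappa * t_hot + (1 - kappa) * t_cold - t_water)

def crah_energy(e_idle: float, e_dyn: float, f: float) -> float:
    """Eq. (14): fixed fan/controls power plus a term cubic in the flow rate."""
    return e_idle + e_dyn * f ** 3

q = crah_heat_removed(eff=0.7, kappa=0.9, omega_c=50.0, f=0.6,
                      t_hot=35.0, t_cold=18.0, t_water=7.0)
print(q, crah_energy(e_idle=2.0, e_dyn=10.0, f=0.6))
```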
3.5 Modeling Power Conditioning Systems Energy Consumption
Cloud computing systems require substantial infrastructure just to provide uninterrupted and reliable electricity. The power conditioning units constantly lose energy, with losses that grow with the square of the load [19]:

E_PDU = E_PDU,idle + η_PDU · Σ_Servers μ_Srv² (15)

where E_PDU = the energy consumption of the PDU (Power Distribution Unit), η_PDU = the energy loss coefficient of the PDU, and E_PDU,idle = the idle PDU's energy consumption. UPS energy costs are determined by the relationship [19]:

E_UPS = E_UPS,idle + η_UPS · Σ_PDUs E_PDU (16)
where η_UPS = the UPS loss coefficient. About nine percent of the input energy is lost at full load, and losses account for a total of 12% of server energy use at peak load. These losses produce additional heat that the cooling system must remove.
3.6 Analysis Tool for Energy Consumption
As illustrated in Fig. 1, the design of our power-saving system is based on monitoring, customization, and optimization. Every component of the cloud environment is automatically monitored. Another important contribution of this paper is a comprehensive analysis of the state of data center resources and cloud environments, with the relevant energy consumption attributes and links. Periodically, the optimization unit examines this state to identify alternative software- and service-allocation designs that minimize energy. After a suitable energy-efficient configuration has been identified, a sequence of operations is executed on the cloud system to realize this energy-efficient placement, completing the loop. Candidate target configurations are ranked according to the energy consumption projected by the energy calculator module, applying energy-efficient measures without violating existing service level agreements. The ability of this module to predict energy consumption after a potential reconfiguration is essential for making the most effective energy-saving decisions. Figure 2 shows the flow of the power and cooling systems of the cloud environment. Water is chilled in a chiller and flows to the internal air handling unit, which is powered by CRAH humidification and CRAH electrical power. The warm water returned by the unit goes back to the external water chiller, and the loop continues. Chilled air, in turn, goes to the electrically powered servers, and the warm exhaust air is returned to the air mixing area of the internal computer room air handling unit to be chilled again.
Fig. 1. Energy consumption architecture.
Fig. 2. Cloud system flow of power system and cooling system.
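A minimal sketch of the power-conditioning losses of Eqs. (15) and (16), as used by the energy calculator module described above; the loads, loss coefficients, and counts are illustrative assumptions.

```python
def pdu_energy(idle: float, eta_pdu: float, server_loads) -> float:
    """Eq. (15): E_PDU = E_PDU,idle + eta_PDU * sum(mu_Srv^2) over servers."""
    return idle + eta_pdu * sum(u ** 2 for u in server_loads)

def ups_energy(idle: float, eta_ups: float, pdu_energies) -> float:
    """Eq. (16): E_UPS = E_UPS,idle + eta_UPS * sum(E_PDU) over PDUs."""
    return idle + eta_ups * sum(pdu_energies)

# Four PDUs, each feeding 40 servers drawing 250 W:
pdus = [pdu_energy(idle=50.0, eta_pdu=1e-4, server_loads=[250.0] * 40)
        for _ in range(4)]
print(pdus[0], ups_energy(idle=100.0, eta_ups=0.09, pdu_energies=pdus))
```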
4 Results and Discussion
In this part, we demonstrate the usefulness of our models. Using the energy consumption models [42], the power requirements of several hypothetical cloud data centers can be compared, each based on a recently introduced power-efficiency feature. Each cloud data center is evaluated at 25% utilization. The configured cloud data centers and the induced workload are presented in Table 2.
4.1 Cases 1 and 2
These are typical cloud-based data centers with outdated physical infrastructure, as found in buildings built during the last three to five years. For the temperature of the outside air, we use the annual average. Furthermore, we consider a typical, inefficient server whose idle power is 60% of its peak power, a static chilled water temperature of 7 °C, and a CRAH air supply temperature of 18 °C. We scaled the cloud data center so that, at peak usage, the Case 1 facility uses precisely 10 MW.
Table 2. Assumptive comparison of cloud-based data centers.

Cloud case | χ    | κ    | E_Idle/E_Peak | T (°F) | Optimal cooling
1          | 0.95 | 0.9  | 0.6           | 70     | No
2          | 0.93 | 0.9  | 0.6           | 50     | No
3          | 0.25 | 0.9  | 0.6           | 50     | No
4          | 0.25 | 0.9  | 0.05          | 50     | No
5          | 0.25 | 0.9  | 0.05          | 50     | Yes
6          | 0.25 | 0.99 | 0.5           | 50     | Yes
4.2 Case 3

This case shows a decrease in the percentage of operational servers within the data center, from 81 to 44%. The aggregate power draw is drastically reduced by improved consolidation but, paradoxically, the PUE increases. These findings highlight the flaw of PUE as a measure of energy efficiency, since it ignores the inefficiency of IT equipment.

4.3 Case 4

In Case 4, servers can operate at 5% of their maximum power while quickly entering a low-power sleep mode, reducing the power consumption of the entire data center by 22% [18]. Both this strategy and the consolidation of virtual machines aim to overcome the same source of energy inefficiency: wasted idle server power.

4.4 Case 5

Here the cooling infrastructure is placed under integrated dynamic control. We consider an optimizer that seeks to reduce chiller power and has a broad understanding of data center load and environmental circumstances. The optimizer's selection of the optimal T_wt value is crucial for the CRAHs to reach the maximum allowable server inlet temperature. This example shows the potential of intelligent cooling control.

4.5 Case 6

In this case, the power consumption of the cooling system is significantly reduced, and power efficiency is constrained by the power conditioning infrastructure. We have provided comprehensive models of cloud data center foundations that are practical to apply in a thorough cloud environment simulation architecture, together with abstract estimation and green energy forecasting as an effective solution. Figure 3 summarizes all six cases discussed above and validates the efficiency of our proposed model for cases 5 and 6.
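To illustrate how the Table 2 parameters could drive such a comparison, here is a hedged sketch using the widely cited linear idle-to-peak server power model [11]; interpreting χ as the fraction of active servers, and the fleet size and per-server peak power, are assumptions made only for this example.

# Hedged sketch: per-case server fleet power under the common linear
# idle-to-peak model [11].  Interpreting chi as the fraction of servers
# kept active and fixing fleet size/peak power are illustrative assumptions.

N_SERVERS = 10_000          # assumed fleet size
P_PEAK_W = 250.0            # assumed per-server peak power (W)

def fleet_power(chi, idle_over_peak, utilization=0.25):
    """Active servers draw idle power plus a load-proportional share."""
    active = chi * N_SERVERS
    p_idle = idle_over_peak * P_PEAK_W
    per_server = p_idle + utilization * (P_PEAK_W - p_idle)
    return active * per_server / 1e6  # MW

cases = {1: (0.95, 0.6), 2: (0.93, 0.6), 3: (0.25, 0.6),
         4: (0.25, 0.05), 5: (0.25, 0.05), 6: (0.25, 0.5)}
for case, (chi, ratio) in cases.items():
    print(f"Case {case}: ~{fleet_power(chi, ratio):.2f} MW of server power")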
Fig. 3. Comparison of six cases: total power (kW) of the power-saving features, broken down into server, chiller, CRAH, and PDU components.
5 Conclusion

As it offers many advantages to its users, cloud computing is gaining increasing importance in the IT field. IT resources are made available in the cloud environment due to the high demands of users. Because of this, the amount of power and energy used by the cloud has become a problem for environmental and economic reasons. In this paper, we give formulas for computing the total amount of energy used in cloud environments and show the reasons to use less energy. In this regard, we discussed tools for measuring energy use and methods of empirical analysis. We also presented models of how much energy a server uses when it is idle and when it is working. These results are important for developing energy policies and energy management techniques that reduce energy use while keeping system performance high in cloud environments.
6 Future Work

As part of our future work, we are determined to investigate various cloud environments. Additionally, we will develop new optimization policies to reduce CO2 emissions from cloud environments. We want to incorporate the rate of energy cost into our future re-designed models to reduce the total energy cost and ensure that they have a minimal environmental impact.
References 1. Armbrust, M.: Above the clouds: a Berkeley view of cloud computing. Technical Rep. UCB/EECS-2009–28 (2009)
2. BONE Project: WP 21 tropical project green optical networks: report on year 1 and update plan for activities. No. FP7-ICT-2007–1216863 BONE project, Dec. 2009 3. Koomey, J.: Estimating Total Power Consumption by Server in the U.S and the World, Lawrence Berkeley National Laboratory, Stanford University (2007) 4. Toress, J.: Green computing: The next wave in computing. In: Ed. UPC Technical University of Catalonia (2010) 5. Kogge, P.: The tops in flops. IEEE Spectrum, 49–54 (2011) 6. U.S Environmental Protection Agency.: Report to Congress on Server and Datacenter Energy Efficiency Public Law (2006) 7. Liu, Z., Lin, X., Hu, X.: Energy-efficient management of data center resources for cloud computing: a review. Front. Comp. Sci. 7(4), 497–517 (2013) 8. Miller, R.: Google’s energy story: high efficiency, huge scale (2011). Available at: https://www.datacenterknowledge.com/archives/2011/09/08/googles-energy-story-highefficiency-huge-scale. (Accessed: 15 Oct 2022) 9. Armbrust, M., et al.: A view of cloud computing. Commun. ACM 53(4), 50–58 (2010) 10. Buyya, R.: Market-oriented cloud computing: Vision, hype, and reality of delivering computing as the 5th utility. in Proc. Int. Symp. Cluster Comput. Grid, p. 1 (2009) 11. Barroso, L.A., Hölzle, U.: The case for energy-proportional computing. IEEE Comput. 40(12), 33–37 (2007) 12. Barroso, L.A., Hölzle, U.: The datacenter as a computer: an introduction to the design of warehouse- scale machines. Morgan and Claypool, San Rafael, CA (2009) 13. Rasmussen, N.: Calculating total cooling requirements for datacenters. Am. Power Convers. white paper 25 (2007) 14. U.S. Department of Energy.: Data center energy efficiency training. Electr. Sys. (2011). Available at: https://www.energy.gov/eere/amo/energy-efficient-cooling-control-systems-data-cen ters (Accessed: 15 Oct 2022) 15. Belady, C., Rawson, A., Pfleuger, J., Cader, T.: The green grid datacenter power efficiency metrics: PUE and DCIE. GreenGrid, White Paper-06 (2007) 16. Ghemawat, S., Gobioff, H., Leung, S.-T.: The google file system. In: Proceeding of the ACM Symposium Operating Systems Principles, pp. 29–43 (2003) 17. Meisner, D., Gold, B.T., Wenisch, T.F.: PowerNap: eliminating server idle power. In: Proceeding of the 14th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), USA (2009) 18. Feng, W.C., Feng, X., Rong, C.: Green supercomputing comes of age. IT Prof 10(1), 17–23, Jan.-Feb (2008) 19. Uchechukwu, A., Li, K., Shen, Y.: Improving cloud computing energy efficiency. In: Proceeding of the Asia Pacific Cloud Computing Congress (2012) 20. Yeasmin, S., Afrin, N., Saif, K., Reza, A.W., Arefin, M.S.: Towards building a sustainable system of data center cooling and power management utilizing renewable energy. In: Vasant, P., Weber, GW., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_67 21. Liza, M.A., Suny, A., Shahjahan, R.M.B., Reza, A.W., Arefin, M.S.: Minimizing E-waste through improved virtualization. In: Vasant, P., Weber, GW., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol 569. Springer, Cham (2023). https://doi.org/10.1007/ 978-3-031-19958-5_97 22. 
Das, K., Saha, S., Chowdhury, S., Reza, A.W., Paul, S., Arefin, M.S.: A sustainable E-waste management system and recycling trade for Bangladesh in green IT. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_33
23. Rahman, M.A., Asif, S., Hossain, M.S., Alam, T., Reza, A.W., Arefin, M.S.: A sustainable approach to reduce power consumption and harmful effects of cellular base stations. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_66
24. Ahsan, M., Yousuf, M., Rahman, M., Proma, F.I., Reza, A.W., Arefin, M.S.: Designing a sustainable E-waste management framework for Bangladesh. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_104
25. Mukto, M.M., Al Mahmud, M.M., Ahmed, M.A., Haque, I., Reza, A.W., Arefin, M.S.: A sustainable approach between satellite and traditional broadband transmission technologies based on green IT. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_26
26. Meharaj-Ul-Mahmmud, Laskar, M.S., Arafin, M., Molla, M.S., Reza, A.W., Arefin, M.S.: Improved virtualization to reduce e-waste in green computing. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_35
27. Banik, P., Rahat, M.S.A., Rafe, M.A.H., Reza, A.W., Arefin, M.S.: Developing an energy cost calculator for solar. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_75
28. Ahmed, F., Basak, B., Chakraborty, S., Karmokar, T., Reza, A.W., Arefin, M.S.: Sustainable and profitable IT infrastructure of Bangladesh using green IT. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_18
29. Ananna, S.S., Supty, N.S., Shorna, I.J., Reza, A.W., Arefin, M.S.: A policy framework for improving E-waste management in Bangladesh. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_95
30. Shang, L., Peh, L.S., Jha, N.K.: Dynamic voltage scaling with links for power optimization of interconnection networks. In: The 9th International Symposium on High-Performance Computer Architecture (HPCA 2003), pp. 91–102, Anaheim, California, USA (2003)
31. Buyya, R., Beloglazov, A., Jemal, A.: Energy efficient management of data center resources for cloud computing: a vision, architectural elements, and open challenges. In: Proceedings of the International Conference on Parallel and Distributed Processing Techniques and Applications (2010)
32. Chen, F., Schneider, J., Yang, Y., Grundy, J., He, Q.: An energy consumption model and analysis tool for Cloud computing environments. In: 1st International Workshop on Green and Sustainable Software (GREENS), pp. 45–50
33. Yamini, B., Selvi, D.V.: Cloud virtualization: a potential way to reduce global warming. In: Recent Advances in Space Technology Services and Climate Change (RSTSCC), pp. 55–57 (2010)
34. Zhang, Z., Fu, S.: Characterizing power and energy usage in cloud computing systems. In: 2011 IEEE Third International Conference on Cloud Computing Technology and Science (CloudCom), pp. 146–153 (2011)
35. Li, X., Li, Y., Liu, T., Qiu, J., Wang, F.: The method and tool of cost analysis for cloud computing. In: The IEEE International Conference on Cloud Computing (CLOUD 2009), pp. 93–100, Bangalore, India (2009)
36. Orgerie, A.C., Lefevre, L., Gelas, J.P.: Demystifying energy consumption in grids and clouds. In: International Green Computing Conference, pp. 335–342 (2010)
37. Sarji, I., Ghali, C., Chehab, A., Kayssi, A.: CloudESE: energy efficiency model for cloud computing environments. In: International Conference on Energy Aware Computing (ICEAC), pp. 1–6 (2011)
38. Pelley, S., Meisner, D., Wenisch, T.F., VanGilder, J.W.: Understanding and abstracting total data center power. In: WEED: Workshop on Energy-Efficient Design
39. Meade, R.L., Diffenderfer, R.: Foundations of Electronics: Circuits & Devices. Clifton Park, New York (2003). ISBN: 0-7668-4026-3
40. Zimmer, P.A.Z., Brodersen, R.W.: Minimizing Power Consumption in CMOS Circuits. University of California at Berkeley. Technical Report (1995)
41. Tozer, R., Kurkjian, C., Salim, M.: Air management metrics in data centers. In: ASHRAE (2009)
42. VanGilder, J.W., Shrivastava, S.K.: Capture index: an airflow-based rack cooling performance metric. ASHRAE Trans. 113(1) (2007)
43. Çengel, Y.A.: Heat Transfer: A Practical Approach, 2nd edn. McGraw Hill Professional (2003)
44. Moore, J., Chase, J., Ranganathan, P., Sharma, R.: Making scheduling “Cool”: temperature-aware workload placement in data centers. In: Proceedings of the 2005 USENIX Annual Technical Conference, Anaheim, CA, USA (2005)
45. Ehsan, P., Massoud, P.: Minimizing data center cooling and server power costs. In: Proceedings of the 14th ACM/IEEE International Symposium on Low Power Electronics and Design (ISLPED), pp. 145–150 (2009)
46. Rasmussen, N.: Electrical efficiency modeling for data centers. APC by Schneider Electric, Tech. Rep. #113 (2007)
Reducing Energy Usage and Cost for Environmentally Conscious Cooling Infrastructure Md. Jakir Hossain1 , Fardin Rahman Akash1 , Sabbir Ahmed1 , Mohammad Rifat Sarker1 , Ahmed Wasif Reza1(B) , and Mohammad Shamsul Arefin2,3(B) 1 Department of Computer Science and Engineering, East West University, Dhaka, Bangladesh
[email protected]
2 Department of Computer Science and Engineering, Daffodil International University,
Dhaka 1341, Bangladesh [email protected] 3 Department of Computer Science and Engineering, Chittagong University of Engineering and Technology, Chattogram 4349, Bangladesh
Abstract. We live in a technological world where everything is powered by electricity. Most of this energy is derived from fossil fuels, a nonrenewable resource. According to the International Energy Agency, only 28% of the world's energy comes from renewable sources. Residential energy use accounts for around 21% of overall energy consumption in the United States, and our dwellings and other gadgets consume most of our electricity. If we had passive cooling solutions and more efficient devices, we could reduce our reliance on fossil fuels and our power use. We therefore focused on designing a cost-effective framework for passive cooling and automation. This study proposes a cost-effective framework combining passive cooling solutions with renewable energy sources to create a greener atmosphere. Our framework utilizes automation to optimize energy consumption and reduce waste. The result is a sustainable living environment that reduces energy consumption and promotes a healthier planet. Keywords: Renewable · Solar · Turbine · Passive cooling · Cost-effective
1 Introduction Climate change, a significant problem today, has significantly influenced air conditioning-related energy use, as has population increase and economic development. According to recent research, if global temperatures rise as projected, global household energy consumption for cooling will be up to 34% higher in 2050 and up to 61% higher in 2100, except in the Mediterranean region, where a much higher number may be expected [1]. Therefore, we are incorporating passive cooling solutions for homes into our model. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Vasant et al. (Eds.): ICO 2023, LNNS 852, pp. 262–271, 2023. https://doi.org/10.1007/978-3-031-50330-6_26
Most of the electricity in Bangladesh is generated from coal [2]. Our country's educational institutions, such as schools, colleges, and universities, need a lot of electricity, and most of it is consumed to cool their rooms, which is very expensive. So, we are proposing a new model to reduce electricity costs. As energy is being used worldwide and most electricity comes from nonrenewable sources, we need to invest more in efficient systems that require less energy and are more effective than existing systems. Homes, educational institutions, and universities are perfect places to start implementing intelligent procedures for lowering our power needs and for a greener future. The work in [16–26] shows good contributions and guidelines. In our research, we mainly worked on relevant topics correlating with our proposed models and organized our findings around them. Some essential questions we considered when researching the issues were: How can the total energy outcome of the solar panel be calculated? How much does the temperature differ using the bucket fan model? How can the total energy output of the wind turbine be calculated? Our paper is organized into sections: we review existing methodologies in Sect. 2, define our materials and method in Sect. 3, discuss our results in Sect. 4, and conclude in Sect. 5.
2 Literature Review

Our model design must be cheap and easy to implement, as the primary goal of our project is to minimize carbon emissions, keep energy usage low, and generate clean energy with a single model. While building our models, we also considered environmentally friendly and more efficient methods. Following an in-depth examination of the research issues, we have divided the primary objectives of our research into several sub-goals: a passive cooling solution model that cools the air using cold water, and a new design of a solar panel and wind turbine combination to generate power. We determined that we would concentrate our whole inquiry on the areas in Table 1.

Table 1. Review of literature

Sl. No. | Area of study | Drawbacks
1 | Thermodynamic power generation [3] | This paper doesn't show how to use and manage a thermodynamic generator for a room
2 | Covering solar energy to convert light energy from the sun into electrical energy (charge emission) [4] | They did the study by only considering the climate of India
3 | Energy usage reduction and cost savings [5] | It is not a total representation of our model
3 Methodology

In our paper, we have included a new model that combines a passive cooling solution that cools the air using cold water, a new design of a solar panel and wind turbine combination to generate power, and a classroom automation system. This proposed system aims to minimize the dependency on nonrenewable energy by reducing the energy usage of the existing system. Our passive cooling system can be used with existing cooling systems to cool a room. Our automation system lowers classroom energy usage by controlling electronic appliances, and our energy-producing system can be used on rooftops and in cities.

3.1 Proposed Design

We consider parameters such as outside temperature and humidity, the cooling requirements of the equipment, the desired temperature inside the space, the efficiency of the cooling system, the heat absorption capacity of the water used in the cooling system, and the energy consumption and waste heat generation of the power generation system (e.g., wind turbine, solar panels, thermoelectric generator). These parameters may vary depending on the specific design and implementation of the cooling and power generation systems, as well as the environmental conditions and user requirements.

Passive cooling solution. To reduce energy consumption and protect the environment while maintaining appropriate levels of comfort, stagnant cooling circumstances necessitate using passive natural solutions. Solar control systems, dispersion of surplus heat into low-temperature natural heat sinks (air, water, and soil), and using a building's thermal mass to absorb excess heat are all natural and passive solutions. Such cooling systems have a high cooling capacity and can cut cooling-induced energy consumption considerably [6]. Although such facilities can serve as indoor pre-conditioning systems, the weather strongly impacts their effectiveness, and they may be insufficient on their own. We propose a model that employs natural and passive cooling methods to cool semi-enclosed spaces as an option for indoor comfort. Figure 1 shows the visual implementation of the passive cooling system, which we designed in AutoCAD. We have made an AutoCAD model for a passive cooling system, electricity production with a turbine windmill and solar panel combination, and a thermodynamic power generator [7]. Our renewable energy sources model must be small, mobile, and space efficient, as we are developing a model for homes and universities, typically in dense areas. Our model combines a small air turbine and solar panels that can produce energy in densely populated areas and cities. It also includes a thermoelectric generator to generate electricity from hot air.

Figure 2 shows the new cooling system model of the data center. When the turbine fan pulls the wind from outside and hot air enters, the water absorbs the heat of the hot air, so cold air enters the room. No matter how hot the air is in Dhaka City, the water will absorb the heat and let cold air in. The cold air entering the data center/classroom keeps the equipment cool, while the heat generated inside exits from the other side. Below the data center, there is also a water-cooling system.
Fig. 1. Visual implementation of passive cooling system
When hot air comes out of the other side through a pipe, it goes into a heat converter. The heat converter (thermodynamic power generator) acts like a one-of-a-kind thermal power plant, converting the heat into electricity. As a result, the electricity generated here can be re-supplied to power the data center.
Fig. 2. Architecture of cooling system model
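To make the heat-absorption reasoning concrete, the following is a minimal sensible-heat sketch; the airflow rate, temperatures, and water mass are illustrative assumptions, not measurements from the proposed system.

# Hedged sketch: sensible heat removed from incoming hot air by the water
# loop, Q = m_dot * c_p * dT.  All numeric inputs are illustrative assumptions.

CP_AIR = 1005.0     # specific heat of air, J/(kg*K)
CP_WATER = 4186.0   # specific heat of water, J/(kg*K)

def heat_removed_from_air(m_dot_air, t_in, t_out):
    """Heat transferred out of the air stream (W), for airflow in kg/s."""
    return m_dot_air * CP_AIR * (t_in - t_out)

def water_temperature_rise(q_watts, water_mass_kg, seconds):
    """Temperature rise of a fixed water mass absorbing q_watts for a period."""
    return q_watts * seconds / (water_mass_kg * CP_WATER)

# Example: 0.5 kg/s of 38 degC outdoor air cooled to 26 degC.
q = heat_removed_from_air(0.5, 38.0, 26.0)
print(f"Heat absorbed by the water: {q:.0f} W")
print(f"Rise of 1000 kg of water over 1 h: {water_temperature_rise(q, 1000, 3600):.1f} K")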
In this model, we use a new electricity production model: a blade turbine with a solar panel, which together produce electricity from wind and sunshine. The model shown in Fig. 3 is created with a vertical wind turbine topped by a solar panel. It produces solar and wind energy simultaneously [8] and can be used in cities and densely populated areas, as it is small and can work with wind from any direction. We used a compact vertical helix wind power turbine generator and put a solar panel on top of it [9]. In our design, the wind turbine generates around 400 W monthly. The following formula can determine the overall cost of wind power: This model uses the thermal heat-sharing property of air and water to share heat. The model is very primitive and therefore straightforward to implement; it requires very little knowledge and is very cheap [10]. This model does not use energy,
so it has zero energy dependence on nonrenewable sources, making it carbon-neutral and green-technology efficient. This model harvests air and solar energy simultaneously with minimal loss. It can generate electricity day and night but performs best on sunny, windy days. Mobility: this model is smaller than a typical wind turbine with a solar panel on top. Note the distinction between energy and power: energy refers to the total amount of work that can be done by a system, while power refers to the rate at which energy is produced or consumed; for example, a wind turbine with a power output of 5 MW delivers 5 MJ of energy every second. We can use this model anywhere with access to sunlight and air, so it is highly mobile. Zero carbon emission and clean energy: this model does not consume grid power, so it has zero energy dependence on nonrenewable sources, making it carbon neutral. As a renewable source, it has zero waste, producing 100% clean energy.
Fig. 3. Model of renewable energy sources
4 Results Analysis and Discussion

Table 2 shows the turbine specification in our model, and Table 3 shows the specification of the solar panel we use with the wind turbine [11, 12]. In this part, we discuss the implementation and the calculated results. We could not implement this model at our university because of limited resources. However, we performed the calculations from the perspective of Dhaka city and found the best result using this cooling system. This model reduces the cost of the electricity consumed.
Table 2. Specification of turbine

Rated power: 1 kW
Start wind speed: 6.27 mph
Swept area: 3.71 m²
Energy produced (electricity): 400 W, 12 V

Table 3. Specification of solar panel

Height: 42.4 in.
Width: 20 in.
Rated power: 100 W
Energy produced (electricity): 700–800 Wh (depending on sun availability)
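Given the swept area in Table 2, the following sketch evaluates the standard wind-power relation P = 0.5·ρ·A·Cp·v³ as a rough check on the turbine output; the air density, the power coefficient Cp, and the sample wind speeds are assumptions, and this is ordinary turbine physics rather than the paper's own cost formula (which is elided in the text above).

# Hedged sketch of the standard wind-power relation P = 0.5 * rho * A * Cp * v^3.
# rho, Cp, and the sample wind speeds are illustrative assumptions.

RHO_AIR = 1.225        # air density at sea level, kg/m^3
SWEPT_AREA = 3.71      # m^2, from Table 2
CP = 0.35              # assumed power coefficient of the helix turbine

def wind_power_watts(v_mps):
    """Electrical power extracted from wind at speed v (m/s)."""
    return 0.5 * RHO_AIR * SWEPT_AREA * CP * v_mps ** 3

MPH_TO_MPS = 0.44704
for v_mph in (6.27, 6.9, 17.9):   # start speed, Dhaka average, a strong wind
    v = v_mph * MPH_TO_MPS
    print(f"{v_mph:5.2f} mph -> {wind_power_watts(v):7.1f} W")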
Model of renewable energy sources data calculation: We took daily sun coverage and wind speed data for Dhaka, Bangladesh, to simulate average solar and wind speeds near East West University [13].

Average wind speed in Dhaka = (sum of average wind data per month) / 12   (1)

Using Eq. (1), we found that the average wind speed in Dhaka is 6.9 mph.

Average sunlight (sunrise to sunset) = (sum of average sunlight hours per month) / 12   (2)

Using Eq. (2), the average sunlight in Dhaka is 12.15 h.

4.1 Solar Power Calculation

We have a total of 20 solar panels in our design. Each solar panel has a 100-W capacity [14]. The total solar power can be computed with the following formulas:

Per-day solar power generated = solar panel's power × sunlight of an average day × 80%   (3)

Using Eq. (3), we found that the solar power is 400 W.

Total solar power generated per day = total solar panels × per-day power generated   (4)

If we use 20 solar panels, the total power generated per day is 400 W.

Total solar power generated per month = total solar power generated per day × 30   (5)

Using Eq. (5), we got the total solar energy generated per month: 240,000 W.
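For reference, here is a literal evaluation of Eqs. (3)–(5) in code with the stated inputs (20 panels, a 100-W rating, 12.15 h of sunlight, and the 80% factor); any divergence from the figures quoted in the text would reflect rounding or derating assumptions not stated in the paper.

# Hedged sketch evaluating Eqs. (3)-(5) literally with the paper's stated inputs.

PANEL_POWER_W = 100.0      # per-panel rating, Table 3
NUM_PANELS = 20
SUNLIGHT_HOURS = 12.15     # from Eq. (2)
DERATING = 0.80            # the 80% factor in Eq. (3)

per_day_per_panel = PANEL_POWER_W * SUNLIGHT_HOURS * DERATING   # Eq. (3), Wh
per_day_total = NUM_PANELS * per_day_per_panel                  # Eq. (4)
per_month_total = per_day_total * 30                            # Eq. (5)

print(f"Per panel per day : {per_day_per_panel:,.0f} Wh")
print(f"All panels per day: {per_day_total:,.0f} Wh")
print(f"Per month         : {per_month_total:,.0f} Wh")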
The 240,000 W of solar power generated per month can be a cost-effective solution for reducing electricity costs in Dhaka city [15]. This amount of solar power can provide a significant portion of the city's energy needs, reducing the dependence on traditional fossil fuels and potentially leading to lower energy bills for consumers. To further reduce electricity costs, the city could consider implementing energy efficiency measures such as using energy-efficient appliances, improving building insulation, and encouraging public transportation. Additionally, the city could explore other renewable energy sources, such as wind and hydropower, to diversify its energy portfolio and further reduce costs.

Given the total solar power generated per month in Dhaka city, we can estimate the potential cost savings as follows:

1. Assuming 30 days a month, the total energy generated would be 7,200,000 Wh.
2. For households, at 0.053 U.S. dollars per kWh, the total cost of electricity from traditional sources would be 38,064 USD.
3. For businesses, at 0.084 U.S. dollars per kWh, the total cost of electricity from traditional sources would be 60,320 USD.

So, the 240,000 W of solar power generated per month would result in savings of 38,064 USD for households and 60,320 USD for businesses, based on these calculations. To convert these savings into Bangladeshi Taka, use the exchange rate of 1 USD = 106.40 Taka (according to the exchange rate of February 2023):

1. For households: 4,038,336.64 Taka
2. For businesses: 6,428,064.00 Taka

Please note that these calculations are based on assumptions and rough estimates, and the actual savings could differ based on several factors, including the actual cost of implementing and maintaining the solar power system, changes in the price of electricity over time, and the efficiency of the solar power system itself.

4.2 Future Scope

For future work, it is crucial to conduct a feasibility study to determine whether the proposed solutions are practical and economically viable for the institutions, develop a pilot project, conduct a cost-benefit analysis, involve stakeholders, develop a monitoring and evaluation framework, conduct research on the bucket fan model, explore other eco-friendly solutions, and explore ways to reduce carbon emissions and increase sustainability in institutions. The feasibility study will help identify potential challenges, costs, and benefits associated with the proposed solutions, while the pilot project will test their feasibility and effectiveness. The cost-benefit analysis will help identify the return on investment (ROI) and the payback period. Finally, research on the bucket fan model will help to determine its design, efficiency, and cost-effectiveness.
5 Conclusion

In this study, we attempted to find the best solution for every institution, making them green and eco-friendly. We proposed a model that will make the institution a better place. Our main goal was to reduce carbon emissions, digitalize the classroom by making it paperless for all work, and develop an intelligent control system to reduce electricity costs. We also proposed another model, called the bucket fan model. Based on the exchange rate of 1 USD = 106.40 Taka, the estimated cost savings for households in Dhaka city would be 4,052,736 Taka (38,064 USD × 106.40 Taka/USD) per month, and for businesses 6,423,488 Taka (60,320 USD × 106.40 Taka/USD) per month. These savings could potentially have a significant impact on the budgets of households and businesses in Dhaka city, leading to increased disposable income and potentially contributing to economic growth in the region. The proposed design is a new cooling system model for data centers that is eco-friendly and environmentally sound, which our country needs. The limitation of our project is that we could not evaluate it practically because it is a new model. If this project is successfully applied, it will bring good results for any institution.

Acknowledgments. This paper researched passive cooling solutions, electricity cost reduction, and classroom automation. We were motivated after attending a seminar by Mr. Palash Roy, Managing Director of Bright-I Systems Ltd. We drew ideas from analyzing many research papers and online materials.
References

1. Climate and Average Weather Year-Round in Dhaka, Bangladesh [Online]. Available: https://weatherspark.com/y/111858/Average-Weather-in-Dhaka-Bangladesh-Year-Round
2. Energy Efficiency and Renewable Energy [Online]. Available: https://rpsc.energy.gov/
3. Ismail, B.I., Ahmed, W.H.: Thermoelectric power generation using waste-heat energy as an alternative green technology. Received: 1 Aug 2008; Accepted: 20 Nov 2008; Revised: 24 Nov 2008
4. Purohit, D., Singh, G., Mamodiya, U.: A review paper on solar energy systems. Int. J. Eng. Res. Gen. Sci. 5(5) (n.d.). www.ijergs.org
5. Roslizar, A., Alghoul, M.A., Bakhtyar, B., Asim, N., Sopian, K.: Annual energy usage reduction and cost savings of a school: end-use energy analysis. Sci. World J. (2014). https://doi.org/10.1155/2014/310539
6. Ghaffarianhoseini, A., Ghaffarianhoseini, A., Tookey, J., Omrany, H., Fleury, A., Naismith, N., Ghaffarianhoseini, M.: The essence of smart homes: application of intelligent technologies towards a brighter urban future, pp. 79–121 (2016)
7. Brush, A.J.B., Lee, B., Mahajan, R., Agarwal, S., Saroiu, S., Dixon, C.: Home automation in the wild: challenges and opportunities. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2115–2124 (2011)
8. Global CO2 Emissions from Energy Combustion and Industrial Processes [Online]. Available: www.iea.org
9. Kumar, M., Arora, A., Banchhor, R., Chandra, H.: Energy and exergy analysis of hybridization of solar, intercooled reheated gas turbine trigeneration cycle. World J. Eng. ahead-of-p(aheadof-print). https://doi.org/10.1108/WJE-09-2021-0567/FULL/PDF 10. Kudelin, A., Kutcherov, V.: Wind ENERGY in Russia: the current state and development trends. Energy Strategy Rev. 34 (2021). https://doi.org/10.1016/J.ESR.2021.100627 11. Weatherspark.com. Dhaka Climate, Weather by Month, Average Temperature (Bangladesh)— WeatherSpark (n.d.). Retrieved 24 May 2022, from https://weatherspark.com/y/111858/Ave rage-Weather-in-Dhaka-Bangladesh-Year-Round#:~:text=The%20windier%20part%20of% 20the,September%206%20to%20March%2029 12. Amazon.com: LOYAL HEART DIY Wind Turbine Generator, 12 V 400 W Portable Vertical Helix Wind Power Turbine Generator Kit with Charge Controller—White: Patio, Lawn & Garden 13. What is Renewable Energy? https://www.un.org/en/climatechange/what-is-renewable-ene rgy?gclid=Cj0KCQiA0oagBhDHARIsAI-BbgfLlhSUlivz48z1TwWP5xtiB4lHyr3zJXSRiK2M3OJfiuazQG7yOUaAqviEALw_wcB 14. Solar Energy Basics. https://www.nrel.gov/research/re-solar.html 15. Electricity Bill per Unit in Bangladesh in 2023. https://bdesheba.com/electricity-bill-perunit-in-bangladesh/#:~:text=Electricity%20Bill%20Per%20Unit%20Price%202023,-Now% 20come%20to&text=An%20average%20increase%20of%205,%25%20from%20January% 2013%2C%202023 16. Yeasmin, S., Afrin, N., Saif, K., Reza, A.W., Arefin, M.S.: Towards building a sustainable system of data center cooling and power management utilizing renewable energy. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_67 17. Liza, M.A., Suny, A., Shahjahan, R.M.B., Reza, A.W., Arefin, M.S.: Minimizing e-waste through improved virtualization. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/ 978-3-031-19958-5_97 18. Das, K., Saha, S., Chowdhury, S., Reza, A.W., Paul, S., Arefin, M.S.: A sustainable e-waste management system and recycling trade for Bangladesh in green IT. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_33 19. Rahman, M.A., Asif, S., Hossain, M.S., Alam, T., Reza, A.W., Arefin, M.S.: A sustainable approach to reduce power consumption and harmful effects of cellular base stations. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_66 20. Ahsan, M., Yousuf, M., Rahman, M., Proma, F.I., Reza, A.W., Arefin, M.S.: Designing a sustainable e-waste management framework for Bangladesh. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_104 21. 
Mukto, M.M., Al Mahmud, M.M., Ahmed, M.A., Haque, I., Reza, A.W., Arefin, M.S.: A sustainable approach between satellite and traditional broadband transmission technologies based on green IT. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_26
22. Meharaj-Ul-Mahmmud, Laskar, M.S., Arafin, M., Molla, M.S., Reza, A.W., Arefin, M.S.: Improved virtualization to reduce e-waste in green computing. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_35 23. Banik, P., Rahat, M.S.A., Rafe, M.A.H., Reza, A.W., Arefin, M.S.: Developing an energy cost calculator for solar. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-03119958-5_75 24. Ahmed, F., Basak, B., Chakraborty, S., Karmokar, T., Reza, A.W., Arefin, M.S.: Sustainable and profitable IT infrastructure of bangladesh using green IT. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_18 25. Ananna, S.S., Supty, N.S., Shorna, I.J., Reza, A.W., Arefin, M.S.: A policy framework for improving e-waste management in Bangladesh. In: Vasant, P., Weber, G.W., MarmolejoSaucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi. org/10.1007/978-3-031-19958-5_95
Investigation of the Stimulating Effect of the EHF Range Electromagnetic Field on the Sowing Qualities of Vegetable Seeds P. Ishkin1
, D. Klyuev1 , O. Osipov2 , Yu. Sokolova2 Yu. Daus3(B) , and V. Panchenko4,5
, A. Kuzmenko1
,
1 Samara State Agrarian University, Uchebnaya St., 2, 446442 Ust-Kinelskiy, Russia 2 Povolzhskiy State University of Telecommunications and Informatics, L. Tolstoy Str., 23,
443010 Samara, Russia 3 Kuban State Agrarian University, Kalinina St. 13, 350044 Krasnodar, Russia
[email protected]
4 Russian University of Transport, Obraztsova St. 9, 127994 Moscow, Russia 5 Federal Scientific Agroengineering Center VIM, 1st Institutsky Passage 5, 109428 Moscow,
Russia
Abstract. The paper presents the results of a study of the stimulating effect of the electromagnetic field of the EHF (Extremely High Frequency) range on the sowing qualities of vegetable seeds. Vegetable seeds (a tomato of the "Wonder of the Market" variety, a pepper of the "Tenderness" variety, and an eggplant of the "Black Beauty" variety) were subjected to an EHF electromagnetic field in three versions: radiation of dry seeds 7 days before sowing, radiation of dry seeds on the day of sowing, and radiation of wet seeds on the day of sowing. In the course of the study, such characteristics of the crops as germination, dynamics of germination, and plant height were assessed. The studies showed that the dynamics of tomato seedlings exhibited the best efficiency of the method for the second option, i.e., radiation of dry seeds on the day of sowing. Eggplant responded better to the treatment of soaked seeds, and for pepper, treatment of soaked seeds likewise improved germination. Analysis of the dynamics of tomato growth showed the greatest stimulation effect when treating soaked seeds, and eggplant also showed a better effect with soaked seeds; for pepper, however, treating soaked seeds had a depressing effect on the growth rate, and this option was significantly inferior to all other options. The investigated stimulating effects of the electromagnetic field of the EHF range can be widely used in the accelerated cultivation of seedlings of vegetable crops within a short agricultural period. Accelerating the growth of seedlings reduces their cost and increases profitability. Keywords: EHF range · Electromagnetic field · Sowing qualities
1 Introduction With the growth of the world’s population, methods of increasing the productivity of agricultural crops are currently relevant [1–3]. These parameters include seed germination, because when sowing, part of the seeds die at the initial stage, and therefore, on © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Vasant et al. (Eds.): ICO 2023, LNNS 852, pp. 272–281, 2023. https://doi.org/10.1007/978-3-031-50330-6_27
the same sown area, you can get different yields, and the growth and development of vegetable crops matter because, with an increased rate of growth and development, a crop can be obtained in the shortest possible time [4]. Starting in the 1970s, the effects of electromagnetic fields (EMF) of the short-millimeter (EHF, 40–60 GHz) and microwave (MW, 1–3 GHz) ranges on different biological systems began to attract the attention of experts [5]. It was found that under certain conditions there are marked stimulating effects, including improved sowing characteristics of the seeds of different crops [5]. This is because short-millimeter (EHF) EMF acting on seeds activates biochemical processes and contributes to a more rapid absorption of nutrients in the treated seeds, which is most clearly manifested for old and non-certified seeds [6]. It is important to note that these effects result from exposure to EMF of very low intensity (less than 10 mW/cm²), have a frequency dependence of a resonant nature, and are characterized by intensity thresholds at which the effect begins to appear in a jump-like manner [6].
2 Materials and Methods

The aim of the research is to study the effect of treating vegetable seeds with EHF radiation on the growth and development of vegetable crops. In connection with this goal, the following tasks were solved [7–9]: to carry out phenological observations in closed ground, and to record biometric indicators (dynamics of seed germination and dynamics of linear growth of seedlings). The research was carried out in the nursery of horticultural crops of the Department of Horticulture, Botany and Plant Physiology, Samara State Agrarian University. Seeds of the following vegetable crops were selected for research: tomato of the "Miracle of the Market" variety, pepper of the "Tenderness" variety, and eggplant of the "Black Beauty" variety (Fig. 1). The seeds were divided into two groups: control (without radiation) and experimental (radiation with EHF waves (42.25 GHz) for 30 min). Radiation was carried out on a setup that is an EHF oscillator based on a Gunn diode with a horn section of 15 × 15 mm. In the irradiated volume of the material, the installation created an electromagnetic field of the EHF range with the following parameters: wavelength 7.1 mm (42.25 GHz); energy flux density 10 mW/cm² [4]. The studies assessed the following characteristics of crops: germination, %; dynamics of germination (days after sowing); plant height, cm. Germination was assessed by the percentage of germinated seeds out of the number of sown seeds [10–14]. The dynamics of germination was determined by daily counting of the number (percentage) of germinated seeds in the period from the appearance of the first shoots to the termination of the increase in the number of seedlings [7, 8, 15].
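As a small illustration of how these two characteristics can be computed from daily seedling counts (the counts below are invented, not the experimental data):

# Hedged sketch: germination percentage and germination dynamics from daily
# counts of germinated seeds.  The sample counts are invented for illustration.

SOWN_SEEDS = 80  # seeds sown per variant and replication, as in the experiment

def germination_percent(germinated):
    """Final germination: share of sown seeds that germinated."""
    return 100.0 * germinated / SOWN_SEEDS

def germination_dynamics(daily_counts):
    """Cumulative germinated seeds per day, from daily new-seedling counts."""
    total, curve = 0, []
    for new_today in daily_counts:
        total += new_today
        curve.append(total)
    return curve

daily_new = [0, 0, 3, 10, 22, 18, 9, 2]           # hypothetical daily counts
curve = germination_dynamics(daily_new)
print("Cumulative seedlings:", curve)
print(f"Final germination: {germination_percent(curve[-1]):.1f}%")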
The seeds were processed with low-intensity electromagnetic waves of the EHF range. The waves were modulated; linear frequency modulation with a depth of 50% was used. Pre-sowing treatment comprised 4 variants in 3 replications:

1. Control without radiation.
2. Radiation 7 days before sowing dry seeds.
3. Radiation on the day of sowing dry seeds.
4. Radiation on the day of sowing wet seeds after a 3-h soak.
Pre-sowing treatment of seeds and sowing of the studied crops (tomatoes of the "Miracle of the Market" variety, pepper of the "Tenderness" variety, eggplant of the "Black Beauty" variety) with an electromagnetic field of the EHF range according to the variants of the experiment was carried out on March 20. According to variant 1, where dry seeds were treated 7 days before sowing, the treatment was carried out on March 13. For each treatment variant and each replication, 80 seeds of each culture were selected and sown; this distribution was maintained across variants for all studied crops. Stages of growing the plants:

1. Treatment of seeds with a modified EHF irradiator for 30 min.
2. Soaking seeds in phytosporin.
3. Sowing seeds in boxes (size 40–50 cm) (Fig. 1).
4. Picking into separate 0.5-l vessels in the phase of 1–2 true leaves.
Fig. 1. Investigations of the effect of EHF-Radiation on the dynamics of entry and growth of vegetable crops
Accounts and observations:
The counts and observations were carried out in accordance with the method of field experiments in vegetable growing compiled by the Doctor of Agricultural Sciences, Professor, Academician of the Russian Agricultural Academy Litvinov [15].

1. Phenological observations are carried out on all experimental plots daily, usually in the morning. The beginning of each phase is marked when it is observed in 10% of plants, and mass onset in 75% of plants.
2. Biometric indicators (accounting for linear dimensions and accumulation of biomass) are determined from the seedling phase every 2 days.

The results of all measurements were recorded in the observation diary, after which the data were processed and, using Microsoft Excel, graphical dependencies of changes in the biometric parameters of plants were built: the dynamics of seedlings and the dynamics of growth.
3 Results of Investigations of the Dynamics of Germination of Vegetable Crops

Analysis of the graph of the change in germination of tomato seeds (Fig. 2) allows us to conclude that the best effect was obtained with variant-2, i.e., as a result of radiation of dry seeds on the day of sowing. In this variant, all 80 sown seeds emerged by day 9. For the rest of the variants, the final germination did not exceed 64 seeds.
Fig. 2. Dynamics of tomato seed germination (the number of germinated seeds on the specified date)
Analysis of the graph for changes in the germination of eggplant seeds (Fig. 3) allows us to conclude that the best effect was obtained by variant-3, i.e. as a result of radiation
on the day of sowing wet seeds. In this variant, the largest number of seeds emerged—73 sown seeds emerged on the 14th day. Variant-2 is slightly inferior to the best variant, where 68 seeds grew on the 14th day. For the rest of the variants, the final germination did not exceed 49.
Fig. 3. Dynamics of germination of eggplant seeds
Fig. 4. Dynamics of germination of pepper seeds
Analysis of the graph for the change in the germination of pepper seeds (Fig. 4) allows us to conclude that the best effect was also obtained by variant-3, i.e. as a result
of radiation on the day of sowing wet seeds. In this variant, the largest number of seeds emerged—44 sown seeds emerged on the 18th day. Variant-2 is slightly inferior to the best variant, where 34 seeds emerged on the 18th day. For the rest of the variants, the final germination did not exceed 19.
4 Research Results of the Growth Dynamics of Vegetable Crop Seedlings

After the appearance of the first shoots, the dynamics of the linear growth of seedlings was measured every two days to assess the effect of the treatment method on this indicator. Analysis of the graphical dependencies of the growth dynamics of tomato seedlings (Fig. 5) allows us to conclude that the plants develop best according to variant-3, i.e., as a result of radiation of wet seeds on the day of sowing. In this variant, the average plant height exceeded all other variants. It is also worth noting that the plants of variant-3 reached the required height of 150 mm a week earlier and were ready for sale as seedlings. The difference is clearly visible in the photograph (Fig. 6).
Fig. 5. Growth dynamics of tomato seedlings
Analysis of the graphical dependencies of the dynamics of growth of eggplant seedlings (Fig. 7) allows us to conclude that the plants develop best of all after pre-sowing treatment with an electromagnetic field of the EHF range according to variant-3, i.e., as a result of radiation of wet seeds on the day of sowing. In this variant, the average plant height exceeded all other variants by almost two times. It is also worth noting that the plants of variant-3 reached the required height of 150 mm 2 weeks earlier and were ready for sale as seedlings. The difference is also clearly visible in the photograph (Fig. 8).
Fig. 6. Tomato seedlings 34 days after sowing (April 24)
Fig. 7. Growth dynamics of eggplant seedlings
Analysis of the graphical dependencies of the growth dynamics of pepper seedlings (Fig. 9) allows us to conclude that after pre-sowing treatment with an electromagnetic field of the EHF range according to variant-3, i.e., radiation of wet seeds on the day of sowing, the seedlings slowed their development after active germination and lagged behind the other variants in growth dynamics. In this variant, the average plant height was lower than in the other variants and did not reach a height of 150 mm within a month, so the plants were not ready for sale. The rest of the variants, in terms of growth dynamics,
Fig. 8. Eggplant seedlings 61 days after sowing (May 21)
showed almost the same results and almost simultaneously reached the required height of 150 mm and were ready for sale as seedlings.
Fig. 9. Growth dynamics of pepper seedlings
Thus, as a result of the studies carried out, the following conclusion can be drawn: the seeds of tomatoes and eggplants should be radiated in soaked form immediately before sowing, while the seeds of pepper should be radiated only in dry form, also immediately before sowing.
5 Conclusion

Based on the results of the experimental studies, the following conclusions can be drawn:

1. The dynamics of tomato seedlings showed the best efficiency of the method for the second variant, i.e., radiation of dry seeds before sowing. Eggplant responded better to the treatment of soaked seeds. For pepper, treatment of soaked seeds also gave better germination.
2. Analysis of the dynamics of tomato growth showed the greatest stimulation effect when treating soaked seeds. Eggplant also showed a better effect when treated as soaked seeds. But for pepper, the treatment of soaked seeds had a depressing effect on the growth rate, and this variant was significantly inferior to all other variants.

In this regard, it is recommended to radiate the seeds of tomatoes and eggplants soaked immediately before sowing, and the seeds of pepper only dry and also immediately before sowing.
References 1. Tokarev, K., Lebed, N., Prokofiev, P., Volobuev S., Yudaev, I., Daus, Y., Panchenko, V.: Monitoring and Intelligent Management of Agrophytocenosis Productivity Based on Deep Neural Network Algorithms. Lecture Notes in Networks and Systems, 569, 686–694 (2023) 2. Yudaev, I., Eviev, V., Sumyanova, E., Romanyuk N., Daus, Y., Panchenko, V.: Methodology and Modeling of the Application of Electrophysical Methods for Locust Pest Control. Lecture Notes in Networks and Systems, 569, 781–788 (2023) 3. Petrukhin, V., Feklistov, A., Yudaev, I., Prokofiev P., Ivushkin D., Daus, Y., Panchenko, V.: Modeling of the Device Operating Principle for Electrical Stimulation of Grafting Establishment of Woody Plants. Lecture Notes in Networks and Systems, 569, 667–673 (2023) 4. Klyuev, D.S., Kuzmenko, A.A., Sokolova, Y.V.: Influence of millimeter-wave electromagnetic waves on seed quality. In: Proceedings of International Conference on Physics and Technical Applications of Wave Processes. Samara (2020) 5. Morozov, G.A., Blokhin, V.I., Stakhova, N.E., Morozov, O.G., Dorogov, N.V., Bizyakin, A.S.: Microwave technology for treatment seed. World J. Agric. Res. 1(3), 39–43 (2013). https:// doi.org/10.12691/wjar-1-3 6. Swicord, M., Balzano, Q., Sheppard, A.: A review of physical mechanisms of radiofrequency interaction with biological systems. In: 2010 Asia-Pacific Symposium on Electromagnetic Compatibility, APEMC 2010, 21–24 (2010). https://doi.org/10.1109/APEMC.2010.5475537 7. Pilyugina, V.V., Regush, A.V.: Electromagnetic stimulation in crop production. All-Union Scientific Research Institute of Information and Technical and Economic Research in Agriculture. Moscow (1980) 8. Vasilev, S.I., Mashkov, S.V., Syrkin, V.A., Gridneva, T.S., Yudaev, I.V.: Results of studies of plant stimulation in a magnetic field. Res. J. Pharmaceutical Biol. Chem. Sci. 9(1), 706–710 (2018) 9. Litvinov, S.S.: Methodology of Field Experience in Vegetable Growing. All-Russian Research Institute of Vegetable Growing. Moscow (2011) 10. Klyuev, D.S., Kuzmenko, A.A., Trifonova, L.N.: Influence on germination of pre-sowing treatment of seeds by irradiation with waves of different lengths. Phys. Wave Processes Radio Syst. 23(1), 84–88 (2020)
11. Ivushkin, D., Yudaev, I., Petrukhin, V., Feklistov, A., Aksenov, M., Daus, Y., Panchenko, V.: Modeling the Influence of Quasi-Monochrome Phytoirradiators on the Development of Woody Plants in Order to Optimize the Parameters of Small-Sized LED Irradiation Chamber. Lecture Notes in Networks and Systems, 569, 632–641 (2023) 12. Mashkov, S., Vasilev, S., Fatkhutdinov, M., Gridneva, T.: Using an electric field to stimulate the vegetable crops growth. Int. Trans. J. Eng. Manage. Appl. Sci. Technol. 11(16), 1–11 (2020) 13. Baev, V.I., Yudaev, I.V., Petrukhin, V.A., Prokofyev, P.V., Armyanov, N.K.: Electrotechnology as one of the most advanced branches in the agricultural production development. In: Handbook of Research on Renewable Energy and Electric Resources for Sustainable Rural Development. IGI Global, Hershey, PA, USA (2018) 14. Yudaev, I.V., Daus, Y.V., Kokurin, R.G.: Substantiation of criteria and methods for estimating efficiency of the electric impulse process of plant material. IOP Conf. Ser. Earth Environ. Sci. 488(1), 012055 (2020) 15. Seeds of agricultural crops. Methods for determining germination. GOST 12038-84. Standartinform. Moscow (1984)
Prototype Development of Electric Vehicle Database Platform Using Serverless and Microservice Architecture with Intelligent Data Analytics Somporn Sirisumrannukul(B) , Nattavit Piamvilai, Sirawich Limprapassorn, Tanachot Wattanakitkarn, and Touchakorn Loedchayakan Department of Electrical and Computer Engineering, Faculty of Engineering, King Mongkut’s University of Technology North Bangkok, Pracharat 1 Rd. Wongsawang, Bangsue, Bangkok 1518, Thailand [email protected], {s6301011910027,s6501011810021, s6501011810012,s6401011810064}@kmutnb.ac.th
Abstract. To fully realize the benefits of EVs, this paper proposes a prototype platform that can perform essential functions as an EV data center for storing and centralizing the data acquired from EVs and chargers directly. Using serverless and microservice architecture, this platform is purposely designed and built for scalability and includes intelligent data analytics capabilities with two different tasks: calculation of CO2 emission, and intraday spatial charging demand analysis. The results confirm that the communication between EVs, chargers, and the platform using the REST API protocol satisfies the response time of the Open Charge Point Protocol (OCPP) 2.0.1 standard, and the backend performance is flexible in terms of scalability and database optimization to accommodate the number of EVs. Given real-time data, the data analytics functions can display CO2 emissions, and predict an intraday power demand. Keywords: Electric vehicle platform · Intelligent data analytics · CO2 emission · Spatial demand forecast · Serverless architecture · Microservice architecture
1 Introduction It is well known that the global development of electric vehicles (EVs) has directly affected the energy sector and power system management in terms of electric infrastructure because EVs are another sort of electric equipment that requires a substantial amount of demand from the grid for charging. The existing electric infrastructure may need upgrading, and therefore requires significant investment. To improve the benefits of EV usage while avoiding system reinforcement more than necessary, numerous approaches have been proposed for controlling and monitoring EV user behaviors. EVs and their charger data can be utilized by system operators and researchers to enhance generation planning and power system operation. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Vasant et al. (Eds.): ICO 2023, LNNS 852, pp. 282–294, 2023. https://doi.org/10.1007/978-3-031-50330-6_28
In general, there are many participants in the EV/EVSE data collection business. EV communication begins at the charger and continues to the controller of the charge point operator. From there, it is connected to the other actors involved, such as the distribution system operator and demand response aggregators. Communication at the different layers is based on predefined protocols [1]; for instance, those of the Open Charge Alliance (OCA), a global cooperation of public and commercial electric vehicle infrastructure stakeholders [2]. With the fast-growing development of the Internet of Vehicles (IoV), numerous advantages of EV technology can be captured and effectively integrated into power system operation and planning, including Vehicle-to-Grid (V2G), Demand Side Management (DSM), and electricity market applications such as Peer-to-Peer energy trading. The implementation of IoV utilizes cloud computing infrastructure [3], and four distinct levels make up the architecture of an IoV platform: the sensor layer (or device layer) for data gathering, the communication layer for data transmission from the device layer to the internet, the processing layer for data analysis, and the application layer for the end-user interface [4, 5]. The collected data from EVs can be evaluated and mined in the processing layer to uncover useful information in a variety of areas, as demonstrated in Fig. 1 [6].
Fig. 1. Various analysis applications for electric vehicle data [6].
To enable the implementation of advanced EV-related services and technologies such as vehicle-to-grid (V2G) and virtual power plants (VPP), a reliable and secure EV data platform is essential. This platform should be able to collect, process, and analyze data from different EVs and charging stations and make the information available to relevant stakeholders in the energy sector. For instance, [7] provides a simulation platform for evaluating flexible-demand management concepts in the context of EV charging, and [8] proposes a blockchain-based quota trading platform that distributes the initial charging power quota evenly and securely. Furthermore, wireless charging is a promising technology for electric vehicles because it eliminates the need for a physical connection between the vehicle and the charging station. Multiple studies
have proposed wireless charging methods and systems for electric vehicles. For example, [9] introduced a wireless charging system and method for EVs, including access to charging stations via a mobile application. However, the development of applications that analyze EV usage data on a large scale, with analytical systems assessing the impact of EV deployment in different areas, is still lacking. This research presents a novel design and develops a real prototype of an EV data platform that uses the most recent data gathering, processing, architecture, and analysis capabilities. The platform's core components and functional features are described along with the opportunities connected with EV data management. We also offer a case study to demonstrate the effectiveness of our platform in displaying CO2 emissions and predicting intraday power demand in an area of interest. The performance of the platform has been tested with simulated data transfer in the JSON (JavaScript Object Notation) format through REST (Representational State Transfer) API communication from EVs and chargers.
2 Platform Architecture To demonstrate the benefits of EV data analysis in a more realistic context, we have designed a prototype platform for an EV data center, which functions as an administrator to link and store data between different businesses, relevant agencies, and future software products. In addition, this platform can analyze the collected data from EVs to identify CO2 emissions, the geographical energy demand required by EVs being charged, and intelligent optimization of EV charging load profiles. The designed platform includes four layers: the frontend layer, backend layer, communication layer, and data analytics layer, each of which requires an appropriate technology to improve the platform's performance. The primary objective of the EV data center is to collect and centralize data already stored by service providers and to utilize that information in various dimensions, such as charging station development planning at the national level, including real-time investigation of the availability status of charging stations and the position and status of EVs at the application level. It also drives standardization and control for data collection and usage, including a mechanism to protect data privacy. The software development paradigm is built on the microservices architecture, which breaks the system into smaller sections known as services. Each service is easily scalable, as each component may be scaled independently and can be modified or repaired without affecting other services [10], as shown in Fig. 2. 2.1 Frontend Layer The frontend layer communicates with the backend layer using an API gateway, which routes requests to the appropriate microservices. The application's data are stored in Amazon S3, a highly scalable object storage service that can handle large amounts of data. Amazon S3 provides durability, availability, and scalability, and is a popular choice for storing static assets, multimedia files, backups, and log data. To deliver the application's data to users, CloudFront is used as a content delivery network (CDN) service. CloudFront caches the content at edge locations around the
world, reducing latency and improving the user experience. It also provides security features such as SSL/TLS encryption, access control, and origin authentication [10].
Fig. 2. Overview of developed prototype platform architecture using serverless and microservice technologies for EV data management
2.2 Communication Layer In this layer, the AWS API Gateway employs the REST format, a method of server-client data communication. It transmits data collected from the use of electric vehicles and electric vehicle chargers to a data storage system via the HTTP protocol and displays it on the user interface. AWS API Gateway serves to expose API endpoints and accept incoming queries. Endpoints can be separated and routed to distinct services by context path or other techniques, combined with security mechanisms such as AWS Identity and Access Management (IAM) or a bespoke authentication solution [11]. 2.3 Backend Layer This layer interfaces with numerous components using serverless database processing technologies to develop and administer the backend platform. The event-trigger type of cloud computing can alleviate most server management responsibilities. Because workflows are designed to be scalable and can be executed automatically, we can focus exclusively on administering the application, with no concern about system scalability during peak hours [12]. The developed microservices backend covers three services: • Data storage for information gathered from EV chargers, • Data storage for information gathered from EVs,
• Data analytics for intelligent computations, in which the processed data is transmitted to storage and displayed on the frontend for users to access the platform.

For the data storage obtained from the use of EVs and EV chargers, non-relational storage is utilized. It is a model that does not require explicit pattern relationships and has a flexible schema, making it appropriate for big data and real-time online applications [13].

2.4 Data Analytics Layer The analytical layer of this platform is considered the most significant layer. On this layer, the vast quantities of data collected from EVs can be analyzed and investigated in a variety of domains. As the number of EVs increases, the amount of EV data that needs to be collected and analyzed also increases, making a scalable platform necessary. While the collected data can be usefully analyzed in various ways for different purposes, as shown in Fig. 1, three main functions are of interest in this research: CO2 reduction and energy consumption analysis, day-ahead optimal load profile management, and intraday spatial charging demand analysis.

2.4.1 CO2 Emission and Energy Consumption Analysis EVs are frequently referred to as "zero emission vehicles" because they emit no CO2 [14] or other greenhouse gases during operation. The study in [15] indicated that the transportation sector accounted for over 14% of greenhouse gas emissions, the greater percentage coming from vehicles with internal combustion engines. Reference [16] emphasized that the usage of EVs greatly reduced greenhouse gas emissions for urban driving, although the reduction rate was smaller for traveling outside the city. Likewise, according to the study in [17], although the rise of EVs will greatly reduce the air pollution produced by combustion engine vehicles, pollution may still be affected by charging due to increasing energy demand. To be specific, although EVs do not emit CO2 directly, generating and distributing the electricity required to power EVs for the recharging process typically results in substantial emissions. Therefore, a sustainable solution for carbon neutrality and net zero emission is to use renewable energy as the source for charging electric vehicles, thereby minimizing electricity generation from fossil-based power plants.

In this analysis, indirect carbon dioxide emissions resulting from EV charging are examined. The average CO2 emission rate from all the power plants in the system is given in (1). The energy consumed by an EV is obtained from the difference between its initial and final SOC multiplied by the size of its battery, given in (2). The emission contribution of the EV is the product of the energy consumed by the EV and the average CO2 emission rate, given in (3).

CO2,Gen = CO2,PP / ENgen    (1)

EVcon,i = (SOCstart,i − SOClast,i) × Batti    (2)

CO2,i = EVcon,i × CO2,Gen    (3)
where
CO2,Gen is the average CO2 emission rate (kg-CO2/kWh),
CO2,PP is the CO2 produced by all power plants (kg-CO2),
ENgen is the electrical energy produced by power plants (kWh),
EVcon,i is the total energy consumption of the ith EV (kWh),
SOCstart,i is the initial state of charge of the ith EV (%),
SOClast,i is the final state of charge of the ith EV (%),
Batti is the battery size of the ith EV (kWh),
CO2,i is the CO2 produced by the ith EV (kg-CO2).
Table 1. Pseudocode for carbon dioxide emissions and energy consumption

Input: GPS location of EVs, SOC, battery size, and boundary GIS data
Output: Carbon dioxide emissions and energy consumption in each area

1          FOR i = 1 to NEV
1-1          FOR j = 1 to NAREA
1-1-1          IF EVGPS,i is in AREAj
1-1-1-1          EVcon,i = (SOCstart,i − SOClast,i) × Batti
1-1-1-2          CO2,i = EVcon,i × CO2,Gen
1-1-1-3          IF EVcon,j and CO2,j exist
1-1-1-3-1          EVcon,j = EVcon,i + EVcon,j
1-1-1-3-2          CO2,j = CO2,i + CO2,j
                 END IF
1-1-1-4          ELSE
1-1-1-4-1          EVcon,j = EVcon,i
1-1-1-4-2          CO2,j = CO2,i
                 END ELSE
               END IF
             END FOR
           END FOR
where
NEV is the number of EVs,
NAREA is the number of areas of interest based on the GIS data (i.e., provinces in Thailand),
EVGPS,i is the position (latitude and longitude in the GIS data) of the ith EV,
AREAj is the jth area,
EVcon,j is the energy consumption in the jth area (kWh),
CO2,j is the CO2 emission in the jth area (kg-CO2).
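For concreteness, the aggregation of Table 1 can be sketched in a few lines of Python. This is an illustrative reading of the pseudocode, not the platform's actual implementation; the record fields and the point_in_area helper are hypothetical stand-ins for the platform's data model and its GIS containment test.

```python
# Sketch of the Table 1 aggregation: per-area EV energy consumption and
# indirect CO2 emission. Field names and point_in_area are hypothetical.
CO2_GEN = 0.407  # average emission rate (kg-CO2/kWh), as used in Sect. 3.3 [19]

def aggregate_emissions(evs, areas, point_in_area):
    """evs: iterable of dicts with 'gps', 'soc_start', 'soc_last' (%), 'battery_kwh';
    areas: dict mapping area name -> GIS boundary;
    point_in_area(gps, boundary) -> bool, e.g. a point-in-polygon test."""
    totals = {}
    for ev in evs:
        for name, boundary in areas.items():
            if point_in_area(ev["gps"], boundary):
                # Eq. (2): SOC is given in percent, hence the division by 100.
                consumed = (ev["soc_start"] - ev["soc_last"]) / 100 * ev["battery_kwh"]
                acc = totals.setdefault(name, {"energy_kwh": 0.0, "co2_kg": 0.0})
                acc["energy_kwh"] += consumed
                acc["co2_kg"] += consumed * CO2_GEN  # Eq. (3)
                break  # each EV is counted in exactly one area
    return totals
```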
2.4.2 Intraday Spatial Charging Demand Analysis Because EVs move around, their power demand varies by area and time: each area has a distinct electric power requirement from mobile EVs. Spatial charging demand analysis is particularly useful for projecting short-term electricity demand if the collected data include the number and status of the scattered EVs being charged in an area of interest. With this information, we can forecast the power demand that the system in that area needs to serve within the next short period of time. Such forecasts allow retailers to procure their energy in the intraday electricity market, while the system operator can become aware if the EV demand may cause problems to the grid. Reference [18], as an example, used data from a GIS to predict the spatial load demand of charging EVs by dividing the study region into smaller portions and simulating the proliferation of EVs based on diffusion theory. In this article, the spatial electricity demand of an EV is based on dynamic data of its charging status collected in real time and static data of its battery size. The demand requirement of an EV can be calculated from (4). Table 2 presents the pseudocode of the spatial charging demand of the total EVs in an area.

EVpower,i = [(SOClast,i − SOCstart,i) × Batti] / tstep    (4)

where
EVpower,i is the power demand of the ith EV (kW),
tstep is the time step (e.g., 15 min),
NEVcharged is the number of EVs being charged,
NEVcharged,j is the number of EVs being charged in the jth area,
EVpower,j is the total EV power demand in the jth area (kW).
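As a concrete reading of (4), the sketch below computes the average charging power of one EV over a 15-min time step; the function and argument names are illustrative, not the platform's API.

```python
# Sketch of Eq. (4): average power drawn by one EV over a time step.
def ev_power_kw(soc_start_pct, soc_last_pct, battery_kwh, t_step_h=0.25):
    """Average charging power (kW) of an EV whose SOC rose from soc_start_pct
    to soc_last_pct over t_step_h hours (0.25 h = 15 min)."""
    energy_kwh = (soc_last_pct - soc_start_pct) / 100 * battery_kwh
    return energy_kwh / t_step_h

# Example: a 60 kWh battery charged from 40% to 50% within 15 min draws
# (0.10 * 60) / 0.25 = 24 kW on average.
print(ev_power_kw(40, 50, 60))  # 24.0
```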
3 Implementation Results 3.1 Communication Between EVs, Chargers, and Platform To assess the efficiency of receiving data, events with different numbers of data sets are simulated and transferred from EVs and chargers using the HTTP PUT request method via a REST API gateway. Figure 3 depicts how data are received from and sent to a NoSQL database via the REST API gateway using HTTP methods, based on serverless processing and an event-trigger-driven architecture that provides low processing time and cost savings thanks to an on-demand pricing model. From the sample data shown in Fig. 4, the results of transferring data from EVs and EV chargers are as follows and shown in Fig. 5. • To save 100 data sets, it takes about 31 s with an average duration of 222 ms per data set. • To save 1000 data sets, it takes about 5 min and 5 s with an average duration of 229 ms per data set.
Table 2. Pseudocode for intraday spatial charging demand

Input: GPS location of EVs, SOC, battery size, boundary GIS data and EV status
Output: Number of EVs being charged in each area, power demand for each area

1          FOR i = 1 to NEVcharged
1-1          FOR j = 1 to NAREA
1-1-1          IF EVGPS,i is in AREAj
1-1-1-1          EVpower,i = [(SOClast,i − SOCstart,i) × Batti] / tstep
1-1-1-2          IF NEVcharged,j and EVpower,j exist
1-1-1-2-1          NEVcharged,j = NEVcharged,j + 1
1-1-1-2-2          EVpower,j = EVpower,i + EVpower,j
                 END IF
1-1-1-3          ELSE
1-1-1-3-1          NEVcharged,j = 1
1-1-1-3-2          EVpower,j = EVpower,i
                 END ELSE
               END IF
             END FOR
           END FOR
• To save 10,000 data sets, it takes about 51 min and 52 s with an average duration of 233 ms per data set.

The above test results indicate that the REST API gateway has acceptable response speeds of well below 4 s. According to the OCPP 2.0.1 standard [2], the suggested maximum duration for connecting data is 5 s, so the test results satisfy the standard. It can be clearly seen that the average duration for each of the three sets with different data volumes differs only slightly and is largely independent of the number of data sets. The results confirm that the API gateway successfully continued performing its functions properly under heavy demand. The requests were appropriately processed, and the platform operated smoothly with less stress during peak hours because the REST API gateway was designed to be scalable and able to sustain traffic increases.
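The data-transfer test described above can be reproduced with a short client script. The endpoint URL and the payload fields below are illustrative assumptions, not the platform's published interface; the sketch only shows the kind of timed PUT loop the test implies.

```python
# Minimal sketch of the transfer test: PUT JSON records to a REST API gateway
# and time the round trips. Endpoint and payload schema are hypothetical.
import json
import time
import urllib.request

ENDPOINT = "https://example.execute-api.region.amazonaws.com/prod/ev-data"  # assumed

def put_record(record: dict) -> float:
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(record).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    start = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        resp.read()
    return time.perf_counter() - start

durations = [put_record({"ev_id": i, "soc": 55, "lat": "13.82", "lon": "100.51"})
             for i in range(100)]
print(f"average duration: {1000 * sum(durations) / len(durations):.0f} ms")
```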
Fig. 3. Example of REST API communication.
Fig. 4. Example of received information from EVs.
Fig. 5. Data transmission test results from EVs and chargers.
3.2 Backend Performance The backend manages data obtained from EVs and chargers, performs analytic functions, and controls access for the frontend. The backend performance can be determined from two criteria concerning communication and data connection. • Scalability: The platform can automatically scale up or down in response to fluctuating demand; namely, the platform has managed to withstand traffic spikes without slowing down or crashing. Thanks to the NoSQL database optimized for performance and scalability and a serverless database engine, the platform was able to retrieve data quickly and effectively. Figure 6 shows a scalability test of the NoSQL database, starting with one data set; PUT requests for 200 data sets were made against the NoSQL database, which took about 5 min. The graph shows the NoSQL database can comfortably support the increased size of data sets. • Database optimization: The key-value data storage format utilized by the NoSQL database made data querying and connecting far simpler and more efficient, as shown in Fig. 7, which gives an example of retrieving data from the NoSQL database. The advantage is that this type of data storage uses a cache to improve performance, making the operation speed significantly higher than for other types of databases.
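The paper does not name the NoSQL service; a common choice matching this serverless, key-value description is Amazon DynamoDB, so the sketch below assumes DynamoDB, with a hypothetical table name and key schema.

```python
# Illustrative key-value access with a serverless NoSQL database. Amazon
# DynamoDB is assumed; table and attribute names are hypothetical.
import boto3

table = boto3.resource("dynamodb").Table("ev-telemetry")  # assumed table name

# Write one telemetry record keyed by EV id and timestamp.
table.put_item(Item={
    "ev_id": "EV-0001",                         # partition key (assumed)
    "timestamp": "2023-01-01T00:00:00+07:00",   # sort key (assumed)
    "soc": 55,
    "lat": "13.8200",
    "lon": "100.5100",
})

# Retrieve it back with the same key: a single key-value lookup, which is
# what keeps the query path fast regardless of table size.
item = table.get_item(Key={
    "ev_id": "EV-0001",
    "timestamp": "2023-01-01T00:00:00+07:00",
})["Item"]
print(item)
```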
Fig. 6. Scalability of NoSQL database
Fig. 7. Example of available data queries with key-values stored in NoSQL database.
3.3 Data Analytics Results This section presents the data analytics results for the two functions embedded in the platform: carbon emission and energy consumption analysis, and intraday spatial analysis of the power demand. Given a rate of carbon dioxide emissions per amount of electricity generation of 0.407 kg-CO2/kWh [19], the reduction in carbon dioxide and the energy consumption can be calculated with the algorithm in Table 1. The amount of energy consumption and CO2 emissions from electric vehicles in five example areas (five provinces) is shown in Table 3. Obviously, the amount of emission normally varies with the number of EVs. In this case, real-time emission data can be evaluated and could be useful for carbon markets. The forecast results of spatial power demand based on the number of EVs being charged in the area of the five provinces are shown in Table 4. With a sampling rate of every 15 min, the platform succeeded in identifying, at the time of data recording, how many EVs were being charged at public stations or at homes and the charging power rate of each EV. It is seen that the forecast power demand for the next 15 min depends generally on the number of EVs being charged.
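As a quick check of the figures in Table 3 (applying the 0.407 kg-CO2/kWh rate directly), the Chon Buri row reproduces as 99,072.00 kWh × 0.407 kg-CO2/kWh ≈ 40,322.30 kg-CO2.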
Table 3. Result of carbon emission and energy consumption for five example areas.
Area | Energy consumption (kWh) | CO2 emission (kg-CO2)
Chon Buri | 99,072.00 | 40,322.30
Rayong | 165,223.20 | 67,245.84
Chanthaburi | 26,935.20 | 10,962.63
Trat | 22,704.00 | 9,240.53
Chachoengsao | 30,065.60 | 12,236.70
Table 4. Result of spatial power demand analytics.

Area | No. of EVs in area (unit) | No. of EVs being charged (unit) | Power demand (kW)
Chon Buri | 57,600 | 2880 | 113,346
Rayong | 96,060 | 4803 | 190,308
Chanthaburi | 15,660 | 783 | 30,544
Trat | 13,200 | 660 | 25,766
Chachoengsao | 17,480 | 874 | 32,360
4 Conclusion This article has presented the architecture and prototype of an EV database platform that can serve as a data center for storing and centralizing EV and EV charger data from several available sources with different stakeholders. The developed platform was integrated with GIS-based maps and is capable of visualizing data such as the real-time locations of EVs. The platform contains data analytics functionalities featuring two different tasks: calculating the carbon emissions from EVs and forecasting intraday spatial charging demand. The experimental results demonstrate that the platform can communicate with EVs and EV chargers in real time with good response times satisfying the OCPP standard protocol. In addition, the architecture of the platform is based on serverless and microservices, allowing for scalability and independent modification of functionality. The platform's potential can further be extended to include, for example, data security, additional functionalities, and associated technologies like blockchain and smart contracts. Sharing data with an EV data center requires policies to promote and incentivize data sharing for specific purposes of analytics without imposing an excessive burden on data sharers. In addition, because a substantial amount of data can be obtained from EVs, a system capable of handling big data analytics becomes necessary for optimizing computation time. The EV data platform implemented in this research can be used in practice by policymakers and electric power utilities for effective operation and planning of power system infrastructure to accommodate the wide adoption of EVs in the future.
References 1. Driivz: EV Charging Standards and Protocols. Retrieved January 2023, from https://driivz.com/blog/ev-charging-standards-and-protocols/ (2021, August 12) 2. Open Charge Alliance (n.d.): Home. Retrieved January 2023, from https://www.openchargealliance.org/ 3. Rimal, B., Kong, C., Poudel, B., Wang, Y., Shahi, P.: Smart electric vehicle charging in the era of internet of vehicles, emerging trends, and open issues. Energies 15(5), 1908 (2022). https://doi.org/10.3390/en15051908 4. Sharma, S., Kaushik, B.: A survey on internet of vehicles: applications, security issues & solutions. Vehic. Commun. 20, 100182 (2019). https://doi.org/10.1016/j.vehcom.2019.100182 5. Lv, Z., Qiao, L., Cai, K., Wang, Q.: Big data analysis technology for electric vehicle networks in smart cities. IEEE Trans. Intell. Transport. Syst. 22(3), 1807–1816 (2021). https://doi.org/10.1109/TITS.2020.3008884 6. Li, B., Kisacikoglu, M.C., Liu, C., Singh, N., Erol-Kantarci, M.: Big data analytics for electric vehicle integration in green smart cities. IEEE Commun. Mag. 55(11), 19–25 (2017). https://doi.org/10.1109/MCOM.2017.1700133 7. Palensky, P., Widl, E., Stifter, M., Elsheikh, A.: Modeling intelligent energy systems: co-simulation platform for validating flexible-demand EV charging management. In: 2014 IEEE PES General Meeting, Conference & Exposition, National Harbor, MD, USA, 2014, p. 1. https://doi.org/10.1109/PESGM.2014.6939434 8. Ping, J., Chen, S., Yan, Z., Wang, H., Yao, L., Qian, M.: EV charging coordination via blockchain-based charging power quota trading. In: 2019 IEEE Innovative Smart Grid Technologies—Asia (ISGT Asia), Chengdu, China, 2019, pp. 4362–4367. https://doi.org/10.1109/ISGT-Asia.2019.8881070 9. Qian, Y.M., Ching, T.H., Abidin, Z.M.B.Z.: Mobile application system for chargEV charging stations. In: 2022 IEEE 2nd International Conference on Mobile Networks and Wireless Communications (ICMNWC), Tumkur, Karnataka, India, 2022, pp. 1–5. https://doi.org/10.1109/ICMNWC56175.2022.10031669 10. Fowler, M.: Microservices. Retrieved January 2023, from http://martinfowler.com/microservices/ (2014) 11. Amazon Web Services (n.d.): Amazon API Gateway Developer Guide. Retrieved January 2023, from https://docs.aws.amazon.com/apigateway/latest/developerguide/ 12. Serverless, Inc. (n.d.): Serverless Framework. Retrieved January 2023, from https://www.serverless.com/framework/ 13. Amazon Web Services (n.d.): NoSQL databases. Retrieved January 2023, from https://aws.amazon.com/nosql/ 14. Kumar, R., Lamba, K., Raman, A.: Role of zero emission vehicles in sustainable transformation of the Indian automobile industry. Res. Transp. Econ. 90, 101064 (2021). https://doi.org/10.1016/j.retrec.2021.101064 15. Ehsani, M., Singh, K.V., Bansal, H.O., Mehrjardi, R.T.: State of the art and trends in electric and hybrid electric vehicles. Proc. IEEE 109(6), 967–984 (2021). https://doi.org/10.1109/JPROC.2021.3072788 16. Smith, W.J.: Can EV (electric vehicles) address Ireland's CO2 emissions from transport? Energy 35(12), 4514–4521 (2010). https://doi.org/10.1016/j.energy.2010.07.029 17. Brenna, M., Longo, M., Zaninelli, D., Miceli, R., Viola, F.: CO2 reduction exploiting RES for EV charging. In: 2016 IEEE International Conference on Renewable Energy Research and Applications (ICRERA), Birmingham, UK, 2016, pp. 1191–1195. https://doi.org/10.1109/ICRERA.2016.7884521
18. Heymann, F., Pereira, C., Miranda, V., Soares, F.J.: Spatial load forecasting of electric vehicle charging using GIS and diffusion theory. In: 2017 IEEE PES Innovative Smart Grid Technologies Conference Europe (ISGT-Europe), Turin, Italy, 2017, pp. 1–6. https://doi.org/10.1109/ISGTEurope.2017.8260172 19. CO2 statistic (n.d.): Eppo.Go.Th. Retrieved March 6, 2023, from https://www.eppo.go.th/index.php/en/en-energystatistics/co2-statistic
A Cost-Effective and Energy-Efficient Approach to Workload Distribution in an Integrated Data Center Obaida Jahan1 , Nighat Zerin1 , Nahian Nigar Siddiqua1 , Ahmed Wasif Reza1(B) , and Mohammad Shamsul Arefin2(B) 1 Department of Computer Science and Engineering, East West University, Dhaka 1212,
Bangladesh [email protected] 2 Department of Computer Science and Engineering, Chittagong University of Engineering and Technology, Chattogram 4349, Bangladesh [email protected]
Abstract. This paper aims at low-cost and energy-efficient workload distribution for each data center by utilizing green energy while reducing the consumption of brown energy from non-renewable sources like coal, gas, and oil. To this end, the proposed algorithm ensures that the local green and brown energy supply never runs out: in non-peak time, the extra renewable energy in the data center is saved and used later at peak time. We propose the concepts of green workload and green service against brown workload and brown service, depicting green and economic factors equally. This highlights the difference between green energy maximization and total cost minimization, resulting in a green and brown workload allocation with savings calculated for each time slot and per day. The workload distribution algorithm is designed, and the business process models, graphs, algorithm flowchart, and result analysis are presented in the paper, showing that energy consumption and cost are reduced. Keywords: Green energy · Brown energy · Maximization · Cost · Power consumption
1 Introduction Demand for online services has grown exponentially as services of every size are distributed, and data centers have come to consume remarkable amounts of electric power. Meeting this demand while keeping the environment healthy is not an easy job. Due to the excess waste produced by the rising number of data centers and the poor management of that waste, the world is slowly becoming more polluted. Hence, over the past few years, techniques and options have been developed to improve both the energy and cost efficiency of data centers. Renewable energy sources include solar cells, windmills, waterwheels, and more [1]; using this green energy more and more reduces the emission of carbon footprints. However, there have
been research groups considering the electricity market, Internet Data Centers (IDC), and solutions to the problem of minimizing the cost of electricity [2]. They have proposed various frameworks, models, and solutions to reduce electricity costs. The proposed model reduces the power consumption cost and enhances the on/off schedule. However, the one thing almost all of them still fail to include is the use of renewable resources and a demonstration of its environmental advantages. In our research, one more issue is addressed: saving renewable energy so it can be used later. Based on these investigations, a new workload distribution strategy is studied in which green energy and brown energy have different impacts on the environment as well as on cost. Moreover, our proposed algorithm is not restricted to the case when the local green energy is less than the local power consumption; instead, it also handles the situation where the local green and brown energy supply never runs out: in non-peak time in the data center, the extra renewable energy is saved and then utilized at peak time before coal, gas, or oil (brown energy) is burned [2]. This saves considerable cost and power consumption. We propose a framework to demonstrate the BETTER algorithm, covering each data center, its hourly services, and the whole process. In our algorithm, the data center utilizes green energy as much as possible. When the data center load is relatively small, in non-peak time, it saves green energy for further utilization and becomes less dependent on brown energy [3]. After those two steps, when there is no longer adequate renewable energy, it steps down to brown energy. Moreover, we have kept a variable for saved brown energy to show how much less brown energy is needed thanks to green energy maximization. Green versus Brown: We propose the concepts of green workload and green service against brown workload and brown service. These concepts sharpen the boundary between green energy utilization and maximization on the one hand and the cost reduction of the total workload on the other. The motive is to design an almost or nearly green workload and a low-cost strategy [1, 4].
2 Related Work Areekul et al. [2] have discussed a better solution in their paper to address the problem of energy price forecasting. They proposed an improvised version called hybrid ARIMA, a cross between ARIMA and ANN architectures, bringing an Artificial Intelligence (AI) aspect to the prediction mechanism powered by the Autoregressive Integrated Moving Average (ARIMA). The result is a new ARIMA + ANN architecture for predicting short-term electricity prices: a bare ARIMA model composed with deep learning mechanisms, where the deep learning aspect plays a significant role in deciding the prices. In [2], the authors proposed a model that works on hybrid time series data composed of linear and non-linear relational variables to improve forecasting performance. They found that their Adaptive Wavelet Neural Network (AWNN) model performs better than the ARIMA model. According to them, AWNN is better at modeling non-stationary electricity prices, which would be a revolution for data centers. They also argued how good it is at predicting electricity price hikes and claimed that their model is better than the literature they quoted.
The following papers discuss power distribution and the impact of renewable energy sources. Byrnes et al. [5] have shown the changing dynamics of the renewables economy and industry, mentioning all the aspects that affect the market and industry related to this concept. Though the concept was incepted many decades ago, awareness and interest have grown lately due to the visibility of negative changes in the climate. They have shown the market share for energy sources, including renewables: 10% of the market is held by renewables, 1% by oil products, 20% by gas, 22% by brown coal, 46% by black coal, and 1% by other sources. Australia is investing heavily in this sector with the ARENA, RET, and CEFC designs. In [6], Soman et al. have argued that wind energy price prediction can be made more accurate. They considered different time horizons and tried to cover all the edge cases, arguing that if wind power consumption and wind speed can be forecasted, supply and demand management will be more efficient; power distribution will then be more efficient, and businesses will grow substantially. They applied conventional methods, estimating wind speed and power from statistical data and numerical weather prediction (NWP), and applied ANNs on different time scales to get the optimal value, providing evidence in support of their research. In [7], Ghamkhari et al. have taken a unique approach to the energy and performance management of green data centers. They argued that profit maximization can be achieved through internal system optimization, emphasizing profit maximization through improvements and optimizations while trying to lessen the effect of the trade-offs that come with optimization and costing. They considered practices such as Service-Level Agreements (SLAs) between vendors and clients over a particular period, and studied different pricing models such as Day-Ahead Pricing (DAP), Time-Of-Use Pricing (TOUP), Critical-Peak Pricing (CPP), and Real-Time Pricing (RTP). Furthermore, they gave some ideas on implementing smart grids. In [8], Khalil et al. discussed workload distribution. They proposed an algorithm, named Geographical Load Balancing (GreenGLB) and based on greedy algorithm design techniques, to distribute workload and manage individual components. They also discuss taking pre-measures for the impending energy crisis: big techs such as Google, Amazon, and Microsoft pay vast amounts of their operating cost on power bills alone. Researchers have tried to address many issues related to green computing [9–18] to make our planet safe for living.
3 Materials and Methods 3.1 Materials The required materials were accessed online through various journals. We first gathered many resources, then scrutinized the entire profile of the materials and took the best and most usable assets into consideration. We reviewed the most reliable and trusted data sources, such as Papers with Code, UCI, and more, and found the best
one on Kaggle, titled "Hourly Energy Demand Generation and Weather" [19]. It was not a competition dataset; instead, the publishers released it as an open-source contribution. 3.2 Datasets The dataset includes comprehensive statistics on electricity output, consumption, and pricing for four years in Spain. The information was gathered from two public sources: Red Eléctrica de España and ENTSOE. Red Eléctrica de España is the company running the electricity market in Spain, and ENTSOE is a website that gives access to information about the nation's electrical transmission and service activities. In addition to the electricity statistics, the dataset also contains data on the weather in Spain's five largest cities; this weather data is kept separate from the primary dataset and is not analyzed or interpreted here. The information, made available to the public through the ENTSOE and REE websites, is designed to give a thorough overview of the Spanish electricity market over four years [19]. In the context of this dataset, it is important to remember that ENTSOE is a gateway that offers access to data from Transmission System Operators (TSOs) throughout Europe. Europe has 39 TSOs, and ENTSOE acts as a common clearinghouse for data on the service operations and electrical transmission offered by these companies. Table 1 shows some samples from the dataset.

Table 1. Sample data from dataset

Time | Total incoming green energy | Total incoming brown energy | Total incoming load | Total actual load | Green energy price | Brown energy price
2015-01-01 00:00:00 + 01:00 | 12.59 | 7.12 | 19.71 | 17.71 | 644.71 | 364.25
2015-01-01 01:00:00 + 01:00 | 12.1 | 7.31 | 19.41 | 17.01 | 619.21 | 374.28
2015-01-01 02:00:00 + 01:00 | 11.75 | 6.95 | 18.69 | 15.86 | 601.24 | 355.75
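For orientation, a minimal way to load rows like those in Table 1 with pandas is sketched below. The file name and column names are assumptions matching the sample above, not the raw Kaggle file, whose schema differs.

```python
# Minimal sketch: load the hourly data and compute the per-hour surplus of
# green energy over the total incoming load. File and column names are
# assumed to match Table 1.
import pandas as pd

df = pd.read_csv("energy_dataset.csv", parse_dates=["time"]).set_index("time")

# Positive surplus: green energy that can be saved for peak time.
# Negative surplus: a shortfall that brown energy must cover.
surplus = df["total incoming green energy"] - df["total incoming load"]
print(surplus.head())
```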
3.3 Data Collection Methods As discussed earlier, we collected the dataset from Kaggle. We took only part of the dataset, as we intended to work with the energy-related data; as noted, the author packaged weather data together with the energy data. The dataset was released for public
consumption. The authors elaborated on how they collected the data: while compiling the dataset, they took energy-consumption-related data from the European TSOs, mostly quoting ENTSOE and REE as their prime sources. 3.4 Data Sampling We made several valid assumptions while sampling the data. According to ASCADIC, Spain had 43 data centers in 2021, with a combined power capacity of over 300 MW. These data centers are distributed across the country, with the majority located in the Madrid and Barcelona regions. According to a report by the same organization, data centers in Spain consume approximately 3% of the total electricity demand in the country. However, it is essential to remember that this amount is only an estimate, and the actual energy consumption of data centers in Spain may differ. Hence, we assumed that there are 43 functioning data centers in Spain and that they consume 3% of the total electricity. Using Spain's data-center figures, we extend the estimate to integrated data centers in other such countries worldwide. 3.5 Research Ethics From the beginning, our approach always respected the law, whether in data collection or processing. We processed the data very carefully and avoided any unlawful activity. The dataset we chose is not tightly licensed; the author was very generous about the licensing and terms of use, and the dataset carries a very permissive license tier: CC0: Public Domain. Even so, we avoided any activity that might harm the community and society. 3.6 Framework In Fig. 1, green energy is saved when it is greater than the total incoming workload; otherwise, brown energy is utilized [1, 20]. Sources of green energy are solar, wind, and water, whereas brown energy comes from coal, oil, and gas. In Fig. 2, green energy is both used and saved when it exceeds the total incoming workload; when there is a shortage of green energy, brown energy is utilized [1] and excess brown energy is saved. 3.7 Algorithm BETTER Algorithm Inputs: Total incoming workload = Li. Total green incoming workload = Lgi. Total brown incoming workload = Lbi.
Fig. 1. AS-IS model (before saving green energy)
Fig. 2. TO-BE model (after saving green energy)
Outputs: the allocated workload for each data entry.
Green workload, Ygi
Brown workload, Ybi
Total workload, Y

# Assume we are optimizing for each data center and the time slot is divided into hours.
# Assume we have N divisions in time slots.
# For each time slot we calculate Y.

# Per-day savings
SAVE_G = 0
SAVE_B = 0
SAVE = 0

# Total workload, Y (per day)
Y = 0

# For test
Li = 0
Li = total incoming workload

# Per-hour savings
Save = 0
Save_g = 0
Save_b = 0

# Per-hour workload
Yi = 0
Total allocated workload: Yi = Lgi + Lbi

1.  For each time slot do
2.    # Brown workload distribution optimization
3.    If Lgi < Li:            # green workload is less
4.      If Lbi > Li:
5.        Save_b += Lbi − Li  # save brown energy
6.    # Green workload optimization
7.    Else if Lgi > Li:       # green workload is more
8.      Yi += Li              # utilize green
9.      Save_g += Lgi − Li    # save green energy
10.   Else:
11.     Yi += Lgi             # green workload equals the allocated workload
12.   Save = Save_g + Save_b
13.   SAVE_G += Save_g; SAVE_B += Save_b
14.   Yi = Lgi + Lbi + SAVE
    End for

The flow chart of the proposed algorithm is shown in Fig. 3.
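For readers who want to run the loop, the following is an interpretive Python sketch of the listing above. Where the pseudocode is ambiguous (the brown branch does not state how Yi is updated), the sketch serves the full load in every slot and records the surpluses as savings; the slot tuples and values are illustrative.

```python
# Interpretive, runnable sketch of the BETTER loop. Each slot is a tuple
# (li, lgi, lbi): total incoming load, incoming green workload, incoming
# brown workload, in arbitrary energy units per hour.
def better(slots):
    save_g_total = save_b_total = 0.0   # per-day savings (SAVE_G, SAVE_B)
    allocated = []                      # per-slot allocated workload (Yi)
    for li, lgi, lbi in slots:
        save_g = save_b = 0.0
        if lgi < li:                    # green alone cannot cover the load
            yi = li                     # served by green plus brown
            if lbi > li:
                save_b = lbi - li       # surplus brown energy saved
        elif lgi > li:                  # green exceeds the load
            yi = li                     # served entirely by green
            save_g = lgi - li           # surplus green energy saved
        else:
            yi = lgi                    # green exactly matches the load
        save_g_total += save_g
        save_b_total += save_b
        allocated.append(yi + save_g + save_b)
    return allocated, save_g_total, save_b_total

# Three illustrative hourly slots (values in the spirit of Table 1):
alloc, saved_green, saved_brown = better(
    [(17.71, 12.59, 7.12), (17.01, 19.00, 7.31), (15.86, 11.75, 6.95)]
)
print(alloc, saved_green, saved_brown)
```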
3.8 Experimental Setup Time Plots (Per Day and Per Month) Figure 4 shows the total actual load a data center (DC) faces against time; the natural curve in the graph shows how the total actual load changes over a day. Figure 5 illustrates the total incoming load a data center can handle for a certain period; the visible curve shows how the total incoming load increases over a day. Figure 6 shows a DC's total actual load over a month of usage. There we see two massive downward spikes indicating the DC's inactivity. The workload was down on
Fig. 3. Flow chart of the BETTER algorithm
Fig. 4. Total actual load versus time (per day)
just 2015-01-06 09:00:00 + 01:00 and 2015-01-28 12:00:00 + 01:00, which shows lower energy consumption where the total workload was optimized. As we can see in Fig. 7, the total incoming load per month dropped multiple times, for example on 2015-01-05 09:00:00 + 01:00 and 2015-01-28 00:00:00 + 01:00, due to a shortage of energy. Figure 8 compares the available total incoming load against the required total actual load of a DC. The turbulence means there are some cases where the requirement is not met; after the energy is utilized, the actual load is minimized, whereas the incoming load increases when there is an abundance of energy.
Fig. 5. Total incoming load versus time (per day).
Fig. 6. Total actual load versus time (per month)
Fig. 7. Total incoming load versus time (per month)
Figure 9 describes the ups and downs of green energy prices over 2015–2018. After the energy was utilized through the workload allocation of the proposed algorithm, the total cost was reduced.
Fig. 8. Total incoming load versus total actual load
Fig. 9. Time versus green energy price
4 Result Analysis

Each megawatt of green energy = €51.19
Each megawatt of brown energy = €51.19
Cost of electricity, ci = €51.19
Total workload per day = yi
Results have been taken from the 1st and 2nd days and estimated for 1 year.
Saved cost for green energy = €226.77
Cost for 7 days = €1587.40
Cost for 30 days = €47,622
Cost for 1 year = €619,086.
4.1 Comparison: Brown Energy Versus Green Energy Cost Utilization

Saved cost for brown energy: cost for 1 year = €0 [no brown energy was saved on the 1st and 2nd days].
Saved cost for green energy: cost for 1 year = €619,086.
Total cost after utilization = €4,607,761 (through green energy only).

In a year, a great deal of energy can be utilized by saving green energy and using it later. If brown energy consumption were likewise reduced, cost could be saved and the approach would be more eco-friendly as well. Since we took only the 1st and 2nd days from the dataset, no brown energy was saved, which is why its saved cost is 0.
5 Conclusion It is suggested to separate the challenges of brown energy cost reduction and green energy utilization maximization by treating the two types of energy, green and brown, individually. In this context, the terms green workload and green service rate, as opposed to brown workload and brown service rate, have been introduced for data centers. As a result, it has been demonstrated that the new distribution algorithm outperforms the most effective workload allocation methods currently in use in terms of power. To lower the green data center's overall energy costs, we employ a variety of workload and power management techniques. In this paper, we investigated the challenge of reducing total energy expenses while accounting for varying electricity pricing, on-site renewable energy, and the number of active servers. To address the issue, we developed an algorithm that designs a strategy for allocating the incoming workload. The studies' findings demonstrated that our suggested method can significantly reduce data centers' overall energy costs. We expect that our suggested effort will positively impact lowering power consumption costs, utilizing renewable resources, avoiding the burning of coal, gas, or oil at peak times, utilizing green energy to the greatest extent possible, and becoming less reliant on brown energy.
References 1. Kiani, A., Ansari, N.: Toward low-cost workload distribution for integrated green data centers. IEEE Commun. Lett. 19(1), 26–29 (2015) 2. Areekul, P., et al.: Notice of violation of IEEE publication principles: a hybrid Arima and neural network model for short-term price forecasting in a deregulated market. IEEE Trans. Power Syst. 25(1), 524–530 (2010)
3. Madhumathi, D.C.: Comparative analysis on various workload balancing algorithms with the proposed time based load balancer (TBLB) algorithm for efficient resource management in an academic cloud computing environment. https://www.journalaquaticscience.com/article_135179_cb758e6d5ba9883c8071760afd88f5f3.pdf 4. Han, X., et al.: Optimization of workload distribution of data centers based on a self-learning in situ adaptive tabulation model. In: Building Simulation Conference Proceedings (2019) 5. Meisner, D., Wenisch, T.F.: Stochastic queuing simulation for data center workloads. https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.467.7982&q=DWT%3A+Decoupled+Workload+Tracing+for+Data+Centers 6. Soman, S.S., et al.: A review of wind power and wind speed forecasting methods with different time horizons. In: North American Power Symposium (2010) 7. Ghamkhari, M., Mohsenian-Rad, H.: Energy and performance management of green data centers: a profit maximization approach. IEEE Trans. Smart Grid 4(2), 1017–1025 (2013) 8. Khalil, M.I., et al.: Energy efficient indivisible workload distribution in geographically distributed data centers. IEEE Access 7, 82672–82680 (2019) 9. Yeasmin, S., Afrin, N., Saif, K., Reza, A.W., Arefin, M.S.: Towards building a sustainable system of data center cooling and power management utilizing renewable energy. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_67 10. Liza, M.A., Suny, A., Shahjahan, R.M.B., Reza, A.W., Arefin, M.S.: Minimizing e-waste through improved virtualization. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_97 11. Das, K., Saha, S., Chowdhury, S., Reza, A.W., Paul, S., Arefin, M.S.: A sustainable e-waste management system and recycling trade for Bangladesh in green IT. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_33 12. Rahman, M.A., Asif, S., Hossain, M.S., Alam, T., Reza, A.W., Arefin, M.S.: A sustainable approach to reduce power consumption and harmful effects of cellular base stations. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_66 13. Ahsan, M., Yousuf, M., Rahman, M., Proma, F.I., Reza, A.W., Arefin, M.S.: Designing a sustainable e-waste management framework for Bangladesh. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_104 14. Mukto, M.M., Al Mahmud, M.M., Ahmed, M.A., Haque, I., Reza, A.W., Arefin, M.S.: A sustainable approach between satellite and traditional broadband transmission technologies based on green IT. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization.
ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_26 15. Meharaj-Ul-Mahmmud, Laskar, M.S., Arafin, M., Molla, M.S., Reza, A.W., Arefin, M.S.: Improved virtualization to reduce e-waste in green computing. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_35
16. Banik, P., Rahat, M.S.A., Rafe, M.A.H., Reza, A.W., Arefin, M.S.: Developing an energy cost calculator for solar. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_75 17. Ahmed, F., Basak, B., Chakraborty, S., Karmokar, T., Reza, A.W., Arefin, M.S.: Sustainable and profitable IT infrastructure of Bangladesh using green IT. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_18 18. Ananna, S.S., Supty, N.S., Shorna, I.J., Reza, A.W., Arefin, M.S.: A policy framework for improving e-waste management in Bangladesh. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_95 19. Jhana, N.: Hourly energy demand generation and weather. https://www.kaggle.com/datasets/nicholasjhana/energy-consumption-generation-prices-and-weather 20. EPA Report to Congress on Server and Data Center Energy Efficiency. https://www.energystar.gov/ia/partners/prod_development/downloads/EPA_Report_Exec_Summary_Final.pdf
Optimization of the Collection and Concentration of Allelopathically Active Plants Root Secretions A. N. Skorokhodova , Alexander A. Anisimov , Julia Larikova , D. M. Skorokhodov(B) , and O. M. Mel’nikov Russian State Agrarian University – Moscow Timiryazev Agricultural Academy, Moscow 127550, Russia [email protected]
Abstract. Ways to optimize the collection and concentration of the root secretions of allelopathically active plants are discussed in this article. Root exudates are considered one of the main forms of plant allelochemicals, and allelopathy of this kind is typical of most plant species on our planet. Such exudates may have potential bioherbicidal activity, which is relevant for the development and implementation of new eco-friendly techniques in agricultural production. A device for collecting and concentrating root extracts of allelopathically active plants is described. Activated carbon was used to collect the root secretions, and the chemical structure of the adsorbent is presented. The schematic diagram and the 3D model of the device were developed in the Compass-3D computer-aided design system. The technical parameters and the principle of operation of the device are reviewed. According to the research results, a positive effect of the root exudates of lupin and sunflower on the growth and development of 10-day cucumber seedlings was revealed: the length of the root system increased by 12.5% (lupin) and 16.6% (sunflower), and the aboveground part of the cucumber seedlings responded as well, by 44.4% (lupin) and 56.1% (sunflower). Keywords: Root exudates · Allelopathy · Filtration · Device · Absorbents · 3D model
1 Introduction The relevance of this work stems from the fact that obtaining plant root secretions is important for the study of allelopathy and for the use of plant allelochemicals to create bioherbicides. This direction in biology is relevant from the point of view of the development and implementation of new eco-friendly techniques in agriculture [1–4]. The aim of the work is the development of a device that optimizes the collection of concentrated root exudates of allelopathically active plants. To achieve this aim, it is necessary to perform a number of tasks:
1. Analyze comparable devices for concentrating the root exudates of allelopathically active plants; 2. Draw a schematic diagram of the developed device; 3. Design a 3D model of the developed device in the Compass-3D program; 4. Assemble the operating device and conduct tests; 5. Identify the prospects for further modernization and optimization of the device. The importance of this study lies in the fact that the developed device can be used to obtain concentrated root exudates, which can be used as plant growth stimulants and as bioherbicides capable of suppressing the growth of weeds in an environmentally safe way while stimulating the growth of agricultural crops.
2 Materials and Methods Prototypes of devices that allow the collection of plant allelochemicals have been constructed in European countries. Unfortunately, they are expensive and difficult to operate. The device we are developing is free from these drawbacks. Our device is based on the phenomenon of absorption, the uptake of sorbate by the entire volume of the sorbent [5]. Root exudates of plants are substances of a quite diverse chemical nature; therefore, universal absorbents must be used to collect them [6, 7]. There is a laborious established method of obtaining root exudates of plants. It consists in growing donor plants in black-colored glass funnels in a neutral substrate (sand, perlite, etc.); when watering, excess moisture flows into a conical flask into which the funnel with the planted donor plants is inserted. The liquid (water with root secretions) is then collected from the conical flask, the water is evaporated in a vacuum evaporator, and a very small volume of root secretions remains. The whole process can take 2 to 3 weeks. We have proposed an optimized method for collecting root exudates of plants using the developed device, whose technical characteristics are shown in Fig. 1. To obtain root secretions, our installation uses an absorbent that captures particles of the root secretions, which increases the quantitative volume of collected root exudates. The root secretions produced by allelopathically active plants enter the liquid circulating inside the device and, passing through the compartment with the adsorbent, are concentrated in it. After at least 7 days of collection, the compartment with the adsorbent is removed from the device and the root secretions are washed out of it with a solvent. The resulting extract of root secretions is used to germinate the seeds of experimental plants. If necessary, the solution of root secretions is evaporated in a vacuum evaporator to increase its concentration. The root length, according to the standard method when working with seedlings, is measured once, on the seventh day from the beginning of germination. The experiment was carried out in fourfold repetition. Industry today produces a large number of specialized absorbents. One of the most convenient for collecting root secretions is the absorbent Amberlite XAD 4. However, despite all its advantages, this absorbent is relatively expensive. To reduce the cost of collecting root secretions, it is proposed to use activated carbon as the adsorbent.
Activated carbon is a substance with a highly developed porous structure, which is obtained from various carbon–containing materials of organic origin, such as charcoal, coal coke, petroleum coke, coconut shell, walnut, apricot seeds, olives and other fruit crops. [8, 9]. Activated carbon (carbolene) made from coconut shell is considered to be the best in terms of cleaning quality and service life, and due to its high strength it can be regenerated many times, which is important during the cyclic operation of our device. From the chemical point of view, activated carbon is one of the forms of carbon with an imperfect structure, containing practically no impurities. Activated carbon is 87–97% carbon by weight, and may also contain hydrogen, oxygen, nitrogen, sulfur and other substances. The presence of chemically bound oxygen in the structure of activated carbons, forming surface chemical compounds of a basic or acidic nature, significantly affects their adsorption properties. Activated carbon has a huge number of pores and, therefore, has a very large surface, as a result of which it has a high absorption (1 g of activated carbon, depending on the manufacturing technology, has a surface of 500 to 1500 m2 ). Macro-, meso- and micropores are distinguished in activated carbons. Depending on the size of the molecules to be retained on the surface of the carbon, carbon with different pore size ratios can be produced. The pores in the active carbon are classified according to their linear dimensions - X (half-width - for a slit-shaped pore model, radius - for cylindrical or spherical): X < = 0.6–0.7 nm - micropores; 0.6–0.7 < X < 1.5–1.6 nm - supermicropores; 1.5–1.6 < X < 100–200 nm mesopores; X > 100–200 nm - macropores. Adsorption in micropores (specific volume 0.2–0.6 cm3 /g and 800–1000 m2 /g), commensurate in size with the adsorbed molecules, is characterized mainly by the mechanism of volumetric filling. Similarly, adsorption also occurs in supermicropores (specific volume 0.15–0.2 cm3 /g) - intermediate regions between micropores and mesopores. Intermolecular attraction exists in the pores of activated carbon, which leads to the emergence of adsorption forces (Van der Waltz forces), these forces cause a reaction similar to the precipitation reaction, in which the adsorbed substances can be removed from water or gas streams. The molecules of the pollutants being removed are retained on the surface of activated carbon by intermolecular Van der Waals forces. Thus, activated carbons remove pollutants from the purified substances (unlike, for example, from discoloration, when molecules of colored impurities are not removed, but chemically turn into colorless molecules), due to the presence of various types of sorption in activated carbon and the possibility of its regeneration, it is a very suitable substance for extraction root exudates. Intermolecular attraction exists in the pores of activated carbon, which leads to the emergence of adsorption forces (Van der Waltz forces), these forces cause a reaction similar to the precipitation reaction, when the adsorbed substances can be removed from water or gas streams. The molecules of the removed pollutants are retained on the surface of activated carbon by intermolecular Van der Waals forces. Thus, activated carbons remove pollutants from the purified substances (unlike, for example, from discoloration,
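The pore-size classification above amounts to a simple threshold rule. The sketch below encodes it in Python for illustration, taking the lower bound of each quoted boundary range; the function name and the single-threshold simplification are ours, not part of the device described here.

```python
def classify_pore(x_nm: float) -> str:
    """Classify an activated-carbon pore by its linear dimension X (nm):
    half-width for a slit-shaped pore, radius for a cylindrical/spherical one.
    Boundaries follow the text, using the lower bound of each quoted range."""
    if x_nm <= 0.6:
        return "micropore"       # adsorption by volumetric filling dominates
    elif x_nm < 1.5:
        return "supermicropore"  # intermediate region between micro- and mesopores
    elif x_nm < 100:
        return "mesopore"
    else:
        return "macropore"

for x in (0.3, 1.0, 50.0, 500.0):
    print(f"X = {x} nm -> {classify_pore(x)}")
```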
3 Results
To achieve the aim of the research, a schematic diagram and a 3D model of a device for collecting root exudates were developed (Fig. 1). The device allows the extraction, concentration and study of the secretions of the root systems of various plants.
Fig. 1. The developed device for collecting root exudates: (a) schematic diagram of the device; (b) 3D model of the device; 1 – container for plants; 2 – adjustable fastening device; 3 – root secretion filter; 4 – fine particle filter; 5 – peristaltic pump; 6 – large particle filter; 7 – compartment with an absorbent; 8 – pump control unit; 9 – support; 10 – filtered water outlet; 11 – power supply
The device consists of the container for plants 1, fixed on the station support 9. The number and size of the plant containers can be changed as needed, depending on the plant species from which root exudates are to be obtained and on the required quantity: the number of containers can be increased, or a larger capacity used. The device uses 1 L containers for the convenience of experimental work. The adjustable fixing device 2 allows the device to be configured for convenient operation. To prevent pollutants of large and small fractions from entering the solution of root secretions, the device is equipped with filters for small 4 and large 6 particles. A constant water current inside the device is ensured by a peristaltic pump 5, whose uniform operation provides a solution flow at the required preset speed; this ensures high-quality isolation of root exudates from the surface of the plant root system and also prevents possible injury to the plants used.
Depending on the type and number of plants, it may be necessary to provide and maintain the required microclimate [10, 11] or to change the flow rate of water through the device. For simplicity and convenience, the device has a pump control unit on which the required rotation speed of the peristaltic pump, and hence the required water flow rate, can be specified digitally. Root exudates are collected in the root secretion filter 3, which has a compartment with adsorbent 7 (Fig. 3). This is the key link of the device, since this part collects and concentrates the root secretions of plants for subsequent work. After the water passes through all the above units of the device, it exits through a special opening 10 - the filtered water outlet. The device has a closed circuit of operation: after use, water is re-supplied to the root system of the plants and the process closes into a cycle. The closed principle of operation determines the device's economy - it is only necessary to ensure that the water level in the device does not fall below the limit values - and it also allows the most complete collection of dissolved root secretions, avoiding significant losses. The device is powered from the electrical mains, so its design provides for a power supply 11. If necessary, the device can be equipped with modern plant lighting sources [12, 13]. Based on the schematic diagram and 3D model, the device was assembled (Fig. 2) and experimental studies were carried out (Fig. 3).
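The pump control unit converts a digitally specified rotation speed into a water flow rate. For a peristaltic pump this relationship is, to first order, linear in the rotation speed; the sketch below illustrates the idea under the assumption of a fixed displaced volume per revolution. The tubing constant used here is hypothetical, chosen for illustration rather than taken from the device's specification.

```python
ML_PER_REV = 0.8  # hypothetical displaced volume per revolution (ml), calibrated per pump head

def flow_rate_ml_per_min(rpm: float) -> float:
    """First-order peristaltic pump model: flow = rotation speed x volume per revolution."""
    return rpm * ML_PER_REV

def rpm_for_flow(target_ml_per_min: float) -> float:
    """Inverse relation: the speed to enter on the control unit for a target flow."""
    return target_ml_per_min / ML_PER_REV

print(flow_rate_ml_per_min(100.0))  # 100 rpm -> 80.0 ml/min with this tubing
print(rpm_for_flow(120.0))          # 120 ml/min -> 150.0 rpm
```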
4 Discussion
The operation of the device is confirmed by experimental data (Fig. 4) showing the effect of root exudates of allelopathically active plants, obtained using the device for collecting and concentrating root secretions, on the growth and development of test plant seedlings. The data show that root exudates of lupine and sunflower, contained in an aqueous solution obtained with the device and used for the pre-sowing treatment [14] of germinating cucumber seeds, had a stimulating effect by the 3rd day of the experiment. The amount of the obtained substances contained in the root system of allelopathically active plants is maximal, as confirmed by the experimental data. The device works according to the following mechanism. The donor plants for root secretions are planted in the plant container, which is filled with a neutral substrate - perlite or expanded clay. The substrate must be pre-washed and disinfected. After planting, the peristaltic pump is turned on, which makes the water inside the device circulate in a closed system. Macro- and microelements necessary for normal plant growth, which will not be sorbed, are added to the water. The water washes the root system of the plants and enters the coarse and fine filters, where various mechanical impurities are removed from it.
Fig. 2. Appearance of the device for collecting and concentrating root secretions of allelopathically active plants
Fig. 3. The compartment with the absorbent - the key link of the device
Then the water with the root secretions dissolved in it reaches the root secretion filter filled with sorbent, where their absorption takes place.
Fig. 4. The effect of root exudates on the length of the root and shoot of cucumber seedlings: (a) lupine root exudates; (b) sunflower root exudates. (Bar charts of root length and shoot length, in cm, for the control versus the variant grown in the root exudates.)
The remaining water is supplied to the root system of the plants for further irrigation, closing the cycle. To obtain a solution of root secretions, the sorbent cartridge must be removed from the root secretion filter and rinsed with the appropriate solvent (depending on the
type of sorbent). Thus, we obtain a solution of plant root exudates with which further work can proceed. Currently, laboratory studies of the root secretions of crops such as oats, lupin, hogweed and other allelopathically active crops are being conducted at the Russian State Agrarian University - Moscow Timiryazev Agricultural Academy. As an upgrade of the device, towards its optimization and automation, it is planned to develop a remote application that outputs the results to a computer or smartphone.
5 Conclusion
A device for optimizing the process of obtaining root secretions of allelopathically active plants has been developed. Compared with analogues, the device has the following advantages: simplicity of design; reliability - any unit of the device can be quickly replaced with a similar one if necessary; high efficiency; automation; low construction cost; and environmental friendliness. According to the research results, a positive effect of root exudates of lupine and sunflower on the growth and development of 10-day cucumber seedlings was revealed: the length of the root system increased by 12.5% (lupin) and 16.6% (sunflower), and a corresponding response of the aboveground part of the seedlings reached 44.4% (lupin) and 56.1% (sunflower). The use of this device opens up the prospect of new studies of plant root exudates and can help in the development of new environmentally friendly herbicides - bioherbicides - and growth stimulants based on plant root secretions.
Acknowledgements. This research was conducted with the support of the Ministry of Science and Higher Education of the Russian Federation in accordance with agreement № 075-15-2022-317, 20 April 2022, and a grant in the form of subsidies from the federal budget of the Russian Federation. The grant was provided for state support for the creation and development of a world-class scientific center: “Agrotechnologies for the Future”.
References
1. Opender, K., Suresh, W.: Comparing impacts of plant extracts and pure allelochemicals and implications for pest control. Perspect. Agric. Vet. Sci. Nutr. Nat. Resour. 4(049), 1–30 (2009)
2. Reiss, A., Fomsgaard, I.S., Mathiassen, S.K., Stuart, R.M., Kudsk, P.: Weed suppression by winter cereals: relative contribution of competition for resources and allelopathy. Chemoecology 28(4–5), 109–121 (2018). https://doi.org/10.1007/s00049-018-0262-8
3. Zhou, L., Fu, Z.S., Chen, G.F., et al.: Research advance in allelopathy effect and mechanism of terrestrial plants in inhibition of Microcystis aeruginosa. Chin. J. Appl. Ecol. 29(5), 1715–1724 (2018). https://doi.org/10.13287/j.1001-9332.201805.038
4. Gerasimov, A.O., Polyak, Yu.M.: Assessment of the effect of salinization on the allelopathic activity of micromycetes in sod-podzolic soil. Agrochemistry 3, 51–59 (2021). https://doi.org/10.31857/S0002188121030078
5. Bertin, C., Yang, X., Weston, L.A.: The role of root exudates and allelochemicals in the rhizosphere. Plant Soil 256, 67–83 (2003)
6. Vorontsova, E.S.: Description of methods of influence associated with allelopathy and allochemical substances in agriculture. Sci. Electron. J. Meridian 6(40), 261–263 (2020)
7. Kondratiev, M.N., Larikova, Yu.S., Demina, O.S., Skorokhodova, A.N.: The role of seed and root exudates in interactions between plants of different species in cenosis. Proc. Timiryazev Agric. Acad. 2, 40–53 (2020). https://doi.org/10.26897/0021-342X-2020-2-40-53
8. Cunningham, W., Berkluff, F.A., Felch, C.L., et al.: Patent No. 2723120 C1 Russian Federation, IPC B01D 61/16, C02F 3/12, C02F 9/14. Systems and methods for cleaning waste streams that make direct contact between activated carbon and the membrane possible: No. 2019104869: application 20.07.2017: publ. 08.06.2020. Applicant Siemens Energy, Inc.
9. Thomson, A.E., Sokolova, T.V., Navosha, Y., et al.: Composite enterosorbent based on peat activated carbon. Nat. Manag. 2, 128–133 (2018)
10. Ignatkin, I., et al.: Developing and testing the air cooling system of a combined climate control unit used in pig farming. Agriculture 13, 334 (2023). https://doi.org/10.3390/agriculture13020334
11. Ignatkin, I.Y., Arkhiptsev, A.V., Stiazhkin, V.I., Mashoshina, E.V.: A method to minimize the intake of exhaust air in a climate control system in livestock premises. In: IOP Conference Series: Earth and Environmental Science, Michurinsk, 12 Apr 2021, p. 012132 (2021). https://doi.org/10.1088/1755-1315/845/1/012132
12. Erokhin, M.N., Skorokhodov, D.M., Skorokhodova, A.N., et al.: Analysis of modern devices for growing plants in urban farming and prospects for its development. Agroengineering 3(103), 24–31 (2021). https://doi.org/10.26897/2687-1149-2021-3-24-31
13. Skorokhodova, A.N., Anisimov, A.A., Skorokhodov, D.M., et al.: Photosynthetic activity of wheat - wheatgrass hybrids and winter wheat under salinization. In: IOP Conference Series: Earth and Environmental Science, Ussurijsk, 20–21 June 2021, p. 022134 (2021). https://doi.org/10.1088/1755-1315/937/2/022134
14. Vasilyev, A.N., Vasilyev, A.A., Dzhanibekov, A.K., Samarin, G.N.: On the development of model for grain seed reaction on pre sowing treatment. In: Vasant, P., Zelinka, I., Weber, G.-W. (eds.) ICO 2019. AISC, vol. 1072, pp. 85–92. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-33585-4_8
Potential Application of Phosphorus-Containing Micronutrient Complexates in Hydroponic System Nutrient Solutions
E. A. Nikulina1, N. V. Tsirulnikova1, N. A. Semenova2(B), M. M. Godyaeva2,3, and S. V. Akimova4
1 National Research Center “Kurchatov Institute”, sq. Academician Kurchatova, 1, 123182 Moscow, Russia
2 Federal Scientific Agroengineering Center VIM, 1st Institutsky passage 5, 109428 Moscow, Russia
[email protected]
3 Department of Radioecology and Ecotoxicology, Faculty of Soil Science, Moscow State University, 1-12, Leninskie Gory, 119991 Moscow, Russia
4 Russian State Agrarian University - Moscow Agricultural Academy named after K.A. Timiryazev, 49, Timiryazevskaya st., Moscow 127550, Russia
Abstract. In this study, for the first time, a nutrient solution for hydroponic plant growing containing chelated forms of 4 essential trace elements (Fe2+, Zn, Cu, Mn) with an organophosphorus ligand, hydroxyethylidene diphosphonic acid (HEDP), was tested on ‘Ivolga’ variety summer wheat and ‘Azart’ variety lettuce. The results were compared with plants grown using pure water and a solution with chelates of the same trace elements with a carboxyl-containing ligand, EDTA. Replacing the micronutrients with forms chelated by the bisphosphonate led to a number of effects. On the one hand, the experimental seedlings showed lower growth and biomass than with the EDTA-chelate nutrient solution (by 36–55%). On the other hand, an increase in root system mass (18–26%) was observed, and the resistance of plants cultivated in the solution with metal bisphosphonates to stress conditions (lack of nutrients) was increased. The absence of bacterial films on surfaces in contact with the solution was observed, too. Thus, phosphone-containing chelate components have great potential for application in hydroponic growing systems and require further extensive and systematic study in various aspects. Keywords: Nutrient solution · Organophosphorus ligand · Wheat · Phosphone-containing chelate · Micronutrients · Stress conditions
1 Introduction
Due to the global trend of cropland reduction and the complexity of modern natural resource management, there is an urgent need for the development and further improvement of sustainable cultivation systems, which include hydroponic methods [1].
Although research in the field of hydroponics has been conducted for more than 70 years, new methods and approaches aimed at obtaining maximum yield and product quality, based on new scientific knowledge, have recently been introduced into practice [2]. Optimization of the crop nutrition regime in hydroponic conditions remains one of the key factors of intensification. At the same time, it becomes relevant not only to determine the optimal amounts of the necessary nutrients for specific crops, but also to search for formulations that can additionally activate the internal reserves of plants and thereby affect the target indicators of cultivation [3, 4]. In this context, the use of biostimulating components in nutrient solutions is undoubtedly a progressive step. However, in hydroponic systems, in comparison with traditional methods of agriculture, the use of biostimulants has been little studied for a number of reasons (low buffering of the aquatic environment compared to soil, high concentrations of nutrients in the compositions, etc.) [5]. In the course of our research, the use of components containing biologically active phosphonic groups in nutrient solutions is considered for the first time. It is known from previous studies that phosphonates are not metabolized by plants into phosphates and thus cannot enhance growth through the mechanism of nutrition [6–8]. But their effect is unusual: they improve the architecture of the root system, increase the level of cis-zeatin (a cytokinin) and the activity of nitrate reductase, and improve the distribution of nutrients and water as well as resistance to abiotic stress [8, 9]. For example, fully successful results (100%) were obtained for the rhizogenesis of hard-to-root stone-fruit rootstocks, exemplified by VC-13, when all inorganic salts of trace elements in the Murashige and Skoog medium were replaced with the corresponding chelated forms with bisphosphonate as a ligand [10]. In this study, 4 essential trace elements (Fe2+, Zn, Cu, Mn) were introduced in the form of chelated soluble compounds with an organophosphorus ligand - hydroxyethylidene diphosphonic acid (HEDP). In practice, chelated forms of nutrients rely almost exclusively on known carboxyl-containing ligands (EDTA, DTPA, EDDHA, etc.) and are used solely to increase the availability of nutrients and the solubility threshold [3, 5, 11–13]. However, organophosphate complexes are able not only to provide an accessible soluble form of metal ions, but also to exert a pronounced physiological, regulatory effect on plant development. Another important advantage of HEDP is its ability to form a stable complex with the ferrous cation, which is directly available for metabolic reactions and incorporation into molecular structures [14, 15]. In our study, we aimed to investigate the effect of phosphorus-containing chelates of trace elements on the growth and development of wheat and lettuce in hydroponic conditions [8, 10, 16, 17].
2 Materials and Methods
Plants belonging to different classes were chosen as objects of research: wheat (Monocotyledones) and lettuce (Dicotyledones). Summer wheat (Triticum aestivum L.) of the ‘Ivolga’ variety is mid-season, medium-sized, and resistant to lodging. Lettuce (Lactuca sativa L.) of the ‘Azart’ variety is a semi-head, semi-crispy type with a delicate texture of light-green leaves; the variety is intended for cultivation in open ground and film greenhouses. The experimental scheme provided for the following nutrient solution variants: control (water); S2 - nutrient composition with bisphosphonates of trace elements Fe2+, Zn, Cu, Mn; S3 -
nutrient composition based on a formulation produced by GHE (France) with carboxyl-containing chelates (EDTA) of trace elements Fe (DTPA), Zn, Cu, Mn. Samples of the aqueous nutrient solutions were provided by the Laboratory of Technology of Complexones and Complex Compounds of the Research Center “Kurchatov Institute” - IREA (Institute of Chemical Reagents and Highly Pure Chemical Substances). Analytical research of the reagent samples was done using equipment of the NRC “Kurchatov Institute” - IREA Shared Knowledge Center. The plants were grown from grains/seeds (200 pieces) in containers with nutrient solutions, placed in a phytochamber with humidity and temperature control and equipped with LED lighting (Fig. 1). All experiment variants were continuously aerated to maintain the oxygen concentration near the optimum of 15 mg/l [18]. The nutrient solution S2 contained the following concentrations of macro- and microelements (%): (NO3)− - 2.7; (NH4)+ - 0.3; P2O5 - 1.5; K2O - 4.3; CaO - 1.9; MgO - 1; (SO4)2− - 0.9; Fe2+ - 0.03; B - 0.003; Cu2+ - 0.003; Zn2+ - 0.004; Mn2+ - 0.0014; Mo3+ - 0.011. pH values were within 5.5–6.5. The air temperature in the chamber was 24–26 °C, humidity was maintained at 70%, and the photosynthetic photon flux (PPF) was 275 µmol m−2 s−1. Sampling for biometric indicator measurements was performed on the 14th day after germination; 10 plants of each variant were selected for determining the main growth parameters. The fresh mass of the aboveground and root parts was determined by weighing on a Sartorius LA230S laboratory scale (Germany) with an accuracy of 0.0001 g. For dry mass determination, the samples were crushed and dried in an oven at 60–70 °C for 3 h to constant mass. A LI-COR LI-3100 AREA METER photo-planimeter (USA) was used for measuring the leaf surface area. The experiments were repeated three times.
Fig. 1. Plant growing solution variants (control (H2O), S2 and S3 solutions) placed in the phytochamber
Statistical processing of the obtained data was carried out using the methods of variation statistics in the form of a one-factor analysis of variance, with calculation of the arithmetic mean, standard deviation, coefficient of variation and Fisher’s criterion, using Microsoft Office Excel 2007 software.
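The statistical workflow just described (one-factor analysis of variance with Fisher’s criterion, plus means, standard deviations and coefficients of variation) can equally be reproduced outside Excel. A minimal Python sketch using SciPy is shown below; the group measurements are placeholders for illustration, not the study’s raw data.

```python
import numpy as np
from scipy import stats

# Placeholder plant-height measurements for three groups (illustrative, not the raw data)
control = np.array([17.1, 17.9, 16.5, 18.0])
s2 = np.array([24.0, 26.1, 23.5, 26.4])
s3 = np.array([33.8, 35.9, 31.2, 37.5])

# One-factor ANOVA (Fisher's F criterion)
f_stat, p_value = stats.f_oneway(control, s2, s3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Per-group descriptive statistics: mean, standard deviation, coefficient of variation
for name, g in (("control", control), ("S2", s2), ("S3", s3)):
    mean, sd = g.mean(), g.std(ddof=1)
    print(f"{name}: mean = {mean:.2f}, SD = {sd:.2f}, CV = {100 * sd / mean:.1f}%")
```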
3 Results and Discussion
For the first 2 weeks after germination, seedlings can grow using their own seed nutrients [18], which is why we chose water as the control solution. The results showed that the inclusion of trace elements in the nutrient solution in a chelated form based on bisphosphonates (HEDP) improves wheat growth and development indicators to a degree comparable with the well-known GHE nutrient composition, whose trace elements are chelated by the widespread EDTA ligand. Changes in biometric and morphometric indicators of growth and development of wheat seedlings were recorded on the 14th day (Table 1). At the same time, a number of specific effects were observed.

Table 1. Plant growth indicators for ‘Ivolga’ variety wheat on the 14th day of cultivation (July 2021)

Nutrient solution | Plant height, cm | Fresh leaf mass, g | Dry leaf mass, % | Root fresh mass, g | Leaf surface area, cm2
Control (H2O)     | 17.4 ± 1.3       | 0.12 ± 0.01        | 11.9 ± 0.9       | 0.08 ± 0.02        | 3.9 ± 0.5
S2                | 25.0 ± 3.1       | 0.27 ± 0.05        | 8.9 ± 0.9        | 0.21 ± 0.05        | 9.2 ± 1.6
S3                | 34.6 ± 4.2       | 0.42 ± 0.08        | 8.4 ± 0.8        | 0.17 ± 0.05        | 13.8 ± 2.8
Least significant difference (p < 0.05) | 3.6 | 0.07 | 0.99 | 0.04 | 2.2
Compared with the control variant, S2 solution treatment caused a 2.3-fold increase in biomass and a 2.6-fold increase in root mass. On the one hand, seedlings grown using the S3 solution showed the best growth rates: plant height (28% higher), fresh mass accumulation (36% higher) and a larger leaf plate area (33% higher). On the other hand, plants grown on the solution with phosphone-containing chelates of trace elements developed a more powerful root system, forming on average 18% more root mass. The observed effect is associated with the retardant properties of the organophosphorus ligand HEDP, which manifest themselves in a characteristic decrease in seedling growth and a simultaneous strengthening of the root system. The retardant properties of HEDP were reported earlier in the works of Soviet scientists on the cultivation of rye, barley and buckwheat [14]. The effect of the retardant properties of HEDP in modified Murashige-Skoog nutrient media on the rhizogenesis of stone fruit rootstock microseedlings was also reported in a recent study [10]. In general, all plants were distinguished by good development, strong stems and shiny foliage without
chlorosis. Moreover, increased resistance to stressful conditions - lack of nutrition during the final stage of the experiment, when the nutrient content of the solution is significantly reduced by absorption and plant growth - was revealed in plants cultivated on the S2 solution. Samples grown on the S3 solution quickly lost turgor, which led to lodging of the seedlings (Fig. 2). Seedlings grown using microelement phosphonates retained elasticity and a healthy appearance without signs of chlorosis until the very end of the experiment, up to the biomass harvesting.
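The percentages quoted in this discussion can be recovered from Table 1. Computing each difference relative to the larger (S3) value reproduces the quoted 28%, 36% and 33% figures; that basis is inferred from the arithmetic rather than stated explicitly in the text.

```python
def pct_diff(a: float, b: float) -> float:
    """Difference between two values, as a percentage of the larger one."""
    hi, lo = max(a, b), min(a, b)
    return 100.0 * (hi - lo) / hi

# S3 vs S2 values from Table 1
print(round(pct_diff(34.6, 25.0), 1))  # plant height: ~27.7% ("28% higher")
print(round(pct_diff(0.42, 0.27), 1))  # fresh leaf mass: ~35.7% ("36% higher")
print(round(pct_diff(13.8, 9.2), 1))   # leaf surface area: ~33.3% ("33% higher")
print(round(pct_diff(0.21, 0.17), 1))  # root fresh mass: ~19.0% (quoted as "18% more")
```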
Fig. 2. Photos of wheat samples of ‘Ivolga’ variety at the final stage of the experiment (on the 14th day of cultivation) with a decrease in the nutrient content in the hydroponic solution: (a) S1 control solution (H2O); (b) S2 solution - trace elements Fe2+, Zn, Mn, Cu in chelate form with a phosphorus-containing ligand (HEDP); (c) S3 solution - trace elements Fe3+, Zn, Mn, Cu in chelate form with a carboxyl-containing ligand (EDTA).
Similar trends in the accumulation of leaf and root fresh mass, as well as the increase in leaf surface area, were also observed on the 14th day for ‘Azart’ variety lettuce (Table 2, Fig. 3). The accumulation of dry mass with solution S2 compared with solution S3 was somewhat higher in lettuce plants (18.6%) than in wheat plants (5.6%), which is associated with the different growth rates and species features of these crops. Based on the data obtained, we can also assess the phytotoxicity of various ligands, specifically bisphosphonates, for plants grown in hydroponics. Moreover, replacement of trace elements with forms chelated by a phosphorus-containing ligand led to a positive side technological effect - the absence of growth of bacterial films and mucus on surfaces in contact with the solution, an undoubted advantage in operational terms. Together with the increased resistance of the cultivated crops to water and nutrient stress, the absence or very slow growth of bacterial films significantly expands the possibilities for optimizing and intensifying hydroponic crop production.
Table 2. Plant growth indicators for ‘Azart’ variety lettuce on the 14th day of cultivation (August 2021)

Nutrient solution | Fresh leaf mass, g | Dry leaf mass, % | Root fresh mass, g | Leaf surface area, cm2
Control (H2O)     | 0.015 ± 0.003      | 7.5 ± 1.5        | 0.007 ± 0.003      | 0.46 ± 0.09
S2                | 0.219 ± 0.068      | 5.4 ± 0.5        | 0.051 ± 0.024      | 6.98 ± 2.48
S3                | 0.481 ± 0.174      | 4.3 ± 0.7        | 0.037 ± 0.018      | 15.23 ± 6.35
Least significant difference (p < 0.05) | 0.13 | 1.21 | 0.02 | 4.59
Fig. 3. Photos of lettuce plants of ‘Azart’ variety at the final stage of the experiment (on the 14th day of cultivation) with a decrease in the nutrient content in the hydroponic solution: (a) Control solution (H2O); (b) S2 solution - trace elements Fe2+, Zn, Mn, Cu in chelate form with a phosphorus-containing ligand (HEDP); (c) S3 solution - trace elements Fe3+, Zn, Mn, Cu in chelate form with a carboxyl-containing ligand (EDTA).
4 Conclusion
Improving the composition of solutions and their preparation for the cultivation of various crops is a most urgent task for the technological development of protected-ground crop cultivation. Depending on the ligand used, the availability of elements changes, as do the biometric characteristics of the plants. It has been shown that the valence of trace elements and the use of different forms of phosphorus-containing and carboxyl-containing ligands play an important role in the germination, growth and development of plants at different phases of vegetation. Experimental tests have shown that phosphone-containing chelated components have great potential for use in hydroponic growing systems and require further extensive and systematic study in various aspects: from the selection of individual components and their concentrations for the requirements of various crops to water regime optimization and the improvement of engineering and technological equipment parameters. Chemical compounds with phosphone-containing groups, such as HEDP in this case, have a pronounced regulatory effect, which can be used effectively in the practice of hydroponic cultivation of agricultural crops. For example, the use of phosphonate-containing microelement chelates for growing leafy greens, where biomass accumulation is important, is not a suitable decision because of growth inhibition on the first growing
stage during the treatment. But the situation changes for fruit and vegetable crops, such as tomatoes, strawberries, etc. In this case, it is possible to increase productivity, shorten ripening, regulate flowering and ripening cycles, and achieve a more saturated color of fruits. Therefore, further research will focus on growing various fruit-bearing crops, selecting the concentrations of phosphone-containing chelates and determining the phase of their use in nutrient solutions to achieve the maximum effect.
References 1. Orsini, F., Kahane, R., Nono-Womdim, R., Gianquinto, G.: Urban agriculture in the developing world: a review. Agron. Sustain. Dev. 33(4), 695–720 (2013) 2. Kozai, T., Niu, G., Takagaki, M.: Plant Factory: An Indoor Vertical Farming System for Efficient Quality Food Production. Academic Press, San Diego (2015) 3. Jones, J.B.: Hydroponics: A Practical Guide for the Soilless Grower. CRC Press, Boca Raton (2005) 4. Yakhin, O.I., Lubyanov, A.A., Yakhin, I.A., Brown, P.H.: Biostimulants in plant science: a global perspective. Front. Plant Sci. 7, 2049 (2017) 5. Raviv, M., Lieth, J.H., Bar-Tal, A.: Soilless Culture, Theory and Practice, 2nd edn. Academic Press (2019) 6. Rossall, S., Qing, C., Paneri, M., Bennett, M., Swarup, R.: A “growing” role for phosphites in promoting plant growth and development. Acta Hort. 1148, 61–68 (2016) 7. Verreet, J.A.: Biostimulantien – schlummerndes Potenzial? TopAgrar 8, 56–60 (2019) 8. Bityutskii, N.P.: Effects of carboxylic and phosphonic fe-chelates on root and foliar plant nutrition. Russ. J. Plant Physiol. 42, 444–453 (1995) 9. Thao, H.T.B., Yamakawa, T.: Phosphite (phosphorous acid): fungicide, fertilizer or biostimulator? Soil Sci. Plant Nutr. 55(2), 228–234 (2009) 10. Tsirulnikova, N.V., Nukulina, E.A., Akimova, S.V., Kirkach, V.V., Glinushkin, A.P., Podkovyrov I.Yu.: In vitro effect of replacement mineral salts of trace elements with P-containing chelates to improve rooting of cherry rootstock (cv. VC-13). In: All-Russian Conference with International Participation Economic and Phytosanitary Rationale for the Introduction of Feed Plants 2020, IOP Conf. Series: Earth and Environmental Science, vol. 663, p. 012042. IOP Publishing (2021) 11. Marschner, H.: Mineral Nutrition of Higher Plants. Academic Press, London (1986) 12. Savvas, D.: Nutritional management of vegetables and ornamental plants in hydroponics. In: Dris, R., Niskanen, R., Jain, S.M. (eds.) Crop Management and Postharvest Handling of Horticultural Products, Quality Management, vol. 1, pp. 37–87. Science Publishers, Enfield (2001) 13. Sonneveld, C.: Composition of nutrient solution. In: Savvas, D., Passam, H. (eds.) Hydroponic Production of Vegetables and Ornamentals, p. 179. Embryo Publisher, Athens (2002) 14. Pierson, E.E., Clark, R.B.: Ferrous iron determination in plant tissue. J. Plant Nutr. 197, 107–116 (1984) 15. Walker, E.L., Connolly, E.L.: Time to pump iron: iron-deficiency-signaling mechanisms of higher plants. Curr. Opin. Plant Biol. 11(5), 530–535 (2008) 16. Diatlova, N.M., Lavrova, O.Yu., Temkina, V.Y., Kireeva, A.Yu., Seliverstova, I.A., Rudakova, G.Ya., Tsirulnikova, N.V., Dobrikova E.O.: The use of chelating agents in agriculture. Overview of Ser. “Reagents and Highly Purified Substances”. NIITEKHIM, Moscow (1984) 17. Nikulina, E., Akimova, S., Tsirulnikova, N., Kirkach, V.: Screening of different Fe(III) and Fe(II) complexes to enhance shoot multiplication of gooseberry. In: ECOBALTICA “FEB” 2020, IOP Conference Series: Earth and Environmental Science, vol. 578, p. 012015. IOP Publishing (2020)
18. Grishin, A.P., Grishin, A.A., Semenova, N.A., Grishin, V.A., Knyazeva, I.V., Dorokhov, A.S.: The effect of dissolved oxygen on microgreen productivity. In: II International Symposium “Innovations in Life Sciences” (ILS 2020), vol. 30, p. 05002. BIO Web Conf. (2021)
Impact Analysis of Rooftop Solar Photovoltaic Systems in Academic Buildings Pranta Nath Nayan1 , Amir Khabbab Ahammed1 , Abdur Rahman1 , Fatema Tuj Johora1 , Ahmed Wasif Reza1(B) , and Mohammad Shamsul Arefin2,3(B) 1 Department of CSE, East West University, Dhaka, Bangladesh [email protected], [email protected] 2 Department of CSE, Daffodil International University, Dhaka, Bangladesh [email protected] 3 Department of CSE, Chittagong University of Engineering and Technology, Chattogram, Bangladesh
Abstract. Solar energy is a non-depleting and eco-friendly source of renewable energy, generated through the use of solar panels, which convert energy from the sun into electricity. The world needs a constant electricity supply, but generation cannot meet demand, and in our country the situation is even more critical. The government attempts to handle this situation by load shedding, which has a detrimental impact on industrial, commercial, and educational institutions. In this paper, we provide a solution for producing renewable energy through photovoltaic solar panels, which are environmentally friendly. The paper also describes the working process of solar panels, how solar panels meet load demand and reduce electricity bills, and future trends and aspects of solar energy. Keywords: Renewable energy · Solar panel · Photovoltaic system · Green campus
1 Introduction
Energy is the key to the economy: there is a correlation between the growth rate of electricity consumption and the growth rate of GDP. Today, the use of technology is steadily increasing, and smart technology is taking over the world. As energy demand increases, there is a risk of running out of energy, since we depend on limited sources. For this reason, renewable energy has become very important in our time, as the cost of energy has risen rapidly in recent years. Our earth receives a large amount of sunlight (about 1366 W/m2 at the top of the atmosphere). This energy source is limitless to us, and it is free. We therefore use a solar panel system to transform sunlight into energy. The solar cell, the fundamental part of a solar power system, produces energy by absorbing photons emitted by the sun. The primary advantage of solar energy over other conventional
power sources is that it can be produced using even the smallest photovoltaic (PV) solar cells, allowing sunlight to be directly transformed into electricity [1]. Solar panels are considered a renewable energy source because excess energy from a solar panel system can easily be stored in a battery when not used. The solar cell is the most significant component of a solar power system; it captures energy from the sun’s photons and turns it into electricity [2]. As a renewable, CO2-free energy source, it has minimal environmental impact. On a daily basis, we receive a practically infinite amount of energy from the Sun, and using solar panel systems we can generate additional energy from it with relative ease. Any location with direct sunlight can be used to generate energy. Some power systems still rely on fossil fuels [3]; however, as we use fossil fuels every day, the supply is decreasing. Using wind or the Sun to generate power is a smart concept for reducing our reliance on fossil fuels, because the Sun will always shine on the Earth’s surface. After sunlight is converted into energy, an endless amount of sunlight remains for the future; this is what makes solar energy a renewable source. Furthermore, solar energy is green since, unlike other systems, it does not emit greenhouse gases. As the demand for electricity increases, people are investing more in solar panel systems to improve panel performance and save more energy. Using certain methods, it is possible to make PV systems work better: increasing the efficiency of solar cells, using maximum power point tracking (MPPT) control algorithms, and implementing solar tracking systems are the three main ways to get the most energy from the Sun. Solar panel systems can also be made more powerful by using concentrated photovoltaic (CPV) cells and more efficient panel variants, which saves more energy than before [4]. As the price of fossil fuels fluctuates, renewable energy is quickly gaining prominence as an energy source; the most abundant source of energy is solar energy. Furthermore, in this paper we discuss cost-effectiveness analysis and economics, where simulation results are presented and examined from a financial point of view.
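Of the three levers named above, MPPT is the purely algorithmic one. The classic perturb-and-observe (P&O) scheme is sketched below against a toy power-voltage curve; this is a generic illustration of the technique, not code from any cited system, and the curve parameters are invented for the example.

```python
def pv_power(v: float) -> float:
    """Toy PV power-voltage curve with a single maximum power point near 30 V."""
    return max(0.0, 240.0 - 0.5 * (v - 30.0) ** 2)

def perturb_and_observe(v: float = 20.0, step: float = 0.5, iters: int = 100) -> float:
    """P&O MPPT: keep perturbing the operating voltage in whichever direction
    last increased the measured power; reverse when power drops."""
    p_prev, direction = pv_power(v), +1
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:
            direction = -direction  # overshot the maximum: reverse
        p_prev = p
    return v

print(f"P&O settles near {perturb_and_observe():.1f} V (true MPP at 30 V)")
```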
2 Literature Review
In this section, we focus on existing and related work on solar photovoltaic-based power generation. To deepen our understanding of the application and sustainability of solar photovoltaic energy in the pursuit of a greener environment, we investigated a variety of supplementary materials. In the majority of previous publications, the authors have investigated and proposed models based on various parameters. Islam et al. [5] address the potential of solar energy to resolve the current power demand imbalance in the world. They also state that Bangladesh’s geographical location is considered ideal for the utilization of solar energy, and they discuss three types of solar power technology: concentrated solar power (CSP), grid-connected solar PV systems, and solar home systems. Ahmed et al. [6] presented their research on rooftop solar power generation in Bangladesh and its potential, arguing that Bangladesh might become one of the leading proponents of rooftop solar installation. The authors asserted that the rooftop solar potential in urban Bangladesh is very high and suggested implementing solar energy plants on the roofs of industrial and commercial buildings,
residential buildings, etc. They reported that 47.2%, 30.5%, and 4.85% of Bangladesh’s total energy consumption was consumed by the industrial, residential, and commercial sectors, respectively; this can be reduced by implementing solar energy on the roofs of the respective structures. Podder et al. [7] modeled a rooftop photovoltaic system (RPS) for an academic institution in Bangladesh and asserted that their proposed strategy may generate a moderate amount of electricity and reduce the exorbitant price of electricity. The authors built four configurations of 46, 64, 91, and 238 kW solar (PV) systems and investigated their economic, ecological, and sensitivity benefits to determine that the 91 kW setup is the most optimal. Furthermore, they showed that the 91 kW system delivers 97% of the total usage from PV generation, with the grid delivering only 3%; in fact, the 91 kW RPS can export 26% of its surplus electricity to the grid. Hasapis et al. [8] installed large-scale solar electricity production (PV) systems on university campuses in an effort to achieve energy independence. They created a model that first calculates power usage with a real-time metering system and then collects solar irradiation data to evaluate solar potential. If the prospect is high, a PV system design is proposed based on the identification of suitable regions; researchers then select the right technology for the electrical design, such as the photovoltaic module, inverter, mounting structure, and layout. Furthermore, they demonstrated that a 2 MWp on-grid solar power plant can supply 1899 MWh of yearly power, which meets 47% of campus consumption, saves 1234 tons of CO2, and has a projected payback period of 4.2 years with an LCOE of only 11 cents per kilowatt hour. Paudel et al. [9] examined the techno-economic and ecological impact of a 1 MW rooftop solar photovoltaic installation on a college campus using PVsyst. The authors anticipate that the plant will generate approximately 1660 MWh of sustainable AC power per year, of which 95% can be sent to the grid. The repayment period for this solar photovoltaic project was predicted to be 8.4 years. Baitule et al. [10] demonstrated the viability of creating an academic campus powered entirely by solar photovoltaics; their study illustrates that a 5 MW solar photovoltaic system can generate 8000 MWh of power annually while reducing the carbon footprint by 173,318 tons annually. Chakraborty et al. [11] presented a solar photovoltaic technical mapping strategy for a green campus initiative. The authors analyzed the performance of nine commercially available solar panels and concluded that “a-Si” is the best in terms of power losses, land requirement, PR, CUF, YF, and cost. Furthermore, they demonstrated that the total energy produced by solar photovoltaic technology is approximately 8 MWh/day, which supplies 40% of the campus’s net daily energy demand. They also claimed that the use of solar photovoltaic energy could reduce electricity costs by approximately 27.4% of the current annual energy bill. Barua et al. [12] investigated the feasibility of a rooftop solar photovoltaic system for an academic campus. The model was carried out in PVsyst with NASA surface meteorological data obtained through the geographic coordinates of the project location, and the PVsyst software was used to simulate the results. The simulation results show that using a solar photovoltaic system to generate
power could save 42 tonnes of CO2. The works in [13–22] show good contributions and guidelines. To construct the proposed methodology, the above-mentioned research publications were surveyed. In this paper, we conduct a feasibility study for implementing solar photovoltaic panels on East West University’s (EWU’s) rooftop.
3 Methods and Materials
At East West University, a huge amount of electricity is consumed on a daily basis, and the amount becomes massive in a monthly estimation. To reduce this huge power consumption, we propose a model in which we produce renewable energy by installing solar panels, using the roof of the university for the setup. It will shift electricity usage to solar energy, which will help the authorities cut the electricity bill. In this section, we provide a detailed explanation and reasoning for our research.
3.1 Data Collection
Figure 1 shows the roof of East West University and the layout of its buildings; the usable areas are listed in Table 1. Due to the presence of an air-conditioning compressor, the area on the eastern side of the rooftop is inaccessible and unusable; as a result, the eastern section of the rooftop is blocked off and cannot be used for any purpose.
3.2 Flow Diagram
First, all data sets necessary for the research were collected and verified by the executive engineer of the Power Grid Company of Bangladesh Ltd. (PGCB). Then we calculate and compare the university’s load demand and the power derived from the implemented solar panels. The steps taken are given in Fig. 2.
3.3 Working Process of Solar Panel
A solar panel is made up of photovoltaic cells that collect sunlight. Solar energy is converted to direct current when the photovoltaic cells are exposed to daylight. Through inverter technology, the DC energy from the PV cells is transformed into AC electricity. Net metering governs how the energy is transferred and used as the electricity source for building appliances. Once solar panels and an inverter are set up, this is how solar energy delivers its benefit.
3.4 Peak Hour for Sun
The sun is not at its strongest for the whole time between sunrise and sunset. Peak sun hours refer to the amount of solar insolation that a specific location receives during the period when the sun is at its most intense.
Fig. 1. EWU rooftop design.
In Fig. 3, an hour is considered a peak sun hour when solar irradiance averages 1000 W of energy per square meter (roughly 10.8 square feet) [23]. Early mornings and late afternoons typically have less than 500 W/m2 of sunlight, whereas under optimum conditions, at midday on a bright sunny day, more than 1000–1100 W/m2 can be received. Although solar panels receive an average of 7 h of daylight each day, the average number of peak solar hours is often between 5 and 6, varying from region to region. Solar radiation peaks at solar noon, when the sun is at its highest point in the sky.
Table 1. Measuring the usable area.

Rooftop index | Position   | Area (m2) | Usable percent (%) | Usable area (m2)
1             | South-West | 377       | 70                 | 264
2             | West       | 520       | 80                 | 416
3             | West       | 351       | 70                 | 246
4             | North-West | 550       | 70                 | 385
5             | North      | 882       | 30                 | 265
6             | North-East | 570       | 90                 | 513
7             | East       | 288       | 0                  | 0
8             | South-East | 240       | 90                 | 216
9             | South      | 580       | 30                 | 174
Total usable area                                          | 2479
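Table 1’s total follows from a row-wise product of gross area and usable fraction; the short check below reproduces the 2479 m2 figure and converts it to acres for the land-requirement estimate used later (the acre conversion is a standard unit definition, and the 2.5 acres/MW rule comes from the text).

```python
# (gross area m^2, usable %) per rooftop section, from Table 1
rooftops = [(377, 70), (520, 80), (351, 70), (550, 70), (882, 30),
            (570, 90), (288, 0), (240, 90), (580, 30)]

usable_m2 = sum(round(area * pct / 100) for area, pct in rooftops)
usable_acres = usable_m2 / 4046.86   # 1 acre = 4046.86 m^2
capacity_mw = usable_acres / 2.5     # text's rule of thumb: ~2.5 acres per MW

print(usable_m2)                     # 2479 m^2, matching the table total
print(round(usable_acres, 2))        # 0.61 acres
print(round(capacity_mw, 3))         # ~0.245 MW, i.e. the ~244 kW used in the text
```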
In our work field, East West University is situated in Dhaka, in the Bangladesh region; the peak sun hours for this region are given below in Fig. 4. The average number of peak sunlight hours over the year is 7 h. The solar path of the university area is presented in Fig. 5, imported from the Global Solar Atlas website. The peak hours of sunlight on a daily basis are presented in Table 2. The average produced energy can be calculated using Eq. (1):

Average Produced Energy = (Max Peak Hours % × Max Peak Hours of Sunlight × Produced Solar Energy) / Average Peak Hours of Sunlight    (1)
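Substituting the values used later in the paper (80% irradiance share at maximum peak, 5 maximum peak hours, 244 kW array output, 7 average peak hours) into Eq. (1) reproduces the ~140 kWh figure:

```python
def avg_produced_energy(max_peak_share: float, max_peak_hours: float,
                        produced_solar_kw: float, avg_peak_hours: float) -> float:
    """Eq. (1): average produced energy (kWh)."""
    return max_peak_share * max_peak_hours * produced_solar_kw / avg_peak_hours

print(round(avg_produced_energy(0.80, 5, 244, 7), 2))  # -> 139.43, i.e. ~140 kWh
```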
3.5 Cost of Solar Setup
The installation of solar panels in Bangladesh is a great step toward mitigating catastrophic issues such as climate change. Given the current power situation in Bangladesh, it is also preferable to produce one’s own electricity. A further advantage of solar panels is that they are inexpensive to install and require little maintenance. Rooftop or exterior solar panels are installed on demand, with standard batteries for use after sundown. A charge controller is included to ensure that the battery is not overloaded and therefore cannot damage itself: as soon as the battery is fully charged, the controller stops sending power from the PV array to the storage device. We use a photovoltaic solar panel in our model. Producing 1 W of energy from the solar panel costs 75 BDT, so producing 244 kW from the solar panel will cost 18,300,000 BDT (Table 3).
Fig. 2. Work process flow diagram.
4 Results and Discussion
In this section, we compare and discuss the results of our proposed model against the university’s cost of load demand, show how beneficial the model is, and analyze its marginal profit. In the initial period the model shows a negative figure, but in the long term it offers a large profit. The total usable roof area of the buildings is 2479 m2 (0.61 acres). Land availability is one of the greatest obstacles to establishing utility-scale renewable energy projects in Bangladesh: wind or solar farms require a substantial amount of land (based on the solar irradiation of Bangladesh, approximately 2.5–4.0 acres of land are needed for 1 MW of solar energy) [24]. Taking 2.5 acres of land per 1 MW, the energy that can be produced using 0.61 acres of land is 0.244 MW, i.e., about 244 kW.
332
P. N. Nayan et al.
Fig. 3. Total solar irradiation over the day.
Fig. 4. Peak hours of sunlight throughout the year.
In our region, the maximum peak sunlight period is 5 h and the average peak sunlight period is 7 h (Table 2). The maximum peak hours provide about 80% of full sunlight. Our proposed solar array will produce 244 kW per hour; from Eq. (1), we get an average produced energy of 139.43, i.e., about 140 kWh. Therefore, our proposed solar panels will produce 140 kWh. Table 4 shows the load demand calculation of our institution. As our model works only in daylight, we do not store any energy in a battery, thereby cutting out the cost of the battery; we therefore compare our solar energy with the off-peak energy consumption. In off-peak hours, the energy consumption is 266.67 units ≈ 267 units.
Fig. 5. Solar path of the university area.
Table 2. Average peak hour in a day.

Month     | Peak hours of sunlight (daily)
January   | 7.1
February  | 8.2
March     | 8.5
April     | 8
May       | 6.8
June      | 7
July      | 4.7
August    | 4.5
September | 5
October   | 7.1
November  | 8
December  | 8.1
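Averaging Table 2’s monthly values confirms the roughly 7 h average peak sun hours used in Eq. (1):

```python
monthly_peak_hours = [7.1, 8.2, 8.5, 8.0, 6.8, 7.0,
                      4.7, 4.5, 5.0, 7.1, 8.0, 8.1]  # Jan-Dec, from Table 2
average = sum(monthly_peak_hours) / len(monthly_peak_hours)
print(round(average, 2))  # -> 6.92, rounded to ~7 h in the text
```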
Table 5 shows the cost of the load demand of our institution. From Tables 4 and 5, we see that the monthly off-peak energy consumption is 192,000 units, the monthly cost of these units is 1,461,120 BDT, and the cost per unit is 7.61 BDT.
Table 3. Approximate setup cost.

Component name  | Total cost (BDT)
Solar panels    | 7,564,000
Inverter        | 5,073,000
Mounting system | 4,002,320
Installation    | 1,660,680
Total cost      | 18,300,000
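Table 3’s total is consistent with the quoted per-watt price; a one-line check:

```python
cost_per_watt_bdt = 75      # quoted installed cost per watt
capacity_w = 244_000        # proposed 244 kW array
print(cost_per_watt_bdt * capacity_w)  # -> 18300000 BDT, matching Table 3
```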
Table 4. Load demand of EWU.

Variation | Monthly (unit) | Daily (unit) | Per hour (unit)
Total     | 264000         | 8800         | 366.67
Off-peak  | 192000         | 6400         | 266.67
Peak      | 72000          | 2400         | 100
Table 5. Electricity cost of the EWU.

Variation | Monthly (BDT) | Per unit (BDT)
Total     | 2,221,440     | 8.41
Off-peak  | 1,461,120     | 7.61
Peak      | 760,320       | 10.56
The university consumes 267 units per hour while our model produces 140 units, i.e., 52.43% of the consumed energy. Thus 766,065 BDT will be saved from the university’s electricity bill per month, and approximately 2 years are needed to recover the setup cost. The lifetime of the PV panels used is 25 years, over which the proposed model will save 17,619,495 BDT (Fig. 6). The overall rooftop area is 4358 m2, of which we are allowed to utilize 2479 m2, or 56.89%, due to other circumstances. If the entire roof could be utilized, more energy could be generated to replace daytime power demand, which would make the installation more sustainable and efficient for the university.
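The coverage, savings and payback arithmetic of this paragraph ties together as follows (inputs from Tables 3–5 and the text; the rounding of the coverage percentage matches the paper’s quoted savings figure):

```python
production_units = 140            # average hourly solar output (from Eq. (1))
offpeak_units = 267               # hourly off-peak demand (Table 4)
coverage = production_units / offpeak_units            # 0.5243 -> 52.43%

monthly_offpeak_bill_bdt = 1_461_120                   # Table 5
monthly_savings_bdt = round(monthly_offpeak_bill_bdt * round(coverage, 4))
payback_months = 18_300_000 / monthly_savings_bdt      # setup cost / monthly savings

print(f"coverage: {coverage:.2%}")                     # 52.43%
print(f"savings: {monthly_savings_bdt} BDT/month")     # 766065, as in the text
print(f"payback: {payback_months:.1f} months (~2 years)")
```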
Fig. 6. Cash flow over 25 years.
5 Conclusion
This article presents a feasibility study of implementing rooftop solar in an academic building in Dhaka, Bangladesh, to become self-sufficient, reduce the huge cost of electricity, and create a greener environment. Bangladesh faces a fuel scarcity for electricity generation and, given the world’s limited fuel supply, the entire planet may soon suffer the same problem. It is high time the world focused on producing renewable energy to overcome this issue. We therefore chose solar photovoltaic cells to generate solar energy as a secondary energy source. The advantages of our proposed system are (1) less dependence on grid supplies, (2) lower carbon emissions, and (3) building a green energy infrastructure.
Acknowledgment. The authors are thankful to Mr. Nur Mohammad, Executive Engineer and System Planner of the Bangladesh Power Grid Company, for providing vital statistics.
References
1. Shaikh, M.: A review paper on electricity generation from solar energy. Int. J. Res. Appl. Sci. Eng. Technol. V(IX), 1884–1889 (2017). https://doi.org/10.22214/ijraset.2017.9272
2. Ronay, K., Dumitru, C.: Technical and economical analysis of a solar power system supplying a residential consumer. Procedia Technol. 22, 829–835 (2016). https://doi.org/10.1016/j.protcy.2016.01.056
3. Choifin, M., Rodli, A., Sari, A., Wahjoedi, T., Aziz, A.: A Study of Renewable Energy and Solar Panel Literature Through Bibliometric Positioning During Three Decades (2021)
4. Keskar Vinaya, N.: Electricity generation using solar power. Int. J. Eng. Res. Technol. (IJERT) 02(02) (February 2013). https://doi.org/10.17577/IJERTV2IS2420
5. Islam, M., Shahir, S., Uddin, T., Saifullah, A.: Current energy scenario and future prospect of renewable energy in Bangladesh. Renew. Sustain. Energy Rev. 39, 1074–1088 (2014). https://doi.org/10.1016/j.rser.2014.07.149
6. Ahmed, S., Ahshan, K., Nur Alam Mondal, M.: Rooftop solar: a sustainable energy option for Bangladesh. IOSR J. Mech. Civ. Eng. (IOSR-JMCE) 17(3), 58–71. https://doi.org/10.9790/1684-1703025871
7. Podder, A., Das, A., Hossain, E., et al.: Integrated modeling and feasibility analysis of a rooftop photovoltaic systems for an academic building in Bangladesh. Int. J. Low-Carbon Technol. 16(4), 1317–1327 (2021). https://doi.org/10.1093/ijlct/ctab056
8. Hasapis, D., Savvakis, N., Tsoutsos, T., Kalaitzakis, K., Psychis, S., Nikolaidis, N.: Design of large scale prosuming in universities: the solar energy vision of the TUC campus. Energy Build. 141, 39–55 (2017). https://doi.org/10.1016/j.enbuild.2017.01.074
9. Paudel, B., Regmi, N., Phuyal, P., et al.: Techno-economic and environmental assessment of utilizing campus building rooftops for solar PV power generation. Int. J. Green Energy 18(14), 1469–1481 (2021). https://doi.org/10.1080/15435075.2021.1904946
10. Baitule, A., Sudhakar, K.: Solar powered green campus: a simulation study. Int. J. Low-Carbon Technol. 12(4), 400–410 (2017). https://doi.org/10.1093/ijlct/ctx011
11. Chakraborty, S., Sadhu, P., Pal, N.: Technical mapping of solar PV for ISM - an approach toward green campus. Energy Sci. Eng. 3(3), 196–206 (2015). https://doi.org/10.1002/ese3.65
12. Barua, S., Prasath, R., Boruah, D.: Rooftop solar photovoltaic system design and assessment for the academic campus using PVsyst software. Int. J. Electron. Electr. Eng. 2017, 76–83 (2017). https://doi.org/10.18178/ijeee.5.1.76-83
13. Yeasmin, S., Afrin, N., Saif, K., Reza, A.W., Arefin, M.S.: Towards building a sustainable system of data center cooling and power management utilizing renewable energy. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_67
14. Liza, M.A., Suny, A., Shahjahan, R.M.B., Reza, A.W., Arefin, M.S.: Minimizing e-waste through improved virtualization. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_97
15. Das, K., Saha, S., Chowdhury, S., Reza, A.W., Paul, S., Arefin, M.S.: A sustainable e-waste management system and recycling trade for Bangladesh in green IT. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_33
16. Rahman, M.A., Asif, S., Hossain, M.S., Alam, T., Reza, A.W., Arefin, M.S.: A sustainable approach to reduce power consumption and harmful effects of cellular base stations. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_66
17. Ahsan, M., Yousuf, M., Rahman, M., Proma, F.I., Reza, A.W., Arefin, M.S.: Designing a sustainable e-waste management framework for Bangladesh. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_104
18. Mukto, M.M., Al Mahmud, M.M., Ahmed, M.A., Haque, I., Reza, A.W., Arefin, M.S.: A sustainable approach between satellite and traditional broadband transmission technologies based on green IT. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_26
19. Meharaj-Ul-Mahmmud, Laskar, M.S., Arafin, M., Molla, M.S., Reza, A.W., Arefin, M.S.: Improved virtualization to reduce e-waste in green computing. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_35
20. Banik, P., Rahat, M.S.A., Rafe, M.A.H., Reza, A.W., Arefin, M.S.: Developing an energy cost calculator for solar. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_75
21. Ahmed, F., Basak, B., Chakraborty, S., Karmokar, T., Reza, A.W., Arefin, M.S.: Sustainable and profitable IT infrastructure of Bangladesh using green IT. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_18
22. Ananna, S.S., Supty, N.S., Shorna, I.J., Reza, A.W., Arefin, M.S.: A policy framework for improving e-waste management in Bangladesh. In: Vasant, P., Weber, G.W., Marmolejo-Saucedo, J.A., Munapo, E., Thomas, J.J. (eds.) Intelligent Computing & Optimization. ICO 2022. Lecture Notes in Networks and Systems, vol. 569. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-19958-5_95
23. Megantoro, P., Syahbani, M., Sukmawan, I., Perkasa, S., Vigneshwaran, P.: Effect of peak sun hour on energy productivity of solar photovoltaic power system. Bull. Electr. Eng. Inform. 11(5), 2442–2449 (2022). https://doi.org/10.11591/eei.v11i5.3962
24. Hernandez, R., Hoffacker, M., Murphy-Mariscal, M., Wu, G., Allen, M.: Solar energy development impacts on land cover change and protected areas. Proc. Natl. Acad. Sci. 112(44), 13579–13584 (2015). https://doi.org/10.1073/pnas.1517656112
A Policy Framework for Cost Effective Production of Electricity Using Renewable Energy

Sazzad Hossen1, Rabeya Islam Dola1, Tohidul Haque Sagar1, Sharmin Islam1, Ahmed Wasif Reza1(B), and Mohammad Shamsul Arefin2(B)

1 Department of Computer Science and Engineering, East West University, Dhaka, Bangladesh
{2019-1-60-096,2019-1-60-156,2018-3-60-078}@std.ewubd.edu, [email protected]
2 Department of Computer Science and Engineering, Chittagong University of Engineering and Technology, Chattogram 4349, Bangladesh
[email protected], [email protected]
Abstract. The amount of energy a country uses can be used to measure its level of development. Bangladesh has struggled to develop sustainably, which calls for a reliable electricity source. Our nation has a wealth of natural resources, but because of its reliance on fossil fuels, it has been going through an energy crisis for some time. The only way to ensure a better overall electricity supply is to use renewable energy sources in combination with current energy sources. Renewable energy sources including solar power, solar photovoltaic (PV) cells, wind energy, and hydroelectricity can be workable alternatives that supply ongoing energy needs while ensuring complete energy security. The power problem, which has turned into a substantial obstacle to future economic expansion, currently poses significant challenges to the nation's energy business. The purpose of this article is to lower energy prices from the consumer's perspective by choosing the renewable energy plant that can best supply a required load for a given length of time. An analytical model of solar, wind, and hydroelectric power plants is first given in order to construct an objective function; cost constraints are added later. Because of the government's fossil-fuel-dependent energy policy and the declining state of the natural environment, finding alternative energy sources has become essential for the country. The conditions are favorable for producing electricity, particularly in winter when PV or diesel hybrid technology is used. The goal is to balance the operational expenses of hybrid renewable energy sources such as solar, wind, and hydroelectric power while meeting consumer demand for electricity. Keywords: Solar photovoltaic · Hydroelectricity · Solar radiation · Renewable energy · Alternative energy · Wind energy · Cost optimization
1 Introduction

“Clean energy” refers to energy generated from renewable, non-polluting resources that emit no emissions, as well as energy saved through energy-saving techniques. Renewable resources are used to generate energy because they can be replenished gradually over
time (Rahman et al. 2022). It comes from a variety of sources, including the sun, wind, rain, tides, waves, and geothermal heat. Solar electricity is the most eco-friendly and renewable source of energy. Solar power is the conversion of sunlight into energy using photovoltaics or mirrors, and it can deliver both electrical and thermal energy (Rahman et al. 2022). When using solar energy, no noise is generated. Conventional energy is generated by burning fossil fuels, which causes air pollution, greenhouse gas emissions, and increased CO2 production (Zhu et al. 2022); it also causes water pollution and noise pollution, while solar power is pollution-free. Solar batteries can store power to be used when it is needed, and the panels can be installed almost anywhere. Renewable energy sources account for just 3.5% of overall energy generation in Bangladesh. Bangladesh's energy sector is primarily based on fossil fuels: natural gas provides 75% of the main commercial energy, and day by day Bangladesh has become increasingly reliant on natural gas; the country is also heavily dependent on oil. The country's total renewable electricity capacity is about 423 megawatts (Tajziehchi et al. 2022). In order to transform Bangladesh into an energy-secure nation with sustainable energy sources (solar, wind, hydro, biomass, etc.), the government has given renewable energy a high priority. Solar power is used more in the villages of Bangladesh than in its cities. More than a quarter of the rural population in Bangladesh still lacks access to power, and after sundown daily tasks like cooking, working, and learning become difficult. Solar home systems bring energy to areas where the traditional grid does not reach: over 4 million homes and 20 million people, roughly one-eighth of the nation's population, are now powered by small-scale solar home systems in rural areas. Bangladesh's use of solar energy is expanding quickly and will eventually supply a larger portion of the nation's energy needs. Bangladeshi garment manufacturers will install solar panels on factory rooftops as part of an effort to "green" the sector; over 100 clothing manufacturers have already been certified as green by the United States. There are many types of solar panels: polycrystalline, monocrystalline, and thin-film. They differ in use, efficiency, and setup cost, and each has its own strengths and weaknesses. In 2022, because of the energy crisis, rooftop solar energy became increasingly popular. Our objective is to use alternative energy sources to green all organizations in Bangladesh; we will be scaling up net-metered rooftop solar for an organization (Barlev et al. 2011).

Bangladesh has experienced a serious electricity problem in recent years. Fossil fuels are being depleted day by day, so Bangladesh should explore other sources of power. Wind could help solve this problem: it is a good source of renewable energy, Bangladesh has a long coastline, and seasonal variations in wind patterns can be seen. Wind turbines in coastal zones should be capable of surviving winds of up to 250 km/h (Emrani et al. 2022). The potential for wind energy has been assessed using previously gathered data at various locations in Bangladesh. A brief description of the viability of installing wind turbines on a large scale in various zones for irrigation and energy production is provided, along with the operating concept and design factors for installing wind turbines (Chu 2011).
In Bangladesh, the Kutubdia Hybrid Power Plant and the Feni Wind Power Plant produce just 3 MW of energy, although the country's wind power potential is estimated at more than 20,000 MW. BPDB has launched numerous projects to produce 1.3 GW of electricity from wind, and we
believe that Bangladesh will enhance its wind power. In the Bay of Bengal there are several small inhabited islands, and Bangladesh is bordered by 724 km of coastline. In coastal locations, the yearly average wind speed is more than 5 m/s at a height of 30 m (Yu et al. 2022). In the northeastern region of Bangladesh the wind velocity is higher than 4.5 m/s, although in other areas it is only around 3.5 m/s. Installations of small wind turbines might aid real-time research on wind energy production in coastal regions (Archer and Jacobson 2005).

Hydropower has the lowest levelized cost of electricity of all fossil and renewable energy sources and is actually less expensive than energy-efficiency alternatives (McKenna et al. 2022). Compared to other significant sources of electrical power that rely on fossil fuels, hydropower is better for the environment (Sen et al. 2022). Hydropower plants do not emit the waste heat and gases common with fossil-fuel-driven installations, which are major contributors to air pollution, global warming, and acid rain. The only hydroelectric plant in the nation, the Karnafuli Hydro Power Station, is situated at Kaptai, about 50 km from the major port city of Chittagong. This plant is one of Bangladesh's largest water resources development projects and was built as part of the Karnafuli Multipurpose Project in 1962. Most regions have easy access to significant amounts of water via rivers and canals, and electricity can be produced from this renewable resource without polluting the environment. Because of the increasing demand for energy, it is critical to forecast the future of hydropower. It would also be possible to plan growth using an energy mix and to manage the development of the required electricity using sustainable small hydropower systems.

We will store the energy from the three systems and use it for an organization. We expect this arrangement to be more cost-efficient than relying on grid electricity alone (Archer and Jacobson 2005), which is why we examine it in this project: is it efficient for an organization or not? These three clean energy sources can supply electricity to an organization when load shedding occurs at various times. The main objective of the project is to make this clean energy considerably cheaper than the amount we pay for electricity monthly or annually (Celikdemir et al. 2017). Hydropower is one of the cleanest forms of renewable energy available to meet the electricity needs of an educational institution.
2 Related Work

In this section, we look at alternative ways to produce electricity, optimize energy costs, and develop renewable energy for an organization, drawing on existing related work. Most of the authors proposed a model and researched how to minimize energy consumption costs and use green technology. Rahman et al. (2022) addressed electrical power facilities that use renewable energy sources (RES). Even though RES are considered environmentally friendly, they have some negative effects on the environment. The authors examined a few RES-based power plants and tallied the results for each plant individually. As it was discovered that inappropriate use of RES could affect the environment, a selection guideline is offered: RES should be carefully chosen and appropriately implemented when used in electrical power plants.
Zhu et al. (2022) analyzed the effects of green tag prices on investments in renewable energy, and additionally examined their effects on carbon emissions. It was determined that the price of green tags had a non-monotonic impact on investments in renewable energy and on carbon emissions. Tajziehchi et al. (2022) discussed a study that looks at the relationship among environmental, business, and community facets in order to determine whether massive hydropower plants will be profitable in the long run. The Environmental Costs Analysis Model (ECAM) for hydropower was described as a renowned, user-friendly variant of the model with in-depth information for the benefit-plan prediction model. Barlev et al. (2011) presented the advancements introduced in CSP technology over the years: reflector and collector design and construction, heat transmission and absorption, power generation, and thermal storage have all improved. Keeping in mind the environmental benefits of reduced fossil fuel consumption, several applications that may be connected with CSP regimes to save (and occasionally create) power have been developed and implemented. Emrani et al. (2022) aimed to enhance the technical and economic competitiveness of a hybrid PV-wind power plant by deploying a large-scale gravity energy storage (GES) system. The study found that the GES system outperformed battery energy storage in terms of its high depth of discharge (DOD), longer lifespan, and superior efficiency; incorporating GES into a hybrid PV-wind power plant can improve performance and cost-effectiveness. Chu (2011) wished to comprehend and compare the fundamental working principles of numerous extensively researched solar technologies in order to select the best solar system for a given geographic area; this also helps reduce long-term switching costs and improve the performance of solar systems. Each technology is analyzed to determine how likely it is to be implemented commercially. Yu et al. (2022) investigated the growing concerns regarding climate change and the requirement to cut greenhouse gas emissions in order to lessen its effects. According to the authors, solar-based renewable energy can be extremely important for lowering CO2 emissions and combating climate change, and they analyze how solar energy might lower CO2 emissions in many industries, including building construction, transportation, and power generation. McKenna et al. (2022) give an overview of the many approaches and models used to calculate the potential for wind energy, as well as the data sources needed for these calculations. The availability and quality of data, the effects of climate change on wind energy resources, and the possible environmental effects of large-scale wind farms are some of the difficulties the authors examine. The research of Sen et al. (2022) indicates that there is a significant chance that small or micro hydropower plants might be established using indigenous technologies, making it possible to electrify a sizable area of the Chattogram Hill Tracts. The goal of Celikdemir et al. (2017) is to assess hydropower potential with regard to the technical and financial feasibility of building medium and large hydropower plants. Similar to many other nations, Turkey's mini and micro hydropower potential is not completely assessed.
This work aims to facilitate economic analysis by developing an
empirical formula for mini and micro hydropower plants, which are becoming more and more significant in this context. de Barbosa et al. (2017) project power networks for South and Central America in 2030 under a 100% renewable energy scenario; the model's objective was to minimize the energy system's yearly total cost. According to the findings of this study, the development of a renewable electricity grid in these areas can be sped up in the near future by present laws governing renewable energy and low-carbon development methods. Zalhaf et al. (2022) created models for a 100 km transmission system line and a high-voltage direct current (HVDC) transmission line. Vartiainen et al. (2022) address the cost of solar photovoltaics (PV); they displayed both the global cumulative PV volume and the levelized cost of hydrogen (LCOH), and also investigate the use of green hydrogen in transportation, industrial processes, and electricity generation. Al-Quraan and Al-Mhairat (2022) proposed several mathematical models to estimate the energy extracted by wind farms; the authors created five turbine models and cost analyses and also assess Jordan's wind energy capacity.

Table 1. Solar comparison table
Type of solar panel: Polycrystalline silicon solar panel (PSSP) | Monocrystalline silicon solar panel (MSSP) | Thin-film solar panel (TFSP)
Cost: High | High | Low
Efficiency: Medium (15–17%) | High (20–27%) | Low (11–15%)
Weight: Heavy | Heavy | Lightweight
Outlook: Unattractive | Attractive | Attractive
Lifetime: Not more than 25–35 years | Not more than 25–40 years | Not more than 20 years
Hardness: Inflexible | Inflexible | Flexible
Material: Polycrystalline silicon | Monocrystalline silicon | Cadmium telluride (CdTe)
Output of a single panel: 300–335 W | 390 W | Depends on material quality
Cost per 1 kW in USD: 275.49–313.11 | 310–400 | Depends on material quality
Usage requirement: More than 25 °C | Low-temperature areas | Light users
Table 1 presents the various types of solar panels and their comparison. MSSP is the most efficient, but it is also the costliest. The efficiency and lifespan of PSSP are lower than those of MSSP, while TFSP is intended for light users. We can see from Table 1
that MSSP is the best option if the organization's area and temperature are both low, while PSSP is the better option when space and temperature are high but the budget is limited.
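To make the selection logic above concrete, the sketch below encodes Table 1 as a small decision helper. It is a toy reading of the table, not a procedure from the paper; the function name and the exact rules are our own illustration. Python is used here and in the examples that follow.

```python
# Toy panel selector distilled from Table 1. The rules are a rough reading of
# the comparison table (cost, efficiency, temperature tolerance), not an
# exhaustive or authoritative decision procedure.
def choose_panel(avg_temp_c: float, space_limited: bool, budget_limited: bool) -> str:
    if budget_limited and avg_temp_c > 25:
        return "PSSP"  # cheaper per kW than MSSP; Table 1 lists it for sites above 25 degrees C
    if space_limited or avg_temp_c <= 25:
        return "MSSP"  # highest efficiency per panel; Table 1 lists it for low-temperature areas
    return "TFSP"      # low cost and weight; Table 1 targets it at light users

print(choose_panel(avg_temp_c=28, space_limited=False, budget_limited=True))  # -> PSSP
```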
3 Methodology

The power system model used in this research is based on the direct optimization of power distribution variables under specified initial conditions. It includes a number of technologies for energy production, storage, and conversion, as well as desalinated water and the production of synthetic natural gas (SNG) via Power-to-Gas (PtG) for industrial use, operable in a flexible manner as needed (Müller and Fichter 2022). To grasp the entire energy system, a fully integrated scenario that also accounts for heat and mobility demand would have to be modeled, although this is outside the purview of this study. A detailed explanation of the applied energy system model, its input data, and the relevant technologies is not provided in the following sections because it has already been covered elsewhere (de Barbosa et al. 2017); only the additional assumptions made for the model in the present investigation are summarized, and further technical and financial assumptions are given in the supporting data for this work (de Barbosa et al. 2017).
Fig. 1. Block diagram of the energy system model
Grid structure and region subdivision are shown in Fig. 1. The South American continent and the Central American nations that link it to North America were examined in this study (Panama, Costa Rica, Nicaragua, Honduras, El Salvador, Guatemala, and
Belize). The superregion has been broken up into 15 subregions: Central America (Panama, Costa Rica, Honduras, El Salvador, Nicaragua, Guatemala, and Belize), Colombia, Venezuela (representing Guyana, French Guiana, Venezuela, and Suriname), Ecuador, Peru, Latin South Central (including Paraguay and Bolivia), South Brazil, São Paulo, Southeast Brazil, North Brazil, Northeast Brazil, Northeast Argentina (including Uruguay), East Argentina, West Argentina, and Chile. Brazil and Argentina, the two countries with the most people and households, were split into five and three sub-regions respectively, each with its own area, population, and access to the public grid. This study considers four concepts for energy generation (Zhou et al. 2022):
• Indigenous energy power, in which all areas are autonomous (without links to the HVDC network) and the required amount of power must be produced locally to meet demand (Liang and Abbasipour 2022).
• An internal high-voltage direct current (HVDC) grid connecting indigenous power systems to the country's main power grid (Zalhaf et al. 2022).
• A vast energy network that links together regional energy systems.
• An integrated scenario in which, using SWRO desalination and synthetic natural gas demand, a power system is modeled for the entire region. In this scenario, nodal sources coupled with PtG technology are employed as interconnection technologies in the power sector to satisfy desalination and synthetic gas demands, enhancing the flexibility of the system.

Figure 1 illustrates the branch and network configuration in both South and Central America; HVDC links for regional power systems are depicted as dotted lines. The HVDC network structure is based on the network configuration of South and Central America (de Barbosa et al. 2017). The model was optimized for the technical and financial conditions of 2030, expressed in 2015 monetary terms. An overnight-build approach, as is common in nuclear power economics, was considered. Financial assumptions for capital costs, operational costs, and the lifetimes of all components are presented in Table A of the supplementary data (S1). In all scenarios the weighted average cost of capital (WACC) is set at 7%; however, for residential PV prosumers the WACC is set at 4% due to lower financial return requirements (Vartiainen et al. 2022). Tables A, B, and C of the supplementary data (S1) contain the technical assumptions for power generation and energy storage technologies, the efficiencies of production and transmission technologies, and the power losses in HVDC transformers and lines. When evaluating the project's electricity model, aggregate prices for residential, commercial, and industrial electricity consumers by region are required for 2030, with the exception of Suriname, Ecuador, Venezuela, Guyana, and French Guiana, whose energy prices are obtained from the original sources. Price assumptions are displayed in Table E of the supplementary data (S1). Electricity prices are resolved by subregion; for example, the same price applies across the Argentina subregions, and likewise across those of Brazil. Because the production and consumption of electricity are not simultaneous, consumers do not consume all of the electricity generated by their solar photovoltaic systems; surplus electricity from prosumers is assumed to be sold to the grid at a transfer price of 2 cents per kWh (Al-Quraan and Al-Mhairat 2022). Consumers are not allowed to sell more energy to the grid than they use.
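Annualizing capital expenditure with the WACC values above is normally done with the standard capital recovery factor. A minimal sketch follows; only the 7% WACC is taken from the scenario, while the capital cost and lifetime figures are assumed placeholders.

```python
# Capital recovery factor (CRF): the standard way energy-system cost models
# convert an upfront capital cost into an equivalent annual cost at a given WACC.
def crf(wacc: float, lifetime_years: int) -> float:
    q = (1.0 + wacc) ** lifetime_years
    return wacc * q / (q - 1.0)

capex_per_kw = 1000.0                        # assumed capital cost per kW (placeholder)
annual_capex = capex_per_kw * crf(0.07, 25)  # 7% WACC from the scenario; 25-year life assumed
print(f"annualized capital cost: {annual_capex:.1f} per kW-year")
```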
A Policy Framework for Cost Effective Production of Electricity
345
Fig. 2. Diagram of the smart integrated renewable energy system
In developing countries like Bangladesh, small communities and institutions such as those in Fig. 2 are typically off-grid. According to the proposed studies (Zalhaf et al. 2022), these villages can be connected by establishing an energy center point and a distribution line shared by a group of organizations. Renewable resources can be integrated with solar energy, oil, and other energy conversion facilities, among those that are available, depending on the local conditions of the territory and the permissions obtainable (Al-Quraan and Al-Mhairat 2022). Solar energy (thermal and photovoltaic), wind energy, and hydraulic energy are all part of the proposed system. It is suggested that solar thermal energy be used only for hot water supply and that the other renewable energy sources be used only for electricity generation (Celikdemir et al. 2017). Figure 2 shows how these resources are used to generate energy.

In the context of renewable energy, Particle Swarm Optimization (PSO) is used to optimize the operation of renewable energy systems such as wind turbines, solar panels, and hydroelectric generators. The objective function in this case maximizes the power output of the renewable energy system while minimizing the cost of operation. PSO works by simulating the behavior of a swarm of particles in a search space: each particle represents a potential solution to the optimization problem, and the swarm collectively searches the solution space by adjusting its positions and velocities according to a set of rules. First, identify the decision variables that affect the objective function; the availability and capacity of renewable energy sources, the location of power plants, the price of fuel, and the price of electricity are a few examples. Then determine the fitness of each particle by computing the value of the objective function at the particle's position in the search space. Using the PSO update rules, modify the particle positions and velocities: each particle's new position combines its present location, its own previous best location, and the best location found by any particle in the swarm. Finally, evaluate the PSO algorithm's output to identify the set of decision variables that maximizes the use of renewable energy sources while minimizing the cost of producing electricity.
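The loop just described can be sketched compactly. The code below is a minimal illustration of the PSO update rules applied to the dispatch idea above: choose how much load to draw from each source so that cost is minimized while demand is met. The capacities, unit costs, demand, penalty weight, and PSO coefficients are all assumed demonstration values, not data or settings from this study.

```python
# Minimal PSO sketch for renewable dispatch. All numeric inputs are assumed.
import numpy as np

rng = np.random.default_rng(0)
CAPACITY = np.array([200.0, 150.0, 100.0])   # max output per source (solar, wind, hydro), kW
UNIT_COST = np.array([4.2, 5.8, 3.1])        # operating cost per kWh, TK (assumed)
DEMAND = 300.0                               # required load, kW (assumed)

def fitness(x: np.ndarray) -> float:
    # Operating cost plus a heavy penalty for any shortfall against demand.
    shortfall = max(DEMAND - x.sum(), 0.0)
    return float(UNIT_COST @ x + 1e3 * shortfall)

n_particles, iters = 30, 200
pos = rng.uniform(0.0, CAPACITY, (n_particles, 3))   # each particle = one dispatch plan
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                            # inertia and acceleration weights
for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, CAPACITY)          # respect source capacity limits
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("dispatch (kW):", gbest.round(1), "cost (TK/h):", round(fitness(gbest), 1))
```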
4 Implementation

Bangladesh experiences vast periods of sunlight with a solar intensity of 4–6.5 kWh/m2/day, yet the majority of this solar energy is not used. The months that receive the most and the least solar energy are March and April, respectively. Solar energy technology comes in a variety of forms, including concentrated solar power, solar home systems, and solar PV. Compared to other solar energy systems, solar home systems provide a number of advantages for rural living in developing countries like Bangladesh. The wind energy sector, another renewable resource, is also developing swiftly (Li et al. 2022). If we install solar in an educational institution, it can be implemented using the following relations:

Total solar installation cost = Solar panel cost × Number of panels (1)

Solar inverter cost with battery = ICB × Number of batteries (2)

Solar production = Production per panel × Number of panels (3)

Because more power is needed for cooling and ventilation during the summer, the amount of electricity used fluctuates depending on the month of the year. The quantity of connected users' equipment in use also affects consumption. How fast the mini-grids reach their maximum capacity is predicted by how rapidly the load is taken on in the first year.

Annual cost = Energy required for PV systems (per year) / Energy output of PV system (per year) (4)
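As a worked illustration of Eqs. (1)–(4), the snippet below plugs in assumed per-panel figures; none of the input numbers come from this paper, only the structure of the formulas does.

```python
# Worked example of Eqs. (1)-(4). All input figures are illustrative assumptions.
PANEL_COST_TK = 14_500          # price of one panel, TK (assumed)
N_PANELS = 20                   # number of panels (assumed)
ICB_TK = 41_000                 # inverter-with-battery cost per battery, TK (assumed)
N_BATTERIES = 5                 # number of batteries (assumed)
OUTPUT_PER_PANEL_KWH = 1.75     # daily output of one panel, kWh (assumed)

total_installation = PANEL_COST_TK * N_PANELS          # Eq. (1)
inverter_battery = ICB_TK * N_BATTERIES                # Eq. (2)
daily_production = OUTPUT_PER_PANEL_KWH * N_PANELS     # Eq. (3)

required_kwh_per_year = 15_000.0                       # assumed institutional demand
produced_kwh_per_year = daily_production * 365
annual_cost_ratio = required_kwh_per_year / produced_kwh_per_year   # Eq. (4)
print(total_installation, inverter_battery, daily_production, round(annual_cost_ratio, 2))
```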
Table 2. Solar installation cost and per-day produced electricity and cost (cost per unit = 8.41 TK).

Inverter and battery cost (TK): 205,000
Installation cost (TK): 290,000
Electricity produced (kWh): 5250
Producing cost (TK): 44,152.50
Table 2 lists the inverter and battery cost, the installation cost, the electricity produced, and the cost of the electricity produced.

The goal of this part of the study is that the cost of wind energy can be determined from the capacity factor and the unit cost price necessary for the construction of a wind power plant, as a consequence of the analysis that was done (Vartiainen et al. 2022). By extending the rule base created using the different methods and approaches, these acquired
values are planned to be utilized to construct a compliance factor for the establishment of the wind power plant. One unit (1 kWh) of wind energy can be generated if the wind speed is 2.3–2.5 m/s; however, a wind turbine generates profitable wind energy if the wind speed is 5–6 m/s.

Turbine-produced electricity = Turbine capacity × Electricity produced per unit of capacity (5)

Monthly energy output = Turbine-produced electricity per day × 30 days (6)

The final issue in using wind turbines (or any other type of energy source) is the capacity factor. The capacity factor of a source shows how much energy it produces in comparison to the maximum amount it is capable of producing.

Capacity factor = Actual output / Maximum possible output (7)
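A small numeric sketch of Eqs. (5)–(7) follows. The turbine rating and the daily yield figure are assumed values, and the per-unit production term in Eq. (5) is interpreted here as daily kWh per kW of rating.

```python
# Worked example of Eqs. (5)-(7) with assumed figures.
TURBINE_CAPACITY_KW = 250.0      # assumed turbine rating
KWH_PER_KW_PER_DAY = 7.2         # assumed site-specific daily yield per kW of rating

daily_kwh = TURBINE_CAPACITY_KW * KWH_PER_KW_PER_DAY   # Eq. (5)
monthly_kwh = daily_kwh * 30                           # Eq. (6)

max_possible_kwh = TURBINE_CAPACITY_KW * 24 * 30       # running flat out all month
capacity_factor = monthly_kwh / max_possible_kwh       # Eq. (7)
print(f"monthly: {monthly_kwh:.0f} kWh, capacity factor: {capacity_factor:.2f}")
```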
Table 3. Wind installation cost and per-day produced electricity and cost.

Turbine cost (TK): 395,500
Installation cost (TK): 300,500
Electricity produced (kWh): 4583
Producing cost (TK): 33,845.83
Using turbines, wind energy is transformed into electricity. Table 3 lists the turbine cost, the installation cost, the electricity produced per day, and the cost of the electricity produced.

In spite of having an abundance of water resources, Bangladesh's ability to construct a sizable hydroelectric power plant is currently restricted: Bangladesh has the lowest hydroelectric output in the whole South Asian region, at 230 MW. For building a hydroelectric plant at an educational institution, the following costs are counted. The capacity factor is the proportion of actual power output to the maximum power output (it is around 0.5 for a micro hydropower plant).

Monthly energy output = Max power output × hours per day × 30 (8)

Monthly cost = Monthly energy output × Cost per unit (9)

Annual power output = Max power output × hours per day × 365 × Capacity factor (10)
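The hydro relations (8)–(10) can be evaluated the same way. In the sketch below, the maximum power output is an assumed figure, while the 0.5 capacity factor comes from the text and the 8.41 TK unit rate from Table 2.

```python
# Worked example of Eqs. (8)-(10) for a micro hydropower plant.
MAX_POWER_KW = 10.0        # assumed maximum power output
UNIT_COST_TK = 8.41        # per-unit cost from Table 2
CAPACITY_FACTOR = 0.5      # micro hydro figure from the text

monthly_energy_kwh = MAX_POWER_KW * 24 * 30                    # Eq. (8)
monthly_cost_tk = monthly_energy_kwh * UNIT_COST_TK            # Eq. (9)
annual_energy_kwh = MAX_POWER_KW * 24 * 365 * CAPACITY_FACTOR  # Eq. (10)
print(monthly_energy_kwh, monthly_cost_tk, annual_energy_kwh)
```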
Water is used to produce electricity via the hydroelectric process, which converts water flow into kinetic energy that drives the turbine's rotor. Hydroelectricity is a naturally renewable source of energy, although Bangladesh's hydroelectric resources are less abundant than those elsewhere in the world. Table 4 shows the hydroelectricity installation cost and the per-day produced electricity and cost: the hydro turbine cost, the electricity produced, and the producing cost (de Barbosa et al. 2017).
Table 4. Hydroelectricity installation cost and per-day produced electricity and cost.

Hydro turbine cost (TK): 325,900
Installation cost (TK): 250,500
Electricity produced (kWh): 2935
Producing cost (TK): 24,557.20
5 Mathematical analysis

The goal of the mathematical optimization model is to combine the many renewable energy sources present in an area while taking into consideration both their strengths and weaknesses (Aboagye et al. 2021). Solar and wind energy are utilized in accordance with their daily availability and are temporarily stored.

Energy system, PV solar:

Is = Ib Fb + Id Fd + Fr (Ib + Id) (11)

where Is = total solar radiation (kWh/m2); Ib, Id = the direct (beam) and diffuse components of solar radiation (kWh/m2); and Fb, Fd, Fr = factors for the beam, diffuse, and reflected parts of solar radiation.

Wind energy:

Pwt = Pw Aw Rw (12)

where Pwt = electric power obtained from a wind turbine (kWh), Pw = power of the wind generator, Aw = total area, and Rw = overall generator efficiency.
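Both models translate directly into code. In the sketch below every input value is a sample figure chosen for demonstration, not a measurement from the study.

```python
# Direct transcription of Eqs. (11) and (12). All inputs are sample values.
def solar_radiation(i_b: float, i_d: float, f_b: float, f_d: float, f_r: float) -> float:
    # Eq. (11): beam, diffuse, and ground-reflected components of radiation
    return i_b * f_b + i_d * f_d + f_r * (i_b + i_d)

def wind_turbine_power(p_w: float, a_w: float, r_w: float) -> float:
    # Eq. (12): generator power x total area x overall generator efficiency
    return p_w * a_w * r_w

print(solar_radiation(i_b=0.55, i_d=0.20, f_b=0.95, f_d=0.90, f_r=0.08))  # kWh/m2
print(wind_turbine_power(p_w=0.35, a_w=12.0, r_w=0.40))                   # kWh
```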
Fig. 3. Monthly produced electricity using renewable energy (solar, wind, and hydro, in kWh)
In Fig. 4, we can see the monthly saving cost as a pie chart (obtained from Fig. 3 and Table 5). The need for energy fluctuates throughout the day in Bangladesh. Peak hours are usually counted from 5 p.m. to 11 p.m., since this is when demand for power is highest; as a result, the price of power is high from 5 to 11 p.m. to persuade users to use less.
A Policy Framework for Cost Effective Production of Electricity
349
Table 5. Total installation cost and monthly produced electricity and cost.

Renewable energy | Total installation cost (TK) | Monthly produced electricity (kWh) | Monthly saving cost (TK)
Solar | 4,350,000 | 157,500 | 1,324,575
Wind | 7,512,500 | 137,490 | 1,015,375
Hydroelectricity | 2,505,000 | 88,050 | 736,716
Fig. 4. Monthly saving cost using renewable energy (Solar: 1,324,575 TK; Wind: 1,015,375 TK; Hydro: 736,716 TK)

Table 6. After collecting data (electricity bill) from East West University.

Common/check meter use | Unit (kWh) | Per unit (TK/kWh) | Amount (TK)
Energy charge (off-peak) | 192,000 | 7.61 | 1,461,120
Energy charge (peak) | 72,000 | 10.56 | 760,320
Total | 264,000 | 8.41 | 2,221,440
After collecting data from East West University (Table 6), we obtained the energy charges for peak hours and off-peak hours. Table 6 also shows the total amount, the total units, and the per-unit cost.
6 Result and Discussion

The effects of RES-based electrical power plants on the environment have been thoroughly reviewed in this work. This analysis takes into account all potential RES-based power sources, including solar, wind, and hydropower (Celikdemir et al. 2017). From Eqs. (13) and (14) we obtain the monthly and annual electricity costs, EMC and EAC:

Electricity monthly cost, EMC = Amount + VAT + Demand charge (13)

Electricity annual cost, EAC = EMC × 12 (14)
Table 7. Comparison of electricity cost for East West University.

East West University (TK) | Renewable energy (RE) (TK)
2,521,512.00 | 3,076,666.00
Table 7 compares the electricity cost for East West University (EWU) with the renewable energy alternative. According to Eq. (15) below, we obtain the cover-up installation cost for EWU.
Fig. 5. Comparison of electricity cost (EWU vs. renewable energy).
In Fig. 5, comparing EMC and RE, we can see that renewable energy provides greater savings overall, even though local electricity costs less at first. The cover-up installation cost for EWU is given by Eq. (15):

Cover-up installation cost for EWU = Total installation cost / Monthly saving cost (15)

From Eq. (15), Eq. (16) then shows how much money is saved annually by using renewable energy:

Annual money saved using renewable energy = EAC − (Monthly saving cost × 12) (16)
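Equations (13)–(16) can be checked against the figures already reported in Tables 5 and 7. In the sketch below, EMC is taken directly from Table 7 because the VAT and demand-charge breakdown inside Eq. (13) is not given; a negative Eq. (16) result simply means the valued renewable output exceeds the annual grid bill.

```python
# Eqs. (13)-(16) evaluated with the paper's own Table 5 and Table 7 figures.
EMC_TK = 2_521_512                   # monthly bill from Table 7, standing in for Eq. (13)
EAC_TK = EMC_TK * 12                 # Eq. (14), annual electricity cost

total_installation_tk = 4_350_000 + 7_512_500 + 2_505_000   # Table 5 column sum
monthly_saving_tk = 1_324_575 + 1_015_375 + 736_716         # Table 5 column sum

cover_up_ratio = total_installation_tk / monthly_saving_tk  # Eq. (15): installation cost in months of saving
annual_saving_tk = EAC_TK - monthly_saving_tk * 12          # Eq. (16); negative = RE value exceeds bill
print(f"EAC: {EAC_TK} TK, Eq.(15): {cover_up_ratio:.2f}, Eq.(16): {annual_saving_tk} TK")
```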
The total clean energy cost will be less than the local electricity cost; that is our expected result. The annual cost of electricity may be lower than the total clean energy cost initially, but over a 5-year horizon the total clean energy investment becomes profitable.
7 Conclusion

Developing nations' ability to electrify themselves is seriously under threat. One of the biggest challenges to achieving 100% energy availability throughout these nations is getting power from the main grid to remote areas. This problem exists in Bangladesh,
a South Asian developing country with low per-capita energy usage. Bangladesh's economic expansion is being held back by the nation's energy problem. Approximately 4% of the population still does not have access to electricity, a problem that cannot be handled without addressing the issue of powering remote areas; people who live in electrified areas are also impacted by load shedding. Renewable energy may greatly aid in sustainably supplying Bangladesh's needs. In order to implement it, Bangladesh's outdated electrical power infrastructure needs to be reorganized, including efforts to ensure energy efficiency within the context of environmental sustainability as well as the development of an electricity market that encourages competition. A reliable, efficient, and responsible energy production, transmission, and distribution system is necessary; these problems compelled electrical power corporations, governmental organizations, and the community to find an appropriate solution. Renewable energy sources affect occupational health and safety across the entire system but cause relatively small average harm to individual workers. As a result of lower energy and labor intensity per unit of energy provided, a typical coal energy cycle has greater systemic effects than almost all of the examined renewable energy sources; yet, for almost half of the renewable energy sources, the average risk to a single employee is still lower. The objective of this paper is to reduce energy costs from the consumers' point of view; we applied the Particle Swarm Optimization (PSO) method to satisfy the objective function.
References

Aboagye, B., Gyamfi, S., Ofosu, E.A., Djordjevic, S.: Status of renewable energy resources for electricity supply in Ghana. Sci. Afr. 11, e00660 (2021)
Al-Quraan, A., Al-Mhairat, B.: Intelligent optimized wind turbine cost analysis for different wind sites in Jordan. Sustainability 14, 3075 (2022). https://doi.org/10.3390/su14053075
Archer, C.L., Jacobson, M.Z.: Evaluation of global wind power. J. Geophys. Res. D: Atmos. 110, 1–20 (2005). https://doi.org/10.1029/2004JD005462
Barlev, D., Vidu, R., Stroeve, P.: Innovation in concentrated solar power. Sol. Energy Mater. Sol. Cells 95, 2703–2725 (2011)
Celikdemir, S., Yildirim, B., Ozdemir, M.T.: Cost analysis of mini hydro power plant using bacterial swarm optimization. Int. J. Energy Smart Grid 2, 64–81 (2017). https://doi.org/10.23884/IJESG.2017.2.2.05
Chu, Y.: Review and Comparison of Different Solar Energy Technologies. Global Energy Network Institute (2011)
de Barbosa, L.S.N.S., Bogdanov, D., Vainikka, P., Breyer, C.: Hydro, wind and solar power as a base for a 100% renewable energy supply for South and Central America. PLoS ONE 12, e0173820 (2017). https://doi.org/10.1371/journal.pone.0173820
Emrani, A., Berrada, A., Bakhouya, M.: Optimal sizing and deployment of gravity energy storage system in hybrid PV-Wind power plant. Renew. Energy 183, 12 (2022). https://doi.org/10.1016/j.renene.2021.10.072
Li, S., Gong, W., Wang, L., Gu, Q.: Multi-objective optimal power flow with stochastic wind and solar power. Appl. Soft Comput. 114, 108045 (2022). https://doi.org/10.1016/j.asoc.2021.108045
Liang, X., Abbasipour, M.: HVDC transmission and its potential application in remote communities: current practice and future trend. In: IEEE Transactions on Industry Applications (2022)
McKenna, R., Pfenninger, S., Heinrichs, H., et al.: High-resolution large-scale onshore wind energy assessments: a review of potential definitions, methodologies and future research needs. Renew. Energy 182, 659–684 (2022)
Müller, M., Fichter, C.: Zeitschrift für Energiewirtschaft 46(1), 21–26 (2022). https://doi.org/10.1007/s12398-021-00314-z
Rahman, A., Farrok, O., Haque, M.M.: Environmental impact of renewable energy source based electrical power plants: solar, wind, hydroelectric, biomass, geothermal, tidal, ocean, and osmotic. Renew. Sustain. Energy Rev. 161, 112279 (2022)
Sen, S.K., al Nafi Khan, A.H., Dutta, S., et al.: Hydropower potentials in Bangladesh in context of current exploitation of energy sources: a comprehensive review. Int. J. Energy Water Resour. 6 (2022). https://doi.org/10.1007/s42108-021-00176-8
Tajziehchi, S., Karbassi, A., Nabi, G., et al.: A cost-benefit analysis of Bakhtiari hydropower dam considering the nexus between energy and water. Energies 15, 871 (2022). https://doi.org/10.3390/en15030871
Vartiainen, E., Breyer, C., Moser, D., et al.: True cost of solar hydrogen. Solar RRL 6 (2022). https://doi.org/10.1002/solr.202100487
Yu, J., Tang, Y.M., Chau, K.Y., et al.: Role of solar-based renewable energy in mitigating CO2 emissions: evidence from quantile-on-quantile estimation. Renew. Energy 182, 216–226 (2022). https://doi.org/10.1016/j.renene.2021.10.002
Zalhaf, A.S., Zhao, E., Han, Y., et al.: Evaluation of the transient overvoltages of HVDC transmission lines caused by lightning strikes. Energies 15, 1452 (2022). https://doi.org/10.3390/en15041452
Zhou, H., Yao, W., Ai, X., et al.: Comprehensive review of commutation failure in HVDC transmission systems. Electr. Power Syst. Res. 205, 107768 (2022)
Zhu, Q., Chen, X., Song, M., et al.: Impacts of renewable electricity standard and renewable energy certificates on renewable energy investments and carbon emissions. J. Environ. Manage. 306, 114495 (2022). https://doi.org/10.1016/j.jenvman.2022.114495
Technology of Forced Ventilation of Livestock Premises Based on Flexible PVC Ducts

Igor M. Dovlatov, Sergey S. Yurochka, Dmitry Y. Pavkin, and Alexandra A. Polikanova(B)

Federal Scientific Agroengineering Center VIM, 1st Institutsky Passage 5, 109428 Moscow, Russia
Abstract. Modern dairy farming has a number of urgent problems. In the most relevant works, the authors suggest using natural ventilation in cowsheds, but an insufficient number of exhaust and supply shafts creates unfavorable microclimate conditions throughout the year. Today in Russia and Belarus, duct-type supply ventilation systems have been put into operation piecemeal and actively continue to be implemented by «RusAgroSystem», «Continental Technologies», etc. The duct circuit was developed in Compass-3D, and the modeling of air movement was carried out in SolidWorks 2020. The authors have developed a functional and structural scheme of forced ventilation in livestock premises based on flexible PVC ducts. Two modes of operation are proposed depending on the time of year (winter and summer). In summer, wide openings located at the bottom of the bag are used, through which air passes. In winter, the upper part of the PVC bag with small holes is used. Keywords: Dairy farming · Microclimate · Simulation modeling · Natural ventilation
1 Introduction

Modern dairy farming has a number of urgent problems. These include the control of optimal conditions for keeping animals [1–4]. Due to the high concentration of livestock on farms, the indoor air quality on most domestic farms, especially those built in the 1980s, exceeds all maximum permissible concentrations (MPC) (established by the results of our own research), especially in winter; in summer, animals often experience heat stress. Indicators exceeding the norms include the content of carbon dioxide, hydrogen sulfide, ammonia, dust, and concentrations of pathogenic microflora in the air. Such farms still house most of the cow population, while there is no widespread technical solution to improve the air quality inside the livestock premises. It is not possible for most farms to build new premises in order to improve air quality, so this work aims to offer a solution for the modernization of microclimate systems which, when implemented, will improve the air quality situation. In addition to reducing the concentrations of the gases already listed, it is also necessary to monitor the following parameters:
temperature, humidity, and air flow velocity: their control allows the correct operating modes of the actuators to be built to normalize the indoor microclimate. The results obtained in the course of many years of research, both our own and that of other domestic researchers, indicate that non-compliance with the above standards reduces the productivity of livestock, which leads to financial losses. In the most relevant works, the authors propose to use natural ventilation in cowsheds [5, 6]. Analysis of the results of the study [7] makes it possible to assert that in cowsheds with loose and tethered housing and a natural ventilation system, changes in microclimate parameters directly depend on the season of the year and the area of the room. An insufficient number of exhaust and supply shafts creates unfavorable microclimate conditions throughout the year. The authors of [8] speak in favor of natural ventilation, citing statistics that about 2 billion kWh of electricity is spent per year on ventilation alone to purify the air in livestock premises from the microorganisms formed in it; additionally, 1.8 billion kWh of electricity, 0.6 million m3 of natural gas, 1.3 million tons of liquid fuel, and 1.7 million tons of solid fuel are spent on heating the premises. On domestic farms in the temperate zone of Russia, natural ventilation often does not work efficiently enough (established on the basis of our own research), so it is necessary to resort to additional forced ventilation. One of the disadvantages of forced ventilation is that a balance must be maintained between the efficiency of the ventilation system, which increases the productivity of animals, and the energy expended to run the actuators.

Let us consider some research results of foreign scientists who describe the influence of microclimate on productivity. Microclimate, as a constantly acting environmental factor, has a great impact on the health and productivity of farm animals: the nature and intensity of heat regulation, gas exchange, and the physiological and other vital functions of the body depend on it. If the microclimate parameters deviate from the established norms, milk yields in cows decrease by 10–20%, live weight gain decreases by 20–30%, losses of young animals increase by 5–40%, productivity decreases by 30–35%, and service life is reduced by 15–20%. The costs of feed and labor per unit of production increase, the cost of repairing technological equipment grows, and the service life of livestock buildings is reduced threefold [9, 10]. Foreign authors [11] propose a solution to the problem of heat stress in dairy cows kept in free stalls using an automatic sprinkler system installed on a network of water pipes in which water is constantly present; the system also involves the inclusion of forced ventilation. Such a system has been tested by several researchers, for example, Nordlund [12], who found that a natural ventilation system supplemented by a duct-type supply ventilation system provides an optimal cooling effect. The study also provides recommendations on the diameter of the pipes, the height of their location above the floor, the ventilation rate, and the optimal distance between the holes.
Scientists [13] have also proved that a properly designed, installed, and timely serviced PPTV (positive-pressure tubular ventilation) system can effectively cool each animal individually. A researcher from the USA, based on scientific work [14], suggests using a duct-type supply ventilation system, since such a system is a PPTV system; it can have a significant positive impact on the reduction of bovine respiratory disease (BRD) in premises for keeping calves. The central part of Russia has weather conditions significantly different from those in Europe, where PVC duct systems are already widely used. At the moment, duct-type supply ventilation systems have been put into operation in Russia and Belarus by the companies «RusAgroSystem» and «Continental Technologies» and are actively continuing to be implemented. Such a system is little known and not very widespread on the territory of the Russian Federation because of its relative novelty, but despite all the shortcomings, the system is promising because of its efficiency and cost. A study of the flexible-duct ventilation solution of «RusAgroSystem» showed that it has disadvantages and lacks the functions that would provide permanent all-season operation.
Fig. 1. Examples of flexible PVC ducts on existing farms in the Russian Federation. A, C: PVC duct installed above the boxes for keeping calves; B: PVC duct installed in the milking parlor.
Figure 1 shows examples of an implemented forced microclimate system based on flexible PVC ducts. The photos in Fig. 1(A, C) show that the air ducts have a single row of vertically downward holes for delivering fresh air and blowing air over the calves. Figure 1(B) shows that the ducts have side openings designed to deliver fresh air into the room. The disadvantage of these solutions is that the ducts are not intended for use in winter, because the point supply of cold air to the boxes for calves would have a negative impact on their health. In winter, considerable thermal energy is required to maintain optimal microclimate parameters. There should always be enough light and fresh air in the cowshed. The optimal indoor temperature for cattle is 10 °C with fluctuations from 4 to 20 °C, and the relative humidity is 40–75%. In some locations of the placement zone, it is permissible to reduce the air temperature by 3 °C. At these temperatures, the animal spends less energy and feed on heating its own body and maintaining a constant body temperature; consequently, most of the feed nutrients are used for the formation of milk or for the gain of live weight in young animals. The cowshed should be periodically ventilated, but drafts should be avoided.
With poor ventilation, the humidity of the air increases rapidly due to the large amount of water vapor released by the cows when breathing. Thus, the development of a technology for operating a forced ventilation system using PVC bags in the central part of Russia can contribute to the popularization and widespread use of this type of ventilation system. The problems described above motivate the development of a forced ventilation technology based on flexible PVC ducts. The technology developed in this work comprises a scheme of a flexible PVC duct, the selection of operating modes using simulation modeling to check the quality of the designed system, the operating modes of the system, and the decision-making algorithm for choosing the operating mode.
2 Materials and Methods

The duct circuit was developed in the solid modeling program Compass-3D. Modeling of air movement was carried out in the SolidWorks 2020 software package. The boundary conditions for the theoretical modeling of indoor air movement in SolidWorks 2020 were set as follows: a cross-section of the building was taken that displays a window acting as a supply channel. Figure 2 shows the corresponding model of a flexible PVC duct, which served as the scheme according to which the simulation modeling was carried out.
Fig. 2. Flexible PVC duct diagram for simulation
Figure 2 shows a diagram of a flexible PVC duct. Under number 1, an industrial axial pressure fan is installed, with a blade rotation speed of 500 rpm; the diameter of the outer sleeve is 0.8 m, the diameter of the central sleeve is 0.1 m, and the direction of rotation is counterclockwise. The established properties of the fan in terms of volume flow and pressure drop are shown in A3. Air flow guides are installed under number 2, which suppress the swirling flow (5) shown in A2 and turn it into a turbulent but directional flow (6). Under number 3 is a confuser with a local resistance coefficient of 0.29 and a pressure loss of 11 Pa. Under number 4 is a metal duct without holes, 6 m long; 7 is a flexible PVC duct with a diameter of 700 mm; 8/V are holes with a diameter of 100 mm for summer mode. The total length of the flexible duct is 27 m. During the calculations, the total air temperature was set to 24 °C, the flow is turbulent, the humidity is 70%, the working medium is air, the ambient pressure is 101,325 Pa, and gravity is enabled.
3 Results and Discussion

According to the requirements of SNIP 2.10.03-84, livestock complexes must be equipped with ventilation systems. According to sanitary standards, the room in which the animals are located should be ventilated all year round, including in the cold season: in winter, air exchange is recommended 4 times a day, and in summer, during particularly hot periods, up to 10 times. Various ventilation systems are used to normalize microclimate parameters such as temperature and the air velocity at the outlet; however, the system based on flexible PVC ducts (the "blowing sleeve") remains the least widespread and well known on the territory of the Russian Federation.

The proposed system (Fig. 3) has four functional units: a housing made of flexible PVC material, a decontaminator, a fan with a limiter, and fasteners. Each of the units is designed to perform certain functions. The housing, which has holes on top and bottom, participates in air exchange and ensures safety during operation. A spray nozzle with a disinfectant is responsible for disinfection and humidification of the air masses. The fan assembly with a limiter participates in air exchange and control of the incoming air. The fasteners ensure the placement of the system in a room for keeping cattle; fastening is carried out through grommets/carabiners fixed on a cable under the ceiling. The system assumes two modes of operation depending on the time of year (winter and summer). In summer, wide openings located at the bottom of the bag are used, through which air passes with additional humidification for partial cooling. In winter, the upper part of the PVC bag with small holes is used: the air is directed upward and mixes with the warmer air of the cowshed as it sinks to the floor, so the incoming flow does not fall below zero degrees even in winter.

Figure 4 shows a PVC duct at the highest possible system characteristics. The simulation was carried out in order to assess the effectiveness of the system. The arrows show the direction of the air, and the color fill shows the speed of air movement, where red is a speed of ~9 m/s, yellow is 6 m/s, green is 4 m/s, and blue is less than 1 m/s. The simulation was carried out at the maximum operating mode of the fan, taking into
Fig. 3. Functional and structural diagram of the developed system based on flexible PVC ducts
Fig. 4. Result of modeling
account the throughput of the PVC duct; the noise level does not exceed 75 dB. Simulation modeling showed that it is impractical to place animals under the first part of the duct (the first 10 m of the duct with holes), since targeted blowing of the animals at a speed of 2 m/s is not achieved there. Animals should be placed in the second and third zones of the duct, but the fan power must then be reduced, since a blowing speed of 6 m/s is excessive for animals. This speed (6 m/s) is acceptable when airing rooms in summer, especially rooms where tethered housing of animals is used. In further studies, increases in the size of the bag will be modeled, both in length and in internal volume, as well as supply fans of different performance.

Based on the information presented in Fig. 5, it was found that, in order to achieve optimal blowing of the animals and reduce the likelihood of heat stress while maintaining productivity, the following operating modes should be established:
Fig. 5. The value of the heat stress index depending on the temperature and relative humidity of the air
– at initial thermal stress, the system operates with an installed capacity of 10–15 thousand m3/hour of air;
– at mild heat stress, 16–20 thousand m3/hour of air;
– at average heat stress, 21–25 thousand m3/hour of air;
– at strong heat stress, 26–30 thousand m3/hour of air;
– at extreme heat stress, 30–36 thousand m3/hour of air.

It is also important to note that the system based on flexible PVC ducts is planned to be used in combination with indoor temperature and humidity sensors, which drive the choice of mode as sketched below.
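As a sketch of that decision algorithm, the fragment below maps sensor readings to the capacity bands listed above. The temperature-humidity index (THI) formula is a commonly used dairy-cattle index, but the THI band edges chosen here are assumed for illustration and would need calibration against Fig. 5.

```python
# Mode selector sketch: temperature/humidity sensors -> heat stress level ->
# fan capacity band from the list above. The THI band edges are assumed values.
def thi(temp_c: float, rel_humidity_pct: float) -> float:
    # Commonly used dairy-cattle temperature-humidity index
    return 0.8 * temp_c + (rel_humidity_pct / 100.0) * (temp_c - 14.4) + 46.4

def fan_capacity_band(index: float) -> str:
    if index < 68:
        return "no heat stress: minimum ventilation"
    if index < 72:
        return "initial stress: 10-15 thousand m3/h"
    if index < 80:
        return "mild stress: 16-20 thousand m3/h"
    if index < 90:
        return "average stress: 21-25 thousand m3/h"
    if index < 98:
        return "strong stress: 26-30 thousand m3/h"
    return "extreme stress: 30-36 thousand m3/h"

print(fan_capacity_band(thi(temp_c=29.0, rel_humidity_pct=70.0)))
```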
4 Conclusions

1. The operating mode of a flexible PVC duct is proposed and described (L = 27 m, D = 0.7 m, fan capacity 36,900 m3/h), where at the maximum power of the system, in the 2nd and 3rd zones, the air velocity at a height of ~0.2 m reaches 6 m/s.
2. The conducted simulation showed that, with a PVC duct length of 27 m, the optimal design is described by the following characteristics: D = 700 mm, PVC duct material, two holes at a distance of 1.2 m from each other, located at an angle of 80°, with a hole diameter of 100 mm. The speed at the inlet to the duct is 12 m/s. The noise level is not more than 75 dB.
3. A functional and structural scheme of the developed system based on flexible PVC ducts has been developed. It is planned to continue research on this issue in the future.
References

1. Dovlatov, I.M., Yuferev, L.Y., Mikaeva, S.A., Mikaeva, A.S., Zheleznikova, O.E.: Development and testing of combined germicidal recirculator. Light Eng. 29(3), 43–49 (2021)
2. Tomasello, N., Valenti, F., Cascone, G.: Development of a CFD model to simulate natural ventilation in a semi-open free-stall barn for dairy cows. Buildings 9(8), 183 (2019)
3. Dovlatov, I.M., Rudzik, E.S.: Improvement of microclimate in agricultural premises due to disinfection of air with ultraviolet radiation. Innovations Agric. 3(28), 47–52 (2018)
4. Tikhomirov, D., Izmailov, A., Lobachevsky, Y., Tikhomirov, A.V.: Energy consumption optimization in agriculture and development perspectives. Int. J. Energy Optim. Eng. 9(4), 1–19 (2020). https://doi.org/10.4018/IJEOE.2020100101
5. Gay, S.W.: Natural Ventilation for Free Stall Dairy Barns (2009)
6. Tikhomirov, D., Izmailov, A., Lobachevsky, Y., Tikhomirov, A.V.: Energy consumption optimization in agriculture and development perspectives. Int. J. Energy Optim. Eng. 9(4), 1–19 (2020). https://doi.org/10.4018/IJEOE.2020100101
7. Martynova, E.N., Yastrebova, E.A.: Features of the microclimate of cowsheds with a natural ventilation system. Vet. Med., Anim. Sci. Biotechnol. (6), 52–56 (2015)
8. Nalivaiko, A.: The system of microclimate regulation on farms and complexes of cattle. In: Scientific and Educational Potential of Youth in Solving Urgent Problems of the XXI Century, vol. 6, pp. 177–180 (2017)
9. Martynova, E.N., Yastrebova, E.A.: The physiological state of cows depending on the microclimate of the premises. Achievements Sci. Technol. Agro-Ind. Complex. 8, 53–56 (2013)
10. Dovlatov, I., Yuferev, L., Pavkin, D.: Efficiency optimization of indoor air disinfection by radiation exposure for poultry breeding. Adv. Intell. Syst. Comput. 1072, 177–189 (2020)
11. D'Emilio, A., Porto, S.M.C., Cascone, G., Bella, M., Gulino, M.: Mitigating heat stress of dairy cows bred in a free-stall barn by sprinkler systems coupled with forced ventilation. J. Agric. Eng. 48(4), 190–195 (2017)
12. Nordlund, K.V., Halbach, C.E.: Calf barn design to optimize health and ease of management. Vet. Clin.: Food Anim. Pract. 35(1), 29–45 (2019)
13. Mondaca, M.R., Choi, C.Y.: An evaluation of simplifying assumptions in dairy cow computational fluid dynamics models. Trans. ASABE 59(6), 1575–1584 (2016)
14. Middleton, G.: What Every Practitioner Should Know About Calf Barn Ventilation
Author Index

A
Ahammed, Amir Khabbab 325
Ahmed, Sabbir 262
Akash, Fardin Rahman 262
Akimova, S. V. 317
Al Fahim, Hafiz 168
Anisimov, Alexander A. 308
Arefin, Mohammad Shamsul 53, 168, 248, 262, 295, 325, 338
Aydın, Yaren 66
Aysia, Debora Anne Yang 192

B
Bansal, Aarti 75
Bekdaş, Gebrail 27, 66, 86
Bhagat, Rahul 97
Bijoy, Md. Hasan Imam 168

C
Chanta, Sunarin 201
Chen, Xu 3
Chohan, Jasgurpeet Singh 216
Chok, Vui Soon 209
Choo, Chee Ming 209

D
Dang, Trung Thanh 237
Das, Soham 104
Daus, Yu. 183, 272
Dayrit, Aries 43
De Luna, Robert 43
Dib, Omar 3
Dola, Rabeya Islam 338
Dovlatov, Igor M. 353
Duo, Yi Xi 225
Dwivedi, Shashank 114

F
Fahim, Syed Ridwan Ahmed 248
Fakir, Imran 248
Falguni, Sabiha Afsana 248

G
Ganesan, Timothy 19
Ghadai, Ranjan Kumar 37, 104
Girish, G. P. 97
Godyaeva, M. M. 317
Guillaume, Marie Laurina Emmanuelle Laurel-Angel 209

H
Hasan, Md. Abid 168
Hossain, Md. Jakir 262
Hossen, Sazzad 338

I
Ishkin, P. 272
Islam, Sharmin 338

J
Jahan, Obaida 295
Jamaluddin, Mohd Faizan 209
Jekrul Islam, Md. 53
Jining, Dong 126
Johora, Fatema Tuj 325

K
Kalita, Kanak 216
Khan, Nameer 104
Klyuev, D. 272
Kumar, Ashish 114
Kumar, Vijay 75
Kuzmenko, A. 272

L
Larikova, Julia 308
Lebed, N. 183
Limprapassorn, Sirawich 282
Liu, Yinhao 3
Loedchayakan, Touchakorn 282

M
Marmolejo-Saucedo, José Antonio 19
Mel’nikov, O. M. 308
Mim, Faria Tabassum 248
Mintri, Sunil 104

N
Natasha, Nazia Tabassum 248
Nayan, Pranta Nath 325
Ng, Lik Yin 209
Nguyen, Huynh Anh Tuyet 237
Nguyen, Hien Than 237
Nigdeli, Sinan Melih 27, 66, 86
Nikulina, E. A. 317

O
Ocak, Ayla 27
Osipov, O. 272

P
Pal, Subham 216
Panchenko, V. 183, 272
Pavkin, Dmitry Y. 353
Phongsirimethi, Nitirut 157
Piamvilai, Nattavit 282
Polikanova, Alexandra A. 353
Pradhan, Satish 37
Pranata, Maria Shappira Joever 192
Preeti, S. H. 97

R
Rahman, Abdur 325
Rani, Sunanda 126
Rattananatthawon, Ongorn 282
Reza, Ahmed Wasif 53, 168, 248, 262, 295, 325, 338
Rodriguez-Aguilar, Roman 19
Rosales, Marife 43
Roy, Manish Kumar 37

S
Sagar, Tohidul Haque 338
Sangsawang, Ornurai 201
Sapkota, Gaurav 104
Sarker, Mohammad Rifat 262
Savary, Kristen 136
Semenova, N. A. 317
Shah, Dhaneshwar 126
Shanmugasundar, G. 216
Sharme, Nadia Hannan 53
Shivakoti, Ishwer 37, 104
Siddiqua, Nahian Nigar 295
Singh, Kshetrimayum Dideshwor 225
Singh, Prabhat Ranjan 126
Singla, Sandeep 75, 114
Sirisumrannukul, Somporn 282
Skorokhodov, D. M. 308
Skorokhodova, A. N. 308
Sokolova, Yu. 272
Sudtachat, Kanchala 157
Sumona, Rehnuma Naher 53

T
Tarasova, Elizaveta 147
Tokarev, K. 183
Tsirulnikova, N. V. 317
Türkoğlu, Hani Kerem 86

V
Vasant, Pandian 19

W
Wattanakitkarn, Tanachot 282
Wiecek, Margaret M. 136

X
Xaba, Siyanda 126

Y
Yeoh, Kenneth Tiong Kim 209
Yurochka, Sergey S. 353

Z
Zaman, Shamsun Nahar 53
Zerin, Nighat 295