Lecture Notes in Mechanical Engineering
B. B. V. L. Deepak M. V. A. Raju Bahubalendruni D. R. K. Parhi B. B. Biswal Editors
Intelligent Manufacturing Systems in Industry 4.0 Select Proceedings of IPDIMS 2022
Lecture Notes in Mechanical Engineering Series Editors Fakher Chaari, National School of Engineers, University of Sfax, Sfax, Tunisia Francesco Gherardini , Dipartimento di Ingegneria “Enzo Ferrari”, Università di Modena e Reggio Emilia, Modena, Italy Vitalii Ivanov, Department of Manufacturing Engineering, Machines and Tools, Sumy State University, Sumy, Ukraine Editorial Board Francisco Cavas-Martínez , Departamento de Estructuras, Construcción y Expresión Gráfica Universidad Politécnica de Cartagena, Cartagena, Murcia, Spain Francesca di Mare, Institute of Energy Technology, Ruhr-Universität Bochum, Bochum, Nordrhein-Westfalen, Germany Mohamed Haddar, National School of Engineers of Sfax (ENIS), Sfax, Tunisia Young W. Kwon, Department of Manufacturing Engineering and Aerospace Engineering, Graduate School of Engineering and Applied Science, Monterey, CA, USA Justyna Trojanowska, Poznan University of Technology, Poznan, Poland Jinyang Xu, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
Lecture Notes in Mechanical Engineering (LNME) publishes the latest developments in Mechanical Engineering—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNME. Volumes published in LNME embrace all aspects, subfields and new challenges of mechanical engineering. To submit a proposal or request further information, please contact the Springer Editor of your location: Europe, USA, Africa: Leontina Di Cecco at [email protected] China: Ella Zhang at [email protected] India: Priya Vyas at [email protected] Rest of Asia, Australia, New Zealand: Swati Meherishi at [email protected]

Topics in the series include:

Engineering Design
Machinery and Machine Elements
Mechanical Structures and Stress Analysis
Automotive Engineering
Engine Technology
Aerospace Technology and Astronautics
Nanotechnology and Microengineering
Control, Robotics, Mechatronics
MEMS
Theoretical and Applied Mechanics
Dynamical Systems, Control
Fluid Mechanics
Engineering Thermodynamics, Heat and Mass Transfer
Manufacturing
Precision Engineering, Instrumentation, Measurement
Materials Engineering
Tribology and Surface Technology
Indexed by SCOPUS and EI Compendex. All books published in the series are submitted for consideration in Web of Science. To submit a proposal for a monograph, please check our Springer Tracts in Mechanical Engineering at https://link.springer.com/bookseries/11693
Editors B. B. V. L. Deepak Department of Industrial Design National Institute of Technology Rourkela Rourkela, Odisha, India D. R. K. Parhi Department of Mechanical Engineering National Institute of Technology Rourkela Rourkela, Odisha, India
M. V. A. Raju Bahubalendruni Department of Mechanical Engineering National Institute of Technology Puducherry Puducherry, India B. B. Biswal National Institute of Technology Meghalaya Shillong, Meghalaya, India
ISSN 2195-4356 ISSN 2195-4364 (electronic) Lecture Notes in Mechanical Engineering ISBN 978-981-99-1664-1 ISBN 978-981-99-1665-8 (eBook) https://doi.org/10.1007/978-981-99-1665-8 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore
Contents
Robotics, Automation and Expert Systems

Evaluation of Road Blocks of Industry 4.0 Adoption in SMEs . . . . 3
Rashmi Prava Das and Tushar Kanta Samal
Automatic Underground Water Pipeline Fault Detection with Control in IoT . . . . 17
M. Raja, N. Muthu Selvi, R. V. Reshnuvi, and R. Varatharaj
Applying Python Programming to the Traditional Methods of Job Sequencing . . . . 29
Nekkala Ganesh, B. Hemanth, and P. H. J. Venkatesh
Stress Detection and Performance Analysis Using IoT-Based Monitoring System . . . . 35
S. Srinivasulu Raju, Jalalu Guntur, T. Niranjan, G. Venkata Sneha, and N. Aleshwari

Development of Smart Vehicles Using Visible Light Communication . . . . 47
Parameswara Reddy Duggi, Sri Naga Chandu Dadi, Ajay Tanguturi, Poorna Veerendra Sambhana, and Fayaz Ahamed Shaik

Mapping and Visualization of Constrained Internal Spaces Using a 2-DoF Robotic System . . . . 57
Pudureddiyur Manivannan Venkata Nagarjun, S. Hirshik Ram, Joseph Winston, N. Mahendra Prabhu, and Joel Jose

An Efficient Distributed Energy and Consumption Method for Ensuring Wireless Sensor Network (WSN) Coverage Using the Firefly Algorithm . . . . 67
K. Uma, C. Ramesh Kumar, and Thirumurugan Shanmugam
A Critical Review of the Usage of Auxiliary Drones in Passenger Aviation Safety Systems . . . . 79
Aditya Sharma, Evans Yaw Amo, Bwalya Mwila, Kamal Kumar, and Ujjal Kalita

Implementation of Underground Cable Fault Detection Using IoT . . . . 87
S. Gomathy, C. Akash Kumar, T. Dhanushya, G. Hariharan, and S. P. Hariharan
Implementing IIoT in Garment Production Line: A Case Study of a Full-sleeve Shirt . . . . 101
Swarna Prahha Jena, Subrat Kumar Pradhan, Mangaldeep Chakraborty, G. Arun Manohar, Surya Bhasakar Rao, and S. Deepak Kumar

Advancements in Manufacturing Processes

Mechanical Performance of Friction Stir Welded AA6063 Plates with Variation in Tool Pin Profile . . . . 113
Atul Suri and D. K. Chaturvedi

Recent Advances in Magnetorheological Damping Systems for Automotive Technology . . . . 123
M. B. Kumbhar, R. G. Desavale, and T. Jagadeesha

Enhancement of Turning by Low Frequency Vibration . . . . 135
Oorja Dorkar and Anurag Jha

Design of Multi-angle Welding Fixture . . . . 145
Ganesh S. Kadam, Bharathiseshan Nagarajan, Siddhesh Sawant, and Varadaraj Mirji

Optimal Disassembly Sequence Generation Using Tool Information Matrix . . . . 155
Gunji Bala Murali, Anuj Jariwala, Samarth Savalia, Vrushabh Kadam, G. S. Mahapatra, and Amruta Rout

Electricity Generation Using a Rooftop Ventilator with Attachment of Vertical Axis Wind Turbine on the Top of It . . . . 165
Sarabjitsingh Bombra, Prem Prakash, Sandeep Das, Richa Patwa, Puneet Kumar, and Rahul Kumar

Welding of Stainless Steel 316 Joined by Using Multipass SAW, MIG, and Hybrid (SAW and MIG) Welding . . . . 177
Salina Sai Sandeep, Sali Renuka, Vavilala Sai Pavan, Nerella Hari, and G. B. Veeresh Kumar
UI/UX/HCI/Ergonomics Considerations

A Study on Assessment of Tourism Experience on Boat Ride of Narmada River for Sighting Marble Rocks of Bhedaghat, Jabalpur . . . . 189
Siddharth Das, Sangeeta Pandit, Avinash Sahu, and Rajat Kamble

Ergonomics Issues Among Last Mile Delivery Rides in Jabalpur, India . . . . 201
Sangeeta Pandit, Puspendu Adhikari, Avinash Sahu, Rajat Kamble, Sai Prakash Bangaru, Harsh Banswal, Nitin Maravi, Dipil Dileep, and Aryan Verma

A Critical Review on Risk Assessment Methods of Musculoskeletal Disorder (MSD) . . . . 211
Venkatachalam Siddhaiyan, R. Naveen Kumar, P. Ramya, Monisha Balasubramani, C. Sakthi, C. Sitheaswaran, V. G. Sandhiya, and G. Sakthivignesh

Canny Click: A Board Game Designed to Stimulate Mindful Online Behavior Among Teenagers . . . . 229
Shubhangi Tripathi, Arundhati Guha Thakurta, and Indresh Kumar Verma

Pathological Lying Behaviour in Indian Children: Game Design as Coping Strategy . . . . 243
Niharika Trikha, Arundhati Guha Thakurta, and Indresh Kumar Verma

Effect of Occupational Exposure to Ergonomic Risk Factors on Musculoskeletal Diseases Among the Construction Workers-A Review . . . . 259
S. Venkatachalam, R. Naveen Kumar, J. Pavadharani, S. K. Pavithra, K. Vishnuvardhan, K. Raja, P. Ramya, B. Vikash Bala, S. Sindhujaa, and S. Tamil Selvan

A Critical Review of Multiway Adjustable Car Seats for Physically Challenged . . . . 273
M. Boopathi, G. Manikandan, R. Dhanush Guru, S. R. Anson, S. Sudhakar, and S. Logesh

Design of a Biomorphic Solar Installation with Improved Aesthetics in the Context of Irrigation . . . . 285
Sidhant Patnaik and Amrita Bhattacharjee

Enhancing the Existing Plant Buying Experience Using a Persuasive Design Strategy . . . . 297
Shalini Bhattacharjee and Anirban Chowdhury
Computer Aided Design, Analysis, Integration and Manufacturing

Development of Hybrid Control Strategy for Dual Channel Electromechanical Stabilizer Bar for Roll Stabilization of Vehicle . . . . 311
Bhooshan Gavhare and P. V. Manivannan

Mechanical Seed Implanter for Sunflower and Groundnut Seeds: Design and Fabrication . . . . 323
B. Jeeva, K. R. Raaj Khishorre, and P. Rahul

Investigative Study of Anterior Cervical Discectomy and Fusion Surgery Using Finite Element Method . . . . 333
Akshansh Sharma and S. K. Parashar

System Design for Earth-to-Earth Commute—Research and Development . . . . 341
Shivam Anand and Debashish Majumder

Design and Development of Mobile Robot Manipulator for Patient Service During Pandemic Situations . . . . 355
Amruta Rout, B. B. V. L. Deepa, N. S. Pranay, Atmuri Venkata Sai Rajkumar, Sri Vardhan, and T. Renin Francy

Structural and Thermal Analysis of a CubeSat . . . . 363
M. R. Aswin, Akshay Pavithran, Yash Mangrole, and Balaji Ravi

CFD Analysis of Droplet Morphology in Electric Field Using Level Set Method . . . . 373
Arkadeep Paul and Shibendu Shekhar Roy

Design and Development of an Autonomous Robot Assistant . . . . 381
Krishna Kant Pandey, Tapan Shah, Sheetal Yadrave, Siddharth Shinde, and Viraj Pagare

Application of Artificial Intelligence and Machine Learning

A Hybrid Machine Learning Model for House Price Prediction . . . . 393
B. Subbulakshmi, M. Nirmala Devi, Sriram, Srimadhi, and M. Arvindhan

Change Detection in Multispectral Remote Sensing Images . . . . 405
Kolli Naga Vidya, Sai Sanjana Parvathaneni, Yamarthi Haritha, and Boggavarapu L. N. Phaneendra Kumar

Studying the Impact of Agent Technology on the Manufacturing of Medium Technology Products . . . . 415
Devesh Rana, Deepak Pathania, Parshant Kaushal, Prabal Sharma, Aman Choudhary, Lakhmi Kumar, and Somesh Kr. Sharma
Change Detection in Multispectral Remote Sensing Images: A Case Study on Polavaram . . . . 435
Singuluri Devi Naga Sai Pranathi, Nara Vineela, Nagubandi Sai Sreya, and Boggavarapu L. N. Phaneendra Kumar

A Study on the Various Machine Learning Techniques Used in Predictions and Forecasting Related to Covid-19 . . . . 447
R. Dhanalakshmi, A. Nivashini, N. Vijayaraghavan, and S. Narasimhan

A Text Mining Approach for Identifying Agroforestry Research Narratives . . . . 455
Parisa Monika, Nakka Pavan Kalyan, Indumathi Bai, and M. Suneetha

Epilepsy Prediction Using Spark . . . . 475
Papasani Pravalika, Shaik Shabeer, Jampana Meenakshi, and Fathimabi Shaik

Application of Machine Learning Algorithms for Order Delivery Delay Prediction in Supply Chain Disruption Management . . . . 491
Arun Thomas and Vinay V. Panicker

Review on Image Steganography Transform Domain Techniques . . . . 501
G. Gnanitha, A. Swetha, G. Sai Teja, D. Sri Vasavi, and B. Lakshmi Sirisha

Plant Leaf Disease Detection and Classification with CNN and Federated Learning Approach . . . . 513
Jangam Ebenezer, Pagadala Ganesh Krishna, Medasani Poojitha, and Ande Vijay Krishna

Real-Time Voice Cloning System Using Machine Learning Algorithms . . . . 525
G. Rajesh Chandra, Venkateswarlu Tata, and D. Anand

Artificial Neural Network (ANN) Approach in Evaluation of Diesel Engine Operated with Diesel and Hydrogen Under Dual Fuel Mode . . . . 535
Shaik Subani and Domakonda Vinay Kumar

Machine Learning Model for Medical Data Classification for Accurate Brain Tumor Cell Detection . . . . 545
Gnana Sri Sai Sujith Navabothu, Himanshu Sakode, Jagathi Gottipati, and Polagani Rama Devi

Brightness Preserving Medical Image Contrast Enhancement Using Entropy-Based Thresholding and Adaptive Gamma Correction with Weighted Distribution . . . . 561
Kurman Sangeeta and Sumitra Kisan
Optimisation and Performance Enhancement of Mechanical Systems

Statistical Study of the Influence of Anthropometric Parameters on the Hand Grip Strength of an Individual . . . . 583
M. Rajesh, H. Adithi, P. Prathik, and Sadhashiv J. Panth

Repeatability Analysis on Multi-fidelity and Surrogate Model Based Multi-objective Optimization Algorithm . . . . 593
Anand Amrit

A New Ranking Approach for Pentagonal Intuitionistic Fuzzy Transportation Problems . . . . 601
Indira Siguluri, N. Ravishankar, and Ch. Uma Swetha

Dynamic Allocation the Best-Fit Resource for the Specific Task in the Environment of Manufacturing Grid . . . . 621
Avijit Bhowmick, Arup Kumar Nandi, and Goutam Sutradhar

Optimal Scheduling of Multi-objective Hydro–Thermal–Wind Using NSGSA Technique . . . . 629
Ch. Syam Sundar and Gummadi Srinivasa Rao

Route Optimization as an Aspect of Humanitarian Logistics: Delineating Existing Literature from 2011 to 2022 . . . . 647
Shashwat Jain, M. L. Meena, Vishwajit Kumar, and Pankaj Kumar Detwal

Parametric Optimization of Rotor-Bearing System with Recent Artificial Rabbits Optimization and Dynamic Arithmetic Optimization Algorithm . . . . 663
Pravajyoti Patra, Debivarati Sarangi, Arati Rath, and Dilip Kumar Bagal

Influence of Various Operating Characteristics on the Biodiesel Preparation from Raw Mesua Ferrea Oil . . . . 673
D. Chandravathi, S. B. Padal, and J. Prakasa Rao

A Comprehensive Review on Acoustical Muffler Used in Aircraft's Auxiliary Power Unit (APU) . . . . 683
Ashish Pradhan, Shaik Zeeshan Ali, Korupolu Lok Chakri, Paleru Ranjan Kumar, and Ujjal Kalita

A Case Study of Bus Bar Heat Transfer Optimization Using Taguchi Technique for Low Tension Application . . . . 699
S. Thirumurugaveerakumar, V. Manivelmuralidaran, and S. Ramanathan
Combustion and Vibration Investigations of a Reactor-Based Dual Fuel CI Engine that Burns Hydrogen and Diesel . . . . 711
Shaik Subani and Domakonda Vinay Kumar

Evolution of Automotive Braking System—A Mini Review . . . . 721
M. S. Sureshkumar, R. Rahul, J. Aishwarya Shree, S. Chandine, and A. Sai Darshan
About the Editors
B. B. V. L. Deepak is currently working at the National Institute of Technology, Rourkela, in the Department of Industrial Design. He received his Master's and Ph.D. degrees from the National Institute of Technology, Rourkela, in 2010 and 2015, respectively. He has 11 years of research and teaching experience in the manufacturing and product design fields. He has produced 3 Ph.D. theses and is supervising 4 Ph.D. scholars. He has published more than 100 papers in various peer-reviewed journals and conferences, along with 1 patent, and is currently handling two sponsored research projects in the field of robotics. He has received several national and international awards, such as the Ganesh Mishra Memorial Award-2019, the IEI Young Engineer Award-2018, and the Early Career Research Award-2017.

M. V. A. Raju Bahubalendruni is currently working as Assistant Professor in the Department of Mechanical Engineering at NIT Puducherry, India. He completed his Master's in Machine Design at Andhra University College of Engineering and then joined HCL Technologies, Bangalore, where he worked on several aircraft programs such as the C27J, Bombardier, Honda Jet, and Bell-407. He completed his Ph.D. degree at the National Institute of Technology, Rourkela. He is an active researcher in the manufacturing and automation domain (assembly, disassembly, industrial robotics, human–robot collaboration, additive manufacturing, topology optimization, advanced cellular structures, and augmented/virtual reality), with more than 11 years of combined academic, industrial, and R&D experience. He has received multiple research grants from the Science and Engineering Research Board (SERB), Department of Science and Technology (DST). He has published close to 80 well-cited articles in reputed national and international journals and has received many best paper and best reviewer awards from several publishers. He was awarded the Gold Medal "Brundaban Sahu Memorial Award" by the Institute of Engineers India (IEI-Odisha) in 2016 for his research contribution and received the national-level (Indian) Young Engineer Award-2017 in the mechanical stream from IEI.
D. R. K. Parhi is working at NIT, Rourkela, as Professor (HAG) and is currently heading the Department of Mechanical Engineering. He received his Ph.D. in the mobile robotics field from Cardiff School of Engineering, UK. He has 26 years of research and teaching experience in the robotics and artificial intelligence fields. He has guided more than 20 Ph.D. theses and published more than 300 papers in various journals and conferences, along with 3 patents. He has also completed, and is currently handling, several sponsored research projects in the field of robotics.

B. B. Biswal is currently acting as Director of the National Institute of Technology, Meghalaya. He is also Professor (HAG) in the Department of Industrial Design of NIT, Rourkela. He has 33 years of research and teaching experience in the FMS, CAD/CAM, and robotics fields. He has guided more than 15 Ph.D. theses and published more than 200 papers in various journals and conferences, along with 3 patents/copyrights. He has international collaborations with Loughborough University and the Slovak University of Technology in Bratislava. He has also completed, and is currently handling, several sponsored research projects in his research field.
Robotics, Automation and Expert Systems
Evaluation of Road Blocks of Industry 4.0 Adoption in SMEs Rashmi Prava Das and Tushar Kanta Samal
Abstract A more complete application of digital technology and smart manufacturing procedures is required to successfully address present concerns and to foresee and prevent future problems. Industry 4.0 has enabled inter-machine communication, adaptation, and operational efficiency, all while lowering costs and increasing production. However, despite the immense promise that this technology holds across a wide range of businesses, a number of hurdles still impede its widespread adoption. For these reasons, this paper focuses on existing manufacturing technologies that are connected to Industry 4.0, highlighting the opportunities and constraints as well as the readiness of industries and small- and medium-sized businesses to adopt these technologies in a timely manner. To date, no publication or research study has conducted a complete and comprehensive examination of these difficulties. With its findings, this research therefore adds substantially to the knowledge about Industry 4.0.

Keywords AI · Big data · Industry 4.0 · IoT · SME
R. P. Das (B) · T. K. Samal
Department of Computer Science and Engineering, CV Raman Global University, Bhubaneswar, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
B. B. V. L. Deepak et al. (eds.), Intelligent Manufacturing Systems in Industry 4.0, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-99-1665-8_1

1 Introduction

Industry 4.0 (I4.0) technology-enabled intelligent manufacturing techniques facilitate the conversion of operations in the manufacturing sector to a smart approach through the adoption of digital technologies such as artificial intelligence (AI), the Internet of things (IoT), big data, and cloud computing [1]. According to forecasts, the industrial sector will benefit from the ongoing transition to full digitization, which will bring remarkable benefits such as greater efficiency and faster production rates. Industry 4.0 strategies for the manufacturing industry position organizations on the cusp of unrelenting windfall gains in the quality and sophistication of the things they produce, increasing their chances
of success. To succeed with Industry 4.0, it is important to automate processes with integrated human–machine interaction and to help build a long-term, sustainable manufacturing ecosystem [2]. Autonomous robotics, the Internet of things, cyber-security, cloud computing, simulation, additive manufacturing (such as 3D printing), virtual and augmented reality, and big data analytics are some of the essential building blocks of Industry 4.0, according to the World Economic Forum. Consequently, industrial transformation trends mirror the direction of scientific and technological developments, which include digitization, an increase in horizontal and vertical value integration, innovative digital business models, and the digitization of products and services through intelligent networking [3]. This technology is being launched at a time when there is a great need to minimize the time required for new product development and customization, and to construct an efficient production system for resource and energy management, all to the benefit of the industry. The only way to meet this need is through process automation and digitalization of systems along the manufacturing process line [4]. In the past few years, as more smart devices have joined the list of Internet of things-enabled devices, the potential to capture and exchange information in real time has grown exponentially, especially with the development of specialized sensors that are revolutionizing the manufacturing industry. This is significantly easier to accomplish in larger organizations, whose readiness indicators for new technology adoption are substantially higher than those of smaller organizations [5]. Small- and medium-scale industries, on the other hand, operate somewhat differently as a result of their limited resources and experience gaps.
As a result, small- and medium-sized enterprises (SMEs) do not demonstrate the same enthusiasm for the implementation of digital technologies as large-scale industries. Because these smaller corporate entities make up a bigger share of all enterprises than major corporations do, they should be given top priority when it comes to digital transformation. Yet SMEs have had disappointing results in converting outdated operational procedures into I4.0-enabled operational practices, despite the fact that governments in various countries provide incentives to encourage the digitalization of SMEs' operational practices. According to a study by Muduli and Barve [6], a person's level of understanding of barriers and of how they interact is very important to the success of any management method that addresses them. This paper discusses the limits of past research, the motivation for producing this study, and the contributions that have been made. No publication or research study has yet provided a comprehensive and integrated treatment of these difficulties of Industry 4.0 and its associated technologies. As a result, this paper discusses the existing manufacturing technologies that are considered part of Industry 4.0, as well as the potential and challenges that come with implementing them in practice. It also discusses how prepared industrial sectors and small enterprises are to take advantage of this new technology.
As a result, this research makes a significant contribution to the literature on Industry 4.0 because it adds something entirely new to the field. The following objectives are being pursued by this investigation:

• To understand how Industry 4.0 affects the operational performance of small- and medium-sized manufacturing industries.
• To explore the obstacles that SMEs face in implementing I4.0 technology.
• To rank these road blocks of Industry 4.0.

The paper is organized in the following manner. The methodology of the literature review and of data collection and analysis is outlined, after which the readiness towards the adoption of these technologies is considered. Furthermore, the opportunities and barriers to Industry 4.0 adoption and the influence of digital technologies (AI, big data, and IoT) on industrial operations and supply chain management are discussed; finally, the implications of the study, the conclusion, and future work are outlined within the study context.
2 Literature Review

It was only in 2011 that the I4.0 standard was introduced, making it feasible to integrate smart technologies into the production sector and so signalling the beginning of the Fourth Industrial Revolution. Machine-to-machine communication (M2M), the Internet of things (IoT), cyber-physical systems (CPSs), artificial intelligence (AI), and big data analytics (BDA) are all examples of technological advancements that have led to the creation of Industry 4.0. Because of these technological breakthroughs, particularly CPS and the Internet, people and machines were merged with gadgets and business activities to form a cohesive unit that permitted autonomy as well as a thriving production system. The result is high-quality output reinforced by efficient resource use and long-term viability, indisputable characteristics of smart manufacturing businesses [7, 8]. Many businesses are striving to deploy the Industrial Internet of Things because they recognize the potential benefits of doing so. Therefore, educators and researchers are becoming increasingly interested in examining the factors that influence the adoption of Industry 4.0 by businesses and other organizations. Tortorella and Fettermann [8] carried out an empirical analysis of manufacturing companies in Brazil to identify the relationship between lean production practices and the application of I4.0; data from 110 businesses showed a positive link between the two variables. Fatorachian and Kazemi [8] developed a theoretical framework to facilitate the investigation of academic research in the I4.0 and smart manufacturing domains.
The researchers conducted this study to gain a full understanding of the execution of I4.0 and to propose a theoretical framework for the operationalization of I4.0 in manufacturing industries in the UK. Kumar et al. [9] conducted an investigation into the impediments to I4.0 adoption in Indian
SMEs engaged in food processing activities. Kumar et al. used a combination of expert opinion and a thorough review of the literature to identify 15 difficulties, which were then evaluated for their potential impact on I4.0 adoption using the DEMATEL technique, which classified each of them as either a cause-group or an effect-group barrier. A study conducted by Shayganmehr et al. [10] looked into the elements that make the deployment of Industry 4.0 in Iranian industries more feasible and successful. The enablers were identified using a combination of literature review and expert opinion and were validated using the fuzzy Delphi approach. Another recent study, by Stentoft et al. [11], looked at the motives and barriers that influence the readiness of small- and medium-sized firms in Denmark to apply I4.0 practices, as well as the implications of these findings. Their research examined data from 308 organizations and statistically corroborated the factors that had previously been shown to be influential in the business environment. This review of prior literature reveals that, with the exception of a few case studies, little research has been conducted on the barriers to I4.0 adoption by SMEs in developing countries.
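The DEMATEL technique mentioned above can be sketched briefly. From a matrix of direct pairwise influence scores, DEMATEL computes a total-relation matrix T = D(I − D)^(-1) that accumulates indirect influences; a barrier with a positive R − C value (influence given minus influence received) falls in the cause group, a negative value in the effect group. The 4×4 influence matrix below is purely illustrative, not data from Kumar et al. [9]:

```python
# DEMATEL sketch: from a direct-influence matrix to cause/effect groups.
# The scores below (0 = no influence .. 3 = strong influence) are hypothetical.
import numpy as np

A = np.array([[0, 3, 2, 1],
              [1, 0, 3, 2],
              [0, 1, 0, 1],
              [2, 2, 1, 0]], dtype=float)

# Normalize by the largest row sum so the series I + D + D^2 + ... converges,
# then compute the total-relation matrix T = D (I - D)^-1.
D = A / A.sum(axis=1).max()
T = D @ np.linalg.inv(np.eye(len(A)) - D)

R = T.sum(axis=1)   # total influence each barrier exerts
C = T.sum(axis=0)   # total influence each barrier receives
relation = R - C    # > 0: cause-group barrier; < 0: effect-group barrier

for i, rel in enumerate(relation):
    group = "cause" if rel > 0 else "effect"
    print(f"RB{i + 1}: {group} group (R - C = {rel:+.2f})")
```

Ranking the barriers by R + C (prominence) then gives the relative importance that such studies report alongside the cause/effect split.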
2.1 Road Blocks (RB) of Industry 4.0 Adoption in SMEs

While numerous manufacturers have welcomed the industrial revolution by incorporating these new digital technologies into their operations, several issues must be resolved for a seamless transition from traditional legacy models to a digitalized one. The road blocks of Industry 4.0 adoption identified from the literature review are summarized in Table 1.
3 Solution Methodology

This research focuses on 12 interrelated road blocks to the adoption of Industry 4.0 by SMEs. Interpretive structural modelling (ISM) has been used in this study because of its ability to map the complicated relationships between the numerous parts involved in a complex situation. The first relationship matrix, known as the structural self-interaction matrix (SSIM), is built using Eq. (1) and is based on the opinions of the respondents [21–23].

Step 1: Preparation of a structural self-interaction matrix (SSIM)

The SSIM is a p × n matrix S as shown in Eq. (1).
Evaluation of Road Blocks of Industry 4.0 Adoption in SMEs
Table 1 Road blocks of Industry 4.0 adoption

RB1 Organizational resistance: Employees' resistance to any change, or the existing work culture of the organization, is often cited as a major road block to the success of any new technology or process [12]

RB2 Broadband access issues: Interconnected machinery and gadgets in industry rely significantly on sensors for data collection, so access to a stable and effective Internet connection becomes vital [13]

RB3 Low level of technology adoption readiness: There is a possibility of disruption when untested, early-stage technology is deployed. Some technologies may lack consistency in terms of standards, privacy safeguards, and data protection, and the growing number of untested gadgets might lead to pandemonium [14]

RB4 Lack of digital policy: While adopting Industry 4.0, businesses must take into account data privacy and artificial-intelligence liability rules, because the current laws are ineffective in ensuring that firms moving data online do so safely and without violating privacy laws [15]

RB5 Implementation cost and ROI uncertainty: Most smart industrial solutions require substantial capital investment in equipment, hardware, sensors, communication modules, storage facilities such as cloud storage, dedicated servers, technical support from manufacturers, and running costs. This affects profitability and the break-even point, a crucial factor before organizations choose to employ these systems [16]

RB6 Component interoperability and complexity: As mentioned earlier, several systems communicate and function concurrently in a hybrid industrial setup, so it can become cumbersome to efficiently integrate systems with different functionalities owing to their independent functional efficiency, variation in communication bandwidths, operating frequencies, etc. [16]

RB7 Job interruptions: Recent technology growth will promote automation, which will affect the functionality of job positions and, as a result, cause labour-market issues and employee dissatisfaction with the direction of their work [17]

RB8 Integration: One key challenge is the implementation of new technology or smart systems combined with current or legacy technology, which sometimes leads to compatibility issues owing to changes in communication protocols, connectivity, etc. Often it is impossible to combine systems without data loss as well as inconsistencies in data security [18]

RB9 Degradation of the old manufacturing model: Consumers' expectations of the end product have risen as disruptive technologies have progressed. As a result, the manufacturing sector is being forced to adapt its present business models to meet the challenge posed by greater consumer demands; companies are thereby pushed towards the new technologies, now and in the future [19]

RB10 Lack of perseverance: The emphasis is on outcomes rather than resolutions, and little patience is shown for the transition [20]

RB11 Skill gap: Implementing new interconnected systems always requires rigorous education of all actors, such as equipment operators, workers, managers, technicians, and even maintenance personnel, which is a costly and time-consuming procedure [14]

RB12 Data security: The application of these digital technologies requires the integration of several systems that all communicate with each other as well as with third-party applications and customers, so data security becomes a major concern, given that data breaches are not uncommon even in companies operating the most advanced systems [16]
Table 2 Guidelines for constructing SSIM for road blocks of Industry 4.0 adoption

Indicative relationship between RBk and RBl       SSIM (k, l) symbol   Input for RM
RBk is led by RBl                                 A                    0
RBk leads RBl                                     V                    1
RBk and RBl are mutually dependent                X                    1
No direct relation between RBk and RBl            O                    0
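The Table 2 rules above can be mechanized. The sketch below converts a symbol matrix of V/A/X/O pairwise judgements into a binary initial reachability matrix; the symmetric fill-in for the (l, k) entry follows the standard ISM convention (V also sets (l, k) = 0, A sets it to 1), which Table 2 leaves implicit. The 3×3 example data are illustrative, not the study's survey responses.

```python
# Map SSIM symbol -> (entry for cell (k, l), entry for cell (l, k)).
SYMBOL_TO_ENTRY = {"V": (1, 0), "A": (0, 1), "X": (1, 1), "O": (0, 0)}

def ssim_to_rm(ssim):
    """ssim[k][l] holds 'V', 'A', 'X', or 'O' for each pair with k < l."""
    p = len(ssim)
    # Diagonal entries are 1: every road block reaches itself.
    rm = [[1 if k == l else 0 for l in range(p)] for k in range(p)]
    for k in range(p):
        for l in range(k + 1, p):
            kl, lk = SYMBOL_TO_ENTRY[ssim[k][l]]
            rm[k][l], rm[l][k] = kl, lk
    return rm

# Toy example: RB1 leads RB2 (V), RB1 and RB3 are mutually dependent (X),
# RB2 and RB3 have no direct relation (O).
ssim = [[None, "V", "X"],
        [None, None, "O"],
        [None, None, None]]
print(ssim_to_rm(ssim))  # [[1, 1, 1], [0, 1, 0], [1, 0, 1]]
```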
$$
S =
\begin{array}{c}
\begin{array}{cccc} RB_1 & RB_2 & \cdots & RB_n \end{array} \\
\begin{array}{c} RB_1 \\ RB_2 \\ RB_3 \\ \vdots \\ RB_p \end{array}
\begin{pmatrix}
CR_{11} & CR_{12} & \cdots & CR_{1n} \\
CR_{21} & CR_{22} & \cdots & CR_{2n} \\
CR_{31} & CR_{32} & \cdots & CR_{3n} \\
\vdots  & \vdots  & \ddots & \vdots  \\
CR_{p1} & CR_{p2} & \cdots & CR_{pn}
\end{pmatrix}
\end{array}
\qquad (1)
$$
where RBk refers to an Industry 4.0 adoption road block (k = 1, 2, …, p) and CRk,l denotes a contextual relationship between a pair of road blocks RBk and RBl. The SSIM is generated using Eq. 1, with respondents' opinions captured according to the rules listed in Table 2.

Step 2: Building the Initial Reachability Matrix (RM)

The rules in Table 2 for converting the SSIM are followed to develop the RM for the Industry 4.0 road blocks.

Step 3: Building the Final Reachability Matrix (FRM)

The creation of the FRM is based on transitivity, which takes indirect relationships into account: if the entries in cells (k, l) and (l, m) are equal to one, then the entry in cell (k, m) is set to one by transitivity [22, 23]. Table 3 shows the FRM for the Industry 4.0 road blocks produced using steps 1, 2, and 3.

Step 4: Partitioning of Industry 4.0 Road Blocks into Different Levels and Developing an ISM-based Model

To complete this phase, the reachability set (RS) and antecedent set (AS) for each Industry 4.0 road block must first be identified. Then, for each RB, the intersection set (IS) of the RS and the AS is determined. The RS for each RB is constructed by scanning the relevant row of the FRM and selecting all entities with value 1; this procedure is repeated for every RB. The AS is constructed by checking each RB's corresponding column in the FRM and selecting the entities with value 1. Any RB for which the IS and RS entities are the same is promoted to the top level of the ISM hierarchy and deleted from the list of remaining RBs.
Repeating the process is necessary until all RBs have been placed in the ISM-based hierarchical model.
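Steps 3 and 4 above can be sketched in code: a Warshall-style transitive closure produces the FRM, and an iterative sweep removes each level in turn. The 3×3 matrix below is a toy example, not the paper's 12×12 data.

```python
def transitive_closure(rm):
    """Step 3: if (k, m) = 1 and (m, l) = 1 then set (k, l) = 1."""
    n = len(rm)
    frm = [row[:] for row in rm]
    for m in range(n):
        for k in range(n):
            for l in range(n):
                if frm[k][m] and frm[m][l]:
                    frm[k][l] = 1
    return frm

def level_partition(frm):
    """Step 4: peel off RBs whose reachability set equals the intersection set."""
    n = len(frm)
    remaining, levels = set(range(n)), []
    while remaining:
        rs = {k: {l for l in remaining if frm[k][l]} for k in remaining}
        an = {k: {l for l in remaining if frm[l][k]} for k in remaining}
        level = [k for k in remaining if rs[k] == rs[k] & an[k]]
        levels.append(sorted(k + 1 for k in level))  # report 1-based RB labels
        remaining -= set(level)
    return levels  # levels[0] is Level I (top), and so on downward

rm = [[1, 1, 0],
      [0, 1, 1],
      [0, 0, 1]]
frm = transitive_closure(rm)
print(frm)                  # [[1, 1, 1], [0, 1, 1], [0, 0, 1]]
print(level_partition(frm)) # [[3], [2], [1]]
```

Run against the paper's 12×12 initial RM, the same routine would reproduce the starred transitivity entries in Table 3 and the levels in Table 4.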
Table 3 Final reachability matrix of road blocks of Industry 4.0 adoption

RBs    RB1  RB2  RB3  RB4  RB5  RB6  RB7  RB8  RB9  RB10 RB11 RB12
RB1    1    1    1    1    1    1    1    1    1    1    1    1
RB2    0    1    0    0    0    0    0    0    0    0    0    0
RB3    0    1    1    1    0    1    1*   1    1    1    1    1
RB4    0    1    1    1    0    1*   1    1    1    1    1    1
RB5    0    1    1    1    1    1    1    1    1    1    1    1
RB6    0    1*   0    0    0    1    0    0    1    0    1    0
RB7    0    1    0    0    0    0    1    0    1*   0    1    0
RB8    0    1    0    0    0    1    1    1    1    1    1    0
RB9    0    0    0    0    0    0    0    0    1    0    0    0
RB10   0    1    0    0    0    1    1    0    1    1    1    0
RB11   0    1    0    0    0    0    0    0    1    0    1    0
RB12   0    1    1*   1    0    1*   1    1    1*   1    1    1

Entries marked * were added through transitivity.
The final reachability matrix (Table 3) and the level partitioning matrix (Table 4) are used to build the structural model in the form of a digraph. After removing transitivity as detailed in the ISM approach, the digraph is finally turned into the ISM, as illustrated in Fig. 1.

Table 4 Level partitioning matrix for road blocks of Industry 4.0 adoption

RBs    Reachability set   Antecedent set                        Intersection set   Level
RB1    1                  1                                     1                  VIII
RB2    2                  1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12    2                  I
RB3    3, 4, 12           1, 3, 4, 5, 12                        3, 4, 12           VI
RB4    3, 4, 12           1, 3, 4, 5, 12                        3, 4, 12           VI
RB5    5                  1, 5                                  5                  VII
RB6    6                  1, 3, 4, 5, 6, 8, 10, 12              6                  III
RB7    7                  1, 3, 4, 5, 7, 8, 10, 12              7                  III
RB8    8                  1, 3, 4, 5, 12                        8                  V
RB9    9                  1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12    9                  I
RB10   10                 1, 3, 4, 5, 8, 10, 12                 10                 IV
RB11   11                 1, 3, 4, 5, 6, 7, 8, 10, 11, 12       11                 II
RB12   3, 4, 12           1, 3, 4, 5, 12                        3, 4, 12           VI
[Figure: ISM digraph of the twelve road blocks, with Organizational Resistance at the bottom level; above it, in order, Implementation Cost & ROI Uncertainty; Low Level of Technology Adoption Readiness, Lack of Digital Policy, and Data Security; Integration; Lack of Perseverance; Job Interruptions and Component Interoperability & Complexity; Skill Gap; and, at the top, Broadband Access Issues and Degradation of Old Manufacturing Model.]

Fig. 1 ISM-based model of road blocks of Industry 4.0 adoption
4 Discussions

A significant transformation of the manufacturing environment is underway as a result of the current state of digital technology adoption in the manufacturing industry, particularly in the SME sector. This transformation is being driven by the adoption of IoT applications, which integrate interconnected sub-manufacturing components such as the supply chain and customers to create an intelligent, smart setup [24]. Manufacturers' effectiveness, responsiveness, and efficiency are among the aspects affected, directly and indirectly, by Industry 4.0's more open production process and the more readily available data it provides.
5 Implications of This Study

Industry 4.0 and the digital technologies that underpin it have definitely piqued the interest of researchers and practitioners across a wide range of industries, and this interest is expected to continue to grow [25–27]. In recent years, many studies have examined the issues related to the adoption of I4.0 in large-scale organizations. However, studies focusing on SMEs are scarce in the literature; in this regard, this study, which identified not only the road blocks but also their interdependence, will help organizations interested in I4.0 adoption by providing a thorough understanding of the challenges they might encounter. Several other implications emerge from this study:

• The study developed a hierarchical structure of the Industry 4.0 road blocks using ISM, which could help decision-makers formulate their course of action for minimizing the impact of these road blocks.
• Organizational resistance is found to be the most important factor, with the highest inhibiting impact on I4.0 adoption; the research also reveals that it influences all other road blocks. Top-level management should therefore develop awareness programmes to educate employees about the benefits of Industry 4.0 adoption and so minimize this resistance.
• Implementation cost and ROI uncertainty is another important road block according to the findings of this study. SMEs have limited funds, but they could explore the funding schemes offered by the governments of various countries; governments are also developing infrastructural facilities and offering tax subsidies to help SMEs, who could take advantage of these schemes.
• Data security, lack of digital policy, and a low level of technology adoption readiness are found to be prominent barriers with a significant inhibiting impact on I4.0. SMEs could work collaboratively with government to develop data-security platforms and suitable digital policies.
• The study proposed the use of ISM for analysing the interplay among the barriers to I4.0 adoption; researchers and practitioners could use this technique to gain a better understanding of other, similar issues.
• In developing countries, SMEs have limited funds and expertise. They therefore often imitate the strategies adopted by their competitors instead of developing their own processes and operational strategies, an approach that has proved fatal for many SMEs. The findings of this research could help decision-makers better understand the factors influencing I4.0 integration in their operational practices and develop suitable policies to limit the influence of the barriers.
6 Conclusion

Following a thorough review of the literature in this field, this is the first article to critically assess the wide range of applications of Industry 4.0 adoption, as well as the opportunities and advantages it can provide in manufacturing industries, particularly in the SME sector. The findings of this research add to the growing body of knowledge about the adoption of Industry 4.0. Many academics have advocated the literature review as a research method because of its potential to make major contributions to methodological, thematic, and conceptual development across a wide range of disciplines [28]. The review approach was therefore used in this study to collect and analyse relevant material from numerous journals and other publications relevant to Industry 4.0-based applications in the manufacturing business. The research team also examined the road blocks hindering widespread adoption of Industry 4.0 in the workplace. The Internet of things (IoT) is essential for keeping systems up to date and making them easier to operate, but doing so requires skilled operational employees. Success in logistics and supply chain management likewise requires networked assets. The interconnection of assets results in autonomous operations, hazard reduction, and increased operational performance, yielding high productivity and waste reduction. While the positive outcomes of Industry 4.0 acceptance are well known, the barriers to its adoption have received less attention, and they are therefore the subject of this study. The study's findings established the potential road blocks, arranged into eight hierarchical levels, to the adoption of Industry 4.0 in SMEs operating in developing nations.

A better understanding of the obstacles that may occur will allow legislators, managers, and industry practitioners to make intelligent judgments about the future integration of their organizations.
7 Limitations and Scope of Future Work

A review of the literature published between 2010 and 2022 was used to identify the barriers to I4.0 adoption in small- and medium-sized enterprises (SMEs) in developing countries; no statistical validation was used in identifying these barriers. In future research, a statistical analysis will be performed to generate a confirmed set of obstacles to I4.0 adoption in SMEs. More in-depth empirical studies could also be conducted to establish the impact of these obstacles on the adoption of I4.0 in small- and medium-sized businesses [12, 28]. Fuzzy logic-based analysis could be employed to deal with the vagueness inherent in human judgement in questionnaire-based surveys.
References

1. Ivanov D, Tang CS, Dolgui A, Battini D, Das A (2020) Researchers' perspectives on Industry 4.0: multi-disciplinary analysis and opportunities for operations management. Int J Prod Res. https://doi.org/10.1080/00207543.2020.1798035
2. Oztemel E, Gursev S (2020) Literature review of Industry 4.0 and related technologies. J Intell Manuf. https://doi.org/10.1007/s10845-018-1433-8
3. Thuemmler C, Bai C (2017) Health 4.0: how virtualization and big data are revolutionizing healthcare
4. Frazzon EM, Agostino ÍRS, Broda E, Freitag M (2020) Manufacturing networks in the era of digital production and operations: a socio-cyber-physical perspective. Annu Rev Control. https://doi.org/10.1016/j.arcontrol.2020.04.008
5. Moeuf A, Pellerin R, Lamouri S, Tamayo-Giraldo S, Barbaray R (2018) The industrial management of SMEs in the era of Industry 4.0. Int J Prod Res. https://doi.org/10.1080/00207543.2017.1372647
6. Muduli K, Barve A (2015) Analysis of critical activities for GSCM implementation in mining supply chains in India using fuzzy analytical hierarchy process. Int J Bus Excellence 8(6):767–797
7. Tortorella GL, Fettermann D (2018) Implementation of Industry 4.0 and lean production in Brazilian manufacturing companies. Int J Prod Res 56(8):2975–2987
8. Fatorachian H, Kazemi H (2018) A critical investigation of Industry 4.0 in manufacturing: theoretical operationalisation framework. Prod Plann Control 29(8):633–644
9. Kumar R, Singh RK, Dwivedi YK (2020) Application of Industry 4.0 technologies in SMEs for ethical and sustainable operations: analysis of challenges. J Cleaner Prod 275:124063
10. Shayganmehr M, Kumar A, Garza-Reyes JA, Moktadir MA (2021) Industry 4.0 enablers for a cleaner production and circular economy within the context of business ethics: a study in a developing country. J Cleaner Prod 281:125280
11. Stentoft J, Adsbøll Wickstrøm K, Philipsen K, Haug A (2020) Drivers and barriers for Industry 4.0 readiness and practice: empirical evidence from small and medium-sized manufacturers. Prod Plann Control, pp 1–18
12. Muduli K, Barve A, Tripathy S, Biswal JN (2016) Green practices adopted by the mining supply chains in India: a case study. Int J Environ Sustain Dev 15(2):159–182
13. Oyekola P, Swain S, Muduli K, Ramasamy A (2021) IoT in combating Covid 19 pandemics: lessons for developing countries. In: Blockchain technology in medicine and healthcare: concepts, methodologies, tools, and applications
14. Colotla I et al (2016) Winning the Industry 4.0 race—how ready are Danish manufacturers? Boston Consulting Group
15. Reinhard G, Jesper V, Stefan S (2016) Industry 4.0: building the digital enterprise. In: 2016 Global Industry 4.0 Survey
16. Sartal A, Bellas R, Mejías AM, García-Collado A (2020) The sustainable manufacturing concept, evolution and opportunities within Industry 4.0: a literature review. Adv Mech Eng. https://doi.org/10.1177/1687814020925232
17. Vaidya S, Ambad P, Bhosle S (2018) Industry 4.0—a glimpse. https://doi.org/10.1016/j.promfg.2018.02.034
18. Ghadge A, Er Kara M, Moradlou H, Goswami M (2020) The impact of Industry 4.0 implementation on supply chains. J Manuf Technol Manag. https://doi.org/10.1108/JMTM-10-2019-0368
19. Herceg V, Kuc V, Mijuškovic VM, Herceg T (2020) Challenges and driving forces for Industry 4.0. Sustainability
20. Garetti M, Taisch M (2012) Sustainable manufacturing: trends and research challenges. Prod Plan Control. https://doi.org/10.1080/09537287.2011.591619
21. Biswal JN, Muduli K, Satapathy S, Yadav DK, Pumwa J (2018) Interpretive structural modeling-based framework for analysis of sustainable supply chain management enablers: Indian thermal power plant perspective. J Oper Strategic Plann 1(1):1–23
22. Biswal JN, Muduli K, Satapathy S, Yadav DK (2019) A TISM based study of SSCM enablers: an Indian coal-fired thermal power plant perspective. Int J Syst Assurance Eng Manage 10(1):126–141
23. Muduli K, Kusi-Sarpong S, Yadav DK, Gupta H, Jabbour CJC (2021) An original assessment of the influence of soft dimensions on implementation of sustainability practices: implications for the thermal energy sector in fast growing economies. Oper Manage Res 14(3):337–358
24. Joshi S, Sharma M, Das RP, Rosak-Szyrocka J, Żywiołek J, Muduli K, Prasad M (2022) Modeling conceptual framework for implementing barriers of AI in public healthcare for improving operational excellence: experiences from developing countries. Sustainability 14(18):11698
25. Swain S, Oyekola P, Ramasamy A, Muduli K (2021) Blockchain technology for limiting the impact of pandemic: challenges and prospects. In: Computational modelling and data analysis in COVID-19 research. CRC Press, pp 165–186
26. Dash M, Shadangi PY, Muduli K, Luhach AK, Mohamed A (2021) Predicting the motivators of telemedicine acceptance in COVID-19 pandemic using multiple regression and ANN approach. J Stat Manag Syst 24(2):319–339
27. Swain S, Peter O, Adimuthu R, Muduli K (2021) Blockchain technology for limiting the impact of pandemic: challenges and prospects. In: Computational modeling and data analysis in COVID-19 research, pp 165–186
28. Swain S, Muduli K, Kommula VP, Sahoo KK (2022) Innovations in internet of medical things, artificial intelligence, and readiness of the healthcare sector towards Health 4.0 adoption. Int J Soc Ecol Sustain Dev (IJSESD) 13(1):1–14
Automatic Underground Water Pipeline Fault Detection with Control in IoT M. Raja, N. Muthu Selvi, R. V. Reshnuvi, and R. Varatharaj
Abstract Water is indispensable in our daily lives, for drinking, cleaning, washing, and bathing. Continuing to waste water will be very problematic in the future. Water benefits every aspect of our lives, from our homes to hydroelectric power plants. The situation is critical, and everyone has a responsibility to use water effectively in daily life. This project aims to provide fault detection for water pipelines using IoT technology. A leak is not always found right away; by the time it becomes a bigger problem, a lot of water has been wasted, so it is desirable to act as soon as a leak occurs. To this end, we propose a system that uses multiple sensors to monitor water level, volume, and leakage, detected by an ultrasonic sensor and a flow sensor, respectively. A fault in a water pipeline can lead to system-wide failure. Faults in water pipes are classified into types, and the process of identifying shortcomings in the water pipe is carried out. The system therefore detects water pipe faults in real time and handles them using an Arduino system.

Keywords NodeMCU · Arduino UNO · Sensor · Fault detection · Automatic detection · IoT
1 Introduction

Water plays a key role in the development of agriculture. To make the most of this resource, water waste should be avoided. This research provides a more effective approach for monitoring the pressure of water flowing through a pipeline and detecting leaks. The pipeline is outfitted with an observation kit that includes a pair of pressure sensors and a water quality sensor. Internet access is required to upload data to the cloud; however, even without Internet access, alarm messages can be delivered via GPRS-based SMS services. The flow of water is supplied and controlled via valves located at the intersections of pipelines.

M. Raja (B) · N. Muthu Selvi · R. V. Reshnuvi · R. Varatharaj Department of Electronics and Instrumentation Engineering, Kongu Engineering College, Perundurai, Erode, Tamil Nadu, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 B. B. V. L. Deepak et al. (eds.), Intelligent Manufacturing Systems in Industry 4.0, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-99-1665-8_2
To reduce water loss owing to leakage, the processor adjusts the position of the control valve, reducing water flow through the affected pipeline and thus the water lost. Because oil and gas have always been critical to national prosperity, the monitoring of oil and gas pipelines has received a lot of attention; future water scarcity, on the other hand, has been overlooked. Many people waste water, unaware that no substantial initiatives have been taken to avoid water loss around the world. Countries such as China, Germany, the UK, and the USA face shortages of water resources and are adopting advanced management systems. Leakage from distribution systems is one of the biggest problems faced not only by developing countries but also by developed countries around the world. Faults in water pipelines may cause system failures in society. An automatic water shut-off valve detects water either by monitoring the flow in the pipe or by detecting water on the floor; when the flow is irregular, the valve shuts off the water supply to the home. Larger enterprises also use their own private tube wells to fill tanks. Tube wells and over-pumping of underground resources result in decreased freshwater levels, water contamination, and higher costs from deeper drilling and pumping. Because of the road infrastructure, proper water storage is difficult in mountainous terrain. In some desert regions, such as Saudi Arabia, underground reserves may be absent or extremely deep; water is then typically transported in tankers, which can mean significant expense for end-users and wasted water. These measures can prevent a significant amount of the damage caused by water leaks, but the faults in a water pipeline need to be identified before such systems are installed.

This proposal offers a simple and inexpensive way to monitor pipelines. It not only monitors the situation but also sends SMS alerts to the concerned employees with the approximate location of the leak. If the water pressure falls below the desired level, necessary temporary measures are taken. The project also proposes a method to detect cracks in pipelines. Water pollution and the use of polluted water are a major public health problem, causing death and disease. Existing technology does not combine these multiple functions; the proposed system provides these capabilities with few or no drawbacks.
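The alert decision described above can be sketched as follows. Two pressure sensors bound a pipeline segment; a sustained pressure drop exceeding the expected friction loss suggests a leak between them. All numbers (thresholds, segment length) and the crude localization heuristic are illustrative assumptions, not values from the actual deployment.

```python
# Illustrative constants: a 100 m segment with a known normal pressure loss.
SEGMENT_LENGTH_M = 100.0
EXPECTED_DROP_KPA = 5.0   # normal friction loss across the segment
LEAK_MARGIN_KPA = 3.0     # extra drop treated as a leak signature

def check_segment(p_upstream_kpa, p_downstream_kpa):
    """Return an approximate leak position in metres, or None if no leak."""
    drop = p_upstream_kpa - p_downstream_kpa
    excess = drop - EXPECTED_DROP_KPA
    if excess <= LEAK_MARGIN_KPA:
        return None
    # Crude localization heuristic: map the excess-drop fraction onto the
    # segment length to give field staff a rough search area.
    fraction = min(excess / drop, 1.0)
    return round(SEGMENT_LENGTH_M * fraction, 1)

print(check_segment(250.0, 244.0))  # None - drop within the normal range
pos = check_segment(250.0, 230.0)   # 20 kPa drop, 15 kPa excess
print(f"Leak suspected about {pos} m downstream; send SMS alert")
```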
2 Literature Survey

Several studies have been proposed for pipeline leakage detection, focusing on detection using microcontrollers [1–3], wireless sensor networks [4, 5], and SVM classification [6]. One line of work applies fluid dynamics and kinematics to the water flow rate obtained from a liquid meter sensor [7]. Seyoum et al. [8], on the other hand, proposed an approach to detect leaks in homes using audio signal recordings: acoustic signals produced by plumbing fittings and devices are recorded and used to detect anomalies indicative of leaks. Additionally, [9, 10] detected leaks with acoustic sensors based on the Internet of things (IoT). Although much research has been done, there is limited research on water
leakage when applying the Internet of things (IoT) approach, or on extending this technology to industrial drinking water quality measurement. A more advanced study can be found in [11], which demonstrates novel leak detection based on a relative pressure sensor mounted on the outside of the pipe, combined with a temperature-difference measurement between the wall and bottom of the pipe. A study in [12] proposed an intelligent monitoring approach for water pipe systems, focusing on the continuous recording of water parameters, in this case pressure and flow. An interesting study was presented in [13]: a cloud-based NB-IoT system designed to achieve cloud traffic data management and online remote water leakage diagnosis by comparing flow data at both ends of a pipeline. On the other hand, [14] proposed a cognitive water distribution system incorporating the Internet of things (IoT), smart sensors, actuators, and linked devices, while utilising big data analytics to provide real-time processing of data acquired on pipeline damage and hydraulic problems. Another work using an IoT approach is presented in [15]; it implements a neural network, connected through IoT-enabled devices, that monitors tank levels, water quality, and pipe leaks in real time.
3 Proposed System

In our everyday lives, the most vital resource is water, and water is wasted when a pipeline develops faults. We chose to use the Blynk application to construct an automated underground water pipeline defect detection system to tackle this problem. The system detects a defect in the top, middle, or bottom pipeline, sends a message, and also shows the output on the LCD. It includes an Arduino UNO, a NodeMCU, sensors, and an LCD display. The pipeline fault is identified from the detection of a fault in the pipe; the features of a pipe fault are extracted using the Blynk technique in Arduino, and the algorithm then distinguishes between fault and good conditions in the pipeline. The program detects the faulty water pipeline and separates the fault region. Figure 1 shows a water pipeline fault.
4 Methodology

The pipeline fault is identified from the detection of a pipe fault, whose features are extracted using the Blynk technique in Arduino; the algorithm then distinguishes between faulty and non-faulty pipelines. The program detects the faulty water pipeline and separates the fault region. A block diagram reflecting the basic result of the project is shown in Fig. 2; it also shows the components used in the project, such as the LCD display, sensors, NodeMCU, and power supply used in this experiment.
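The separation of the fault region described above can be sketched as a small classification step: read one fault flag per pipe section (top / middle / bottom) and compose the message shown on the LCD and pushed to the app. The section names and message format are assumptions for illustration, not the project's exact firmware.

```python
# Sections of the monitored pipeline, in the order they are reported.
SECTIONS = ("top", "middle", "bottom")

def classify(readings):
    """readings maps a section name -> True if its sensor reports a fault."""
    faulty = [s for s in SECTIONS if readings.get(s)]
    if not faulty:
        return "Pipeline OK"
    # Compose one message naming every faulty section.
    return "Fault in " + " and ".join(f"{s} pipeline" for s in faulty)

print(classify({"top": False, "middle": True, "bottom": False}))
# Fault in middle pipeline
print(classify({"top": False, "middle": False, "bottom": False}))
# Pipeline OK
```

On the real hardware the same string would be written to the 16 × 2 LCD and sent as the Blynk/SMS notification.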
Fig. 1 Water pipeline fault
Fig. 2 Block diagram
Fig. 3 NodeMCU
5 Hardware Description

5.1 NodeMCU

NodeMCU (Node MicroController Unit) is an open-source software and hardware development environment based on the ESP8266 system-on-chip. The ESP8266 firmware is open source and built using the chip manufacturer's proprietary SDK. The firmware features an easy-to-use programming environment based on embedded Lua, a simple and fast scripting language with a large developer community. Figure 3 shows the NodeMCU.
5.2 LCD Display

The LCD is an integrated Arduino LCD shield with six buttons in one piece, fully compatible with the Arduino UNO R3 control board. Stacking this display on a UNO board is straightforward and convenient. The 8-bit connection and the 4-bit connection are the two communication mechanisms for the 16 × 2 LCD and the UNO R3 board, respectively. The LCD has five control buttons and an 8-bit connector. The up, down, left, right, and reset buttons are linked to analog input A0, which can monitor the button state with just one Arduino input. A potentiometer is included with this shield to adjust the LCD backlight. Figure 4 shows the LCD display.
Fig. 4 LCD display
5.3 Power Supply

A battery stores energy to provide power when it is needed. Lead–acid batteries are preferred for a high-power supply: they are big, have a robust and heavy architecture, and are widely used to store large amounts of energy in automobiles and inverters. Batteries contain two types of plates, positive and negative; lead dioxide makes up the positive plate, whereas sponge lead makes up the negative. The two plates are split by a separator, an insulating substance, and the entire unit is encased in a plastic housing with an electrolyte composed of water and sulfuric acid. Figure 5 shows the 12 V, 4.5 Ah sealed lead–acid battery (Table 1).
5.4 Alarm

An alarm is a warning signal. It is used for electrical signalling in automobiles, in household items such as microwave ovens, and in game shows. It consists of a number of switches or sensors connected to a control unit that detects whether, and which, buttons have been pressed; the unit usually lights the appropriate button or panel and emits a continuous alarm or beep. The earliest alarms were electromechanical devices that functioned similarly to an electric bell but without the metal gong, so no ringing was produced. These sounders are wall or ceiling mounted. Another approach with an AC-connected device was to build a circuit that converts the supply into a tone loud enough to drive a cheap 8 Ω speaker. Today it is more common
Fig. 5 12 V 4.5 Ah sealed lead–acid battery
Table 1 Sealed lead–acid battery specifications

Parameter | Rating
Nominal voltage | 12.0 V
Nominal capacity | 4.5 Ah
Max. charging current | 1.35 A
Max. discharging current | 67.5 A (max.)
to use ceramic-based piezoelectric sounders, which emit a high-pitched tone as the alarm. Figure 6 shows the alarm.

Fig. 6 Alarm
5.5 Internet of Things

The Internet of things (IoT) is a network of interconnected computing devices, mechanical and digital machines, products, animals, and humans with unique identifiers (UIDs), able to transfer data over a network without requiring human-to-human or human-to-computer interaction. An IoT gateway connects the cloud to controllers, sensors, and smart devices through physical hardware or software. These gadgets are connected to the Internet worldwide through IP networks. In commercial IoT, local communication is often Bluetooth or Ethernet (wired or wireless); in most cases, an IoT device communicates only with nearby local devices.
5.6 Setting up of the Blynk Application

To set up the Blynk server, follow these steps. Create an account on Blynk, log in, and enter your account details. Click Channels and create a new channel by entering the required details. Click the API key button to go to the Write API Key page and copy your API key. You can then access the "private view" and click the button to change the view of the window to your liking. All of these steps are part of the Blynk IoT cloud. The next part is the ATMEL board program of the IoT-based automatic fault detection system: the source code can be uploaded directly to the development board. Make sure to install the OLED display library, the Adafruit GFX library, and the SSD1306 library beforehand. Change the Blynk API key in the sketch and enter the Wi-Fi SSID and password to run the program.
6 Results

This paper proposes and evaluates an IoT-based method for detecting water pipeline failures. The proposed method is divided into three phases. In the first phase, Arduino code extracts characteristics and reports the state of the pipeline to the Blynk application. In the second phase, the algorithm compares defective and non-defective water pipes. In the third phase, the defect is isolated and, using Blynk processing, the fault is recognized and located. The primary goal is to determine whether the water pipeline is in good condition, and the water pipeline fault has been satisfactorily discovered. Figure 7 shows the prototype in the safe state, where the system automatically detects and controls the error. Figure 8 shows a working prototype of top-line fault detection that uses microcontrollers and sensors to automatically detect faults in water pipes and maintain the pipeline.
Fig. 7 Working prototype safe condition
Fig. 8 Working prototype for top-line fault detection
Here, defects are detected and classified according to whether the water pipe is damaged or not. The fault is reported by an SMS message and on the LCD, which provides the result of the underground water pipeline fault detection; the output is also obtained through the Blynk application using the IoT technique. The algorithm reports whether the pipeline is at fault or not (Figs. 9 and 10; Table 2). Only when a pipeline section is damaged does the system show which area is damaged, so that action can be taken immediately; otherwise, the pipeline is reported to be in safe condition.
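The detection-and-alert logic described here can be sketched as a small mapping from section sensor readings to the SMS messages shown in Table 2. This is an illustrative sketch, not the authors' firmware: the function name, the 0–1 sensor scale, and the damage threshold are assumptions for illustration.

```python
# Illustrative sketch of the per-section alert logic: each monitored
# pipeline section maps to an SMS/LCD message when its sensor reading
# crosses an assumed damage threshold.

ALERTS = {
    "top":    "Top-line damage...! Take action immediately",
    "middle": "Middle-line damage...! Take action immediately",
    "bottom": "Bottom-line damage...! Take action immediately",
}

def check_pipeline(sensor_readings, threshold=0.5):
    """Return alert messages for every section whose reading exceeds
    the damage threshold; an empty list means the pipeline is safe."""
    return [ALERTS[section]
            for section, value in sensor_readings.items()
            if value > threshold]
```

In the safe state the function returns no messages, matching the behaviour in Fig. 7 where no alert is raised.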
Fig. 9 Working prototype for middle-line fault detection
Fig. 10 Working prototype for bottom-line fault detection

Table 2 Testing the pipeline along with SMS alert

Name of the pipes | Top pipeline | Middle pipeline | Bottom pipeline
Pipe 1 | Damage means, Top-line damage…! Take action immediately | – | –
Pipe 2 | – | Damage means, Middle-line damage…! Take action immediately | –
Pipe 3 | – | – | Damage means, Bottom-line damage…! Take action immediately
7 Conclusion

This paper proposes a simple and cost-effective method of pipeline monitoring. It not only monitors the pipeline condition but also sends SMS alerts to the concerned employees with the leak's approximate location. When the water pressure falls below the desired level, it takes the necessary preliminary action. The project also proposes a method for detecting pipeline cracks. Water pollution and the usage of contaminated water cause mortality and disease, making them major public health concerns. Existing technologies do not combine these multiple functions, and those that offer some of them have their own drawbacks. Fortunately, several studies are being conducted to improve water quality monitoring. As a result, the scope of our research was purposefully limited to IoT applications in smart water tank monitoring. As previously stated, the world's freshwater reserves are rapidly depleting.
Applying Python Programming to the Traditional Methods of Job Sequencing Nekkala Ganesh, B. Hemanth, and P. H. J. Venkatesh
Abstract Job sequencing problems are used to determine a suitable or appropriate order for a series of jobs that are to be performed in order to optimize some efficiency measure, such as total elapsed time or overall cost. Efficiency in such circumstances is determined by the order or sequence in which the jobs are completed, and it can be quantified in terms of money, time, or miles, among other things. Sequencing problems have many applications, such as jobs at a manufacturing facility, landing and clearance of aircraft, maintenance schedules in industry, programs to be run at a digital computer centre, and consumers in the banking sector. Sequencing is widely used in manufacturing industries in the departments of production, planning, and scheduling of jobs. There are many methods to solve job sequencing problems, such as the TABU search algorithm, the fuzzy TOPSIS method, payoff models, and Johnson's sequencing rule; one of the most widely used is Johnson's algorithm. Traditional methods are time-consuming, and there is a high probability of human-made errors, so we have developed code using Python programming for job sequencing problems, which significantly saves time and reduces human-made errors.

Keywords Job sequencing · Efficiency · Johnson's algorithm · Python programming
1 Introduction

The sequencing problem is regarded as one of the most important applications in the field of operations research [1–3]. In sequencing, the effectiveness measure is determined by the order or sequence in which a set of activities or jobs is to be completed [4–7]. Algorithms for production scheduling systems have reduced idle machine time and the total time required to complete tasks

N. Ganesh · B. Hemanth · P. H. J. Venkatesh (B) Department of Mechanical Engineering, Vignan's Institute of Information Technology (A), Duvvada, Visakhapatnam, India e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 B. B. V. L. Deepak et al. (eds.), Intelligent Manufacturing Systems in Industry 4.0, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-99-1665-8_3
[8–10]. Classical elements of sequencing and scheduling include complete enumeration, integer programming, and techniques such as branch and bound to obtain optimal sequences [11], but they do not provide efficient solutions for large-size problems [12]. For the difficulty of shortening the make-span in a two-machine flow shop with deterministic jobs, Johnson's algorithm is used to determine the best job sequence based on processing times [13]. Although it is theoretically feasible to determine the optimal sequence by testing every candidate, the large number of computations makes this impractical in real time [14, 15].
2 Methodology

In job sequencing problems, let us assume that there are n jobs to be processed through m machines, and that every job passes through the machines in the order M1, M2, …, Mm. Let T ij be the time taken by machine i to complete job j. The following assumptions are made while solving job sequencing problems: once work has been started on a machine, the job may only be passed to another machine after it is done on the present machine; the time taken by each machine to process a particular job is known; and these processing times do not depend on the sequence in which the jobs are processed. Doing the above process by hand to find the sequence of jobs using methods like Johnson's algorithm takes a lot of time. To calculate the sequence of jobs to be performed, we have developed Python code using some of the in-built functions in Python: max(), which finds the maximum number in a list; min(), which finds the minimum number in a list; split(), which splits the given input into elements of a list; append(), which adds an element to a list; map(), which applies a function to the given data; and slicing operators such as [::-1], which reverses the elements of a list. Besides these, iterative statements such as the while loop and for loop and conditional statements such as if–else are used. Figure 1 shows the flowchart that describes the steps involved in getting an optimum sequence. In the flowchart, 'm' denotes the number of machines, 'n' denotes the number of jobs, and M1 and M2 denote the time taken by machine 1 and machine 2, respectively, to complete their jobs. J1, J2, and JN denote the time taken for each job on the particular machine.
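The two-machine procedure can be sketched with the same built-ins mentioned above (min(), append(), slicing). The following is an illustrative implementation of Johnson's rule, not a reproduction of the authors' code; the function name and job times are ours.

```python
# Runnable sketch of Johnson's rule for sequencing n jobs on two machines
# processed in the order M1 -> M2, minimizing total elapsed time.

def johnsons_sequence(m1_times, m2_times):
    """Return the job order (0-indexed) given per-machine processing times."""
    jobs = list(range(len(m1_times)))
    front, back = [], []
    while jobs:
        # Pick the job with the smallest remaining time on either machine.
        best = min(jobs, key=lambda j: min(m1_times[j], m2_times[j]))
        if m1_times[best] <= m2_times[best]:
            front.append(best)      # smallest time on M1: schedule early
        else:
            back.append(best)       # smallest time on M2: schedule late
        jobs.remove(best)
    return front + back[::-1]       # reverse the tail with the [::-1] slice
```

For example, `johnsons_sequence([3, 12, 15, 6, 10], [8, 10, 10, 6, 12])` orders the five jobs so that short M1 jobs lead and short M2 jobs trail, exactly the rule the flowchart in Fig. 1 encodes.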
3 Results and Discussion

When the code is compiled and executed, it first asks for the number of machines as the first input and then the number of jobs as the second input. Next, the series of processing times for each machine is given on every new line as
Fig. 1 Flowchart for optimum sequence
the input. The sequence of jobs to be performed to obtain maximum efficiency is then produced as the output for the given input. The following examples were used to test the code, and the results match exactly when the same problems are solved by the traditional approach. Figure 2 shows the output obtained for 5 jobs on 2 machines, Fig. 3 depicts the output obtained for 5 jobs on 3 machines, and Fig. 4 shows the output of the sequence of jobs for n jobs on m machines.
Fig. 2 Output for n-jobs on 2 machines
Fig. 3 Output for n-jobs on 3 machines
Fig. 4 Output for n-jobs on m-machines
4 Future Scope In the present scenario, almost all the machines are semi-automated. Traditional methods require more time for job sequencing, and the inputs are given manually to the machines. With the help of the code that we have developed, the order or the job sequence can be directly imported to the machines, thus making the process more automated and time-saving.
5 Conclusion

In this paper, we have developed code using Python programming for job sequencing problems. The following advantages have been observed while using this method: the solution is generated very fast as it is computer operated; the difficulties faced while solving the problem by hand can be avoided; human-made errors can be avoided; and, from simple jobs to very complex scheduling, this method can be applied simply.
References

1. Abbas M, Abbas A, Khan WA (2016) Scheduling job shop—a case study. IOP Conf Ser: Mater Sci Eng 146(1):012052
2. Admi Syarif AS, Pamungkas A, Mahendra RK, Gen M (2021) Performance evaluation of various heuristic algorithms to solve job shop scheduling problem. Int J Intell Eng Syst 14(2):334–343
3. Ahmadian MM, Salehipour A, Cheng TCE (2021) A meta-heuristic to solve the just-in-time job-shop scheduling problem. Eur J Oper Res 288(1):14–29
4. Amaro D, Rosenkranz M, Fitzpatrick N, Hirano K, Fiorentini M (2022) A case study of variational quantum algorithms for a job shop scheduling problem. EPJ Quantum Technol 9(1):1–20
5. Asadzadeh L (2015) A local search genetic algorithm for the job shop scheduling problem with intelligent agents. Comput Ind Eng 85(1):376–383
6. Ashwin S, Shankaranarayanan V, Anbuudayasankar SP, Thenarasu M (2022) Development and analysis of efficient dispatching rules for minimizing flow time and tardiness-based performance measures in a job shop scheduling. Intell Manuf Energy Sustain 34(1):337–345
7. Bakuli DL (2006) A survey of multi-objective scheduling techniques applied to the job shop problem (JSP). Appl Manage Sci Prod Finance Oper 12:129–143
8. Bulbul K, Kaminsky P (2013) A linear programming-based method for job shop scheduling. Int J Scheduling 16(2):161–183
9. Chaube S, Singh SB, Pant S, Kumar A (2018) Time-dependent conflicting bifuzzy set and its applications in reliability evaluation. In: Advanced mathematical techniques in engineering sciences, pp 111–128
10. Cowling P, Johansson M (2002) Using real time information for effective dynamic scheduling. Eur J Oper Res 139(2):230–244
11. Delgoshaei A, Aram AK, Ehsani S, Rezanoori A, Hanjani SE, Pakdel GH, Shirmohamdi F (2021) A supervised method for scheduling multi-objective job shop systems in the presence of market uncertainties. RAIRO-Oper Res 55:S1165–S1193
12. Jiang T, Zhang C (2018) Application of grey wolf optimization for solving combinatorial problems: job shop and flexible job shop scheduling cases. IEEE Access 6:26231–26240
13. Kahraman C (2006) Metaheuristic techniques for job shop scheduling problem and a fuzzy ant colony optimization algorithm. Fuzzy Appl Ind Eng 201:401–425
14. Kalita R, Barua PB, Dutta AK (2016) Development of a heuristic algorithm for finding optimum machine loading sequence in fabrication shop with job shop layout. In: International conference on electrical, electronics, and optimization techniques, IEEE, pp 1324–1329
15. Kapanoglu M, Alikalfa M (2011) Learning IF-THEN priority rules for dynamic job shops using genetic algorithms. Robot Comput Integr Manuf 27(1):47–55
Stress Detection and Performance Analysis Using IoT-Based Monitoring System S. Srinivasulu Raju, Jalalu Guntur, T. Niranjan, G. Venkata Sneha, and N. Aleshwari
Abstract In order to better streamline various stress management strategies, this paper covers recent studies on a number of techniques for stress detection and prediction. People all over the world suffer mental stress for a variety of reasons, such as family troubles, financial issues, societal issues, and environmental challenges. A person can cope with moderate stress, but a strong reaction or ongoing stress becomes problematic. Therefore, it becomes necessary to foresee, detect, and treat physical and mental stress. To prevent the major chronic health illnesses that can seriously harm a person's wellbeing, it is vital to estimate the stress level early. Many studies are being conducted today to discover stress detection methods based on physiological indicators including skin conductance (SC), sleep pattern, electrocardiogram (ECG), galvanic skin response (GSR), electromyogram (EMG), and electroencephalogram (EEG). With the use of cutting-edge technology such as the Internet of things (IoT), the behaviour of physical and mental stress is investigated. Cloud-based platforms and mobile applications effectively manage the identified stress measures to track and calculate vast amounts of data over an extended period of time. In this paper, the various approaches researchers have taken to stress detection, prediction, and management are reviewed, and directions for the future scope of this research and its interventions are suggested.

Keywords Detection of stress · ECG · GSR · Stress management · EMG · SC · EEG · IoT
S. Srinivasulu Raju (B) · J. Guntur · G. Venkata Sneha · N. Aleshwari Department of Electronics and Instrumentation Engineering, VR Siddhartha Engineering College, Vijayawada, India e-mail: [email protected] T. Niranjan Department of Mechanical Engineering (Mechatronics), MGIT, Hyderabad, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 B. B. V. L. Deepak et al. (eds.), Intelligent Manufacturing Systems in Industry 4.0, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-99-1665-8_4
1 Introduction

Modern technology improves the comfort of human beings' daily lives. Along with this luxurious routine, technology has also encouraged unhealthy habits including poor eating, insufficient exercise, and sleep deprivation, and these habits have an impact on a person's daily activities [1]. With recent innovations in medicine, such as earlier detection and analysis techniques, it is possible to reduce mental and physical stress and also the related diseases such as depression, diabetes, gastric problems, and heart disease. Basically, HRV signals and ECG are good stress indicators, and EMG signals investigate the stress of a person efficiently compared with GSR and ECG signals. Stress management techniques develop and improve the social, psychological, physiological, and physical health of the person [2]. Detecting and diagnosing stress early is the first step to overcoming the problem of physical and mental stress. Many researchers have focussed on the EMG signal for stress prediction and detection [3]. The researchers evaluated the variations of the EMG signal extracted from the left trapezius muscle in both stress and rest situations; the electromyogram signal from the right trapezius muscle was eliminated because of excessive movements of the right hand. The EMG and HRV classification rates are 71.3% and 93.8%, respectively. The EMG signal measures the rate of emotion recognition between the respiratory signal and the trapezius muscle. Stress prediction rates of 86.7% and 92.8% were achieved for the stress and relaxed states using EMG [4].
In order to assure improved health, IoT technology is utilized in numerous applications, including watching, tracking, and monitoring human health problems. Software and sensors are integrated into this technology to allow connectivity and data exchange with end users over the Internet. IoT is crucial in medical applications since it collects essential real-time data and sends notifications to the end user about significant situations via the connected devices. Using mobile apps and gadgets, IoT also guarantees reliable tracing and reporting of mental and physical anxiety problems to healthcare providers at any time [5]. The numerous methods and approaches that several researchers have employed for sensing and managing stress are assessed in the following sections.
2 Literature Review

In the present scenario, mental and physical stress are enormous concerns, and much effort goes into getting the better of stress. Stress often develops from one's surroundings and from the things one feels burdened about [6, 7]; it creates a feeling of burden on the human brain. The level of stress varies from person to person depending on their tolerance, their handling capacity, and the way they deal with cumbersome circumstances, and it is distinctive in each person. Depending on its seriousness, stress may become a risk and harmful. Some stress helps keep people targeting their goals; such stress is not at all toxic and carries zero risk to their lives, and moreover it motivates and encourages the individual to focus and attain their goal with a high success rate.
life, and not only that it motivates and encourages the individual to focus and attain their goal with high success rate. In order to measure a person’s physiological stress, a variety of indicators are used. PPG measures blood flow, muscle activity is measured by electromyography, whereas respiratory activity is measured by piezoelectric transducers and electromagnets. Technology can be used to measure physical stress in the eye, body, and face using technologies like automated gesture analysis, infrared eye tracking, and automatic facial expression analysis. This study compares the present skills in use with those created for physical stress assessments, and its literature is evaluated. Imagine that a person feels and in stress state, it designates a “flight or fight” replication to detect the stress level. As a result of this duplication, the human bloodstream is exposed to the chemicals cortisol and adrenaline, prompting an authoritative command to respond and deal with the stress. Additionally, due to the lack of oxygenated blood supply to each muscle to promote energy metabolism. Human body responds to stress with a cascade of physiological processes. It is sensitive to the hormone cortisol. When that sensitivity is low or off, stress becomes chronic. Chronic stress and inflammation cause neurons and tear-worthy blood vessels to your brain; this can change the way your brain works. Researchers relegate the stress predicated on sundry approaches. For example, based on expressions, opinions, gestures, levels of stiffness, and other various judging parameters (Table 1). Other approaches use the wearable sensors, such as the wristbands, to get physiological signals and extract the signals on the human body, such as electro dermal activity (EDA) and electrocardiogram (ECG) signals. These methods can detect and predict the stress that might be experienced by them [8, 9]: Psychological studies suggest two main basic causes for stress. 
Other approaches use wearable sensors, such as wristbands, to acquire physiological signals from the human body, such as electrodermal activity (EDA) and electrocardiogram (ECG) signals, and these methods can detect and predict the stress a person might be experiencing [8, 9]. Psychological studies suggest two main basic causes of stress. The first is psychological or social/economic factors affecting career, family, work, and other situations. The second is physiological stressors such as extremes of temperature, strain of vision, fatigue, and fear.

Table 1 Categories of stressors

Stress | Example | Stressor
Stress of body | Soreness, sickness, harm, illness, etc. | Physical
Neighbouring stress | Contamination, temperature changes, over-crowding, noise, intensity of light, pests | Environmental
Social challenges | Long-term illness, unnecessary change of residence | Psychosocial
Looming or encountering struggle to take on some contest | Financial problems, examinations, unemployment | Psychological
3 Factors Related to Stress Based on how the body reacts to threats or activities, stress can be predicted. Different factors, including humidity, temperature, heart rate, blood pressure, and speech, are used to estimate a person’s level of stress [10].
3.1 Heart Rate

HRV is used to detect a person's bodily stress. The human heart rate is regulated and controlled by the autonomic nervous system, and the variation is computed from the intervals between heartbeats. HRV varies for each individual based on physical health. If the heart rate variability is greater than 50, the person is healthy and fit; an HRV between 14 and 25 indicates that the person is slightly tense, and an HRV between 2 and 15 indicates a high stress level [11].
3.2 Temperature

Chronic stress in some persons results in a recurring low-grade fever that ranges from 99 to 100 °F (37–38 °C). When exposed to an emotional event, some persons notice a rise in body temperature that can go as high as 106 °F (41 °C). Temperature stress is a type of physiological stress brought on by extremes of heat or cold that can harm or even kill. The body's temperature and heart rate rise when exposed to extreme heat, so measuring the body temperature of a person is one of the easiest ways to predict stress [12].
3.3 Humidity

Employees who spent most of their time in low humidity had 25% greater stress than those who worked in moderate humidity, according to our research. We also saw increased stress levels in people who spent the majority of their time in extremely humid air, indicating a sweet spot in the middle. The fluctuations in the skin response caused by humidity determine a person's stress level when measured with the GSR (Table 2).
Table 2 Humidity level and state of the persons

GSR value | Person state
Minimum value | Relaxed state
Large value | Depressed state
3.4 Speech

Speech is the verbal signal that communicates information to other people. The person is in a relaxed state if their voice is in normal mode. When the voice is louder than usual amongst other people, the speaker's linguistic content and paralinguistic data are obtained for evaluating the stress level. The machine learning approach uses artificial neural network and discrete wavelet transform techniques to determine the level of stress, and speech-based detection of stress is 85% accurate [13].
3.5 Blood Pressure

Researchers discovered that those who consistently reported high stress levels had a 22% higher risk of acquiring high blood pressure than those who consistently reported low stress levels. According to research, persistent stress might cause blood pressure to rise permanently, and blood pressure rises in difficult settings [14]. Elevated blood pressure is defined as multiple reliable readings of 120–129 systolic with a diastolic of 80 or less, whilst high blood pressure is defined as several consistent readings with systolic ≥130 or diastolic >80.
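The cut-offs just quoted can be expressed as a small classification routine. The sketch below is purely illustrative (the function name is ours, and a real assessment would require several consistent readings, as the text notes):

```python
# Illustrative blood-pressure categorization following the cut-offs above:
# elevated = 120-129 systolic with diastolic <= 80;
# high = systolic >= 130 or diastolic > 80.

def bp_category(systolic, diastolic):
    """Classify a single reading (mmHg) as normal, elevated, or high."""
    if systolic >= 130 or diastolic > 80:
        return "high"
    if 120 <= systolic <= 129 and diastolic <= 80:
        return "elevated"
    return "normal"
```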
4 Stress Detection and Management Methods

According to its temporal characteristics, stress is divided into three categories: acute, episodic acute, and chronic. Acute stress is felt as a sudden reaction, with a sense of immediacy based on mental or emotional factors, and is often present for only a brief period of time. Episodic acute stress occurs when a person experiences acute stress frequently. Chronic stress is experienced over a longer period of time [15] and increases the chance of long-term, major health issues like body aches, stomach pain, type 2 diabetes, obesity, and heart problems. Heart rate variability, skin conductance, and sleep patterns are used to define the stress detection and management strategy to combat stress.
4.1 Heart Rate Variation

Heart rate variability (HRV) results from variations in the periods between heartbeats and is the common way of predicting physical stress. Stress can raise blood pressure to extremes, which in turn raises the heart rate. The resting heart rate typically varies from 60 to 100 beats per minute; beyond this range it is considered abnormal, and an abruptly accelerating heart rate indicates stress [16]. Smart watches with sensors that measure heart rate and identify whether a person is in a normal or stressed condition are commercially available for highly accurate HRV detection.
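HRV is derived from the inter-beat (RR) intervals. The paper does not state which HRV metric it uses, so the sketch below pairs one standard time-domain measure, the RMSSD, with the 60–100 bpm screening band mentioned above; both function names are ours, for illustration only.

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive RR-interval differences (ms),
    a common time-domain HRV measure (an assumed choice of metric)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def heart_rate_flag(bpm):
    """Screen a resting heart rate against the 60-100 bpm band above."""
    return "normal" if 60 <= bpm <= 100 else "possible stress"
```

A wearable would compute the RR intervals from consecutive beat timestamps and feed them to `rmssd`; a falling RMSSD over time is the kind of trend a stress monitor would flag.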
4.2 Skin Conductance Skin conductance, a component of electrodermal activity (EDA), is also known as galvanic skin response. It is divided into two components: the skin conductance level (SCL), a tonic baseline that changes only slowly, and the skin conductance response (SCR), a phasic component that varies according to events. When under stress, the body produces sweat, the electrical properties of the skin change, and the stress can thereby be identified [17].
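The tonic/phasic split described above can be approximated by treating a moving average of the signal as the SCL and the residual as the SCR. This is a simplified illustration of the decomposition idea, not the processing used in [17]; real EDA pipelines use more careful filtering:

```python
def decompose_eda(signal, window=5):
    """Split a skin-conductance signal (list of floats) into a slowly
    varying tonic level (SCL, centered moving average) and a phasic
    response component (SCR, residual)."""
    half = window // 2
    scl = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        scl.append(sum(signal[lo:hi]) / (hi - lo))
    scr = [x - t for x, t in zip(signal, scl)]
    return scl, scr
```

A constant signal decomposes into a constant SCL and a zero SCR; a brief sweat-driven spike shows up almost entirely in the SCR component.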
4.3 Sleep Pattern Change Variations in sleep are categorized using the REM and NREM sleep patterns. The REM (rapid eye movement) phase, or dream phase, is when the eyes move quickly and fluctuating brain waves produce dreams. The non-REM phase consists of three stages: in the first stage the eyes are merely closed but the person is not yet asleep; the second stage is light sleep; and the third stage is deep sleep, during which the body repairs tissues and strengthens the immune system. Deep sleep itself has two main stages: in stage one the person sleeps deeply for longer than two hours, and in stage two for less than two hours. Stress affects a person when deep sleep declines and sleep patterns change over time.
5 Stress Detection and Management Sensors Several sensors, including a combined GSR and HR sensor, a PPG optical sensor, a GSR sensor, and an HR sensor, are used to detect and predict stress affecting both mental and physical health.
Stress Detection and Performance Analysis Using IoT-Based …
Fig. 1 Schematic diagram for photo detector
Table 3 Photo diode specifications (maximum ratings)

VCC: 3.0–5.5 V
Imax (maximum current draw): 30
Wave length: —
Dimensions: —
A Text Mining Approach for Identifying Agroforestry Research Narratives
Fig. 19 Word cloud from abstracts of various papers in IEEE
topic. K-means is used for analysing the YouTube metadata based on the views and duration of the videos. Topic modelling is also one of the most important techniques in text mining analysis [8]. LDA is one of the best-known models for topic modelling; its output visualization has two parts: an intertopic distance map on the left side and, on the right side, the top 30 salient terms for each particular topic in graphical form. The intertopic distance map shows the distance between topics using PCA, and for every selected topic the graphical representation of the top 30 salient terms is different. Each salient term is associated with a topic, and based on the topic selected, each word is given an estimated frequency.
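The K-means step described above can be sketched in plain Python on toy (views, duration) vectors standing in for the YouTube metadata. This is an illustrative implementation of the algorithm, not the project's actual code, and the naive initialization is a simplification:

```python
def kmeans(points, k, iters=20):
    """Basic K-means on tuples: assign each point to its nearest
    centroid (squared Euclidean distance), recompute centroids as
    cluster means, and repeat."""
    centroids = points[:k]  # naive initialization for the sketch
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        centroids = [
            tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centroids[j]
            for j, cl in enumerate(clusters)
        ]
    return centroids, clusters
```

On video metadata, points with similar view counts and durations end up in the same cluster, which is the grouping behaviour the analysis relies on.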
7 Conclusion and Future Work 7.1 Conclusion The text analysis of agroforestry data extracted from various resources such as YouTube, Wikipedia, and IEEE research papers yields useful insights. It helps in understanding the resources and research carried out in this particular domain. The clustering techniques help in segregating the data into various groups, and the corpus extracted during the analysis helps in identifying the words associated with each keyword. Text summarization techniques such as extractive summarization give a detailed summary of a YouTube video without the necessity of watching it [9]. Overall, this project helps in uncovering several insights and trends around agroforestry-related keywords such as agroforestry and watershed.
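Extractive summarization of the kind mentioned above can be illustrated by scoring each sentence with the corpus word frequencies and keeping the top-scoring sentences. This is a deliberately simplified sketch, not the genetic-algorithm method of [9]:

```python
import re
from collections import Counter


def extractive_summary(text, n_sentences=1):
    """Score each sentence by the total frequency of its words in the
    whole text; return the top-n sentences in their original order."""
    sentences = [s.strip()
                 for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z]+", text.lower()))
    scores = [sum(freq[w] for w in re.findall(r"[a-z]+", s.lower()))
              for s in sentences]
    top = sorted(sorted(range(len(sentences)),
                        key=lambda i: -scores[i])[:n_sentences])
    return " ".join(sentences[i] for i in top)
```

Applied to a video transcript, this keeps the sentences whose vocabulary is most representative of the transcript as a whole.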
P. Monika et al.
Fig. 20 Topics discovered using LDA
7.2 Future Work At present, this project analyses a few keywords such as agroforestry, watershed, trees, soil, forest, and land. Analysing additional keywords related to agroforestry would help in gaining new insights. A dashboard also needs to be developed so that the results are easy for the user to view and understand. The text summarization of YouTube videos is performed on the transcript of the video. If a transcript is not available, the video content should be converted
Fig. 21 Words having frequency > 15
Fig. 22 Word cloud from abstracts of various papers collected through Wikipedia

Table 2 Analysis of YouTube data

Keyword      | Source  | Max views | Max duration in minutes | High-frequency keywords
Agroforestry | YouTube | 118,525   | 534                     | Forest, food, course, farm, plant, crop
Watershed    | YouTube | 1,000,069 | 497                     | Management, video, river, model
Table 3 Analysis of IEEE and Wikipedia

Source    | High-frequency keywords
IEEE      | Land, forest, tree, use, fire
Wikipedia | Tree, agroforestry, forest
from speech to text, and text summarization should then be applied to the extracted speech text. This reduces the dependency on transcripts.
References 1. Schober A, Šimunović N, Darabant A, Stern T (2018) Identifying sustainable forest management research narratives: a text mining approach. J Sustain For 37(6):537–554. https://doi.org/10.1080/10549811.2018.1437451 2. Antons D, Grünwald E, Cichy P, Salge TO (2020) The application of text mining methods in innovation research: current state, evolution patterns, and development priorities. R&D Manag 50:329–351. https://doi.org/10.1111/radm.12408 3. Brandt J (2019) Text mining policy: classifying forest and landscape restoration policy agenda with neural information retrieval. arXiv preprint arXiv:1908.02425 4. Suleiman D, Awajan A (2020) Deep learning based abstractive text summarization: approaches, datasets, evaluation measures, and challenges. Math Probl Eng 2020, Article ID 9365340, 29 pages. https://doi.org/10.1155/2020/9365340 5. Zinngrebe Y, Borasino E, Chiputwa B et al (2020) Agroforestry governance for operationalising the landscape approach: connecting conservation and farming actors. Sustain Sci 15:1417–1434. https://doi.org/10.1007/s11625-020-00840-8 6. Bongini P, Osborne F, Pedrazzoli A, Rossolini M (2022) A topic modelling analysis of white papers in security token offerings: which topic matters for funding? https://doi.org/10.1016/j.techfore.2022.122005 7. Capó M, Pérez A, Lozano JA (2020) An efficient K-means clustering algorithm for tall data. Data Min Knowl Disc 34:776–811. https://doi.org/10.1007/s10618-020-00678-9 8. Suh Y (2021) Sectoral patterns of accident process for occupational safety using narrative texts of OSHA database. https://doi.org/10.1016/j.ssci.2021.105363 9. García-Hernández RA, Ledeneva Y (2013) Single extractive text summarization based on a genetic algorithm. In: Carrasco-Ochoa JA, Martínez-Trinidad JF, Rodríguez JS, di Baja GS (eds) Pattern recognition. MCPR 2013. Lecture notes in computer science, vol 7914. Springer, Berlin. https://doi.org/10.1007/978-3-642-38989-4_38 10. Mupepele A-C, Keller M, Dormann CF (2021) European agroforestry has no unequivocal effect on biodiversity: a time-cumulative meta-analysis. https://doi.org/10.1186/s12862-021-01911-9
Epilepsy Prediction Using Spark Papasani Pravalika, Shaik Shabeer, Jampana Meenakshi, and Fathimabi Shaik
Abstract One of the most frequent neurological disorders in the world is epilepsy. The ability to detect upcoming seizures early has a significant impact on the lives of epileptic patients. In this study, a patient-specific, machine learning-based seizure prediction method is proposed that can be used with long-term scalp electroencephalogram (EEG) recordings. The goal is to detect whether epilepsy is present as early and as accurately as feasible. Raw EEG signals are taken as input, and different machine learning algorithms are used to predict whether epilepsy is present (Daoud and Bayoumi in IEEE Trans Biomed Circ Syst 13:804–813, 2019 [3]). The main purpose of this project is to compare machine learning techniques for predicting epilepsy. All the techniques run on Apache Spark to exploit the advantage of a multi-node cluster. The techniques used are Decision Tree, Random Forest, and SVM. Among these, Random Forest gives the best result with an accuracy of 95.2%, against 92.78% and 81.24% for Decision Tree and SVM, respectively, while the execution time is lowest for Decision Tree, i.e., 29.9 s, compared to Random Forest and SVM. Keywords Epilepsy · Machine learning · EEG · Apache Spark · Multi-node cluster · Random Forest · Decision Tree · SVM
P. Pravalika (B) · S. Shabeer · J. Meenakshi · F. Shaik Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, India e-mail: [email protected] S. Shabeer e-mail: [email protected] J. Meenakshi e-mail: [email protected] F. Shaik e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 B. B. V. L. Deepak et al. (eds.), Intelligent Manufacturing Systems in Industry 4.0, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-99-1665-8_41
P. Pravalika et al.
1 Introduction Epilepsy is a central nervous system (CNS) disorder that affects roughly 1% of the population in India (10 million individuals) and more than 65 million people worldwide; furthermore, one out of every 26 people will develop epilepsy at some point in their lives. Seizures come in a variety of forms, each with its own set of symptoms, such as loss of consciousness, jerking movements, or confusion. Some seizures are much more difficult to detect visually; patients frequently show signs like not reacting or staring blankly for a short period of time. Seizures can strike at any time, causing injuries such as falls, biting the tongue, or losing bladder or bowel control. These are some of the reasons why seizure detection is critical for patients who are under medical monitoring and are suspected of having seizures. This research employs different machine learning models to determine whether a person is suffering from a seizure or not. Epilepsy is the third most common brain disorder. There are several possible causes of epilepsy, one of which is a molecular change that results in unpredictable neuronal behaviour or the translocation of neurons. Even though the primary cause of epilepsy is still unknown, finding it early can be helpful for treatment. Patients with epilepsy are treated with medications or surgery; however, these techniques are often ineffective [1]. Seizures that cannot be treated effectively limit the patient's daily life: such patients are unable to manage their work and perform their usual activities, which leads to social isolation and financial difficulties.
So, early prediction of epilepsy helps patients take appropriate action based on their type of epilepsy; early treatment has a better chance of reducing the impact on the patient. Our main aim is therefore to predict epilepsy from brain signals recorded over a specified time, called EEG signals. In real time, prediction of epilepsy helps patients to know about the disease, take the required treatment early, and improve their chances of recovery. Predicting epileptic seizures before they occur is extremely important for medication-assisted seizure prevention. Machine learning approaches and computational technologies are utilized to predict epileptic seizures based on EEG readings. Big data technologies play an important role in the health domain. Big data frameworks like Hadoop HDFS and MapReduce have been used for seizure prediction: HDFS stores the big data, and MapReduce processes it. However, MapReduce has a number of limitations, such as the I/O cost of each MapReduce job, which make it unsuitable for iterative machine learning algorithms. To overcome these limitations, researchers started using Spark, an in-memory distributed computational framework suited to interactive and iterative machine learning applications [2]. Spark uses
Resilient Distributed Datasets (RDDs), which are distributed across the RAM of all the machines in the cluster. This research work uses HDFS to store the large EEG dataset and Spark to process the data on a multi-node cluster for predicting seizures. The paper is organized as follows: Sect. 2 presents the literature survey, Sect. 3 the proposed method, and Sect. 4 the results; the comparison is discussed in Sect. 5, and the final section covers the conclusion and future work.
2 Literature Survey In [3], a machine learning-based approach to the seizure detection problem using linear dimensionality reduction and classification is presented. With this method, they averaged 99.32% accuracy, 99.41% sensitivity, and 95.25% specificity across all patients (N = 24); additionally, their Spark-based seizure detection system has an average latency of roughly 0.38 ms. In [1], a method for choosing training data for EEG emotion analysis using machine learning algorithms is studied: emotion classification was carried out by modeling KNN and SVM classifiers on EEG training data selected according to Valence and Arousal values determined using the Self-Assessment Manikin (SAM) method. In [2], the results demonstrate that the classifiers achieved 93.11% and 92.67% classification accuracy, respectively, with chosen HOS-based characteristics for the automatic identification of epileptic electroencephalogram data; about 2 h of EEG recordings from ten patients were used in this investigation. In [4], their approach, Parallel Real-Time Seizure Detection in Large EEG Data, opens up new opportunities for employing private cloud infrastructures to identify epileptic seizures in EEG data in real time. In [5], the results of a MapReduce-based rotation forest classifier for epileptic seizure prediction demonstrate that the suggested framework considerably cuts training time while achieving a high degree of classification performance. In [6], a deep learning-based IoT platform for reliable epileptic seizure prediction is proposed. They employed a model based on Convolutional Neural Networks (CNNs) to extract significant spatiotemporal information from non-stationary and nonlinear EEG data, making the system reliable, efficient, and suited for real-time epileptic seizure prediction.
The suggested system is an excellent fit for a smart healthcare system to improve the quality of life of epileptic patients owing to its high prediction accuracy of 96.1%, lower complexity, and smaller memory footprint. In [7], they discuss using intracranial electroencephalogram (iEEG) recordings of brain activity to predict epileptic episodes with data-analytic modeling. Although it is commonly acknowledged that statistical properties of the iEEG signal change before seizures, robust seizure prediction remains a difficult problem since data-analytic modeling is subject-specific in nature. Good prediction performance is only achievable if the training data contain a substantial number of seizures, i.e., at least 5–7
seizures. The suggested approach employs subject-specific modeling and imbalanced training data, and it also uses three distinct time scales during the training and testing rounds. In [8], scalp EEG is discussed; the scalp EEG signals used in that study are weaker than intracranial EEG signals but easier to collect. The prediction procedure consisted of four steps: preprocessing of the epileptic scalp EEG with the discrete wavelet transform (DWT); feature extraction from the EEG signals in the time domain; comparison of a general model and a patient-specific model; and prediction optimization using RF with non-identical feature and sub-band combinations. The coefficient of variation (CV) with all sub-bands was optimally chosen, and the seizure prediction accuracy reached 99.8102%; numerical calculations demonstrated the effectiveness of the proposed prediction mechanism. In [9], the Random Forest technique was used to categorize EEG data into three separate seizure stages: ictal, pre-ictal, and normal. Other well-known machine learning techniques used for performance analysis include Multilayer Perceptron, Bayes Net, Radial Basis Function Neural Network, Naive Bayes, and C4.5 Decision Tree. The outcomes of the tests demonstrate that the Random Forest classifier has a classification accuracy of 99.40%, a specificity of 99.66%, a sensitivity of 99.40%, and a mean square error of 0.0871. In [10], several ANN models were used, and the suggested approach provides a dependable method for seizure prediction with better performance. The proposed system combines a Random Forest technique for feature selection with ANN classifiers for seizure prediction to create an automated seizure prediction framework. There are several types of neural networks, each with its own specific use case. This methodology allows for faster analysis of specified EEG signal properties to detect seizures and minimizes the time complexity of processing big datasets.
Among the several ANN approaches (RPROP+, RPROP−, SAG, SLR), SLR has a high accuracy of 99.56% and a precision of 99.2%. In [11], the primary difficulty of seizure categorization for drug-resistant epileptic patients is addressed using two alternative algorithms. Two cascaded classification models, SVM and deep ANN, are described and implemented; the models were constructed to extract features and classify pre-ictal and inter-ictal EEG data. The first classification approach achieved accuracies of 97% and 96.7% with cross-validation, whereas the second classification model achieved accuracies of 92% and 92.3% with cross-validation.
3 Proposed System The system architecture is shown in Fig. 1. Read the dataset: The dataset, taken from the UCI repository, is placed in HDFS [12] and then read. The dataset has 179 columns, 178 of which are numerical values; the last column holds the label y ∈ {1, 2, 3, 4, 5}.
Fig. 1 Architecture diagram
Preprocessing: As a preprocessing step, check whether the dataset has null values. If present, remove them using different techniques, impute missing values, and convert categorical values to numerical ones where present [13]. As the label is multiclass, it is converted into a binary class {0, 1}. Splitting the dataset: All columns except the label y are assembled into a feature vector, with y as the target label. The dataset is then split into training data (70%) and testing data (30%). The models are trained on the training data; the testing data are used to evaluate the models. Implementing the models: Different machine learning models, namely SVM, Random Forest, and Decision Tree, are implemented. Evaluation of models: Model performance is measured using metrics such as accuracy, precision, recall, and F1-score. All the above steps are written in Python, and the saved Python file is submitted to Apache Spark for processing.
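The label conversion and 70/30 split can be sketched as follows. This is an illustrative pure-Python version of the preprocessing, not the paper's Spark code; it assumes (as is common with the UCI epileptic seizure dataset) that class 1 is the seizure class and classes 2–5 are non-seizure:

```python
import random


def binarize_labels(y):
    """Map the multiclass label {1..5} to binary:
    1 = seizure (original class 1), 0 = all other classes."""
    return [1 if label == 1 else 0 for label in y]


def train_test_split(rows, test_fraction=0.3, seed=42):
    """Shuffle and split rows into training (70%) and testing (30%)
    portions, matching the split used in the proposed system."""
    rows = rows[:]                      # avoid mutating the caller's list
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * (1 - test_fraction))
    return rows[:cut], rows[cut:]
```

The fixed seed makes the split reproducible, which matters when comparing classifiers on the same train/test partition.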
Apache Spark: Apache Spark is a cluster computing platform designed to be fast and general-purpose [14]. At its core, Spark is a "computational engine" in charge of scheduling, distributing, and monitoring applications made up of many computational jobs spread across a computing cluster of worker machines. Spark is also intended to be user-friendly, with straightforward APIs in Python, Java, Scala, and SQL, as well as many built-in libraries, and it works well with other big data technologies. In particular, Spark can run on Hadoop clusters and connect to any of its data sources, including any file in the Hadoop Distributed File System (HDFS) or other storage systems supported by the Hadoop APIs (local file system, Amazon S3, Cassandra, Hive, HBase, etc.). Spark ML: Spark includes a library of common machine learning (ML) functionality known as MLlib. MLlib provides a variety of machine learning algorithms, such as classification, regression, clustering, and collaborative filtering, as well as model evaluation and data import; all of these are designed to scale out across a cluster. ML Algorithms: 1. Support Vector Machine: Support Vector Machines are supervised learning models with associated learning algorithms that analyse data for classification and regression. The decision boundary fitted to the data is referred to as a hyperplane; support vectors ("essential" training tuples) and margins are used by SVM to find this hyperplane (which is defined by the support vectors). 2.
Decision Tree: A Decision Tree is a supervised learning method used for both regression and classification problems, though it is most often preferred for classification. It is a tree-structured classifier in which internal nodes represent dataset attributes, branches represent decision rules, and leaf nodes provide the classification results [2]. 3. Random Forest: Random Forest is a supervised machine learning algorithm commonly used in classification and regression tasks. It constructs Decision Trees from distinct samples, taking the majority vote for classification and the average for regression. An important quality of Random Forest is its ability to handle datasets containing both categorical and continuous values, for regression as well as classification, and it generally achieves better classification results.
4 Results and Discussion This work was executed both on a multi-node cluster and on the local system using Jupyter Notebook; in both settings the same three algorithms, Random Forest, Decision Tree, and SVM, were used. The execution in the multi-node cluster is as follows. The operating system used is Ubuntu 19, with all the necessary tools installed, including Hadoop, Apache Spark, Java, Python, Scala, and SSH. Apache Spark 2.7 was used to build the entire application, with Java JDK version 1.8, generally known as Java 8. Step 1: Installing the required software: Update the Ubuntu operating system to the latest version; the version used here is 19.04. Install the prerequisites: Hadoop (version 3.2.0 in this application) and Apache Spark, which is available over the internet (the Spark version utilized is Spark 2.7). Download and install the Java JDK and JRE along with Python; the respective versions used are Java 8 and Python 3.7. Step 2: Setting up the multi-node cluster: . The initial step is to check that the installed versions of Apache Spark, Hadoop, Java, Python, Scala, etc. are the same on all the nodes that will form the cluster. . Configure the /etc/hosts file, which states the master and the slaves in the cluster, entering the hostnames and their IP addresses. . Install and configure SSH to establish a connection between the master node and the slave nodes. Generate an SSH key on the master and copy it to the slaves for passwordless SSH access to the workers; test the connection using the ssh command. . Now set up the Hadoop and YARN clusters, format HDFS, and make sure that the NameNode and DataNode are working. . Set up the Spark environment by editing the .bashrc file on the master as well as all the workers.
This .bashrc file contains the configuration and paths of the software used by the Apache Spark application. . Also edit the Spark master configuration by editing the spark-env.sh file, and add the workers to the slaves file located in the conf directory of the Spark folder. . Now you can start the cluster, which has been configured with Hadoop, YARN, and Apache Spark. . To start the cluster, navigate to the Spark directory and type the command start-all.sh. . To check whether all the services in the cluster have started and are running successfully, type the command jps, which will display the daemons on the master and slaves. . Now you can check the UIs of Spark, Hadoop, and YARN along with HDFS, which display statistics and information on the working of the cluster.
. The UI shown in Fig. 4.1.8 displays information relating to the data nodes in the cluster. . The UI on port 9870 on the master displays an overview of HDFS and the files in it. . The UI on port 8080 is the Spark cluster UI, where the workers are displayed along with their resources such as IP address, node state, cores, and available memory. The different machine learning models used are: 1. Support Vector Machine: The method is as follows: Step 1: Load the required libraries. Step 2: Import the dataset and extract the X and Y variables. Step 3: Split the dataset into train and test groups. Step 4: Initialize the SVM classifier model. Step 5: Fit the SVM classifier model. Step 6: Make predictions. Step 7: Analyse the performance of the model (Figs. 2 and 3). 2. Decision Tree: The method is as follows:
Fig. 2 Results of SVM in cluster
Fig. 3 Results of SVM without cluster
Step 1: Construct the tree with a root node that contains the whole dataset S. Step 2: Find the best attribute in the dataset using an Attribute Selection Measure (ASM). Step 3: Subdivide S into subsets containing the possible values of the best attribute. Step 4: Build the Decision Tree node based on the best attribute. Step 5: Recursively build new Decision Trees using the subsets of the dataset obtained in step 3, continuing until the nodes can no longer be categorized; such a final node is a leaf node (Figs. 4 and 5). 3. Random Forest: The method is as follows: Step 1: Start by randomly choosing samples from the given dataset. Step 2: For every sample, the algorithm constructs a Decision Tree, and each Decision Tree's predicted outcome is obtained. Step 3: A vote is taken over the predicted outcomes in this step. Step 4: Finally, the most voted prediction is selected as the final predicted result (Figs. 6 and 7). Fig. 4 Results of decision tree in cluster
Fig. 5 Results of decision tree without cluster
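The Attribute Selection Measure in step 2 of the Decision Tree method is commonly information gain based on entropy. A minimal sketch of that measure (our own illustration, not the paper's code):

```python
from math import log2
from collections import Counter


def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())


def information_gain(rows, labels, attr_index):
    """Reduction in label entropy obtained by splitting the rows on the
    values of the attribute at attr_index."""
    groups = {}
    for row, label in zip(rows, labels):
        groups.setdefault(row[attr_index], []).append(label)
    remainder = sum(len(g) / len(labels) * entropy(g)
                    for g in groups.values())
    return entropy(labels) - remainder
```

The attribute with the highest information gain becomes the split at the current node, which is exactly the "best attribute" selection described in steps 2–4.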
Fig. 6 Results of random forest in cluster
Fig. 7 Results of random forest without cluster
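Steps 2–4 of the Random Forest procedure amount to a majority vote over the individual trees' predictions, which can be sketched as follows (an illustrative outline, not the MLlib implementation; the `.predict` interface on the tree objects is an assumption of the sketch):

```python
from collections import Counter


def majority_vote(tree_predictions):
    """Combine per-tree class predictions into the forest's final output
    by taking the most common predicted class."""
    return Counter(tree_predictions).most_common(1)[0][0]


def forest_predict(trees, sample):
    """Each fitted tree predicts independently on the sample; the class
    predicted most often wins (steps 2-4 of the Random Forest method)."""
    return majority_vote([t.predict(sample) for t in trees])
```

For regression, the same structure would average the per-tree outputs instead of voting, as noted in the algorithm description above.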
Comparing the models All models are compared using several measures, such as accuracy, precision, recall, and F1-score. The execution time is considered both in the cluster and without the cluster. By comparing them, we obtain the best model for forecasting.
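The evaluation metrics used throughout, accuracy, precision, recall, and F1-score, all follow from the confusion matrix. A reference sketch for binary labels (illustrative, not the project's evaluation code):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from binary labels
    (1 = positive/seizure, 0 = negative)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1
```

F1 is the harmonic mean of precision and recall, so a classifier must score well on both to obtain a high F1.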
5 Result Analysis We considered accuracy, precision, recall, and F1-score as evaluation metrics for our project and compared all the models with respect to them. Among the three algorithms, the best result was obtained by the Random Forest algorithm, with an accuracy of 0.952 (in cluster) and 0.944 (without cluster). The execution time for the Decision Tree is lowest both in and without the cluster, and in-cluster execution takes less time than non-cluster execution.
5.1 Comparison of the Machine Learning Classifiers in Multi-node Cluster The following figures illustrate the comparison analysis of all the classifiers in the cluster that are used in this work (Figs. 8, 9, 10 and 11).
Fig. 8 Accuracy obtained in multi-node cluster
Fig. 9 Precision obtained in multi-node cluster
Fig. 10 Recall obtained in multi-node cluster
Fig. 11 F1-score obtained in multi-node cluster
Table 1 Execution time of each classifier in multi-node cluster

Classifier    | Execution time (s)
Random Forest | 36.5
Decision Tree | 29.9
SVM           | 87.7
The following table illustrates the run time of all the classifiers in cluster that are used in this work (Table 1 and Fig. 12).
Fig. 12 Bar chart of execution time of each algorithm in cluster
5.2 Comparison of the Machine Learning Classifiers Without Cluster The following figures illustrate the comparison analysis of all the classifiers without cluster used in this work (Figs. 13, 14, 15 and 16). The following table illustrates the run time of all the classifiers without cluster used in this work (Table 2 and Fig. 17). Fig. 13 Accuracy obtained without cluster
Fig. 14 Precision obtained without cluster
Fig. 15 Recall obtained without cluster
Fig. 16 F1-score obtained without cluster
Table 2 Execution time of each classifier without cluster

Classifier    | Execution time (s)
Random Forest | 470.1
Decision Tree | 196.7
SVM           | 201.4
6 Conclusions and Future Work In this research work, we used different machine learning algorithms to predict epilepsy based on EEG signals. Machine learning is an important decision support tool for epilepsy prediction. We tried the Random Forest, Decision Tree, and SVM
Fig. 17 Bar chart of execution time of each algorithm without cluster
machine learning approaches both in and without the cluster, with Random Forest having the greatest accuracy of 95.2% and 94.4% in cluster and non-cluster, respectively; it is the best of the machine learning techniques we used for predicting epilepsy. We also compared the execution times, where the algorithms in the cluster took less time than in the non-cluster setting. In the future, we will develop an application that predicts epilepsy using technologies such as the Internet of Things and the cloud [15]. The application will help epilepsy patients to know their health status and may also recommend suggestions to improve their condition. We will also work further on different machine learning techniques to improve the performance of epilepsy prediction.
References 1. Singh K, Malhotra J (2019) IoT and cloud computing based automatic epileptic seizure detection using HOS features based random forest classification. J Ambient Intell Humanized Comput 1–16 2. Daoud H, Williams P, Bayoumi M (2020) IoT based efficient epileptic seizure prediction system using deep learning. In: 2020 IEEE 6th world forum on internet of things (WF-IoT). IEEE 3. Daoud H, Bayoumi MA (2019) Efficient epileptic seizure prediction based on deep learning. IEEE Trans Biomed Circ Syst 13(5):804–813 4. Vergara PM et al (2017) An IoT platform for epilepsy monitoring and supervising. J Sens 2017 5. Alhussein M, Muhammad G, Hossain MS et al (2018) Cognitive IoT-cloud integration for smart healthcare: case study for epileptic seizure detection and monitoring. Mob Netw Appl 23:1624–1635 6. Vergara PM, Marín E, Villar J, González VM, Sedano J (2017) An IoT platform for epilepsy monitoring and supervising. J Sens 2017:1–18 7. Yin Y, Zeng Y, Chen X, Fan Y (2016) The internet of things in healthcare: an overview. J Ind Inf Integr 1:3–13 8. Hou L, Zhao S, Xiong X, Zheng K, Chatzimisios P, Hossain MS, Xiang W (2016) Internet of things cloud: architecture and implementation. IEEE Commun Mag 54(12):32–39
P. Pravalika et al.
9. Litt B, Echauz J (2002) Prediction of epileptic seizures. Lancet Neurol 1(1):22–30
10. Gupta A, Chakraborty C, Gupta B (2019) Medical information processing using smartphone under IoT framework. In: Studies in systems, decision and control, pp 283–308
11. Yun J-S, Kim JH (2018) A study on "Training data selection method for EEG emotion analysis using machine learning algorithm". Int J Adv Sci Technol 119:79–88
12. Ahmed L et al (2016) Parallel real time seizure detection in large EEG data. In: IoTBD
13. Chua KC et al (2009) Automatic identification of epileptic electroencephalography signals using higher-order spectra. Proc Inst Mech Eng H 223(4):485–495
14. Hussein R et al (2018) Epileptic seizure detection: a deep learning approach. arXiv preprint arXiv:1803.09848
15. Azimi I et al (2017) Internet of things for remote elderly monitoring: a study from user-centered perspective. J Ambient Intell Humanized Comput 8(2):273–289
16. Jukic S, Subasi A (2017) A MapReduce-based rotation forest classifier for epileptic seizure prediction. arXiv preprint arXiv:1712.06071
Application of Machine Learning Algorithms for Order Delivery Delay Prediction in Supply Chain Disruption Management Arun Thomas and Vinay V. Panicker
Abstract Over the last few years, studies on supply chain disruptions have gained importance as the frequency of disruptive events increases. Delivery delay of orders is one of the impacts of supply chain disruptions on the downstream side of a supply chain. Predicting delays in order delivery is one of the most difficult predictive challenges in supply chain disruption management that many companies face today. This work aims to address this research gap by developing a machine learning-based predictive framework for order delivery delay. A dataset from a public repository is considered for the analysis, and the area under the curve (AUC) score is selected as the performance metric to evaluate the prediction model. From the results, it can be inferred that the Random Forest model performs well with an AUC score of 0.98.
Keywords Supply chain disruptions · Machine learning · Delivery delay · Prediction model
1 Introduction
Any occurrence that disrupts the manufacture, sale, or distribution of goods is referred to as a supply chain disruption. Natural disasters, regional conflicts, and pandemics are examples of such events. In the last decade, businesses have faced an increasing number of supply chain disruptions caused by a variety of natural, man-made, and legal events [1]. According to research conducted by the Business Continuity Institute (BCI), more than 56% of businesses worldwide face supply chain disruptions on an annual basis, and companies are beginning to take supply chain disruptions more seriously [2]. Over the last two decades, researchers have mainly focused on disruption management with quantitative modeling approaches such as simulation modeling and stochastic optimization. Singh et al. [3] developed a public distribution
A. Thomas · V. V. Panicker (B) Supply Chain and Systems Simulation Laboratory, Department of Mechanical Engineering, National Institute of Technology Calicut, Kozhikode 673601, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 B. B. V. L. Deepak et al. (eds.), Intelligent Manufacturing Systems in Industry 4.0, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-99-1665-8_42
system (PDS) simulation model with different scenarios to analyze disruptions in the food supply chain. With the advent of digitalization, information, robotics, and communication technologies, and artificial intelligence (AI), the "fourth industrial revolution" is underway [4]. Machine learning (ML) is concerned with the development and deployment of algorithms that "learn" from previous experience [5]. Machines' ability to process vast volumes of data has increased, which has driven the development of machine learning; especially for disruptive and discontinuous data, machines can make sense of hidden patterns and intricate relationships and draw proper conclusions where people fail to do so [6]. On-time delivery of orders is one of the key metrics for assessing the performance of a supply chain [7]. Emerging technologies such as ML and AI are widely used in the management of supply chain disruptions [8]. This research investigates the possibility of utilizing data analytics and machine learning in supply chain disruption management by developing a data-driven machine learning-based framework for predicting the delivery delay of orders. Early identification of delivery delays will help supply chain managers take corrective measures and mitigation strategies to avoid their impact on supply chain performance. The novelty of the presented work lies in the manner in which emerging technologies such as ML and AI are integrated with the supply chain disruption management process through a prediction framework for order delivery delay. The significant contributions of the present work are as follows:
. To demonstrate the application of emerging technologies such as machine learning in supply chain disruption management.
. To develop a generalized machine learning-based risk prediction framework for order delivery delay.
2 Literature Review
The potential application of emerging technologies such as ML and AI in supply chain disruption management has been explored in the recent literature. However, very few papers have considered the application of machine learning-based prediction modeling in the supply chain. The literature in the selected domain was identified through a systematic procedure; Fig. 1 shows the methodology adopted. The Web of Science data source was used for the literature search. A systematic search routine was devised and uniformly implemented to obtain results appropriate to the requirements. Relevant keywords were selected and categorized as primary and secondary keywords. The primary keywords are "supply chain disruptions" or "supply chain risk," and the secondary keywords are "machine learning" or "Big data" or "Artificial Intelligence." From the final search result, relevant papers related to prediction modeling in the supply chain were identified. The search includes journal articles published from 2013 to 2021 (Fig. 2).

Fig. 1 Systematic methodology for identifying literature in the selected domain

Fig. 2 Number of articles published over the last nine years

Baryannis et al. [9] developed a risk prediction model for predicting delivery delay on the upstream side of a supply chain. They considered the interpretability along with the performance of the model, and a case study was conducted to validate the proposed model. Brintrup et al. [10] explored the application of data analytics and machine learning in the supply chain for predicting disruption on the supplier side. They considered an imbalanced dataset for the analysis, and the developed prediction model was validated with a case study from a complex asset manufacturing company. Shajalal et al. [11] presented a backorder prediction model in the supply chain using a deep neural network. The authors considered a dataset from a public repository for the analysis. The results show that the proposed prediction model performs well with an AUC score of 0.95. Kosasih and Brintrup [12] discussed a Graph Neural Network (GNN)-based link prediction problem for supply chain visibility. The authors validated the proposed prediction model with a real-case automotive network. To make their method more transparent, the authors highlighted the input attributes that influenced the GNN's decisions using Integrated Gradients.
3 Data-Driven Delivery Delay Prediction Framework
The efficiency of the supply chain disruption management process can be improved by integrating artificial intelligence within it. Figure 3 shows a two-way interaction framework between supply chain experts and AI experts. The right side of the figure represents the steps involved in traditional disruption management planning in the supply chain, and the left side shows the main steps involved in the data-driven disruption management methodology. Effective interaction between both teams is required for effective disruption management planning.
Fig. 3 Data-driven delay prediction framework [9]
Table 1 Feature subset used for model development

S. No | Feature name     | Data type
1     | Origin port      | Categorical
2     | Carrier          | Categorical
3     | Customer         | Categorical
4     | Product ID       | Categorical
5     | Plant code       | Categorical
6     | Destination port | Categorical
7     | Unit quantity    | Numerical
8     | Weight           | Numerical
Table 2 Summary of the dataset

Dataset                                | Number of features | Positive class | Negative class | Total | Imbalance ratio
Supply chain logistics problem dataset | 15                 | 192            | 9023           | 9215  | 3:97
3.1 Data Collection
The dataset for the development of the prediction model was downloaded from a public repository (https://doi.org/10.17633/rd.brunel.7558679.v2); it is a logistics problem dataset of a freight shipping company. Initially, the dataset contains 12 features, including identity details. These identity features are removed from further analysis. Table 1 shows the feature subset used for the development of the prediction model. The output feature is created from the delivery date of the material: if the delivery date of an item exceeds its scheduled delivery date, it is coded as 1; otherwise, 0. An output value of 1 means there is a delay in the delivery of the item, and 0 means there is no delay. The proposed model aims to predict the risk involved in the delivery of items from the historical data available to the company. Table 2 shows the summary of the selected dataset. The 1 and 0 values are coded as the positive class and negative class, respectively.
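The labeling rule described above reduces to a one-line comparison in pandas. A minimal sketch — the column names and dates here are illustrative assumptions, not the dataset's actual headers:

```python
import pandas as pd

# Illustrative order records (column names are assumed for this sketch).
orders = pd.DataFrame({
    "scheduled_delivery": pd.to_datetime(["2021-05-01", "2021-05-03", "2021-05-07"]),
    "actual_delivery":    pd.to_datetime(["2021-05-04", "2021-05-02", "2021-05-07"]),
})

# Output label: 1 if the item arrived after its scheduled date, else 0.
orders["delay"] = (orders["actual_delivery"] > orders["scheduled_delivery"]).astype(int)
```

An on-time or early delivery (including delivery exactly on the scheduled date) is coded 0; only a strictly later delivery is coded 1.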
3.2 Data Preprocessing and Analysis
The dataset contains 9215 order records, of which 3% of orders were shipped later than the scheduled date. Initially, the dataset contains twelve features, and identity details are removed from further analysis. The features with a large proportion of missing values are removed from the analysis, and the remaining missing values are replaced with the mean.

Fig. 4 Class imbalance statistics of the dataset
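These preprocessing steps (dropping identity columns, dropping mostly missing features, mean imputation) might be sketched in pandas as follows; the column names and the 50% missing-value threshold are illustrative assumptions, not values stated in the paper:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({
    "order_id": [101, 102, 103, 104],           # identity column (assumed name)
    "weight":   [3.2, np.nan, 5.1, 4.0],
    "mostly_missing": [np.nan, np.nan, np.nan, 7.0],
})

df = df.drop(columns=["order_id"])              # remove identity details
keep = df.columns[df.isna().mean() <= 0.5]      # drop features that are mostly missing
df = df[keep]
df = df.fillna(df.mean(numeric_only=True))      # mean-impute the remaining gaps
```

Mean imputation is the simplest option; median imputation is a common alternative when the feature distribution is skewed.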
3.3 Class Imbalance
The selected dataset is highly imbalanced, since the positive class occurs far less frequently than the negative class, i.e., the orders that do not have delivery risk. Figure 4 shows the imbalance statistics of the dataset. In a binary classification problem, supervised machine learning algorithms are biased toward the majority class. This problem can be resolved by using suitable oversampling methods. The Synthetic Minority Oversampling Technique (SMOTE) is used for balancing the dataset [13].
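The core idea of SMOTE [13] is to synthesize new minority samples by interpolating between a minority sample and one of its nearest minority neighbours. A minimal NumPy sketch of that idea (not the paper's implementation; in practice one would use imbalanced-learn's `SMOTE` class):

```python
import numpy as np

def smote_oversample(X_min, n_new, k=3, seed=0):
    """Create n_new synthetic minority samples: pick a minority sample,
    pick one of its k nearest minority neighbours, and interpolate at a
    random point on the line segment joining them."""
    rng = np.random.default_rng(seed)
    X_min = np.asarray(X_min, dtype=float)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        dists = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(dists)[1:k + 1]   # skip the sample itself
        j = rng.choice(neighbours)
        gap = rng.random()                         # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)
```

Because each synthetic point is a convex combination of two real minority points, it always lies inside the minority class's bounding region rather than being a verbatim duplicate.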
3.4 Feature Engineering and Selection
In this work, feature engineering consists of converting the object- or string-type features into numerical values. The forward feature selection method is used to identify the best features for model building. After feature selection, the dataset contains eight features, of which six are categorical and two are numerical. The categorical features are encoded with the frequency encoding method.
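Frequency encoding replaces each category with how often it occurs, so frequent and rare categories become numerically distinguishable. A short pandas sketch with a hypothetical carrier column (the category names are invented for illustration):

```python
import pandas as pd

# Hypothetical categorical feature; the real dataset's categories differ.
df = pd.DataFrame({"carrier": ["V44_3", "V44_3", "A1", "B2", "A1", "V44_3"]})

# Frequency encoding: map each category to its relative frequency.
freq = df["carrier"].value_counts(normalize=True)
df["carrier_enc"] = df["carrier"].map(freq)
```

Unlike one-hot encoding, this keeps a single numeric column per categorical feature, which matters here because features such as Product ID can have many distinct levels.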
3.5 Selection of Algorithm and Performance Metrics The recent literature has explored the application of predictive modeling in the supply chain by using various algorithms, including K-Nearest Neighbor (KNN),
Random Forest (RF), Support Vector Machine (SVM), Logistic Regression, and Deep Learning. The selection of the algorithm and performance metric becomes highly influential when the dataset is imbalanced. The performance metric commonly used in binary classification problems is accuracy, but for problems with imbalanced outcomes, accuracy is not a suitable metric to assess the performance of the model [14]. The AUC score is widely used for analyzing the performance of a model when the data are imbalanced [15].
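The AUC score has a direct probabilistic reading: it is the probability that a randomly chosen positive (delayed) order receives a higher predicted score than a randomly chosen negative one. A minimal, library-free sketch of that pairwise definition (O(n²), for illustration only; scikit-learn's `roc_auc_score` computes the same quantity efficiently):

```python
def auc_score(y_true, scores):
    """AUC as the fraction of (positive, negative) pairs in which the
    positive example is scored higher; ties count as half a win."""
    pos = [s for s, y in zip(scores, y_true) if y == 1]
    neg = [s for s, y in zip(scores, y_true) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # prints 0.75
```

This definition also makes clear why AUC is insensitive to class imbalance: it depends only on the relative ranking of positives against negatives, not on their proportions.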
3.6 Results and Discussion
The risk prediction model was developed using three machine learning algorithms: Logistic Regression, K-Nearest Neighbors, and Random Forest. The data points are divided into training and test sets, with 80% for training the model and 20% for testing. The proposed model is tested with two oversampling techniques: SMOTE and a combination of over- and undersampling methods (SMOTE–Tomek). The AUC score is selected as the performance metric to evaluate the performance of the proposed model. The performance metric values of the algorithms are given in Table 3. A stratified five-fold cross-validation technique is used to minimize overfitting in training. From the results, it can be inferred that the Random Forest model performs well with both oversampling techniques, giving an AUC score of 0.98. The Logistic Regression model performs better with the SMOTE oversampling technique, giving an AUC score of 0.83.

Table 3 Performance metric values of different algorithms

Oversampling technique | Algorithm           | Precision | Recall | F1_score | Accuracy | AUC
SMOTE                  | Logistic regression | 0.61      | 0.91   | 0.73     | 0.67     | 0.83
SMOTE                  | K-nearest neighbors | 0.81      | 0.91   | 0.86     | 0.85     | 0.63
SMOTE                  | Random forest       | 0.97      | 0.98   | 0.96     | 0.97     | 0.98
SMOTE + Tomek          | Logistic regression | 0.63      | 0.90   | 0.73     | 0.67     | 0.77
SMOTE + Tomek          | K-nearest neighbors | 0.84      | 0.93   | 0.88     | 0.87     | 0.97
SMOTE + Tomek          | Random forest       | 0.98      | 0.99   | 0.99     | 0.98     | 0.98

The Receiver Operating Characteristic (ROC) curves of the chosen models for a specific case of the problem are shown in Fig. 5. The AUC is calculated for each model from the traced curves, where an area of 0.50 is regarded as a worthless test and an area of 1.00 as a perfect test. The AUC score, computed as the area under the ROC curve shown in Fig. 5, represents the probability that a particular classifier will rank a random delayed order, i.e., a positive class, above a random non-delayed order, i.e., a negative class. All the models achieved AUC scores better than 0.80 with at least one of the oversampling methods, which is considered exceptional for a diagnostic system, whereas a random classifier is expected to provide an AUC of 0.50.

Fig. 5 Receiver operating characteristic curves of the different models

The area under the ROC curve represents the predictive capability of the developed models. From the figure, it can be inferred that the Random Forest model has the highest predictive capability, with an AUC score of 0.98. Since the dataset considered in this research is imbalanced, accuracy and error cannot be taken as the metrics to evaluate the performance of the prediction model on the training and test sets. Therefore, to compare the performance of the prediction model on the training and test sets, the AUC scores of the RF model with different oversampling methods were computed and are shown in Fig. 6. From the figure, it can be inferred that the model has not overfitted, since a balanced performance is observed on both the training and test sets.
Fig. 6 AUC scores of the RF model in training and test sets with different oversampling methods (without oversampling: 0.8531, 0.8395; SMOTE: 0.9812, 0.9730; SMOTE + Tomek: 0.9895, 0.9792)
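The stratified five-fold cross-validation mentioned above can be sketched without any ML library; each fold preserves the positive/negative ratio of the full dataset. This is a minimal illustration, not the authors' code — in practice scikit-learn's `StratifiedKFold` would be used:

```python
import random
from collections import defaultdict

def stratified_kfold(y, k=5, seed=0):
    """Yield k (train_idx, test_idx) splits that preserve class proportions."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, label in enumerate(y):
        by_class[label].append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        for j, i in enumerate(idxs):       # deal each class round-robin
            folds[j % k].append(i)
    for f in range(k):
        test = sorted(folds[f])
        train = sorted(i for g in range(k) if g != f for i in folds[g])
        yield train, test
```

With an imbalance as severe as 3:97, stratification matters: a plain random split could leave a fold with almost no positive (delayed) orders, making the fold's AUC meaningless.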
3.7 Comparison of the Proposed Prediction Model with Existing Models
Shajalal et al. [11] presented a prediction model for product backorders in the supply chain using a deep neural network; the model performed well with an AUC score of 0.95. Baryannis et al. [9] developed a delivery delay prediction model for suppliers using the support vector machine algorithm; the model gives an AUC score of 0.94. The authors discussed the interpretability of the selected performance metrics, and the developed model was validated with a case study. The present research explores an order delivery delay prediction problem, and the developed model performs well with an AUC score of 0.98.
4 Conclusions and Future Research Directions
4.1 Conclusion
On-time delivery of products is one of the main problems that companies face nowadays. Since it has a significant impact on supply chain performance, companies are focusing on smoothing the delivery performance of their supply chains. Predictive modeling is one of the methods to identify the risk involved in the delivery of orders. It helps companies identify the chance of delivery delay of orders and take corrective measures to mitigate its impact. The present work developed a prediction model for order delivery delay on the downstream side of the supply chain. An imbalanced supply chain dataset from a public repository was considered for the analysis. Two different oversampling techniques were adopted for balancing the dataset. Feature engineering techniques were used to find the best subset of features for model creation. Since the selected dataset was imbalanced, the area under the ROC curve was selected as the performance metric to assess the predictive capability of the model. A stratified five-fold cross-validation technique was used to minimize overfitting in training. From the results, it can be inferred that the Random Forest model performs well with the SMOTE–Tomek oversampling method, giving an AUC score of 0.98.
4.2 Future Research Directions The present work can be extended by adopting advanced machine learning algorithms such as Deep Learning. Improving the predictive performance of the model by adding engineered features from the given feature space is also a promising research direction.
References
1. MacKenzie C, Barker K, Santos J (2014) Modeling a severe supply chain disruption and postdisaster decision making with application to the Japanese earthquake and tsunami. IIE Trans 46:1243–1260
2. Katsaliaki K, Galetsi P, Kumar S (2021) Supply chain disruptions and resilience: a major review and future research agenda. Annals Oper Res
3. Singh S, Kumar R, Panchal R, Tiwari MK (2021) Impact of COVID-19 on logistics systems and disruptions in food supply chain. Int J Prod Res 59(7):1993–2008
4. Gupta S, Keen M, Shah A, Verdier G (2017) Digital revolutions in public finance. International Monetary Fund, Washington
5. Mitchell T (1997) Machine learning. McGraw-Hill, New York
6. Ni D, Xiao Z, Lim MK (2019) A systematic review of the research trends of machine learning in Supply Chain Management. Int J Mach Learn Cybern 11:1463–1482
7. Kusrini E, Sugito E, Rahman ZM, Setiawan TN, Hasibuan RP (2020) Risk mitigation on product distribution and delay delivery: a case study in an Indonesian manufacturing company. IOP Conf Ser Mater Sci Eng 722(1):012015. IOP Publishing
8. Baryannis G, Validi S, Dani S, Antoniou G (2019) Supply chain risk management and artificial intelligence: state of the art and future research directions. Int J Prod Res 57(7):2179–2202
9. Baryannis G, Dani S, Antoniou G (2019) Predicting supply chain risks using machine learning: the trade-off between performance and interpretability. Futur Gener Comput Syst 101:993–1004
10. Brintrup A, Pak J, Ratiney D, Pearce T, Wichmann P, Woodall P, McFarlane D (2020) Supply chain data analytics for predicting supplier disruptions: a case study in complex asset manufacturing. Int J Prod Res 58(11):3330–3341
11. Shajalal M, Hajek P, Abedin MZ (2021) Product backorder prediction using deep neural network on imbalanced data. Int J Prod Res 1–18
12. Kosasih EE, Brintrup A (2021) A machine learning approach for predicting hidden links in supply chain with graph neural networks. Int J Prod Res 1–14
13. Chawla NV, Bowyer KW, Hall LO, Kegelmeyer WP (2002) SMOTE: synthetic minority oversampling technique. J Artif Intell Res 16:321–357
14. López V, Fernández A, García S, Palade V, Herrera F (2013) An insight into classification with imbalanced data: empirical results and current trends on using intrinsic data characteristics. Inf Sci 250:113–141
15. De Santis RB, de Aguiar EP, Goliath L (2017) Predicting material backorders in inventory management using machine learning. In: 2017 IEEE Latin American conference on computational intelligence (LA-CCI). IEEE, pp 1–6
Review on Image Steganography Transform Domain Techniques G. Gnanitha, A. Swetha, G. Sai Teja, D. Sri Vasavi, and B. Lakshmi Sirisha
Abstract In today's society, data security is of the utmost importance. Securing data by ensuring confidentiality and integrity is crucial; information security is essential for transferring confidential data, data storage, and database systems. Steganography is one of the methods of protecting data: the art and science of concealing communication, i.e., encasing a secret message inside another medium, such as images, audio, video, or text. Steganography offers several advantages, including high concealment capacity and undetectability. There are different types of steganography, depending on the type of cover object, such as text, image, or video, used to conceal the secret. Image steganography hides the existence of the data using an image as the cover. Lastly, the parameters Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE), and Structural Similarity Index Measure (SSIM) are calculated, which give an indication of the efficiency of the surveyed methods.
Keywords Data security · Integrity · Confidentiality · Steganography · Efficiency
1 Introduction
Technology develops very rapidly, and one of the positive impacts of this development is the Internet. However, the Internet's ability to share information carries no guarantee of protection. Steganography is one of the main techniques of information hiding: the art and science of concealing secret information within a cover object. The word is derived from the Greek steganos, meaning covered, and graphia, meaning writing. Based on the type of cover object used to conceal the secret, steganography is categorized into video steganography, image steganography, audio steganography, text steganography, and network steganography. The secret message can be plain text, cipher text, or an image. Image steganography hides the data using an image as a cover.
G. Gnanitha (B) · A. Swetha · G. Sai Teja · D. Sri Vasavi · B. Lakshmi Sirisha V R Siddhartha Engineering College, Vijayawada, Andhra Pradesh, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 B. B. V. L. Deepak et al. (eds.), Intelligent Manufacturing Systems in Industry 4.0, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-99-1665-8_43
The spatial domain technique and the transform domain technique are the most commonly used image steganography approaches. In spatial domain techniques, the bits of the secret are directly embedded into the cover file; examples include the least significant bit (LSB)-based approach and the pixel value differencing (PVD)-based approach. A transform domain technique embeds secret data within the transform domain coefficients of an image. Transform domain techniques include the Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT), Discrete Fourier Transform (DFT), and Integer Wavelet Transform (IWT). Using transform domain techniques gives greater resistance to various attacks, such as steganalysis.
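As a toy illustration of the spatial-domain LSB idea described above (not taken from any of the surveyed papers), embedding overwrites the lowest bit of each cover pixel, so no pixel value changes by more than 1 — which is why the change is imperceptible to the eye:

```python
import numpy as np

def embed_lsb(cover, bits):
    """Hide a bit sequence in the least significant bits of the first
    len(bits) pixels of an 8-bit cover image."""
    flat = cover.flatten().copy()
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b   # clear the LSB, then set it to b
    return flat.reshape(cover.shape)

def extract_lsb(stego, n):
    """Read back the first n hidden bits."""
    return [int(p & 1) for p in stego.flatten()[:n]]
```

The same sketch also shows LSB's weakness: because the payload sits in a fixed, predictable bit plane, statistical steganalysis can detect it, which is the motivation for the transform domain techniques surveyed below.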
2 Literature Survey
Sharma et al. [1] combined the Discrete Wavelet Transform and the Discrete Cosine Transform to yield superior results. The cover and secret images are divided into their red, green, and blue components. The 3-DWT is performed twice on the red component of the secret image, yielding 4*4 blocks with a total of eight cells, and DCT is then applied to each block; the same process is used for the cover image. The encrypted message is delivered to the recipient, where the Inverse Discrete Cosine Transform (IDCT) is used to extract the secret image from the stego image, followed by the inverse 3-DWT. The Peak Signal-to-Noise Ratio (PSNR) is 52.0138, and the NAE is 0.996. The PSNR values improve, and the method is more efficient and protects against illegal access.
Sheidaee et al. [2] note that hybrid techniques outperform conventional techniques in terms of robustness, imperceptibility, and payload capacity. Their framework combines the spatial domain and transform domain techniques. A secret RGB image of dimension M*N is converted to a gray image, which is divided into blocks of varying sizes. The Discrete Cosine Transform (DCT) is applied, followed by quantization of each block, and random noise is generated using HMAC (hash-based message authentication code) to produce a noise-like cipher image. Using the least significant bit (LSB) approach, this hidden image is embedded inside the cover image to form the stego image. This is delivered to the user, who reverses the procedure to obtain the concealed image from the stego image. The approach yields PSNR = 42.51 dB and SSIM = 0.9928, and the visual quality of the stego image remains unchanged. However, the LSB technique is well known for making it easier for attackers to detect the secret image and cannot withstand attacks. The goal for the future is to create an efficient algorithm that can withstand modification attacks and improve PSNR values.
Sun et al. [3] differ from the traditional steganography methods by first converting the secret image into a QR code. The cover image and the hidden image are both converted to grayscale. DCT is performed on the cover image to generate coefficient matrices, which are then quantized.
Now, the least significant bits of the cover image's DCT coefficient matrix are compared with the QR code data matrix and assigned binary values 0 and 1, completing the data encoding; the result is a JPEG image that is transmitted to the receiver, where the exact reverse procedure retrieves the secret QR code and the hidden image is then extracted. PSNR = 48.286 dB, NC = 0.976, embedding time = 2.33 s, and extraction time = 2.26 s are obtained with this method. The technique is robust, since encoding errors can be recovered during decoding, and QR-coded images provide high security. However, it works well for small secret files but fails for large ones.
Yadahalli et al. [4] use the LSB method to conceal the hidden image in the cover image's least significant bits; in DWT, the wavelet coefficients of the cover image are modified to disguise the hidden image. Numerous factors are considered, including capacity, imperceptibility, recovery ability, integrity, and confidentiality. The results of this approach are MSE = 0.7556, PSNR = 49.54, and BPP = 29.0043. In LSB, a pixel's shift in hue or intensity is frequently not visible to the human eye. DWT provides better energy compaction than DCT and is computationally efficient. However, LSB might not be noise-resistant and is susceptible to steganalysis, whereas DWT gives richer MSE, PSNR, BPP, correlation coefficient, and mean SSIM data. The transform domain technique can be used in conjunction with the existing spatial domain methods to give a higher level of security.
Sharma et al. [5] employ a cover image and a hidden image. The Advanced Encryption Standard is applied to the hidden image to obtain the encrypted image, and a two-level 2-D Haar DWT is applied to the cover image to obtain the approximation coefficient matrix.
The one-level 2-D Haar DWT of the hidden image yields the approximation coefficient matrix LA1 and the feature coefficient matrices LH1, LD1, and LV1. The cover and hidden images are combined using an alpha-blending technique, and the stego image is produced by applying the 2-D Haar IDWT to the output image; the reverse procedure recovers the hidden image. This technique secures the concealed data while providing the stego image with excellent picture quality. The PSNR for the suggested method is 58.7501, which is higher than the earlier methods and enhances the stego image's quality.
Nevriyanto et al. [6] use a transform domain technique, DWT, in conjunction with SVD. The two-step Haar DWT procedure is performed first: in the initial stage, arithmetic operations are carried out while scanning pixels horizontally from left to right; in the following stage, pixels are scanned vertically from top to bottom. After that, SVD is used to create the three components S, U, and V, which are combined and embedded into the cover image. An inverse method separates the concealed image from the stego image. The results are Peak Signal-to-Noise Ratio (PSNR) = 56.952 and Mean Square Error (MSE) = 0.1311, showing higher robustness and lower error. However, because SVD and DWT are combined, this approach is more complicated.
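The quality metrics reported throughout this survey, MSE and PSNR, are straightforward to compute. A minimal NumPy sketch for 8-bit images (PSNR is reported in dB; higher means the stego image is closer to the cover):

```python
import numpy as np

def mse(a, b):
    """Mean Square Error between two equally sized images."""
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(max_val^2 / MSE).
    Identical images have zero MSE and hence infinite PSNR."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)
```

As a sanity check, a stego image whose every pixel differs from the cover by exactly 1 gray level has MSE = 1 and PSNR = 20 * log10(255) ≈ 48.13 dB, which is why LSB-style methods routinely report PSNR values in the 40–50 dB range.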
Jothy et al. [7] use the IWT technique, which maps integers to integers. Rather than concealing the secret image itself, a key is produced, encrypted, and run-length encoded; this key is concealed in the cover image, increasing security. Four bands are available: LL, HL, HH, and LH, and the IWT LL band closely resembles the cover image. The key is produced first, and IWT is then used to embed it in the cover image; to recover the secret image, the key is extracted from the stego image and the hidden image is regenerated. The PSNR of the obtained stego image is 44.3, and the PSNR of the retrieved hidden image is 37.5. The method is simpler and produces higher PSNR values. Because the stego image and cover image appear identical, a steganalyzer has a tough time discovering the concealed information, and since third parties with access to the stego image cannot extract the secret information or restore the original cover image, this method allows the secret to be conveyed to the receiver independently.
ElSayed et al. [8] propose an algorithm in two parts: a sender algorithm and a receiver algorithm. The sender algorithm uses the DCT technique to encrypt the secret image; it extracts the cover image's RGB components, calculates its low-frequency curvelet components, and embeds the data back into the image. The output stego image is then generated by computing the inverse curvelet transform. The receiver algorithm uses the reverse procedure to extract the hidden image. The low-frequency part of the curvelet transform can reduce computing time.
Because the curvelet uses only a minimal number of coefficients to manage curve discontinuities, suppressing the low-frequency components has no impact on the edge coefficients, leading to higher stego image quality. The secure steganography developed in this research outperforms existing concealment methods that use other transforms and low-frequency curvelet components (Fig. 1). Soni et al. [9] this research compares the discrete fractional Fourier transform (DFrFT) to various image processing transforms for steganography. In this approach, the message is encrypted before transmission and decoded using a key at the recipient. Anyone can see the encrypted content, but only the person who possesses the key may decode it. The message is referred to as plain text, and its encrypted form as cipher text. The cover image transports the embedded image, but
Fig. 1 Proposed method of ElSayed et al. [8] (block diagram: the sender algorithm performs DCT encryption and the inverse DCT; the receiver algorithm performs decryption)
Review on Image Steganography Transform Domain Techniques
505
the concealed image is likewise conveyed by it. A cover image and a concealed image are combined to form a stego image. To transform the spatial stego object into a frequency domain stego image, the FrFT is used. The steganographic image is converted using fractional orders alpha = 0.78 and beta = 0.25 during steganography in the transform domain. The PSNR is the same as before, but the method includes additional security keys, such as the order of the fractional Fourier transform. Rice.png serves as the message image, while cameraman.tif serves as the cover image. In the spatial domain, the PSNR of the hidden-extracted message is 29.01, while the PSNR of the cover steganographic image is 32.46. In the transform domain, the PSNR of the hidden-extracted message is 29.01, and the PSNR of the cover steganographic image is 7.81. The discrete fractional Fourier transform yields an additional stego key. Using transform domain methods, a hidden message may be inserted into the cover's various frequency ranges, though transform domain algorithms have a smaller payload than spatial domain algorithms. Kalitha et al. [10] the contourlet transform produces a cover image representation with smooth contours that can be effectively used for data embedding. The cover image is initially decomposed using a two-level contourlet transform. The wavelet transform is then used to compress the hidden message. Here, a grayscale cover image and a grayscale concealed image are used as input, and the output is a grayscale stego image. With the suggested approach, the PSNR is 41.79, the MSE is 4.305, and the NMSE is 0.0167. The embedding complexity is low. The Laplacian pyramid filter is used to compute the multiscale decomposition and capture points of discontinuity, and the bandpass components of the cover image are also captured.
Ultimately, this method preserves the cover image's visual and statistical properties. Kim et al. [11] the block-matching algorithm is a technique for image concealment: the hidden image is inserted in the frequency domain through block matching and the IWT. The IWT is used to transform the cover image (CI), and the LL sub-band of the CI serves as a database for block matching, so the secret is encoded in the remaining LH, HL, and HH bands. To increase the quality of the reconstructed hidden image, this method compensates for the differences between the mean values of the cover and secret blocks. For block matching, high-frequency components are employed. To obtain the hidden image, the IWT transforms the stego image. The PSNR of the image produced by plain block matching (BM) is 32.053, while the image produced using the suggested approach has a PSNR of 34.278. The BM method allows a high-capacity image to be embedded; however, the quality of the retrieved hidden image varies with the cover image. The proposed approach aims at improving the quality of the concealed image reconstruction. Tripathi et al. [12] this technique uses three secret images in addition to one cover image. The suggested solution uses two algorithms: a hiding algorithm and a recovering algorithm. First, a contrast enhancement approach improves all of the hidden images and the cover image. Each image is then split into three planes: red, green, and blue. The cover image's red component includes the entirety of the first secret image, the green component includes the entirety of the
Fig. 2 Proposed method of Tripathi et al. [12] (the cover image (CI) is split into red, green, and blue components, each carrying one of the three secret images)
second secret image, and the blue component includes the entirety of the third hidden image. Peak Signal-to-Noise Ratio and Structural Similarity Index Measure values are used as performance metrics. The proposed method's SSIM value is 0.8549 and its PSNR value is 66.1399. The suggested approach offers more robustness and greater image quality, and its PSNR demonstrates strong imperceptibility and robustness against attacks (Fig. 2). Mudnur et al. [13] in the suggested method, the DWT is computed by passing the discrete time domain signal through successive low-pass and high-pass filters. The Haar wavelet transform of the two-dimensional image is calculated from the sums and differences of subsequent pixel pairs in the rows and columns. The LSB technique is employed to conceal the secret image in a single cover image; for added security, the LSB and MSB bits are combined with an exclusive OR (EX-OR) function. The Haar wavelet transform is then used to embed the previously prepared stego image in another cover image. The final stego image's PSNR score is 39.74, and the robustness is improved. If an attacker discovers that secret information is disguised in the stego image, the information extracted from it would be one of the cover images, leading the attacker astray. Yadav et al. [14] choose a cover image and a secret image of any size, after which the hidden image is converted to grayscale. The text message embedded in the cover image should not be bigger than the image itself. The RSA algorithm is used to generate keys. The cover image has three two-dimensional channels: red, green, and blue. To embed the data into the cover, LSB is performed on each matrix by reading the last bit of each pixel and applying the XOR operation with the stego key and the text data to be inserted. The LSBs of the red and blue pixels carry the secret message.
SVD is used, and the singular matrices are later modified for security. Using the appropriate formula, these singular matrices are combined with the orthogonal matrices. The LSB planes are combined to form the stego image. The extraction process begins with recovering the LSBs of the R, G, and B channels before decrypting the message with the private key. The results of this method are PSNR = 64.182 and MSE = 0.332, improved values over the baseline. It improves imperceptibility
and robustness. However, this method is more difficult to implement and involves numerous steps that take a significant amount of time and memory. Kozin et al. [15] in this framework, a technique is developed for setting up a covert communication channel that guarantees the accuracy of perception. First, the image is separated into non-overlapping blocks of any size. The information is encoded with a specific formula, which is used to construct the matrix coefficients; these coefficients are retrieved as integer values for subsequent operations. The even and odd coefficients are then divided using a formula, and the LSB technique is used to embed the data: the information is transformed into binary matrices, which are incorporated into the cover matrix. The hidden image is retrieved through an inverse Hartley transform. PSNR values (with the secret concealed under the cover) at various positions were Pos2 = 45, Pos3 = 39, and Pos4 = 36. This method is difficult for outsiders to steganalyze and offers high data efficiency; however, it is only effective for grayscale images. Yadav et al. [16] this method employs a payload image (PI) and a cover image (CI). Initially, the CI and PI are read and each is divided into three colors using the RGB scheme. All of the components are normalized, and the 2-D DWT is applied to both images, dividing them into the four frequency bands HL, HH, LH, and LL, with the four units LLa1, LHa1, HLa1, and HHa1. Following the second DWT, the FFT is performed on LHa1 and LH. SVD is then applied to these FFT coefficients, and the stego image (SI) is generated by merging the R, G, and B planes using an orthogonal matrix W. Attacks are then applied to the SI to test security and robustness. This method generates stego images with higher PSNR and lower error values.
The use of attacks improves security and robustness, and image quality can be improved in the future by refining the attack processes. Singh et al. [17] to assure security and confidentiality, this paper combines a steganography procedure with cryptography. In the beginning, either an RGB color image or a gray image is taken; a color image is converted to a gray image. Two strategies are presented, one of which uses the RSA algorithm to encrypt the message: two prime integers are chosen, the encryption process begins with the recipient's public key, and the RSA procedure is then used for decryption. The LSB is determined for each pixel in the red plane, the text is integrated into the RGB image using LSB, and LSB is reversed on the red plane to change each message independently. In the alternative technique, the same steps are followed up to the LSB stage, and then DCT is applied to the stego image for an improved image. The color image is converted to YCbCr, the image is divided into eight blocks, DCT is applied to the blocks, their ratio is calculated, the amount of local background illumination is determined, and the blocks are then combined by rows and columns. To generate the enhanced image, block processing is used and the YCbCr image is converted back to RGB. The PSNR values in this study range from 62.43 to 67.49. For secure communication, this method employs the RSA algorithm, and the resulting PSNR values were higher than the base PSNR values.
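Several of the surveyed methods (for example, Yadav et al. [14], and in spirit Singh et al. [17]) embed message bits into pixel LSBs after XORing them with a stego key. A minimal NumPy sketch of that step follows; the function names are hypothetical and the key is simplified to a pre-generated bit stream, so this is an illustration of the idea, not any author's implementation:

```python
import numpy as np

def lsb_xor_embed(channel, bits, key_bits):
    """Embed message bits (XORed with a stego-key bit stream) into pixel LSBs."""
    flat = channel.flatten()                       # flatten() returns a copy
    coded = bits ^ key_bits[: len(bits)]           # XOR message with key stream
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | coded  # replace LSBs
    return flat.reshape(channel.shape)

def lsb_xor_extract(channel, n_bits, key_bits):
    """Read the LSBs and undo the XOR to recover the message bits."""
    lsbs = channel.flatten()[:n_bits] & 1
    return lsbs ^ key_bits[:n_bits]
```

Because only the least significant bit of each used pixel changes, each pixel value moves by at most 1, which is why LSB methods keep the stego image visually close to the cover.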
Najih et al. [18] initially, an image of a specific size is divided into blocks, and coefficients are generated for each block using DCT. The secret is encrypted using keys before being embedded in the cover with the OTP (one-time pad) method. The PN sequencing method is then used to generate binary numbers, and using these binary numbers and particular conditions, the hidden image is embedded in the cover image. Extraction is carried out using the keys generated during embedding, and an inverse DCT finally recovers the secret image from the stego image. This method yields an average PSNR of 48.338, an MSE of 0.995, and a robustness test intensity factor of 0.843. High imperceptibility is obtained by combining steganography and cryptography, along with good PSNR and MSE values. However, the method is less robust under median filter and crop attacks, retaining only 50% of the image quality. Bansal et al. [19] the shield algorithm is suggested in this paper. The hidden information is concealed using the cover image's quantized DCT coefficients: following the quantization step of JPEG compression, the secret information is inserted into the cover image's non-zero DCT coefficients. The suggested steganography method includes both a hiding algorithm and an extracting algorithm. DCT is applied to the cover image's 8×8 blocks, and the DCT coefficients are divided by the quantization matrix to obtain the quantized DCT blocks, which carry the concealed message. Using the dequantized matrix, the 8×8 blocks are reorganized into a single array, creating the stego image. The extraction method follows similar stages: DCT is applied to the 8×8 blocks of the stego image, and the hidden image is extracted after acquiring the quantized DCT blocks.
After using the inverse DCT, the 8×8 blocks are grouped into a single array, yielding both the cover image and the hidden message. The proposed shield algorithm yields a PSNR of 29.77 for the stego image, achieving higher PSNR values and more accurate classification, with a significant capacity for information concealment. Emad et al. [20] the suggested method consists of two phases: embedding and extraction. During embedding, the secret data are inserted into the least significant bits of the approximation coefficients of the cover image's IWT, and the stego image is produced by applying the inverse IWT. For extraction, the median filter and IWT are first applied to the stego image, and the secret data are taken from the approximation coefficients before performing the inverse IWT. The technique also supports color images: the IWT is applied to the cover image's R, G, and B channels, the text's length and signature are encoded in the R channel, and the secret information is inserted in the B and G channels. Combining the inverse IWT of all three channels produces the stego image; extraction follows the same procedure as for grayscale images. On grayscale cover images, the suggested embedding technique yields a PSNR of 53.46 and an MSE of 0.270. On color cover images, the PSNR is 51.51 and the MSE is
0.182. The suggested approach produces higher PSNR results and enhanced security, with more transparency and greater computational efficiency. Bhatu et al. [21] cover and hidden images are first taken as inputs, and the cover image's brightness component is chosen for processing. First, the 2-D DCT is applied using the appropriate equations to eliminate redundant pixel data. Next comes the LWT, which is a lifted DWT: in addition to the steps of a simple DWT, the LWT includes split, prediction, and update stages, so information loss is avoided. The LH band then undergoes SVD, generating the three matrices S, U, and V. Using the previously generated S matrix and the secret image, a new S component is produced, and these coefficients are placed in the LH band of the cover image. Finally, the stego image is produced by applying the inverse SVD, inverse LWT, and inverse DCT. For extraction, the stego image is taken as input and the coefficients are extracted using DCT, LWT, and SVD; the secret is exposed after performing the inverse SVD. The results show an average PSNR of 62.4, an average MSE of 0.042, and an average SSIM of 0.99. In contrast to the standard DWT method, information loss is avoided with the LWT. While it has a good payload capacity and maintains consistent image quality compared to other methods, this technique is exceedingly difficult for an attacker to predict. The process is complicated, however, and it takes a long time to complete.
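The IWT used by several surveyed methods (Jothy et al. [7], Kim et al. [11], Emad et al. [20]) maps integers to integers, so embedding is exactly reversible. The sketch below shows a 1-D lifting-style integer Haar transform (the S-transform) plus LSB embedding into the approximation coefficients in the spirit of Emad et al. [20]; the function names and the 1-D simplification are illustrative assumptions, not any author's code:

```python
import numpy as np

def int_haar_fwd(x):
    """One-level 1-D integer Haar (S-transform): integers in, integers out."""
    x = np.asarray(x, dtype=np.int64)
    s = (x[0::2] + x[1::2]) >> 1      # approximation: floor of the pair average
    d = x[0::2] - x[1::2]             # detail: the pair difference
    return s, d

def int_haar_inv(s, d):
    """Exact inverse of int_haar_fwd (perfect integer reconstruction)."""
    x0 = s + ((d + 1) >> 1)
    x1 = x0 - d
    out = np.empty(2 * len(s), dtype=np.int64)
    out[0::2], out[1::2] = x0, x1
    return out

def embed_bits(s, bits):
    """Replace the LSBs of the leading approximation coefficients with bits."""
    s = s.copy()
    s[: len(bits)] = (s[: len(bits)] & ~1) | bits
    return s
```

Because forward and inverse transforms are exact on integers, the only distortion in the stego signal comes from the LSB changes themselves, which is what gives IWT-based schemes their high PSNR.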
3 Discussion and Conclusions

Analyzing the above research works, the following comparisons are made. Table 1 compares methods that use only transform domain techniques. Table 2 compares methods that combine transform domain techniques with other techniques, such as spatial domain techniques, other transform domain techniques, and cryptography. Transform domain techniques such as the Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT), and Integer Wavelet Transform (IWT) give the most accurate results for secure image steganography. The values of Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index Measure (SSIM) have been analyzed. Transform domain techniques such as DWT and DCT give higher PSNR values, lower MSE values, and greater robustness (Fig. 3).

Table 1 Comparison of results using only transform domain techniques

Method               Technique used          PSNR (dB)
Sharma et al. [5]    2-D Haar DWT            58.75
Jothy et al. [7]     IWT                     37.5
Kalitha et al. [10]  Contourlet transform    41.79
Mudnur et al. [13]   DWT                     39.75
Najih et al. [18]    DCT                     48.33
Bansal et al. [19]   DCT                     29.74
Emad et al. [20]     IWT                     55.51

Table 2 Comparison of results using transform domain techniques mixed with other techniques

Method                 Technique used                 PSNR (dB)
Sharma et al. [1]      DCT and DWT                    52.01
Sheidaee et al. [2]    DCT and LSB                    42.51
Sun et al. [3]         DCT and LSB                    48.28
Yadahalli et al. [4]   DWT and LSB                    49.34
Nevriyanto et al. [6]  SVD and DWT                    56.95
Soni et al. [9]        DFrFT and LSB                  29.01
Kim et al. [11]        Block matching and IWT         34.27
Yadav et al. [14]      SVD, LSB, and RSA algorithm    64.18
Singh et al. [17]      DCT and LSB                    67.49
Bhatu et al. [21]      DCT and LWT                    62.4

Fig. 3 PSNR of the methods performed in the survey (bar chart; PSNR in dB on a scale from 0 to 80)
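For reference, the PSNR and MSE figures quoted throughout the tables are computed as follows. This is the standard definition with an assumed 8-bit peak value of 255, not code from any particular surveyed paper:

```python
import numpy as np

def mse(a, b):
    """Mean square error between two images of equal shape."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

For example, two 8-bit images differing by 1 at every pixel have MSE = 1 and PSNR of about 48.13 dB, which helps calibrate the table values: the higher the PSNR, the closer the stego image is to the cover.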
When compared to the other approaches in the survey, transform domain techniques paired with other techniques provide higher PSNR values. Future work should focus on further improving image quality and payload capacity.

Declaration I affirm that none of the manuscript's authors has any conflicts of interest to declare. The text is the authors' own review work, and it has not previously been published nor is it under consideration for publication elsewhere.
References

1. Sharma P, Sharma A (2018) Robust technique for steganography on red component using 3DWT-DCT transform. In: 2018 2nd international conference on inventive systems and control (ICISC)
2. Sheidaee A, Farzinvash L (2017) A novel image steganography method based on DCT and LSB. In: 2017 9th international conference on information and knowledge technology (IKT)
3. Sun Y, Yu M, Wang J (2020) Research and development of QR code steganography based on JSteg algorithm in DCT domain. In: 2020 IEEE 15th international conference on solid-state and integrated circuit technology
4. Yadahalli SS, Rege S, Sonkusare R (2020) Implementation and analysis of image steganography using least significant bit and discrete wavelet transform techniques. In: 2020 5th international conference on communication and electronics systems (ICCES)
5. Sharma VK, Srivastava DK (2017) Comprehensive data hiding technique for discrete wavelet transform-based image steganography using advance encryption standard. In: Vishwakarma H, Akashe S (eds) Computing and network sustainability
6. Nevriyanto A, Sutarno S, Siswanti SD, Erwin E (2018) Image steganography using combine of discrete wavelet transform and singular value decomposition for more robustness and higher peak signal noise ratio. In: 2018 international conference on electrical engineering and computer science (ICECOS)
7. Jothy N, Anusuyya S (2016) A secure color image steganography using integer wavelet transform. In: 2016 10th international conference on intelligent systems and control (ISCO)
8. ElSayed A, Elleithy A, Thunga P, Wu Z (2015) Highly secure image steganography algorithm using curvelet transform and DCT encryption. In: 2015 Long Island systems, applications and technology
9. Soni A, Jain J, Roshan R (2013) Image steganography using discrete fractional Fourier transform. In: 2013 international conference on intelligent systems and signal processing (ISSP)
10. Kalitha M, Majumder S (2016) A new steganographic method using contourlet transform. In: 2016 international conference on signal processing and communication (ICSC)
11. Kim J, Park H, Park J-I (2017) Image steganography based on block matching in DWT domain. In: 2017 IEEE international symposium on broadband multimedia systems and broadcasting (BMSB)
12. Tripathi D, Sharma S (2016) A robust 3-SWT multiple image steganography and contrast enhancement technique. In: 2016 international conference on inventive computation technologies (ICICT)
13. Mudnur SP, Raj Goyal S, Jariwala KN, Patel WD, Ramani B (2018) Hiding the secret image using two cover images for enhancing the robustness of the stego image using Haar DWT and LSB techniques. In: 2018 conference on information and communication technology (CICT)
14. Yadav S, Yadav P, Tripathi AK (2017) Image steganography on color image using SVD and RSA with 2-1-4-LSB technique. In: 2017 international conference on computation of power, energy information and communication (ICCPEIC)
15. Kozin A, Papkovskaya O, Kozina M (2016) Steganography method using Hartley transform. In: 2016 13th international conference on modern problems of radio engineering, telecommunications and computer science (TCSET)
16. Yadav SK, Dixit M (2017) An improved image steganography based on 2-DWT-FFT-SVD on YCBCR color space. In: 2017 international conference on trends in electronics and informatics (ICEI)
17. Singh YK, Sharma S (2016) Image steganography on gray and color image using DCT enhancement and RSA with LSB method. In: 2016 international conference on inventive computation technologies (ICICT)
18. Najih MNM, Setiadi DRIM, Rachmawanto EH, Sari CA, Astuti S (2017) An improved secure image hiding technique using PN-sequence based on DCT-OTP. In: 2017 1st international conference on informatics and computational sciences (ICICoS)
19. Bansal D, Chhikara R (2014) An improved DCT based steganography technique. Int J Comput Appl 102(14)
20. Emad E, Safey A, Refaat A, Osama Z, Sayed E, Mohamed E (2018) A secure image steganography algorithm based on least significant bit and integer wavelet transform. J Syst Eng Electron 29(3):639–649
21. Bhatu B, Shah HY (2016) Customized approach to increase capacity and robustness in image steganography. In: 2016 international conference on inventive computation technologies (ICICT)
Plant Leaf Disease Detection and Classification with CNN and Federated Learning Approach
Jangam Ebenezer, Pagadala Ganesh Krishna, Medasani Poojitha, and Ande Vijay Krishna
Abstract Farmers face a variety of challenges when growing crops, including leaf spot diseases. It is critical to recognize the illness and take precautions to avoid it, since our country is known for its agriculture and the majority of people rely on it for a living. This problem can be solved by loading image datasets of many common leaf diseases, then using a convolutional neural network and federated learning to identify diseased areas and develop a deep learning model. Our farmers can classify the disease and spray pesticides by comparing data established using characteristics and segments. The current method for plant disease detection is professional naked-eye inspection, which allows disease identification and detection. This necessitates a large team of experts as well as continuous plant monitoring, both of which are costly when dealing with large farms. At the same time, farmers in some countries lack adequate facilities or do not even know how to contact professionals, so hiring specialists is both costly and time-consuming. To analyze the leaf images and extract important information for further research, image processing techniques are used. When a disease is detected, the appropriate pesticide is sprayed on the affected leaf in the appropriate amount. This benefits society: crop productivity will increase if disease detection is automated. Federated learning (also known as collaborative learning) is a machine learning technique that involves training an algorithm across a network of decentralized edge devices or servers that retain local data samples rather than transferring them.
J. Ebenezer · P. G. Krishna (B) · M. Poojitha · A. V. Krishna Department of Information Technology, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, India e-mail: [email protected] J. Ebenezer e-mail: [email protected] M. Poojitha e-mail: [email protected] A. V. Krishna e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 B. B. V. L. Deepak et al. (eds.), Intelligent Manufacturing Systems in Industry 4.0, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-99-1665-8_44
Keywords Image processing · Classification · Convolutional neural network · Federated learning
1 Introduction

The Indian economy is heavily dependent on agricultural productivity. Tomatoes are a well-known crop in India and grow well in a variety of climates, including tropical, subtropical, and temperate; different parts of India have various climates and soil types. As a result, tomatoes are a popular vegetatively propagated crop with high socioeconomic value. When a tomato plant becomes infected, it yields little and grows slowly. Insect, rust, and nematode infestations, among other things, cause the viral, bacterial, and fungal infections that produce the diseases. Farmers judge these diseases based on their experience or with the help of experts through naked-eye observation, which is an inexact and time-consuming process. Early disease identification is critical in agriculture and horticulture to maximize crop yields. We developed a method for detecting and identifying diseases in tomato plant leaves. This study's main focus is on: . Building a CNN model that can categorize plant diseases. . Applying a federated learning strategy and reviewing the results. . Establishing both a global and local model so that we may export models rather than data and maintain the latter's privacy. . Federated learning enables us to jointly develop a shared prediction model while retaining all of the training data on local devices, as opposed to exchanging data.
2 Preliminaries

This section provides definitions of the fundamental terminology and ideas used in the study.
2.1 CNN

A convolutional neural network (ConvNet/CNN) [1, 2] is a deep learning system that can take an input image, assign importance (learnable weights and biases) to distinct aspects and objects in the image, and distinguish between them. The amount of pre-processing needed by a ConvNet is much smaller than that of other classification techniques. In contrast to basic techniques, which call for hand-engineering of filters, ConvNets can learn these filters and properties with enough training.
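The building blocks a ConvNet learns can be illustrated with a toy forward pass. The sketch below is plain NumPy for illustration only (real models use optimized frameworks and learned filters); it shows a single convolution filter followed by ReLU activation and max pooling:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation) of one channel with one filter."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Element-wise rectified linear activation."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; trims edges that do not fit evenly."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[: h * size, : w * size].reshape(h, size, w, size).max(axis=(1, 3))
```

Stacking many such filter-activation-pooling stages, followed by fully connected layers, gives the classifier used later in this paper; during training, the kernel values themselves are the learnable weights.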
2.2 Federated Learning

Federated learning, often referred to as collaborative learning [3], is a machine learning approach that involves training an algorithm across several decentralized edge devices or servers that store local data samples rather than sending them.
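The aggregation step at the heart of federated learning can be sketched as a weighted average of client model parameters (a FedAvg-style rule; the function name and the per-layer list representation of a model are assumptions for illustration):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Aggregate client models: per-layer average weighted by local dataset size."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]
```

The server would broadcast the averaged parameters back to the clients for the next training round; only model weights, never raw data samples, cross the network.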
2.3 Image Processing

Deep neural networks may be used for image processing tasks [1, 4], including noise reduction and image-to-image translation. Deep learning uses neural networks to derive useful feature representations from data.
2.4 Classification

Classification is the process of sorting items into groups [5]; here, multiple classes are predicted. In neural networks, layers are groups of neural units: the input is processed, and an output is produced in the first layer.
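For multi-class classification, the scores from the network's final layer are typically turned into class probabilities with a softmax. A minimal, numerically stable version (illustrative, not tied to the model described later):

```python
import numpy as np

def softmax(logits):
    """Convert final-layer scores into a probability distribution over classes."""
    z = logits - np.max(logits)   # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()
```

The predicted class is then the index of the largest probability, and the same probabilities feed the cross-entropy loss during training.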
3 Related Work

Inception-ResNet-V2, according to Islam et al. [6], had the best accuracy, recall, and F1 score of all the methods, with a score of 0.9286. The accuracy of ResNet-101, after Inception-ResNet-V2, is 0.9152. The VGG-19 model's precision is 0.8143, whereas the Xception model's precision is 0.8942; the former not only has the worst accuracy but also the worst F1 score, recall, and precision. The total number of epochs for all training procedures was set at 100. The fundamentals of deep learning were described by Zhang et al. [5], who also provided a complete overview of recent deep learning research in plant leaf disease identification. Deep learning algorithms can accurately identify plant leaf diseases when sufficient data is available for training. The importance of large datasets with high variability, data augmentation, transfer learning, and visualization of CNN activation maps in enhancing classification accuracy has been discussed, as well as small-sample plant leaf disease detection and hyperspectral imaging for early plant disease detection. Decisions are made using algorithms employed in artificial intelligence, according to Sujatha et al. [7]; ML and DL are the best performers in the domain. Both the neuronal organization of the human brain and deep learning use layers and optimizers to help construct trustworthy models with higher accuracy. In our study, we take
into account both learning approaches, and DL's outcomes stand out when measured against ML. All of the variables (precision, F1 score, accuracy, and area under the curve) are taken into account. The classification accuracy (CA) was 76.8% for RF, 86.5% for SGD, 87.5% for SVM, 87.5% for VGG-19, 87.4% for Inception-v3, and 89.5% for VGG-16. Distributed machine learning, according to Hu et al. [8], pools the computational capacity of several machines. Its primary objective is to divide computing jobs into manageable portions that may be handled by a variety of local processors. To complete its final training, a central server must handle data submitted by local clients; as a result, communication and privacy security are difficult to guarantee. FL is distinct from distributed ML in that the data each participant uploads to the server is a trained sub-model rather than the original data. FL also allows asynchronous transmission, which enables acceptable reductions in communication requirements. To identify and categorize tomato leaf diseases, Sardogan et al. [9] introduced a technique based on a convolutional neural network with the learning vector quantization (LVQ) algorithm. The dataset contains 500 pictures of tomato leaves. Three separate input matrices for the R, G, and B channels were obtained before the convolution process started for each image in the dataset. All input image matrices are convolved, and the output matrix is obtained via max pooling and the ReLU activation function. For training and testing, the LVQ algorithm utilized 500 feature vectors extracted from the source photos. The experiments were run on photographs of healthy and sick leaves to accomplish classification. Federated learning is a paradigm in which statistical models are trained at the edge of the network, as described by Li et al. [10].
We discussed the distinctive characteristics and difficulties of federated learning and how it differs from conventional distributed data center computing and conventional privacy-preserving learning. We gave a thorough overview of both historical findings and more contemporary research on federated settings, and concluded by listing a few unresolved issues that demand more research; solving them will take a team effort from many different research communities. In this section, we have established that deep learning algorithms outperform machine learning strategies. Additionally, because of the advanced frameworks employed, model training takes longer. This study concluded that we need a technique that is easier to train and a framework that is simpler to create. To achieve data privacy, we must also employ a strategy that enables us to share trained models rather than sensitive data. Therefore, by employing a federated learning strategy, we may exchange trained models rather than data and preserve its privacy.
Plant Leaf Disease Detection and Classification with CNN …
Fig. 1 System architecture
4 System Architecture See Fig. 1.
5 Design Methodology In this research, we propose a system where trained CNN models, which can predict and categorize plant diseases, are exchanged instead of data, so that we can safeguard the confidentiality of our data. As a consequence, the federated learning technique enables us to safeguard data privacy more effectively than the conventional machine learning approach. The methodology and classification system described below make it simple to classify diseases in plant leaves. Because we are using a federated learning approach, we can share both our trained local models and our global model rather than just data (Fig. 2).
6 Dataset Description Here is a detailed description of the dataset that was used to train and validate our suggested model:
- Plant leaf diseases dataset as the source
- There are 10 classes overall
J. Ebenezer et al.
Fig. 2 Design methodology
Fig. 3 Dataset description
- There are 16,010 images in all
- 12,808 images were utilized for training
- 3,202 images were utilized for validation (Fig. 3)
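The counts above correspond to an approximately 80/20 train/validation split; a minimal sketch of the arithmetic (the splitting helper is illustrative, not from the paper):

```python
def train_val_split(n_total, train_frac=0.8):
    """Split a dataset of n_total items into train/validation counts."""
    n_train = round(n_total * train_frac)
    return n_train, n_total - n_train

n_train, n_val = train_val_split(16010, train_frac=0.8)
print(n_train, n_val)  # 12808 3202, matching the counts reported above
```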
7 Implementation and Results We will demonstrate how we implemented our suggested system in the phases that follow.
Fig. 4 Plotted results of CNN
7.1 Phase 1 During this phase, the train split of our dataset was used to train our CNN model. To evaluate the effectiveness of the model, we then calculated accuracy, obtaining a validation accuracy of 94.25% when testing the model on the validation data. The charts in Fig. 4 show training and validation accuracy.
7.2 Phase 2 We partitioned our dataset into three parts in this step to execute the federated learning technique, and we trained a CNN model on each portion of the dataset. After that, we stored all three models locally so that we could exchange and export trained CNN models rather than raw data. Providing trained models rather than raw data helps to safeguard the privacy of the data.
Fig. 5 Plotted results of global model
7.3 Phase 3 We must combine the inputs from all of the local models saved on our local machine in order to create a global model. To combine all of the inputs, we first load all of the local models into variables and then aggregate them with a few functions. The result is a global model that can categorize and forecast every class in the dataset. On validation data, we evaluated the performance of our global model and found it to be 93.45% accurate (Fig. 5).
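The aggregation step described above can be sketched as follows; a hedged illustration of FedAvg-style averaging, with NumPy arrays standing in for CNN layer weights and equal client weighting assumed (this is not the authors' exact code):

```python
import numpy as np

def federated_average(local_weight_sets):
    """Average corresponding weight arrays from several local models
    (FedAvg-style aggregation with equal client weighting)."""
    return [np.mean(np.stack(layers), axis=0)
            for layers in zip(*local_weight_sets)]

# Toy example: three "local models", each a list of two weight arrays.
rng = np.random.default_rng(0)
locals_ = [[rng.normal(size=(4, 3)), rng.normal(size=(3,))] for _ in range(3)]
global_weights = federated_average(locals_)
print([w.shape for w in global_weights])  # [(4, 3), (3,)]
```

In a real deployment the averaged weight list would be loaded back into the shared CNN architecture to obtain the global model.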
7.4 Test Case Results We must now assess how well our model predicts previously unseen data, so we used the model to forecast the disease class for a few test instances.
Fig. 6 Yellow leaf curl virus
7.4.1 Test Case-1
Expected output: Yellow Leaf Curl Virus. Actual output: see Fig. 6.
7.4.2 Test Case-2
Expected output: Tomato Healthy. Actual output: see Fig. 7.
Fig. 7 Tomato healthy
Fig. 8 Tomato Septoria leaf spot
7.4.3 Test Case-3
Expected output: Tomato Septoria leaf spot. Actual output: see Fig. 8.
8 Conclusion and Future Work For the purpose of identifying diseases in tomato leaves, we propose in this study an autonomous deep learning-based leaf disease detection system built on a convolutional neural network (CNN). We employed the federated learning method, dividing our dataset into three sections and building three local CNN models, one for each. Later, we integrated those three models to produce a global model that could categorize every class in the dataset. Data privacy is crucial, and when using a federated learning approach we can share/export trained models to a global server instead of sharing sensitive data, allowing the models to be used by everyone. Future applications of the federated learning strategy might include the medical, academic, and other sectors where privacy is a top priority. Hospitals can exchange trained models for illness analysis rather than sensitive patient data. By keeping student information private, educational institutions may also utilize the federated learning model to assess student performance.
References
1. Barbedo JGA (2018) Factors influencing the use of deep learning for plant disease recognition. Biosyst Eng 172:84–91. https://doi.org/10.1016/j.biosystemseng.2018.05.013
2. Ghosal S, Sarkar K (2020) Rice leaf diseases classification using CNN with transfer learning. In: 2020 IEEE Calcutta conference (CALCON), Kolkata, India, pp 230–236. https://doi.org/10.1109/CALCON49167.2020.9106423
3. Hu K, Li Y (2021) Federated learning: a distributed shared machine learning method. https://doi.org/10.1155/2021/8261663
4. Lee SH, Goëau H, Bonnet P, Joly A (2020) New perspectives on plant disease characterization based on deep learning. Comput Electron Agric 170:105220
5. Li L, Zhang S, Wang B (2021) Plant disease detection and classification by deep learning, China. https://doi.org/10.1155/2021/8261663
6. Islam MdA, Shuvo MdNR (2021) An automated convolutional neural network based approach for paddy leaf disease detection, Dhaka, Bangladesh. https://doi.org/10.14569/ijacsa.2021.0120134
7. Sujatha R, Chatterjee JM, Jhanjhi NZ, Brohi SN (2021) Performance of deep learning vs machine learning in plant leaf disease detection, Vellore, India. https://doi.org/10.1016/j.micpro.2020.103615
8. Dhingra G, Kumar V, Joshi HD (2018) Study of digital image processing techniques for leaf disease detection and classification. Multimedia Tools Appl 77(15):19951–20000. https://doi.org/10.1007/s11042-017-5445-8
9. Ebrahimi MA, Khoshtaghaza MH, Minaei S, Jamshidi B (2017) Vision-based pest detection based on SVM classification method. Comput Electron Agric 137:52–58. https://doi.org/10.1016/j.compag.2017.03.016
10. Wang B, Wang D (2019) Plant leaves classification: a few-shot learning method based on Siamese network. IEEE Access 7:151754–151763. https://doi.org/10.1109/access.2019.2947510
Real-Time Voice Cloning System Using Machine Learning Algorithms G. Rajesh Chandra, Venkateswarlu Tata, and D. Anand
Abstract This real-time voice cloning system uses machine learning algorithms and has many accessible applications, such as restoring the capacity to speak naturally to users who have lost their voices and are unable to provide many novel training instances. Using a neural technique, transfer learning from speaker verification to multi-speaker text-to-speech synthesis is possible. In this paper, the model strives to produce output that is completely independent of any pre-existing record while drawing on the pre-existing information in the datasets; the model is built on the dataset's base, but it is not confined to it. Keywords Real-time voice cloning · Encoder · Synthesizer · Vocoder · Machine learning
1 Introduction The world as we know it is slowly transitioning to a voice-driven environment. Audio material and voice-based automated services are becoming increasingly popular. Many content makers are migrating to SoundCloud and Amazon's Audible audiobook service. It may be deduced from [1] that technology conglomerates such as Google, Amazon, Samsung, Apple, and others are investing significantly in voice-based services, often claiming to be better than their competitors. With these developments, we will be able to personalize the voices of multiple voice agents. Imagine that G. Rajesh Chandra (B) Department of Computer Science and Engineering, KKR & KSR Institute of Technology & Sciences, Guntur, AP, India e-mail: [email protected] V. Tata Department of CSE, Guntur Engineering College, Guntur, AP, India Annamalai University, Chidambaram, AP, India D. Anand Department of CSE, Koneru Lakshmaiah Education Foundation, Vaddeswaram, AP, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 B. B. V. L. Deepak et al. (eds.), Intelligent Manufacturing Systems in Industry 4.0, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-99-1665-8_45
Morgan Freeman reads your grocery list or Amitabh Bachchan directs you through traffic. Voice cloning is a deep-learning system that takes in an individual's voice recordings and synthesizes speech that sounds quite close to the real thing. This paper investigates the use of a real-time vocoder to transfer learning from speaker verification to multi-speaker text-to-speech synthesis (SV2TTS). This paper is organized as follows: the next section discusses the methodology of the proposed system, Sect. 3 deals with results and analysis, and the final section deals with the conclusion and future scope of the paper.
2 Methodology In this paper, we use the speaker encoder methodology for the voice cloning system [2]. This methodology yields accurate results for voice cloning and is very useful today. The first part of the SV2TTS model is the speaker encoder.
2.1 Speaker Encoder The voice information of each speaker is encoded in an embedding, created by a neural network trained with a speaker verification loss. Speaker verification loss is measured by attempting to predict whether two utterances are from the same user or not. The speaker encoder's job is to take some input audio and create an embedding that represents "how the speaker sounds." The speaker encoder is indifferent to the words the speaker is saying and to any background noise; all it cares about is the speaker's voice, such as high/low pitch, accent, tone, and so on. All of these characteristics [3] are encoded into a low-dimensional vector informally called the d-vector or the speaker embedding (Fig. 1). To learn to generate these embeddings, the authors describe the following process: the voice audio samples are first divided into 1.6 s segments with no transcript and then turned into mel spectrograms [4]. The speaker encoder is then trained to compare two audio samples and determine whether they were produced by the same speaker. As a result, the speaker encoder is forced to construct embeddings that reflect the speaker's sound.
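Once such embeddings exist, the same-speaker decision reduces to comparing d-vectors; a minimal sketch, assuming cosine similarity with an illustrative threshold (the actual encoder network and decision threshold are not specified here):

```python
import numpy as np

def dvector(embedding):
    """L2-normalize an embedding so comparison reduces to cosine similarity."""
    e = np.asarray(embedding, dtype=float)
    return e / np.linalg.norm(e)

def same_speaker(emb_a, emb_b, threshold=0.75):
    """Decide whether two utterance embeddings belong to the same speaker.
    The threshold is illustrative; in practice it is tuned on held-out data."""
    score = float(np.dot(dvector(emb_a), dvector(emb_b)))
    return score, score >= threshold

# Two similar (hypothetical) embeddings give a high score.
score, same = same_speaker([0.9, 0.1, 0.2], [0.8, 0.15, 0.25])
```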
2.2 Synthesizer The synthesizer is the component of SV2TTS [5] that analyzes text input and generates mel spectrograms, which are then converted to sound by the vocoder. The synthesizer takes a sequence of text, which is mapped to phonemes (the smallest units of
Fig. 1 Visual representation of the embeddings; each color is a different speaker
human sound, such as the sound you make when speaking 'a') and the embeddings produced by the speaker encoder, and utilizes the Tacotron 2 architecture to recurrently construct frames of a mel spectrogram [6]. To train the synthesizer: first, we gather a phoneme sequence and a mel spectrogram of a speaker saying that sentence. The mel spectrogram is then sent to the speaker encoder, which creates a speaker embedding. The encoder of the synthesizer then concatenates the phoneme sequence encoding with the speaker embedding. The synthesizer's decoder and attention sections generate the mel spectrogram frame by frame. Finally, the generated mel spectrogram is compared to the original target, resulting in a loss that is optimized (Fig. 2).
2.3 Neural Vocoder At this point, the synthesizer has created a mel spectrogram, but we cannot hear anything yet. To convert the mel spectrogram into raw audio waves, we use a vocoder. The vocoder used in this project is based on DeepMind's WaveNet model, which generates raw audio waveforms from text and was formerly the industry standard for TTS systems. The model's dilated convolution blocks [7] are particularly interesting. They guarantee [8, 9] the causality of sequence data by limiting the convolution to values from earlier time steps. However, because this narrows the receptive fields of the neurons, very deep models would be required. Dilation is a neat idea that skips over a few neurons in previous time steps to increase each neuron's range in deeper layers. Because the spectrogram [10] contains all of the information, this
Fig. 2 Synthesizer training
Fig. 3 Output generated by vocoder (mel spectrogram to audio waveform)
model does not require a separate representation of the target speaker. It becomes good at creating voices of unknown speakers when the model has been trained on a big corpus comprising multiple speakers’ voices (Fig. 3).
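The causal, dilated convolutions described above can be sketched in a few lines of NumPy; this is an illustrative toy, not DeepMind's implementation, showing both the causal left-padding and how doubling dilations grow the receptive field:

```python
import numpy as np

def causal_dilated_conv(x, kernel, dilation):
    """1-D causal convolution: each output sample only sees the current and
    earlier inputs, spaced `dilation` steps apart."""
    k = len(kernel)
    pad = (k - 1) * dilation          # left-pad so no future samples leak in
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(kernel[j] * xp[i + pad - j * dilation]
                         for j in range(k))
                     for i in range(len(x))])

def receptive_field(kernel_size, dilations):
    """Receptive field of a stack of dilated causal convolutions."""
    return 1 + (kernel_size - 1) * sum(dilations)

print(receptive_field(2, [1, 2, 4, 8]))  # 16: doubling dilations grow it fast
```

With kernel size 2 and dilations 1, 2, 4, 8 the stack already covers 16 past samples, which is why WaveNet-style models reach long contexts without extreme depth.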
2.4 Flow of the Model The SV2TTS model is made up of three elements, each of which is trained separately [11]. This allows each element to be trained on its own data, minimizing [6] the need for high-quality multi-speaker data. The speaker encoder listens to a sample of audio and creates an embedding. The synthesizer generates a mel spectrogram from a list of phonemes and the speaker embedding. The mel spectrogram is converted into an audio waveform by a neural vocoder (Fig. 4). Synthesizer training: to begin, we gather a phoneme sequence and a mel spectrogram of a speaker saying that sentence. The mel spectrogram is then sent to the speaker encoder, which creates a speaker embedding [12]. The encoder of the
Fig. 4 General SV2TTS architecture
Fig. 5 Synthesizer training
synthesizer [13] concatenates the phoneme sequence encoding [14] with the speaker embedding. The synthesizer's decoder and attention sections generate the mel spectrogram frame by frame. Finally, the mel spectrogram is compared to the original target, resulting in a loss that is optimized (Fig. 5).
3 Results and Analysis This is the main interface of the project [8], where all the argument values are provided to be synthesized and vocoded. Various datasets can be included in order to take information from a large variety of input audios for a better result. At this point, the synthesizer has created a mel spectrogram; however, we cannot hear anything yet. To convert the mel spectrogram into raw audio waves, we use a vocoder. The vocoder used in this project is based on DeepMind's WaveNet model, which generates raw audio waveforms from text and was previously the industry standard for TTS systems. The model's dilated [9] convolution blocks are quite interesting. They guarantee [8] the causality of sequence data by limiting the convolution to values from earlier time steps. However, because the receptive fields of the neurons are narrowed, very deep models are required. Dilation is a
Fig. 6 Interface
pleasing idea that skips over some neurons in preceding time [13] steps to increase each neuron's range in deeper layers. Because the spectrogram contains all of the information, this model does not require a separate representation of the target speaker. It becomes good at creating voices of unknown speakers once it has been trained on a large corpus comprising multiple speakers' voices (Fig. 6). Input audio can be provided by browsing from samples which have been downloaded beforehand [11] in order to check the performance of the system. Another way is recording the audio in real time to give your voice as the input to perform cloning (Fig. 7). Text input can be given in the textbox provided at the top right [8] corner, which takes the text input and vocodes the given text as an audio output [5]. This is done by synthesizing and vocoding the two different inputs (Fig. 8). Synthesize and vocode the audio or text given as input by choosing the synthesize-and-vocode option. One can also choose the synthesize-only or vocode-only options [15], which are available in order to separate the two steps (Fig. 9). The synthesizing and vocoding process takes almost 70–120 s to finish, depending on [6] which option is chosen among (1) synthesize and vocode, (2) synthesize, (3) vocode, with the latter taking the least time (Fig. 10). A set of sample audio inputs that can be given to the system for vocoding has been included for testing the system or providing a random input audio for the given text sample (Fig. 11). Mel spectrum: in sound processing, the mel-frequency cepstrum (MFC) is a representation of a sound's short-term power spectrum based on a linear cosine transform of a log power spectrum on a nonlinear mel-frequency scale. The coefficients that make up an MFC are called mel-frequency cepstral coefficients (MFCCs) (Fig. 12).
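The nonlinear mel-frequency scale mentioned above is commonly computed with O'Shaughnessy's formula, mel = 2595 log10(1 + f/700); a small sketch (which exact mel variant the project uses is an assumption):

```python
import math

def hz_to_mel(f_hz):
    """Convert frequency in Hz to mels (a common mel-scale formula)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(mel):
    """Inverse mapping back to Hz."""
    return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)

print(round(hz_to_mel(1000)))  # ≈ 1000 mels: the scale is anchored near 1 kHz
```

Below roughly 1 kHz the mapping is nearly linear and above it logarithmic, which is what makes mel spectrograms and MFCCs perceptually motivated features.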
Fig. 7 Audio input
Fig. 8 Text input
Fig. 9 Processing
Fig. 10 Synthesizing and vocoding
Fig. 11 Sample audio files
Fig. 12 Mel-spectrum
The synthesizer has created a mel spectrogram; however, we cannot listen to anything yet. To convert the mel spectrogram into raw audio waves [16], we use a vocoder. The vocoder used in this work is based on DeepMind's WaveNet model, which generates raw audio waveforms from text and was previously the industry standard for TTS systems. The model's dilated convolution blocks are quite interesting: they assure the causality of sequence data by restricting the convolution to values from earlier time steps. However, because the receptive fields of the neurons are narrowed, very deep models are required. Dilation is a pleasant idea that skips over some neurons in preceding time steps to increase each neuron's range in deeper layers. Because the spectrogram incorporates all of the information, this model does not require a separate representation of the target speaker. It becomes good at creating voices of unknown speakers once it has been trained on a massive corpus comprising multiple speakers' voices. This is the principal interface of the project, where all of the argument values are provided to be synthesized and vocoded. Various datasets may be included in order to take information from a wide variety of input audios for a better result. Synthesizer training: to begin, we acquire a phoneme sequence and a mel spectrogram of a speaker saying that sentence. The mel spectrogram [17] is then sent to the speaker encoder, which creates a speaker embedding. The encoder of the synthesizer then concatenates the phoneme sequence encoding with the speaker embedding. The synthesizer's decoder and attention sections generate the mel spectrogram frame by frame. Finally, the mel spectrogram is compared to the original target, resulting in a loss that is optimized.
4 Conclusion and Future Scope Using frequency-based mel spectrograms gave higher accuracy results than using amplitude-based mel spectrograms. While amplitude simply offers information on intensity, or how "loud" a sound is, the frequency distribution over time offers information on the sound's substance. Mel spectrograms are also visually appealing. CNN had the best results; it takes the longest to train, but the increased precision makes the extra computing cost worthwhile. However, the accuracy similarities among the SVM, KNN, and feed-forward neural network are encouraging. As a result, we created a music genre detector that can identify the genre of any audio file. The user's input is received as a WAV file through the user interface. Librosa converts music files into pitch, frequency, and many spectrogram characteristics. Model: the model we used to forecast the audio, which can be adjusted for a variety of purposes. It offers a large number of open-source libraries and modules that can be utilized in a variety of applications. We were successful in creating our search engine using all of the above-mentioned tools. We plan to explore different sorts of deep learning approaches in the future, as they performed the best. Given the time-series nature of
the data, an RNN model may be appropriate. Recording a voice for each of the characters' spoken conversations in a game takes a lot of time and effort; in the coming years, developers will be able to employ advanced neural networks to replicate human voices. Audio books: the ability to synthesize millions of audio books without the use of human labor. Voice for humans: those who have lost their voices can reclaim their identity by constructing their own voice clone using recordings of themselves speaking before they lost their voices.
References
1. Chandra GR, Kamisetty S, Reddy RVS, Chowdary AD, Sampath O (2022) A new survey of data markets analysis based on constraint reserve rate and online estimating. IJRMAT 12(2):371–376
2. Rejeti VKK Blockchain technology for fraudulent practices in insurance claim process. In: Proceedings of the fifth ICCES 2020, IEEE conference record #48766; IEEE Xplore ISBN 978-1-7281-5371-1
3. Reddy RVK, Subhani S, Chandra GR Breast cancer prediction using classification techniques. Int J Emerg Trends Eng Res 8(9):6074
4. Haritha D, Sirisha PGK Optimized segmentation of brain images using shuffled frog leaping algorithm—Tabu search framework. Int J Pharm Technol
5. Kumar BP, Reddy ES (2020) An efficient security model for password generation and time complexity analysis for cracking the password. Int J Saf Sec 10(5):713–720. https://doi.org/10.18280/ijsse.100517
6. Kumar KM, Shubhang G, Chandra GR Data leakage detection system for cloud-based storage systems. IJAER
7. Chandra GR, Rao KRH Tumor detection in brain using genetic algorithm. Procedia Comput Sci 79:449–457
8. Wan L, Wang Q, Papir A, Moreno IL (2017) Generalized end-to-end loss for speaker verification
9. Akula S, Reddy ES (2020) Robust Gaussian noise detection and removal in color images using modified fuzzy set filter. J Intell Syst 7(4):820–824. De Gruyter, ISSN 2394-5125
10. Akula S, Reddy ES (2021) Image denoising based on statistical nearest neighbor and wave atom transform. Int J Comput Dig Syst 10(1). University of Bahrain, ISSN 2210-142X
11. Kumar BP, Reddy ES (2019) Identification of password strength and time analysis for hacking the generated key: a survey. Int J Anal Exp Modal Anal XI(VII):1307–1315. ISSN 0886-9367
12. Kumar BP, Reddy ES (2019) Identification of password strength and time analysis for hacking the generated key: a survey. Int J Anal Exp Modal Anal XI(VII):1307–1315. An UGC-CARE Approved Group-II Journal, ISSN 0886-9367
13. Heo H-S, Jung J, Yang IL-H, Yoon S-H, Shim H, Yu H-J (2019) End-to-end losses based on speaker basis vectors and all-speaker hard negative mining for speaker verification
14. van den Oord A, Dieleman S, Zen H, Simonyan K, Vinyals O, Graves A, Kalchbrenner N, Senior A, Kavukcuoglu K (2016) WaveNet: a generative model for raw audio
15. Kalchbrenner N, Elsen E, Simonyan K, Noury S, Casagrande N, Oord A, Dieleman S, Kavukcuoglu K (2018) Efficient neural audio synthesis
16. Haritha D, Sirisha PGK (2017) Optimized segmentation of brain images using shuffled frog leaping algorithm—expectation-maximization framework. Res J Biotechnol, special issue, pp 151–157. E-ISSN 2278-4535
17. Rejeti VKK Distributed denial of service attack prevention from traffic flow for network performance enhancement. In: Proceedings of the second ICOSEC 2021, DVD Part Number: CFP21V90-DVD; IEEE Xplore ISBN 978-1-6654-3367-9
Artificial Neural Network (ANN) Approach in Evaluation of Diesel Engine Operated with Diesel and Hydrogen Under Dual Fuel Mode Shaik Subani and Domakonda Vinay Kumar
Abstract There is a significant advantage in using hydrogen in engines over pure diesel or biodiesels. A combination of diesel and hydrogen at different flow rates can be used. Hydrogen storage tanks carried on the vehicle are very dangerous, so here we include a hydrogen reactor in which chemicals are mixed to produce hydrogen, which can then be sent to the engine along with diesel. In this work, an on-demand hydrogen generation setup was used to generate hydrogen for burning in the engine along with diesel under dual fuel mode. A solid-state hydrogen generation mechanism was employed in a purpose-built reactor using sodium borohydride (NaBH4) and aluminum sulfate (Al2(SO4)3) mixed in water. The experimentation was done on a single-cylinder 4-stroke VCR diesel engine modified into dual fuel mode. The studies were carried out with pure diesel and hydrogen at 0, 1.5, 3, 6, 9, 12 and 15 liters per minute at a compression ratio of 18, holding the speed at 1500 rpm with loads varied from 0 to 12 kg at intervals of 3 kg. Performance parameters such as brake thermal efficiency were enhanced by the use of hydrogen compared with pure diesel. Emission parameters such as nitrogen oxides increased when hydrogen was injected. A feed-forward, back-propagation, multilayer artificial neural network scheme was developed to model and simulate the above parameters and to predict their values at a 15 LPM hydrogen flow rate. The performance and emission values predicted using the ANN were compared with the experimental data and were found to be in good agreement with each other. Keywords Performance · Emissions · Hydrogen reactor · Diesel
1 Introduction Our daily lives place a greater demand on energy. We use energy that is created from fossil fuels. The main issue that many nations are dealing with is the trend toward fossil fuel depletion. A lot of researchers are concentrating on renewable S. Subani (B) · D. V. Kumar VFSTR, Vadlamudi, Guntur, AP, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 B. B. V. L. Deepak et al. (eds.), Intelligent Manufacturing Systems in Industry 4.0, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-99-1665-8_46
energy sources like sunshine, wind, geothermal, etc. to solve this issue. But none of the aforementioned renewable energy sources is fully effective, and they may also be climate-dependent. Although very valuable, the energy we receive from these renewable sources is insufficient for our daily needs. Researchers are now focusing on a different option that should be practical, affordable, and sustainable. The trend in emissions like CO and HC for carbonless fuels like hydrogen and biodiesel is downward. Because of its high heating value, high diffusivity, and self-ignition temperature, direct hydrogen use in internal combustion is highly dangerous [1]. The main drawback of utilizing biofuel is that it uses more fuel than fossil fuel to create the same amount of power. Low heating value, inappropriate combustion caused by excessive viscosity, increased in-cylinder temperature, and poor ignition qualities are the main causes of NOx emissions [2]. Researchers have experimented with a variety of gaseous fuels as a secondary fuel for dual-fuel engines, including hydrogen, liquid petroleum gas, biogas, producer gas, and compressed natural gas. The only gaseous fuel among these that is carbon-free is hydrogen, which also has a broad range of flammability limits, an elevated heating rate, and a fast flame velocity [3]. The primary reason for using hydrogen as a supplemental fuel in CI engines is that the decreased heterogeneity of diesel fuel allows for better air premixing and uniformity of the combustion mixture [4]. Operating dual-fuel engines with hydrogen as a supplementary fuel has been extensively researched. Hydrogen was supplied into a Ricardo E6 research engine without any modification through the air intake manifold, and it was noted that there was a noticeable gain in brake thermal efficiency, followed by a drop in brake-specific fuel consumption, with reduced CO and increased NOx emissions [5]. 
Under various load conditions, the authors explored hydrogen as a secondary fuel in engines (using a timed manifold hydrogen induction circuit). They discovered that the performance is better than with normal diesel. Additionally, they noticed that while CO2 and NOx levels grew, CO, HC, and particulate matter emissions decreased [6]. Another author used a hydrogen cylinder in his experimental setup and concluded that brake thermal efficiency and NOx emissions increased while CO decreased [7]. It has been reported that hydrogen can be produced by electrolysis; with H2 engines or fuel cells in place of batteries, applications like aviation and road transportation across greater distances and lightly populated areas remain noticeably easier [8], so storing energy onboard vehicles calls for new ideas for hydrogen generation. An onboard hydrogen generation and storage arrangement for a hydrogen fuel cell car has been explored, according to the author: two coordinated subsystems, a generator and a fuel cell, make up the on-board hydrogen production process, modeled and implemented in MATLAB/Simulink. The outcomes are anticipated to demonstrate that by utilizing this method, significant energy may be collected, supplying hydrogen fuel on-board and thereby enhancing the vehicle's effectiveness and range [9]. Over the years, artificial neural networks (ANNs) have found use in numerous branches of science and engineering. Since ANNs are trained using a variety of experimental data sets, they can be used to find solutions; the network is then utilized to forecast parameters that have not been used anywhere else in the network system [10]. There is limited research on the production
of hydrogen on board vehicles without any harm, and the majority of researchers use hydrogen cylinders for their experimental work, which is limited to stationary engines. In this paper, a hydrogen reactor is introduced with which hydrogen is generated by mixing chemicals.
2 Experimental Setup For the investigations, a VCR research engine was modified to dual fuel mode as shown in Fig. 1. The experimental system is schematically illustrated in Fig. 3, and the engine conditions are listed in Table 1. The load was varied from 0 to 12 kgf throughout the test while the engine was driven at 1500 rpm. A hydrogen flow meter and the hydrogen supply reactor were both attached to the engine's intake manifold. Hydrogen reactor: a solid-state hydrogen generation mechanism was employed in a purpose-built reactor using sodium borohydride (NaBH4) and aluminum sulfate (Al2(SO4)3) mixed in water, as shown in Fig. 2, to produce hydrogen gas. This gas is then supplied to a flow meter and later to the dual-mode engine, along with pure diesel [11]. Parts of the experimental system: 1. Engine, 2. Fuel tank, 3. Hydrogen reactor, 4. Flow meter, 5. Performance data recording system, 6. Exhaust gas analyzer.
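For context, sodium borohydride hydrolysis (NaBH4 + 2 H2O → NaBO2 + 4 H2) releases four moles of hydrogen per mole of NaBH4, with the aluminum sulfate presumably acting as an accelerator; a rough ideal-gas yield estimate (illustrative arithmetic, not figures reported in the paper):

```python
# Molar masses (g/mol)
M_NABH4 = 22.99 + 10.81 + 4 * 1.008      # sodium borohydride, ≈ 37.83 g/mol
MOLAR_VOLUME_STP = 22.414                # L/mol of ideal gas at 0 °C, 1 atm

def h2_liters_per_gram_nabh4():
    """Ideal hydrogen yield of NaBH4 hydrolysis: 4 mol H2 per mol NaBH4."""
    return 4 / M_NABH4 * MOLAR_VOLUME_STP

print(round(h2_liters_per_gram_nabh4(), 2))  # ≈ 2.37 L of H2 per gram of NaBH4
```

A yield of roughly 2.4 L per gram suggests why a compact on-demand reactor can feed the flow rates (1.5–15 LPM) used in these tests without storing pressurized hydrogen.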
Fig. 1 Experimental setup (engine control panel, engine performance monitoring unit, diesel engine, gas analyser, flow regulator, H2 reactor)
Fig. 2 Hydrogen reactor
Fig. 3 Schematic illustration of engine setup

Table 1 Conditions of engine
Type of engine: Kirloskar
Compression ratio: 18:1
Speed: 1500 rpm, constant
Injection point: 30° before TDC
Fuel injection pressure: 200 bar
Rated power: 3.5 kW
Cylinder bore: 87.5 mm
Piston stroke length: 110 mm
Cylinders: 1
Fig. 4 Deviation of BTE with BP (BTE (%) vs. brake power (kW) for H2 flow rates of 0, 1.5, 3, 6, 9, 12 and 15 LPM; CR 18, speed 1500 rpm, IP 200 bar)
3 Results and Discussion
3.1 Performance Parameters
One performance parameter is investigated and discussed below.
Brake Thermal Efficiency (BTE): Fig. 4 shows the variation of BTE for diesel fuel and hydrogen at various brake powers and hydrogen flow rates. BTE with hydrogen is better than with standard diesel. With the addition of hydrogen, heat transfer losses from the combustion chamber and frictional losses are reduced, which leads to an increase in BTE [7, 12].
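Brake thermal efficiency in dual-fuel mode is the ratio of brake power to the total fuel energy rate supplied by both fuels. The sketch below illustrates the calculation under assumed heating values and hypothetical flow rates (the flow rates are not the paper's measured data).

```python
# Hedged sketch: brake thermal efficiency (BTE) in dual-fuel mode.
#   BTE = BP / (m_dot_diesel * LHV_diesel + m_dot_H2 * LHV_H2)
# Flow rates below are illustrative operating points, not measurements.

LHV_DIESEL = 42.5e6   # J/kg, typical lower heating value of diesel
LHV_H2 = 120.0e6      # J/kg, lower heating value of hydrogen

def bte(bp_kw, mdot_diesel_kg_h, mdot_h2_kg_h=0.0):
    """Brake thermal efficiency (%) from brake power and fuel flows."""
    heat_in_w = (mdot_diesel_kg_h * LHV_DIESEL
                 + mdot_h2_kg_h * LHV_H2) / 3600.0
    return 100.0 * bp_kw * 1e3 / heat_in_w

# Hypothetical point at the rated 3.5 kW brake power:
diesel_only = bte(3.5, 1.05)          # 1.05 kg/h of diesel alone
dual_fuel = bte(3.5, 0.90, 0.012)     # diesel partly displaced by H2
print(round(diesel_only, 1), round(dual_fuel, 1))
```

With part of the diesel energy displaced by hydrogen's much higher heating value per kilogram, the computed BTE rises, consistent with the trend reported above.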
3.2 Emission Parameters
Variations in NOx were studied here.
Nitrogen Oxides (NOx): The variation of NOx at different hydrogen flow rates and with conventional diesel is shown in Fig. 5. Both load and hydrogen enrichment of diesel led to an increase in NOx emissions. The presence of hydrogen increases the combustion velocity, which raises the temperature and pressure in the combustion chamber and hence NOx emissions [7, 12].
Fig. 5 Variation of NOx emissions with BP (CR 18, speed 1500 rpm, IP 200 bar; H2 flow rates 0–15 LPM)
4 Prediction of Performance and Emissions Using ANN Model
The present work uses the back-propagation (BP) learning technique, which is widely applied in science and engineering. The network consists of three layers: an input layer, a hidden layer, and an output layer [13]. To train and test the neural networks, input data sets and corresponding target sets are necessary (Figs. 6 and 7).
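A three-layer back-propagation network of the kind described above can be sketched in a few lines of numpy. This is not the authors' trained model: the layer sizes, learning rate, and synthetic data (standing in for inputs such as brake power and H2 flow, and a target such as BTE) are all illustrative.

```python
# Minimal sketch of a three-layer back-propagation network
# (input -> hidden -> output) trained on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 2))          # e.g. [brake power, H2 flow]
y = 0.5 * X[:, :1] + 0.3 * X[:, 1:]      # synthetic target, e.g. BTE

W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
lr = 0.3

def forward(X):
    h = np.tanh(X @ W1 + b1)             # hidden layer activation
    return h, h @ W2 + b2                # linear output layer

losses = []
for _ in range(500):
    h, pred = forward(X)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Back-propagation: gradients of the mean-squared-error loss
    dW2 = h.T @ err / len(X); db2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)     # tanh derivative
    dW1 = X.T @ dh / len(X); db1 = dh.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The training loss falls steadily, mirroring the role the trained network plays in the paper: once fitted to the experimental data sets, it forecasts parameters at operating points not used during training.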
5 Conclusion
An experimental investigation was conducted on a single-cylinder diesel engine operating in dual-fuel mode at different hydrogen flow rates and a compression ratio of 18. The experiments led to the following conclusions: A hydrogen reactor was used to produce hydrogen, tests were conducted with it on the dual-fuel engine, and promising results were obtained; hence, this reactor can be used for onboard hydrogen generation. Experiments were conducted by supplying hydrogen through a flow meter at different LPM, and performance and emission parameters were recorded. Compared with normal diesel, performance characteristics such as brake thermal efficiency were enhanced, while emissions such as NOx increased with hydrogen.
Fig. 6 Values of training, validation, and test data
Fig. 7 Training process in neural network
References
1. Vijayaragavan M, Lalgudi Ramachandran GS (2020) Effect of synergistic interaction between hydrogen inductions with Simarouba glauca-diesel blend for CI engine application. Heat Mass Transfer 56:1739–1752. https://doi.org/10.1007/s00231-020-02810-3
2. Varuvel EG, Sonthalia A, Subramanian T, Aloui F (2018) NOx-smoke trade-off characteristics of minor vegetable oil blends synergy with oxygenating in a commercial CI engine. Environ Sci Pollut Res 25(35):35715–35724
3. Jayaraman K, Babu GN, Dhandapani G et al (2019) Effect of hydrogen addition on performance, emission, and combustion characteristics of Deccan hemp oil and its methyl ester–fuelled CI engine. Environ Sci Pollut Res 26:8685–8695
4. Sunmeet SK, Subramanian KA (2017) Experimental investigations of effects of hydrogen blended CNG on performance, combustion and emissions characteristics of a biodiesel fueled reactivity controlled compression ignition engine (RCCI). Int J Hydrogen Energy 42:4548–4560
5. Hamdan MO, Selim MYE (2016) Performance of CI engine operating with hydrogen supplement co-combustion with jojoba methyl ester. Int J Hydrogen Energy 41:10255–10264
6. Madhujit D, Abhishek P, Durbadal D, Sastry GRK, Raj SP, Bose PK (2015) An experimental investigation of performance emission trade off characteristics of a CI engine using hydrogen as dual fuel. Energy 85:569–585
7. Jaikumar S, Bhatti SK, Srinivas V (2019) Experimental explorations of dual fuel CI engine operating with Guizotia abyssinica methyl ester-diesel blend (B20) and hydrogen at different compression ratios. Arab J Sci Eng 44:10195–10205. https://doi.org/10.1007/s13369-019-04033-z
8. Boretti A (2021) The hydrogen economy is complementary and synergetic to the electric economy. Int J Hydrogen Energy 46(78):38959–38963. https://doi.org/10.1016/j.ijhydene.2021.09.121
9. Thanapalan K, Zhang F, Maddy J, Premier G, Guwy A (2011) Design and implementation of on-board hydrogen production and storage system for hydrogen fuel cell vehicles. In: 2011 2nd international conference on intelligent control and information processing, pp 484–488. https://doi.org/10.1109/ICICIP.2011.6008291
10. Vinay Kumar D, Ravi Kumar P, Santosha Kumari M (2013) Prediction of performance and emissions of a biodiesel fueled lanthanum zirconate coated direct injection diesel engine using artificial neural networks. Procedia Eng 64:993–1002
11. Malek A, Prasad E, Aryasomayajula S, Thomas T (2017) Chimie douce hydrogen production from Hg contaminated water, with desirable throughput, and simultaneous Hg-removal. Int J Hydrogen Energy 42(24):15724–15730
12. Kumar RS, Loganathan M, Gunasekaran EJ (2015) Performance, emission and combustion characteristics of CI engine fuelled with diesel and hydrogen. Front Energy 9:486–494. https://doi.org/10.1007/s11708-015-0368-4
13. Xu K, Xie M, Tang LC, Ho SL (2003) Application of neural networks in forecasting engine systems reliability. Appl Soft Comput 2:255–268
Machine Learning Model for Medical Data Classification for Accurate Brain Tumor Cell Detection Gnana Sri Sai Sujith Navabothu, Himanshu Sakode, Jagathi Gottipati, and Polagani Rama Devi
Abstract The health industry has grown into a large enterprise, and the healthcare sector produces vast amounts of medical data every day, which can be mined to predict a patient's future diseases from treatment history and health information. Automated defect identification in medical imaging has become an emerging field in a variety of medical diagnostic applications. Automated brain tumor identification and categorization are the focus of our work. In most cases, CT or MRI scans are used to examine the anatomy of the brain, and the main objective of brain tumor detection is to aid clinical diagnosis. The goal is to design a machine learning method that confirms the existence of a tumor and provides a reliable technique of tumor diagnosis, helping medical staff deliver proper medication to the patient. We use deep convolutional neural network (CNN) models, pre-trained and adapted through transfer learning, to extract features from brain MRI scans. The Visual Geometry Group network (VGG-16) is among the most notable image recognition architectures, as it outperforms other traditional convolutional neural network models. Keywords Automated brain tumor detection · Visual geometry group (VGG-16) · Transfer learning · Recognition · Convolutional neural networks
1 Introduction
Automated brain tumor identification and categorization are the focus of our work. The goal is to design a machine learning (ML) method that confirms the existence of a tumor and provides a reliable technique of tumor diagnosis that helps medical staff provide medication to the patient.
G. S. S. S. Navabothu (B) · H. Sakode · J. Gottipati · P. Rama Devi Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada 520007, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 B. B. V. L. Deepak et al. (eds.), Intelligent Manufacturing Systems in Industry 4.0, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-99-1665-8_47
1.1 Origin of Problem
The brain is a large and complicated organ with 50–100 billion neurons. It is made up of a great number of cells, each of which has a distinct purpose. When the body produces extra cells, they form a mass of tissue known as a tumor [1]. The delicate functioning of the body is harmed when a tumor grows in the brain region, so the importance of early identification and recognition of brain cancers cannot be overstated. Computer-aided diagnostic (CAD) [2] tools are commonly used to diagnose brain disorders in a systematic and specific manner. A brain tumor is an abnormal tissue development in the brain or central spine that may disrupt the brain's normal function. It is critical to identify a brain tumor at an early stage to preserve lives, and tumor identification should be carried out with great speed and precision. This is only achievable with magnetic resonance (MR) imaging, where suspicious areas are retrieved from complicated medical pictures using MR image segmentation [3]. However, there are drawbacks: manual inspection takes a long time, MR image segmentation by different specialists can differ greatly, the findings of the same physician may change between situations, and the brightness and contrast of the display screen can affect segmentation results. For these reasons, automatic identification of brain tumors is critical, and patients whose tumors are detected automatically have a better chance of survival. Machine learning-based approaches play a vital role in helping diagnose and identify tumor cells and are an efficient alternative to brain tumor biopsy. In this work, we developed a neural network model that uses transfer learning (a pre-trained VGG-16 Keras model) to identify and categorize brain tumors.
1.2 Basic Definition and Background
(a) Operations of Neural Network
Neural networks (NNs) serve as the basis for deep learning, a subset of ML in which algorithms simulate the functioning of the human brain. NNs receive input data, train on it to find patterns and insights, and then determine the output for a new set of data. Neurons arranged in layers form neural networks [4]. The first layer, called the input layer, receives the data, while the output layer produces the predictions. The hidden layers, which perform the computations of the model, are positioned between the input and output layers. Each of our brain tumor images measures 64 by 64 pixels, and each neuron in the first layer takes one pixel as input. Neurons in one layer connect to neurons in the next layer through channels. The input is multiplied by the appropriate weight, summed, and transmitted to the hidden layer neurons as input. A "bias" is a numerical value that
is allocated to each neuron and added to the weighted input sum. This value is then passed through a threshold function known as an "activation function" [5]. Whether a particular neuron fires depends on the result of the activation function; in the final output layer, the neuron with the greatest value is activated and determines the output. Several investigations demonstrate that neural network classification techniques, notably the convolutional neural network (CNN), outperform random forest and support vector machine (SVM) algorithms.
(b) Transfer Learning
A key assumption in many data mining and ML approaches is that the training and test data lie in the same feature space and follow the same distribution. This assumption, however, does not hold in many real-world scenarios: later data may use a different feature space or distribution, and we may have sufficient training data in one domain but not enough in another. In such circumstances, effective knowledge transfer can considerably improve learning performance by removing the need for expensive data labeling. In recent years, a new method of learning has emerged to address this issue: transfer learning. Thanks to transfer learning, neural networks can now be trained with substantially less data; we effectively transfer the "knowledge" a model learned from a previous task to our current one. This method has been shown to improve model accuracy while reducing training time: less data, less time, and greater accuracy. There are three types of transfer learning: inductive, transductive, and unsupervised, and the majority of prior works concentrated on these settings [6, 7]. Additionally, each transfer learning approach may be divided into four settings based on what is transferred.
Examples include the relational knowledge transfer approach, parameter transfer, feature representation transfer, and the instance transfer technique. Larger, deeper networks have used smaller, converged networks as initializations; this procedure is called pre-training. It is a strenuous and time-consuming process in which a network is trained before being used as the initialization for other networks.
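The parameter-transfer idea above can be illustrated in miniature: a "pre-trained" feature extractor is frozen and only a small classification head is trained on the target task. In this sketch the frozen extractor is a fixed random projection standing in for convolutional layers trained on a source task; the data and sizes are illustrative assumptions.

```python
# Conceptual sketch of parameter transfer: freeze the base, train the head.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # toy binary labels

W_frozen = rng.normal(size=(20, 16))        # "pre-trained" weights: NOT updated

def features(X):
    return np.maximum(X @ W_frozen, 0.0)    # frozen ReLU feature extractor

w, b = np.zeros(16), 0.0                    # trainable head only
for _ in range(500):
    F = features(X)
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid classification head
    grad = p - y                            # gradient of the logistic loss
    w -= 0.2 * F.T @ grad / len(X)
    b -= 0.2 * grad.mean()

F = features(X)
p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
acc = float(((p > 0.5) == y).mean())
print(f"head-only training accuracy: {acc:.2f}")
```

Only the 17 head parameters are updated, which is why transfer learning needs far less data and time than training the whole network from scratch.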
1.3 Problem Statement, Objectives, and Outcomes
Automated tumor detection in MRI is essential because it offers details about abnormal tissues that are needed for treatment planning. The long-standing approach to tumor detection is human inspection, but this method becomes futile as the amount of data grows. Efficient and automatic classification mechanisms are therefore needed to reduce the human death toll; automated approaches save the radiologist's time and achieve a measurable accuracy upon testing. Such methods identify brain tumor boundaries that are invisible to the naked eye, which greatly helps neurosurgeons identify the type of tumor and helps the medical staff
to determine the necessary treatment. It provides robust, quantifiable measures of tumor depiction, significantly relieving doctors of the difficulty of manually delineating tumors in their medical management.
1.4 Real-Time Applications of Proposed Work
It saves and prolongs a patient's life by detecting malignancies early; users no longer have to wait long for the findings of the MRI analysis, and it plays a crucial part in image-guided surgery.
2 Summary of Literature Study
2.1 ImageNet Classification with Deep CNN
In the ImageNet LSVRC-2010 challenge, 1.2 million images were divided into 1000 different classes using a deep CNN, which produced top results in image classification. The neural network was constructed from five convolutional layers, some followed by max-pooling layers, three fully-connected layers, and a final Softmax layer. Altogether, it had 650 thousand neurons and 60 million parameters. The authors did not include unsupervised pre-training, to simplify the trials, despite anticipating that it would help, especially given enough processing capacity to dramatically expand the network size without an increase in the volume of labeled data. Results improved when they increased the depth of the network and its training time; however, they noted a long way to go before replicating the inferotemporal pathway of the human visual system. They proposed using very large, deep convolutional networks on video sequences in the future, since the temporal structure of such sequences reveals highly significant features that are invisible or missing in static pictures.
2.2 Very Deep Convolutional Networks for Large-Scale Image Recognition
This study examined how the depth of a convolutional neural network affects its accuracy in large-scale image recognition. A key contribution is increasing the depth of the network architecture by stacking small convolutional filters of size 3 × 3, resulting in 16–19 weight layers in total. VGG
even when trained with fewer weight layers, can achieve a significant improvement over the prior configuration. This led to the conclusion that representation depth improves classification accuracy and that better classification performance can be reached with significantly increased depth. It has also been shown how effectively these models generalize to a range of tasks and datasets. The results demonstrate that depth in visual representations is important.
2.3 A Survey on Transfer Learning
This study classified and evaluated how regression, clustering, and classification models may benefit from transfer learning. It also discusses the relationship between transfer learning and related machine learning techniques such as multitask learning, domain adaptation, covariate shift, and sample-selection bias, mentions potential future obstacles in the transfer learning field, and evaluates numerous contemporary transfer learning trends. Inductive, transductive, and unsupervised are the three types of transfer learning, and unsupervised transfer learning may become increasingly popular in the future. Furthermore, depending on "what to transfer" in learning, each of these approaches may be grouped into four settings; examples include instance transfer, parameter transfer, relational knowledge transfer, and feature representation transfer.
2.4 Going Deeper with Convolutions
This research introduced the Inception deep CNN architecture, whose main feature is better use of computational resources within the network. A carefully designed architecture enables the network's depth and breadth to be extended while keeping the computational cost constant. The results suggest that approximating the expected optimal sparse structure with readily available dense building blocks is a viable approach for creating neural networks for computer vision. GoogLeNet uses a detection strategy like R-CNN, but with the Inception model as the region classifier. The superpixel size was increased by a factor of two to reduce false positives, which halves the number of proposals generated by the selective-search algorithm. Adding 200 multi-box region proposals results in about 60% of the original proposals being used while boosting coverage from 92 to 93%. For the single-model case, the overall benefit of reducing the number of proposals while increasing coverage is a 1% gain in mean average precision. Finally, an ensemble of six ConvNets for classifying each region improved accuracy from 40 to 43.9%.
2.5 Deep Residual Learning for Image Recognition
This work introduced the ResNet architecture, which uses batch normalization and skip connections. The authors developed a residual learning framework to ease the training of networks with substantially more layers than previously used, and presented extensive empirical evidence that residual networks are easier to optimize and gain accuracy from depth. Using ground-truth classes, VGG yields a center-crop error of 33.1%; the RPN approach with the ResNet-101 network considerably decreases the center-crop error to 13.3% in the same setting, revealing the better efficiency of the framework. ResNet-101 has an error of 11.7% when employing ground-truth classes with dense (fully convolutional) and multi-scale testing. The top-5 localization error is 14.4% when ResNet-101 is used to predict classes (with a 4.6% top-5 classification error).
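The skip connection at the heart of ResNet can be shown in a few lines: the block outputs F(x) + x, so when the residual branch F contributes nothing, the block reduces exactly to the identity, which is what makes very deep stacks easy to optimize. The weights and sizes below are illustrative.

```python
# Minimal illustration of a ResNet-style skip connection: y = F(x) + x.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (8, 8))        # residual-branch weights
W2 = rng.normal(0, 0.1, (8, 8))

def residual_block(x):
    f = np.maximum(x @ W1, 0.0) @ W2   # residual branch F(x) with ReLU
    return f + x                       # skip connection adds the input back

x = rng.normal(size=(1, 8))
y = residual_block(x)
assert y.shape == (1, 8)

# If the residual branch learns zeros, the block is exactly identity:
W1[:] = 0.0
W2[:] = 0.0
assert np.allclose(residual_block(x), x)
```

This identity shortcut is why adding more residual blocks cannot easily hurt the representable function: each extra block only needs to learn a small correction on top of its input.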
3 Proposed Method
The most efficient and convenient way to detect and classify brain tumors is to use deep learning neural networks. VGG-16 (Visual Geometry Group) and a traditional CNN are implemented in this project to detect brain tumors in the input images. The design methodology, architecture, and algorithms are discussed in detail in this chapter.
3.1 Design Methodology
Because factors such as the brightness and contrast of the display screen can affect locating tumor cells in brain MRI scans viewed with the naked eye, trusted and flexible automation schemes are used for classification and detection. The figure below outlines the essential steps of the automated tumor-recognition process. The initial step is to gather the dataset and normalize the images using pre-processing techniques; the dataset is then separated into training and testing data. The next phase builds the network and trains it with the training data. After training for a certain period, the model is tested using the test data, and at the final stage its performance, detection, and classification capabilities are evaluated.
3.2 System Architecture Diagram
Step-by-step procedure of the proposed methodology (Fig. 1):
• In the initial step, download the brain tumor dataset from Kaggle.
• As the images are not all on the same scale, normalization, reshaping, and augmentation are done through pre-processing techniques.
• The dataset is divided such that 70% of the data are used as the training set and the remaining 30% as the test set.
• The VGG-16 model is created with 13 convolutional and three fully connected layers, where the output layer consists of four nodes, each denoting one class of brain tumor.
• The model is built using the training data and then evaluated against the test set.
• Precision, accuracy, and other parameters are calculated and monitored using a variety of performance indicators, which helps in understanding how well the model performs against the predictions.
Fig. 1 Process flow of networks
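The shuffle-and-split step of the procedure above can be sketched as follows. The array contents are placeholders standing in for the pre-processed 64 × 64 MRI images and their four-class labels; only the 70/30 split logic is the point.

```python
# Sketch of the 70/30 train/test split, assuming the 3264 images are
# already loaded into X with integer class labels y (placeholder data).
import numpy as np

rng = np.random.default_rng(42)
X = rng.random((3264, 64, 64, 3))          # stands in for the MRI scans
y = rng.integers(0, 4, 3264)               # 4 tumor classes

idx = rng.permutation(len(X))              # shuffle before splitting
cut = int(0.7 * len(X))                    # 70% boundary
X_train, X_test = X[idx[:cut]], X[idx[cut:]]
y_train, y_test = y[idx[:cut]], y[idx[cut:]]

print(len(X_train), len(X_test))           # 2284 980
```

Shuffling before the cut matters: if the folders are stored class-by-class, an unshuffled split would leave whole classes out of the training set.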
3.3 Description of VGG-16 Algorithm
Karen Simonyan and Andrew Zisserman of the Visual Geometry Group Lab at Oxford University proposed the VGG-16 architecture in their 2014 publication "Very Deep Convolutional Networks for Large-Scale Image Recognition"; it took first and second places in the 2014 ILSVRC localization and classification tasks, respectively [13]. A couple of points explain the case for VGG-16 even though there were other top-performing models such as AlexNet and ZFNet. First, this model uses relatively small 3 × 3 filters with a stride of 1 pixel throughout the whole network. VGG stands out for consistently applying 3 × 3 filters: two successive 3 × 3 filters produce an effective 5 × 5 receptive field, and three consecutive 3 × 3 filters add up to a 7 × 7 field, so a stack of 3 × 3 filters can represent a larger receptive area. Three 3 × 3 filters provide the benefit of three non-linear activation layers in addition to three convolutional layers, as opposed to one for a single 7 × 7 filter; the decision function is therefore more discriminative, and the network converges faster. Second, it reduces the total number of weight parameters: a three-layer 3 × 3 convolutional stack has roughly half the weight parameters of a single 7 × 7 convolutional layer (27C² versus 49C² for C channels). This can be seen as regularizing the 7 × 7 filters by decomposing them into 3 × 3 filters, with non-linearity introduced in between by ReLU activations, which lessens the network's propensity to overfit the training data (Fig. 2).
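The receptive-field and parameter arithmetic behind the stacked 3 × 3 argument can be checked directly:

```python
# Arithmetic behind the "stacked 3x3 filters" argument: a stack of n
# 3x3 convolutions (stride 1) has the receptive field of one larger
# filter but fewer weights and more non-linearities.

def stacked_rf(n, k=3):
    """Receptive field of n stacked k x k convolutions with stride 1."""
    return n * (k - 1) + 1

def conv_params(k, channels):
    """Weights in one k x k conv with `channels` in and out (no bias)."""
    return k * k * channels * channels

assert stacked_rf(2) == 5     # two 3x3 convs see a 5x5 region
assert stacked_rf(3) == 7     # three 3x3 convs see a 7x7 region

C = 64
# Three 3x3 layers: 27*C^2 weights vs 49*C^2 for a single 7x7 layer
print(3 * conv_params(3, C), conv_params(7, C))   # 110592 200704
```

For 64 channels the three-layer 3 × 3 stack needs 110,592 weights against 200,704 for one 7 × 7 layer, matching the roughly-half parameter claim above.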
Processing of the brain tumor MRI images through the VGG-16 model:
• In the first step, the input image passes through the initial stack of two convolutional layers with 3 × 3 receptive fields, followed by ReLU activations. These two layers each contain 64 filters, with stride and padding both kept at 1 pixel, so the spatial structure is preserved: the output map of this stage matches the dimensions of the input picture. The maps are then subjected to a max-pooling operation over a 2 × 2-pixel window with a stride of two pixels, which halves the size of the output activation maps.
Fig. 2 VGG-16 architecture
• The output feature maps are then directed to the next stack of layers, similar to the first except that the number of filters is increased from 64 to 128; the feature maps are again halved in size. Thereafter, a third stack of three convolutional layers and one max-pooling layer is added, with 256 filters. The next two blocks each include three convolutional layers with 512 filters.
• The convolutional stacks are followed by three fully connected layers, with a flattening layer in between. The first two fully connected layers each contain 512 neurons, while the last fully connected layer, the output layer, has four neurons corresponding to the four potential categories of brain tumors. A Softmax activation layer, used for categorical classification, is applied after the output layer.
The convolutional layer is the most important element of a CNN design, but the network has several other layer types:
• Input Layer—takes in the input image's pixel values.
• Convolutional Layer—extracts features from the input image obtained from the input layer. Using small squares of the input image, it learns relevant features while maintaining the relationship between pixels. It is a mathematical operation with two inputs: the image matrix and a filter (or kernel) that slides over that matrix, extracting important features; the result is called a feature map. With different filters, this layer can perform operations such as edge detection and sharpening.
• Activation Layer—generates a single output from the weighted sum of the inputs [6].
In this project, we use the ReLU activation function in the convolutional layers and the softmax activation function in the output layer. They are defined as follows:
ReLU Activation: The rectified linear activation function (ReLU) is a piecewise linear function that outputs the input directly if the input is positive and outputs zero otherwise. It is the most widely used activation function in neural networks, particularly convolutional neural networks (CNNs) and multilayer perceptrons. It is represented as:
f(x) = 0 if x < 0; x if x ≥ 0 (1)
Softmax Activation: The softmax is an activation function that changes a vector of K real values into another vector of K real values that sum to 1. It maps the input values, which may be positive, negative, or zero, into values between 0 and 1, making them interpretable as probabilities: negative or small inputs become small probabilities, and large inputs become high probabilities. This function is also called the softargmax function or multi-class logistic regression, since softmax can be viewed as a generalization of logistic regression to multi-class categorization and its formula is closely related to the sigmoid function. It is represented as:
f(x_i) = e^{z_i} / Σ_{j=1}^{K} e^{z_j} (2)
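Both activation functions translate directly into code (assuming numpy):

```python
# Direct implementations of Eqs. (1) and (2).
import numpy as np

def relu(x):
    """Eq. (1): pass positive inputs through, clamp negatives to zero."""
    return np.maximum(x, 0.0)

def softmax(z):
    """Eq. (2): exponentiate and normalize so the outputs sum to 1.
    Subtracting max(z) first is a standard numerical-stability trick."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

assert list(relu(np.array([-2.0, 0.0, 3.0]))) == [0.0, 0.0, 3.0]
p = softmax(np.array([1.0, 2.0, 3.0]))
assert abs(p.sum() - 1.0) < 1e-9
assert p.argmax() == 2     # the largest logit gets the highest probability
```

The max-subtraction in `softmax` does not change the result (it cancels in the ratio) but prevents overflow for large logits.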
• Pooling Layer—decreases the total number of parameters when large images are given as input. Spatial pooling (also called subsampling or downsampling) reduces the spatial dimensionality of the feature maps while preserving the key details [6]. There are three main variants:
Max Pooling—selects the greatest value in each window of the feature map.
Average Pooling—takes the average of the values in each window.
Sum Pooling—adds all the values in each window.
• Fully Connected Layer—called the FC layer for short. A flatten operation transforms the data from the previous layer into a one-dimensional array and feeds it into the next layers of the neural network; from this stage on, all subsequent layers are fully connected. Together, these features constitute the model for classifying the input brain MRI images into one of the four categories.
• Dropout Layer—prevents network nodes from co-adapting with one another and is used to stop the model from overfitting (Fig. 3).
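The pooling variants above can be demonstrated on a toy 4 × 4 feature map (the values are arbitrary):

```python
# 2x2 pooling with stride 2 on a toy 4x4 feature map, showing the
# max and average variants described above.
import numpy as np

def pool2x2(fmap, mode="max"):
    """Non-overlapping 2x2 pooling; `mode` is "max" or "avg"."""
    h, w = fmap.shape
    out = np.empty((h // 2, w // 2))
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            window = fmap[i:i + 2, j:j + 2]
            out[i // 2, j // 2] = window.max() if mode == "max" else window.mean()
    return out

fmap = np.array([[1, 2, 5, 6],
                 [3, 4, 7, 8],
                 [9, 8, 3, 2],
                 [7, 6, 1, 0]], dtype=float)

print(pool2x2(fmap, "max"))    # [[4. 8.] [9. 3.]]
print(pool2x2(fmap, "avg"))    # [[2.5 6.5] [7.5 1.5]]
```

Either way the 4 × 4 map shrinks to 2 × 2, which is exactly the halving applied after each VGG convolutional stack (64 → 32 → 16 → 8 → 4 → 2 for the 64 × 64 inputs used here).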
3.4 Description of Datasets, Requirements, and Tools
Fig. 3 a Full network; b partial learning of weights
The brain tumor dataset is obtained from Kaggle. It is divided into two folders (Training and Testing), with four subfolders for each type of image. There are 3264 MRI scans (JPEG) divided into four categories: meningioma, pituitary, glioma, and no tumor; each MRI image is reshaped to 64 × 64 after preprocessing and data augmentation and then fed into the input layer of the CNN.
Training Set:
• Meningioma tumor (822 scans)
• Glioma tumor (826 scans)
• No tumor (395 scans)
• Pituitary tumor (827 scans)
Testing Set:
• Meningioma tumor (100 scans)
• Glioma tumor (100 scans)
• No tumor (105 scans)
• Pituitary tumor (100 scans)
Fig. 4 a Glioma tumor; b meningioma tumor; c pituitary tumor; d no tumor
4 Results and Observations
This chapter presents the evaluation metrics used to compare the models. Observations are noted based on the values and results obtained from the models on the test images.
4.1 Evaluation Metrics The performance of the model can be sum up using a table called a contingency table or confusion matrix. Row-Actual and Column-Predicted confusion matrix is as follows (Fig. 5): • Precision: Exactness–what % of tuples that the model labelled as positive is positive that is predicted is positive and actual is also positive: Precision =
TP TP + FP
(3)
• Recall completeness: The proportion of positive tuples that the classifier correctly identified as positive. Recall =
TP TP + FN
(4)
• Accuracy: The percentage of test set tuples that are correctly predicted. It is also known as the recognition rate. Accuracy =
TP + TN TP + FP + TN + FN
• F1 Score: Harmonic average of recall and precision
Fig. 5 Evaluation metrics
(5)
Machine Learning Model for Medical Data Classification for Accurate …
F1 Score =
557
2 × Precision × Recall Precision + Recall
(6)
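Eqs. (3)-(6) compute directly from confusion-matrix counts; the counts below are made-up illustrations, not the paper's results.

```python
# Computing Eqs. (3)-(6) from confusion-matrix counts (illustrative).
def metrics(tp, fp, tn, fn):
    precision = tp / (tp + fp)                    # Eq. (3)
    recall = tp / (tp + fn)                       # Eq. (4)
    accuracy = (tp + tn) / (tp + fp + tn + fn)    # Eq. (5)
    f1 = 2 * precision * recall / (precision + recall)  # Eq. (6)
    return precision, recall, accuracy, f1

p, r, a, f1 = metrics(tp=75, fp=25, tn=60, fn=40)
assert p == 0.75
assert abs(r - 75 / 115) < 1e-9
assert abs(a - 0.675) < 1e-9          # 135 correct out of 200
print(round(f1, 3))                    # → 0.698
```

Note how precision and recall pull in different directions: lowering the decision threshold catches more true positives (higher recall) but admits more false positives (lower precision), which is why the F1 harmonic mean is reported alongside accuracy.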
4.2 Result Analysis Table 1 give the obtained evaluation measures of the VGG-16 neural network and the traditional CNN neural network regarding their classification and recognition capabilities of the different types of tumors present in the images that are passed to them. The built VGG-16 model’s accuracy and training and validation loss are shown graphically in Fig. 6. Finally, as the model is ready, we upload some MRI images, and the brain tumor type found in the provided input image is returned by the model. It will return the class/label name of the type of tumor if there is present, or else it will return the label “No Tumor” (Fig. 7).
5 Conclusion and Future Work This final section presents the conclusions and discusses future work to improve the model in all aspects.

Table 1 Result comparison
Measure   | VGG-16 observations | CNN observations
Precision | 0.77                | 0.28
Recall    | 0.75                | 0.22
Accuracy  | 0.76                | 0.54
F1_Score  | 0.72                | 0.42
Fig. 6 Graphs illustrating the variations in training accuracy and validation loss over the course of building the VGG model
Fig. 7 Output results
5.1 Conclusion In this work, we identified brain tumors using a CNN together with a computerized segmentation method. The MR images given as input are first converted into grayscale images. Pre-processing is done with a filtering technique known as adaptive bilateral filtering to remove noise from the given image. To identify the tumor region in the MR images, we applied binary thresholding to the denoised image and segmentation in the CNN. Our proposed model achieved an accuracy of 90% while requiring little computation time.
5.2 Future Study A mobile app-based user interface could be provided for clinicians in hospitals, so that they can examine the effects of tumors rapidly and suggest a course of action. Since the complexity and performance of ConvNets depend on the input data format, we may attempt to predict the location and stage of the tumor using volume-based 3D images. Designing three-dimensional (3D) anatomical models from individual patients enhances training, planning, and computer-assisted surgery. VolumeNet was used in combination with the LOPO ("Leave-One-Patient-Out") technique to achieve high training and validation accuracy (> 95%). Each iteration of the LOPO scheme uses one patient for testing and the remaining patients for training the ConvNets; this cycle repeats for each patient. We were able to employ the LOPO testing technique even though it requires a lot of
computation. By doing this, we obtain more training data, which is necessary for ConvNet training. LOPO testing is dependable and well suited to our application, which requires test results for every single patient; if the classifier misclassifies a patient, we can therefore investigate that case separately. By employing classifier-boosting strategies, such as using more images, fine-tuning more hyperparameters, applying data augmentation, training for more epochs, and adding more relevant layers, testing accuracy can be raised at the cost of additional computation time. Classifier boosting builds a model from the training data and then builds a second model that tries to fix the errors of the first, for faster prediction. Even higher accuracy can be achieved by utilizing these techniques. The technology must advance to the point where it can be an asset for any facility that treats brain cancers. Instead of a plain CNN, we may utilize the U-Net architecture for more complex datasets, simply substituting upsampling layers for the max-pooling ones. We ultimately want to deploy very deep and large convolutional networks on video sequences, where the temporal structure provides incredibly important information that is absent, or less visible, in static scans. Unsupervised transfer learning could also become more common in the future.
Brightness Preserving Medical Image Contrast Enhancement Using Entropy-Based Thresholding and Adaptive Gamma Correction with Weighted Distribution

Kurman Sangeeta and Sumitra Kisan

Abstract Contrast enhancement has a significant role to play in improving visibility and interpretability during analysis by medical experts and machines. The enhancement must preserve the original image characteristics, such as the original brightness. The main aim of the proposed method is to improve visual quality while retaining image details such as the entropy and brightness of the images. We propose a contrast-enhancement technique that uses entropy-based Otsu thresholding as the base method for binarization; one of the sub-images is then equalized using adaptive gamma correction with weighted distribution, while the other part is equalized with the contrast limited adaptive histogram equalization (CLAHE) technique. The results are compared with benchmark contrast-enhancing techniques such as MMBEBHE and BBHE and with the Otsu-thresholding binarization technique, and are evaluated on several quantitative parameters such as structural similarity index measurement (SSIM), absolute mean brightness error (AMBE), FSIM, and entropy.

Keywords Minimum mean brightness error bi-histogram equalization (MMBEBHE) · Brightness Preserving Bi-Histogram Equalization (BBHE) · Structural similarity index metric (SSIM) · Feature similarity index metric (FSIM)
1 Introduction Medical images generally have low contrast and poor quality, and are sometimes blurry and distorted due to drawbacks in the acquisition process. Contrast enhancement techniques improve the contrast by minimizing noise and distortions and
K. Sangeeta (B) · S. Kisan CSE Department, VSSUT Burla, Sambalpur 768018, India e-mail: [email protected] S. Kisan e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 B. B. V. L. Deepak et al. (eds.), Intelligent Manufacturing Systems in Industry 4.0, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-99-1665-8_48
561
562
K. Sangeeta and S. Kisan
improving the image quality. The literature suggests various contrast enhancement techniques, roughly classified into two types: direct methods and indirect methods. Direct methods involve specifying and improving a contrast measure. Indirect methods, by contrast, improve contrast without an explicit measure and can be divided into (i) modification of the high- and low-frequency signals of an image, (ii) homomorphic filtering, (iii) histogram modification, and (iv) transform-function-based methods. Among these, histogram modification techniques such as histogram equalization and histogram matching are the most popular for their low computational cost and fast implementation. Histogram equalization maps the input image histogram to an approximately uniform distribution. Histogram equalization (HE) has the drawback of intensity saturation, and as a result a few undesirable visual artefacts can be seen in the output image; thus, HE fails to preserve the brightness of the image. The paper [1] proposed a segmentation-based approach, Brightness Preserving Bi-Histogram Equalization (BBHE), in which the histogram of the source image is partitioned into sub-histograms using the mean grey level as the threshold, and each sub-histogram is then equalized so that brightness is preserved. The authors of [2] proposed a novel technique, dualistic sub-image histogram equalization (DSIHE), which uses the median rather than the mean value for partitioning the histogram. Both BBHE and DSIHE suffer from brightness shifting and intensity-saturation artefacts in the enhanced output image. The authors of [3] proposed recursive mean-separate histogram equalization (RMSHE), which segments the input image recursively using the local mean as the threshold value; its brightness preservation is comparatively better than that of BBHE.
This technique results in over-enhancement of low-contrast regions, and determining the best value for the number of recursion steps r is a tough task. The paper [4] discusses minimum mean brightness error bi-histogram equalization (MMBEBHE), which partitions the histogram at the threshold level with the least absolute mean brightness error. The authors of [5] proposed recursive sub-image histogram equalization (RSIHE), a modified DSIHE identical to RMSHE in its recursive approach but partitioning the histogram using the median grey level as the threshold; it excels over DSIHE in retaining brightness and information. The authors of [6] proposed the brightness preserving dynamic histogram equalization (BPDHE) method for contrast enhancement: an initial one-dimensional Gaussian filter smooths the histogram, followed by partitioning based on its local maxima. Range Limited Bi-Histogram Equalization with Adaptive Gamma Correction (RLBHE with AGC) [7] binarizes the input image histogram using Otsu's thresholding method; the minimum and maximum limits for the HE process are calculated, and a balance between higher visual quality and lower computational cost is maintained. Experimental outcomes prove the efficiency of RLBHE with AGC in enhancing poor-contrast images over RLBHE and AGCWD [15], but it is found to be ineffective for images with complicated backgrounds. The authors of [8] suggested Range Limited Double Threshold Multi Histogram Equalization (RLDTMHE), where the input image is subjected to double thresholding and then a range of grey levels is equalized. The authors of [9] proposed a novel and effective statistical approach,
Range Limited Double Threshold Weighted Histogram Equalization (RLDTWHE). This method integrates Otsu's double thresholding, dynamic range stretching, weighted distribution of the histogram, adaptive gamma correction, and noise reduction using homomorphic filtering. Researchers [10] suggested a linear weighted combination of contrast limited adaptive HE (CLAHE), a colour-restoration technique, and brightness preserving dynamic HE (BPDHE). The paper [12] discusses a method for noise reduction and detail preservation that combines a homomorphic filter with the successive mean quantization transform (SMQT) algorithm. The authors of [13] proposed automatic contrast enhancement based on an analytic view of the contrast distribution at the edges of objects and the background, using no-reference contrast metrics to evaluate the quality of the output image. The proposed technique uses entropy-based Otsu thresholding to segment the original grey image, which ensures not only maximum entropy but also maximum between-class variance in the resultant image. Each sub-image is processed independently: the sub-image with grey levels between 0 and t is processed with adaptive gamma correction with weighted distribution (AGCWD), while the sub-image with grey levels between t + 1 and the maximum intensity level, here 255, is equalized using Contrast Limited Adaptive Histogram Equalization (CLAHE), thereby preserving brightness.
1.1 Organization of the Paper This paper consists of six sections. Section 2 describes the contrast enhancement techniques and image quality assessment methods used in our proposed method. Section 3 discusses the proposed technique. Section 4 details the data set. Section 5 presents a comparative analysis of the results of applying the various techniques. Section 6 gives a brief summary of the proposed method in the conclusion.
2 Mathematical Background

2.1 Entropy-Based Otsu Thresholding [14]

Otsu thresholding aims to automatically find the optimal threshold for image binarization. Binarization is usually a two-step procedure: determining a grey threshold based on some objective criterion, and assigning each pixel to one of the two classes. According to Otsu's thresholding, the threshold value Th is the intensity value that maximizes the between-class variance, as given in Eq. (1):

Th = argmax_{0 ≤ i ≤ L−1} [w1(i)(μ1(i) − μT)² + w2(i)(μ2(i) − μT)²]    (1)

where w1, w2 are the class probabilities, μ1, μ2 the class means, and μT the global mean.
An alternative, faster approach suggested by the authors of [18] is given by Eq. (2):

Th = argmax_{0 ≤ i ≤ L−1} [w1(i)μ1²(i) + w2(i)μ2²(i)]    (2)
Otsu's method is improved by adding a weight W, as given by Eq. (3):

Th = argmax_{0 ≤ i ≤ L−1} W(i)[w1(i)μ1²(i) + w2(i)μ2²(i)]    (3)
The first weighting scheme is the valley-emphasis method proposed in [16], given by Eq. (4):

Th = argmax_{0 ≤ i ≤ L−1} (1 − Pi)[w1(i)μ1²(i) + w2(i)μ2²(i)]    (4)
Here the weight is W = 1 − Pi. The entropy-based weight for Otsu's thresholding is calculated as follows. The a posteriori entropy of an image at grey level i is given by Eq. (5):

En′ = −Pi ln Pi − (1 − Pi) ln(1 − Pi)    (5)
where Pi = Σ_{j=0}^{i} pj and 1 − Pi = Σ_{j=i+1}^{L−1} pj. On maximizing En′, both Pi and 1 − Pi tend to ½. To overcome this problem, the objective function recommended in [17] is given by Eq. (6):

f(i) = ln[Pi(1 − Pi)] + Ei/Pi + (En − Ei)/(1 − Pi)    (6)

where Ei = −Σ_{j=0}^{i} pj ln pj and En = −Σ_{j=0}^{L−1} pj ln pj.
The threshold value is the intensity level that maximizes f(i). For a uniform distribution of intensity levels, the objective function reduces to Eq. (7):

f(i) = ln[i(L − 1 − i)]    (7)
The entropy-based weight is combined with Otsu's method by replacing the factor W in Eq. (3) with f(i), as illustrated in Eq. (8):

Th = argmax_{0 ≤ i ≤ L−1} f(i)[w1(i)μ1²(i) + w2(i)μ2²(i)]    (8)
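A minimal sketch of the threshold selection in Eqs. (6) and (8), assuming an 8-bit greyscale image; the function name and the synthetic bimodal image are illustrative, not from the paper:

```python
import numpy as np

def entropy_otsu_threshold(gray, L=256):
    """Entropy-weighted Otsu: maximize f(i) * [w1*mu1^2 + w2*mu2^2] (Eq. 8)."""
    hist = np.bincount(gray.ravel(), minlength=L).astype(float)
    p = hist / hist.sum()
    P = np.cumsum(p)                          # P_i = class-1 probability w1
    m = np.cumsum(p * np.arange(L))           # cumulative first moment
    plogp = np.where(p > 0, p * np.log(np.where(p > 0, p, 1.0)), 0.0)
    E = -np.cumsum(plogp)                     # E_i
    En = E[-1]                                # E_n
    with np.errstate(divide="ignore", invalid="ignore"):
        mu1 = m / P
        mu2 = (m[-1] - m) / (1 - P)
        f = np.log(P * (1 - P)) + E / P + (En - E) / (1 - P)   # Eq. (6)
        score = f * (P * mu1 ** 2 + (1 - P) * mu2 ** 2)        # Eq. (8)
    score[~np.isfinite(score)] = -np.inf      # degenerate splits are excluded
    return int(np.argmax(score))

# synthetic bimodal image: two clusters, around 50 and around 200
rng = np.random.default_rng(0)
img = np.concatenate([rng.integers(30, 70, 500),
                      rng.integers(180, 220, 500)]).astype(np.uint8)
t = entropy_otsu_threshold(img)
```

For such a bimodal histogram the selected threshold falls in the gap between the two clusters.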
2.2 Adaptive Gamma Correction with Weighted Distribution [15]

The transform function using gamma correction in its simplest form is given by Eq. (9):

Tf(I) = Imax (I / Imax)^γ    (9)
where Imax is the maximum intensity level, and each pixel of the original image with intensity I is transformed to a new value using transform (9). Traditional histogram equalization is given by Eq. (10):

Tf(I) = cdf(I) · Imax    (10)
where cdf(I) is the cumulative distribution function, computed as illustrated in Eq. (11):

cdf(I) = Σ_{j=0}^{I} pdf(j)    (11)

and pdf(I) = nI / (mn) is the probability density function of intensity level I, where nI is the frequency count of I in an image of size m × n. The authors of [15] proposed a novel technique to modify the histogram by hybridizing the traditional gamma correction of Eq. (9) with the traditional histogram equalization of Eq. (10). The modified gamma correction they propose is given by Eq. (12).
Tf(I) = Imax (I / Imax)^γ = Imax (I / Imax)^{1 − cdf(I)}    (12)
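Equation (12), together with the weighted distribution of [15], can be sketched as a lookup-table transform. This is an illustrative implementation, not the authors' code; `alpha` stands for the weighting parameter that compresses the pdf in [15]:

```python
import numpy as np

def agcwd(gray, alpha=0.5, L=256):
    """Adaptive gamma correction with weighted distribution (Eq. 12)."""
    hist = np.bincount(gray.ravel(), minlength=L).astype(float)
    pdf = hist / hist.sum()
    pdf_min, pdf_max = pdf.min(), pdf.max()
    # weighted distribution of [15]: compress the pdf before building the cdf
    pdf_w = pdf_max * ((pdf - pdf_min) / (pdf_max - pdf_min)) ** alpha
    cdf_w = np.cumsum(pdf_w) / pdf_w.sum()
    levels = np.arange(L, dtype=float)
    imax = L - 1
    # Eq. (12): T(I) = Imax * (I / Imax) ** (1 - cdf(I))
    lut = np.clip(np.round(imax * (levels / imax) ** (1.0 - cdf_w)), 0, imax)
    return lut.astype(np.uint8)[gray]

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (32, 32)).astype(np.uint8)
out = agcwd(img)   # gamma = 1 - cdf <= 1 everywhere, so no pixel gets darker
```

Because the exponent 1 − cdf(I) never exceeds 1, the transform brightens (or preserves) every grey level, which is why AGCWD is applied to the dark sub-image in the proposed pipeline.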
2.3 CLAHE [17] CLAHE limits the maximum number of pixels in a histogram bin by clipping the histogram at a clip-limit parameter; the clipped pixels are redistributed equally across all the histogram bins, so the total histogram count stays the same. The parameters of this algorithm that influence the output are the clip limit and the size of the contextual region.
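The clip-and-redistribute step at the heart of CLAHE can be illustrated on a single tile histogram. This is a sketch of the core step only; real implementations typically repeat it, since redistribution can push some bins back above the limit:

```python
import numpy as np

def clip_and_redistribute(hist, clip_limit):
    """Clip a tile histogram at clip_limit and spread the excess equally
    over all bins; the total pixel count is preserved."""
    hist = hist.astype(float)
    excess = np.maximum(hist - clip_limit, 0.0).sum()   # pixels above the limit
    clipped = np.minimum(hist, clip_limit)
    return clipped + excess / hist.size

hist = np.array([120, 5, 3, 0, 2, 70, 0, 0], dtype=float)
out = clip_and_redistribute(hist, clip_limit=40)
```

Limiting the bin heights bounds the slope of the tile's cumulative distribution function, which is what limits the contrast amplification (and hence the noise amplification) of adaptive histogram equalization.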
2.4 Evaluation Metrics for Image Quality

Image quality assessment (IQA) is broadly classified into reference-based evaluation and no-reference evaluation; the difference lies in whether a source image is used as a reference. No-reference IQA, unlike reference-based methods, does not require a base image to evaluate image quality.

Peak Signal-to-Noise Ratio (PSNR) PSNR measures the similarity between two signals; a higher PSNR indicates better image quality. The mathematical formula is given by Eq. (13):

PSNR = 20 log10 (Maxf / √MSE)    (13)
Absolute Mean Brightness Error (AMBE) The absolute difference between the mean brightness of the source and resultant images is known as the absolute mean brightness error (AMBE). It measures the preservation of brightness in the output image; a lower AMBE signifies better brightness preservation. If X and Y are the input and output images, then AMBE is given by Eq. (14):

AMBE = |E(X) − E(Y)|    (14)
Structural Similarity Index Metric (SSIM) [19] and Feature Similarity Index Metric (FSIM) [20] These IQA measures are based on the human visual system (HVS) and assess the perceptual quality of an image: they quantify the similarity in structure and features between the source and resultant images as perceived by a human observer. SSIM measures the variation in luminance, contrast, and structure in an image; a better-quality image is one with higher SSIM and FSIM values.

Entropy Entropy quantifies the information content of an image, typically in bits; a better-quality image is one with higher entropy. The mathematical formula is given by Eq. (15):

H(x) = Σ_{i=1}^{n} p(xi) I(xi) = −Σ_{i=1}^{n} p(xi) log2 p(xi)    (15)
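Equations (13)–(15) translate directly into code. This sketch assumes 8-bit images and uses log base 2 so the entropy comes out in bits; the function names are ours:

```python
import numpy as np

def psnr(x, y, max_f=255.0):
    """Eq. (13): PSNR = 20 * log10(Max_f / sqrt(MSE))."""
    mse = np.mean((x.astype(float) - y.astype(float)) ** 2)
    return 20.0 * np.log10(max_f / np.sqrt(mse))

def ambe(x, y):
    """Eq. (14): absolute difference of the mean brightness of input and output."""
    return abs(float(x.mean()) - float(y.mean()))

def entropy(x, L=256):
    """Eq. (15): Shannon entropy of the grey-level histogram, in bits."""
    p = np.bincount(x.ravel(), minlength=L) / x.size
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

x = np.zeros((4, 4), dtype=np.uint8)
y = x + 5                             # uniformly 5 grey levels brighter
```

A flat image has entropy 0, while an image using all 256 levels equally has the maximum entropy of 8 bits.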
3 Proposed Method The proposed method exploits entropy-based Otsu global thresholding to segment the histogram of an input image; each sub-image is then processed independently using a combination of benchmark contrast-enhancement techniques. The method follows the steps shown in Fig. 1:

1. The input image is converted into a greyscale image.
2. Entropy-based Otsu thresholding is applied to segment the original grey-image histogram into two sub-image histograms at the maximum between-class-variance threshold value, say t.
3. Each sub-image is processed independently using a different contrast-enhancement technique: the sub-image with grey levels between 0 and t is processed with adaptive gamma correction with weighted distribution (AGCWD), while the sub-image with grey levels between t + 1 and the maximum intensity level, here 255, is equalized using Contrast Limited Adaptive Histogram Equalization (CLAHE).
4. The resultant output image is compared with the benchmark techniques Brightness Preserving Bi-Histogram Equalization (BBHE) and minimum mean brightness error bi-histogram equalization (MMBEBHE). The output and input images

Fig. 1 Proposed architecture
(Fig. 1: input image → entropy-based Otsu thresholding → sub-image 1 (AGCWD) and sub-image 2 (CLAHE) → enhanced sub-images merged into the output image)
Table 1 Classification of Herlev pap-smear dataset [16]

Class type | Non-cancerous or cancerous | Type of cell                                 | Cell instances count
1          | Non-cancerous              | Superficial squamous epithelial              | 74
2          | Non-cancerous              | Intermediate squamous epithelial             | 70
3          | Non-cancerous              | Columnar epithelial                          | 98
4          | Cancerous                  | Mild squamous non-keratinizing dysplasia     | 182
5          | Cancerous                  | Moderate squamous non-keratinizing dysplasia | 146
6          | Cancerous                  | Severe squamous non-keratinizing dysplasia   | 197
7          | Cancerous                  | Squamous cell carcinoma in situ intermediate | 150
are compared using PSNR, AMBE, Entropy, SSIM, and FSIM image quality assessment (IQA) metrics.
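The steps above can be sketched end to end. This is a simplified illustration, not the paper's implementation: plain Otsu stands in for the entropy-weighted variant, a fixed gamma stands in for AGCWD, and plain histogram equalization stands in for CLAHE; all function names are ours:

```python
import numpy as np

def otsu(gray, L=256):
    """Plain Otsu stand-in for step 2 (the paper uses the entropy-weighted variant)."""
    p = np.bincount(gray.ravel(), minlength=L) / gray.size
    P = np.cumsum(p)
    m = np.cumsum(p * np.arange(L))
    with np.errstate(divide="ignore", invalid="ignore"):
        sb = (m[-1] * P - m) ** 2 / (P * (1 - P))   # between-class variance
    sb[~np.isfinite(sb)] = -1.0
    return int(np.argmax(sb))

def enhance(gray, gamma=0.8, L=256):
    t = otsu(gray)                                   # step 2
    out = gray.astype(float)
    low, high = gray <= t, gray > t
    # step 3a: gamma correction on the dark sub-image [0, t]
    if low.any():
        out[low] = t * (out[low] / max(t, 1)) ** gamma
    # step 3b: histogram equalization on the bright sub-image [t+1, L-1]
    if high.any():
        v = np.sort(out[high])
        ranks = np.searchsorted(v, out[high], side="right") / v.size
        out[high] = (t + 1) + ranks * (L - 2 - t)
    return np.clip(np.round(out), 0, L - 1).astype(np.uint8)

rng = np.random.default_rng(2)
img = rng.integers(0, 256, (64, 64)).astype(np.uint8)
res = enhance(img)
```

Because each sub-image is mapped back into its own grey-level range ([0, t] and [t + 1, 255]), pixels never cross the threshold, which is what keeps the overall brightness close to the original.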
4 Dataset Details [16] The proposed method is assessed on single-cell cervical cancer images from the new Herlev pap-smear dataset collected by MDE-Lab, University of the Aegean (http://mde-lab.aegean.gr/downloads/smear2005.zip). The data set includes 917 images of Pap-smear cells, classified into seven classes depending on the severity of the disease, as shown in Table 1. There are 242 normal cells and 675 abnormal cells.
5 Result Analysis The values of the various metrics for each of the benchmark contrast techniques, for a single cell from each of the seven classes, are shown in the tables below.
5.1 Normal-Intermediate Squamous Epithelial The contextual region and clip limit are set as 11 by 11 and 0.01 for CLAHE and weighting parameter as 0.05 for AGCWD. The original image, its histogram, and the output image with its histogram are shown in Fig. 2a, b, respectively.
Fig. 2 a Input image and its histogram; b output image and its histogram
Table 2 Comparison of image quality assessment metrics

Metrics\method | MMBHE   | BBHE    | Otsu thresholding (AGCWD + CLAHE) | Proposed method (entropy Otsu + AGCWD + CLAHE)
Entropy        | 5.0476  | 5.5741  | 6.5164  | 6.4588
AMBE           | 44.0458 | 44.0828 | 2.3475  | 0.2378
FSIM           | 0.5235  | 0.5165  | 0.8728  | 0.8691
PSNR           | 9.0318  | 11.0494 | 20.558  | 20.7773
SNR            | 7.4751  | 9.4927  | 18.9952 | 19.1047
SSIM           | 0.2895  | 0.3249  | 0.8286  | 0.8066
The values of various metrics against each of the benchmark contrast techniques are shown in Table 2. The comparative analysis of our proposed method with benchmark contrast enhancement algorithms in Table 2 is plotted in Fig. 3.
5.2 Abnormal-Mild Squamous Non-keratinizing Dysplasia The contextual region and clip limit are set as 13 by 13 and 0.02 for CLAHE and the weighting parameter as 0.05 for AGCWD. The original image, its histogram, and the output image with its histogram are shown in Fig. 4a, b, respectively. The values of the various metrics against each of the benchmark contrast techniques are shown in Table 3.
Fig. 3 Plot for Table 2 (Normal: intermediate squamous epithelial)
Fig. 4 a Input image and its histogram; b output image and its histogram

Table 3 Comparison of image quality assessment metrics

Metrics\method | MMBHE   | BBHE    | Otsu thresholding (AGCWD + CLAHE) | Proposed method (entropy Otsu + AGCWD + CLAHE)
Entropy        | 5.9802  | 5.7088  | 6.0022  | 6.0429
AMBE           | 44.4025 | 9.6717  | 13.0905 | 11.6541
FSIM           | 0.7421  | 0.6748  | 0.8537  | 0.8087
PSNR           | 11.1268 | 18.1307 | 14.6331 | 15.2168
SNR            | 7.7949  | 14.7989 | 11.3012 | 11.8850
SSIM           | 0.5067  | 0.5962  | 0.8334  | 0.7768
Fig. 5 Plot for Table 3
The comparative analysis of our proposed method with benchmark contrast enhancement algorithms in Table 3 is plotted in Fig. 5.
5.3 Normal–Columnar Epithelial Cells The contextual region and clip limit are set as 11 by 11 and 0.02 for CLAHE and weighting parameter as 0.05 for AGCWD. The original image, its histogram, and the output image with its histogram are shown in Fig. 6a, b, respectively. The values of various metrics against each of the benchmark contrast techniques are shown in Table 4. The comparative analysis of our proposed method with benchmark contrast enhancement algorithms in Table 4 is plotted in Fig. 7.
Fig. 6 a Input image and its histogram; b output image and its histogram
Table 4 Comparison of image quality assessment metrics

Metrics\method | MMBHE   | BBHE    | Otsu thresholding (AGCWD + CLAHE) | Proposed method (entropy Otsu + AGCWD + CLAHE)
Entropy        | 5.8097  | 5.7758  | 6.7178  | 6.2688
AMBE           | 58.5149 | 41.1078 | 36.8392 | 1.0401
FSIM           | 0.6346  | 0.5968  | 0.4126  | 0.7692
PSNR           | 10.0210 | 12.7064 | 8.9698  | 18.9807
SNR            | 8.7450  | 11.4304 | 7.6938  | 17.7047
SSIM           | 0.4292  | 0.3771  | 0.2093  | 0.6312
Fig. 7 Plot for Table 4 (Normal: columnar epithelial cells)
5.4 Normal–Superficial Squamous Epithelial Cells The contextual region and clip limit are set as 11 by 11 and 0.02 for CLAHE and weighting parameter as 0.05 for AGCWD. The original image, its histogram, and the output image with its histogram are shown in Fig. 8a, b, respectively. The values of various metrics against each of the benchmark contrast techniques are shown in Table 5. The comparative analysis of our proposed method with benchmark contrast enhancement algorithms in Table 5 is plotted in Fig. 9.
Fig. 8 a Input image and its histogram; b output image and its histogram

Table 5 Comparison of image quality assessment metrics

Metrics\method | MMBHE   | BBHE    | Otsu thresholding (AGCWD + CLAHE) | Proposed method (entropy Otsu + AGCWD + CLAHE)
Entropy        | 6.2702  | 5.9701  | 6.9869  | 7.0952
AMBE           | 63.7066 | 28.5088 | −7.3640 | 18.4719
FSIM           | 0.7144  | 0.7991  | 0.6852  | 0.7500
PSNR           | 10.0680 | 17.5114 | 12.9593 | 17.5905
SNR            | 8.1154  | 15.5589 | 6.9820  | 15.6380
SSIM           | 0.5331  | 0.6398  | 0.5070  | 0.6536
Fig. 9 Plot for Table 5 (Normal: superficial squamous epithelial cell)
5.5 Abnormal–Severe Squamous Non-keratinizing Dysplasia The contextual region and clip limit are set as 11 by 11 and 0.004 for CLAHE and weighting parameter as 0.06 for AGCWD. The original image, its histogram, and the output image with its histogram are shown in Fig. 10a, b, respectively. The values of various metrics against each of the benchmark contrast techniques are shown in Table 6. The comparative analysis of our proposed method with benchmark contrast enhancement algorithms in Table 6 is plotted in Fig. 11.
Fig. 10 a Input image and its histogram; b output image and its histogram
Table 6 Comparison of image quality assessment metrics

Metrics\method | MMBHE    | BBHE     | Otsu thresholding (AGCWD + CLAHE) | Proposed method (entropy Otsu + AGCWD + CLAHE)
Entropy        | 6.5308   | 6.4873   | 6.7513  | 6.5594
AMBE           | −19.9041 | −22.7371 | −3.8157 | −14.6893
FSIM           | 0.8557   | 0.8564   | 0.7423  | 0.7124
PSNR           | 17.8625  | 17.8030  | 12.8971 | 12.4196
SNR            | 11.8851  | 11.8256  | 6.9198  | 6.4422
SSIM           | 0.8431   | 0.8401   | 0.5734  | 0.5243
Fig. 11 Plot for Table 6 (Abnormal: severe squamous non-keratinizing dysplasia)
5.6 Abnormal–Moderate Squamous Non-keratinizing Dysplasia The contextual region and clip limit are set as 11 by 11 and 0.09 for CLAHE and weighting parameter as 0.05 for AGCWD. The original image, its histogram, and the output image with its histogram are shown in Fig. 12a, b, respectively. The values of various metrics against each of the benchmark contrast techniques are shown in Table 7. The comparative analysis of our proposed method with benchmark contrast enhancement algorithms in Table 7 is plotted in Fig. 13.
Fig. 12 a Input image and its histogram; b output image and its histogram
Table 7 Comparison of image quality assessment metrics

Metrics\method | MMBHE    | BBHE     | Otsu thresholding (AGCWD + CLAHE) | Proposed method (entropy Otsu + AGCWD + CLAHE)
Entropy        | 5.7194   | 5.7369   | 6.4340   | 6.3505
AMBE           | −24.2463 | −39.3699 | −44.8529 | −46.0484
FSIM           | 0.7271   | 0.7115   | 0.4673   | 0.4857
PSNR           | 12.0710  | 11.4560  | 9.6155   | 10.0991
SNR            | 2.6860   | 2.0710   | 0.2305   | 0.7141
SSIM           | 0.6090   | 0.6040   | 0.2499   | 0.2982
Fig. 13 Plot for Table 7 (Abnormal: moderate squamous non-keratinizing dysplasia)
5.7 Abnormal–Squamous Cell Carcinoma in Situ Intermediate The contextual region and clip limit are set as 11 by 11 and 0.009 for CLAHE and the weighting parameter as 0.05 for AGCWD. The original image, its histogram, and the output image with its histogram are shown in Fig. 14a, b, respectively. The values of the various metrics against each of the benchmark contrast techniques are shown in Table 8. The comparative analysis of our proposed method with benchmark contrast-enhancement algorithms in Table 8 is plotted in Fig. 15.
Fig. 14 a Input image and its histogram; b output image and its histogram
Table 8 Comparison of image quality assessment metrics

Metrics\method | MMBHE    | BBHE    | Otsu thresholding (AGCWD + CLAHE) | Proposed method (entropy Otsu + AGCWD + CLAHE)
Entropy        | 6.1565   | 6.1745  | 6.2982   | 6.3256
AMBE           | −19.2395 | −8.1089 | −42.7085 | −39.4348
FSIM           | 0.8543   | 0.8501  | 0.6522   | 0.6438
PSNR           | 16.9256  | 16.8040 | 11.2148  | 11.1493
SNR            | 10.6198  | 10.4983 | 4.9091   | 4.8436
SSIM           | 0.7678   | 0.7259  | 0.4682   | 0.4514
6 Conclusion Experimental results demonstrate that, on average, our proposed technique outperforms the other benchmark methods in preserving brightness and entropy and in reducing AMBE on the Herlev pap-smear dataset. For superficial squamous epithelial cells, the entropy is comparatively good and the AMBE is greatly reduced. For intermediate squamous epithelial cells, the entropy is improved and the AMBE is greatly reduced. For columnar epithelial cells, the entropy, PSNR, SNR, SSIM, and FSIM metrics are improved, and the AMBE is greatly reduced. Similarly, mild squamous non-keratinizing dysplasia, moderate squamous non-keratinizing dysplasia, severe squamous non-keratinizing dysplasia, and squamous cell carcinoma in situ intermediate cells show improved values for all the metrics. The future scope of our method involves experimenting with different thresholding techniques and with other medical images.
Fig. 15 Plot for Table 8 (Abnormal: squamous cell carcinoma in situ intermediate)
Optimisation and Performance Enhancement of Mechanical Systems
Statistical Study of the Influence of Anthropometric Parameters on the Hand Grip Strength of an Individual M. Rajesh, H. Adithi, P. Prathik, and Sadhashiv J. Panth
Abstract The purpose of this study was to determine the effects of hand span, height, weight, and gender on hand grip strength (HGS). The study was performed on undergraduate students aged 18 to 22 using a digital dynamometer that measured the HGS of both the dominant and the non-dominant hand. Descriptive statistical analysis was carried out to assess the relationship between the selected human factors and HGS. It was noticed that in females, weight was the most strongly correlated factor with HGS (r = 0.506 and r = 0.459 in the dominant and non-dominant hand, respectively), followed by BMI (r = 0.508 and r = 0.381), while in males, hand span (r = 0.709 and r = 0.575) was the most strongly correlated factor with HGS, followed by weight (r = 0.575 and r = 0.629) and height (r = 0.518 and r = 0.575). Hence, the findings of this research can serve as a foundation for future studies in the field of ergonomic design of products that require hand grip strength.

Keywords Handgrip strength · Handgrip dynamometer · Ergonomics · Regression · Anthropometry
1 Introduction

Hand Grip Strength (HGS) is the maximum force or tension generated by one's forearm muscles. Earlier studies in western countries point out that HGS is an important health indicator [1–3]. HGS is a measure of muscle strength and an indicator of the muscle health and functional integrity of a hand, measured using a hand grip dynamometer. HGS can be used to predict total muscle strength and fitness, making it a powerful assessment tool for overall body strength [4–6]. Several studies indicate that skeletal muscle strength is affected by a variety of intrinsic and extrinsic

M. Rajesh
Faculty, Ramaiah Institute of Technology, Bengaluru, India
e-mail: [email protected]

H. Adithi · P. Prathik (B) · S. J. Panth
Student, Ramaiah Institute of Technology, Bengaluru, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
B. B. V. L. Deepak et al. (eds.), Intelligent Manufacturing Systems in Industry 4.0, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-99-1665-8_49
factors [7–9] such as muscle mass, Muscle Cross-Sectional Area (MCSA), muscle range of motion, muscle contraction velocity, muscle type, gender, age, time of day, nourishment, and use of anabolic steroids. Studies by Fernandes et al. [6] revealed that HGS varied across different ethnicities. Also, Woo et al. [10] found a significant difference in mean HGS values between the same ethnic groups living in different geographic locations. However, studies that can be used to evaluate individuals of different demographics are still lacking among Asians. Very few researchers from India have tried to establish the relationship between grip strength, age, and gender [11, 12]. Bansal collected hand grip strength data from young adults aged between 18 and 25 years and found that males had higher HGS than females [12]. Some studies have also shown that the grip and pinch strength of healthy Indian adults is relatively lower when compared to populations of the same gender and age from other continents. Hence, this study aims to collect data from healthy young Indian adults and determine the factors affecting their handgrip strength.
2 Methodology

The study was carried out on healthy undergraduate students aged between 18 and 25 years at a higher educational institute in India. Individuals who volunteered to take part in this study were clearly briefed about the purpose and objectives of the study. Consent forms were obtained from each of them before their participation in the proposed experiment. The detailed procedure was described, and the trials were recorded in a single testing session. The sample size for experimentation [13, 14] was calculated using the research survey system and was found to be 45 at a 95% confidence level. Accordingly, 20 females and 25 males were randomly invited to participate in the study. Measurements of height (m), weight (kg), hand length (cm), and hand dominance were recorded. The participants were requested to take off their footwear prior to measuring their height to ensure standardized recordings. Weight was measured using a calibrated digital weighing scale. Hand length, defined as the length from just above the distal wrist crease to the tip of the middle finger, was measured using a customized fixture as shown in Figs. 1 and 2. The fixture was made from thin mild steel plates welded at right angles, where the longer plate had a standard ruler attached. For proper measurement, the subjects had to stand and place their hand at a right angle towards the weld corner and the scale. The process was repeated for the other hand as well. A calibrated handgrip dynamometer, as shown in Fig. 3, was used in the study. In order to measure the handgrip strength, the dynamometer was set at level three and connected to a personal computer running an application to record and store the data values. During the recording process, the subjects sat comfortably on a chair with their elbow bent at a right angle and their wrist maintained in a neutral position. The following directions were given: “You must squeeze the handle
Fig. 1 Fixture for hand length measurement
Fig. 2 Hand length measurement during a trial
Fig. 3 Measurement of HGS using handgrip dynamometer
as firmly as you can while maintaining the position of both your body and arm.” The average of three trials was noted for each hand alternately, and the outcomes were quantified in kilograms.
3 Statistical Analysis

Statistical analysis was performed using Microsoft Excel. For all the measured variables, the typical descriptive statistics (mean, standard deviation, minimum, and maximum) were obtained. The association between hand grip strength and the other variables was determined using Pearson correlation. The statistical relationship between the variables was expressed as an equation using the multiple linear regression (MLR) methodology. Descriptive statistics for the measured variables are shown in Tables 1 and 2.

Table 1 Descriptive statistics for all the measured variables in female and male participants

Parameter (HGS)   Male (n = 25), Mean ± SD   Female (n = 20), Mean ± SD
DHGS (kg)         39.73 ± 9.40               20.66 ± 5.51
NDHGS (kg)        35.18 ± 8.59               18.26 ± 4.01
Table 2 Descriptive statistics for the measured anthropometric variables

Parameter             Male (n = 25), Mean ± SD   Female (n = 20), Mean ± SD
Height (m)            1.74 ± 0.05                1.58 ± 0.05
Weight (kg)           69.94 ± 14.03              57.46 ± 9.71
BMI (kg/m²)           22.98 ± 4.04               22.68 ± 3.27
D.H. length (cm)      19.42 ± 1.00               17.45 ± 0.70
N.D.H. length (cm)    19.41 ± 1.00               17.43 ± 0.72

where n = number of subjects, BMI = Body Mass Index, D.H. = Dominant Hand, N.D.H. = Non-Dominant Hand, DHGS = Dominant Hand Grip Strength, NDHGS = Non-Dominant Hand Grip Strength
Table 1 sheds light on the total strength of the dominant and non-dominant hands in male and female subjects. It can be noticed that there is a vast difference between the male and female values for both DHGS and NDHGS.
4 Result

A certain aspect that catches one's attention is that the grip strength of the male sample is almost twice that of the female sample, indicating that on average the hand grip strength of males is predominantly higher than that of females due to certain parameters, which are ascertained in Table 3. The anthropometric measurements of weight and BMI had the strongest correlations with HGS in females (for weight, r = 0.506 and r = 0.459 in the dominant and non-dominant hands, respectively). Female height had the lowest correlation (r = 0.169 and r = 0.292 for the dominant and non-dominant hands, respectively). All factors exhibited a moderate to strong correlation with male HGS. Weight (r = 0.575 and r = 0.629 in the dominant and non-dominant hands, respectively), height (r = 0.518 and r = 0.575), and hand length (r = 0.709 and r = 0.575) were the anthropometric measurements that had a strong correlation with HGS in males.

Table 3 Correlation (Pearson r) between HGS and the variables measured

Parameter            Male DHGS   Male NDHGS   Female DHGS   Female NDHGS
Height (m)           0.518       0.575        0.169         0.292
Weight (kg)          0.575       0.629        0.506         0.459
BMI (kg/m²)          0.479       0.538        0.508         0.381
D.H. length (cm)     0.709       –            0.187         –
N.D.H. length (cm)   –           0.575        –             0.329

Figures 4, 5, 6 and 7 show scatter plots of hand grip strength (HGS, in kg) versus the height of the person (m) and their hand length (cm) for the male and female populations, respectively. A more in-depth look at the graphs provides a reasoning as to why hand length is the most strongly correlated parameter in males: the data points deviate minimally from the regression line. Multiple linear regression (MLR) analysis was done to model the linear relationship between HGS and the human factors that are strongly correlated with it (Table 4).
Fig. 4 Scatterplot of HGS versus height in male
Fig. 5 Scatterplot of HGS versus hand span in male
Statistical Study of the Influence of Anthropometric Parameters …
589
Fig. 6 Scatterplot of HGS (kg) versus BMI (kg/m²) in female
Fig. 7 Scatterplot of HGS versus weight in female
5 Discussion

The findings of this study are similar to those of a study carried out by Nakandala et al. [13] among a healthy first-year residential undergraduate student population (n = 524; 350 female, 174 male; mean age = 21.31 ± 0.93 years). HGS, gender, hand dominance, BMI, hand
length, hand span, handbreadth, forearm length, forearm girth, and wrist circumference were the primary outcome variables. The study showed that male students' HGS of the dominant hand was 35.27 ± 5.91 kg, which is noticeably greater (p < 0.05) than that of female students (19.52 ± 4.34 kg). With the exception of forearm length, HGS showed a moderate positive association with the other measurements. Furthermore, Mullerpatan et al. [11] examined HGS among 1005 healthy, inactive individuals aged 18 to 30 years across numerous Indian states and found comparable HGS values for both genders: women 19.51 ± 3.9 kg and men 33.67 ± 7.2 kg. A study that developed normative HGS values for the Korean population also found that male HGS readings were greater. In addition, Bhattacharyya and Goswami [15] conducted their study using 80 subjects (47 male, 33 female), where the male participants (58.75% of the sample) had greater values for practically all criteria than the female participants (41.25%). According to a study by Hogrel et al. [16], strength is directly correlated with body height, which implies that body height is always one of the foremost considerations in accounting for muscle strength-based activities [16]. The study by ten Hoor et al. [17] found a strong association between weight/BMI and muscle strength (r = 0.35 to 0.55; p < 0.05); given this level of correlation, one simply cannot ignore the inclusion of weight as a major factor. A systematic review conducted by Fallahi and Jadidian [18] demonstrated a strong positive correlation between hand dimensions and the handgrip strength of athletes. Wichelhaus et al. [19] reported that small hands apply less force than large hands. Hence, the outcome of this research is in line with that of several other researchers across different demographics.

Table 4 Multiple linear regression analysis for estimation of HGS

Parameter            Male DHGS   Male NDHGS   Female DHGS   Female NDHGS
Height (m)           0.518       0.575        0.169         0.292
Weight (kg)          0.575       0.629        0.506         0.459
BMI (kg/m²)          0.479       0.538        0.508         0.381
D.H. length (cm)     0.709       –            0.187         –
N.D.H. length (cm)   –           0.575        –             0.329
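The MLR step can be sketched as a least-squares fit; the sample data and resulting coefficients below are hypothetical, since the paper reports correlations rather than the raw measurements or fitted regression coefficients.

```python
import numpy as np

# Hypothetical male data: [height (m), weight (kg), hand length (cm)] -> DHGS (kg)
X = np.array([
    [1.70, 62.0, 18.5],
    [1.75, 70.0, 19.4],
    [1.68, 58.0, 18.1],
    [1.80, 78.0, 20.2],
    [1.73, 66.0, 19.0],
])
y = np.array([34.0, 40.5, 31.2, 45.8, 37.1])

# Multiple linear regression: HGS ~ b0 + b1*height + b2*weight + b3*hand_length
A = np.column_stack([np.ones(len(X)), X])      # prepend an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # ordinary least-squares fit
pred = A @ coef
r2 = 1 - ((y - pred)**2).sum() / ((y - y.mean())**2).sum()
print("coefficients:", np.round(coef, 3), " R^2 =", round(r2, 3))
```

The same fit applied to the study's measurement sheet would yield the prediction equation summarized in Table 4.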
6 Conclusion

The results of the study demonstrate the various anthropometric factors, and their corresponding levels of correlation, that have an effect on the hand grip strength of an individual. They give a general idea of which factors can act as constituent elements in determining the HGS of a person. The finding that the average dominant hand grip strength is 39.73 ± 9.40 kg in males and 20.66 ± 5.51 kg in females clearly indicates the influence of height, weight, and hand length on the handgrip strength of a young individual. However, as the study was based on a healthy student population, the findings may not hold for subjects of different ages, occupations, regions, etc. Hence, future studies can be carried out on different age groups and on industry personnel whose work requires higher physical strength.
References

1. Kim SY (2021) Importance of handgrip strength as a health indicator in the elderly. Korean J Fam Med 42(1):1. https://doi.org/10.4082/kjfm.42.1E
2. Musalek C, Kirchengast S (2017) Grip strength as an indicator of health-related quality of life in old age—a pilot study. Int J Environ Res Public Health 14(12):1447. https://doi.org/10.3390/ijerph14121447
3. Whiting SJ, Cheng PC, Thorpe L, Viveky N, Alcorn J et al (2016) Hand grip strength as a potential nutritional assessment tool in long-term care homes. J Aging Res Healthcare 1(2):1–11
4. Wind AE, Takken T, Helders PJ, Engelbert RH (2010) Is grip strength a predictor for total muscle strength in healthy children, adolescents and young adults. Eur J Pediatr 169(3):281–287
5. Trosclair D, Bellar D, Judge LW, Smith J, Mazerat N, Brignac A (2011) Hand-grip strength as a predictor of muscular strength and endurance. J Strength Cond Res 25(S99)
6. Andrade Fernandes AD, Natali AJ, Vieira BC, Valle MAAND, Gomes Moreira D, Massy-Westropp N et al (2014) The relationship between hand grip strength and anthropometric parameters in men. Arch Med Deporte 31(3):160–164
7. Jung M-C, Hallbeck MS (2004) Quantification of the effects of instruction type, verbal encouragement, and visual feedback on static and peak handgrip strength. Int J Ind Ergon 34:367–374
8. Norman K, Stobaus N, Gonzalez MC, Schulzke JD, Pirlich M (2011) Hand grip strength: outcome predictor and marker of nutritional status. Clin Nutr 30:135–142
9. Folland JP, Williams AG (2007) The adaptations to strength training: morphological and neurological contributions to increased strength. Sports Med 37:145–168
10. Woo J, Arai H, Ng TP, Sayer AA, Wong M, Syddall H et al (2014) Ethnic and geographic variations in muscle mass, muscle strength and physical performance measures. Eur Geriatr Med 5(3):155–164
11. Mullerpatan RP, Karnik G, John R (2013) Grip and pinch strength: normative data for healthy Indian adults. Hand Ther 18(1):11–16
12. Bansal N (2008) Hand grip strength: normative data for young adults. Indian J Physiother Occup Ther, Apr 2008, pp 29–33. ISSN 0973-5674. Available at https://www.i-scholar.in/index.php/ijpot/article/view/47060
13. Nakandala P, Manchanayake J, Narampanawa J, Neeraja T, Pavithra S, Mafahir M, Dissanayake J (2019) Descriptive study of hand grip strength and factors associated with it in a group of young undergraduate students in University of Peradeniya, Sri Lanka who are not participating in regular physical training. Int J Physiother 6(3):82–88. https://doi.org/10.15621/ijphy/2019/v6i3/183876
14. Survey system (2017) Sample size calculator. Creative Research Systems. Available from http://www.surveysystem.com/sscalc.htm
15. Bhattacharyya J, Goswami B (2022) Hand grip muscle strength, endurance and anthropometric parameters in healthy young adults: a cross-sectional study. https://doi.org/10.7860/JCDR/2022/55381.16897
16. Hogrel JY, Decostre V, Alberti C et al (2012) Stature is an essential predictor of muscle strength in children. BMC Musculoskelet Disord 13:176. https://doi.org/10.1186/1471-2474-13-176
17. Ten Hoor GA, Plasqui G, Schols AMWJ, Kok G (2018) A benefit of being heavier is being strong: a cross-sectional study in young adults. Sports Med Open 4(1):12. https://doi.org/10.1186/s40798-018-0125-4
18. Fallahi AA, Jadidian AA (2011) The effect of hand dimensions, hand shape and some anthropometric characteristics on handgrip strength in male grip athletes and non-athletes. J Hum Kinet 29:151–159. https://doi.org/10.2478/v10078-011-0049-2
19. Wichelhaus A, Harms C, Neumann J et al (2018) Parameters influencing hand grip strength measured with the manugraphy system. BMC Musculoskelet Disord 19:54. https://doi.org/10.1186/s12891-018-1971-4
Repeatability Analysis on Multi-fidelity and Surrogate Model Based Multi-objective Optimization Algorithm Anand Amrit
Abstract Physics-based multi-objective design is performed utilizing computationally expensive high-fidelity simulation models and an efficient multi-objective optimization (MOO) algorithm. The MOO technique discussed in this paper exploits variable-fidelity computational models, a design space reduction scheme, and co-kriging surrogates to efficiently determine the Pareto frontier of an aerodynamic design problem at the high-fidelity modeling level. The approach is demonstrated on the aerodynamic shape design of transonic airfoils. The results show that only a small number of high-fidelity model evaluations is necessary to obtain the entire Pareto front. Finally, a repeatability analysis is performed to check the accuracy of the Pareto front obtained.

Keywords Multi-objective optimization · Multi-fidelity · Repeatability · Aerodynamics · Surrogate modeling
1 Introduction

Time has brought an escalation in the complexity of various engineering systems, which requires the simultaneous handling of several conflicting objectives. For example, in the development of wireless headphones several objectives need to be considered, such as cost, battery life, and esthetics. To increase battery life, a bigger battery is required, while to reduce cost, the battery size needs to be reduced. In such cases, a bargain must be made among objectives to obtain optimal decisions. In short, designers need a set of optimal solutions that can satisfy the conflicting objectives, also known as the Pareto-optimal set [1]. Pareto optimality is defined over a set in which no individual can be made better off without making at least one other individual worse off [2]. Multi-objective optimization [3] (MOO), or Pareto optimization, is a necessary process to obtain trade-off solutions among various conflicting features. A widely used approach to performing MOO is to use metaheuristic algorithms, such as the

A. Amrit (B)
Rivian Automotive, Irvine, CA, USA
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
B. B. V. L. Deepak et al. (eds.), Intelligent Manufacturing Systems in Industry 4.0, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-99-1665-8_50
multi-objective evolutionary algorithm [4] (MOEA), genetic algorithms [5] (GAs), and particle swarm optimization [6] (PSO). A key advantage of these algorithms is that they can generate all the Pareto-optimal solutions in a single run. High-fidelity partial differential equation (PDE) simulations play a vital role in the design of complex multidisciplinary engineering systems due to the high nonlinearity of the physics governing those systems. Moreover, nonlinear couplings between disciplines, such as the fluid–structure interaction of a wing, may exist. Sometimes, with unconventional systems, it may not be possible to rely on prior designs. Hence, high-fidelity PDE simulations are essential in the design of such complex systems. However, the key challenges of using such models in MOO are: (i) computationally expensive (in terms of cost and time) simulations, (ii) large design-space dimensionality and parameter ranges, and (iii) the exhaustive number of function evaluations executed by off-the-shelf MOO algorithms. An optimization problem that combines (i) and (ii) becomes nearly impossible to solve using (iii). Therefore, structured methodologies are needed which can reduce the design space and processing time, while still maintaining an acceptable level of accuracy. Efficient and fast MOO techniques were recently developed in the area of aerodynamic shape design [7–10]. This paper is focused on checking the repeatability of the multi-objective optimization algorithm developed by Amrit et al. [11, 12]. The MOO technique utilized in this paper [11, 12] performs an initial design space reduction in which an approximate Pareto front is determined using a kriging interpolation surrogate based on low-fidelity models and a multi-objective evolutionary algorithm (MOEA). The final Pareto front is obtained using a limited number of high-fidelity samples and co-kriging interpolation surrogates.
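The Pareto-dominance test underlying MOEAs, GAs, and PSO-based MOO can be sketched as a non-dominated filter; the objective pairs below are toy values, not airfoil data.

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated subset, assuming all objectives are minimized."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some other point is <= in every objective
        # and strictly < in at least one
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(pts) if j != i
        )
        if not dominated:
            keep.append(i)
    return pts[keep]

# Toy (objective-1, objective-2) pairs
front = pareto_front([(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)])
print(front)  # (3.0, 4.0) is dominated by (2.0, 3.0) and is dropped
```

Real MOO algorithms avoid this O(n²) scan at scale, but the dominance relation they optimize is exactly the one coded here.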
2 Optimization Algorithm

As described earlier, it is difficult to utilize accurate but expensive high-fidelity models in the optimization of complex physics-based systems, and it becomes even more challenging when utilizing them within MOO. For computational efficiency, the MOO procedure proposed in [11, 12] has been utilized in this paper. The algorithm utilizes a surrogate model which is a combination of both the high-fidelity model f(x) and the low-fidelity model c(x). The low-fidelity model described here is a coarse discretization of the high-fidelity simulation that allows for a swift evaluation of the system, but with a loss of accuracy. Even though the design process is accelerated by utilizing the low-fidelity model for most of the optimization operations, the required accuracy of the Pareto front is achieved using a few high-fidelity model evaluations.
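One simple way to combine the two fidelities, loosely in the spirit of the output space mapping correction referenced later in the paper, is a multiplicative factor anchored at a baseline design. The 1-D analytic "models" below are stand-ins for the CFD responses, and the specific correction form is an assumption for illustration.

```python
import numpy as np

def f_high(x):
    # Expensive high-fidelity stand-in (fine-mesh CFD in the paper)
    return 1.05 * np.sin(x) + 0.02

def c_low(x):
    # Cheap low-fidelity stand-in (coarse-mesh CFD), with systematic bias
    return np.sin(x)

x0 = 0.8                      # baseline design
A = f_high(x0) / c_low(x0)    # multiplicative correction factor at x0

def s0(x):
    """Corrected surrogate: the low-fidelity model scaled to match f at x0."""
    return A * c_low(x)

# The surrogate agrees exactly with the expensive model at the baseline
assert np.isclose(s0(x0), f_high(x0))
```

Away from x0 the corrected model is typically closer to f than the raw low-fidelity model, which is what makes running most optimization operations on the cheap model defensible.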
3 Multi-objective Optimization of Transonic Airfoils The multi-objective optimization (MOO) technique described in Sect. 2 is demonstrated on an aerodynamic problem involving a transonic airfoil. This example is chosen to illustrate the use of the MOO algorithm, as well as to present a repeatability study of the algorithm results. For the numerical example, the problem statement, design space, computational models, and the results are described.
3.1 Problem Statement

Multi-objective design optimization of the RAE 2822 transonic airfoil shape is performed considering two objectives, i.e., the aerodynamic drag coefficient C_d.f(x) and the pitching moment coefficient C_m.f(x). The optimization problem is:

minimize C_d.f(x) and C_m.f(x),
subject to M_∞ = 0.734, C_l.f(x) = 0.824, and A(x) ≥ A_baseline,

where M_∞ and C_l.f(x) are the free-stream Mach number and the lift coefficient, respectively [9]. For a given design x, A(x) is the airfoil cross-sectional area non-dimensionalized with the chord squared, and A_baseline is a baseline reference value.
3.2 Design Space

B-spline curves are used to parameterize each surface of the airfoil. The parametric form of the airfoil surfaces is written as [13]

x(t) = \sum_{i=1}^{n+1} X_i N_{i,k}(t), \qquad z(t) = \sum_{i=1}^{n+1} Z_i N_{i,k}(t).    (1)
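Equation (1) can be evaluated with a standard B-spline routine. In this sketch the clamped uniform knot vector, the fixed end control points, and the cubic degree are assumptions (the paper defers those details to [9]), and the control coordinates only loosely echo the baseline design vector given in this section, so this is illustrative rather than the exact parameterization.

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3  # cubic basis functions N_{i,k}
# Control-point coordinates: fixed endpoints plus interior (designable) points
Z = np.array([0.0, 0.0175, 0.0498, 0.0688, 0.0406, 0.0])  # z-coordinates
X = np.array([0.0, 0.0,    0.15,   0.45,   0.8,    1.0])  # x-coordinates
n = len(Z)
# Clamped knot vector: len(t) must equal n + k + 1
t = np.concatenate([np.zeros(k), np.linspace(0.0, 1.0, n - k + 1), np.ones(k)])

x_curve = BSpline(t, X, k)   # x(t) = sum_i X_i N_{i,k}(t)
z_curve = BSpline(t, Z, k)   # z(t) = sum_i Z_i N_{i,k}(t)

ts = np.linspace(0.0, 1.0, 5)
print(np.round(x_curve(ts), 3), np.round(z_curve(ts), 3))
```

Because the knot vector is clamped, the curve interpolates the first and last control points (leading and trailing edge), while the interior points act as the design variables.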
The details of each term can be found in [9]. The control points shown in Fig. 1a are the design variables, which can move only in the vertical direction. A total of eight design variables is utilized in the optimization problem, four for each of the top and bottom surfaces. The RAE 2822 airfoil is used as the starting point and as the baseline shape. The corresponding design variable vector is set to x0 = [0.0175 0.04975 0.0688 0.0406; −0.0291 −0.0679 −0.03842 0.0054]T. The positions of the design
Fig. 1 Transonic airfoil design problem setup: a solution domain, and b the airfoil shape parameterization setup
variables (control points) are set as X = [0.0 0.15 0.45 0.8; 0.0 0.35 0.6 0.9]T as shown in Fig. 1a. The design space boundary is given by: l = (1 − sign(x0 ) · 0.15) o x0 and u = (1 + sign(x0 ) · 0.15) o x0 , where o denotes component-wise multiplication. The cross-sectional area of the baseline airfoil shape is ARAE2822 = 0.0779.
3.3 Training Points A set of design points is generated using Latin hypercube sampling (LHS) technique to train the kriging model that will be utilized in the optimization process. An initial base set of 100 designs is used in the reduced design space (this number of design points was set based on mean square error which was below 1%).
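The bound construction and sampling plan can be sketched as follows, with SciPy's quasi-Monte Carlo module (SciPy ≥ 1.7) assumed as a stand-in for whatever LHS implementation the authors used.

```python
import numpy as np
from scipy.stats import qmc

# Baseline design vector x0 from Sect. 3.2
x0 = np.array([0.0175, 0.04975, 0.0688, 0.0406,
               -0.0291, -0.0679, -0.03842, 0.0054])

# Sign-aware ±15% bounds, as defined in the design-space description
l = (1 - np.sign(x0) * 0.15) * x0
u = (1 + np.sign(x0) * 0.15) * x0

# 100 Latin-hypercube training points scaled into [l, u]
sampler = qmc.LatinHypercube(d=len(x0), seed=0)
samples = qmc.scale(sampler.random(n=100), l, u)   # shape (100, 8)

assert samples.shape == (100, 8)
assert np.all(samples >= l) and np.all(samples <= u)
```

Note that the sign-aware formula keeps l < u component-wise even for the negative lower-surface coordinates, which plain ±15% offsets would not.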
3.4 Computational Fluid Dynamics Modeling

SU2, an unstructured mesh-based fluid dynamics software package, is utilized to perform the flow simulations [14]. The software utilizes an implicit density-based formulation to solve the steady compressible Euler equations. A second-order Jameson-Schmidt-Turkel (JST) scheme [15] resolves the convective fluxes. A steady-state solution is obtained using asymptotic convergence. Further details on the convergence criterion for the solver can be found in [9]. Pointwise is used to generate an O-type computational mesh. Figure 1b shows the computational domain. The distance between the farfield boundary and the airfoil surface is about 55 chord lengths. A grid convergence study revealed that a 512 × 512 mesh can be used as the high-fidelity model, as it converges within 0.2 drag counts
Repeatability Analysis on Multi-fidelity and Surrogate Model Based …
597
when compared with the next mesh. A 64 × 64 mesh is selected as the fast low-fidelity model. While a flow simulation with the former takes around 30 min, one with the latter takes around 1 min. Fixed lift is obtained by varying the angle of attack internally within the flow solver.
3.5 Multi-objective Optimization Results

A surrogate model s0 is constructed using output space mapping (OSM) [11] and the training samples. The baseline airfoil is used to obtain its corresponding low- and high-fidelity responses to construct the OSM correction factors [11, 12]. The surrogate s0 is then used to perform design space reduction within the lower and upper bounds [11, 12]. In particular, the two approximate extreme ends of the Pareto front are obtained using two single-objective optimization runs executed with s0. The two single-objective optimization runs are:

x_c^*(1) = \arg\min_{l \le x \le u} C_{d.c}(x),    (2)

s.t. M_\infty = 0.734, C_l = 0.824, A \ge A_{baseline},    (3)

and

x_c^*(2) = \arg\min_{l \le x \le u} C_{m.c}(x),    (4)

s.t. M_\infty = 0.734, C_l = 0.824, A \ge A_{baseline}.    (5)
The search utilized around 150 function evaluations to perform both optimization runs. The optimum solutions are: xc*(1) = [0.0167, 0.0456, 0.0642, 0.0513, −0.0236, −0.0627, −0.0496, 0.0100], with Cd*(1) = 0.0011 and Cm*(1) = 0.1559, and xc*(2) = [0.0408, 0.0297, 0.0567, 0.0340, −0.0134, −0.0538, −0.0892, 0.0042], with Cd*(2) = 0.0454 and Cm*(2) = 0.000. The boundary of the reduced design space is given by l* = min(xc*(1), xc*(2)) and u* = max(xc*(1), xc*(2)). The final reduced design space is given by: l* = [0.0167 0.0297 0.0567 0.0340 −0.0236 −0.0628 −0.0892 0.0042] and u* = [0.0408 0.0456 0.0642 0.0513 −0.0134 −0.0539 −0.0496 0.0100]. Next, 100 training points sampled using LHS are utilized to construct a kriging model sKR using the low-fidelity model only. The MOO algorithm [11, 12] is executed using the kriging model to obtain an initial Pareto front. Next, four high-fidelity samples selected along the predicted Pareto front are utilized to generate a co-kriging model. The MOEA utilizes the co-kriging model to refine the Pareto front and converges within three iterations, as shown in Fig. 2a. Figure 2b shows the initial Pareto front obtained using the low-fidelity kriging model and the final Pareto front obtained using the co-kriging model. The cost of the two single-objective optimizations, run in parallel on 32 processors, is about 4.35 h. Evaluation of 100
Fig. 2 Results of transonic airfoil shape multi-objective optimization: a Pareto fronts at several iterations, b initial and final Pareto fronts and single-objective optimization points
training points on the low-fidelity model takes around 1.6 h. The final Pareto front is obtained in three iterations, and in each iteration four high-fidelity verification samples are used, which amounts to a cost of approximately 6 h. In short, the total CPU time for the MOO procedure described here is around 12 h.
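The design-space reduction step reduces to a component-wise min/max of the two single-objective optima; a minimal sketch using the optimum vectors reported above (the paper's printed l*/u* differ from the raw min/max in the last rounded digit for a couple of components):

```python
import numpy as np

# Single-objective optima xc*(1) and xc*(2) as reported in Sect. 3.5
xc1 = np.array([0.0167, 0.0456, 0.0642, 0.0513,
                -0.0236, -0.0627, -0.0496, 0.0100])
xc2 = np.array([0.0408, 0.0297, 0.0567, 0.0340,
                -0.0134, -0.0538, -0.0892, 0.0042])

l_star = np.minimum(xc1, xc2)   # reduced lower bound l*
u_star = np.maximum(xc1, xc2)   # reduced upper bound u*
print("l* =", l_star)
print("u* =", u_star)
```

The surrogate-based MOEA then searches only this box, which is far smaller than the original ±15% design space.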
3.6 Repeatability

The MOO method successfully yields the Pareto front representation in just four iterations. However, since the procedure involves the execution of an MOEA, it is necessary to verify that the same Pareto front is obtained every time the MOO procedure is repeated. To test repeatability, the MOO algorithm is executed several times with the same objectives and constraints, but with different initial training sample sets. The MOO algorithm is executed on five different sets of 100 training samples, collected using LHS. The test is performed on both the initial Pareto front and the final Pareto front. Four pitching moment coefficient values are selected along the initial and the final fronts, and the corresponding designs are collected from all five Pareto fronts, as shown in Fig. 3. The designs selected from the initial Pareto front are evaluated on the low-fidelity model, as this front is based on a kriging model constructed using low-fidelity model evaluations (Fig. 3a). Similarly, designs are collected from the final Pareto set and are evaluated on the high-fidelity model, as the final front is obtained by refining the initial Pareto front with samples evaluated on the high-fidelity model (Fig. 3b). The variance of the drag coefficient values of the designs collected from both sets of Pareto fronts is found to be very low, which indicates that the MOO algorithm is repeatable.
Repeatability Analysis on Multi-fidelity and Surrogate Model Based …
Fig. 3 Repeatability analysis of the multi-objective optimization results: a initial Pareto fronts obtained from 5 different kriging models, and b the final Pareto front obtained from 5 different co-kriging models
4 Conclusion

The paper presented an efficient approach to aerodynamic multi-objective optimization (MOO) using computationally expensive simulations. The approach was demonstrated on a transonic airfoil with two objectives and a small design space, and it obtained results comparable to a benchmark method. The MOO approach yields the entire Pareto front using many fast low-fidelity model evaluations and only a limited number of high-fidelity model evaluations. Owing to the good accuracy of the kriging and co-kriging surrogate models within the reduced design space, the results are repeatable when using multi-objective evolutionary algorithms. Future work will investigate the scalability of the MOO algorithm to problems with larger dimensionality and more objective functions.
References

1. Fonseca C (1995) Multiobjective genetic algorithms with applications to control engineering problems. PhD thesis, Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, UK
2. Hwang C, Masud A (1979) Multiple objective decision making—methods and applications: a state-of-the-art survey. Springer, Berlin, Germany
3. Miettinen K (1999) Nonlinear multiobjective optimization. Kluwer Academic Publishers, Boston, MA
4. Zitzler E (1999) Evolutionary algorithms for multiobjective optimization: methods and applications. PhD thesis, Institut für Technische Informatik und Kommunikationsnetze, Computer Engineering and Networks Laboratory, ETH Zurich, Switzerland
5. Holland J (1975) Adaptation in natural and artificial systems. The University of Michigan Press, Ann Arbor, Michigan
6. Eberhart R, Shi Y (1998) Comparison between genetic algorithms and particle swarm optimization. In: Evolutionary programming VII. Springer, Berlin, Germany, pp 611–616
7. Amrit A, Leifsson L (2019) Applications of surrogate-assisted and multi-fidelity multi-objective optimization algorithms to simulation-based aerodynamic design. Eng Comput 37(2):430–457. https://doi.org/10.1108/EC-12-2018-0553
8. Amrit A, Leifsson L (2020) Fast multi-objective aerodynamic optimization using sequential domain patching and multi-fidelity models. J Aircr 57(3):388–398. https://doi.org/10.2514/1.C035500
9. Ren J, Thelen AS, Amrit A, Du X, Leifsson LT, Tesfahunegn Y, Koziel S (2016) Application of multifidelity optimization techniques to benchmark aerodynamic design problems. In: 54th AIAA aerospace sciences meeting, January 2016
10. Amrit A, Leifsson L, Koziel S (2018) Aerodynamic design exploration through point-by-point Pareto set identification using local surrogate models. In: AIAA/ASCE/AHS/ASC structures, structural dynamics, and materials conference, Kissimmee, Florida
11. Amrit A, Leifsson L, Koziel S (2017) Design strategies for multi-objective optimization of aerodynamic surfaces. Eng Comput 34(5):1724–1753. https://doi.org/10.1108/EC-07-2016-0239
12. Amrit A, Leifsson LT, Koziel S, Tesfahunegn YA (2016) Efficient multiobjective aerodynamic optimization by design space dimension reduction and cokriging. In: 17th AIAA/ISSMO multidisciplinary analysis and optimization conference, AIAA AVIATION Forum (AIAA 2016-3515)
13. Farin G (1993) Curves and surfaces for computer aided geometric design. Academic Press, Boston, MA
14. Palacios F, Colonno M, Aranake A, Campos A, Copeland S, Economon T, Lonkar A, Lukaczyk T, Taylor TR, Alonso J (2013) Stanford University unstructured (SU2): an open-source integrated computational environment for multi-physics simulation and design. In: AIAA Paper 2013-0287, 51st AIAA aerospace sciences meeting and exhibit, Grapevine, Texas, USA
15. Jameson A, Schmidt W, Turkel E (1981) Numerical solution of the Euler equations by finite volume methods using Runge–Kutta time-stepping schemes. In: AIAA 1981-1259, AIAA 14th fluid and plasma dynamics conference, Palo Alto, CA, June 23–25
A New Ranking Approach for Pentagonal Intuitionistic Fuzzy Transportation Problems Indira Siguluri, N. Ravishankar, and Ch. Uma Swetha
Abstract Due to various unpredictable variables, researchers face multiple states of uncertainty and hesitation while solving real-world transportation problems. Several authors have proposed intuitionistic fuzzy representations of the data to deal with ambiguity and hesitation. Hence, in this article, we investigate a transportation problem with uncertainty and hesitation in supply, demand, and cost. We propose a novel ranking technique that accounts for the areas of both the membership and non-membership components of the fuzzy number, and we express the problem using generalized pentagonal intuitionistic fuzzy numbers. The membership and non-membership regions of the fuzzy number are each divided into three plane figures, and the centroid of the centroids of these figures is computed. A ranking index is then defined. Moreover, a mathematical formulation is used to demonstrate the efficiency of the proposed strategy.

Keywords IFS · IFN · PIFN · GPIFN · Pentagonal IFTP · Optimum solution
1 Introduction

In many real-life circumstances, there is a requirement to ship products from several origins to different destinations. The goal of the decision-makers (DM) is to determine how many goods must be dispatched from which origin to which destination, given that all supply points are fully utilized and all requirement points are fully satisfied, so that the overall transportation cost is minimized (for a minimization problem) or the total transportation profit is maximized
I. Siguluri (B) Vignan's Institute of Information Technology (A), Visakhapatnam, India e-mail: [email protected] N. Ravishankar Gitam School of Science, Visakhapatnam, India Ch. U. Swetha Anil Neerukonda Institute of Technology & Sciences, Visakhapatnam, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 B. B. V. L. Deepak et al. (eds.), Intelligent Manufacturing Systems in Industry 4.0, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-99-1665-8_51
for an optimum solution. Furthermore, none of the conventional methods can be employed when all or some of the parameters are uncertain. In order to handle ambiguous data in decision-making, Zadeh [10] devised the fuzzy set theory, which has been successfully applied in a number of disciplines. After this ground-breaking research, the application of fuzzy set theory to decision-making was investigated [2]. A fuzzy set is concerned with an element's degree of membership (belongingness) in the set. In a crisp set, by contrast, an element that belongs to the set is denoted by 1 and an element that does not belong is denoted by 0. In a fuzzy set, the membership value (degree of acceptance or level of fulfillment) ranges from 0 to 1. Atanassov's [1] concept of intuitionistic fuzzy sets (IFSs) is most beneficial when dealing with a large number of exceptions, confusion, and ambiguity. IFSs distinguish between an element's degree of membership (MF) and degree of non-membership (NMF). IFSs assist decision-makers in determining the extent of approval and non-approval of the transportation cost (TC) in any transportation problem (TP), as well as the intensity of fulfillment, non-fulfillment, and ambiguity for consignments. Consequently, when it comes to solving decision-making challenges, IFSs have emerged as a highly suitable tool. In view of all this, using IFSs rather than fuzzy sets is preferable when dealing with issues involving imprecise or hesitant information. A similar study on TPs in a fuzzy environment can be found in [4, 6]. IFSs are thus employed by numerous authors for a variety of optimization problems. Pentagonal fuzzy numbers, their properties, and their use in fuzzy equations were developed in [7, 8]. A centroidal approach to ranking generalized pentagonal intuitionistic fuzzy numbers was proposed in [9].
Charles Rabinson and Chandrasekaran [3] used a ranking algorithm with an ATM to solve a pentagonal fuzzy transportation problem. An ambiguity index was utilized in [5] to solve an unbalanced transportation problem with pentagonal intuitionistic fuzzy numbers. The remainder of the article is organized as follows. Section 2 presents some basic terminology. The GPIFN ranking is presented in Sect. 3. The concept of the pentagonal IFTP and its mathematical formulation are presented in Sect. 4. The proposed solution strategy is described in Sect. 5. Section 6 contains a numerical example, a comparison of the proposed technique with existing methods, and a discussion. Finally, the article is concluded in Sect. 7.
2 Preliminaries

The basic definitions used in this article are presented in this section [1, 5].

Fuzzy Set: Let μ_Ã(x) be a function from a classical set Ã to [0, 1]. The membership function μ_Ã(x) of a fuzzy set Ã is defined as

Ã = {(x, μ_Ã(x)) : x ∈ Ã and μ_Ã(x) ∈ [0, 1]}
Fuzzy Number: A fuzzy number Ã is an extension of a regular number in that it refers to a connected set of possible values rather than a single value, with each possible value having its own weight between 0 and 1. The weight (membership function), denoted μ_Ã(x), meets the following criteria:

1. μ_Ã(x) is piecewise continuous.
2. μ_Ã(x) is a fuzzy convex subset.
3. μ_Ã(x) is a normal fuzzy subset, meaning that the membership value must equal 1 for at least one element x0, i.e., μ_Ã(x0) = 1.

Pentagonal Fuzzy Number: If Ã = (a1, a2, a3, a4, a5), its pentagonal membership function is given by (Fig. 1)

μ_Ã(x) =
  (x − a1)/(a2 − a1),  a1 ≤ x ≤ a2
  (x − a2)/(a3 − a2),  a2 ≤ x ≤ a3
  1,                   x = a3
  (x − a4)/(a3 − a4),  a3 ≤ x ≤ a4
  (x − a5)/(a4 − a5),  a4 ≤ x ≤ a5
  0,                   otherwise

Intuitionistic Fuzzy Set (IFS): An IFS Ã^IFS in X has the structure

Ã^IFS = {(x, μ_Ã^IFS(x), ν_Ã^IFS(x)) : x ∈ X},

where the functions μ_Ã^IFS(x) : X → [0, 1] and ν_Ã^IFS(x) : X → [0, 1] indicate the degree of membership and non-membership of an element, respectively, and 0 ≤ μ_Ã^IFS(x) + ν_Ã^IFS(x) ≤ 1 for each x ∈ X.

Intuitionistic Fuzzy Number (IFN): An IFS Ã^IFS = {(x, μ_Ã^IFS(x), ν_Ã^IFS(x)) : x ∈ X} of the real line R is termed an IFN if the following hold: there exists m ∈ R such that μ_Ã^IFS(m) = 1 and ν_Ã^IFS(m) = 0; μ_Ã^IFS : R → [0, 1] is continuous; and for every x ∈ R, 0 ≤ μ_Ã^IFS(x) + ν_Ã^IFS(x) ≤ 1 holds.

Fig. 1 Pentagonal fuzzy number
The membership and non-membership functions of Ã^IFS are as follows:

μ_Ã(x) =
  f1(x),  x ∈ [m − α1, m)
  1,      x = m
  h1(x),  x ∈ (m, m + β1]
  0,      otherwise

ν_Ã(x) =
  1,      x ∈ (−∞, m − α2)
  f2(x),  x ∈ [m − α2, m)
  0,      x = m
  h2(x),  x ∈ (m, m + β2]
  1,      x ∈ [m + β2, ∞)

where fi(x) and hi(x), i = 1, 2, are strictly increasing and strictly decreasing functions on [m − αi, m) and (m, m + βi], respectively, and αi and βi are the spreads of the lower and upper parts of μ_Ã(x) and ν_Ã(x), respectively.

Pentagonal Intuitionistic Fuzzy Number (PIFN): An IFN Ã^PIFN = ((a1, a2, a3, a4, a5)(a1', a2', a3', a4', a5')) with a1 < a2 < a3 < a4 < a5 and a1' < a2' < a3' < a4' < a5' is termed a PIFN if its membership and non-membership functions are, respectively (Fig. 2),

μ_Ã^PIFN(x) =
  0,                     x < a1
  (x − a1)/(a2 − a1),    a1 ≤ x < a2
  (x − a2)/(a3 − a2),    a2 ≤ x < a3
  1,                     x = a3
  (x − a4)/(a3 − a4),    a3 < x ≤ a4
  (x − a5)/(a4 − a5),    a4 < x ≤ a5
  0,                     x > a5

ν_Ã^PIFN(x) =
  1,                       x < a1'
  (x − a2')/(a1' − a2'),   a1' ≤ x < a2'
  (x − a3')/(a2' − a3'),   a2' ≤ x < a3'
  0,                       x = a3'
  (x − a3')/(a4' − a3'),   a3' < x ≤ a4'
  (x − a4')/(a5' − a4'),   a4' < x ≤ a5'
  1,                       x > a5'
Arithmetic operations: For any two PIFNs Ã^PIFN = ((a1, a2, a3, a4, a5)(a1', a2', a3', a4', a5')) and B̃^PIFN =
Fig. 2 PIFN graph
((b1, b2, b3, b4, b5)(b1', b2', b3', b4', b5')), the arithmetic operations are as follows:

PIFN Addition:
Ã^PIFN ⊕ B̃^PIFN = ((a1 + b1, a2 + b2, a3 + b3, a4 + b4, a5 + b5)(a1' + b1', a2' + b2', a3' + b3', a4' + b4', a5' + b5'))

PIFN Subtraction:
Ã^PIFN − B̃^PIFN = ((a1 − b5, a2 − b4, a3 − b3, a4 − b2, a5 − b1)(a1' − b5', a2' − b4', a3' − b3', a4' − b2', a5' − b1'))

PIFN Multiplication:
Ã^PIFN ⊗ B̃^PIFN = ((a1b1, a2b2, a3b3, a4b4, a5b5)(a1'b1', a2'b2', a3'b3', a4'b4', a5'b5'))

PIFN Scalar multiplication:
k × Ã^PIFN = ((ka1, ka2, ka3, ka4, ka5)(ka1', ka2', ka3', ka4', ka5')) for k ≥ 0, and
k × Ã^PIFN = ((ka5, ka4, ka3, ka2, ka1)(ka5', ka4', ka3', ka2', ka1')) for k < 0
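The component-wise operations above are straightforward to mechanize. The sketch below is an illustration only (the pair-of-5-tuples representation and the function names are assumptions of this sketch, not notation from the paper):

```python
def pifn_add(a, b):
    """PIFN addition: component-wise sums of the membership and
    non-membership 5-tuples."""
    (am, an), (bm, bn) = a, b
    return (tuple(x + y for x, y in zip(am, bm)),
            tuple(x + y for x, y in zip(an, bn)))

def pifn_subtract(a, b):
    """PIFN subtraction: B's components are taken in reversed order,
    i.e. a1 - b5, a2 - b4, ..., a5 - b1."""
    (am, an), (bm, bn) = a, b
    return (tuple(x - y for x, y in zip(am, reversed(bm))),
            tuple(x - y for x, y in zip(an, reversed(bn))))
```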
Generalized Pentagonal IFN (GPIFN): An IFN Ã^GPIFN = ((a1, a2, a3, a4, a5)(a1', a2', a3', a4', a5'); ωa, ωb) is termed a GPIFN if its membership and non-membership functions are defined as follows (Fig. 3):

μ_Ã^GPIFN(x) =
  0,                                 x < a1
  (ωa/2)(x − a1)/(a2 − a1),          a1 ≤ x < a2
  ωa/2 + (ωa/2)(x − a2)/(a3 − a2),   a2 ≤ x < a3
  ωa,                                x = a3
  ωa − (ωa/2)(x − a3)/(a4 − a3),     a3 < x ≤ a4
  (ωa/2)(x − a5)/(a4 − a5),          a4 < x ≤ a5
  0,                                 x > a5
Fig. 3 Generalized PIFN graph
ν_Ã^GPIFN(x) =
  ωb,                                   x < a1'
  ωb/2 + (ωb/2)(x − a2')/(a1' − a2'),   a1' ≤ x < a2'
  (ωb/2)(x − a3')/(a2' − a3'),          a2' ≤ x < a3'
  0,                                    x = a3'
  (ωb/2)(x − a3')/(a4' − a3'),          a3' < x ≤ a4'
  ωb/2 + (ωb/2)(x − a4')/(a5' − a4'),   a4' < x ≤ a5'
  ωb,                                   x > a5'

where a1' < a2' < a3' < a4' < a5'.
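A minimal sketch of the GPIFN membership function in continuous form (the function name and the open/closed boundary conventions at the break points are choices of this illustration, not prescribed by the paper):

```python
def gpifn_membership(x, a, omega_a):
    """Membership grade of x for a GPIFN with membership parameters
    a = (a1, a2, a3, a4, a5) and maximum membership height omega_a."""
    a1, a2, a3, a4, a5 = a
    h = omega_a / 2.0
    if x <= a1 or x >= a5:
        return 0.0
    if x < a2:                           # rising lower edge: 0 -> omega_a/2
        return h * (x - a1) / (a2 - a1)
    if x < a3:                           # rising upper edge: omega_a/2 -> omega_a
        return h + h * (x - a2) / (a3 - a2)
    if x == a3:                          # apex of the pentagon
        return omega_a
    if x <= a4:                          # falling upper edge: omega_a -> omega_a/2
        return omega_a - h * (x - a3) / (a4 - a3)
    return h * (x - a5) / (a4 - a5)      # falling lower edge: omega_a/2 -> 0
```

At the apex x = a3 the grade equals ωa, and at the break points a2 and a4 it equals ωa/2, matching the pentagonal shape of Fig. 3.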
GPIFN Arithmetic operations: For any two GPIFNs Ã^GPIFN = ((a1, a2, a3, a4, a5)(a1', a2', a3', a4', a5'); ωa) and B̃^GPIFN = ((b1, b2, b3, b4, b5)(b1', b2', b3', b4', b5'); ωb), the arithmetic operations are as follows:

GPIFN Addition:
Ã^GPIFN ⊕ B̃^GPIFN = ((a1 + b1, a2 + b2, a3 + b3, a4 + b4, a5 + b5); min(ωa, ωb))((a1' + b1', a2' + b2', a3' + b3', a4' + b4', a5' + b5'); max(ωa, ωb))
GPIFN Subtraction:
Ã^GPIFN − B̃^GPIFN = ((a1 − b5, a2 − b4, a3 − b3, a4 − b2, a5 − b1); min(ωa, ωb))((a1' − b5', a2' − b4', a3' − b3', a4' − b2', a5' − b1'); max(ωa, ωb))

GPIFN Multiplication:
Ã^GPIFN ⊗ B̃^GPIFN = ((a1b1, a2b2, a3b3, a4b4, a5b5); min(ωa, ωb))((a1'b1', a2'b2', a3'b3', a4'b4', a5'b5'); max(ωa, ωb))
GPIFN Scalar multiplication:
k × Ã^GPIFN = ((ka1, ka2, ka3, ka4, ka5); ωa)((ka1', ka2', ka3', ka4', ka5'); ωb) if k ≥ 0, and
k × Ã^GPIFN = ((ka5, ka4, ka3, ka2, ka1); ωa)((ka5', ka4', ka3', ka2', ka1'); ωb) if k < 0
3 The Proposed Ranking Approach for GPIFNs

Consider a GPIFN Ã^GPIFN = ((a1, a2, a3, a4, a5)(a1', a2', a3', a4', a5'); ωa, ωb). The balance point of a pentagon is regarded as its centre. Divide the pentagon of the membership part into three plane figures: a triangle AFB, a pentagon BFGDC, and a triangle DEG, respectively. Let G1, G2, and G3 be the centroids of these three plane figures. The reference point for establishing the ranking of generalized pentagonal intuitionistic fuzzy numbers is the centroid of these centroids G1, G2, and G3. The balance point of each of the three plane figures is its centroid, and for a GPIFN the centroid of these centroids is a far more appropriate balance point (Fig. 4).
The centroids of these plane figures are

G1 = ((a1 + 2a2)/3, ωa/6), G2 = ((2a2 + a3 + 2a4)/5, ωa/5), and G3 = ((2a4 + a5)/3, ωa/6), respectively.

The centroid of G1, G2, and G3 is

G = (((a1 + 2a2)/3 + (2a2 + a3 + 2a4)/5 + (2a4 + a5)/3)/3, (ωa/6 + ωa/5 + ωa/6)/3)
  = ((5a1 + 16a2 + 3a3 + 16a4 + 5a5)/45, 8ωa/45)

Similarly, the pentagon of the non-membership function is split into three plane figures, and the centroid of these figures and the centroid of those centroids are evaluated in the same way. The first of these plane figures has centroid

G1' = ((b1 + 2b2)/3, (ωb + ωb + ωb/2)/3) = ((b1 + 2b2)/3, 5ωb/6),
Fig. 4 Graphical representation of the centroid of centroids of PIFNs
G2' = ((2b2 + b3 + 2b4)/5, (ωb + ωb + ωb + ωb/2 + ωb/2)/5) = ((2b2 + b3 + 2b4)/5, 4ωb/5), and
G3' = ((2b4 + b5)/3, (ωb + ωb + ωb/2)/3) = ((2b4 + b5)/3, 5ωb/6), respectively.
The centroid of G1', G2', and G3' is

G' = (((b1 + 2b2)/3 + (2b2 + b3 + 2b4)/5 + (2b4 + b5)/3)/3, (5ωb/6 + 4ωb/5 + 5ωb/6)/3)
   = ((5b1 + 16b2 + 3b3 + 16b4 + 5b5)/45, 37ωb/45)

The mean of the centroids of centroids of the GPIFN, exploiting the membership and non-membership functions, is

(x̄_μν, ȳ_μν) = ((5(a1 + b1 + a5 + b5) + 16(a2 + b2 + a4 + b4) + 3(a3 + b3))/90, (8ωa + 37ωb)/90)
Now, the ranking function of a pentagonal intuitionistic fuzzy number is stipulated as

R(Ã^PIFN) = ((5(a1 + b1 + a5 + b5) + 16(a2 + b2 + a4 + b4) + 3(a3 + b3))/90) × ((8ωa + 37ωb)/90)
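Reading the formula above as the product of the two coordinates of the mean centroid, the ranking index can be sketched as follows (function and variable names are illustrative):

```python
def ranking_index(a, b, wa, wb):
    """Ranking index R of a GPIFN: a = membership parameters (a1..a5),
    b = non-membership parameters (b1..b5), wa and wb their heights."""
    a1, a2, a3, a4, a5 = a
    b1, b2, b3, b4, b5 = b
    x_bar = (5 * (a1 + b1 + a5 + b5)
             + 16 * (a2 + b2 + a4 + b4)
             + 3 * (a3 + b3)) / 90
    y_bar = (8 * wa + 37 * wb) / 90
    return x_bar * y_bar
```

For example, the cost ((3, 5, 7, 9, 11)(3, 5, 7, 10, 12); 0.6, 0.2) appearing in the numerical example of Sect. 6 has R ≈ 0.9805.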
Arithmetic operations of GPIFNs: For any two GPIFNs Ã^PIFN = ((a1, a2, a3, a4, a5)(a1', a2', a3', a4', a5'); ωa) and B̃^PIFN = ((b1, b2, b3, b4, b5)(b1', b2', b3', b4', b5'); ωb), the arithmetic operations are as follows:

GPIFN Addition: Ã^PIFN ⊕ B̃^PIFN = ((a1 + b1, a2 + b2, a3 + b3, a4 + b4, a5 + b5); min(ωa, ωb))((a1' + b1', a2' + b2', a3' + b3', a4' + b4', a5' + b5'); max(ωa, ωb))

GPIFN Subtraction: Ã^PIFN − B̃^PIFN = ((a1 − b5, a2 − b4, a3 − b3, a4 − b2, a5 − b1); min(ωa, ωb))((a1' − b5', a2' − b4', a3' − b3', a4' − b2', a5' − b1'); max(ωa, ωb))

GPIFN Multiplication: Ã^PIFN ⊗ B̃^PIFN = ((a1b1, a2b2, a3b3, a4b4, a5b5); min(ωa, ωb))((a1'b1', a2'b2', a3'b3', a4'b4', a5'b5'); max(ωa, ωb))

GPIFN Scalar multiplication:
k × Ã^PIFN = ((ka1, ka2, ka3, ka4, ka5); ωa)((ka1', ka2', ka3', ka4', ka5'); ωb) for k ≥ 0, and
k × Ã^PIFN = ((ka5, ka4, ka3, ka2, ka1); ωa)((ka5', ka4', ka3', ka2', ka1'); ωb) for k < 0
4 Comparison of PIFNs

To compare PIFNs, we must rank them. A ranking function R : F(R) → R is defined that maps every PIFN to a real number, where F(R) denotes the set of all PIFNs. PIFNs can then be compared using the ranking function. Let Ã^PIFN = ((a1, a2, a3, a4, a5)(a1', a2', a3', a4', a5'); ωa) and B̃^PIFN = ((b1, b2, b3, b4, b5)(b1', b2', b3', b4', b5'); ωb) be two PIFNs; then
R(Ã^PIFN) = (5(a1 + a5 + a1' + a5') + 16(a2 + a4 + a2' + a4') + 3(a3 + a3'))/90 and
R(B̃^PIFN) = (5(b1 + b5 + b1' + b5') + 16(b2 + b4 + b2' + b4') + 3(b3 + b3'))/90,

and the orders are defined as follows:

(i) Ã^PIFN > B̃^PIFN if R(Ã^PIFN) > R(B̃^PIFN),
(ii) Ã^PIFN < B̃^PIFN if R(Ã^PIFN) < R(B̃^PIFN), and
(iii) Ã^PIFN = B̃^PIFN if R(Ã^PIFN) = R(B̃^PIFN).

The ranking function R also has the following properties:

(i) R(Ã^PIFN) + R(B̃^PIFN) = R(Ã^PIFN + B̃^PIFN)
(ii) R(k Ã^PIFN) = k R(Ã^PIFN) for all k ∈ R
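As an illustration of this comparison rule (the variable names are ours), the formula above can be used to order two of the transportation costs that appear later in Table 1:

```python
def rank_pifn(m, n):
    """Crisp rank of a PIFN: m = membership tuple (a1..a5),
    n = non-membership tuple (a1'..a5'), per the Sect. 4 formula."""
    return (5 * (m[0] + m[4] + n[0] + n[4])
            + 16 * (m[1] + m[3] + n[1] + n[3])
            + 3 * (m[2] + n[2])) / 90

# Two of the transportation costs appearing later in Table 1:
cost_s1d3 = ((3, 5, 7, 9, 11), (3, 5, 7, 10, 12))
cost_s1d1 = ((9, 10, 11, 13, 15), (7, 9, 11, 14, 17))
```

Since rank_pifn(*cost_s1d3) < rank_pifn(*cost_s1d1), the S1–D3 route is the cheaper of the two.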
4.1 Mathematical Formulation of the Pentagonal Intuitionistic Fuzzy Transportation Problem (PIFTP)

Consider a TP with m sources and n destinations. Let c_ij be the cost of transporting one unit of product from the ith source to the jth destination, let ã_i^PIFN = ((a1i, a2i, a3i, a4i, a5i)(a1i', a2i', a3i', a4i', a5i')) be the intuitionistic fuzzy availability at the ith source, let b̃_j^PIFN = ((b1j, b2j, b3j, b4j, b5j)(b1j', b2j', b3j', b4j', b5j')) be the intuitionistic fuzzy demand at the jth destination, and let x̃_ij^PIFN be the intuitionistic fuzzy quantity transported from the ith source to the jth destination. Then the balanced pentagonal IFTP is given by

Min Z̃^PIFN = Σ(i=1..m) Σ(j=1..n) c_ij × x̃_ij^PIFN

subject to
Σ(j=1..n) x̃_ij^PIFN = ã_i^PIFN, i = 1, 2, ..., m
Σ(i=1..m) x̃_ij^PIFN = b̃_j^PIFN, j = 1, 2, ..., n
x̃_ij^PIFN ≥ 0̃, i = 1, 2, ..., m; j = 1, 2, ..., n

The TP is termed a pentagonal intuitionistic fuzzy transportation problem when the availability and demand are real numbers and the costs are PIFNs. To obtain an optimum solution, we use the following transportation strategy.
5 Transportation Strategy

5.1 Proposed Transportation Strategy

Stage 1: First check whether the given transportation problem is balanced. If it is not, convert it into a balanced problem by introducing a dummy destination (or source) with zero costs, since total demand is less than total supply (or vice versa); otherwise go to Stage 2.
Stage 2: For each row and each column, calculate the difference between the largest and smallest ranked costs (using the ranking index R). Write R for the sum of the row differences and C for the sum of the column differences. Choose the row or column with the greatest difference.
Stage 3: Select the cell with the lowest cost in the row or column identified in Stage 2.
Stage 4: Assign the maximum possible allocation to the cell selected in Stage 3. Remove the exhausted row or column.
Stage 5: Repeat Stages 2–4 until all allocations are finished.
Stage 6: An optimum solution has been achieved at Stage 5; thus the optimum solution is {x_ij} and the pentagonal intuitionistic fuzzy optimum value is Σ(i=1..m) Σ(j=1..n) c̃_ij^PIFN ⊗ x_ij.
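A compact sketch of Stages 2–5 applied to crisp (already ranked) costs; this is an illustration under stated assumptions (a balanced problem, ties broken by first occurrence), not the authors' implementation:

```python
def allocate(costs, supply, demand):
    """Greedy allocation per Sec. 5.1 on crisp ranked costs: repeatedly
    pick the row/column with the largest (max - min) cost difference,
    allocate as much as possible to its cheapest cell, and remove the
    exhausted row or column. Assumes a balanced problem."""
    supply, demand = list(supply), list(demand)
    rows, cols = set(range(len(supply))), set(range(len(demand)))
    alloc = {}
    while rows and cols:
        best = (-1.0, True, -1)  # (difference, is_row, line index)
        for i in rows:
            vals = [costs[i][j] for j in cols]
            if max(vals) - min(vals) > best[0]:
                best = (max(vals) - min(vals), True, i)
        for j in cols:
            vals = [costs[i][j] for i in rows]
            if max(vals) - min(vals) > best[0]:
                best = (max(vals) - min(vals), False, j)
        _, is_row, k = best
        i = k if is_row else min(rows, key=lambda r: costs[r][k])
        j = min(cols, key=lambda c: costs[i][c]) if is_row else k
        q = min(supply[i], demand[j])
        alloc[(i, j)] = alloc.get((i, j), 0) + q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            rows.discard(i)
        if demand[j] == 0:
            cols.discard(j)
    return alloc
```

Applied to the ranked costs of the numerical example in Sect. 6, the first two picks reproduce the allocations shown in Tables 4 and 6.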
6 Numerical Example

Consider a 3 × 4 pentagonal intuitionistic fuzzy transportation problem with the value and ambiguity index taken from [5] (Table 1).

Solution: Here, supply is not equal to demand, i.e., the given transportation problem is unbalanced. Using Stage 1 of the proposed method, it is made balanced, as given in Table 2. Select the maximum and minimum ranked PIFN in each row and column and take the difference, as given in Table 3. The problem in Table 3 is turned into Table 4 by utilizing Stages 3 and 4 of the proposed method 5.1 to assign the initial allocation. Using Stage 5 of 5.1, remove D5 from Table 4. The new reduced problem is shown in Table 5, and the procedure of 5.1 is applied again (Table 6). The remaining allotments are determined as described in Table 7, using Stage 6 of the 5.1 model.

Stage 7: Optimum solution and pentagonal intuitionistic fuzzy optimum value. The optimum solution, obtained in Stage 5, is x13 = 35, x14 = 15, x24 = 10, x25 = 30, x31 = 30, x32 = 25, x34 = 15. The intuitionistic fuzzy optimum value of the pentagonal intuitionistic fuzzy transportation problem given in Table 1 is 35 ⊗ ((3, 5, 7, 9, 11)(3, 5, 7, 10, 12); 0.6, 0.2) ⊕ 15
Table 1 Pentagonal intuitionistic fuzzy transportation problem

S1: D1 = ((9, 10, 11, 13, 15)(7, 9, 11, 14, 17); 0.6, 0.2), D2 = ((16, 18, 20, 22, 24)(14, 17, 20, 22, 24); 0.6, 0.2), D3 = ((3, 5, 7, 9, 11)(3, 5, 7, 10, 12); 0.6, 0.2), D4 = ((4, 6, 8, 10, 12)(2, 5, 8, 11, 13); 0.6, 0.2); Supply = 50
S2: D1 = ((15, 20, 21, 24, 27)(14, 19, 21, 25, 28); 0.6, 0.2), D2 = ((12, 14, 16, 18, 20)(10, 13, 16, 19, 21); 0.6, 0.2), D3 = ((16, 18, 20, 22, 24)(14, 17, 20, 23, 26); 0.6, 0.2), D4 = ((8, 10, 12, 14, 16)(9, 11, 12, 15, 18); 0.6, 0.2); Supply = 40
S3: D1 = ((4, 6, 8, 10, 12)(2, 5, 8, 10, 12); 0.6, 0.2), D2 = ((8, 10, 12, 13, 15)(7, 9, 12, 15, 18); 0.6, 0.8), D3 = ((14, 16, 18, 20, 22)(12, 15, 18, 21, 23); 0.6, 0.2), D4 = ((5, 7, 9, 11, 13)(3, 6, 9, 12, 13); 0.6, 0.2); Supply = 70
Demand: D1 = 30, D2 = 25, D3 = 35, D4 = 40
Table 2 Balanced pentagonal intuitionistic fuzzy transportation problem

The costs for D1–D4 are as in Table 1; a dummy destination D5 with zero cost for every source is introduced to absorb the excess supply.

S1: D5 = 0; Supply = 50
S2: D5 = 0; Supply = 40
S3: D5 = 0; Supply = 70
Demand: D1 = 30, D2 = 25, D3 = 35, D4 = 40, D5 = 30
Table 3 Row and column difference table

The costs are as in Table 2. Row differences: S1 = 2.6720, S2 = 2.9431, S3 = 2.4325 (R = 8.0476). Column differences: D1 = 1.8978, D2 = 1.7508, D3 = 1.7306, D4 = 0.6206, D5 = 0 (C = 5.9998).
Table 4 Initial allotment table

The costs and the row/column differences are as in Table 3. The greatest difference is the row difference of S2 (2.9431); the lowest-cost cell in that row is (S2, D5), so 30 units are allocated there. The supply of S2 reduces from 40 to 10 and the demand of D5 from 30 to 0.
Table 5 Updated reduction table

Column D5 is removed and the remaining supply of S2 is 10; the costs for D1–D4 are as in Table 2. The recomputed row differences are: S1 = 1.6915, S2 = 1.2456, S3 = 1.3872 (R = 4.3243). The column differences are: D1 = 1.8978, D2 = 1.7508, D3 = 1.7306, D4 = 0.6206 (C = 5.9998).
Table 6 Second allocation table

The costs and differences are as in Table 5. The greatest difference is the column difference of D1 (1.8978); the lowest-cost cell in that column is (S3, D1), so 30 units are allocated there, and the demand of D1 drops from 30 to 0.
Table 7 Final allocation table

Allocations, with the corresponding costs from Table 2 (unallocated cells carry the costs given in Table 2):
S1: D3 = ((3, 5, 7, 9, 11)(3, 5, 7, 10, 12); 0.6, 0.2) (35), D4 = ((4, 6, 8, 10, 12)(2, 5, 8, 11, 13); 0.6, 0.2) (15)
S2: D4 = ((8, 10, 12, 14, 16)(9, 11, 12, 15, 18); 0.6, 0.2) (10), D5 = 0 (30)
S3: D1 = ((4, 6, 8, 10, 12)(2, 5, 8, 10, 12); 0.6, 0.2) (30), D2 = ((8, 10, 12, 13, 15)(7, 9, 12, 15, 18); 0.6, 0.8) (25), D4 = ((5, 7, 9, 11, 13)(3, 6, 9, 12, 13); 0.6, 0.2) (15)
⊗ ((4, 6, 8, 10, 12)(2, 5, 8, 11, 13); 0.6, 0.2) ⊕ 10 ⊗ ((8, 10, 12, 14, 16)(9, 11, 12, 15, 18); 0.6, 0.2) ⊕ 30 ⊗ ((4, 6, 8, 10, 12)(2, 5, 8, 10, 12); 0.6, 0.2) ⊕ 25 ⊗ ((8, 10, 12, 13, 15)(7, 9, 12, 15, 18); 0.6, 0.8) ⊕ 15 ⊗ ((5, 7, 9, 11, 13)(3, 6, 9, 12, 13); 0.6, 0.2) = ((640, 1080, 1160, 1395, 1655)(505, 915, 1160, 1520, 1800); 0.6, 0.2)
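The aggregation above combines scalar multiplication with repeated PIFN addition. A sketch (the list-of-terms representation is an assumption of this illustration; the zero-cost dummy cell (S2, D5) is omitted since it contributes nothing):

```python
def scale(t, k):
    """Scalar multiplication of a 5-tuple by a non-negative allocation."""
    return tuple(k * v for v in t)

def add(t1, t2):
    """Component-wise addition of two 5-tuples."""
    return tuple(x + y for x, y in zip(t1, t2))

# (allocation, membership tuple, non-membership tuple) for each basic cell.
terms = [
    (35, (3, 5, 7, 9, 11), (3, 5, 7, 10, 12)),
    (15, (4, 6, 8, 10, 12), (2, 5, 8, 11, 13)),
    (10, (8, 10, 12, 14, 16), (9, 11, 12, 15, 18)),
    (30, (4, 6, 8, 10, 12), (2, 5, 8, 10, 12)),
    (25, (8, 10, 12, 13, 15), (7, 9, 12, 15, 18)),
    (15, (5, 7, 9, 11, 13), (3, 6, 9, 12, 13)),
]
mu = (0, 0, 0, 0, 0)   # membership part of the fuzzy optimum value
nu = (0, 0, 0, 0, 0)   # non-membership part
for q, m, n in terms:
    mu = add(mu, scale(m, q))
    nu = add(nu, scale(n, q))
```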
7 Conclusion

This paper introduced a transportation technique based on a new ranking function for PIFNs using the centroid-of-centroids approach. On the basis of the present study, it can be concluded that the proposed method is much easier to apply than the existing methods [5] for determining the optimum solution of a PIFTP. Therefore, it is preferable to apply the proposed method, rather than the existing ones, to solve pentagonal intuitionistic fuzzy transportation problems.
References

1. Atanassov KT (1986) Intuitionistic fuzzy sets. Fuzzy Sets Syst 20:87–96
2. Bellman R, Zadeh LA (1970) Decision-making in a fuzzy environment. Manag Sci 17(4):B141–B164
3. Charles Rabinson G, Chandrasekaran R (2019) A method for solving a pentagonal fuzzy transportation problem via ranking technique and ATM. Int J Res Eng IT Soc Sci 09(04)
4. Ismail Mohideen S, Senthil Kumar A (2010) A comparative study on transportation problem in fuzzy environment. Int J Math Res 2(1):151–158
5. Kamal Nasir V, Beenu VP (2021) Unbalanced transportation problem with pentagonal intuitionistic fuzzy number solved using ambiguity index. Malaya J Matematik 9(1):720–724
6. Narayanamoorthy S, Saranya S, Maheswari S (2013) A method for solving fuzzy transportation problem (FTP) using Fuzzy Russell's method. Int J Intell Syst Appl 5(2):71–75. ISSN: 2074-904
7. Ponnivalavan K, Pathinathan T (2015) Intuitionistic pentagonal fuzzy number. ARPN J Eng Appl Sci 10(12):5446–5450. ISSN 1819-6608
8. Mondal S (2017) Pentagonal fuzzy number, its properties and application in fuzzy equation. Future Comput Inf J 2(2)
9. Uthra G, Thangavelu K, Shunmuga Priya S (2017) Ranking generalized intuitionistic pentagonal fuzzy number by centroidal approach. Int J Math Appl 5(4–C):389–393
10. Zadeh LA (1965) Fuzzy sets. Inf Control 8:338–353
Dynamic Allocation the Best-Fit Resource for the Specific Task in the Environment of Manufacturing Grid Avijit Bhowmick, Arup Kumar Nandi, and Goutam Sutradhar
Abstract A Manufacturing Grid (MG) is composed of an enormous set of heterogeneous and geographically scattered manufacturing resources, which are aggregated into a virtual platform for carrying out a wide range of manufacturing operations. As the number of heterogeneous manufacturing resources in the MG environment increases, allocating the appropriate resources to specific jobs becomes a critical concern. In this work, we propose an algorithm for allocating the appropriate resource to a job in the MG environment so as to optimize resource utilization with respect to parameters such as scalability, job completion time, and cost. Unlike conventional algorithms, our proposed algorithm captures the resource that is most suitable with respect to all parameters, and it permits all other resources to take part in further bid processes. The recommended algorithm is thus able to assign the most appropriate resources for job completion and achieves good performance in terms of efficiency and effectiveness.

Keywords MG · Resource searching · Scheduling · Optimum utilization
1 Introduction

1.1 Grid Technology

The conventional Internet is the worldwide connection of computers, based on specific protocols for transferring data and information among computing resources. The web is the association of disparate homepages which are

A. Bhowmick (B) Budge Budge Institute of Technology, Kolkata, India e-mail: [email protected] A. K. Nandi CSIR-CMERI, Durgapur, India G. Sutradhar National Institute of Technology, Manipur, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 B. B. V. L. Deepak et al. (eds.), Intelligent Manufacturing Systems in Industry 4.0, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-99-1665-8_52
621
622
A. Bhowmick et al.
furnishing information through static and dynamic homepages. But the Grid is a collaborative arrangement of different type of resources like computers and other computing cum electronic devices based on the Internet backbone from multiple domains to achieve common goals. The concept of Grid Computing was proposed, presented, and defined by Ian Foster [1] as “coordinated resource sharing and problem solving in dynamic, multiinstitutional virtual organizations” [1]. According to his structure and architecture, a completely different technology named Grid Technology materialized somewhat between mid-90 s to fulfill the huge intensifying demand for collaborative dive of organizations for data storage and resources for computing purposes as well. One kind special type of Grid named Service Grid is a significant structural design constituent necessary to comprehend the commercial prospective of services which are based on web connected worldwide [2]. This Grid is basically concerned with service. The Open Grid Services Architecture (OGSA) [3] presents an architectural development based on web services perceptions and technologies. One specific Service Grid dealing with web-based services with Grid-based services named service domain was developed by the company IBM [4]. With the quick improvement in the field of manufacturing worldwide, more and more different sort of heterogeneous resources are disseminated in different corporations which are geographically scattered near and far resources which appears to be of no use in an enterprise may be extremely wanted for some other enterprises (Fig. 1).
Fig. 1 Manufacturing resource providers and resource users through MG
So, globally scattered organizations are interested in collaborating and sharing their vital resources with one another to gain more trade prospects, using today's high-speed Internet technology in which thousands of devices are connected worldwide. The concept of the Manufacturing Grid (MG) has therefore evolved and been adopted by enterprises and research organizations for utilizing different manufacturing resources in a coordinated way.
2 Resource Management in MG

Managing the different types of resources in an MG environment is a crucial issue, because the heterogeneous manufacturing resources are geographically scattered and, in addition, lie in diverse administrative domains with dissimilar protocols. The most significant question is how to assign jobs to resources so that each job is executed efficiently and its deadline is satisfied (Fig. 2). Resource discovery and selection of the best resource among all the geographically scattered resources are divided into two phases. First, the candidate resources for the predefined tasks are described and selected from the available manufacturing resources.
Fig. 2 Manufacturing grid (MG) architecture
Second, an optimization technique is used to choose the best resource among the selected candidates to complete the tasks. The approach is validated on a typical scenario.
3 The Proposed Algorithm

Bidding is the major challenge encountered during resource allocation: when multiple resource suppliers reserve their assets for a single bidding process, the grid user selects only one resource to carry out the job, denying the other resources the opportunity to participate in further bidding cycles and serve additional MG clients before being released by the initial MG client.
Conversely, non-reserved auction methods result in jobs taking longer than expected to complete. This occurs when a resource supplier takes part in many simultaneous bidding processes for the same resources; owing to competition for such resources among MG clients, submitted jobs may take longer than planned to finish.
4 Proposed Resource Allocation Process

4.1 Algorithm at the User's End

A bidding process is established for the providers of the different resources. Every resource provider is informed in advance so that it can take part in the process. Each resource provider then submits a bid stating the time within which it can perform the work, considering all the parameters. The bids received from the resource providers are stored efficiently. When a job arrives for processing, a message is sent out calling for resource assignment; the kind of job and the completion time required for the task are specified. Having collected the job details, the system examines the bids and searches for the best-fit resource among all the resources ready to serve. The same process is followed for all subsequent selections.

Algorithm in M-Grid at the Resource Demander's End

Start
  Create a DB for keeping best_bid
  Create another DB for keeping all bids
  Include the job constraints in the resource request message
  Transmit the request message to all accessible resources
  Initialize resource provider index: a ← 1
  Repeat
    received_bid[a] ← bid obtained from resource provider a (w.r.t. parameters such as time)
    a ← a + 1
  End repeat
  Sort received bids in descending order
  best_fit_bid ← best of received_bid[1..a]
  Reset a ← 1
  If received_bid[a] = best_fit_bid then
    best_fit_bid_DB[a] ← received_bid[a]
  End-if
  Invoke the resource allocation algorithm
End
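The demander-side loop above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the bid tuple layout (provider, completion time, cost) and the tie-breaking rule are assumptions.

```python
# Hypothetical sketch of the demander-side bidding loop described above.
# A bid is modelled as (provider_id, completion_time, cost); only the single
# best-fit provider is reserved, so all others remain free for further rounds.

def select_best_fit(bids):
    """Return the best-fit bid, or None when no provider responded."""
    if not bids:
        return None
    # The pseudocode sorts the collected bids and keeps the extreme element;
    # here we rank by completion time, breaking ties on cost.
    return min(bids, key=lambda b: (b[1], b[2]))

received = [("R1", 8.0, 120.0), ("R2", 5.5, 150.0), ("R3", 5.5, 110.0)]
best = select_best_fit(received)   # R3 wins: same time as R2, lower cost
```

Because only the winning provider is reserved, `R1` and `R2` stay available for the next bidding round, which is the behaviour the proposed algorithm aims for.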
4.2 Algorithm at the Resource Supplier's End

At the resource provider's end, when a call for resource allocation arrives, the resource is first examined for availability. If the provider is interested in joining the bidding process, the bid is estimated and returned to the client's end; otherwise the request is declined. The process continues until the provider quits the system.
Algorithm at Resource Supplier’s End Start: Continue Receive Request from resource Demander Check the Request of Resource and it’s status then Compute the bid Convey the bid to resource demander w.r.t to parameters like Availability, Time and Efficiency from Eff—DB
626
A. Bhowmick et al.
Allocate that very Resource for that bid else Disregard the Request End-if Job Submission then If resource is matched for a specific job then Allocate that specific job to that Resource End-if else Cancel End-if till sign out from MG system End
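The supplier's side admits a similar sketch. The paper does not specify how availability, time, and efficiency are combined into a bid, so the scoring formula below is a placeholder assumption.

```python
# Hypothetical supplier-side handler: bid only when the resource is free,
# otherwise disregard the request. The scoring formula is an assumption.

def compute_bid(resource_free, availability, time_estimate, efficiency):
    """Return a bid value (lower is better) or None to decline the request."""
    if not resource_free:
        return None                      # resource busy: disregard the request
    # Combine the parameters kept in the supplier's efficiency DB (Eff-DB);
    # the weighting here is illustrative only.
    return time_estimate / (availability * efficiency)

bid = compute_bid(True, availability=1.0, time_estimate=10.0, efficiency=0.5)
```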
4.3 Advantages of the Proposed Algorithm

To overcome the problems of job assignment, the manufacturing resources are first examined critically at every administrative site. Resource discovery then proceeds until a suitable resource is found and allocated; if the required resource is not present in the resource pool, the discovery process continues. The requesting activity itself must be aware of, and state in its request, how long the resource will be needed. Whenever that timeframe elapses, a timer automatically decrements and releases the resource so it can be allocated to another service that may need it. A cost is charged to the site the moment a service is delivered to it, and the resource is returned, after charging, when its utilization is over, thereby fulfilling all steps of resource management in MG.
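The timed release described above can be illustrated with a small lease object. A logical clock stands in for the wall clock to keep the example deterministic; all names are hypothetical.

```python
# Hypothetical lease: the requester states up-front how long it needs the
# resource; once that period elapses the resource is released automatically
# and becomes allocatable to the next service.

class ResourceLease:
    def __init__(self, resource_id, duration, now=0):
        self.resource_id = resource_id
        self.expires_at = now + duration   # requested usage period
        self.released = False

    def tick(self, now):
        """Advance the timer; release the resource once the lease expires."""
        if not self.released and now >= self.expires_at:
            self.released = True
        return self.released

lease = ResourceLease("milling-machine-7", duration=5)
```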
5 Experimental Results

The experimental findings show that the total completion time of the executed jobs under the proposed algorithm diminishes as the number of jobs increases, because the most appropriate resource is selected for each job execution.
[Figure: utilization of resources versus jobs submitted]
6 Conclusion

Because of the static character of dissimilar sites and the dynamic nature of the resources, considerable time is wasted and dynamic resource utilization is very low. We have therefore attempted to improve heterogeneous resource assignment in the M-Grid environment in an efficient and effective manner. For this purpose, we proposed a dynamic resource allocation model in which a best-fit technique selects one resource among all those available; according to the specific need of the client, only the single resource that best satisfies all the parameters is assigned. Our experimental results show that as the number of jobs grows, the total completion time under the proposed algorithm decreases. The model saves vital resources for job execution: the client itself must state how long the task will take and returns the resource afterwards, while the resource provider maximizes resource utilization. In the future, the algorithm can be further optimized to resolve difficulties that may occur during resource assignment; the present work deals only with similar task conditions.
References

1. Foster I, Kesselman C, Tuecke S (2001) The anatomy of the grid: enabling scalable virtual organizations. Int J High Perform Comput Appl 15(3):200–222
2. John H, John SB (2002) Service grid: the missing link in web services. See also: Malarvizhi N, Uthariaraj VR (2008) A broker-based approach to resource discovery and selection in grid environments. In: International conference on computer and electrical engineering (ICCEE 2008)
3. Foster I, Berry D et al (2005) The open grid services architecture, version 1.0
4. Yih-Shin T, Brad T et al (2004) IBM business service grid: summary page. http://www-128.ibm.com/developerworks/library/gr-servicegrcol.html
Optimal Scheduling of Multi-objective Hydro–Thermal–Wind Using NSGSA Technique Ch. Syam Sundar and Gummadi Srinivasa Rao
Abstract Hydrothermal scheduling is one of the most important optimization problems in power system operation and control. Because of their clean and renewable nature, hydro and wind power plants are important to the economic and environmental performance of power systems that include thermal plants, and they attract increasing attention from researchers. This work incorporates wind generation into hydrothermal dispatch to develop a multi-objective hydro–thermal–wind scheduling (MO-HTWS) model. To solve the MO-HTWS problem with its various constraints, a non-dominated sorting gravitational search algorithm (NSGSA) is proposed, adopting the dominance-relation criteria. In addition, a constraint-handling strategy repairs infeasible solutions by adjusting the decision variables toward the feasible region. Finally, the ability of NSGSA to solve the MO-HTWS problem is tested on a daily scheduling example of a hydrothermal system. From the recent literature, we conclude that the good quality and even distribution of the Pareto solutions obtained by NSGSA make it a promising alternative for optimizing MO-HTWS problems.

Keywords Economic dispatch (ED) · Minimization of cost and emission · Non-dominated sorting gravitational search algorithm (NSGSA) · Gravitational search algorithm (GSA)
1 Introduction

The primary goal of hydrothermal scheduling is to determine the optimal hydro and thermal power outputs that meet the load demand over a specific schedule while reducing, as far as possible, the fuel cost needed to operate the thermal generators. Among scheduling problems, short-term hydrothermal scheduling (SHTS) is a complex, nonlinear, and

Ch. S. Sundar (B) · G. Srinivasa Rao, Department of EEE, V.R. Siddhartha Engineering College (Autonomous), Vijayawada, India, e-mail: [email protected]; [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
B. B. V. L. Deepak et al. (eds.), Intelligent Manufacturing Systems in Industry 4.0, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-99-1665-8_53
computationally challenging problem with many limiting constraints. Classical methods have been used to manage such hydrothermal systems, but to overcome their shortcomings, probabilistic search algorithms such as simulated annealing (SA), genetic algorithms (GA), and evolutionary programming (EP) have proved effective in solving inherently complex power system problems over the past decades. A globally optimal solution for hydrothermal scheduling was provided in [1]; more often, such methods yield solutions that are sub-optimal or merely close to the overall optimum. Recently, particle swarm optimization (PSO) has been applied in various fields of power system optimization, such as power system stabilizer design, reactive power and voltage control, dynamic security assessment, and short-term hydrothermal problems. The integrated operation of hydro, wind, and thermal power plants within a grid has recently become more economical; in the short term, wind, hydro, and thermal generation have become cheaper owing to this integrated operation within the network [2].
1.1 Literature Survey

A. K. Barisal explained that GSA, one of the latest heuristic optimization techniques developed by Rashedi et al., can solve the hydrothermal scheduling problem [3]. Hydrothermal scheduling is highly complex and nonlinear, a challenging setting for optimization and mathematical calculation. The proposed GSA plays a vital role in solving large hydrothermal planning problems with cascaded reservoirs; it considers all the variables of the problem and does not need the usual simplifying assumptions and linearizations required by traditional methods. However, the use of wind energy is not considered there. R. K. Swain presented a novel approach [1] that uses a gravitational search algorithm to solve the economic load dispatch problem. He proposed an approximate function model and improved the performance of the gravitational search algorithm by choosing the optimal mass of the agents. A comparison was made between evolutionary programming (EP) and the GSA technique, in which the GSA method gave improved results with reduced computation time; the inclusion of wind energy was again not addressed. M. Basu, through an interactive fuzzy satisfying method [2], developed an algorithm for economic emission load dispatch of thermal power plants with non-smooth fuel cost and emission functions, operating together with fixed-head hydropower units. Assuming the decision maker has fuzzy goals for each objective function, the multi-objective load dispatch problem was transformed into a minimization problem and solved by simulated annealing; the methodology provides decision makers with globally optimal or near-globally-optimal solutions, but again says nothing about wind energy. E. Rashedi proposed work on the various optimization procedures that have been developed [4], some of which are motivated by swarm behaviour in nature.
In that article, another optimization algorithm, called the gravitational search algorithm (GSA), is presented. GSA is built on the law of gravity and the notion of mass interactions. The algorithm uses Newtonian physics, and its searcher agents are collections of masses. In GSA, there is an isolated system of masses; using gravitational force, every mass in the system can sense the situation of the other masses, and information is exchanged between the masses through the gravitational force. Y. Wang applied various PSO methods effectively to the short-term hydrothermal scheduling problem [5]; the performance of the technique is illustrated by comparison with prominent examples solved by other evolutionary algorithms, and comparisons between different PSO variants are also made. S. Banerjee investigated the scheduling of hydro, wind, and thermal power using a particle swarm optimization method [6]; after applying the PSO method to the hourly generation schedule of a hydro–wind–thermal system, the corresponding simulation results were presented, but only a single objective was considered. X. Yuan investigated "An extended NSGA-III for the solution of multi-objective hydro–thermal–wind scheduling considering wind power cost", in which wind power is coordinated with hydro–thermal planning to set up a multi-objective economic emission hydro–thermal–wind scheduling (MO-HTWS) problem with various complicated constraints [7]; a constraint-handling procedure that repairs infeasible solutions by adjusting the decision variables into the feasible region according to the violation amount is also proposed. K. K. Mandal reported research on particle swarm optimization taking the valve-point loading effect into consideration [8]; the proposed method is compared with two other techniques, simulated annealing and evolutionary programming, but GSA is not considered in his work. X. Yuan also focused on multi-objective optimal operation of a hydro–thermal–wind system [9], choosing three objectives representing different requirements, namely economy, efficiency, and emissions. Hydrothermal scheduling with different conventional methods, along with their advantages and disadvantages, is explained with examples in "Operation and Control of Power Generation" [10]. Srinivasan, in "Power System Operation and Control", gives a detailed study of thermal economic dispatch and hydrothermal scheduling with worked examples [11].
1.2 Objectives

The principal objectives of the present work are:

1. To demonstrate the traditional lambda-iteration method for solving thermal economic dispatch.
2. To validate the lambda-iteration method for the hydrothermal scheduling problem.
3. To implement a heuristic, the gravitational search algorithm, for solving short-term hydrothermal scheduling problems.
4. To extend the gravitational search algorithm to short-term hydro–thermal–wind scheduling problems.
5. To present a multi-objective GSA (NSGSA) for short-term hydro–thermal–wind scheduling.
2 Problem Formulation

2.1 Introduction to STHTS

In short-term hydrothermal scheduling (STHTS), the optimal generation schedule is determined by summing the power output of each plant while observing the minimum and maximum limits, at the lowest cost. The main purpose of hydro–thermal–wind scheduling is to use the volume of water released from each reservoir to generate enough electricity to reduce the fuel cost of the thermal plants. The objective function is expressed mathematically as [3]:

$$\text{Minimize } F = \sum_{t=1}^{T} \sum_{i=1}^{N_s} f_i\!\left(P_{si}^{t}\right) \quad (1)$$

$$f_i = a_i + b_i P_{si} + c_i P_{si}^{2} \quad (2)$$
where $a_i$, $b_i$, and $c_i$ are the fuel cost coefficients.

(a) Power balance constraints

The combined thermal, hydro, and wind output must meet the load demand in every time period of the planning horizon:

$$\sum_{i=1}^{N_s} P_{si}^{t} + \sum_{j=1}^{N_h} P_{hj}^{t} + \sum_{k=1}^{N_w} P_{wk}^{t} = P_D^{t} \quad (3)$$

where $P_{si}^{t}$ is the output power of the $i$th thermal unit in period $t$, $P_{hj}^{t}$ is the output power of the $j$th hydro unit in period $t$, $P_{wk}^{t}$ is the output of the $k$th wind farm in period $t$, and $P_D^{t}$ is the total load demand [1].

(b) Continuity equation for the hydro reservoir
$$V_{hj}^{t} = V_{hj}^{t-1} + I_{hj}^{t} - Q_{hj}^{t} \quad (4)$$
where $V_{hj}^{t}$ is the storage volume of reservoir $j$ at the end of interval $t$, $I_{hj}^{t}$ is the inflow, $Q_{hj}^{t}$ is the discharge, and $N_h$ is the number of hydro plants, $j = 1, 2, \ldots, N_h$; $t = 1, 2, \ldots, T$ [12].

(c) Operating limits

I. Thermal plant

$$P_{si,\min} \le P_{si}^{t} \le P_{si,\max} \quad (5)$$

where $P_{si,\min}$ and $P_{si,\max}$ are the minimum and maximum generation limits of thermal plant $i$, $i = 1, 2, \ldots, N_s$; $t = 1, 2, \ldots, T$.

II. Hydro plant

$$P_{hj,\min} \le P_{hj}^{t} \le P_{hj,\max} \quad (6)$$

where $P_{hj,\min}$ and $P_{hj,\max}$ are the minimum and maximum generation limits of the $j$th hydro plant, $j = 1, 2, \ldots, N_h$; $t = 1, 2, \ldots, T$.

III. Wind plant

$$P_{wk,\min} \le P_{wk}^{t} \le P_{wk,\max} \quad (7)$$

where $P_{wk,\min}$ and $P_{wk,\max}$ are the minimum and maximum outputs of the $k$th wind plant, $k = 1, 2, \ldots, N_w$; $t = 1, 2, \ldots, T$.

(d) Hydro plant discharge limits

$$Q_{hj,\min} \le Q_{hj}^{t} \le Q_{hj,\max} \quad (8)$$

where $Q_{hj,\min}$ and $Q_{hj,\max}$ are the minimum and maximum discharges of the $j$th hydro plant, $j = 1, 2, \ldots, N_h$; $t = 1, 2, \ldots, T$ [13].

(e) Reservoir storage volume limits
$$V_{hj,\min} \le V_{hj}^{t} \le V_{hj,\max} \quad (9)$$

where $V_{hj,\min}$ and $V_{hj,\max}$ are the minimum and maximum storage volumes of the $j$th hydro plant, $j = 1, 2, \ldots, N_h$; $t = 1, 2, \ldots, T$.

(f) Initial and final reservoir storage

$$V_{hj}^{0} = V_{hj,\text{ini}}; \qquad V_{hj}^{T} = V_{hj,\text{fin}} \quad (10)$$

Minimization of emission: The second objective is to minimize the total greenhouse gas emission of the thermal plants over the operating period [2]:

$$E = \min \sum_{t=1}^{T} \sum_{i=1}^{N_s} e\!\left(P_{si}^{t}\right) \quad (11)$$

where $E$ denotes the total emission and $e(P_{si}^{t})$ the emission function, described as

$$e\!\left(P_{si}^{t}\right) = \alpha_{si} + \beta_{si} P_{si}^{t} + \gamma_{si} \left(P_{si}^{t}\right)^{2} \quad (12)$$

where $\alpha_{si}$, $\beta_{si}$, $\gamma_{si}$ are the emission coefficients of thermal plant $i$; the remaining constraints are as in the wind–hydro–thermal scheduling problem above.
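As a quick illustration of Eqs. (1)–(3), (11), and (12), the sketch below evaluates cost, emission, and the power balance for one period with two thermal units, one hydro unit, and one wind farm. All coefficient values are invented for the example and do not come from the paper.

```python
# Numerical sketch of the objectives and the power balance constraint above,
# for a single time period. Coefficients (a, b, c, alpha, beta, gamma) are
# illustrative values, not data from this work.

def fuel_cost(p, a, b, c):
    """Eq. (2): f_i = a_i + b_i*P_si + c_i*P_si^2."""
    return a + b * p + c * p * p

def emission(p, alpha, beta, gamma):
    """Eq. (12): e(P_si) = alpha_si + beta_si*P_si + gamma_si*P_si^2."""
    return alpha + beta * p + gamma * p * p

def power_balance_ok(p_thermal, p_hydro, p_wind, demand, tol=1e-6):
    """Eq. (3): total thermal + hydro + wind output must equal the demand."""
    return abs(sum(p_thermal) + sum(p_hydro) + sum(p_wind) - demand) < tol

p_s = [100.0, 150.0]                                              # thermal outputs (MW)
total_cost = sum(fuel_cost(p, 50.0, 2.0, 0.01) for p in p_s)      # Eq. (1), one period
total_emission = sum(emission(p, 20.0, 0.5, 0.002) for p in p_s)  # Eq. (11), one period
balanced = power_balance_ok(p_s, p_hydro=[200.0], p_wind=[10.0], demand=460.0)
```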
3 Optimization Techniques

3.1 Gravitational Search Algorithm

GSA was presented by Rashedi et al. in 2009 and is used to solve optimization problems. This population-based heuristic is designed around gravity and mass interactions: the algorithm comprises a collection of searcher agents that interact with each other through the gravitational force. The agents are treated as objects, and their performance is measured by their masses. The gravitational force causes a global movement in which all objects move toward the objects with heavier masses; the slower movement of the heavier masses guarantees the exploitation step of the algorithm and corresponds to good solutions. The masses obey the law of gravity and the law of motion [4]:
$$F = G \frac{M_1 M_2}{R^2} \quad (13)$$

$$a = \frac{F}{M} \quad (14)$$
In Eq. (13), $F$ is the gravitational force, $G$ is the gravitational constant, $M_1$ and $M_2$ are the masses of the first and second objects, and $R$ is the distance between the two objects. Equation (14) is Newton's second law. In the proposed algorithm, as discussed above, agents are considered as objects and their performance is measured by their masses; each mass is therefore acted on by the gravitational force and updated by exchanging information. The position of agent $i$ is

$$P_i = \left(p_i^{1}, \ldots, p_i^{d}, \ldots, p_i^{n}\right), \quad i = 1, 2, \ldots, m \quad (15)$$
where $p_i^{d}$ is the position of the $i$th mass in the $d$th dimension and $n$ is the dimension of the search space. At time $t$, the force acting on mass $i$ from mass $j$ is

$$F_{ij}^{d}(t) = G(t)\, \frac{M_{pi}(t) \times M_{aj}(t)}{R_{ij}(t) + \varepsilon} \left( p_j^{d}(t) - p_i^{d}(t) \right) \quad (16)$$
where $M_{pi}$ is the mass of object $i$, $M_{aj}$ is the mass of object $j$, $G(t)$ is the gravitational constant at time $t$, $R_{ij}(t)$ is the Euclidean distance between objects $i$ and $j$, and $\varepsilon$ is a small constant [4]:

$$R_{ij}(t) = \left\| P_i(t), P_j(t) \right\|_2 \quad (17)$$

$$F_i^{d}(t) = \sum_{\substack{j=1 \\ j \ne i}}^{m} \text{rand}_j \, F_{ij}^{d}(t) \quad (18)$$
where $\text{rand}_j$ is a random number in the interval [0, 1]. By the law of motion, the acceleration of agent $i$ in dimension $d$ at time $t$ is [3]

$$a_i^{d}(t) = \frac{F_i^{d}(t)}{M_i(t)} \quad (19)$$
The velocity of an agent is a function of its current acceleration and current velocity; hence the next velocity and next position of the agent are calculated as

$$v_i^{d}(t+1) = \text{rand}_i \times v_i^{d}(t) + a_i^{d}(t) \quad (20)$$

$$p_i^{d}(t+1) = p_i^{d}(t) + v_i^{d}(t+1) \quad (21)$$
where $\text{rand}_i$ is a uniform random variable in the interval [0, 1]. The gravitational constant $G$ is initialized and decreases with time; that is, $G$ is a function of its initial value $G_0$ and time $t$:

$$G(t) = G(G_0, t) \quad (22)$$

$$G(t) = G_0 \, e^{-\alpha t / T} \quad (23)$$
The gravitational and inertial masses are updated by the following equations:

$$m_i(t) = \frac{\text{fit}_i(t) - \text{worst}(t)}{\text{best}(t) - \text{worst}(t)} \quad (24)$$

$$M_i(t) = \frac{m_i(t)}{\sum_{j=1}^{m} m_j(t)} \quad (25)$$
where $\text{fit}_i(t)$ represents the fitness of agent $i$ at time $t$, and $\text{best}(t)$ and $\text{worst}(t)$ are the strongest and weakest fitness values in the population. For minimization problems:

$$\text{best}(t) = \min_{j \in \{1, \ldots, m\}} \text{fit}_j(t) \quad (26)$$

$$\text{worst}(t) = \max_{j \in \{1, \ldots, m\}} \text{fit}_j(t) \quad (27)$$

Thus best(t) corresponds to the agent with the best fitness value, which is assigned the largest mass. The flowchart representation of GSA is given in Fig. 1.
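Equations (13)–(27) can be assembled into a compact GSA. The sketch below minimizes the sphere function under assumed parameter values (population size, $G_0$, $\alpha$); it is illustrative only and not the implementation used in this work.

```python
# Minimal, illustrative GSA for a toy minimization problem.
# All parameter values are assumptions, not settings from the paper.
import math
import random

def gsa(objective, dim, bounds, pop=20, iters=100, g0=100.0, alpha=20.0, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    V = [[0.0] * dim for _ in range(pop)]
    best_x, best_f = None, float("inf")
    for t in range(iters):
        fit = [objective(x) for x in X]
        best, worst = min(fit), max(fit)
        if best < best_f:
            best_f, best_x = best, list(X[fit.index(best)])
        # Eqs. (24)-(25): map fitness to normalised masses (minimization)
        m = [(f - worst) / (best - worst + 1e-12) for f in fit]
        s = sum(m) + 1e-12
        M = [mi / s for mi in m]
        G = g0 * math.exp(-alpha * t / iters)              # Eq. (23)
        for i in range(pop):
            acc = [0.0] * dim
            for j in range(pop):
                if i == j:
                    continue
                R = math.dist(X[i], X[j])                  # Eq. (17)
                for d in range(dim):
                    # Eqs. (16), (18), (19) combined: the passive mass of
                    # agent i cancels when the force is divided by its own mass
                    acc[d] += rng.random() * G * M[j] * (X[j][d] - X[i][d]) / (R + 1e-12)
            for d in range(dim):
                V[i][d] = rng.random() * V[i][d] + acc[d]      # Eq. (20)
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))  # Eq. (21)
    return best_x, best_f

xb, fb = gsa(lambda x: sum(v * v for v in x), dim=2, bounds=(-5.0, 5.0))
```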
3.2 Non-dominated Sorting Gravitational Search Algorithm

Multi-objective optimization (MOP) has gained tremendous importance in scientific research and engineering applications. A MOP differs from a single-objective optimization problem, which considers only one objective. To understand the concept, consider an $n$-dimensional minimization problem with $m$ objective functions: minimize $f(x) = \left(f_1(x), \ldots, f_m(x)\right)$, where $x$ is the vector of decision variables, subject to inequality constraints $g_j(x) \le 0$ and equality constraints $h_k(x) = 0$, with $j$ and $k$ indexing the inequality and equality constraints respectively [14].

Dominance Relationship

Let $X_1$ and $X_2$ be two solutions. We say $X_1 \succ X_2$ (solution $X_1$ dominates $X_2$) when $X_1$ is no worse than $X_2$ in every objective and strictly better in at least one; $X_1$ is then a non-dominated solution with respect to $X_2$, which is the inferior solution [9].

Pareto Optimal Solution

The optimal solution of a MOP is not a single solution but a set of non-dominated solutions. The Pareto optimal set of solutions plotted in objective space is called the Pareto optimal front [9].
3.3 Computation of Non-dominated Sorting and Crowding Distance

This technique introduces a fast non-dominated sorting strategy and an elitist diversity-preservation approach, using the crowding distance to sort the population into different non-domination levels. Both the fast non-dominated sorting method and the crowding-distance calculation are derived from NSGA-II. The procedure for fast non-dominated sorting is as follows.

Step 1: For each solution $p$, evaluate every objective and compute two quantities: (1) $n_p$, the domination count, i.e., the number of solutions that dominate $p$, and (2) $S_p$, the set of solutions dominated by $p$.

Step 2: All solutions with $n_p = 0$ are assigned rank $k_p = 1$; they form the first non-dominated level, called the Pareto optimal front, because no solution dominates them.

Step 3: For each solution $p$ in the current front, visit each member $q$ of the set $S_p$ and decrement its domination count $n_q$ by one. If $n_q$ becomes zero for any $q$, put it in the second non-dominated level and set its rank to $k_q = 2$.

Step 4: Repeat the above procedure for every member of the second non-dominated level to obtain the third level, and continue until all non-dominated levels are generated [9].

Computation of crowding distance

Step 1: Sort the population according to each objective function value in ascending order.

Step 2: For every objective, select the solutions with the smallest and largest function values as the boundary solutions and assign them an infinite crowding-distance value.

Step 3: Calculate the distance of the remaining solutions according to the normalized difference in the function values of the two neighbouring solutions.
The distance of solution $i$ for objective $m$ is given by

$$\text{dist}_m(i) = \frac{f_m(i+1) - f_m(i-1)}{f_m^{\max} - f_m^{\min}} \quad (28)$$

Step 4: Continue the calculation until the distances of all solutions for all objectives have been obtained.

Step 5: The crowding distance is computed as the sum of the individual distance values corresponding to each objective; the crowding distance of solution $i$ can be expressed as

$$\text{CD}_i = \sum_{n=1}^{m} \text{dist}_n(i) \quad (29)$$
After the non-dominated sorting and crowding-distance calculation, each individual $i$ in the population carries two attributes: (1) its non-domination rank ($\text{rank}_i$) and (2) its crowding distance ($\text{CD}_i$). The comparison criterion between two individuals can then be stated as:

$$i \text{ is better than } j \text{ if } (\text{rank}_i < \text{rank}_j) \text{ or } (\text{rank}_i = \text{rank}_j \text{ and } \text{CD}_i > \text{CD}_j) \quad (30)$$

That is, between two individuals with different non-domination ranks, we prefer the one with the lower rank; if the two individuals have the same rank, we prefer the one located in the less crowded region.
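The fast non-dominated sort, the crowding distance of Eqs. (28)–(29), and the crowded-comparison rule (30) can be sketched as follows for a two-objective (cost, emission) minimization; the sample population is invented for the example.

```python
# Illustrative NSGA-II-style machinery as described above; not the authors' code.

def dominates(a, b):
    # a dominates b: no worse in every objective, strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_non_dominated_sort(objs):
    n = len(objs)
    n_p = [0] * n                    # domination count of each solution
    S = [[] for _ in range(n)]       # set of solutions each solution dominates
    fronts, rank = [[]], [0] * n
    for p in range(n):
        for q in range(n):
            if dominates(objs[p], objs[q]):
                S[p].append(q)
            elif dominates(objs[q], objs[p]):
                n_p[p] += 1
        if n_p[p] == 0:
            fronts[0].append(p)      # first non-dominated level
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in S[p]:
                n_p[q] -= 1
                if n_p[q] == 0:
                    rank[q] = i + 1
                    nxt.append(q)
        i += 1
        fronts.append(nxt)
    return fronts[:-1], rank

def crowding_distance(objs, front):
    cd = {i: 0.0 for i in front}
    for k in range(len(objs[0])):
        order = sorted(front, key=lambda i: objs[i][k])
        fmin, fmax = objs[order[0]][k], objs[order[-1]][k]
        cd[order[0]] = cd[order[-1]] = float("inf")   # boundary solutions, Step 2
        for idx in range(1, len(order) - 1):          # Eq. (28) accumulated per Eq. (29)
            cd[order[idx]] += (objs[order[idx + 1]][k] - objs[order[idx - 1]][k]) / (fmax - fmin + 1e-12)
    return cd

def crowded_better(i, j, rank, cd):
    # Crowded-comparison rule (30)
    return rank[i] < rank[j] or (rank[i] == rank[j] and cd[i] > cd[j])

# Cost/emission pairs: the first three are mutually non-dominated, the last is dominated.
pop = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
fronts, rank = fast_non_dominated_sort(pop)
cd = crowding_distance(pop, fronts[0])
```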
4 Results and Discussion

The results for thermal dispatch (test case-1) and hydrothermal scheduling (test case-2) [11] using the conventional technique are tabulated in Tables 1 and 2, respectively. Test case-3 is taken from [4] and test case-4 from [10]; test case-5 adds a wind constraint 0 ≤ Pw ≤ 10 MW to test case-4. The algorithm was initialized with a population of 30 and executed for up to 100 iterations. The heuristic gravitational search algorithm was applied to test case-4 [10]; the resulting hydro and thermal schedules, water discharge, and reservoir volumes are shown below. The flowchart for GSA is shown in Fig. 1, and the tables report the hydro and thermal generation, discharge, power output, power demand, and total fuel cost.
Table 1 Result for test case-1 for thermal dispatch [11]

Unit No. | Pg (MW)  | IFC     | L      | λ
1        | 177.299  | 10.6184 | 1.1398 | 12.1034
2        | 489.8232 | 11.3410 | 1.0672 | 12.1034

Table 2 Result for test case-2 for hydrothermal scheduling

Unit    | Pg (MW), T1 (Pd = 15 MW) | Pg (MW), T2 (Pd = 25 MW) | Pg (MW), T3 (Pd = 8 MW)
Hydro   | 14.0553                  | 23.6343                  | 7.3583
Thermal | 0.9447                   | 1.3537                   | 0.6417
Fig. 1 Flowchart of GSA
Test Case-5

For the above test case, a wind limit in the range 0 ≤ Pw ≤ 10 MW has been added. Figure 2 and Table 5 describe the resulting hydro–thermal–wind scheduling using the gravitational search algorithm; adding the wind constraint to the test case reduces the fuel cost.
Table 3 Result for test case-4

Time        | PH (MW) | PT (MW) | Q (×10⁴ m³/s) | V (×10⁴ m³) | Pg (MW) | Pd (MW) | Cost (Rs/h)
24.00–12.00 | 339.46  | 860.54  | 1820.9 | 102,150 | 1200 | 1200 | 118,250
12.00–24.00 | 639.46  | 860.54  | 3369.1 | 85,720  | 1500 | 1500 | 118,250
24.00–12.00 | 239.46  | 860.54  | 1318.8 | 93,894  | 1100 | 1100 | 118,250
12.00–24.00 | 939.46  | 860.54  | 4798.4 | 60,313  | 1800 | 1800 | 118,250
24.00–12.00 | 89.46   | 860.54  | 1317.8 | 68,499  | 950  | 950  | 118,250
12.00–24.00 | 439.46  | 860.54  | 2708.3 | 60,000  | 1300 | 1300 | 118,250

Fuel cost = 709,520 Rs/h; Simulation time = 12.933 s

Table 4 Comparison of the proposed method with other algorithms

Algorithm         | Least cost ($) | Simulation time (s)
SA                | 709,874.36     | 901
EP                | 709,863.29     | —
IFEP              | 709,862.05     | —
IBFA              | 709,837.926    | —
IWAPSO            | 709,599.22     | —
SPSO-TVAC         | 709,528.45     | —
CSA               | 709,862.05     | 4.54
ORCSA-levy flight | 709,862.04     | 0.18
ORCSA-cauchy      | 709,862.04     | 0.18
FIPSO             | 623,550.0      | 59.7
Proposed          | 709,520        | 12.973
Result: Table 5 reports the three-day hydro–thermal–wind schedule, water discharge, reservoir volume, electricity generation, electricity demand, and total fuel cost.

Test Case-6 (NSGSA applied to HTWS)

For the above test case-5, the emission function J = 0.006483 P₁² − 0.79027 P₁ + 28.82488 is considered. Figure 3 shows the multi-objective hydro–thermal–wind scheduling using the non-dominated sorting gravitational search algorithm to reduce both cost and emission. In Fig. 4, the trade-off curve between minimum cost and minimum emission gives the competing non-dominated solutions produced by NSGSA for the hydro–thermal–wind scheduling problem (Fig. 5).
Optimal Scheduling of Multi-objective Hydro–Thermal–Wind Using …
Fig. 2 Cost minimization of HTS using GSA
5 Conclusions

In order to reduce the fuel cost of thermal power plants, hydropower plants are integrated with thermal plants into combined hydrothermal systems. Hydrothermal scheduling can be performed in different ways, using conventional or heuristic methods. However, traditional methods have some drawbacks, so a heuristic method is chosen here that has a high convergence rate and satisfies the load demand at the lowest fuel cost.
6 Future Scope

This work can be extended to multi-objective optimization that ensures the considered HTW system provides minimum cost along with minimum emission, using techniques other than the heuristic optimization methods applied here.
Table 5 Result for test case-5

Time        | PH (MW) | PT (MW) | Pw (MW) | Q (×10⁴ m³/s) | Volume (×10⁴ m³) | Pg (MW) | Pd (MW) | Cost (Rs/h)
24.00–12.00 | 285.73  | 904.3   | 9.9729  | 1750.1        | 103,00           | 1200    | 1200    | 124,790
12.00–24.00 | 558.98  | 931.44  | 9.5784  | 3108.1        | 89,701           | 1500    | 1500    | 128,890
24.00–12.00 | 299.32  | 791.1   | 9.5784  | 1817.6        | 91,890           | 1100    | 1100    | 108,060
12.00–24.00 | 768.56  | 1021.9  | 9.5784  | 4149.8        | 66,093           | 1800    | 1800    | 142,770
24.00–12.00 | 261.35  | 679.08  | 9.5784  | 1628.9        | 70,546           | 950     | 950     | 92,052
12.00–24.00 | 512.84  | 777.58  | 9.5784  | 2878.8        | 60,000           | 1300    | 1300    | 106,108

Fuel cost = 702,650 Rs/h; Simulation time = 6.23711 s
Table 6 Result for test case-6

Time        | PH (MW) | PT (MW) | Pw (MW) | Q (×10⁴ m³/s) | Volume (×10⁴ m³) | Pg (MW) | Pd (MW) | Cost (Rs/h) | Emission (pound/kg)
24.00–12.00 | 410.79  | 787.07  | 2.1361  | 2371.6        | 95,540           | 1200    | 1200    | 74,408      | 41,075
12.00–24.00 | 534.01  | 956.5   | 9.4816  | 2984.1        | 83,732           | 1500    | 1500    | 74,408      | 62,451
24.00–12.00 | 235.22  | 855.3   | 9.4816  | 1499.1        | 89,743           | 1100    | 1100    | 74,408      | 49,145
12.00–24.00 | 713.07  | 1077.4  | 9.4816  | 3874          | 67,255           | 1800    | 1800    | 74,408      | 80,440
24.00–12.00 | 53.978  | 886.54  | 9.4816  | 598.27        | 84,706           | 950     | 950     | 74,408      | 53,083
12.00–24.00 | 739.71  | 550.81  | 9.4816  | 4006.3        | 60,000           | 1300    | 1300    | 74,408      | 18,725

Fuel cost = 446,450 Rs/h; Emission = 304,920 pound/kg; Simulation time = 5.19908 s
Fig. 3 Minimizing costs HTWS using GSA
Fig. 4 Cost minimization of HTWS using NSGSA
Fig. 5 Cost minimization and emission minimization of HTWS using NSGSA
References

1. Swain RK (2011) Short-term hydrothermal scheduling using clonal selection algorithm. Electric Power Energy Syst 33:647–656
2. Basu M (2014) Improved differential evolution for short-term hydro-thermal scheduling. Electric Power Energy Syst 58:91–100
3. Barisal AK, Sahu NC (2012) Short term hydrothermal scheduling using gravitational search algorithm. IEEE 978(1–4673):1049–1059
4. Rashedi E, Nezamabadi-pour H, Saryazdi S (2009) GSA: a gravitational search algorithm. Inf Sci 179:2232–2248
5. Wang Y, Zhou J, Zhou C (2012) An improved self-adaptive PSO technique for short-term hydro-thermal scheduling. Electric Power Energy Syst 39:2288–2295
6. Banerjee S (2016) Short term hydro-thermal-wind scheduling based on particle swarm optimization technique. Electric Power Energy Syst 81:275–288
7. Yu B, Yuan X (2007) Short-term hydro-thermal scheduling using particle swarm optimization method. Energy Convers Manag 48:1902–1908
8. Mandal KK, Basu M (2007) Particle swarm optimization technique based short-term hydrothermal scheduling. Energy Convers Manag 8:1392–1399
9. Yuan X (2015) An extended NSGA-III for solution of multi-objective hydro-thermal-wind scheduling considering wind power cost. Energy Convers Manag 96:568–578
10. Wood AJ, Wollenberg BF (2012) Power generation, operation and control, 2nd edn. Wiley India, New Delhi
11. Sivanagaraju S (2009) Power system operation and control. Pearson
12. Zhang J, Wang J (2012) Small population based particle swarm optimization for short term hydrothermal scheduling. IEEE 142–152
13. Mondal S, Bhattacharya A (2012) Multi-objective economic emission load dispatch solution using gravitational search algorithm and considering wind power penetration. Electric Power Energy Syst 44:282–292
14. Tian H, Yuan X (2014) Multi-objective optimization of short-term hydro-thermal scheduling using non-dominated sorting gravitational search algorithm with chaotic mutation. Energy Convers Manag 81:504–519
Route Optimization as an Aspect of Humanitarian Logistics: Delineating Existing Literature from 2011 to 2022 Shashwat Jain, M. L. Meena, Vishwajit Kumar, and Pankaj Kumar Detwal
Abstract The delayed response and rescue operations during natural disasters and complex global catastrophes impact the ensnared individuals. These inescapable events may occur multiple times every year, and the lack of local preparedness, the absence of logistics planning prior to their occurrence, and impaired coordination capabilities are all barriers that must be overcome. In recent years, biological weapons, wars, environmental upheavals, and other cataclysmic occurrences have sparked great concern about human suffering. Route optimization entails planning fleet size and type, route selection, and the timely delivery of disaster relief goods, with the specific objectives of minimizing transit time, route length, and associated costs and, most importantly, prioritizing the demands of the affected. This review explores the available scholarly works on route optimization in humanitarian logistics and comprehensively assesses year-by-year trends, keyword trends, geographical distribution, and prospects for further investigation. We evaluated the breadth of the subject by querying the Scopus database for scholarly articles published between 2011 and October 2022. Based on the findings, researchers have taken a keen interest in route optimization studies in recent years, and substantial potential for future studies exists in this field. This article provides valuable information for academics, investors, and other stakeholders concerned with channelizing their efforts in the relevant areas. Keywords Humanitarian logistics · Vehicle route optimization · Expository analysis · Disaster management
S. Jain (B) · M. L. Meena Malaviya National Institute of Technology, Jaipur, Rajasthan, India e-mail: [email protected] V. Kumar Indian Institute of Technology, Bombay, India P. K. Detwal Indian Institute of Technology, Roorkee, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 B. B. V. L. Deepak et al. (eds.), Intelligent Manufacturing Systems in Industry 4.0, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-99-1665-8_54
1 Introduction

Human intervention in nature's order and other innate causes have increased the severity and frequency of disasters, making us more susceptible to these hazards. When a disaster befalls a region within any country, the emergency logistics planning, the readiness of relief systems, and the coordination among them are put to the test at the expense of the lives of the masses. The impending collapse of the healthcare system, transportation infrastructure, distribution network, and the operations of multiple companies is prolonged until a viable strategy to repair and revive these systems is implemented. The most crucial aspect to be considered for relief operations is the price we must pay in the case of delays or failures.

Humanitarian logistics is concerned with supplying essential assistance commodities, including food, shelter, clean water, and medications, to afflicted individuals in a timely and effective manner; nevertheless, the organizations responsible for disseminating these restricted resources encounter many problems. The research in [1] demonstrates the importance of applying logistics concepts to emergency response. Route optimization is an essential component of humanitarian logistics, and many objectives must be addressed, such as timeliness, distance, cost, and prioritization. Diverse approaches and concepts used in this area are discussed herein. The multi-objective formulations of [2] aim to minimize time, cost, and route length, while the clustering of affected areas proposed by [3] ensures that each cluster is visited exactly once and its total demand is satisfied. In light of the dynamic situations that follow a disaster, decisions must be made regarding the number of vehicles and the selection of their routes, the allocation of available vehicles to injured people, and their timely transportation to healthcare facilities [4].
The solutions produced by heuristics [5] were found to be acceptably close to optimal, indicating the way forward to use them in real-world scenarios. As this field progressed, many programming models were developed, such as an exact mixed-integer programming model [6] and a robust possibilistic programming model [7]. Xu et al. [8] also proposed a random fuzzy simulation-based interactive genetic algorithm model to deal with the randomness of disasters. For last-mile distribution of humanitarian supplies, [9] examined vehicle fleet size, multiple trips, coverage of all areas, and the associated interdependencies. As new constraints were introduced into optimization problems combining vehicle and crew routing [10], satisfactory results were obtained, highlighting the utility of experimentation and simulation modeling in this area. Many practical challenges faced by various authorities during the COVID-19 period may require a decision-support approach, as proposed by [11], integrating both route optimization and advanced simulation to improve the sustainable performance of last-mile vaccine cold chain logistics operations. A review of the relevant journal articles supports the assertion that researchers' interest in this field has expanded significantly over the preceding decade. In the context of this research, some research questions have been developed to bridge
the gap identified during the literature review. The following research questions are addressed in this study:
RQ1: How has research on route optimization in humanitarian logistics evolved?
RQ2: How does the research vary according to geographical location, and which region has produced the most ground-breaking research in this field?
RQ3: Which themes were most popular, and is there a link between the various terms employed by researchers in this area?
RQ4: Which journals have published a substantial number of articles during the review period?
This review attempts to enlighten readers by addressing the aforementioned research questions and providing a glimpse into the field's current state as well as its possible future directions. It is worth mentioning that the past studies carried out by various researchers contain numerous innovations that enrich the present study. This study explores the significance of route optimization in humanitarian logistics. Despite Scopus's prominence as a research resource, no study has yet presented an in-depth descriptive analysis of the literature concerning route optimization in humanitarian logistics. A study is undoubtedly required to present a holistic perspective on the evolution of this subject across a wide range of dimensions. The study covers a wide range of topics but focuses on relevant themes. The subsequent flow of this study is as follows: the available literature is assessed in Sect. 2; Sect. 3 describes the review methodology adopted in this research; Sect. 4 contains an expository analysis of the selected publications, followed by Sect. 5, covering the results and a critical inferential discussion of the findings; the study's inherent limitations, conclusion, and significant contribution are discussed in Sect. 6.
2 Literature Review

The following section presents the breadth of extant literature in the field of route optimization concerning humanitarian logistics. Table 1 showcases the focus space of the various researchers, classified into two levels: primary-level focus and secondary-level focus.

Table 1 Primary level and secondary level focus

Primary level focus | Secondary level focus          | References
Logistics           | Cold chain sustainable vaccine | [11]
                    | Smart                          | [12]
                    | Relief                         | [25]
Disaster phase      | Response                       | [17]
                    | Relief                         | [14]
Distribution        | Medical supply: vaccine        | [19]
                    | Last mile                      | [9]
                    | Humanitarian aid               | [26]
Programming         | Robust possibilistic           | [7]
                    | Constraint mixed integer       | [10]
                    | Restricted dynamic             | [16]
Optimization        | Bi-level                       | [8]
                    | Combinatorial                  | [27]
                    | Robust                         | [23]
Algorithm           | Agile                          | [18]
                    | Hybrid                         | [10]
                    | Indexing                       | [19]
                    | Biased-randomized              | [18]
                    | Genetic                        | [20]
                    | Ant colony                     | [2]
Scheduling          | Dynamic                        | [22]
                    | Multiphase                     | [21]
                    | Crew, vehicle                  | [10]
Network             | Restoration                    | [24]
                    | Assessment                     | [15]
                    | Repair                         | [14]
                    | Analysis                       | [28]

Logistics being the central theme, [11] discusses sustainable logistics by focusing on vaccines' last-mile cold chain transportation. Using a fleet of vehicles, [12] introduced smart logistics for vaccination supplies to geographically distant consumers. The phases of a disaster described in [13, 14] indicate that the response and relief stages are prevalent. Researchers employ a wide variety of algorithms to study disasters since the consequences of each catastrophe are unique and call for a fresh approach. These methods focus on meeting the specific necessities of affected people. It is evident from [15] that, in addition to conventional optimization techniques, heuristics are applied to produce high-quality solutions for real-world scenarios. The usage of metaheuristics [16] indicates that a combination of MCGA and Tabu search may be used to quickly arrive at a solution. As disasters are unavoidable, modeling and simulation can play an important role in determining countermeasures [17], and biased-randomized [18] and indexing [19] techniques can be adopted to deal with these uncertainties. Likewise, other algorithms can improve the performance of classic heuristic methods [20] to produce a better result. Road network layout, pre- and post-disaster road conditions, real-time monitoring of road damage, and traffic flow must be considered when addressing vehicle scheduling [10]. To handle a problem of this complexity, [21] worked on an effective relief logistics schedule based on accurate transport time data for available routes. In [22], emphasis is placed on Internet of Things (IoT) technologies, which provide
dynamic data updates to accommodate changes in distribution schedules. Concerning network aspects, design [23], repair [14], and restoration [24] are discussed, with road restoration to facilitate connectivity between critical locations identified as one of the primary goals. The assessment of networks performed by [15] can minimize disaster uncertainty and enable the precise distribution of relief aid while offering managerial insights pertinent to humanitarian aid agencies. Table 1 shows that optimization in humanitarian logistics has encompassed several areas in logistics, distribution, and networks. Considering the significance of the findings, it is clear that algorithms and optimization techniques have evolved. Allocating routes for distribution purposes necessitates analysis of their feasibility, optimal design, operability, and innovation potential. Despite the vast literature on route optimization, no research has provided a descriptive analysis encompassing all of this. A study is therefore required to provide a holistic view of the area's progress across several aspects, addressing the identified gap. Based on the findings of this research, a better understanding can be developed of the route optimization literature and the future research scope within it.
3 Review Methodology The adopted methodology for examining the existing literature on route optimization in humanitarian logistics is illustrated in Fig. 1, in keeping with the research questions established in the Introduction section of this paper. As shown, the methodology is devised using the fundamental approach of review. The search criteria are specified to extract documents in the given area using a particular database and set of keywords, followed by a screening process to filter the retrieved papers as per their relevance to the specific topic under study. The database Scopus was chosen to obtain the literature, and the search query string was [(“HUMANITARIAN” OR “RELIEF”) AND (“SUPPLY CHAIN” OR “LOGISTICS”) AND “ROUTE”]. The words “relief” and “supply chain” were added to the string as their presence was strong in other related articles. The database retrieved a sample of 191 publications for the study. The papers were restricted to journal articles only in the English language. The selection period was from 2011 to 2022 to confine the scope of the investigation. Based on these inclusion and exclusion criteria, 114 articles were found to be admissible. Furthermore, analysis is carried out by employing various tools, generating findings, and discussing them to explain the current research horizon and future scope. Some articles irrelevant to our domain were eliminated, including those published in economics and social themes. Numerous articles have been written on facility location and warehouse design. As our study primarily focuses on route optimization, the work in these areas was also excluded. In the end, 49 publications from the available literature were considered for this analysis. Multiple tools were used to conduct the expository analysis, ensuring that the most instructive results were obtained from each tool. The descriptive summary of the selected review articles is
Fig. 1 Review methodology
obtained using R studio’s statistics package. VOS Viewer is used to construct the keyword co-occurrence network. The Power BI (Data Visualization) tool is used to produce the line plot for year-wise publication trends, the filled world map depicting the landscape distribution of literature, and a step bar chart representing journal publications across years. Inferences are drawn from these illustrative charts. Later, the limitations and future research elements are highlighted in the discussion and conclusion section.
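The inclusion and exclusion steps described above can be sketched as a simple filter pipeline. The record fields below and the `relevant` flag (representing the outcome of the manual relevance screening) are illustrative assumptions, not the actual Scopus export schema.

```python
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    year: int
    doc_type: str   # Scopus document type, e.g. "Article" or "Conference Paper"
    language: str
    relevant: bool  # outcome of the manual relevance screening step

def screen(records):
    """Apply the review's criteria in order: journal articles only,
    English only, published 2011-2022, then manual relevance screening."""
    eligible = [r for r in records
                if r.doc_type == "Article"
                and r.language == "English"
                and 2011 <= r.year <= 2022]
    return [r for r in eligible if r.relevant]
```

Applied to the retrieved sample, the first filter reproduces the narrowing from the raw query result to the admissible articles, and the relevance pass reproduces the final selection.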
4 Expository Analysis

4.1 Descriptive Summary of Data

Table 2 shows the salient descriptive details of the research articles selected for review. The authors intend to represent the spectrum of the field's literature between 2011 and 2022. Authors from across the world have attempted to discuss the integration of route optimization approaches in humanitarian operations and logistics, which might contribute to the development of future standards and guidelines. For the timespan mentioned above, the topic is covered by 39 sources and 49 documents (all journal articles). Although there are just 137 authors, 188 author keywords appear in their writings, while the Keywords Plus (terms frequently featured in titles) number 343. Another significant factor is the international co-authorship share, assessed as 26% in this context, with 2.94 co-authors per document. This evinces an interaction among researchers from diverse regions across the globe and indicates that this field has a strong collaboration network.
Table 2 Descriptive enumeration of literature

Description                       | Results
Timespan                          | 2011:2022
Sources (journals, books, etc.)   | 39
Documents                         | 49
Annual growth rate (%)            | 14.65
Document average age              | 3.51
Average citations per doc         | 15.32
References                        | 2852
Document contents                 |
  Keywords Plus (ID)              | 343
  Author's keywords (DE)          | 188
Authors                           |
  Authors                         | 137
  Authors of single-authored docs | 3
Authors collaboration             |
  Single-authored docs            | 3
  Co-authors per doc              | 2.94
  International co-authorships (%)| 26
Document types                    |
  Article                         | 49
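The annual growth rate reported in Table 2 is, in bibliometric packages such as bibliometrix, typically the compound annual growth rate of yearly publication counts between the first and last year of the span. A minimal sketch; the yearly counts used in the example below are illustrative assumptions, not the study's reported data.

```python
def annual_growth_rate(first_count, last_count, span_years):
    """Compound annual growth rate (%) of yearly publication counts over a
    span of `span_years` calendar years, i.e. span_years - 1 yearly steps."""
    return ((last_count / first_count) ** (1.0 / (span_years - 1)) - 1.0) * 100.0
```

With hypothetical counts of 2 articles in the first year and 9 in the last over a 12-year span, the formula yields roughly 14.65%.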
4.2 Year-Wise Trend of Publication Figure 2 demonstrates the chronological progression of the research. In the earlier years, the publishing rate is lower than in the years after 2015. Increasing emphasis on preparedness and mitigation strategies [29] and the emergence of more effective methodologies, such as modified metaheuristics [4], dynamic programming [16], and experimental analysis [21] may have led to a rise in publications after 2015. For a better understanding, the period of 2011–2022 is divided into three time frames (TF) as follows: a. TF1 (2011–2014) (Blue markers on the plot) (6 Articles): As indicated by the number of publications, studies on route optimization have been trending down during TF1. According to [30], effective solutions were generated by heuristics and metaheuristics. Compared to conventional techniques, the use of improved and modified algorithms led to better route selection and shorter time allocations [20]. In other words, researchers believed they had reached a state of technique saturation, justifying the fall in the number of publications, although heuristics are still frequently used in optimization. b. TF2 (2015–2019) (Orange markers on the plot) (21 Articles): The need for predisaster evacuation, in contrast to post-disaster evacuation, as highlighted by
Fig. 2 Annual publication trend
[29], prompted scholars to consider planning and strategy formation as important factors of disaster management. In TF2, complex problems under insecure and uncertain conditions [31] and en-route coordination among the various stakeholders [32] piqued researchers' interest in this area. This conduced to the development of more robust approaches for strategic and operational decision-making [33], which supports the growth in TF2 publications. c. TF3 (2020–2022) (Green markers on the plot) (22 Articles): The COVID-19 pandemic shook the world [34], exposing the inadequacy of healthcare systems, shutdowns, and economic crises. The focus of researchers shifted to finding quicker and more precise solutions to the problems without violating real-time constraints [18]. As new data technologies became available, it became feasible to evaluate the consequences of planning and operations [28] by integrating data on road network conditions in the impacted region. In recent years, the restoration of roads to offer faster access to critical nodes [13] and the application of IoT technologies for disaster-scenario adaptation [22] have played a significant role in the increased interest of researchers in this area. This lends credence to the idea that route optimization in humanitarian logistics may benefit from a review that examines the relevant literature and summarizes its most important findings.
4.3 Landscape Distribution of Literature The frequency of publishing articles by the countries worldwide is shown in Fig. 3, the filled world map produced using the Power BI tool. Publications in a particular region are depicted by the size of the respective colored bubble; larger bubbles indicate more publications from that location, while smaller bubbles indicate fewer. In the context of continent-specific publications, North America and Asia have dominated during the last decade, followed by Europe and Australia. Countrywise, the publications in the USA top the list with a count of 23. China (19), India (16), and Iran (12) have also made significant contributions to the existing body of literature. Other countries that have helped with the development of route optimization in humanitarian logistics include Turkey (9), France (8), and Indonesia (7). Figure 4 indicates the total number of citations and average article citations from a particular country. The higher this number is, the more ground-breaking work has been done in that country. Despite the observation in Fig. 3 that the USA has published more papers in the area of research, Turkey has the highest total citations (358). Also, Turkey has a considerably higher average article citation (71) among all countries. In contrast, the USA, India, Iran, Spain, and France all have nearly similar average article citations, suggesting that research growth is analogous across all these countries. Africa and several regions of Asia are likewise conspicuously under-represented.
Fig. 3 Geographical distribution of literature
Fig. 4 Total citation and average article citation
4.4 Keywords Analysis The keyword analysis for the selected review papers is done using VOS viewer software, and the result obtained is shown in Fig. 5. This analysis aims to determine if there is any connection between the various concepts and topics associated with route optimization in humanitarian operations through keywords. Only 18 keywords out of the most frequently occurring words were discovered to appear more than 5 times across all of the articles. Total co-occurrence strength, i.e., the strength of links among other keywords, is computed for each of the 18 keywords, and those with the highest total link strength are selected for further analysis. Applying the mentioned steps, the keyword ‘vehicle routing’ occurred 19 times, with a maximum total link strength of 57. With a total link strength of 56, ‘humanitarian logistics’ has the highest frequency among all keywords, occurring 20 times. Several other terms, like vehicles, disasters, optimization, and heuristics algorithms, have
Fig. 5 Keyword co-occurrence network
appeared with a substantial number of occurrences and a moderate total link strength. The analysis divides the keywords into two clusters, each containing nine terms. Cluster 1 (Scope and Phases of Disaster Management) comprises disaster-related terminology, including disaster response, disaster management, disaster prevention, disaster relief, decision-making, and more. Cluster 2 (Real-Time Route Optimization Models) includes keywords related to algorithms and approaches applied in this field, such as genetic algorithms, heuristic algorithms, optimization, vehicle routing, and others. The generated keyword occurrence network provides a snapshot of the connections between the two clusters.
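The occurrence counts and total link strengths discussed above can be computed directly from per-document keyword lists. The sketch below is a generic illustration of the VOSviewer-style calculation (a link's weight is the number of documents in which the two keywords co-occur; a keyword's total link strength sums the weights of all its links), not the tool's implementation, and the sample documents are hypothetical.

```python
from collections import Counter
from itertools import combinations

def link_strengths(doc_keywords):
    """Pairwise keyword co-occurrence counts and each keyword's total
    link strength, computed from per-document keyword collections."""
    links = Counter()
    for kws in doc_keywords:
        for a, b in combinations(sorted(set(kws)), 2):
            links[(a, b)] += 1            # one co-occurrence per document
    total = Counter()
    for (a, b), w in links.items():       # a keyword's total link strength
        total[a] += w                     # is the sum of its link weights
        total[b] += w
    return links, total
```

For example, if 'vehicle routing' co-occurs with 'humanitarian logistics' in two documents and with 'heuristics' in two others, its total link strength is 4.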
4.5 Journal-Wise Distribution of Literature

Power BI is used to generate a stepped bar graph to visualize the distribution of publications across journals. Each step of the graph's bars denotes the number of journal articles published in a particular year. Various journals in operations research, management, optimization, and supply chain have published work on humanitarian logistics integrated with route optimization. Among these, the Annals of Operations Research, the European Journal of Operational Research, and the International Transactions in Operational Research have published the most articles in recent years. All of these journals are renowned in the discipline of operations research. Figure 6 displays the journals that have published papers in this area between 2011 and 2022. As previously described in the literature, vehicle routing and transportation are essential when dealing with emergency logistics. Transportation Research Part E: Logistics and Transportation Review and Transportation Science have also published articles highlighting work in this sector. As computer technologies and methods in various programming languages have advanced, journals such as the Applied Soft Computing Journal, Computers and Industrial Engineering, and others have contributed to this field.
5 Results and Discussions

This section discusses the assertions made from the findings and thorough analysis. Route optimization is an operational and executional decision-making process that involves route planning, vehicle scheduling, flow simulations, and real-time flow optimization modeling. Based on the literature, the common objectives of any route optimization problem can be assessed as minimization of travel time, traveling cost, route length, vehicle fleet size, and the penalty associated with unfulfilled demand. One of the most important goals is to cover the maximum possible demand. Sometimes these objectives must be fulfilled despite uncertainties, such as a facility breakdown, a blocked road, unexpected delays, or the failure of distribution networks. Now that we live
Fig. 6 Journal-wise trend of publication
in the information era, we may emphasize a decision support system to facilitate more reliable strategic planning and more effective handling of risks. Decision-making in route optimization may be carried out in either a proactive way or a reactive way. While the reactive strategy focuses on loss management and damage mitigation, the proactive strategy seeks to avoid disasters before they happen by assessing risk factors. Most rural regions and urban centers alike have created a reliable road network. Although its failure is inevitable amid calamities, the network’s infeasibility can be detected, and fixes or enhancements can be recommended. The aftermath of a disaster creates demand nodes depending on population density and current conditions. These demand nodes are prioritized based on either time or location. The clustering of demand nodes can enable operation planning and execution. Coordination is required since many relief agencies work together to provide humanitarian assistance to needy individuals. As a result, to facilitate last-mile distribution, organizations must address operational interdependencies and strive to collaborate to accomplish the common purpose. For more effective relief operations, hybrid systems can be designed for collaborative vehicle scheduling and multi-crew assignment. In certain circumstances, randomness can be incorporated as a variable in the formulation, and simulations can be performed to check the applicability of models. Since sustainability is a primary concern for many organizations worldwide, route optimization can be enhanced by giving special attention to incorporating sustainable practices.
6 Conclusions

This study presents the first expository analysis of route optimization in humanitarian logistics to assist learners in understanding the existing research. Forty-nine Scopus research papers were reviewed to address the research questions discussed in the Introduction. Concerning research question RQ1, the trend of yearly publications initially stagnated; however, with the emergence of new technologies and the recognition of uncertainties in planning, blended with interdependencies in operations, researchers' interests evolved, leading to major contributions in the last three years. For RQ2, the USA, China, and India have done much research; however, Turkey is producing the finest research on this topic, evidently having the highest total and average citations worldwide. A need for contributions from Africa is also observed. Considering RQ3, the terms 'vehicle routing' and 'humanitarian logistics' appear more often than any other keywords, exhibiting reasonably strong link strength. Observing the publications across journals for RQ4, the most effective content is concentrated in the Annals of Operations Research, the European Journal of Operational Research, and the International Transactions in Operational Research. The inherent limitation of this research is that only one leading database, Scopus, was selected for the review; other databases, such as Web of Science and ScienceDirect, can be considered for a more comprehensive review. Also, only journal articles are included, excluding conference papers, books, reports, and other document types. Studies on facility location, warehouse location, and their integration with route optimization can be considered in further work.
S. Jain et al.
Parametric Optimization of Rotor-Bearing System with Recent Artificial Rabbits Optimization and Dynamic Arithmetic Optimization Algorithm

Pravajyoti Patra, Debivarati Sarangi, Arati Rath, and Dilip Kumar Bagal

Abstract In this article, the dynamic behaviour of a rotor-bearing system is evaluated in a particular environment, and the newest optimization techniques, i.e. artificial rabbits optimization (ARO) and the dynamic arithmetic optimization algorithm (DAOA), are employed to reduce the system's vibration amplitude. The design and refinement of system components are accompanied by errors that arise during calculations, modelling, measurements and engineering approximations. A nonlinear system such as a rotor-bearing system exhibits diverse dynamical behaviour, which is responsible for irregular and unexpected consequences that might lead to chaos. These systems are particularly sensitive to input parameters such as speed, radial internal clearance, rotor-bearing defects, rotor unbalance, misalignment, over-loading of the system, etc. In the present study, the system's sensitivity was considered in terms of the root-mean-square (RMS) value of the vibration amplitude for a healthy bearing. Trend analysis is carried out with an exponential approach using Tableau 19 software. A Taguchi L27 orthogonal array is utilised to optimise the output response using Minitab 20 software, and ultimately modern meta-heuristic algorithms are used to find the optimum variable in single-objective optimization using MATLAB 2021b software. It is found that the influence of an unbalanced rotor state is inescapable, which makes the system response unpredictable in real-world situations.

Keywords ARO · DAOA · Rotor bearing · Taguchi
P. Patra · D. K. Bagal (B) Department of Mechanical Engineering, GCE, Kalahandi, Odisha 766002, India e-mail: [email protected] D. Sarangi World Skill Center, Bhubaneswar, Odisha 751010, India A. Rath School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar, Odisha, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 B. B. V. L. Deepak et al. (eds.), Intelligent Manufacturing Systems in Industry 4.0, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-99-1665-8_55
1 Introduction

A bearing has two basic roles in machinery: transferring force and permitting relative motion among machine parts. Based on the contact mechanism, bearings may be classified into two categories: contact-type and non-contact-type bearings. Among the numerous bearing types, rolling element bearings are utilised in many applications, ranging from heavy-load and high-temperature settings to dusty environments and various essential life-saving applications. Health evaluation of the rotor-bearing system utilising vibration signals is becoming more significant as the demand for running precision grows. Increasing attention is being paid to rolling element bearings, not only as structural components but also as sources of vibration. The vibrations are created by the rotation of a limited number of loaded rolling contacts between the rolling elements and the races [1]. An NU-205 cylindrical roller bearing is considered here to study the nonlinear behaviour of the vibration response. Li et al. proposed a general five-degrees-of-freedom dynamic model for investigating the vibration characteristics of a flexible rotor-bearing system, based on nonlinear elastic Hertz contact theory and Timoshenko beam theory, employing the differential quadrature finite element and improved Newton–Raphson methods. The flexible rotor-bearing system is made up of a stepped shaft and an angular contact ball bearing (ACBB). First, the developed model is validated against the available literature and the finite element software ANSYS 18.2. Then, the dynamic behaviours of the ACBB are explored systematically to provide a reference for research on the vibration behaviour of flexible rotor-bearing systems. Finally, the vibration characteristics of a flexible rotor-bearing system exposed to external combined loads are explored from various angles.
The findings demonstrate that the preload, external force, radial force and torque have a considerable effect on the vibration behaviour of the flexible rotor-bearing system [2]. Sayed et al. examined the stability and bifurcations of an elastic rotor-bearing model. The Hopf bifurcation analysis and limit cycle continuation are examined utilising the suggested approach for obtaining the optimum bearing forces for an ungrooved bearing with L/D = 0.5 [3]. Sakly et al. explored the control of vibration of a bi-disc rotor-bearing system utilising electro-rheological elastomer (ERE) rings placed in the bearings. The bi-disc rotor is designed so that the vibration response may be examined over a rotational speed range which encompasses the first two critical speeds. The rotor-bearing system is simulated using the finite element approach, taking into consideration the gyroscopic effect of the rotor and the internal damping of the shaft. It has been shown that the rotor steady-state vibration response may be lowered over other rotational speed ranges when an electric field is applied [4]. For a loaded rotor-bearing system, Flowers and Fangsheng [5] compared experimental and simulated data. In a balanced rotor-bearing system, Tiwari [6] examined the nonlinear behaviour of deep groove ball bearings for various clearance groups and loading scenarios. By altering both speed and radial internal
clearance, Cui et al. [7] studied the dynamic nature of a roller bearing system, taking the impact of coupling into account. Chikelu et al. presented a modelling and simulation approach, as a failure-prevention and design-optimization tool, for determining the dynamic optimal performance of a rotor-bearing system with respect to stability over a specified speed range, through selection of an appropriate bearing damping coefficient at the conceptual design stage. Varying values of the damping coefficient for the system model were simulated, and the results demonstrated the underdamped, critically damped, and overdamped operating states of the system. The study demonstrated that suitable design and selection of the bearing damping coefficient are vital for lowering the amplitude of vibration, which might otherwise result in mechanical failure of any designed shredder rotor-bearing system in operation [8]. In order to improve performance, Patra et al. made an attempt to assess the relevance of employing the response surface approach to diagnose the bearing health state and the complexity caused by several parametric influences, such as radial internal clearance, speed and radial load [9]. Mlaouhi et al. focused on the parameter optimization of rotor-bearing systems. The vibration-level attenuation issue in rotor dynamics was formulated as an optimization problem. The disparity between the displacements in the bearings and the target displacements generated from the known starting model is regarded as the objective function; thus, the intention is to lower the vibration amplitude at critical speeds. The constraint function is created based on the stability performance of the rotor-bearing system. The considered design factors are dynamic characteristics of the bearings, such as the stiffness and damping coefficients.
The suggested adaptive differential evolution (ADE) algorithm introduces a novel adaptive mutation operator, controlled by two monotonic functions of the iteration counter, which increases the mobility of individuals and is applied to avoid rapid convergence to surrounding optima. A rotor finite element model of a low-pressure gas turbine supported by three ball bearings is explored as a numerical example to assess the efficiency of the envisaged ADE method. The findings validate the performance of the recommended approach in terms of solution accuracy and convergence rate compared to conventional DE [10]. In order to understand the system's dynamic character, Xu et al. [11] considered the combined impact of speed and radial internal clearance. Sunnersjö found that the variable compliance effect and unbalance are the causes of the nonlinear complicated response of a healthy bearing, which brings about dynamic behaviour such as period-doubling and intermittency in the system [12]. In order to demonstrate the zone of chaos and instability in a balanced rotor-bearing system, Harsha [13, 14] took into consideration the influence of geometrical imperfections such as radial internal clearance and waviness with speed changes. Jiang researched the dynamic analysis and parameter optimisation of rotor-bearing systems, which gives a basis for the design of high-speed rotating machines. A finite element analysis (FEA) model of a flexible rotor-bearing system was
constructed utilising FEA software. Both the stability and critical-response difficulties were overcome. Based on the APDL language and the particle swarm method, multi-objective optimisation applications were constructed. All the bearing specifications and the journal diameter were tuned. Considering the results of both stability optimization and critical-response optimization, it is proposed that the stability optimization results should be adopted during high-speed operation of the rotor, and the bearing parameters should be adjusted according to the critical-response optimization results [15].
2 Proposed Optimization Algorithm

2.1 Artificial Rabbits Optimization

Artificial Rabbits Optimization (ARO) utilises the foraging and hiding strategies of real rabbits, as well as their shrinking energy, which drives the transition between the two strategies. It is a contemporary bio-inspired meta-heuristic optimization approach for tackling engineering challenges. The pseudocode of the algorithm is presented below.

Pseudocode of the ARO algorithm:

Randomly initialise a set of rabbits x_i (solutions), evaluate their fitness Fit_i, and let x_best be the best solution found so far
While the stop criterion is not satisfied do
  For each individual x_i do
    Calculate the energy factor A using A(t) = 4(1 − t/T) ln(1/r1)
    If A > 1 then (detour foraging)
      Choose a rabbit x_j (j ≠ i) randomly from the other individuals
      Calculate R = L · c, where
        L = (e − e^(((t−1)/T)^2)) · sin(2π r2)
        c(k) = 1 if k = g(l), 0 otherwise, for k = 1, …, d and l = 1, …, ⌈r3 · d⌉, with g = randperm(d)
      Perform detour foraging using
        v_i(t+1) = x_j(t) + R · (x_i(t) − x_j(t)) + round(0.5 · (0.05 + r1)) · n_1, where n_1 ~ N(0, 1) and i, j = 1, …, n with j ≠ i
    Else (random hiding)
      Generate d burrows and randomly pick one as the hiding place using b_{i,r}(t) = x_i(t) + H · g_r · x_i(t)
      Perform random hiding using v_i(t+1) = x_i(t) + R · (r4 · b_{i,r}(t) − x_i(t)), i = 1, …, n
    End If
    Calculate the fitness Fit_i and update the position greedily:
      x_i(t+1) = x_i(t) if f(x_i(t)) ≤ f(v_i(t+1)), and x_i(t+1) = v_i(t+1) otherwise
    Update the best solution found so far, x_best
  End For
End While
Return x_best [16]
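The pseudocode above can be condensed into a runnable sketch. This is an illustrative Python re-implementation (the paper itself used MATLAB 2021b): the dimension-selection mask c is collapsed into a full-dimensional move and the hiding-parameter schedule is an assumption, so it mirrors the structure of ARO rather than reproducing [16] exactly.

```python
import math
import random

def aro(f, dim, bounds, n=30, T=200, seed=1):
    """Sketch of ARO: detour foraging / random hiding with greedy selection."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    F = [f(x) for x in X]
    for t in range(1, T + 1):
        for i in range(n):
            r1, r2 = rng.random(), rng.random()
            A = 4 * (1 - t / T) * math.log(1 / max(r1, 1e-12))   # energy factor A(t)
            L = (math.e - math.exp(((t - 1) / T) ** 2)) * math.sin(2 * math.pi * r2)
            if A > 1:  # detour foraging (exploration) around another rabbit
                j = rng.randrange(n)
                if j == i:
                    j = (j + 1) % n
                v = [X[j][d] + L * (X[i][d] - X[j][d])
                     + round(0.5 * (0.05 + r1)) * rng.gauss(0, 1)
                     for d in range(dim)]
            else:      # random hiding (exploitation) in a burrow near itself
                H = (T - t + 1) / T * rng.gauss(0, 1)            # assumed schedule
                b = [X[i][d] + H * rng.random() * X[i][d] for d in range(dim)]
                v = [X[i][d] + L * (rng.random() * b[d] - X[i][d]) for d in range(dim)]
            v = [min(max(x, lo), hi) for x in v]                  # clamp to bounds
            fv = f(v)
            if fv < F[i]:                                         # greedy selection
                X[i], F[i] = v, fv
    best = min(range(n), key=F.__getitem__)
    return X[best], F[best]

sphere = lambda x: sum(c * c for c in x)
x_best, f_best = aro(sphere, dim=2, bounds=(-5.0, 5.0))
```

With greedy selection the best fitness is monotone non-increasing, so the returned value can never be worse than the best rabbit of the initial population.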
2.2 Dynamic Arithmetic Optimization Algorithm

Initialise the algorithm parameters α and μ
Create random values for the initial positions
While (Iter < maximum number of iterations Iter_max) do
  Evaluate the fitness values of the given solutions
  Find the best solution best(x_j)
  Update the DAF value using DAF = (Iter_max / Iter)^α
  Update the DCS value using DCS(Iter) = 1 − Iter / Iter_max
  For i = 1 to number of solutions do
    For j = 1 to number of positions do
      Generate random values r1, r2, r3 between 0 and 1
      If r1 < DAF then (division/multiplication operators)
        x_{i,j}(Iter + 1) = best(x_j) ÷ (DCS + ε) × ((UB_j − LB_j) × μ + LB_j) if r2 < 0.5
        x_{i,j}(Iter + 1) = best(x_j) × DCS × ((UB_j − LB_j) × μ + LB_j) otherwise
      Else (subtraction/addition operators)
        x_{i,j}(Iter + 1) = best(x_j) − DCS × ((UB_j − LB_j) × μ + LB_j) if r3 < 0.5
        x_{i,j}(Iter + 1) = best(x_j) + DCS × ((UB_j − LB_j) × μ + LB_j) otherwise
      End If
    End For
  End For
  Iter = Iter + 1
End While
Return the best solution [17]
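A runnable sketch of the update rules follows (Python standing in for the MATLAB implementation). The exact DAF schedule of [17] is not reproduced here; the linearly decaying switch factor below is an assumption, so the sketch illustrates the four arithmetic operators applied around the best solution rather than the exact algorithm.

```python
import random

def daoa_sketch(f, dim, lb, ub, n=20, iters=200, mu=0.499, seed=3):
    """Sketch of the DAOA update operators (/, *, -, + around the best solution).
    The phase switch uses a linearly decaying factor as a stand-in for DAF."""
    rng = random.Random(seed)
    eps = 1e-9
    X = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n)]
    best = min(X, key=f)
    f_best = f(best)
    for it in range(1, iters + 1):
        dcs = 1.0 - it / iters            # decaying candidate-solution factor
        daf = 1.0 - it / iters            # assumed decaying switch factor
        scale = (ub - lb) * mu + lb       # AOA-style search-range term
        for i in range(n):
            r1, r2, r3 = rng.random(), rng.random(), rng.random()
            x = []
            for j in range(dim):
                if r1 < daf:              # division / multiplication operators
                    v = best[j] / (dcs + eps) * scale if r2 < 0.5 else best[j] * dcs * scale
                else:                     # subtraction / addition operators
                    v = best[j] - dcs * scale if r3 < 0.5 else best[j] + dcs * scale
                x.append(min(max(v, lb), ub))
            X[i] = x
            fx = f(x)
            if fx < f_best:               # keep track of the best-so-far solution
                best, f_best = x, fx
    return best, f_best

sphere = lambda x: sum(c * c for c in x)
b, fb = daoa_sketch(sphere, dim=2, lb=-10.0, ub=10.0)
```

Because the best-so-far solution is tracked explicitly, lengthening the run can only improve (or keep) the returned fitness.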
3 Results and Discussion

Table 1 shows the experimental design of the simulation with the L27 Taguchi approach. The output response is chosen under the minimization criterion, and Fig. 1 shows the trend of all input variables which affect the RMS output. From this figure, it can be seen that only the load variable affects the output response, in increasing order. Here an exponential trend-line analysis was carried out, and the fitted exponential equation is shown in Eq. (1). From the analysis, it is concluded that the load factor affects the output the most, because its P-value and R-square value are stronger than those of the other three inputs. Similarly, Fig. 2 shows the S/N ratio plot under the minimization criterion for the RMS response. The optimal setting is found to be A1B1C3D1, with the lowest RMS output value of 0.0884556. Furthermore, the output with the load variable is optimised through the two recent optimization techniques, and the lowest value of 0.10114 is found at a load of 1000 N (Figs. 3 and 4).
Table 1 L27 Taguchi orthogonal array with rotor-bearing analysis

Speed (rpm)  Load (N)  RIC (µm)  Unbalance (N)  RMS
3000         1000      30        10             0.1145
3000         1000      30        10             0.0809
3000         1000      30        10             0.0818
3000         3000      45        30             0.322
3000         3000      45        30             0.2277
3000         3000      45        30             0.2303
3000         5000      60        50             0.5098
3000         5000      60        50             0.3605
3000         5000      60        50             0.3646
5000         1000      45        50             0.1177
5000         1000      45        50             0.0835
5000         1000      45        50             0.0846
5000         3000      60        10             0.3184
5000         3000      60        10             0.2255
5000         3000      60        10             0.228
5000         5000      30        30             0.5134
5000         5000      30        30             0.3639
5000         5000      30        30             0.3681
7000         1000      60        30             0.1137
7000         1000      60        30             0.0807
7000         1000      60        30             0.0817
7000         3000      30        50             0.3275
7000         3000      30        50             0.2328
7000         3000      30        50             0.2355
7000         5000      45        10             0.5111
7000         5000      45        10             0.3633
7000         5000      45        10             0.3675
RMS = 0.0696861 × EXP(0.000372466 × LOAD (N))    (1)

R-square value = 0.894712
P-value < 0.0001
Optimal setting: 3000 rpm, 1000 N, 60 µm, 10 N; RMS = 0.0884556
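Equation (1) can be checked numerically. The short sketch below evaluates the fitted trend at the three load levels of Table 1; the helper name is ours, and only the two fitted constants come from Eq. (1).

```python
import math

def rms_model(load_n):
    """Exponential trend of Eq. (1): predicted RMS as a function of load (N)."""
    return 0.0696861 * math.exp(0.000372466 * load_n)

predictions = {load: round(rms_model(load), 4) for load in (1000, 3000, 5000)}
```

At the lowest load level of 1000 N the model predicts an RMS of about 0.1011, which is consistent with the best fitness of 0.10114 reported for the meta-heuristic runs.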
Fig. 1 Trend analysis of RMS output
Fig. 2 Signal-to-noise ratio plot of the RMS response through the Taguchi approach
SN = 20.6880
The best fitness of F1 is 0.10114, and the best solution is a load of 1000 N.
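The S/N values in Fig. 2 follow Taguchi's smaller-the-better definition, SN = −10·log10((1/n)Σy²). As an illustration, the ratio for the three replicates of the first treatment in Table 1 (3000 rpm, 1000 N, 30 µm, 10 N) can be computed as:

```python
import math

def sn_smaller_better(ys):
    """Taguchi smaller-the-better S/N ratio: SN = -10*log10(mean(y^2))."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

sn_first_run = sn_smaller_better([0.1145, 0.0809, 0.0818])
```

Larger S/N values correspond to smaller vibration amplitudes, which is why the optimal factor levels in Fig. 2 are read off at the peaks of each factor's S/N curve.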
4 Conclusions

From the full experimentation, the trend analysis shows that load is the essential parameter impacting the output RMS value. The parametric optimization was then carried out using contemporary meta-heuristic optimization approaches,
Fig. 3 ARO result of minimization in RMS response
Fig. 4 DAOA result of minimization in RMS response
and the lowest value of load indicates the best value of the output response. Both approaches offer superior results for improving the rotor-bearing analysis. Furthermore, this sort of technique may be employed in designing any machine component or other engineering applications using the Taguchi design of experiments.
References

1. Wei Y, Li Y, Xu M, Huang W (2019) A review of early fault diagnosis approaches and their applications in rotating machinery. Entropy 21(4):409
2. Li Z, Wang Q, Qin B, Shao W (2022) Vibration characteristic analysis of flexible rotor-bearing system subjected to external combined loads. Eur J Mech A/Solids 104688
3. Sayed H, El-Sayed TA (2022) A novel method to evaluate the journal bearing forces with application to flexible rotor model. Tribol Int 173:107593
4. Sakly F, Chouchane M (2022) Vibration control of a bi-disk rotor using electro-rheological elastomers. Smart Mater Struct 31(6):065009
5. Flowers GT, Fangsheng WU (1996) Disk/shaft vibration induced by bearing clearance effects: analysis and experiment. J Vib Acoust 118
6. Tiwari M, Gupta K, Prakash O (2000) Effect of radial internal clearance of a ball bearing on the dynamics of a balanced. J Sound Vib 238:723–756
7. Cui L, Changli L, Jianrong Z (2009) Study on nonlinear vibration and dynamic characteristics of rigid. IDETCCIE 86:1–7
8. Chikelu P, Nwigbo S, Azaka O, Olisakwe H, Chinweze A (2022) Modeling and simulation study for failure prevention of shredder rotor bearing system used for synthetic elastic material applications. J Fail Anal Prev 1–12
9. Patra P, Huzur VS, Harsha SP (2020) Vibration response analysis of high-speed cylindrical roller bearings using response surface method. J Multibody Dyn 234(2):379–392
10. Mlaouhi I, Guedria NB, Bouraoui C (2021) An adaptive differential evolution algorithm for vibration level reduction in rotordynamics. In: International conference on advances in materials, mechanics and manufacturing. Springer, Cham, pp 103–113
11. Xu LX, Yang YH, Li YG (2012) Modeling and analysis of planar multibody systems containing deep groove ball bearing with clearance. Mech Mach Theory 56:69–88
12. Sunnersjö CS (1978) Varying compliance vibrations of rolling bearings. J Sound Vib 58:363–373
13. Harsha SP (2006) Nonlinear dynamic analysis of a high-speed rotor supported by rolling element bearings. J Sound Vib 290:65–100
14. Harsha SP (2006) Nonlinear dynamic response of a balanced rotor supported by rolling element bearings due to radial internal clearance effect. Mech Mach Theory 41:688–706
15. Jiang L (2021) Finite element analysis and multi-objective optimization of flexible rotor-bearing system. Atomic Energy Sci Technol 55(zengkan2):327
16. Wang L, Cao Q, Zhang Z, Mirjalili S, Zhao W. Artificial rabbits optimization: a new bio-inspired meta-heuristic algorithm for solving engineering optimization problems
17. Khodadadi N, Snasel V, Mirjalili S. Dynamic arithmetic optimization algorithm for truss optimization under natural frequency constraints
Influence of Various Operating Characteristics on the Biodiesel Preparation from Raw Mesua Ferrea Oil D. Chandravathi, S. B. Padal, and J. Prakasa Rao
Abstract Researchers have discovered several alternatives to diesel and have promoted the use of vegetable oils in diesel vehicles. However, the high viscosity of pure vegetable oils limits their usefulness as a substitute fuel. Therefore, the aim of this study is to develop a process for the production of biodiesel from crude Mesua ferrea oil by transesterification and to investigate the effects of various operating characteristics, in particular the catalyst content, the methanol-to-oil molar ratio, and the reaction temperature. The tests were performed with four different catalysts, namely NaOH, NaOCH3, KOH, and KOCH3. The catalyst concentration was varied as 1%, 1.5%, and 1.75%, while the molar ratios were 6:1, 8:1, and 10:1. Finally, reaction temperatures of 50, 60 and 70 °C were considered. The best biodiesel conversion efficiency is found at an 8:1 molar ratio, a 1.5% catalyst proportion, and a 60 °C reaction temperature, at which the individual conversion efficiencies of Mesua ferrea oil biodiesel (MOBD) were 98.8%, 98.6%, and 98.1%, respectively. Also, the KOCH3 catalyst showed a better biodiesel yield than the other catalysts. Keywords Mesua ferrea · Biodiesel · Conversion efficiency · Catalyst · Temperature · Molar ratio
1 Introduction

The depletion of natural resources and of non-renewable energy sources are the two most urgent problems facing the world today. Petroleum compounds are very useful, but their production and widespread use are depleting the earth's resources. Research into alternative energy sources is absolutely necessary to solve the environmental and global petroleum problems, given the current state of the energy industry. Sustainable fuels are important forms of energy to reduce emissions, improve air quality, reduce dependence on oil from other countries,

D. Chandravathi (B) · S. B. Padal · J. Prakasa Rao
Department of Botany, Andhra University College of Science and Technology, Visakhapatnam, Andhra Pradesh 530 003, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
B. B. V. L. Deepak et al. (eds.), Intelligent Manufacturing Systems in Industry 4.0, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-99-1665-8_56
and open new avenues for economic growth. Biodiesels derived from vegetable oils are considered to have the greatest potential for replacing fossil liquid fuels, and they are most readily used in diesel engines. Lower sulfur and aromatic content, as well as a higher flash point and better lubricity, are some of the advantages that vegetable oils offer when used as diesel fuel. Their disadvantages include very high viscosity, a higher pour point, a lower cetane number, a lower heating value, and lower volatility. In addition, their viscosity increases with decreasing temperature [1–3]. Their main problem is their much higher viscosity. The free fatty acid (FFA) content of vegetable oils has a significant effect on the methyl ester conversion process, and oils with a high FFA content are difficult to convert to biodiesel. The conversion of FFAs to soaps results in a reduction in the amount of ester produced, as less glycerol is separated from the oil [4, 5]. Transesterification of oils with alcohol is a viable option for the synthesis of biodiesel that can be carried out using conventional methods. In transesterification, methanol is usually preferred over ethanol because of its cost advantage [6, 7]. Pamilia et al. [8] investigated the influence of NaOH and MgO catalysts in the production of beef tallow biodiesel. They found that the optimum molar ratio for a higher biodiesel yield was 6:1. Eman et al. [9] conducted an experiment to evaluate palm oil methyl ester production at different operating parameters. The best yield of 88% was observed at 60 °C after 60 min. The fuel properties of the methyl ester were evaluated and found to meet ASTM requirements. Antony et al. [10] investigated the possibility of producing and characterizing jatropha-based biodiesel. It was found that the better yield was obtained with a combination of 6:1, 60 min, 0.92% and 60 °C.
According to a summary of research conducted to date, the influence of base catalysts in biodiesel production with Mesua ferrea oil has not been studied. With this in mind, the purpose of this research is to investigate the production of biodiesel from Mesua ferrea oil, a potential source of non-edible materials, under various operating conditions to evaluate the yield (methyl ester conversion) and physicochemical properties of the product.
2 Materials and Methods

2.1 Collection of Materials, Oil Extraction, and Chemicals

The Mesua ferrea tree, commonly known as Ceylon ironwood, belongs to the Calophyllaceae family. The oil is contained in the seed kernel, and each seed can produce one to four kernels. After drying in an oven at a temperature of 60 °C, the seeds were crushed using a seed-crushing mill, and the oil was extracted. The oil yield is estimated to be in the range of 60–63% by weight, and the oil is reddish-brown in colour. The Mesua ferrea seeds are depicted in Fig. 1.
Fig. 1 Mesua ferrea seeds
The purity of the substances used in this study was 99.99%. The chemicals used were bleaching powder, 0.1 N NaOH solution, methanol, isopropyl alcohol, NaOH, KOH, NaOCH3, KOCH3, phosphoric acid, phenolphthalein indicator, etc.
2.2 Free Fatty Acid (FFA) Content of Mesua Ferrea Oil
The FFA content of the oil was determined by the titration method. A 10 g sample of raw Mesua ferrea oil was taken in a container with 50 ml of isopropyl alcohol, and 3–5 drops of phenolphthalein indicator were added to the oil–alcohol mixture. This solution was titrated against 0.1 N NaOH solution, and the burette rundown was recorded. The FFA of the oil was then determined as

FFA (%) = (282 × NaOH normality × sample rundown in burette) / (10 × weight of Mesua ferrea oil sample)
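The titration formula above can be sketched as a small function (an illustration only, not the authors' code; the 21.3 ml burette rundown is a hypothetical value chosen to reproduce the reported ~6% FFA):

```python
def ffa_percent(naoh_normality, burette_rundown_ml, oil_sample_weight_g):
    """Free fatty acid content (%) of an oil by NaOH titration.

    282 is the molar mass (g/mol) of the reference fatty acid (oleic acid);
    the factor 10 converts the result to a percentage for these units.
    """
    return (282.0 * naoh_normality * burette_rundown_ml) / (10.0 * oil_sample_weight_g)

# A 10 g oil sample titrated with 0.1 N NaOH, as in the text; a rundown of
# about 21.3 ml corresponds to the ~6% FFA reported for the raw oil:
print(round(ffa_percent(0.1, 21.3, 10.0), 1))  # 6.0
```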
According to the results, the FFA concentration of the oil is 6%; the oil therefore has to go through a degumming process. Some FFA is likely to remain after degumming, but all the gums in the oil are removed. Initially, the raw Mesua ferrea oil was diluted with a mixture of 0.01% H2O and 0.1% H3PO4. It was then heated on an electric heater with stirring for about 40 min while the temperature was held constant at 60 °C. After heating, the mixture was allowed to cool for 24 h, and the gums that had settled to the bottom were removed. The degummed oil was then bleached to remove the remaining impurities. Bleaching powder amounting to 1–3% of the weight of the oil was added during heating, and the reaction was sustained for 35 min as the temperature was raised to 125 °C. The oil was then allowed to cool before being filtered through filter paper to remove all impurities. After bleaching, the oil was subjected to deodorization, a technique that requires both high vacuum and high temperature. Using nitrogen as the carrier gas, the process was maintained at 250–270 °C and a vacuum pressure of 760 mm Hg, which removed fatty acid content from the oil. The final FFA of the oil was lowered to approximately 2%.
2.3 Transesterification Process
In the current study, biodiesel was produced by transesterification, which reduces the viscosity of the raw Mesua ferrea oil. A schematic view of the Mesua ferrea oil biodiesel preparation process is shown in Fig. 2. First, 250 ml of crude deodorized Mesua ferrea oil was placed in a beaker and heated to 50 °C. Different proportions of a methanol (alcohol) and catalyst (NaOH) mixture were then added to the heated oil, and the condenser assembly was positioned so as to be cooled by running water. A magnetic pellet was used to stir the solution, and the rotation speed was kept at 900 rpm during mixing. In the analysis, different catalysts based on
Fig. 2 Transesterification process
diverse solubility were used, with methanol/oil molar ratios from 4:1 to 10:1, catalyst amounts from 1 to 2%, and reaction temperatures in the range of 60–70 °C for 60 min. After the reaction was complete, the product was left in a separating funnel for 10 h. This produced a two-layer separation of glycerol and methyl ester, with the methyl ester occupying the top layer and the glycerol the bottom. The methyl ester was then washed with water at 60 °C to remove any remaining soaps. Finally, the methyl ester was heated to 90 °C to drive off moisture. The properties of the biodiesel were estimated in accordance with ASTM standards [11–17]: ASTM D1298, ASTM D4809, ASTM D976, ASTM D445, ASTM D130, ASTM D97, and ASTM D92 were used to determine specific gravity, heating value, cetane number, viscosity, copper corrosion, cloud point, and flash point, respectively. The properties are presented in Table 1.
Table 1 Properties of MOBD (Mesua ferrea biodiesel, B100, produced with each catalyst)

Fuel property (units)     | Testing method | Range as per ASTM standards | NaOH   | NaOCH3 | KOH    | KOCH3
Specific gravity at 15 °C | ASTM D-1298    | 860–900                     | 0.888  | 0.886  | 0.884  | 0.882
Heating value (kJ/kg)     | ASTM D-4809    | 42                          | 39,223 | 39,317 | 39,338 | 39,364
Cetane number             | ASTM D-976     | 47 (min.)                   | 55     | 55     | 55     | 55
Viscosity at 40 °C (cSt)  | ASTM D-445     | 2.5–6                       | 4.34   | 4.26   | 4.18   | 4.16
Copper corrosion          | ASTM D-130     | 1 (max.)                    | 1a     | 1a     | 1a     | 1a
Cloud point (°C)          | ASTM D-97      | 6 (max.)                    | −3     | −3     | −4     | −4
Flash point (°C)          | ASTM D-92      | 130 (min.)                  | 152    | 150    | 148    | 147
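The pass/fail comparison implied by Table 1 can be sketched programmatically (a minimal illustration, not the authors' analysis; the limits and the KOCH3 column are transcribed from the table):

```python
# ASTM limits quoted in Table 1 for the properties with explicit bounds.
LIMITS = {
    "cetane_number": ("min", 47),
    "viscosity_40C_cSt": ("range", (2.5, 6.0)),
    "flash_point_C": ("min", 130),
    "cloud_point_C": ("max", 6),
}

def within_limit(prop, value):
    """Return True if a measured property value satisfies its ASTM limit."""
    kind, lim = LIMITS[prop]
    if kind == "min":
        return value >= lim
    if kind == "max":
        return value <= lim
    lo, hi = lim
    return lo <= value <= hi

# KOCH3-catalyzed MOBD column from Table 1:
koch3 = {"cetane_number": 55, "viscosity_40C_cSt": 4.16,
         "flash_point_C": 147, "cloud_point_C": -4}
print(all(within_limit(k, v) for k, v in koch3.items()))  # True
```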
3 Results and Discussions
3.1 Methanol to Oil Molar Ratios
Figure 3 shows the variation in MOBD yield with the methanol-to-oil molar ratio. Increasing the ratio up to 6:1 produced an upward trend in the amount of biodiesel obtained. The yield peaked at 8:1, and increasing the molar ratio beyond this point reduced the amount of product; the maximum yield was attained at 8:1 with all the catalysts. When the molar ratio was increased to 10:1, the amount of biodiesel produced decreased, mainly because of the constant presence of alcohol in large amounts and the decrease in the chemical activity of the catalyst; the excess alcohol also hinders the separation of glycerol. The methoxide-based catalysts showed higher MOBD yields than the hydroxide catalysts. At a molar ratio of 8:1, the yields of MOBD for NaOH, NaOCH3, KOH, and KOCH3 were 96.8%, 98.5%, 97.6%, and 98.8%, respectively. Among all the catalysts used, KOCH3 gave the best yield of 98.8%.
3.2 Influence of Catalyst Proportion
The effect of catalyst concentration on MOBD yield is shown in Fig. 4. The catalyst concentration was varied from 1% to 1.75%; beyond 1.5%, the yield of MOBD started to decrease. This was due to lower glycerol deposition caused by emulsification of the methyl ester. According to the results of the current study, a higher catalyst concentration resulted in a lower
Fig. 3 Variation of yield with respect to molar ratio
MOBD conversion rate than the optimum. Moreover, if the catalyst concentration is too low, the transesterification does not go to completion, so the optimum catalyst concentration has to be identified. At a catalyst concentration of 1.5%, the yields of MOBD for NaOH, NaOCH3, KOH, and KOCH3 were 95.7%, 97.9%, 97.1%, and 98.6%, respectively. The KOCH3 catalyst gave the highest yield, 98.6%.
3.3 Reaction Temperature
The variation of MOBD yield with reaction temperature is shown in Fig. 5. Increasing the reaction temperature from 50 °C to 70 °C increased the amount of MOBD produced. The reaction rate improves to some extent with temperature, but a further increase may decrease the yield, so the temperature was restricted to 70 °C. At higher temperatures, emulsification of the biodiesel and excessive loss of methanol reduce the yield: less glycerol is extracted during separation, which in turn lowers the MOBD yield. At a reaction temperature of 70 °C, the yields of MOBD for NaOH, NaOCH3, KOH, and KOCH3 were 94.6%, 97.1%, 96.1%, and 98.1%, respectively; KOCH3 again gave the maximum yield of 98.1%.
Fig. 4 Variation of yield with respect to catalyst
Fig. 5 Variation of yield with respect to reaction temperature
4 Conclusions
The following conclusions can be drawn from the results of the present investigation.
. After analyzing the effects of different catalysts used in the preparation of Mesua ferrea biodiesel (MOBD), it was found that potassium methoxide (KOCH3), followed by sodium methoxide (NaOCH3), gave the best results in terms of yield and physicochemical properties.
. The optimal conditions for biodiesel conversion (98.8% yield) were an 8:1 molar ratio, 1.5% catalyst concentration, and 60 °C. The KOCH3 catalyst provided a higher yield of biodiesel than the other catalysts.
. The biodiesel produced from Mesua ferrea oil met all acceptable parameters specified by the ASTM standards. There was an increase in the cetane number, which should lead to improved combustion in the engine.
Based on these conclusions, the Mesua ferrea biodiesel blend (MOBD20) is well suited for diesel engine applications and the assessment of engine operating characteristics.
References
1. Deepanraj B, Lawrence P, Sivashankar R, Sivasubramanian V (2016) Analysis of pre-heated crude palm oil, palm oil methyl ester and its blends as fuel in a diesel engine. Int J Ambient Energy 37:495–500
2. Jaikumar S, Bhatti SK, Srinivas V (2019) Experimental investigations on performance, combustion, and emission characteristics of Niger (Guizotia abyssinica) seed oil methyl ester blends with diesel at different compression ratios. Arab J Sci Eng 44(6):5263–5273
3. Jaikumar S, Bhatti SK, Srinivas V, Satyameher R, Padal SB, Chandravathi D (2020) Combustion, vibration, and noise characteristics of direct injection VCR diesel engine fuelled with Mesua ferrea oil methyl ester blends. Int J Ambient Energy 1–12
4. Agarwal D, Kumar L, Agarwal AK (2008) Performance evaluation of a vegetable oil fueled compression ignition engine. Renew Energy 33:1147–1156
5. Ali OM, Mamat R, Abdullah NR, Abdullah AA (2016) Analysis of blended fuel properties and engine performance with palm biodiesel-diesel blended fuel. Renew Energy 86:59–67
6. Atabani AE, Silitonga AS, Ong HC, Mahlia TMI, Masjuki HH, Badruddin IA, Fayaz H (2013) Non-edible vegetable oils: a critical evaluation of oil extraction, fatty acid compositions, biodiesel production, characteristics, engine performance and emissions production. Renew Sustain Energy Rev 18:211–245
7. Paresh DP, Absar L, Sajan C, Rajesh NP (2016) Bio fuels for compression ignition engine: a review on engine performance, emission and life cycle analysis. Renew Sustain Energy Rev 65:24–43
8. Pamilia C, Larasati S, Dita T (2019) The effects of catalysts type, molar ratio, and transesterification time in producing biodiesel from beef tallow. Mater Sci Eng 620:012019. https://doi.org/10.1088/1757-899X/620/1/012019
9. Eman NA, Cadence IT (2013) Characterization of biodiesel produced from palm oil via base catalyzed transesterification. Procedia Eng 53:7–12
10. Antony RS, Robinson SDS, Lindon RLC (2011) Biodiesel production from Jatropha oil and its characterization. Res J Chem Sci 1(1)
11. ASTM D445-19 (2019) Standard test method for kinematic viscosity of transparent and opaque liquids (and calculation of dynamic viscosity). ASTM International, West Conshohocken, PA. www.astm.org
12. ASTM D976-06 (2016) Standard test method for calculated cetane index of distillate fuels. ASTM International, West Conshohocken, PA. www.astm.org
13. ASTM D4809-18 (2018) Standard test method for heat of combustion of liquid hydrocarbon fuels by bomb calorimeter (precision method). ASTM International, West Conshohocken, PA. www.astm.org
14. ASTM D130-04 (2017) Standard test method for corrosiveness to copper from petroleum products by copper strip test. ASTM International, West Conshohocken, PA. www.astm.org
15. ASTM D1298-12b (2017) Standard test method for density, relative density, or API gravity of crude petroleum and liquid petroleum products by hydrometer method. ASTM International, West Conshohocken, PA. www.astm.org
16. ASTM D92-18 (2018) Standard test method for flash and fire points by Cleveland open cup tester. ASTM International, West Conshohocken, PA. www.astm.org
17. ASTM D97-17b (2017) Standard test method for pour point of petroleum products. ASTM International, West Conshohocken, PA
A Comprehensive Review on Acoustical Muffler Used in Aircraft’s Auxiliary Power Unit (APU) Ashish Pradhan, Shaik Zeeshan Ali, Korupolu Lok Chakri, Paleru Ranjan Kumar, and Ujjal Kalita
Abstract When an aircraft is on the ground or parked at the airport, the auxiliary power unit (APU) is switched on to supply conditioned air to the cabin or to provide electric power for aircraft systems. But an APU produces loud, unwanted noise whenever it is used, which can prove quite troublesome for the ground crew, so a muffler is usually fitted in the APU's exhaust. This study presents a literature review of the different types and aspects of mufflers and of the performance parameters required for efficient noise attenuation in the APU. In addition to muffler materials, the various tools and software used for muffler design and numerical calculation are described, and the recent vital work on the APU's muffler is summarized. Keywords Aircraft APU muffler · Transmission and insertion loss of a muffler · Muffler materials · Types of mufflers
1 Introduction
The auxiliary power unit (APU) has always played a crucial role in aircraft. It is essentially an additional, smaller gas turbine engine that employs the same principle as a jet engine and can be operated while the aircraft is in the air or on the ground. It is usually located in the tail cone of the aircraft; unlike the main engine, it does not generate thrust but instead supplies conditioned air to the cabin, provides electrical power to the aircraft systems, or supplies bleed air to start the main jet engine. The APU can be used as the main source of electrical power when an aircraft is on the ground and may provide pneumatic or electrical power when the aircraft is in flight. The installation of an APU can be very beneficial for an aircraft, but it does come with a major drawback: an APU produces extremely loud noise while running on the ground, and this can become worse if the bleed air is A. Pradhan · S. Z. Ali · K. L. Chakri · P. R. Kumar · U. Kalita (B) Aerospace Department, Lovely Professional University, Punjab, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 B. B. V. L. Deepak et al. (eds.), Intelligent Manufacturing Systems in Industry 4.0, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-99-1665-8_57
switched on to heat or cool the cabin. The typical noise output of an APU is 113 decibels, about 27 decibels lower than that of a jet engine but high enough to affect ground crew, aircraft maintenance staff, and people living in the vicinity of the airport. To protect workers from this hazardous noise, the European Union [1] imposed a limit on noise exposure, which should not exceed a daily or weekly value of 87 dB, provided that the workers are equipped with noise attenuation devices. So, to reduce the impact of this exhaust noise, the APU is usually fitted with a muffler. The APU's exhaust must be transmitted through the muffler for its noise to be reduced, so the muffler is usually placed around the APU's exhaust pipe. This can create a problem: mufflers are generally made of metal or the superalloy Inconel, and because of the high temperature of the exhaust gases, these materials become very hot during APU operation. Such high temperatures can affect the tail cone of the aircraft, which is made of composite materials that cannot tolerate high temperatures and might fail. Mufflers therefore have to be designed so that they attenuate noise while preventing the radiated heat from exceeding a predetermined limit.
1.1 The Noise Sources of an APU
Unlike the noise generated by flying aircraft, ramp or ground noise is not regulated by a governing body such as the JAA or FAA. Instead, the authorities of individual airports impose limitations or penalties based on various factors. Some airports in Europe require the auxiliary power unit (APU) to be shut down within 5 min after arrival and switched on no more than 5 min before departure. The International Civil Aviation Organization (ICAO), in Annex 16, Attachment C [2], suggested that airports establish an SPL limit of 85 dBA for aircraft at their service location and a maximum of 90 dBA for a region about 20 m from the airplane. An APU normally radiates an acoustic power level of 140–145 dB, contributed by various noise sources, one of the primary contributors being combustion noise. The noise generated by the combustion process itself is labeled direct combustion noise; one research study found that combustion noise possesses a distinctive spectral shape with the peak frequency lying in the low-frequency range of 250–350 Hz [3]. The same study concluded that the noise intensity (I) in the combustion chamber is directly proportional to the square of the fuel consumption rate (Q). In the literature on combustion noise there is a widespread theory, called the hot-spot theory, which states that combustion noise can also be generated by a second source involving pressure/entropy fluctuations downstream of the combustor flame. The noise generated by the passage of such hot spots through the engine's constriction is termed indirect combustion noise, and a study was conducted to find whether
an APU really produces indirect combustion noise [4]. To identify the regions of this indirect noise, anomalous peaks in the spectra were examined. It was observed that direct noise dominated the low-frequency region, masking any indirect noise there, but at higher frequencies an anomalous peak was found, proving the existence of indirect combustion noise in the APU. Aircraft perform numerous ground operations, and the noise they generate is dominated by two of them: reverse thrust and takeoff. When the APU alone is used during turnaround times, the noise generated is about 10 dB lower [5]. APU noise is deemed negligible at a busy, crowded airport, but at smaller airports it contributes significantly to the total noise received in nearby areas and can alone raise the A-weighted SPL to 45 dB(A). In a study by Callaway [6], the noise sources and control methods of the auxiliary power unit (APU) of the Boeing 727-100 were analyzed. Most of the APU noise in this aircraft originates from the inlet, the case, and the exhaust. To attenuate the inlet noise, a compact muffler was used, which introduced only a minor pressure drop while reducing the noise coming from the source. For the case-radiated noise, a 0.5-in-thick stainless steel fire shroud was used. Surprisingly, no acoustic treatment was initially provided to reduce the exhaust noise of the APU, but after the aircraft had been in service for many months, airport operators found this noise too loud to be left untreated.
1.2 Types of Mufflers
Early developments of acoustical mufflers for aircraft date back to the 1960s, when Alther [7] developed an efficient muffler for the Cessna Skymaster. He evaluated an expansion chamber muffler and a resonator muffler, and since then there have been many advances toward an optimum muffler capable of reducing the noise generated by various parts of the aircraft to an acceptable limit. Because of these developments, several types of mufflers are now available for aircraft.
1. Reactive Mufflers—Easy to manufacture and inexpensive, reactive mufflers use destructive interference to attenuate noise. They generally consist of multiple chambers and perforated tubes of varying cross-sectional area, which create an impedance mismatch for the sound wave. The mismatch causes the sound waves to be redirected and reflected back toward the source, effectively reducing the noise level. In short, reactive mufflers scatter and reflect the sound wave and attenuate noise effectively at lower frequencies.
2. Absorptive Mufflers—These mufflers use a sound-absorbing material to attenuate the noise. A perforated tube is generally covered with absorbers such as steel wool or fiberglass, and when sound waves pass through these materials, the sound
energy is transformed into heat, which is then dissipated into the air. Absorptive mufflers work by reducing the fluctuations of the high-pressure exhaust gas, which in turn attenuates the noise intensity effectively.
3. Combination (Hybrid) Mufflers—A muffler combining the properties of reactive and absorptive mufflers is termed a combination or hybrid muffler. Such mufflers incorporate the best of both worlds, featuring both scattering and absorption in a single unit. A study by Biswas [8] concluded that even at a smaller volume, combination mufflers are much more effective than reactive mufflers: combination mufflers attenuated noise by 11 dB(A) while reactive mufflers managed only 5 dB(A), which makes the former quite useful for aircraft.
4. Baffle Mufflers—These mufflers use baffles as obstructions to dampen the noise. Baffles, generally cylindrical, are placed inside the muffler and obstruct the exhaust noise one after another until the sound at the outlet is minimized.
5. Expansion Chamber Mufflers—One of the most basic muffler configurations, the expansion chamber muffler consists of a sound-absorbing acoustic chamber to which inlet and outlet pipes are connected; the unit attenuates the pressure pulsation and thereby reduces the noise intensity. These mufflers are further classified by their number of chambers, i.e., simple (one-chamber), two-chamber, and three-chamber mufflers. In an experiment by Bhat et al. [9], three-chamber mufflers provided higher sound attenuation than simple and two-chamber mufflers, and the authors concluded that increasing the number of chambers increases the transmission loss, giving a better noise reduction characteristic.
1.3 Performance Parameters of Mufflers
The main objective of an acoustic muffler is to attenuate the sound pressure level, but depending on the type and configuration, a muffler's effectiveness can vary to a great extent. To evaluate a muffler, researchers and companies therefore use performance parameters that help determine the noise reduction capacity of a novel design.
1. Transmission Loss (TL)—The TL of a muffler is the difference between the sound pressure level (SPL) of the incident wave entering the inlet and that of the transmitted wave exiting the outlet (Eq. 1). In terms of sound energy/power, the transmission loss can also be defined through the ratio of the incident sound power at the muffler inlet to the transmitted sound power at the muffler outlet (Eq. 2).

TL = L_pi − L_po (1)

TL = 10 log10( Σ_i W_i,incident / Σ_o W_o,transmitted ) (2)

The above equations are the simplest way to calculate the transmission loss, but for a muffler with more complex configurations and parameters, the TL can be calculated by a technique known as the Transfer Matrix Method (TMM) [10]. A simple 2 × 2 transfer matrix relates the sound pressure (p) and the volume velocity (v) upstream (1) and downstream (2) as

p1 = W p2 + X v2, (3)
v1 = Y p2 + Z v2, (4)

where W, X, Y, and Z are the four-pole constants. Written in matrix form, Eqs. 3 and 4 contribute to the calculation of the transmission loss:

[p1; v1] = [W X; Y Z] [p2; v2] (5)

2. Insertion Loss—In its simplest form, insertion loss is the difference in the outlet sound pressure/power measured at a fixed point in the presence and in the absence of the muffler (Eqs. 6 and 7).

IL = L_w1 − L_w2 (dB) (6)

∴ IL = 10 log10(W1/W2) (dB) (7)

Zhong [11] used the transfer matrix method to derive a mathematical model of the insertion loss (Eq. 8):

IL = 20 log10 | (A Z_r + B ρc + Z_a (C Z_r + D ρc)) / (A′ Z_r + B′ ρc + Z_a (C′ Z_r + D′ ρc)) | (8)
Here, Z_a is the source impedance and Z_r the tailpipe radiation impedance. In another study, Hua et al. [12] used both the superposition method and the impedance matrix method to calculate the insertion loss of multi-inlet mufflers.
3. Noise Elimination (ΔL)—Sometimes also called the level difference, noise elimination (ΔL) is the difference between the sound pressure levels (SPL) at two selected points in the exhaust pipe and the tailpipe, i.e., between the upstream and downstream SPLs of the muffler (Eq. 9) [13]:

ΔL = 20 log10(p_i / p_o) = L_pi − L_po (9)

where p_i and p_o are the sound pressures at the muffler inlet and outlet, and L_pi and L_po are the corresponding SPLs.
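As a minimal numerical sketch (not taken from any of the cited studies; the geometry and air properties below are assumed for illustration), the parameters above can be computed for a single expansion chamber: the four-pole matrix of Eq. (5) gives the transmission loss, which is cross-checked against the textbook closed-form expansion-chamber result, while Eqs. (7) and (9) are direct dB ratios:

```python
import math

C0, RHO = 343.0, 1.21  # assumed speed of sound (m/s) and air density (kg/m^3)

def expansion_chamber_tl(freq, length, pipe_area, chamber_area):
    """Transmission loss (dB) of a single expansion chamber between equal-area
    inlet/outlet pipes, built from the four-pole constants W, X, Y, Z of Eq. (5)."""
    kl = 2 * math.pi * freq / C0 * length
    z2 = RHO * C0 / chamber_area          # characteristic impedance of the chamber
    W, Z = math.cos(kl), math.cos(kl)
    X = 1j * z2 * math.sin(kl)
    Y = 1j * math.sin(kl) / z2
    z1 = RHO * C0 / pipe_area             # impedance of the inlet/outlet pipes
    return 20 * math.log10(0.5 * abs(W + X / z1 + Y * z1 + Z))

def insertion_loss(w1, w2):
    """Eq. (7): IL in dB from radiated power without (w1) and with (w2) the muffler."""
    return 10 * math.log10(w1 / w2)

def noise_elimination(p_in, p_out):
    """Eq. (9): level difference in dB from inlet/outlet sound pressures."""
    return 20 * math.log10(p_in / p_out)

# Cross-check the four-pole result against the classic closed form
# TL = 10*log10(1 + 0.25*(m - 1/m)**2 * sin(kL)**2), with m the area ratio:
f, L, S1, S2 = 500.0, 0.3, 0.005, 0.05
m, kL = S2 / S1, 2 * math.pi * f / C0 * L
closed = 10 * math.log10(1 + 0.25 * (m - 1 / m) ** 2 * math.sin(kL) ** 2)
print(round(expansion_chamber_tl(f, L, S1, S2), 3) == round(closed, 3))  # True
print(insertion_loss(10.0, 1.0))              # a 10x power cut inserts 10.0 dB
print(round(noise_elimination(2.0, 1.0), 2))  # halved pressure -> 6.02 dB
```

The agreement between the four-pole computation and the closed form confirms that the matrix of Eq. (5) was assembled consistently; real muffler elements such as perforates or mean flow would each need their own four-pole matrices.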
2 Literature Review
In 2018, Deaconu et al. [14] adopted a newer approach to reducing duct noise with the help of metamaterials, exploiting the acoustic black hole (ABH) effect, a technique otherwise used for passive structural vibration control. The researchers considered various muffler designs and calculated their acoustic attenuation and pressure loss using numerical CFD and FEM simulations. The mufflers were designed with an inlet diameter of 100 mm and an outlet diameter of 50 mm. A thin perforated shell (t = 0.5 mm, σ = 8.37%, and d = 1 mm) was positioned at the neck of the slits to reduce the pressure loss. Actran software was used to compute the transmission loss, and Ansys CFX was used to determine the pressure loss. Of the four muffler configurations considered, the muffler featuring transversally disposed ABH structures on the convergent sides showed the best noise attenuation at low frequencies, generating a TL of 114 dB at 810 Hz and 81 dB at 2220 Hz. Elnady [15], in his thesis, presented theoretical and experimental work on perforated tube mufflers. Perforated tubes are used to decrease the back pressure and the noise inside the muffler by confining the mean flow. Such mufflers can be modeled by two different techniques, namely the distributed approach and the discrete (or segmentation) approach. The distributed approach treats the perforated tube as a continuous object in which the local pressure difference is related to the particle velocity by the surface wall impedance, while the discrete approach divides the perforate coupling into several segmented coupling points, each comprising a coupling branch and two straight hard pipes.
During the analysis of simple perforated mufflers, the distributed approach and continuity of acoustic momentum are mainly used, but for more advanced muffler systems, the discrete approach and continuity of acoustic energy are required. Elnady and Abom [16] then studied the sensitivity of the transmission loss to the choice of coupling conditions on a through-flow muffler and on a plug-flow muffler. Changing the coupling conditions had a notable effect on the TL of the former, while the latter showed little effect except at the peaks. Munjal et al. [17] earlier analyzed perforated mufflers in two-duct and three-duct configurations and compared the transmission loss with other measurements and predictions. Prior work on perforated mufflers includes a study by Sullivan and Crocker [18], who solved the coupled equations to predict the transmission
loss of resonators. A segmentation method was developed by Sullivan [19] in which each segment uses a separate transfer matrix. To predict the transmission loss (TL), the researchers used transfer matrices for the perforated elements in addition to those of the other elements forming a muffler. Fortran programs were used to compute the TL of the mufflers over a range of 20–3500 Hz. It was found that increasing the Mach number increases the resistive part of the perforate impedance, which in turn increases the transmission loss. The authors concluded that the number of segments in the segmentation method should be high in order to achieve good accuracy. Green and Lilley [20] conducted experiments with a wide-angle diffuser intended to reduce the total noise level of a jet engine during run-up by more than 30 dB. This diffuser, combined with gauze resistance, was placed at the tailpipe of a jet engine for use as a ground muffler. The design satisfied an equal-pressure condition at the jet pipe exit and the free jet. The final muffler consisted of a straight-sided conical diffuser of 22.5° semi-angle, a combination of three gauzes, and a nozzle of 1-in diameter, with an overall length of 4.5 in. Sound intensity levels were measured at pressure values of 1.5, 2, and 2.33 at a radial distance of 10 feet from the nozzle exit, with the azimuthal angle increased in 10° steps up to 170° with respect to the jet's downstream direction. It was observed that the combination of gauzes prevented flow separation, and increasing the azimuthal angle gave a decreasing slope of noise levels, which could be reduced further by adding a perforated pipe extension. On Aug 5, 2005, Brown et al.
[21] filed a patent (Patent No. US 7,367,424 B2) with the US Patent and Trademark Office for an 'Eccentric Exhaust Muffler for use with Auxiliary Power Units.' The inventors designed a muffler system (108) comprising an outer can (110), an annular baffle, and a tube (112). The muffler is placed between the APU and the exhaust opening to reduce the noise emissions coming from the APU. The outer can consists of a sidewall (114), a forward annular wall (116), an aft annular wall (118), and a cavity (117) extending between the two annular walls. The tube works together with the outer can (110) to dampen the APU noise; the inventors achieved this by placing an acoustic liner (188) inside the tube (112), constructed from an acoustic material with a resistance greater than zero. The tube, together with the forward wall, aft wall, and outer can sidewall, forms an acoustic chamber that attenuates the noise through viscous effects after it travels through the acoustic liner. Moreover, the change in cross-sectional area from the forward end of the outer can to its aft end provides a change in acoustic impedance and reduces the exhaust noise further. To increase the noise attenuation further, the muffler includes one or more baffles. Each annular baffle is placed between the forward and aft walls and features an opening sized to provide a radial gap, which allows the acoustic liner to expand. The official drawing of this muffler is shown for reference in Fig. 1. Throughout history, there have been many attempts to design an ideal muffler for the APU. Early designs included stacked or parallel coupled treatments
Fig. 1 Muffler patented by Brown et al.
consisting of side branch resonators [22], leading to a large radial envelope and hence an unacceptable weight and size penalty. Mufflers were also designed to attenuate low frequencies within acceptable weight limits; such mufflers incorporate folded-cavity resonators [23, 24] whose length varies axially rather than radially, with the magnitude of the sound attenuation controlled by the outer diameter of the muffler. To learn more about design procedures for coupled-resonator and folded-cavity mufflers, Ross and Lyon [25] experimentally evaluated two exhaust mufflers on an APU. They used the Garrett GTC36200 APU, which serves on the US Navy's McDonnell Douglas F/A-18 Hornet. The dimensions of both mufflers were similar, each being 26 in long with a 10 in inner diameter. The first muffler, designed for mid and high frequencies, featured a small folded-cavity tuning section and a concentric tube resonator. The second muffler, used for low-frequency attenuation, consisted of a long folded-cavity resonator plus a honeycomb resonator with two feltmetal facesheets to attenuate the mid and high frequencies as well. Finite element analysis was used to design and evaluate both mufflers and to predict the broadband attenuation. The average insertion loss of the first muffler was about 3.9 dB and that of the second 3.8 dB; the two values were added and compared with the experimental result, giving an approximate agreement. The study also concluded that the tailpipe length of the APU influences the mid- and high-frequency sound as well as the insertion-loss spectrum. Callaway [6], from the Commercial Airplane Division of The Boeing Co., published a study of the noise control of the auxiliary power unit (APU) of the Boeing 727-100 aircraft.
This aircraft uses two absorptive mufflers on the exhaust duct of the galley to attenuate noise while still allowing airflow for ventilation. Air inlet mufflers were used on the APU to dampen the noise
A Comprehensive Review on Acoustical Muffler Used in Aircraft’s …
emerging from the inlet and the case, with the requirement that, when attached to the turbine, the mufflers create very little pressure drop. The muffler finally designed used 135° acoustically treated lined turns and was built from open-cell polyurethane foam covered with an open-weave fiberglass cloth. The foam absorbed the acoustical energy, while the covering cloth protected the foam from air erosion and also acted as a mass-loaded diaphragm, allowing the muffler to absorb low and high frequencies simultaneously. The exhaust of this APU also used a muffler constructed from sintered stainless steel, which dissipated acoustical energy by oscillating the air through the interlocked, non-woven felted body of the metal. In another study of the same aircraft, Gebhardt [26] observed that for the muffler in the compressor air inlet the attenuation increased with increasing octave band, whereas for the muffler in the secondary air inlet the attenuation varied, with a peak around the 600–1200 Hz octave band. In 2014, Knobloch et al. [27] conducted an experimental investigation of the noise emission of the Garrett (Honeywell) GTCP 36-300 APU, which is usually installed on commercial aircraft such as the Airbus A320. The researchers used five different configurations, although the outcomes of the second and third configurations were not disclosed. The first configuration, C1, was constructed using a sound-absorbing material known as Feltmetal and provided a specific acoustic impedance of 40 Rayl. The fourth configuration, C4, was a newly designed muffler manufactured by PFW Aerospace AG, which used a silent-metal facesheet and provided a similar 40 Rayl impedance. The fifth configuration, C5, was a reference case used solely to evaluate the noise reduction performance of the mufflers.
The three mufflers had roughly similar dimensions: an overall length of 1.2 m, an inner duct diameter of 220 mm, a lined surface length of 1 m, and a cavity depth of 70 mm. Comparing C4 with C5 and C1, it was observed that the sound pressure level (SPL) under the main engine start (MES) condition was greater than under the environmental control system (ECS) condition. The sound was attenuated by 20 dB in the ECS case, more than in the MES case, but the noise attenuation was negligible for frequencies below 200 Hz. In the fifth configuration, for frequencies above 200 Hz, the sound pressure level was found to be lowest downstream of the exhaust plane. It was also observed that the fourth configuration provided the same sound attenuation level as the first, C1. These results were interpreted in 1/3rd octave bands, which made comparison between the test configurations easier. In 2018, Knobloch et al. [28] designed two different mufflers for the GTCP36-28 APU and tested their operating behaviour. The first muffler (162.3 mm inner diameter, 1054 mm overall length) used cavities of various volumes with a 0.65% perforation (holes 2.5 mm in diameter and 3 mm deep) applied to the inner duct of the damper, and a small bias flow was applied to increase the acoustic damping performance and sound absorption. The second muffler used variable perforations and featured a porous absorber material located between the inner and outer duct. Both mufflers were designed to perform competently over a broad range of frequencies and were tested with a downstream array of 15 microphones in an indoor setup. It was observed that the first muffler
A. Pradhan et al.
showed excellent noise damping performance over a frequency range from below 100 Hz up to a few kHz, while the second muffler performed adequately over the entire frequency range. The noise damping of the latter was not as high as that of the former, which makes the first muffler, with its various cavities, better suited to practical APU application thanks to its multi-frequency damping capabilities. Vieille et al. [29], in their study of the acoustic efficiency of the auxiliary power unit, used the numerical tools ACTIPOLE and ACTRAN/TM to predict the attenuation produced by the exhaust muffler of an APU. The muffler in this study had a circular cross-section and allowed the testing of linear as well as nonlinear facesheets over a backing cavity, which contained a straight and a tapered section with 2 to 5 baffles equally spaced along the axis. The exhaust noise was tested using two different setups. In the first, the APU was mounted in an anechoic chamber with 24 microphones (some on the ground and some 6 ft above it) to measure the SPL and acoustic directivity. The second setup also placed the APU inside the anechoic chamber but used nine free-field microphones distributed from 30° to 150° from the exhaust centerline. The muffler with a porous sheet was operated under different conditions, with and without bleed air and shaft load, and the experimental data differed only slightly from the numerically predicted results, which were apparently not greatly affected by bleed conditions. For the muffler with a perforated liner, agreement between the experimental and simulation data was found above 3 kHz, but frequencies below this value showed major differences. It was concluded that the overall attenuation depends upon the liner admittance, reflection, cavity geometry, and Mach number.
Through further simulation, it was also observed that adding more cavities to the muffler can provide higher attenuation at mid to high frequencies. Aircraft ground noise can be a serious problem for workers and crew, so in 2003 the European Union legislated minimum requirements for the protection of workers from the risks of exposure to loud noise in Directive 2003/10/EC—Noise. With this directive in mind, Nodé-Langlois et al. [30] conducted an experiment to predict and measure a muffler's transmission loss using a non-locally reacting acoustic treatment. The researchers considered a reactive muffler for the APU exhaust consisting of air cavities and baffles joined to the main duct through a perforated sheet. The muffler finally developed successfully combined the reflective properties of the baffles with the dissipative property of the resistive sheet. The study also assessed and compared experimental measurements against analytical data obtained from an extension called Bulk Absorber or Honeycomb lined duct Acoustic propagation Modeling with Axial Segments (BAHAMAS), developed by Sijtsma and Van der Wal [31], and from the FEM software ACTRAN/TM. To obtain the experimental data, a bench was placed in the acoustic test center of Airbus, capable of generating azimuthal modes from order −10 to +10 up to 4000 Hz and of measuring noise radiation in the anechoic room using two far-field microphones. During the calculation, it was found that even a slight variation in frequency can have a huge effect
on the transmission loss. At 4056 Hz the computed transmission loss was around 8.5 dB, but at 4030 Hz the TL was around 35 dB, much closer to the experimental value of 43 dB. Insertion loss is an important evaluation standard for muffler performance. To assess the insertion loss of two mufflers, Zlavog et al. [32] used the finite element method (FEM), implemented in COMSOL Multiphysics, to predict the acoustical performance of the mufflers and then validated the predictions through testing in the Boeing Interior Noise Test Facility (INTF). The researchers chose two mufflers for this experiment, one conical and the other stepped, but both consisted of twelve similar chambers separated from the main duct by a porous facesheet. The sheet with the lower percentage of open area was named POA1, and the one with the higher percentage was named POA2. After putting a 2D model of noise propagation into COMSOL and considering only plane-wave signals in the FEM simulation, it was predicted that at the first resonant frequency the conical muffler would show an attenuation (insertion loss) above 90 dB for POA1 and over 80 dB for POA2, but upon testing the insertion loss came out around 25 dB, while the second resonant frequency was exactly 1250 Hz, as predicted. The stepped muffler showed a smaller difference between predicted and experimental data (above 10 dB for POA1 and 15 dB for POA2), and its resonant frequencies were correctly predicted. The authors concluded that the discrepancies might be due to noise leakage, which was further supported by studying photographs of the setup. Ahmed et al. [33], in 2021, conducted research on the Boeing 737-400 aircraft to assess thermodynamic parameters of the exhaust muffler.
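Insertion loss, the evaluation standard used in several of the studies above, is simply the drop in sound pressure level caused by installing the muffler, usually reported per frequency band and then averaged into a single figure. A minimal bookkeeping sketch (the band levels below are made up for illustration, not data from any cited study):

```python
def insertion_loss(spl_without, spl_with):
    """Insertion loss per band: SPL without the muffler minus SPL with it (dB)."""
    return [a - b for a, b in zip(spl_without, spl_with)]

def average_il(il_bands):
    """Arithmetic average across bands, one common way of quoting a single figure."""
    return sum(il_bands) / len(il_bands)

# Hypothetical one-third-octave band levels (dB) without and with a muffler
without = [95.0, 98.0, 101.0, 99.0]
with_m = [92.0, 93.5, 97.0, 96.5]
il = insertion_loss(without, with_m)
print(average_il(il))  # → 3.5
```

Comparing such averages across designs is what allows statements like "3.9 dB for the first muffler and 3.8 dB for the second" to be made on a common footing.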
The APU of this aircraft featured a single-stage centrifugal compressor with 17 impeller blades, a reverse-flow annular combustor, and a radial inflow turbine with 14 blades. An exhaust muffler was used in this APU, though its composition and type were not described in that study. During the experimental investigation, the temperature of the exhaust muffler was measured using a thermoelectric thermometer held at the exhaust. It was observed that the temperature at the muffler exit corresponds to 8% of the cold mass flow rate compared with the same mass flow at the muffler inlet. This reduces the dynamic pressure at the inlet of the muffler, which leads to a higher static pressure. The average temperature drop along the muffler was found to be around 26 °C. It was concluded that adding a muffler to the APU slightly increases the mass flow and density while reducing the temperature and velocity. It is essential to design a muffler so that the heat it radiates does not exceed a predetermined limit; the best way to do this is to insulate the muffler with a low-conductivity blanket. In 2012, Vunnam and Bouldin [34] performed a conjugate heat transfer CFD analysis on an APU's exhaust in order to optimize the blanket design. The muffler used in this work incorporated an exhaust duct made from a perforated liner surrounded by a large backing cavity, which in turn was separated into different
Table 1 Materials used and compared in the work by Vunnam and Bouldin [34]

| Blanket type | Upper blanket insulation (material, thickness) | Lower blanket insulation (material, thickness) | Result |
|---|---|---|---|
| Option 1 | 0.75-in thick Pyrogel, no air gap | 0.75-in thick Pyrogel, no air gap | Outer surface temperature was 4% hotter than the maximum allowable limit |
| Option 2 | 0.236-in thick Pyrogel + 0.75-in thick silica, 0.125-in high air gap | 0.236-in thick Pyrogel + 0.375-in thick silica, 0.125-in high air gap | Outer surface temperature was 1% higher than the maximum allowable limit |
| Option 3 | 0.236-in thick Pyrogel + 0.75-in thick silica, 0.125-in high air gap | 0.986-in thick OEM insulation, 0.125-in high air gap | Maximum wall temperatures were 2% less than the maximum allowable limit |
compartments by baffles. The researchers considered three different insulation blankets to understand the thermal behaviour of the outer blanket and aft tail cone surface. A CFD analysis of these configurations was conducted using Ansys Fluent to determine which blanket option minimizes the heat transfer through the muffler. The three options are described in Table 1. Of these, Option 3 minimized the heat transfer and successfully met the design temperature criteria. Its lower blanket used a lower-density material that offered less thermal insulation and allowed more heat to pass, resulting in a more uniform heat distribution between the upper and lower halves of the muffler blanket. Lieber et al. [35] conducted a research experiment to predict the transmission loss of two inlet designs for a turboprop engine. A reactive muffler with three cavities of 3 in depth, covered by a resistive perforated facesheet, was placed on the upper surface of the duct inlet. The muffler attenuated the low-frequency noise sufficiently well, but for higher frequencies a 1-in thick liner with a bulk absorbing material was used. The other inlet design, a plenum inlet, worked as a cowl muffler and featured a bulk absorbing material to attenuate high-frequency noise. The acoustic simulation software ACTRAN/TM was used to analyse the two inlet designs at one-third octave band frequencies. The configuration containing the muffler provided an attenuation of 4 dB at 800 Hz, which, combined with the melamine-foam bulk absorber, provided an attenuation of 7.5 dB at 1600 Hz, falling to about 5.5 dB at lower and higher frequencies. The plenum inlet was more effective, providing a maximum attenuation of 7.5 dB at 1600 Hz and an overall attenuation above 5.5 dB at frequencies below 5000 Hz.
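Several of the studies above report attenuation in one-third octave bands. For reference, the band centre frequencies in the exact base-2 convention can be generated as follows (a sketch of the standard convention, not code from any cited work):

```python
import math

def third_octave_centers(f_min=100.0, f_max=5000.0, f_ref=1000.0):
    """One-third octave band centre frequencies, exact base-2 convention:
    f_c = f_ref * 2**(n/3) for integer n, with 1 kHz always a centre."""
    centers = []
    n = math.ceil(3 * math.log2(f_min / f_ref))  # first band at or above f_min
    while f_ref * 2 ** (n / 3) <= f_max:
        centers.append(f_ref * 2 ** (n / 3))
        n += 1
    return centers

def band_edges(fc):
    """Lower and upper edge of the one-third octave band centred at fc."""
    return fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)

centers = third_octave_centers(100.0, 5000.0)
```

Each band thus spans a constant frequency ratio of 2^(1/3), which is why band-averaged levels give a fair comparison across the whole spectrum.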
Apart from the auxiliary power unit (APU), mufflers are also used in the air conditioning system (ACS) of the aircraft. Chappuis et al. [36] conducted a ground
noise test on an Airbus aircraft to identify the sources and locations of noise generated by the air conditioning system. A total of eight external mufflers, each with a transmission loss above 15 dB in the 300 Hz–10 kHz frequency range, were used to insulate the inlets and outlets of two different ACS: the Environmental Control System (ECS) and the Pack Bay Ventilation (PBV). To measure the sound pressure levels, twelve microphones were laid out on the ground, and two additional microphones were installed at the Engine Oil Filling (EOF) point and the Refuel/Defuel Control Panel (RDCP). It was observed that the APU's background noise was predominant at low frequencies, while ECS noise dominated at higher frequencies; ECS noise increases above 1000 Hz and drives the SPL above 2000 Hz, decreasing the TL. Palan et al. [37] analysed and compared two different muffler types for a compressor inlet in a fuel-cell APU. The mufflers were designed for the 800-Hz 1/3rd octave band, with an inlet tube diameter of 20 mm, an internal chamber diameter of 51 mm, and a chamber length of 112 mm. To achieve a transmission loss greater than the calculated value of 10.4 dB, a perforated metal sheet was also used inside the expansion chamber. The first muffler was a purely reactive one, made of steel and featuring a simple expansion chamber. This design did not produce an increase in insertion loss, which was then improved by 1 dB after adding a 30 cm long tube connecting the muffler outlet to the compressor inlet. The second muffler was a dissipative one, constructed by slicing the reactive muffler into two parts, inserting fiberglass wool into the expansion chamber, and rejoining the parts with an aluminum sheet secured by a hose clamp.
It was observed that the sound pressure level (SPL) in the 800-Hz frequency band was reduced by 2 dB, making it the quieter and more pleasant option. A study on muffler absorption materials was carried out by Kalita et al. [38], discussing the properties of porous materials such as hemp and kenaf fibres, ceramic foams, aerogels, and metal foams. It concluded that using such materials can reduce fuel consumption as well as harmful vehicle emissions. In another study, Kalita and Singh [39] computed the performance parameters of a reactive muffler using CFD analysis. Five configurations were examined, of which the fourth design generated the maximum noise attenuation and was well optimized for a straight-four diesel engine: it featured baffles along with tubes and produced a 67% pressure drop, greater than that of a simple reactive muffler. The researchers then compared a reactive muffler with a hybrid muffler to assess their pressure drop and performance parameters [40]. The fourth model, a hybrid muffler combining absorptive and reactive elements, produced the best pressure drop (a 38% increase) as well as the lowest sound pressure level (45–50 dB) at the outlet.
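The 10.4 dB figure quoted for the Palan et al. expansion chamber is consistent with the classic plane-wave transmission-loss formula for a simple expansion chamber, which is presumably the calculation behind it. A sketch, using the geometry quoted above:

```python
import math

def expansion_chamber_tl(f, d_pipe, d_chamber, length, c=343.0):
    """Plane-wave transmission loss (dB) of a simple expansion chamber:
    TL = 10*log10(1 + 0.25*(m - 1/m)**2 * sin(k*L)**2), m = area ratio."""
    m = (d_chamber / d_pipe) ** 2      # expansion ratio S2/S1
    k = 2 * math.pi * f / c            # acoustic wavenumber
    return 10 * math.log10(1 + 0.25 * (m - 1 / m) ** 2 * math.sin(k * length) ** 2)

# Geometry from the text: 20 mm inlet tube, 51 mm chamber, 112 mm chamber length
m = (0.051 / 0.020) ** 2
tl_peak = 10 * math.log10(1 + 0.25 * (m - 1 / m) ** 2)   # at sin(kL) = 1
print(round(tl_peak, 1))  # → 10.4, matching the calculated value in the text
```

The peak occurs wherever the chamber is an odd number of quarter-wavelengths long, which is why such a reactive design is inherently narrow-band.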
3 Research Gap
Although numerous studies have sought to reduce exhaust noise to an acceptable limit with the help of a muffler, there is still room for improvement in muffler design. Reactive mufflers work well at low frequencies but struggle to attenuate noise at mid and high frequencies, for which absorptive mufflers are now used. However, badly designed perforations on absorptive mufflers can degrade the engine by increasing back pressure, so careful design of ideal perforations is needed. Moreover, exhaust mufflers tend to fail prematurely due to wet corrosion, air corrosion, or fatigue [41], so research is needed on muffler materials or coatings that can prevent such corrosion failures. Mufflers placed at the exhaust pipe become very hot due to the flow of hot gases, which can affect the tailcone of the aircraft; studies are therefore ongoing to develop materials that minimize the heat radiating from the muffler while still attenuating noise well. Aeroacoustics is an ever-developing field: many designs have been proposed, but research gaps remain in muffler design and performance parameters, and filling them will help mufflers decrease the sound pressure level more efficiently and meet the limits imposed by the authorities.
4 Conclusion
This literature review briefly presented recent vital work on mufflers that can be used on an aircraft's auxiliary power unit (APU) to reduce the noise it emits. Many studies have noted that combination mufflers perform adequately in such cases and that adding baffles to the configurations can further improve noise attenuation. The performance parameters (transmission loss and insertion loss) of different muffler configurations have also been described in this paper. It was observed that increasing the number or geometry of cavities, the Mach number, and the octave bands results in an increase in the transmission loss and noise attenuation capacity of the muffler. The average insertion loss of mufflers designed to attenuate noise at mid and high frequencies was found to be higher than that of mufflers designed for lower frequencies. This paper also described the numerical tools and design software used to obtain numerical data and verify them against experimental results. The acoustic attenuation in different studies was commonly calculated with CFD software such as Ansys CFX or Ansys Fluent and with numerical (FEM) tools such as ACTRAN/TM, ACTIPOLE, or BAHAMAS, sometimes implemented in COMSOL Multiphysics.
In this study, attention has also been given to muffler materials. Mufflers with folded cavities or acoustically treated turns were made of open-cell polyurethane foam covered with an open-weave fiberglass cloth to protect it from air erosion. Some mufflers also used sound-absorbing materials such as Feltmetal or porous facesheets, which further increased the sound attenuation capacity. It was observed that constructing the muffler insulation from Pyrogel, silica, and OEM materials helped insulate the outer surface with a low-conductivity blanket, successfully minimizing the heat radiated from the muffler.
References 1. Directive 2003/10 EC of the European Parliament and of the Council (2003) Minimum health and safety requirements regarding the exposure of workers to the risks arising from physical agents (noise). 6 Feb 2003 2. International Civil Aviation Organization (ICAO) Annex 16—Noise, "Environmental Protection", August 1971 3. Tam C et al (2005) Combustion noise of auxiliary power units. In: 11th AIAA/CEAS aeroacoustics conference 4. Tam CKW et al (2013) Indirect combustion noise of auxiliary power units. J Sound Vib 332(17):4004–4020 5. Pott-Pollenske M et al (2007) Characteristics of noise from aircraft ground operations. In: 13th AIAA/CEAS aeroacoustics conference (28th AIAA aeroacoustics conference) 6. Callaway VE (1968) Noise control of aircraft auxiliary power units. SAE Trans 1993–1999 7. Alther GA (1966) Muffler development for light aircraft and a technique of in-flight data acquisition and analysis. SAE Trans 262–271 8. Biswas S (2010) Combination muffler is more effective than reactive muffler even in small size. In: Frontiers in automobile and mechanical engineering-2010. IEEE 9. Bhat C et al (2010) Design and analysis of a expansion chamber mufflers. World J Eng 7(3):117– 118 10. Bugaru M, Vasile O, Enescu N (2006) The Mufflers modeling by transfer matrix method. In: Proceedings of the 10th WSEAS international conference on applied mathematics 11. Zhong S (2020) Research on computational data simulation of automobile engine exhaust muffler performance. J Phys: Conf Series 1533(4) 12. Hua X et al (2014) Determination of transmission and insertion loss for multi-inlet mufflers using impedance matrix and superposition approaches with comparisons. J Sound Vib 333(22):5680–5692 13. Munjal ML (2014) Acoustics of ducts and mufflers, Second Edition. John Wiley & Sons 14. Deaconu M, Radulescu D, Vizitiu G (2018) Acoustic study of different mufflers based on metamaterials using the black hole principle for aircraft industry. 
In: Conference proceedings euronoise 15. Elnady T (2004) Modelling and characterization of perforates in lined ducts and mufflers. Doctoral dissertation, Farkost och flyg 16. Elnady T, Åbom M (2004) Paper VI: on acoustic network models for perforated tube mufflers and the effect of different coupling conditions. Diss. Ph. D. Thesis, The Royal Institute of Technology (KTH), Stockholm, Sweden 17. Munjal ML, Narayana Rao K, Sahasrabudhe AD (1987) Aeroacoustic analysis of perforated muffler components. J Sound Vib 114(2):173–188 18. Sullivan JW, Crocker MJ (1978) Analysis of concentric-tube resonators having unpartitioned cavities. J Acoust Soc Am 64(1):207–215
19. Sullivan JW (1979) A method for modeling perforated tube muffler components. I. Theory. J Acoust Soc Am 66(3):772–778 20. Green DJ, Lilley GM (1955) A preliminary report on the use of the wide angle diffuser in ground mufflers of the type used for silencing jet aircraft 21. Brown DV, Asplund KD, Michalski JW (2008) Eccentric exhaust muffler for use with auxiliary power units. U.S. Patent No. 7,367,424. 6 May 2008 22. Dean LW (1975) Coupling of Helmholtz resonators to improve acoustic liners for turbofan engines at low frequency. No. PWA-5311 23. Sawdy DT, Beckemeyer RJ (1980) Bandwidth attenuation with a folded cavity liner in a circular flow duct. AIAA J 18(7):766–773 24. Beckemeyer RJ, Sawdy DT (1976) Analytical and experimental studies of folded cavity duct acoustic liners. J Acoust Soc Am 60(S1):S123–S123 25. Ross D, Lyon C (1984) Application and test verification of finite element analysis for gasturbine extended reaction exhaust muffler systems. In: 9th aeroacoustics conference 26. Gebhardt GT (1965) Acoustical design features of boeing model 727. J Aircr 2(4):272–277 27. Knobloch K et al (2014) Full-scale tests on APU noise reduction. In: Turbo expo: power for land, sea, and air, vol 45608. American Society of Mechanical Engineers 28. Knobloch K, Enghardt L, Bake F (2018) APU-noise reduction by novel muffler concepts. In: Turbo expo: power for land, sea, and air, vol 51005. American Society of Mechanical Engineers 29. Lavieille M, Brown D, Vieuille F (2011) Numerical modeling and experimental validation of the acoustic efficiency of treated ducts on an aircraft auxiliary power system. In: 17th AIAA/CEAS aeroacoustics conference (32nd AIAA aeroacoustics conference) 30. Nodé-Langlois T et al (2010) Modeling of non-locally reacting acoustic treatments for aircraft ramp noise reduction. In: 16th AIAA/CEAS aeroacoustics conference 31. Sijtsma P, van der Wal H (2003) Modelling a spiralling type of non-locally reacting liner. 
In: 9th AIAA/CEAS aeroacoustics conference and exhibit 32. Zlavog G, Breard C, Diamond J (2009) Non-locally reacting liner modeling and validation. In: 15th AIAA/CEAS aeroacoustics conference (30th AIAA aeroacoustics conference) 33. Ahmed U, Ali F, Jennions IK (2021) Development of a far-field noise estimation model for an aircraft auxiliary power unit. IEEE Access 9:127703–127719 34. Vunnam K, Bouldin B (2012) APU exhaust muffler design improvements through conjugate heat transfer CFD analysis. In: Turbo expo: power for land, sea, and air, vol 44700. American Society of Mechanical Engineers 35. Lieber LS, Weir D, Sheoran Y (2013) Prediction of transmission loss for two turboprop engine inlet designs. In: 19th AIAA/CEAS aeroacoustics conference 36. Chappuis J, François B, Matthieu P (2011) Air conditioning system noise measurement and characterization for aircraft ground operations. In: 17th AIAA/CEAS aeroacoustics conference (32nd AIAA aeroacoustics conference) 37. Palan VW, Shepard S, Lim TC (2004) Case history: noise control approaches for an aircompressor in a fuel-cell auxiliary power unit. Noise Control Eng J 52(5):197–209 38. Kalita U, Pratap A, Kumar S (2015) Absorption materials used in muffler a review. Int J Mech Ind Technol 2(2):31–37 39. Kalita U, Singh M (2021) Design and CFD analysis on flow through a reactive muffler of four-cylinder diesel engine. In: Recent trends in engineering design. Springer, Singapore, pp 211–223 40. Kalita U, Singh M (2022) Optimization of a reactive muffler used in four-cylinder petrol engine into hybrid muffler by using CFD analysis. Mater Today: Proc 50:1936–1945 41. Corrosion in exhaust systems, mufflers and silencers vol 1|Total solutions in heat resistant paints and powder coatings. www.sme.in/Orbit/articles/corrosion-in-exhaust-system-vol1/index.html
A Case Study of Bus Bar Heat Transfer Optimization Using Taguchi Technique for Low Tension Application S. Thirumurugaveerakumar , V. Manivelmuralidaran , and S. Ramanathan
Abstract It is essential to govern and control the temperature rise in bus bars. A case study of an industry with a high-load, low-tension application is considered in this research. Current intensity, bus bar width, and bus bar material are identified as the influential parameters governing the heat transfer rate in the bus bar. The aim of this work is to optimize the heat transfer rate in a bus bar under the given ambience in a low-tension application using the Taguchi method. The temperature is taken as the response parameter throughout the optimization. The error between the experimental and theoretical values is reported to be less than 0.6. The optimization reveals that the width and the material of the bus bar are the most and least influential parameters, respectively, with contribution rates of 40.7% and 18.5% in augmenting the heat transfer rate. Keywords Ampacity · Bus bar system · Heat transfer · Taguchi method
1 Introduction
The heat transfer phenomenon plays a crucial role in bus bar applications. Copper and aluminium are the two commonly employed bus bar materials. The temperature rise in the bus bar is a crucial factor when designing the bus bar system. To optimize the performance of the bus bar system, the parameters are optimized by Taguchi's method, and the theoretical values obtained are compared with the experimental ones to validate the experimental results. The objective of this research work is to use the Taguchi optimization technique to improve the performance of a bus bar system in an industrial application. The parameters considered are the current, the width of the bus bar, and the bus bar material. The steady-state temperature of the bus bar is taken as the response parameter and is henceforth called the temperature in this article. S. Thirumurugaveerakumar · V. Manivelmuralidaran (B) · S. Ramanathan Kumaraguru College of Technology, Coimbatore 641 049, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 B. B. V. L. Deepak et al. (eds.), Intelligent Manufacturing Systems in Industry 4.0, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-99-1665-8_58
Bedkowski et al., 2014 reported expressions that are available to predict the temperature rise in a bus bar system [1]. A study by Bedkowski et al., 2016 found that calculating convective heat transfer coefficients is mandatory in thermal applications where surfaces are cooled by liquids or gases and convection occurs [2]. There is a strong relationship between the thermal aspect and the fluid flow aspect: Bedkowski et al., 2017 found that the fluid flow characteristic quantities are influenced by the distribution of temperature, while the distribution of temperature is in turn influenced by the fluid flow parameters [3]. An analysis of bus bars from the perspective of thermal behaviour in high-current supply systems has also been carried out, discussing the effect of current on the performance of the bus bar system [4]. The operating temperature of the bus bar system plays a crucial role in deciding its performance [5]. An analytical algorithm to find the permissible ampacities of horizontally installed rectangular bus bars has also been documented [6, 7]. Newton's law of cooling gives the heat transfer by convection, Eq. 1:

$$Q = h A (T - T_\infty) \tag{1}$$
Here Q is the rate of heat transfer (power) between the wall surface and the fluid, T is the bus bar temperature, T∞ is the free-stream fluid temperature, and h is the convection coefficient (also called the film coefficient). The convection coefficient h is a complicated function of the fluid flow that drives the convection. In addition, the coefficient varies over the surface and depends on the location where the temperature measurement is carried out [8]. By considering the energy balance, it is possible to develop a thermal model of a bus bar in both steady and unsteady states. The convection heat flux during temperature rise is given by Eq. 2, and the radiation heat flux by Eq. 3:

qc = h(T − T∞), (2)

qr = hr(T − T∞). (3)

hr stands for the radiation heat transfer coefficient and is given by Eq. 4:

hr = εσ(T² + T∞²)(T + T∞). (4)

The total heat flux transferred from the bus bar to the atmosphere is given by Eq. 5:

q = qc + qr. (5)
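As a quick numerical check of Eqs. 2–5, the sketch below evaluates the combined convective and radiative flux leaving the bus bar surface. The numbers used (h = 10 W/m² K, ε = 0.3, a 40 °C bar in 30 °C air) are illustrative assumptions, not values from the paper.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiation_coefficient(T, T_inf, eps):
    """Eq. (4): h_r = eps * sigma * (T^2 + T_inf^2) * (T + T_inf); temperatures in kelvin."""
    return eps * SIGMA * (T ** 2 + T_inf ** 2) * (T + T_inf)

def total_heat_flux(T, T_inf, h, eps):
    """Eqs. (2), (3), (5): total flux q = q_c + q_r in W/m^2."""
    q_c = h * (T - T_inf)                                      # convection, Eq. (2)
    q_r = radiation_coefficient(T, T_inf, eps) * (T - T_inf)   # radiation, Eq. (3)
    return q_c + q_r

# Illustrative values (assumed, not from the paper): 40 C bus bar in 30 C air
q = total_heat_flux(313.15, 303.15, h=10.0, eps=0.3)
```

At these conditions hr is only about 2 W/m² K, i.e. convection dominates the dissipation, which is why h appears first in the time-constant expression later in the section.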
The resistance to current flow leads to heat generation given by Q = I²R(t). The maximum operating temperature determines the current carrying capacity, or ampacity, of the bus bar [9, 10]. The energy balance equation can be written as given in Eq. 6.
A Case Study of Bus Bar Heat Transfer Optimization Using Taguchi …
Q = I²R(t) = rate of heat stored in the bus bar + rate of heat dissipated from the bus bar by convection and radiation. (6)
The energy balance equation is given in Eq. 7:

ρCpV dT/dt = I²R(t) − hAs(T − T∞) − εσAs(T⁴ − T∞⁴). (7)
Equation 7 has the form of the differential Eq. 8:

dT/dt + aT = C, (8)

where the radiation term has been factored using T⁴ − T∞⁴ = (T² + T∞²)(T + T∞)(T − T∞).
The solution of this differential equation is given in Eq. 9:

T(i+1) = (C/a)(1 − e^(−at)) + T(i) e^(−at), (9)

where

a = [hAs + εσAs(T + T∞)(T² + T∞²)] / (ρCpV),

C = I²R(t)/(ρCpV) + [hAs + εσAs(T + T∞)(T² + T∞²)]/(ρCpV) · T∞.
The thermal time constant is a function of the geometrical, physical, and thermal properties of the bus bar material:

(T(t) − T₁)/(T₂ − T₁) = 1 − e^(−t/τ). (10)
The thermal time constant τ in Eq. 10 can be expressed as a function of the thermal capacitance of the bus bar and the combined radiative and convective thermal resistances at its surface [11, 12]. It is given by Eq. 11:

τ = ρCpV / {As[h + εσ(T + T∞)(T² + T∞²)]}. (11)
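Eq. 11 can be evaluated directly; the property values below are the same illustrative assumptions used in the sketch above, not the paper's data.

```python
def time_constant(rho, cp, V, As, h, eps, T, T_inf, sigma=5.67e-8):
    """Eq. (11): tau = rho*cp*V / (As * [h + eps*sigma*(T + T_inf)(T^2 + T_inf^2)])."""
    hr = eps * sigma * (T + T_inf) * (T ** 2 + T_inf ** 2)   # radiative coefficient, Eq. (4)
    return rho * cp * V / (As * (h + hr))

# Illustrative copper values (assumed): tau comes out on the order of a few minutes
tau = time_constant(8960.0, 385.0, 1.0e-4, 0.27, 10.0, 0.3, 313.15, 303.15)
```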
The Taguchi method, proposed by the renowned Japanese quality engineer Genichi Taguchi, focusses on optimization to reduce process variation, in particular on minimizing the variation of the quality characteristic of interest [13].
S. Thirumurugaveerakumar et al.
2 Experimental Setup

This research work was carried out as an industrial case study. The observations of current and temperature were recorded in the powerhouse of Sangeeth Textile Mill, located in Coimbatore, Tamil Nadu, India, which has a 2500 A rating and a 1500 kVA transformer. The voltage is stepped down to 440 V in the transformer for distribution to the load centres. Figure 1 shows the bus bar panel in the powerhouse of Sangeeth Textiles. A thermocouple is an electrical device made up of two dissimilar conductors that form an electrical junction. As a result of the thermoelectric effect, a thermocouple generates a temperature-dependent voltage, which can be used to measure temperature. T-type thermocouples (diameter = 0.2 mm, accuracy = 0.02 °C) are used along with an NI data acquisition system (NI 9211) to measure the bus bar temperature. Figure 2a, b shows the T-type thermocouples and the NI 9211 used in this research work. The selected T-type thermocouples are also recommended for high-moisture, dust, fog, smoke, liquid, high-pressure, and corrosive environments. Attributes of this type of thermocouple are high dielectric strength, durability, malleability, and good response to temperature variations. Mechanical strength and corrosion and moisture resistance are provided by the uniform thickness of the wires and the magnesium oxide electrical insulation [14–16].
Fig. 1 Bus bar panel arrangement in the powerhouse of Sangeeth Textiles
Fig. 2 a T-type thermocouple. b Four-channel temperature DAQ
3 Taguchi Method

The Taguchi method solves single-objective problems effectively and is a useful tool for determining critical parameters and predicting optimal settings for each process parameter. The steps for the bus bar parameter optimization are as follows:

Step 1: Determine the quality characteristic to be optimized.
Step 2: Identify the noise factors and test conditions.
Step 3: Identify the control factors and their levels.
Step 4: Select the design matrix and define the data analysis procedure.
Step 5: Conduct the experiments as per the design of experiments.
Step 6: Analyse the data and determine the optimum level of each control factor.
Step 7: Conduct a confirmation test to validate the results.

In this work, the supplied current, the size of the bus bar, and the type of bus bar material are considered for the analysis. The current and the size of the bus bar are taken at three levels, and the material at two levels. The parameters and their levels are given in Table 1.

Table 1 Input parameters and their levels

Parameters          Notation   Units   Level 1   Level 2   Level 3
Current             C          A       1000      1500      2000
Size of bus bar     S          mm      35        50        100
Type of material    M          –       Cu        Al        –
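In Step 6, since temperature is a smaller-the-better response, the signal-to-noise ratio is SN = −10 log10((1/n) Σ yᵢ²). A minimal Python sketch, applied for illustration to the three copper runs at 1000 A from Table 2:

```python
import math

def sn_smaller_is_better(values):
    """Smaller-the-better signal-to-noise ratio: SN = -10 * log10((1/n) * sum(y_i^2))."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

# Example: measured temperatures for copper at 1000 A (Table 2, runs 1-3)
sn_cu_1000 = sn_smaller_is_better([37.5, 36.0, 33.5])
```

This evaluates to roughly −31.1, consistent with the −31 to −33 range on the SN-ratio axes of the main effects plot (Fig. 5).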
Fig. 3 Comparison of experimental and theoretical results
In the Taguchi method, a mixed-level design has been used to study the heat transfer in the bus bar. An L18 orthogonal array has been selected to conduct 18 experiments, with temperature taken as the output response. The design matrix with the experimentally measured and theoretically calculated temperatures is given in Table 2. To validate the experimental values, they are compared with the theoretical values obtained from the heat transfer equation. The comparison between the theoretical and experimental values is plotted in Fig. 3. From Fig. 3, it is found that most of the points lie close to the line, which means the experimental values are close to the theoretical values. The observed R² value of 0.9998 confirms that the theoretical values are close to the experimental observations. This justifies the applicability of the proposed solution to the industrial case study selected in this article. Further, the error between the theoretical and experimental values for each experimental run is shown in Fig. 4. From Fig. 4, the error is at most 0.6 °C, and the experimental results closely follow the theoretical values.
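The agreement statistics can be recomputed from the Table 2 data (values transcribed from the table). A simple residual-based R² comes out near 0.996 rather than the quoted 0.9998 (the paper's figure presumably comes from a fitted regression line), but it confirms the close agreement:

```python
# Experimental and theoretical temperatures, runs 1-18 of Table 2 (degrees C)
exp_T = [37.5, 36.0, 33.5, 40.0, 37.5, 35.0, 45.2, 42.0, 37.1,
         42.0, 39.0, 35.5, 45.5, 41.5, 37.0, 53.6, 47.5, 40.5]
theo_T = [37.89, 36.12, 34.02, 40.33, 38.1, 35.22, 45.17, 41.64, 37.09,
          42.27, 39.49, 35.19, 45.62, 41.97, 37.26, 53.53, 47.76, 40.32]

mean_exp = sum(exp_T) / len(exp_T)
ss_res = sum((e - t) ** 2 for e, t in zip(exp_T, theo_T))   # residual sum of squares
ss_tot = sum((e - mean_exp) ** 2 for e in exp_T)            # total sum of squares
r_squared = 1.0 - ss_res / ss_tot
max_err = max(abs(e - t) for e, t in zip(exp_T, theo_T))    # worst-case deviation
```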
4 Results and Discussion

The current carrying capacity of aluminium bus bars is lower than that of copper bus bars; aluminium has nevertheless been compared with copper in this research because of its lower cost. The degradation mechanisms of bus bars, such as corrosion and oxidation, typically set in around 85 °C [17–19]. With a view to optimizing the bus bar performance, the mathematical equation developed from the thermal model is solved, and the experimental values are then optimized using the Taguchi method. The variation in bus bar temperature with material for different bus bar sizes has been plotted in the parameter interaction plot. Figure 5 shows the main effects plot for the SN ratio. It shows that copper exhibits higher heat transfer owing to its higher thermal conductivity compared with aluminium; hence, aluminium has a lower heat transfer efficiency [20–23]. When the current increases, the heat transfer decreases [24, 25].
Table 2 L18 orthogonal array with response and theoretically calculated values of temperature

Expt No.   Type of material   Current (A)   Size of bus bar (mm)   Measured temperature (°C)   Calculated temperature (°C)
1          Cu                 1000          35                     37.5                        37.89
2          Cu                 1000          50                     36                          36.12
3          Cu                 1000          100                    33.5                        34.02
4          Cu                 1500          35                     40                          40.33
5          Cu                 1500          50                     37.5                        38.1
6          Cu                 1500          100                    35                          35.22
7          Cu                 2000          35                     45.2                        45.17
8          Cu                 2000          50                     42                          41.64
9          Cu                 2000          100                    37.1                        37.09
10         Al                 1000          35                     42                          42.27
11         Al                 1000          50                     39                          39.49
12         Al                 1000          100                    35.5                        35.19
13         Al                 1500          35                     45.5                        45.62
14         Al                 1500          50                     41.5                        41.97
15         Al                 1500          100                    37                          37.26
16         Al                 2000          35                     53.6                        53.53
17         Al                 2000          50                     47.5                        47.76
18         Al                 2000          100                    40.5                        40.32
Fig. 4 Error between theoretical and experimental values
Figure 6 shows the interaction plot of the considered bus bar parameters with respect to the heat transfer efficiency of the bus bar system. From Fig. 6, it is inferred that there are no interaction effects among these parameters [4, 26, 27]. Compared with copper, the aluminium bus bar material transfers less heat [28, 29]. When the supplied current is higher, the effect of thickness is more pronounced; the same holds for the type of material.

Fig. 5 Main effects plot for SN ratios (signal-to-noise: smaller is better)

In Fig. 6, for a copper bus bar of 100 mm width, the steady-state temperature reached is 37.09 °C. If the width is reduced to 50 mm, the steady-state temperature increases by 15%; the width of the bus bar thus has a negative correlation with the steady-state temperature [30]. When the width is reduced to 35 mm, the steady-state temperature rises by 19%. For an aluminium bus bar of 100 mm width, the steady-state temperature reached is 40.32 °C. Reducing the width to 50 mm increases the steady-state temperature by 19%, and stepping the width down to the next standard size (35 mm) increases it by 35%. This result is consistent with earlier reported research [31–33]. Table 3 shows the ANOVA for the experimental results. It is inferred from the ANOVA table that the thickness of the material plays an important role in enhancing the heat transfer in the bus bar system, contributing 41% to the heat transfer. Next to thickness, the supplied current contributes 33% to the heat transfer efficiency, while the bus bar material contributes only 18.5%. Figure 7 shows the Pareto chart giving the percentage contribution of each parameter to the heat transfer response: thickness contributes the most at 40.7%, followed by current at 33%, with material contributing the least at 18.5%.
Fig. 6 Interaction effects of parameters (signal-to-noise: smaller is better)

Table 3 ANOVA for experimental results

Parameter   Degrees of freedom   Sum of squares   Mean square   F-value   % significance   Order
Material    1                    79.506           79.506        28.50     18.5             3
Current     2                    141.945          70.973        25.44     33.0             2
Thickness   2                    175.202          87.601        31.40     40.7             1
Error       12                   33.480           2.79          –         7.8              –
Total       17                   430.134          –             –         –                –
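The percentage contributions reported in Table 3 and plotted in the Pareto chart follow directly from the sums of squares (percent contribution = SS_factor / SS_total × 100). A minimal check in Python:

```python
# Sums of squares from Table 3
ss = {"Material": 79.506, "Current": 141.945, "Thickness": 175.202, "Error": 33.480}
ss_total = sum(ss.values())                 # matches the Table 3 total of 430.134
contribution = {k: 100.0 * v / ss_total for k, v in ss.items()}

# The F-values are each factor's mean square divided by the error mean square
f_thickness = (175.202 / 2) / 2.79
```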
Fig. 7 Pareto chart giving the percentage contribution
5 Conclusion

In this research work, an industrial bus bar system was taken up as a case study and optimized using the Taguchi method. Experiments were carried out using an L18 orthogonal array, varying the current and the bus bar width for copper and aluminium materials. The optimum parameters obtained are a 100 mm width and a 1000 A current with copper material; copper with low current and larger width is suggested to enhance the heat transfer rate of the bus bar. The optimization reveals that the width of the material is the major contributing factor to enhancing heat transfer in the bus bar system, at 40.7%. The experimental results were compared with theoretical values, and the maximum error between them is 0.6 °C. The material and size of the bus bars therefore have to be controlled, which increases the efficiency of the bus bar in industrial applications.
References

1. Bedkowski M, Smolka J, Banasiak K, Bulinski Z, Nowak AJ, Tomanek T, Wajda A (2014) Coupled numerical modelling of power loss generation in busbar system of low-voltage switchgear. Int J Therm Sci 82:122–129
2. Bedkowski M, Smolka J, Bulinski Z, Ryfa A (2016) 2.5-D multilayer optimization of an industrial switchgear busbar system. Appl Therm Eng 101:147–155
3. Bedkowski M, Smolka J, Bulinski Z, Ryfa A (2017) Simulation of cooling enhancement in industrial low-voltage switchgear using validated coupled CFD-EMAG model. Int J Therm Sci 111:437–447
4. Plesca A (2019) Thermal analysis of busbars from a high current power supply system. Energies 12:2288
5. Barrett R (2013) Operating temperature of current carrying copper busbar conductors. Bachelor's Thesis, University of Southern Queensland, Toowoomba, Australia
6. Bedkowski M, Smolka J, Bulinski Z, Ryfa A, Ligeza M (2016) Experimentally validated model of coupled thermal processes in a laboratory switchgear. IET Gener Transm Distrib 10:2699–2709
7. Thirumurugaveerakumar S, Sakthivel M, Valarmathi S (2014) Experimental and analytical study on the bus duct system for the prediction of temperature variations due to the fluctuation of load. J Electric Eng Technol 9(6):2036–2041
8. Thirumurugaveerakumar S, Sakthivel M, Rajendran S (2015) Experimental and analytical study of effect of forced convectional cooling of bus duct system in the prediction of temperature rise. Int J Adv Eng Res 10(21):42202–42208
9. Thirumurugaveerakumar S, Sakthivel M, SharmilaDeve V (2016) Prediction and comparison of size of the copper and aluminium bus duct system based on ampacity and temperature variations using MATLAB. Therm Sci 9:1–11
10. Coneybeer RT, Black WZ, Bush RA (1994) Steady-state and transient ampacity of bus bar. IEEE Trans Power Deliv 9(4):1822–1829
11. Klimenta DO, Perović BD, Jevtić MD, Radosavljević JN (2016) An analytical algorithm to determine allowable ampacities of horizontally installed rectangular bus bars. Therm Sci 20(2):717–730
12. Kim JK, Hahn SC, Park KY, Kim HK, Oh YH (2005) Temperature rise prediction of EHV GIS bus bar by coupled magneto-thermal finite element method. IEEE Trans Magn 41(5):1636–1639
13. Kim M, Bak S, Jung W, Hur D, Ko D, Kong M (2019) Improvement of heat dissipation characteristics of Cu bus-bar in the switchboards through shape modification and surface treatment. Energies 12:146
14. Li X, Gao N, Wu W (2018) A modified self-powered wireless temperature measurement system for high voltage switchgear. In: Proceedings of the 2018 13th IEEE conference on industrial electronics and applications (ICIEA). Wuhan, China, pp 1425–1430
15. Linsuo Z, Maojun W (2006) The design and realization of on-line measuring device of busbar temperature rise for HV switch board. In: Proceedings of the 2006 international conference on power system technology. Chongqing, China, pp 1–5
16. Loken RSJ, Bostad A, Ingebrigtsen S (2014) Utility experiences on busbar faults in a transmission grid. In: Proceedings of the 12th IET international conference on developments in power system protection (DPSP 2014), Copenhagen, Denmark, pp 1–5
17. Jung HS (2017) Study for temperature rise on busbar of the switchgear and controlgear assemblies. Korea Inst Inf Commun Eng 21:379–385
18. Guo B, Song Z, Fu P, Jiang L, Wang M, Dong L (2016) Prediction of temperature rise in water-cooling DC busbar through coupled force and natural convection thermal-fluid analysis. IEEE Trans Plasma Sci 44:3346–3352
19. Delgado F, Renedo CJ, Ortiz A, Fernández I, Santisteban A (2017) 3D thermal model and experimental validation of a low voltage three-phase bus duct. Appl Therm Eng 110:1643–1652
20. Kim JW, Park JY, Sohn JM, Ahn KY (2014) A study on thermal characteristics for D.C. molded case circuit breaker busbar. Korean Inst Electr Eng 11:252–254
21. Geng C, He F, Zhang J, Hu H (2017) Partial stray inductance modeling and measuring of asymmetrical parallel branches on the bus-bar of electric vehicles. Energies 10:1519
22. Wang L, Chiang HD (2017) Toward online bus-bar splitting for increasing load margins to static stability limit. IEEE Trans Power Syst 32:3715–3725
23.
Callegaro AD, Guo J, Eull M, Danen B, Gibson J, Preindl M, Emadi A (2018) Bus bar design for high-power inverters. IEEE Trans Power Electron 33:2354–2367
24. Smirnova L, Juntunen R, Murashko K, Musikka T, Pyrhönen J (2016) Thermal analysis of the laminated busbar system of a multilevel converter. IEEE Trans Power Electron 31:1479–1488
25. Slade P (2014) Electrical contacts: principles and applications. CRC Press, Boca Raton, FL, USA. ISBN 978-1-4398-8130-9
26. Yaman G (2019) A thermal analysis for a switchgear system. J BAUN Inst Sci Technol 21:72–80
27. Szulborski M, Łapczyński S, Kolimas Ł, Kozarek Ł, Rasolomampionona DD (2020) Calculations of electrodynamic forces in three-phase asymmetric busbar system with the use of FEM. Energies 13:5477
28. Łapczyński S, Szulborski M, Kolimas Ł, Kozarek Ł, Gołota K (2020) Mechanical and electrical simulations of tulip contact system. Energies 13:5059
29. Author F (2016) Article title. Journal 2(5):99–110
30. Author F, Author S (2016) Title of a proceedings paper. In: Editor F, Editor S (eds) Conference 2016, LNCS, vol 9999. Springer, Heidelberg, pp 1–13
31. Author F, Author S, Author T (1999) Book title, 2nd edn. Publisher, Location
32. Author F (2010) Contribution title. In: 9th international proceedings on proceedings. Publisher, Location, pp 1–2
33. LNCS Homepage. http://www.springer.com/lncs. Last accessed 21 Nov 2016
Combustion and Vibration Investigations of a Reactor-Based Dual Fuel CI Engine that Burns Hydrogen and Diesel Shaik Subani and Domakonda Vinay Kumar
Abstract The difficulties of using fossil and conventional fuels are lower performance, poorer combustion, and more vibrations. These can be mitigated by inducting hydrogen alongside diesel in dual fuel mode. This paper deals with on-board hydrogen generation using a reactor; experiments were performed at a constant speed of 1500 rpm and a 12 kgf load. The hydrogen flow rate was kept at 15 L per min, and the engine was set at a compression ratio of 18. Hydrogen was generated by mixing chemicals in the reactor. An accelerometer was attached to the cylinder head using gel, and vibrations were collected in three directions. The rate of pressure rise, cylinder pressure, and net heat release rate all rose as a result of the addition of hydrogen to the combustion process. Vibrations were also reduced in all three directions compared with standard diesel, owing to the improved combustion with hydrogen. Keywords Combustion · Compression ratio · Vibration · Hydrogen
1 Introduction

At the present stage, the use of fossil fuels has reached an extreme level, resulting in air pollution and a decrease in the combustion quality of IC engines. The vibrations and loud noise produced by a diesel engine operated with conventional fossil fuel are its main drawback [1], and there is a need to increase fuel economy. A wide spectrum of hydrogen mixtures is flammable; they burn quickly and diffuse widely, which improves combustion and gives fewer vibrations [2]. Vibrations develop due to engine combustion and the reciprocating parts that help the vehicle run [3]. In 2020, the European Green Deal called for increased hydrogen production and hydrogen technologies in transportation [4]. This has increased the focus on diesel engines, as electric vehicles are less feasible for long-distance transport. For long-distance road transport and in populated areas, hydrogen CI engines are better than batteries, and there must be a means to store energy on board the vehicle [5].

S. Subani (B) · D. V. Kumar, Department of Mechanical Engineering, VFSTR, Guntur, Andhra Pradesh 522213, India. e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023. B. B. V. L. Deepak et al. (eds.), Intelligent Manufacturing Systems in Industry 4.0, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-99-1665-8_59

Because hydrogen has a
higher self-ignition temperature than any other fuel, it cannot be inducted directly into the engine [6]. When hydrogen is supplied to the engine at a higher compression ratio, the pressure and temperature in the cylinder increase at the advanced combustion level, and this yields fuel economy from hydrogen [7]. Pressure rises quickly when the test fuel burns in the engine's combustion chamber. The chamber walls and piston are subjected to this pressure, which is the main cause of engine vibrations; the pressure rise in the cylinder is thus the main force driving vibrations in all directions. Vibrations were strongest in the vertical direction, principally because the thrust the piston experiences acts mostly vertically. Vibration amplitudes are recorded as a time history and then converted to the FFT domain; the time and amplitude history is summarized by the RMS value, and amplitude-frequency values are determined in the FFT method by dividing the signal into a number of bands. Individual measurements taken along orthogonal axes should be integrated for vibration assessments. Vibration involves three translational axes, hence the measurement should be done along those three axes: lateral, longitudinal, and vertical [8]. Various studies have been carried out on diesel engines in dual fuel mode with hydrogen and diesel. When experiments were conducted using a biodiesel such as MOB with diesel, vibrations were reduced, and upon addition of hydrogen to the biodiesel, vibrations reduced much further [9]. A hydroxy gas generator was installed on a diesel engine, and it was noticed that vibrations were reduced with various biodiesel blends pumped into the engine; it was also found that upon addition of HHO gas, vibrations reduced further [10].
There is limited literature regarding hydrogen in CI engines. In our study, a diesel engine is modified to admit hydrogen and diesel, a hydrogen reactor is used to generate the hydrogen, and experiments are carried out at a full load of 12 kgf and 15 LPM at a compression ratio of 18.
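The RMS level and per-band FFT amplitudes described above can be sketched as follows. The signal here is synthetic (a single 25 Hz tone standing in for one vibration component), not measured engine data:

```python
import math

def rms(signal):
    """Overall vibration level as the root-mean-square of the time history."""
    return math.sqrt(sum(s * s for s in signal) / len(signal))

def dft_amplitude(signal, k):
    """Peak amplitude of the k-th DFT bin (frequency k * fs / n for a length-n record)."""
    n = len(signal)
    re = sum(s * math.cos(2.0 * math.pi * k * i / n) for i, s in enumerate(signal))
    im = sum(s * math.sin(2.0 * math.pi * k * i / n) for i, s in enumerate(signal))
    return 2.0 * math.sqrt(re * re + im * im) / n

# Synthetic acceleration record: a 5 m/s^2 component at 25 Hz, sampled at 1 kHz for 1 s
fs, n = 1000, 1000
sig = [5.0 * math.sin(2.0 * math.pi * 25.0 * i / fs) for i in range(n)]
```

For this tone the RMS is the familiar amplitude/√2, and only the 25 Hz bin carries energy; real accelerometer records spread energy across many bands, which is why the paper inspects the full FFT rather than a single bin.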
2 Experimental Setup

A water-cooled, single cylinder VCR research engine, the Kirloskar TAF-1, customised to operate in dual fuel mode and fitted with a fuel flow sensor, was used for the experiments; Fig. 2 depicts the schematic illustration of the investigational system. This engine's maximum speed is 1500 rpm, and its maximum power is 3.5 kW. Table 1 lists the engine's specifications. To monitor load, the engine was connected through a load cell to a water-cooled eddy current dynamometer. The combustion analysis was evaluated using the Engine Soft software, and the experiment's findings were displayed on the computer. The engine was kept running at 1500 rpm throughout the experimentation while the load was kept at 12 kgf. The engine's intake manifold was connected to the hydrogen supply reactor and a hydrogen flow metre. Given that the system is computerised, each of the sensors (fuel, speed, load, and pressure) was linked to the engine separately in order to measure the values. When hydrogen-enriched air was supplied during suction, the engine's initial diesel fuel consumption was lowered. Vibration was measured using a PCB electronics
Table 1 Engine details

Parameter                 Description
Engine make               Kirloskar
Compression ratio         18
Speed                     1500 rpm
Cylinder bore             87.5 mm
Fuel injection pressure   200 bar
Brake power               3.5 kW
Piston stroke length      110 mm
Injection point           30° before TDC
Cylinders                 1
accelerometer (Triaxial ICP®, Model No: 356A32). Quick-adhering gel was used to glue the sensor to the engine head. Three orthogonal axes were used to record the data (x: longitudinal axis, z: lateral axis, y: vertical axis) (see Fig. 2). To perform the FFT, Dewesoft X3 software was utilised. The sensors in the experimental system were connected through the DAQ to the computer, which displays the findings. A two-stage reactor, shown in Fig. 1, is used to produce hydrogen: an aluminium salt and a sodium salt are mixed in the reactor and hydrogen gas is produced. With the help of a valve, this gas is supplied to the flow metre and then to the diesel engine along with pure diesel, as the engine is in dual mode. Experiments are performed with care at a full load of 12 kgf and a compression ratio of 18, and the results were collected (Figs. 2 and 3).
3 Results and Discussions

3.1 Combustion Parameters

The combustion characteristics have a significant impact on an engine's ability to run more efficiently. Peak cylinder pressure, rate of pressure rise, and net heat release
Fig. 1 Hydrogen reactor
rate are three combustion parameters that have been studied. Each of these combustion factors predicts the fuel's capacity to burn without knocking.

• Cylinder Pressure (CP): From Fig. 4, it is noticed that the cylinder pressure is higher for hydrogen at 15 LPM than for standard diesel. Due to the fuel's hydrogen enrichment, a large amount of it is burned during the premixed combustion stage alone, causing an extremely high rise in cylinder pressure [11]. Higher hydrogen dilution rates and turbulence might occur if there is a sizeable amount of hydrogen in the fuel mixture, which would intensify combustion and raise the cylinder pressure.

• Net Heat Release Rate (NHRR): As seen in Fig. 5, compared with ordinary diesel the NHRR increased as the hydrogen flow rate increased, because the greater hydrogen self-ignition temperature causes a decrease in ignition delay, which simultaneously results in an increase in NHRR [12].

• Rate of Pressure Rise (RoPR): As seen in Fig. 6, the rate of pressure rise increased because the in-cylinder pressure in the combustion chamber rises rapidly. The flame spreads rapidly, and
1. Control panel
2. Speed indicator
3. Load indicator
4. Fuel flow level
5. Engine
6. Flywheel
7. Dynamometer
8. Accelerometer
9. Exhaust gas analyser
10. Vibration measuring equipment (DAQ)
11. Hydrogen flow
12. Hydrogen reactor
13. Engine combustion monitoring unit
14. Dewesoft software for collecting vibration

Fig. 2 Schematic illustration of engine setup
as a result there will be an elevated rate of pressure rise brought on by the cylinder's increasing temperature and pressure [13].
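A common way to obtain the net heat release rate from a measured pressure trace is the single-zone first-law model dQ/dθ = γ/(γ−1)·p·dV/dθ + 1/(γ−1)·V·dp/dθ. This model and the γ value below are assumptions for the sketch; the Engine Soft package used in the paper may implement a refined variant.

```python
# Single-zone net heat release rate (assumed model, not necessarily Engine Soft's):
#   dQ/dtheta = gamma/(gamma-1) * p * dV/dtheta + 1/(gamma-1) * V * dp/dtheta
GAMMA = 1.35  # assumed ratio of specific heats for the in-cylinder gas

def nhrr(p, V, dtheta):
    """Net heat release rate (J/deg) from pressure p (Pa) and cylinder volume V (m^3)
    sampled at uniform crank-angle spacing dtheta (deg); central differences inside."""
    q = []
    for i in range(1, len(p) - 1):
        dp = (p[i + 1] - p[i - 1]) / (2.0 * dtheta)
        dV = (V[i + 1] - V[i - 1]) / (2.0 * dtheta)
        q.append(GAMMA / (GAMMA - 1.0) * p[i] * dV + 1.0 / (GAMMA - 1.0) * V[i] * dp)
    return q
```

A handy sanity check: for an isentropic trace (p·V^γ constant) the two terms cancel and the result is near zero, so any nonzero output reflects actual heat release plus wall losses.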
3.2 Vibrations

From Fig. 7, by addition of hydrogen at 15 LPM to diesel, it was discovered that the vibrations produced were lower than with standard diesel in all directions (x, y, z). The upward and downward movement of the piston created the highest vibration value, which occurred along the y (vertical) axis because the combustion energy pushes the piston downwards; the second highest vibration was found along the x (longitudinal) axis, where the piston pushes against the cylinder walls. The least vibration was seen along the z (lateral) axis, associated with auxiliary equipment [13]. Engine block vibration may be reduced
Fig. 3 Setup for an experiment
Fig. 4 Change in cylinder pressure under full load (12 kgf)
as a result of the ignition delay and changes in peak pressure rise rate when increasing the H2 level to 15 LPM [14]. Variations in cylinder pressure and ignition delay cause the vibrations to lessen. The large amount of fuel burnt during the premixed combustion stage was what caused the massive increase in the hydrogen-enriched
Fig. 5 Changes in NHRR under full load (12 kgf)
Fig. 6 ROPR variation at full load (12 kgf)
fuel's cylinder pressure [11]. A substantial amount of hydrogen in the fuel mix and higher hydrogen diffusion rates may create turbulence that enhances combustion [15], resulting in increased cylinder pressure.
Fig. 7 Vibration acceleration for diesel and hydrogen at 15 LPM
4 Conclusion

An experimental investigation has been performed on a single cylinder VCR diesel engine under dual fuel mode at a hydrogen flow rate of 15 LPM and a compression ratio of 18. Based on the experiments, the following conclusions were drawn. On-board hydrogen generation was used, with hydrogen produced from chemicals in a reactor. Experiments were conducted by supplying hydrogen through a flow metre at 15 LPM, and the combustion parameters and vibrations were recorded. Compared with normal diesel, the combustion characteristics, including in-cylinder pressure, rate of pressure rise, and net heat release rate, were higher, and vibrations were lower, when hydrogen was supplied at 15 LPM. Vibrations were lowest in the z-direction in both the standard diesel and hydrogen cases, and vibrations are lower in the hydrogen case than in the standard diesel case.
References 1. Salvi BL, Subramanian KA (2015) Sustainable development of road transportation sector using hydrogen energy system. Renew Sustain Energy Rev 51:1132–1155 2. Ankur N, Biswajit P, Sunil KS (2016) Comparison of performance and emissions characteristics of CI engine fueled with dual biodiesel blends of palm and Jatropha. Fuel 173:172–179 3. Cheikh K, Sary A, Khaled L, Abdelkrim L, Mohand T (2016) Experimental assessment of performance and emissions maps for biodiesel fueled compression ignition engine. Appl Energy 161:320 4. European Commision (2019) A European green deal 2019. https://ec.europa.eu/info/strategy/ priorities-2019-2024/european-green-deal_en
5. Boretti A (2021) The hydrogen economy is complementary and synergetic to the electric economy. Int J Hydrogen Energy 46(78):38959–38963. https://doi.org/10.1016/j.ijhydene.2021.09.121, ISSN 0360-3199
6. Eric C, Christopher D, Jing G, Edward P (2012) Analysis of the effects of reformates (hydrogen/carbon monoxide) as an assistive fuel on the performance and emissions of used canola-oil biodiesel. Int J Hydrogen Energy 37:3510–3527
7. Abhishek RG, Nirmal PS, Kolluri RVS, Anurag P, Singh SN (2015) Effect of compression ratio on the performance of diesel engine at different loads. Int J Eng Res Appl 5:62–68
8. Dayong N, Changle S, Yongjun G, Zengmeng Z, Jiaoyi H (2006) Extraction of fault component from abnormal sound in diesel engines using acoustic signals. Mech Syst Signal Process 75:544–555
9. Tüccar G (2021) Experimental study on vibration and noise characteristics of a diesel engine fueled with mustard oil biodiesel and hydrogen gas mixtures. Biofuels 12(5):537–542. https://doi.org/10.1080/17597269.2018.1506631
10. Uludamar E, Tosun E, Tüccar G, Yıldızhan Ş, Çalık A, Yıldırım S, Serin H, Özcanlı M (2017) Evaluation of vibration characteristics of a hydroxyl (HHO) gas generator installed diesel engine fuelled with different diesel–biodiesel blends. Int J Hydrogen Energy 42(36):23352–23360. https://doi.org/10.1016/j.ijhydene.2017.01.192, ISSN 0360-3199
11. Mohanad A, Radu C, Viorel B, Georges D, Pierre P (2017) Investigation on the mixture formation, combustion characteristics and performance of a diesel engine fueled with diesel, biodiesel (B20) and hydrogen addition. Int J Hydrogen Energy 42:16793–16807
12. Osama HG (2013) Performance and combustion characteristic of CI engine fueled with hydrogen enriched diesel. Int J Hydrogen Energy 38:15469–15476
13. Satsangi DP, Tiwari N (2018) Experimental investigation on combustion, noise, vibrations, performance and emissions characteristics of diesel/n-butanol blends driven genset engine. Fuel 221:44–60
14.
Erinc U, Yildizhan Kadir A, Mustafa O (2016) Vibration, noise and exhaust emissions analysis of an unmodified compression ignition engine fuelled with low sulphur diesel and biodiesel blends with hydrogen addition. Int J Hydrogen Energy 41:11481–11490 15. Anuj P, Avinash KA (2015) Effect of compression ratio on combustion, performance and emissions of a laser ignited single cylinder hydrogen engine. Int J Hydrogen Energy 40:12531– 12540
Evolution of Automotive Braking System—A Mini Review
M. S. Sureshkumar, R. Rahul, J. Aishwarya Shree, S. Chandine, and A. Sai Darshan
Abstract Since the invention of the wooden block brake, interesting technological developments have occurred in the field of brake systems. Such advancements have resulted in increased road safety and fewer accidents. Brakes have evolved impressively over the years, incorporating several innovative technologies. The primary goal of every braking system upgrade is to increase vehicle safety and effectiveness. It is difficult to identify the creator of the first braking system, since there have been so many variations over the last century; yet those who created these systems shared a common objective: to enable people to control a motor vehicle. Over time, inventors have improved upon this basic concept by introducing new technology to the brake system with the aim of fostering safer conditions. The evolution of automotive brakes over the last four decades is briefly reviewed in this paper.

Keywords Braking system · Autonomous braking · Contactless braking · Advancement · Invention · Vehicle safety · Effectiveness
M. S. Sureshkumar · R. Rahul (B) · J. Aishwarya Shree · S. Chandine · A. Sai Darshan Department of Robotics and Automation, Sri Ramakrishna Engineering College, Coimbatore 641022, India e-mail: [email protected] M. S. Sureshkumar e-mail: [email protected] J. Aishwarya Shree e-mail: [email protected] S. Chandine e-mail: [email protected] A. Sai Darshan e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 B. B. V. L. Deepak et al. (eds.), Intelligent Manufacturing Systems in Industry 4.0, Lecture Notes in Mechanical Engineering, https://doi.org/10.1007/978-981-99-1665-8_60
1 Introduction

The act of braking involves slowing or decelerating a moving vehicle or rotor. The term "braking system" refers to the collection of tools and equipment involved in braking. The braking system is one of the most essential elements of any vehicle, whether car, bus, rail, or aircraft, since its failure may result in numerous fatalities [1]. The history of brakes predates the invention of wheeled transportation. Equally crucial to a vehicle's capacity to move is its ability to slow down quickly and securely [2]. Brakes have been in use for centuries, and their common operating principle is the conversion of kinetic energy into thermal energy before stopping, generally achieved through friction [3]. Development of brakes has accelerated with modern demand: advancements in braking systems appear every few years, each adding value to the state of the art. From wooden block brakes to today's anti-lock braking systems, safety, braking distance, and, most importantly, braking efficiency have steadily improved [4]. Each type of braking system had drawbacks that later designs had to overcome to ensure safety and efficacy [5].
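The kinetic-to-thermal conversion described above can be illustrated with a short calculation. The sketch below is illustrative only: the vehicle mass, speeds, and friction coefficients are assumed values, and the model takes a flat road with friction-limited deceleration (a = μg).

```python
# Idealised friction-braking model: all kinetic energy becomes heat,
# and deceleration is limited by tyre-road friction (a = mu * g).

G = 9.81  # gravitational acceleration, m/s^2


def kinetic_energy(mass_kg: float, speed_ms: float) -> float:
    """Kinetic energy (J) that the brakes must convert to heat."""
    return 0.5 * mass_kg * speed_ms ** 2


def stopping_distance(speed_ms: float, mu: float) -> float:
    """Stopping distance (m) from v^2 = 2 * mu * g * d."""
    return speed_ms ** 2 / (2 * mu * G)


if __name__ == "__main__":
    v = 100 / 3.6   # 100 km/h expressed in m/s
    mass = 1500.0   # assumed passenger-car mass, kg
    print(f"Heat to dissipate: {kinetic_energy(mass, v) / 1e3:.0f} kJ")
    print(f"Stopping distance (dry, mu=0.7): {stopping_distance(v, 0.7):.1f} m")
    print(f"Stopping distance (wet, mu=0.4): {stopping_distance(v, 0.4):.1f} m")
```

The wet-road case stops in roughly 1.75 times the dry-road distance, which is why braking advances that manage friction (ABS and its successors) matter so much.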
2 Major Types of Braking System Over the Years

2.1 Wooden Block Brakes

The first brakes were simply wooden blocks (Figs. 1 and 2) pushed against the rims of wooden wheels, or leather belting fastened to the vehicle's axle. Given the power and speed of draught animals, such brakes served animal-drawn vehicles and wagons for a long period [6]. Their immediate drawback was an inability to stop the vehicle stably once the speed exceeded 20 km/h [7]. To fix this and gain stability at higher speeds, the braking system was organised into four fundamental subsystems:

A—Energy source: the key parts of a braking system that produce, store, and make braking energy available.
B—Application system: all the parts used to regulate how much braking occurs.
C—Energy transmission system: all the parts that carry the braking force from the application device to the braked wheels.
D—Actuation system: the elements that generate the forces opposing the existing or intended vehicle motion [8].
Fig. 1 Wooden brake in wagons [7]
2.2 Drum Brakes

With these fundamentals in place, drum brakes emerged as the next advancement: braking systems are required to slow the vehicle controllably, steadily, predictably, and repeatedly, regardless of weather, load, road condition, or partial failure [9]. All of this is possible only if the braking system's behaviour remains stable while the car is being stopped. For more than 40 years, the automobile industry employed drum brakes extensively; they offer 30% more stability than wooden block brakes [10]. Even though virtually all automobiles now use disc brakes up front, the drum brake system is still found on many rear wheels [11] (Fig. 3). The primary focus of brake squeal research has shifted in recent years from theoretical, foundational work to more realistic, problem-solving activities. Braking system models now often include more brake components than a simple schematic, allowing investigation of the impact of design factors
Fig. 2 Wooden brake in vehicle [7]
Fig. 3 Cross-sectional view of drum brake [11]
on stability [12]. Drum brakes lower maintenance and manufacturing costs through a single, simpler-to-replace design when repairs are required. While disc brakes necessitate the installation of specialist parking brake devices, drum brakes are self-energising and may also serve as parking brakes [13].
2.3 Disc Brakes

Drum brakes and disc brakes are the two most popular types of friction brake. Owing to their larger swept surface, greater exposure to air flow, and centrifugal forces, disc brakes are self-cleaning and cool faster than drum brakes [13]. Friction material in the form of brake pads is driven mechanically, hydraulically, pneumatically, or electrostatically onto both sides of the disc to halt the wheel; the pads are carried in a mechanism known as a brake calliper. Disc brakes recover quickly from immersion and provide greater stopping performance than equivalent drum brakes, including resistance to "brake fade" caused by overheated brake parts, with 25% higher braking efficiency than drum brakes [14] (Fig. 4).

Fig. 4 Disc brake and basic components [16]

Because of these and other factors, disc brakes are now the preferred option for front brakes and are anticipated to dominate the commercial vehicle market within the next few years [15]. The development of braking materials for high loads and speeds followed the growth of railroad transportation, and the first braking tests were conducted around the turn of the twentieth century. Frederick William Lanchester, an English inventor, is credited with patenting one of the earliest known disc brake devices (Clark, 1995; Harper, 1998). Even with the development from drum to disc brakes, there is still a risk that, when the brake is applied, the wheel can slide and lock, resulting in
an accident. This shortcoming of the disc brake system created the need for an enhancement, the anti-lock braking system (ABS), to combat wheel locking [16].
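The pad-and-calliper arrangement lends itself to a simple torque estimate. The sketch below is a back-of-the-envelope model, not data from the cited studies: the clamping force, pad friction coefficient, and effective radius are hypothetical values, and the factor of two accounts for pads acting on both faces of the disc.

```python
def disc_brake_torque(clamp_force_n: float, mu_pad: float, r_eff_m: float) -> float:
    """Braking torque (N*m) of one disc: two pads, each rubbing at r_eff.

    T = 2 * mu_pad * F_clamp * r_eff
    """
    return 2.0 * mu_pad * clamp_force_n * r_eff_m


if __name__ == "__main__":
    # Hypothetical values: 15 kN hydraulic clamp, mu = 0.4, 0.11 m effective radius.
    t = disc_brake_torque(15_000.0, 0.4, 0.11)
    print(f"Braking torque per disc: {t:.0f} N*m")
```

Raising any of the three factors raises the torque, which is why performance cars combine larger discs (bigger effective radius) with multi-piston callipers (greater clamping force).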
2.4 Anti-lock Braking System (ABS)

The most contemporary and widely used brake technology, ABS, guards against the brakes locking up [17]. The ABS must be built to withstand a variety of driving circumstances, notably those requiring rapid braking and acceleration, so prolonging the brake system's lifespan under these conditions becomes crucial. Whenever the brake pedal is pressed hard, the ABS automatically performs cadence braking to avoid wheel locking and vehicle slipping [18]. When a wheel is about to lock, the system detects it and activates hydraulic valves to release pressure. ABS gives the driver better control, reducing the chances of fatalities and injuries [19] (Fig. 5). Although ABS can increase overall braking distance in some conditions, it still improves overall stability and efficiency by 17% compared with plain disc brakes. It is a vehicle safety architecture, and the controller's job is to govern the braking torque so as to keep the slip ratio near its optimum [20]. Before the introduction of ABS, Honda introduced the Combi-Brake System (CBS), which achieved a shorter braking distance than standard disc brakes but could not avoid wheel slippage and locking [21]. The revolutionary advancement of ABS was thus overcoming
Fig. 5 Comprehensive view of ABS [18]
Fig. 6 Schematic model of regenerative braking [25]
the sudden locking of the running wheel, drastically raising braking efficiency over the past four decades of braking-system development [22].
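The slip-ratio regulation that ABS performs can be sketched as a simple bang-bang (cadence) controller. The target slip of 0.2 and the pressure step below are illustrative assumptions, not values from the cited studies; real controllers also model hydraulic valve dynamics and estimate wheel speed from sensors.

```python
def slip_ratio(vehicle_speed: float, wheel_speed: float) -> float:
    """Longitudinal slip: 0 = free rolling, 1 = fully locked wheel."""
    if vehicle_speed <= 0.0:
        return 0.0
    return (vehicle_speed - wheel_speed) / vehicle_speed


def abs_pressure_command(pressure: float, lam: float,
                         target: float = 0.2, step: float = 5.0) -> float:
    """Bang-bang (cadence) rule: dump brake pressure when slip exceeds the
    target, reapply it when slip falls below; pressure never goes negative."""
    if lam > target:
        return max(0.0, pressure - step)  # release: wheel is about to lock
    return pressure + step                # reapply: wheel still rolls freely


if __name__ == "__main__":
    lam = slip_ratio(vehicle_speed=20.0, wheel_speed=12.0)  # slip 0.4: too much
    print(abs_pressure_command(pressure=50.0, lam=lam))     # pressure is released
```

Repeating this release/reapply cycle many times per second is what keeps the measured slip oscillating around the optimum, where tyre grip is greatest.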
2.5 Braking System in Electric Vehicles (EV Braking)

Electric vehicle adoption has expanded in recent years, with systems featuring two- and four-wheeled drive mechanics. Two-wheeled EVs use a rear-wheel motor-driven system, whereas four-wheeled EVs use either a front or rear motor-driven system [23]. In parallel, there has been significant advancement in regenerative braking systems over the past decade, aimed at safer, more effective, and more cost-effective braking in EVs [24]. Vehicle range is enhanced if some of the otherwise wasted energy can be repurposed, which is where regenerative braking plays an important role [25] (Fig. 6). Regenerative braking is a mechanism in which part of the vehicle's kinetic energy is saved via a quick storage system. During braking, energy ordinarily dissipated in the brakes is sent to the energy store through a power transfer system. That energy is held until the vehicle needs it again, at which point it is converted back into kinetic energy and used to drive the vehicle [26]. It has been established that an EV with regenerative braking may have a 15% greater driving range than an EV with only mechanical brakes [27].
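The range gain quoted above follows from recovering a fraction of each braking event's kinetic-energy change. The sketch below estimates the energy returned to the store in one stop; the vehicle mass, speeds, and the 60% round-trip recovery efficiency are assumed, illustrative figures, not values from the cited studies.

```python
def recovered_energy(mass_kg: float, v_start: float, v_end: float,
                     recovery_eff: float) -> float:
    """Energy (J) returned to the store when slowing from v_start to v_end.

    Only the fraction recovery_eff of the kinetic-energy change survives
    motor/generator, power-electronics, and battery losses.
    """
    delta_ke = 0.5 * mass_kg * (v_start ** 2 - v_end ** 2)
    return recovery_eff * delta_ke


if __name__ == "__main__":
    # Hypothetical city stop: a 1500 kg EV braking from 50 km/h to rest,
    # assuming a 60% round-trip recovery efficiency.
    e = recovered_energy(1500.0, 50 / 3.6, 0.0, 0.60)
    print(f"Energy recovered per stop: {e / 3600:.1f} Wh")
```

Over hundreds of stops in urban driving, these per-stop recoveries accumulate into the kind of range extension reported in [27].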
2.6 Contactless Braking System

Electromagnetic braking systems apply the wheel brakes using electrical and magnetic power [28]. To do so, the system makes use of the electromagnetic
Fig. 7 Schematic model of contactless braking [28]
principle [29]. Eddy-current braking achieves frictionless braking and increases the dependability and durability of brakes, because they hardly wear out from friction over time; in addition, when exposed to a magnetic field, magnetorheological (MR) fluids exhibit a drastic shift in their viscosity and elasticity within milliseconds [30] (Fig. 7). While the vehicle travels, a changing magnetic field is formed in the air gap as a magnetic source moves and/or fluctuates above a conducting plate. This field induces eddy currents in the plate, which can generate both tangential and normal forces. Such a system needs little upkeep and no frequent lubrication [31]. The approach attempts to raise braking efficiency further by eliminating the frictional loss inherent in all contact braking systems. Although friction brakes are compact and efficient, they have several drawbacks, some of which are petty irritations and others a major strain on users and owners, whether private or commercial [32].
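A first-order model of the eddy-current drag described above treats the tangential force as proportional to the plate's conductivity and thickness, the pole-face area, the square of the flux density, and the relative speed. The sketch below uses this common low-speed approximation with hypothetical parameters; it ignores skin effect and the force fall-off that occurs at high speed.

```python
def eddy_brake_force(sigma: float, thickness_m: float, area_m2: float,
                     b_tesla: float, speed_ms: float) -> float:
    """Low-speed eddy-current drag force (N): F ~ sigma * d * A * B^2 * v."""
    return sigma * thickness_m * area_m2 * b_tesla ** 2 * speed_ms


if __name__ == "__main__":
    # Hypothetical aluminium plate (sigma ~ 3.5e7 S/m, 5 mm thick) under
    # a 1 T pole face of 25 cm^2, moving at 10 m/s.
    f = eddy_brake_force(sigma=3.5e7, thickness_m=0.005, area_m2=0.0025,
                         b_tesla=1.0, speed_ms=10.0)
    print(f"Drag force: {f:.0f} N")
```

Note that the force vanishes as the speed approaches zero, which is why eddy-current brakes are typically paired with a friction or MR brake for holding the vehicle at rest.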
3 Conclusion

The evolution of brake technology has had a significant beneficial influence on vehicle safety and stability by lowering braking distance and raising efficiency (Fig. 8). The braking mechanism evolves in tandem with the evolution of the car. Autonomous braking technologies help drivers maintain greater control of their vehicles in situations where emergency braking is required. The most recent stopping system for automobiles, the contactless braking system, has the potential to achieve reliability through computerised control of the braking system in small automated vehicles such as Automated Guided Vehicles, Autonomous Mobile Robots, and warehouse bots, opening a greater scope in the field of robotics.
Fig. 8 Braking distance and advancement in braking system over the years [33]
References

1. Atmur SD, Strasser TE (1996) US Patent No. 5,560,455. US Patent and Trademark Office, Washington, DC
2. Skorupka Z (2013) Braking moment comparison and analysis for various brake designs using results from sample and full-scale friction material tests. J KONES 20
3. Bohr EN, Ukpaka CP, Nkoi B (2018) Reliability analysis of an automobile brake system to enhance performance. Int J Prod Eng 4(2):47–56
4. Monteith MJ, Ashburn-Nardo L, Voils CI, Czopp AM (2002) Putting the brakes on prejudice: on the development and operation of cues for control. J Pers Soc Psychol 83(5):1029
5. Owen C (2010) Automotive brake systems, classroom manual. Today's Technician. Delmar Cengage Learning
6. Pitt W (1919) On friction brakes with straps stiffened by wood blocks. Proc Inst Mech Eng 97(1):587–592
7. Johnson A (2009) Hitting the brakes: engineering design and the production of knowledge. Duke University Press
8. Diwakar LB, Diwakar SL, Deshpande VV (2020) Design and selection of the braking system for all terrain vehicle. Int J Eng Res Technol 9:730–733
9. Soni K, Vara G, Sheth I, Patel H (2018) Design and analysis of braking system for ISIE ESVC. Int J Appl Eng Res 13(10):8572–8576
10. Baba SNN, Hamid MNA, Soid SNM, Zahelem MN, Omar MS (2018) Analysis of drum brake system for improvement of braking performance. In: Engineering applications for new materials and technologies. Springer, Cham, pp 345–357
11. Huang J, Krousgrill CM, Bajaj AK (2006) Modelling of automotive drum brakes for squeal and parameter sensitivity analysis. J Sound Vib 289(1–2):245–263
12. Kennedy FE Jr (1988) Discussion: "An analysis of speed, temperature, and performance characteristics of automotive drum brakes" (Day AJ). ASME J Tribol 110:298–303
13. Sasaki Y (1995) Development philosophy of friction materials for automobile disc brakes. In: Proceedings of the eighth international Pacific conference on automotive engineering, Yokohama, vol 407, p 2
14. Sarkar S, Rathod PP, Modi AJ (2014) Research paper on modelling and simulation of disc brake to analyse temperature distribution using FEA. Int J Sci Res Dev 2:491–494
15. Nejad S, Kheybari M (2017) Brake system design for sports cars using digital logic method. Autom Sci Eng 7(4):2571–2582
16. Laksono PW, Kusumawardani CA (2017) Kanban system implementation in cardboard supply process (case study: PT. Akebono Brake Astra Indonesia, Jakarta). In: AIP conference proceedings, vol 1902, no 1. AIP Publishing, p 020033
17. Maluf O, Angeloni M, Milan MT, Spinelli D, Bose Filho WW (2007) Development of materials for automotive disc brakes. Minerva 4(2):149–158; Surblys V, Sokolovskij E (2016) Research of the vehicle brake testing efficiency. Procedia Eng 134:452–458
18. Sarkar S, Mistry D, Raval S, Vadhnere H, Suryawala N (2020) A review paper on pneumatic controlled ABS
19. Bera TK, Bhattacharya K, Samantaray AK (2011) Evaluation of antilock braking system with an integrated model of full vehicle system dynamics. Simul Model Pract Theory 19(10):2131–2150
20. Bhasin K (2019) A review paper on anti-lock braking system (ABS) and its future scope. Int J Res Appl Sci Eng Technol
21. Peng D, Zhang Y, Yin CL, Zhang JW (2008) Combined control of a regenerative braking and antilock braking system for hybrid electric vehicles. Int J Automot Technol 9(6):749–757
22. Bao C (2012) The research and design of a new ABS system for automotive by using the GMR sensor. Doctoral dissertation, University of York
23. Jamadar NM, Jadhav HT (2021) A review on braking control and optimization techniques for electric vehicle. Proc Inst Mech Eng Part D: J Automob Eng 235(9):2371–2382
24. Clegg SJ (1996) A review of regenerative braking systems
25. Yanan G (2016) Research on electric vehicle regenerative braking system and energy recovery. Int J Hybrid Inf Technol 9(1):81–90
26. Shi Q, Zhang C, Cui N (2012) An improved electric vehicle regenerative braking strategy research. In: Advances in computer science and information engineering. Springer, Berlin, Heidelberg, pp 637–642
27. Hinov NL, Penev DN, Vacheva GI (2016) Ultra-capacitors charging by regenerative braking in electric vehicles. In: 2016 XXV international scientific conference electronics (ET). IEEE, pp 1–4
28. Gao Y, Chu L, Ehsani M (2007) Design and control principles of hybrid braking system for EV, HEV and FCV. In: 2007 IEEE vehicle power and propulsion conference. IEEE, pp 384–391
29. Krishnan PP (2018) Contactless eddy braking system. EPH-Int J Sci Eng 1(1):51–59
30. Sukhwani VK, Hirani H (2008) Design, development, and performance evaluation of high-speed magnetorheological brakes. Proc Inst Mech Eng Part L: J Mater Des Appl 222(1):73–82
31. Paudel N, Paul S, Bird JZ (2012) General 2-D transient eddy current force equations for a magnetic source moving above a conductive plate. Prog Electromagnet Res B 43:255–277
32. Gay SE (2010) Contactless magnetic brake for automotive applications. Doctoral dissertation, Texas A&M University
33. Niemz T, Reul M, Winner H (2007) A new slip controller to reduce braking distance by means of active shock absorbers. SAE Technical Paper 2007-01-3664