Application of Machine Learning Models in Agricultural and Meteorological Sciences
ISBN 9811997322, 9789811997327

This book is a comprehensive guide to agricultural and meteorological prediction.


English · 200 pages · 2023


Table of contents :
Preface
Contents
1 The Importance of Agricultural and Meteorological Predictions Using Machine Learning Models
1.1 Introduction
1.2 The Necessity of Meteorological Variables Prediction
1.3 The Necessity of Agricultural Factors Prediction
1.4 Conclusion
References
2 Structure of Particle Swarm Optimization (PSO)
2.1 Introduction
2.2 Structure of Particle Swarm Optimization
2.3 The Application of PSO in Meteorological Field
2.4 The Application of PSO in Agricultural Studies
2.5 The Application of PSO in Other Related Studies
2.6 Conclusion
References
3 Structure of Shark Optimization Algorithm
3.1 Introduction
3.2 The Structure of Shark Algorithm
3.3 Application of SSO in Climate Studies
3.4 Application of SSO in Agricultural Studies
3.5 Application of SSO in Other Studies
3.6 Conclusion
References
4 Sunflower Optimization Algorithm
4.1 Introduction
4.2 Applications of SFO in the Different Fields
4.3 Structure of Sunflower Optimization Algorithm
References
5 Henry Gas Solubility Optimizer
5.1 Introduction
5.2 Application of HGSO in Different Fields
5.3 Structure of Henry Gas Solubility
References
6 Structure of Crow Optimization Algorithm
6.1 Introduction
6.2 The Application of the COA
6.3 Mathematical Model of COA
References
7 Structure of Salp Swarm Algorithm
7.1 Introduction
7.2 The Application of the Salp Swarm Algorithm in Different Fields
7.3 Structure of Salp Swarm Algorithm
References
8 Structure of Dragonfly Optimization Algorithm
8.1 Introduction
8.2 Application of Dragonfly Optimization Algorithm
8.3 Structure of Dragonfly Optimization Algorithm
References
9 Rat Swarm Optimization Algorithm
9.1 Introduction
9.2 Applications of Rat Swarm Algorithm
9.3 Structure of Rat Swarm Optimization Algorithms
References
10 Antlion Optimization Algorithm
10.1 Introduction
10.2 Mathematical Model of ALO
10.3 Mathematical Model of ALO
References
11 Predicting Evaporation Using Optimized Multilayer Perceptron
11.1 Introduction
11.2 Review of the Previous Works
11.3 Structure of MULP Models
11.4 Hybrid MULP Models
11.5 Case Study
11.6 Results and Discussion
11.6.1 Choice of Random Parameters
11.6.2 Investigating the Accuracy of Models
11.6.3 Discussion
11.7 Conclusion
References
12 Predicting Rainfall Using Inclusive Multiple Model and Radial Basis Function Neural Network
12.1 Introduction
12.2 Structure of Radial Basis Function Neural Network (RABFN)
12.3 RABFN Models
12.4 Structure of Inclusive Multiple Model
12.5 Case Study
12.6 Results and Discussion
12.6.1 Choice of Random Parameters
12.6.2 Investigating the Accuracy of Models
12.6.3 Discussion
12.7 Conclusion
References
13 Predicting Temperature Using Optimized Adaptive Neuro-fuzzy Inference System and Bayesian Model Averaging
13.1 Introduction
13.2 Structure of ANFIS Models
13.3 Hybrid ANFIS Models
13.4 Bayesian Model Averaging (BMA)
13.5 Case Study
13.6 Results and Discussion
13.6.1 Determination of the Size of Data
13.6.2 Determination of Random Parameters Values
13.6.3 Evaluation of the Accuracy of Models
13.6.4 Discussion
13.7 Conclusion
References
14 Predicting Evapotranspiration Using Support Vector Machine Model and Hybrid Gamma Test
14.1 Introduction
14.2 Review of Previous Papers
14.3 Structure of Support Vector Machine
14.4 Hybrid SVM Models
14.5 Theory of Gamma Test
14.6 Case Study
14.7 Results and Discussion
14.7.1 Choice of the Algorithm Parameters
14.7.2 The Input Scenarios
14.7.3 Assessment of the Performance of Models
14.7.4 Discussion
14.8 Conclusion
References
15 Predicting Infiltration Using Kernel Extreme Learning Machine Model Under Input and Parameter Uncertainty
15.1 Introduction
15.2 Structure of Kernel Extreme Learning Machines (KELM)
15.3 Hybrid KELM Model
15.4 Uncertainty of Input and Model Parameters
15.5 Case Study
15.6 Results and Discussion
15.6.1 Selection of Size of Data
15.6.2 Choice of Random Parameters of Optimization Algorithms
15.6.3 Evaluation of the Accuracy of Models
15.6.4 Discussion
15.7 Conclusion
References
16 Predicting Solar Radiation Using Optimized Generalized Regression Neural Network
16.1 Introduction
16.2 Structure of Generalized Regression Neural Network (GRNN)
16.3 Structure of Hybrid GRNN
16.4 Case Study
16.5 Results and Discussions
16.5.1 Selection of Random Parameters
16.5.2 Investigation of the Accuracy of Models
16.5.3 Discussion
16.6 Conclusion
References
17 Predicting Wind Speed Using Optimized Long Short-Term Memory Neural Network
17.1 Introduction
17.2 Structure of Long Short-Term Memory (LSTM)
17.3 Hybrid Structure of LSTM Models
17.4 Case Study
17.5 Results and Discussion
17.5.1 Selection of Random Parameters
17.5.2 Choice of Inputs
17.5.3 Investigation of the Accuracy of Models
17.5.4 Discussion
17.6 Conclusion
References
18 Predicting Dew Point Using Optimized Least Square Support Vector Machine Models
18.1 Introduction
18.2 Structure of the LSSVM Model
18.3 Hybrid Structure of the LSSVM Model
18.4 Case Study
18.5 Results and Discussion
18.5.1 Selection of Random Parameters
18.5.2 Selection of the Best Input Combination
18.5.3 Evaluation of the Accuracy of Models
18.6 Conclusion
References

Mohammad Ehteram · Akram Seifi · Fatemeh Barzegari Banadkooki

Application of Machine Learning Models in Agricultural and Meteorological Sciences


Mohammad Ehteram Department of Water Engineering and Hydraulic Structures Faculty of Civil Engineering Semnan University Semnan, Iran

Akram Seifi Department of Water Science and Engineering, College of Agriculture Vali-e-Asr University of Rafsanjan Rafsanjan, Iran

Fatemeh Barzegari Banadkooki Agricultural Department Payame Noor University Tehran, Iran

ISBN 978-981-19-9732-7 · ISBN 978-981-19-9733-4 (eBook)
https://doi.org/10.1007/978-981-19-9733-4
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
Published by Springer Nature Singapore Pte Ltd., 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

Meteorological and agricultural predictions are required for water resource planning and management. Meteorological predictions can be used to anticipate natural hazards such as flood and drought periods. Because meteorological and agricultural processes are complex and nonlinear, robust models are required to predict them. This book applies robust machine learning models to meteorological and agricultural prediction and introduces new optimization algorithms for training those models. It first presents the structure of the optimization algorithms, then the structures of the machine learning models and their optimized counterparts. Different case studies evaluate the ability of the new machine learning models to predict meteorological and agricultural variables. Decision-makers can use this book for managing watersheds, and scholars can use it to develop models in hydrological science.

Semnan, Iran
Rafsanjan, Iran
Tehran, Iran
October 2022

Mohammad Ehteram
Akram Seifi
Fatemeh Barzegari Banadkooki


Chapter 1

The Importance of Agricultural and Meteorological Predictions Using Machine Learning Models

Abstract This chapter reviews the applications of machine learning (ML) models for predicting meteorological and agricultural variables. The advantages and disadvantages of the models are explained, as is the importance of meteorological and agricultural predictions for water resource planning and management. The details of different machine learning models are given, and the applications of these models are then described. ML comprises different methods for learning predictive rules from historical datasets in order to predict unknown future data. Several studies have reported the superiority of ML techniques in agricultural and weather prediction, which can maximize agricultural profit.

Keywords Optimization algorithms · Agriculture systems · Machine learning models · Water resource management

1.1 Introduction

Accurate meteorological prediction information is crucial for recognizing the state of atmospheric conditions and for preventing or reducing the destructive effects of agricultural crises and disasters, such as tornadoes, floods, and major storms, on people and property. Using meteorological variables for weather forecasting and climate prediction is essential for many industries, especially agriculture. The main aim of weather prediction models is to predict meteorological variables, including air temperature, relative humidity, rainfall, dew point temperature, solar radiation, sunshine hours, and wind speed, from historical datasets, serving as decision support systems in agriculture (Jaseena & Kovoor, 2020; Ghanbari-Adivi et al., 2022; Farrokhi et al., 2021). Insufficient knowledge of soil properties (soil type, soil temperature, water content, salinity, spatial and temporal variations), crop properties (yield, water requirement, evapotranspiration, use of pesticides, harvesting, and marketing), weather, and irrigation problems can cause farmers to lose their farms and can reduce expected efficiency. Following the famous saying "Information is Power", farmers may make better decisions by tracking crop, market, and environment information (Meshram et al., 2021; Farrokhi et al., 2020). Prior meteorological information assists farmers


in improving crop yields by enabling them to make timely decisions (Salman et al., 2015). Hence, predicting agricultural information, such as crop yield, early and accurately contributes significantly to decision-making and to a country's growth (Sawasawa, 2003). Figure 1.1 shows the important parameters in studies related to agriculture, weather and climate, and irrigation problems. Over the last three decades, predicting agricultural, climatic, and irrigation factors has been considered one of the most significant developments in achieving precision agriculture. The status of crops can be estimated by monitoring environmental parameters (air quality, soil, weather, and water quality) in the field and in the greenhouse. As a result, precise prediction of agricultural factors using powerful algorithms and intelligent heuristic models provides detailed, valuable information that supports farming practices (Mancipe-Castro & Gutiérrez-Carvajal, 2022;

Fig. 1.1 Different parameters considered in studies related to agriculture, weather and climate, and irrigation problems


Nwokolo et al., 2022; Patil and Deka, 2016). One of the main challenges of prediction in agricultural studies is developing robust techniques that can produce accurate predictions from limited data. Data scarcity, missing data, ungauged stations, and other limitations are important issues in agricultural and climate prediction that must be addressed when developing forecasting systems for precision agriculture. In the past few decades, innovative technologies have revolutionized the world and human life to solve new, complex, and challenging problems. In terms of the models or methodologies used for predicting meteorological and agricultural factors, forecasting systems can be categorized into three types: statistical models, artificial intelligence (AI) models, and hybrid models. Statistical and mathematical models such as the auto-regressive moving average (ARMA), the auto-regressive integrated moving average (ARIMA), and their variants have been widely used over time (Table 1.1). Solving complex problems is difficult with these models because of the nonlinear behavior of, and relationships between, the components. Hence, methods that exhibit "intelligent behavior" have been developed.

Table 1.1 Stochastic models application

Researchers | Study description | Applied model
Aghelpour and Norooz-Valashedi (2022) | Predicting daily reference evapotranspiration rates in a humid region | Auto-regressive (AR), moving average (MA), ARIMA, ARMA, least square support vector machine (LSSVM), adaptive neuro-fuzzy inference system (ANFIS), and generalized regression neural network (GRNN)
Elsaraiti and Merabet (2021) | Wind speed prediction | ARIMA, artificial neural networks (ANNs), recurrent neural networks (RNNs), and long short-term memory (LSTM)
Praveen and Sharma (2020) | Climate variability and its impacts on agriculture production | ARIMA
Wen et al. (2019) | Linear modeling of agricultural management time series | ARIMA and SVM
Dwivedi et al. (2019) | Forecasting monthly rainfall | ARIMA and ANN
Katimon et al. (2018) | Predicting river water quality and hydrological variables | ARIMA, AR, and MA
Mossad and Alazba (2015) | Drought forecasting in a hyper-arid climate | ARIMA
Kumari et al. (2014) | Rice yield prediction | ARIMA
Fernández-González et al. (2012) | Estimation of atmospheric vineyard pathogens | ARIMA

Using these methods, valuable information can be extracted and used to recognize patterns and gain a deeper understanding of the problem (Yaghoubzadeh-Bavandpour et al., 2022). AI techniques are powerful methods for solving complex problems in several fields of science. Engelbrecht (2007) defined AI as "the study of adaptive mechanisms to facilitate intelligent behavior in complex and changing environments". AI models can be further classified into machine learning (ML) and deep learning predictors. Machine learning "gives computers the ability to learn without being explicitly programmed" (Samuel, 1967). ML is a process of learning from experience and given data in order to make predictions on new or future data. Technologies like ML can be used to gather and process information to increase crop production, improve crop quality, reduce crop disease, and increase the profitability of farmers. Precision learning is critical to improving harvest yields in agriculture (Meshram et al., 2021). Deep learning (DL) is a subset of ML that performs well on real-life, complex, and challenging problems (Fig. 1.2). ML models include artificial neural networks (ANNs), Bayesian models (BM), deep learning (DL), dimensionality reduction (DR), decision trees (DT), ensemble learning (EL), random forest (RF), and support vector machines (SVMs). The most important ML algorithms employed for weather forecasting are adaptive neuro-fuzzy inference systems (ANFIS), bootstrap aggregating (bagging), backpropagation networks (BPN), generalized regression neural networks (GRNN), K-nearest neighbors (KNN), multilayer perceptron (MLP), etc. (Jaseena & Kovoor, 2020; Liakos et al., 2018). Nonlinear datasets can be handled better using ML and deep learning models. Rapid development in AI and ML techniques has improved the precision of future weather, environmental, and agricultural predictions (Gheisari et al., 2017).
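To make the statistical-model category above concrete, the following is a minimal sketch (not from the book) of fitting the simplest ARMA-family member, an AR(1) model x_t = c + φ·x_{t-1} + e_t, by ordinary least squares and producing a one-step-ahead forecast. The synthetic "temperature-like" series and all variable names are invented for illustration.

```python
import random

random.seed(0)

# Generate a synthetic AR(1) series with known coefficients c=5.0, phi=0.8.
true_c, true_phi = 5.0, 0.8
series = [25.0]
for _ in range(499):
    series.append(true_c + true_phi * series[-1] + random.gauss(0.0, 1.0))

# Ordinary least squares on lagged pairs (x_{t-1}, x_t) recovers phi and c.
x_prev, x_next = series[:-1], series[1:]
n = len(x_prev)
mean_prev = sum(x_prev) / n
mean_next = sum(x_next) / n
cov = sum((a - mean_prev) * (b - mean_next) for a, b in zip(x_prev, x_next))
var = sum((a - mean_prev) ** 2 for a in x_prev)
phi_hat = cov / var
c_hat = mean_next - phi_hat * mean_prev

# One-step-ahead forecast from the last observation.
forecast = c_hat + phi_hat * series[-1]
print(f"phi={phi_hat:.2f}, c={c_hat:.2f}, next-step forecast={forecast:.1f}")
```

In practice one would use a library estimator (e.g. an ARIMA implementation) with differencing and moving-average terms; this sketch only shows why such linear models struggle with the nonlinear relationships the chapter describes, since the fitted rule is a single straight line through the lag plot.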
These techniques provide predictions based on real-world data (Meenal et al., 2022). Table 1.2 shows the applications of ML and DL algorithms in agriculture and weather studies.

Fig. 1.2 Hierarchy diagram of artificial intelligence, machine learning, and deep learning (Arumugam et al., 2022)

Table 1.2 Machine learning and deep learning models application
Researchers | Study description | Applied model
Band et al. (2022) | Estimation of long-term mean monthly wind speed | ANN, RF, gene expression programming (GEP), and multivariate adaptive regression spline (MARS)
Chaudhary et al. (2022) | Soil moisture estimation | SVM, RF, MLP, ANFIS, radial basis function (RBF), Wang and Mendel's (WM), subtractive clustering (SBC), hybrid fuzzy inference system (HyFIS), and dynamic evolving neural fuzzy inference system (DENFIS)
Zongur et al. (2022) | Detection of the diameter of the inhibition zone of gilaburu (Viburnum opulus L.) extract against eight Fusarium strains isolated from diseased potato tubers | SVM, KNN, RF, ensemble algorithms (EA), AdaBoost (AB), and gradient boosting (GBM)
Buyrukoğlu (2021) | Prediction of Salmonella presence in agricultural surface waters | SVM, ANN, RF, and Naïve Bayes (NB)
Yoosefzadeh-Najafabadi et al. (2021) | Predicting soybean (Glycine max) seed yield | MLP, SVM, and RF
Nuruzzaman et al. (2021) | Identifying potato species | RF, linear discriminant analysis (LDA), logistic regression, SVM, CART, NB, and KNN
Wu et al. (2019) | Estimation of monthly mean daily reference evapotranspiration | ANN, RF, gradient boosting decision tree (GBDT), extreme gradient boosting (XGBoost), MARS, SVM, and kernel-based nonlinear extension of Arps decline (KNEA)
Fan et al. (2018) | Predicting daily global solar radiation | SVM and XGBoost
Mokhtarzad et al. (2017) | Drought forecasting | ANFIS, ANN, and SVM
Goel and Sehgal (2015) | Estimation of tomato ripeness based on color | Fuzzy rule-based classification system (FRBCS)
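The studies in Table 1.2 share one workflow: fit a model to historical samples, then score it on held-out data. As a minimal, language-agnostic sketch of that loop, the following pure-Python example fits an ordinary least-squares line to hypothetical daily temperature readings and reports the test RMSE (all data values are invented for illustration):

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b (closed-form solution)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def rmse(ys, preds):
    """Root-mean-square error between observations and predictions."""
    return math.sqrt(sum((y - p) ** 2 for y, p in zip(ys, preds)) / len(ys))

# Hypothetical training data: day index vs. mean air temperature (degC)
xs = [1, 2, 3, 4, 5, 6]
ys = [20.1, 20.9, 22.2, 22.8, 24.1, 24.9]

a, b = fit_line(xs, ys)

# Held-out (hypothetical) days used only for evaluation
test_xs, test_ys = [7, 8], [26.0, 27.1]
err = rmse(test_ys, [a * x + b for x in test_xs])
print(round(a, 2), round(err, 2))
```

The ML models surveyed here replace the straight line with far more flexible function families (ANNs, SVMs, tree ensembles), but the fit-then-evaluate structure is the same.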


Two or more models are combined in hybrid models to improve forecasting performance (Jaseena & Kovoor, 2020). Hybrid models have provided more accurate predictions than conventional and standalone artificial intelligence-based forecasting models (Seifi et al., 2021a, 2021b). Predictions can become more accurate and effective with the advancement of hybrid models combining ML, optimization techniques, and suitable data visualization methods (Gheisari et al., 2017). Recently, meta-heuristic/swarm intelligence optimization algorithms have been developed to find the optimal solutions of a problem. These algorithms provide an opportunity to overcome problems related to the stability of individual models. Table 1.3 shows the applications of hybrid algorithms in agriculture and weather studies.

Table 1.3 Hybrid models application
Researchers | Study description | Applied model
Bazrafshan et al. (2022) | Tomato yield prediction | ANFIS and MLP hybridized with multiverse optimization algorithm (MOA), particle swarm optimization (PSO), and firefly algorithm (FFA)
Seifi et al. (2022) | Prediction of pan evaporation | ANFIS hybridized with seagull optimization algorithm (SOA), crow search algorithm (CA), FFA, and PSO
Sahoo et al. (2021) | Flood prediction | Radial basis function–firefly algorithm (RBF-FA) and support vector machine–firefly algorithm (SVM-FA)
Seifi et al. (2021a) | Global horizontal irradiation prediction | ANFIS and ELM hybridized with multiverse optimization (MVO), sine cosine algorithm (SCA), and salp swarm algorithm (SSA)
Seifi et al. (2021b) | Soil temperature prediction | ANFIS, SVM, RBFNN, and MLP hybridized with sunflower optimization (SFO), FFA, SSA, and PSO
Ehteram et al. (2021c) | Infiltration rate prediction | ANFIS hybridized with SCA, PSO, and FFA
Ashofteh et al. (2015) | Irrigation allocation policy under climate change | Genetic programming
Noory et al. (2012) | Irrigation water allocation and multicrop planning optimization | Discrete PSO algorithm

There is a need to increase cereal production to about 3 billion tons and meat production by over 200% by 2050 to meet the needs of the global population (Trendov et al., 2019). Weather uncertainty, climate change crises, rising food prices, and the rapid growth of the world's population have all led agricultural sectors to look for new ways to maximize harvests. Hence, the use of ML techniques in agriculture is increasing as technology advances (Shaikh et al., 2022). In preharvesting, ML models predict crop, agricultural, and environmental conditions, including seed quality, genetics, soil, pruning, fertilizer application, temperature, and humidity. By considering each component, production losses can be minimized. ML techniques also help farmers reduce harvesting and post-harvesting losses (Meshram et al., 2021). Key strengths of ML in agriculture are its adaptability, speed, precision, and cost-effectiveness, and farmers can increase yields and improve quality with fewer resources by using AI and ML techniques (Shaikh et al., 2022). This chapter focuses on ML applications in agriculture, weather and climate, and irrigation problems.
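In most of the hybrid studies listed in Table 1.3, the metaheuristic's job is to search the predictor's free parameters so that a validation error is minimized. The sketch below is a minimal, generic PSO with the standard inertia-weight velocity update, minimizing a hypothetical one-dimensional error surface; in a real hybrid, `loss()` would retrain and score the ML model for each candidate parameter value:

```python
import random

random.seed(0)

def loss(w):
    # Hypothetical validation-error surface over a single model parameter w;
    # a real hybrid would retrain the ML model here and return its error.
    return (w - 1.7) ** 2 + 0.3

def pso(n_particles=10, iters=50, lo=-5.0, hi=5.0):
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                        # personal best position per particle
    pbest_val = [loss(p) for p in pos]
    g = pbest[min(range(n_particles), key=lambda i: pbest_val[i])]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            # inertia + cognitive (pull to own best) + social (pull to swarm best)
            vel[i] = 0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i]) + 1.5 * r2 * (g - pos[i])
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            v = loss(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], v
                if v < loss(g):
                    g = pos[i]
    return g

best = pso()
print(round(best, 2))
```

The swarm converges on the minimizer of the loss surface (1.7 in this toy setup). Chapter 2 develops the PSO update equations in detail; the coefficients 0.7 and 1.5 used here are common textbook defaults, not values prescribed by any of the cited studies.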

1.2 The Necessity of Meteorological Variables Prediction

Weather and climate are two parts of meteorology that largely influence human life. Identifying and predicting weather and climate is important for studies such as agricultural management (Parasyris et al., 2022). Weather prediction and climate considerations are also highly correlated with decision-making in agriculture, water resources management, irrigation management, drought and flood management, etc. The spatial and temporal distribution of most meteorological variables is commonly used in a variety of scientific fields, including climatology (Philandras et al., 2015; Varquez & Kanda, 2018), environmental and ecological studies (Paul et al., 2019; Zanobetti & Schwartz, 2008), hydrological studies (Wang et al., 2009), agriculture (Wakamatsu, 2010), epidemiology (Bunker et al., 2016), and many others. Information on meteorological variables is often limited in time and space, so there is a critical need to predict these variables (Kloog et al., 2014). Moreover, as global climate change continues, developing reliable and accurate models is imperative, especially with limited data. Hanoon et al. (2021) showed that univariate ARMA models overestimate low temperature values and underestimate high values, resulting in poor water resources planning and management. Therefore, developing reliable, low-uncertainty prediction models that avoid these shortcomings is necessary for predicting meteorological variables (Ehteram et al., 2021a; El-Shafie et al., 2014). A great deal of spatial variability, temporal variation, and stochastic behavior is associated with meteorological variables, making them difficult to predict with simple models (Adnan et al., 2021). The classification of different models for predicting meteorological variables is given in Fig. 1.3.
Despite the importance of conceptual models for identifying meteorological processes, they have encountered many practical challenges, particularly when accuracy is the most important factor (El-Shafie et al., 2011). Rather than building conceptual models, exploring and developing data-driven models could be more beneficial. In different studies, models based on data have been found to provide accurate predictions (Ehteram et al., 2021b; Seifi et al., 2022). However, due to the nonlinear dynamics inherent in meteorological phenomena,
most models may not always perform well and may lead to undesirable results in many cases (Hanoon et al., 2021). Since ML approaches include both effective structure and efficient learning, they have been proposed as an alternative modeling technique for dynamic systems with nonlinear behavior (Ahmed et al., 2019). ML models require less information to make accurate predictions of future time series. The internal network parameters are determined from the available time series by applying a suitable tuning algorithm. This can also include adjusting the initially selected network structure to find a better model structure for the specific problem under study (Palit & Popovic, 2006). Recently, ML techniques have advanced remarkably in modeling dynamic and nonlinear systems across different scientific applications (Ehteram et al., 2021c; Panahi et al., 2021; Seifi et al., 2021b, 2022) (Fig. 1.3). Thus, ML could serve as an efficient approach for modeling meteorological processes, with the main advantage that it can succeed even when explicit knowledge of the internal meteorological process is unavailable. Although these ML models have proven effective, it is still not known which of the ML models would be the most appropriate option for certain system processes. Therefore, evaluating and comparing various ML modeling approaches is necessary to select the most appropriate one (Hanoon et al., 2021).

Fig. 1.3 Classification of predicting models related to weather and climate studies

Overall, many studies have confirmed the efficiency of ML techniques for predicting meteorological variables (Table 1.4). For example, Mohammadi et al. (2015) used extreme learning machine (ELM), SVM, and ANN models to predict daily dew point temperature. The findings demonstrated that the applied ML models have a high potential for this task, with the ELM model outperforming the SVM and ANN techniques. Azad et al. (2019) investigated the ability of standalone ANFIS to predict monthly air temperature; in another attempt, the ANFIS model was optimized using a genetic algorithm (GA), PSO, ant colony optimization for continuous domains (ACOR), and differential evolution (DE). The results indicated that the hybrid ANFIS-GA model had the best performance in predicting maximum air temperature. Pham et al. (2020) presented that hybrid ANFIS-PSO, SVM, and ANN models are useful for predicting daily rainfall from other meteorological variables (maximum temperature, minimum temperature, wind speed, relative humidity, and solar radiation); among the applied ML models, the SVM was the most efficient at predicting rainfall. Qadeer et al. (2021) predicted relative humidity using RF and SVM models, with a commercial process simulator (Aspen HYSYS® V10) used to create a data mining environment. The prediction accuracy of the RF model was 74.4% higher than that of the SVM, and compared to Aspen HYSYS, the RF and SVM models predicted relative humidity with mean absolute deviations of 1.1% and 4.3%, respectively. Jin et al. (2022) applied an ensemble model integrating two regression models, the generalized additive model (GAM) and the generalized additive mixed model (GAMM), and two ML models, RF and extreme gradient boosting (XGBoost), to estimate daily mean air temperature from satellite-based land surface temperature. The results showed that the ensemble model had the highest performance (R2 = 0.98, RMSE = 1.38 °C), and the two ML models [RF (R2 = 0.97), XGBoost (R2 = 0.98)] were more accurate than the two regression models [GAM (R2 = 0.95), GAMM (R2 = 0.96)]. Malik et al. (2022a, 2022b) reviewed ANN-based models for predicting solar radiation and wind speed and demonstrated that artificial neural systems are well suited to forecasting highly nonlinear meteorological data.
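Comparisons like those above are usually reported with R² and RMSE. A quick sketch of both metrics on hypothetical observed and predicted air temperature series:

```python
import math

def rmse(obs, pred):
    """Root-mean-square error: typical magnitude of prediction error."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def r2(obs, pred):
    """Coefficient of determination: 1 - residual SS / total SS."""
    mean_o = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

obs = [14.2, 15.0, 16.1, 17.3, 18.0]   # hypothetical observed air temperature (degC)
pred = [14.0, 15.2, 16.0, 17.5, 18.1]  # hypothetical model output
print(round(r2(obs, pred), 3), round(rmse(obs, pred), 3))
```

R² approaches 1 as predictions explain more of the variance in the observations, while RMSE carries the units of the variable itself (°C above), which is why studies such as Jin et al. (2022) report both.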

Table 1.4 Application of machine learning in the prediction of meteorological variables and weather (studies: Nayak et al. (2012); Luna et al. (2014); Ahmed (2015); Mohammadi et al. (2015); Piri et al. (2016); Khajure and Mohod (2016); Shabariram et al. (2016); Rasel et al. (2017); Liu et al. (2018); Khosravi et al. (2018); Tran Anh et al. (2019); Namboori (2020); Başakın et al. (2022); models: ANN, SVM/SVR, ANFIS, FIS, GPR, MARS, RF; predicted variables: air temperature, relative humidity, solar radiation, wind speed, dew point temperature, rainfall, air pressure, CO2 emission, and ozone level)

1.3 The Necessity of Agricultural Factors Prediction

Agriculture plays a vital role in the global economy, and the increasing human population will put further pressure on the agricultural system. Precision farming, also known as digital agriculture, is a new field of science that uses data-intensive methods to maximize agricultural productivity and minimize its environmental impact. Intelligent systems offer a possible way to deal with ecological, social, and economic problems in agriculture. The
emergence of ML and big data technologies has created new possibilities for unraveling, quantifying, and understanding agricultural operations (Couliably et al., 2022; Liakos et al., 2018). AI, ML, and deep learning techniques offer many opportunities to assist agriculture at various stages, helping it overcome challenges and achieve its objectives (Couliably et al., 2022; Gupta, 2019), as presented in Table 1.5 and summarized as follows:

1. Agricultural information processing: Plant and animal health monitoring is crucial to agricultural production. Through models, disease detection becomes more feasible, resulting in a higher level of crop production; environmental and climatic variables are important in the onset of specific crop diseases.
2. Optimal control of agricultural production systems: Farmers' knowledge and experience, together with expert knowledge, control agricultural production systems, which ignores the physiological conditions of plants. With new crop management practices, farmers can manage crops with minimal effort.
3. Intelligent farm machinery and equipment: Agriculture involves many different tasks, from planting to post-harvesting. Models based on ML can be used to make highly accurate predictions and analyze agricultural data efficiently.
4. Management of the agricultural economic system: The output of agriculture alone is insufficient; the prices and quality of agricultural products must also be considered, so forecasting agricultural prices is very important. Weather forecasting, which plays a crucial role in agriculture, has also been significantly advanced by artificial intelligence.

Table 1.5 Different objectives for developing machine learning related to agriculture
Researchers | Study description | Applied model
Malik et al. (2022a, 2022b) | Predicting daily soil temperature at multiple depths | SVM, MLP, ANFIS
Fan et al. (2021) | Estimation of daily maize transpiration | SVM, ANN, XGBoost
Wu et al. (2021) | Prediction of yield loss in frost-damaged winter wheat | Partial least square regression (PLSR), SVR
Sujatha et al. (2021) | Citrus plant disease detection | SVM, RF, DL
Chen et al. (2021) | Automated agriculture commodity price prediction | SVR, XGBoost
Wu et al. (2020) | Detecting rice kernels of different varieties | RF-ELM
Yakut and Süzülmüş (2020) | Prediction of monthly mean air temperature | ANN, ANFIS, SVM
dos Santos Silva and de Oliveira (2019) | Prediction and selection of agronomic and physiological traits for tolerance to water deficit in cassava | ANN, SVM, ELM, generalized linear model with stepwise feature selection (GLMSS), partial least squares (PLS)
Verma et al. (2018) | Identification and diagnosis of tomato plant diseases | ANN, SVM, ELM, DL
Yue et al. (2018) | Prediction of temperature and humidity of a greenhouse | Levenberg–Marquardt radial basis function neural network

The applications of ML in agriculture have been classified into different categories, including crop management, water management, climate change, and soil management (Fig. 1.4). In the crop section, the capability of ML is used in several subcategories, including yield prediction, crop quality, weed detection, disease detection, and species recognition. Yield prediction is one of the most important topics in precision agriculture and is significant for increasing crop production through yield mapping, yield estimation, crop supply and demand balancing, and crop management. ML models have been used in studies on counting coffee fruits (Ramos et al., 2017), automating shaking and catching cherries during harvest (Amatya et al., 2016), developing a green citrus yield mapping system (Sengupta & Lee, 2014), grassland biomass estimation (Ali et al., 2016), rice development stage prediction (Su et al., 2017), and wheat yield prediction (Kamir et al., 2020). Water management plays a significant role in the hydrological, climatological, and agronomical balance of agricultural production. Among water management factors, evapotranspiration and evaporation are highly important for water resources management in crop production and for the design and optimal operation of irrigation systems.
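Before any of these predictors can be trained on a meteorological record, the record is typically recast as a supervised learning problem by using lagged past values as features. A minimal sketch of that preprocessing step, with hypothetical daily reference-evapotranspiration values:

```python
def make_lagged(series, n_lags):
    """Turn a univariate series into (features, target) rows using n_lags past values."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(series[t - n_lags:t])  # the n_lags values preceding day t
        y.append(series[t])             # the value to predict on day t
    return X, y

# Hypothetical daily reference-evapotranspiration values (mm/day)
et0 = [3.1, 3.4, 3.3, 3.8, 4.0, 4.2, 4.1, 4.5]
X, y = make_lagged(et0, n_lags=3)
print(len(X), X[0], y[0])
```

Each row of `X` then feeds an ML model (ANN, SVM, tree ensemble, etc.) exactly as any other feature vector would; the studies cited in this section additionally mix in concurrent meteorological variables such as temperature and humidity.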
Fig. 1.4 Some main applications of machine learning in agriculture modeling (Meshram et al., 2021; Saggi & Jain, 2022)

Several researchers have investigated the capability of ML models for predicting daily, monthly, and annual evapotranspiration (Feng et al., 2017; Mehdizadeh et al., 2017; Seifi & Riahi, 2020) and evaporation rates (Seifi & Soroush, 2020; Seifi et al., 2022) using meteorological variables. The soil is a heterogeneous natural resource with complex mechanisms and processes, and studying soil properties makes it possible to understand the dynamics of ecosystems and the impact of agriculture (Liakos et al., 2018). Improved soil management can be achieved by accurately estimating soil temperature (Seifi et al., 2021b), soil moisture (Johann et al., 2016), soil drying (Coopersmith et al., 2014), and soil condition (Morellos et al., 2016). A region's soil temperature is essential for analyzing the effects of climate change on its environment and ecosystem, since soil temperature controls the interaction between the atmosphere and the ground. In addition, crop yield variability is also influenced by soil moisture. Because soil measurements are time-consuming and expensive, computational analysis based on ML techniques provides accurate and reliable predictions of soil properties (Liakos et al., 2018). Climate change affects environmental systems, human health, and agricultural production (Arunanondchai et al., 2018), and there is an internal correlation between agriculture and climate change in various respects. Water deficits and temperature extremes greatly influence plant physiology and growth (Thornton et al., 2014). Changes in rainfall, average temperatures, heat waves, growing weeds, pests, microbes, atmospheric CO2, and ozone levels affect agricultural yield (Raza et al., 2019). Droughts, floods due to heavy rainfall, temperature fluctuations, salinity, and insect pest attacks will decrease crop productivity, increasing the risk of starvation (Dhankher & Foyer, 2018). Overall, climate change and global warming negatively affect crops and humans. Some negative effects are (1) crop damage from extreme heat, (2) planning problems because of less reliable forecasts, (3) increased insect
infestations, (4) torrential rain, (5) increased drought, (6) increased weed growth, (7) increased moisture stress, (8) increased crop diseases, (9) increased ground-level ozone, which is toxic to green plants, (10) stronger storms and floods, (11) warming stress, (12) waterlogged land, (13) more soil erosion, and (14) decreased pesticide and herbicide efficiency (Raza et al., 2019). The Food and Agriculture Organization (FAO) reports that climate change events are increasing dramatically (Meenal et al., 2022). Recently, machine learning models have become popular techniques for investigating the interaction between climate change, agriculture, and environmental processes. Meenal et al. (2022) presented that new technologies like artificial intelligence, ML, and deep learning are required to produce weather forecasts for various kinds of weather systems and to distribute location-specific information over a targeted area. For example, Balogun and Tella (2022) used the ML models of RF, SVM, decision tree regression, and linear regression to evaluate the effects of meteorological variables on ozone concentration in Malaysia; all four models showed high predictive performance. Chen et al. (2022) applied ML algorithms, meteorological variables, and landscape data to map hourly air temperature at 1-km resolution. The assessment showed that meteorological variables, especially relative humidity, contributed the most to the air temperature mapping, and ML improved the reliability of the spatial refinement of hourly air temperature mapping.

1.4 Conclusion

Recently, different intelligent methods have been developed to solve complex, nonlinear, real-world problems. Artificial intelligence has become increasingly popular among prediction techniques and is widely used in the agricultural industry. Many crop, environmental, and irrigation concerns are common in agriculture, and the performance of agricultural systems must be improved by applying new technologies and methods. Since machine learning algorithms are predictive, they are suitable for solving various real-world problems across industries. Thus, accurate estimation of crop requirements, appropriate irrigation system design, weed detection, monitoring of plant and weather conditions, and crop yield prediction can all be enhanced using ML methods. ML comprises different methods for learning predictive rules from historical datasets in order to predict unknown future data. The superiority of ML techniques in agricultural and weather predictions, which can maximize agricultural profit, has been reported in several studies. This chapter presented applications of ML techniques in agriculture and investigated the importance of meteorological variables and weather prediction in agriculture.


References

Adnan, M., Adnan, R. M., Liu, S., Saifullah, M., Latif, Y., & Iqbal, M. (2021). Prediction of relative humidity in a high elevated basin of western Karakoram by using different machine learning models. In Weather forecast (pp. 1–20).
Aghelpour, P., & Norooz-Valashedi, R. (2022). Predicting daily reference evapotranspiration rates in a humid region, comparison of seven various data-based predictor models. Stochastic Environmental Research and Risk Assessment, 1–23.
Ahmed, A. N., Othman, F. B., Afan, H. A., Ibrahim, R. K., Fai, C. M., Hossain, M. S., Ehteram, M., & Elshafie, A. (2019). Machine learning methods for better water quality prediction. Journal of Hydrology, 578, 124084.
Ahmed, B. (2015). Predictive capacity of meteorological data: Will it rain tomorrow? In Science and Information Conference (SAI) (pp. 199–205). IEEE.
Ali, I., Cawkwell, F., Dwyer, E., & Green, S. (2016). Modeling managed grassland biomass estimation by using multitemporal remote sensing data—A machine learning approach. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 10(7), 3254–3264.
Amatya, S., Karkee, M., Gongal, A., Zhang, Q., & Whiting, M. D. (2016). Detection of cherry tree branches with full foliage in planar architecture for automated sweet-cherry harvesting. Biosystems Engineering, 146, 3–15.
Arumugam, K., Swathi, Y., Sanchez, D. T., Mustafa, M., Phoemchalard, C., Phasinam, K., & Okoronkwo, E. (2022). Towards applicability of machine learning techniques in agriculture and energy sector. Materials Today: Proceedings, 51, 2260–2263.
Arunanondchai, P., Fei, C., Fisher, A., McCarl, B. A., Wang, W., & Yang, Y. (2018). How does climate change affect agriculture? In The Routledge handbook of agricultural economics (pp. 191–210). Routledge.
Ashofteh, P. S., Haddad, O. B., Akbari-Alashti, H., & Marino, M. A. (2015). Determination of irrigation allocation policy under climate change by genetic programming. Journal of Irrigation and Drainage Engineering, 141(4), 04014059.
Azad, A., Pirayesh, J., Farzin, S., Malekani, L., Moradinasab, S., & Kisi, O. (2019). Application of heuristic algorithms in improving performance of soft computing models for prediction of min, mean and max air temperatures. Engineering Journal, 23(6), 83–98.
Balogun, A. L., & Tella, A. (2022). Modelling and investigating the impacts of climatic variables on ozone concentration in Malaysia using correlation analysis with random forest, decision tree regression, linear regression, and support vector regression. Chemosphere, 299, 134250.
Band, S. S., Ardabili, S., Mosavi, A., Jun, C., Khoshkam, H., & Moslehpour, M. (2022). Feasibility of soft computing techniques for estimating the long-term mean monthly wind speed. Energy Reports, 8, 638–648.
Başakın, E. E., Ekmekcioğlu, Ö., Çıtakoğlu, H., & Özger, M. (2022). A new insight to the wind speed forecasting: Robust multi-stage ensemble soft computing approach based on pre-processing uncertainty assessment. Neural Computing and Applications, 34(1), 783–812.
Bazrafshan, O., Ehteram, M., Latif, S. D., Huang, Y. F., Teo, F. Y., Ahmed, A. N., & El-Shafie, A. (2022). Predicting crop yields using a new robust Bayesian averaging model based on multiple hybrid ANFIS and MLP models. Ain Shams Engineering Journal, 13(5), 101724.
Bunker, A., Wildenhain, J., Vandenbergh, A., Henschke, N., Rocklöv, J., Hajat, S., & Sauerborn, R. (2016). Effects of air temperature on climate-sensitive mortality and morbidity outcomes in the elderly; a systematic review and meta-analysis of epidemiological evidence. eBioMedicine, 6, 258–268.
Buyrukoğlu, S. (2021). New hybrid data mining model for prediction of Salmonella presence in agricultural waters based on ensemble feature selection and machine learning algorithms. Journal of Food Safety, 41(4), e12903.
Chaudhary, S. K., Srivastava, P. K., Gupta, D. K., Kumar, P., Prasad, R., Pandey, D. K., Das, A. K., & Gupta, M. (2022). Machine learning algorithms for soil moisture estimation using Sentinel-1: Model development and implementation. Advances in Space Research, 69(4), 1799–1812.


Chen, Z., Goh, H. S., Sin, K. L., Lim, K., Chung, N. K. H., & Liew, X. Y. (2021). Automated agriculture commodity price prediction system with machine learning techniques. arXiv:2106.12747
Chen, G., Shi, Y., Wang, R., Ren, C., Ng, E., Fang, X., & Ren, Z. (2022). Integrating weather observations and local-climate-zone-based landscape patterns for regional hourly air temperature mapping using machine learning. Science of The Total Environment, 841, 156737.
Coopersmith, E. J., Minsker, B. S., Wenzel, C. E., & Gilmore, B. J. (2014). Machine learning assessments of soil drying for agricultural planning. Computers and Electronics in Agriculture, 104, 93–104.
Couliably, S., Kamsu-Foguem, B., Kamissoko, D., & Traore, D. (2022). Deep learning for precision agriculture: A bibliometric analysis. Intelligent Systems with Applications, 200102.
Dhankher, O. P., & Foyer, C. H. (2018). Climate resilient crops for improving global food security and safety. Plant, Cell & Environment, 41(5), 877–884.
dos Santos Silva, P. P., & de Oliveira, E. J. (2019). Prediction models and selection of agronomic and physiological traits for tolerance to water deficit in cassava. Euphytica, 215(4), 1–18.
Dwivedi, D. K., Kelaiya, J. H., & Sharma, G. R. (2019). Forecasting monthly rainfall using autoregressive integrated moving average model (ARIMA) and artificial neural network (ANN) model: A case study of Junagadh, Gujarat, India. Journal of Applied and Natural Science, 11(1), 35–41.
Ehteram, M., Ahmed, A. N., Latif, S. D., Huang, Y. F., Alizamir, M., Kisi, O., Mert, C., & El-Shafie, A. (2021a). Design of a hybrid ANN multi-objective whale algorithm for suspended sediment load prediction. Environmental Science and Pollution Research, 28(2), 1596–1611.
Ehteram, M., Sammen, S. S., Panahi, F., & Sidek, L. M. (2021b). A hybrid novel SVM model for predicting CO2 emissions using multiobjective Seagull optimization. Environmental Science and Pollution Research, 28(46), 66171–66192.
Ehteram, M., Teo, F. Y., Ahmed, A. N., Latif, S. D., Huang, Y. F., Abozweita, O., Al-Ansari, N., & El-Shafie, A. (2021c). Performance improvement for infiltration rate prediction using hybridized adaptive neuro-fuzzy inference system (ANFIS) with optimization algorithms. Ain Shams Engineering Journal, 12(2), 1665–1676.
Elsaraiti, M., & Merabet, A. (2021). A comparative analysis of the ARIMA and LSTM predictive models and their effectiveness for predicting wind speed. Energies, 14(20), 6782.
El-Shafie, A., Mukhlisin, M., Najah, A. A., & Taha, M. R. (2011). Performance of artificial neural network and regression techniques for rainfall-runoff prediction. International Journal of Physical Sciences, 6(8), 1997–2003.
El-Shafie, A., Najah, A., Alsulami, H. M., & Jahanbani, H. (2014). Optimized neural network prediction model for potential evapotranspiration utilizing ensemble procedure. Water Resources Management, 28(4), 947–967.
Engelbrecht, A. P. (2007). Computational intelligence: An introduction. John Wiley & Sons.
Fan, J., Wang, X., Wu, L., Zhou, H., Zhang, F., Yu, X., Lu, X., & Xiang, Y. (2018). Comparison of support vector machine and extreme gradient boosting for predicting daily global solar radiation using temperature and precipitation in humid subtropical climates: A case study in China. Energy Conversion and Management, 164, 102–111.
Fan, J., Zheng, J., Wu, L., & Zhang, F. (2021). Estimation of daily maize transpiration using support vector machines, extreme gradient boosting, artificial and deep neural networks models. Agricultural Water Management, 245, 106547.
Farrokhi, A., Farzin, S., & Mousavi, S. F. (2020). A new framework for evaluation of rainfall temporal variability through principal component analysis, hybrid adaptive neuro-fuzzy inference system, and innovative trend analysis methodology. Water Resources Management. https://doi.org/10.1007/s11269-020-02618-0
Farrokhi, A., Farzin, S., & Mousavi, S. F. (2021). Meteorological drought analysis in response to climate change conditions, based on combined four-dimensional vine copulas and data mining (VC-DM). Journal of Hydrology. https://doi.org/10.1016/j.jhydrol.2021.127135


Feng, Y., Peng, Y., Cui, N., Gong, D., & Zhang, K. (2017). Modeling reference evapotranspiration using extreme learning machine and generalized regression neural network only with temperature data. Computers and Electronics in Agriculture, 136, 71–78.
Fernández-González, M., Rodríguez-Rajo, F. J., Jato, V., Aira, M. J., Ribeiro, H., Oliveira, M., & Abreu, I. (2012). Forecasting ARIMA models for atmospheric vineyard pathogens in Galicia and Northern Portugal: Botrytis cinerea spores. Annals of Agricultural and Environmental Medicine, 19(2).
Ghanbari-Adivi, E., Ehteram, M., Farrokhi, A., & Sheikh Khozani, Z. (2022). Combining radial basis function neural network models and inclusive multiple models for predicting suspended sediment loads. Water Resources Management, 36(11), 4313–4342.
Gheisari, M., Wang, G., Bhuiyan, M. Z. A., & Zhang, W. (2017). Mapp: A modular arithmetic algorithm for privacy preserving in IoT. In IEEE International Symposium on Parallel and Distributed Processing with Applications and 2017 IEEE International Conference on Ubiquitous Computing and Communications (ISPA/IUCC) (pp. 897–903). IEEE.
Goel, N., & Sehgal, P. (2015). Fuzzy classification of pre-harvest tomatoes for ripeness estimation—An approach based on automatic rule learning using decision tree. Applied Soft Computing, 36, 45–56.
Gupta, J. (2019). The role of artificial intelligence in agriculture sector. Customer Think.
Hanoon, M. S., Ahmed, A. N., Zaini, N. A., Razzaq, A., Kumar, P., Sherif, M., Sefelnasr, A., & El-Shafie, A. (2021). Developing machine learning algorithms for meteorological temperature and humidity forecasting at Terengganu state in Malaysia. Scientific Reports, 11(1), 1–19.
Jaseena, K. U., & Kovoor, B. C. (2020). Deterministic weather forecasting models based on intelligent predictors: A survey. Journal of King Saud University—Computer and Information Sciences.
Jin, Z., Ma, Y., Chu, L., Liu, Y., Dubrow, R., & Chen, K. (2022). Predicting spatiotemporally resolved mean air temperature over Sweden from satellite data using an ensemble model. Environmental Research, 204, 111960.
Johann, A. L., de Araújo, A. G., Delalibera, H. C., & Hirakawa, A. R. (2016). Soil moisture modeling based on stochastic behavior of forces on a no-till chisel opener. Computers and Electronics in Agriculture, 121, 420–428.
Kamir, E., Waldner, F., & Hochman, Z. (2020). Estimating wheat yields in Australia using climate records, satellite image time series and machine learning methods. ISPRS Journal of Photogrammetry and Remote Sensing, 160, 124–135.
Katimon, A., Shahid, S., & Mohsenipour, M. (2018). Modeling water quality and hydrological variables using ARIMA: A case study of Johor River, Malaysia. Sustainable Water Resources Management, 4(4), 991–998.
Khajure, S., & Mohod, S. W. (2016). Future weather forecasting using soft computing techniques. Procedia Computer Science, 78, 402–407.
Khosravi, A., Koury, R. N. N., Machado, L., & Pabon, J. J. G. (2018). Prediction of hourly solar radiation in Abu Musa Island using machine learning algorithms. Journal of Cleaner Production, 176, 63–75.
Kloog, I., Nordio, F., Coull, B. A., & Schwartz, J. (2014). Predicting spatiotemporal mean air temperature using MODIS satellite surface temperature measurements across the Northeastern USA. Remote Sensing of Environment, 150, 132–139.
Kumari, P., Mishra, G. C., Pant, A. K., Shukla, G., & Kujur, S. N. (2014). Autoregressive integrated moving average (ARIMA) approach for prediction of rice (Oryza sativa L.) yield in India. The Bioscan, 9(3), 1063–1066.
Liakos, K. G., Busato, P., Moshou, D., Pearson, S., & Bochtis, D. (2018). Machine learning in agriculture: A review. Sensors, 18(8), 2674.
Liu, X., Zhang, C., Liu, P., Yan, M., Wang, B., Zhang, J., & Higgs, R. (2018). Application of temperature prediction based on neural network in intrusion detection of IoT. Security and Communication Networks.


Luna, A. S., Paredes, M. L. L., De Oliveira, G. C. G., & Corrêa, S. M. (2014). Prediction of ozone concentration in tropospheric levels using artificial neural networks and support vector machine at Rio de Janeiro, Brazil. Atmospheric Environment, 98, 98–104.
Malik, A., Tikhamarine, Y., Sihag, P., Shahid, S., Jamei, M., & Karbasi, M. (2022a). Predicting daily soil temperature at multiple depths using hybrid machine learning models for a semi-arid region in Punjab, India. Environmental Science and Pollution Research, 1–20.
Malik, P., Gehlot, A., Singh, R., Gupta, L. R., & Thakur, A. K. (2022b). A review on ANN based model for solar radiation and wind speed prediction with real-time data. Archives of Computational Methods in Engineering, 1–19.
Mancipe-Castro, L., & Gutiérrez-Carvajal, R. E. (2022). Prediction of environment variables in precision agriculture using a sparse model as data fusion strategy. Information Processing in Agriculture, 9(2), 171–183.
Meenal, R., Binu, D., Ramya, K. C., Michael, P. A., Vinoth Kumar, K., Rajasekaran, E., & Sangeetha, B. (2022). Weather forecasting for renewable energy system: A review. Archives of Computational Methods in Engineering, 1–17.
Mehdizadeh, S., Behmanesh, J., & Khalili, K. (2017). Using MARS, SVM, GEP and empirical equations for estimation of monthly mean reference evapotranspiration. Computers and Electronics in Agriculture, 139, 103–114.
Meshram, V., Patil, K., Meshram, V., Hanchate, D., & Ramkteke, S. D. (2021). Machine learning in agriculture domain: A state-of-art survey. Artificial Intelligence in the Life Sciences, 1, 100010.
Mohammadi, K., Shamshirband, S., Motamedi, S., Petković, D., Hashim, R., & Gocic, M. (2015). Extreme learning machine based prediction of daily dew point temperature. Computers and Electronics in Agriculture, 117, 214–225.
Mokhtarzad, M., Eskandari, F., Jamshidi Vanjani, N., & Arabasadi, A. (2017). Drought forecasting by ANN, ANFIS, and SVM and comparison of the models. Environmental Earth Sciences, 76(21), 1–10.
Morellos, A., Pantazi, X. E., Moshou, D., Alexandridis, T., Whetton, R., Tziotzios, G., Wiebensohn, J., Bill, R., & Mouazen, A. M. (2016). Machine learning based prediction of soil total nitrogen, organic carbon and moisture content by using VIS-NIR spectroscopy. Biosystems Engineering, 152, 104–116.
Mossad, A., & Alazba, A. A. (2015). Drought forecasting using stochastic models in a hyper-arid climate. Atmosphere, 6(4), 410–430.
Namboori, S. (2020). Forecasting carbon dioxide emissions in the United States using machine learning (Doctoral dissertation, National College of Ireland, Dublin).
Nayak, R., Patheja, P. S., & Waoo, A. (2012). An enhanced approach for weather forecasting using neural network. In Proceedings of the International Conference on Soft Computing for Problem Solving (SocProS 2011), December 20–22, 2011 (pp. 833–839). Springer.
Noory, H., Liaghat, A. M., Parsinejad, M., & Haddad, O. B. (2012). Optimizing irrigation water allocation and multicrop planning using discrete PSO algorithm. Journal of Irrigation and Drainage Engineering, 138(5), 437–444.
Nuruzzaman, M., Hossain, M. S., Rahman, M. M., Shoumik, A. S. H. C., Khan, M. A. A., & Habib, M. T. (2021). Machine vision based potato species recognition. In 5th International Conference on Intelligent Computing and Control Systems (ICICCS) (pp. 1–8). IEEE.
Nwokolo, S. C., Ogbulezie, J. C., & Obiwulu, A. U. (2022). Impacts of climate change and meteo-solar parameters on photosynthetically active radiation prediction using hybrid machine learning with physics-based models. Advances in Space Research.
Palit, A. K., & Popovic, D. (2006). Computational intelligence in time series forecasting: Theory and engineering applications. Springer Science & Business Media.
Panahi, F., Ahmed, A. N., Singh, V. P., Ehtearm, M., & Haghighi, A. T. (2021). Predicting freshwater production in seawater greenhouses using hybrid artificial neural network models. Journal of Cleaner Production, 329, 129721.


Parasyris, A., Alexandrakis, G., Kozyrakis, G. V., Spanoudaki, K., & Kampanis, N. A. (2022). Predicting meteorological variables on local level with SARIMA, LSTM and hybrid techniques. Atmosphere, 13(6), 878.
Patil, A. P., & Deka, P. C. (2016). An extreme learning machine approach for modeling evapotranspiration using extrinsic inputs. Computers and Electronics in Agriculture, 121, 385–392.
Paul, M. J., Coffey, R., Stamp, J., & Johnson, T. (2019). A review of water quality responses to air temperature and precipitation changes 1: Flow, water temperature, saltwater intrusion. JAWRA Journal of the American Water Resources Association, 55(4), 824–843.
Pham, B. T., Le, L. M., Le, T. T., Bui, K. T. T., Le, V. M., Ly, H. B., & Prakash, I. (2020). Development of advanced artificial intelligence models for daily rainfall prediction. Atmospheric Research, 237, 104845.
Philandras, C. M., Nastos, P. T., Kapsomenakis, I. N., & Repapis, C. C. (2015). Climatology of upper air temperature in the Eastern Mediterranean region. Atmospheric Research, 152, 29–42.
Piri, J., Mohammadi, K., Shamshirband, S., & Akib, S. (2016). Assessing the suitability of hybridizing the Cuckoo optimization algorithm with ANN and ANFIS techniques to predict daily evaporation. Environmental Earth Sciences, 75(3), 1–13.
Praveen, B., & Sharma, P. (2020). Climate variability and its impacts on agriculture production and future prediction using autoregressive integrated moving average method (ARIMA). Journal of Public Affairs, 20(2), e2016.
Qadeer, K., Ahmad, A., Qyyum, M. A., Nizami, A. S., & Lee, M. (2021). Developing machine learning models for relative humidity prediction in air-based energy systems and environmental management applications. Journal of Environmental Management, 292, 112736.
Ramos, P. J., Prieto, F. A., Montoya, E. C., & Oliveros, C. E. (2017). Automatic fruit count on coffee branches using computer vision. Computers and Electronics in Agriculture, 137, 9–22.
Rasel, R. I., Sultana, N., & Meesad, P. (2017). An application of data mining and machine learning for weather forecasting. In International Conference on Computing and Information Technology (pp. 169–178). Springer.
Raza, A., Razzaq, A., Mehmood, S. S., Zou, X., Zhang, X., Lv, Y., & Xu, J. (2019). Impact of climate change on crops adaptation and strategies to tackle its outcome: A review. Plants, 8(2), 34.
Saggi, M. K., & Jain, S. (2022). A survey towards decision support system on smart irrigation scheduling using machine learning approaches. Archives of Computational Methods in Engineering, 1–24.
Sahoo, A., Samantaray, S., & Ghose, D. K. (2021). Prediction of flood in Barak River using hybrid machine learning approaches: A case study. Journal of the Geological Society of India, 97(2), 186–198.
Salman, A. G., Kanigoro, B., & Heryadi, Y. (2015). Weather forecasting using deep learning techniques. In International Conference on Advanced Computer Science and Information Systems (ICACSIS) (pp. 281–285). IEEE.
Samuel, A. L. (1967). Some studies in machine learning using the game of checkers. II—Recent progress. IBM Journal of Research and Development, 11(6), 601–617.
Sawasawa, H. (2003). Crop yield estimation: Integrating RS, GIS, and management factors. International Institute for Geo-information Science and Earth Observation.
Seifi, A., & Riahi, H. (2020). Estimating daily reference evapotranspiration using hybrid gamma test-least square support vector machine, gamma test-ANN, and gamma test-ANFIS models in an arid area of Iran. Journal of Water and Climate Change, 11(1), 217–240.
Seifi, A., & Soroush, F. (2020). Pan evaporation estimation and derivation of explicit optimized equations by novel hybrid meta-heuristic ANN based methods in different climates of Iran. Computers and Electronics in Agriculture, 173, 105418.
Seifi, A., Ehteram, M., & Dehghani, M. (2021a). A robust integrated Bayesian multi-model uncertainty estimation framework (IBMUEF) for quantifying the uncertainty of hybrid meta-heuristic in global horizontal irradiation predictions. Energy Conversion and Management, 241, 114292.


Seifi, A., Ehteram, M., Nayebloei, F., Soroush, F., Gharabaghi, B., & Torabi Haghighi, A. (2021b). GLUE uncertainty analysis of hybrid models for predicting hourly soil temperature and application wavelet coherence analysis for correlation with meteorological variables. Soft Computing, 25(16), 10723–10748.
Seifi, A., Ehteram, M., Soroush, F., & Haghighi, A. T. (2022). Multi-model ensemble prediction of pan evaporation based on the Copula Bayesian Model Averaging approach. Engineering Applications of Artificial Intelligence, 114, 105124.
Sengupta, S., & Lee, W. S. (2014). Identification and determination of the number of immature green citrus fruit in a canopy under different ambient light conditions. Biosystems Engineering, 117, 51–61.
Shabariram, C. P., Kannammal, K. E., & Manojpraphakar, T. (2016). Rainfall analysis and rainstorm prediction using MapReduce framework. In International Conference on Computer Communication and Informatics (ICCCI) (pp. 1–4). IEEE.
Shaikh, T. A., Rasool, T., & Lone, F. R. (2022). Towards leveraging the role of machine learning and artificial intelligence in precision agriculture and smart farming. Computers and Electronics in Agriculture, 198, 107119.
Su, Y. X., Xu, H., & Yan, L. J. (2017). Support vector machine-based open crop model (SBOCM): Case of rice production in China. Saudi Journal of Biological Sciences, 24(3), 537–547.
Sujatha, R., Chatterjee, J. M., Jhanjhi, N. Z., & Brohi, S. N. (2021). Performance of deep learning vs machine learning in plant leaf disease detection. Microprocessors and Microsystems, 80, 103615.
Thornton, P. K., Ericksen, P. J., Herrero, M., & Challinor, A. J. (2014). Climate variability and vulnerability to climate change: A review. Global Change Biology, 20(11), 3313–3328.
Tran Anh, D., Duc Dang, T., & Pham Van, S. (2019). Improved rainfall prediction using combined pre-processing methods and feed-forward neural networks. J, 2(1), 65–83.
Trendov, M., Varas, S., & Zeng, M. (2019). Digital technologies in agriculture and rural areas: Status report.
Varquez, A. C., & Kanda, M. (2018). Global urban climatology: A meta-analysis of air temperature trends (1960–2009). NPJ Climate and Atmospheric Science, 1(1), 1–8.
Verma, S., Chug, A., & Singh, A. P. (2018). Prediction models for identification and diagnosis of tomato plant diseases. In International Conference on Advances in Computing, Communications and Informatics (ICACCI) (pp. 1557–1563). IEEE.
Wakamatsu, K. I. (2010). Effects of high air temperature during the ripening period on the grain quality of rice in warm regions of Japan. Bulletin of the Kagoshima Prefectural Institute for Agricultural Development. Agricultural Research (Japan).
Wang, L., Koike, T., Yang, K., & Yeh, P. J. F. (2009). Assessment of a distributed biosphere hydrological model against streamflow and MODIS land surface temperature in the upper Tone River Basin. Journal of Hydrology, 377(1–2), 21–34.
Wen, Q., Wang, Y., Zhang, H., & Li, Z. (2019). Application of ARIMA and SVM mixed model in agricultural management under the background of intellectual agriculture. Cluster Computing, 22(6), 14349–14358.
Wu, L., Peng, Y., Fan, J., & Wang, Y. (2019). Machine learning models for the estimation of monthly mean daily reference evapotranspiration based on cross-station and synthetic data. Hydrology Research, 50(6), 1730–1750.
Wu, N., Jiang, H., Bao, Y., Zhang, C., Zhang, J., Song, W., Zhao, Y., Mi, C., He, Y., & Liu, F. (2020). Practicability investigation of using near-infrared hyperspectral imaging to detect rice kernels infected with rice false smut in different conditions. Sensors and Actuators B: Chemical, 308, 127696.
Wu, Y., Ma, Y., Hu, X., Ma, J., Zhao, H., & Ren, D. (2021). Narrow-waveband spectral indices for prediction of yield loss in frost-damaged winter wheat during stem elongation. European Journal of Agronomy, 124, 126240.
Yaghoubzadeh-Bavandpour, A., Bozorg-Haddad, O., Zolghadr-Asli, B., & Singh, V. P. (2022). Computational intelligence: An introduction. In O. Bozorg-Haddad & B. Zolghadr-Asli (Eds.), Computational intelligence for water and environmental sciences. Studies in computational intelligence (Vol. 1043). Springer. https://doi.org/10.1007/978-981-19-2519-1_19
Yakut, E., & Süzülmüş, S. (2020). Modelling monthly mean air temperature using artificial neural network, adaptive neuro-fuzzy inference system and support vector regression methods: A case of study for Turkey. Network: Computation in Neural Systems, 31(1–4), 1–36.
Yoosefzadeh-Najafabadi, M., Earl, H. J., Tulpan, D., Sulik, J., & Eskandari, M. (2021). Application of machine learning algorithms in plant breeding: Predicting yield from hyperspectral reflectance in soybean. Frontiers in Plant Science, 11, 624273.
Yue, Y., Quan, J., Zhao, H., & Wang, H. (2018). The prediction of greenhouse temperature and humidity based on LM-RBF network. In IEEE International Conference on Mechatronics and Automation (ICMA) (pp. 1537–1541). IEEE.
Zanobetti, A., & Schwartz, J. (2008). Temperature and mortality in nine US cities. Epidemiology (Cambridge, Mass.), 19(4), 563.
Zongur, A., Kavuncuoglu, H., Kavuncuoglu, E., Capar, T. D., Yalcin, H., & Buzpinar, M. A. (2022). Machine learning approach for predicting the antifungal effect of gilaburu (Viburnum opulus) fruit extracts on Fusarium spp. isolated from diseased potato tubers. Journal of Microbiological Methods, 192, 106379.

Chapter 2

Structure of Particle Swarm Optimization (PSO)

Abstract PSO is a swarm-based evolutionary algorithm for solving optimization problems. This chapter explains the mathematical model and structure of PSO. PSO is initialized with a population of particles having random positions and velocities. It then searches for the global optimum by adjusting each particle's moving vector at each iteration based on the particle's personal (cognitive) and global (social) best positions. The chapter also reviews the application of PSO in different fields. In summary, many climatic and agricultural studies have proposed PSO as an appropriate approach for solving related problems.

Keywords Optimization algorithm · PSO · Complex problems · Artificial intelligence models

2.1 Introduction

Over time, nature has continually inspired science, and there is still much to learn and discover. Swarm intelligence (SI) is an important branch of artificial intelligence (AI) used to model social behavior in nature (Farrokhi et al., 2020; Ghanbari-Adivi et al., 2022). Over the last two decades, several swarm algorithms have been developed to find optimal solutions to different problems in various studies. Owing to their flexibility and simple implementation, swarm algorithms can often solve complex optimization problems effectively without many technical assumptions (Houssein et al., 2021). Particle swarm optimization (PSO), developed by Kennedy and Eberhart (1995), is a well-known swarm-based optimization algorithm. It mimics the collective behavior of bird flocks, fish schools, herds, and insect swarms. In these swarms, members find food cooperatively, and each member adjusts its search pattern according to its own and others' learning experiences (Wang et al., 2018). PSO is initialized with a population of particles having random positions and velocities. It then searches for the global optimum by adjusting each particle's moving vector at each iteration based on the particle's personal (cognitive) and global (social) best positions (Du & Swamy, 2016). Since 1995, the initial version of PSO has been further

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. Ehteram et al., Application of Machine Learning Models in Agricultural and Meteorological Sciences, https://doi.org/10.1007/978-981-19-9733-4_2


developed through various studies, leading to several hybrid and multi-objective versions (Sengupta et al., 2018; Zhang et al., 2015). As a stochastic optimization technique, PSO has been effectively applied in many studies. Numerous weather, climate, and agricultural problems require optimization algorithms, either to optimize one or several objective functions or to achieve accurate predictions. These problems fall into categories such as crop water requirement, water quality, water scarcity, meteorological variables, hydrologic models, yield, disease detection, water allocation, weed detection, crop quality, and irrigation systems. The current chapter presents the theory and structure of PSO and its ability to find the best solution. The application of PSO in different studies and optimization problems is also reviewed.

2.2 Structure of Particle Swarm Optimization

PSO performs a random search via a swarm of particles to obtain the solution. In other words, the search starts with a set of randomly generated potential solutions. Each member of this set represents a potential solution and is known as a "particle", and the set of potential solutions is known as a "swarm" (Bansal et al., 2019). In PSO, the movement of particles is influenced by three forces (Fig. 2.1). First, the particles tend to stay on their current trajectory through the "inertia" term. Second, the particles are attracted toward the global best (gbest) solution. Third, the particles are attracted toward their own best fitness point (pbest) (Chopard & Tomassini, 2018). In PSO, the ith particle of the swarm has a position $x_i$ and velocity $v_i$, which in the d-dimensional search space are written as $[x_{i1}, x_{i2}, \ldots, x_{id}]$ and $[v_{i1}, v_{i2}, \ldots, v_{id}]$, respectively. The movement of particles is a function of their velocity in each iteration

Fig. 2.1 Forces acting on a particle in PSO (Chopard & Tomassini, 2018)


of the search. The previously best-visited position of the ith particle is presented as $[p_{i1}, p_{i2}, \ldots, p_{id}]$. Particles move in the direction of their previous best positions (pbest) to find the optimal solution in the tth iteration (Zhang et al., 2014). The equations are written as follows (Bansal et al., 2019; Ferdowsi et al., 2022; Zhang et al., 2015):

$$\text{pbest}_i^t = \arg\min_{k=1,\ldots,t}\left[f\left(x_i^k\right)\right], \quad i = 1, 2, \ldots, N_p$$
$$\text{gbest}^t = \arg\min_{i=1,\ldots,N_p;\; k=1,\ldots,t}\left[f\left(x_i^k\right)\right] \tag{2.1}$$

where i is the particle index, $N_p$ is the total number of particles, t is the current iteration number, f is the objective function, and x is the position. The position of the ith particle is updated according to the new velocity and current position as:

$$x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1} \tag{2.2}$$

Also, the velocity of the ith particle is as follows:

$$v_{id}^{t+1} = \omega v_{id}^{t} + c_1 r_1\left(\text{pbest}_{id}^{t} - x_{id}^{t}\right) + c_2 r_2\left(\text{gbest}_{d}^{t} - x_{id}^{t}\right) \tag{2.3}$$

where $\omega$ is the inertia weight, $r_1$ and $r_2$ are random numbers in the range [0, 1], $c_1$ is the cognitive parameter, and $c_2$ is the social coefficient; $c_1$ and $c_2$ are positive constants called acceleration parameters. $\text{pbest}_{id}$ and $\text{gbest}_{d}$ are the individual and global best positions (Fig. 2.2).

Fig. 2.2 Pseudo-code of PSO
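The pseudo-code of Fig. 2.2 can be sketched in Python as follows. This is a minimal illustrative implementation of the position, velocity, and best-position updates described above, not the code used in any of the cited studies; the function name, parameter defaults, and the sphere test function are assumptions chosen for demonstration:

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=42):
    """Minimal PSO: velocity update with inertia (w), cognitive (c1),
    and social (c2) terms, followed by the position update."""
    rng = random.Random(seed)
    # Initialize random positions and zero velocities (swarm initialization)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                   # personal best positions
    pbest_val = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + cognitive + social components
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                # Position update: new position = old position + new velocity
                x[i][d] += v[i][d]
            fx = f(x[i])
            if fx < pbest_val[i]:                 # update personal best
                pbest[i], pbest_val[i] = x[i][:], fx
                if fx < gbest_val:                # update global best
                    gbest, gbest_val = x[i][:], fx
    return gbest, gbest_val

# Usage: minimize the sphere function f(x) = sum(x_d^2), whose optimum is 0.
best, val = pso(lambda p: sum(xd * xd for xd in p), dim=3)
```

With these convergent parameter settings, the swarm collapses onto the origin and `val` approaches zero; real applications replace the sphere function with a model-calibration or prediction-error objective.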


The swarm (population) size and the parameters $\omega$, $c_1$, and $c_2$ have a great effect on optimization performance. Piotrowski et al. (2020) showed that a population size between 70 and 500 can produce high performance in most problems. Seifi et al. (2020) illustrated that a population size of 400 is appropriate for preparing PSO, while Seifi et al. (2021) reported that a population size of 200 gives the lowest error when hybridizing PSO with soft computing models for soil temperature prediction. Seifi et al. (2022) showed that population sizes in the range [100, 200] yield the lowest root mean square error (RMSE) when optimizing ANFIS parameters for pan evaporation prediction at different stations. Huang et al. (2022) used population sizes of 200 and 300 with PSO to optimize the parameters of predictive models for solar radiation in Rasht and Yazd, respectively. Bazrafshan et al. (2022) hybridized PSO with different models to estimate tomato yield and showed that a population size of 200 has the lowest error. Selecting appropriate values for the control parameters of PSO ($\omega$, $c_1$, $c_2$) also matters: inappropriate values increase the number of iterations required and can create excessively large velocities (Harrison et al., 2016). The studies mentioned above used sensitivity analysis and Taguchi methods to tune the PSO parameters.
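The sensitivity to swarm size discussed above can be illustrated with a short experiment. The compact PSO routine and the sphere benchmark below are illustrative assumptions, not code or settings from the cited studies:

```python
import random

def sphere(p):
    # Benchmark objective f(x) = sum(x_d^2); global minimum 0 at the origin
    return sum(x * x for x in p)

def pso_best(n_particles, w=0.7, c1=1.5, c2=1.5, dim=5, iters=100, seed=1):
    """Run a compact PSO and return the best objective value found."""
    rng = random.Random(seed)
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pb = [xi[:] for xi in x]              # personal bests
    pbv = [sphere(xi) for xi in x]
    gi = min(range(n_particles), key=pbv.__getitem__)
    gb, gbv = pb[gi][:], pbv[gi]          # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Velocity update: inertia + cognitive + social terms
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pb[i][d] - x[i][d])
                           + c2 * rng.random() * (gb[d] - x[i][d]))
                x[i][d] += v[i][d]        # position update
            fx = sphere(x[i])
            if fx < pbv[i]:
                pb[i], pbv[i] = x[i][:], fx
                if fx < gbv:
                    gb, gbv = x[i][:], fx
    return gbv

# Sweep the swarm size: larger swarms typically reach lower error on the
# same iteration budget, at a higher computational cost per iteration.
for n in (10, 50, 200):
    print(f"swarm size {n:3d}: best value {pso_best(n):.2e}")
```

This is the kind of sweep that a sensitivity analysis formalizes; a Taguchi design would additionally vary $\omega$, $c_1$, and $c_2$ jointly instead of one factor at a time.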

2.3 The Application of PSO in Meteorological Field

Different versions of PSO have been used to optimize prediction problems in climatic studies. For example, Chen et al. (2018) used the PSO approach in model predictive control (MPC) of greenhouse temperature. Ma et al. (2019) proposed an ensemble structure optimized using PSO to obtain precise wind speed forecasts in Western China. Mohamadi et al. (2022) trained RBFNN and MLP models using four optimization algorithms: PSO, the firefly algorithm, the naked mole rat algorithm, and the genetic algorithm. Seifi et al. (2022) optimized the ANFIS model using PSO, the seagull optimization algorithm, the crow search algorithm, and the firefly algorithm to predict pan evaporation. Ehteram et al. (2022a) developed Bayesian model averaging (BMA) based on an optimized kernel extreme learning machine (KELM) to predict daily pan evaporation in Iran. Ehteram et al. (2022b) applied multi-objective particle swarm optimization (MOPSO), the multi-objective salp swarm algorithm, and the multi-objective crow algorithm to determine the optimum MLP parameters for predicting daily evaporation in Malaysia. Feng et al. (2020) determined suitable SVM parameters using quantum-behaved PSO for monthly runoff time series prediction. Achite et al. (2022) utilized hybrid models (ANN-PSO, ANN-water strider algorithm, ANN-salp swarm algorithm, and ANN-sine cosine algorithm) to develop BMA for forecasting meteorological drought in the Wadi Ouahrane basin of Algeria.


2.4 The Application of PSO in Agricultural Studies

The agricultural industry involves factors such as yield, disease detection, weed detection, crop quality, crop water requirement, irrigation systems, and water quality that affect plants in different climates. Hence, predicting agricultural factors and optimizing this industry can decrease environmental and climatic impacts and save money and time. Some recent studies are presented below to illustrate the application of artificial intelligence and PSO in this industry. Ting et al. (2017) used PSO to search for optimum SVM parameter values to predict photosynthesis over the entire growth stage of tomatoes. AgaAzizi et al. (2021) estimated the number of impurities in a wheat mass using video processing with an ANN-PSO model. Ji et al. (2021) proposed an ANN-PSO model for soybean disease identification based on categorical feature inputs. Ehteram et al. (2021b) integrated the MLP model with PSO, the seagull optimization algorithm, the bat algorithm, and the sine cosine algorithm to develop an ensemble CBMA for predicting freshwater production in a seawater greenhouse. Bazrafshan et al. (2022) applied PSO, the multiverse optimization algorithm, and the firefly algorithm to hybridize ANFIS and MLP models and enhance the accuracy of tomato yield prediction using Bayesian model averaging (BMA). Sabzzadeh and Shourian (2020) coupled SWAT and MODFLOW models with PSO to calibrate parameters and maximize the net benefit of crop production for a plain west of Iran. Banadkooki et al. (2020b) employed PSO, the crow algorithm, the genetic algorithm, and the shark algorithm to minimize irrigation water deficit and optimize reservoir operation. Seifi et al. (2021) coupled PSO, the firefly algorithm, the salp swarm algorithm, and the sunflower optimization algorithm with different models. Ehteram et al. (2021c) evaluated the ability of hybrid models, including ANFIS-PSO, ANFIS-firefly algorithm, and ANFIS-sine cosine algorithm, to estimate the water infiltration rate during furrow irrigation. Kisi et al. (2021) investigated the ability of a fuzzy C-means clustering-based adaptive neuro-fuzzy inference system integrated with a PSO-gravity search algorithm to estimate outputs. Banadkooki et al. (2020a) applied the optimization algorithms PSO, cat swarm optimization, shark optimization, gray wolf optimization, and the gravitational search algorithm to train ANFIS, SVM, and ANN models for predicting total dissolved solids (TDS) in the Yazd plain. Ehteram et al. (2021a) optimized the SVM model using the multi-objective version of PSO.

2.5 The Application of PSO in Other Related Studies

Applications of PSO in branches related to climatic and agricultural studies are not limited to those mentioned above. Table 2.1 summarizes further applications, including crop planning, crop pattern optimization, irrigation water allocation, groundwater level prediction, and water resource planning.


Table 2.1 PSO applications in studies related to climatic and agricultural fields

Application | Reference | Applied algorithm
Crop planning | Noory et al. (2012) | Linear programming (LP)-continuous PSO
Crop planning | Liu and Bai (2014) | Approximation approach (AA)-PSO
Crop pattern optimization | Bou-Fakhreddine et al. (2016) | PSO
Crop pattern optimization | Lin et al. (2020) | PSO
Crop pattern optimization | Varade and Patel (2018) | PSO
Crop pattern optimization | Hao et al. (2018) | PSO
Irrigation water allocation | Kumar and Yadav (2020) | PSO
Irrigation water allocation | Jain et al. (2021) | Multi-objective crow search algorithm-PSO
Irrigation water allocation | Habibi Davijani et al. (2016) | PSO
Irrigation water allocation | Saeidian et al. (2019) | PSO
Irrigation water allocation | Kumar and Yadav (2021) | PSO
Irrigation water allocation | Rath and Swain (2021) | PSO
Groundwater level | Tapoglou et al. (2014) | ANN-PSO
Groundwater level | Huang et al. (2017) | Chaotic PSO-SVM
Groundwater level | Seifi et al. (2020) | ANFIS-PSO, ANN-PSO
Groundwater level | Khozani et al. (2022) | Long short-term memory (LSTM) neural network-PSO
Water resource planning | Shourian et al. (2008) | PSO-MODSIM
Water resource planning | Chang et al. (2013) | PSO-genetic algorithm
Water resource planning | Rezaei et al. (2017) | Fuzzy-based multi-objective PSO algorithm
Water resource planning | Hatamkhani et al. (2022) | Multi-objective PSO

2.6 Conclusion

In recent years, the PSO algorithm has received wide attention as an optimization technique in many engineering and real-world applications. Many versions of PSO have been developed to enhance its performance in finding optimal solutions and solving multi-objective and complex optimization problems. Based on the literature review, the PSO algorithm can easily be hybridized with other meta-heuristic algorithms and artificial intelligence models. In addition, the optimal value can be obtained quickly owing to the fast convergence properties of PSO. However, although PSO is a robust algorithm that has been successfully applied to optimization problems in the climatic and agricultural sciences, it lacks a solid mathematical theory basis, which limits it from achieving the highest accuracy compared with other evolutionary algorithms. In summary, many climatic and agricultural studies have proposed PSO as an appropriate approach for solving related problems.

References

Achite, M., Banadkooki, F. B., Ehteram, M., Bouharira, A., Ahmed, A. N., & Elshafie, A. (2022). Exploring Bayesian model averaging with multiple ANNs for meteorological drought forecasts. Stochastic Environmental Research and Risk Assessment, 1–26.
AgaAzizi, S., Rasekh, M., Abbaspour-Gilandeh, Y., & Kianmehr, M. H. (2021). Identification of impurity in wheat mass based on video processing using artificial neural network and PSO algorithm. Journal of Food Processing and Preservation, 45(1), e15067.
Banadkooki, F. B., Adamowski, J., Singh, V. P., Ehteram, M., Karami, H., Mousavi, S. F., Farzin, S., & El-Shafie, A. (2020a). Crow algorithm for irrigation management: A case study. Water Resources Management, 34(3), 1021–1045.
Banadkooki, F. B., Ehteram, M., Panahi, F., Sammen, S. S., Othman, F. B., & Ahmed, E. S. (2020b). Estimation of total dissolved solids (TDS) using new hybrid machine learning models. Journal of Hydrology, 587, 124989.
Bansal, J. C., Singh, P. K., & Pal, N. R. (Eds.). (2019). Evolutionary and swarm intelligence algorithms (Vol. 779). Springer.
Bazrafshan, O., Ehteram, M., Latif, S. D., Huang, Y. F., Teo, F. Y., Ahmed, A. N., & El-Shafie, A. (2022). Predicting crop yields using a new robust Bayesian averaging model based on multiple hybrid ANFIS and MLP models. Ain Shams Engineering Journal, 13(5), 101724.
Bou-Fakhreddine, B., Abou-Chakra, S., Mougharbel, I., Faye, A., & Pollet, Y. (2016). Optimal multi-crop planning implemented under deficit irrigation. In 18th Mediterranean Electrotechnical Conference (MELECON) (pp. 1–6). IEEE.
Chang, J. X., Bai, T., Huang, Q., & Yang, D. W. (2013). Optimization of water resources utilization by PSO-GA. Water Resources Management, 27(10), 3525–3540.
Chen, L., Du, S., He, Y., Liang, M., & Xu, D. (2018). Robust model predictive control for greenhouse temperature based on particle swarm optimization. Information Processing in Agriculture, 5(3), 329–338.
Chopard, B., & Tomassini, M. (2018). An introduction to metaheuristics for optimization. Springer International Publishing.
Du, K. L., & Swamy, M. N. S. (2016). Search and optimization by metaheuristics: Techniques and algorithms inspired by nature, 1–10.
Ehteram, M., Ahmed, A. N., Kumar, P., Sherif, M., & El-Shafie, A. (2021a). Predicting freshwater production and energy consumption in a seawater greenhouse based on ensemble frameworks using optimized multi-layer perceptron. Energy Reports, 7, 6308–6326.
Ehteram, M., Sammen, S. S., Panahi, F., & Sidek, L. M. (2021b). A hybrid novel SVM model for predicting CO2 emissions using multiobjective seagull optimization. Environmental Science and Pollution Research, 28(46), 66171–66192.
Ehteram, M., Teo, F. Y., Ahmed, A. N., Latif, S. D., Huang, Y. F., Abozweita, O., Al-Ansari, N., & El-Shafie, A. (2021c). Performance improvement for infiltration rate prediction using hybridized adaptive neuro-fuzzy inference system (ANFIS) with optimization algorithms. Ain Shams Engineering Journal, 12(2), 1665–1676.
Ehteram, M., Graf, R., Ahmed, A. N., & El-Shafie, A. (2022a). Improved prediction of daily pan evaporation using Bayesian model averaging and optimized kernel extreme machine models in different climates. Stochastic Environmental Research and Risk Assessment, 1–36.
Ehteram, M., Panahi, F., Ahmed, A. N., Huang, Y. F., Kumar, P., & Elshafie, A. (2022b). Predicting evaporation with optimized artificial neural network using multi-objective salp swarm algorithm. Environmental Science and Pollution Research, 29(7), 10675–10701.

30

2 Structure of Particle Swarm Optimization (PSO)

Farrokhi, A., Farzin, S., & Mousavi, S. F. (2020). A New Framework for Evaluation of Rainfall Temporal Variability through Principal Component Analysis, Hybrid Adaptive Neuro-Fuzzy Inference System, and Innovative Trend Analysis Methodology. Water Resources Management. https://doi.org/10.1007/s11269-020-02618-0 Feng, Z. K., Niu, W. J., Tang, Z. Y., Jiang, Z. Q., Xu, Y., Liu, Y., & Zhang, H. R. (2020). Monthly runoff time series prediction by variational mode decomposition and support vector machine based on quantum-behaved particle swarm optimization. Journal of Hydrology, 583, 124627. Ferdowsi, A., Mousavi, S. F., Mohamad Hoseini, S., Faramarzpour, M., & Gandomi, A. H. (2022). A survey of PSO contributions to water and environmental sciences. In Computational intelligence for water and environmental sciences (pp. 85–102). Springer. Ghanbari-Adivi, E., Ehteram, M., Farrokhi, A., & Sheikh Khozani, Z. (2022). Combining radial basis function neural network models and inclusive multiple models for predicting suspended sediment loads. Water Resources Management, 36(11), 4313–4342. Habibi Davijani, M., Banihabib, M. E., Nadjafzadeh Anvar, A., & Hashemi, S. R. (2016). Multiobjective optimization model for the allocation of water resources in arid regions based on the maximization of socioeconomic efficiency. Water Resources Management, 30(3), 927–946. Hao, L., Su, X., & Singh, V. P. (2018). Cropping pattern optimization considering uncertainty of water availability and water saving potential. International Journal of Agricultural and Biological Engineering, 11(1), 178–186. Harrison, K. R., Engelbrecht, A. P., & Ombuki-Berman, B. M. (2016). Inertia weight control strategies for particle swarm optimization. Swarm Intelligence, 10(4), 267–305. Hatamkhani, A., KhazaiePoul, A., & Moridi, A. (2022). Sustainable water resource planning at the basin scale with simultaneous goals of agricultural development and wetland conservation. 
Journal of Water Supply: Research and Technology-Aqua. Houssein, E. H., Gad, A. G., Hussain, K., & Suganthan, P. N. (2021). Major advances in particle swarm optimization: Theory, analysis, and application. Swarm and Evolutionary Computation, 63, 100868. Huang, F., Huang, J., Jiang, S. H., & Zhou, C. (2017). Prediction of groundwater levels using evidence of chaos and support vector machine. Journal of Hydroinformatics, 19(4), 586–606. Huang, H., Band, S. S., Karami, H., Ehteram, M., Chau, K. W., & Zhang, Q. (2022). Solar radiation prediction using improved soft computing models for semi-arid, slightly-arid and humid climates. Alexandria Engineering Journal, 61(12), 10631–10657. Jain, S., Ramesh, D., & Bhattacharya, D. (2021). A multi-objective algorithm for crop pattern optimization in agriculture. Applied Soft Computing, 112, 107772. Ji, M., Liu, P., & Wu, Q. (2021). Feasibility of hybrid PSO-ANN model for identifying soybean diseases. International Journal of Cognitive Informatics and Natural Intelligence (IJCINI), 15(4), 1–16. Kennedy, J., & Eberhart, R. (1995). Particle swarm optimization. In Proceedings of ICNN’95International Conference on Neural Networks (Vol. 4, pp. 1942–1948). IEEE. Khozani, Z. S., Banadkooki, F. B., Ehteram, M., Ahmed, A. N., & El-Shafie, A. (2022). Combining autoregressive integrated moving average with long short-term memory neural network and optimisation algorithms for predicting ground water level. Journal of Cleaner Production, 348, 131224. Kisi, O., Khosravinia, P., Heddam, S., Karimi, B., & Karimi, N. (2021). Modeling wetting front redistribution of drip irrigation systems using a new machine learning method: Adaptive neuro-fuzzy system improved by hybrid particle swarm optimization–Gravity search algorithm. Agricultural Water Management, 256, 107067. Kumar, V., & Yadav, S. M. (2020). Self-adaptive multi-population-based Jaya algorithm to optimize the cropping pattern under a constraint environment. 
Journal of Hydroinformatics, 22(2), 368– 384. Kumar, V., & Yadav, S. M. (2021). Optimization of water allocation for Ukai reservoir using elitist TLBO. In Water resources management and reservoir operation (pp. 191–204). Springer.

References

31

Lin, C. C., Deng, D. J., Kang, J. R., & Liu, W. Y. (2020). A dynamical simplified swarm optimization algorithm for the multiobjective annual crop planning problem conserving groundwater for sustainability. IEEE Transactions on Industrial Informatics, 17(6), 4401–4410. Liu, R., & Bai, X. (2014). Random fuzzy production and distribution plan of agricultural products and its PSO algorithm. In IEEE International Conference on Progress in Informatics and Computing (pp. 32–36). IEEE. Ma, T., Wang, C., Wang, J., Cheng, J., & Chen, X. (2019). Particle-swarm optimization of ensemble neural networks with negative correlation learning for forecasting short-term wind speed of wind farms in western China. Information Sciences, 505, 157–182. Mohamadi, S., Sheikh Khozani, Z., Ehteram, M., Ahmed, A. N., & El-Shafie, A. (2022). Rainfall prediction using multiple inclusive models and large climate indices. Environmental Science and Pollution Research, 1–38. Noory, H., Liaghat, A. M., Parsinejad, M., & Haddad, O. B. (2012). Optimizing irrigation water allocation and multicrop planning using discrete PSO algorithm. Journal of Irrigation and Drainage Engineering, 138(5), 437–444. Piotrowski, A. P., Napiorkowski, J. J., & Piotrowska, A. E. (2020). Population size in particle swarm optimization. Swarm and Evolutionary Computation, 58, 100718. Rath, A., & Swain, P. C. (2021). Water allocation from Hirakud Dam, Odisha, India for irrigation and power generation using optimization techniques. ISH Journal of Hydraulic Engineering, 27(3), 274–288. Rezaei, F., Safavi, H. R., & Zekri, M. (2017). A hybrid fuzzy-based multi-objective PSO algorithm for conjunctive water use and optimal multi-crop pattern planning. Water Resources Management, 31(4), 1139–1155. Sabzzadeh, I., & Shourian, M. (2020). Maximizing crops yield net benefit in a groundwater-irrigated plain constrained to aquifer stable depletion using a coupled PSO-SWAT-MODFLOW hydroagronomic model. 
Journal of Cleaner Production, 262, 121349. Saeidian, B., Mesgari, M. S., Pradhan, B., & Alamri, A. M. (2019). Irrigation water allocation at farm level based on temporal cultivation-related data using meta-heuristic optimisation algorithms. Water, 11(12), 2611. Seifi, A., Ehteram, M., Singh, V. P., & Mosavi, A. (2020). Modeling and uncertainty analysis of groundwater level using six evolutionary optimization algorithms hybridized with ANFIS, SVM, and ANN. Sustainability, 12(10), 4023. Seifi, A., Ehteram, M., Nayebloei, F., Soroush, F., Gharabaghi, B., & Torabi Haghighi, A. (2021). GLUE uncertainty analysis of hybrid models for predicting hourly soil temperature and application wavelet coherence analysis for correlation with meteorological variables. Soft Computing, 25(16), 10723–10748. Seifi, A., Ehteram, M., Soroush, F., & Haghighi, A. T. (2022). Multi-model ensemble prediction of pan evaporation based on the Copula Bayesian model averaging approach. Engineering Applications of Artificial Intelligence, 114, 105124. Sengupta, S., Basak, S., & Peters, R. A. (2018). Particle swarm optimization: A survey of historical and recent developments with hybridization perspectives. Machine Learning and Knowledge Extraction, 1(1), 157–191. Shourian, M., Mousavi, S. J., & Tahershamsi, A. (2008). Basin-wide water resources planning by integrating PSO algorithm and MODSIM. Water Resources Management, 22(10), 1347–1366. Tapoglou, E., Trichakis, I. C., Dokou, Z., Nikolos, I. K., & Karatzas, G. P. (2014). Groundwaterlevel forecasting under climate change scenarios using an artificial neural network trained with particle swarm optimization. Hydrological Sciences Journal, 59(6), 1225–1239. Ting, L., Yuhan, J., Man, Z., Sha, S., & Minzan, L. (2017). Universality of an improved photosynthesis prediction model based on PSO-SVM at all growth stages of tomato. International Journal of Agricultural and Biological Engineering, 10(2), 63–73. Varade, S., & Patel, J. N. (2018). 
Determination of optimum cropping pattern using advanced optimization algorithms. Journal of Hydrologic Engineering, 23(6), 05018010.

32

2 Structure of Particle Swarm Optimization (PSO)

Wang, D., Tan, D., & Liu, L. (2018). Particle swarm optimization algorithm: An overview. Soft Computing, 22(2), 387–408. Zhang, Y., Balochian, S., Agarwal, P., Bhatnagar, V., & Housheya, O. J. (2014). Artificial intelligence and its applications. Mathematical Problems in Engineering. Zhang, Y., Wang, S., & Ji, G. (2015). A comprehensive survey on particle swarm optimization algorithm and its applications. Mathematical Problems in Engineering.

Chapter 3

Structure of Shark Optimization Algorithm

Abstract  This chapter studies the structure of the shark optimization algorithm (SSO). First, the applications of the shark algorithm in different fields are reviewed. The SSO can identify optimal solutions by balancing the exploitation and exploration phases, and it benefits from low computational cost and fast convergence. The rotational movement of sharks is used to escape from local optima. It is suggested to explore the SSO's capability for additional applications, such as crop planning, crop pattern optimization, irrigation water allocation, and crop yield prediction.

Keywords  Shark optimization algorithm · Rotational movement · Optimization problem · Decision variable

3.1 Introduction

Meta-heuristic algorithms have been developed based on different human, natural, and animal behaviors and on physical phenomena (Zhou et al., 2020). In recent years, bio-inspired systems have become a source of inspiration for many existing artificial intelligence systems (Cuevas et al., 2022). As meta-heuristic algorithms have progressed, numerous engineering studies have employed these techniques in different applications and for solving optimization problems. Meta-heuristic optimization algorithms have several advantages over traditional methods (Rao et al., 2019): (1) they are derivative-free and do not use gradient information, (2) they do not restrict the formulation of the problem, and (3) they can be applied to a wide range of engineering problems. Meta-heuristic approaches have several features, including tunable parameters and strategies of exploration and exploitation. Therefore, selecting a suitable algorithm for each optimization problem is necessary (Rao et al., 2019).

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
M. Ehteram et al., Application of Machine Learning Models in Agricultural and Meteorological Sciences, https://doi.org/10.1007/978-981-19-9733-4_3

Evolutionary approaches such as shark smell optimization (SSO) are simple to use and offer high efficiency and fast learning speed in nonlinear and complex problems (Abedinia et al., 2016; Seifi et al., 2020). The SSO is a nature-based meta-heuristic developed by Abedinia et al. (2016) using sharks' olfactory ability to detect smells from all directions. The SSO mimics sharks' hunting habits (moves and turns) and finds the food source (the optimal solution) by sensing the blood concentration. A shark's nostrils, along its snout, allow water to flow through them; upon entering the olfactory pits, the water flows over sensory cells. Sharks use their sense of smell as a guide for finding the source of food and hunting. During the movement of a shark, the blood concentration plays a crucial role in guiding it to its prey (Mohammad-Azari et al., 2018). The SSO method has several attributes (Abedinia et al., 2016; Rao et al., 2019; Seifi et al., 2020): (1) the SSO includes few parameters, which require little tuning effort, (2) each iteration of the SSO suggests different alternatives based on a population of solutions, (3) the search space of the SSO is initialized randomly, (4) the SSO can identify the best solution by balancing the exploitation and exploration phases, and (5) the SSO benefits from low computational cost and fast convergence because it depends on varying the quality of a single vector in the seeking process. Several studies developed improved versions of the SSO, such as the chaotic binary SSO algorithm (Zhao et al., 2022), the logistic mapping SSO (Zhou et al., 2020), and the backward movement-oriented SSO (Manjunath et al., 2022), to enhance its efficiency. Also, since the nonlinear characteristics of phenomena cannot always be captured by individual artificial intelligence models, hybrid models have been developed to enhance prediction performance. The hybrid models use the optimization ability of meta-heuristic algorithms to find the optimum values of machine learning parameters. For example, Banadkooki et al. (2020) hybridized individual models with the SSO algorithm to predict total dissolved solids (TDS). In addition, a multi-objective variant of the SSO algorithm was developed for solving multi-objective and complex optimization problems. For example, Seifi et al.
(2020) used the advantages of the multi-objective SSO algorithm for influent flow rate prediction. In the current chapter, the application of the SSO in different meteorological and agricultural studies and optimization problems is investigated.

3.2 The Structure of Shark Algorithm

In the last two decades, meta-heuristic optimization algorithms such as the SSO have gained wide attention due to their characteristics, including avoidance of local optima, derivative-free operation, flexibility, and simplicity (Mirjalili et al., 2014). Sharks move toward prey based on the concentration of blood odor in the water (Fig. 3.1) (Abedinia & Amjady, 2015). The SSO algorithm was constructed on several assumptions. First, an injured prey releases blood into the seawater (the search domain). Second, the influence of seawater flow on the odor particles of the prey's blood is not taken into account, as the blood is continuously released into the water. Third, one prey (one blood injection source) and only one search environment exist in the optimization process, to prevent the shark from becoming confused (Rao et al., 2019).

Fig. 3.1 Schematic illustration of a shark movement based on blood concentration

The SSO meta-heuristic algorithm was formulated based on three important components: odor particle initialization, forward and rotational movement of the sharks, and updating of the positions to find the prey (Seifi et al., 2020). Figure 3.2 shows the behavior of sharks' movement toward odor sources. The SSO includes the following steps:

(1) Initialization of a population of solutions: The injured prey leads to blood spreading in the sea (the search domain). It is assumed that the prey (food) is almost fixed at this point (Mohammad-Azari et al., 2018). To model this behavior, the SSO algorithm creates a random primary population of initial solutions. Each solution indicates one possible position of a shark at the beginning of the search for the optimal situation (the food source) (Abedinia et al., 2016):

$$X = \left[ X_1^1, X_2^1, \ldots, X_{NP}^1 \right], \quad NP: \text{population size} \tag{3.1}$$

where the superscript 1 denotes the first iteration.

Fig. 3.2 Schematic illustration of a shark moving toward an odor source (Abedinia et al., 2016)

The ith initial shark position vector, which is the ith possible initial solution of the optimization problem, is written as:

$$X_i^1 = \left[ X_{i,1}^1, X_{i,2}^1, \ldots, X_{i,ND}^1 \right], \quad i = 1, \ldots, NP \tag{3.2}$$

where $X_{i,j}^1$ is the jth decision variable of the ith position vector and ND is the number of decision variables of the optimization problem.

(2) Exploitation phase: Blood is regularly injected into the seawater, and the odor particle concentration is stronger near the injured prey (Fig. 3.1). It is assumed that the water flow does not distort the odor particles. Sharks approach the prey by following the odor particles (Cuevas et al., 2022). To model the shark movement phase, each individual in the population is considered to move toward the prey with an initial velocity vector (Abedinia et al., 2016):

$$V = \left[ V_1^1, V_2^1, \ldots, V_{NP}^1 \right] \tag{3.3}$$

Sharks increase their velocity in the forward motion as the concentration of odor particles increases. Each velocity vector has one component per decision variable:

$$V_i^1 = \left[ V_{i,1}^1, V_{i,2}^1, \ldots, V_{i,ND}^1 \right], \quad i = 1, \ldots, NP \tag{3.4}$$

Limits must be established to prevent the sharks' speed from increasing exponentially with the intensity and concentration of the odor. The gradient of the objective function can mathematically represent this type of movement: the gradient shows the direction in which the function increases at the fastest rate. The velocity of the shark is as follows:

$$\left| V_{i,j}^m \right| = \min \left[ \left| \mu_m \cdot R1 \cdot \frac{\partial (OF)}{\partial x_j} \bigg|_{x_{i,j}^m} + \alpha_m \cdot R2 \cdot V_{i,j}^{m-1} \right|, \ \left| \gamma_m \cdot V_{i,j}^{m-1} \right| \right] \tag{3.5}$$

$$i = 1, \ldots, NP, \quad m = 1, \ldots, M, \quad j = 1, \ldots, ND, \quad \mu_m \in (0, 1)$$

where $\mu_m$ is the gradient coefficient in the range [0, 1], $\partial(OF)/\partial x_j$ is the gradient of the objective function, $\gamma_m$ is the velocity limiter of stage m, $\alpha_m$ is the momentum coefficient in the interval [0, 1], m is the stage number of the shark's forward movement, and R1 and R2 are random values with uniform distribution. The shark's position is updated in the forward (global search) movement:

$$Z_i^{m+1} = X_i^m + V_i^m \cdot \Delta t_m, \quad i = 1, \ldots, NP, \quad m = 1, \ldots, M \tag{3.6}$$

where $\Delta t_m$ is the time interval of the mth stage. In addition, the individuals of the SSO algorithm approach the injured prey along a rotational movement to find stronger odor particles; hence, the local search process occurs in the rotational movement phase. The shark searches locally according to the following equation:

$$Z_i^{m+1,l} = Z_i^{m+1} + R3 \cdot Z_i^{m+1}, \quad i = 1, \ldots, NP, \quad m = 1, \ldots, M, \quad l = 1, \ldots, L \tag{3.7}$$

where R3 is a random value in the interval [−1, 1] and L is the number of points in the local search.

(3) Selection phase: Sharks are attracted to the injured fish because it produces the single odor source in the search environment. A selection process evaluates the quality of the solutions in the shark search domain (Cuevas et al., 2022; Mohammad-Azari et al., 2018). The shark keeps the best of the points obtained from the forward movement and the local (rotational) search:

$$X_i^{m+1} = \arg \max \left\{ OF\left(Z_i^{m+1}\right), OF\left(Z_i^{m+1,1}\right), \ldots, OF\left(Z_i^{m+1,L}\right) \right\}, \quad i = 1, \ldots, NP \tag{3.8}$$

In the SSO, some parameters (NP, M, $\gamma_m$, $\mu_m$, and $\alpha_m$) are defined by the user before the optimization begins. Random solutions are initially generated, and each decision variable of $X_i^1$ is created randomly within a specified range. The iteration process then starts, and the velocity vectors are calculated (Eq. 3.5). Next, the position of each individual is updated by Eq. 3.6, and the local search points are computed by Eq. 3.7. The individual (shark) position is then updated to the best location obtained from the forward and rotational movements by Eq. 3.8, and the best solution of the objective function is retained for the optimization problem. The optimization process continues until the stopping criterion is met (Seifi et al., 2020). The flowchart of the SSO algorithm is given in Fig. 3.3.
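Taken together, Eqs. (3.1)-(3.8) define a complete optimizer, and the steps above can be condensed into a short illustrative Python sketch. This is not the authors' implementation: the gradient in Eq. (3.5) is approximated here by central finite differences, the capped velocity is given the sign of the gradient-plus-momentum term (one possible reading of the minimum of absolute values), and the function name `sso_maximize` and all parameter values are demonstration choices.

```python
import numpy as np

rng = np.random.default_rng(42)

def sso_maximize(of, lb, ub, n_sharks=15, n_stages=30,
                 mu=0.9, alpha=0.1, gamma=4.0, n_local=4, dt=1.0):
    """Sketch of shark smell optimization following Eqs. (3.1)-(3.8).

    `of` is the objective (odor concentration) to be maximized;
    lb and ub are the lower/upper bounds of the decision variables.
    """
    nd = lb.size
    x = rng.uniform(lb, ub, size=(n_sharks, nd))            # Eqs. (3.1)-(3.2): positions
    v = 0.1 * rng.uniform(-1, 1, size=x.shape) * (ub - lb)  # Eqs. (3.3)-(3.4): velocities

    def grad(xi, h=1e-6):  # central-difference approximation of the OF gradient
        return np.array([(of(xi + h * e) - of(xi - h * e)) / (2 * h)
                         for e in np.eye(nd)])

    for _ in range(n_stages):
        for i in range(n_sharks):
            r1, r2 = rng.random(nd), rng.random(nd)
            prop = mu * r1 * grad(x[i]) + alpha * r2 * v[i]  # Eq. (3.5): gradient + momentum
            cap = np.abs(gamma * v[i])                       # Eq. (3.5): velocity limiter
            v[i] = np.sign(prop) * np.minimum(np.abs(prop), cap)
            z = np.clip(x[i] + v[i] * dt, lb, ub)            # Eq. (3.6): forward movement
            r3 = rng.uniform(-1, 1, size=(n_local, 1))       # Eq. (3.7): rotational moves
            cands = np.vstack([z, np.clip(z + r3 * z, lb, ub)])
            x[i] = max(cands, key=of)                        # Eq. (3.8): keep the best point
    best = max(x, key=of)
    return best, float(of(best))

# Toy example: maximize OF(x) = -(x1 - 1)^2 - (x2 - 1)^2, whose optimum is at (1, 1).
best, score = sso_maximize(lambda p: -np.sum((p - 1.0) ** 2),
                           lb=np.array([-5.0, -5.0]), ub=np.array([5.0, 5.0]))
```

With the fixed seed above, `best` should land close to (1, 1). Note how the limiter $\gamma_m$ keeps each velocity within a multiple of the previous one, which is what gives the SSO the stable, low-cost convergence behavior described earlier.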

Fig. 3.3 Flowchart of the SSO algorithm (Ahmadigorji & Amjady, 2016)

3.3 Application of SSO in Climate Studies

Over the past two decades, hydrologists and engineers have widely used optimization approaches based on meta-heuristic nature-inspired algorithms for optimization problems and modeling purposes to make appropriate decisions. Meta-heuristic nature-inspired algorithms are scientific approaches for solving optimization problems (Ibrahim et al., 2022), and several studies have demonstrated that they have tremendous potential in different scientific fields. In the field of climatic and meteorological variable prediction, Abedinia and Amjady (2015) proposed a hybrid model based on a neural network (NN) and the chaotic SSO (CSSO) algorithm to predict short-term wind power; the CSSO algorithm was used to optimize the number of hidden nodes of the NN, and the proposed technique provided accurate predictions despite the variability and intermittency of wind power. Mohamadi et al. (2020) examined the performance of optimization algorithms, such as the SSO and the firefly algorithm (FFA), coupled with predictive models to predict monthly evaporation at two stations in Iran; the results illustrated that the hybrid ANFIS-SSO is a powerful model for predicting evaporation. Ehteram et al. (2019a) applied a hybrid multi-objective ANFIS-SSO model to predict future renewable solar energy production; the results showed that the proposed model has a high capability to generate accurate zone mapping. Li et al. (2020) introduced an approach combining SSO-enhanced fuzzy clustering (EFC) with Weather Research and Prediction (the SSOFC-Apriori-WRP model) for predicting wind speed.

3.4 Application of SSO in Agricultural Studies

Reliable prediction of factors related to agricultural and crop production is important for a high-performance agricultural industry. In recent years, computational models and meta-heuristic nature-inspired algorithms such as the SSO have been widely used to predict such factors and to solve optimization problems in the agricultural industry. In the field of irrigation, Valikhan-Anaraki et al. (2019) used the SSO, a hybrid of the bat algorithm and particle swarm optimization, and the genetic algorithm to optimize the operation of the Aydoghmoush dam reservoir to reduce irrigation deficiencies. Ehteram et al. (2019b) predicted future trends of air temperature and precipitation based on the A1B scenario and the HAD-CM3 model for use in the SSO algorithm. Seifi et al. (2020) applied a multi-objective SSO to optimize a multilayer perceptron (MLP); the hybrid models were used to predict the influent flow rate time series over different prediction horizons. Ganesan and Chinnappan (2022) developed a novel hybrid deep learning model using the YOLO classifier to recognize paddy leaf disease.

Table 3.1 SSO application in studies related to climatic and agricultural fields

Application                         | Reference               | Applied algorithm
Optimization in reservoir operation | Ehteram et al. (2017)   | SSO
Optimization in reservoir operation | Ehteram et al. (2018)   | SSO
Optimization in reservoir operation | Allawi et al. (2018)    | Shark machine learning algorithm (SMLA)
Optimization in reservoir operation | Allawi et al. (2019)    | SMLA
Energy                              | Mirzapour et al. (2019) | SSO
Energy                              | Wei and Stanford (2019) | Chaotic binary SSO
Energy                              | Chen et al. (2020)      | Improved SSO
Energy                              | Vinay et al. (2022)     | SSO
Groundwater level                   | Rezaei et al. (2021)    | MODFLOW-SSO

3.5 Application of SSO in Other Studies

As mentioned above, most studies reported that the SSO algorithm can give accurate prediction and optimization results in different real-world applications, such as wind power, solar radiation, evaporation, influent flow rate, crop disease recognition, and irrigation deficiency. Several studies used the SSO algorithm to derive optimal operation rules in reservoir operation (Table 3.1). In addition, some studies employed the SSO algorithm to solve general issues related to energy. By contrast, only a limited number of studies have used the SSO algorithm for groundwater prediction.

3.6 Conclusion

This chapter described shark smell optimization (SSO) as a meta-heuristic nature-inspired algorithm that mimics shark hunting behavior. The SSO is a simple, flexible, and derivative-free algorithm consisting of five steps for optimizing a problem: initialization of a random population, forward movement, rotational movement, a deterministic selection mechanism, and a stopping condition. The algorithm has been implemented for reservoir operation optimization, crop disease recognition, and influent flow rate, energy, wind speed, and groundwater level prediction applications. It is suggested to explore the SSO's capability for many additional applications, such as crop planning, crop pattern optimization, irrigation water allocation, and crop yield prediction.

References

Abedinia, O., & Amjady, N. (2015). Short-term wind power prediction based on hybrid neural network and chaotic shark smell optimization. International Journal of Precision Engineering and Manufacturing-Green Technology, 2(3), 245–254.
Abedinia, O., Amjady, N., & Ghasemi, A. (2016). A new metaheuristic algorithm based on shark smell optimization. Complexity, 21(5), 97–116.
Ahmadigorji, M., & Amjady, N. (2016). A multiyear DG-incorporated framework for expansion planning of distribution networks using binary chaotic shark smell optimization algorithm. Energy, 102, 199–215.
Allawi, M. F., Jaafar, O., Mohamad Hamzah, F., Ehteram, M., Hossain, M., & El-Shafie, A. (2018). Operating a reservoir system based on the shark machine learning algorithm. Environmental Earth Sciences, 77(10), 1–14.
Allawi, M. F., Jaafar, O., Hamzah, F. M., & El-Shafie, A. (2019). Novel reservoir system simulation procedure for gap minimization between water supply and demand. Journal of Cleaner Production, 206, 928–943.
Banadkooki, F. B., Ehteram, M., Panahi, F., Sammen, S. S., Othman, F. B., & Ahmed, E. S. (2020). Estimation of total dissolved solids (TDS) using new hybrid machine learning models. Journal of Hydrology, 587, 124989.
Chen, S., Farkoush, S. G., & Leto, S. (2020). Photovoltaic cells parameters extraction using variables reduction and improved shark optimization technique. International Journal of Hydrogen Energy, 45(16), 10059–10069.
Cuevas, F., Castillo, O., & Cortes, P. (2022). Optimal setting of membership functions for interval type-2 fuzzy tracking controllers using a shark smell metaheuristic algorithm. International Journal of Fuzzy Systems, 24(2), 799–822.
Ehteram, M., Karami, H., Mousavi, S. F., El-Shafie, A., & Amini, Z. (2017). Optimizing dam and reservoirs operation based model utilizing shark algorithm approach. Knowledge-Based Systems, 122, 26–38.
Ehteram, M., Karami, H., Mousavi, S. F., Farzin, S., & Kisi, O. (2018). Evaluation of contemporary evolutionary algorithms for optimization in reservoir operation and water supply. Journal of Water Supply: Research and Technology-AQUA, 67(1), 54–67.
Ehteram, M., Ahmed, A. N., Fai, C. M., Afan, H. A., & El-Shafie, A. (2019a). Accuracy enhancement for zone mapping of a solar radiation forecasting based multi-objective model for better management of the generation of renewable energy. Energies, 12(14), 2730.
Ehteram, M., El-Shafie, A. H., Hin, L. S., Othman, F., Koting, S., Karami, H., Mousavi, S. F., Farzin, S., Ahmed, A. N., Bin Zawawi, M. H., & Hossain, M. S. (2019b). Toward bridging future irrigation deficits utilizing the shark algorithm integrated with a climate change model. Applied Sciences, 9(19), 3960.
Ganesan, G., & Chinnappan, J. (2022). Hybridization of ResNet with YOLO classifier for automated paddy leaf disease recognition: An optimized model. Journal of Field Robotics, 39(7), 1087–1111.
Ibrahim, N. S., Yahya, N. M., & Mohamed, S. B. (2022). Metaheuristic nature-inspired algorithms for reservoir optimization operation: A systematic literature review. Indonesian Journal of Electrical Engineering and Computer Science, 26(2), 1050–1059.
Li, L., Yin, X. L., Jia, X. C., & Sobhani, B. (2020). Day ahead powerful probabilistic wind power forecast using combined intelligent structure and fuzzy clustering algorithm. Energy, 192, 116498.
Manjunath, K., Ramaiah, G. K., & GiriPrasad, M. N. (2022). Backward movement oriented shark smell optimization-based audio steganography using encryption and compression strategies. Digital Signal Processing, 122, 103335.
Mirjalili, S., Mirjalili, S. M., & Lewis, A. (2014). Grey wolf optimizer. Advances in Engineering Software, 69, 46–61.
Mirzapour, F., Lakzaei, M., Varamini, G., Teimourian, M., & Ghadimi, N. (2019). A new prediction model of battery and wind-solar output in hybrid power system. Journal of Ambient Intelligence and Humanized Computing, 10(1), 77–87.
Mohamadi, S., Ehteram, M., & El-Shafie, A. (2020). Accuracy enhancement for monthly evaporation predicting model utilizing evolutionary machine learning methods. International Journal of Environmental Science and Technology, 17(7), 3373–3396.
Mohammad-Azari, S., Bozorg-Haddad, O., & Chu, X. (2018). Shark smell optimization (SSO) algorithm. In Advanced optimization by nature-inspired algorithms (pp. 93–103). Springer.
Rao, Y., Shao, Z., Ahangarnejad, A. H., Gholamalizadeh, E., & Sobhani, B. (2019). Shark smell optimizer applied to identify the optimal parameters of the proton exchange membrane fuel cell model. Energy Conversion and Management, 182, 1–8.
Rezaei, M., Mousavi, S. F., Moridi, A., Eshaghi Gordji, M., & Karami, H. (2021). A new hybrid framework based on integration of optimization algorithms and numerical method for estimating monthly groundwater level. Arabian Journal of Geosciences, 14(11), 1–15.
Seifi, A., Ehteram, M., & Soroush, F. (2020). Uncertainties of instantaneous influent flow predictions by intelligence models hybridized with multi-objective shark smell optimization algorithm. Journal of Hydrology, 587, 124977.
Valikhan-Anaraki, M., Mousavi, S. F., Farzin, S., Karami, H., Ehteram, M., Kisi, O., Fai, C. M., Hossain, M. S., Hayder, G., Ahmed, A. N., & El-Shafie, A. H. (2019). Development of a novel hybrid optimization algorithm for minimizing irrigation deficiencies. Sustainability, 11(8), 2337.
Vinay, N., Bale, A. S., Tiwari, S., & Baby, C. R. (2022). Artificial intelligence as a tool for conservation and efficient utilization of renewable resource. In Artificial intelligence for renewable energy systems (pp. 37–77).
Wei, Y., & Stanford, R. J. (2019). Parameter identification of solid oxide fuel cell by chaotic binary shark smell optimization method. Energy, 188, 115770.
Zhao, S., Sun, W., Li, J., & Gong, Y. (2022). Dynamic modeling of a proton exchange membrane fuel cell using chaotic binary shark smell optimizer from electrical and thermal viewpoints. International Journal of Energy and Environmental Engineering, 1–14.
Zhou, Y., Ye, J., Du, Y., & Sheykhahmad, F. R. (2020). New improved optimized method for medical image enhancement based on modified shark smell optimization algorithm. Sensing and Imaging, 21(1), 1–22.

Chapter 4

Sunflower Optimization Algorithm

Abstract  This chapter explains the mathematical model and structure of sunflower optimization (SFO). The algorithm is based on the life cycle of sunflowers: as the distance between a flower and the sun increases, the received radiation intensity decreases, and each sunflower seeks the best orientation toward the sun. The SFO has a high ability to solve optimization problems and can be easily implemented for complex problems. In reported studies, the SFO outperformed other optimization algorithms, such as particle swarm optimization (PSO) and the genetic algorithm (GA). The SFO can be used for solving complex problems in different fields, for training soft computing models, and in combination with other optimization algorithms.

Keywords  Sunflower optimization algorithm · Convergence velocity · Training soft computing models · Optimization algorithms

4.1 Introduction

This chapter explains the structure of sunflower optimization (SFO), a recent meta-heuristic algorithm inspired by sunflowers' movement toward the sun. First, the applications of the SFO are reviewed; then, the structure of the SFO is described.
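Before the formal structure is presented, the orientation idea mentioned in the abstract can be illustrated with a minimal Python sketch. It assumes two ingredients commonly reported for the SFO, namely an inverse-square radiation intensity (Q proportional to 1/(4πr²)) and a partial step toward the current best solution (the "sun"), and it omits the pollination and mortality operators; the function name `sfo_step` and the step fraction `lam` are hypothetical illustration choices, not the book's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sfo_step(pop, best, lam=0.3):
    """One simplified sunflower-orientation step (illustration only).

    Each candidate turns toward the current best solution (the "sun");
    the inverse-square radiation intensity weights its step size.
    """
    diff = best - pop                              # vectors pointing at the sun
    r = np.linalg.norm(diff, axis=1)               # distance of each flower to the sun
    q = np.where(r > 0, 1.0 / (4 * np.pi * np.maximum(r, 1e-12) ** 2), 0.0)
    w = q / q.max() if q.max() > 0 else q          # normalized intensity weights
    return pop + lam * w[:, None] * diff           # partial move toward the sun

# Toy run: 20 random candidates; the one with the lowest sphere value is the sun.
pop = rng.normal(size=(20, 3))
best = pop[np.argmin((pop ** 2).sum(axis=1))]
before = np.linalg.norm(pop - best, axis=1).mean()
pop = sfo_step(pop, best)
after = np.linalg.norm(pop - best, axis=1).mean()
```

After one step, every flower except the sun itself has moved a fraction of the way toward the best solution, so the mean distance `after` is smaller than `before`; in the full algorithm this orientation step alternates with the pollination and replacement operators described in the structure section.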

4.2 Applications of SFO in the Different Fields For structural damage detection problems, Gomes et al. (2019) introduced SFO. SFO outperformed other optimization algorithms based on the results. The SFO converged earlier than other optimization algorithms. Shaheen et al. (2019) used SFO to solve the problem of optimal power flow (OPF). The SFO showed more flexibility and faster convergence than the genetic algorithm. Qais et al. (2019) found parallel resistance and photo-generated current parameters using SFO. They suggested SFO for modeling any marketable photovoltaic module. El-Sehiemy et al. (2020) improved

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. Ehteram et al., Application of Machine Learning Models in Agricultural and Meteorological Sciences, https://doi.org/10.1007/978-981-19-9733-4_4


SFO to estimate battery parameters accurately. They used a reduction strategy to develop the SFO and reported that it obtained the lowest value of the objective function. Gomes and Giovani (2022) proposed SFO for identifying the severity of induced damages; improved efficiency and reduced computational cost were two benefits of the algorithm. Shaheen et al. (2021) enhanced SFO to minimize the operational costs of a power distribution network. They improved SFO based on the pollination rate and the mortality rate and reported that the improved SFO was superior to the other algorithms. Alshammari and Guesmi (2020) introduced a chaotic SFO for designing power system stabilizers; the improved SFO outperformed other optimization algorithms. Duong and Nguyen (2020) used SFO to solve the optimal power flow problem and reported that the SFO was highly efficient in the electricity market. Duong et al. (2020) proposed a hybrid SFO-cuckoo search algorithm (CSA) for power system optimization; the new algorithm enhanced the performance of both CSA and SFO. Raslan et al. (2020) developed SFO to maximize the lifetime of sensor networks. They integrated SFO with Lévy flight to create a new hybrid algorithm, which extended the lifetime of sensor networks more than the other algorithms. Fan et al. (2020) used a multi-objective SFO to choose the best fuel cell configuration and reported that it outperformed different optimization algorithms. Yuan et al. (2020) developed the SFO for the optimal choice of fuel cell parameters; the developed version performed better than the original SFO. Subhash and Udayakumar (2021) coupled the whale algorithm with SFO for modeling resource allocation; the new hybrid algorithm outperformed both the SFO and the whale algorithm. Magacho et al. (2021) used a multi-objective SFO for structural health monitoring. The results indicated that the multi-objective SFO was superior to the multi-objective genetic algorithm. Hussien et al. (2021) used SFO to enhance the performance of an inverter-based microgrid; SFO was found to be a flexible and applicable algorithm compared with particle swarm optimization (PSO). Sun (2021) used an improved SFO for adjusting extreme learning machine (ELM) parameters and stated that the SFO had a high ability to train the ELM. Ehteram et al. (2021) used SFO for adjusting adaptive neuro-fuzzy inference systems (ANFIS); the ANFIS-SFO had a high potential for predicting water levels. Jena et al. (2022) developed SFO to improve the efficiency of an industrial reducer gearbox and reported that the SFO was a reliable tool for solving complex problems. Mouncef and Mostafa (2022) used SFO for estimating battery capacity and stated that the SFO had a high potential for this task.

4.3 Structure of Sunflower Optimization Algorithm

Sunflowers have the same daily cycle: like clockwork, they awaken and follow the sun, seeking the best orientation toward it (Gomes et al., 2019). In the real world, each flower patch releases millions of pollen gametes; for simplicity, the algorithm assumes that each sunflower produces only one pollen gamete (Gomes et al., 2019). As the distance of a flower from the sun increases, the radiation intensity it receives decreases. First, the direction of each sunflower is determined based on the following equation (Gomes et al., 2019):

\vec{s}_i = \frac{Su^* - Su_i}{\|Su^* - Su_i\|}    (4.1)

where \vec{s}_i: the direction of the ith sunflower, Su^*: the best location of a sunflower found so far, and Su_i: the location of the ith sunflower. The sunflowers then move in these directions with the following step:

d_i = \lambda \times P_i(\|Su_i + Su_{i-1}\|) \times \|Su_i + Su_{i-1}\|    (4.2)

where d_i: the step of the ith sunflower, \lambda: a constant value (the pollination rate), and P_i(\|Su_i + Su_{i-1}\|): the probability of pollination. Additionally, each individual's maximum step should be limited (Gomes et al., 2019). The following equation defines the maximum step:

d_{max} = \frac{\|Su_{max} - Su_{min}\|}{2 \times N_{pop}}    (4.3)

where Su_{max}: the maximum value of the decision variables, Su_{min}: the minimum value of the decision variables, and N_{pop}: the number of plants. Finally, the location of each sunflower is computed as follows:

Su_{i+1} = Su_i + d_i \times \vec{s}_i    (4.4)

where Su_{i+1}: the new location of the sunflower. Figure 4.1 shows the flowchart of SFO.


Fig. 4.1 Structure of SFO for solving optimization problems
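The update cycle described above can be condensed into a short sketch. The following Python code is illustrative only, not the reference implementation of Gomes et al. (2019): the pollination probability of Eq. 4.2 is approximated by a uniform random draw, out-of-bound positions are clipped to the search limits, and the pollination rate (0.05), population size, and iteration count are hypothetical defaults.

```python
import numpy as np

def sunflower_optimization(objective, lo, up, n_pop=20, iters=150,
                           pollination_rate=0.05, seed=0):
    """Minimal SFO sketch following Eqs. 4.1-4.4 (assumed defaults)."""
    rng = np.random.default_rng(seed)
    lo, up = np.asarray(lo, float), np.asarray(up, float)
    dim = len(lo)
    pop = lo + rng.random((n_pop, dim)) * (up - lo)   # initial sunflowers
    fit = np.array([objective(p) for p in pop])
    best = pop[fit.argmin()].copy()                   # the "sun"
    f_best = float(fit.min())
    d_max = np.linalg.norm(up - lo) / (2.0 * n_pop)   # Eq. 4.3: maximum step
    for _ in range(iters):
        for i in range(n_pop):
            diff = best - pop[i]
            dist = np.linalg.norm(diff)
            if dist == 0.0:
                continue
            s = diff / dist                           # Eq. 4.1: direction toward the sun
            # Eq. 4.2 (simplified): random pollination probability, capped by d_max
            step = min(pollination_rate * rng.random() * dist, d_max)
            pop[i] = np.clip(pop[i] + step * s, lo, up)   # Eq. 4.4: move
            f_i = objective(pop[i])
            if f_i < f_best:
                best, f_best = pop[i].copy(), f_i
    return best, f_best
```

For a convex test function such as the sphere function, the sketch steadily moves the population toward the best solution found so far while the cap from Eq. 4.3 limits each step.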

References

Alshammari, B. M., & Guesmi, T. (2020). New chaotic sunflower optimization algorithm for optimal tuning of power system stabilizers. Journal of Electrical Engineering and Technology. https://doi.org/10.1007/s42835-020-00470-1
Duong, T. L., & Nguyen, T. T. (2020). Application of sunflower optimization algorithm for solving the security constrained optimal power flow problem. Engineering, Technology & Applied Science Research. https://doi.org/10.48084/etasr.3511
Duong, T. L., Nguyen, N. A., & Nguyen, T. T. (2020). A newly hybrid method based on cuckoo search and sunflower optimization for optimal power flow problem. Sustainability (Switzerland). https://doi.org/10.3390/su12135283
Ehteram, M., Ferdowsi, A., Faramarzpour, M., Al-Janabi, A. M. S., Al-Ansari, N., Bokde, N. D., & Yaseen, Z. M. (2021). Hybridization of artificial intelligence models with nature inspired optimization algorithms for lake water level prediction and uncertainty analysis. Alexandria Engineering Journal. https://doi.org/10.1016/j.aej.2020.12.034
El-Sehiemy, R. A., Hamida, M. A., & Mesbahi, T. (2020). Parameter identification and state-of-charge estimation for lithium-polymer battery cells using enhanced sunflower optimization algorithm. International Journal of Hydrogen Energy. https://doi.org/10.1016/j.ijhydene.2020.01.067
Fan, X., Sun, H., Yuan, Z., Li, Z., Shi, R., & Razmjooy, N. (2020). Multiobjective optimization for the proper selection of the best heat pump technology in a fuel cell-heat pump micro-CHP system. Energy Reports. https://doi.org/10.1016/j.egyr.2020.01.009


Gomes, G. F., & Giovani, R. S. (2022). An efficient two-step damage identification method using sunflower optimization algorithm and mode shape curvature (MSDBI–SFO). Engineering with Computers. https://doi.org/10.1007/s00366-020-01128-2
Gomes, G. F., da Cunha, S. S., & Ancelotti, A. C. (2019). A sunflower optimization (SFO) algorithm applied to damage identification on laminated composite plates. Engineering with Computers. https://doi.org/10.1007/s00366-018-0620-8
Hussien, A. M., Hasanien, H. M., & Mekhamer, S. F. (2021). Sunflower optimization algorithm-based optimal PI control for enhancing the performance of an autonomous operation of a microgrid. Ain Shams Engineering Journal. https://doi.org/10.1016/j.asej.2020.10.020
Jena, S., Jeet, S., Bagal, D. K., Baliarsingh, A. K., Nayak, D. R., & Barua, A. (2022). Efficiency analysis of mechanical reducer equipment of material handling industry using sunflower optimization algorithm and material generation algorithm. Materials Today: Proceedings, 50, 1113–1122.
Magacho, E. G., Jorge, A. B., & Gomes, G. F. (2021). Inverse problem based multiobjective sunflower optimization for structural health monitoring of three-dimensional trusses. Evolutionary Intelligence. https://doi.org/10.1007/s12065-021-00652-4
Mouncef, E., & Mostafa, B. (2022). Battery total capacity estimation based on the sunflower algorithm. Journal of Energy Storage. https://doi.org/10.1016/j.est.2021.103900
Qais, M. H., Hasanien, H. M., & Alghuwainem, S. (2019). Identification of electrical parameters for three-diode photovoltaic model using analytical and sunflower optimization algorithm. Applied Energy. https://doi.org/10.1016/j.apenergy.2019.05.013
Raslan, A. F., Ali, A. F., & Darwish, A. (2020). A modified sunflower optimization algorithm for wireless sensor networks. Advances in Intelligent Systems and Computing. https://doi.org/10.1007/978-3-030-44289-7_21
Shaheen, M. A. M., Hasanien, H. M., Mekhamer, S. F., & Talaat, H. E. A. (2019). Optimal power flow of power systems including distributed generation units using sunflower optimization algorithm. IEEE Access. https://doi.org/10.1109/ACCESS.2019.2933489
Shaheen, A. M., Elattar, E. E., El-Sehiemy, R. A., & Elsayed, A. M. (2021). An improved sunflower optimization algorithm-based Monte Carlo simulation for efficiency improvement of radial distribution systems considering wind power uncertainty. IEEE Access. https://doi.org/10.1109/ACCESS.2020.3047671
Subhash, L. S., & Udayakumar, R. (2021). Sunflower whale optimization algorithm for resource allocation strategy in cloud computing platform. Wireless Personal Communications. https://doi.org/10.1007/s11277-020-07835-9
Sun, Y. (2021). Mammograms classification using ELM based on improved sunflower optimization algorithm. Journal of Physics: Conference Series. https://doi.org/10.1088/1742-6596/1739/1/012047
Yuan, Z., Wang, W., Wang, H., & Razmjooy, N. (2020). A new technique for optimal estimation of the circuit-based PEMFCs using developed sunflower optimization algorithm. Energy Reports. https://doi.org/10.1016/j.egyr.2020.03.010

Chapter 5

Henry Gas Solubility Optimizer

Abstract This chapter explains the structure and mathematical model of the Henry gas solubility optimization (HGSO) algorithm and reviews its applications in different fields. The HGSO uses advanced operators for solving complex optimization problems and can converge earlier than other optimization algorithms. The HGSO has high efficiency for solving multi-objective optimization problems, can be used for training soft computing models, and is a robust algorithm.

Keywords Optimization problem · Optimization algorithms · Global search · Local search

5.1 Introduction

Nowadays, modelers face complicated and diverse problems and therefore need the most advanced tools to solve them. Optimization algorithms are among the most powerful tools for analyzing linear and nonlinear problems. These algorithms have different types and are used to find accurate solutions (Hashim et al., 2020). They may be applied in various fields, such as agriculture and water resources management. Computers are now used to solve complicated problems, and they also have other applications, such as simulation. One of the main applications of evolutionary algorithms is configuring the parameters of other models, a task that requires high accuracy. Such algorithms have crucial features, such as the ability to be combined with other numerical models, which is necessary in many industries. The accuracy and flexibility of optimization algorithms are therefore very high, although some algorithms may become trapped in local optima. Each algorithm uses different operators, mathematical methods, and functions to solve complicated problems (Hashim et al., 2020). Although numerical models have complex equations that make problem-solving difficult, optimization algorithms can be used easily and with high accuracy. Furthermore, some problems may have no analytic solution, so analytic methods can be replaced by optimization



algorithms. These algorithms may have flaws, and researchers use different methods to fix them; for example, one algorithm can be combined with another to achieve a better solution. These algorithms are able to converge quickly, and their applications are not limited to one context. Optimization algorithms contain a set of initial solutions that are updated during the optimization process. The structure of the Henry gas solubility optimizer (HGSO) is explained in this chapter. As a first step, the various applications of HGSO are reviewed. Next, the mathematical model of HGSO is explained.

5.2 Application of HGSO in Different Fields

Hashim et al. (2019) introduced HGSO, which solves optimization problems using Henry's law, and reported that it was superior to other optimization algorithms. Neggaz et al. (2020) used HGSO for the selection of significant features; the HGSO chose features successfully. Yildiz et al. (2020) used HGSO for the optimal design of vehicle components and reported that it had a high ability to solve real problems. Mirza et al. (2020) proposed an HGSO-based maximum power point tracking (MPPT) technique. HGSO-MPPT was used to address two major issues in photovoltaic systems, and the hybrid method decreased the tracking time by 15–30%. Hashim et al. (2020) developed HGSO for motif discovery and reported that the modified HGSO outperformed other optimization algorithms. Cao et al. (2020) used HGSO to train a predictive model; the SVM-HGSO had a high potential for predicting outputs. Abd Elaziz and Attiya (2021) coupled the whale optimization algorithm with HGSO for optimal task scheduling. A set of 36 optimization benchmark functions was used to validate the new method against the conventional HGSO and whale algorithm, and the hybrid method had the best performance. Mohammadi et al. (2021) enhanced the performance of HGSO using quantum theory and reported that the new algorithm successfully solved complex problems. Ravikumar and Kavitha (2021) used HGSO for adjusting convolutional neural network (CNN) parameters and reported that it improved the performance of the CNN. Xie et al. (2020) coupled the Harris hawk optimization algorithm with HGSO; the new hybrid algorithm was a reliable tool for solving complex problems. Ding et al. (2021) used HGSO for training a predictive model.
Karasu and Altan (2022) used HGSO for tuning long short-term memory (LSTM) neural network parameters and stated that the LSTM-HGSO outperformed other models. Bi and Zhang (2022) enhanced HGSO using an interval shrinking strategy and concluded that the new HGSO performed better than the original HGSO. Kahloul et al. (2022) presented a


new multi-objective HGSO in which Pareto solutions were stored in an elite archive; the new algorithm had a high ability to solve complex problems. Singh and Sandhu (2022) used HGSO to estimate the parameters of a proton exchange membrane fuel cell. Yıldız et al. (2022) suggested a novel optimization algorithm based on integrating chaotic maps into HGSO and found that the new HGSO performed better than the other optimization algorithms.

5.3 Structure of Henry Gas Solubility

Henry's gas law provides a relationship between gas solubility and partial pressure. First, Eq. 5.1 is used to initialize the locations of the gases (solutions):

x_i = lo + rand \times (up - lo)    (5.1)

where x_i: the location of the ith gas (solution), rand: a random number in [0, 1], and up and lo: the upper and lower bounds of the decision variable. The population of gases is divided into equal clusters, each containing one type of gas. For each cluster, the Henry coefficient is determined (Hashim et al., 2019):

H_j(t+1) = H_j(t) \times \exp\left(-\alpha_j \times \left(\frac{1}{T(t)} - \frac{1}{T^{\theta}}\right)\right)    (5.2)

T(t) = \exp\left(\frac{-t}{iter}\right)    (5.3)

where H_j(t+1): the Henry coefficient at time t+1, H_j(t): the Henry coefficient at time t, T^{\theta}: a constant value (the reference temperature), \alpha_j: a constant value, T(t): the temperature, t: the current iteration, and iter: the maximum number of iterations. H_j(t) is initialized as follows:

H_j = l \times r_1, \quad l = 5 \times 10^{-2}    (5.4)

where r_1: a random number. In the next step, the solubility is updated:

S_{ij} = \beta \times H_j(t+1) \times P_{ij}(t)    (5.5)

P_{ij}(t) = l_2 \times r_1    (5.6)

where P_{ij}(t): the partial pressure of gas i in cluster j, S_{ij}: the solubility of gas i in cluster j, \beta and l_2: constant values, and r_1: a random number. Finally, the location of each gas is updated based on the following equations:

x_{ij}(t+1) = x_{ij}(t) + f \times r \times v \times (x_{ib}(t) - x_{ij}(t)) + f \times r \times \phi \times (S_{ij}(t) \times x_{ib}(t) - x_{ij}(t))    (5.7)

v = \psi \times \exp\left(-\frac{F_b(t) + \kappa}{F_{ij} + \kappa}\right)    (5.8)

where F_{ij}: the objective function of gas i in cluster j, r: a random number, \psi and \phi: constant values, f: a flag that changes the search direction, v: the effect of the other gases on gas i, x_{ib}(t): the best solution, F_b(t): the objective function of the best solution, and \kappa = 0.50. The HGSO re-seeds the worst solutions based on the following equations:

W_{ij} = W_{ij}^{min} + r \times (W_{ij}^{max} - W_{ij}^{min})    (5.9)

n_w = n \times (r \times (\chi_2 - \chi_1) + \chi_1)    (5.10)

where W_{ij}^{min} and W_{ij}^{max}: the lower and upper bounds of the problem, n_w: the number of worst agents, \chi_2 = 0.20, and \chi_1 = 0.10. Figure 5.1 shows the structure of HGSO.

Fig. 5.1 Structure of HGSO for solving problems
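Equations 5.1–5.10 can be combined into a compact sketch. The Python code below is a simplified illustration, not the reference implementation of Hashim et al. (2019): the constants l_2, \alpha, \beta, \psi, and \phi, the reference temperature 298.15, the ±1 draw for the flag f, and the clipping of positions to the bounds are all assumptions made for brevity.

```python
import numpy as np

def hgso(objective, lo, up, n_gas=20, n_clusters=2, iters=100, seed=0):
    """Minimal HGSO sketch following Eqs. 5.1-5.10 (assumed constants)."""
    rng = np.random.default_rng(seed)
    lo, up = np.asarray(lo, float), np.asarray(up, float)
    dim = len(lo)
    x = lo + rng.random((n_gas, dim)) * (up - lo)           # Eq. 5.1: initial gases
    fit = np.array([objective(p) for p in x])
    H = 5e-2 * rng.random(n_clusters)                       # Eq. 5.4: l = 5e-2
    P = 1e2 * rng.random(n_gas)                             # Eq. 5.6: l2 = 1e2 (assumed)
    alpha, beta, psi, phi, kappa = 1.0, 1.0, 1.0, 1.0, 0.5  # assumed constants
    T_theta = 298.15                                        # reference temperature (assumed)
    cluster = np.arange(n_gas) % n_clusters                 # equal-sized clusters
    best = x[fit.argmin()].copy()
    f_best = float(fit.min())
    for t in range(1, iters + 1):
        T = np.exp(-t / iters)                              # Eq. 5.3: temperature
        H = H * np.exp(-alpha * (1.0 / T - 1.0 / T_theta))  # Eq. 5.2: Henry coefficients
        S = beta * H[cluster] * P                           # Eq. 5.5: solubilities
        for i in range(n_gas):
            f = rng.choice([-1.0, 1.0])                     # flag: search direction
            r = rng.random()
            v = psi * np.exp(-(f_best + kappa) / (fit[i] + kappa))  # Eq. 5.8
            # Eq. 5.7: move toward the best gas
            x[i] += f * r * v * (best - x[i]) + f * r * phi * (S[i] * best - x[i])
            x[i] = np.clip(x[i], lo, up)
            fit[i] = objective(x[i])
        # Eqs. 5.9-5.10: re-seed the worst agents at random positions
        n_w = int(n_gas * (rng.random() * (0.2 - 0.1) + 0.1))
        for i in fit.argsort()[::-1][:n_w]:
            x[i] = lo + rng.random(dim) * (up - lo)
            fit[i] = objective(x[i])
        if fit.min() < f_best:
            f_best = float(fit.min())
            best = x[fit.argmin()].copy()
    return best, f_best
```

The re-seeding step of Eqs. 5.9–5.10 is what helps the HGSO escape local optima: the worst 10–20% of agents are scattered back into the search space at every iteration.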


References

Abd Elaziz, M., & Attiya, I. (2021). An improved Henry gas solubility optimization algorithm for task scheduling in cloud computing. Artificial Intelligence Review. https://doi.org/10.1007/s10462-020-09933-3
Bi, J., & Zhang, Y. (2022). An improved Henry gas solubility optimization for optimization tasks. Applied Intelligence. https://doi.org/10.1007/s10489-021-02670-2
Cao, W., Liu, X., & Ni, J. (2020). Parameter optimization of support vector regression using Henry gas solubility optimization algorithm. IEEE Access. https://doi.org/10.1109/ACCESS.2020.2993267
Ding, W., Nguyen, M. D., Salih Mohammed, A., Armaghani, D. J., Hasanipanah, M., Bui, L. V., & Pham, B. T. (2021). A new development of ANFIS-based Henry gas solubility optimization technique for prediction of soil shear strength. Transportation Geotechnics. https://doi.org/10.1016/j.trgeo.2021.100579
Hashim, F. A., Houssein, E. H., Mabrouk, M. S., Al-Atabany, W., & Mirjalili, S. (2019). Henry gas solubility optimization: A novel physics-based algorithm. Future Generation Computer Systems. https://doi.org/10.1016/j.future.2019.07.015
Hashim, F. A., Houssein, E. H., Hussain, K., Mabrouk, M. S., & Al-Atabany, W. (2020). A modified Henry gas solubility optimization for solving motif discovery problem. Neural Computing and Applications. https://doi.org/10.1007/s00521-019-04611-0
Kahloul, S., Zouache, D., Brahmi, B., & Got, A. (2022). A multi-external archive-guided Henry gas solubility optimization algorithm for solving multiobjective optimization problems. Engineering Applications of Artificial Intelligence. https://doi.org/10.1016/j.engappai.2021.104588
Karasu, S., & Altan, A. (2022). Crude oil time series prediction model based on LSTM network with chaotic Henry gas solubility optimization. Energy, 242, 122964.
Mirza, A. F., Mansoor, M., & Ling, Q. (2020). A novel MPPT technique based on Henry gas solubility optimization. Energy Conversion and Management. https://doi.org/10.1016/j.enconman.2020.113409
Mohammadi, D., Abd Elaziz, M., Moghdani, R., Demir, E., & Mirjalili, S. (2021). Quantum Henry gas solubility optimization algorithm for global optimization. Engineering with Computers. https://doi.org/10.1007/s00366-021-01347-1
Neggaz, N., Houssein, E. H., & Hussain, K. (2020). An efficient Henry gas solubility optimization for feature selection. Expert Systems with Applications. https://doi.org/10.1016/j.eswa.2020.113364
Ravikumar, S., & Kavitha, D. (2021). CNN-OHGS: CNN-oppositional-based Henry gas solubility optimization model for autonomous vehicle control system. Journal of Field Robotics. https://doi.org/10.1002/rob.22020
Singh, P., & Sandhu, A. (2022). Optimal parameter extraction of proton exchange membrane fuel cell using Henry gas solubility optimization. International Journal of Energy Research.
Xie, W., Xing, C., Wang, J., Guo, S., Guo, M. W., & Zhu, L. F. (2020). Hybrid Henry gas solubility optimization algorithm based on the Harris Hawk optimization. IEEE Access. https://doi.org/10.1109/ACCESS.2020.3014309
Yildiz, B. S., Yıldız, A. R., Pholdee, N., Bureerat, S., Sait, S. M., & Patel, V. (2020). The Henry gas solubility optimization algorithm for the optimum structural design of automobile brake components. Materialpruefung/Materials Testing. https://doi.org/10.3139/120.111479
Yildiz, B. S., Pholdee, N., Panagant, N., Bureerat, S., Yildiz, A. R., & Sait, S. M. (2022). A novel chaotic Henry gas solubility optimization algorithm for solving real-world engineering problems. Engineering with Computers. https://doi.org/10.1007/s00366-020-01268-5

Chapter 6

Structure of Crow Optimization Algorithm

Abstract This chapter explains the structure and mathematical model of the crow optimization algorithm (COA). A characteristic of crows is hiding their food, and the mathematical model of the COA is defined based on this food-hiding mechanism. The algorithm can be used for solving complex problems such as the optimal operation of dam reservoirs, training soft computing models, optimal design of structures, and flood control. The COA can be easily implemented, and fast convergence is another of its advantages. The COA is highly efficient for solving multi-objective optimization problems and can be easily coupled with different optimization algorithms.

Keywords Crow optimization algorithm · Optimization algorithms · Artificial intelligence models · Soft computing models

6.1 Introduction

The COA is a useful evolutionary method. In this chapter, the applications of the COA are reviewed first. Afterward, the mathematical model of the crow optimization algorithm is described.

6.2 The Application of the COA

Askarzadeh (2016) introduced the COA based on the intelligent behavior of crows and used it for designing engineering structures; the COA showed a high ability to solve complex problems. Nobahari and Bighashdel (2017) developed COA for solving multi-objective optimization problems and reported that the multi-objective COA could successfully solve complex problems. Shi et al. (2017) improved the performance of COA using an inertia weight factor and a roulette wheel selection scheme and reported that the improved COA outperformed the original COA. Oliva et al. (2017) optimized the minimum cross entropy using the COA. Liu et al. (2017) used COA for adjusting extreme learning machine (ELM) models; the COA found the optimal values of the weight and bias parameters of the ELM.


They reported that the ELM-COA could successfully simulate complex problems. Zolghadr-Asli et al. (2018) investigated the applications of the CSA in different fields and reported that the COA had fast convergence. Hassanien et al. (2018) combined a rough searching scheme with COA for solving complex problems; the hybrid algorithm accelerated the modeling process. Gupta et al. (2018) used an improved COA to diagnose Parkinson's disease, and the results revealed that the improved COA performed better than the original COA. Lakshmi et al. (2018) coupled COA with K-means clustering to find optimal solutions; the COA showed high efficiency for complex problems. Díaz et al. (2018) enhanced COA for optimizing energy problems and reported the high performance of the improved COA. Rizk-Allah et al. (2018) used chaos theory to enhance COA and evaluated its performance with a nonparametric test; the improved COA performed better than the original COA. Mohammadi and Abdi (2018) used COA to solve the non-convex economic load dispatch (ELD) problem and reported that the improved COA converged earlier than the other optimization algorithms. Hinojosa et al. (2018) applied a multi-objective chaotic COA (MOCCOA) to solve complex problems. Huang and Wu (2018) combined COA with particle swarm optimization (PSO) and claimed that the PSO-COA performed better than the COA. Anter et al. (2019) used COA to enhance the efficiency of fast fuzzy C-means (FFCM); the FFCM-COA had a high potential for locating crop rows. Sayed et al. (2019) combined chaotic theory with COA (CCOA) for feature selection, and the results revealed that the CCOA outperformed the COA. Manimurugan et al. (2020) used an adaptive neuro-fuzzy inference system (ANFIS) to identify abnormalities in networks and reported that the COA improved the performance of the ANFIS model.
Subramanian et al. (2020) coupled the gray wolf optimization algorithm with COA to improve the lifetime expectancy of the network; the hybrid algorithm outperformed the COA. Bhullar et al. (2020) developed an enhanced COA to optimize a proportional-integral-derivative (PID) controller and stated that its accuracy was better than that of the existing algorithms. Pandey and Kirmani (2020) suggested COA for optimal photovoltaic placement and reported that the COA could successfully minimize voltage losses. Chugh et al. (2021) introduced a new hybrid algorithm named the spider monkey COA. Meraihi et al. (2021) stated that the COA had a high potential for image processing, feature selection, and water resource management. Huangpeng et al. (2021) used COA to optimize dam reservoir operation under climate change. Vega et al. (2022) used a Learning-based Modular Balancer (LBMB) to improve the performance of COA; the LBMB-based CSA outperformed the COA. Hossain et al. (2022) coupled COA with a support vector machine (SVM) to optimize microalgae-based wastewater treatment and reported that the COA improved the accuracy of SVM models.


6.3 Mathematical Model of COA

A characteristic of crows is hiding their food, and the mathematical model of the COA is defined based on this food-hiding mechanism. First, the initial locations of the crows are defined. Crows memorize the positions of their hiding places (Askarzadeh, 2016); during iteration iter, m_{i,iter} denotes the hiding place of crow i. Crows continuously search their environment for better food sources and memorize the best locations they find. The COA considers two cases (Askarzadeh, 2016).

1. Crow i follows crow j, and crow j is unaware that crow i is following it:

cr_{i,iter+1} = cr_{i,iter} + r_i \times fl \times (m_{j,iter} - cr_{i,iter})    (6.1)

where cr_{i,iter+1}: the location of the ith crow in iteration iter+1, r_i: a random number, fl: the flight length, and m_{j,iter}: the best position obtained so far by crow j.

2. Crow i follows crow j, and crow j knows that crow i is following it. In this case, crow j fools crow i by moving to a random location:

cr_{i,iter+1} = \begin{cases} cr_{i,iter} + r_i \times fl_{i,iter} \times (m_{j,iter} - cr_{i,iter}) & \text{if } r_j \ge AP_{j,iter} \\ \text{a random location} & \text{otherwise} \end{cases}    (6.2)

where fl_{i,iter}: the flight length and AP_{j,iter}: the awareness probability of crow j. The crows update their memories based on the following equation:

m_{i,iter+1} = \begin{cases} cr_{i,iter+1} & \text{if } f(cr_{i,iter+1}) \text{ is better than } f(m_{i,iter}) \\ m_{i,iter} & \text{otherwise} \end{cases}    (6.3)

where f(cr_{i,iter+1}): the objective function of the ith crow and f(m_{i,iter}): the objective function of its memorized position. Using a population increases the probability of escaping local optima and finding a good solution. Two parameters must be tuned in the COA: the flight length and the awareness probability. Figure 6.1 shows the structure of COA.


Fig. 6.1 Structure of COA for solving optimization problems
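The two cases of Eqs. 6.1–6.3 translate directly into code. The Python sketch below is a minimal illustration of the COA loop: the defaults fl = 2 and AP = 0.1 follow the values suggested by Askarzadeh (2016), while the population size, iteration count, and the clipping of out-of-bound positions are simplifying assumptions (the original algorithm instead rejects infeasible moves).

```python
import numpy as np

def crow_search(objective, lo, up, n_crows=20, iters=200, fl=2.0, ap=0.1, seed=0):
    """Minimal COA sketch following Eqs. 6.1-6.3."""
    rng = np.random.default_rng(seed)
    lo, up = np.asarray(lo, float), np.asarray(up, float)
    dim = len(lo)
    crows = lo + rng.random((n_crows, dim)) * (up - lo)   # initial positions
    memory = crows.copy()                                 # hiding places m_i
    mem_fit = np.array([objective(m) for m in memory])
    for _ in range(iters):
        for i in range(n_crows):
            j = rng.integers(n_crows)                     # crow i follows a random crow j
            if rng.random() >= ap:
                # Eq. 6.1: crow j is unaware, so crow i moves toward j's hiding place
                new = crows[i] + rng.random() * fl * (memory[j] - crows[i])
            else:
                # Eq. 6.2: crow j is aware, so crow i is sent to a random location
                new = lo + rng.random(dim) * (up - lo)
            new = np.clip(new, lo, up)                    # simplification: clip to bounds
            f_new = objective(new)
            crows[i] = new
            if f_new < mem_fit[i]:                        # Eq. 6.3: memory update
                memory[i] = new
                mem_fit[i] = f_new
    k = int(mem_fit.argmin())
    return memory[k].copy(), float(mem_fit[k])
```

Only two parameters control the search behavior: a small flight length favors local search around the hiding places, while a large flight length or a higher awareness probability pushes the swarm toward global exploration.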

References

Anter, A. M., Hassenian, A. E., & Oliva, D. (2019). An improved fast fuzzy c-means using crow search optimization algorithm for crop identification in agricultural. Expert Systems with Applications. https://doi.org/10.1016/j.eswa.2018.10.009
Askarzadeh, A. (2016). A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Computers and Structures. https://doi.org/10.1016/j.compstruc.2016.03.001
Bhullar, A. K., Kaur, R., & Sondhi, S. (2020). Enhanced crow search algorithm for AVR optimization. Soft Computing. https://doi.org/10.1007/s00500-019-04640-w
Chugh, A., Sharma, V. K., Kumar, S., Nayyar, A., Qureshi, B., Bhatia, M. K., & Jain, C. (2021). Spider monkey crow optimization algorithm with deep learning for sentiment classification and information retrieval. IEEE Access, 9, 24249–24262.
Díaz, P., Pérez-Cisneros, M., Cuevas, E., Avalos, O., Gálvez, J., Hinojosa, S., & Zaldivar, D. (2018). An improved crow search algorithm applied to energy problems. Energies. https://doi.org/10.3390/en11030571
Gupta, D., Sundaram, S., Khanna, A., Ella Hassanien, A., & de Albuquerque, V. H. C. (2018). Improved diagnosis of Parkinson's disease using optimized crow search algorithm. Computers and Electrical Engineering. https://doi.org/10.1016/j.compeleceng.2018.04.014


Hassanien, A. E., Rizk-Allah, R. M., & Elhoseny, M. (2018). A hybrid crow search algorithm based on rough searching scheme for solving engineering optimization problems. Journal of Ambient Intelligence and Humanized Computing. https://doi.org/10.1007/s12652-018-0924-y
Hinojosa, S., Oliva, D., Cuevas, E., Pajares, G., Avalos, O., & Gálvez, J. (2018). Improving multi-criterion optimization with chaos: A novel multi-objective chaotic crow search algorithm. Neural Computing and Applications. https://doi.org/10.1007/s00521-017-3251-x
Hossain, S. Z., Sultana, N., Mohammed, M. E., Razzak, S. A., & Hossain, M. M. (2022). Hybrid support vector regression and crow search algorithm for modeling and multiobjective optimization of microalgae-based wastewater treatment. Journal of Environmental Management, 301, 113783.
Huang, K. W., & Wu, Z. X. (2018). CPO: A crow particle optimization algorithm. International Journal of Computational Intelligence Systems. https://doi.org/10.2991/ijcis.2018.125905658
Huang, K. W., & Wu, Z. X. (2019). CPO: A crow particle optimization algorithm. International Journal of Computational Intelligence Systems, 12(1), 426–435.
Huangpeng, Q., Huang, W., & Gholinia, F. (2021). Forecast of the hydropower generation under influence of climate change based on RCPs and developed crow search optimization algorithm. Energy Reports. https://doi.org/10.1016/j.egyr.2021.01.006
Lakshmi, K., Visalakshi, N. K., & Shanthi, S. (2018). Data clustering using K-means based on crow search algorithm. Sadhana—Academy Proceedings in Engineering Sciences. https://doi.org/10.1007/s12046-018-0962-3
Liu, D., Liu, C., Fu, Q., Li, T., Imran, K. M., Cui, S., & Abrar, F. M. (2017). ELM evaluation model of regional groundwater quality based on the crow search algorithm. Ecological Indicators. https://doi.org/10.1016/j.ecolind.2017.06.009
Manimurugan, S., Majdi, A. M., Mohmmed, M., Narmatha, C., & Varatharajan, R. (2020). Intrusion detection in networks using crow search optimization algorithm with adaptive neuro-fuzzy inference system. Microprocessors and Microsystems. https://doi.org/10.1016/j.micpro.2020.103261
Meraihi, Y., Gabis, A. B., Ramdane-Cherif, A., & Acheli, D. (2021). A comprehensive survey of crow search algorithm and its applications. Artificial Intelligence Review, 54(4), 2669–2716.
Mohammadi, F., & Abdi, H. (2018). A modified crow search algorithm (MCSA) for solving economic load dispatch problem. Applied Soft Computing Journal. https://doi.org/10.1016/j.asoc.2018.06.040
Nobahari, H., & Bighashdel, A. (2017). MOCSA: A multi-objective crow search algorithm for multi-objective optimization. In 2nd Conference on Swarm Intelligence and Evolutionary Computation, CSIEC 2017—Proceedings. https://doi.org/10.1109/CSIEC.2017.7940171
Oliva, D., Hinojosa, S., Cuevas, E., Pajares, G., Avalos, O., & Gálvez, J. (2017). Cross entropy based thresholding for magnetic resonance brain images using crow search algorithm. Expert Systems with Applications. https://doi.org/10.1016/j.eswa.2017.02.042
Pandey, A. K., & Kirmani, S. (2020). Optimal location and sizing of hybrid system by analytical crow search optimization algorithm. International Transactions on Electrical Energy Systems. https://doi.org/10.1002/2050-7038.12327
Rizk-Allah, R. M., Hassanien, A. E., & Bhattacharyya, S. (2018). Chaotic crow search algorithm for fractional optimization problems. Applied Soft Computing Journal. https://doi.org/10.1016/j.asoc.2018.03.019
Sayed, G. I., Hassanien, A. E., & Azar, A. T. (2019). Feature selection via a novel chaotic crow search algorithm. Neural Computing and Applications. https://doi.org/10.1007/s00521-017-2988-6
Shi, Z., Li, Q., Zhang, S., & Huang, X. (2017). Improved crow search algorithm with inertia weight factor and roulette wheel selection scheme. In Proceedings—2017 10th International Symposium on Computational Intelligence and Design, ISCID. https://doi.org/10.1109/ISCID.2017.140
Subramanian, P., Sahayaraj, J. M., Senthilkumar, S., & Alex, D. S. (2020). A hybrid grey wolf and crow search optimization algorithm-based optimal cluster head selection scheme for wireless sensor networks. Wireless Personal Communications. https://doi.org/10.1007/s11277-020-07259-5

60

6 Structure of Crow Optimization Algorithm

Vega, E., Soto, R., Crawford, B., Peña, J., Contreras, P., & Castro, C. (2022). Predicting population size and termination criteria in metaheuristics: A case study based on spotted hyena optimizer and crow search algorithm. Applied Soft Computing, 109513. Zolghadr-Asli, B., Bozorg-Haddad, O., & Chu, X. (2018). Crow search algorithm (CSA). In Advanced optimization by nature-inspired algorithms (pp. 143–149). Springer.

Chapter 7

Structure of Salp Swarm Algorithm

Abstract This chapter explains the theory of the salp swarm algorithm (SSA). The SSA is easy to implement, and its parameters are simple to adjust. Fast convergence and high accuracy are among its advantages. The algorithm has few parameters: the best solution guides the other solutions through the search space. The SSA can be coupled with other optimization algorithms to solve complex problems, and with soft computing models to find their parameter values.

Keywords Optimization algorithms · Salp swarm algorithm · Soft computing models · Hybrid algorithms

7.1 Introduction

Computer systems now play an important role in solving complex mathematical and nonlinear problems. They can employ artificial intelligence models and optimization algorithms for this purpose (Mirjalili et al., 2017). Most of these algorithms begin with a primary population (Ehteram et al., 2022), a set of initial solutions stored in a matrix; the algorithm then uses its operators to guide the solutions toward the best location. Optimization algorithms can be applied directly to optimization problems or used to adjust the parameters of other models. They can handle multidimensional problems, their codes are not difficult to implement, and they are widely used in practice, for example in the design and manufacture of industrial tools. The SSA is an example of a strong algorithm with few parameters (Mirjalili et al., 2017). The best solution guides the other solutions; therefore, there is a leader and several followers in this algorithm. Followers try to improve their quality during the optimization process. This chapter introduces the structure of the SSA: first, the applications of the algorithm in different fields are reviewed; then the structure of the SSA is explained and its flowchart is described.

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
M. Ehteram et al., Application of Machine Learning Models in Agricultural and Meteorological Sciences, https://doi.org/10.1007/978-981-19-9733-4_7

7.2 The Application of the Salp Swarm Algorithm in Different Fields

The behavior of salps inspired the SSA, and the algorithm has been used in many fields. Mirjalili et al. (2017) introduced the SSA to solve complex engineering problems; tested on mathematical benchmark functions, it solved complex problems efficiently. For feature selection, Faris et al. (2018) used the SSA, converting its continuous version to a binary one by employing eight transfer functions. Based on chaos theory (CT), Sayed et al. (2018) proposed a new version of the SSA; the CT-SSA maximized classification accuracy by finding the optimal feature subset. For pattern classification, Abusnaina et al. (2018) used the SSA to optimize the weight coefficients of neural networks; validated on several well-known classification problems, the proposed method outperformed other optimization algorithms in classification accuracy and sum of squared errors. Ekinci and Hekimoğlu (2018) tuned the power system stabilizer (PSS) in a multimachine power system using the SSA, which performed better than other intelligent approaches. Zhang et al. (2018) estimated the soil–water retention curve parameters using the SSA and reported the highest efficiency. Ibrahim et al. (2019) coupled the SSA with particle swarm optimization (PSO); the hybrid SSA-PSO found the best features on different datasets and showed a high capacity for solving complex problems. A binary SSA was introduced by Rizk-Allah et al. (2019) and solved complex problems well. Qais et al. (2019) improved the SSA for solving high-dimensional functions, enhancing its convergence velocity on optimization problems. Bairathi and Gopalani (2019) proposed the SSA for adjusting artificial neural network (ANN) parameters.
The results indicated that the SSA enhanced the performance of ANN models. Wu et al. (2019) improved the SSA using a weight coefficient; the new version outperformed the original SSA. Zhang et al. (2022) used several strategies to enhance the precision of the SSA, and the presented algorithm was better than the original SSA. Yaseen et al. (2020) used the SSA for training an extreme learning machine (ELM) and reported that the optimized ELM was better than the standalone ELM. Aljarah et al. (2020) used a time-varying strategy and local fittest solutions to enhance the efficiency of the SSA; the new SSA achieved significantly more promising results than other algorithms. Tubishat et al. (2021) stated that the SSA, like other optimization algorithms, can fall into local optima; their dynamic salp swarm algorithm (DSSA), which uses a new equation for updating the salps' positions, improved the accuracy of the SSA and showed a high capacity to solve complex problems. Braik et al. (2021) used multi-leadership and simulated annealing to improve SSA accuracy; this integration enhanced the exploration and exploitation of the SSA. Si et al. (2022) investigated opposition-based learning (OBL) and its impact on the SSA. Samantaray et al. (2022) integrated the SSA with a support vector machine (SVM) to predict monthly runoff and recommended the SVM-SSA for modeling and predicting rainfall-runoff interactions. Khajehzadeh et al. (2022) used an adaptive SSA for solving global optimization problems and stated that it outperformed the original SSA. Ehteram et al. (2022) used a multi-objective SSA for training an ANN and finding the best input combinations, reporting a high ability to solve complex problems. Tawhid and Ibrahim (2022) introduced chaos into the SSA to improve its global search. Noori et al. (2022) used the SSA to estimate the level of reclaimed solid waste; the SSA was effective for determining least-distance paths. Eslami et al. (2022) used the SSA for training an SVM and reported that the SVM-SSA had a high potential for predicting groundwater levels. Guan et al. (2022) used the SSA for placing multiple controllers in large-scale software-defined networks; the results indicated a high potential for solving optimization problems.

7.3 Structure of Salp Swarm Algorithm

Salp chains can be mathematically modeled by dividing the population into two groups: leaders and followers (Mirjalili et al., 2017). The leader at the front of the chain guides the followers. Equation 7.1 updates the position of the leader:

$$le_j^1 = \begin{cases} food_j + \alpha_1\big((up_j - lo_j)\alpha_2 + lo_j\big), & \alpha_3 \ge 0 \\ food_j - \alpha_1\big((up_j - lo_j)\alpha_2 + lo_j\big), & \alpha_3 < 0 \end{cases} \qquad (7.1)$$

where $le_j^1$: position of the leader in the jth dimension, $food_j$: location of the food source, $\alpha_1$, $\alpha_2$, and $\alpha_3$: random numbers, and $up_j$ and $lo_j$: upper and lower bounds of the decision variable. The controller parameter $\alpha_1$ is updated as follows:

$$\alpha_1 = 2e^{-\left(\frac{4i}{I}\right)^2} \qquad (7.2)$$

where $i$: iteration number (IT) and $I$: maximum IT. The position of a follower is updated as follows:

$$sa_j^i = \frac{1}{2}\left(sa_j^i + sa_j^{i-1}\right) \qquad (7.3)$$

where $sa_j^i$: location of the ith salp in the jth dimension and $sa_j^{i-1}$: location of the (i − 1)th salp in the jth dimension. Figure 7.1 shows the SSA flowchart.

Fig. 7.1 SSA flowchart for solving the optimization problem
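The leader–follower updates of Eqs. 7.1–7.3 can be sketched in a few lines of code. The following is a minimal NumPy illustration, not the authors' implementation; the function name, parameter defaults, and the 0.5 threshold for the random branch on α3 are assumptions made for the sketch:

```python
import numpy as np

def salp_swarm(objective, lb, ub, n_salps=30, n_iter=200, seed=0):
    """Minimize `objective` with a basic SSA sketch (names/defaults illustrative)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    pos = rng.uniform(lb, ub, size=(n_salps, lb.size))    # the salp chain
    fit = np.apply_along_axis(objective, 1, pos)
    best, best_fit = pos[fit.argmin()].copy(), fit.min()  # food source
    for it in range(n_iter):
        a1 = 2 * np.exp(-(4 * (it + 1) / n_iter) ** 2)    # Eq. 7.2
        for i in range(n_salps):
            if i == 0:                                    # leader, Eq. 7.1
                a2, a3 = rng.random(lb.size), rng.random(lb.size)
                step = a1 * ((ub - lb) * a2 + lb)
                # branch at 0.5 since a3 is drawn from [0, 1)
                pos[i] = np.where(a3 >= 0.5, best + step, best - step)
            else:                                         # follower, Eq. 7.3
                pos[i] = (pos[i] + pos[i - 1]) / 2
        pos = np.clip(pos, lb, ub)
        fit = np.apply_along_axis(objective, 1, pos)
        if fit.min() < best_fit:
            best_fit, best = fit.min(), pos[fit.argmin()].copy()
    return best, best_fit
```

On a simple sphere function, e.g. `salp_swarm(lambda x: float(np.sum(x**2)), [-5, -5], [5, 5])`, the chain contracts around the best solution as α1 decays, which is the exploration-to-exploitation transition described above.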

References

Abusnaina, A. A., Ahmad, S., Jarrar, R., & Mafarja, M. (2018). Training neural networks using salp swarm algorithm for pattern classification. ACM International Conference Proceeding Series. https://doi.org/10.1145/3231053.3231070
Aljarah, I., Habib, M., Faris, H., Al-Madi, N., Heidari, A. A., Mafarja, M., Elaziz, M. A., & Mirjalili, S. (2020). A dynamic locality multiobjective salp swarm algorithm for feature selection. Computers and Industrial Engineering. https://doi.org/10.1016/j.cie.2020.106628
Bairathi, D., & Gopalani, D. (2019). Salp swarm algorithm (SSA) for training feed-forward neural networks. Advances in Intelligent Systems and Computing. https://doi.org/10.1007/978-981-13-1592-3_41
Braik, M., Sheta, A., Turabieh, H., & Alhiary, H. (2021). A novel lifetime scheme for enhancing the convergence performance of salp swarm algorithm. Soft Computing. https://doi.org/10.1007/s00500-020-05130-0
Ehteram, M., Panahi, F., Ahmed, A. N., Huang, Y. F., Kumar, P., & Elshafie, A. (2022). Predicting evaporation with optimized artificial neural network using multiobjective salp swarm algorithm. Environmental Science and Pollution Research. https://doi.org/10.1007/s11356-021-16301-3
Ekinci, S., & Hekimoğlu, B. (2018). Parameter optimization of power system stabilizer via salp swarm algorithm. In 5th International Conference on Electrical and Electronics Engineering, ICEEE. https://doi.org/10.1109/ICEEE2.2018.8391318


Eslami, P., Nasirian, A., Akbarpour, A., & Nazeri Tahroudi, M. (2022). Groundwater estimation of Ghayen plain with regression-based and hybrid time series models. Paddy and Water Environment, 1–12.
Faris, H., Mafarja, M. M., Heidari, A. A., Aljarah, I., Al-Zoubi, A. M., Mirjalili, S., & Fujita, H. (2018). An efficient binary salp swarm algorithm with crossover scheme for feature selection problems. Knowledge-Based Systems. https://doi.org/10.1016/j.knosys.2018.05.009
Guan, S., Li, J., Li, Y., & Wang, Z. (2022). A multi-controller placement method for software defined network based on improved firefly algorithm. Transactions on Emerging Telecommunications Technologies, e4482.
Ibrahim, R. A., Ewees, A. A., Oliva, D., Abd Elaziz, M., & Lu, S. (2019). Improved salp swarm algorithm based on particle swarm optimization for feature selection. Journal of Ambient Intelligence and Humanized Computing. https://doi.org/10.1007/s12652-018-1031-9
Khajehzadeh, M., Iraji, A., Majdi, A., Keawsawasvong, S., & Nehdi, M. L. (2022). Adaptive salp swarm algorithm for optimization of geotechnical structures. Applied Sciences, 12(13), 6749.
Mirjalili, S., Gandomi, A. H., Mirjalili, S. Z., Saremi, S., Faris, H., & Mirjalili, S. M. (2017). Salp swarm algorithm: A bio-inspired optimizer for engineering design problems. Advances in Engineering Software. https://doi.org/10.1016/j.advengsoft.2017.07.002
Noori, M. A., Al-Janabi, T. A., & Hussien, S. A. S. (2022). Solid waste recycling and management cost optimization algorithm. Bulletin of Electrical Engineering and Informatics, 11(4).
Qais, M. H., Hasanien, H. M., & Alghuwainem, S. (2019). Enhanced salp swarm algorithm: Application to variable speed wind generators. Engineering Applications of Artificial Intelligence. https://doi.org/10.1016/j.engappai.2019.01.011
Rizk-Allah, R. M., Hassanien, A. E., Elhoseny, M., & Gunasekaran, M. (2019). A new binary salp swarm algorithm: Development and application for optimization tasks. Neural Computing and Applications. https://doi.org/10.1007/s00521-018-3613-z
Samantaray, S., Sawan Das, S., Sahoo, A., & Prakash Satapathy, D. (2022). Monthly runoff prediction at Baitarani river basin by support vector machine based on salp swarm algorithm. Ain Shams Engineering Journal. https://doi.org/10.1016/j.asej.2022.101732
Sayed, G. I., Khoriba, G., & Haggag, M. H. (2018). A novel chaotic salp swarm algorithm for global optimization and feature selection. Applied Intelligence. https://doi.org/10.1007/s10489-018-1158-6
Si, T., Miranda, P. B., & Bhattacharya, D. (2022). Novel enhanced salp swarm algorithms using opposition-based learning schemes for global optimization problems. Expert Systems with Applications, 117961.
Tawhid, M. A., & Ibrahim, A. M. (2022). Improved salp swarm algorithm combined with chaos. Mathematics and Computers in Simulation, 202, 113–148.
Tubishat, M., Ja'afar, S., Alswaitti, M., Mirjalili, S., Idris, N., Ismail, M. A., & Omar, M. S. (2021). Dynamic salp swarm algorithm for feature selection. Expert Systems with Applications. https://doi.org/10.1016/j.eswa.2020.113873
Wu, J., Nan, R., & Chen, L. (2019). Improved salp swarm algorithm based on weight factor and adaptive mutation. Journal of Experimental and Theoretical Artificial Intelligence. https://doi.org/10.1080/0952813X.2019.1572659
Yaseen, Z. M., Faris, H., & Al-Ansari, N. (2020). Hybridized extreme learning machine model with salp swarm algorithm: A novel predictive model for hydrological application. Complexity. https://doi.org/10.1155/2020/8206245
Zhang, J., Wang, Z., & Luo, X. (2018). Parameter estimation for soil water retention curve using the salp swarm algorithm. Water (Switzerland). https://doi.org/10.3390/w10060815
Zhang, H., Cai, Z., Ye, X., Wang, M., Kuang, F., Chen, H., Li, C., & Li, Y. (2022). A multi-strategy enhanced salp swarm algorithm for global optimization. Engineering with Computers. https://doi.org/10.1007/s00366-020-01099-4

Chapter 8

Structure of Dragonfly Optimization Algorithm

Abstract This chapter explains the structure and mathematical model of the dragonfly optimization algorithm (DFOA). The dragonfly is a small predator; during the exploration phase, dragonflies form small groups and fly back and forth to seek food and hunt prey. The DFOA can be applied to different optimization problems, including multiobjective ones, is easy to implement, and can be coupled with other optimization algorithms. It is also a robust optimizer for training soft computing models. This chapter shows that the DFOA has been successfully used in different fields.

Keywords Dragonfly optimization algorithm · Hybrid optimization algorithms · Soft computing models · Multiobjective optimization problems

8.1 Introduction

This chapter explains the structure of the dragonfly optimization algorithm (DFOA). The next section reviews the applications of the DFOA in different fields; afterward, its mathematical model is described.

8.2 Application of Dragonfly Optimization Algorithm

Mirjalili (2016) introduced the DFOA to solve complex problems, along with binary and multiobjective versions; the DFOA outperformed other optimization algorithms. Mafarja et al. (2017) used the DFOA for feature selection and reported that it outperformed the genetic algorithm and particle swarm optimization. Tharwat et al. (2018) used the DFOA for training support vector machines (SVM); the SVM-DFOA predicted outputs accurately. Salam et al. (2016) used the DFOA for training extreme learning machine models and found it superior to genetic algorithms and particle swarm optimization. Jafari and Bayati Chaleshtari (2017) evaluated the ability of the DFOA for the optimal design of composite plates and reported that it increased the structural load-bearing capacity. Suresh and Sreejith (2017) used the DFOA to design solar systems and reported that it could be applied successfully in this area. Sree Ranjini and Murugan (2017) used a hybrid of the particle swarm optimization algorithm (PSOA) and the DFOA that exploits the exploration and exploitation abilities of both simultaneously; the PSOA-DFOA outperformed the PSOA and the DFOA. Hariharan et al. (2018) proposed an improved binary DFOA for feature selection that performed better than the other optimization algorithms. Amroune et al. (2018) predicted voltage stability using a hybrid SVM-DFOA and reported that it could successfully predict voltage stability. Mafarja et al. (2018) used a binary DFOA (BDFOA) based on transfer functions, which map continuous spaces to discrete ones; the improved BDFOA solved complex problems efficiently. Ghanem and Jantan (2018) coupled the DFOA with the artificial bee colony (ABC) to improve its convergence velocity; for training artificial neural network models, the ABC-DFOA outperformed both the ABC and the DFOA. Rahman and Rashid (2019) stated that hybrid versions could improve the efficiency of the original DFOA. Aci and Gülcan (2019) coupled the DFOA with the Lévy flight mechanism for solving optimization problems; the modified algorithm outperformed the DFOA. Hariharan et al. (2018) also introduced a new version of the DFOA for feature selection that successfully classified different types of infant cry signals. Sayed et al. (2019) proposed a chaotic DFOA for solving complex problems, with ten chaotic maps for adjusting the movement of dragonflies; it performed better than the DFOA. Diab and Rezk (2019) suggested the DFOA for the optimal adjustment of capacitors.
The DFOA had the best performance among the compared algorithms. Khishe and Safari (2019) used an artificial neural network (ANN)-DFOA for classifying sonar targets; the ANN-DFOA had the best accuracy among the tested models. Mafarja et al. (2020) stated that the DFOA is an effective tool for training soft computing models and that the binary DFOA is effective for feature selection problems. Hariharan et al. (2020) introduced three versions of the binary DFOA for feature selection, which successfully selected the best features. Li et al. (2020) combined an SVM with the DFOA for predicting wind speed; the SVM-DFOA had the best accuracy among the tested models. Aghelpour et al. (2021) coupled the DFOA with ANN and SVM models for predicting drought and reported that the SVM-DFOA was the most accurate. Lodhi et al. (2021) used the DFOA to optimize the performance of grid-connected photovoltaic (PV) systems, where it outperformed the other optimization algorithms. Zhang et al. (2021) used the DFOA to improve the accuracy of wind speed prediction and reported significant improvements. Elkorany et al. (2022) integrated an SVM with the DFOA for breast cancer detection; the SVM-DFOA outperformed other models. Gülcü (2022) found the values of ANN parameters using the DFOA; the ANN-DFOA outperformed the standalone ANN. Ramesh Kumar and Kuttiappan (2022) used a modified DFOA for adjusting deep learning parameters and reported that the DFOA-tuned deep learning model gave the best accuracy among the compared models.

8.3 Structure of Dragonfly Optimization Algorithm

The dragonfly is a small predator. During the exploitation phase, many dragonflies migrate over long distances in one direction; during the exploration phase, they form small groups and fly back and forth to seek food and hunt prey. There are two types of dragonfly swarms in nature: dynamic and static. Large migratory (dynamic) swarms fly in one direction over long distances, whereas a static swarm hunts prey over a small area. Dragonflies update their positions based on the following behaviors (Mirjalili, 2016; Hammouri et al., 2020; Meraihi et al., 2020):

1. Separation: in the static swarm, this step avoids collisions between individuals:

$$Se_i = -\sum_{j=1}^{n}\left(DR - DR_j\right) \qquad (8.1)$$

where $Se_i$: separation, $DR$: current location of the dragonfly, $DR_j$: location of the jth neighboring individual, and $n$: number of neighbors.

2. Alignment: this step matches the velocity of each dragonfly with its neighbors:

$$AL_i = \frac{\sum_{j=1}^{n} V_j}{n} \qquad (8.2)$$

where $AL_i$: alignment and $V_j$: speed of the jth neighboring individual.

3. Cohesion: the tendency of individuals to move toward the center of the neighborhood (Mirjalili, 2016):

$$CO_i = \frac{\sum_{j=1}^{n} DR_j}{n} - DR \qquad (8.3)$$

where $CO_i$: cohesion.

4. Attraction: the attraction toward a food source is computed as follows:

$$FO_i = DR^{+} - DR \qquad (8.4)$$

where $FO_i$: attraction and $DR^{+}$: location of the food source.

5. Distraction: the distraction away from predators is computed as follows:

$$EN_i = DR^{-} + DR \qquad (8.5)$$

where $EN_i$: the distraction motion away from the enemy and $DR^{-}$: location of the enemy.

The positions of dragonflies are updated using the following equations:

$$\Delta DR_{t+1} = \left(se\,Se_i + \alpha\,AL_i + c\,CO_i + f\,FO_i + e\,EN_i\right) + \omega\,\Delta DR_t \qquad (8.6)$$

$$DR_{t+1} = DR_t + \Delta DR_{t+1} \qquad (8.7)$$

where $\Delta DR_{t+1}$: the step vector; $se$, $\alpha$, $c$, $f$, and $e$: the separation, alignment, cohesion, food, and enemy weights; and $\omega$: the inertia weight. Figure 8.1 shows the structure of DFOA.

Fig. 8.1 Structure of DFOA for solving the optimization problem
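The five behaviors and the step update (Eqs. 8.1–8.7) combine naturally into one vectorized update. The sketch below is an illustration, not the authors' code, under the simplifying assumption that every dragonfly treats the whole swarm as its neighborhood; the function name and weight defaults are hypothetical:

```python
import numpy as np

def dragonfly_step(pos, vel, food, enemy, se=0.1, al=0.1, co=0.7, f=1.0, e=1.0, w=0.9):
    """One DFOA update (Eqs. 8.1-8.7) for the whole swarm at once.

    `pos` and `vel` are (n, dim) arrays of positions and step vectors;
    `food` and `enemy` are (dim,) locations; weight values are illustrative.
    """
    n = pos.shape[0]
    Se = pos.sum(axis=0) - n * pos          # Eq. 8.1: -(sum_j (DR - DR_j))
    Al = np.tile(vel.mean(axis=0), (n, 1))  # Eq. 8.2: mean neighbor velocity
    Co = pos.mean(axis=0) - pos             # Eq. 8.3: pull toward the center
    Fo = food - pos                         # Eq. 8.4: attraction to the food source
    En = enemy + pos                        # Eq. 8.5: distraction from the enemy
    step = se * Se + al * Al + co * Co + f * Fo + e * En + w * vel  # Eq. 8.6
    return pos + step, step                 # Eq. 8.7: new positions, new step vectors
```

Setting all weights except `f` to zero makes each dragonfly jump straight to the food source, which is a quick way to sanity-check the attraction term in isolation.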

References

Aci, Ç. I., & Gülcan, H. (2019). A modified dragonfly optimization algorithm for single- and multi-objective problems using Brownian motion. Computational Intelligence and Neuroscience. https://doi.org/10.1155/2019/6871298
Aghelpour, P., Mohammadi, B., Mehdizadeh, S., Bahrami-Pichaghchi, H., & Duan, Z. (2021). A novel hybrid dragonfly optimization algorithm for agricultural drought prediction. Stochastic Environmental Research and Risk Assessment. https://doi.org/10.1007/s00477-021-02011-2


Amroune, M., Bouktir, T., & Musirin, I. (2018). Power system voltage stability assessment using a hybrid approach combining dragonfly optimization algorithm and support vector regression. Arabian Journal for Science and Engineering. https://doi.org/10.1007/s13369-017-3046-5
Diab, A. A. Z., & Rezk, H. (2019). Optimal sizing and placement of capacitors in radial distribution systems based on grey wolf, dragonfly and moth–flame optimization algorithms. Iranian Journal of Science and Technology—Transactions of Electrical Engineering. https://doi.org/10.1007/s40998-018-0071-7
Elkorany, A. S., Marey, M., Almustafa, K. M., & Elsharkawy, Z. F. (2022). Breast cancer diagnosis using support vector machines optimized by whale optimization and dragonfly algorithms. IEEE Access, 10, 69688–69699.
Ghanem, W. A. H. M., & Jantan, A. (2018). A cognitively inspired hybridization of artificial bee colony and dragonfly algorithms for training multi-layer perceptrons. Cognitive Computation. https://doi.org/10.1007/s12559-018-9588-3
Gülcü, Ş. (2022). Training of the feed forward artificial neural networks using dragonfly algorithm. Applied Soft Computing, 109023.
Hammouri, A. I., Mafarja, M., Al-Betar, M. A., Awadallah, M. A., & Abu-Doush, I. (2020). An improved dragonfly algorithm for feature selection. Knowledge-Based Systems. https://doi.org/10.1016/j.knosys.2020.106131
Hariharan, M., Sindhu, R., Vijean, V., Yazid, H., Nadarajaw, T., Yaacob, S., & Polat, K. (2018). Improved binary dragonfly optimization algorithm and wavelet packet based non-linear features for infant cry classification. Computer Methods and Programs in Biomedicine. https://doi.org/10.1016/j.cmpb.2017.11.021
Jafari, M., & Bayati Chaleshtari, M. H. (2017). Using dragonfly algorithm for optimization of orthotropic infinite plates with a quasi-triangular cut-out. European Journal of Mechanics, A/Solids. https://doi.org/10.1016/j.euromechsol.2017.06.003
Khishe, M., & Safari, A. (2019). Classification of sonar targets using an MLP neural network trained by dragonfly algorithm. Wireless Personal Communications. https://doi.org/10.1007/s11277-019-06520-w
Li, L. L., Zhao, X., Tseng, M. L., & Tan, R. R. (2020). Short-term wind power forecasting based on support vector machine with improved dragonfly algorithm. Journal of Cleaner Production. https://doi.org/10.1016/j.jclepro.2019.118447
Lodhi, E., Wang, F. Y., Xiong, G., Mallah, G. A., Javed, M. Y., Tamir, T. S., & Gao, D. W. (2021). A dragonfly optimization algorithm for extracting maximum power of grid-interfaced PV systems. Sustainability (Switzerland). https://doi.org/10.3390/su131910778
Mafarja, M., Aljarah, I., Heidari, A. A., Faris, H., Fournier-Viger, P., Li, X., & Mirjalili, S. (2018). Binary dragonfly optimization for feature selection using time-varying transfer functions. Knowledge-Based Systems. https://doi.org/10.1016/j.knosys.2018.08.003
Mafarja, M., Heidari, A. A., Faris, H., Mirjalili, S., & Aljarah, I. (2020). Dragonfly algorithm: Theory, literature review, and application in feature selection. In Studies in computational intelligence. https://doi.org/10.1007/978-3-030-12127-3_4
Mafarja, M. M., Eleyan, D., Jaber, I., Hammouri, A., & Mirjalili, S. (2017). Binary dragonfly algorithm for feature selection. In Proceedings—International Conference on New Trends in Computing Sciences, ICTCS. https://doi.org/10.1109/ICTCS.2017.43
Meraihi, Y., Ramdane-Cherif, A., Acheli, D., & Mahseur, M. (2020). Dragonfly algorithm: A comprehensive review and applications. Neural Computing and Applications. https://doi.org/10.1007/s00521-020-04866-y
Mirjalili, S. (2016). Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multiobjective problems. Neural Computing and Applications. https://doi.org/10.1007/s00521-015-1920-1
Rahman, C. M., & Rashid, T. A. (2019). Dragonfly algorithm and its applications in applied science survey. Computational Intelligence and Neuroscience. https://doi.org/10.1155/2019/9293617


Ramesh Kumar, A., & Kuttiappan, H. (2022). Detection of brain tumor size using modified deep learning and multilevel thresholding utilizing modified dragonfly optimization algorithm. Concurrency and Computation: Practice and Experience, 34(18), e7016.
Salam, M. A., Zawbaa, H. M., Emary, E., Ghany, K. K. A., & Parv, B. (2016). A hybrid dragonfly algorithm with extreme learning machine for prediction. In Proceedings of the 2016 International Symposium on INnovations in Intelligent SysTems and Applications, INISTA. https://doi.org/10.1109/INISTA.2016.7571839
Sayed, G. I., Tharwat, A., & Hassanien, A. E. (2019). Chaotic dragonfly algorithm: An improved metaheuristic algorithm for feature selection. Applied Intelligence. https://doi.org/10.1007/s10489-018-1261-8
Sree Ranjini, S. R., & Murugan, S. (2017). Memory based hybrid dragonfly algorithm for numerical optimization problems. Expert Systems with Applications. https://doi.org/10.1016/j.eswa.2017.04.033
Suresh, V., & Sreejith, S. (2017). Generation dispatch of combined solar thermal systems using dragonfly algorithm. Computing. https://doi.org/10.1007/s00607-016-0514-9
Tharwat, A., Gabel, T., & Hassanien, A. E. (2018). Parameter optimization of support vector machine using dragonfly algorithm. Advances in Intelligent Systems and Computing. https://doi.org/10.1007/978-3-319-64861-3_29
Zhang, L., Wang, J., & Niu, X. (2021). Wind speed prediction system based on data pre-processing strategy and multi-objective dragonfly optimization algorithm. Sustainable Energy Technologies and Assessments. https://doi.org/10.1016/j.seta.2021.101346

Chapter 9

Rat Swarm Optimization Algorithm

Abstract This chapter reviews the application of the rat swarm optimization algorithm (RSOA) for solving different optimization problems. The RSOA is a robust and simple optimization algorithm inspired by groups of rats, which contain both males and females. A mathematical model of the rats' chasing and fighting behaviors is used to design the algorithm. The reviewed studies indicate that the RSOA has been successfully applied to complex problems and is a robust optimizer for training soft computing models; high accuracy and fast convergence are among its advantages.

Keywords Rat swarm optimization algorithm · Optimization algorithm · Soft computing models · Training algorithms

9.1 Introduction

This chapter explains the structure of the rat swarm optimization algorithm (RSOA). First, the applications of the algorithm are reviewed in different fields. Then, the mathematical model of the RSOA is explained.

9.2 Applications of Rat Swarm Algorithm

Dhiman et al. (2021) introduced the RSOA based on the life of rats. Various engineering design problems, as well as combinatorial optimization problems, were solved using the RSOA; it had the lowest computational cost and the fastest convergence speed, and for unimodal and multimodal test functions it outperformed other algorithms. Ghadge and Prakash (2021) used the RSOA as a training algorithm for artificial neural networks (ANNs); based on the results, the RSOA was a reliable tool for training models. Tamilarasan et al. (2022) used the RSOA to find optimal values of system parameters and reported that it converged earlier than other optimization algorithms. Awadallah et al. (2022) used the RSOA for feature selection; they used the S-shape transfer function to develop a binary version, which, based on their findings, was capable of selecting features efficiently. Vasantharaj et al. (2021) used the RSOA for training deep learning models applied to brain image diagnosis and reported that the optimized models had high predictive abilities. Eslami et al. (2022) suggested the RSOA for extracting the parameters of photovoltaic models and found that it outperformed the other optimization algorithms. Sayed (2022) suggested the RSOA for training a convolutional neural network (CNN); the RSOA improved the performance of the CNN model. Mohana Dhas and Suresh Singh (2022) used the RSOA for denoising blood cell images and reported that it produced high-quality denoised images.

9.3 Structure of Rat Swarm Optimization Algorithm

The RSOA is a robust and simple optimization algorithm. A group of rats contains both males and females. Rats are territorial animals, and the aggressive behavior of some rats may lead to their death. The first step of the algorithm is to define the locations of the rats in the search space. A mathematical model of the rats' chasing and fighting behaviors is used to update the solutions (Dhiman et al., 2021). Rats chase prey using their social agonistic behavior, and the algorithm assumes that the best rats know the location of the prey (Dhiman et al., 2021):

RAT = α · RAT_i(x) + β · (RAT_r(x) − RAT_i(x))    (9.1)

where RAT: the position of the rat after chasing prey, RAT_r(x): the best solution, and RAT_i(x): the location of the ith rat.

α = R − IT × (R / IT_max)    (9.2)

where R: a random number, β: a random number (Dhiman et al., 2021), IT: the current iteration number, and IT_max: the maximum number of iterations. Mathematically, the rats' fight with the prey is described by the following equation:

RAT_i(x + 1) = |RAT_r(x) − RAT|    (9.3)

where RAT_i(x + 1): the location of the rat after fighting with the prey. Figure 9.1 shows the structure of the RSOA.
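To make the update rules concrete, the following sketch implements Eqs. 9.1–9.3 in Python. The sampling ranges for R and β follow Dhiman et al. (2021); the function names, the search bounds, and the sphere test function in the usage note are illustrative assumptions, not part of this chapter.

```python
import numpy as np

def rso_update(positions, best, it, it_max, rng):
    """One RSOA iteration: chasing (Eqs. 9.1-9.2) and fighting (Eq. 9.3)."""
    new_positions = np.empty_like(positions)
    for i in range(len(positions)):
        R = rng.uniform(1, 5)                    # random number R of Eq. 9.2
        alpha = R - it * (R / it_max)            # Eq. 9.2: shrinks linearly with iterations
        beta = rng.uniform(0, 2)                 # random parameter beta of Eq. 9.1
        rat = alpha * positions[i] + beta * (best - positions[i])  # Eq. 9.1: chasing prey
        new_positions[i] = np.abs(best - rat)    # Eq. 9.3: fighting with prey
    return new_positions

def rso_optimize(f, dim=2, n_rats=20, it_max=100, seed=0):
    """Minimize f with the RSOA, tracking the best rat (the prey location)."""
    rng = np.random.default_rng(seed)
    positions = rng.uniform(-5, 5, (n_rats, dim))
    best = min(positions, key=f)
    for it in range(1, it_max + 1):
        positions = rso_update(positions, best, it, it_max, rng)
        candidate = min(positions, key=f)
        if f(candidate) < f(best):               # greedy update of the prey location
            best = candidate.copy()
        positions = np.clip(positions, -5, 5)    # keep rats inside the search space
    return best
```

On a simple test function such as the sphere function f(x) = Σ x_i², the swarm contracts around the best rat as α decays toward zero.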


Fig. 9.1 Structure of rat swarm optimization algorithm

References

Awadallah, M. A., Al-Betar, M. A., Braik, M. S., Hammouri, A. I., Doush, I. A., & Zitar, R. A. (2022). An enhanced binary rat swarm optimizer based on local-best concepts of PSO and collaborative crossover operators for feature selection. Computers in Biology and Medicine, 105675.
Dhiman, G., Garg, M., Nagar, A., Kumar, V., & Dehghani, M. (2021). A novel algorithm for global optimization: Rat swarm optimizer. Journal of Ambient Intelligence and Humanized Computing. https://doi.org/10.1007/s12652-020-02580-0
Eslami, M., Akbari, E., Seyed Sadr, S. T., & Ibrahim, B. F. (2022). A novel hybrid algorithm based on rat swarm optimization and pattern search for parameter extraction of solar photovoltaic models. Energy Science & Engineering.
Ghadge, R. R., & Prakash, S. (2021). Investigation and prediction of hybrid composite leaf spring using deep neural network based rat swarm optimization. Mechanics Based Design of Structures and Machines. https://doi.org/10.1080/15397734.2021.1972309
Mohana Dhas, M., & Suresh Singh, N. (2022). Blood cell image denoising based on tunicate rat swarm optimization with median filter. In Evolutionary computing and mobile sustainable networks (pp. 33–45). Springer.
Sayed, G. I. (2022). A novel multi-objective rat swarm optimizer-based convolutional neural networks for the diagnosis of COVID-19 disease. Automatic Control and Computer Sciences, 56(3), 198–208.
Tamilarasan, A., Renugambal, A., & Vijayan, D. (2022). Parametric estimation for AWJ cutting of Ti-6Al-4V alloy using rat swarm optimization algorithm. Materials and Manufacturing Processes, 1–11.
Vasantharaj, A., Rani, P. S., Huque, S., Raghuram, K. S., Ganeshkumar, R., & Shafi, S. N. (2021). Automated brain imaging diagnosis and classification model using rat swarm optimization with deep learning based capsule network. International Journal of Image and Graphics. https://doi.org/10.1142/S0219467822400010

Chapter 10

Antlion Optimization Algorithm

Abstract Modelers may encounter multidimensional problems, some of which have constraints. Solving such problems requires robust models. This chapter explains the structure and mathematical model of the antlion optimization algorithm (ALO). Antlions dig holes in the sand, and their prey is trapped in these holes. The ALO uses elitism to maintain the best solutions and can be applied to solve complex problems. Other optimization algorithms can be coupled with the ALO to improve the quality of solutions, and the ALO can also be used as a robust training algorithm for soft computing models. Fast convergence and high accuracy are the main advantages of the ALO.

Keywords Antlion optimization algorithm · Optimization problems · Hybrid algorithms · Soft computing models

10.1 Introduction

The design of structures and the simulation of phenomena require robust tools. Computer systems are commonly used to model different phenomena, and modelers may encounter multidimensional problems, some of which have constraints. Solving such problems requires robust models. In engineering and mathematics, optimization algorithms are well known. These algorithms can be used to design structures or solve problems, and they can return a single solution or multiple solutions. Optimization algorithms are appropriate tools for solving multiobjective problems, and their code is widely available. The purpose of this chapter is to explain the structure of the antlion optimization algorithm.

10.2 Applications of ALO in Different Fields

Mirjalili (2015) proposed a new algorithm named the ALO. The ALO solved different optimization problems using the hunting mechanism of antlions. Raju et al. (2016) applied the ALO to optimize controller gains. They reported that the ALO could


successfully solve complex problems. Emary et al. (2016) suggested a binary ALO, which outperformed the other optimization algorithms. Trivedi et al. (2016) used the ALO to find optimal power flow and reported that it performed better than particle swarm optimization. Yamany et al. (2016) used the ALO for training artificial neural network (ANN) models; the ANN-ALO outperformed the ANN model. Mirjalili et al. (2017) suggested a multiobjective version of the ALO. The results revealed that the multiobjective ALO had the best accuracy among the compared optimization algorithms. Wu et al. (2017) suggested the ALO to determine the parameter values of photovoltaic cells; the ALO was better than the other optimization algorithms. Yao and Wang (2017) introduced a new ALO, named the dynamic adaptive ALO (DAALO), which outperformed the original ALO. Mouassa et al. (2017) proposed the ALO to solve optimal reactive power dispatch (ORPD) problems; the ALO was superior to other optimization algorithms. Rajan et al. (2017) modified the ALO for solving ORPD problems and reported that the modified ALO outperformed the original ALO. Kanimozhi and Kumar (2018) estimated the parameters of a solar cell using the ALO with the LambertW function. Dinkar and Deep (2018) developed an ALO based on Lévy flight operators because the original algorithm might be trapped in local optima; the developed ALO was superior to the original ALO. Pradhan et al. (2018) used the ALO for tuning the parameters of a PID controller. Saha and Mukherjee (2018) improved the performance of the ALO using quasi-opposition-based learning (QOBL); the ALO-QOBL converged faster than the ALO. Mani et al. (2018) stated that the ALO could successfully solve mathematical problems and that advanced operators played a key role in the modeling process. Kose (2018) suggested the ANN-ALO for predicting electroencephalogram signals; the ALO improved the efficiency of the ANN models. Roy et al. (2019) used the ALO for training recurrent neural networks and reported that it outperformed the genetic algorithm. Wang et al. (2019) developed a modified ALO for feature selection in predictive models. Mafarja and Mirjalili (2019) developed a binary ALO and reported that it had the best accuracy among the compared methods. Hu et al. (2019) used the ALO to train ANN models for predicting Chinese influenza and reported that the ANN-ALO outperformed the ANN model. Heidari et al. (2020) stated that the ALO was a robust tool for training ANN models and could be used to simulate complex phenomena. Wang et al. (2020) suggested the ALO for estimating the parameters of photovoltaic models and stated that it successfully estimated the model parameters. Kilic et al. (2020) enhanced the efficiency of the ALO using advanced operators and reported that the ANFIS-EALO outperformed the ANFIS model. Abualigah et al. (2021) stated that the ALO was able to solve problems such as image processing and classification. Dong et al. (2021) used a dynamic random walk to enhance the ALO's efficiency and reported that the enhanced ALO outperformed the original version. Rani and Garg (2021) proposed a multiobjective ALO to solve workflow-scheduling problems and stated that it outperformed the other algorithms. Sharifi et al. (2022) proposed the ALO


for optimal operation of hydropower reservoirs. The ALO converged earlier than the other algorithms.

10.3 Mathematical Model of ALO

The ALO is based on the life of antlions. Antlions dig conical holes in the sand to trap insects, and a clever antlion uses sand as a weapon to push its prey into the pit. In nature, ants search for food stochastically; hence, a random walk is used to model their movements (Mirjalili, 2015; Nair et al., 2017; Rani & Garg, 2021):

L(t) = [0, cumsum(2r(t_1) − 1), cumsum(2r(t_2) − 1), ..., cumsum(2r(t_n) − 1)]    (10.1)

where cumsum: the cumulative sum, n: the maximum number of iterations, r(t): a stochastic function, and L(t): the random walk at step t.

r(t) = 1 if rand > 0.50; r(t) = 0 if rand ≤ 0.50    (10.2)

where rand: a random number. The position of ants cannot be updated directly using Eq. 10.1, since every search space is bounded by a range of variables. The walk is therefore normalized into the current bounds:

L_i^t = ((L_i^t − a_i) × (d_i^t − c_i^t)) / (b_i − a_i) + c_i^t    (10.3)

where a_i: the minimum of the random walk of the ith variable, b_i: the maximum of the random walk of the ith variable, and c_i^t and d_i^t: the minimum and maximum values of the ith variable at iteration t. The ALO considers ants to be prey, and the bounds of the random walk are set around a selected antlion:

c_i^t = ANT_j^t + c^t    (10.4)

d_i^t = ANT_j^t + d^t    (10.5)

where c^t: the minimum of all variables at iteration t, d^t: the maximum of all variables at iteration t, c_i^t and d_i^t: the minimum and maximum of all variables for the ith ant, and ANT_j^t: the location of the selected jth antlion at iteration t. The ALO selects antlions using a roulette wheel operator. When an ant enters the pit, the antlion shoots sand, and the trapped ant slides down. This behavior is modeled by shrinking the bounds:

c^t = c^t / I    (10.6)

d^t = d^t / I    (10.7)

I = 10^ω × (t / T)    (10.8)

where I: a shrinking ratio, t: the current iteration number, T: the maximum number of iterations, and ω: a constant value. If an antlion hunts new prey, it updates its location to the latest location of the hunted ant. Elitism is a feature of optimization algorithms that enables them to maintain the best solutions. Since the elite antlion is the best agent, it affects the movement of all ants:

ANT_i^t = (R_A^t + R_E^t) / 2    (10.9)

where ANT_i^t: the position of the ith ant at iteration t, R_A^t: the random walk around the antlion selected by the roulette wheel, and R_E^t: the random walk around the elite antlion (Mirjalili, 2015).

Figure 10.1 shows the structure of ALO.

Fig. 10.1 Structure of ALO for solving optimization problems
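The random-walk machinery of Eqs. 10.1–10.3 and the bound shrinking of Eqs. 10.6–10.8 can be sketched in a few lines of Python. The function names, the choice ω = 3, and the example bounds are illustrative assumptions; they are not fixed by the chapter.

```python
import numpy as np

def random_walk(n_steps, rng):
    """Eqs. 10.1-10.2: cumulative sum of +1/-1 steps, starting at zero."""
    steps = np.where(rng.random(n_steps) > 0.5, 1.0, -1.0)   # 2 r(t) - 1
    return np.concatenate(([0.0], np.cumsum(steps)))

def normalize_walk(walk, c, d):
    """Eq. 10.3: min-max scale the walk into the interval [c, d]."""
    a, b = walk.min(), walk.max()
    if b == a:                        # degenerate walk: map everything to c
        return np.full_like(walk, c)
    return (walk - a) * (d - c) / (b - a) + c

def shrink_bounds(c, d, t, T, omega=3):
    """Eqs. 10.6-10.8: divide the bounds by the ratio I = 10^omega * t / T."""
    I = max(1.0, 10 ** omega * t / T)   # never widen the bounds in early iterations
    return c / I, d / I
```

An ant's new position (Eq. 10.9) is then the average of the final points of two such normalized walks: one built around the roulette-selected antlion and one around the elite antlion.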


References

Abualigah, L., Shehab, M., Alshinwan, M., Mirjalili, S., & Elaziz, M. A. (2021). Ant lion optimizer: A comprehensive survey of its variants and applications. Archives of Computational Methods in Engineering. https://doi.org/10.1007/s11831-020-09420-6
Dinkar, S. K., & Deep, K. (2018). An efficient opposition based Lévy Flight Antlion optimizer for optimization problems. Journal of Computational Science. https://doi.org/10.1016/j.jocs.2018.10.002
Dong, H., Xu, Y., Li, X., Yang, Z., & Zou, C. (2021). An improved antlion optimizer with dynamic random walk and dynamic opposite learning. Knowledge-Based Systems. https://doi.org/10.1016/j.knosys.2021.106752
Emary, E., Zawbaa, H. M., & Hassanien, A. E. (2016). Binary ant lion approaches for feature selection. Neurocomputing. https://doi.org/10.1016/j.neucom.2016.03.101
Heidari, A. A., Faris, H., Mirjalili, S., Aljarah, I., & Mafarja, M. (2020). Ant lion optimizer: Theory, literature review, and application in multi-layer perceptron neural networks. Studies in Computational Intelligence. https://doi.org/10.1007/978-3-030-12127-3_3
Hu, H., Li, Y., Bai, Y., Zhang, J., & Liu, M. (2019). The improved antlion optimizer and artificial neural network for Chinese influenza prediction. Complexity. https://doi.org/10.1155/2019/1480392
Kanimozhi, G., & Kumar, H. (2018). Modeling of solar cell under different conditions by ant lion optimizer with LambertW function. Applied Soft Computing Journal. https://doi.org/10.1016/j.asoc.2018.06.025
Kilic, H., Yuzgec, U., & Karakuzu, C. (2020). A novel improved antlion optimizer algorithm and its comparative performance. Neural Computing and Applications. https://doi.org/10.1007/s00521-018-3871-9
Kose, U. (2018). An antlion optimizer-trained artificial neural network system for chaotic electroencephalogram (EEG) prediction. Applied Sciences (Switzerland). https://doi.org/10.3390/app8091613
Mafarja, M. M., & Mirjalili, S. (2019). Hybrid binary ant lion optimizer with rough set and approximate entropy reducts for feature selection. Soft Computing. https://doi.org/10.1007/s00500-018-3282-y
Mani, M., Bozorg-Haddad, O., & Chu, X. (2018). Ant lion optimizer (ALO) algorithm. In Advanced optimization by nature-inspired algorithms (pp. 105–116). Springer.
Mirjalili, S. (2015). The ant lion optimizer. Advances in Engineering Software. https://doi.org/10.1016/j.advengsoft.2015.01.010
Mirjalili, S., Jangir, P., & Saremi, S. (2017). Multi-objective ant lion optimizer: A multi-objective optimization algorithm for solving engineering problems. Applied Intelligence. https://doi.org/10.1007/s10489-016-0825-8
Mouassa, S., Bouktir, T., & Salhi, A. (2017). Ant lion optimizer for solving optimal reactive power dispatch problem in power systems. Engineering Science and Technology, an International Journal. https://doi.org/10.1016/j.jestch.2017.03.006
Nair, S. S., Rana, K. P. S., Kumar, V., & Chawla, A. (2017). Efficient modeling of linear discrete filters using ant lion optimizer. Circuits, Systems, and Signal Processing. https://doi.org/10.1007/s00034-016-0370-z
Pradhan, R., Majhi, S. K., Pradhan, J. K., & Pati, B. B. (2018). Antlion optimizer tuned PID controller based on Bode ideal transfer function for automobile cruise control system. Journal of Industrial Information Integration. https://doi.org/10.1016/j.jii.2018.01.002
Rajan, A., Jeevan, K., & Malakar, T. (2017). Weighted elitism based ant lion optimizer to solve optimum VAr planning problem. Applied Soft Computing Journal. https://doi.org/10.1016/j.asoc.2017.02.010
Raju, M., Saikia, L. C., & Sinha, N. (2016). Automatic generation control of a multi-area system using ant lion optimizer algorithm based PID plus second order derivative controller. International Journal of Electrical Power and Energy Systems. https://doi.org/10.1016/j.ijepes.2016.01.037
Rani, R., & Garg, R. (2021). Pareto based ant lion optimizer for energy efficient scheduling in cloud environment. Applied Soft Computing, 113, 107943.
Roy, K., Mandal, K. K., & Mandal, A. C. (2019). Ant-lion optimizer algorithm and recurrent neural network for energy management of micro grid connected system. Energy. https://doi.org/10.1016/j.energy.2018.10.153
Saha, S., & Mukherjee, V. (2018). A novel quasi-oppositional chaotic antlion optimizer for global optimization. Applied Intelligence. https://doi.org/10.1007/s10489-017-1097-7
Sharifi, M. R., Akbarifard, S., Madadi, M. R., Qaderi, K., & Akbarifard, H. (2022). Optimization of hydropower energy generation by 14 robust evolutionary algorithms. Scientific Reports, 12(1), 1–14.
Trivedi, I. N., Jangir, P., & Parmar, S. A. (2016). Optimal power flow with enhancement of voltage stability and reduction of power loss using antlion optimizer. Cogent Engineering. https://doi.org/10.1080/23311916.2016.1208942
Wang, M., Wu, C., Wang, L., Xiang, D., & Huang, X. (2019). A feature selection approach for hyperspectral image based on modified ant lion optimizer. Knowledge-Based Systems. https://doi.org/10.1016/j.knosys.2018.12.031
Wang, M., Zhao, X., Heidari, A. A., & Chen, H. (2020). Evaluation of constraint in photovoltaic models by exploiting an enhanced ant lion optimizer. Solar Energy. https://doi.org/10.1016/j.solener.2020.09.080
Wu, Z., Yu, D., & Kang, X. (2017). Parameter identification of photovoltaic cell model based on improved ant lion optimizer. Energy Conversion and Management. https://doi.org/10.1016/j.enconman.2017.08.088
Yamany, W., Tharwat, A., Hassanin, M. F., Gaber, T., Hassanien, A. E., & Kim, T. H. (2016). A new multi-layer perceptrons trainer based on ant lion optimization algorithm. In Proceedings—2015 4th International Conference on Information Science and Industrial Applications, ISI 2015. https://doi.org/10.1109/ISI.2015.9
Yao, P., & Wang, H. (2017). Dynamic adaptive ant lion optimizer applied to route planning for unmanned aerial vehicle. Soft Computing. https://doi.org/10.1007/s00500-016-2138-6

Chapter 11

Predicting Evaporation Using Optimized Multilayer Perceptron

Abstract In this study, the sunflower algorithm (SUA), shark algorithm (SHA), and particle swarm optimization (PASO) were integrated with the multilayer perceptron (MULP) model to predict daily evaporation. The average temperature (AVT), relative humidity (REH), wind speed (WIS), number of sunny hours (NSH), and rainfall (RAI) were used to predict evaporation at the Hormozgan, Fars, Mazandaran, Yazd, and Isfahan stations located in Iran. A comparison of model accuracy indicated that the MULP-SUA provided the highest accuracy at the different stations, and that the AVT and NSH were the most important parameters in desert climates. The results indicated that the optimized MULP models performed better than the standalone MULP model.

Keywords Evaporation · Optimization algorithm · MULP model · Optimized models

11.1 Introduction

Evaporation is an important parameter for managing and planning water resources. Modeling evaporation is a complex and nonlinear process (Ghorbani et al., 2018), and the accuracy of evaporation prediction depends on different meteorological parameters (Wu et al., 2020). Model accuracy is one of the most important topics in the modeling process (Allawi et al., 2019; Ehteram et al., 2022a, 2022b), and models should be applicable worldwide. A key challenge in the modeling process is choosing the right inputs. Measuring evaporation directly may be expensive, and empirical models may not be highly accurate. Soft computing models can replace direct measurement methods and empirical methods: they can handle a large number of inputs and they speed up modeling. These models have mathematical operators and different layers (Ehteram et al., 2022a), as well as unknown parameters; for this reason, robust training algorithms are needed to train them. This chapter uses the multilayer perceptron (MULP) and optimization algorithms to predict evaporation. The optimization algorithms adjust the MULP parameters, namely its weights and biases. In the first step, the MULP models are coupled with the optimization algorithms. In the next step, the best input scenario is chosen. A hybrid MULP


model is then applied to predict evaporation at stations located in Iran. In this chapter, the sunflower algorithm (SUA), shark algorithm (SHA), and particle swarm optimization (PASO) are used to train the MULP model. The structures of these algorithms were explained in the previous chapters.

11.2 Review of the Previous Works

Water resources management is of great importance. Nowadays, frequent droughts, rainfall shortages, shrinking water resources, and population growth make water resources management very important (Qasem et al., 2019; Ghanbari-Adivi et al., 2022). Irrigation and farming management is also critical: a decrease in irrigation water results in food shortages, a serious challenge for many countries. Water resources management requires complicated modeling and accurate simulation, and accurate hydrologic forecasts and models are required for planning water resources. Since nonlinear processes affect simulation and modeling, the modeling of various meteorological and agricultural parameters is complicated. Evaporation is one of the significant parameters in water resources management (Ghorbani et al., 2018), and it depends on other parameters. Managing farming and irrigation water, planning the water resources of storage dams, planning water supply, and managing basins all require an accurate estimation of evaporation, which depends on multiple parameters. Evaporation may be measured directly or indirectly (Feng et al., 2018). Direct measurement needs various tools, which may be costly. Indirect measurement includes empirical equations and mathematical and smart models; however, empirical models may not have high accuracy. Thus, artificial intelligence models are powerful tools for evaporation estimation (Moayedi et al., 2022). Evaporation is estimated from several input parameters, including temperature, wind speed, and relative humidity, and determining the exact relationship between evaporation and the other climatic parameters is a complex issue. Artificial intelligence models use mathematical tools and functions to find the exact relationships between the inputs and the outputs. They learn from observation data to estimate the output value. Evaporation estimation requires models that can provide an exact relationship between the inputs and the outputs in various areas. There are various artificial intelligence models (Ehteram et al., 2022b), chosen based on the accuracy required by the modeler, and these models can estimate evaporation accurately from climatic and geographical data. Artificial intelligence models have parameters with unknown values that must be determined exactly (Kisi et al., 2022); these unknown parameters are obtained in the calibration stage. Advances in computer systems and mathematical models enable us to achieve accurate results and estimations. Among the various soft computing and artificial intelligence models (Ehteram et al., 2022b), the artificial neural network (ANN) is one of the most significant. There are various types of neural networks inspired by the human brain. A neural network has neurons as well as different layers. Each


ANN has an input layer responsible for receiving the input data, hidden (middle) layers responsible for analyzing the data, and a last layer that produces the output data. Every ANN uses mathematical functions called activation functions. The ANN can be implemented easily (Ghorbani et al., 2018). The different layers of an ANN are connected to each other by weighted links with unknown values. The modeling process faces various challenges: one is the selection of the best input data; others include the adjustment of the ANN parameters and the uncertainty of the model's input data and parameters. The type of ANN varies depending on the user's choice, and powerful optimization algorithms help achieve the best accuracy. Shirsath and Singh (2010) employed regression methods and ANN models to estimate evaporation in India. They used 365 daily records of temperature, number of sunny hours, relative humidity, and wind speed as the input data. The results indicated that the ANN model worked better than the other models, although preparing the ANN parameters was one of their study's challenges. Kiran and Rajput (2011) used the ANN and the adaptive neuro-fuzzy inference system (ANFIS) to estimate evaporation, employing various training algorithms to adjust the ANN. The results showed that the ANFIS and the ANN were successful in estimating evaporation. Tabari et al. (2012) used the ANFIS and ANN models for evaporation estimation. Arunkumar and Jothiprakash (2013) used decision tree and ANN models to estimate evaporation; the comparison showed that both models were capable of estimating evaporation with reasonable accuracy in different basins. Guven and Kisi (2013) employed the ANFIS, the ANN, and genetic programming to estimate evaporation using climatic data. Sanikhani et al. (2012) used the ANN, the ANFIS, and multiple linear regression models to estimate evaporation. The comparison of these models showed that the ANFIS model performed better than the others; they also employed various methods to train the ANFIS model. Kisi (2013) used various ANN models to estimate evaporation and observed that these models were highly capable of evaporation estimation, although preparing the model's structure and adjusting its parameters were challenging. Malik and Kumar (2015) used the ANFIS, the ANN, and multiple linear regression models to estimate evaporation at stations in India. The results indicated that the ANN model had the highest accuracy, but it was hard to achieve the best structure of the ANN model. El-Shafie et al. (2014) used various ANN models to estimate evaporation and compared the results with empirical models. The results showed that the ANN models performed better than the empirical models. Moreover, the empirical models showed low accuracy in different climates, and modelers need climatic data to use these models, some of which may not be available. Kisi (2015) employed multiple linear regression, decision tree, and support vector machine models to estimate evaporation at various stations. Piri et al. (2016) used the ANFIS and ANN models to estimate evaporation at various stations, employing optimization algorithms to adjust the models' parameters. The results indicated that the optimization algorithms could remarkably increase the accuracy of the ANN and ANFIS models. Arunkumar et al. (2017) employed the decision tree, genetic programming, the ANFIS, and the ANN to estimate evaporation data. The results


showed that the genetic programming and ANN models had high accuracy in estimating the evaporation data. Ghorbani et al. (2018) used various ANN models to estimate evaporation, employing optimization algorithms to enhance the models. The results indicated that the optimization algorithm could adjust the ANN model's parameters well and improve the models' accuracy. They employed different climatic parameters to estimate the evaporation data, using the correlation method to choose the best input data, and their models were highly accurate at various stations. Qasem et al. (2019) used the ANN and support vector machine models to estimate evaporation, testing the models at various stations. The results showed that both models had high accuracy in estimating evaporation; however, the models' parameters should be adjusted to achieve such accuracy.

11.3 Structure of MULP Models

The MULP model has three layers (Panahi et al., 2021), and each layer has a duty. The input layer receives the data (Seifi et al., 2022). The hidden layer is the second layer (Ehteram et al., 2022a; Malik et al., 2021; Zounemat-Kermani et al., 2021); it calculates its outputs using an activation function (Seifi et al., 2021a, 2021b). The structure of the MULP model is shown in Fig. 11.1. The MULP is described mathematically as follows:

out_k = fu_o( Σ_{j=1}^{M} ω_kj × fu_h( Σ_{i=1}^{N} ω_ji × IN_i + b_jo ) + b_ko )    (11.1)

where out_k: the output of the kth output neuron, fu_h and fu_o: the activation functions of the hidden and output layers, b_jo: the bias of the hidden layer, b_ko: the bias of the output layer, IN_i: the ith input, ω_ji: the weight connecting the input layer to the hidden layer, ω_kj: the weight connecting the hidden layer to the output layer, N: the number of inputs, and M: the number of hidden neurons. The most important parameters of the MULP model are the weights and biases; adjusting these parameters is therefore of utmost importance. In this study, optimization algorithms are applied to adjust the MULP parameters.

11.4 Hybrid MULP Models

Optimization algorithms are reliable tools for global search and are used here to adjust the parameters of the MULP model:

1. The data are divided into training and testing sets.
2. The MULP model runs at the training level.
3. The root mean square error (RMSE) is used as the objective function for determining the quality of the solutions.


Fig. 11.1 Structure of the MULP model

4. The operators of the algorithm update the locations of the agents; the positions of the agents represent the MULP parameters.
5. If the stop condition (SC) is satisfied, the MULP model proceeds to the testing level; otherwise, the process returns to step 3.
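The steps above can be sketched with any population-based optimizer. The minimal example below uses a PASO-style velocity update to adjust a flattened vector of MULP weights and biases against the RMSE objective. The hidden-layer size, swarm coefficients, and synthetic data are illustrative assumptions, not the chapter's actual settings.

```python
import numpy as np

def mulp_rmse(theta, X, y, n_hidden):
    """Step 3: decode a flat parameter vector into MULP weights and return the RMSE."""
    n_in = X.shape[1]
    i = 0
    w_h = theta[i:i + n_hidden * n_in].reshape(n_hidden, n_in); i += n_hidden * n_in
    b_h = theta[i:i + n_hidden]; i += n_hidden
    w_o = theta[i:i + n_hidden]; i += n_hidden
    b_o = theta[i]
    pred = np.tanh(X @ w_h.T + b_h) @ w_o + b_o
    return float(np.sqrt(np.mean((pred - y) ** 2)))

def train_mulp(X, y, n_hidden=4, pop=20, iters=50, seed=0):
    """Steps 1-5: a swarm searches the weight space; agent positions are MULP parameters."""
    rng = np.random.default_rng(seed)
    dim = n_hidden * X.shape[1] + 2 * n_hidden + 1
    pos = rng.uniform(-1, 1, (pop, dim))          # steps 1-2: initial candidate models
    vel = np.zeros_like(pos)
    pbest, pcost = pos.copy(), np.array([mulp_rmse(p, X, y, n_hidden) for p in pos])
    gbest = pbest[pcost.argmin()].copy()
    for _ in range(iters):                        # step 5: fixed iteration budget as SC
        r1, r2 = rng.random((2, pop, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)  # step 4
        pos = pos + vel
        cost = np.array([mulp_rmse(p, X, y, n_hidden) for p in pos])
        better = cost < pcost
        pbest[better], pcost[better] = pos[better], cost[better]
        gbest = pbest[pcost.argmin()].copy()
    return gbest, float(pcost.min())
```

The same loop works for the SUA or SHA by swapping the velocity update for those algorithms' operators; only step 4 changes.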

11.5 Case Study

Table 11.1 shows the details of the input data. These data were used to estimate daily evaporation at the Isfahan (ISF), Mazandaran (MAZ), Hormozgan (HOR), Fars (FAR), and Yazd (YAZ) stations. The stations are located in Iran. Average temperature (AVT), wind speed (WIS), relative humidity (REH), rainfall (RAI), and the number of sunny hours (NSH) are the inputs. Figure 11.2 shows the locations of the stations, and Figure 11.3 shows the time series of the different stations. The following metrics are used to evaluate the models:

MAE = (1/N) × Σ_{i=1}^{N} |EVA_ob − EVA_es|    (11.2)

PBIAS = 100 × Σ_{i=1}^{N} (EVA_ob − EVA_es) / Σ_{i=1}^{N} EVA_ob    (11.3)

NSE = 1 − Σ_{i=1}^{N} (EVA_ob − EVA_es)² / Σ_{i=1}^{N} (EVA_ob − ĒVA_ob)²    (11.4)


Table 11.1 Details of input data

Province | Parameter    | Average | Maximum | Minimum | Standard deviation
(ISF)    | (AVT) (°C)   | 20.66   | 34.24   | 4.22    | 8.45
(ISF)    | (REH) (%)    | 71.23   | 74.91   | 32.21   | 11.35
(ISF)    | (NSH)        | 11.24   | 12.67   | 0.00    | 8.23
(ISF)    | (RAI) (mm)   | 18.89   | 41      | 4.23    | 9.23
(ISF)    | (WIS) (m/s)  | 4.24    | 7.92    | 0.00    | 3.12
(ISF)    | EVA (mm)     | 10.28   | 23.5    | 0.50    | 6.23
(MAZ)    | (AVT) (°C)   | 22.21   | 31.12   | 6.89    | 5.15
(MAZ)    | (REH) (%)    | 88.78   | 97.23   | 54.23   | 11.42
(MAZ)    | (NSH)        | 8.91    | 11.45   | 6.86    | 7.52
(MAZ)    | (RAI) (mm)   | 76.44   | 99.12   | 11.89   | 8.23
(MAZ)    | (WIS) (m/s)  | 11.24   | 12.23   | 3.52    | 6.77
(MAZ)    | EVA (mm)     | 8.92    | 22.50   | 0.50    | 8.23
(YAZ)    | (AVT) (°C)   | 26.76   | 37.98   | 8.23    | 6.12
(YAZ)    | (REH) (%)    | 54.22   | 67.14   | 21.23   | 7.12
(YAZ)    | (NSH)        | 11.84   | 14.21   | 0.00    | 6.23
(YAZ)    | (RAI) (mm)   | 9.21    | 18.12   | 0.00    | 8.23
(YAZ)    | (WIS) (m/s)  | 7.65    | 8.23    | 2.44    | 7.12
(YAZ)    | EVA (mm)     | 10.80   | 27      | 0.5     | 8.34
(FAR)    | (AVT) (°C)   | 20.23   | 33.12   | 6.87    | 8.23
(FAR)    | (REH) (%)    | 34.12   | 77.23   | 6.78    | 7.89
(FAR)    | (NSH)        | 11.23   | 12.01   | 0.00    | 6.78
(FAR)    | (RAI) (mm)   | 15.12   | 25.23   | 2.34    | 6.89
(FAR)    | (WIS) (m/s)  | 4.67    | 6.89    | 2.33    | 7.91
(FAR)    | EVA (mm)     | 7.55    | 20.00   | 0.50    | 8.91
(HOR)    | (AVT) (°C)   | 25.54   | 40.23   | 9.86    | 8.14
(HOR)    | (REH) (%)    | 76.56   | 78.23   | 44.23   | 6.12
(HOR)    | (NSH)        | 11.23   | 14.12   | 0.00    | 8.71
(HOR)    | (RAI) (mm)   | 20.34   | 31.24   | 7.89    | 6.78
(HOR)    | (WIS) (m/s)  | 6.55    | 11.5    | 1.12    | 8.12
(HOR)    | EVA (mm)     | 7.9     | 28.00   | 0.50    | 6.87

CRMSE = √[ (1/N) × Σ_{i=1}^{N} ((EVA_ob − ĒVA_ob) − (EVA_es − ĒVA_es))² ]    (11.5)

where MAE: the mean absolute error, PBIAS: the percentage of bias, NSE: the Nash–Sutcliffe efficiency, CRMSE: the centered root mean square error, N: the number of data points, EVA_ob: the observed evaporation, ĒVA_ob: the average observed evaporation, EVA_es: the estimated evaporation, and ĒVA_es: the average estimated evaporation.
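Equations 11.2–11.5 translate directly into code. The sketch below assumes PBIAS is reported as a percentage (hence the factor of 100), which matches the magnitudes quoted in Sect. 11.6; the function name is illustrative.

```python
import numpy as np

def evaluation_metrics(obs, est):
    """Eqs. 11.2-11.5 for observed (EVA_ob) and estimated (EVA_es) evaporation series."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    mae = float(np.mean(np.abs(obs - est)))                                    # Eq. 11.2
    pbias = float(100.0 * np.sum(obs - est) / np.sum(obs))                     # Eq. 11.3 (as %)
    nse = float(1.0 - np.sum((obs - est) ** 2)
                / np.sum((obs - obs.mean()) ** 2))                             # Eq. 11.4
    crmse = float(np.sqrt(np.mean(((obs - obs.mean())
                                   - (est - est.mean())) ** 2)))               # Eq. 11.5
    return {"MAE": mae, "PBIAS": pbias, "NSE": nse, "CRMSE": crmse}
```

A perfect model gives MAE = 0, PBIAS = 0, NSE = 1, and CRMSE = 0; note that CRMSE removes any constant bias before measuring the error, so a model with a pure offset still scores CRMSE = 0.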


Fig. 11.2 Location of the case study

11.6 Results and Discussion

11.6.1 Choice of Random Parameters

Optimization algorithms contain random parameters that should be adjusted accurately. A sensitivity analysis was used to adjust them: random parameters were varied so as to minimize the objective function (RMSE), and the best value of a parameter was the one giving the lowest RMSE. Population size (POS) and the maximum number of iterations (MNO) were the two most important random parameters. Figure 11.4 shows the sensitivity analysis of the parameters at the Hormozgan station. For the SUA, the RMSE of POS = 50, POS = 100, POS = 150, POS = 200, and POS = 250 was 1.23, 0.955, 1.34, 1.36, and 1.39 mm, respectively; thus, POS = 100 provided the lowest RMSE. For the SHA, the RMSE of POS = 50, POS = 100, POS = 150, POS = 200, and POS = 250 was 1.76, 1.67, 1.55, 1.78, and 1.89 mm, respectively; thus, POS = 150 provided the lowest RMSE. For the SUA, the RMSE of MNO = 80, MNO = 160, MNO = 240, MNO = 320, and MNO = 400 was 1.24, 0.957, 1.33, 1.39, and 1.43 mm, respectively, so MNO = 160 provided the lowest RMSE. The same sensitivity analysis was used to obtain the values of the random parameters at the other stations; Table 11.2 shows these values.
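The tuning procedure above is a one-dimensional grid search: each candidate POS (or MNO) value is evaluated and the value with the lowest RMSE is kept. The helper below is a generic sketch; instead of actually training a model, the dictionary replays the SUA population-size RMSEs reported for the Hormozgan station.

```python
def pick_best(candidates, evaluate):
    """Return the candidate whose evaluation (e.g., validation RMSE) is lowest."""
    scores = {c: evaluate(c) for c in candidates}
    return min(scores, key=scores.get), scores

# Stand-in for "train the MULP-SUA and return its RMSE" at a given population size;
# the values replay the Hormozgan sensitivity analysis (RMSE in mm).
sua_rmse = {50: 1.23, 100: 0.955, 150: 1.34, 200: 1.36, 250: 1.39}

best_pos, scores = pick_best([50, 100, 150, 200, 250], sua_rmse.get)
```

With these values the search returns POS = 100, matching the chapter; in practice `evaluate` would retrain the hybrid model for each candidate value.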


11 Predicting Evaporation Using Optimized Multilayer Perceptron

Fig. 11.3 Time series for different stations (2008–2011)


Fig. 11.4 Sensitivity analysis of random parameters


Table 11.2 Values of random parameters at the other stations

Station | POS | MNO
Isfahan | POS_SUA = 100, POS_SHA = 150, POS_PASO = 150 | MNO_SUA = 160, MNO_PASO = 320, MNO_SHA = 240
Fars | POS_SUA = 100, POS_SHA = 150, POS_PASO = 150 | MNO_SUA = 160, MNO_PASO = 320, MNO_SHA = 240
Yazd | POS_SUA = 100, POS_SHA = 150, POS_PASO = 150 | MNO_SUA = 160, MNO_PASO = 320, MNO_SHA = 240
Mazandaran | POS_SUA = 100, POS_SHA = 150, POS_PASO = 150 | MNO_SUA = 160, MNO_PASO = 320, MNO_SHA = 240

11.6.2 Investigating the Accuracy of Models

This section investigates the accuracy of the different models at the testing level. Figure 11.5 evaluates the performance of the models.

- Hormozgan station: The MAE of the MULP-SUA, MULP-SHA, MULP-PASO, and MULP models was 0.987 mm, 1.45 mm, 1.67 mm, and 1.78 mm, respectively. The PBIAS of the MULP-SUA, MULP-SHA, MULP-PASO, and MULP models was 8, 12, 15, and 18%, respectively. The highest NSE was obtained by the MULP-SUA model.
- Isfahan station: The MAE of the MULP-SUA was 12, 14, and 22% lower than that of the MULP-SHA, MULP-PASO, and MULP models, respectively. The NSE of the MULP-SUA, MULP-SHA, MULP-PASO, and MULP models was 0.95, 0.92, 0.90, and 0.89, respectively. The highest PBIAS was obtained by the MULP model.
- Mazandaran station: The MAE of the MULP-SUA, MULP-SHA, MULP-PASO, and MULP models was 0.999 mm, 1.15 mm, 1.45 mm, and 1.55 mm, respectively. The PBIAS of the MULP-SUA, MULP-SHA, MULP-PASO, and MULP models was 5, 9, 10, and 14%, respectively. The highest NSE was obtained by the MULP-SUA model.
- Yazd station: The MAE of the MULP-SUA was 21, 30, and 32% lower than that of the MULP-SHA, MULP-PASO, and MULP models, respectively. The NSE of the MULP-SUA, MULP-SHA, MULP-PASO, and MULP models was 0.95, 0.92, 0.90, and 0.87, respectively. The PBIAS of the optimized models was lower than that of the standalone model.
- Fars station: The MAE of the MULP-SUA, MULP-SHA, MULP-PASO, and MULP models was 0.99 mm, 1.32 mm, 1.41 mm, and 1.47 mm, respectively. The PBIAS of the MULP-SUA, MULP-SHA, MULP-PASO, and MULP models was 6, 11, 15, and 17%, respectively. The MULP-SUA and MULP models provided the highest and lowest NSE, respectively.

Fig. 11.5 Radar plots for investigating the accuracy of models (MAE and NSE panels)


Fig. 11.5 (continued; PBIAS panel)

Figure 11.6 shows the boxplots of the models at the different stations.

- Fars: The median of the observed data, MULP-SUA, MULP-SHA, MULP-PASO, and MULP models was 6 mm, 6 mm, 7 mm, 7.5 mm, and 8.5 mm, respectively. The maximum value of the observed data, MULP-SUA, MULP-SHA, MULP-PASO, and MULP models was 20, 21, 21, 21, and 21 mm, respectively.
- Mazandaran: The median of the observed data, MULP-SUA, MULP-SHA, MULP-PASO, and MULP models was 8 mm, 8.5 mm, 8.82 mm, 9 mm, and 10 mm, respectively. The maximum value of the observed data, MULP-SUA, MULP-SHA, MULP-PASO, and MULP models was 22.5, 23, 24, 24, and 25 mm, respectively. The boxplot of the MULP-SUA had the highest match with the observed data.
- Isfahan: The median of the observed data, MULP-SUA, MULP-SHA, MULP-PASO, and MULP models was 11 mm, 11 mm, 12 mm, 12 mm, and 12 mm, respectively. The maximum value of the observed data, MULP-SUA, MULP-SHA, MULP-PASO, and MULP models was 23.5, 23.5, 24, 24, and 25 mm, respectively.

Fig. 11.6 Boxplots of different models at the different stations (panels: Fars, Mazandaran, Isfahan, Yazd, Hormozgan)

- Yazd: The median of the observed data, MULP-SUA, MULP-SHA, MULP-PASO, and MULP models was 10.3 mm, 10.55 mm, 10.75 mm, 10.75 mm, and 10.75 mm, respectively. The MULP-SUA and MULP models had the highest and lowest match with the observed data, respectively.
- Hormozgan: The median of the observed data, MULP-SUA, MULP-SHA, MULP-PASO, and MULP models was 11 mm, 11 mm, 12.50 mm, 13.5 mm, and 14 mm, respectively. The MULP-SUA and MULP models had the highest and lowest match with the observed data, respectively.

11.6.3 Discussion

In this study, climate parameters were used to predict evaporation. However, it is also necessary to determine the most important parameters at the different stations.


Table 11.3 Investigation of the effect of input parameters on evaporation (RMSE values)

Input combination | Isfahan | Hormozgan | Fars | Yazd | Mazandaran
All | 0.878 | 0.955 | 0.912 | 0.935 | 0.899
All parameters except AVT | 1.455 | 1.567 | 1.672 | 1.667 | 1.567
All parameters except RAI | 1.000 | 0.967 | 1.00 | 0.947 | 1.455
All parameters except WIS | 0.912 | 1.00 | 1.345 | 1.01 | 0.912
All parameters except REH | 0.890 | 0.978 | 0.945 | 0.939 | 1.325
All parameters except NSH | 1.356 | 1.554 | 1.455 | 1.754 | 0.905

Each input parameter was removed from the input combination to assess its effect on the objective function (RMSE). Table 11.3 shows the effects of the different input parameters on evaporation.

- Isfahan: Eliminating AVT from the input combination increased the RMSE from 0.878 to 1.455 mm. Thus, the AVT had the greatest effect on evaporation at the Isfahan station. The REH had the smallest effect on the prediction of evaporation: eliminating REH increased the RMSE only from 0.878 to 0.890 mm. In Isfahan, summers are hot and winters are cold.
- Hormozgan: After removing the AVT from the input combination, the RMSE increased from 0.955 to 1.567 mm. A desert climate is observed at the Hormozgan station, and the AVT and NSH were the most important parameters in this region. After removing the RAI from the input combination, the RMSE increased only from 0.955 to 0.967 mm; as a result, the RAI had the least effect on the prediction of evaporation.
- Fars: Eliminating AVT from the input combination increased the RMSE from 0.912 to 1.672 mm. Thus, the AVT had the greatest effect on evaporation at the Fars station. The REH had the smallest effect on the prediction of evaporation: eliminating REH increased the RMSE only from 0.912 to 0.945 mm. In Fars, summers are hot and winters are cold.
- Yazd: After removing the AVT from the input combination, the RMSE increased from 0.935 to 1.667 mm. A desert climate is observed at the Yazd station, and the AVT and NSH were the most important parameters in this region. After removing the REH from the input combination, the RMSE increased only from 0.935 to 0.939 mm; as a result, the REH had the least effect on the prediction of evaporation.
- Mazandaran: Eliminating AVT from the input combination increased the RMSE from 0.899 to 1.567 mm. Thus, the AVT had the greatest effect on evaporation at the Mazandaran station. The NSH had the smallest effect on the prediction of evaporation: eliminating NSH increased the RMSE only from 0.899 to 0.905 mm. In Mazandaran, a wet climate is observed, and the AVT, RAI, and REH were the most important parameters.
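The leave-one-input-out sensitivity analysis behind Table 11.3 can be sketched as follows; `train_fn` is a hypothetical hook standing in for the optimized-MLP training and evaluation used in the chapter.

```python
def input_ablation(inputs, train_fn):
    """Leave-one-input-out sensitivity analysis (cf. Table 11.3).

    inputs   : list of input-variable names, e.g. ["AVT", "RAI", "WIS"]
    train_fn : train_fn(kept_inputs) -> RMSE of a model trained on that
               subset (hypothetical stand-in for the full training run).
    Returns the RMSE per scenario; the largest increase relative to the
    "All" scenario flags the most influential input.
    """
    results = {"All": train_fn(list(inputs))}
    for name in inputs:
        kept = [v for v in inputs if v != name]
        results[f"All except {name}"] = train_fn(kept)
    return results
```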

11.7 Conclusion

This study predicted evaporation using optimized and standalone MULP models. Based on the obtained results, the following points should be considered:

1. The optimization algorithms could enhance the efficiency of the MULP models.
2. At each station, the input parameters affected evaporation differently. The AVT and NSH were the most important climate parameters in desert regions.
3. The determination of the random parameters was important because these parameters changed the accuracy of the optimization algorithms.
4. The use of all input variables provided the highest accuracy.
5. Each optimization algorithm gave a different accuracy.

Future research can test the models in different regions. Also, modelers can use lagged evaporation values as inputs.

References

Allawi, M. F., Othman, F. B., Afan, H. A., Ahmed, A. N., Hossain, M. S., Fai, C. M., & El-Shafie, A. (2019). Reservoir evaporation prediction modeling based on artificial intelligence methods. Water (Switzerland). https://doi.org/10.3390/w11061226

Arunkumar, R., & Jothiprakash, V. (2013). Reservoir evaporation prediction using data-driven techniques. Journal of Hydrologic Engineering. https://doi.org/10.1061/(asce)he.1943-5584.0000597

Arunkumar, R., Jothiprakash, V., & Sharma, K. (2017). Artificial intelligence techniques for predicting and mapping daily pan evaporation. Journal of The Institution of Engineers (India): Series A. https://doi.org/10.1007/s40030-017-0215-1

Ehteram, M., Panahi, F., Ahmed, A. N., Huang, Y. F., Kumar, P., & Elshafie, A. (2022a). Predicting evaporation with optimized artificial neural network using multi-objective Salp swarm algorithm. Environmental Science and Pollution Research. https://doi.org/10.1007/s11356-021-16301-3

Ehteram, M., Graf, R., Ahmed, A. N., & El-Shafie, A. (2022b). Improved prediction of daily pan evaporation using Bayesian model averaging and optimized kernel extreme machine models in different climates. Stochastic Environmental Research and Risk Assessment, 1–36.

El-Shafie, A., Najah, A., Alsulami, H. M., & Jahanbani, H. (2014). Optimized neural network prediction model for potential evapotranspiration utilizing ensemble procedure. Water Resources Management, 28(4), 947–967.

Feng, Y., Jia, Y., Zhang, Q., Gong, D., & Cui, N. (2018). National-scale assessment of pan evaporation models across different climatic zones of China. Journal of Hydrology, 564, 314–328.

Ghanbari-Adivi, E., Ehteram, M., Farrokhi, A., & Sheikh Khozani, Z. (2022). Combining radial basis function neural network models and inclusive multiple models for predicting suspended sediment loads. Water Resources Management, 36(11), 4313–4342.

Ghorbani, M. A., Deo, R. C., Yaseen, Z. M., & Kashani, H. M. (2018). Pan evaporation prediction using a hybrid multilayer perceptron-firefly algorithm (MLP-FFA) model: Case study in North Iran. Theoretical and Applied Climatology. https://doi.org/10.1007/s00704-017-2244-0


Guven, A., & Kisi, O. (2013). Monthly pan evaporation modeling using linear genetic programming. Journal of Hydrology. https://doi.org/10.1016/j.jhydrol.2013.08.043

Kiran, T. R., & Rajput, S. P. S. (2011). An effectiveness model for an indirect evaporative cooling (IEC) system: Comparison of artificial neural networks (ANN), adaptive neuro-fuzzy inference system (ANFIS) and fuzzy inference system (FIS) approach. Applied Soft Computing Journal. https://doi.org/10.1016/j.asoc.2011.01.025

Kişi, Ö. (2013). Evolutionary neural networks for monthly pan evaporation modeling. Journal of Hydrology. https://doi.org/10.1016/j.jhydrol.2013.06.011

Kisi, O. (2015). Pan evaporation modeling using least square support vector machine, multivariate adaptive regression splines and M5 model tree. Journal of Hydrology. https://doi.org/10.1016/j.jhydrol.2015.06.052

Kisi, O., Mirboluki, A., Naganna, S. R., Malik, A., Kuriqi, A., & Mehraein, M. (2022). Comparative evaluation of deep learning and machine learning in modelling pan evaporation using limited inputs. Hydrological Sciences Journal (just accepted).

Malik, A., & Kumar, A. (2015). Pan evaporation simulation based on daily meteorological data using soft computing techniques and multiple linear regression. Water Resources Management. https://doi.org/10.1007/s11269-015-0915-0

Malik, A., Tikhamarine, Y., Al-Ansari, N., Shahid, S., Sekhon, H. S., Pal, R. K., Rai, P., Pandey, K., Singh, P., Elbeltagi, A., & Sammen, S. S. (2021). Daily pan-evaporation estimation in different agro-climatic zones using novel hybrid support vector regression optimized by Salp swarm algorithm in conjunction with gamma test. Engineering Applications of Computational Fluid Mechanics. https://doi.org/10.1080/19942060.2021.1942990

Moayedi, H., Ghareh, S., & Foong, L. K. (2022). Quick integrative optimizers for minimizing the error of neural computing in pan evaporation modeling. Engineering with Computers, 38(2), 1331–1347.

Panahi, F., Ahmed, A. N., Singh, V. P., Ehtearm, M., & Elshafie. (2021). Predicting freshwater production in seawater greenhouses using hybrid artificial neural network models. Journal of Cleaner Production. https://doi.org/10.1016/j.jclepro.2021.129721

Piri, J., Mohammadi, K., Shamshirband, S., & Akib, S. (2016). Assessing the suitability of hybridizing the Cuckoo optimization algorithm with ANN and ANFIS techniques to predict daily evaporation. Environmental Earth Sciences. https://doi.org/10.1007/s12665-015-5058-3

Qasem, S. N., Samadianfard, S., Kheshtgar, S., Jarhan, S., Kisi, O., Shamshirband, S., & Chau, K. W. (2019). Modeling monthly pan evaporation using wavelet support vector regression and wavelet artificial neural networks in arid and humid climates. Engineering Applications of Computational Fluid Mechanics, 13(1), 177–187.

Sanikhani, H., Kisi, O., Nikpour, M. R., & Dinpashoh, Y. (2012). Estimation of daily pan evaporation using two different adaptive neuro-fuzzy computing techniques. Water Resources Management. https://doi.org/10.1007/s11269-012-0148-4

Seifi, A., Ehteram, M., Nayebloei, F., Soroush, F., Gharabaghi, B., & Torabi Haghighi, A. (2021a). GLUE uncertainty analysis of hybrid models for predicting hourly soil temperature and application wavelet coherence analysis for correlation with meteorological variables. Soft Computing. https://doi.org/10.1007/s00500-021-06009-4

Seifi, A., Ehteram, M., & Dehghani, M. (2021b). A robust integrated Bayesian multi-model uncertainty estimation framework (IBMUEF) for quantifying the uncertainty of hybrid meta-heuristic in global horizontal irradiation predictions. Energy Conversion and Management. https://doi.org/10.1016/j.enconman.2021.114292

Seifi, A., Ehteram, M., Soroush, F., & Haghighi, A. T. (2022). Multi-model ensemble prediction of pan evaporation based on the Copula Bayesian Model Averaging approach. Engineering Applications of Artificial Intelligence, 114, 105124.

Shirsath, P. B., & Singh, A. K. (2010). A comparative study of daily pan evaporation estimation using ANN, regression and climate based models. Water Resources Management. https://doi.org/10.1007/s11269-009-9514-2


Tabari, H., Talaee, P. H., & Abghari, H. (2012). Utility of coactive neuro-fuzzy inference system for pan evaporation modeling in comparison with multilayer perceptron. Meteorology and Atmospheric Physics. https://doi.org/10.1007/s00703-012-0184-x

Wu, L., Huang, G., Fan, J., Ma, X., Zhou, H., & Zeng, W. (2020). Hybrid extreme learning machine with meta-heuristic algorithms for monthly pan evaporation prediction. Computers and Electronics in Agriculture. https://doi.org/10.1016/j.compag.2019.105115

Zounemat-Kermani, M., Keshtegar, B., Kisi, O., & Scholz, M. (2021). Towards a comprehensive assessment of statistical versus soft computing models in hydrology: Application to monthly pan evaporation prediction. Water (Switzerland). https://doi.org/10.3390/w13172451

Chapter 12

Predicting Rainfall Using Inclusive Multiple Model and Radial Basis Function Neural Network

Abstract This study used the salp swarm algorithm (SSA), Henry gas solubility optimization algorithm (HGSOA), and crow optimization algorithm (COA) to adjust the radial basis function neural network (RABFN) parameters for predicting monthly rainfall. Then, a new ensemble model was created using the outputs of the RABFN, RABFN-SSA, RABFN-HGSOA, and RABFN-COA models. The new ensemble model was named the inclusive multiple model (IMM). This study indicated that the ensemble model improved the efficiency of the optimized RABFN models. The training MAE of the IMM, RABFN-HGSOA, RABFN-SSA, RABFN-COA, and RABFN models was 0.987, 1.35, 1.47, 1.58, and 2.21 mm, respectively. The IMM reduced the testing MAE of the RABFN-HGSOA, RABFN-SSA, RABFN-COA, and RABFN models by 32%, 37%, 42%, and 55%, respectively. Also, the HGSOA had better performance than the SSA and COA.

Keywords Rainfall · Optimization algorithms · Individual models · Ensemble models

12.1 Introduction

The importance of hydrological predictions in water resources cannot be ignored. Drought and flood control are made possible by these predictions. Hydrological and meteorological predictions are required for water resource management. Rainfall is an essential parameter in water resource planning and management (Ehteram et al., 2021). Predicting rainfall assists in identifying drought periods. Furthermore, rainfall prediction is critical for flood control. Rainfall is affected by disturbance parameters, and the process of predicting rainfall is complex. As a result, inexpensive and precise models are required for rainfall prediction. Given that the world is currently experiencing droughts and climate change, it is crucial to forecast rainfall in various regions of the world. Interactions between input and output parameters can be precisely found using artificial intelligence models (Seifi et al., 2021). Climate data are one of the inputs that artificial intelligence models can use to predict the output value. Soft computing models have the advantages of lower costs and a faster computing process. These models' computer codes are accessible and simple to implement.

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022. M. Ehteram et al., Application of Machine Learning Models in Agricultural and Meteorological Sciences, https://doi.org/10.1007/978-981-19-9733-4_12


In any case, one of the most important responsibilities for modelers is to select the input data affecting rainfall. For forecasting rainfall, using models with low uncertainty is essential (Xiang et al., 2018). Hydrological predictions can be made using artificial intelligence models in conjunction with other models and software. The use of artificial intelligence models is not restricted to a specific basin; these models can be applied to various basins with varying climates. These models contain unidentified parameters that can be determined using optimization algorithms. Rainfall and other hydrological variables can be accurately predicted using optimization algorithms combined with artificial intelligence models. The management of irrigation systems and agriculture also depends on rainfall prediction. Rainfall prediction is also an important matter because the amount of fresh water on the planet has decreased (Akiner, 2021). Predictions of rainfall are affected by different parameters, and predicting rainfall is a complex and nonlinear process (Xiang et al., 2018). Soft computing models are robust tools for predicting rainfall. Using linear and nonlinear functions, these models can analyze complex phenomena. For predicting target outputs, these methods use advanced operators (Alotaibi et al., 2018; Johny et al., 2020). Data selection is an important aspect of modeling. An artificial neural network (ANN) is a robust soft computing model. There are different kinds of ANN models, and each of them can simulate complex processes. ANN models include radial basis function neural networks (RABFNs). A RABFN has one middle layer (Seifi et al., 2021), and its parameters are unknown; they can be adjusted using robust optimizers. The RABFN should be considered an individual model (Ehteram et al., 2021). For predicting output variables, individual models are weaker than ensemble models. Ensemble models can improve the performance of RABFNs by using the outputs of RABFNs (Panahi et al., 2021). A RABFN can process data quickly because it has one middle layer. Consequently, RABFNs are used in many projects because of their high accuracy and fast performance. RABFNs are a good choice for predicting rainfall because they can analyze nonlinear and complex data.

In this chapter, optimization algorithms are integrated with the radial basis function neural network (RABFN) to estimate monthly rainfall, and a new ensemble model is introduced for predicting rainfall. First, the structure of the RABFN is introduced. Afterward, the structure of the optimized RABFN is explained. Finally, the details of the new ensemble model are described. This study integrated the salp swarm algorithm (SSA), Henry gas solubility optimization algorithm (HGSOA), and crow optimization algorithm (COA) with the RABFN. The structures of the HGSOA, COA, and SSA were explained in Chaps. 5, 6, and 7.


12.2 Structure of Radial Basis Function Neural Network (RABFN)

The RABFN is one of the well-known artificial neural network models (Seifi et al., 2021). It has three layers (Deng et al., 2021a, 2021b): an input layer, a hidden layer, and an output layer (Ehteram et al., 2021). The RABFN has one hidden layer and uses the Gaussian function as its activation function (Seifi et al., 2020). Figure 12.1 shows the structure of the RABFN model. The activation function of the RABFN acts based on the following equation:

\varphi\left(\left\|x - c_j\right\|\right) = \exp\left(-\frac{\left\|x - c_j\right\|^{2}}{2\sigma_j^{2}}\right) \quad (12.1)

where x: input, c_j: the center of the hidden neuron, and σ_j: the width of the neuron. The final output is calculated based on the following equation:

y_i = \sum_{j=1}^{n} \omega_{ji}\,\varphi_j(x) \quad (12.2)

where y_i: output and ω_ji: the connection weight between the hidden layer and the output layer. The center and the width are the most important parameters of the RABFN, and determining accurate values for these parameters is essential to the modeling process.

Fig. 12.1 Structure of the RABFN model

The Henry gas solubility optimization algorithm (HGSOA), crow optimization algorithm (COA), and salp swarm algorithm (SSA) were used to adjust the RABFN parameters. The structures of these algorithms were explained in the earlier chapters.
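Equations (12.1) and (12.2) together define a single forward pass through the network. A minimal NumPy sketch follows; the centers, widths, and weights passed in are illustrative placeholders (in practice, they are the parameters tuned by the optimizers).

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Forward pass of a Gaussian RBF network (Eqs. 12.1 and 12.2).

    x       : (d,) input vector
    centers : (n, d) hidden-neuron centers c_j
    widths  : (n,) hidden-neuron widths sigma_j
    weights : (n,) connection weights from the hidden to the output layer
    """
    # Eq. 12.1: Gaussian activation of each hidden neuron
    dist2 = np.sum((centers - x) ** 2, axis=1)      # ||x - c_j||^2
    phi = np.exp(-dist2 / (2.0 * widths ** 2))
    # Eq. 12.2: weighted sum of the hidden activations
    return float(weights @ phi)
```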

12.3 RABFN Models

Among the parameters of a RABFN model, the width and the center are the most important. In this study, the RABFN parameters are set using evolutionary algorithms. The parameters are encoded as the initial population of the optimization algorithm. Next, a RABFN model is run using the training data. To evaluate the quality of the solutions, the root mean square error (RMSE) is calculated. The algorithm operators are then used to update the solutions, and a convergence condition (CC) is checked. The model proceeds to the testing level if the CC is satisfied; otherwise, the process continues.
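The loop just described (encode, evaluate, update, check convergence) can be sketched generically; both hooks below are hypothetical stand-ins, since the actual RABFN training and the SSA/HGSOA/COA operators are specific to the chapter.

```python
import random

def optimize_rbfn(evaluate_rmse, update_population, n_params,
                  pop_size=50, max_iter=200, tol=1e-4):
    """Generic optimizer loop for tuning RBFN centers/widths (Sect. 12.3).

    evaluate_rmse(solution)      -> training RMSE of an RBFN built from the
                                    encoded parameter vector (hypothetical hook)
    update_population(pop, fits) -> new population produced by the algorithm
                                    operators, e.g. SSA / HGSOA / COA updates
    """
    # Encode parameters as the initial (random) population
    pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    best, best_fit = None, float("inf")
    for _ in range(max_iter):
        fits = [evaluate_rmse(s) for s in pop]        # fitness = training RMSE
        i = min(range(pop_size), key=fits.__getitem__)
        if fits[i] < best_fit:
            best, best_fit = pop[i][:], fits[i]
        if best_fit < tol:                            # convergence condition (CC)
            break
        pop = update_population(pop, fits)            # algorithm operators
    return best, best_fit
```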

12.4 Structure of Inclusive Multiple Model

The RABFN is an individual model. An individual model relies only on its own potential and may therefore produce a high computation error. Ensemble models combine the potential of multiple individual models; consequently, an ensemble model can reduce the errors caused by the individual models. Panahi et al. (2021) stated that ensemble models could predict hydrological variables well. Using ensemble models, Ehteram et al. (2022) predicted evaporation, and the ensemble models performed better than the individual models. Different ensemble models exist; one of them is Bayesian model averaging (BMA). The BMA is a robust ensemble model, but it has complexities. An alternative to the BMA, the inclusive multiple model (IMM), is introduced in this study as a new ensemble model. Multiple optimized and standalone RABFN models are incorporated into the IMM. As a first step, the outputs of the optimized and standalone RABFN models are obtained. Afterward, these outputs are fed into another RABFN model. The structure of the IMM model is shown in Fig. 12.2.
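The two-stage IMM flow can be sketched as follows. Note one simplifying assumption: the second stage below is a least-squares combiner standing in for the second-stage RABFN used in the chapter, so the sketch shows only the stacking idea, not the exact model.

```python
import numpy as np

def imm_predict(member_preds_train, y_train, member_preds_new):
    """Inclusive multiple model (IMM) sketch (Sect. 12.4).

    Stage 1: member_preds_* stack the outputs of the standalone and
    optimized RABFN models as columns, shape (samples, n_models).
    Stage 2: a linear least-squares combiner is fitted on the training
    outputs (an assumed stand-in for the second-stage RABFN).
    """
    X = np.column_stack([member_preds_train,
                         np.ones(len(y_train))])          # add bias column
    coef, *_ = np.linalg.lstsq(X, y_train, rcond=None)    # fit the combiner
    Xn = np.column_stack([member_preds_new,
                          np.ones(len(member_preds_new))])
    return Xn @ coef
```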

12.5 Case Study

Predicting rainfall in the Sharekoord basin is the goal of this study. The Sharekoord plain is one of Iran's most important plains. The basin area is 1211 km². The average annual exploitation of the plain's water resources is 330 MCM. Among all sectors in this basin, agriculture uses the most water. There have been successive droughts in the plain, and the decrease in groundwater resources is another challenge. Figures 12.3 and 12.4 show the basin's location and the rainfall time series. In this study, RA(t − 1),


Fig. 12.2 Structure of IMM model

RA(t − 2), …, and RA(t − 12) were used to estimate one-month-ahead rainfall (t: number of the month). Figure 12.5 shows the significant lag times based on the correlation values. Thus, RA(t − 1), RA(t − 2), RA(t − 3), and RA(t − 4) were used as the inputs to the models. In this study, the following indices are used to evaluate the ability of the models:

\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|RA_{ob} - RA_{es}\right| \quad (12.3)

\mathrm{PBIAS} = \frac{\sum_{i=1}^{N}\left(RA_{ob} - RA_{es}\right)}{\sum_{i=1}^{N} RA_{ob}} \quad (12.4)

\mathrm{NSE} = 1 - \frac{\sum_{i=1}^{n}\left(RA_{ob} - RA_{es}\right)^{2}}{\sum_{i=1}^{n}\left(RA_{ob} - \overline{RA}_{ob}\right)^{2}} \quad (12.5)

\mathrm{CRMSE} = \sqrt{\frac{\sum_{i=1}^{n}\left[\left(RA_{ob} - \overline{RA}_{ob}\right) - \left(RA_{es} - \overline{RA}_{es}\right)\right]^{2}}{n}} \quad (12.6)

where MAE: mean absolute error, PBIAS: percentage of bias, NSE: Nash–Sutcliffe efficiency, CRMSE: centered root mean square difference error, RA_ob: observed rainfall, \overline{RA}_ob: average observed rainfall, RA_es: estimated rainfall, and \overline{RA}_es: average estimated rainfall.
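The four indices in Eqs. (12.3)-(12.6) can be computed directly from the observed and estimated series, for example:

```python
import numpy as np

def evaluation_indices(ra_ob, ra_es):
    """Compute MAE, PBIAS, NSE, and CRMSE (Eqs. 12.3-12.6)."""
    ob = np.asarray(ra_ob, dtype=float)
    es = np.asarray(ra_es, dtype=float)
    mae = np.mean(np.abs(ob - es))                                      # Eq. 12.3
    pbias = np.sum(ob - es) / np.sum(ob)                                # Eq. 12.4
    nse = 1.0 - np.sum((ob - es) ** 2) / np.sum((ob - ob.mean()) ** 2)  # Eq. 12.5
    crmse = np.sqrt(
        np.mean(((ob - ob.mean()) - (es - es.mean())) ** 2))            # Eq. 12.6
    return {"MAE": mae, "PBIAS": pbias, "NSE": nse, "CRMSE": crmse}
```

A perfect prediction yields MAE = 0, PBIAS = 0, NSE = 1, and CRMSE = 0; note that CRMSE, being centered, is insensitive to a constant bias.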


Fig. 12.3 Location of the basin

Fig. 12.4 Monthly rainfall time series (2000–2012)


Fig. 12.5 Heat map of correlation values (*: significant input parameters)
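Building the lagged inputs RA(t − 1) through RA(t − 4) selected above is a simple windowing step; a minimal sketch (function name and interface assumed for illustration):

```python
import numpy as np

def make_lag_features(rain, lags=(1, 2, 3, 4)):
    """Build lagged inputs RA(t-1)..RA(t-4) and the target RA(t) (Sect. 12.5).

    rain : 1-D sequence of monthly rainfall values; the first max(lags)
    months are dropped because their lags are incomplete.
    Returns (X, y) with X of shape (samples, len(lags)).
    """
    rain = np.asarray(rain, dtype=float)
    k = max(lags)
    X = np.column_stack([rain[k - lag: len(rain) - lag] for lag in lags])
    y = rain[k:]
    return X, y
```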

12.6 Results and Discussion

12.6.1 Choice of Random Parameters

Random parameters play a significant role in the performance of optimization algorithms. The population size (POPS) and the maximum number of iterations (MANU) are the most important random parameters. In this study, the POPS and MANU are determined through a sensitivity analysis: the values of these parameters are varied so as to minimize the objective function value (OFV), i.e., the RMSE. Figure 12.6 shows the sensitivity analysis of the random parameters. For the HGSOA, the OFV at MANU = 50, MANU = 100, MANU = 200, MANU = 250, and MANU = 300 was 1.25 mm, 1.11 mm, 1.27 mm, 1.38 mm, and 1.55 mm, respectively. Thus, MANU = 100 provided the lowest objective function value. This process was performed similarly for the other algorithms.


Fig. 12.6 Sensitivity analysis for determination of random parameters

12.6.2 Investigating the Accuracy of Models

Figure 12.7 evaluates the performance of the models using different indices. The training MAE of the IMM, RABFN-HGSOA, RABFN-SSA, RABFN-COA, and RABFN models was 0.987, 1.35, 1.47, 1.58, and 2.21 mm, respectively. The IMM reduced the testing MAE of the RABFN-HGSOA, RABFN-SSA, RABFN-COA, and RABFN models by 32%, 37%, 42%, and 55%, respectively. The training NSE of the IMM, RABFN-HGSOA, RABFN-SSA, RABFN-COA, and RABFN models was 0.97, 0.95, 0.92, 0.90, and 0.89, respectively. The IMM and RABFN provided the highest and lowest NSE at the testing level, respectively. The testing PBIAS of the IMM, RABFN-HGSOA, RABFN-SSA, RABFN-COA, and RABFN models was 0.12, 0.14, 0.16, 0.17, and 0.20, respectively. The Taylor diagram is an appropriate tool for assessing the accuracy of models; it uses the CRMSE, standard deviation, and correlation coefficient to evaluate the efficiency of models. The CRMSE of the IMM, RABFN-HGSOA, RABFN-SSA, RABFN-COA, and RABFN models was 0.24, 1.11, 1.51, 1.93, and 1.91, respectively. The correlation coefficient of the IMM, RABFN-HGSOA, RABFN-SSA, RABFN-COA, and RABFN models was 0.99, 0.95, 0.94, 0.91, and 0.84, respectively (Fig. 12.8). Figure 12.9 shows the boxplots of the models. The median of the IMM, RABFN-HGSOA, RABFN-SSA, RABFN-COA, and RABFN models was 32.0, 32.5, 33.5, 36, and 36 mm, respectively. The maximum value of the IMM, RABFN-HGSOA, RABFN-SSA, RABFN-COA, and RABFN models was 63, 63, 64, 65, and 67 mm, respectively.

12.6.3 Discussion

This study used different optimization algorithms to adjust the RABFN parameters. The algorithm operators produce different accuracy levels; among the algorithms tested in this study, the HGSOA provided the highest accuracy. The number of function evaluations (NOFE = population size × maximum number of iterations) is one of the most important indices for evaluating the ability of the models: in the modeling process, algorithms with a low NOFE are more efficient. Figure 12.10 shows the NOFE of the different algorithms. The NOFE of the HGSOA, SSA, and COA was 14,000, 31,000, and 56,000, respectively. Thus, the HGSOA had the best performance among the algorithms. This study showed that the IMM could improve the performance of both the optimized and standalone RABFN models. Unlike the BMA, which requires complex computations, the IMM is simple to implement, and it should be reliable for predicting other variables in different fields. Figure 12.11 shows the scatterplots of the models. The R² values indicated that the IMM and the optimized models outperformed the RABFN model. The results of this study can be used to predict temporal and spatial rainfall patterns, and the models provide accurate maps of rainfall variation in large basins. Additionally, these models can receive data from general circulation models to predict rainfall under climate change conditions. The lagged rainfall values were used to predict rainfall in this study; future articles can consider the effects of other parameters, such as large-scale climate indices, on rainfall prediction. Future studies can also develop the RABFN by defining more layers. The current study used a RABFN with one hidden layer; the additional layers are expected to enhance the performance of


Fig. 12.7 Radar plots for evaluation of the accuracy of models


Fig. 12.8 Evaluation of the accuracy of models based on the Taylor diagram

Fig. 12.9 Boxplots of models for predicting rainfall

RABFN models. Moreover, future studies can define multiple kernel functions for RABFN models.


Fig. 12.10 NOFE values of different models

12.7 Conclusion

In this study, optimized and ensemble models were used to predict monthly rainfall. Multiple optimization algorithms were applied to adjust the RABFN parameters. Then, the outputs of the standalone and optimized RABFN models were inserted into another RABFN model as an ensemble model. The values of the error indices indicated that the ensemble and optimized RABFN models outperformed the RABFN model. Future studies can use other input variables, such as large-scale climate indices, for predicting monthly rainfall, and other ensemble models, such as the BMA, can be compared with the IMM.

The current research demonstrates that the radial basis function neural network (RABFN) model alone cannot generate reliable predictions. One method for improving the precision of the RABFN model is to use optimization algorithms; using ensemble models is another. Ensemble models have numerous benefits that improve the precision of the RABFN model. As a result, it is recommended that ensemble models also be applied to other hydrological sciences. The uncertainty of neural network models can be reduced by these models. Furthermore, the aforementioned models can forecast rainfall in the context of climate change; they can predict future rainfall accurately and can therefore be used to help prevent damage from floods and drought periods. Accurate rainfall prediction aids in identifying flood or drought periods, and the data from the current models are useful for water resource planning and management. The aforementioned models can be used in subsequent investigations to anticipate spatial and temporal rainfall. Furthermore, sources of uncertainty may be the focus of future studies, and various techniques, including wavelets, can be employed for data preprocessing.


Fig. 12.11 Scatterplots of observed versus estimated rainfall (mm) for the IMM (R² = 0.9992), RABFN-HGSOA (R² = 0.9935), RABFN-SSA (R² = 0.9861), RABFN-COA (R² = 0.9788), and RABFN (R² = 0.9685) models

The models’ precision will be improved by data preprocessing. The best input data can be chosen using the appropriate tools. The best models can also be selected using techniques like multicriteria decision model indicators. Deep learning models could be employed in future studies to precisely forecast rainfall in different basins. The models used in the present investigation can also be evaluated in other parts of the world with various climatic conditions.

References


Akiner, M. E. (2021). Long-term rainfall information forecast by utilizing constrained amount of observation through artificial neural network approach. Advances in Meteorology. https://doi.org/10.1155/2021/5524611

Alotaibi, K., Ghumman, A. R., Haider, H., Ghazaw, Y. M., & Shafiquzzaman, M. (2018). Future predictions of rainfall and temperature using GCM and ANN for arid regions: A case study for the Qassim region, Saudi Arabia. Water (Switzerland). https://doi.org/10.3390/w10091260

Deng, Y., Zhou, X., Shen, J., Xiao, G., Hong, H., Lin, H., Wu, F., & Liao, B. Q. (2021). New methods based on back propagation (BP) and radial basis function (RBF) artificial neural networks (ANNs) for predicting the occurrence of haloketones in tap water. Science of the Total Environment. https://doi.org/10.1016/j.scitotenv.2021.145534

Dong, X. J., Shen, J. N., He, G. X., Ma, Z. F., & He, Y. J. (2021). A general radial basis function neural network assisted hybrid modeling method for photovoltaic cell operating temperature prediction. Energy. https://doi.org/10.1016/j.energy.2021.121212

Ehteram, M., Ahmed, A. N., Kumar, P., Sherif, M., & El-Shafie, A. (2021). Predicting freshwater production and energy consumption in a seawater greenhouse based on ensemble frameworks using optimized multi-layer perceptron. Energy Reports, 7, 6308–6326.

Ehteram, M., Graf, R., Ahmed, A. N., & El-Shafie, A. (2022). Improved prediction of daily pan evaporation using Bayesian Model Averaging and optimized Kernel Extreme Machine models in different climates. Stochastic Environmental Research and Risk Assessment, 1–36.

Johny, K., Pai, M. L., & Adarsh, S. (2020). Adaptive EEMD-ANN hybrid model for Indian summer monsoon rainfall forecasting. Theoretical and Applied Climatology. https://doi.org/10.1007/s00704-020-03177-5

Panahi, F., Ehteram, M., Ahmed, A. N., Huang, Y. F., Mosavi, A., & El-Shafie, A. (2021). Streamflow prediction with large climate indices using several hybrid multilayer perceptrons and copula Bayesian model averaging. Ecological Indicators. https://doi.org/10.1016/j.ecolind.2021.108285

Seifi, A., Ehteram, M., Nayebloei, F., Soroush, F., Gharabaghi, B., & Torabi Haghighi, A. (2021). GLUE uncertainty analysis of hybrid models for predicting hourly soil temperature and application wavelet coherence analysis for correlation with meteorological variables. Soft Computing. https://doi.org/10.1007/s00500-021-06009-4

Seifi, A., Ehteram, M., & Soroush, F. (2020). Uncertainties of instantaneous influent flow predictions by intelligence models hybridized with multi-objective shark smell optimization algorithm. Journal of Hydrology. https://doi.org/10.1016/j.jhydrol.2020.124977

Xiang, Y., Gou, L., He, L., Xia, S., & Wang, W. (2018). A SVR–ANN combined model based on ensemble EMD for rainfall prediction. Applied Soft Computing Journal. https://doi.org/10.1016/j.asoc.2018.09.018

Chapter 13

Predicting Temperature Using Optimized Adaptive Neuro-fuzzy Inference System and Bayesian Model Averaging

Abstract This study uses an optimized adaptive neuro-fuzzy inference system (ANFIS) and Bayesian model averaging (BMA) to estimate one-month-ahead temperature. Lagged temperatures were used as the inputs to the models. The dragonfly optimization algorithm (DRA), rat swarm optimization (RSOA), and antlion optimization algorithm (ANO) were used to set the ANFIS parameters. The results indicated that the BMA model outperformed the other models, and the DRA had the best performance among the optimization algorithms. The Nash–Sutcliffe efficiency (NSE) of the BMA, ANFIS-DRA, ANFIS-RSOA, ANFIS-ANO, and ANFIS models was 0.96, 0.91, 0.90, 0.89, and 0.87, respectively. The BMA and ANFIS-DRA had the highest NSE values at the testing level. It was observed that increasing time horizons decreased the accuracy of the models.

Keywords Air temperature · Optimization algorithms · ANFIS · Ensemble model

13.1 Introduction

Temperature is one of the most important factors for planting crops, and managing agricultural systems requires predicting it. Air temperature can be affected by various factors, such as large climate indices, and its prediction is one of the most important components of sustainable agriculture. Modeling air temperature is a complex process: a modeler needs to identify the effective inputs, and a robust model is required, especially under climate change conditions. The modeler may also face a large number of data points, so the skill and experience of modelers help them in the modeling process. In recent years, different machine learning models, such as the artificial neural network (ANN) (Nadig et al., 2013; Rajendra et al., 2019; Tran et al., 2021), support vector machine (SVM) (Adnan et al., 2021; Deif et al., 2022; Katipoğlu, 2022), and adaptive neuro-fuzzy inference system (ANFIS) (Abbaspour-Gilandeh et al., 2020; Ozbek et al., 2021; Sekertekin et al., 2021), have been used to estimate different variables. These studies used individual methods to predict air temperature; they did not use ensemble models. An ensemble model combines the skills of several robust models and uses the outputs of multiple individual models.

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
M. Ehteram et al., Application of Machine Learning Models in Agricultural and Meteorological Sciences, https://doi.org/10.1007/978-981-19-9733-4_13




An ensemble model incorporates the advantages of multiple models, and an ensemble structure makes it possible to reduce the uncertainty of the modeling process. In this study, the outputs of optimized adaptive neuro-fuzzy inference system (ANFIS) models are used to create an ensemble model for predicting monthly temperature. First, optimization algorithms are applied to adjust the ANFIS parameters. Afterward, the outputs of the ANFIS models are inserted into a Bayesian model averaging (BMA) model. The dragonfly optimization algorithm (DRA), rat swarm optimization (RSOA), and antlion optimization algorithm (ANO) were used to set the ANFIS parameters.

13.2 Structure of ANFIS Models

Nowadays, weather forecasting is essential in agriculture and water resources management, and temperature is one of its most important parameters (Adnan et al., 2021). Predicting temperature is also necessary for farming and irrigation, and temperature forecasts, now and in the future, support better planning for water resources management. Providing temperature maps is important for hydrologists and other researchers. Because temperature is influenced by many different parameters, forecasting it is difficult and requires strong models; numerical and experimental models have weaknesses in this respect. Nowadays, artificial intelligence models are used extensively for predicting hydrologic variables. These models can provide accurate spatial and temporal maps of temperature changes, can process large amounts of data, and can be coupled with other methods, such as data preprocessing, to deliver accurate results. Furthermore, adjusting the models' parameters and choosing the inputs is crucial, and optimization algorithms can be coupled with these models to reinforce their results (Ehteram et al., 2021). The use of such models is not restricted to one climate or region; they can be applied to different regions.

The adaptive neuro-fuzzy inference system (ANFIS) is one of the well-known models. It is built from a combination of artificial neural networks and fuzzy rules, provides accurate outputs, reduces computational cost, and speeds up calculation. The ANFIS model can provide temporal and spatial temperature maps for different seasons and months, and it can reduce the uncertainty of the outputs. ANFIS combines artificial neural networks (ANNs) and fuzzy logic theory for predicting outputs (Alrassas et al., 2021). It uses a hybrid learning algorithm, including the least-squares method and the backpropagation gradient-descent method, to adjust its parameters (Zhu et al., 2021). The ANFIS model consists of five layers (Bazrafshan et al., 2022; Ehteram et al., 2021; Panahi et al., 2021):

1. The first layer is the fuzzification layer. Its outputs are computed as follows:



$out_{1,i} = \mu_{A_i}(x_i)$    (13.1)

where $out_{1,i}$: the output of the ith node, $\mu_{A_i}$: the membership function of node A, and $x$: input.

2. The second layer produces the firing strength (Seifi et al., 2022):

$out_{2,i} = \omega_i = \mu_{A_i}(x_i) \times \mu_{B_{i-2}}(y_i)$    (13.2)

where $out_{2,i}$: the output of the second layer, $\mu_{A_i}(x_i)$ and $\mu_{B_{i-2}}(y_i)$: the membership functions of nodes A and B, and $y$: input.

3. The third layer provides the normalized firing strengths (Huang et al., 2022):

$out_{3,i} = \bar{\omega}_i = \dfrac{\omega_i}{\sum_{i=1}^{2} \omega_i}$    (13.3)

where $out_{3,i}$: the output of the third layer and $\omega_i$: the ith output from the second layer.

4. The fourth layer is the defuzzification layer:

$out_{4,i} = \bar{\omega}_i f_i = \bar{\omega}_i \left(\alpha_i x + \varepsilon_i y + b_i\right)$    (13.4)

where $out_{4,i}$: the output of the fourth layer and $\alpha_i$, $\varepsilon_i$, and $b_i$: consequent parameters.

5. The fifth layer generates the final output:

$out_{5,i} = \sum_i \bar{\omega}_i f_i$    (13.5)

where $out_{5,i}$: the final output. Figure 13.1 shows the structure of the ANFIS model (Adeleke et al., 2022). The membership function value is computed as follows (Adeleke et al., 2022):

$\mu(x) = e^{-\left(\frac{x - v_i}{z_i}\right)}$    (13.6)

where $v_i$ and $z_i$: premise parameters. The premise and consequent parameters have unknown values; thus, robust optimizers are used to adjust them.
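The five-layer computation above can be sketched for a two-input, two-rule first-order Sugeno ANFIS. This is a minimal illustration, not the book's calibrated model: a squared-exponent (Gaussian) membership form is assumed in place of Eq. (13.6), and all parameter values are made up.

```python
import numpy as np

def gaussian_mf(x, v, z):
    # Membership function with premise parameters v (center) and z (width);
    # a squared-exponent (Gaussian) form is assumed for illustration.
    return np.exp(-((x - v) / z) ** 2)

def anfis_forward(x, y, premise, consequent):
    """One forward pass of a 2-rule, first-order Sugeno ANFIS.

    premise:    dict of centers/widths for inputs x and y (layer 1)
    consequent: (2, 3) array of [alpha_i, eps_i, b_i] rows (layer 4)
    """
    # Layer 1: fuzzification -> membership degrees, Eq. (13.1)
    mu_A = gaussian_mf(x, premise["vA"], premise["zA"])   # shape (2,)
    mu_B = gaussian_mf(y, premise["vB"], premise["zB"])   # shape (2,)
    # Layer 2: firing strengths, Eq. (13.2)
    w = mu_A * mu_B
    # Layer 3: normalized firing strengths, Eq. (13.3)
    w_bar = w / w.sum()
    # Layer 4: rule consequents alpha*x + eps*y + b, Eq. (13.4)
    f = consequent @ np.array([x, y, 1.0])
    # Layer 5: weighted sum -> final output, Eq. (13.5)
    return float(np.sum(w_bar * f))

# Illustrative premise and consequent parameters (in the book these are
# the values the optimization algorithms would adjust).
premise = {"vA": np.array([0.0, 1.0]), "zA": np.array([1.0, 1.0]),
           "vB": np.array([0.0, 1.0]), "zB": np.array([1.0, 1.0])}
consequent = np.array([[0.5, 0.2, 0.1],
                       [1.0, -0.3, 0.0]])
print(anfis_forward(0.4, 0.7, premise, consequent))
```

Because layer 3 normalizes the firing strengths, the output is always a convex combination of the rule consequents.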

13.3 Hybrid ANFIS Models

The ANFIS parameters play a key role in the modeling process; therefore, it is essential to determine them accurately. This study integrated the DRA, RSOA, and ANO with the ANFIS model to find the ANFIS parameters. First, the ANFIS parameters are encoded as the initial population of the algorithms. Afterward, the training data are used to run the ANFIS model at the training level. The root mean square error is computed to assess the



Fig. 13.1 Structure of the ANFIS model

quality of the solutions. Each algorithm applies its own operators to generate new solutions. Finally, the stop condition (SC) is checked: if the SC is met, the testing data are used to run the ANFIS model at the testing level; otherwise, the optimization process continues.
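The training workflow above (encode parameters as a population, score candidates by training RMSE, apply the algorithm's operators, check the stop condition) can be sketched with a generic swarm-style update. This is a stand-in, not the actual DRA, RSOA, or ANO operators, and a linear model replaces the ANFIS forward pass so the sketch stays self-contained; all settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rmse(theta, X, y):
    # Stand-in objective: a linear model replaces the ANFIS forward pass.
    # In the chapter, theta would decode to the ANFIS premise/consequent values.
    pred = X @ theta
    return float(np.sqrt(np.mean((y - pred) ** 2)))

def optimize(X, y, pop_size=30, iterations=200, step=0.3):
    dim = X.shape[1]
    # Encode the parameters as the initial population
    pop = rng.normal(size=(pop_size, dim))
    fitness = np.array([rmse(p, X, y) for p in pop])
    for _ in range(iterations):            # stop condition: iteration budget
        best = pop[fitness.argmin()]
        # Operator: pull candidates toward the best solution with random
        # perturbation (a generic swarm-style move, not DRA/RSOA/ANO).
        pop = pop + step * (best - pop) + step * rng.normal(size=pop.shape)
        fitness = np.array([rmse(p, X, y) for p in pop])
    return pop[fitness.argmin()], float(fitness.min())

# Synthetic training data: y = 2*x1 - x2 + noise
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -1.0]) + 0.05 * rng.normal(size=100)
theta, err = optimize(X, y)
print(theta, err)
```

The recovered parameters should approach the true coefficients, with the final RMSE close to the noise level of the synthetic data.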

13.4 Bayesian Model Averaging (BMA)

By combining the predictive densities of various competing models, the BMA produces a new forecast probability density function (PDF). BMA is an ensemble model. Achite et al. (2022) stated that the BMA model improved the accuracy of individual ANN models for predicting drought. BMA has been widely used in different fields, such as groundwater vulnerability (Gharekhani et al., 2022), prediction of evaporation (Ehteram et al., 2022), analyzing freeway traffic (Zou et al., 2021), and estimating evapotranspiration (Yang et al., 2021). The BMA uses the law of total probability to estimate the predictive distribution of the output:

$p\left(q^t \mid M_1^t, M_2^t, \ldots, M_k^t, Q\right) = \sum_{i=1}^{k} c_i \, p\left(q^t \mid M_i^t, Q\right)$    (13.7)

where $M_i^t$: the output of the ith model at time t, $Q$: the observations during the training period, $q$: the forecasted variable, $p\left(q^t \mid M_i^t, Q\right)$: the posterior distribution of q, and $c_i$:



weights. The BMA hypothesizes that the estimated values are unbiased; thus, a bias correction method is used to satisfy this assumption:

$f_i^t = \eta_t + \gamma_t M_i^t$    (13.8)

where $f_i^t$: the bias-corrected forecasts and $\eta_t$ and $\gamma_t$: regression coefficients. Because the log-likelihood function can be computed more easily than the likelihood function, it is used for variance and weight estimation:

$l(\theta) = \log\left(\sum_{i=1}^{k} c_i \, p\left(q \mid f_i, Q\right)\right)$    (13.9)

where $l(\theta)$: the log-likelihood function. Raftery et al. (2005) suggested the expectation–maximization technique to obtain the variances and weights of the posterior distributions. They used a latent variable to solve Eq. (13.9).
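A simplified sketch of the BMA combination of Eqs. (13.7)–(13.9): Gaussian conditional densities with a shared variance, and weights estimated by expectation–maximization in the spirit of Raftery et al. (2005). The bias-correction step of Eq. (13.8) is assumed to have been done already, and the toy forecasts are synthetic.

```python
import numpy as np

def bma_em(forecasts, obs, iters=100):
    """Estimate BMA weights c_i and a common variance via EM.

    forecasts: (k, n) bias-corrected member forecasts f_i
    obs:       (n,) observations q
    The latent variable z[i, t] indicates which member generated
    observation t (a simplified version of Raftery et al.'s scheme).
    """
    k, n = forecasts.shape
    w = np.full(k, 1.0 / k)                     # initial weights
    var = np.var(obs - forecasts.mean(axis=0))  # initial shared variance
    for _ in range(iters):
        # E-step: responsibility of each member for each observation
        dens = np.exp(-0.5 * (obs - forecasts) ** 2 / var) / np.sqrt(2 * np.pi * var)
        z = w[:, None] * dens
        z /= z.sum(axis=0, keepdims=True)
        # M-step: update the weights and the shared forecast variance
        w = z.mean(axis=1)
        var = np.sum(z * (obs - forecasts) ** 2) / n
    return w, var

# Toy example: member 0 tracks the observations closely, member 1 is noisier.
rng = np.random.default_rng(1)
obs = rng.normal(size=200)
forecasts = np.vstack([obs + 0.1 * rng.normal(size=200),
                       obs + 1.0 * rng.normal(size=200)])
w, var = bma_em(forecasts, obs)
print(w)
```

The weight of the accurate member should dominate, which is exactly the behavior the chapter reports when BMA favors the better-performing ANFIS variants.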

13.5 Case Study

A study is conducted to predict temperature in the Sefidrood basin, one of Iran's largest basins. The basin area is 59,273 km². The Sefidrood River is of great importance in this region. The eastern region of the basin, near the Caspian Sea, has a temperate climate, while the northern regions have a cold climate. The annual rainfall in the basin is 415 mm. The temperature time series and the location of the case study are shown in Figs. 13.2 and 13.3.

Fig. 13.2 Temperature time series (monthly air temperature, °C)

Fig. 13.3 Location of case study

This study used lagged temperature values (TEM(t − 1), …, TEM(t − 12)) to predict one-month-ahead temperature. The following indices were used to assess the performance of the models:

$\mathrm{MAPE} = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{\mathrm{TEM}_{ob} - \mathrm{TEM}_{es}}{\mathrm{TEM}_{ob}}\right| \times 100$    (13.10)

$\mathrm{PBIAS} = \frac{\sum_{i=1}^{N}\left(\mathrm{TEM}_{ob} - \mathrm{TEM}_{es}\right)}{\sum_{i=1}^{N}\mathrm{TEM}_{ob}} \times 100$    (13.11)

$\mathrm{NSE} = 1 - \frac{\sum_{i=1}^{n}\left(\mathrm{TEM}_{ob} - \mathrm{TEM}_{es}\right)^{2}}{\sum_{i=1}^{n}\left(\mathrm{TEM}_{ob} - \overline{\mathrm{TEM}}_{ob}\right)^{2}}$    (13.12)

$\mathrm{CRMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left[\left(\mathrm{TEM}_{ob} - \overline{\mathrm{TEM}}_{ob}\right) - \left(\mathrm{TEM}_{es} - \overline{\mathrm{TEM}}_{es}\right)\right]^{2}}$    (13.13)

where MAPE: mean absolute percentage error, PBIAS: percentage of bias, NSE: Nash–Sutcliffe efficiency, CRMSE: centered root mean square error, $\mathrm{TEM}_{ob}$: observed temperature, $\overline{\mathrm{TEM}}_{ob}$: average observed temperature, and $\mathrm{TEM}_{es}$: estimated temperature.
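The four error indices of Eqs. (13.10)–(13.13) translate directly into code; the sample observed and estimated vectors below are illustrative, not the study's data.

```python
import numpy as np

def mape(obs, est):
    # Mean absolute percentage error, Eq. (13.10)
    return float(np.mean(np.abs((obs - est) / obs)) * 100)

def pbias(obs, est):
    # Percentage of bias, Eq. (13.11)
    return float(np.sum(obs - est) / np.sum(obs) * 100)

def nse(obs, est):
    # Nash-Sutcliffe efficiency, Eq. (13.12)
    return float(1 - np.sum((obs - est) ** 2)
                 / np.sum((obs - obs.mean()) ** 2))

def crmse(obs, est):
    # Centered root mean square error, Eq. (13.13): both series are
    # centered on their means before the squared differences are averaged.
    return float(np.sqrt(np.mean(
        ((obs - obs.mean()) - (est - est.mean())) ** 2)))

obs = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
est = np.array([11.0, 14.0, 21.0, 24.0, 29.0])
print(mape(obs, est), pbias(obs, est), nse(obs, est), crmse(obs, est))
```

Note that CRMSE ignores any constant bias between the two series, which is why the Taylor diagram pairs it with the correlation coefficient and standard deviation.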

13.6 Results and Discussion


13.6 Results and Discussion

13.6.1 Determination of the Size of Data

Figure 13.4 shows the objective function value for different data sizes at the training and testing levels. For the ANFIS-ANO, the objective function values for 50, 60, 70, 80, and 90% of the data used for training were 3.24, 1.95, 2.41, 3.39, and 3.78. For the ANFIS-DRA, the corresponding values were 3.12, 1.34, 2.23, 3.45, and 3.87. For the ANFIS-RSOA, they were 3.34, 1.76, 2.39, 3.32, and 3.57.

13.6.2 Determination of Random Parameter Values

Evolutionary algorithms are affected by their random parameters. In a first stage, the random parameters are determined using a sensitivity analysis: the objective function values are computed for one parameter while the other parameter values remain fixed. Figure 13.5 illustrates the sensitivity analysis for determining the population size (POPS) and the maximum number of iterations (MAN). For the DRA, the objective function values for MAN = 80, 160, 240, 320, and 400 were 1.66, 1.32, 1.58, 1.82, and 1.96, respectively. For the RSOA, they were 1.85, 1.81, 1.74, 1.89, and 1.90, respectively. For the ANO, they were 2.0, 1.98, 1.95, 1.97, and 2.02, respectively. For the DRA, the objective function values for POPS = 60, 120, 180, 240, and 300 were 1.67, 1.34, 1.55, 1.78, and 1.98, respectively. For the RSOA, they were 1.87, 1.82, 1.76, 1.84, and 1.89, respectively. For the ANO, they were 2.10, 1.99, 1.95, 1.99, and 2.12, respectively.
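The one-at-a-time sensitivity procedure above can be sketched as a simple grid loop. `run_algorithm` is a hypothetical placeholder: in the study it would train the hybrid ANFIS model with the given settings and return the objective (RMSE) value, while here a bowl-shaped surrogate merely mimics the reported minima near MAN = 160 and POPS = 120 for the DRA.

```python
# One-at-a-time sensitivity analysis: vary one random parameter over a grid
# while the others stay at their default values, and record the objective.

def run_algorithm(pop_size, max_iter):
    # Placeholder objective (illustrative surrogate only): a smooth bowl
    # with its minimum at pop_size = 120, max_iter = 160.
    return 1.3 + ((pop_size - 120) / 200) ** 2 + ((max_iter - 160) / 250) ** 2

defaults = {"pop_size": 120, "max_iter": 160}
grids = {"pop_size": [60, 120, 180, 240, 300],
         "max_iter": [80, 160, 240, 320, 400]}

best = {}
for name, values in grids.items():
    scores = {}
    for v in values:
        settings = dict(defaults)
        settings[name] = v          # vary one parameter, fix the rest
        scores[v] = run_algorithm(**settings)
    best[name] = min(scores, key=scores.get)
print(best)
```

One-at-a-time scans are cheap but ignore interactions between the parameters; the chapter's per-parameter curves in Fig. 13.5 have the same limitation.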

13.6.3 Evaluation of the Accuracy of Models

Figure 13.6 assesses the accuracy of the models based on radar plots. The MAPE of the BMA, ANFIS-DRA, ANFIS-RSOA, ANFIS-ANO, and ANFIS models was 9, 12, 17, 21, and 23%. The ANFIS and BMA models had the highest and lowest MAPE values at the testing level. The NSE of the BMA, ANFIS-DRA, ANFIS-RSOA, ANFIS-ANO, and ANFIS models was 0.96, 0.91, 0.90, 0.89, and 0.87, respectively. The PBIAS of the BMA, ANFIS-DRA, ANFIS-RSOA, ANFIS-ANO, and ANFIS models was 5, 7, 11, 12, and 14% at the testing level.



Fig. 13.4 Objective function values for different sizes of data (panels: ANFIS-ANO, ANFIS-DRA, and ANFIS-RSOA)



Fig. 13.5 Sensitivity analysis for determining random parameters

The Taylor diagram evaluates the potential of the models using the CRMSE, the correlation coefficient, and the standard deviation. Figure 13.7 shows the Taylor diagram. The CRMSE of the BMA, ANFIS-DRA, ANFIS-RSOA, ANFIS-ANO, and ANFIS models was 0.80, 1.10, 1.22, 1.63, and 2.05, respectively. The correlation coefficient of the BMA, ANFIS-DRA, ANFIS-RSOA, ANFIS-ANO, and ANFIS models was 0.99, 0.98, 0.94, 0.92, and 0.89, respectively. Figure 13.8 shows the boxplots of the models. The median of the observed data and of the BMA, ANFIS-DRA, ANFIS-RSOA, ANFIS-ANO, and ANFIS models was 17.5, 18, 16.5, 16, 14.5, and 13. The maximum value of the observed data and of the BMA, ANFIS-DRA, ANFIS-RSOA, ANFIS-ANO, and ANFIS models was 35, 35, 32, 31, 30, and 30, respectively.



Fig. 13.6 Radar plots for evaluation of the accuracy of models (panels: PBIAS, NSE, and MAPE)



Fig. 13.7 Taylor diagram for the evaluation of the models

13.6.4 Discussion

This study used optimized ANFIS models and BMA to predict one-month-ahead temperature. In this section, the ability of the models to predict one-, two-, and three-month-ahead temperature is evaluated. Table 13.1 shows the testing results of the different models.

Two-month-ahead: The MAPE of the BMA, ANFIS-RSOA, ANFIS-ANO, and ANFIS models was 15, 19, 25, and 27%. The BMA and ANFIS models had the lowest and highest PBIAS. The NSE of the BMA, ANFIS-RSOA, ANFIS-ANO, and ANFIS models was 0.92, 0.90, 0.87, and 0.85, respectively.

Three-month-ahead: The MAPE of the BMA, ANFIS-RSOA, ANFIS-ANO, and ANFIS models was 18, 19, 25, and 27%. The BMA and ANFIS models had the lowest and highest PBIAS. The NSE of the BMA, ANFIS-RSOA, ANFIS-ANO, and ANFIS models was 0.92, 0.90, 0.87, and 0.85, respectively.

The results of this section indicated that increasing the time horizon decreased the accuracy of the models. The MAPE of the BMA model was 14, 15, and 18 for predicting one-, two-, and three-month-ahead temperature, and its NSE was 0.96, 0.92, and 0.90, respectively.



Fig. 13.8 Boxplots of models for predicting temperature

13.7 Conclusion

This study used BMA and optimized ANFIS models to estimate one-month-ahead temperature. Lagged temperatures were used to predict monthly temperature. The results indicated that the BMA improved the efficiency of the individual models for predicting air temperature. Also, the optimized ANFIS models outperformed the standalone ANFIS model. Increasing the time horizon decreased the accuracy of the models. Future studies can use other inputs, such as large climate indices, for predicting temperature, and can test other soft computing models.

Table 13.1 Error indices for predicting one, two, and three-month-ahead temperature (testing level)

Model             MAPE (%)   PBIAS (%)   NSE
One-month-ahead
BMA               14         5           0.96
ANFIS-DRA         18         7           0.91
ANFIS-RSOA        18         11          0.90
ANFIS-ANO         24         12          0.89
ANFIS             26         14          0.87
Two-month-ahead
ANFIS-DRA         15         8           0.92
ANFIS-RSOA        19         11          0.90
ANFIS-ANO         25         12          0.87
ANFIS             27         16          0.85
Three-month-ahead
ANFIS-DRA         18         9           0.90
ANFIS-RSOA        21         12          0.88
ANFIS-ANO         26         14          0.84
ANFIS             29         17          0.82

References

Abbaspour-Gilandeh, Y., Jahanbakhshi, A., & Kaveh, M. (2020). Prediction kinetic, energy and exergy of quince under hot air dryer using ANNs and ANFIS. Food Science and Nutrition. https://doi.org/10.1002/fsn3.1347

Achite, M., Banadkooki, F. B., Ehteram, M., Bouharira, A., Ahmed, A. N., & Elshafie, A. (2022). Exploring Bayesian model averaging with multiple ANNs for meteorological drought forecasts. Stochastic Environmental Research and Risk Assessment. https://doi.org/10.1007/s00477-021-02150-6

Adeleke, O., Akinlabi, S. A., Jen, T. C., & Dunmade, I. (2022). Prediction of municipal solid waste generation: An investigation of the effect of clustering techniques and parameters on ANFIS model performance. Environmental Technology (United Kingdom). https://doi.org/10.1080/09593330.2020.1845819

Adnan, R. M., Liang, Z., Kuriqi, A., Kisi, O., Malik, A., Li, B., & Mortazavizadeh, F. (2021). Air temperature prediction using different machine learning models. Indonesian Journal of Electrical Engineering and Computer Science. https://doi.org/10.11591/ijeecs.v22.i1.pp534-541

Alrassas, A. M., Al-Qaness, M. A. A., Ewees, A. A., Ren, S., Elaziz, M. A., Damaševičius, R., & Krilavičius, T. (2021). Optimized ANFIS model using Aquila optimizer for oil production forecasting. Processes. https://doi.org/10.3390/pr9071194

Bazrafshan, O., Ehteram, M., Dashti Latif, S., Feng Huang, Y., Yenn Teo, F., Najah Ahmed, A., & El-Shafie, A. (2022). Predicting crop yields using a new robust Bayesian averaging model based on multiple hybrid ANFIS and MLP models. Ain Shams Engineering Journal. https://doi.org/10.1016/j.asej.2022.101724

Deif, M. A., Solyman, A. A. A., Alsharif, M. H., Jung, S., & Hwang, E. (2022). A hybrid multi-objective optimizer-based SVM model for enhancing numerical weather prediction: A study for the Seoul Metropolitan Area. Sustainability (Switzerland). https://doi.org/10.3390/su14010296

Ehteram, M., Graf, R., Ahmed, A. N., & El-Shafie, A. (2022). Improved prediction of daily pan evaporation using Bayesian Model Averaging and optimized Kernel Extreme Machine models in different climates. Stochastic Environmental Research and Risk Assessment, 1–36.

Ehteram, M., Yenn, F., Najah Ahmed, A., Dashti Latif, S., Feng Huang, Y., Abozweita, O., Al-Ansari, N., & El-Shafie, A. (2021). Performance improvement for infiltration rate prediction using hybridized adaptive neuro-fuzzy inferences system (ANFIS) with optimization algorithms. Ain Shams Engineering Journal. https://doi.org/10.1016/j.asej.2020.08.019

Gharekhani, M., Nadiri, A. A., Khatibi, R., Sadeghfam, S., & Asghari Moghaddam, A. (2022). A study of uncertainties in groundwater vulnerability modelling using Bayesian model averaging (BMA). Journal of Environmental Management. https://doi.org/10.1016/j.jenvman.2021.114168

Huang, H., Band, S. S., Karami, H., Ehteram, M., Chau, K. W., & Zhang, Q. (2022). Solar radiation prediction using improved soft computing models for semi-arid, slightly-arid and humid climates. Alexandria Engineering Journal, 61(12), 10631–10657.

Katipoğlu, O. M. (2022). Prediction of missing temperature data using different machine learning methods. Arabian Journal of Geosciences, 15(1), 21.

Nadig, K., Potter, W., Hoogenboom, G., & McClendon, R. (2013). Comparison of individual and combined ANN models for prediction of air and dew point temperature. Applied Intelligence. https://doi.org/10.1007/s10489-012-0417-1

Ozbek, A., Sekertekin, A., Bilgili, M., & Arslan, N. (2021). Prediction of 10-min, hourly, and daily atmospheric air temperature: Comparison of LSTM, ANFIS-FCM, and ARMA. Arabian Journal of Geosciences. https://doi.org/10.1007/s12517-021-06982-y

Panahi, F., Ehteram, M., & Emami, M. (2021). Suspended sediment load prediction based on soft computing models and black widow optimization algorithm using an enhanced gamma test. Environmental Science and Pollution Research. https://doi.org/10.1007/s11356-021-14065-4

Raftery, A. E., Gneiting, T., Balabdaoui, F., & Polakowski, M. (2005). Using Bayesian model averaging to calibrate forecast ensembles. Monthly Weather Review. https://doi.org/10.1175/MWR2906.1

Rajendra, P., Murthy, K. V. N., Subbarao, A., & Boadh, R. (2019). Use of ANN models in the prediction of meteorological data. Modeling Earth Systems and Environment. https://doi.org/10.1007/s40808-019-00590-2

Seifi, A., Ehteram, M., Soroush, F., & Haghighi, A. T. (2022). Multi-model ensemble prediction of pan evaporation based on the Copula Bayesian model averaging approach. Engineering Applications of Artificial Intelligence, 114, 105124.

Sekertekin, A., Bilgili, M., Arslan, N., Yildirim, A., Celebi, K., & Ozbek, A. (2021). Short-term air temperature prediction by adaptive neuro-fuzzy inference system (ANFIS) and long short-term memory (LSTM) network. Meteorology and Atmospheric Physics. https://doi.org/10.1007/s00703-021-00791-4

Tran, T. T. K., Bateni, S. M., Ki, S. J., & Vosoughifar, H. (2021). A review of neural networks for air temperature forecasting. Water (Switzerland). https://doi.org/10.3390/w13091294

Yang, Y., Sun, H., Xue, J., Liu, Y., Liu, L., Yan, D., & Gui, D. (2021). Estimating evapotranspiration by coupling Bayesian model averaging methods with machine learning algorithms. Environmental Monitoring and Assessment, 193, 1–15.

Zhu, H., Zhu, L., Sun, Z., & Khan, A. (2021). Machine learning based simulation of an anticancer drug (Busulfan) solubility in supercritical carbon dioxide: ANFIS model and experimental validation. Journal of Molecular Liquids. https://doi.org/10.1016/j.molliq.2021.116731

Zou, Y., Lin, B., Yang, X., Wu, L., Muneeb Abid, M., & Tang, J. (2021). Application of the Bayesian model averaging in analyzing freeway traffic incident clearance time for emergency management. Journal of Advanced Transportation. https://doi.org/10.1155/2021/6671983

Chapter 14

Predicting Evapotranspiration Using Support Vector Machine Model and Hybrid Gamma Test

Abstract In agriculture and water resource management, evapotranspiration prediction plays an important role. In this chapter, optimized SVM models are used for predicting evapotranspiration. The SVM parameters are adjusted using particle swarm optimization (PSO), antlion optimization (ANO), and the crow optimization algorithm (COA). A hybrid gamma test is used to choose the best input combination; it can determine the best input combination automatically. The optimized SVM models outperformed the standalone SVM model. The mean absolute error (MAE) of the SVM-ANO, SVM-COA, SVM-PSO, and SVM models was 0.678, 0.789, 0.812, and 0.824 at the Iranshahr station.

Keywords Hybrid gamma test · Optimization algorithms · Evapotranspiration · Support vector machine

14.1 Introduction

Predicting evapotranspiration is a complex and nonlinear process (Fu et al., 2021; Mehdizadeh et al., 2017; Mohammadrezapour et al., 2019; Yao et al., 2017; Zeinolabedini Rezaabad et al., 2020). Different meteorological parameters influence evapotranspiration (Antonopoulos et al., 2019; Fan et al., 2018; Ferreira et al., 2019; Luo et al., 2015). For this reason, robust models are required to predict evapotranspiration. In this research, soft computing models are applied to estimate evapotranspiration. Particle swarm optimization (PSO), antlion optimization (ANO), and the crow optimization algorithm (COA) are used to set the SVM parameters. The structures of the algorithms are explained in Chaps. 2, 6 and 10.

14.2 Review of Previous Papers

Drought and climate change are growing concerns worldwide, and water demands are rising. Increased emphasis has been placed on water resources planning due to the rise in water demands and the decline in water resources (Mehdizadeh et al., 2017).




Hydrological and meteorological forecasts are required for effective water resource planning (Aghelpour and Norooz-Valashedi (2022); Alizamir et al., 2020; Zeinolabedini Rezaabad et al., 2020). These predictions include estimates of critical meteorological parameters. Evapotranspiration is a critical meteorological and agricultural parameter. Predicting evapotranspiration is critical for irrigation, agriculture, and crop production. Accurate prediction is necessary for irrigation and agricultural management. Evapotranspiration cannot be adequately predicted by empirical models alone. It is possible to predict evapotranspiration using both direct and indirect measurement techniques. Different tools are required for the implementation of direct techniques. Direct measurement techniques are costly. Methods of indirect measurement include numerical models and techniques, as well as artificial intelligence. Using climatic data, artificial intelligence models can provide accurate predictions of evapotranspiration. These models facilitate a faster simulation procedure. In recent years, experts have developed numerous evapotranspiration prediction models based on artificial intelligence. El-Shafie et al. (2013) predicted evapotranspiration using a neural network model. They used data of the maximum and minimum input temperatures to estimate evapotranspiration. Setting the parameters of the neural network model was a significant obstacle in their modeling procedure. The results revealed that the neural network model outperformed the experimental models. Pour-Ali Baba et al. (2013) predicted evapotranspiration using neural network and ANFIS models. They utilized data on solar radiation and other environmental factors to estimate evapotranspiration. The results indicated that the neural network model and ANFIS are more precise than the experimental models. El-Shafie et al. (2014) employed experimental models and a neural network to predict evapotranspiration at many sites. 
They predicted evapotranspiration using maximum and minimum temperature data. According to their findings, the neural network model demonstrated promising potential for estimating evapotranspiration. Citakoglu et al. (2014) predicted evapotranspiration through neural networks, ANFIS, and empirical models. The experimental model has been illustrated to be less precise than other models. Kisi et al. (2015) predicted evapotranspiration using the ANFIS model. They utilized nonclimate data other. The models’ examination revealed that such data’s application might yield reliable results. However, the preprocessing of data, the setup of model parameters, and the selection of input data posed obstacles to the research. Abdullah et al. (2015) used climate data to predict evapotranspiration. They predicted it using the extreme learning machines (ELMs) model. Each parameter has a unique impact on evapotranspiration. Consequently, it was required to identify the most crucial input parameters. The results indicated that the ELM model has a high predictive accuracy potential. Goci´c et al. (2015) predicted evapotranspiration using a support vector machine (SVM) and genetic programming (GP). In addition, they utilized the wavelet for data preprocessing. The results demonstrated that the SVM was highly accurate. However, one of the obstacles to their research was setting the model parameters. Kisi and Kilic (2016) predicted evapotranspiration using tree models. In their research, many combinations of input data were utilized. The findings indicated that tree and neural network


models accurately predicted evapotranspiration. They utilized many indices to forecast it. Pandey et al. (2017) predicted evaporation using a support vector machine, neural network, and regression models. Various sets of input data were employed to predict evapotranspiration, and each input had a distinct impact on it. The results revealed that the neural network model was more precise than the other models. Keshtegar et al. (2018) predicted evapotranspiration using the ANFIS model. When the ANFIS model and the tree model were compared, the ANFIS model was found to be more accurate. Kisi and Alizamir (2018) coupled the wavelet approach with various soft computing models. The input data were preprocessed using the wavelet approach, and its application improved the precision of the neural network models. Ferreira et al. (2019) predicted evapotranspiration using a neural network approach and support vector machine. They utilized various data combinations to calculate evapotranspiration. Different input data resulted in varying degrees of precision among the models.

14.3 Structure of Support Vector Machine

SVM is a robust method for predicting time series (Seifi et al., 2020). The SVM acts based on the following equation (Yahya et al., 2019):

$$f(x) = \omega^{T} x + b \quad (14.1)$$

where f(x): output, ω: weight vector, x: input, and b: bias. SVR minimizes computational errors. Thus, an optimization problem is defined for the modeling process. Sain and Vapnik (1996) defined an ε-insensitive loss function for the SVM model. The optimization problem is defined based on the following equations (Ehteram et al., 2019):

$$\text{Minimize } \frac{1}{2}\|\omega\|^{2} + C\sum_{i=1}^{m}\left(\sigma_{i}^{-} + \sigma_{i}^{+}\right) \quad (14.2)$$

$$\text{subject to } \left(\omega_{i} \cdot x_{i} + b\right) - y_{i} \le \varepsilon + \sigma_{i}^{+} \quad (14.3)$$

$$y_{i} - \left(\omega_{i} \cdot x_{i} + b\right) \le \varepsilon + \sigma_{i}^{-} \quad (14.4)$$

where σ_i^-, σ_i^+: violations of the ith training point, C: penalty coefficient, m: number of training points, x: input, and y: output. A kernel function can map a nonlinear time series to a linearly separable space so that it can be predicted or simulated (Samantaray & Ghose, 2022):

$$f(x) = \omega^{T} K(x, x_{i}) + b \quad (14.5)$$


14 Predicting Evapotranspiration Using Support Vector Machine Model …

$$K(x, x_{i}) = \exp\left(-\frac{|x - x_{i}|^{2}}{2\gamma^{2}}\right) \quad (14.6)$$

where K(x, x_i): kernel function and γ: kernel parameter. The penalty and kernel parameters have unknown values.
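As a minimal numerical sketch of Eqs. (14.5) and (14.6), the snippet below evaluates the RBF kernel and the resulting SVM decision function. The support vectors, weights, and bias here are hypothetical placeholders for illustration, not values fitted to data.

```python
import math

def rbf_kernel(x, xi, gamma=1.0):
    """RBF kernel of Eq. (14.6): exp(-|x - xi|^2 / (2 * gamma^2))."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, xi))
    return math.exp(-sq_dist / (2.0 * gamma ** 2))

def svm_predict(x, support_vectors, weights, bias, gamma=1.0):
    """Decision function of Eq. (14.5): f(x) = sum_i w_i * K(x, x_i) + b."""
    return sum(w * rbf_kernel(x, xi, gamma)
               for w, xi in zip(weights, support_vectors)) + bias

# Hypothetical support vectors and weights, for illustration only
svs = [(0.0, 0.0), (1.0, 1.0)]
ws = [0.5, -0.25]
f = svm_predict((0.5, 0.5), svs, ws, bias=0.1)
```

The kernel equals 1 when x coincides with a support vector and decays with distance; C and γ remain free hyperparameters, which is why the following sections tune them with optimization algorithms.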

14.4 Hybrid SVM Models

In this study, the SVM parameters are adjusted using optimization algorithms. First, candidate SVM parameter sets are defined as the initial population. Next, the SVM is run at the training level using the training data. The root mean square error (the objective function) is calculated for each candidate. The algorithms are then applied to update the agents' locations, and updating the agents' locations updates the SVM parameters. The process ends when convergence is reached.
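The update loop just described can be sketched generically. The snippet below uses a simple population-based random-perturbation search as a stand-in for the PSO/ANO/COA operators of earlier chapters, and a toy objective that stands in for "train the SVM and return its RMSE"; both are illustrative assumptions, not the book's exact algorithms.

```python
import random

def tune_parameters(objective, bounds, pop_size=20, iterations=50, seed=1):
    """Population-based search for SVM parameters such as (C, gamma).

    `objective` stands in for training the SVM and returning its RMSE.
    Each agent is greedily replaced by a perturbed copy when it improves.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=objective)
    for _ in range(iterations):
        for idx, agent in enumerate(pop):
            cand = [min(max(v + rng.gauss(0.0, 0.1 * (hi - lo)), lo), hi)
                    for v, (lo, hi) in zip(agent, bounds)]
            if objective(cand) < objective(agent):
                pop[idx] = cand
        best = min(pop + [best], key=objective)
    return best

# Toy RMSE surface with its minimum at C = 10, gamma = 0.5 (hypothetical)
toy_rmse = lambda p: (p[0] - 10.0) ** 2 + (p[1] - 0.5) ** 2
best_C, best_gamma = tune_parameters(toy_rmse, bounds=[(0.1, 100.0), (0.01, 5.0)])
```

In the actual models, evaluating the objective means retraining the SVM once per candidate, so the population size and iteration count chosen in Sect. 14.7.1 directly control the computational cost.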

14.5 Theory of Gamma Test

Input variables can be selected using the gamma test (GT). The GT is widely used in different fields, such as sediment load prediction (Panahi et al., 2021a), evaporation estimation (Malik et al., 2021), municipal solid waste prediction (Liang et al., 2021), and streamflow prediction (Panahi et al., 2021b). The GT assumes the following relationship between the inputs and the output:

$$\text{out} = f(\text{in}_{1}, \text{in}_{2}, \ldots, \text{in}_{M}) + r \quad (14.7)$$

where out: output, f: smooth function, in_i: ith input, and r: noise. The gamma statistic Γ estimates the variance of the noise in the model output. The GT acts based on the nearest neighbors of the input and output vectors. The delta and gamma functions should be computed to obtain Γ:

$$\delta_{M}(k) = \frac{1}{M}\sum_{i=1}^{M}\left|\text{in}_{N[i,k]} - \text{in}_{i}\right|^{2} \quad (1 \le k \le p) \quad (14.8)$$

$$\gamma_{M}(k) = \frac{1}{2M}\sum_{i=1}^{M}\left(\text{out}_{N[i,k]} - \text{out}_{i}\right)^{2} \quad (1 \le k \le p) \quad (14.9)$$

where δ_M(k): delta function, out_{N[i,k]}: output value corresponding to the kth nearest neighbor of the ith point, in_{N[i,k]}: kth nearest neighbor of the ith input, p: number of neighbors, and M: number of input vectors. The following regression line is used to compute the Γ statistic, which is its intercept:

$$\gamma_{M}(k) = \Gamma + A\,\delta_{M}(k) \quad (14.10)$$

where A: gradient. The gamma test also has another important index:

$$V_{\text{ratio}} = \frac{\Gamma}{\sigma^{2}} \quad (14.11)$$

where σ²: variance of the output. The lowest values of V_ratio and Γ determine the best input variables. If a modeler faces many input variables, it is complex to compute Γ for every input combination. Thus, it is essential to modify the gamma test: the GT can be combined with optimization algorithms to simplify the modeling process. As a first step, the candidate input combinations are encoded as the initial population of the algorithms. In the next level, Γ is computed as the objective function of each combination. Next, the operators of the algorithms are used to update the solutions and create new input combinations.
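Under the definitions in Eqs. (14.7)–(14.11), a brute-force gamma test can be sketched as follows; Γ is taken as the intercept of a least-squares line through the (δ_M(k), γ_M(k)) points. The routine and its variable names are illustrative, not the book's implementation.

```python
def gamma_test(inputs, outputs, p=10):
    """Return (Gamma, V_ratio) following Eqs. (14.8)-(14.11).

    inputs: list of input vectors; outputs: list of scalars.
    Brute-force O(M^2) neighbour search, adequate for small samples.
    """
    M = len(inputs)
    sq = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))
    # neighbours of each point sorted by input-space distance (self excluded)
    order = [sorted((j for j in range(M) if j != i),
                    key=lambda j: sq(inputs[i], inputs[j]))
             for i in range(M)]
    deltas, gammas = [], []
    for k in range(1, p + 1):
        deltas.append(sum(sq(inputs[i], inputs[order[i][k - 1]])
                          for i in range(M)) / M)
        gammas.append(sum((outputs[order[i][k - 1]] - outputs[i]) ** 2
                          for i in range(M)) / (2.0 * M))
    # regression line gamma_M(k) = Gamma + A * delta_M(k); Gamma = intercept
    n = len(deltas)
    mx, my = sum(deltas) / n, sum(gammas) / n
    A = (sum((d - mx) * (g - my) for d, g in zip(deltas, gammas))
         / sum((d - mx) ** 2 for d in deltas))
    Gamma = my - A * mx
    var_out = sum((y - sum(outputs) / M) ** 2 for y in outputs) / M
    return Gamma, Gamma / var_out

# Synthetic check: y = 2x plus noise of variance ~0.01, so Gamma should be small
import random
rng = random.Random(0)
xs = [[i / 100.0] for i in range(100)]
ys = [2.0 * x[0] + rng.gauss(0.0, 0.1) for x in xs]
G, vr = gamma_test(xs, ys)
```

For a smooth underlying function, Γ approaches the noise variance, and a small V_ratio indicates that the chosen inputs explain the output well, which is the selection criterion used in Table 14.4.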

14.6 Case Study

This chapter uses standalone and optimized SVM models to estimate monthly evapotranspiration at the Zahedan, Iranshahr, and Chabahar stations in the Sistan and Baluchistan province of Iran. Figure 14.1 shows the location of the case study (Mohammadrezapour et al., 2019).

Fig. 14.1 Location of case study (Mohammadrezapour et al., 2019)


Table 14.1 Average values of input data

Station     WIS     AVT    NSHO   RAH
Chabahar    245.3   25.4   8.4    67.23
Zahedan     276.2   18.2   9.2    33.12
Zabol       455.2   21.3   8.2    33.14

Wind speed (WIS), relative humidity (RAH), average temperature (AVT), and the number of sunny hours (NSHO) are the input variables. Table 14.1 shows the details of the input data. Figure 14.2 shows the evapotranspiration time series. The following error indices are used to evaluate the ability of the models:

$$\text{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\text{EVT}_{ob} - \text{EVT}_{es}\right| \quad (14.12)$$

$$\text{PBIAS} = \frac{\sum_{i=1}^{N}\left(\text{EVT}_{ob} - \text{EVT}_{es}\right)}{\sum_{i=1}^{N}\text{EVT}_{ob}} \quad (14.13)$$

$$\text{NSE} = 1 - \frac{\sum_{i=1}^{n}\left(\text{EVT}_{ob} - \text{EVT}_{es}\right)^{2}}{\sum_{i=1}^{n}\left(\text{EVT}_{ob} - \overline{\text{EVT}}_{ob}\right)^{2}} \quad (14.14)$$

$$\text{CRMSE} = \sqrt{\frac{\sum_{i=1}^{n}\left[\left(\text{EVT}_{ob} - \overline{\text{EVT}}_{ob}\right) - \left(\text{EVT}_{es} - \overline{\text{EVT}}_{es}\right)\right]^{2}}{n}} \quad (14.15)$$

where MAE: mean absolute error, PBIAS: percentage of bias, NSE: Nash–Sutcliffe efficiency, CRMSE: centered root mean square difference error, EVT_ob: observed evapotranspiration, $\overline{\text{EVT}}_{ob}$: average observed evapotranspiration, EVT_es: estimated value.
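The four indices above (MAE, PBIAS, NSE, CRMSE) can be computed directly from paired observed/estimated series. In this sketch, PBIAS is expressed as a percentage, an assumption consistent with the values quoted later in Sect. 14.7.3.

```python
def evaluate(obs, est):
    """MAE, PBIAS (%), NSE, and CRMSE for observed/estimated series."""
    n = len(obs)
    mean_o = sum(obs) / n
    mean_e = sum(est) / n
    mae = sum(abs(o - e) for o, e in zip(obs, est)) / n
    pbias = 100.0 * sum(o - e for o, e in zip(obs, est)) / sum(obs)
    nse = 1.0 - (sum((o - e) ** 2 for o, e in zip(obs, est))
                 / sum((o - mean_o) ** 2 for o in obs))
    # CRMSE compares anomalies, so a constant bias does not affect it
    crmse = (sum(((o - mean_o) - (e - mean_e)) ** 2
                 for o, e in zip(obs, est)) / n) ** 0.5
    return {"MAE": mae, "PBIAS": pbias, "NSE": nse, "CRMSE": crmse}

# A series that is biased but perfectly correlated gives CRMSE = 0
scores = evaluate([1.0, 2.0, 3.0], [1.5, 2.5, 3.5])
```

This illustrates why the Taylor diagram (which uses CRMSE) and PBIAS provide complementary views: a model can have zero CRMSE and still be biased.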

14.7 Results and Discussion

14.7.1 Choice of the Algorithm Parameters

The values of the random parameters should be set before the optimization process begins. In this study, sensitivity analysis is applied to determine the random parameters of the algorithms. The best values of the random parameters (RP) are shown in Table 14.2. The objective function value is computed as the parameter values are changed. The population size (POSI) and the maximum number of iterations (MNIT) are the random parameters of the algorithms. The parameter value whose objective function is the lowest is the best. The results for the Iranshahr station are reported in


Fig. 14.2 Evapotranspiration time series at the Chabahar, Iranshahr, and Zahedan stations (2000–2008)


Table 14.2 Determination of RP values at the Iranshahr station (objective function values)

POSI          ANO     COA     PSO
POSI = 50     2.23    2.98    3.23
POSI = 100    1.23    1.82    3.12
POSI = 200    1.45    1.76    1.98
POSI = 250    1.55    1.63    1.95
POSI = 300    1.67    1.89    1.99

MNIT          ANO     COA     PSO
MNIT = 80     2.26    2.99    3.32
MNIT = 160    1.24    1.80    2.98
MNIT = 240    1.48    1.77    1.94
MNIT = 320    1.56    1.64    1.90
MNIT = 400    1.69    1.90    1.99

Table 14.3 RP values for other stations

Station     ANO                      COA                      PSO
Zahedan     POSI = 100, MNIT = 160   POSI = 150, MNIT = 240   POSI = 200, MNIT = 320
Chabahar    POSI = 100, MNIT = 160   POSI = 150, MNIT = 240   POSI = 200, MNIT = 320

Table 14.2. For ANO, the objective function of POSI = 50, POSI = 100, POSI = 200, POSI = 250, and POSI = 300 was 2.23, 1.23, 1.45, 1.55, and 1.67, respectively. Thus, POSI = 100 provided the lowest value of the objective function. For PSO, the objective function of POSI = 50, POSI = 100, POSI = 200, POSI = 250, and POSI = 300 was 3.23, 3.12, 1.98, 1.95, and 1.99, respectively. For ANO, the objective function of MNIT = 80, MNIT = 160, MNIT = 240, MNIT = 320, and MNIT = 400 was 2.26, 1.24, 1.48, 1.56, and 1.69, respectively. Table 14.3 reports the best values of RPs for the other stations.

14.7.2 The Input Scenarios

The original GT may be complex and time-consuming. Integrating the GT and optimization algorithms allows optimal input scenarios (ISs) to be determined automatically. Table 14.4 lists the best ISs. As can be seen from the table, the best input

Table 14.4 Best input combination for different stations

Iranshahr
Input             Γ        V_ratio
AVT, WIS, NSHO    0.0567   0.0104
AVT, NSHO         0.0678   0.0121
AVT, WIS          0.0891   0.0164

Zahedan
Input             Γ        V_ratio
AVT, WIS, NSHO    0.0712   0.0044
AVT, NSHO         0.0812   0.0051
AVT, WIS          0.0914   0.0055

Chabahar
Input             Γ        V_ratio
AVT, WIS, NSHO    0.0761   0.007
AVT, NSHO         0.0812   0.008
AVT, WIS          0.0954   0.009

combination included AVT, NSHO, and WIS. In this study, there are four input variables. If the original GT were used, Γ would have to be computed for 2^4 − 1 input combinations. Thus, the hybrid GT gives the best input combination automatically.

14.7.3 Assessment of the Performance of Models

Figure 14.3 shows the testing accuracy of the models using error indices. The MAE of the SVM-ANO, SVM-COA, SVM-PSO, and SVM models was 0.678, 0.789, 0.812, and 0.824 at the Iranshahr station. The SVM-ANO decreased the MAE of the SVM-COA, SVM-PSO, and SVM models by 18, 23, and 32% at the Zahedan station. The MAE of the SVM-ANO, SVM-COA, SVM-PSO, and SVM models was 0.625, 0.755, 0.811, and 0.924 at the Zabol station. The NSE of the SVM-ANO, SVM-COA, SVM-PSO, and SVM models was 0.97, 0.92, 0.90, and 0.86 at the Iranshahr station. The NSE of the SVM-ANO, SVM-COA, SVM-PSO, and SVM models was 0.95, 0.94, 0.92, and 0.86 at the Zabol station. The PBIAS of the SVM-ANO, SVM-COA, SVM-PSO, and SVM models was 6, 8, 9, and 12 at the Iranshahr station. The PBIAS of the SVM-ANO, SVM-COA, SVM-PSO, and SVM models was 8, 11, 14, and 17 at the Zabol station. The Taylor diagram is a graphical tool for assessing models' performance. Three indices, the standard deviation, correlation coefficient, and CRMSE, are used to determine the best models; the models closest to the reference point are the best. The CRMSE of the SVM-ANO, SVM-COA, SVM-PSO, and SVM models was 0.42, 0.87, 1.29, and 1.73 at the Iranshahr station. The correlation coefficient of the SVM-ANO, SVM-COA, SVM-PSO, and SVM models was 0.98, 0.95, 0.93, and 0.92 at the Iranshahr station. The CRMSE of the SVM-ANO, SVM-COA, SVM-PSO, and


Fig. 14.3 Evaluation of the accuracy of models (MAE, NSE, and PBIAS)


SVM models was 0.29, 0.67, 1.05, and 1.42 at the Zahedan station. The correlation coefficient of the SVM-ANO, SVM-COA, SVM-PSO, and SVM models was 0.99, 0.98, 0.97, and 0.95 at the Zahedan station. The CRMSE of the SVM-ANO, SVM-COA, SVM-PSO, and SVM models was 0.30, 0.66, 1.05, and 1.11 at the Zabol station (Fig. 14.5).

14.7.4 Discussion

This study determined the best input combination using a new hybrid GT, so verifying its performance is essential. Each input variable was eliminated in turn to determine the significant parameters. Table 14.5 shows the importance of the input parameters. The MAE of the SVM-ANO increased from 0.678 to 0.912 mm/day when AVT was removed from the input combination at the Iranshahr station. At Iranshahr, removing RH from the input combination increased the MAE from 0.678 mm/day to 0.684 mm/day. Thus, AVT and RH had the highest and lowest importance at the Iranshahr station, and NSHO was the third most important parameter. The MAE at Zahedan increased from 0.639 to 0.912 mm/day and from 0.639 to 0.643 mm/day by removing AVT and RH, respectively. Removing AVT from the input combination increased the MAE from 0.625 to 0.905 at the Zabol station. Thus, the hybrid gamma test correctly chose the inputs for predicting evapotranspiration. Based on the results of this study, it was found that the optimization algorithms could improve the efficiency of the SVM models. Among the optimization algorithms, the ANO performed best; different algorithms have different operators that lead to different accuracies, and the PSO had the lowest accuracy. Moreover, future studies can use other feature selection methods to identify the best input combination.

14.8 Conclusion

Evapotranspiration was predicted using optimized SVM models. The parameters of the SVM were adjusted using particle swarm optimization (PSO), antlion optimization (ANO), and the crow optimization algorithm (COA). Optimization algorithms automatically selected the best input combinations in the gamma test, and the hybrid gamma test accurately selected the best inputs. Evapotranspiration was also reliably predicted by the optimized SVM models. The models of the current study can be used to predict other hydrological variables and to generate evapotranspiration maps for a region. In future studies, different kernel functions of the SVM models can be tested.


Fig. 14.5 Taylor diagram for the evaluation of the accuracy of models using all input data (Zahedan, Zabol, and Iranshahr stations)

Table 14.5 Determination of the most important input parameters

MAE (mm/day):

Input combination          Iranshahr   Zahedan   Zabol
All inputs                 0.678       0.639     0.625
All inputs except AVT      0.912       0.912     0.901
All inputs except WIS      0.723       0.711     0.745
All inputs except NSHO     0.824       0.800     0.802
All inputs except RH       0.684       0.643     0.28

References

Abdullah, S. S., Malek, M. A., Abdullah, N. S., Kisi, O., & Yap, K. S. (2015). Extreme learning machines: A new approach for prediction of reference evapotranspiration. Journal of Hydrology, 527, 184–195.

Aghelpour, P., & Norooz-Valashedi, R. (2022). Predicting daily reference evapotranspiration rates in a humid region, comparison of seven various data-based predictor models. Stochastic Environmental Research and Risk Assessment, 1–23.

Alizamir, M., Kisi, O., Muhammad Adnan, R., & Kuriqi, A. (2020). Modelling reference evapotranspiration by combining neuro-fuzzy and evolutionary strategies. Acta Geophysica, 68(4), 1113–1126.

Antonopoulos, V. Z., Papamichail, D. M., Aschonitis, V. G., & Antonopoulos, A. V. (2019). Solar radiation estimation methods using ANN and empirical models. Computers and Electronics in Agriculture. https://doi.org/10.1016/j.compag.2019.03.022

Citakoglu, H., Cobaner, M., Haktanir, T., & Kisi, O. (2014). Estimation of monthly mean reference evapotranspiration in Turkey. Water Resources Management, 28(1), 99–113.

Ehteram, M., Singh, V. P., Ferdowsi, A., Mousavi, S. F., Farzin, S., Karami, H., Mohd, N. S., Afan, H. A., Lai, S. H., Kisi, O., Malek, M. A., Ahmed, A. N., & El-Shafie, A. (2019). An improved model based on the support vector machine and cuckoo algorithm for simulating reference evapotranspiration. PLoS ONE. https://doi.org/10.1371/journal.pone.0217499

El-Shafie, A., Alsulami, H. M., Jahanbani, H., & Najah, A. (2013). Multi-lead ahead prediction model of reference evapotranspiration utilizing ANN with ensemble procedure. Stochastic Environmental Research and Risk Assessment, 27(6), 1423–1440.

El-Shafie, A., Najah, A., Alsulami, H. M., & Jahanbani, H. (2014). Optimized neural network prediction model for potential evapotranspiration utilizing ensemble procedure. Water Resources Management, 28(4), 947–967.

Fan, J., Yue, W., Wu, L., Zhang, F., Cai, H., Wang, X., Lu, X., & Xiang, Y. (2018). Evaluation of SVM, ELM and four tree-based ensemble models for predicting daily reference evapotranspiration using limited meteorological data in different climates of China. Agricultural and Forest Meteorology, 263, 225–241.

Ferreira, L. B., da Cunha, F. F., de Oliveira, R. A., & Fernandes Filho, E. I. (2019). Estimation of reference evapotranspiration in Brazil with limited meteorological data using ANN and SVM–A new approach. Journal of Hydrology, 572, 556–570.

Fu, T., Li, X., Jia, R., & Feng, L. (2021). A novel integrated method based on a machine learning model for estimating evapotranspiration in dryland. Journal of Hydrology. https://doi.org/10.1016/j.jhydrol.2021.126881

Gocić, M., Motamedi, S., Shamshirband, S., Petković, D., Ch, S., Hashim, R., & Arif, M. (2015). Soft computing approaches for forecasting reference evapotranspiration. Computers and Electronics in Agriculture, 113, 164–173.

Keshtegar, B., Kisi, O., Ghohani Arab, H., & Zounemat-Kermani, M. (2018). Subset modeling basis ANFIS for prediction of the reference evapotranspiration. Water Resources Management, 32(3), 1101–1116.

Kisi, O., & Alizamir, M. (2018). Modelling reference evapotranspiration using a new wavelet conjunction heuristic method: Wavelet extreme learning machine vs wavelet neural networks. Agricultural and Forest Meteorology, 263, 41–48.

Kisi, O., & Kilic, Y. (2016). An investigation on generalization ability of artificial neural networks and M5 model tree in modeling reference evapotranspiration. Theoretical and Applied Climatology, 126(3), 413–425.

Kisi, O., Sanikhani, H., Zounemat-Kermani, M., & Niazi, F. (2015). Long-term monthly evapotranspiration modeling by several data-driven methods without climatic data. Computers and Electronics in Agriculture, 115, 66–77.

Liang, G., Panahi, F., Ahmed, A. N., Ehteram, M., Band, S. S., & Elshafie, A. (2021). Predicting municipal solid waste using a coupled artificial neural network with archimedes optimisation algorithm and socioeconomic components. Journal of Cleaner Production. https://doi.org/10.1016/j.jclepro.2021.128039

Luo, Y., Traore, S., Lyu, X., Wang, W., Wang, Y., Xie, Y., Jiao, X., & Fipps, G. (2015). Medium range daily reference evapotranspiration forecasting by using ANN and public weather forecasts. Water Resources Management. https://doi.org/10.1007/s11269-015-1033-8

Malik, A., Tikhamarine, Y., Al-Ansari, N., Shahid, S., Sekhon, H. S., Pal, R. K., Rai, P., Pandey, K., Singh, P., Elbeltagi, A., & Sammen, S. S. (2021). Daily pan-evaporation estimation in different agro-climatic zones using novel hybrid support vector regression optimized by Salp swarm algorithm in conjunction with gamma test. Engineering Applications of Computational Fluid Mechanics. https://doi.org/10.1080/19942060.2021.1942990

Mehdizadeh, S., Behmanesh, J., & Khalili, K. (2017). Using MARS, SVM, GEP and empirical equations for estimation of monthly mean reference evapotranspiration. Computers and Electronics in Agriculture. https://doi.org/10.1016/j.compag.2017.05.002

Mohammadrezapour, O., Piri, J., & Kisi, O. (2019). Comparison of SVM, ANFIS and GEP in modeling monthly potential evapotranspiration in an arid region (Case study: Sistan and Baluchestan Province, Iran). Water Science and Technology: Water Supply. https://doi.org/10.2166/ws.2018.084

Panahi, F., Ehteram, M., & Emami, M. (2021a). Suspended sediment load prediction based on soft computing models and black widow optimization algorithm using an enhanced gamma test. Environmental Science and Pollution Research. https://doi.org/10.1007/s11356-021-14065-4

Panahi, F., Ehteram, M., Ahmed, A. N., Huang, Y. F., Mosavi, A., & El-Shafie, A. (2021b). Streamflow prediction with large climate indices using several hybrid multilayer perceptrons and copula Bayesian model averaging. Ecological Indicators. https://doi.org/10.1016/j.ecolind.2021.108285

Pandey, P. K., Nyori, T., & Pandey, V. (2017). Estimation of reference evapotranspiration using data driven techniques under limited data conditions. Modeling Earth Systems and Environment, 3(4), 1449–1461.

Pour-Ali Baba, A., Shiri, J., Kisi, O., Fard, A. F., Kim, S., & Amini, R. (2013). Estimating daily reference evapotranspiration using available and estimated climatic data by adaptive neuro-fuzzy inference system (ANFIS) and artificial neural network (ANN). Hydrology Research, 44(1), 131–146.

Sain, S. R., & Vapnik, V. N. (1996). The nature of statistical learning theory. Technometrics. https://doi.org/10.2307/1271324

Samantaray, S., & Ghose, D. K. (2022). Prediction of S12-MKII rainfall simulator experimental runoff data sets using hybrid PSR-SVM-FFA approaches. Journal of Water and Climate Change, 13(2), 707–734.

Seifi, A., Ehteram, M., Singh, V. P., & Mosavi, A. (2020). Modeling and uncertainty analysis of groundwater level using six evolutionary optimization algorithms hybridized with ANFIS, SVM, and ANN. Sustainability (Switzerland). https://doi.org/10.3390/SU12104023

Yahya, A. S. A., Ahmed, A. N., Othman, F. B., Ibrahim, R. K., Afan, H. A., El-Shafie, A., Fai, C. M., Hossain, M. S., Ehteram, M., & Elshafie, A. (2019). Water quality prediction model based support vector machine model for ungauged river catchment under dual scenarios. Water (Switzerland). https://doi.org/10.3390/w11061231


Yao, Y., Liang, S., Li, X., Chen, J., Liu, S., Jia, K., Zhang, X., Xiao, Z., Fisher, J. B., Mu, Q., Pan, M., Liu, M., Cheng, J., Jiang, B., Xie, X., Grünwald, T., Bernhofer, C., & Roupsard, O. (2017). Improving global terrestrial evapotranspiration estimation using support vector machine by integrating three process-based algorithms. Agricultural and Forest Meteorology. https://doi.org/10.1016/j.agrformet.2017.04.011

Zeinolabedini Rezaabad, M., Ghazanfari, S., & Salajegheh, M. (2020). ANFIS modeling with ICA, BBO, TLBO, and IWO optimization algorithms and sensitivity analysis for predicting daily reference evapotranspiration. Journal of Hydrologic Engineering, 25(8), 04020038.

Chapter 15

Predicting Infiltration Using Kernel Extreme Learning Machine Model Under Input and Parameter Uncertainty

Abstract This study develops optimized kernel extreme learning machines (KELMs) for predicting the infiltration rate. The rat swarm optimization algorithm (RSOA), shark optimization (SO), and dragonfly algorithm (DRA) were used to find the KELM parameters. This study also used generalized likelihood uncertainty estimation (GLUE) for quantifying input and parameter uncertainties. The furrow length had the highest importance among the input parameters. Also, the KELM-RSOA outperformed the other models. The MAE of the KELM-RSOA, KELM-SO, KELM-DRA, and KELM models was 0.02, 0.05, 0.07, and 0.10 at the training level. The MAE of the KELM-RSOA, KELM-SO, KELM-DRA, and KELM models was 0.04, 0.08, 0.10, and 0.12 at the testing level. The results revealed that the model parameters provided higher uncertainty than the input parameters. Keywords Uncertainty · Infiltration · Optimization algorithm · KELM model

15.1 Introduction

Infiltration prediction is an important topic in irrigation studies. For irrigation studies, identifying the input parameters of the infiltration models to estimate soil infiltration is one of the major challenges. In recent years, soft computing models such as artificial neural networks (ANNs) (Mattar et al., 2015; Singh et al., 2021), support vector machines (SVMs) (Sayari et al., 2021; Sihag et al., 2020; Vand et al., 2018), and the adaptive neuro-fuzzy inference system (ANFIS) (Angelaki et al., 2021; Ehteram et al., 2021) have been used to predict infiltration. This chapter uses optimized kernel extreme learning machines (KELMs) to predict infiltration. Also, the uncertainty of the inputs and model parameters is quantified. The rat swarm optimization algorithm (RSOA), shark optimization (SO), and dragonfly algorithm (DRA) were used to find the KELM parameters. The structures of SO, DRA, and RSOA were explained in Chaps. 3, 8, and 9, respectively.

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. Ehteram et al., Application of Machine Learning Models in Agricultural and Meteorological Sciences, https://doi.org/10.1007/978-981-19-9733-4_15


15.2 Structure of Kernel Extreme Learning Machines (KELM)

An extreme learning machine (ELM) is a feedforward neural network used for simulation and prediction (Ehteram et al., 2022a). The KELM has fewer adjustable parameters, a faster convergence speed, and better generalization performance. The ELM output function is computed as follows (Sebbar et al., 2021):

$$f_{L}(x) = \sum_{i=1}^{L}\alpha_{i}h_{i}(x) = h(x)\alpha \quad (15.1)$$

where f_L: output function, α: the vector of output weights, and h(x): the output vector of the hidden layer (Sales et al., 2021):

$$h(x) = \left[h_{1}(x), h_{2}(x), \ldots, h_{L}(x)\right] \quad (15.2)$$

where x: input. A kernel extreme learning machine (KELM) improves the generalization capability of the ELM by introducing kernel learning. The ELM model minimizes the training error and the norm of the output weights:

$$\text{Min } \|H\alpha - T\|^{2} \text{ and } \|\alpha\| \quad (15.3)$$

ELM models also use the minimal-norm least-squares solution:

$$\alpha = H^{\dagger}T \quad (15.4)$$

where H: output matrix of the hidden layer and H†: its Moore–Penrose generalized inverse. By utilizing kernel functions, the hidden-layer feature mapping can be defined in the KELM method:

$$\Omega_{\text{ELM},i,j} = h(x_{i}) \cdot h(x_{j}) = K(x_{i}, x_{j}) \quad (15.5)$$

$$f(x) = \left[K(x, x_{1}), \ldots, K(x, x_{N})\right]^{T}\left(\frac{I}{C} + \Omega_{\text{ELM}}\right)^{-1}T \quad (15.6)$$

where K(x, x_i): kernel function:

$$K(x, x_{i}) = \exp\left(-\gamma\|x - x_{i}\|^{2}\right) \quad (15.7)$$

where C: regularization parameter and γ: kernel parameter. The kernel and regularization parameters affect the accuracy of KELM. Thus, robust optimizers are used to find the optimal values of the KELM parameters.
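Eq. (15.6) has a closed-form solution: the output weights are α = (I/C + Ω)⁻¹T. The sketch below implements this with a plain Gaussian-elimination solver; the kernel width, C, and the toy one-dimensional data are illustrative assumptions.

```python
import math

def kelm_train(X, T, C=100.0, gamma=1.0):
    """Solve alpha = (I/C + Omega)^(-1) T per Eq. (15.6), Omega_ij = K(x_i, x_j)."""
    N = len(X)
    kern = lambda u, v: math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))
    # Build A = I/C + Omega and solve A alpha = T by Gaussian elimination
    A = [[kern(X[i], X[j]) + (1.0 / C if i == j else 0.0) for j in range(N)]
         for i in range(N)]
    b = list(T)
    for col in range(N):
        piv = max(range(col, N), key=lambda r: abs(A[r][col]))  # partial pivoting
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, N):
            f = A[r][col] / A[col][col]
            for c in range(col, N):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    alpha = [0.0] * N
    for r in range(N - 1, -1, -1):
        alpha[r] = (b[r] - sum(A[r][c] * alpha[c] for c in range(r + 1, N))) / A[r][r]
    return alpha

def kelm_predict(x, X, alpha, gamma=1.0):
    """f(x) = sum_i alpha_i K(x, x_i), the kernel form of Eq. (15.6)."""
    kern = lambda u, v: math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))
    return sum(a * kern(x, xi) for a, xi in zip(alpha, X))

# Toy data y = x; a large C makes the fit nearly interpolate the samples
X = [[0.0], [0.5], [1.0]]
alpha = kelm_train(X, [0.0, 0.5, 1.0], C=1e6, gamma=1.0)
```

A smaller C strengthens the ridge term I/C and smooths the fit; this trade-off between C and γ is exactly what the optimizers of the next section search over.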

15.5 Case Study

149

15.3 Hybrid KELM Model

The kernel and regularization parameters are adjusted using optimization algorithms. As a first step, the parameters are initialized and encoded as the initial population of the algorithms. Next, the KELM runs at the training level. Solution quality is determined by the objective function (root mean square error). Solutions are updated using the algorithm operators. The process continues until the convergence criterion is met.

15.4 Uncertainty of Input and Model Parameters

The uncertainty of the model parameters and inputs affects the accuracy of the outputs. Therefore, it is essential to quantify the uncertainty in the modeling process. Generalized likelihood uncertainty estimation (GLUE) is one of the robust methods for quantifying uncertainty. It has been widely used in different fields, such as evaluating the uncertainty of hydrological models (Mirzaei et al., 2015), uncertainty estimation in floods (Thi et al., 2018), uncertainty evaluation of groundwater flow modeling (Abedini et al., 2017), uncertainty estimation in predicting soil temperature (Seifi et al., 2021), sediment load prediction (Panahi et al., 2021), and predicting evaporation (Ehteram et al., 2022b). The GLUE acts based on the following levels:

1. Prior distributions are determined for the inputs and model parameters. Since the model parameters do not have a physical conception, a normal distribution cannot simply be assumed. The statistical distribution and intrinsic properties of the parameters are revealed by the variation of the parameters at the training level (calibration step). For this study, 3000 calibrated models were used to estimate the parameter distributions; this number was used because the parameter variations were stable across the calibrated models.
2. Parameter samples are generated using the Monte Carlo method and the prior distribution of the parameters.
3. The Nash–Sutcliffe efficiency (NSE) is used as the likelihood function.
4. A threshold is determined. Parameter sets whose likelihood value is less than the threshold are discarded.
5. The posterior probability is estimated using the prior distribution and the likelihood values.
6. The mean and variance of the parameters are estimated.
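The GLUE levels above can be sketched for a single scalar parameter. The toy linear model, uniform prior, and threshold value below are illustrative assumptions, while the NSE likelihood and the behavioural filtering follow the listed steps.

```python
import random

def glue(simulate, obs, prior_sampler, n_samples=2000, threshold=0.5, seed=0):
    """GLUE sketch: sample the prior, score with NSE, keep behavioural sets,
    and return the likelihood-weighted posterior mean and variance."""
    rng = random.Random(seed)
    mean_o = sum(obs) / len(obs)
    denom = sum((o - mean_o) ** 2 for o in obs)
    behavioural = []
    for _ in range(n_samples):
        theta = prior_sampler(rng)                 # steps 1-2: Monte Carlo sample
        sim = simulate(theta)
        nse = 1.0 - sum((o - s) ** 2 for o, s in zip(obs, sim)) / denom
        if nse >= threshold:                       # step 4: discard non-behavioural
            behavioural.append((theta, nse))
    total = sum(l for _, l in behavioural)         # step 5: normalize likelihoods
    mean = sum(t * l for t, l in behavioural) / total
    var = sum(l * (t - mean) ** 2 for t, l in behavioural) / total
    return mean, var                               # step 6

# Toy model y = theta * x with a uniform prior on theta (hypothetical)
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
obs = [2.0 * x for x in xs]
post_mean, post_var = glue(lambda th: [th * x for x in xs], obs,
                           lambda rng: rng.uniform(0.0, 4.0))
```

The posterior mean recovers the generating parameter, and the posterior variance quantifies how tightly the NSE threshold constrains it; the prediction bounds behind the P- and R-factors of Sect. 15.5 come from the spread of the behavioural simulations.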

15.5 Case Study

In this study, the models were used to estimate the infiltration rate. The data were gathered from six studies (Holzapfel et al., 2004; Mateos & Oyonarte, 2005; Playán


Table 15.1 Symbols of input data

Input                                 Symbol
Infiltration process time             T_o (min)
Water-front advance time              T_L
Cross-sectional area of the inflow    A_o (cm²)
Furrow length                         L (m)
Inflow rate                           Q (l s⁻¹)

Fig. 15.1 Values of input data

et al., 2004; Rodríguez Alvarez, 2003; Sepaskhah & Shaabani, 2007; Valiantzas et al., 2001). Table 15.1 shows the input data. Figure 15.1 shows the values of the data used in this study. The infiltration rate was predicted using these inputs. Model performance was evaluated using the following indices:

$$\text{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\text{IN}_{ob} - \text{IN}_{es}\right| \quad (15.8)$$

$$\text{PBIAS} = \frac{\sum_{i=1}^{N}\left(\text{IN}_{ob} - \text{IN}_{es}\right)}{\sum_{i=1}^{N}\text{IN}_{ob}} \quad (15.9)$$

$$\text{NSE} = 1 - \frac{\sum_{i=1}^{n}\left(\text{IN}_{ob} - \text{IN}_{es}\right)^{2}}{\sum_{i=1}^{n}\left(\text{IN}_{ob} - \overline{\text{IN}}_{ob}\right)^{2}} \quad (15.10)$$

$$\text{CRMSE} = \sqrt{\frac{\sum_{i=1}^{n}\left[\left(\text{IN}_{ob} - \overline{\text{IN}}_{ob}\right) - \left(\text{IN}_{es} - \overline{\text{IN}}_{es}\right)\right]^{2}}{n}} \quad (15.11)$$


where MAE: mean absolute error, PBIAS: percentage of bias, NSE: Nash–Sutcliffe efficiency, CRMSE: centered root mean square difference error, IN_ob: observed infiltration, $\overline{\text{IN}}_{ob}$: average observed infiltration, IN_es: estimated value. Also, this study uses two indices for quantifying uncertainty:

1. The containing ratio:

$$\text{P-factor} = \frac{N_{u}}{N} \times 100 \quad (15.12)$$

2. Width of bound:

$$\text{R-factor} = \frac{\sum_{i=1}^{N}\left(\text{IN}_{up} - \text{IN}_{low}\right)}{N\sigma_{o}} \times 100 \quad (15.13)$$

where N_u: number of observations falling within the prediction bounds, N: total number of observations, IN_up: upper bound of the predicted variable, IN_low: lower bound of the predicted variable, and σ_o: standard deviation of the observed data.
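Eqs. (15.12) and (15.13) can be computed directly from a prediction band. The sketch below reports the R-factor without the ×100 scaling (an assumption consistent with the magnitudes quoted in Sect. 15.6.4), and the band values are hypothetical.

```python
def p_factor(obs, lower, upper):
    """Containing ratio of Eq. (15.12): % of observations inside the band."""
    inside = sum(1 for o, lo, up in zip(obs, lower, upper) if lo <= o <= up)
    return 100.0 * inside / len(obs)

def r_factor(obs, lower, upper):
    """Eq. (15.13): mean band width divided by the standard deviation of
    the observations (the x100 scaling of the printed formula is omitted)."""
    n = len(obs)
    mean_o = sum(obs) / n
    sigma = (sum((o - mean_o) ** 2 for o in obs) / n) ** 0.5
    return sum(up - lo for up, lo in zip(upper, lower)) / (n * sigma)

# Hypothetical observations and a prediction band that misses the last point
band_obs = [1.0, 2.0, 3.0, 4.0]
band_lo = [0.5, 1.5, 2.5, 5.0]
band_up = [1.5, 2.5, 3.5, 6.0]
```

A higher P-factor with a lower R-factor indicates a band that covers the observations without being excessively wide, which is the criterion used when comparing input and parameter uncertainty below.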

15.6 Results and Discussion

15.6.1 Selection of Size of Data

The appropriate data size should be chosen for the training and testing levels. Figure 15.2 shows the objective function value versus different data sizes. The objective function of 50%, 60%, 70%, 80%, and 90% of the data at the training level was 0.12, 0.10, 0.14, 0.15, and 0.18, respectively, using the KELM-RSOA; 0.14, 0.12, 0.15, 0.19, and 0.21, respectively, using the KELM-SO; 0.19, 0.18, 0.23, 0.24, and 0.25, respectively, using the KELM-DRA; and 0.22, 0.20, 0.25, 0.26, and 0.27, respectively, using the KELM.

15.6.2 Choice of Random Parameters of Optimization Algorithms

Modeling relies on random parameters, such as the population size (POPSI) and the maximum number of iterations (MNOI). Therefore, accurate calculation of their values is essential. In this study, the algorithms' parameters are adjusted using sensitivity analysis. The minimum error function (RMSE) is sought by changing the values of the random parameters. As a result, the best values of the parameters have the


Fig. 15.2 Choice of the best size for training and testing levels (panels: KELM-RSOA and KELM-SO)

lowest objective function values. Figure 15.3 shows the heat maps for the sensitivity analysis. For RSOA, the objective function of POPSI = 75, POPSI = 150, POPSI = 225, POPSI = 300, and POPSI = 350 was 0.19, 0.10, 0.14, 0.16, and 0.21, respectively. For SO, the objective function of POPSI = 75, POPSI = 150, POPSI = 225, POPSI = 300, and POPSI = 350 was 0.18, 0.15, 0.12, 0.14, and 0.16, respectively.


Fig. 15.2 (continued) (panels: KELM-DRA and KELM)


For DRA, the objective function of POPSI = 75, POPSI = 150, POPSI = 225, POPSI = 300, and POPSI = 350 was 0.25, 0.24, 0.21, 0.20, and 0.27, respectively. For RSOA, the objective function of MNOI = 90, MNOI = 180, MNOI = 270, MNOI = 360, and MNOI = 450 was 0.18, 0.11, 0.17, 0.18, and 0.23, respectively. For SO, the objective function of MNOI = 90, MNOI = 180, MNOI = 270, MNOI = 360, and MNOI = 450 was 0.19, 0.17, 0.14, 0.15, and 0.18, respectively. For DRA, the objective function of MNOI = 90, MNOI = 180, MNOI = 270, MNOI = 360, and MNOI = 450 was 0.27, 0.23, 0.22, 0.20, and 0.25, respectively.

Fig. 15.3 Sensitivity analysis for determination of random parameters


15.6.3 Evaluation of the Accuracy of Models

In this section, the performance of the models is evaluated. Figure 15.4 shows the accuracy of the models using different indices. The MAE of the KELM-RSOA, KELM-SO, KELM-DRA, and KELM models was 0.02, 0.05, 0.07, and 0.10 at the training level, and 0.04, 0.08, 0.10, and 0.12 at the testing level. The NSE of the KELM-RSOA, KELM-SO, KELM-DRA, and KELM models was 0.98, 0.92, 0.90, and 0.85 at the training level, and 0.94, 0.92, 0.88, and 0.82 at the testing level. The PBIAS of the KELM-RSOA, KELM-SO, KELM-DRA, and KELM models was 4, 7, 11, and 12 at the training level, and 5, 9, 12, and 14 at the testing level. Figure 15.5 assesses the accuracy of the models using the Taylor diagram, which uses the CRMSE, standard deviation, and correlation coefficient to evaluate the performance of models. The CRMSE of the KELM-RSOA, KELM-SO, KELM-DRA, and KELM was 0.007, 0.013, 0.015, and 0.019, respectively. The correlation coefficient of the KELM-RSOA, KELM-SO, KELM-DRA, and KELM models was 0.99, 0.97, 0.96, and 0.95, respectively. Figure 15.6 shows the boxplots of the models. The median of the observed data, KELM-RSOA, KELM-SO, KELM-DRA, and KELM models was 0.15, 0.15, 0.17, 0.18, and 0.19, respectively. The results revealed that the KELM-RSOA outputs had the closest match with the observed data.

15.6.4 Discussion

In this section, the significance of the parameters is investigated. Figure 15.7 shows the relative importance of the input parameters. The relative importance of furrow length was 24%, 23%, 23%, and 23% for the KELM-RSOA, KELM-SO, KELM-DRA, and KELM. The results indicated that the furrow length and the cross-sectional area of the inflow had the highest and lowest importance in the modeling process. Figure 15.8a shows the P values of the models based on parameter and input uncertainties. The P values of the KELM-RSOA based on input and parameter uncertainties were 0.97 and 0.96; thus, the model parameters provided higher uncertainty. The P values of the KELM based on input and parameter uncertainties were 0.90 and 0.88. Figure 15.8b shows the R values of the models based on parameter and input uncertainties. The R values of the KELM-RSOA based on input and parameter uncertainties were 0.08 and 0.12, and those of the KELM were 0.18 and 0.20. Thus, the model parameters provided higher uncertainty. The KELM model used in this study can be integrated with various optimization algorithms. The results of this study indicated that the optimized KELM models outperform the standalone one. However, it should be noted that these model parameters have


Fig. 15.4 Radar plots for evaluation of the accuracy of models


Fig. 15.5 Taylor diagram for the evaluation of the accuracy of models

some uncertainties. Thus, future research can estimate the output uncertainties. Additionally, the KELM model is an individual model; ensemble models, which can make use of different KELM models, can be applied for estimation purposes in future research. Various methods such as the Gamma test can also be applied to choose the best input data. Researchers can take advantage of deep learning models in future research, as these models may give rise to more accurate estimates of soil infiltration. Moreover, another characteristic of the artificial intelligence models used here is their applicability to different types of soils. Owing to the different advantages of the optimization algorithms, the hybrid models had different accuracies; as a result, it is important to select a suitable algorithm to improve the accuracy of the KELM models. The KELM models introduced in the present study can be applied to estimate other variables, such as precipitation, sedimentation, and temperature. The present study indicated that the standalone KELM model has weaker results than the other models because it does not make use of the optimization algorithms; it uses classic training algorithms to adjust its parameters. Therefore, the optimization algorithms outperform the classic training algorithms.

15.7 Conclusion

It is important to manage irrigation and agriculture. Due to the lack of water resources, it is necessary to manage irrigation carefully. There are different parameters which have an effect


Fig. 15.6 Boxplots of the models

on irrigation management and water resource planning. Nowadays, it is also important to manage irrigation and agriculture to ensure food security. Predicting variables related to agriculture, irrigation, and soils can support better planning for the management of irrigation and agriculture. Infiltration is one of the notable parameters in this regard: estimating infiltration is very important for calculating the required water and the efficiency of irrigation systems. However, the infiltration phenomenon is affected by various parameters, and modeling it is a nonlinear and complex procedure. Therefore, modelers need reliable and robust models to estimate it. Experimental and numerical models may be used for this purpose; however, the numerical models involve complex equations, and the experimental models are not very accurate for estimation purposes. Today, computer systems can be used for the simulation and estimation of different agricultural parameters. Soft computing models consist of computer codes used for different purposes; they predict output or objective variables based on inputs. They are not restricted to a single scientific purpose. In fact, soft computing models are widely used for various purposes, including


Fig. 15.7 Relative importance of the input parameters

hydrology, water resources, medicine, and industry. These models have several advantages, namely fast calculation, reliability, and high accuracy. However, it should be noted that the input data and the model parameters may have some uncertainties. As a reliable approach, optimization algorithms can produce more accurate soft computing models by determining precise values of the unknown model parameters; such models are flexible enough to be linked with optimization algorithms. The KELM model used in this study can be integrated with various optimization algorithms. The infiltration rate is one of the most important parameters in furrow irrigation design. This study developed KELM models using optimization algorithms, and the KELM-RSOA had the best performance among the models. The optimization algorithms improved the efficiency of the KELM models, although the model parameters provided high uncertainty. Future work can utilize preprocessing methods for choosing the best inputs; methods such as feature selection are robust tools for this purpose.


Fig. 15.8 Uncertainty analysis of models


References

Abedini, M., Ziai, A. N., Shafiei, M., Ghahraman, B., Ansari, H., & Meshkini, J. (2017). Uncertainty assessment of groundwater flow modeling by using generalized likelihood uncertainty estimation method (case study: Bojnourd Plain). Iranian Journal of Irrigation & Drainage, 10(6), 755–769.

Angelaki, A., Singh Nain, S., Singh, V., & Sihag, P. (2021). Estimation of models for cumulative infiltration of soil using machine learning methods. ISH Journal of Hydraulic Engineering. https://doi.org/10.1080/09715010.2018.1531274

Ehteram, M., Graf, R., Ahmed, A. N., & El-Shafie, A. (2022a). Improved prediction of daily pan evaporation using Bayesian Model averaging and optimized kernel extreme machine models in different climates. Stochastic Environmental Research and Risk Assessment, 1–36.

Ehteram, M., Panahi, F., Ahmed, A. N., Huang, Y. F., Kumar, P., & Elshafie, A. (2022b). Predicting evaporation with optimized artificial neural network using multi-objective salp swarm algorithm. Environmental Science and Pollution Research. https://doi.org/10.1007/s11356-021-16301-3

Ehteram, M., Yenn Teo, F., Najah Ahmed, A., Dashti Latif, S., Feng Huang, Y., Abozweita, O., Al-Ansari, N., & El-Shafie, A. (2021). Performance improvement for infiltration rate prediction using hybridized adaptive neuro-fuzzy inferences system (ANFIS) with optimization algorithms. Ain Shams Engineering Journal. https://doi.org/10.1016/j.asej.2020.08.019

Holzapfel, E. A., Jara, J., Zuñiga, C., Mariño, M. A., Paredes, J., & Billib, M. (2004). Infiltration parameters for furrow irrigation. Agricultural Water Management. https://doi.org/10.1016/j.agwat.2004.03.002

Mateos, L., & Oyonarte, N. A. (2005). A spreadsheet model to evaluate sloping furrow irrigation accounting for infiltration variability. Agricultural Water Management. https://doi.org/10.1016/j.agwat.2005.01.013

Mattar, M. A., Alazba, A. A., & Zin El-Abedin, T. K. (2015). Forecasting furrow irrigation infiltration using artificial neural networks. Agricultural Water Management. https://doi.org/10.1016/j.agwat.2014.09.015

Mirzaei, M., Huang, Y. F., El-Shafie, A., & Shatirah, A. (2015). Application of the generalized likelihood uncertainty estimation (GLUE) approach for assessing uncertainty in hydrological models: A review. Stochastic Environmental Research and Risk Assessment. https://doi.org/10.1007/s00477-014-1000-6

Panahi, F., Ehteram, M., & Emami, M. (2021). Suspended sediment load prediction based on soft computing models and black widow optimization algorithm using an enhanced gamma test. Environmental Science and Pollution Research. https://doi.org/10.1007/s11356-021-14065-4

Playán, E., Rodríguez, J. A., & García-Navarro, P. (2004). Simulation model for level furrows. I: Analysis of field experiments. Journal of Irrigation and Drainage Engineering. https://doi.org/10.1061/(asce)0733-9437(2004)130:2(106)

Rodríguez Alvarez, J. A. (2003). Estimation of advance and infiltration equations in furrow irrigation for untested discharges. Agricultural Water Management. https://doi.org/10.1016/S0378-3774(02)00163-4

Sales, A. K., Gul, E., Safari, M. J. S., Ghodrat Gharehbagh, H., & Vaheddoost, B. (2021). Urmia lake water depth modeling using extreme learning machine-improved grey wolf optimizer hybrid algorithm. Theoretical and Applied Climatology. https://doi.org/10.1007/s00704-021-03771-1

Sayari, S., Mahdavi-Meymand, A., & Zounemat-Kermani, M. (2021). Irrigation water infiltration modeling using machine learning. Computers and Electronics in Agriculture. https://doi.org/10.1016/j.compag.2020.105921

Sebbar, A., Heddam, S., & Djemili, L. (2021). Kernel extreme learning machines (KELM): A new approach for modeling monthly evaporation (EP) from dams reservoirs. Physical Geography. https://doi.org/10.1080/02723646.2020.1776087

Seifi, A., Ehteram, M., Nayebloei, F., Soroush, F., Gharabaghi, B., & Torabi Haghighi, A. (2021). GLUE uncertainty analysis of hybrid models for predicting hourly soil temperature and application wavelet coherence analysis for correlation with meteorological variables. Soft Computing. https://doi.org/10.1007/s00500-021-06009-4


Sepaskhah, A. R., & Shaabani, M. K. (2007). Infiltration and hydraulic behaviour of an anguiform furrow in heavy texture soils of Iran. Biosystems Engineering. https://doi.org/10.1016/j.biosystemseng.2007.03.024

Sihag, P., Singh, B., Sepah Vand, A., & Mehdipour, V. (2020). Modeling the infiltration process with soft computing techniques. ISH Journal of Hydraulic Engineering. https://doi.org/10.1080/09715010.2018.1464408

Singh, B., Sihag, P., Parsaie, A., & Angelaki, A. (2021). Comparative analysis of artificial intelligence techniques for the prediction of infiltration process. Geology, Ecology, and Landscapes. https://doi.org/10.1080/24749508.2020.1833641

Thi, P. C., Ball, J. E., & Dao, N. H. (2018). Uncertainty estimation using the GLUE and Bayesian approaches in flood estimation: A case study: Ba River, Vietnam. Water (Switzerland). https://doi.org/10.3390/w10111641

Valiantzas, J. D., Aggelides, S., & Sassalou, A. (2001). Furrow infiltration estimation from time to a single advance point. Agricultural Water Management. https://doi.org/10.1016/S0378-3774(01)00128-7

Vand, A. S., Sihag, P., Singh, B., & Zand, M. (2018). Comparative evaluation of infiltration models. KSCE Journal of Civil Engineering. https://doi.org/10.1007/s12205-018-1347-1

Chapter 16

Predicting Solar Radiation Using Optimized Generalized Regression Neural Network

Abstract One of the most important components of the hydrological cycle is solar radiation. Three stations in Iran were used to predict monthly solar radiation (SOR) using the optimized generalized regression neural network (GRNN). The Henry gas solubility optimization (HGSO), antlion optimization (ANO), and salp swarm algorithm (SSA) were used to adjust the parameters of the GRNN. Sunny hours had the highest correlation with SOR at all stations. Furthermore, the GRNN-HGSO model outperformed the other methods. At Mazandaran station, the median of the observed data, GRNN-HGSO, GRNN-ANO, GRNN-SSA, and GRNN models was 19, 19, 19, 21, and 24 MJ m−2, respectively. In this study, soft computing models had a high ability to predict SOR in different climates. Using the models of the current study, decision-makers can identify the regions with the highest SOR; these regions are suitable for the construction of power plants.

Keywords GRNN · Optimization algorithms · Solar radiation · Meteorological data

16.1 Introduction

Solar energy prediction is essential for energy planning and management. Solar radiation prediction is a complex and nonlinear problem; therefore, robust models are needed. Recently, researchers have widely used soft computing models for solar radiation prediction, including artificial neural networks (Ehteram et al., 2021; Huang et al., 2022; Malik et al., 2022; Seifi et al., 2021; Shah et al., 2021; Shboul et al., 2021), support vector machines (Álvarez-Alvarado et al., 2021; Bendiek et al., 2022; Fan et al., 2018), and the adaptive neuro-fuzzy inference system (Fraihat et al., 2022; Salisu et al., 2019; Tao et al., 2021). This study optimized the generalized regression neural network (GRNN) using optimization algorithms and used the optimized GRNN to predict solar radiation. Henry gas solubility optimization (HGSO), antlion optimization (ANO), and the salp swarm algorithm (SSA) were used to adjust the GRNN parameters. The structures of these algorithms were explained in Chaps. 5, 9, and 10, respectively.

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. Ehteram et al., Application of Machine Learning Models in Agricultural and Meteorological Sciences, https://doi.org/10.1007/978-981-19-9733-4_16


16 Predicting Solar Radiation Using Optimized Generalized Regression …

16.2 Structure of Generalized Regression Neural Network (GRNN)

GRNNs are neural network-based algorithms that approximate or estimate functions based on inputs. A significant advantage of the model is its rapid learning and convergence toward the optimal regression (Salehi et al., 2021). The GRNN includes four layers: the input, pattern, summation, and output layers. The data are inserted into the input layer, and the pattern layer processes the data using transfer functions (Salehi et al., 2021):

p_i = exp( −(x − x_i)^T (x − x_i) / (2σ^2) )   (16.1)

where x: input, p_i: Gaussian pattern function, and σ: smoothing parameter. The summation layer computes the arithmetic sum and the weighted sum of the pattern-layer outputs:

SA = Σ_{i=1}^{n} p_i   (16.2)

SN_j = Σ_{i=1}^{n} p_i w_{ij}   (16.3)

where SA: sum of the outputs, w_{ij}: the weight of pattern i, and SN_j: weighted sum of the outputs. The final output is computed as follows:

y_j = SN_j / SA   (16.4)

The smoothing parameter is a key parameter in the modeling process: it controls the generalization capability of the GRNN. In this research, optimization algorithms are used to adjust σ.
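Equations (16.1)-(16.4) translate into a short forward pass. One assumption is made here: the pattern weights w_ij are taken to be the training targets, which is the common GRNN choice; the three training points are toy values for illustration only:

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma):
    """GRNN forward pass following Eqs. (16.1)-(16.4)."""
    preds = []
    for x in x_query:
        d = x_train - x                                          # x - x_i
        p = np.exp(-np.sum(d * d, axis=1) / (2.0 * sigma ** 2))  # Eq. (16.1)
        sa = np.sum(p)                                           # Eq. (16.2)
        sn = np.sum(p * y_train)                                 # Eq. (16.3), w_ij = targets
        preds.append(sn / sa)                                    # Eq. (16.4)
    return np.array(preds)

# Toy data: three training points on the line y = x.
x_tr = np.array([[0.0], [1.0], [2.0]])
y_tr = np.array([0.0, 1.0, 2.0])
est = grnn_predict(x_tr, y_tr, np.array([[1.0], [0.0]]), sigma=0.1)
```

With a small σ each query is dominated by its nearest training pattern; a large σ smooths the prediction toward the mean of the targets, which is why σ governs generalization.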

16.3 Structure of Hybrid GRNN

The smoothing parameter is the decision variable. First, the value of the GRNN parameter is defined as the initial population of the algorithm. The GRNN model runs at the training level and an objective function is calculated; the root mean square error (RMSE) is considered the objective function. A solution's quality is determined by its objective function. The algorithm operators are applied to update the values of the solutions, and the GRNN parameters are updated when the solutions are updated. The optimization process continues until the convergence criterion is met.
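A minimal sketch of this hybrid idea on a one-dimensional toy problem: a plain grid search stands in for the HGSO/ANO/SSA metaheuristics, but it minimizes the same RMSE objective over candidate smoothing parameters:

```python
import numpy as np

def kernel_predict(x_tr, y_tr, x_q, sigma):
    # GRNN-style weighted average of the training targets (1-D inputs).
    w = np.exp(-((x_q[:, None] - x_tr[None, :]) ** 2) / (2.0 * sigma ** 2))
    return (w @ y_tr) / w.sum(axis=1)

def tune_sigma(x_tr, y_tr, x_val, y_val, candidates):
    """Pick the sigma with the lowest validation RMSE (grid-search stand-in)."""
    rmses = {s: float(np.sqrt(np.mean((y_val - kernel_predict(x_tr, y_tr, x_val, s)) ** 2)))
             for s in candidates}
    best = min(rmses, key=rmses.get)
    return best, rmses[best]

# Toy problem: recover sin(x) from noiseless samples.
x_tr = np.linspace(0.0, 3.0, 40)
y_tr = np.sin(x_tr)
x_val = np.linspace(0.05, 2.95, 30)
y_val = np.sin(x_val)
best_sigma, best_rmse = tune_sigma(x_tr, y_tr, x_val, y_val, [0.01, 0.1, 0.5, 1.0, 2.0])
```

A metaheuristic would explore the same objective with a population of candidate σ values instead of a fixed grid, which is what distinguishes the GRNN-HGSO, GRNN-ANO, and GRNN-SSA variants.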


16.4 Case Study

This study used the optimized GRNN models to estimate monthly solar radiation. Three provinces of Iran, namely Isfahan, Semnan, and Mazandaran, were chosen. The province of Isfahan is located in the center of Iran. In Isfahan province, January is the month with the least sunshine; the hottest and coldest months of the year in Isfahan are July and January. With dry and hot summers, Isfahan has a steppe climate. With an average daily high temperature of 24 °C, Semnan is among Iran's warmest regions. In Semnan province, June is the month with the most sunshine. The annual rainfall is 139.5 mm, with 48.7 rainy days. The Mazandaran province experiences a moderate, subtropical climate; January and February are the coldest months there. The annual rainfall is 414 mm, and the number of rainy days is 101.5 during the year. Table 16.1 gives the details of the data, and Fig. 16.1 shows the location of the case study. The following error indices are used to evaluate the ability of the models:

MBE = (1/n) Σ_{i=1}^{n} |SOR_ob − SOR_es|   (16.5)

PBIAS = Σ_{i=1}^{N} (SOR_ob − SOR_es) / Σ_{i=1}^{N} SOR_ob   (16.6)

NSE = 1 − Σ_{i=1}^{n} (SOR_ob − SOR_es)^2 / Σ_{i=1}^{n} (SOR_ob − \overline{SOR}_ob)^2   (16.7)

Table 16.1 Details of input data

Province     Statistic   N (h)   Relative humidity (REH) (%)   Average temperature   Wind speed (m/s)   Solar radiation (SR)
Semnan       Maximum     12.23   70.25                         26.28                  9.12              27.20
Semnan       Minimum      6.29   29.23                          7.12                  5.45               9.12
Semnan       Average      7.43   45.23                         18.22                  8.12              19.98
Mazandaran   Maximum     10.12   95.12                         25.12                 10.23              24.20
Mazandaran   Minimum      6.12   67.23                          6.24                  6.12               7.12
Mazandaran   Average      6.95   78.12                         17.23                  8.45              18.23
Isfahan      Maximum     11.67   84.23                         25.28                  9.23              25.20
Isfahan      Minimum      7.89   32.45                          7.12                  6.78               9.10
Isfahan      Average      8.11   55.56                         16.21                  8.19              19.01


Fig. 16.1 Location of case study

CRMSE = sqrt( (1/n) Σ_{i=1}^{n} [ (SOR_ob − \overline{SOR}_ob) − (SOR_es − \overline{SOR}_es) ]^2 )   (16.8)

where MBE: mean bias error, PBIAS: percentage of bias, NSE: Nash–Sutcliffe efficiency, CRMSE: centered root mean square difference error, SOR_ob: observed solar radiation, \overline{SOR}_ob: average observed solar radiation, and SOR_es: estimated value.
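The four indices can be computed directly from the paired series. The one assumption below is that PBIAS is reported as a percentage, which matches the magnitudes quoted in the chapter:

```python
import numpy as np

def error_indices(obs, est):
    """MBE, PBIAS, NSE and CRMSE following Eqs. (16.5)-(16.8).
    PBIAS is expressed as a percentage here (an assumption)."""
    obs = np.asarray(obs, dtype=float)
    est = np.asarray(est, dtype=float)
    mbe = float(np.mean(np.abs(obs - est)))                          # Eq. (16.5)
    pbias = float(100.0 * np.sum(obs - est) / np.sum(obs))           # Eq. (16.6)
    nse = float(1.0 - np.sum((obs - est) ** 2)
                / np.sum((obs - obs.mean()) ** 2))                   # Eq. (16.7)
    crmse = float(np.sqrt(np.mean(((obs - obs.mean())
                                   - (est - est.mean())) ** 2)))     # Eq. (16.8)
    return mbe, pbias, nse, crmse
```

Note that CRMSE removes the means of both series first, so a model with a constant offset still scores a CRMSE of zero; the offset shows up in MBE and PBIAS instead.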

16.5 Results and Discussions

16.5.1 Selection of Random Parameters

Random parameters play a key role in optimization algorithms, and algorithm accuracy is affected by these parameters. A sensitivity analysis is appropriate for determining them. By changing the parameter values, the least error function values


are achieved for the objective function. The heat map for the sensitivity analysis is shown in Fig. 16.2; the results are reported for the Isfahan station. For HGSO, the objective function for a maximum number of iterations (MAXI) of 55, 110, 165, 220, and 275 was 1.26, 1.14, 1.79, 1.97, and 2.15, respectively. For ANO, the objective function for MAXI = 55, 110, 165, 220, and 275 was 2.67, 2.15, 1.97, 2.06, and 2.28, respectively. For HGSO, the objective function for a population size (POPSI) of 65, 130, 195, 260, and 325 was 1.23, 1.12, 1.78, 1.95, and 2.12, respectively. For ANO, the objective function for POPSI = 65, 130, 195, 260, and 325 was 2.65, 2.12, 1.98, 2.02, and 2.23, respectively.

16.5.2 Investigation of the Accuracy of Models

Figure 16.3 shows the accuracy of the models at the testing level.

• Isfahan: The MBE of the GRNN-HGSO was 36%, 40%, and 42% lower than that of the GRNN-ANO, GRNN-SSA, and GRNN, respectively. The NSE of the GRNN-HGSO, GRNN-ANO, GRNN-SSA, and GRNN models was 0.94, 0.92, 0.90, and 0.89, respectively. The PBIAS of the GRNN-HGSO, GRNN-ANO, GRNN-SSA, and GRNN models was 5, 7, 8, and 12, respectively.

• Semnan: The MBE of the GRNN-HGSO, GRNN-ANO, GRNN-SSA, and GRNN models was 0.187, 0.198, 0.211, and 0.234, respectively. The NSE of the GRNN-HGSO, GRNN-ANO, GRNN-SSA, and GRNN models was 0.95, 0.93, 0.89, and 0.87, respectively. The PBIAS of the GRNN-HGSO, GRNN-ANO, GRNN-SSA, and GRNN models was 6, 8, 9, and 12, respectively.

• Mazandaran: The MBE of the GRNN-HGSO, GRNN-ANO, GRNN-SSA, and GRNN models was 0.190, 0.212, 0.234, and 0.245, respectively. The NSE of the GRNN-HGSO, GRNN-ANO, GRNN-SSA, and GRNN models was 0.93, 0.91, 0.89, and 0.86, respectively. The PBIAS of the GRNN-HGSO, GRNN-ANO, GRNN-SSA, and GRNN models was 5, 7, 10, and 11, respectively.

Figure 16.4 shows a Taylor diagram for evaluating the accuracy of the models. Based on the CRMSE, standard deviation, and correlation coefficient, the Taylor diagram evaluates the suitability of the models.

• Semnan: The CRMSE of the GRNN-HGSO, GRNN-ANO, GRNN-SSA, and GRNN was 0.05, 0.11, 0.17, and 0.19, respectively. The correlation coefficient of the GRNN-HGSO, GRNN-ANO, GRNN-SSA, and GRNN was 0.98, 0.94, 0.89, and 0.85, respectively.


Fig. 16.2 Sensitivity analysis for the random parameters

• Isfahan: The CRMSE of the GRNN-HGSO, GRNN-ANO, GRNN-SSA, and GRNN was 0.02, 0.08, 0.11, and 0.14, respectively. The correlation coefficient of the GRNN-HGSO, GRNN-ANO, GRNN-SSA, and GRNN was 0.99, 0.97, 0.94, and 0.92, respectively.


Fig. 16.3 Evaluation of the accuracy of models using error indices (panels: MBE, NSE, PBIAS)


Fig. 16.4 Evaluation of the accuracy of models based on the Taylor diagram (panels: Isfahan, Mazandaran, Semnan)


• Mazandaran: The CRMSE of the GRNN-HGSO, GRNN-ANO, GRNN-SSA, and GRNN was 0.02, 0.11, 0.17, and 0.22, respectively. The correlation coefficient of the GRNN-HGSO, GRNN-ANO, GRNN-SSA, and GRNN was 0.99, 0.95, 0.89, and 0.85, respectively.

Figure 16.5 shows the boxplots of the models.

• Semnan: The median of the observed data, GRNN-HGSO, GRNN-ANO, GRNN-SSA, and GRNN models was 22, 22, 22, 23, and 24 MJ m−2, respectively. The maximum of the observed data, GRNN-HGSO, GRNN-ANO, GRNN-SSA, and GRNN models was 27, 27, 28, 29, and 29 MJ m−2, respectively.

• Mazandaran: The median of the observed data, GRNN-HGSO, GRNN-ANO, GRNN-SSA, and GRNN models was 19, 19, 19, 21, and 24 MJ m−2, respectively. The maximum of the observed data, GRNN-HGSO, GRNN-ANO, GRNN-SSA, and GRNN models was 24, 24, 25, 26, and 28 MJ m−2, respectively.

• Isfahan: The median of the observed data, GRNN-HGSO, GRNN-ANO, GRNN-SSA, and GRNN models was 22, 22, 22, 23, and 24 MJ m−2, respectively. The maximum of the observed data, GRNN-HGSO, GRNN-ANO, GRNN-SSA, and GRNN models was 25.2, 25.2, 26, 27, and 29 MJ m−2, respectively.

16.5.3 Discussion

All input data were used to predict monthly SOR in this study. However, it is essential to determine the relationship between the meteorological data and SOR. The correlations between SOR and the input parameters are given in Table 16.3. At all stations, relative humidity had a negative correlation with SOR, while the number of sunny hours had the highest correlation with SOR. The correlation values ranged from −0.60 to 0.89 at Mazandaran station, −0.55 to 0.91 at Isfahan station, and −0.52 to 0.93 at Semnan station. Different GRNN models were used in this study, and the optimized GRNN models outperformed the standalone GRNN. Using the models of the current study, decision-makers can identify the regions with the highest SOR; these regions are suitable for the construction of power plants. Researchers can use artificial intelligence models for estimating SOR because of the high accuracy of the models of the current study. Model parameters and input parameters have uncertainties, which the next study can quantify.
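The correlation screening described above reduces to the Pearson coefficient. The two candidate inputs below are synthetic, constructed only to illustrate ranking inputs by |r| against the target:

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation coefficient between two series."""
    a = np.asarray(a, dtype=float) - np.mean(a)
    b = np.asarray(b, dtype=float) - np.mean(b)
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

# Hypothetical screening: rank candidate inputs by |r| against the target.
rng = np.random.default_rng(1)
sor = rng.random(60)
inputs = {
    "sunny_hours": sor + 0.1 * rng.standard_normal(60),   # strongly, positively related
    "humidity": -sor + 0.3 * rng.standard_normal(60),     # negatively related
}
ranked = sorted(inputs, key=lambda k: abs(pearson_r(inputs[k], sor)), reverse=True)
```

A negative coefficient, as found for relative humidity, still indicates predictive value; that is why the ranking uses the absolute value of r.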


Fig. 16.5 Boxplots of models at the different stations (panels: Semnan and Mazandaran)


Fig. 16.5 (continued) (panel: Isfahan)

Table 16.3 Correlation between the meteorological data and SOR

Parameter                 Mazandaran   Isfahan   Semnan
Number of sunny hours       0.89         0.92      0.94
Average temperature         0.89         0.87      0.86
Wind speed                  0.67         0.60      0.84
Relative humidity          −0.60        −0.55     −0.52

16.6 Conclusion

Solar and wind energy should be developed due to the global energy shortage. In this study, GRNN models were developed using optimization algorithms to estimate SOR. The GRNN parameters were set using Henry gas solubility optimization (HGSO), antlion optimization (ANO), and the salp swarm algorithm (SSA). SOR was estimated using meteorological data. A negative correlation was found between relative humidity and SOR at all stations. The optimization algorithms improved the efficiency of the GRNNs, and the GRNN-HGSO had the best accuracy among the models. It is possible to prepare spatial and temporal maps of SOR based on the models of the current study; policy-makers will be able to use these maps to develop renewable energy sources.


References

Álvarez-Alvarado, J. M., Ríos-Moreno, J. G., Obregón-Biosca, S. A., Ronquillo-Lomelí, G., Ventura-Ramos, E., & Trejo-Perea, M. (2021). Hybrid techniques to predict solar radiation using support vector machine and search optimization algorithms: A review. Applied Sciences (Switzerland). https://doi.org/10.3390/app11031044

Bendiek, P., Taha, A., Abbasi, Q. H., & Barakat, B. (2022). Solar irradiance forecasting using a data-driven algorithm and contextual optimisation. Applied Sciences (Switzerland). https://doi.org/10.3390/app12010134

Ehteram, M., Ahmed, A. N., Kumar, P., Sherif, M., & El-Shafie, A. (2021). Predicting freshwater production and energy consumption in a seawater greenhouse based on ensemble frameworks using optimized multi-layer perceptron. Energy Reports. https://doi.org/10.1016/j.egyr.2021.09.079

Fan, J., Wang, X., Wu, L., Zhou, H., Zhang, F., Yu, X., ... & Xiang, Y. (2018). Comparison of Support Vector Machine and Extreme Gradient Boosting for predicting daily global solar radiation using temperature and precipitation in humid subtropical climates: A case study in China. Energy Conversion and Management, 164, 102–111.

Fraihat, H., Almbaideen, A. A., Al-Odienat, A., Al-Naami, B., De Fazio, R., & Visconti, P. (2022). Solar radiation forecasting by Pearson correlation using LSTM neural network and ANFIS method: Application in the West-Central Jordan. Future Internet. https://doi.org/10.3390/fi14030079

Huang, H., Band, S. S., Karami, H., Ehteram, M., Chau, K. W., & Zhang, Q. (2022). Solar radiation prediction using improved soft computing models for semi-arid, slightly-arid and humid climates. Alexandria Engineering Journal, 61(12), 10631–10657.

Malik, P., Gehlot, A., Singh, R., Gupta, L. R., & Thakur, A. K. (2022). A review on ANN based model for solar radiation and wind speed prediction with real-time data. Archives of Computational Methods in Engineering. https://doi.org/10.1007/s11831-021-09687-3

Salehi, M., Farhadi, S., Moieni, A., Safaie, N., & Hesami, M. (2021). A hybrid model based on general regression neural network and fruit fly optimization algorithm for forecasting and optimizing paclitaxel biosynthesis in Corylus avellana cell culture. Plant Methods, 17(1), 1–13.

Salisu, S., Mustafa, M. W., Mustapha, M., & Mohammed, O. O. (2019). Solar radiation forecasting in Nigeria based on hybrid PSO-ANFIS and WT-ANFIS approach. International Journal of Electrical and Computer Engineering. https://doi.org/10.11591/ijece.v9i5.pp3916-3926

Seifi, A., Ehteram, M., & Dehghani, M. (2021). A robust integrated Bayesian multi-model uncertainty estimation framework (IBMUEF) for quantifying the uncertainty of hybrid meta-heuristic in global horizontal irradiation predictions. Energy Conversion and Management. https://doi.org/10.1016/j.enconman.2021.114292

Shah, D., Patel, K., & Shah, M. (2021). Prediction and estimation of solar radiation using artificial neural network (ANN) and fuzzy system: A comprehensive review. International Journal of Energy and Water Resources. https://doi.org/10.1007/s42108-021-00113-9

Shboul, B., AL-Arfi, I., Michailos, S., Ingham, D., Ma, L., Hughes, K. J., & Pourkashanian, M. (2021). A new ANN model for hourly solar radiation and wind speed prediction: A case study over the north & south of the Arabian Peninsula. Sustainable Energy Technologies and Assessments. https://doi.org/10.1016/j.seta.2021.101248

Tao, H., Ewees, A. A., Al-Sulttani, A. O., Beyaztas, U., Hameed, M. M., Salih, S. Q., Armanuos, A. M., Al-Ansari, N., Voyant, C., Shahid, S., & Yaseen, Z. M. (2021). Global solar radiation prediction over North Dakota using air temperature: Development of novel hybrid intelligence model. Energy Reports, 7, 136–157.

Chapter 17

Predicting Wind Speed Using Optimized Long Short-Term Memory Neural Network

Abstract Predicting wind speed is an important aspect of energy management. We used an optimized long short-term memory (LSTM) network to predict wind speed at different stations. The LSTM parameters were adjusted using sunflower optimization (SUNO), the crow optimization algorithm (COA), and particle swarm optimization (PSO). We used lagged wind speed values as inputs to the models, and the best input combination was determined using the Pearson correlation method. Based on the performance of the models, the optimized LSTM models outperformed the standalone models. This study can be useful when modelers cannot access all input data. The results also indicated that each optimization algorithm provided a different accuracy depending on its advanced operators.

Keywords Wind speed · Energy management · LSTM model · Optimization algorithms

17.1 Introduction

Energy management is an important topic for policy-makers and decision-makers. As different countries face energy shortages, developing renewable energy is an important issue for them. Wind energy is one of the most important renewable energies, and predicting wind speed is necessary for planning and managing it. Since wind speed depends on different parameters, wind speed modeling is complex. Recently, soft computing models have been widely used for predicting energy variables such as wind and solar energy, including artificial neural network models (ANNs) (Jamil & Zeeshan, 2019; Malik et al., 2022; Ramasamy et al., 2015; Zhang et al., 2020), support vector machine models (SVMs) (Sarp et al., 2022; Tian & Chen, 2021; Wang et al., 2021; Yu et al., 2021), and the adaptive neuro-fuzzy inference system (ANFIS) (Al-qaness et al., 2022; Chen et al., 2018; Xing et al., 2022). This study uses long short-term memory (LSTM) and optimization algorithms to predict monthly wind speed. In the first step, the LSTM is introduced; afterward, the structure of the hybrid LSTM and the case study are explained, followed by the results, discussion, and conclusion. LSTM parameters

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 M. Ehteram et al., Application of Machine Learning Models in Agricultural and Meteorological Sciences, https://doi.org/10.1007/978-981-19-9733-4_17

175

176

17 Predicting Wind Speed Using Optimized Long Short-Term Memory …

were adjusted using sunflower optimization (SUNO), crow optimization algorithm (COA), and particle swarm optimization (PSO).

17.2 Structure of Long Short-Term Memory (LSTM)

LSTM is a recurrent neural network (RNN) used in deep learning. Through feedback connections, LSTMs are capable of remembering patterns over time, and their gating mechanism mitigates the vanishing gradient problem that plagues other RNNs. LSTMs manage input and output flow using multiple memory cells. Based on the input element and the hidden state from the previous time interval, the LSTM creates a current hidden state at time t; the network is thereby capable of mapping inputs to outputs. The memory cell state, forget gate, input gate, and output gate are the main components of an LSTM block. The forget gate determines what information should be discarded from the previous cell state:

f_t = σ(ω_f x_t + U_f h_(t−1) + b_f)    (17.1)

where f_t: forget gate output, ω_f: weight matrix of the forget gate, b_f: bias of the forget gate, U_f: recurrent weight of the forget gate, and h_(t−1): previous hidden state. Next, a sigmoid layer determines what information is saved in each cell:

i_t = σ(ω_i x_t + U_i h_(t−1) + b_i)    (17.2)

C̃_t = tanh(ω_c x_t + U_c h_(t−1) + b_c)    (17.3)

where C̃_t: candidate cell state. In the next stage, the previous cell state is updated based on the following equation:

C_t = f_t ∗ C_(t−1) + i_t ∗ C̃_t    (17.4)

Next, the output gate determines what information will be output:

o_t = σ(ω_o x_t + U_o h_(t−1) + b_o)    (17.5)

h_t = o_t ∗ tanh(C_t)    (17.6)

where σ: logistic sigmoid function. Figure 17.1 shows the structure of LSTM.
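As a minimal illustration of Eqs. (17.1)-(17.6), the following NumPy sketch performs one LSTM time step. This is not the book's implementation; the weight names (W_*, U_*, b_*) stand for ω, U, and b in the equations, and the tiny dimensions are arbitrary.

```python
import numpy as np

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM time step following Eqs. (17.1)-(17.6).

    p holds input weights W_*, recurrent weights U_*, and biases b_*
    for the forget (f), input (i), cell (c), and output (o) gates.
    """
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    f_t = sigmoid(p["W_f"] @ x_t + p["U_f"] @ h_prev + p["b_f"])      # Eq. (17.1)
    i_t = sigmoid(p["W_i"] @ x_t + p["U_i"] @ h_prev + p["b_i"])      # Eq. (17.2)
    c_tilde = np.tanh(p["W_c"] @ x_t + p["U_c"] @ h_prev + p["b_c"])  # Eq. (17.3)
    c_t = f_t * c_prev + i_t * c_tilde                                # Eq. (17.4)
    o_t = sigmoid(p["W_o"] @ x_t + p["U_o"] @ h_prev + p["b_o"])      # Eq. (17.5)
    h_t = o_t * np.tanh(c_t)                                          # Eq. (17.6)
    return h_t, c_t

# Tiny demo: 3 inputs (e.g., lagged wind speeds) and 2 hidden units.
rng = np.random.default_rng(0)
params = {f"{k}_{g}": rng.standard_normal((2, 3) if k == "W" else (2, 2)) * 0.1
          for k in ("W", "U") for g in "fico"}
params.update({f"b_{g}": np.zeros(2) for g in "fico"})
h, c = lstm_step(rng.standard_normal(3), np.zeros(2), np.zeros(2), params)
```

Because the hidden state is the product of a sigmoid and a tanh, each component of h stays strictly inside (−1, 1).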

Fig. 17.1 Structure of LSTM models

17.3 Hybrid Structure of LSTM Models

The most important LSTM parameters are the number of layers, hidden neurons, weights, and biases. These parameters are adjusted using optimization algorithms. As a first step, each agent's location encodes candidate values of the LSTM parameters, so the algorithm population contains the LSTM parameters. The LSTM provides training outputs based on the training data, and the root mean square error (RMSE) is used to evaluate the accuracy of the models. Agent locations are updated using the algorithm operators, and the LSTM parameters are updated accordingly. The optimization process continues until the stop condition is met.
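The loop described above can be sketched as follows. This is an illustrative stand-in rather than the book's code: rmse_objective below is a toy surrogate for "decode the particle position into LSTM settings, train, and return the validation RMSE", and the two position components (hidden units and learning rate) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def rmse_objective(position):
    """Stand-in for: decode position -> LSTM parameters, train, return RMSE.
    A smooth toy surface with its minimum at (hidden=64, lr=0.01)."""
    hidden, lr = position
    return (hidden - 64.0) ** 2 / 1000.0 + (lr - 0.01) ** 2 * 100.0

# Basic PSO loop: each particle's position holds candidate model settings.
n_particles, n_iter = 12, 60
lo, hi = np.array([8.0, 0.001]), np.array([128.0, 0.1])
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([rmse_objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    # Inertia plus cognitive and social pulls (standard PSO update).
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([rmse_objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()
```

The same skeleton applies to SUNO and COA; only the position-update operators differ.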

17.4 Case Study

In this study, optimized LSTM models were used to predict the long-term mean monthly wind speed at four Iranian stations: Kashan, Mashhad, Sari, and Tabriz. The study period was from 2000 to 2012. Figure 17.2 shows the location of the case study, and Table 17.1 shows the details of the input data. The lagged wind speed values (WIS(t − 1) … WIS(t − 12)) were used to predict one-month-ahead WIS. The following error indices are used to assess the potential of the models:

MBE = (1/n) Σ_(i=1)^(n) |WISob − WISes|    (17.7)

PBIAS = Σ_(i=1)^(N) (WISob − WISes) / Σ_(i=1)^(N) WISob    (17.8)

Fig. 17.2 Location of case study (Kisi and Sanikhani, 2015)

Table 17.1 Details of input data

Station    Average WIS (m/s)    Maximum WIS (m/s)    Minimum WIS (m/s)
Kashan     6.39                 8.7                  1.98
Mashhad    5.93                 7.9                  1.94
Sari       6.36                 8.8                  1.92
Tabriz     6.4                  8.2                  1.97

CRMSE = sqrt{ (1/n) Σ_(i=1)^(n) [(WISob − WISob,avg) − (WISes − WISes,avg)]² }    (17.9)

NSE = 1 − Σ_(i=1)^(n) (WISob − WISes)² / Σ_(i=1)^(n) (WISob − WISob,avg)²    (17.10)

where MBE: mean bias error, PBIAS: percentage of bias, NSE: Nash–Sutcliffe efficiency, CRMSE: centered root-mean-square difference error, WISob: observed wind speed, WISob,avg: average observed wind speed, WISes: estimated wind speed, and WISes,avg: average estimated wind speed.
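The indices above can be computed directly. The sketch below follows Eqs. (17.7)-(17.10); PBIAS is scaled by 100 to match the percentage values reported later, which is an assumption since the factor is not shown in Eq. (17.8), and the sample arrays are illustrative.

```python
import numpy as np

def mbe(obs, est):
    # Eq. (17.7): mean bias, computed here with the absolute difference
    # as written in the chapter.
    return np.mean(np.abs(obs - est))

def pbias(obs, est):
    # Eq. (17.8), scaled by 100 to express a percentage (assumed).
    return 100.0 * np.sum(obs - est) / np.sum(obs)

def nse(obs, est):
    # Eq. (17.10): Nash-Sutcliffe efficiency.
    return 1.0 - np.sum((obs - est) ** 2) / np.sum((obs - obs.mean()) ** 2)

def crmse(obs, est):
    # Eq. (17.9): centered RMSE of the anomalies about each series mean.
    return np.sqrt(np.mean(((obs - obs.mean()) - (est - est.mean())) ** 2))

# Illustrative observed and estimated wind speeds (m/s).
obs = np.array([6.1, 5.8, 7.2, 6.9, 5.5])
est = np.array([6.0, 6.0, 7.0, 6.8, 5.6])
```

A perfect model gives NSE = 1 and CRMSE = 0, which the test below exercises.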

17.5 Results and Discussion

17.5.1 Selection of Random Parameters

Random parameters are important components of optimization algorithms, and they affect the accuracy of modeling. The population size (POPS) and the maximum number of iterations (MANI) are the most important parameters of the optimization algorithms. A sensitivity analysis is commonly applied to find their optimal values. Each optimization problem has an objective function; in this research, the RMSE is the objective function, and the values of the algorithm parameters are changed to minimize it. Figure 17.3 shows the heat map of the sensitivity analysis. For SUNO, the objective function value for MANI = 100, 200, 300, 400, and 500 was 1.62, 1.14, 1.81, 1.90, and 2.55, respectively; thus, MANI = 200 provided the lowest objective function value. For COA, the objective function value for MANI = 100, 200, 300, 400, and 500 was 2.78, 2.59, 2.14, 2.29, and 2.49, respectively; thus, MANI = 300 provided the lowest objective function value. For SUNO, the objective function value for POPS = 48, 96, 144, 192, and 240 was 1.65, 1.12, 1.78, 1.89, and 2.45, respectively; thus, POPS = 96 provided the lowest objective function value.
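A one-at-a-time sensitivity analysis of this kind can be sketched as below. run_model is a hypothetical stand-in for "train the hybrid model with these settings and return its RMSE"; its minimum is placed at the values found in the chapter purely for illustration.

```python
# Hypothetical stand-in for a full hybrid-model run returning RMSE;
# the quadratic shape and its minimum at (POPS=96, MANI=200) are assumed.
def run_model(pops, mani):
    return 1.1 + 1e-5 * (pops - 96) ** 2 + 1e-5 * (mani - 200) ** 2

# Candidate grids, as in the sensitivity analysis of Sect. 17.5.1.
mani_grid = [100, 200, 300, 400, 500]
pops_grid = [48, 96, 144, 192, 240]

# Vary one parameter while holding the other fixed.
best_mani = min(mani_grid, key=lambda m: run_model(96, m))
best_pops = min(pops_grid, key=lambda p: run_model(p, best_mani))
```

The same loop, with the real training run inside, produces the heat map of Fig. 17.3.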

17.5.2 Choice of Inputs

For predicting WIS, it is essential to determine the best inputs. Pearson correlation values were computed to choose them. Figure 17.4 shows the Pearson correlation between WIS(t) and the lagged WIS values at the different stations. At all four stations (Kashan, Mashhad, Sari, and Tabriz), WIS(t − 1), WIS(t − 2), WIS(t − 3), WIS(t − 4), and WIS(t − 5) had the highest correlation with WIS(t) and were therefore chosen as the inputs to the models, while WIS(t − 11) and WIS(t − 12) had the lowest correlation with WIS(t). Increasing the lag time decreased the correlation values at all stations.
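The lag-selection step can be illustrated as follows. The wind-speed series here is synthetic (an AR(1) process with a mean of 6 m/s), standing in for the station records used in the chapter; only the procedure, not the data, reflects the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic monthly wind-speed series (156 months, as in 2000-2012).
n = 156
wis = np.empty(n)
wis[0] = 6.0
for i in range(1, n):
    wis[i] = 6.0 + 0.8 * (wis[i - 1] - 6.0) + 0.3 * rng.standard_normal()

# Pearson correlation between WIS(t) and each lagged series WIS(t - lag).
max_lag = 12
corr = {lag: np.corrcoef(wis[max_lag:], wis[max_lag - lag:n - lag])[0, 1]
        for lag in range(1, max_lag + 1)}

# Keep the five lags with the strongest absolute correlation as inputs.
best_lags = sorted(corr, key=lambda lag: abs(corr[lag]), reverse=True)[:5]
```

For a persistent series like this one, the correlation decays with the lag, mirroring the chapter's finding that the short lags carry the most information.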

17.5.3 Investigation of the Accuracy of Models The efficiency of different models is investigated in this section. Figure 17.5 shows the radar plots based on different values of error indices.

Fig. 17.3 Heat map for sensitivity analysis

• Kashan station
At the training level, MBE of the LSTM-SUNO, LSTM-COA, LSTM-PSO, and LSTM models was 0.119, 0.123, 0.145, and 0.155, respectively. At the training level, NSE of the LSTM-SUNO, LSTM-COA, LSTM-PSO, and LSTM models was 0.97, 0.96, 0.95, and 0.92, respectively. At the training level, PBIAS of the LSTM-SUNO, LSTM-COA, LSTM-PSO, and LSTM models was 4, 7, 8, and 11, respectively.
• Sari station
At the training level, LSTM-SUNO decreased MBE of the LSTM-COA, LSTM-PSO, and LSTM models by 3.3%, 5.6%, and 5.4%, respectively. At the training level, NSE of the LSTM-SUNO, LSTM-COA, LSTM-PSO, and LSTM

Fig. 17.4 Pearson correlation values between WIS(t) and the lagged WIS values

models was 0.96, 0.95, 0.94, and 0.90, respectively. At the training level, PBIAS of the LSTM-SUNO, LSTM-COA, LSTM-PSO, and LSTM models was 5, 7, 11, and 12, respectively.
• Mashhad station
At the training level, LSTM-SUNO decreased MBE of the LSTM-COA, LSTM-PSO, and LSTM models by 3.2%, 7.0%, and 9.8%, respectively. At the training level, NSE of the LSTM-SUNO, LSTM-COA, LSTM-PSO, and LSTM models was 0.95, 0.94, 0.92, and 0.89, respectively. At the training level, PBIAS of the LSTM-SUNO, LSTM-COA, LSTM-PSO, and LSTM models was 6, 9, 10, and 12, respectively.
• Tabriz station
At the training level, MBE of the LSTM-SUNO, LSTM-COA, LSTM-PSO, and LSTM models was 0.123, 0.127, 0.132, and 0.133, respectively. At the training level, NSE of the LSTM-SUNO, LSTM-COA, LSTM-PSO, and LSTM models was 0.94, 0.92, 0.89, and 0.87, respectively. At the training level, PBIAS of the LSTM-SUNO, LSTM-COA, LSTM-PSO, and LSTM models was 5, 7, 9, and 11, respectively.
Figure 17.6 shows the boxplots of the different models.
• Kashan station
The median of the observed data and of the LSTM-SUNO, LSTM-COA, LSTM-PSO, and LSTM models was 7, 7, 6.5, 6.2, and 5.9, respectively. The maximum value of the observed data and of the LSTM-SUNO, LSTM-COA, LSTM-PSO, and LSTM models was 8.6, 8.4, 8.3, 8.1, and 8.00, respectively.

Fig. 17.5 Radar plots for evaluation of the accuracy of the models (panels: MBE, NSE, and PBIAS)

Fig. 17.6 Boxplots of models for predicting wind speed (panels: Kashan and Mashhad)

• Mashhad station
The median of the observed data and of the LSTM-SUNO, LSTM-COA, LSTM-PSO, and LSTM models was 6.55, 6.55, 6.15, 5.95, and 5.7, respectively. The minimum value of the observed data and of the LSTM-SUNO, LSTM-COA, LSTM-PSO, and LSTM models was 1.94, 1.94, 1.92, 1.92, and 1.92, respectively.

Fig. 17.6 (continued) (panels: Sari and Tabriz)

• Sari station
The median of the observed data and of the LSTM-SUNO, LSTM-COA, LSTM-PSO, and LSTM models was 7.25, 6.95, 6.15, 5.40, and 5.4, respectively. The maximum value of the observed data and of the LSTM-SUNO, LSTM-COA, LSTM-PSO, and LSTM models was 8.8, 8.8, 8.5, 8.2, and 8.2, respectively.

• Tabriz station
The median of the observed data and of the LSTM-SUNO, LSTM-COA, LSTM-PSO, and LSTM models was 7.35, 7.15, 6.65, 5.45, and 6.3, respectively. The maximum value of the observed data and of the LSTM-SUNO, LSTM-COA, LSTM-PSO, and LSTM models was 8.8, 8.4, 8.2, 8.2, and 8.2, respectively.

17.5.4 Discussion

Predicting wind speed is an important topic since different factors influence it. When modelers do not have access to all input data, collecting the different inputs may be difficult; consequently, predicting wind speed from limited input data is an important issue in wind engineering. Numerical and soft computing models may be able to predict wind speed, and soft computing models have a high ability to predict the output. In the modeling process, it is important to adjust the model parameters. This study used the LSTM model as a robust deep learning model for predicting wind speed; however, this model has unknown parameters, which were adjusted in the current study using optimization algorithms. The optimization algorithms improved the efficiency of the standalone LSTM. Due to their different advanced operators, the optimization algorithms provide different accuracies. The lagged WIS values were used as inputs in this study; modelers can use them when the other input parameters are unavailable. This study determined the best input scenario using the correlation method, and future studies can select inputs using methods such as the gamma test. Model parameters and inputs introduce uncertainty into the modeling process, so modelers can quantify model uncertainties. The performance of the LSTM-SUNO, LSTM-COA, LSTM-PSO, and LSTM models was not limited to a specific climate, so the models can be used in different climates. Furthermore, these models can provide wind maps spatially and temporally, which will help identify suitable wind plant locations.

17.6 Conclusion

Wind speed prediction is an important issue for developing renewable energies. This study used optimized LSTM models to predict one-month-ahead wind speed, with the lagged wind speed values used as inputs. The study's results indicated that the optimized LSTM models performed better than the standalone LSTM models. Increasing the lag time decreased the correlation between WIS(t) and the lagged WIS values. Future studies can use data such as latitude, longitude, altitude, and month number as inputs to the models, and multi-objective optimization algorithms can be used to determine the best inputs and model parameters simultaneously. The results of these models are useful for developing wind energy in different regions of the world. Future studies can also consider the effect of input uncertainties on model accuracy.

References

Al-qaness, M. A. A., Ewees, A. A., Fan, H., Abualigah, L., & Elaziz, M. A. (2022). Boosted ANFIS model using augmented marine predator algorithm with mutation operators for wind power forecasting. Applied Energy. https://doi.org/10.1016/j.apenergy.2022.118851
Chen, J., Zeng, G. Q., Zhou, W., Du, W., & Lu, K. D. (2018). Wind speed forecasting using nonlinear-learning ensemble of deep learning time series prediction and extremal optimization. Energy Conversion and Management. https://doi.org/10.1016/j.enconman.2018.03.098
Jamil, M., & Zeeshan, M. (2019). A comparative analysis of ANN and chaotic approach-based wind speed prediction in India. Neural Computing and Applications. https://doi.org/10.1007/s00521-018-3513-2
Kisi, O., & Sanikhani, H. (2015). Prediction of long-term monthly precipitation using several soft computing methods without climatic data. International Journal of Climatology, 35(14), 4139–4150.
Malik, P., Gehlot, A., Singh, R., Gupta, L. R., & Thakur, A. K. (2022). A review on ANN based model for solar radiation and wind speed prediction with real-time data. Archives of Computational Methods in Engineering. https://doi.org/10.1007/s11831-021-09687-3
Ramasamy, P., Chandel, S. S., & Yadav, A. K. (2015). Wind speed prediction in the mountainous region of India using an artificial neural network model. Renewable Energy. https://doi.org/10.1016/j.renene.2015.02.034
Sarp, A. O., Menguc, E. C., Peker, M., & Guvenc, B. C. (2022). Data-adaptive censoring for short-term wind speed predictors based on MLP, RNN, and SVM. IEEE Systems Journal. https://doi.org/10.1109/JSYST.2022.3150749
Tian, Z., & Chen, H. (2021). A novel decomposition-ensemble prediction model for ultra-short-term wind speed. Energy Conversion and Management. https://doi.org/10.1016/j.enconman.2021.114775
Wang, S., Guo, Y., Wang, Y., Li, Q., Wang, N., Sun, S., Cheng, Y., & Yu, P. (2021). A wind speed prediction method based on improved empirical mode decomposition and support vector machine. IOP Conference Series: Earth and Environmental Science. https://doi.org/10.1088/1755-1315/680/1/012012
Xing, Y., Lien, F. S., Melek, W., & Yee, E. (2022). A multi-hour ahead wind power forecasting system based on a WRF-TOPSIS-ANFIS model. Energies, 15(15), 5472.
Yu, X., Zhang, Z., & Song, M. (2021). Short-term wind speed prediction of wind farms based on particle swarm optimization support vector machine. IOP Conference Series: Earth and Environmental Science. https://doi.org/10.1088/1755-1315/804/3/032061
Zhang, Y., Pan, G., Chen, B., Han, J., Zhao, Y., & Zhang, C. (2020). Short-term wind speed prediction model based on GA-ANN improved by VMD. Renewable Energy. https://doi.org/10.1016/j.renene.2019.12.047

Chapter 18

Predicting Dew Point Using Optimized Least Square Support Vector Machine Models

Abstract Dew point temperature (DPT) prediction is an important topic in agriculture and water resource management. In this chapter, robust soft computing models are used for estimating DPT: a standalone least square support vector machine (LSSVM) and optimized LSSVM models, namely the LSSVM-antlion optimization algorithm (LSSVM-ANOA), the LSSVM-dragonfly algorithm (LSSVM-DOA), and the LSSVM-crow optimization algorithm (LSSVM-COA). Different input combinations were used to predict DPT. The best input combination consisted of relative humidity, average temperature, wind speed, and number of sunny hours. The results indicated that the optimized LSSVM models outperformed the standalone LSSVM.

Keywords Dew point temperature · Optimization algorithms · Least square support vector machine · Soft computing model

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
M. Ehteram et al., Application of Machine Learning Models in Agricultural and Meteorological Sciences, https://doi.org/10.1007/978-981-19-9733-4_18

18.1 Introduction

The dew point temperature (DPT) significantly affects crop production and growth. Estimating the DPT is necessary for estimating near-surface humidity and air moisture. Modeling DPT is complex and nonlinear since different parameters affect it. Machine learning models have been widely used for estimating hydrological variables in recent years, as nonlinear and complex processes can be modeled using them. Mohammadi et al. (2015) estimated DPT using artificial neural network (ANN), support vector machine (SVM), and extreme learning machine (ELM) models and reported that the ELM outperformed the SVM and ANN models. Deka et al. (2018) used SVM and ELM models for estimating DPT and reported that the ELM performed better than the SVM model. Shiri (2019) estimated DPT using genetic programming (GP), random forest (RF), and multivariate adaptive regression splines (MARS); the MARS gave the best results based on inputs of air temperature, relative humidity, and sunshine hours. Qasem et al. (2019) used SVM, GP, and decision tree models for estimating DPT. Naganna et al. (2019) coupled optimization algorithms with ANN models to estimate DPT and reported that the optimized ANN models performed better than the standalone ANN models. Alizamir et al. (2020) used an ELM and a deep echo state network (DeepESN) for estimating DPT and reported that the DeepESN performed better than the other models. Alizamir et al. (2020) also used a kernel extreme learning machine (KELM), boosted regression trees (BRT), and different ANN models with different input combinations for estimating DPT and reported that the KELM outperformed the other models. Overall, these results indicate that soft computing models can be successfully used to estimate DPT. The least square support vector machine (LSSVM) is one of the robust models for estimating target variables. The LSSVM is a classical statistical learning method based on the SVM. It has been widely used in different fields, such as modeling sediment load (Kisi, 2012), determination of natural gas density (Esfahani et al., 2015), prediction of river water pollution (Kisi & Parmar, 2016), slope stability analysis (Samui & Kothari, 2011), and prediction of blockchain financial products (Sivaram et al., 2020). This chapter uses a standalone LSSVM and optimized LSSVM models, namely the LSSVM-ANOA, LSSVM-DOA, and LSSVM-COA, to estimate DPT. The structures of the ANOA, DOA, and COA were explained in Chaps. 6, 8, and 10.

18.2 Structure of the LSSVM Model

The LSSVM originated from the SVM model. The LSSVM acts based on the following equation:

f(x) = w^T φ(x) + b    (18.1)

where f(x): output, w: weight vector, b: bias, x: input, and φ(x): mapping function. The regression problem is defined based on the structural minimization principle:

min J(w, e) = (1/2) w^T w + (γ/2) Σ_(i=1)^(m) e_i²    (18.2)

where γ: margin parameter and e_i: slack variables. Kernel functions are used to define the LSSVM regression:

f(x) = Σ_(i=1)^(m) α_i K(x, x_i) + b    (18.3)

where K(x, x_i): kernel function and α_i: Lagrange multipliers. The radial basis function kernel is used:

K(x, x_i) = exp(−‖x − x_i‖² / (2σ²))    (18.4)

where σ: kernel parameter. The kernel parameter is one of the most important parameters of the LSSVM. In this study, optimization algorithms are used to adjust the LSSVM parameters.

18.3 Hybrid Structure of the LSSVM Model

In this study, optimization algorithms are used to train the LSSVM model. The following steps are considered:
1. The data are divided into training and testing sets; 70% and 30% of the data are used for the training and testing levels, respectively. These proportions provided the lowest values of the objective function.
2. The LSSVM runs at the training level.
3. If the stop condition is met, the model runs at the testing level; otherwise, the process goes to the next step.
4. The initial values of the kernel parameters are defined as the initial population of the algorithm.
5. The LSSVM runs, and the objective function is computed to evaluate the quality of the solutions.
6. The advanced operators of the algorithms are used to update the values of the kernel parameters.
7. Finally, the stop condition is checked. If it is met, the process returns to step 3; otherwise, it returns to step 5.

18.4 Case Study

The Tabriz synoptic station, in the province of East Azerbaijan, belongs to the Iranian Meteorological Organization. Figure 18.1 shows the location of the Tabriz synoptic station, and Table 18.1 gives the details of the data. The following indices are used to evaluate the ability of the models:

MAE = (1/n) Σ_(i=1)^(n) |DPTob − DPTes|    (18.5)

PBIAS = Σ_(i=1)^(N) (DPTob − DPTes) / Σ_(i=1)^(N) DPTob    (18.6)

NSE = 1 − Σ_(i=1)^(n) (DPTob − DPTes)² / Σ_(i=1)^(n) (DPTob − DPTob,avg)²    (18.7)


Fig. 18.1 Location of the case study

Table 18.1 Details of input data

Parameter                    Mean     Max      Min
Temperature (°C)             12.72    32.24    −14.21
Relative humidity (%)        51.23    95.34    22.12
Wind speed (m/s)             5.45     12.25    0.00
Number of sunny hours (h)    8.25     14.21    4.56

CRMSE = sqrt{ (1/n) Σ_(i=1)^(n) [(DPTob − DPTob,avg) − (DPTes − DPTes,avg)]² }    (18.8)

where MAE: mean absolute error, PBIAS: percentage of bias, NSE: Nash–Sutcliffe efficiency, CRMSE: centered root-mean-square difference error, DPTob: observed DPT, DPTob,avg: average observed DPT, DPTes: estimated DPT, and DPTes,avg: average estimated DPT.

18.5 Results and Discussion

18.5.1 Selection of Random Parameters

The population size (POPSI) and the maximum number of iterations (MNOIT) are the random parameters of the optimization algorithms. A sensitivity analysis of the random parameters is given in Table 18.2; the values of the other parameters remain unchanged when a parameter's value changes. For the ANOA, the objective function value (RMSE) for POPSI = 100, 200, 300, and 400 was 1.56, 1.43, 1.53, and 1.57, respectively. For the DOA, the objective function value for POPSI = 100, 200, 300, and 400 was 1.79, 1.55, 1.69, and 1.89, respectively. For the COA, the objective function value for POPSI = 100, 200, 300, and 400 was 1.89, 1.72, 1.75, and 1.95, respectively. For the ANOA, the objective function value for MNOIT = 50, 100, 150, and 200 was 1.54, 1.41, 1.45, and 1.59, respectively. For the DOA, the objective function value for MNOIT = 50, 100, 150, and 200 was 1.78, 1.57, 1.55, and 1.67, respectively. For the COA, the objective function value for MNOIT = 50, 100, 150, and 200 was 1.88, 1.78, 1.71, and 1.79, respectively.

Table 18.2 Sensitivity analysis for determining random parameters (objective function values, RMSE)

POPSI    ANOA    DOA     COA
100      1.56    1.79    1.89
200      1.43    1.55    1.72
300      1.53    1.69    1.75
400      1.57    1.89    1.95

MNOIT    ANOA    DOA     COA
50       1.54    1.78    1.88
100      1.41    1.57    1.78
150      1.45    1.55    1.71
200      1.59    1.67    1.79


Table 18.3 Determination of the most important input parameters

Input scenario                                                       RMSE
Temperature, relative humidity, wind speed, number of sunny hours    1.43
Temperature, relative humidity, wind speed                           1.49
Temperature, relative humidity, number of sunny hours                1.65
Temperature, wind speed, number of sunny hours                       1.76
Relative humidity, wind speed, number of sunny hours                 1.98

18.5.2 Selection of the Best Input Combination

In this section, the LSSVM-ANOA was used to estimate DPT based on the different input scenarios. Table 18.3 gives the RMSE values for the different input combinations. Removing the average temperature from the input combination increased the RMSE from 1.43 to 1.98; thus, the average temperature was the most important input parameter for estimating DPT. Removing the number of sunny hours increased the RMSE only from 1.43 to 1.49; thus, the number of sunny hours had the least importance among the input variables. The best input combination consisted of relative humidity, average temperature, wind speed, and number of sunny hours.

18.5.3 Evaluation of the Accuracy of Models

Figure 18.2 shows the accuracy of the models based on the different error indices. At the training level, PBIAS values of the LSSVM-ANOA, LSSVM-DOA, LSSVM-COA, and LSSVM models were 5, 8, 9, and 11, respectively; at the testing level, they were 7, 10, 12, and 14, respectively. Training NSE values of the LSSVM-ANOA, LSSVM-DOA, LSSVM-COA, and LSSVM models were 0.95, 0.92, 0.90, and 0.87, respectively; testing NSE values were 0.92, 0.90, 0.86, and 0.84, respectively. Training MAE values of the LSSVM-ANOA, LSSVM-DOA, LSSVM-COA, and LSSVM models were 0.876, 0.923, 1.12, and 1.34, respectively; testing MAE values were 1.10, 1.12, 1.15, and 1.19, respectively. The Taylor diagram is a valuable tool for evaluating the models' performance; it is based on the CRMSE, the standard deviation, and the correlation coefficient (Fig. 18.3). The CRMSE of the LSSVM-ANOA, LSSVM-DOA, LSSVM-COA, and LSSVM models was 0.05, 0.23, 0.28, and 0.67, respectively, and the correlation coefficient of the LSSVM-ANOA, LSSVM-DOA, LSSVM-COA, and LSSVM models was 0.96, 0.94, 0.86, and 0.78, respectively. Thus, the LSSVM-ANOA had the best accuracy among the models.
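The three statistics behind a Taylor diagram can be computed as below; the sample arrays are illustrative, not the station data.

```python
import numpy as np

def taylor_stats(obs, est):
    """Statistics underlying a Taylor diagram: standard deviation of the
    estimates, correlation with the observations, and centered RMSE."""
    r = np.corrcoef(obs, est)[0, 1]
    sd = est.std()
    crmse = np.sqrt(np.mean(((obs - obs.mean()) - (est - est.mean())) ** 2))
    return sd, r, crmse

# Illustrative observed and estimated anomalies.
obs = np.array([1.2, 0.5, -3.1, 0.8, 2.2, -1.5])
est = np.array([1.0, 0.7, -2.8, 0.9, 2.0, -1.2])
sd, r, crmse = taylor_stats(obs, est)
```

These quantities satisfy CRMSE² = σ_obs² + σ_est² − 2 σ_obs σ_est R, a law-of-cosines relation that is the reason a single diagram can display all three at once.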

Fig. 18.2 Radar plots of different error indices for estimating DPT (panels: MAE, NSE, and PBIAS)


Fig. 18.3 Taylor diagram for comparing the accuracy of the models

This chapter used optimization algorithms to improve the efficiency of LSSVM models for predicting DPT. Since the optimization algorithms use advanced operators for solving complex problems, the optimized LSSVM models performed better than the standalone LSSVM models. However, the inputs and model parameters may introduce uncertainty into the modeling process, and future studies can consider these uncertainty sources. Hydrological and meteorological predictions are necessary for managing water resources and food security; they involve nonlinear processes and need reliable, powerful models to perform accurately. Estimating meteorological and agricultural variables is likewise necessary for managing irrigation and agriculture. The dew point temperature is a key meteorological parameter that can be used to calculate relative humidity and actual vapor pressure. Since various parameters affect the dew point temperature, modeling and estimating it is a complex task that demands a high level of accuracy. The soft computing models used in this study have benefits such as high simulation speed and high accuracy, and the optimization algorithms were able to calculate the LSSVM parameters accurately. The results revealed that the optimized LSSVM models have high accuracy and are able to estimate the dew point temperature; however, each optimization algorithm has different operators, which makes the accuracies of the optimized LSSVM models differ from each other. Future studies can focus on the uncertainty of the LSSVM models, since the LSSVM parameters and input variables can cause uncertainty in the modeling process, and preprocessing methods can further improve the accuracy of LSSVM models significantly. The LSSVM models presented in this study can also predict other variables such as rainfall and temperature, and they are able to predict spatial and temporal changes in the dew point temperature, which shows that the LSSVM model can facilitate hydrological modeling.

18.6 Conclusion

The dew point temperature is an important issue for agriculture and water management. DPT prediction relies on a large number of input parameters, and the modeling process is nonlinear and complex; thus, soft computing models can be used for estimating DPT. The LSSVM models have a high potential for simulating complex problems. In this chapter, the LSSVM model was integrated with optimization algorithms to predict DPT, and different input combinations were used. The best input combination consisted of relative humidity, average temperature, wind speed, and number of sunny hours. At the testing level, PBIAS values of the LSSVM-ANOA, LSSVM-DOA, LSSVM-COA, and LSSVM models were 7, 10, 12, and 14, respectively. At the training level, NSE values of the LSSVM-ANOA, LSSVM-DOA, LSSVM-COA, and LSSVM models were 0.95, 0.92, 0.90, and 0.87, respectively.

References

Alizamir, M., Kim, S., Zounemat-Kermani, M., Heddam, S., Kim, N. W., & Singh, V. P. (2020). Kernel extreme learning machine: An efficient model for estimating daily dew point temperature using weather data. Water, 12(9), 2600.
Deka, P. C., Patil, A. P., Yeswanth Kumar, P., & Naganna, S. R. (2018). Estimation of dew point temperature using SVM and ELM for humid and semi-arid regions of India. ISH Journal of Hydraulic Engineering, 24(2), 190–197.
Esfahani, S., Baselizadeh, S., & Hemmati-Sarapardeh, A. (2015). On determination of natural gas density: Least square support vector machine modeling approach. Journal of Natural Gas Science and Engineering, 22, 348–358.
Kisi, O. (2012). Modeling discharge-suspended sediment relationship using least square support vector machine. Journal of Hydrology, 456, 110–120.
Kisi, O., & Parmar, K. S. (2016). Application of least square support vector machine and multivariate adaptive regression spline models in long term prediction of river water pollution. Journal of Hydrology, 534, 104–112.
Mohammadi, K., Shamshirband, S., Motamedi, S., Petković, D., Hashim, R., & Gocic, M. (2015). Extreme learning machine based prediction of daily dew point temperature. Computers and Electronics in Agriculture, 117, 214–225.
Naganna, S. R., Deka, P. C., Ghorbani, M. A., Biazar, S. M., Al-Ansari, N., & Yaseen, Z. M. (2019). Dew point temperature estimation: Application of artificial intelligence model integrated with nature-inspired optimization algorithms. Water, 11(4), 742.

Qasem, S. N., Samadianfard, S., Sadri Nahand, H., Mosavi, A., Shamshirband, S., & Chau, K. W. (2019). Estimating daily dew point temperature using machine learning algorithms. Water, 11(3), 582. Samui, P., & Kothari, D. P. (2011). Utilization of a least square support vector machine (LSSVM) for slope stability analysis. Scientia Iranica, 18(1), 53–58. Shiri, J. (2019). Prediction vs. estimation of dewpoint temperature: assessing GEP, MARS and RF models. Hydrology Research, 50(2), 633–643. Sivaram, M., Lydia, E. L., Pustokhina, I. V., Pustokhin, D. A., Elhoseny, M., Joshi, G. P., & Shankar, K. (2020). An optimal least square support vector machine based earnings prediction of blockchain financial products. IEEE Access, 8, 120321–120330.