Studies in Computational Intelligence Volume 1005
Series Editor Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
The series “Studies in Computational Intelligence” (SCI) publishes new developments and advances in the various areas of computational intelligence—quickly and with a high quality. The intent is to cover the theory, applications, and design methods of computational intelligence, as embedded in the fields of engineering, computer science, physics and life sciences, as well as the methodologies behind them. The series contains monographs, lecture notes and edited volumes in computational intelligence spanning the areas of neural networks, connectionist systems, genetic algorithms, evolutionary computation, artificial intelligence, cellular automata, self-organizing systems, soft computing, fuzzy systems, and hybrid intelligent systems. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution, which enable both wide and rapid dissemination of research output. Indexed by SCOPUS, DBLP, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science.
More information about this series at https://link.springer.com/bookseries/7092
Aboul Ella Hassanien · Roheet Bhatnagar · Václav Snášel · Mahmoud Yasin Shams Editors
Medical Informatics and Bioimaging Using Artificial Intelligence Challenges, Issues, Innovations and Recent Developments
Editors:
Aboul Ella Hassanien, Faculty of Computers and AI, Cairo University, Giza, Egypt
Roheet Bhatnagar, Department of Computer Science and Engineering, Manipal University Jaipur, Jaipur, Rajasthan, India
Václav Snášel, Faculty of Electrical Engineering and Computer Science, VSB-Technical University of Ostrava, Ostrava-Poruba, Moravskoslezsky, Czech Republic
Mahmoud Yasin Shams, Faculty of Artificial Intelligence, Kafrelsheikh University, Kafr El Sheikh, Egypt
ISSN 1860-949X  ISSN 1860-9503 (electronic)
Studies in Computational Intelligence
ISBN 978-3-030-91102-7  ISBN 978-3-030-91103-4 (eBook)
https://doi.org/10.1007/978-3-030-91103-4

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2022

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.
Preface
Today, modern societies are witnessing increased usage of technology in almost every domain. Healthcare is one big domain where modern technologies bring a sea change and a paradigm shift in how healthcare is planned, administered, and implemented. Big data, networking, graphical interfaces, data mining, machine learning, pattern recognition, and intelligent decision support systems are a few of the technologies and research areas currently contributing to medical informatics. Mobility and ubiquity in healthcare systems; physiological and behavioral modeling; standardization of health records, procedures, and technologies; certification; privacy; and security are some of the issues that medical informatics professionals, the Information and Communication Technology (ICT) industry, and the research community in general are addressing to further promote ICT in healthcare. The assistive technologies and home monitoring applications of ICT have greatly enhanced the quality of life and the full integration of all citizens into society. Bioimaging is a term that covers the complex chain of acquiring, processing, and visualizing structural or functional images of living objects or systems, including the extraction and processing of image-related information. There are many image modalities used in bioimaging, including X-ray, CT, MRI and fMRI, PET and HRRT PET, SPECT, MEG, etc. Medical imaging and microscope/fluorescence image processing are important parts of bioimaging, referring to the techniques and processes used to create images of the human body, anatomical areas, and tissues, down to the molecular level, for clinical purposes, seeking to reveal, diagnose, or examine diseases, or for medical science, including the study of normal anatomy and physiology. Both classic image processing methods (e.g., denoising, segmentation, deconvolution, registration, feature recognition, and classification) and modern machine learning, particularly deep learning techniques, represent an indispensable part of bioimaging and the related statistical data analysis tools. The trend is on the increase, and we are sure to witness more automation and machine intelligence in the future. This book emphasizes the latest developments and achievements in AI and related technologies, with a special focus on medical imaging and diagnostic AI applications. The book describes the theory, applications, and conceptualization of ideas, case studies, and critical surveys covering most aspects of AI for medical informatics.
The content of this book is divided into three parts. Part I presents the role and importance of machine learning and deep learning applications in medical diagnosis and how these technologies assist in the smart prognosis of critical diseases. The current pandemic has demanded urgent and technology-driven solutions for COVID testing and diagnosis; therefore, Part II describes and analyzes the effective role of artificial intelligence in COVID-19 diagnosis during a time of crisis that has affected the whole world. Part III introduces some other emerging technologies and applications and their impact on medical diagnosis. Finally, the editors of this book would like to acknowledge all the authors for their studies and contributions. The editors would also like to encourage readers to explore and expand this knowledge to create their own implementations according to their needs.

Giza, Egypt
Jaipur, India
Ostrava-Poruba, Czech Republic
Kafr El Sheikh, Egypt

Aboul Ella Hassanien
Roheet Bhatnagar
Václav Snášel
Mahmoud Yasin Shams
Contents

Part I: Machine Learning and Deep Learning Applications in Medical Diagnosis

A Deep Learning Based Cockroach Swarm Optimization Approach for Segmenting Brain MRI Images (Mohamed A. El-dosuky and Mahmoud Shams) 3
Efficient Classification Model for Melanoma Based on Convolutional Neural Networks (Ismail Elansary, Amr Ismail, and Wael Awad) 15
Machine Learning-Supported MRI Analysis of Brain Asymmetry for Early Diagnosis of Dementia (Nitsa J. Herzog and George D. Magoulas) 29
Feature Selection Based Coral Reefs Optimization for Breast Cancer Classification (Lobna M. Abouelmagd, Mahmoud Y. Shams, Noha E. El-Attar, and Aboul Ella Hassanien) 53
A Comprehensive Review on Brain Disease Mapping—The Underlying Technologies and AI Based Techniques for Feature Extraction and Classification Using EEG Signals (Jaideep Singh Sachadev and Roheet Bhatnagar) 73
Recognition of Ocular Disease Based Optimized VGG-Net Models (Hanaa Salem, Kareem R. Negm, Mahmoud Y. Shams, and Omar M. Elzeki) 93

Part II: Artificial Intelligence in COVID-19 Diagnosis

Applications of AI and IoT in COVID-19 Vaccine and Its Impact on Social Life (Abdulqader M. Almars, Ibrahim Gad, and El-Sayed Atlam) 115
COVID-19 Forecasting Based on an Improved Interior Search Algorithm and Multilayer Feed-Forward Neural Network (Rizk M. Rizk-Allah and Aboul Ella Hassanien) 129
Development of Disease Diagnosis Model for CXR Images and Reports—A Deep Learning Approach (Anandhavalli Muniasamy, Roheet Bhatnagar, and Gauthaman Karunakaran) 153
Intelligent Drug Descriptors Analysis: Toward COVID-19 Drug Repurposing (Aya Salama Abdelhady, Yaseen A. M. M. ElShaier, Mohamed S. Refaey, Ahmed Elsyaed Elmasry, and Aboul Ella Hassanien) 173

Part III: Emerging Technologies and Applications

Analysis of Aortic Valve Using a Finite Element Model (Kadry Ali Ezzat, Lamia Nabil Mahdy, and Ashraf Darwish) 195
Fundus Images Enhancement Using Gravitational Force and Lateral Inhibition Network for Blood Vessel Detection (Kamel K. Mohammed and Ashraf Darwish) 207
Multivariate Fuzzy Logic Based Smart Healthcare Monitoring for Risk Evaluation of Cardiac Patients (Ridhima Mehta) 219
Ensemble Machine Learning Model for Mortality Prediction Inside Intensive Care Unit (Nora El-Rashidy, Shaker El-Sappagh, Samir Abdelrazik, and Hazem El-Bakry) 245
Machine Learning and Deep Learning Applications in Medical Diagnosis
A Deep Learning Based Cockroach Swarm Optimization Approach for Segmenting Brain MRI Images

Mohamed A. El-dosuky and Mahmoud Shams
Abstract Segmentation of biomedical images is essential to determine and discover the most significant parts of an image, those that represent a particular disease, for diagnosis and prognosis purposes. In this work, an optimized deep learning approach based on cockroach swarm optimization is proposed for segmenting brain MRI scans. The proposed approach is inspired by the U-Net architecture and is trained, validated, and tested on 2003 images. The motivation of this work is to develop a robust architecture under restrictions on the size of the dataset, avoiding manual human segmentation. We present a robust machine learning technique that can segment both gray and white matter and investigate the cerebrospinal fluid in brain MRI scans, using images from the OASIS and ABIDE datasets. This saves specialists' time, allowing them to focus on tasks that require their expertise. The Cockroach Swarm Optimizer (CSO) performs the optimization of the hyper-parameters to ensure the reliability of the proposed architecture based on the accuracy achieved. In this work, we built an architecture that acquired an overall accuracy of 92%.

Keywords Deep learning · Segmenting brain · Convolutional neural network · Cockroach Swarm Optimization · MRI
M. A. El-dosuky, Faculty of Computer and Information Science, Mansoura University, Mansoura, Egypt (e-mail: [email protected]). M. Shams, Faculty of Artificial Intelligence, Kafrelsheikh University, Kafr el-Sheikh, Egypt (e-mail: [email protected]).

1 Introduction

Typically, Magnetic Resonance Imaging (MRI) is used to identify many diseases and parts of the human body. Recently, Convolutional Neural Networks (CNNs) have become able to recognize these images and combine them into a 2D volume to make predictions. The segmentation of brain images is a core problem in the deep learning field,
which resolves the ambiguities of diagnosing brain diseases [1]. Compared with shallow learning, deep learning is distinctive for extracting high-level, useful features that simulate the human visual cortex [2]. The core idea behind deep learning is to extract high-level features from images, rather than raw pixels, as human vision does [3]. However, there is an obstacle to applying deep learning to a new task: it requires substantial skill to select hyperparameters, such as the number of layers, the learning rate, the size of the kernel, etc. Furthermore, a solid theory of CNNs is still missing. One of the most widely used deep learning models is the CNN, which has many types of layers:
• a fully connected layer, which is a multilayer perceptron (MLP) expansion,
• the convolutional layer, which is capable of selecting local features via kernels,
• a max-pooling layer, which down-samples the output of a layer,
• a dropout layer, which regularizes the network and reduces over-fitting, and
• activation functions, such as softmax.
Currently, CNNs are used in practice in the form of the U-Net architecture, as presented by Milletari et al. [4]. More precisely, such CNNs are utilized in biomedical image segmentation applications, and this work adapts them to training images to achieve the desired accurate segmentation. Havaei et al. [5] presented an operative method that can identify tumors. Tumors can be at any place in the brain and have any size, shape, and contrast. They employed a CNN on 33 × 33 pixel patches, with both local (7 × 7 kernel size) and global (11 × 11 kernel size) features. This solution used a GPU and is considered a new cascaded architecture that accurately models local label dependencies. Pereira et al. [6] proposed an eleven-layer deep architecture with 2D patches. This architecture performs automatic segmentation based on a CNN with small 3 × 3 kernels; small kernels combat over-fitting and keep the number of weights in the CNN low. A patch-wise CNN for two-dimensional slices was proposed to segment brain tumors [7]; the authors constructed several tri-planar two-dimensional CNN architectures for three-dimensional voxel classification, which greatly reduces segmentation time. A three-dimensional dense-inference patch-wise CNN has also been proposed [8]: an efficient eleven-layer deep, multi-scale, 3D CNN architecture with a novel training strategy that significantly improves performance, and the first use of a three-dimensional, fully connected conditional random field for post-processing. For a recent survey of the state of the art, refer to [9]. The lessons learned are as follows. First, regardless of the significant success of deep learning in segmenting MRI, it is still a challenge to have a robust universal method for all variances in MRI scans. Second, deep learning (DL) performance depends heavily on pre-processing and initialization. Third, MRI training datasets are moderately small compared to datasets such as ImageNet, which encompass millions of images. Fourth, the existing deep learning architectures are built on supervised learning, which needs manually generated ground-truth labels. Finally, data augmentation accurately mimics variances in MRI scans, and thus could ease the need for large amounts of MRI data.
Chiroma et al. [10] provided a survey that facilitates interaction between nature-inspired algorithms and DL approaches, and they provided a classification of nature-inspired algorithms for DL architectures. Convolutional neural network optimization can be achieved using linearly decreasing weight particle swarm optimization [11]. Moreover, Fong et al. [12] showed that metaheuristic algorithms can contribute to DL in big data analysis. PSOCNN was proposed in [13]; it converges quickly compared with other evolutionary algorithms in finding meaningful CNN architectures. The proposed architecture is an extension inspired by U-Net, in which a CSO optimizer performs the optimization of the hyper-parameters. The pre-processing phase includes resizing images to 128 × 128, converting them into grayscale, and calculating the z-score. The second phase is training using 12 layers, with augmentation by two transformations: translations and rotations. The last two phases are validation and testing. The rest of this chapter is organized as follows. Section 2 discusses cockroach swarm optimization and deep learning, focusing on the U-Net architecture. Section 3 presents the proposed work. In Sect. 4, the experiments and results are demonstrated. Section 5 presents the conclusion and future work of this study.
2 Preliminaries

This section discusses cockroach swarm optimization and deep learning, focusing on the U-Net architecture that is used in the proposed approach.
2.1 Cockroach Swarm Optimization

Kwiecien [14] introduced Cockroach Swarm Optimization (CSO) as an effective optimization strategy. The structure and components of CSO are described as follows [15].
2.1.1 Chase-Swarming Behavior

In this behavior, each cockroach moves toward the best local candidate solution ($P_i$) within its visual range, while locally best cockroaches move toward the global best ($P_g$). This ensures that a cockroach quickly grows stronger by moving toward the best candidate solution found so far:

$$X_{i,G+1} = \begin{cases} X_{i,G} + step \cdot rand_1 \cdot \left(P_{i,G} - X_{i,G}\right), & P_{i,G} \neq X_{i,G} \\ X_{i,G} + step \cdot rand_2 \cdot \left(P_{g,G} - X_{i,G}\right), & P_{i,G} = X_{i,G} \end{cases} \tag{1}$$
where $X_{i,G}$ is the position of cockroach $i$ at the $G$-th generation, $rand_1$ and $rand_2$ are random numbers in the $[0, 1]$ interval, and $step$ is a constant. $P_{g,G}$ is the global best position at the $G$-th iteration, and $P_i$ is the local best position, given by Eq. (2):

$$P_i = \mathrm{Optimal}_j \left\{ X_j \;\middle|\; \left\| X_i - X_j \right\| \le visual \right\} \tag{2}$$

where $visual$ is a perception constant.
2.1.2 Dispersion Behavior

This behavior is performed periodically to preserve diversity by letting cockroaches take random steps, as in Eq. (3):

$$X_i = X_i + \mathrm{random}(1, D) \tag{3}$$

where $\mathrm{random}(1, D)$ is a random position in a $D$-dimensional range.
2.1.3 Ruthless Behavior

This behavior simulates the phenomenon of cockroaches eating weaker individuals when food resources are insufficient. It is implemented by replacing a random individual among the $N$ cockroaches with the best one, as in Eq. (4):

$$X_{rand} = P_g \tag{4}$$

where $rand$ is a random integer in the $[1, N]$ interval and $P_g$ is the global best position.
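Taken together, Eqs. (1) to (4) define one CSO generation. The following minimal Python sketch illustrates the three behaviors on a generic objective function; the population size, step, visual range, and dispersion magnitude are illustrative assumptions rather than values from this chapter.

```python
import numpy as np

def cso_minimize(f, dim, n=20, generations=100, step=2.0, visual=5.0, seed=0):
    """Minimal Cockroach Swarm Optimization sketch following Eqs. (1)-(4)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-10.0, 10.0, size=(n, dim))        # cockroach positions
    for _ in range(generations):
        fitness = np.array([f(x) for x in X])
        Pg = X[np.argmin(fitness)].copy()              # global best position
        for i in range(n):
            # Eq. (2): local best within the visual range of cockroach i
            near = np.linalg.norm(X - X[i], axis=1) <= visual
            Pi = X[near][np.argmin(fitness[near])]
            # Eq. (1): chase the local best, or the global best if i is its own local best
            target = Pg if np.allclose(Pi, X[i]) else Pi
            X[i] = X[i] + step * rng.random() * (target - X[i])
        X += rng.uniform(-1.0, 1.0, size=X.shape)      # Eq. (3): dispersion step
        X[rng.integers(n)] = Pg                        # Eq. (4): ruthless behavior
    return X[np.argmin([f(x) for x in X])]

# Example usage: minimize the sphere function in three dimensions
best = cso_minimize(lambda x: float(np.sum(x ** 2)), dim=3)
```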
2.2 Deep Learning

Deep learning is based on extracting high-level features from images, rather than raw pixels, as human vision does [3]. Deep learning methods include Convolutional Neural Network (CNN)-based methods, Restricted Boltzmann Machine (RBM)-based methods, autoencoder-based methods, and sparse coding-based methods. A convolutional neural network (CNN) is a common deep learning architecture with many layers. The convolutional layer can select local features via kernels, and each convolutional kernel outputs a feature map. The max-pooling layer is a sample-based discretization that down-samples the output of a layer. The fully
connected layer is a multilayer perceptron expansion. The drawback of these layers is that they contain many parameters, which results in great computational effort during training. There may be other layers, such as a dropout layer that regularizes the network and reduces over-fitting, and activation functions, such as softmax. U-Net is a CNN architecture for fast and accurate image segmentation; it has outperformed the prior best methods on many challenges [16]. U-Net is a network and training strategy that relies on extensive data augmentation to use the annotated samples more effectively. U-Net contains a contracting path to capture context, along with a symmetric expanding path that allows accurate localization [16]. For instance, in the U-Net proposed in [17], the input passes through three repetitions of a down-convolution layer followed by a max-pooling layer; it then passes through three repetitions of a convolution layer followed by up-sampling, and finally through a convolutional layer. Each up-convolution receives a concatenation from the corresponding down-convolution.
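As a concrete illustration of this contracting and expanding pattern, the following Keras sketch builds a small U-Net-style network with three down-steps, three up-steps, and skip concatenations; the filter counts and input size are illustrative assumptions, not the exact configuration of [17].

```python
from tensorflow.keras import layers, Model

def tiny_unet(input_shape=(128, 128, 1), base_filters=16):
    inp = layers.Input(shape=input_shape)
    skips, x = [], inp
    # Contracting path: down-convolution + max-pooling, three times
    for i in range(3):
        x = layers.Conv2D(base_filters * 2**i, 3, padding="same", activation="relu")(x)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    # Expanding path: convolution + up-sampling + skip concatenation
    for i in reversed(range(3)):
        x = layers.Conv2D(base_filters * 2**i, 3, padding="same", activation="relu")(x)
        x = layers.UpSampling2D(2)(x)
        x = layers.Concatenate()([x, skips[i]])   # skip link from the down path
    # Final 1x1 convolution produces the segmentation map
    out = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return Model(inp, out)

model = tiny_unet()
```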
3 The Proposed Approach

In this work, the proposed architecture is inspired by an extension of the U-Net architecture. The proposed model first applies hyper-parameter optimization using cockroach swarm optimization (CSO). Pre-processing is performed on the dataset before the pre-processed dataset is fed to the CNN, as shown in Fig. 1. The hyper-parameters optimized by CSO are the total number of epochs, the batch size, the filter size, the total number of filters, the use of dropout, and the use of batch normalization. The pre-processing stage includes resizing images to 128 × 128, converting them into grayscale, and calculating the z-score. This phase is vital before the actual training. Handling RGB images is expensive, so images are converted to grayscale, which reduces the complexity before training. Images are resized to 128 × 128, reducing the time complexity during training. The z-score is calculated in terms of the mean (μ) and standard deviation (σ), as shown in Eq. (5):

$$z = \frac{x - \mu}{\sigma} \tag{5}$$
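A minimal sketch of this pre-processing stage follows; it assumes the z-score statistics are computed per image, since the chapter does not specify whether μ and σ are taken per image or over the whole dataset.

```python
import numpy as np
from PIL import Image

def preprocess(path):
    """Grayscale -> 128x128 -> z-score, per the chapter's pre-processing stage."""
    img = Image.open(path).convert("L")          # grayscale conversion
    img = img.resize((128, 128))                 # fixed spatial size
    x = np.asarray(img, dtype=np.float32)
    return (x - x.mean()) / (x.std() + 1e-8)     # Eq. (5), per-image statistics
```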
The CNN used has a twelve-layer architecture that automatically segments the MRI cortical brain structures, as shown in Fig. 2.
Fig. 1 The proposed model based on the U-Net architecture, representing the optimal hyperparameter values (block diagram: the dataset is pre-processed; the hyperparameters are optimized by cockroach swarm optimization (CSO); and the resulting optimal hyperparameters configure the convolutional neural network)
Fig. 2 The structure of the proposed CNN architecture (input layer; six convolutional layers interleaved with batch normalization and max-pooling; dropout; softmax)
The twelve layers are as follows. First, there are two convolutional layers followed by batch normalization and max-pooling layers. Second, there are two further convolutional layers followed by batch normalization and max-pooling layers. Finally, there are two convolutional layers followed by dropout and softmax layers. Dropout is applied between the last hidden layer and the output layer to make the training noisy; this forces nodes within a layer to take on more or less responsibility for the inputs. The softmax layer, a normalized exponential function, must have the same number of nodes as the output layer.
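A sketch of this twelve-layer architecture in Keras follows. It assumes the Table 1 hyper-parameters (32 filters of size 3 × 3) apply to every convolutional layer and that there are two output classes; both are assumptions, since the chapter does not state them explicitly.

```python
from tensorflow.keras import layers, models

# Sketch of the twelve-layer architecture of Fig. 2; the input size follows
# the pre-processing stage (128x128 grayscale).
model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),
    layers.Conv2D(32, 3, padding="same", activation="relu"),   # conv-layer-1
    layers.Conv2D(32, 3, padding="same", activation="relu"),   # conv-layer-2
    layers.BatchNormalization(),
    layers.MaxPooling2D(2),
    layers.Conv2D(32, 3, padding="same", activation="relu"),   # conv-layer-3
    layers.Conv2D(32, 3, padding="same", activation="relu"),   # conv-layer-4
    layers.BatchNormalization(),
    layers.MaxPooling2D(2),
    layers.Conv2D(32, 3, padding="same", activation="relu"),   # conv-layer-5
    layers.Conv2D(32, 3, padding="same", activation="relu"),   # conv-layer-6
    layers.Dropout(0.5),                                       # rate is an assumption
    layers.Flatten(),
    layers.Dense(2, activation="softmax"),                     # class count assumed
])
```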
4 Experimental Results and Discussion

Figure 3 shows the three different types of MRI scans: the "axial plane", which represents images as slices; the "sagittal plane", which separates the left and right sides; and the "coronal plane", which separates the front from the back. We build a dataset based on two medical datasets: OASIS [18] and ABIDE [19]. The dataset is divided into three subsets to train, validate, and test the proposed model with 70% (1402 images), 10% (200 images), and 20% (400 images), respectively, as shown in Fig. 4. The total number of images is 2003. The first dataset split is for training. We augment the training set with two transformations, translations and rotations, which enlarge the total training set size by a factor of 7. The next dataset split is for evaluation. It uses the mean squared error (MSE), Eq. (6), one of the simplest methods for comparing two images:

$$L(\hat{y}, y) = \frac{1}{2n} \sum_{i=1}^{n} \left(\hat{y}_i - y_i\right)^2 \tag{6}$$

where $n$ is the number of training examples. It measures how far the predicted $\hat{y}$ is from the actual target $y$.
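Eq. (6) can be written directly as a small NumPy helper; note the 1/(2n) convention, which matches the equation above.

```python
import numpy as np

def mse_loss(y_pred, y_true):
    """Eq. (6): mean squared error with the 1/(2n) convention."""
    y_pred, y_true = np.asarray(y_pred), np.asarray(y_true)
    return np.sum((y_pred - y_true) ** 2) / (2 * y_pred.size)
```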
Fig. 3 Examples of the MRI scan images: a axial MRI scan, b sagittal MRI scan, and c coronal MRI scan
Fig. 4 The dataset splits: training (70% of the dataset: 1402 images plus 6 × 1402 augmentations), validation (10%: 200 images), and testing (20%: 400 images)
The final dataset split is for testing. Over-fitting is further countered using regularizers such as dropout and batch normalization. Table 1 summarizes the optimal hyper-parameters: the number of epochs, the batch size, the filter size, the total number of filters, and whether dropout and batch normalization are used. Measuring accuracy with the loss function, training took 2.7 days without optimization and 1.6 days with optimization. Accuracy was 87% and 92% without and with optimization, respectively, as shown in Table 2. A confusion matrix is presented in Table 3; many evaluation metrics can be computed from it, such as sensitivity, specificity, accuracy, precision, recall, and F-measure.
Table 2 Accuracy of the proposed system before and after the optimization process
Table 3 Confusion matrix of the MRI positive and negative cases
Hyperparameter
Value
Epochs
740
Batch size
32
Filter size
3×3
#Filters
32
Drop out
yes
Batch normalization
yes
Accuracy (%) Before optimization
87
After optimization
92
Predicted positive
Predicted negative
Actual posistive
306
5
Actual negative
27
62
$$\mathrm{Sensitivity} = \frac{306}{306 + 5} = 98\% \tag{7}$$

$$\mathrm{Specificity} = \frac{62}{62 + 27} = 69.7\% \tag{8}$$

$$\mathrm{Accuracy} = \frac{306 + 62}{306 + 62 + 27 + 5} = 92\% \tag{9}$$

$$\mathrm{Precision} = \frac{306}{306 + 27} = 92\% \tag{10}$$

$$\mathrm{Recall} = \frac{306}{306 + 5} = 98\% \tag{11}$$

$$F\text{-}\mathrm{Measure} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} = 94.9\% \tag{12}$$
We obtained an accuracy of 92% on the test sub-dataset for this model. Its training time is not computationally expensive, and in real tests on images from the dataset it performed well. When we assess images of other forms, i.e., images that belong to neither the ABIDE nor the OASIS dataset, we still obtain some segmentation, although the model makes more errors. To address this problem, we further augment the data with other types of image transformations, such as zoom and elastic deformations, as shown in Figs. 5 and 6.
Fig. 5 Segmenting a sagittal MRI scan: a original test image, b ground truth, c predicted

Fig. 6 Segmenting an axial MRI scan: a original test image, b ground truth, c predicted
5 Conclusion and Future Directions

In this work, we built an architecture that first applies hyperparameter optimization using cockroach swarm optimization (CSO). It acquired 92% accuracy on a dataset of 2003 images, using pre-processing steps for segmentation. The real challenge is the long training time; nevertheless, the system is feasible for use in real situations, improving on traditional segmentation. There are many ways to improve the proposed model. Regarding the datasets, we may increase the number of images drawn from the OASIS and ABIDE datasets. Training can be sped up using Principal Component Analysis (PCA), which compresses the input image into fewer parameters.
References

1. Liew, A. W., & Yan, H. (2006). Current methods in the automatic tissue segmentation of 3D magnetic resonance brain images. Current Medical Imaging Reviews, 2(1), 91–103.
2. Guo, Y., Liu, Y., Oerlemans, A., Lao, S., Wu, S., & Lew, M. S. (2016). Deep learning for visual understanding: A review. Neurocomputing, 187, 27–48.
3. Gu, J., Wang, Z., Kuen, J., Ma, L., Shahroudy, A., Shuai, B., Liu, T., Wang, X., Wang, G., Cai, J., & Chen, T. (2018). Recent advances in convolutional neural networks. Pattern Recognition, 77, 354–377.
4. Milletari, F., Navab, N., & Ahmadi, S. A. (2016). V-net: Fully convolutional neural networks for volumetric medical image segmentation. In 2016 Fourth International Conference on 3D Vision (3DV) (pp. 565–571).
5. Havaei, M., Davy, A., Warde-Farley, D., Biard, A., Courville, A., Bengio, Y., Pal, C., Jodoin, P. M., & Larochelle, H. (2017). Brain tumor segmentation with deep neural networks. Medical Image Analysis, 35, 18–31.
6. Pereira, S., Pinto, A., Alves, V., & Silva, C. A. (2016). Brain tumor segmentation using convolutional neural networks in MRI images. IEEE Transactions on Medical Imaging, 35(5), 1240–1251.
7. Zhao, L., & Jia, K. (2015). Deep feature learning with discrimination mechanism for brain tumor segmentation and diagnosis. In 2015 International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP) (pp. 306–309).
8. Kamnitsas, K., Ledig, C., Newcombe, V. F., Simpson, J. P., Kane, A. D., Menon, D. K., Rueckert, D., & Glocker, B. (2017). Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Medical Image Analysis, 36, 61–78.
9. Akkus, Z., Galimzianova, A., Hoogi, A., Rubin, D. L., & Erickson, B. J. (2017). Deep learning for brain MRI segmentation: State of the art and future directions. Journal of Digital Imaging, 30(4), 449–459.
10. Chiroma, H., Gital, A. Y., Rana, N., Shafi'I, M. A., Muhammad, A. N., & Umar, A. Y., et al. (2019). Nature inspired meta-heuristic algorithms for deep learning: Recent progress and novel perspective. In Science and Information Conference 2019 (pp. 59–70). Springer, Cham.
11. Serizawa, T., & Fujita, H. (2020). Optimization of convolutional neural network using the linearly decreasing weight particle swarm optimization. arXiv:2001.05670.
12. Fong, S., Deb, S., & Yang, X. S. (2018). How meta-heuristic algorithms contribute to deep learning in the hype of big data analytics. In Progress in Intelligent Computing Techniques: Theory, Practice, and Applications (pp. 3–25). Springer, Singapore.
13. Junior, F. E., & Yen, G. G. (2019). Particle swarm optimization of deep neural networks architectures for image classification. Swarm and Evolutionary Computation.
14. Kwiecien, J. (2020). Cockroach swarm optimization. In Swarm Intelligence Algorithms (pp. 85–96). CRC Press.
15. Cheng, L., Song, Y., & Bian, Y. (2019). Cockroach swarm optimization using a neighborhood-based strategy. International Arab Journal of Information Technology, 16(4), 784–790.
16. Ronneberger, O., Fischer, P., & Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 234–241). Springer, Cham.
17. https://en.wikipedia.org/wiki/U-Net. Last accessed May 2020.
18. OASIS open access series of imaging studies. https://www.oasis-brains.org. Last accessed May 2020.
19. Welcome to the autism brain imaging data exchange. https://preprocessed-connectomes-project.org/abide/. Last accessed May 2020.
Efficient Classification Model for Melanoma Based on Convolutional Neural Networks

Ismail Elansary, Amr Ismail, and Wael Awad
Abstract In many medical domains, recent advances in deep learning-based computer vision have moved model performance toward (or, in some cases, beyond) the level of human experts. On object classification and localization tasks, state-of-the-art classifiers based on convolutional neural networks (CNNs) achieve great accuracy. Skin cancer is the most worrying kind of cancer diagnosis because of its rapid spread to numerous areas of the body. Melanoma is a fatal kind of skin cancer that, if found early enough, can be treated with simple surgery. Dermatologists can therefore treat patients and save their lives by accurately classifying skin lesions in their early stages. Both doctors and patients might profit greatly from quick and precise diagnosis, so computer-assisted diagnostic support systems are required. Motivated by this, this paper introduces a proposed model for melanoma classification. ISIC 2020, the most recent well-known public challenge dataset, is utilized to test the suggested model's capacity to classify melanoma. With less than 2% malignant samples, the dataset is significantly unbalanced; we utilize random over-sampling and data augmentation to handle the unbalanced data. In addition, a class-weight technique was used to provide a weight for each class, so that the resulting classifier can learn equally from all classes, placing greater focus on the minority class. The suggested model uses EfficientNet-B6, a deep convolutional neural network (CNN), to classify patients' skin lesions as malignant or benign. The results demonstrate that the suggested system's accuracy of 97.84% is competitive with other models.

Keywords Skin Cancer · Melanoma · Deep Learning · CNN · EfficientNet

I. Elansary, Faculty of Science, Port Said University, Cairo, Egypt, and Modern Academy for Computer Science and Management Technology, Cairo, Egypt (e-mail: [email protected]). A. Ismail, Faculty of Science, Port Said University, Cairo, Egypt. W. Awad, Faculty of Computers and Information, Damietta University, Damietta, Egypt.
1 Introduction

Despite paramount advances in treatment effectiveness and patient quality of life over the last two decades, cancer treatment remains a problem for researchers around the world. Skin cancer has a greater rate of occurrence than all other cancers put together. According to World Health Organization (WHO) reports, one-third of the cancers diagnosed worldwide are skin cancers, and their prevalence is only increasing with time. Malignant melanoma (MM), squamous cell carcinoma (SCC), and basal cell carcinoma (BCC) are the three most commonly reported skin cancers, with BCC and SCC being non-melanocytic cancers. Non-melanocytic skin cancers account for the great majority of skin cancers. Melanoma is the most deadly of skin malignancies and accounts for the majority of skin cancer deaths, despite its rarity [1]. Melanin is a prominent skin pigment produced by melanocytes. Melanin exists in variable degrees depending on a population's historical sun exposure and determines an individual's skin, hair, and eye colour [2]. The two main chemical types of melanin are eumelanin and pheomelanin. As a dark pigment, eumelanin is a more effective photoprotective agent than the light pigment pheomelanin. Although the pheomelanin levels in light- and dark-skinned persons are about the same, highly pigmented people, thanks to an excess of epidermal eumelanin, are less sensitive to ultraviolet (UV) radiation and other tanning causes and are hence less likely to develop skin cancer. Melanoma affects both men and women, with male patients having a greater fatality rate. According to recent research, skin cancer may be caused by both UV-A and UV-B radiation [3]. Skin cancer can be caused by a number of factors, including a family history of malignant genes, a patient's hair colour, and an increased incidence of benign melanocytic and dysplastic nevi. Melanoma may form from an existing mole, which over time has changed its form, colour, dimension, or texture, or from a recently produced and seemingly aberrant mole. These malignant moles spread fast and widely to other areas of the body, bones and brain included. Even today, the average 5-year survival for advanced-stage melanoma patients is around 15%, whereas early diagnosis raises the survival rate to 95%, demonstrating that survival is directly related to the time it takes to diagnose and treat the disease [4]. According to the annual report of the American Cancer Society for 2019, roughly 96,480 new cases of melanoma were expected to be diagnosed in 2019, with 7230 individuals likely to die [5]. As a result, it is crucial to detect melanoma as early as feasible, which is a difficult task due to the intricacies of its nature and its extremely quick spread compared to other types of skin cancer. The techniques for detecting changes in the skin that raise suspicion of a potential melanoma have changed through time. Melanomas were typically recognized simply by the naked eye until the turn of the twentieth century, depending on the mole's distinguishing traits like volume, ulceration, or bleeding [6]. Early diagnosis, on the other hand, was a pipe dream during those years because, due to the lack of superior technological gear and software imaging tools, one had to rely only on manual observation. As time progressed, non-invasive techniques such as epiluminescence
microscopy and dermoscopy became more popular. These work on the principle of transilluminating the lesion region under intense magnification to analyze its fine details [7]. This method, however, has its limits, as its melanoma detection accuracy is estimated at just 75–84%. Although most dermatopathologists agree on the histological diagnosis of melanocytic lesions using conventional microscopic analysis, some melanocytic neoplasms, known as atypical melanocytic proliferations, require specialist consultation before being categorized as benign or malignant. Computer-aided diagnostic systems (CADS) therefore gradually improved the speed and accuracy of diagnosis as technology advanced and the impact of machine learning on medical research became more apparent. Numerous systems and techniques, such as the Menzies approach, the seven-point checklist, and the ABCD (Asymmetry, Border irregularity, Color variation, and Diameter) rule, have since been devised and implemented, adding to diagnostic efficiency by overcoming the limitations of classic dermoscopy approaches [8]. Though CADS are now integrated with smartphones, the first systems ran on PCs or workstations and allowed doctors and researchers to discover malignant lesions that were not visible to the naked eye [9, 10]. In recent decades, the rising use of artificial intelligence (AI) and machine learning (ML) in healthcare and medicine has sparked a lot of research interest. AI is a computational framework that enables computers to simulate human cognitive capabilities, including learning and reasoning. Artificial intelligence includes both machine learning and its subset, deep learning (DL). Since high-performance computing has grown increasingly accessible, DL approaches based on deep neural networks have grown in popularity. Because of its capability to access a wide range of features, DL gains strength and versatility when working with unstructured data. The data is processed through the DL model's several layers, each of which extracts characteristics. Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and Recursive Neural Networks are some of the architectures that can be used to accomplish deep learning. Transfer learning (TL) is a concept that learning algorithms can use to transfer knowledge between tasks. DL methods take a long time to learn and require a large amount of data to train on; existing CNN models trained on huge datasets can be reused for other recognition tasks [11, 12]. The problem of a restricted number of samples in one or more classes can also be addressed using pre-trained CNNs. Commonly used pre-trained CNNs include GoogleNet, AlexNet, ResNet-50, and VGG-16. This article provides a model for classifying melanoma skin cancer based on a CNN architecture. We assessed the performance of the suggested technique on the ISIC 2020 Challenge data, which is extremely unbalanced, with less than 2 percent malignant samples. We use a random over-sampling approach to overcome this issue: random over-sampling is the process of randomly replicating minority-class examples and adding them to the training dataset. The remainder of the paper is organized as follows: Section 2 describes current relevant work on melanoma skin cancer using machine and deep learning approaches.
The latest state-of-the-art methodologies for DL classification models are discussed in Sect. 3. Section 4 presents the new proposal made in this study. Section 5 brings this paper to a conclusion and discusses future research.
2 Related Work

The study of skin cancer diagnosis via image analysis has advanced greatly over the years, with a variety of methods having been tested. Researchers have used a variety of classification algorithms and strategies to try to enhance diagnostic accuracy. While there is a wide literature on image classification, we focus on deep learning algorithms for skin cancer images. A dermoscopic image library has been created by the International Skin Imaging Collaboration (ISIC). ISIC sponsors an annual competition to examine images of skin lesions and to stimulate research and approaches for automatic melanoma diagnosis [13]. An approach to classifying melanoma cases in dermoscopic pictures by deep learning was suggested by Karki et al. [14]. The method showed that heavy augmentation during training and testing gives encouraging results and demands additional exploration. The suggested method was assessed using the SIIM-ISIC Melanoma Classification 2020 dataset, and an area under the ROC curve of 0.9411 was attained with the best ensemble model. Ha et al. [15] provided the winning solution of the SIIM-ISIC Melanoma Classification Challenge on Kaggle. Their proposed approach is an ensemble of CNNs with different backbones and input sizes. They mixed data from previous years (including ISIC 2018 and ISIC 2019) with the current year's data (ISIC 2020), and fivefold cross-validation was performed on the merged dataset. Although the previous years' data (about 25,000 pictures) is smaller than the current year's, its ratio of confirmed melanoma samples is ten times greater (17.85%), which makes the accuracy (Acc) more consistent. Their cross-validation solution achieved 0.9600 Acc using ISIC 2020 only and 98.45 Acc when merging the three ISIC datasets. A two-stage technique was presented by Yuval et al. [16], in which contextual information from numerous pictures of a patient improves lesion classification. In addition, the ISIC 2019 data were used, as there were very few positive cases in the 2020 training data. A model for the classification of skin lesions was proposed by Kassem et al. [17]. The suggested methodology employs transfer learning and a pre-trained GoogleNet model; the model parameters are used as initial values and vary over time. The HAM10000 dataset was employed to evaluate the model's capacity to recognize different types of skin lesions. Ali et al. [18] suggested a deep CNN model for accurately classifying benign and malignant skin lesions. In preprocessing, filters were first used to remove noise and artifacts; then the input images were normalized, and features contributing to a precise classification were extracted. Finally, data augmentation increased the number of pictures, which enhanced the classification accuracy. The DCNN model was compared to transfer learning models such as ResNet, MobileNet, AlexNet,
DenseNet, and VGG-16 to assess the performance of the proposed model. The model was assessed using the HAM10000 dataset, and they obtained a maximum of 93.16% training accuracy and 91.93% test accuracy.
3 Materials and Methods

3.1 Imbalanced Data Handling

Class imbalance is a type of data irregularity which is very widespread in a number of real-world problems such as medical diagnosis, fraud detection, etc. [19, 20]. A training set P ⊆ X is regarded as class-imbalanced when it does not include an equal number of training instances from all classes, particularly from those which match rare and hence relevant occurrences. This leads to a classification biased in favor of the majority classes and therefore to greater misclassification of the minority classes. Certainly, these biases should be effectively adjusted for during performance evaluation. Different techniques have been suggested to relieve the problem of data imbalance. Three main approaches to learning from imbalanced data can be distinguished: data-level, algorithm-level, and hybrid approaches [21]. The data-level approach focuses on modifying the training set to make it appropriate for a conventional learning algorithm. As far as balancing distributions is concerned, we can distinguish methods that generate new items for the minority classes (over-sampling) and methods that remove objects from the majority classes (under-sampling). Standard methods choose the target samples for preprocessing at random. This, nevertheless, often means that relevant samples are removed or that unreasonable new items are introduced. Thus, more advanced strategies that attempt to maintain group structure and/or generate new data based on the underlying distributions have been presented. This family of algorithms also includes methods for cleaning overlapping items and deleting noisy objects, which could have a bad effect on learning [22]. Algorithm-level approaches focus on adjusting existing learners to relieve their preference for the majority classes. This demands a good understanding of the modified learning algorithm and accurate identification of the reasons it fails to handle imbalanced classes. Cost-sensitive techniques are the most common method: the learner is modified to include different penalties for each group of examples considered. In this way, we increase the cost of less-represented classes and increase their importance during the learning process (which should aim to minimize the global cost associated with mistakes) [23]. Hybrid approaches focus on integrating the above approaches to exploit their strengths and reduce their weaknesses. The fusion of data-level solutions with classifier ensembles is extremely popular and leads to robust and efficient learners [21].
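As a small illustration of the cost-sensitive idea, the sketch below derives per-class weights inversely proportional to class frequency, the same n_i = N/(2·N_i) scheme this chapter applies later when training the model, and shows a Keras-style usage. The model and data names are placeholders.

```python
import numpy as np

def balanced_class_weights(labels):
    """Weight each class by n_i = N / (k * N_i), inversely to its frequency."""
    classes, counts = np.unique(labels, return_counts=True)
    n, k = len(labels), len(classes)
    return {int(c): n / (k * cnt) for c, cnt in zip(classes, counts)}

labels = np.array([0] * 980 + [1] * 20)        # a toy 98%/2% imbalance
weights = balanced_class_weights(labels)        # {0: ~0.51, 1: 25.0}
# model.fit(x_train, y_train, class_weight=weights)  # Keras-style usage
```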
3.2 Deep Learning Approach

Traditional neural networks often comprise three layers: input, output, and one hidden layer. A difficulty observed with training methods for such networks is that, when weights are updated, gradient values can become equal to or near zero; this problem is referred to as the vanishing gradient. Deep convolutional architectures are used to overcome such classical learning issues. For skin disease classification, we utilize EfficientNet, one of the recent state-of-the-art deep learning classification networks. EfficientNet's success has grown as models on the ImageNet dataset have become more complicated since 2012, although many have not been computationally efficient. EfficientNet can be considered a group of CNN models that rank among the state-of-the-art models, with an accuracy of 84.4 percent using 66 million parameters on the ImageNet classification problem. The EfficientNet family consists of 8 models, B0–B7, and as accuracy noticeably increases, the number of computed parameters does not rise much. Instead of the Rectified Linear Unit (ReLU) activation function used by previous conventional CNNs, EfficientNet uses a novel activation function called Swish. The objective is to develop more effective methods using smaller models in the field of deep learning architectures. EfficientNet delivers more efficient results with uniform scaling of depth, width, and resolution, in contrast to other cutting-edge models, while scaling the model down. The first step of the compound scaling approach is a grid search that links the different scaling dimensions under a fixed resource limitation of the baseline network. This determines an appropriate scaling factor for the depth, width, and resolution dimensions. These factors are then applied to scale the baseline network to the intended target network. EfficientNet is mostly built on the inverted MBConv block, which first appeared in MobileNetV2 [24], but it is employed significantly more than in MobileNetV2 because of the higher FLOPS budget (specifically, floating-point operations per second). In MBConv, blocks are made up of a layer that first expands and then compresses the channels, so that the direct connections between bottlenecks connect far fewer channels than the expansion layers use. In comparison with typical layers, this architecture uses depthwise separable convolutions, which reduce computation by nearly a factor of k², where k is the kernel size. Compound scaling uses the compound coefficient ϕ to scale depth, width, and resolution uniformly, in accordance with Eq. (1):

$$\mathrm{depth}: d = \alpha^{\phi}; \quad \mathrm{width}: w = \beta^{\phi}; \quad \mathrm{resolution}: r = \gamma^{\phi}, \qquad \alpha \ge 1,\ \beta \ge 1,\ \gamma \ge 1 \tag{1}$$

where α, β, and γ are constants determined through grid search. Intuitively, α, β, and γ indicate how additional resources are assigned to network depth, width, and resolution, respectively. ϕ is a user-specified coefficient that governs how many more resources are available for model scaling [25].
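For illustration, Eq. (1) can be evaluated directly, as in the tiny sketch below. The base constants α = 1.2, β = 1.1, γ = 1.15 are the grid-search values reported in the original EfficientNet paper and are an assumption here, not values from this chapter.

```python
# Compound scaling of Eq. (1), assuming the EfficientNet-B0 grid-search
# constants alpha=1.2, beta=1.1, gamma=1.15 from the original paper.
alpha, beta, gamma = 1.2, 1.1, 1.15

def compound_scale(phi):
    """Return (depth, width, resolution) multipliers for coefficient phi."""
    return alpha ** phi, beta ** phi, gamma ** phi

for phi in range(7):
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")
```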
4 The Proposed Melanoma Classification Model

The World Health Organization (WHO) announced in 2018 that more than 14 million individuals were newly diagnosed with cancer and that there were more than 9.6 million deaths worldwide. These figures reveal that cancer is a leading cause of human death. Skin cancer starts on the upper skin layer, which is visible and can be observed with the human eye, and it is one of the most significant causes of death in the world. Various forms of skin malignancy have been identified. Melanoma is a well-known type of skin cancer, usually the most aggressive lesion relative to other skin lesions, and one of the most rapidly spreading; recent studies show that the number of skin cancer patients increases year by year [26]. The precise classification of skin lesions by automatic computer-supported methods benefits people's lives. In this paper, we propose a melanoma classification model that will help to improve diagnostic speed and precision. The classifier was trained on the ISIC 2020 dataset, which is imbalanced; we use a random over-sampling approach to handle the imbalance. Figure 1 depicts the architecture of the proposed model for melanoma classification.
Fig. 1 Architecture of the proposed model for melanoma classification
Fig. 2 ISIC 2020 dataset samples of melanoma and non-melanoma: a malignant, b benign
4.1 Description of Dataset

In this article, we use the 2020 SIIM-ISIC Melanoma Classification Challenge dataset to evaluate the proposed model. The ISIC 2020 dataset was made available for download as part of a live competition on the Kaggle platform. This dataset includes 33,126 images from 2056 patients, collected from various medical centers around the world, including the Memorial Sloan Kettering Cancer Center in New York, the University of Queensland in Brisbane, the Melanoma Institute Australia and the Melanoma Diagnosis Centre in Sydney, the Medical University of Vienna in Austria, and others. The 2056 patients came from three continents, with 20.8 percent having at least one melanoma and 79.2 percent having no melanomas; each patient had an average of 16 lesions. There are just 584 (1.8 percent) histopathologically confirmed melanomas among the 33,126 dermoscopic photos, compared to benign melanoma mimickers. The training images had an average pixel count of 12,743,090, ranging from 307,200 to 24,000,000. Furthermore, the data is available in two separate formats, JPEG and TFRecord; the JPEG format is used in this paper. Some samples of melanoma and non-melanoma lesions are shown in Fig. 2.
4.2 Data Preprocessing

(1) Data Over-Sampling

The ISIC 2020 skin cancer dataset is severely imbalanced, containing only 584 confirmed melanoma cases among its 33,126 images. A random over-sampling strategy is utilized to increase the size of the minority class (melanoma cases) by randomly replicating some of the melanoma photos.
Fig. 3 The class distribution of the dataset before over-sampling (98.2% benign, 1.8% melanoma) and after over-sampling (70% benign, 30% melanoma)
Figure 3 depicts the distribution of classes before and after applying the over-sampling method. After it has been over-sampled, the dataset is separated into three sets: 80% of the dataset becomes the training set, 10% becomes the test set, and 10% becomes the validation set.
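A minimal NumPy sketch of this random over-sampling step follows, assuming images and labels are already loaded as arrays; the 30% minority target matches Fig. 3, while the function and variable names are placeholders.

```python
import numpy as np

def random_oversample(images, labels, minority=1, target_frac=0.30, seed=0):
    """Randomly replicate minority-class samples until they reach target_frac."""
    rng = np.random.default_rng(seed)
    minority_idx = np.flatnonzero(labels == minority)
    majority_count = int(np.sum(labels != minority))
    # Solve n_min / (n_min + n_maj) = target_frac for the minority count
    wanted = int(target_frac / (1 - target_frac) * majority_count)
    needed = max(wanted - len(minority_idx), 0)
    extra = rng.choice(minority_idx, size=needed, replace=True)
    idx = np.concatenate([np.arange(len(labels)), extra])
    rng.shuffle(idx)
    return images[idx], labels[idx]
```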
(2) Data Augmentation

Deep learning classification networks require a substantial amount of training data to be properly trained. Unfortunately, the small size of current dermoscopy image databases, as well as the scarcity of reliably annotated ground truths, continues to hinder the automatic diagnosis of skin lesion disorders. To address this issue, augmentation operations were applied to the training set to increase the number of training images and avoid the over-fitting that can occur when only a small amount of training data is used during the training process. The data was augmented with several transformations, such as random cropping, rotation, mirroring, and color-shifting using principal component analysis.
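Such transformations can be expressed with the Keras ImageDataGenerator used later in the chapter; the parameter ranges below are illustrative assumptions, since the chapter does not report exact values.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Illustrative augmentation generator; the exact parameter values used in
# this chapter are not reported, so these ranges are assumptions.
train_datagen = ImageDataGenerator(
    rotation_range=30,          # random rotation
    width_shift_range=0.1,      # random cropping via shifts
    height_shift_range=0.1,
    horizontal_flip=True,       # mirroring
    vertical_flip=True,
    channel_shift_range=20.0,   # simple color shifting
)
# train_gen = train_datagen.flow(x_train, y_train, batch_size=128)
```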
(3) Image Resizing

The dataset images were resized to 528 × 528 to be compatible with the EfficientNet-B6 architecture.
4.3 Model Implementation and Training

The Keras package, which is part of the TensorFlow Python library, was used to implement this methodology. Specifically, the Keras ImageDataGenerator was used to create augmentation generators for the training and validation data. On top of each EfficientNet backbone, the following head was added: a global average pooling layer; a dropout layer with a rate of 0.5; a dense layer of 256 neurons with the ReLU activation function; a dropout layer with a rate of 0.4; and finally a dense layer of 4 neurons with the sigmoid activation function. Subsampling is done by the pooling layer: the layer decreases the previous layer's dimensions by consolidating neuron clusters into single neurons. In a dropout layer, the dropout rate is the rate at which input units are set to 0; units that are not set to 0 are scaled up by 1/(1 − rate) to guarantee that the sum over all inputs remains constant. Each neuron in a dense layer is linked to every neuron in the layer before it; as a result, these layers greatly increase the number of parameters in a network. For the training set, we adopt a weighted cross-entropy loss function by giving a larger frequency-based weight to the minority class. Each class $i$ is weighted by a factor $n_i = \frac{N}{2 \cdot N_i}$, where $N$ is the total number of training photos, $N_i$ is the number of images in class $i$, and 2 is the number of classes. The Adam optimizer was employed with a learning rate of 0.0001; this value was empirically verified and found to offer the best results. The number of training epochs is set to 15 and the batch size is set to 128.
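A minimal Keras sketch of this setup follows, under two stated assumptions: a single sigmoid output neuron is used for the binary task (the chapter states 4 neurons), and the data generators in the commented fit call are placeholders.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Sketch of the classification head described above, on an EfficientNet-B6
# backbone with the 528x528 input size used in this chapter.
backbone = tf.keras.applications.EfficientNetB6(
    include_top=False, weights="imagenet", input_shape=(528, 528, 3))

model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.4),
    layers.Dense(1, activation="sigmoid"),  # binary output; the chapter states 4 neurons
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])

# N = len(y_train); weights = {c: N / (2 * (y_train == c).sum()) for c in (0, 1)}
# model.fit(train_gen, validation_data=val_gen, epochs=15, class_weight=weights)
```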
5 Experimental Results

The results of the suggested melanoma classification model are detailed and discussed in this section. Different metrics, such as accuracy (Acc), sensitivity (Sen), specificity (Spe), and F-score, are used to measure the results of EfficientNet and the other state-of-the-art CNN models discussed in this paper. Figure 4 depicts the proposed model's classification accuracy during the training phase using EfficientNet-B6. We compare and evaluate the performance of the proposed model against other CNN architectures. Table 1 compares the overall performance of the proposed model for melanoma detection and classification with other state-of-the-art models previously proposed by Ha et al. [15], Kassem et al. [17], and Ali et al. [18]. It is worth noting that the ISIC 2020 dataset has only been used in a few studies. All of the findings show that the proposed technique achieves the best Acc, Sen, Spe, and F-score when using ISIC 2020 only. In a medical computer-assisted diagnostics system, excellent sensitivity and specificity results are critical.
Fig. 4 Training accuracy of the proposed model
Table 1 Comparison of the suggested model's obtained results with those of other proposed models

| Model | Acc | Sen | Spe | f-score | Dataset |
| Ha et al. [15] | 96.00 | – | – | – | ISIC 2020 |
| Ha et al. [15] | 98.45 | – | – | – | ISIC 18-19-20 |
| Kassem et al. [17] | 94.92 | 79.8 | 97 | – | ISIC 2019 |
| Ali et al. [18] | 93.16 | – | – | 95.09 | HAM10000 |
| Proposed model | 97.84 | 99.1 | 97.1 | 98.1 | ISIC 2020 |
6 Conclusion
Skin cancer is among the most aggressive types of cancer due to its quick spread to other parts of the body. Melanoma is a deadly type of skin cancer that can nevertheless be cured with a simple operation if caught early enough. As a result, dermatologists can cure patients and save their lives by accurately detecting skin lesions in their early stages. This article presented a melanoma classification model. The suggested model's ability to classify melanoma is tested on ISIC 2020, the most recent well-known public challenge dataset. The ISIC 2020 dataset is severely imbalanced. We utilize a random over-sampling strategy followed by data augmentation to deal with the imbalanced data and avoid the overfitting problem. Furthermore, a class-weight methodology was used, assigning a weight to each class, so that the classifier learns equally from all classes while emphasizing the minority class. EfficientNet-B6 is used in the proposed melanoma classification model to classify patients' skin lesions as benign or malignant. The proposed model generally scored better than other state-of-the-art models previously proposed, in terms of accuracy, sensitivity, specificity, and F-score.
References
1. Koh, H. K., Geller, A. C., Miller, D. R., Grossbart, T. A., & Lew, R. A. (1996). Prevention and early detection strategies for melanoma and skin cancer. Current status. Archives of Dermatology, 132(4), 436–443.
2. Parkin, D. M., Mesher, D., & Sasieni, P. (2011). 13. Cancers attributable to solar (ultraviolet) radiation exposure in the UK in 2010. British Journal of Cancer, 105(S2), S66–S69.
3. Radiation: Ultraviolet (UV) radiation and skin cancer. Who.int. [Online]. https://www.who.int/news-room/q-a-detail/radiation-ultraviolet-(uv)-radiation-and-skin-cancer. Last accessed: 31 May 2021.
4. Goodson, A. G., & Grossman, D. (2009). Strategies for early melanoma detection: Approaches to the patient with nevi. Journal of the American Academy of Dermatology, 60(5), 719–735; quiz 736–738.
5. Siegel, R. L., Miller, K. D., & Jemal, A. (2019). Cancer statistics, 2019. CA: A Cancer Journal for Clinicians, 69(1), 7–34.
6. Mayer, J. E., Swetter, S. M., Fu, T., & Geller, A. C. (2014). Screening, early detection, education, and trends for melanoma: current status (2007–2013) and future directions: Part II. Screening, education, and future directions. Journal of the American Academy of Dermatology, 71(4), 611.e1–611.e10; quiz 621–622.
7. Rigel, D. S., Russak, J., & Friedman, R. (2010). The evolution of melanoma diagnosis: 25 years beyond the ABCDs. CA: A Cancer Journal for Clinicians, 60(5), 301–316.
8. Banerjee, S., Singh, S. K., Chakraborty, A., Das, A., & Bag, R. (2020). Melanoma diagnosis using deep learning and fuzzy logic. Diagnostics (Basel), 10(8), 577.
9. Elansary, I., Darwish, A., & Hassanien, A. E. (2021). The future scope of internet of things for monitoring and prediction of COVID-19 patients. In Digital Transformation and Emerging Technologies for Fighting COVID-19 Pandemic: Innovative Approaches (pp. 235–247). Cham: Springer International Publishing.
10. Elaraby, A., & Elansary, I. (2021). A framework for multi-threshold image segmentation of low contrast medical images. Traitement du Signal, 38(2), 309–314.
11. Emara, T., Afify, H. M., Ismail, F. H., & Hassanien, A. E. (2019). A modified inception-v4 for imbalanced skin cancer classification dataset. In 2019 14th International Conference on Computer Engineering and Systems (ICCES).
12. Yosinski, J., Clune, J., Bengio, Y., & Lipson, H. (2014). How transferable are features in deep neural networks? arXiv [cs.LG].
13. Rotemberg, V., et al. (2021). A patient-centric dataset of images and metadata for identifying melanomas using clinical context. Scientific Data, 8(1), 34.
14. Karki, S., Kulkarni, P., & Stranieri, A. (2021). Melanoma classification using EfficientNets and ensemble of models with different input resolution. In 2021 Australasian Computer Science Week Multiconference.
15. Ha, Q., Liu, B., & Liu, F. (2020). Identifying melanoma images using EfficientNet ensemble: Winning solution to the SIIM-ISIC melanoma classification challenge. arXiv [cs.CV].
16. Yuval, J., & O'Gorman, P. A. (2020). Stable machine-learning parameterization of subgrid processes for climate modeling at a range of resolutions. Nature Communications, 11(1), 3295.
17. Kassem, M. A., Hosny, K. M., & Fouad, M. M. (2020). Skin lesions classification into eight classes for ISIC 2019 using deep convolutional neural network and transfer learning. IEEE Access, 8, 114822–114832.
18. Ali, M. S., Miah, M. S., Haque, J., Rahman, M. M., & Islam, M. K. (2021).
An enhanced technique of skin cancer classification using deep convolutional neural network with transfer learning models. Machine Learning with Applications, 5, 100036.
19. Mullick, S. S., Datta, S., Dhekane, S. G., & Das, S. (2020). Appropriateness of performance indices for imbalanced data classification: An analysis. Pattern Recognition, 102, 107197.
20. Haixiang, G., Yijing, L., Shang, J., Mingyun, G., Yuanyue, H., & Bing, G. (2017). Learning from class-imbalanced data: Review of methods and applications. Expert Systems with Applications, 73, 220–239.
21. Krawczyk, B. (2016). Learning from imbalanced data: Open challenges and future directions. Progress in Artificial Intelligence, 5(4), 221–232.
22. Stefanowski, J. (2016). Dealing with data difficulty factors while learning from imbalanced data. In Studies in Computational Intelligence (pp. 333–363). Cham: Springer International Publishing.
23. Zhou, Z.-H., & Liu, X.-Y. (2010). On multi-class cost-sensitive learning. Computational Intelligence, 26(3), 232–257.
24. Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen, L.-C. (2018). MobileNetV2: Inverted residuals and linear bottlenecks. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
25. Tan, M., & Le, Q. V. (2019). EfficientNet: Rethinking model scaling for convolutional neural networks. arXiv [cs.LG].
26. Swetter, S. M., et al. (2019). Guidelines of care for the management of primary cutaneous melanoma. Journal of the American Academy of Dermatology, 80(1), 208–250.
Machine Learning-Supported MRI Analysis of Brain Asymmetry for Early Diagnosis of Dementia

Nitsa J. Herzog and George D. Magoulas
Abstract The chapter focuses on the detection of early degenerative processes in the human brain using computational algorithms and machine learning classification techniques. The research is consistent with the hypothesis that there are changes in brain asymmetry across the stages of dementia and Alzheimer's Disease. The proposed approach considers the pattern of changes in the degree of asymmetry between the left and right hemispheres of the brain, using structural magnetic resonance imaging from the ADNI database and image analysis techniques. An analysis of levels of asymmetry is performed with the help of statistical features extracted from the segmented asymmetry images. The diagnostic potential of these features is explored using variants of Support Vector Machines and a Convolutional Neural Network. The proposed approach produces very promising results in distinguishing between cognitively normal subjects and patients with early mild cognitive impairment and Alzheimer's Disease, providing evidence that image asymmetry features or MRI images of segmented asymmetry can offer insight into the early diagnosis of dementia.
N. J. Herzog (B), Department of Computer Science, Birkbeck, University of London, London WC17HZ, UK, e-mail: [email protected]
G. D. Magoulas, Birkbeck Knowledge Lab, University of London, London WC17HZ, UK, e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. A. E. Hassanien et al. (eds.), Medical Informatics and Bioimaging Using Artificial Intelligence, Studies in Computational Intelligence 1005, https://doi.org/10.1007/978-3-030-91103-4_3

1 Introduction
The human brain is examined with the help of advanced modern technology, which provides detailed scans of the brain tissues and demonstrates the functional activity of the brain regions associated with specific mental or behavioural tasks. Brain scanning is divided into two large categories: structural imaging and functional imaging. The most common structural neuroimaging methods are X-ray, structural Magnetic Resonance Imaging (sMRI), Diffusion Tensor Imaging (DTI),
a modification of MRI, and Computerized Tomography (CT) [1]. Well-known functional methods are Electroencephalography (EEG), functional Magnetic Resonance Imaging (fMRI), and Positron Emission Tomography (PET) [2]. MRI accounts for around 50% of the imaging data used for the diagnosis of brain diseases [3]. A significant advantage of MRI over popular CT and X-ray scans is the absence of ionizing radiation during the MRI session. The MRI contrast agent is also less allergenic than the iodine-based substances used in CT scans and X-rays. Another advantage of MRI is its high level of soft-tissue contrast resolution compared to a CT scan, which is superior at imaging hard anatomical structures. All these factors make MRI the method of choice for regular health checks in the population older than 60. High-resolution images make a significant impact on the computer-aided diagnosis of brain-related disorders.

Early dementia, or amnestic Mild Cognitive Impairment (aMCI), belongs to the group of neurocognitive disorders and is characterized by some degree of short-term memory loss, language difficulties, impaired reasoning and judgment, and difficulty coping with daily routines [4]. Approximately 10% of the world population aged between 70 and 79, and 25% of the population older than 80, are diagnosed with MCI. It is acknowledged that 80% of patients with aMCI develop severe dementia, in the form of Alzheimer's disease, within 7 years. The proportion of dementia in the general population is 7.1%, which is roughly 46.8 million people. Neurodegenerative disorders, such as Alzheimer's disease (AD), which is the most common, followed by vascular dementia, Lewy body dementia, frontotemporal dementia, Parkinson's disease and Huntington's disease, severely affect memory and other mental tasks [5]. As amnestic MCI often becomes a prodrome of Alzheimer's disease, it is important to identify this form of dementia at an early stage, when proper care and treatment can stop or slow down the progression of the disease.

The diagnosis of MCI is based on neuropsychological testing, blood testing, and neuroimaging [6, 7]. The Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), and Geriatric Mental State Examination (GMS) are the most common cognitive screening assessments. The tests usually include groups of questions assessing orientation in place and time, short-term memory, attention, recall, and the language abilities of coherent speaking and understanding. For clinical judgment between MCI and AD, certain types of biomarkers measured in the cerebrospinal fluid (CSF) are used. Amyloid-beta 42 (Ab42), total tau (T-tau), and phosphorylated tau (P-tau) globulins are identified in the early stage of Alzheimer's disease, whilst hippocampal volume and the rate of brain atrophy finalize the diagnosis.

The research presented in this chapter is focused on the early detection and classification of dementia using sMRI, when changes in the brain are not obvious to radiologists or clinical practitioners, the amyloid-beta deposition may or may not be present, and the tau globulin is absent. This work involves the segmentation and evaluation of asymmetries in the cortex of the brain and the classification of dementia using machine learning algorithms.
2 Literature Review
The two brain hemispheres have slightly different anatomy and function, and a detailed examination of their structure shows a variety of asymmetrical areas. The revealed lateralization originates from genetic and epigenetic factors in the evolutionary development of the human brain [8]. Exposure to pathological factors during life may also cause changes in the lateralization of the brain. The evolutionary expansion of the left-hemispheric area is closely connected to speech production, perception, and motor dominance. The earliest observations of brain asymmetry were reported by the French physician, anatomist and anthropologist Pierre Paul Broca in the nineteenth century, and about a decade later by the German neurologist Carl Wernicke. They found that the language of the patient was severely impaired when a stroke or tumor had affected the left brain hemisphere. Broca localized the afflicted area in the anterior left hemisphere, including parts of the inferior frontal gyrus (the so-called "Broca's area"). A pathological process in that area of the brain significantly changed the language production and syntactic processing of the patients. Changes in language comprehension, such as understanding spoken words, were discovered by Wernicke primarily in the posterior temporal-parietal region (the so-called "Wernicke's area"). Thus, it was confirmed that differences in the anatomical structure of the brain correlate with functional lateralization. The left hemisphere is mostly responsible for language processing and logical thinking. The right hemisphere specializes in spatial recognition, attention, and musical and artistic abilities. Emotions and their manifestation are also connected to the right hemisphere [9].

Brain asymmetry is closely related to human handedness. An interesting fact is that foetal orientation during pregnancy is correlated with the handedness of the newborn child. These asymmetries are first observed at 29–31 weeks of gestational age. Almost 90% of the human population is right-handed [10]. "Petalia and Yakovlevian torque" [3] is a term that describes an overall leftward posterior and rightward anterior asymmetry usually present in right-handed individuals. Around 95% of right-handed persons have their speech and language zones in the left hemisphere, while only 5% show language zone representation in the right hemisphere or bilaterally. Compared to right-handed people, the left-handed demonstrate a higher ratio of hemispheric lateralization. A strongly dominant right hemisphere presents in only 7% of left-handers. This proportion can vary with age: up to 85% of left-handed children have language area dominance in the left hemisphere [11].

Some studies highlight the differences in hemispheric lateralization between males and females [12]. The distinctions can be noticeable in linguistic performance, visuospatial skills, or motor skills. Female brains show more symmetry between the two cerebral hemispheres. The level of asymmetry also depends on the age of the person. The functional hemispheric asymmetry in the frontal lobes of young adults is more lateralized than in elderly healthy persons. The reduced activity of the frontal cortex
leads to age-related cognitive decline. It is registered by functional neuroimaging as changes in the domains of semantic, episodic, or working memory, perception, and inhibitory control. Elderly people demonstrate compensatory processes in the brain that transform brain lateralization, sometimes appearing as bilateral hemispheric activity [13].
2.1 Brain Asymmetry for the Diagnosis of Brain-Related Disorders
The brain regions show a progressive decrease in the degree of asymmetry in patients with Mild Cognitive Impairment (MCI) and an increase of asymmetry in patients with Alzheimer's disease (AD) [14]. To test this concept, Yang et al. used diffusion tensor image tractography to construct the hemispheric brain white matter networks. The researchers concluded that the brain white matter (WM) networks show a rightward topological asymmetry, where the right cerebral hemisphere becomes dominant in AD patients but not in the early phase of MCI. Left-hemisphere regions are affected earlier and more severely. The abnormal hemispheric asymmetry of AD and MCI patients significantly correlates with memory performance.

The functional cortical asymmetry progressively decreases in patients with MCI [15]. Liu et al.'s research was based on whole-brain imaging. They registered and compared the spontaneous brain activity in patients with MCI, AD and NC (normal controls) using functional MRI. They discovered that patients with MCI and AD have abnormal rightward laterality in the brain compared to healthy controls, who show leftward lateralization. At the same time, the alterations in brain lateralization between patients with MCI and normal controls differed from those between patients with AD and normal controls. The rightward lateralization in patients with MCI and AD may reflect either a relative increase in brain activation within the right hemisphere or a relative decrease in brain activation within the left hemisphere. Patients with MCI showed increased activation of several brain regions in the right hemisphere during word memory tasks. Those areas were compensatorily activated compared to the activation zones in the left hemisphere of the healthy controls. Liu et al. suggest that the reason for the abnormal right-lateralized pattern in patients with AD might be more complex than in patients with MCI. They believe that the functional results are potentially influenced by structural differences between the groups, but they did not investigate the relationship between brain structural asymmetry and brain functional lateralization in their study. All participants in the study were right-handed, so the researchers did not determine whether the same right brain lateralization occurs in left-handed persons. They found a significant difference in brain functionality between MCI and AD patients: the patients with MCI had normal leftward lateralization with some elements of abnormal rightward activity, while in patients with AD the normal pattern of left lateralization disappeared and an abnormal right-lateralized pattern was detected.
The degree of asymmetry is not the same in different parts of the brain [16]. Kim et al. tested the hypothesis that individuals with aMCI and different stages of AD have reduced asymmetries in the heteromodal neocortex. They found significant changes in the degree of asymmetry in the inferior parietal lobe of right-handed adults. The cortical asymmetry was investigated using surface-based morphometry (SBM) to measure cortical thickness. Their results show that the neocortical thickness asymmetries of the medial and lateral sides of the right and left parts of the brain differed from each other. A decrease of asymmetry was registered in the lateral parts of the frontal and parietal lobes, and an increase in the temporal lobe. The left perisylvian areas responsible for language functions, except Broca's speech area, demonstrated leftward asymmetry. Other areas of the brain, specialized in spatial perception, facial recognition, and memory processing, showed rightward asymmetry. Kim et al. assumed that the cortical asymmetry shown in healthy controls generally decreases in AD. However, they did not directly examine the changes in cortical asymmetry during AD progression, which would give a clearer picture of the increase in asymmetry in severe AD. It is also unclear whether similar changes in cortical asymmetry can be caused by other degenerative diseases.

Wachinger et al. investigated the neurodegenerative processes in the subcortical brain structures of patients with AD [17]. They proposed a measure of brain asymmetry based on spectral shape descriptors from the BrainPrint, an ensemble of shape descriptors that represents brain morphology and captures shape information of cortical and subcortical structures. Progressive dementia is associated with a significant increase in the neuroanatomical asymmetry of the hippocampus and amygdala. The research findings (see Table 1 for an overview)

Table 1 State-of-the-art neuroscientific methods of registration of brain asymmetry

| References | Method | Conclusion |
| Yang et al. [14] | Diffusion tensor image (DTI) tractography to construct the hemispheric brain white matter networks | Decrease of asymmetry in patients with MCI and an increase of asymmetry in patients with AD |
| Liu et al. [15] | Registration of the spontaneous brain activity in patients with MCI, AD and NC using functional MRI (fMRI) | The functional cortical asymmetry progressively decreases in patients with MCI |
| Kim et al. [16] | Surface-based morphometry (SBM) to measure cortical thickness | The degree of asymmetry is not the same in the different parts of the brain |
| Wachinger et al. [17] | Measurement of brain asymmetry based on spectral shape descriptors using BrainPrint | Progressive dementia is associated with an increase in asymmetry in the hippocampus and amygdala; shape analysis can detect the progression of dementia earlier than volumetric measures |
prove that shape analysis can detect the progression of dementia earlier than volumetric measures. Shape asymmetry, based on longitudinal asymmetry measures in the hippocampus, amygdala, caudate and cortex, can be a powerful imaging biomarker for the early presymptomatic prediction of dementia.
2.2 Classification of Alzheimer's Disease and Early Mild Cognitive Impairment Using the ADNI Database
The Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu) was launched in 2003 as a public–private partnership led by Michael W. Weiner, MD. The primary goal of ADNI has been to test whether serial magnetic resonance imaging (MRI), positron emission tomography (PET), other biological markers, and clinical and neuropsychological assessment can be combined to measure the progression of mild cognitive impairment (MCI) and early Alzheimer's disease (AD) (for up-to-date information, see www.adni-info.org). Multiple research studies have benefited from MRI data of the ADNI database. The rest of this section focuses on machine learning approaches for the modelling and classification of neurodegenerative disease, including mild cognitive impairment and Alzheimer's Disease, and stable and progressive forms of MCI, as these are most relevant to this work.

A large portion of research in this area consists of machine learning-based diagnostic approaches that use feature engineering, as this has been shown to contribute towards successful modelling. For example, Beheshti et al. [18] implemented feature ranking and a genetic algorithm to analyze structural magnetic resonance imaging data of 458 subjects. The researchers state that the proposed system can distinguish between stable and progressive MCI and is able to predict the conversion of MCI to Alzheimer's Disease one to three years before it is clinically diagnosed. Beheshti et al. [18] identified atrophic gray matter (GM) regions using voxel-based morphometry (VBM). The features were extracted after applying a 3D mask and were ranked according to their t-test scores. Features with t-test values higher than 70% were combined into new subsets. A genetic algorithm with the Fisher criterion function [19] evaluated the separation between the two groups of data and helped to select the most discriminative feature subsets for classification. The classification process was finalized with a linear SVM [20], and classification performance was evaluated with a tenfold cross-validation procedure. The calculated accuracy was 93.01% for stable MCI and 75% for progressive MCI. The feature selection process raised the accuracy from 78.94 to 94.73%.

Another group of scientists [21] investigated the conversion of MCI to AD. Their algorithm identifies AD one to three years prior to the development of clinical symptoms. The proposed algorithm is based on aggregated biomarkers and a random forest (RF) classifier [22]. The MRI data were preprocessed by removing age-related changes in the anatomical structure of the brain
using a linear regression model. Feature selection was implemented on AD and NC images by a regularized logistic regression (RLG) algorithm [23]. The classification stage is performed using a semi-supervised low-density separation (LDS) method (LDS is a two-step algorithm that relies on a graph-distance kernel and Transductive Support Vector Machine (TSVM) learning). At the beginning of this stage, the classifier is trained with labelled AD and NC data. Then, unlabeled MCI images are fed into the classifier, which helps to separate the stable and progressive MCI and label them. In the final stage, the output of the LDS classifier is combined, as an input feature, with age and cognitive measurement feature vectors for the RF classifier. The aggregated biomarker distinguishes between stable and progressive MCI and approximates the probability of conversion of MCI to AD. The testing of image sequences of 825 subjects was done with a tenfold cross-validation method. The results showed that the predictive performance of the aggregated biomarker is higher than that of single biomarkers. Combining MRI data with cognitive and age measures improves the classification accuracy by 5.5% (from 76.5 to 82%).

Another approach [24] proposed an algorithm for the Joint Regression and Classification (JRC) problem in the diagnosis of MCI and AD. The idea behind this regularization-based method is to consider the related similarity of features, samples, and their responses. Features are related to each other if their respective weight coefficients are similar. The weight coefficients are linked to the response variables via feature vectors and demonstrate the same type of relation. The same rule is applied to a similar pair of samples and their respective response values. The regularization method was tested with MRI and PET (Positron Emission Tomography) image sequences of 202 subjects [25]. The images were separated into 93 regions of interest (ROI) using a volumetric measure of the gray matter of the brain. Structural MRI scans were aligned with functional PET images using affine registration. The average intensity values were calculated for each ROI. Structurally and functionally related features were extracted from each ROI and sent to the feature selection process using the regularization algorithm. The extracted features were expected to jointly predict one clinical label and two clinical scores. Imaging data for clinical labelling were classified with an SVM. Other types of data, obtained from cognitive tests, were used for training two more Support Vector Regression (SVR) models for the prediction of clinical scores of the AD Assessment Scale-Cognitive Subscale (ADAS-Cog) and the Mini-Mental State Examination (MMSE) [26]. The results were obtained using binary classification methods and tenfold cross-validation. In the initial stage of the experiment, the classification and regression tasks were performed without feature selection; the results of this stage were considered as a baseline. In the next run, the baseline was compared to single-task results, where the selected features are classified independently, and to multi-task results, where the features are classified jointly for the classification and regression models. The proposed joint approach outperforms the single-task approach by 5.6%. Compared to the baseline, the average accuracy for a single task increases by 6%, and for multi-task
by 8.8%. The proposed models were compared with two state-of-the-art methods: High-Order Graph Matching (HOGM) [27] and Multi-Modal Multi-Task (M3T) [28]. The Joint Regression and Classification model outperforms its competitors, improving classification accuracy by 5% (vs. HOGM) and 4.7% (vs. M3T) for MRI, and 4.6% (vs. HOGM) and 4.2% (vs. M3T) for PET. The highest achieved accuracy is 95.7% for AD versus NC and 79.9% for MCI versus NC.

Another stream of research takes advantage of machine learning methods that generate features as part of the training process. These methods employ Artificial Neural Networks and Deep Learning and have recently attracted a lot of attention in the area of medical image analysis and classification. They can process a large amount of data and learn in a supervised (labelled) or unsupervised (unlabelled) mode. In particular, diagnostic approaches that use Deep Learning in most cases do not require complicated, time-consuming image preprocessing and feature engineering techniques, and they produce state-of-the-art results. In this setting, the Convolutional Neural Network (CNN) is one of the models successfully adapted to classify imaging data [29].

Basaia et al. [30] built and evaluated a CNN algorithm that predicts AD, progressive mild cognitive impairment, and stable mild cognitive impairment. The architecture of the network included 12 repeated blocks of convolutional layers, an activation layer, a fully-connected layer, and one logistic regression output layer. The researchers used T1-weighted structural MRIs of 1409 subjects from the ADNI database. The image data was split into training, validation, and testing sets in the proportion of 90% for the first two and 10% for the last one. Tenfold cross-validation was applied. The weights of the CNN used for classification of the AD versus HC dataset were applied as pre-trained initial weights to the other CNNs. This technique reduced the training time and increased the network performance. High predictive accuracy was achieved with no significant differences. The highest AD versus HC (healthy control) classification accuracy was 99% for ADNI. For c-MCI versus HC and s-MCI versus HC, accuracy was 87 and 76% respectively; for AD versus c-MCI and AD versus s-MCI, performance was 75 and 86%; and for c-MCI versus s-MCI it was 75%.

A Multi-Layer Perceptron and a Convolutional Bidirectional Long Short-Term Memory (ConvBLSTM) model were proposed by Stamate et al. for the diagnosis of dementia [31]. Different clinical sources and protocols of 1851 participants of the ADNI database were combined. The collected biomarkers consist of 51 input attributes and include baseline demographics data, a functional activity questionnaire, the Mini-Mental State Exam (MMSE), cerebrospinal fluid (CSF) biomarkers, neuropsychological tests, and measurements derived from MRI, PET and genetic data. The ReliefF method [32] and a permutation test [33], including 500 permutations of labels, were combined for feature selection and ranking. The top-10 ranked features were passed to the classification models. 75% of the data were used for training and the rest for testing. The predictive results were obtained using Monte Carlo simulations [34]. All models were able to accurately predict dementia and mild cognitive impairment. The highest accuracy of 86% was achieved with the Multi-Layer Perceptron model.
Lastly, another study [35] proposed an unsupervised deep learning method for the classification of AD, MCI, and NC. The algorithm extracts features with PCA [36] and processes them with a Regularized Extreme Learning Machine (RELM) [37]. RELM is based on a single hidden-layer feedforward neural network. The investigators chose high-level features using the Softmax function (a function that takes a vector of real numbers as input and normalizes it into a probability distribution). The results of RELM were compared with a multiple-kernel SVM and an import vector machine (IVM), a classifier based on Kernel Logistic Regression that uses a few data points to define the decision hyperplane and has a probabilistic output. The researchers ran 100 tests of imaging data collected from 214 subjects using tenfold cross-validation and 10 tests with the leave-one-out method. They separated training and testing images with a ratio of 70/30 for the tenfold cross-validation and 90/10 for the leave-one-out validation. The study confirmed that RELM improves the classification accuracy of AD and MCI from 75.33 to 80.32% for binary classification, and achieves 76.61% for multiclass classification. The above approaches are summarized in Table 2.
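At its core, an RELM is a random hidden layer whose output weights are solved in closed form with ridge regularization. The following numpy sketch illustrates that general idea; the hyperparameters and names are illustrative assumptions, not the exact implementation of [37].

```python
import numpy as np

class RegularizedELM:
    """Minimal regularized Extreme Learning Machine (ridge solution)."""

    def __init__(self, n_hidden=500, reg=1e-2, rng=None):
        self.n_hidden, self.reg = n_hidden, reg
        self.rng = rng or np.random.default_rng(0)

    def _hidden(self, X):
        # Fixed random projection followed by a nonlinearity.
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        # X: (n_samples, n_features); y: integer class labels.
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        Y = np.eye(int(y.max()) + 1)[y]      # one-hot targets
        # Ridge-regularized least squares for the output weights.
        self.beta = np.linalg.solve(
            H.T @ H + self.reg * np.eye(self.n_hidden), H.T @ Y)
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)
```

In such a pipeline, the PCA features would typically be computed first (e.g. with sklearn.decomposition.PCA) and fed to `fit`; only the closed-form solve is "trained", which is what makes ELMs fast.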
3 The Proposed Approach
The machine learning workflow for early diagnosis of dementia (Fig. 1) includes image preprocessing, segmentation of image asymmetries, extraction of statistical features and image analysis, and machine learning classification algorithms. The visualized differences between the right and left hemispheres of the MRI slices of the brain are used for feature extraction. This simplifies the feature engineering stage because the collected features are already selected from the brain regions affected by degenerative processes. The images of segmented asymmetries require less storage than the original MRIs, which speeds up the classification of large datasets when images are used as input. In the last stage of the workflow, different kinds of machine learning algorithms can be applied. This can include two potential pathways: one that exploits image asymmetry features and another that uses images of segmented asymmetry. Machine learning classifiers, such as Naïve Bayes (NB), Linear Discriminant (LD), Support Vector Machines (SVMs) and K-Nearest Neighbor, typically operate on feature vectors, such as the image asymmetry features, which can be used for training and testing. In contrast, a Deep Network (DN) classifier receives images of segmented asymmetry and generates its own features through training. The data processing pipeline, including image processing and machine learning classification, has been implemented in Matlab using affordable and easy-to-obtain commodity hardware: Windows 10 Enterprise, Intel (R) Core (TM) i7-7700 CPU @ 3.60 GHz, 16 GB RAM.
Table 2 Methods of diagnosis of mild cognitive impairment and Alzheimer's disease used in the literature

| References | Modality | Classification type | Method | Complexity | Best result |
| Beheshti et al. [18] | sMRI | Linear SVM | Feature ranking and genetic algorithm | Medium | Stable MCI 93.01%, progressive MCI 75% |
| Moradi et al. [21] | Aggregated biomarkers (MRI + age + cognitive measures) | Semi-supervised low-density separation (LDS) + Random Forest (RF) | Feature selection from MRI with LDS, final result obtained with RF | High | Progressive MCI 82% |
| Zhu et al. [24] | sMRI, PET, cognitive measures | Joint regression and classification (SVM) | Alignment of structural and functional features, feature selection with regularization algorithm | High | AD versus NC 95.7%, MCI versus NC 79.9% |
| Basaia et al. [30] | sMRI | CNN | Pretrained CNN | Low | AD versus NC 98%, sMCI versus cMCI 75% |
| Stamate et al. [31] | 51 aggregated biomarkers without imaging data | Multi-Layer Perceptron and a Convolutional Bidirectional Long Short-Term Memory (ConvBLSTM) model | Feature selection and ranking, top-10 features used with NN models | Medium | Dem versus MCI versus CN 86% |
| Ref. [35] | sMRI | Unsupervised DL (Regularized Extreme Learning Machine (RELM)) | Features extracted with PCA and processed with RELM | Medium | AD versus MCI versus NC 76.61%, AD versus MCI 80.32% |
3.1 Image Preprocessing and Segmentation of Image Asymmetry
The preprocessing stage includes image normalization and image resizing procedures. All images are then segmented by a brain segmentation algorithm with an adjusted threshold level of the pixel values (Fig. 2).
Fig. 1 Machine learning workflow including image transformation stages, asymmetry features generation and machine learning classification algorithms
Fig. 2 The segmentation of the brain tissues from the skull: original image (left), and segmented image of the brain (right)
Brain segmentation is an important task in the detailed study and analysis of the anatomical regions of the brain and their symmetries. It is often the most critical step in many medical applications. The manual segmentation step can be replaced with automatic segmentation software (AnalyzeDirect—https://analyzedirect.com/analyze14/, FreeSurfer—https://surfer.nmr.mgh.harvard.edu/, etc.). Many computer vision techniques have been proposed for the segmentation of specific brain areas in accordance with the anatomical atlas [38]. The current study
Fig. 3 The image transformation stages for detection and segmentation of image asymmetries
presents an algorithm for the segmentation of the hemispheric asymmetries whose key step is the detection of the vertical axis of symmetry between the left and right hemispheres of the brain (Fig. 3). The hypothesis tested in this part of the work is that there is an axis of reflective symmetry running through the center of the brain [39]. The center point of the brain is located using an image binarization technique and by calculating the image centroid [40]. In the context of image processing and computer vision, the centroid is the weighted average of all the pixels in an image; the weighted centroid, or center of mass, depends on the gray levels in the image rather than on geometry alone. The technique of locating the imaging center is accompanied by image binarization [41], which converts a 256-level grayscale image to a binary (black-and-white) image. The binarization is done according to an adjusted threshold level: all pixels in the image above the threshold are replaced by the value 1 (white), and the pixels below that level by the value 0 (black). The brain center might differ from the center of the whole image including the background. If such a case occurs, the brain needs to be translated to the center of the image and rotated to the correct angle about the vertical axis. Once the brain centralization, translation, and rotation have been performed, the image can be flipped from left to right across the vertical axis [42]. The mirroring process is finalized by the segmentation of image asymmetries. The last image of this stage (see Fig. 3) is obtained by mirroring the left brain hemisphere to the right and the right brain hemisphere to the left, followed by subtraction of the hemispheres from each other:
Fig. 4 An illustrative example of matrix transformation values of a gray-scaled image of size 6-by-6: initial matrix (left) and matrix of segmented asymmetry (right), mirrored via the vertical axis. The numbers in the cells correspond to the gray levels of the pixel values
D = (L − R) + (R − L),    (1)
where D is the image asymmetry, L is the image matrix of the left hemisphere, and R is the image matrix of the right hemisphere. The subtractions operate on non-negative pixel values with negative results clipped to zero (saturated subtraction), so D corresponds to the absolute difference of the two mirrored hemispheres. The symmetrical image areas (Fig. 4) receive a value of 0 due to the matrix subtraction and are visualized as black areas in the image. The asymmetrical parts of the image are represented as gray levels of different intensity, from 1 to 255. The algorithm was tested on single slices of the brain, but the same idea can be extended and applied to the whole 3D brain image.
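The chapter's pipeline is implemented in Matlab; the following numpy sketch illustrates Eq. (1) under simplifying assumptions. The rotation-correction step is omitted, the threshold value is illustrative, and the column-wise midpoint is a crude stand-in for the weighted centroid described above.

```python
import numpy as np

def asymmetry_map(slice_2d, threshold=0.2):
    """Segment hemispheric asymmetries from a 2-D grayscale MRI slice
    (float array in [0, 1]) whose rotation is assumed already corrected."""
    # Binarize to locate the brain and its horizontal midpoint.
    mask = slice_2d > threshold
    cols = np.nonzero(mask.any(axis=0))[0]
    centre = (cols.min() + cols.max()) // 2
    # Re-centre the brain on the image midline (circular shift is
    # adequate when the background near the borders is empty).
    shift = slice_2d.shape[1] // 2 - centre
    centred = np.roll(slice_2d, shift, axis=1)
    # Mirror across the vertical axis and apply saturated subtraction:
    # D = (L - R) + (R - L) with negatives clipped, i.e. |L - R|.
    mirrored = centred[:, ::-1]
    return (np.clip(centred - mirrored, 0, None)
            + np.clip(mirrored - centred, 0, None))
```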
3.2 Feature Engineering and Analysis
Approaches based on statistical features for representing image properties are well-established in image processing [43]. The statistical description of the image texture, color or morphological properties generates a limited number of relevant and distinguishable features. The proposed machine learning workflow uses ten strong and stable statistical features to represent the image asymmetries: MSE (Mean Squared Error), Mean, Std (Standard deviation), Entropy, RMS (Root Mean Square), Variance, Smoothness, Kurtosis, Skewness, and IDM (Inverse Difference Moment). The first feature on the list, MSE, is calculated directly from the original image and its mirrored version, while the rest are generated using the discrete wavelet transform (DWT), as shown in Fig. 5. In image processing, a discrete wavelet transform is a technique that transforms image pixels into wavelets [44].
Fig. 5 The DWT schema of the first and second levels of image decomposition after applying high- and low-pass filters in the horizontal and vertical directions
A brief description of each statistical feature used in this study is provided below.
3.2.1 Statistical Features Description
The features calculated from images or image asymmetries give information about the likelihood of gray pixel values at a random position in an image, their orientation, and their interaction with surrounding pixels. They are defined as follows:

- Mean squared error (MSE): the average squared intensity difference between the corresponding pixels of two images [45].
- Mean: the texture feature that measures the average of the intensity pixel values [46]; represents the brightness of the image.
- Standard deviation (Std): shows the contrast of gray-level intensities; indicates how much deviation or dispersion exists from the mean [47].
- Entropy: characterizes the image texture and measures the randomness of the pixel intensity distribution [48]; it is highest when all the pixel probabilities are equal.
- Root mean square (RMS): measures the magnitude of a set of values [49]; shows how far these values are from the line of best fit.
- Variance: measures the image heterogeneity [47]; shows how the grayscale values differ from their mean.
- Inverse difference moment (IDM): indicates the local homogeneity of an image [47]; increases when pixel pairs are close in their grayscale values.
- Smoothness: measures the relative smoothness of intensity in an image [50]; it is high for an image region of constant intensity and low for regions with large deviations in their intensity values.
- Kurtosis: measures the peakedness of the distribution of the intensity values around the mean [51]; often interpreted in combination with the noise and resolution of the image (a high kurtosis value is accompanied by low noise and low resolution).
- Skewness: shows the asymmetry of the probability distribution of the pixel intensity values about the mean [46]; reveals information about image surfaces (darker and glossier surfaces tend to be more positively skewed than lighter and matte surfaces); any symmetric data have skewness near zero.
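A hedged Python sketch of how these features could be computed, using the PyWavelets package for the level-1 DWT. IDM is omitted here because it is defined on a gray-level co-occurrence matrix (e.g. scikit-image's graycomatrix/graycoprops); the Haar wavelet, the smoothness formula 1 − 1/(1 + σ²), and all names are illustrative assumptions rather than the chapter's exact Matlab implementation.

```python
import numpy as np
import pywt
from scipy import stats

def asymmetry_features(asym, mirrored_pair=None):
    """Statistical features of a 2-D segmented-asymmetry image.

    `mirrored_pair` is an optional (original, mirrored) tuple used only
    for the MSE feature; the rest are computed from the level-1 DWT
    approximation coefficients.
    """
    feats = {}
    if mirrored_pair is not None:
        a, b = mirrored_pair
        feats["MSE"] = np.mean((a.astype(float) - b.astype(float)) ** 2)
    cA, (cH, cV, cD) = pywt.dwt2(asym, "haar")   # level-1 decomposition
    x = cA.ravel()
    feats["Mean"] = x.mean()
    feats["Std"] = x.std()
    feats["Variance"] = x.var()
    feats["RMS"] = np.sqrt(np.mean(x ** 2))
    feats["Smoothness"] = 1 - 1 / (1 + x.var())
    feats["Kurtosis"] = stats.kurtosis(x)
    feats["Skewness"] = stats.skew(x)
    # Entropy from the normalized intensity histogram.
    hist, _ = np.histogram(x, bins=256, density=True)
    p = hist[hist > 0] / hist[hist > 0].sum()
    feats["Entropy"] = -np.sum(p * np.log2(p))
    return feats
```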
3.2.2 Analysis of Image Asymmetries
The analysis is based on an evaluation of the statistical properties of the segmented asymmetries. Figure 6 shows the averaged, normalized (from 0 to 1) statistics of each feature extracted from a set of 300 MRI slices with segmented asymmetries from different patients, equally divided into three classes: Alzheimer's Disease (AD), Early Mild Cognitive Impairment (EMCI) and Normal Cognitively (NC). Comparison of the statistical features of the AD, EMCI, and NC classes demonstrates the differences in their statistical characteristics. The highest difference between corresponding pixels of two images (the MSE feature) belongs to the AD class; the lowest is shown by the EMCI class. The smallest average pixel intensity values, variance and standard deviation of the intensities are found in the EMCI class. These findings point to a relatively symmetrical object compared to the AD and NC classes, which show noticeable pixel distribution values around the average. The pixel distribution values and probability around the mean demonstrate some separation of the AD, EMCI and NC classes, although the amplitude of these values is smaller than that registered with other statistical features.
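The comparison above relies on a min-max normalization of each feature to [0, 1] before averaging within the AD, EMCI and NC groups; a minimal sketch of that step (the array layout and label encoding are illustrative):

```python
import numpy as np

def normalized_class_means(features, labels):
    """Min-max normalize each feature column to [0, 1], then average
    within each class (e.g. 0=NC, 1=EMCI, 2=AD)."""
    f = features.astype(float)                      # (n_samples, n_features)
    f_min, f_max = f.min(axis=0), f.max(axis=0)
    span = np.where(f_max > f_min, f_max - f_min, 1.0)
    normalized = (f - f_min) / span
    return {c: normalized[labels == c].mean(axis=0)
            for c in np.unique(labels)}
```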
Fig. 6 Red and blue bars correspond to the statistical mean of each image asymmetry feature for EMCI and AD patients: the green line shows how the AD and EMCI data differ from the NC patients
Texture features, such as entropy and IDM, support the concept of variability between image classes. In this way, the features calculated from the difference between the original and mirrored image matrices clearly indicate that the EMCI data show more symmetry than the NC and AD imaging data. Figures 7, 8 and 9 illustrate the feature comparison in three binary datasets: AD versus EMCI, AD versus NC, and EMCI versus NC. The EMCI class demonstrates more symmetry than the AD and NC classes in Figs. 7 and 9. In Fig. 8, the statistical feature data of the AD class show less symmetry than the NC data. The MSE feature analysis with a Pareto chart has been performed for male and female subjects (Figs. 10 and 11). The MSE value for each class has been calculated from the differences between the original image and its mirrored version for all images in the AD, EMCI, and NC classes. The Pareto bar chart indicates the impact of asymmetry for each class. The highest MSE bar is associated with patients of the AD class. It confirms that the changes in symmetry in the MRI slices of this image group are sizable compared to the changes in symmetry of the other groups. At the same time, the EMCI image group has smaller values. The pattern of changes in male and female subjects does not show significant differences; nevertheless, the findings are consistent with the view that the female brain is more symmetrical than the male brain. The cumulative line on the secondary axis shows the contribution of each bar (image class) to the total value as a percentage.

Fig. 7 Statistical mean of asymmetry feature values for binary classes: AD versus EMCI
Fig. 8 Statistical mean of asymmetry feature values for binary classes: AD versus NC
Fig. 9 Statistical mean of asymmetry feature values for binary classes: EMCI versus NC
Fig. 10 Pareto chart of MSE feature analysis for MRI slices of the male subject. The total MSE feature value is placed in the coloured bars. The axis on the right indicates the cumulative percentage of the total value for each class
Fig. 11 Pareto chart of MSE feature analysis for MRI slices of the female subject
A comparison of the MSE values across 4 datasets is provided in Fig. 12, which illustrates the changes in the MSE feature between the three image classes, AD, EMCI and NC, in four completely different image sets. From the figure we can see the pattern of changes in the symmetry between the original and mirrored images. The lowest MSE value in all image sets is obtained by images of the EMCI class. The above analysis (Figs. 6, 7, 8, 9, 10, 11 and 12) supports the view that image asymmetry decreases in the initial stage of the degenerative process in the brain (Early Mild Cognitive Impairment) and grows when the person develops moderate and severe dementia (Alzheimer's disease).
Fig. 12 Comparison of MSE feature values between the three classes of MRIs. Numbers 1, 2…4 indicate the investigated datasets: #1 refers to a 150-image set of male MRIs in the coronal plane, #2 is a 150-image set of female MRIs in the coronal plane, #3 corresponds to a 300-image set of male subjects in the coronal plane, and #4 represents a 300-image set of males in the axial plane
4 Experiments and Results
In this section, the two diagnostic pathways of the machine learning workflow are demonstrated, and the robustness of brain asymmetry images and asymmetry features for the early diagnosis of dementia is verified. To this end, SVMs and DNs are used, and the diagnosis problem is formulated as a set of binary classification problems. T1-weighted MRIs of subjects aged between 55 and 75 years were used from the ADNI database. A total of 600 MRIs of brain asymmetries were generated, equally divided into groups of normal cognitively (NC) subjects, early mild cognitive impairment (EMCI) and Alzheimer's Disease (AD). The MRIs were combined into 3 binary datasets: EMCI versus NC, AD versus NC and AD versus EMCI. The datasets consist of images in 2 dimensions (planes): vertical (frontal) and horizontal (axial).

For the first diagnostic pathway, the statistical features collected from image asymmetries were enriched with Bag-of-Features (BOF) to get the most detailed image "signatures" [52]. These were used to feed SVM classifiers with cubic and quadratic kernels (C-SVM and Q-SVM). The SVM performance was estimated using 10 simulation runs of a tenfold cross-validation procedure, and all models were tested on unseen data. For the second diagnostic pathway (cf. Fig. 1), segmented brain asymmetry images were used, and features were generated as part of the training process of a Convolutional Neural Network (CNN), a DN architecture that has shown very good performance in image classification tasks in the literature. To this end, transfer learning was used by adapting a well-known CNN, the so-called AlexNet [53]. AlexNet has a total of 8 deep layers: five convolutional layers that are used for feature generation and three fully connected layers.
Fig. 13 Adapted AlexNet architecture used in the experiments
The first layer requires an input image of size 227-by-227-by-3, where 3 is the number of color channels. The last 3 layers of AlexNet were originally configured for 1000 classes, as the network was trained to solve a different classification problem; they were replaced with a fully connected layer, a Softmax layer, and a binary classification output layer to fit the needs of the diagnosis tasks considered in this study (Fig. 13).

The performance of the classification models was evaluated in terms of accuracy, sensitivity, specificity, and area under the curve (AUC) [54]. The best available results obtained with the polynomial SVM classifiers (C-SVM and Q-SVM) [55] are shown in Table 3. The highest accuracy was achieved for the sets of EMCI versus NC and AD versus NC. Based on the results, the detection of EMCI in the early stage of the disease with quadratic and cubic SVMs gives an accuracy of 93%. These tests show a sensitivity of 92 and 93%, a specificity of 94 and 93%, and an AUC of 0.97 and 0.98 respectively. AD versus NC shows the highest performance, equal to 95% accuracy, 97% sensitivity, 93% specificity, and 0.99 AUC for the C-SVM.

To verify binary classification performance using brain asymmetry images, the CNN parameters were set as follows: 10 epochs, a mini-batch size of 128, and a validation data frequency of 50. Before processing, the segmented asymmetry images were resized to 227 × 227 × 3 and fed into the model, with 80% of the images used for training, 10% for validation, and 10% for testing. Table 4 summarises the best available results of the adapted AlexNet in testing on the early mild cognitive impairment, normal cognitively, and Alzheimer's disease datasets.
Table 3 Best available performance of Q-SVM and C-SVM in binary classification

| Performance | EMCI versus NC (Q-SVM / C-SVM) | AD versus NC (Q-SVM / C-SVM) | AD versus EMCI (Q-SVM / C-SVM) |
| Accuracy | 0.93 / 0.93 | 0.93 / 0.95 | 0.87 / 0.86 |
| Sensitivity | 0.92 / 0.93 | 0.94 / 0.97 | 0.84 / 0.90 |
| Specificity | 0.94 / 0.93 | 0.92 / 0.93 | 0.90 / 0.82 |
| AUC | 0.97 / 0.98 | 0.97 / 0.99 | 0.94 / 0.94 |
Table 4 Best available performance of the CNN (adapted AlexNet) in binary classification

| Performance | EMCI versus NC | AD versus NC | AD versus EMCI |
| Accuracy | 0.76 | 0.90 | 0.82 |
| Sensitivity | 0.80 | 0.91 | 0.75 |
| Specificity | 0.72 | 0.89 | 0.89 |
| AUC | 0.90 | 0.92 | 0.88 |
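The experiments themselves were run in Matlab; purely for illustration, the same transfer-learning adaptation of AlexNet can be sketched in Python with torchvision's pretrained model. The optimizer settings are placeholders, and the Softmax layer of Fig. 13 is folded into the cross-entropy loss, as is idiomatic in PyTorch.

```python
import torch
import torch.nn as nn
from torchvision import models

def adapted_alexnet(num_classes=2):
    """AlexNet with its 1000-class head replaced for binary diagnosis."""
    # Older torchvision versions use models.alexnet(pretrained=True).
    net = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
    # Replace the final fully connected layer (4096 -> 1000) with a
    # num_classes-way layer; softmax happens inside the loss below.
    net.classifier[6] = nn.Linear(4096, num_classes)
    return net

model = adapted_alexnet()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
# Inputs: segmented asymmetry images resized to 227x227 RGB tensors,
# trained for 10 epochs with mini-batches of 128, as in the chapter.
```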
Satisfactory performance is obtained with the CNN for all datasets by operating directly on images of segmented asymmetry, without any feature engineering or fine-tuning prior to classification. The average performance of CNNs on this problem is also promising, as shown in [56], and one would expect additional model optimization to improve the performance of CNNs further. This is a promising avenue for investigation, but it is considered out of scope for this chapter and will be the focus of our future work. The present study has verified the robustness and value of image asymmetries and demonstrated the operation of the machine learning workflow as a diagnostic tool when either image asymmetry features or segmented images of brain asymmetry are used.
5 Discussion and Conclusion
Diagnosis based on an analysis of the changes in brain asymmetry opens a new possibility for the classification of mild cognitive impairment and Alzheimer's disease using MRI data. The chapter presented an approach for the analysis of brain asymmetries and the generation of image asymmetry features, either by feature engineering or by learning from segmented asymmetry MRI images. The findings confirmed that changes in asymmetries convey important information about the progression of the disease. Thus, the segmented asymmetries can be beneficial for feature engineering and machine learning classification. The proposed machine learning workflow offers a low-cost alternative for the classification of dementia, as it does not require special hardware. In contrast to other methods in the literature (see Sect. 2), the proposed image processing and feature engineering stages are less complex and take on average 0.1 min per MRI image. This stage is typically not required when Deep Networks are applied, since these models use the segmented MRI asymmetry slices directly as input and can generate image features during network training. Although optimizing the architecture of the machine learning algorithms can potentially increase performance, the accuracy of the models used in this study appears comparable with the results obtained by more complex methods (see Sect. 2) on the ADNI database. The proposed methodology offers a basis for exploring the stages of Alzheimer's disease further. This research can be based on longitudinal studies of patient data.
Changes in the shape of asymmetry, and the mapping of these changes to the brain atlas, will point to the brain areas affected by the pathological process. A further point for investigation is the comparison of asymmetries between the gray and white matter of the brain. Additional computer vision segmentation techniques might give a clue to the source of the initial tissue deformation, which opens a direction for the early prediction and even prevention of the disease.

Acknowledgements Data collection and sharing for this project was funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: AbbVie, Alzheimer's Association; Alzheimer's Drug Discovery Foundation; Araclon Biotech; BioClinica, Inc.; Biogen; Bristol-Myers Squibb Company; CereSpir, Inc.; Cogstate; Eisai Inc.; Elan Pharmaceuticals, Inc.; Eli Lilly and Company; EuroImmun; F. Hoffmann-La Roche Ltd. and its affiliated company Genentech, Inc.; Fujirebio; GE Healthcare; IXICO Ltd.; Janssen Alzheimer Immunotherapy Research & Development, LLC.; Johnson & Johnson Pharmaceutical Research & Development LLC.; Lumosity; Lundbeck; Merck & Co., Inc.; Meso Scale Diagnostics, LLC.; NeuroRx Research; Neurotrack Technologies; Novartis Pharmaceuticals Corporation; Pfizer Inc.; Piramal Imaging; Servier; Takeda Pharmaceutical Company; and Transition Therapeutics. The Canadian Institutes of Health Research is providing funds to support ADNI clinical sites in Canada. Private sector contributions are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer's Therapeutic Research Institute at the University of Southern California. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California.
References 1. Kimberley, T. J., & Lewis, S. M. (2007). Understanding neuroimaging. Physical Therapy, 87(6), 670–683. 2. Bunge, S.A., Kahn, I. (2009). Cognition: An overview of neuroimaging techniques. 3. Segato, A., Marzullo, A., & Calimeri, F., et al. (2020). Artificial intelligence for brain diseases: A systematic review. APL Bioengineering, 4(4), 041503. 4. Janelidze, M., & Botchorishvili, N. (2018). Mild cognitive impairment. Alzheimer’s Disease: The 21st Century Challenge, 91. 5. Agarwal, M., Alam, M. R., Haider, M., et al. (2021). Alzheimer’s disease: An overview of major hypotheses and therapeutic options in nanotechnology. Nanomaterials, 11(1), 59. 6. DSM-V (Diagnostic and statistical manual of mental disorders) 2013 updated https://www.psy chiatry.org/psychiatrists/practice/dsm. Last accessed 10 June 2020 7. ICD-11 (International Classification of Diseases) for Alzheimer and dementia 2018 released and endorsed in May 2019. https://icd.who.int/en. Last accessed 27 Jan 2021. 8. Isles, A. R. (2018). Epigenetics, chromatin and brain development and function. Brain and Neuroscience Advances, 2, 2398212818812011. 9. Gainotti, G. (2019). The role of the right hemisphere in emotional and behavioral disorders of patients with frontotemporal lobar degeneration: An updated review. Frontiers in Aging Neuroscience, 11, 55. 10. McManus, C. (2019). Half a century of handedness research: Myths, truths; fictions, facts; backwards, but mostly forwards. Brain and Neuroscience Advances, 3, 2398212818820513.
11. Szaflarski, J. P., Rajagopal, A., & Altaye, M. (2012). Left-handedness and language lateralization in children. Brain Research, 1433, 85–97. 12. Tomasi, D., & Volkow, N. D. (2012). Laterality patterns of brain functional connectivity: Gender effects. Cerebral Cortex, 22(6), 1455–1462. 13. Cabeza, R., Daselaar, S. M., Dolcos, F., et al. (2004). Task-independent and task-specific age effects on brain activity during working memory, visual attention and episodic retrieval. Cerebral Cortex, 14(4), 364–375. 14. Yang, C., Zhong, S., Zhou, X., et al. (2017). The abnormality of topological asymmetry between hemispheric brain white matter networks in Alzheimer’s disease and mild cognitive impairment. Frontiers in Aging Neuroscience, 9, 261. 15. Liu, H., Zhang, L., Xi, Q., et al. (2018). Changes in brain lateralization in patients with mild cognitive impairment and Alzheimer’s disease: A resting-state functional magnetic resonance study from Alzheimer’s disease neuroimaging initiative. Frontiers in Neurology, 9, 3. 16. Kim, J. H., Lee, J. W., Kim, G. H., et al. (2012). Cortical asymmetries in normal, mild cognitive impairment, and Alzheimer’s disease. Neurobiology of Aging, 33(9), 1959–1966. 17. Wachinger, C., Salat, D. H., Weiner, M., et al. (2016). Whole-brain analysis reveals increased neuroanatomical asymmetries in dementia for hippocampus and amygdala. Brain, 139(12), 3253–3266. 18. Beheshti, I., Demirel, H., Matsuda, H., et al. (2017). Classification of Alzheimer’s disease and prediction of mild cognitive impairment-to-Alzheimer’s conversion from structural magnetic resource imaging using feature ranking and a genetic algorithm. Computers in Biology and Medicine, 83, 109–119. 19. Welling, M. (2005). Fisher linear discriminant analysis. University of Toronto. 20. Evgeniou, T., & Pontil, M. (1999). Support vector machines: Theory and applications (pp. 249– 257). In Advanced Course on Artificial Intelligence. Springer. 21. Moradi, E., Pepe, A., Gaser, C., et al. (2015). Machine learning framework for early MRI-based Alzheimer’s conversion prediction in MCI subjects. NeuroImage, 104, 398–412. 22. Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5–32. 23. Tripepi, G., Jager, K. J., Dekker, F. W., et al. (2008). Linear and logistic regression analysis. Kidney International, 73(7), 806–810. 24. Zhu, X., Suk, H. I., & Wang, L. (2017). A novel relational regularization feature selection method for joint regression and classification in AD diagnosis. Medical Image Analysis, 38, 205–214. 25. Wong, D. F., Maini, A., Rousset, O. G., et al. (2003). Positron emission tomography: A tool for identifying the effects of alcohol dependence on the brain. Alcohol Research & Health, 27(2), 161. 26. Nogueira, J., Freitas, S., & Duro, D. (2018). Alzheimer’s disease assessment scale-cognitive subscale (ADAS-Cog): Normative data for the Portuguese population. Acta Medica Portuguesa, 31(2), 94–100. 27. Duchenne, O., Bach, F., Kweon, I. S., et al. (2011). A tensor-based algorithm for high-order graph matching. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(12), 2383–2395. 28. Zhang, D., & Shen, D. (2012). Multi-modal multi-task learning for joint prediction of multiple regression and classification variables in Alzheimer’s disease. NeuroImage, 59(2), 895–907. 29. Yamashita, R., Nishio, M., Do, R. K. G., et al. (2018). Convolutional neural networks: An overview and application in radiology. Insights Into Imaging, 9(4), 611–629. 30. Basaia, S., Agosta, F., Wagner, L., et al. (2019). 
Automated classification of Alzheimer’s disease and mild cognitive impairment using a single MRI and deep neural networks. NeuroImage: Clinical, 21, 101645. 31. Stamate, D., Smith, R., Tsygancov, R., et al. (2020). Applying deep learning to predicting dementia and mild cognitive impairment (pp. 308–319). In IFIP International Conference on Artificial Intelligence Applications and Innovations. Springer. 32. Robnik-Šikonja, M., & Kononenko, I. (2003). Theoretical and empirical analysis of ReliefF and RReliefF. Machine Learning, 53(1), 23–69.
33. Pesarin, F., & Salmaso, L. (2010). The permutation testing approach: A review. Statistica, 70(4), 481–509. 34. Johansen, A.M., Evers, L., & Whiteley, N. (2010). Monte carlo methods. In Lecture notes. 35. Lama, R.K., Gwak, J., & Park, J.S., et al. (2017). Diagnosis of Alzheimer’s disease based on structural MRI images using a regularized extreme learning machine and PCA features. Journal of Healthcare Engineering. 36. Abdi, H., & Williams, L. J. (2010). Principal component analysis. Wiley Interdisciplinary Reviews: Computational Statistics, 2(4), 433–459. 37. Ding, S., Xu, X., & Nie, R. (2014). Extreme learning machine and its applications. Neural Computing and Applications, 25(3), 549–556. 38. Despotovi´c, I., Goossens, B., & Philips, W. (2015). MRI segmentation of the human brain: challenges, methods, and applications. Computational and Mathematical Methods in Medicine. 39. Liu, Y., Collins, R. T., & Rothfus, W. E. (2001). Robust midsagittal plane extraction from normal and pathological 3-D neuroradiology images. IEEE Transactions on Medical Imaging, 20(3), 175–192. 40. Teverovskiy, L., & Li, Y. (2006). Truly 3D midsagittal plane extraction for robust neuroimage registration. In 3rd IEEE International Symposium on Biomedical Imaging: Nano to Macro (pp. 860–863). IEEE. 41. Michalak, H., & Okarma, K. (2019). Improvement of image binarization methods using image preprocessing with local entropy filtering for alphanumerical character recognition purposes. Entropy, 21(6), 562. 42. Ruppert, G.C., Teverovskiy, L., & Yu, C.P., et al. (2011). A new symmetry-based method for mid-sagittal plane extraction in neuroimages. In 2011 IEEE International Symposium on Biomedical Imaging: from Nano to Macro (pp. 285–288). IEEE. 43. Di Ruberto, C., & Fodde, G. (2013). Evaluation of statistical features for medical image retrieval. In International Conference on Image Analysis and Processing (pp. 552–561). Springer, Berlin. 44. Usman, K., & Rajpoot, K. (2017). Brain tumor classification from multi-modality MRI using wavelets and machine learning. Pattern Analysis and Applications, 20(3), 871–881. 45. Wang, Z., Bovik, A. C., Sheikh, H. R., et al. (2004). Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), 600–612. 46. Kumar, V., & Gupta, P. (2012). Importance of statistical measures in digital image processing. International Journal of Emerging Technology and Advanced Engineering, 2(8), 56–62. 47. Esmael, B., Arnaout, A., Fruhwirth, R. K., et al. (2015). A statistical feature-based approach for operations recognition in drilling time series. International Journal of Computer Information Systems and Industrial Management Applications, 5, 454–461. 48. Yang, X., Tridandapani, S., Beitler, J. J., et al. (2012). Ultrasound GLCM texture analysis of radiation-induced parotid-gland injury in head-and-neck cancer radiotherapy: An in vivo study of late toxicity. Medical Physics, 39(9), 5732–5739. 49. Lee, C., Zhang, A., Yu, B., et al. (2017). Comparison study between RMS and edge detection image processing algorithms for a pulsed laser UWPI (Ultrasonic wave propagation imaging)based NDT technique. Sensors, 17(6), 1224. 50. Malik, F., & Baharudin, B. (2013). The statistical quantized histogram texture features analysis for image retrieval based on median and Laplacian filters in the dct domain. The International Arab Journal of Information Technology, 10(6), 1–9. 51. Ho, A. D., & Yu, C. C. (2015). 
Descriptive statistics for modern test score distributions: Skewness, kurtosis, discreteness, and ceiling effects. Educational and Psychological Measurement, 75(3), 365–388. 52. Rueda, A., Arevalo, J., Cruz, A., et al. (2012). Bag of features for automatic classification of Alzheimer's disease in magnetic resonance images (pp. 559–566). In Iberoamerican Congress on Pattern Recognition. Springer. 53. Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2017). ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60(6), 84–90.
54. Yang, S., & Berdine, G. (2017). The receiver operating characteristic (ROC) curve. The Southwest Respiratory and Critical Care Chronicles, 5(19), 34–36. 55. Jakkula, V. (2006). Tutorial on support vector machine (SVM) (p. 37). Washington State University. 56. Herzog, N., & Magoulas, G. D. (2021). Deep learning of brain asymmetry images and transfer learning for early diagnosis of dementia (pp. 57–70). In International Conference on Engineering Applications of Neural Networks.
Feature Selection Based Coral Reefs Optimization for Breast Cancer Classification Lobna M. Abouelmagd , Mahmoud Y. Shams , Noha E. El-Attar , and Aboul Ella Hassanien
Abstract Early detection of cancer is one of the most important factors in enabling complete treatment and the disappearance of this disease from the human body. Breast cancer is the most widespread invasive cancer in women and, after lung cancer, the second leading cause of cancer death in women. The first symptoms of breast cancer usually appear as an area of thickened tissue in the breast or a lump in the breast or an armpit. Consequently, many features can be found that indicate the existence of cancer or not. This chapter employs the coral reefs optimization (CRO) algorithm for feature selection; the CRO has shown to be very effective with various classification approaches. In this chapter, we used five standard classifiers: Logistic Regression (LR), K-Nearest Neighbor (KNN), Support Vector Machine with Radial Basis Function (SVM-RBF), Random Forest (RF), and Decision Tree (DT). All these classifiers are evaluated with and without feature selection using the CRO algorithm. The results indicated that feature selection based on CRO achieved better results than those obtained before feature selection. The widely used Breast Diagnostic Cancer Wisconsin (BDCW) dataset is utilized to select the most significant features and classify the cancer cases, with tested accuracies of 100%, 99.1%, 100%, 100%, and 100% using LR, KNN, SVM-RBF, RF, and DT, respectively.
L. M. Abouelmagd (B) Misr Higher Institute for Commerce and Computers, Mansoura, Egypt e-mail: [email protected] M. Y. Shams Faculty of Artificial Intelligence, Kafrelsheikh University, Kafr El-Shaikh 33511, Egypt e-mail: [email protected] N. E. El-Attar Faculty of Computers and Artificial Intelligence, Benha University, Benha, Egypt e-mail: [email protected] A. E. Hassanien Faculty of Computer and AI, Cairo University and Scientific Research Group in Egypt (SRGE), Cairo, Egypt e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 A. E. Hassanien et al. (eds.), Medical Informatics and Bioimaging Using Artificial Intelligence, Studies in Computational Intelligence 1005, https://doi.org/10.1007/978-3-030-91103-4_4
1 Introduction Breast cancer is a heterogeneous group of diseases that begin in the breast, move to the axillary lymph nodes, become aggressive, and eventually spread to other organs. Many interrelated factors determine the risk of developing breast cancer, as well as its symptoms and signs, methods to combat it, diagnosis, staging, and treatment options [1, 2]. Early detection is one of the most important secondary prevention strategies. The goal is to diagnose breast cancer in the early stages of the disease and to promote timely diagnostic and treatment procedures in hospitals. Early detection enhances the quality and effectiveness of breast cancer treatment, increasing the cure rate to over 95% and lowering the mortality rate by 30% [2, 3]. Selecting feature subsets that have a significant and distinctive influence is crucial in high-dimensional data analysis. In recent years, high-dimensional datasets have gained popularity in various real-world applications, including genomics, medical image processing, and data mining [4]. However, much of the massive dimensionality of these datasets is produced by irrelevant or redundant features, which limit the learning algorithm's effectiveness or cause data overfitting [5–7]. Feature selection (FS) has proven to be an effective data preparation method for coping with the curse of dimensionality. Feature selection strategies aim to pick a subset of characteristics based on a range of criteria while maintaining the physical meanings of the original features [8–11], which also makes the learning model easier to understand and interpret. The FS technique tries to lower the size of the search space to increase the learning process's efficiency, improving prediction and classification performance and reducing training time [12]. The power of the feature selection methodology comes from two essential processes: search and evaluation. Enumerating all possible subsets of the original feature set quickly leads to a combinatorial explosion, so search strategies are used to select the most valuable features; forward and backward search have been employed as typical greedy search techniques [13, 14]. There are three feature selection approaches based on the methods used to evaluate feature subsets: filter, wrapper, and embedded. The inherent properties of the data are used to select features in a filter model. Filter models are referred to as classifier-independent, as they evaluate the significance of features for classification regardless of the machine learning technique [15–17]. Filter approaches are fast since they don't use a learning algorithm to analyze attributes, but they don't provide enough information on a subset's ability to categorize samples. Pearson Correlation, LDA, ANOVA, and Chi-Square are filter types [18, 19]. Wrapper and embedded models, on the other hand, are dependent on the classifier. Forward feature selection, backward feature elimination, and recursive feature elimination are wrapper approaches. The wrapper model searches the space of possible solutions using a machine learning technique; to evaluate a selected subset, the validation accuracy of a specific classifier is used. The embedded model performs feature selection as part of the learning process [14, 20]. The most famous examples of embedded methods are RIDGE regression and LASSO [21–23].
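As a concrete illustration of the filter model described above, the following sketch scores each feature independently of any classifier using scikit-learn. The choice of the Chi-Square score and of k = 10 retained features is illustrative only, not a method used in this chapter.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, chi2

# The Wisconsin diagnostic breast cancer data: 30 non-negative features
X, y = load_breast_cancer(return_X_y=True)

# Filter model: rank features by their Chi-Square score against the labels,
# independently of any learning algorithm, then keep the k best
selector = SelectKBest(score_func=chi2, k=10)
X_reduced = selector.fit_transform(X, y)

print(X_reduced.shape)   # (569, 10)
```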
Feature selection is classified as an optimization problem since it seeks out a near-optimal feature subset. In this circumstance, exhaustive search strategies are impractical since they would generate all feasible solutions just to discover the best one [24]. In general, meta-heuristic and evolutionary algorithms outperform traditional optimization procedures at avoiding local optima. Meta-heuristic algorithms inspired by nature have recently become a preferred method for solving optimization problems [25–27]. Single-solution and population-based meta-heuristic algorithms are the two main types. In single-solution-based algorithms, like Simulated Annealing and Tabu Search, one solution is created at random and then improved [28, 29]. The goal of population-based algorithms, on the other hand, is to iteratively evolve a set of solutions (i.e., a population) in a specified search space until the optimal solution is discovered. A variety of such methods are helpful in the disciplines of optimization and feature selection, such as the Genetic Algorithm (GA) [30], Particle Swarm Optimization (PSO) [31], Ant Colony Optimization (ACO) [32], the Scatter Search Algorithm (SSA) [33], the Artificial Bee Colony (ABC) algorithm [34], Swallow Swarm Optimization (SSO) [35], the Dragonfly Algorithm (DA) [36], and the Archimedes Optimization Algorithm (AOA) [37]. In this work, we utilized the Coral Reef Optimization (CRO) algorithm as a feature selector. It is considered an effective feature selection algorithm based on an artificial simulation of the creation and reproduction of coral reefs. The CRO algorithm simulates several stages of coral reproduction and the competition for space in the reef, resulting in an effective solution for tackling challenging optimization problems [38–40]. This chapter adopts the CRO to solve the feature selection problem in breast cancer classification. The CRO is utilized to select the most significant features of breast cancer, with better results than before feature selection. We utilized LR, KNN, SVM-RBF, RF, and DT to classify the features extracted by CRO, with promising results compared with the most recent approaches. In the related work of Sect. 2, we survey the most common approaches that use the Breast Diagnostic Cancer Wisconsin (BDCW) dataset. In Sect. 3, we introduce the methods and materials of the proposed study. The results and discussion, including a comparative study of the proposed method, are investigated in Sect. 4. Finally, the conclusion and future work are presented in Sect. 5.
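The following toy sketch illustrates the general shape of such a population-based wrapper search for feature selection: candidate subsets are binary masks, and their fitness is the cross-validated accuracy of a classifier. The mutation-based reproduction step and all parameter values are placeholders for the operators that distinguish GA, PSO, CRO, and the other algorithms listed above; this is a generic sketch, not any one of those methods.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)

def fitness(mask):
    """Wrapper-style fitness: cross-validated accuracy on the encoded subset."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(KNeighborsClassifier(),
                           X[:, mask.astype(bool)], y, cv=3).mean()

# Each candidate solution is a binary mask over the 30 features
population = rng.integers(0, 2, size=(10, X.shape[1]))

for generation in range(5):
    scores = np.array([fitness(m) for m in population])
    best = population[scores.argmax()].copy()
    # New candidates are random mutations of the best one; real algorithms
    # (GA, PSO, CRO, ...) differ mainly in how this reproduction step works
    population = np.array([best ^ (rng.random(best.size) < 0.1)
                           for _ in range(len(population))])
    population[0] = best          # elitism: never lose the best subset found

print(f"best accuracy {fitness(best):.3f} using {best.sum()} features")
```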
2 Related Work The causes of breast cancer are not fully known and understood, as there is a group of interrelated factors, including genetic, hormonal, environmental, sociobiological, and organ-physiological factors, that can affect its development, in addition to other risk factors, such as disorders of breast reproduction associated with the development of breast cancer, especially if a biopsy shows atypical hyperplasia. However, 70% of cancer patients cannot identify risk factors [41–43]. Tests and procedures used to diagnose breast cancer include the following. Breast examination: the doctor performs
a clinical breast examination of both breasts and the lymph nodes beneath the armpit, feeling for lumps or other abnormalities. Mammography: a mammogram is an X-ray of the breast, taken if abnormalities are detected on the examination. Ultrasound of the breast: ultrasound is an imaging technology that employs sound waves to create images of structures deep within the body; it can be used to assess whether a new breast lump is a solid mass or a fluid-filled cyst. MRI: an MRI machine uses both magnetic fields and radio waves to create images of the inside of the breast; unlike some other imaging tests, no ionizing radiation is used to create the images [44–47]. Artificial Intelligence (AI) tools currently play an important role in diagnosing and detecting breast cancer using image processing and computer vision analysis of breast images. This section highlights the most recent machine learning approaches used to classify breast cancer based on the BDCW dataset. Furthermore, it surveys different feature selection algorithms used to extract the most relevant features and enhance the performance of ML approaches in classification, regression, and prediction [48–51]. A Decision Tree (DT) was introduced by Lavanya and Rani [52] to classify breast cancer with and without feature selection; they evaluate a subset of features generated from an enrolled dataset by a specific criterion. The accuracy achieved on the original breast cancer Wisconsin diagnostic dataset with and without feature selection was 94.84% and 92.97%, respectively. A comparative study using five classifiers applied to the BDCW dataset was presented by Salama et al. [53]. The applied classifiers, Multilayer Perceptron (MLP), Naïve Bayes (NB), Sequential Minimal Optimization (SMO), Decision Tree (DT), and instance-based K-Nearest Neighbor (KNN), achieve 96.66%, 92.97%, 97.71%, 93.14%, and 95.95%, respectively. Agarap [54] presented another technique that employs MLP, Linear Regression (LR), Support Vector Machine (SVM), Nearest Neighbor (NN), and Softmax Regression on the BDCW dataset; the dataset was divided into 70% for training and the remaining 30% for testing, and the accuracy exceeded 90% based on the Gated Recurrent Unit SVM (GRU-SVM). The precise accuracies produced were 93.75%, 96.09%, 99.03%, 93.56%, 97.65%, and 96.09% based on GRU-SVM, LR, MLP, NN, Softmax regression, and SVM, respectively. The utilization of the k-means clustering algorithm with centroid, epoch, attribute, and split methods was presented by Dubey et al. [55]; they used it to cluster the features extracted from the breast cancer Wisconsin dataset and classified based on Euclidean and Manhattan distance with an average accuracy of 92%. Laghmati et al. [56] used a feature selection model based on Neighborhood Components Analysis (NCA) to predict the presence of breast cancer using the BDCW dataset. The results obtained were 98.86% based on SVM, and the Area Under the Curve (AUC) was 99% based on KNN, 89% for DT, and 93% for the Adaboost classifier. Another technique, based on tuning the hyperparameter values of the Random Forest (RF) classifier applied to the BDCW dataset, was introduced by [57]. The results achieved were 99.92% for training and 80.00% for testing, with a balanced error rate of 16%. Moreover, the Receiver Operating Characteristic (ROC) reached
88% based on hybrid Neural Networks (NNs) and DT, while the ROC reached 96% using the RF algorithm and 98% when the decision forest classifier was tuned with Bayesian optimization. A fuzzy genetic algorithm was presented by Murphy [58] to classify and analyze the BDCW dataset with an accuracy of 92.11%; the confusion matrix calculations were 92.86% for precision, 86.67% for recall, 95.65% for specificity, and 89.66% for F1-score. This work further utilized the most common classifiers, LR, KNN, SVM-RBF, RF, and DT, to classify the BDCW breast cancer dataset using a CRO feature selector. To demonstrate the importance of the feature selection stage, we determine the classification accuracies before and after feature selection; higher accuracies were obtained after performing feature selection with the CRO algorithm.
3 Methods and Materials 3.1 The CRO Algorithm (Coral Reef Optimization) The CRO algorithm presented in [59] is a meta-heuristic search technique based on a reef of corals K, modeled as an N × M square grid. Each square (i, j) of K may be assigned a coral (or coral colony) Cij, which represents a solution to the current optimization problem, encoded as a string of numbers over a given alphabet I. The CRO method starts by randomly filling some squares in K with corals (i.e., problem solutions) while leaving other squares blank. ρ is the rate of free/occupied squares in K; it is a key parameter of the CRO method, defined such that 0 < ρ < 1.
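The following minimal sketch, assuming ρ denotes the fraction of initially occupied squares, illustrates only this initialization step for a feature-selection reef. The reproduction, larvae-settlement, and depredation phases of the full CRO algorithm are omitted, and the grid size and parameter values are illustrative.

```python
import numpy as np

def initialize_reef(N, M, n_features, rho, rng):
    """Initialize an N x M coral reef for feature selection.

    Each occupied square holds a coral: a binary string of length
    n_features encoding one candidate feature subset. A fraction rho
    of the squares starts occupied; the rest remain free, leaving room
    for new larvae to settle in later iterations of the algorithm.
    """
    reef = np.full((N, M), None, dtype=object)   # None marks a free square
    occupied = rng.random((N, M)) < rho
    for i, j in zip(*np.where(occupied)):
        reef[i, j] = rng.integers(0, 2, size=n_features)
    return reef

rng = np.random.default_rng(42)
reef = initialize_reef(N=10, M=10, n_features=30, rho=0.6, rng=rng)
```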
[Table: the number of selected features and the corresponding test accuracy (%) recorded for each classifier (LR, KNN, SVM, RF, and DT) at each epoch of the CRO search; individual accuracies range between 91.2 and 100.]
Fig. 5 The accuracy rate for all the classifiers for each epoch
4.3 Statistical Results Analysis Performance metrics must be examined in this research to evaluate the performance of the various classifiers. Accuracy, Precision, Recall, and F1-Score [14] are the most common performance measures, and they are determined using Eqs. (8) to (11).
Accuracy = (TP + TN) / (TP + TN + FP + FN)  (8)

Precision = TP / (TP + FP)  (9)

Recall = TP / (TP + FN)  (10)

F1-Score = 2 × (Recall × Precision) / (Recall + Precision)  (11)
where TP is the count of True Positive samples, TN is the count of True Negative samples, FP is the count of False Positive samples, and FN is the count of False Negative samples from the confusion matrix [73]. Table 4 compares the five utilized classifiers in the three experiments on the breast dataset. As shown in Table 4, CRO-based feature selection enhances the accuracy rate for all the classifiers: LR, SVM, RF, and DT recorded a 100% accuracy rate, while KNN recorded 99.1%. Finally, Table 5 presents a comparative study between the proposed CRO-based LR, KNN, SVM-RBF, RF, and DT and the most recent approaches in terms of accuracy on the BDCW dataset.
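A minimal sketch of the computations in Eqs. (8)–(11), using illustrative confusion-matrix counts rather than values from the chapter's experiments:

```python
def classification_metrics(tp, tn, fp, fn):
    """Compute the four measures of Eqs. (8)-(11) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Illustrative counts only
print(classification_metrics(tp=70, tn=42, fp=1, fn=1))
```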
5 Conclusion and Future Work Breast cancer is the most prevalent invasive cancer in women and the second most significant cause of cancer death after lung cancer. The dataset used contains 30 features in each feature vector. Using these features directly for classification provided unconvincing results with different classification algorithms. In this work, the results improved when transforming the features and selecting about 20 highly correlated features. With multiple classification approaches, using the coral reefs optimization process for feature selection has yielded excellent results. The applied classifiers, Logistic Regression, K-Nearest Neighbor, SVM with RBF, Random Forest, and Decision Tree, gave test accuracies of 100%, 99.1%, 100%, 100%, and 100%, respectively. We intend to utilize different feature selection methods in the future to detect the essential elements of breast cancer images. We also intend to visualize the images to identify other elements that may help in handling and treating cancer in its early stages.
Table 4 All results of the cancer breast classifiers with different vector's feature-length (Acc = accuracy in %, Perc = precision, Rec = recall, F1 = F1-score)

Classifier | Without feature selection (Acc / Perc / Rec / F1) | Feature transformation (Acc / Perc / Rec / F1) | Feature selection using CRO (Acc / Perc / Rec / F1)
KNN | 71.9 / 0.911 / 0.97 / 0.943 | 97.1 / 0.97 / 0.955 / 0.96 | 99.1 / 0.98 / 0.97 / 0.97
LR | 36.8 / 0 / 0 / 0 | 98.8 / 0.985 / 0.985 / 0.98 | 100 / 1 / 1 / 1
DT | 91.12 / 0.833 / 0.952 / 0.889 | 89.5 / 0.847 / 0.897 / 0.87 | 100 / 1 / 1 / 1
SVM | 61.4 / 0.364 / 1 / 0.53 | 98.2 / 0.985 / 0.97 / 0.98 | 100 / 1 / 1 / 1
RF | 95.6 / 0.708 / 0.407 / 0.51 | 97.7 / 0.971 / 0.97 / 0.97 | 100 / 0.99 / 0.98 / 0.98
Table 5 The comparative study of the proposed method compared with the recent methods

Author | Utilized dataset | Method | Accuracy
Salama et al. [53] | Breast Diagnostic Cancer Wisconsin (BDCW) | NB | 92.97
 | | MLP | 96.66
 | | J-48 | 93.14
 | | SMO | 97.71
 | | IBK | 95.95
Dubey et al. [55] | Breast Cancer Wisconsin (BCW) | K-means clustering + Euclidean/Manhattan distance | 92.00
Agarap et al. [54] | Breast Diagnostic Cancer Wisconsin (BDCW) | GRU-SVM | 93.75
 | | LR | 96.09
 | | MLP | 99.03
 | | Softmax Regression | 97.65
 | | SVM | 96.09
Laghmati et al. [56] | Breast Cancer Wisconsin (BCW) | Neighborhood Components Analysis (NCA) + KNN | 99.12
 | | Neighborhood Components Analysis (NCA) + SVM | 98.86
Kumar et al. [57] | Breast Diagnostic Cancer Wisconsin (BDCW) | RF classifier | 96.00
 | | Bayesian optimization + RF classifier | 98.00
Murphy et al. [58] | Breast Diagnostic Cancer Wisconsin (BDCW) | Genetic Fuzzy Algorithm | 92.86
Proposed Method | Breast Diagnostic Cancer Wisconsin (BDCW) | LR | 100.00
 | | KNN | 99.1
 | | SVM-RBF | 100.00
 | | RF | 100.00
 | | DT | 100.00
References 1. Prat, A., & Perou, C. M. (2011). Deconstructing the molecular portraits of breast cancer. Molecular Oncology, 5(1), 5–23. 2. Bianchini, G., Balko, J. M., Mayer, I. A., Sanders, M. E., & Gianni, L. (2016). Triple-negative breast cancer: Challenges and opportunities of a heterogeneous disease. Nature Reviews Clinical Oncology, 13(11), 674–690. 3. Parkin, D. M., & Fernández, L. M. (2006). Use of statistics to assess the global burden of breast cancer. The Breast Journal, 12, S70–S80. 4. Pes, B., Dessì, N., & Angioni, M. (2017). Exploiting the ensemble paradigm for stable feature selection: A case study on high-dimensional genomic data. Information Fusion, 35, 132–147. 5. Liu, H., et al. (2005). Evolving feature selection. IEEE Intelligent Systems, 20(6), 64–76.
6. Hira, Z. M., & Gillies, D. F. (2015). A review of feature selection and feature extraction methods applied on microarray data. Advances in Bioinformatics, 2015. 7. Kamala, R., & Thangaiah, R. J. (2019). An improved hybrid feature selection method for huge dimensional datasets. IAES International Journal of Artificial Intelligence, 8(1), 77. 8. Tang, J., Alelyani, S., & Liu, H. (2014). Feature selection for classification: A review. Data Classification: Algorithms and Applications, 37. 9. Roffo, G., Melzi, S., & Cristani, M. (2015). Infinite feature selection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4202–4210. 10. Ang, J. C., Mirzal, A., Haron, H., & Hamed, H. N. A. (2015). Supervised, unsupervised, and semi-supervised feature selection: A review on gene selection. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 13(5), 971–989. 11. Li, J., et al. (2017). Feature selection: A data perspective. ACM Computing Surveys (CSUR), 50(6), 1–45. 12. Cai, J., Luo, J., Wang, S., & Yang, S. (2018). Feature selection in machine learning: A new perspective. Neurocomputing, 300, 70–79. 13. Caruana, R., & Freitag, D. (1994). Greedy attribute selection. In Machine Learning Proceedings 1994 (pp. 28–36). Elsevier. 14. Jović, A., Brkić, K., & Bogunović, N. (2015). A review of feature selection methods with applications. In 2015 38th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), pp. 1200–1205. 15. Yao, C., Liu, Y.-F., Jiang, B., Han, J., & Han, J. (2017). LLE score: A new filter-based unsupervised feature selection method based on nonlinear manifold embedding and its application to image recognition. IEEE Transactions on Image Processing, 26(11), 5257–5269. 16. Talavera, L. (2005). An evaluation of filter and wrapper methods for feature selection in categorical clustering. In International Symposium on Intelligent Data Analysis, pp. 440–451. 17. Wang, S., & Zhu, W. (2016). Sparse graph embedding unsupervised feature selection. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 48(3), 329–341. 18. Velliangiri, S., & Alagumuthukrishnan, S. (2019). A review of dimensionality reduction techniques for efficient computation. Procedia Computer Science, 165, 104–111. 19. Krishnaveni, N., & Radha, V. (2019). Feature selection algorithms for data mining classification: A survey. Indian Journal of Science and Technology, 12(6). 20. Chandrashekar, G., & Sahin, F. (2014). A survey on feature selection methods. Computers & Electrical Engineering, 40(1), 16–28. 21. Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58(1), 267–288. 22. Owen, A. B. (2007). A robust hybrid of lasso and ridge regression. Contemporary Mathematics, 443(7), 59–72. 23. Pereira, J. M., Basto, M., & da Silva, A. F. (2016). The logistic lasso and ridge regression in predicting corporate failure. Procedia Economics and Finance, 39, 634–641. 24. Zhang, Y., Gong, D., & Cheng, J. (2015). Multi-objective particle swarm optimization approach for cost-based feature selection in classification. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 14(1), 64–75. 25. Moorthy, R. S., & Pabitha, P. (2018). A study on meta heuristic algorithms for feature selection. In International Conference on Intelligent Data Communication Technologies and Internet of Things, pp. 1291–1298. 26. Ghadimi, N., Akbarimajd, A., Shayeghi, H., & Abedinia, O. (2018).
Two stage forecast engine with feature selection technique and improved meta-heuristic algorithm for electricity load forecasting. Energy, 161, 130–142. 27. Sharma, M., & Kaur, P. (2021). A comprehensive analysis of nature-inspired meta-heuristic techniques for feature selection problem. Archives of Computational Methods in Engineering, 28(3). 28. Jahromi, M. H., Tavakkoli-Moghaddam, R., Makui, A., & Shamsi, A. (2012). Solving a one-dimensional cutting stock problem by simulated annealing and tabu search. Journal of Industrial Engineering International, 8(1), 1–8.
Feature Selection Based Coral Reefs Optimization …
71
29. Said, G. A. E.-N. A., Mahmoud, A. M., & El-Horbaty, E.-S. M. (2014). A comparative study of meta-heuristic algorithms for solving quadratic assignment problem. arXiv preprint arXiv:1407.4863. 30. Mirjalili, S. (2019). Genetic algorithm. In Evolutionary Algorithms and Neural Networks (pp. 43–55). Springer. 31. Poli, R., Kennedy, J., & Blackwell, T. (2007). Particle swarm optimization. Swarm Intelligence, 1(1), 33–57. 32. Dorigo, M., Birattari, M., & Stutzle, T. (2006). Ant colony optimization. IEEE Computational Intelligence Magazine, 1(4), 28–39. 33. López, F. C. G., Torres, M. G., Pérez, J. A. M., & Vega, J. M. M. (2003). Scatter search for the feature selection problem. In Conference on Technology Transfer, pp. 517–525. 34. Karaboga, D., & Basturk, B. (2008). On the performance of artificial bee colony (ABC) algorithm. Applied Soft Computing, 8(1), 687–697. 35. Neshat, M., Sepidnam, G., & Sargolzaei, M. (2013). Swallow swarm optimization algorithm: A new method to optimization. Neural Computing and Applications, 23(2), 429–454. 36. Mirjalili, S. (2016). Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Computing and Applications, 27(4), 1053–1073. 37. Hashim, F. A., Hussain, K., Houssein, E. H., Mabrouk, M. S., & Al-Atabany, W. (2021). Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Applied Intelligence, 51(3), 1531–1551. 38. Yang, Z., Zhang, T., & Zhang, D. (2016). A novel algorithm with differential evolution and coral reef optimization for extreme learning machine training. Cognitive Neurodynamics, 10(1), 73–83. 39. Bermejo, E., Chica, M., Damas, S., Salcedo-Sanz, S., & Cordón, O. (2018). Coral reef optimization with substrate layers for medical image registration. Swarm and Evolutionary Computation, 42, 138–159. 40. Duran-Rosal, A. M., Gutierrez, P. A., Salcedo-Sanz, S., & Hervas-Martinez, C. (2018). A statistically-driven Coral Reef Optimization algorithm for optimal size reduction of time series. Applied Soft Computing, 63, 139–153. 41. Davis, D. L., Bradlow, H. L., Wolff, M., Woodruff, T., Hoel, D. G., & Anton-Culver, H. (1993). Medical hypothesis: Xenoestrogens as preventable causes of breast cancer. Environmental Health Perspectives, 101(5), 372–377. 42. MacMahon, B. (2006). Epidemiology and the causes of breast cancer. International Journal of Cancer, 118(10), 2373–2378. 43. Riihimäki, M., Thomsen, H., Brandt, A., Sundquist, J., & Hemminki, K. (2012). Death causes in breast cancer patients. Annals of Oncology, 23(3), 604–610. 44. Moore, S. K. (2001). Better breast cancer detection. IEEE Spectrum, 38(5), 50–54. 45. Islam, M. S., Kaabouch, N., & Hu, W. C. (2013). A survey of medical imaging techniques used for breast cancer detection. In 2013 IEEE International Conference on Electro-Information Technology (EIT), pp. 1–5. 46. Bickelhaupt, S., et al. (2016). Fast and noninvasive characterization of suspicious lesions detected at breast cancer X-ray screening: Capability of diffusion-weighted MR imaging with MIPs. Radiology, 278(3), 689–697. 47. Çalışkan, R., Gültekin, S. S., Uzer, D., & Dündar, Ö. (2015). A microstrip patch antenna design for breast cancer detection. Procedia - Social and Behavioral Sciences, 195, 2905–2911. 48. Tran, W. T., et al. (2019). Personalized breast cancer treatments using artificial intelligence in radiomics and pathomics. Journal of Medical Imaging and Radiation Sciences, 50(4), S32–S41. 49. Bi, W. L., et al. (2019).
Artificial intelligence in cancer imaging: Clinical challenges and applications. CA: A Cancer Journal for Clinicians, 69(2), 127–157. 50. Ullah, M., Akbar, A., & Yannarelli, G. G. (2020). Applications of artificial intelligence in early detection of cancer, clinical diagnosis and personalized medicine. 51. Kaur, S., et al. (2020). Medical diagnostic systems using artificial intelligence (AI) algorithms: Principles and perspectives. IEEE Access, 8, 228049–228069.
72
L. M. Abouelmagd et al.
52. Lavanya, D., & Rani, D. K. U. (2011). Analysis of feature selection with classification: Breast cancer datasets. Indian Journal of Computer Science and Engineering (IJCSE), 2(5), 756–763. 53. Salama, G. I., Abdelhalim, M., & Zeid, M. A. (2012). Breast cancer diagnosis on three different datasets using multi-classifiers. Breast Cancer (WDBC), 32(569), 2. 54. Agarap, A. F. M. (2018). On breast cancer detection: An application of machine learning algorithms on the Wisconsin diagnostic dataset. In Proceedings of the 2nd International Conference on Machine Learning and Soft Computing, pp. 5–9. 55. Dubey, A. K., Gupta, U., & Jain, S. (2016). Analysis of k-means clustering approach on the breast cancer Wisconsin dataset. International Journal of Computer Assisted Radiology and Surgery, 11(11), 2033–2047. 56. Laghmati, S., Cherradi, B., Tmiri, A., Daanouni, O., & Hamida, S. (2020). Classification of patients with breast cancer using neighbourhood component analysis and supervised machine learning techniques. In 2020 3rd International Conference on Advanced Communication Technologies and Networking (CommNet), pp. 1–6. 57. Kumar, P., & Nair, G. G. (2021). An efficient classification framework for breast cancer using hyper parameter tuned random decision forest classifier and Bayesian optimization. Biomedical Signal Processing and Control, 68, 102682. 58. Murphy, A. (2021). Breast Cancer Wisconsin (Diagnostic) data analysis using GFS-TSK. In North American Fuzzy Information Processing Society Annual Conference, pp. 302–308. 59. Salcedo-Sanz, S., Pastor-Sánchez, A., Prieto, L., Blanco-Aguilera, A., & García-Herrera, R. (2014). Feature selection in wind speed prediction systems based on a hybrid coral reefs optimization–extreme learning machine approach. Energy Conversion and Management, 87, 10–18. 60. Ahmed, S., Ghosh, K. K., Garcia-Hernandez, L., Abraham, A., & Sarkar, R. (2020). Improved coral reefs optimization with adaptive β-hill climbing for feature selection. Neural Computing and Applications, 1–20. 61. Krishnapuram, B., Carin, L., Figueiredo, M. A., & Hartemink, A. J. (2005). Sparse multinomial logistic regression: Fast algorithms and generalization bounds. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(6), 957–968. 62. Kim, M., Song, Y., Li, B., & Micciancio, D. (2020). Semi-parallel logistic regression for GWAS on encrypted data. BMC Medical Genomics, 13(7), 1–13. 63. Candès, E. J., & Sur, P. (2020). The phase transition for the existence of the maximum likelihood estimate in high-dimensional logistic regression. Annals of Statistics, 48(1), 27–42. 64. Kuha, J., & Mills, C. (2020). On group comparisons with logistic regression models. Sociological Methods & Research, 49(2), 498–525. 65. Patel, H., & Thakur, G. S. (2019). An improved fuzzy k-nearest neighbor algorithm for imbalanced data using adaptive approach. IETE Journal of Research, 65(6), Art. no. 6. 66. Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine Learning, 20(3), 273–297. 67. Awad, M., & Khanna, R. (2015). Support vector machines for classification. In Efficient Learning Machines (pp. 39–66). Springer. 68. Benbelkacem, S., & Atmani, B. (2019). Random forests for diabetes diagnosis. In 2019 International Conference on Computer and Information Sciences (ICCIS), pp. 1–4. 69. Rokach, L., & Maimon, O. (2005). Top-down induction of decision trees classifiers: A survey. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 35(4), 476–487. 70. Rokach, L., & Maimon, O. Z. (2007).
Data mining with decision trees: Theory and applications (Vol. 69). World Scientific. 71. Yan, C., Ma, J., Luo, H., & Patel, A. (2019). Hybrid binary coral reefs optimization algorithm with simulated annealing for feature selection in high-dimensional biomedical datasets. Chemometrics and Intelligent Laboratory Systems, 184, 102–111. 72. Benesty, J., Chen, J., Huang, Y., & Cohen, I. (2009). Pearson correlation coefficient. In Noise Reduction in Speech Processing (pp. 1–4). Springer. 73. Sarhan, S., Nasr, A. A., & Shams, M. Y. (2020). Multipose face recognition-based combined adaptive deep learning vector quantization. Computational Intelligence and Neuroscience, 2020.
A Comprehensive Review on Brain Disease Mapping—The Underlying Technologies and AI Based Techniques for Feature Extraction and Classification Using EEG Signals Jaideep Singh Sachadev and Roheet Bhatnagar Abstract Medical images play an important role in the effective diagnosis of diseases. The human brain consists of billions of neurons that work in coordination with one another, and human behavior is an outcome of the response of neurons to internal or external motor or sensory stimuli. These neurons carry signals between different parts of the human body and the brain. Human cognition studies focus on interpreting either these signals or brain images, and various technologies are in use for brain disease studies. Every imaging modality or technology generates a large amount of data in different forms, which can be modeled using Artificial Intelligence models. The electroencephalogram (EEG) is a recording of the electrical activity of the brain from the scalp. The recorded waveforms reflect cortical electrical activity. EEG activity is quite small in amplitude, measured in microvolts (µV). Delta, Theta, Alpha, and Beta are the main frequency bands of human EEG waves, and EEG electrodes are used to generate EEG plots. In this chapter, the authors capture details of the various technologies, techniques, and uses of AI models in brain mapping for disease prediction. The chapter presents a comprehensive account of these topics and will appeal to researchers in the field of human cognition. Keywords Electroencephalogram · Brain disease · Human cognition · EEG electrodes · Artificial Intelligence · Neurons
1 Introduction Brain disorders are one of the primary causes of disability worldwide and constitute one of the most complex research domains. Improper brain functioning and motor function deficits lead to poor overall quality of life. A Brain-Computer Interface (BCI) is a specialised system that provides a communication medium between the human brain and an external system, without any motor pathway being involved. Neurological phenomena are the basis for regulating BCI systems J. S. Sachadev · R. Bhatnagar (B) Department of CSE, Manipal University Jaipur, Jaipur, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 A. E. Hassanien et al. (eds.), Medical Informatics and Bioimaging Using Artificial Intelligence, Studies in Computational Intelligence 1005, https://doi.org/10.1007/978-3-030-91103-4_5
and we know that the brain functions based on the electrical signals exchanged between neurons. There are various approaches to capturing brain signals, and one of these methods is EEG. EEG records electrical activity along the scalp surface, enabling users to monitor brain function effectively. EEG signals contain much vital information on the electrical activities of the brain, which is used to make a prognosis of brain diseases. Accurate classification is very significant in reducing the damage caused by brain stroke, and Artificial Intelligence and Machine Learning based models are proving to be of great help to medical practitioners. Developing such frameworks and solutions is an active area of research, and this chapter also discusses some key case studies along with a detailed account of EEG signals and the medical image analysis stages.
2 Raw EEG Signals The sequence of processes followed in brain mapping using EEG is shown in Fig. 1. As can be seen from the figure, signal acquisition is the very first step, and it involves many different activities and methods. 1.
ERP (Event Related Potential): Event-Related Potentials are time-locked EEG signals generated in response to specific motor, cognitive, or sensory events. ERPs in humans are divided into two categories, sensory and cognitive: a component is termed "sensory" when it peaks within the first 100 ms after the stimulus, as these waves depend strongly on the stimulus, and components generated in a later part are termed "cognitive", as they reflect information processing [1] (a minimal epoch-averaging sketch is given after this list).
a. P50 ERP Wave: P50 is the positive peak between 40–75 ms after the evoked stimulus. It changes in psychiatric disorders like schizophrenia, bipolar affective disorder, and posttraumatic stress disorder [1]. The authors of [2] examined 104 schizophrenia patients and 102 control subjects and found a deficit of P50; further analysis showed a large S2 amplitude of P50. Reference [3] investigated the relationship between P50 inhibition and cognitive dysfunction in schizophrenia; the study found that schizophrenia patients had a lower amplitude of S1 and a higher P50 ratio than control subjects, but no correlation with cognitive impairment. Reference [4] investigated auditory sensory gating deficits in Han patients with first-episode, drug-naïve (FEDN) schizophrenia and found that patients with FEDN schizophrenia have P50 inhibition defects.
Fig. 1 Basic workflow in medical image analysis
b. N100 ERP Wave: N100 has a negative peak between 90 to 200 ms after the environmental stimulus. It changes in alcohol dependence syndrome and schizophrenia. Reference [2] examined 104 schizophrenia patients and 102 control subjects and found a deficit of N100; further analysis showed a smaller S1 amplitude of N100. Reference [5] stated a reduction of the auditory N100 in the first episode of schizophrenia. Reference [7] combined Transcranial Magnetic Stimulation (TMS) and Electroencephalography (EEG) in order to estimate cortical excitability by TMS over different cortical regions, like the left primary auditory cortices, left prefrontal, left motor, posterior cerebellum, and the vertex. All five areas generated similar N100s that peaked at the vertex region, and it was therefore concluded that the N100 evoked by TMS can be used as a generic cortical electrophysiological marker.
c. P200 ERP Wave: P200 has a positive peak between 100 to 250 ms after the environmental stimulus. P200 together with the N100 component may represent the sensation-seeking behaviour of an individual. It also changes in alcohol dependence syndrome and schizophrenia, like N100. Reference [2] examined 104 schizophrenia patients and 102 control subjects and found a deficit of P200; further analysis showed a smaller S1 amplitude of P200. Reference [8] conducted a study of age-related effects on 14 younger and 14 older subjects; it was found that P200 mean latency differs between the two groups for target stimuli, which was not seen in standard (high-probability) stimuli. Reference [9] introduced a novel technique for aesthetic quality assessment of a subject using N400 and P200 responses. It was found that for a bad quality stimulus the P200 signal gave a high positive amplitude and for a good quality stimulus N400 gave a high negative amplitude; therefore, a low P200 and a high negative N400 indicated a good quality stimulus.
d. N200 ERP Wave: The N200 waveform is a negative peak at about 200 ms after the environmental stimulus. It has 3 components: N2a, N2b, and N2c. N2a represents the encoding of the change in stimulus. N2b represents changes in the physical property of the stimulus that is task-relevant. N2c is elicited when the classification of disparate stimuli is needed. N200 changes in alcohol dependence syndrome, obsessive–compulsive disorder, and schizophrenia, like N100. Reference [10] examined ERP differences in inhibitory control in 44 older subjects and 41 young subjects. It was found that older subjects showed a slow frontal N200 latency and a larger N200 negative amplitude. Their study concluded that N200 during the stop-signal task was sensitive to Alzheimer's Dementia (AD).
e. N300 ERP Wave: N300 has a peak between 250 to 350 ms after the environmental stimulus and determines semantic congruity and expectancy [6]. Reference [11] tested 26 participants for each stimulus onset asynchrony (SOA) condition in order to differentiate semantic priming mechanisms; the result suggested that the N300 wave determines expectancy for semantic and categorical similarity priming.
f. P300 ERP Wave: P300 has a peak in activity between 250 to 400 ms after the target stimulus; this stimulus is alternated with standard stimuli to create an "oddball" paradigm, which is most commonly auditory. It changes in alcohol dependence syndrome, schizophrenia, bipolar affective disorder, depression, and neurotic disorders like phobia, panic disorder, dissociative disorder, and personality disorders. 65 subjects who were diagnosed with minor stroke were examined with neuropsychological scales and Event-Related Potentials. These subjects were divided into two groups, i.e., Normal Cognition and Cognitive Impairment; the Cognitive Impairment Group (CIG) was further divided into Vascular Dementia and Vascular Cognitive Impairment with no Dementia. It was found that the P300 latency delay is sensitive towards cognitive impairments and can be used to detect them [12].
g. N400 ERP Wave: N400 has a negative peak in activity between 300 to 600 ms after the environmental stimulus. It describes the context of semantic incongruity and is inversely proportional to the expectancy of a given word to end a sentence. As discussed for P200, [9] gave a novel technique for aesthetic quality assessment of a subject using N400 and P200 responses, where for a good quality stimulus N400 gave a high negative amplitude. Reference [13] registered 12 blind children and 12 sighted children for a 6-month haptic tactile stimulation training with a Sensory Substitution Device (SSD); ERP sessions were performed during training, and it was found that blind children performed better than the control group in recognizing SSD-delivered letters. N400 amplitude was greater in the occipital region for the blind group.
h. P600 ERP Wave: The P600 effect comes into the picture in language processing, for sentences that contain a nonpreferred syntactic structure, have a complex syntactic structure, or have a syntactic violation. Reference [14] tested 14 aphasic patients (where 7 of them have basal ganglia lesions and the other 7 have temporoparietal lesions) with two auditory oddball tasks, i.e., the P300 oddball paradigm and the language-related P600. It was found that all the subjects showed the P300 effect, while only the temporoparietal lesions group showed P600 in the language experiment.
i. MRCP ERP Wave: MRCPs occur during or after a movement. It has four components, i.e., Reafferent potential, Motor potential, Bereitschafts potential, and Pre-motion positivity. It changes in alcohol dependence syndrome.
j. CNV ERP Wave: Contingent Negative Variation (CNV) is a negative wave that gets elicited by paired stimuli without any motor response (S1-S2 paradigm) or by a standard reaction time paradigm (S1-S2 motor response). Early Contingent Negative Variation (ECNV) indicates arousal; Late Contingent Negative Variation (LCNV) indicates attention to the experimental task. It changes in alcohol dependence syndrome, schizophrenia, and personality disorders. CNV can be used as an EEG measure of cognition; in the study presented in [15], 62 subjects performed odour identification tasks, and these subjects were divided into two groups, YOUNG (30
subjects within age 18–30 yrs.) and OLD (32 subjects with age > 40 yrs.). EEG was recorded with channels Fp2, Fz, Cz, and Pz. The results that emerged were that CNV exhibited weak relations with odour pleasantness, and 69% of subjects had consistent CNV results throughout all the tasks.
k. PINV ERP Wave: Post Imperative Negative Variation (PINV) is also called delayed Contingent Negative Variation (CNV) resolution, and it indicates sustained cognitive activity.
2. Sleep Stage Scores: Here we record the EEG signals for each subject overnight when they are asleep; these records are classified into different sleep stages, i.e., 1, 2, 3, 4, and rapid eye movement. This dataset is used for understanding patient sleep stages with the help of machines. Reference [16] performed sleep stage classification for effective treatment of sleep-related disorders. The Cyclic Alternating Pattern (CAP) sleep database was used, which has EEG signals of subjects suffering from 7 different kinds of sleeping disorders. Optimized wavelet filters were used to decompose EEG epochs into bands. An Ensemble of Bagged Trees (EBT) gave the highest performance of 85.3% and 92.8% for the unbalanced and balanced datasets.
3. Seizure detection: We create two datasets, one where EEG signals are recorded for epileptic patients during seizure and seizure-free periods; the other dataset contains EEG records for non-epileptic patients and is labelled as the control class. This complete dataset can be used to detect an upcoming seizure. Reference [17] proposed a deep learning approach for detecting seizures in paediatric patients using EEG technology. The dataset used in this study was collected from the Massachusetts Institute of Technology (MIT) and Children's Hospital Boston (CHB). A Supervised Deep Convolutional Autoencoder (SDCAE) model with a BiLSTM-based classifier gave a maximum accuracy of 98.79%.
4. Emotion recognition: In this, each subject watches a video containing a specific emotion; EEG is recorded during the task, and each subject performs a self-assessment. A pair of valence (inputs by the subject) and arousal (class labels) values is used to describe different emotions. Reference [18] proposed a novel emotion recognition method using ERDL (a novel DL model); this study used the DEAP dataset. The methodology followed in the study was to extract different entropies from each EEG segment to get a feature cube; these feature cubes were the input for the ERDL model, in which a Graph Convolutional Neural Network (GCNN) was fused with Long Short-Term Memory (LSTM) networks. Average accuracies of 90.60% and 90.45% for arousal and valence were obtained in subject-dependent experiments, and 85.27% and 84.81% in subject-independent experiments.
5. Motor imagery: In this, each subject performs a movement without actually executing it; the subject imagines certain muscle movements of the limbs or tongue. While performing the motor imagery task, EEG brain signals are recorded, which are later used to accurately classify the user's intended movement. Reference [19] proposed 5 schemes for a CNN-based EEG-BCI system to decode hand Motor Imagery (MI); these schemes fine-tune deep learning
pre-trained models for efficient performance. This study gave a highest accuracy of 84.19%, and the proposed schemes obtained a statistically significant improvement in classification.
6. Mental Workload: In this, each subject undergoes various degrees of mental task complexity. We categorize different workloads based on the increasing number of actions a subject needs to perform. These tasks are further divided into two areas, i.e., BMI performance monitoring and cognitive stress monitoring. Reference [18] estimated mental workload using a deep BLSTM-LSTM network and an evolutionary algorithm; this was done using the "STEW" dataset, which consists of two tasks, i.e., No Task and SIMKAP (simultaneous capacity, multitasking activity). To select the optimized features of the mental activity, the Grey Wolf Optimizer (GWO) was used. A hybrid model of Bidirectional Long Short-Term Memory (BLSTM) and Long Short-Term Memory (LSTM) was trained on these features, and accuracies of 86.33% and 82.57% were obtained for the No Task and SIMKAP tasks (Fig. 2).
Fig. 2 Signal Acquisition types
3 Signal Processing

1. Independent Component Analysis (ICA): ICA decomposes complex datasets into additive subcomponents. It is a special case of blind source separation in which artifacts such as ECG, EMG, and eye movements, generated by independent signal sources, are rejected and clean EEG signals are obtained (a preprocessing sketch follows this list). Reference [20] applied ICA to remove artifacts from BCI recordings. ICA was applied using four different algorithms, i.e., Joint Approximate Diagonalization of Eigenmatrices (JADE), Hyvarinen's fixed-point algorithm (FastICA), Second Order Blind Identification (SOBI), and Infomax. All four algorithms were compared in [20], and it was found that the SOBI algorithm separates EEG signals most accurately and precisely. Another study [21] showed that ICA with high-order statistics efficiently removed a variety of artifacts from the EEG signal.
2. Principal Component Analysis (PCA): PCA uses an orthogonal transformation to convert an EEG signal of correlated vectors into uncorrelated vectors called principal components; the artifact vectors or components are discarded and the EEG signal is reconstructed. Reference [21] suggested that PCA can be used to remove eye artifacts from EEG signals but is not efficient when cerebral EEG activations and artifacts have similar amplitudes.
3. Common Average Reference (CAR): The main problem in working with EEG signals is their very low Signal-to-Noise Ratio (SNR). CAR removes the parts of the EEG signal common to all electrodes and retains the specific characteristics of each electrode, which improves the SNR significantly. To diagnose depression, [22] used the CAR signal-processing method to remove background noise, and ICA to remove ocular artifacts. The study in [23] provided a deep learning model to classify hand motions from low-frequency EEG signals: all EEG signals in the dataset were transformed using the CAR filter, spatial filters were used to enhance the EEG signal components, and finally every signal was down-sampled to 16 Hz. This study achieved a good performance of 71% and 65% for the proposed methodology.
4. Common Spatial Patterns (CSP): This filtering is derived from the Common Spatial Subspace Decomposition (CSSD) filtering algorithm for multichannel EEG signals. It extracts the task-related signal component and removes irrelevant components, including noise; a series of spatial filters is obtained by diagonalization of the covariance matrices, yielding a highly discriminative feature vector. Reference [24] used CSP and a spatio-temporal discrepancy feature to extract EEG signal features for motor imagery classification. The ESVL algorithm proposed in that study was evaluated on BCI Competition IV datasets 2a and 2b, and maximum accuracies of 60% and 71% were achieved on the two datasets.
5. Surface Laplacian (SL): It is the most efficient spatial filter and is robust against artifacts caused by parts of the scalp not covered by the electrode cap.
6. Adaptive Filter (AF): It does not need the statistical characteristics of the EEG signal and artifacts in advance; it automatically adjusts its parameters during processing for the best filtering effect. Its components are the input EEG signals, the structure of the input/output connection, and the parameters of the relationship between the input and output connections. Reference [25] proposed an adaptive filter that relies on an accelerometer-based reference signal. The proposed algorithm was tested on 48 subjects who performed the MATB-II (Revised Multi-Attribute Task Battery-II), and an accuracy of 95% was achieved using Random Forest.
7. Digital Filters (DF): These process the EEG signal in the frequency domain and require that the EEG signal and artifacts occupy different frequencies. The EEG signal is divided into different frequency ranges using filters such as band-stop, band-pass, high-pass, and low-pass filters (see the preprocessing sketch after Fig. 3). Reference [26] proposed a hybrid method to remove artifacts such as power-line interference and eye, muscle, and cardiac movements from visually evoked EEG signals. This hybrid method uses a combination of digital filters, the Transient Artifact Reduction Algorithm (TARA), and ICA; the SNR increased by 13.47% and 26.67% for simulated signals and real data, respectively (Fig. 3).
Fig. 3 Signal processing types and techniques
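As an illustration of how several of these preprocessing steps fit together, the sketch below combines band-pass digital filtering, Common Average Referencing, and ICA-based artifact rejection. It is a minimal example under stated assumptions: the recording file name `subject01.edf` and the excluded component indices are hypothetical, and the cutoff frequencies are illustrative defaults rather than settings from the cited studies.

```python
import numpy as np
import mne
from mne.preprocessing import ICA
from scipy.signal import butter, filtfilt

# --- Digital filtering (SciPy): 4th-order Butterworth band-pass, 1-50 Hz ---
def bandpass(sig, lo, hi, fs, order=4):
    nyq = 0.5 * fs
    b, a = butter(order, [lo / nyq, hi / nyq], btype="band")
    return filtfilt(b, a, sig)  # zero-phase filtering

fs = 250.0
noisy = np.sin(2 * np.pi * 10 * np.arange(0, 4, 1 / fs)) + np.random.randn(1000)
clean = bandpass(noisy, 1.0, 50.0, fs)

# --- The same steps on a full recording with MNE (file name is hypothetical) ---
raw = mne.io.read_raw_edf("subject01.edf", preload=True)
raw.filter(l_freq=1.0, h_freq=50.0)          # band-pass digital filter
raw.set_eeg_reference("average")             # Common Average Reference (CAR)

ica = ICA(n_components=20, random_state=42)  # decompose into independent components
ica.fit(raw)
ica.exclude = [0, 3]                         # ocular/muscle components (illustrative indices)
raw_clean = ica.apply(raw.copy())            # reconstruct artifact-free EEG
```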
4 Feature Extraction Methods

1. Independent Component Analysis (ICA): ICA can be used for both preprocessing and feature extraction; it breaks the EEG signal into independent components and takes the important components as features. It is useful for processing large signals quickly and efficiently.
2. Principal Component Analysis (PCA): PCA is also used for both signal preprocessing and feature extraction; it performs dimensionality reduction by removing artifacts and extracting the important components from the signal. Reference [27] performed double feature extraction on a publicly available epileptic EEG dataset using PCA and the wavelet transform, and found that this method was able to separate the signals in a low-dimensional feature subspace, as in the case of epileptic EEG signals. Transforming EEG signals into wavelets increases their usable information, and the power-spectrum transformation also makes feature extraction with PCA easier.
3. Power Spectral Density (PSD): PSD represents the signal's power as a function of frequency; the Weighted Difference of PSD (WDPSD) can be used as a feature extraction method. Reference [28] applied PSD feature extraction to the Graz BCI Competition IV dataset, which consists of two motor imagery classes, right-hand and left-hand motion. The PSD features gave better and more consistent results with the LDA classifier.
4. Autoregressive (AR) Method: The AR method is used in linear prediction problems on time-series data, mostly non-stationary time series. Much research supports its use as a feature extraction method for EEG signals in the time domain. Reference [35] used the Burg AR estimator in their proposed methodology to minimize the error in forward and backward predictions; this study extracted features from electroencephalogram (EEG) and electrooculogram (EOG) records using autoregressive modelling.
   a. Yule-Walker Method: Also called the autocorrelation method, it estimates the PSD of the signal by fitting the AR model to the windowed input signal through least-squares minimization.
   b. Burg's Method: It is similar to the Yule-Walker method in that both estimate the PSD of a signal. The estimated PSD looks very similar to that of the original signal, so it can also be used as a feature extraction method.
5. Fast Fourier Transform (FFT) Method: FFT is a feature extraction method in which EEG signals are transformed from the time domain to the frequency domain in order to perform spectral analysis. The signal is divided into the four EEG bands, i.e., delta, theta, alpha, and beta, which contain the major characteristics of the EEG signal (a band-power sketch follows this list). Reference [33] detected drowsiness using respiratory signal analysis; the FFT feature extraction method was used, and the Artificial Neural Network (ANN) in that study gave an accuracy of 83.5%, better than an SVM on the EEG drowsiness signals for a single EEG channel.
6. Wavelet Transform (WT) Method: The wavelet transform is a feature extraction method in which EEG signals are represented by wavelets calculated from a mother function through operations such as shifting, translation, and stretching along the time axis. The wavelet transform is divided into two categories, the Continuous Wavelet Transform (CWT) and the Discrete Wavelet Transform (DWT). Reference [34] used a genetic algorithm to find the optimal wavelet transform parameters for denoising EEG signals; two datasets were used, the EEG Motor Movement and Imagery datasets. The proposed algorithm produced better results than manual configurations based on an ad hoc strategy.
   a. Continuous Wavelet Transform (CWT): The CWT decomposes signals into wavelets and creates a time-frequency representation of them. It is an excellent tool for mapping non-stationary signals because of their changing properties. Reference [29] proposed a novel framework in which the EEG signals of each trial are pre-processed with the Common Spatial Pattern (CSP) method and then decomposed into 2D images using the CWT; these images were used to train a CNN model, which gave better classification results. The framework was applied to the publicly available BCI Competition IV dataset 2a.
   b. Discrete Wavelet Transform (DWT): The DWT decomposes signals into sets of coefficients, creating a time series of coefficients that represents the change of the signal within a frequency band over time (a DWT feature sketch follows this list). Reference [30] demonstrated that the DWT gave significantly better classification accuracy than previous methods. Two combinations were proposed, DWT-db4 combined with SVM and DWT-db2 combined with RF, which were effective in the classification of epileptic seizures. The study also showed that these combinations remain effective even when the dataset is unbalanced.
7. Eigenvectors: Eigenvector methods calculate frequency and power from artifact-dominated measurements; even if a signal is corrupted, an eigen-decomposition can still correlate with it. Eigenvector decomposition methods include the Minimum-Norm method, Pisarenko's method, and the MUSIC method. Reference [32] proposed an expert system for the detection of variability in EEG. The methodology included two stages: eigenvector feature extraction to obtain the features of the EEG signal, and then training classifiers on these features. A Modified Mixture of Experts (MME) trained on the extracted features gave accuracy rates higher than those of the Mixture of Experts (ME).
   a. Pisarenko's Method: An eigenvector method for calculating the PSD, which estimates the PSD from the desired signal equation.
   b. MUSIC Method: An eigenvector method for calculating the PSD. Its advantage is that it removes the false-zero problem by averaging the spectra corresponding to the artifact subspace of all the eigenvectors.
   c. Minimum Norm Method: Another eigenvector method for calculating the PSD.
8. Time-Frequency Distributions (TFD): TFDs require a restricted pre-processing stage because they need a noise-free signal to perform well. The windowing process is important in the pre-processing module because, as a time-frequency method, it deals with the stationarity principle. Reference [31] compared TFDs and many dissimilarity measures for epileptic seizure detection; their framework was evaluated on 13 different classification problems and was found effective in the detection of seizures (Fig. 4).
Fig. 4 Feature extraction methods
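A minimal sketch of two of the feature extractors above is given below: band power computed from a Welch PSD estimate (FFT-based) and simple statistics of DWT sub-band coefficients. It assumes SciPy and the PyWavelets package; the band limits, wavelet choice (db4), decomposition level, and statistics are illustrative defaults rather than the exact settings of the cited studies.

```python
import numpy as np
from scipy.signal import welch
import pywt  # PyWavelets

def band_power(sig, fs, band):
    """Power of `sig` inside `band` (Hz), integrated from a Welch PSD estimate."""
    freqs, psd = welch(sig, fs=fs, nperseg=int(2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])

def dwt_features(sig, wavelet="db4", level=4):
    """Mean absolute value, std, and energy of each DWT sub-band."""
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    feats = []
    for c in coeffs:
        feats.extend([np.mean(np.abs(c)), np.std(c), np.mean(c ** 2)])
    return np.array(feats)

fs = 250.0
eeg = np.random.randn(int(10 * fs))  # placeholder single-channel epoch
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
powers = {name: band_power(eeg, fs, b) for name, b in bands.items()}
features = np.concatenate([list(powers.values()), dwt_features(eeg)])
```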
5 Classification

1. Artificial Neural Network (ANN): An ANN is a collection of a large number of neurons used for non-linear classification. The study in [39] classified two activities, open-eyes (OE) and closed-eyes (CE) sessions, for 14 subjects who had suffered a stroke. The proposed methodology trains an ANN on input signals containing the EEG bands, i.e., alpha, beta, theta, and delta; the results suggest that an ANN can successfully classify stroke from brain activity.
2. Deep Learning (DL): DL is a form of ML with hidden layers, a huge number of neurons, and many hyperparameters to tune. The study in [23] provided a deep learning model to classify hand motions from low-frequency EEG signals. All EEG signals in the dataset were transformed using the CAR filter, spatial filters were used to enhance the EEG signal components, and finally every signal was down-sampled to 16 Hz. The study achieved a good performance of 71% and 65% for the proposed methodology.
3. K-Nearest Neighbours (KNN): The basic idea behind this algorithm is that if most of the k nearest neighbour samples of a point in feature space belong to one class, then the point is assigned to that class and shares the features of its samples. Reference [38] proposed a system to detect drowsiness using EEG technology; the EEG signals were processed and features were extracted using the DWT, a K-NN classifier was trained on these features, and accuracies of around 90-100% were achieved with the proposed system.
4. Linear Discriminant Analysis (LDA): Reference [37] proposed an advanced classification technique for motor imagery (MI)-related EEG signals. Features extracted from the signals were fed to an LDA classifier, which gave better classification accuracy. LDA was used in that study because of its very low computational requirements, which suit online BCI systems.
5. Support Vector Machine (SVM): SVM is mostly used for binary classification; the basic idea is to find the optimal decision boundary so that the data on its two sides are separated, achieving classification (see the training sketch after this list). Reference [22] classified seizure patients based on EEG signals into three groups: Generalized Non-Specific Seizure (GNSZ), Focal Non-Specific Seizure (FNSZ), and Tonic-Clonic Seizure (TCSZ). The dataset used is from the Temple University Hospital Seizure Corpus (TUSZ). An SVM was trained on the extracted features, obtaining 97.83%, 90.25%, and 91.4% average specificity, average sensitivity, and accuracy, respectively.
6. Naive Bayes (NB): This ML algorithm is based on Bayes' theorem; the main idea is to calculate, for each class, the probability that an item appears under the given conditions, and to assign the item to the class with the largest probability. Reference [36] used image descriptors for epileptic seizure detection: the Short-Time Fourier Transform (STFT) was used to transform the EEG signals into images, and the extracted feature matrix was used as input to a Naive Bayes classifier, which gave a significantly better accuracy of 98% (Fig. 5).

Fig. 5 Classification methods and techniques
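The sketch below shows the generic training loop shared by these classifiers: features extracted as in Sect. 4 are standardized and fed to a scikit-learn estimator (an SVM here; swapping in KNN, LDA, or Naive Bayes is a one-line change). The random feature matrix is a placeholder for real EEG features, so the printed accuracy is meaningless; only the pipeline structure is the point.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Placeholder feature matrix: 200 epochs x 18 features (e.g., band powers + DWT stats)
X = np.random.randn(200, 18)
y = np.random.randint(0, 2, size=200)  # 0 = control, 1 = patient (illustrative labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```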
6 Case Studies Consolidation and Their Analysis

In one of the studies, the BCI Competition Dataset 3a was used; this dataset contains EEG recordings of three subjects for MI (Motor Imagery) classification. There are four classes, i.e., Left Hand, Right Hand, Foot, and Tongue, on which the classification is based. The EEG recordings were sampled at a frequency of 250 Hz, followed by notch filtering and band-pass filtering of the signal between 1 and 50 Hz. There are 60 EEG channels, i.e., ‘AFFz’, ‘F1h’, ‘Fz’, ‘F2h’, ‘FFC1’, ‘FFC1h’, ‘FFCz’, ‘FFC2h’, ‘FFC2’, ‘FC3h’, ‘FC1’, ‘FC1h’, ‘FCz’, ‘FC2h’, ‘FC2’, ‘FC4h’, ‘FCC3’, ‘FCC3h’, ‘FCC1’, ‘FCC1h’, ‘FCCz’, ‘FCC2h’, ‘FCC2’, ‘FCC4h’, ‘C5’, ‘C5h’, ‘C3’, ‘C3h’, ‘C1h’, ‘Cz’, ‘C2h’, ‘C4h’, ‘C4’, ‘C6h’, ‘C6’, ‘CCP3’, ‘CCP3h’, ‘CCP1’, ‘CCP1h’, ‘CCPz’, ‘CCP2h’, ‘CCP2’, ‘CCP4h’, ‘CCP4’, ‘CP3h’, ‘CP1’, ‘CP1h’, ‘CPz’, ‘CP2h’, ‘CP2’, ‘CP4h’, ‘CPP1’, ‘CPP1h’, ‘CPPz’, ‘CPP2h’, ‘CPP2’, ‘P1h’, ‘Pz’, ‘P2h’.
Fig. 6 Timeline of trials performed by each subject
There were six events in the dataset, annotated as ‘1023’: rejected trial, ‘768’: start of the trial, ‘769’: cue for left hand, ‘770’: cue for right hand, ‘771’: cue for foot, and ‘772’: cue for tongue. Each subject performed 40 trials. Each trial begins with a blank screen for 2 s, followed by a beep (acoustic stimulus) indicating the beginning of the run. From the 3rd second an arrow (left, right, up, or down) was displayed together with a fixation cross “+”, and after the 4th second only the fixation cross was displayed for the next 3 s, during which the subjects were asked to imagine movements of the left hand, right hand, foot, or tongue. Figure 6 displays the timeline of the trials performed by each subject. The EEG recordings were epoched using the MNE library in Python; each epoch is 3 s long, and four events were taken into consideration, i.e., 769, 770, 771, and 772, the annotations for Left Hand, Right Hand, Foot, and Tongue. Figure 7 displays butterfly plots of the evoked data produced with the MNE library, using evoked.plot() to obtain plots for all four events. We also plotted gradients of micro-voltage for all 60 channels using the MNE library's plot_image() function, which shows the change in voltage over time for all four conditions, as shown in Figs. 8 and 9. A condensed sketch of this workflow follows.
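The epoching and plotting steps described above can be condensed as below. The file name is hypothetical, and the exact annotation-to-event-id mapping depends on how MNE parses the GDF annotations, so the dictionary filtering shown is an assumption about that mapping rather than a guaranteed recipe.

```python
import mne

# Hypothetical recording from BCI Competition III dataset 3a (GDF format)
raw = mne.io.read_raw_gdf("k3b.gdf", preload=True)
raw.filter(l_freq=1.0, h_freq=50.0)

# Map annotations to events; keep only the four motor imagery cues
events, event_id = mne.events_from_annotations(raw)
mi_ids = {k: v for k, v in event_id.items()
          if k in ("769", "770", "771", "772")}  # left hand, right hand, foot, tongue

# 3-second epochs starting at the cue, as described above
epochs = mne.Epochs(raw, events, event_id=mi_ids,
                    tmin=0.0, tmax=3.0, baseline=None, preload=True)

# Butterfly plot and per-channel voltage image for one condition
evoked_left = epochs["769"].average()
evoked_left.plot()        # butterfly plot of the evoked response
evoked_left.plot_image()  # channel x time voltage gradient
```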
Fig. 7 Butterfly plots of evoked data for subjects
Fig. 8 Change in voltage plots for all the four conditions
| Article | Year | Processing Method | Feature Extraction Method | Classification Method | Accuracy (%) |
|---|---|---|---|---|---|
| [42] | 2019 | Decomposition into sub-bands using a fixed-size overlapping window | One-versus-the-rest Common Spatial Pattern (OVR-CSP) | Convolutional Neural Network (CNN) | 91.9 |
| [43] | 2019 | Filter Bank Common Spatial Pattern (FBCSP) | Common Spatial Pattern (CSP) with time-domain parameters | Multi-class Support Matrix Machines (M-SMM) | 91.68 |
| [44] | 2018 | Bandpass filter and Laplacian filtering | Band power technique | Linear Discriminant Analysis (LDA) classifier | 91.98 |
| [45] | 2018 | Bandpass digital filter (4 different filters) | Common Spatial Pattern (CSP) | Multi-class fuzzy system | 91.1 |
| [46] | 2017 | Chebyshev type-II digital filter | Common Spatial Pattern (CSP) with Time-Domain Parameters (TDP) | Multiclass Support Matrix Machine (M-SMM) | 94.8 |
| [47] | 2017 | Butterworth digital filter | Common Spatial Pattern (CSP) | Support Vector Machine (SVM) | 85.5 |
| [48] | 2016 | Bayesian network (BN) construction | Common Bayesian Network (CBN) | Support Vector Machine (SVM) | 98 |

Fig. 9 Some significant studies
Reference [40] proposed a methodology in which a One-Versus-The-Rest Common Spatial Pattern (OVR-CSP) algorithm is used to extract features from the raw EEG signals, which are then classified using a novel CNN model; an average accuracy of 91.9% was achieved with this methodology across all three subjects.
Fig. 10 Most prominent method types
Reference [41] proposed a novel approach to the multiclass support matrix machine (M-SMM); statistical tests show that the M-SMM is effective in the classification of motor imagery (MI) EEG for Brain-Computer Interfaces (BCI). The study in [42] established a correlation between EEG MI signals and Event-Related Desynchronization (ERD) and Event-Related Synchronization (ERS) using band power features: the EEG signals were first pre-processed, then band power features for Cz, C3, and C4 were calculated, and an LDA classifier gave an accuracy of 91.98%. Reference [43] introduced a novel method for multi-class classification of the MI EEG task; it used the Common Spatial Pattern as the feature extraction method and a fuzzy classifier for classification, and the study showed the effectiveness of this method with good results. The study in [44] introduced a feature extraction method using the Common Spatial Pattern (CSP) over the mu and beta frequency ranges; these features were used to train an SVM for classification, and an accuracy of 85.5% was achieved. Reference [45] proposed a novel method that fits a Gaussian mixture model in each channel, constructs a Common Bayesian Network (CBN) [46], and then uses an SVM to learn over the common edges for classification; the proposed methodology gives an excellent classification performance. Based on our survey, it is evident that signal processing using the Chebyshev type-II digital filter, feature extraction using the Common Spatial Pattern (CSP), and classification using the Support Vector Machine (SVM) were the most prominent methods for motor imagery (MI) EEG signal classification, as shown in Fig. 10; a sketch of this CSP-based pipeline follows.
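A minimal sketch of that prominent CSP + SVM pipeline, using MNE's CSP implementation inside a scikit-learn pipeline, is shown below. The epochs array is a random placeholder standing in for real band-pass-filtered MI trials, so the cross-validation score is meaningless; the structure is what matters.

```python
import numpy as np
from mne.decoding import CSP
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder MI data: 80 trials x 60 channels x 750 samples (3 s at 250 Hz)
X = np.random.randn(80, 60, 750)
y = np.random.randint(0, 2, size=80)  # two MI classes (illustrative)

pipe = Pipeline([
    ("csp", CSP(n_components=4, log=True)),  # spatial filters -> log-variance features
    ("svm", SVC(kernel="rbf", C=1.0)),
])
scores = cross_val_score(pipe, X, y, cv=5)
print("mean CV accuracy:", scores.mean())
```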
7 Conclusion

Medical practitioners today depend mainly on the image analysis reported by EEG specialists. EEG signals and images provide a comprehensive account of a person's mental state, and AI and ML techniques are widely used by researchers to better interpret and analyze these patterns for diagnosis. The chapter is written for researchers in the domain: it introduces EEG as a technology for brain mapping and discusses signal acquisition, signal processing techniques, feature extraction methods, and classification techniques. The chapter is thus aimed at acquainting readers with the different aspects and processes of brain disease mapping using EEG.

Acknowledgements We acknowledge Manipal University Jaipur for all the support extended.
References

1. Shravani, S., & Sinha, V. K. (2009). Event-related potential: An overview. Industrial Psychiatry Journal, 18(1), 70–73. https://doi.org/10.4103/0972-6748.57865.
2. Shen, C. L., Chou, T. L., Lai, W. S., Hsieh, M. H., Liu, C. C., Liu, C. M., & Hwu, H. G. (2020). P50, N100, and P200 auditory sensory gating deficits in schizophrenia patients. Frontiers in Psychiatry, 11, 868.
3. Xia, L., Yuan, L., Du, X. D., Wang, D., Wang, J., Xu, H., Huo, L., Tian, Y., Dai, Q., Wei, S., & Wang, W. (2020). P50 inhibition deficit in patients with chronic schizophrenia: Relationship with cognitive impairment of MATRICS consensus cognitive battery. Schizophrenia Research, 215, 105–112.
4. Xia, L., Wang, D., Wei, G., Wang, J., Zhou, H., Xu, H., Tian, Y., Dai, Q., Xiu, M., Chen, D., & Wang, L. (2021). P50 inhibition defects with psychopathology and cognitive impairment in patients with first-episode drug naive schizophrenia. Progress in Neuro-Psychopharmacology and Biological Psychiatry, 107, 110–246.
5. Ren, X., Fribance, S. N., Coffman, B. A., & Salisbury, D. F. (2021). Deficits in attentional modulation of auditory N100 in first-episode schizophrenia. European Journal of Neuroscience, 53, 2629–2638. https://doi.org/10.1111/ejn.15128.
6. Kumar, M., Federmeier, K. D., & Beck, D. M. (2021). The N300: An index for predictive coding of complex visual objects and scenes. Cerebral Cortex Communications, 2(2), 030.
7. Du, X., Choa, F. S., Summerfelt, A., Rowland, L. M., Chiappelli, J., Kochunov, P., & Hong, L. E. (2017). N100 as a generic cortical electrophysiological marker based on decomposition of TMS-evoked potentials across five anatomic locations. Experimental Brain Research, 235(1), 69–81. https://doi.org/10.1007/s00221-016-4773-7.
8. Bourisly, A. K., & Shuaib, A. (2018). Neurophysiological effects of aging: A P200 ERP study. Translational Neuroscience, 9, 61–66. https://doi.org/10.1515/tnsci-2018-0011.
9. Laha, M., Konar, A., Das, M., Debnath, C., Sengupta, N., & Nagar, A. K. (2020). P200 and N400 induced aesthetic quality assessment of an actor using type-2 fuzzy reasoning. 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), pp. 1–8. https://doi.org/10.1109/FUZZ48607.2020.9177773.
10. Elverman, K. H., et al. (2021). Event-related potentials, inhibition, and risk for Alzheimer's disease among cognitively intact elders, 1413–1428.
11. Franklin, M. S., Dien, J., Neely, J. H., Huber, E., & Waterson, L. D. (2007). Semantic priming modulates the N400, N300, and N400RP. Clinical Neurophysiology, 118(5), 1053–1068.
12. Zhang, Y., Xu, H., Zhao, Y., Zhang, L., & Zhang, Y. (2021). Application of the P300 potential in cognitive impairment assessments after transient ischemic attack or minor stroke. Neurological Research, 43(4), 336–341. https://doi.org/10.1080/01616412.2020.1866245.
13. Ortiz, T., et al. (2019). A N400 ERP study in letter recognition after passive tactile stimulation training in blind children and sighted controls, 197–206.
14. Frisch, S., Kotz, S. A., von Cramon, D. Y., & Friederici, A. D. (2003). Why the P600 is not just a P300: The role of the basal ganglia. Clinical Neurophysiology, 114(2), 336–340.
15. Thaploo, D., Zelder, S., & Hummel, T. (2021). Olfactory modulation of the contingent negative variation to auditory stimuli. Neuroscience, 470, 16–22.
16. Sharma, M., Tiwari, J., & Acharya, U. R. (2021). Automatic sleep-stage scoring in healthy and sleep disorder patients using optimal wavelet filter bank technique with EEG signals. International Journal of Environmental Research and Public Health, 18(6), 3087.
17. Chen, M. C., Sorooshyari, S. K., Lin, J. S., & Lu, J. (2020). A layered control architecture of sleep and arousal. Frontiers in Computational Neuroscience, 14, 8.
18. Chakladar, D. D., Dey, S., Roy, P. P., & Dogra, D. P. (2020). EEG-based mental workload estimation using deep BLSTM-LSTM network and evolutionary algorithm. Biomedical Signal Processing and Control, 60, 101989.
19. Zhang, K., Robinson, N., Lee, S. W., & Guan, C. (2021). Adaptive transfer learning for EEG motor imagery classification with deep convolutional neural network. Neural Networks, 136, 1–10.
20. Chen, Y., Xue, S., Li, D., & Geng, X. (2021). The application of independent component analysis in removing the noise of EEG signal. 2021 6th International Conference on Smart Grid and Electrical Automation (ICSGEA), pp. 138–141. https://doi.org/10.1109/ICSGEA53208.2021.00036.
21. Popescu, T. D. Artifact removing from EEG recordings using independent component analysis with high-order statistics.
22. Mahato, S., & Paul, S. (2020). Classification of depression patients and normal subjects based on electroencephalogram (EEG) signal using alpha power and theta asymmetry. Journal of Medical Systems, 44, 28. https://doi.org/10.1007/s10916-019-1486-z.
23. Bressan, G., Cisotto, G., Müller-Putz, G. R., & Wriessnegger, S. C. (2021). Deep learning-based classification of fine hand movements from low frequency EEG. Future Internet, 13(5), 103.
24. Luo, J., Gao, X., Zhu, X., Wang, B., Lu, N., & Wang, J. (2020). Motor imagery EEG classification based on ensemble support vector learning. Computer Methods and Programs in Biomedicine, 193, 105464.
25. Rosanne, O., Albuquerque, I., Cassani, R., Gagnon, J. F., Tremblay, S., & Falk, T. H. (2021). Adaptive filtering for improved EEG-based mental workload assessment of ambulant users. Frontiers in Neuroscience, 15, 341.
26. Sheela, P., & Puthankattil, S. D. (2020). A hybrid method for artifact removal of visual evoked EEG. Journal of Neuroscience Methods, 336, 108638.
27. Xie, S. (2021). Wavelet power spectral domain functional principal component analysis for feature extraction of epileptic EEGs. Computation, 9(7), 78.
28. Alam, M. N., Ibrahimy, M. I., & Motakabber, S. M. A. (2021). Feature extraction of EEG signal by power spectral density for motor imagery based BCI. 2021 8th International Conference on Computer and Communication Engineering (ICCCE), pp. 234–237. https://doi.org/10.1109/ICCCE50029.2021.9467141.
29. Mahamune, R., & Laskar, S. H. (2021). Classification of the four-class motor imagery signals using continuous wavelet transform filter bank-based two-dimensional images. International Journal of Imaging Systems and Technology, 1–12. https://doi.org/10.1002/ima.22593.
30. Feudjio, C., Djimna Noyum, V., Peuriekeu Mofenjou, Y., Rockefeller, R., & Fokoue, E. (2021). A novel use of discrete wavelet transform features in the prediction of epileptic seizures from EEG data.
31. Ech-Choudany, Y., Scida, D., Assarar, M., Landré, J., Bellach, B., & Morain-Nicolier, F. (2021). Dissimilarity-based time-frequency distributions as features for epileptic EEG signal classification. Biomedical Signal Processing and Control, 64, 102268.
32. Übeyli, E. D., & Güler, İ. (2007). Features extracted by eigenvector methods for detecting variability of EEG signals. Pattern Recognition Letters, 28(5), 592–603.
33. Belakhdar, I., Kaaniche, W., Djmel, R., & Ouni, B. (2016). A comparison between ANN and SVM classifier for drowsiness detection based on single EEG channel. 2016 2nd International Conference on Advanced Technologies for Signal and Image Processing (ATSIP), pp. 443–446. https://doi.org/10.1109/ATSIP.2016.7523132.
34. Alyasseri, Z. A. A., Khader, A. T., Al-Betar, M. A., Abasi, A. K., & Makhadmeh, S. N. (2021). EEG signal denoising using hybridizing method between wavelet transform with genetic algorithm. In Md Zain, Z., et al. (Eds.), Proceedings of the 11th National Technical Seminar on Unmanned System Technology 2019. Lecture Notes in Electrical Engineering, vol. 666. Springer, Singapore. https://doi.org/10.1007/978-981-15-5281-6_31.
35. Gupta, A., Bhateja, V., Mishra, A., & Mishra, A. (2019). Autoregressive modeling-based feature extraction of EEG/EOG signals. In S. Satapathy & A. Joshi (Eds.), Information and Communication Technology for Intelligent Systems. Smart Innovation, Systems and Technologies, vol. 107. Springer, Singapore. https://doi.org/10.1007/978-981-13-1747-7_72.
36. Sameer, M., & Gupta, B. (2021). ROC analysis of EEG subbands for epileptic seizure detection using naïve Bayes classifier. Journal of Mobile Multimedia, 299–310.
37. Hasan, M. R., Ibrahimy, M. I., Motakabber, S. M. A., & Shahid, S. (2015). Classification of multichannel EEG signal by linear discriminant analysis. In H. Selvaraj, D. Zydek, & G. Chmaj (Eds.), Progress in Systems Engineering. Advances in Intelligent Systems and Computing, vol. 366. Springer, Cham. https://doi.org/10.1007/978-3-319-08422-0_42.
38. Ekaputri, C., Fu'adah, Y. N., Pratiwi, N. K. C., Rizal, A., & Sularso, A. N. (2021). Drowsiness detection based on EEG signal using discrete wavelet transform (DWT) and K-Nearest Neighbors (K-NN) methods. In Triwiyanto, H. A. Nugroho, A. Rizal, & W. Caesarendra (Eds.), Proceedings of the 1st International Conference on Electronics, Biomedical Engineering, and Health Informatics. Lecture Notes in Electrical Engineering, vol. 746. Springer, Singapore. https://doi.org/10.1007/978-981-33-6926-9_42.
39. Narudin, S. K., Nasir, N. H. M., & Fuad, N. (2021). Brainwave classification of task performed by stroke patients using ANN. Annals of Emerging Technologies in Computing (AETiC), 5(5).
40. Tang, X., Zhao, J., & Fu, W. (2019). Research on extraction and classification of EEG features for multi-class motor imagery. 2019 IEEE 4th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), pp. 693–697. https://doi.org/10.1109/IAEAC47372.2019.8998049.
41. Razzak, I., Blumenstein, M., & Xu, G. (2019). Multiclass support matrix machines by maximizing the inter-class margin for single trial EEG classification. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 27(6), 1117–1127. https://doi.org/10.1109/TNSRE.2019.2913142.
42. Bhatnagar, M., Gupta, G. S., & Sinha, R. K. (2018). Linear discriminant analysis classifies the EEG spectral features obtained from three class motor imagination. 2018 2nd International Conference on Power, Energy and Environment: Towards Smart Technology (ICEPE), pp. 1–6. https://doi.org/10.1109/EPETSG.2018.8659292.
43. Nguyen, T., Hettiarachchi, I., Khatami, A., Gordon-Brown, L., Lim, C. P., & Nahavandi, S. (2018). Classification of multi-class BCI data by common spatial pattern and fuzzy system. IEEE Access, 6, 27873–27884. https://doi.org/10.1109/ACCESS.2018.2841051.
44. Zheng, Q., Zhu, F., Qin, J., & Heng, P. A. (2018). Multiclass support matrix machine for single trial EEG classification. Neurocomputing, 275, 869–880.
45. Mahmood, A., Zainab, R., Ahmad, R. B., Saeed, M., & Kamboh, A. M. (2017). Classification of multi-class motor imagery EEG using four band common spatial pattern. 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 1034–1037. https://doi.org/10.1109/EMBC.2017.8037003.
46. He, L., Hu, D., Wan, M., Wen, Y., von Deneen, K. M., & Zhou, M. (2016). Common Bayesian network for classification of EEG-based multiclass motor imagery BCI. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 46(6), 843–854. https://doi.org/10.1109/TSMC.2015.2450680.
Recognition of Ocular Disease Based Optimized VGG-Net Models Hanaa Salem , Kareem R. Negm, Mahmoud Y. Shams , and Omar M. Elzeki
Abstract The detection of ocular diseases is a point of great interest for optometrists, owing to the cost of the devices that discover and classify the different types of ocular disease. Artificial Intelligence (AI) based on image processing and machine learning is currently utilized to classify and detect ocular disease. In this chapter, we present an improved classification model based on an improved VGG-Net to classify ocular disease in stored eye image datasets. The dataset was collected and prepared to generate the image list, and the data were then divided into 80% for training and the remaining 20% for testing. We focus on classifying cataract and diabetes disease from the ocular eye images. The proposed pre-trained model is tested with deep neural networks based on VGG-Net: we utilized VGGNet-16 and VGGNet-19 and applied the Adam optimizer to improve their results and tackle the overfitting problem. The evaluation results indicate that the optimized VGGNet-19 is superior to the optimized VGGNet-16 for cataracts, while the optimized VGGNet-16 is superior for diabetes. Keywords Artificial intelligence · Image processing · Ocular disease · Deep learning · VGG · Transfer learning · Adam optimizer · Classification · Machine learning
H. Salem (B) Faculty of Engineering, Delta University for Science and Technology, Gamasa, Egypt e-mail: [email protected] K. R. Negm · M. Y. Shams Faculty of Artificial Intelligence, Kafrelsheikh University, Kafr El Sheik 33511, Egypt e-mail: [email protected] O. M. Elzeki Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt e-mail: [email protected] Faculty of Computer Science, New Mansoura University, Gamasa, Egypt © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 A. E. Hassanien et al. (eds.), Medical Informatics and Bioimaging Using Artificial Intelligence, Studies in Computational Intelligence 1005, https://doi.org/10.1007/978-3-030-91103-4_6
1 Introduction

The identification of a certain disease mainly requires analysis of data, where these data might be numerical values, images, videos, or signals [1]. Attempts to discover diseases in their early stages are potentially required because they protect people from possible negative effects in the future [2]. Therefore, in this chapter, we use an image processing methodology to detect and classify ocular disease from the input eye images. Generally, image processing is one of the most efficient tools in AI for detecting diseases, especially in medical images; the processing includes the detection of the most relevant features of the images as well as the patterns and shape of a certain disease [3]. Ocular disease can be detected at high resolution with certain high-cost devices, but AI can detect ocular disease easily with low-cost analysis based on image processing and ML architectures. The major classes of ocular disease are Normal, Diabetes, Glaucoma, Cataract, Age-related macular degeneration effect, Hypertension, Pathological myopia, and other abnormal cases, as shown in Fig. 1 and [4]. Classification is the last step in image processing after the features are extracted, by which the data are classified into two or more class labels based on supervised ML approaches. Deep Neural Network (DNN) approaches are multilayer stacked hidden layers able to extract the features and further classify the images into two or more class labels [5]. The input medical images are first enrolled using acquisition cameras and sensors, and the collected datasets are then divided into training and testing phases. Afterward, the features are extracted to obtain the most interesting points in the medical image.

Fig. 1 The most common class labels of ocular disease
Fig. 2 Example of the enrolled medical images and the main steps including preprocessing, feature extraction, and classification
Therefore, the extracted features are ready to be classified using classification approaches based on DNNs and transfer learning models. VGG-Net is selected as a promising deep learning architecture to perform both feature extraction and classification of the input ocular images. To improve the achieved accuracy, a modified algorithm based on the Adam optimizer and on the inner structures of the classical VGG-16 and VGG-19 is applied. As shown in Fig. 2, different medical images, such as tumor, ocular diabetes, and skin disease images, are acquired to be classified based on ML approaches. The first step is preprocessing, by which the input images are localized, segmented, and normalized. Localization and segmentation are the processes that detect the Region of Interest (ROI) of the medical images; the ROI is essential for extracting the most relevant features of an image. The features are extracted and then classified to obtain the class labels of the input medical images. The contribution of this work is to classify ocular images, especially cataract and diabetes images, using an improved VGG-Net and the Adam optimizer to enhance the results and tackle the overfitting problem. The rest of this chapter presents the related work in Sect. 2, the methodology in Sect. 3, the experimental results in Sect. 4 and, finally, the conclusion and future work in Sect. 5.
2 Related Work

Efforts to diagnose ocular diseases have recently been made on different datasets with different class labels. AI tools can be efficiently utilized to decrease ocular health disparities, from concept to implementation [6]. Lu et al. [7] presented an overview of recent studies on using image recognition in the diagnosis of eye diseases based
on AI. They recommended the utilization of ML techniques such as Support Vector Machines (SVM), Random Forests (RF), and Deep Learning (DL) to classify ocular disease images easily and more accurately. Cataracts are one of the most common eye diseases; Wu et al. [8] presented an AI platform for the collaborative management of cataracts under different scenarios. They validated three strategies, namely the capture of an image and the classification of cataracts as normal, postoperative, or referable. Further, they determined the Area Under the Curve (AUC) for different scenarios, and the average AUC reached 99.00%. In addition to cataracts, Diabetic Retinopathy (DR) is an important and harmful ocular disease, especially in the retina. Padhy et al. [9] listed different studies screening DR based on AI and ML methodologies; they investigated different measurements, such as sensitivity, specificity, and accuracy, for a variable number of tested images and applied datasets. A comparative study between different datasets, such as Kaggle, ORIGA, and the Retina dataset, is presented by Islam et al. [10]. They highlighted cataract, normal, and glaucomatous diseases, and the evaluation results were based on different transfer learning models such as GoogLeNet, AlexNet, CNN, and SVM; they further determined the accuracy, AUC, and F1-score of the tested images. An outline of cataracts and systemic disease is described by Ang and Afshari [11]: they studied different epidemiologic factors for age-related cataracts, including age and sex, and presented diabetes, hypertension, metabolic syndrome, and renal impairment as causes of direct blindness. The Ocular Disease Intelligent Recognition (ODIR) 2019 dataset was utilized by Wang et al. [12] to diagnose and classify multi-label fundus images using the EfficientNet algorithm. They classified eight types of ocular disease, i.e., normal, diabetes, glaucoma, cataract, AMD, hypertension, myopia, and other diseases, and used the EfficientNet algorithm to integrate both gray and color histogram-equalized images. They used different measurements to evaluate the results and achieved F1-scores of 58.00, 86.00, and 88.00% using VGG-16, VGG-19, and EfficientNet, respectively. A dense correlation DL approach was presented by He et al. [13] to tackle the problem of traditional CNNs in classifying ocular diseases on the ODIR-2019 dataset: a CNN extracts two different sets of features from the color fundus photographs, and the pixel-wise correlations between the two feature sets are encapsulated to find the patient-level representation; this is called the Spatial Correlation Module (SCM). They utilized ResNet-18, ResNet-34, ResNet-50, and ResNet-101 with and without the SCM, and the AUC reached 93.00% when using the SCM with ResNet-101. Nine deep transfer learning models were also presented to classify pairs of fundus images, where features are extracted from both the left- and right-eye images and the resulting vectors are passed to the classifiers, with three fusion strategies (summation, product, and concatenation) applied on both off-site and on-site test sets. This chapter utilizes an improved VGG-Net to classify cataracts and diabetes in the eye and compares the results with the traditional VGG-16 and VGG-19.
3 Methodologies

3.1 The Proposed Model

The proposed model is divided into three sections. The first is dataset preparation, and the second is the training of VGG-Net networks to classify the eye disease (cataract and diabetes) dataset. To improve the robustness of the classifiers, the Adam optimizer is selected to obtain the best hyperparameter values for these particular VGG-Net networks. Finally, all VGG-Net networks are evaluated using different evaluation measures and tested, as shown in Fig. 3. Two up-to-date deep convolutional neural network (DCNN) architectures are used in the proposed model for feature extraction and classification, namely VGGNet-16 [14] and VGGNet-19 [15], which have been improved by adding extra layers to them. The corresponding objectives of the proposed work are as follows:

• With the prepared ocular disease images, compare the accuracy of the pre-trained CNN models VGG-16 and VGG-19.
• Improve the robustness of the VGG-Net classifiers by adjusting the classifier hyperparameters with the Adam optimizer.
Fig. 3 The proposed eye disease classification model
• Evaluate the results of the pre-trained and improved CNN models using performance measures.

For the classification of eye disease, the model uses a sequence of operations as its fundamental blocks:

• Image dataset preparation.
• Deep convolutional neural network (DCNN) classification models and the Adam optimizer.
• Evaluation measures.
3.2 Ocular Disease Recognition Dataset

Currently, ocular disease screening is primarily based on optical coherence tomography (OCT) images and fundus images. This experiment used the ocular disease recognition dataset (https://www.kaggle.com/andrewmvd/ocular-disease-recognition-odir5k). Ocular Disease Intelligent Recognition (ODIR) is a structured ophthalmology dataset containing 5,000 patients' ages, color fundus images of the left and right eyes, and doctors' diagnostic keywords. Cameras of different types, such as Canon, Zeiss, and Kowa, are used to record the fundi, so the resolution of the images varies; to address this, the photography was annotated by skilled human readers under the supervision of the Director of Quality Control. Samples from the dataset are shown in Fig. 4. There are eight patient classifications, which include:

• Normal (N)
• Diabetes (D)
• Glaucoma (G)
• Cataract (C)
• Age-related Macular Degeneration (A)
• Hypertension (H)
• Pathological Myopia (M)
• Other diseases/abnormalities (O)
3.2.1 Image ODIR Dataset Preparation
To balance the dataset for proper training, we extracted cataract and normal images from the dataset during the dataset preparation phase; prepared samples of the ODIR dataset are shown in Fig. 4. We extracted 304 cataract images from the left eye and 290 from the right, and then 300 normal images from the left eye and 300 from the right; for diabetes, we extracted 917 images from the left eye and 960 from the right, and then 920 normal images from the left eye and 970 from the right. A new dataset was then created from all the images extracted from the original dataset, and the images were resized to 224 × 224.
Fig. 4 Input samples of ODIR images. a Cataracts and normal samples, b Diabetic and normal samples
The proposed model is implemented using a training/testing technique. It is important to note that the training/testing split (80 and 20%, respectively) accurately reflects the effectiveness and power of the proposed model. A data preparation sketch is given below.
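The loading, resizing, and 80/20 splitting described above can be sketched with Keras utilities as follows. The directory layout (`data/cataract`, `data/normal`) and the seed are assumptions for illustration; the 224 × 224 target size and 80/20 split follow the text.

```python
import tensorflow as tf

IMG_SIZE = (224, 224)  # input size expected by VGG-16/VGG-19
BATCH = 32

# Hypothetical layout: data/cataract/*.jpg and data/normal/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=BATCH, label_mode="binary")
test_ds = tf.keras.utils.image_dataset_from_directory(
    "data", validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=BATCH, label_mode="binary")
```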
3.3 DCNN Classification Models

Deep Learning (DL) is a class of Artificial Intelligence (AI) approaches based on artificial neural networks and inspired by the structure of the human brain. DL approaches, unlike standard machine learning methods, require significantly less human supervision because they do not depend on the production of hand-crafted
features, which may be time-consuming and laborious, but instead learn suitable features directly from the data. Furthermore, when the amount of data grows, DL approaches scale considerably better than standard ML methods.
3.3.1 The Visual Geometry Group Network (VGG-Net)
VGGNet is a multi-layered deep neural network [16]. The VGGNet used on the ocular disease recognition dataset is a CNN-based model. The general structure of both VGG-16 and VGG-19 is shown in Fig. 5, which illustrates their multi-stacked convolutional layers: VGG-16 uses 13 convolutional layers, VGG-19 uses 16, and each has three fully connected (dense) layers. Pooling is applied identically after the first two convolutional blocks in both networks; in the remaining three blocks, VGG-16 pools after every 3 convolutional layers while VGG-19 pools after every 4 [17, 18]. To perform ocular disease classification (cataract, diabetes), we fine-tune two pre-trained CNN networks (VGGNet-16 and VGGNet-19). Table 1 describes the two pre-trained CNN networks used on the ODIR image dataset and their properties.
3.3.2 Optimization
Throughout the training stage, the weights of the neural network nodes are modified to minimize the loss function. The learning rate and regularization govern the direction and magnitude of the weight updates performed by the optimizer [19]; as a result, the classifier's generalization power increases when it is introduced to new data [20]. A learning rate that is too large or too small results either in non-convergence of the loss function or in reaching a local, but not global, minimum, respectively. Simultaneously, regularization prevents model overfitting by penalizing the dominant weight values. The optimizer used in the experiments is the Adam optimizer [21].
Adam Optimizer

The Adam optimization technique [22] was originally developed to enhance the properties of the Nesterov momentum, AdaGrad, and RMSProp algorithms. Equation (1) is used to update the weights:

$$w_s^i = w_{s-1}^i - \frac{\bar{\lambda}}{\sqrt{\hat{\vartheta}_s} + \varepsilon} \cdot \hat{m}_s \qquad (1)$$

$$\hat{m}_s = \frac{m_s}{1 - \beta_1^s} \qquad (2)$$
Fig. 5 VGGNet models. a VGGNet-16, b VGGNet-19
where

$$\hat{\vartheta}_s = \frac{\vartheta_s}{1 - \beta_2^s} \qquad (3)$$

$$m_s = \beta_1 m_{s-1} + (1 - \beta_1)\, g \qquad (4)$$

$$\vartheta_s = \beta_2 \vartheta_{s-1} + (1 - \beta_2)\, g^2 \qquad (5)$$
Table 1 VGGNet-16 and VGGNet-19 configuration

| Model | Layer (type) | Output shape | Param # |
|---|---|---|---|
| VGGNet-16 | VGG16 (Functional) | (None, 7, 7, 512) | 14,714,688 |
| | Flatten (Flatten) | (None, 25088) | 0 |
| | Dense_4 (Dense) | (None, 1) | 25,089 |
| | Total params: 14,739,777; Trainable params: 25,089; Non-trainable params: 14,714,688 | | |
| VGGNet-19 | VGG19 (Functional) | (None, 7, 7, 512) | 20,024,384 |
| | Flatten (Flatten) | (None, 25088) | 0 |
| | Dense_4 (Dense) | (None, 1) | 25,089 |
| | Total params: 20,049,473; Trainable params: 25,089; Non-trainable params: 20,024,384 | | |
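Table 1 corresponds to a frozen VGG base followed by a Flatten layer and a single sigmoid unit. A sketch of how such a model could be assembled and compiled in Keras is given below, using the Adam learning rate and loss from Table 3; the ImageNet weights and the exact fine-tuning schedule are assumptions, since the chapter does not spell them out.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16  # swap in VGG19 for VGGNet-19

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the 14,714,688 convolutional parameters (Table 1)

model = models.Sequential([
    base,
    layers.Flatten(),                       # (None, 25088)
    layers.Dense(1, activation="sigmoid"),  # 25,089 trainable parameters
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),  # 0.00005, as in Sect. 4
    loss="binary_crossentropy",
    metrics=["accuracy"])
model.summary()
```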
Table 2 Formulation of the most common evaluation measures based on the confusion matrix

| Evaluation measure | Formulation |
|---|---|
| Accuracy (%) | (TP + TN) / (TP + FN + TN + FP) |
| Precision (%) | TP / (TP + FP) |
| Recall (%) | TP / (TP + FN) |
| F1-score (%) | 2 · (Precision · Recall) / (Precision + Recall) |
$$g = \nabla_w \,\text{Cost}(w_s) \qquad (6)$$
where

| Symbol | Meaning |
|---|---|
| $\bar{\lambda}$ | The learning-rate hyperparameter |
| Cost() | The cost function |
| $w_s$ | The weights at step $s$ |
| $g$ | The gradient of the weight parameter |
| $\beta_i$ | The quantity of data required from the previous update, $\beta_i \in [0, 1]$ |
| $m_s$ | The running average of the gradients |
| $\vartheta_s$ | The running average of the squared gradients |
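Equations (1)-(6) translate directly into a few lines of NumPy. The sketch below implements one Adam update step with the conventional default decay rates β₁ = 0.9 and β₂ = 0.999 (assumed, since the chapter only specifies the learning rate).

```python
import numpy as np

def adam_step(w, g, m, v, s, lr=5e-5, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; s is the 1-based step counter."""
    m = beta1 * m + (1 - beta1) * g              # Eq. (4): gradient running average
    v = beta2 * v + (1 - beta2) * g ** 2         # Eq. (5): squared-gradient running average
    m_hat = m / (1 - beta1 ** s)                 # Eq. (2): bias correction
    v_hat = v / (1 - beta2 ** s)                 # Eq. (3): bias correction
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # Eq. (1): weight update
    return w, m, v

# Toy usage: minimize f(w) = ||w||^2, whose gradient is 2w
w = np.array([1.0, -2.0]); m = np.zeros_like(w); v = np.zeros_like(w)
for s in range(1, 1001):
    w, m, v = adam_step(w, 2 * w, m, v, s, lr=0.05)
print(w)  # approaches [0, 0]
```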
3.3.3 VGGNet Evaluation Criteria
Evaluation Metrics

Various criteria were used to measure the efficacy of the best-rated DL model and to count the true and false categorizations of the cataract and diabetes diagnoses in the examined eye disease images. The confusion matrix comprises the four outcomes listed below: True Positives (TP) are abnormal cases correctly identified as abnormal; True Negatives (TN) are normal cases correctly identified as normal; False Positives (FP) are normal cases incorrectly identified as abnormal; and False Negatives (FN) are abnormal cases incorrectly identified as normal. From the values of these outcomes in the confusion matrix, the performance measures in Table 2 are computed; a small sketch follows.
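These measures can be reproduced with scikit-learn directly from predicted labels, as in the hedged sketch below; the tiny label vectors are placeholders.

```python
from sklearn.metrics import (confusion_matrix, accuracy_score,
                             precision_score, recall_score, f1_score)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # 1 = cataract, 0 = normal (placeholder labels)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("TP, TN, FP, FN:", tp, tn, fp, fn)
print("accuracy :", accuracy_score(y_true, y_pred))   # (TP+TN)/(TP+FN+TN+FP)
print("precision:", precision_score(y_true, y_pred))  # TP/(TP+FP)
print("recall   :", recall_score(y_true, y_pred))     # TP/(TP+FN)
print("F1-score :", f1_score(y_true, y_pred))
```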
Many such measures have been developed in the literature. The following is the approach used in the proposed model to overcome overfitting.
Early Stopping

Early stopping is a preventative approach used to keep the network from overfitting. It can be described as ending the network's training stage when performance on the validation set stops improving for a prespecified number of epochs; this predefined number normally varies between 10 and 50 epochs, and in our case the number of epochs is 20.
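In Keras, this behaviour maps onto the EarlyStopping callback. The sketch below is one possible configuration, monitoring validation loss with the 20-epoch patience stated above; restoring the best weights is an extra assumption.

```python
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",         # watch performance on the validation set
    patience=20,                # stop after 20 epochs without improvement
    restore_best_weights=True)  # roll back to the best epoch (assumption)

# model.fit(train_ds, validation_data=test_ds, epochs=100, callbacks=[early_stop])
```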
4 Experimental Results and Discussion

In this work we presented a classification model using a deep learning approach; specifically, we used a model based on the VGG-Net architecture and the ODIR image dataset to train and test the data. Figures 6 and 7 illustrate the accuracy obtained and the loss error for cataract ocular disease using VGGNet-16 and VGGNet-19, respectively, while Figs. 8 and 9 show the accuracy obtained and the resulting loss error for diabetes using the optimized VGGNet-16 and VGGNet-19. The proposed model achieves 96.00 and 88.24% accuracy for the cataract and diabetes disorders, respectively, using the optimized VGGNet-16, and 96.64 and 84.49% accuracy for the cataract and diabetes disorders, respectively, using the optimized VGGNet-19. As a result, the best classifier for cataracts is the optimized VGGNet-19, and for diabetes the optimized VGGNet-16. The architecture is tuned for cataracts (594 cataract, 600 normal) and diabetes (1877 diabetes, 1890 normal) sample images over 20 epochs, with a learning rate of 0.00005 using the Adam optimizer for faster network optimization. Table 3 summarizes our training hyperparameter settings.
4.1 Performance Evaluation

The proposed model was evaluated on validation sets of (119, 120) images for (cataract, normal) and (375, 378) images for (diabetes, normal), respectively, and its performance was analyzed using the accuracy, precision, recall, and F1 scores. Using the optimized VGGNet-16, the weighted averages of the precision, recall, and F1 scores are 0.98165, 0.93043, and 0.96, respectively, for cataracts and 0.887, 0.80512, and 0.8449, respectively, for diabetes. The optimized VGGNet-19 has precision, recall, and F1 scores of 0.98198, 0.94782, and 0.96638, respectively, for cataracts and 0.98165, 0.93043, and 0.96, respectively, for diabetes. Table 4 shows the ocular disease classification system's assessment of the weighted means of accuracy, precision, recall, and F1 score.
Fig. 6 a Accuracy in training and validation, b Loss of training and validation for cataract disease using VGGNet-16
sification system’s assessment of the weighted mean of accuracy, precision, recall, and f1 score.
4.2 Confusion Matrix

The confusion matrix reflects the classification model's performance on the testing ODIR image dataset by comparing the actual labels to the predicted labels. As shown in Figs. 10 and 11, our model's confusion matrix is evaluated as a one-label classification technique over two classes. The proposed model shows the best results on the ODIR image dataset. Using the optimized VGGNet-16, the correctly predicted cataract and normal images number 122 and 115, with 2 and 8 misclassified, respectively; the predicted cataract and normal samples are presented in Fig. 10. With the optimized VGGNet-19, the correctly predicted cataract and normal images number 123 and 115, with 2 and 6 misclassified, respectively. For diabetes, using the optimized VGGNet-16 the correctly predicted diabetes and normal images number 337 and 323, with 35 and 53 misclassified, respectively.
Fig. 7 a Accuracy in training and validation, b Loss of training and validation for cataract disease using VGGNet-19
The predicted diabetes and normal samples are presented in Fig. 11. With the optimized VGGNet-19, the correctly predicted diabetes and normal images number 318 and 314, with 40 and 76 misclassified, respectively. The comparison between the proposed optimized VGG-Net and the traditional VGGNet-16 and VGGNet-19 and other architectures is shown in Table 5.
5 Conclusion

CNNs are now the standard for computer-aided analysis of images with current computing technology, mostly owing to their capacity to attain performance that is comparable to, if not better than, that of humans. Nonetheless, training a CNN, like other deep learning models, is a time-consuming process that necessitates a large number of pictures. This is a critical constraint in any sector where data is limited and difficult to obtain, such as medicine, and transfer learning may be a feasible solution in this circumstance.
Fig. 8 a Accuracy in training and validation, b Loss of training and validation for Diabetes disease using VGGNet-16
Motivated by the success of transfer learning in the analysis of medical images and by the need for additional investigation into this interesting research area, this chapter analyzed the performance of the optimizer used with CNNs for the classification of ocular disease images. The evaluation approach included the usage of two VGG-Nets (VGGNet-16 and VGGNet-19) based on the Adam optimizer. The weights of these networks, trained on large image collections, were fine-tuned to suit the considered ocular disease dataset, ODIR. The evaluation results indicated that the optimized VGGNet-19 is superior to the optimized VGGNet-16 for cataracts, while the optimized VGGNet-16 is superior for diabetes. The results highlight the importance of choosing the network's hyperparameters, and we plan to expand this research in the future to cover more hyperparameter optimizers and other datasets, to provide an assessment framework for specialist doctors who want to employ CNN models in their regular tasks.
Fig. 9 a Accuracy in training and validation, b Loss of training and validation for Diabetes disease using VGGNet-19
Table 3 The description of our training hyperparameter settings

Hyperparameters    Value
Loss function      Binary cross-entropy
Optimizer          Adam
Batch size         32
Epoch              20

Table 4 The ocular disease classification system's evaluation

VGGNet-16
Classes     Precision (%)   Recall (%)   F1 score (%)
Cataract    98.165138       93.043478    96
Diabetes    88.700565       80.512821    84.49

VGGNet-19
Classes     Precision (%)   Recall (%)   F1 score (%)
Cataract    98.198198       94.782609    96.638655
Diabetes    98.165138       93.043478    96
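For concreteness, the following is a minimal Keras sketch of the training configuration in Table 3 applied to a VGGNet-16 backbone; the frozen layers, input size, and classification head are illustrative assumptions rather than the exact setup used in this chapter.

```python
# Sketch of VGGNet-16 fine-tuning matching Table 3 (Adam, binary
# cross-entropy, batch size 32, 20 epochs). Input size and layer
# freezing are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze the convolutional base; train only the head

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # one-label, two-class output
])
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss="binary_crossentropy", metrics=["accuracy"])
# With image datasets prepared as train_ds / val_ds:
# model.fit(train_ds, validation_data=val_ds, batch_size=32, epochs=20)
```

The same configuration applies to VGGNet-19 by swapping `VGG16` for `tf.keras.applications.VGG19`.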
Fig. 10 Confusion matrix of the classification models [a optimized VGGNet-16 and b optimized VGGNet-19] for cataract disease
Fig. 11 Confusion matrix of the classification models [a optimized VGGNet-16 and b optimized VGGNet-19] for diabetes disease
Table 5 The comparative study between the optimized VGG-Net and current DL architectures

Author                        Dataset used   Architecture           Accuracy (%)
Wu et al. [8]                 CMAAI          Cataract AI agent      88.79
Islam et al. [10]             ODIR-5K        Deep CNN               85.00
Wang et al. [12]              ODIR           VGG-16                 86.00
                                             VGG-19                 86.00
                                             EfficientNet           90.00
He et al. [13]                ODIR           ResNet-18 + SCM        91.14
                                             ResNet-34 + SCM        92.40
                                             ResNet-50 + SCM        92.80
                                             ResNet-101 + SCM       93.00
Proposed optimized VGG-Net    ODIR           Optimized VGGNet-16    90.24
                                             Optimized VGGNet-19    96.32
References

1. Lee, J.-G., et al. (2017). Deep learning in medical imaging: General overview. Korean Journal of Radiology, 18(4), 570–584.
2. Gomel, N., et al. (2021). Teleophthalmology screening for early detection of ocular diseases in underserved populations in Israel. Telemedicine and e-Health.
3. Bernardes, R., Serranho, P., & Lobo, C. (2011). Digital ocular fundus imaging: A review. Ophthalmologica, 226(4), 161–181.
4. Perdomo Charry, O. J., & González, F. A. (2020). A systematic review of deep learning methods applied to ocular images. Ciencia e Ingeniería Neogranadina, 30(1), 9–26.
5. Courbariaux, M., Bengio, Y., & David, J.-P. (2015). BinaryConnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems (pp. 3123–3131).
6. Campbell, J. P., et al. (2021). Artificial intelligence to reduce ocular health disparities: Moving from concept to implementation. Translational Vision Science & Technology, 10(3), 19.
7. Lu, W., Tong, Y., Yu, Y., Xing, Y., Chen, C., & Shen, Y. (2018). Applications of artificial intelligence in ophthalmology: General overview. Journal of Ophthalmology, 2018.
8. Wu, X., et al. (2019). Universal artificial intelligence platform for collaborative management of cataracts. British Journal of Ophthalmology, 103(11), 1553–1560.
9. Padhy, S. K., Takkar, B., Chawla, R., & Kumar, A. (2019). Artificial intelligence in diabetic retinopathy: A natural step to the future. Indian Journal of Ophthalmology, 67(7), 1004.
10. Islam, M. T., Imran, S. A., Arefeen, A., Hasan, M., & Shahnaz, C. (2019). Source and camera independent ophthalmic disease recognition from fundus image using neural network. In 2019 IEEE International Conference on Signal Processing, Information, Communication & Systems (SPICSCON) (pp. 59–63).
11. Ang, M. J., & Afshari, N. A. (2021). Cataract and systemic disease: A review. Clinical & Experimental Ophthalmology, 49(2), 118–127.
12. Wang, J., Yang, L., Huo, Z., He, W., & Luo, J. (2020). Multi-label classification of fundus images with EfficientNet. IEEE Access, 8, 212499–212508.
13. He, J., Li, C., Ye, J., Qiao, Y., & Gu, L. (2021). Multi-label ocular disease classification with a dense correlation deep neural network. Biomedical Signal Processing and Control, 63, 102167.
14. Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint, arXiv:1409.1556.
15. Zhou, J., Yang, X., Zhang, L., Shao, S., & Bian, G. (2020). Multisignal VGG19 network with transposed convolution for rotating machinery fault diagnosis based on deep transfer learning. Shock and Vibration, 2020.
16. Jeyapriya, J., & Umadevi, K. S. (2020). An efficient method for identification of severity level in diabetic retinopathy using deep neural networks. Journal of Critical Reviews, 7(6), 2029–2036.
17. ul Hassan, M. (2018). VGG16: Convolutional network for classification and detection. Retrieved April 10, 2019, from https://neurohive.io/en/popular-networks/vgg16.
18. Özyurt, F. (2020). Efficient deep feature selection for remote sensing image recognition with fused deep learning architectures. The Journal of Supercomputing, 76(11), 8413–8431.
19. Zhang, Z. (2018). Improved Adam optimizer for deep neural networks. In 2018 IEEE/ACM 26th International Symposium on Quality of Service (IWQoS) (pp. 1–2).
20. Sarki, R., Michalska, S., Ahmed, K., Wang, H., & Zhang, Y. (2019). Convolutional neural networks for mild diabetic retinopathy detection: An experimental study. bioRxiv, 763136.
21. Bock, S., Goppold, J., & Weiß, M. (2018). An improvement of the convergence proof of the ADAM-optimizer. arXiv preprint, arXiv:1804.10587.
22. Kandel, I., Castelli, M., & Popovič, A. (2020). Comparative study of first order optimizers for image classification using convolutional neural networks on histopathology images. Journal of Imaging, 6(9), 92.
Artificial Intelligence in COVID-19 Diagnosis
Applications of AI and IoT in COVID-19 Vaccine and Its Impact on Social Life Abdulqader M. Almars, Ibrahim Gad, and El-Sayed Atlam
Abstract COVID-19 has been recognized as a Public Health Emergency of International Concern (PHEIC). According to the World Health Organization (WHO), more than 200 countries worldwide have been impacted by COVID-19, and more than 4.7 M people across the globe have lost their lives due to the coronavirus pandemic. Governments and hospitals have taken several actions and measures to prevent the further spread of the infectious disease, protect all people, and minimize illness and death rates. Artificial Intelligence (AI) and the Internet of Things (IoT) have great potential as prevailing tools for battling COVID-19. Several technologies, such as deep learning, machine learning, natural language processing, and the Internet of Things, have been used to properly address healthcare concerns, including diagnosis, drug and vaccine development, vaccine distribution, sentiment analysis, and fake news identification regarding COVID-19. The primary goal of this chapter is to focus on the potential of AI and IoT: it aims to provide a comprehensive overview of different AI and IoT approaches related to combating COVID-19. In addition, the chapter discusses several actions and measures that governments have taken to prevent the spread of COVID-19, and it covers methodologies applied to forecast the acceptance of and demand for the COVID-19 vaccine.
A. M. Almars · E.-S. Atlam (B) College of Computer Science and Engineering, Taibah University, Yanbu, Saudi Arabia e-mail: [email protected] A. M. Almars e-mail: [email protected] I. Gad · E.-S. Atlam Department of Computer Science, Faculty of Science, Tanta University, Tanta, Egypt © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 A. E. Hassanien et al. (eds.), Medical Informatics and Bioimaging Using Artificial Intelligence, Studies in Computational Intelligence 1005, https://doi.org/10.1007/978-3-030-91103-4_7
1 Introduction

COVID-19 (coronavirus disease) is a global pandemic caused by SARS-CoV-2 that has killed an unprecedented number of people worldwide, posing a health challenge unparalleled in modern times. In response to COVID-19, several countries have taken early actions and measures to control the impact of the disease. To combat this crisis effectively, researchers from around the world have proposed different solutions using artificial intelligence (AI) and the Internet of Things (IoT). Many organizations and governments use artificial intelligence to help them deal with many aspects of the healthcare industry, and hence it has applications during these troubled times as well. Diagnosis, misinformation detection, vaccine development and distribution, vaccination demand prediction, drug development, and sentiment analysis of COVID-19 comments are examples of such applications. In this chapter, a comprehensive study is presented to show the impact of COVID-19. Furthermore, multiple applications of AI and IoT that have proven effective in combatting and handling COVID-19 are discussed. The following section outlines and discusses the different applications of AI and IoT deployed to deal with the COVID-19 pandemic. Section 3 describes the effect of the COVID-19 pandemic on society. Section 4 outlines the plans and measures that governments and hospitals have taken to tackle this crisis. Section 5 describes the demand for the COVID-19 vaccine. The conclusion and future directions are presented in Sect. 6.
2 Applications of AI and IoT in COVID-19

COVID-19 has accelerated the adoption of artificial intelligence (AI) in healthcare, which was already being driven by challenges connected to aging populations and shortages of healthcare professionals before the pandemic [1]. Artificial intelligence and the Internet of Things (IoT) have both played a vital role since the beginning of the COVID-19 crisis, demonstrating how helpful they can be in coping with this sort of pandemic [2]. With the rapid development of artificial intelligence, the healthcare industry has improved considerably [3]. Various state-of-the-art technologies, such as artificial neural networks and convolutional neural networks, are being used in healthcare fields such as ophthalmology, radiology, dermatology, and genome interpretation [4]. In addition, in the healthcare sector, the Internet of Things has revolutionized the use of sensors to monitor a patient's physiological state [5]. Other successful technologies, like blockchain and unmanned aerial vehicles (UAVs), have been applied to overcome COVID-19 issues such as image diagnosis, drug and vaccine development, and vaccine monitoring and drug distribution [6]. In the following subsections, we describe the usage of AI and the IoT to deal with COVID-19.
2.1 Diagnosis of COVID-19

COVID-19 can be diagnosed most accurately using the rRT-PCR test. However, there are not enough test kits to meet demand, as the number of cases and prospective patients is constantly increasing. Because of this, many computational solutions are being utilized to provide diagnoses for COVID-19. Even though these models are less accurate and reliable than traditional biological diagnostic methods, they have proven very helpful in these times of desperate need and are largely used as screening tools [7]. Deep learning methods such as convolutional neural networks (CNNs) have proven effective for diagnosing COVID-19 by recognizing and classifying images; computational healthcare uses CNNs to identify various diseases by analyzing medical imaging. A comprehensive study by Ardakani et al. [8] evaluated various open-source CNNs, such as GoogleNet and AlexNet, to identify COVID-19 symptoms from computed tomography (CT) images of patients' lungs; their systems were developed to distinguish COVID-19 cases from non-COVID and other viral cases. Elaziz et al. [9] proposed a method for identifying COVID-19 from X-ray images of the lungs of potential patients using machine learning algorithms. Their study notes that even though their approach uses significantly less data to train on, it produces results comparable to deep learning CNNs. They applied the proposed model to two datasets: the model achieved an accuracy of 96.09% on the first dataset and 98.09% on the second. Imran et al. [10] introduced a machine learning model called AI4COVID-19 to diagnose those suffering from COVID-19 based on their cough sounds. This method can distinguish a COVID-19 cough from a variety of other coughs, including normal coughs from uninfected people. The model has been evaluated using ESC-50, a public dataset that consists of a large collection of both environmental and human sounds.
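To make the screening idea concrete, the sketch below pairs a pretrained CNN used as a fixed feature extractor with a lightweight classical classifier, loosely in the spirit of the feature-based approach of [9]; the backbone choice and the random placeholder images are assumptions for illustration only.

```python
# Sketch: CNN features + a classical classifier for image-based screening.
# Inputs are random placeholders standing in for chest X-ray images.
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression

extractor = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg",
                                              weights="imagenet",
                                              input_shape=(224, 224, 3))
X_img = np.random.rand(16, 224, 224, 3).astype("float32")  # stand-in scans
y = np.random.randint(0, 2, size=16)                       # 1 = COVID-19 (assumed)

features = extractor.predict(X_img, verbose=0)  # one feature vector per image
clf = LogisticRegression(max_iter=1000).fit(features, y)
print("training accuracy:", clf.score(features, y))
```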
2.2 Drug and Vaccine Development

Artificial intelligence in drug development has long been an ongoing practice among chemists and biomedical engineers [11]. For example, deep learning has been used to predict the activities and properties of various biological chemicals; it can also predict reactions and be used to analyze retrosynthetic routes. Deep learning and related technologies can make medication development quicker and less expensive, and they are also being utilized to create antiviral medicines against the COVID-19 virus. The generative adversarial network (GAN) is the most popular model used recently to develop drugs suited to a specific purpose; GANs make it feasible to generate synthetic data that are statistically comparable to the input data [12]. IBM researchers also utilized deep generative models to create and evaluate COVID-19 medicines. They introduced an end-to-end pipeline named Controlled Generation of Compounds (CogMol) that may be used to develop new small,
powerful molecules that tackle a wide range of viral proteins. An extremely effective sampling scheme and a variational autoencoder (VAE) pre-training technique are combined in this model in order to develop and test the drug. Experimental results show that the proposed approach produces very promising drugs for further study of their effectiveness against COVID-19 as well as their safety [13]. A vaccine is made up of proteins or weakened pathogens injected into the patient's body to build immunity to that specific condition. Machine learning and artificial intelligence technology are useful in predicting which molecules or proteins can act as vaccines. The VAXIGN-ML algorithm was used by Ong et al. [14] to predict alternative vaccine candidates for the SARS-CoV-2 virus. VAXIGN-ML applies a gradient boosting machine learning algorithm with hyperparameter optimization to determine which protein is the most suitable for vaccines. A selection of five supervised machine learning algorithms was employed in the VAXIGN-ML tool, with XGBoost demonstrating the highest accuracy. Understanding specific protein structures is crucial for developing vaccines. Several methods are used to determine the contents of proteins, but it is particularly tricky to determine a protein's 3D orientation or fold. A program from DeepMind, called AlphaFold, has been introduced to predict protein structure using Google's algorithms; it is being used to identify the structure of several proteins associated with the virus that causes COVID-19 [15].
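As a rough, hedged sketch of the gradient-boosting idea behind VAXIGN-ML described above, the code below trains an XGBoost classifier on protein feature vectors and ranks unseen candidates by their predicted probability of being protective antigens; the synthetic features and labels are invented placeholders, not the actual VAXIGN-ML pipeline or data.

```python
# Sketch of gradient-boosting antigen ranking in the spirit of VAXIGN-ML.
# Features and labels are synthetic placeholders.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 30))          # stand-in physicochemical protein features
y = rng.integers(0, 2, size=200)   # 1 = known protective antigen (assumed)

model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X, y)

candidates = rng.random((5, 30))   # unseen viral protein feature vectors
scores = model.predict_proba(candidates)[:, 1]
for i in np.argsort(-scores):
    print(f"candidate {i}: antigen probability {scores[i]:.2f}")
```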
2.3 Distribution of COVID-19 Vaccine Using AI

With the beginning of the domestic vaccination campaign against COVID-19, several significant logistical and access concerns arose. Distributing the COVID-19 vaccine in a country with 1.3 billion people and insufficient resources, such as India, is no easy task. Despite the approval of two COVID-19 vaccines (Bharat Biotech's Covaxin and Oxford-AstraZeneca's Covishield), distributing the two-dose vaccination throughout India will be a huge challenge. Distributing the COVID vaccine presents four distinct but related challenges:

• Demand forecasting: how much vaccine to ship, where to ship it, and when.
• Supply chain management: monitoring the network for bottlenecks.
• Quality assurance: making sure the vaccine is legitimate and made to the correct standard.
• Adverse event surveillance: monitoring the side effects of the vaccine.

Using AI algorithms and new technologies to guide vaccination programs could be a promising approach for each of these steps. IBM and other companies are now using AI technologies to help healthcare systems analyze patient demographics, determine high-risk populations, and determine the best ways to ensure vaccines are distributed equitably in the United States. The company assists hospitals and states in managing the limited vaccine supply. For example, IBM released forecasting software, IBM Watson Health Analytics, to gather information on people's attitudes toward
vaccinations and forecast demand. Macro-Eyes is an AI company that uses machine learning technologies to estimate the demand for medicines and healthcare services. IBM has also released object-based supply chain management software to track every vaccine vial as closely as possible in real time. Other technologies, such as blockchain, can play an important role in the distribution of the COVID-19 vaccine. Even though thousands of people have been monitored for side effects, there is no absolute guarantee of vaccine safety, and some governments are turning to AI to help find those vaccines' side effects [16, 17].
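To illustrate the demand-forecasting step in the simplest terms, the toy sketch below applies exponential smoothing to a weekly dose-demand series; both the numbers and the model choice are assumptions for illustration, not the methods IBM or Macro-Eyes actually use.

```python
# Toy vaccine-dose demand forecast via simple exponential smoothing.
# Weekly demand figures are invented for illustration.
demand = [1200, 1350, 1500, 1480, 1700, 1850]  # doses requested per week
alpha = 0.5                                     # smoothing factor (assumed)

level = demand[0]
for d in demand[1:]:
    level = alpha * d + (1 - alpha) * level    # update the smoothed level

print(f"forecast for next week: {level:.0f} doses")
```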
2.4 Monitoring Vaccine Temperature Using IoT

Most COVID-19 vaccines require temperature-controlled storage. For example, Covishield from Oxford-AstraZeneca and Covaxin from Bharat Biotech must be stored at 2–8 °C. One application is sensor-based IoT technology that continuously monitors data in real time, which can help ensure reliable storage conditions. Temperature sensors read changes in temperature and alert the device handling the next vaccine shipment. A large amount of data is also generated during this process, which must be stored and maintained in the cloud. Real-time monitoring of a country's vaccine supply chain in remote areas is also a major concern; using location-based analytics, the government can minimize this issue in the vaccine supply chain. The Indian government has launched a start-up challenge to enhance its intelligence platform through cutting-edge technological solutions to tackle the issues of monitoring and storing COVID-19 vaccines [18].
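A minimal sketch of such a cold-chain monitor is given below: whenever a reading falls outside the 2–8 °C band, an alert would be pushed to the cloud dashboard. The sensor function is a stub; a real deployment would read actual hardware and use a real messaging service.

```python
# Sketch of an IoT cold-chain check: alert when a vial's storage temperature
# leaves the 2-8 °C band required by vaccines such as Covishield and Covaxin.
import random
import time

LOW_C, HIGH_C = 2.0, 8.0

def read_sensor_celsius() -> float:
    # Stub standing in for a real temperature-sensor driver.
    return random.uniform(0.0, 10.0)

for _ in range(5):
    temp = read_sensor_celsius()
    if LOW_C <= temp <= HIGH_C:
        print(f"OK: {temp:.1f} °C")
    else:
        print(f"ALERT: {temp:.1f} °C out of range - notify cloud dashboard")
    time.sleep(0.1)  # a real device would sample on a slower schedule
```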
2.5 Management of Vaccine (Co-WIN)

Co-WIN (the COVID Vaccine Intelligence Network) was established following the acceptance of the two vaccines. A version of Co-WIN was created with assistance from the United Nations Development Programme (UNDP) [19]. Co-WIN was released to help the United Nations Universal Immunization Program manage vaccines through a smartphone app. Indeed, Co-WIN is a cloud-based system that uses artificial intelligence and IoT to design, execute, monitor, and evaluate COVID-19 vaccination [20]. Government registration and vaccination drives are in full swing, and recipients receive message notifications about their vaccination centres. Health experts and stakeholders have pointed out the limitations of this framework: they note that Co-WIN should also allow tracing of vaccines during transit, storage of data, and training of centre, district, and state authorities for effective immunization programmes. This can also be done using IoT and AI technologies.
2.6 Application of AI in Predicting COVID-19 Outcome

A rapid and accurate clinical evaluation of the disease's severity is critical, not only to aid healthcare decision-making but also to support logistical planning [21]. Infection severity, hospitalization needs, and disease outcome can be predicted using patient demographics such as age, clinical symptoms, and comorbidities [22]. Prognostic prediction models support physicians' decisions and aid in screening for high-risk patients; by predicting disease progression accurately and early, it is possible to reduce the mortality of COVID-19 patients. With the help of XGBoost, a high-performance machine learning algorithm, three potential biomarkers were identified: high-sensitivity C-reactive protein (hs-CRP), lymphocytes, and lactate dehydrogenase (LDH). A recursive tree-based decision system makes the XGBoost algorithm highly interpretable, and it predicts patient mortality nearly two weeks in advance with 90% accuracy [23]. Other research shows that SARS-CoV-2 can be identified based on several laboratory parameters such as urea, white blood cells, and myoglobin [24]. Such parameters can be selected by combining the least absolute shrinkage and selection operator (LASSO) logistic regression model with mRMR algorithm characteristics. According to this research, this multi-feature-based method can predict SARS-CoV-2 pneumonia prognosis with 98% sensitivity and 91% specificity [24]. Furthermore, AI can also be used to predict the risk of adverse events or the progression of COVID-19 [24, 25]. An AdaBoost random forest model was built using different data (e.g., health, travel history, location, and demographic information); a COVID-19 patient's outcome was predicted with 94% accuracy with this model [26]. In a study of 13,690 patients, the ML model was determined to be effective when combined with a list of features: the clinical, demographic, and comorbidity information of patients was evaluated to identify the COVID-19 outcome, which is helpful to physicians in decision-making [27]. In another example, the ventilation requirements of COVID-19 patients were predicted: an ML model was used alongside early warning scoring systems (MEWS) to make this prediction, and it successfully predicted a COVID-19 patient's need for a mechanical ventilator during hospitalization, improving patient care [28]. A mortality prediction model for COVID-19 was also created using the XGBoost method based on clinical and demographic information; it achieved high accuracy (AUC score of 0.91) by combining three main characteristics, such as oxygen saturation level and age, and because these three clinical features are highly accessible, the model is easily implementable [29].
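As a simplified, hedged sketch of the three-biomarker idea from [23], the code below fits a small, interpretable decision tree on hs-CRP, lymphocyte, and LDH values; the data are synthetic and the thresholds learned here are not the clinical rules reported in that study.

```python
# Sketch: interpretable tree over the three biomarkers highlighted in [23]
# (hs-CRP, lymphocyte percentage, LDH). All values are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
# Columns: hs-CRP (mg/L), lymphocytes (%), LDH (U/L) - units assumed.
X = rng.random((300, 3)) * np.array([200.0, 40.0, 800.0])
y = ((X[:, 2] > 400) & (X[:, 1] < 15)).astype(int)  # toy "high-risk" rule

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["hs-CRP", "lymphocytes", "LDH"]))
```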
2.7 Sentiment Analysis and Fake News Detection Regarding COVID-19

COVID-19 is also being fought using several natural language processing (NLP) techniques, specifically for detecting fake news and for sentiment analysis. Deep learning approaches such as LSTM have been used on COVID-19 comments to understand people's opinions and sentiments toward COVID-19. A comprehensive approach based on natural language processing was introduced to extract and evaluate Reddit reviews related to COVID-19 [30]. For sentiment analysis of the collected reviews, the authors utilized the LSTM model and identified trending topics related to COVID-19 [31–33]. The experiment shows that people/infection was the most frequently discussed topic. Additionally, the study found that negative reviews, such as "someone has died", were discussed more than positive reviews such as "get well soon". One of the most important areas for researchers to focus on is curtailing fake news about COVID-19. Fake news and misinformation have a major impact on people because information shapes our world view: people make critical decisions based on information [34]. Serrano et al. [35] applied machine learning techniques to identify misinformation in YouTube videos by analyzing YouTube comments. An accuracy of 89.4% was achieved using pre-trained NLP methods fine-tuned for this application.
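A minimal Keras sketch of the LSTM sentiment-classification setup described above follows; the vocabulary size, sequence length, and tiny random batch are assumptions for illustration, not the configuration used in the cited Reddit study.

```python
# Sketch of an LSTM sentiment classifier for COVID-19 comments.
# Hyperparameters and the toy input batch are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB, MAXLEN = 10000, 100
model = models.Sequential([
    layers.Embedding(VOCAB, 64),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),  # 1 = positive sentiment (assumed)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

X = np.random.randint(0, VOCAB, size=(8, MAXLEN))  # tokenized comments (stub)
y = np.random.randint(0, 2, size=(8, 1))
model.fit(X, y, epochs=1, verbose=0)
```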
3 Impact of COVID-19 Vaccine on Social Life

COVID-19 is a worldwide pandemic caused by the coronavirus SARS-CoV-2, which causes a serious illness. In December 2019, Wuhan, China, became the first city to report the novel virus, which quickly spread to other cities in mainland China and across the globe. The disease can range from unnoticeable symptoms to life-threatening illness. Patients with certain medical conditions and the elderly are more likely to develop severe symptoms. The disease spreads through droplets and airborne particles. According to the World Health Organization (WHO), over 229 million cases had been reported by 2021, making it one of the deadliest epidemics in history [36]. The COVID-19 vaccine is a key preventive step that will aid in containing the COVID-19 pandemic. COVID-19 vaccinations are now readily accessible in the United States, and the CDC advises that all individuals 12 years of age or older be immunized against COVID-19. The US Food and Drug Administration (FDA) approved an mRNA vaccine (Pfizer-BioNTech/Comirnaty) on August 23, 2021, as a 2-dose series for the prevention of symptomatic COVID-19 in individuals aged 16 and older. An Emergency Use Authorization (EUA) also allows the administration of this vaccine to persons between the ages of 12 and 15. An EUA has approved the use of a second mRNA vaccine (Moderna) and a recombinant, replication-incompetent
adenovirus serotype 26 (Ad26) vector vaccine [the Janssen vaccine (Johnson & Johnson)] in people over the age of 18. In certain immunocompromised patients, both mRNA vaccines can be given as an additional dose. Public health recommendations on the use of COVID-19 vaccines consider evidence of how effective the vaccines are in preventing symptomatic COVID-19, with and without severe adverse outcomes, and the vaccines' impact on the transmission of SARS-CoV-2. Additionally, societal factors and the risk to other individuals need to be considered when assessing the advantages and risks of additional preventive measures among vaccinated individuals (e.g., masking, physical distancing). The Advisory Committee on Immunization Practices and the CDC evaluate vaccine recommendations based on factors like population values, acceptability, and feasibility, along with individual health benefits and risks; the CDC paid attention to these factors when developing interim public health recommendations for vaccine recipients. Here, we summarize the available evidence for the COVID-19 vaccines currently authorized or approved (administered according to the recommended schedules), as well as additional considerations that help guide public health recommendations for fully vaccinated individuals, including:

• SARS-CoV-2 vaccine effectiveness and safety in the general population and among immunocompromised individuals.
• The effectiveness of heterologous (mixed) vaccine series.
• The performance of the SARS-CoV-2 vaccine (i.e., effectiveness and immunogenicity) against emerging variants, with special attention to the Delta (B.1.617.2) variant.
• The efficacy, effectiveness, and immunogenicity of the COVID-19 vaccine.

Generally, immunogenicity is defined as the ability of a vaccine antigen to generate effective protective immunity. Effectiveness describes a vaccine's performance in real-life observational studies, while efficacy describes its performance in carefully controlled clinical trials. A number of clinical studies demonstrate that the approved or authorized COVID-19 vaccines are effective and efficacious against laboratory-confirmed symptomatic and severe COVID-19. Furthermore, evidence suggests the COVID-19 vaccine can also reduce asymptomatic infection and transmission. Substantial reductions in SARS-CoV-2 infections (both symptomatic and asymptomatic) will in turn reduce SARS-CoV-2 virus transmission in the United States. Research is ongoing to assess whether fully vaccinated people with SARS-CoV-2 infections can transmit the disease to other people, whether vaccinated or not. According to preliminary evidence, Delta SARS-CoV-2 infections in completely immunized persons may be transmissible; however, SARS-CoV-2 transmission between unvaccinated persons remains the primary driver of continued spread.
4 The Plans of Governments and Hospitals to Reduce the Impact of Corona, Especially in Saudi Arabia

With the huge spread of the SARS-CoV-2 pandemic, all countries have faced a critical challenge in dealing with it. The SARS-CoV-2 pandemic, like the H1N1 pandemic in 2009, infected most of the world's countries, and different countries implemented plans and strict measures to tackle the impact of COVID-19. For example, in Chinese cities and towns, since the COVID-19 outbreak in late December 2019, the government has implemented strict control measures, including lockdowns and a travel ban, shut down schools and businesses, and applied restrictions on transportation. Furthermore, the government built hospitals in just a few days, quickly developed testing protocols, and applied cutting-edge technologies to track every patient [37]. All of these severe measures and enormous responses, along with new techniques, have resulted in a significant slowing of the virus's growth in China. Saudi Arabia was one of the countries significantly affected during this worldwide pandemic; like China, it has also shown great success in dealing with the spread of COVID-19. Saudi Arabia is the Arab world's second biggest country, with 24 M people plus a non-Saudi population of 37% [38, 39]. The majority of the population is categorized as middle-aged, ranging from 15 to 64, while those aged 0–14 and over 65 comprise 32.4% and 2.8%, respectively [40]. Since millions of Muslims flock to it for Umrah and Hajj, which rank among the biggest global gatherings of people, Saudi Arabia is a serious hotspot for pandemic outbreaks, as foreign pilgrims travel there from around the world. Even though the Saudi Arabian government makes huge efforts every year to ensure the safety of pilgrims, Umrah and Hajj still pose a serious risk of spreading infectious diseases globally [41, 42]. The Kingdom of Saudi Arabia has been among the first countries to act quickly and take unprecedented actions to mitigate the impacts of SARS-CoV-2; such actions and responses were implemented before the country recorded its first case on March 2nd, 2020. In response to the COVID-19 pandemic, the Kingdom of Saudi Arabia took the following actions. First, pausing all air travel between Saudi Arabia and China was a successful pandemic reaction [43]. On February 27th, all international pilgrims and visitors were barred from visiting Makkah and Madinah by the government. To curb the spread of COVID-19, Saudi Arabia banned entry to people from affected countries, including member states of the Gulf Cooperation Council. Second, in order to minimize COVID-19's devastating effects and restrain its spread, the Umrah was entirely halted, and the two holy mosques were closed daily beginning from March 5th. Furthermore, the Saudi government began implementing a remote learning programme and virtual classrooms in schools and universities on March 8th. Following this, all international and domestic flights, sports, and events were suspended. Furthermore, the five daily prayers were suspended in thousands of mosques around the kingdom, and for the first time in the country's
history, all Saudi Muslims were asked to worship at home. Implementing such strict measures has allowed Saudi Arabia to limit the spread of the virus compared to other countries.
5 Acceptance of and Demand for COVID-19 Vaccines

A number of studies have been conducted to determine vaccination intentions and demand for coronavirus disease 2019 (COVID-19) vaccines [44], and different methods have been used to gather information about the demand for COVID-19 vaccines. In June 2020, Lazarus [45] used a survey to find substantial cross-country heterogeneity in vaccine hesitancy, with higher acceptance rates in countries with greater trust in government. Neumann-Böhme et al. [46] suggest that individual perceptions of benefits and dangers, influenced by information from peers and trusted institutions, impact vaccination choices. In a comprehensive number of pre-pandemic studies [47], vaccine mistrust, concerns about side effects, and inconsistencies in policy messages from experts and elected officials have been linked to low vaccination rates [48]. Recent studies have found that fake news and conspiracy theories, especially those crafted in pseudo-scientific language, have a serious impact on the acceptance of COVID-19 vaccines [49, 50]. This hesitancy is also related to the use of social media and distrust in traditional and authoritative media sources [51]. The results confirm that, even after controlling for demographic information and perceptions of COVID-19 risk, trust and peers are the most important drivers of vaccine acceptance. Moreover, boosting trust, in particular through assurances about side effects, can substantially reduce vaccine hesitancy and should therefore be prioritized. In the pre-pandemic period, a large body of research examined the effect of vaccination on the dynamics of infectious diseases; Rowthorn and Toxvaerd [50] provide useful reviews. Another work, by Dizioli and Radzikowski [51], extends the standard SIR technique to incorporate vaccination policy and hesitancy, and uses this methodology to investigate the possible impact of vaccine hesitancy on COVID-19.
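To make the modeling extension concrete, below is a hedged sketch of an SIR model augmented with a constant vaccination rate, in the spirit of extending the standard SIR technique mentioned above; the parameter values are arbitrary, and the actual framework in [51] also models hesitancy, which is omitted here.

```python
# Sketch: SIR dynamics with a constant vaccination rate v. Vaccinated
# susceptibles move directly to the removed compartment. Parameters are
# arbitrary illustrative values.
import numpy as np
from scipy.integrate import odeint

beta, gamma, v = 0.30, 0.10, 0.01   # infection, recovery, vaccination rates

def sir_vax(state, t):
    s, i, r = state
    ds = -beta * s * i - v * s
    di = beta * s * i - gamma * i
    dr = gamma * i + v * s
    return [ds, di, dr]

t = np.linspace(0, 180, 181)                 # days
sol = odeint(sir_vax, [0.99, 0.01, 0.0], t)  # population fractions
print(f"peak infected fraction: {sol[:, 1].max():.3f}")
```

Raising v in such a model flattens and advances the infection peak, which is the mechanism through which hesitancy (a lower effective v) worsens outcomes.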
6 Conclusion

This chapter discussed the scope of implementation of AI and IoT techniques in medical areas such as dermatology, genome interpretation, and infection detection. Moreover, new advancements in IoT for monitoring patients infected with COVID-19, as well as possible applications in digital health monitoring enabled by these technologies, have been reported. Other successful technologies, like blockchain and unmanned aerial vehicles (UAVs), have been applied to overcome COVID-19 issues such as image diagnosis, drug and vaccine development, vaccine distribution, and monitoring. In addition, security and privacy aspects of AI and IoT technologies have been elaborated, alongside their effect on critical policy-making
tasks, especially in Saudi Arabia, which has limited the spread of the virus compared to other countries. Future research will expand these findings by developing new algorithms for IoT-supported COVID-19 health-monitoring applications.
References 1. Singh, R. P., Javaid, M., Haleem, A., & Suman, R. (2020). Internet of things (IoT) applications to fight against COVID-19 pandemic. Diabetes & Metabolic Syndrome: Clinical Research Review, 14(4), 521–524. 2. Ghimire, A., Thapa, S., Jha, A. K., Adhikari, S., & Kumar, A. (2020). Accelerating business growth with big data and artificial intelligence. In 2020 Fourth International conference on I-SMAC. 3. Thapa, S., Adhikari, S., Naseem, U., Singh, P., Bharathy, G., & Prasad, M. (2020). Detecting Alzheimer’s disease by exploiting linguistic information from Nepali transcript. In 2020 27th International Conference on Neural Information Processing. Springer. 4. Thapa, S., Singh, P., Jain, D. K., Bharill, N., Gupta, A., & Prasad, M. (2020). Data-Driven approach based on feature selection technique for early diagnosis of Alzheimer’s Disease. In 2020 International Joint Conference on Neural Networks (IJCNN). IEEE. 5. Thapa, S., Adhikari, S., Ghimire, A., & Aditya, A. (2020). Feature selection based twinsupport vector machine for the diagnosis of Parkinson’s Disease. In 2020 8th R10 Humanitarian Technology Conference (R10-HTC. IEEE. 6. Chamola, V., Hassija, V., Gupta, V., & Guizani, M. (2020). A Comprehensive Review of the COVID-19 Pandemic and the Role of IoT, Drones, AI, Blockchain, and 5G in Managing its Impact. IEEE Access, 8, 90225–90265. 7. Wynants, L., et al. (2020). Prediction models for diagnosis and prognosis of COVID-19 infection: systematic review and critical appraisal. Bmj, 369. 8. Ardakani, A. A., Kanafi, A. R., Acharya, U. R., Khadem, N., & Mohammadi, A. (2020). Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: Results of 10 convolutional neural networks. Computers in Biology and Medicine, 103795. 9. Elaziz, M. A., Hosny, K. M., Salah, A., Darwish, M. M., Lu, S., & Sahlol, A. T. (2020). New machine learning method for image-based diagnosis of COVID-19. PLos One, 15(6), e0235187. 10. Imran, A., et al. (2020). AI4COVID-19: AI enabled preliminary diagnosis for COVID-19 from cough samples via an app. arXiv preprint, arXiv:2004.01275. 11. Chen, H., Engkvist, O., Wang, Y., Olivecrona, M., & Blaschke, T. (2018). The rise of deep learning in drug discovery. Drug Discovery Today, 23(6), 1241–1250. 12. rukh Khattak, G., Vallecorsa, S., & Carminati, F. (2018). Three dimensional energy parametrized generative adversarial networks for electromagnetic shower simulation. In 2018 25th IEEE International Conference on Image Processing (ICIP) (pp. 3913–3917). IEEE. 13. Chenthamarakshan, V., et al. (2020). Target-specific and selective drug design for COVID-19 using deep generative models. arXiv preprint, arXiv:2004.01215. 14. Ong, E., Wang, H., Wong, M. U., Seetharaman, M., Valdez, N., & He, Y. (2020). Vaxign-ML: Supervised machine learning reverse vaccinology model for improved prediction of bacterial protective antigens. Bioinformatics, 36(10), 3185–3191. 15. Senior, A., Jumper, J., Hassabis, D., & Kohli, P. (2018). AlphaFold: Using AI for scientific discovery. DeepMind, Recuperado de https://deepmind.com/blog/alphafold. 16. Vaccine distribution: How AI and blockchain can speed up COVID vaccinations | Fortune. 17. Can AI Speed Up Vaccine Distribution? managedhealthcareexecutive.com. 18. CoWIN: Ministry of Electronics & IT launches challenge for strengthening COVID-19 vaccine intelligence network. ANI Retrieved December 24, 2020.
126
A. M. Almars et al.
19. Sachdeva, S. (2021). How can India devise a seamless vaccination program using AI & IoT? Geospatial World, Retrieved January 13, 2021, from https://dashboard.cowin.gov.in/. 20. Heldt, F.S., Vizcaychipi, M.P., Peacock, S., Cinelli, M., McLachlan, L., Andreotti, F., Jovanovic, S., Dürichen, R., Lipunova, N., & Fletcher, R.A. (2021). Early risk assessment for COVID-19 patients from emergency department data using machine learning. Scientific Report, 11, 4200. 21. Hu, C., Liu, Z., Jiang, Y., Shi, O., Zhang, X., Xu, K., Suo, C., Wang, Q., Song, Y., Yu, K., et al. (2021). Early prediction of mortality risk among patients with severe COVID-19, using machine learning. International Journal of Epidemiology, 49, 1918–1929. 22. Yan, L., Zhang, H.-T., Goncalves, J., Xiao, Y., Wang, M., Guo, Y., Sun, C., Tang, X., Jing, L., Zhang, M., et al. (2020). An interpretable mortality prediction model for COVID-19 patients. Nature Machine Intelligence, 2, 283–288. 23. Wu, G., Zhou, S., Wang, Y., Lv, W., Wang, S., Wang, T., & Li, X. (2020). A prediction model of outcome of SARS-CoV-2 pneumonia based on laboratory findings. Science and Reports, 10, 14042. 24. Chowdhury, M.E.H., Rahman, T., Khandakar, A., Al-Madeed, S., Zughaier, S.M., Doi, S.A.R., Hassen, H., Islam, M.T. (2021). An early warning tool for predicting mortality risk of COVID19 patients using machine learning. Cognition Computation, 1–16. 25. Iwendi, C., Bashir, A. K., Peshkar, A., Sujatha, R., Chatterjee, J. M., Pasupuleti, S., Mishra, R., Pillai, S., & Jo, O. (2020). COVID-19 patient health prediction using boosted random forest algorithm. Frontiers in Public Health, 8, 357. 26. Souza, F. S. H., Hojo-Souza, N. S., Santos, E. B., Silva, C. M., & Guidoni, D. L. (2020). Predicting the disease outcome in COVID-19 positive patients through machine learning: A retrospective cohort study with brazilian data. medRxiv. 27. Burdick, H., Lam, C., Mataraso, S., Siefkas, A., Braden, G., Dellinger, R.P., McCoy, A., Vincent, J.-L., Green-Saxena, A., & Barnes, G., et al. (2020). Prediction of respiratory decompensation in COVID-19 patients using machine learning: The ready trial. Computers in Biology and Medicine, 124, 103949. 28. Yadaw, A. S., Li, Y.-C., Bose, S., Iyengar, R., Bunyavanich, S., & Pandey, G. (2020). Clinical features of COVID-19 mortality: Development and validation of a clinical prediction model. The Lancet Digital Health, 2, e516–e525. 29. Jelodar, H., Wang, Y., Orji, R., & Huang, H. (2020). Deep sentiment classification and topic discovery on novel coronavirus or COVID-19 online discussions: Nlp using lstm recurrent neural network approach. arXiv preprint, arXiv:2004.11695. 30. Malki, Z., Atlam, E., Dagnew, G., Alzighaibi, A. R., Ghada, E., & Gad, I. (2020). Bidirectional residual LSTM-based human activity recognition. Computer and Information Science, 13(3), 40. https://doi.org/10.5539/cis.v13n3p40. 31. Gad, I., Hosahalli, D., Manjunatha, B. R., & Ghoneim, O. A. (2020). A robust deep learning model for missing value imputation in big NCDC dataset. Iran Journal of Computer Science. https://doi.org/10.1007/s42044-020-00065-z 32. Malki, Z., Atlam, E. S., Ewis, A., Dagnew, G., Reda, A., Elmarhomy, G., Elhosseini, M. A., Hassanien, A. E., Gad, I. (2020). ARIMA models for predicting the end of COVID-19 pandemic and the risk of a second rebound. Neural Computing and Applications https://doi.org/10.21203/ rs3.rs-34702/v1. 33. Tasnim, S., Hossain, M. M., & Mazumder, H. (2020). Impact of rumors and misinformation on COVID-19 in social media. 
Journal of preventive medicine and public health, 53(3), 171–174. 34. Serrano, J. C. M., Papakyriakopoulos, O., & Hegelich, S. (2020). NLP–based feature extraction for the detection of COVID-19 misinformation videos on youtube. openreview.net. 35. Malki, Z., Atlam, E. S., Hassanien, A. E., Dagnew, G., Elhosseini, M. A., & Gad,. (2020). Association between weather data and COVID-19 pandemic predicting mortality rate: Machine learning approaches. Chaos, Solitons & Fractals, 138, 110137. https://doi.org/10.1016/j.chaos. 2020. 36. Xiao, Y., & Torok, M. E. (2020). Taking the right measures to control COVID-19. The Lancet Infectious Diseases. https://doi.org/10.1016/S1473-3099(20)30152-3.
Applications of AI and IoT in COVID-19 Vaccine and Its Impact on Social …
127
37. General authority of statistics, Kingdom of Saudi Arabia. https://www.stats.gov.sa/en/indica tors/1. 38. Memish, Z. A., Zumla, A., Alhakeem, R. F., Assiri, A., Turkestani, A., & Al Harby, K. D. (2014). Hajj: Infectious disease surveillance and control. Lancet, 383, 2073–2082. https://doi. org/10.1016/S0140-6736(14)60381-0. 39. Hashem, A. M., Al-Subhi, T. L., Badroon, N. A., Hassan, A. M., Bajrai, L. H. M., & Banassir, T. M. (2019). MERS-CoV, influenza and other respiratory viruses among symptomaticpilgrims during 2014 Hajj season. Journal of Medical Virology, 91(6), 911–917. https://doi.org/10.1002/ jmv.25424. 40. Saudi Arabia bars travel to China amid coronavirus. Retrieved March 25, 2020, from https:// www.arabnews.com/node/1623851/saudi-arabia. 41. Leading countries based on number of Twitter users as of January2020. https://www.statista. com/statistics/242606/number-of-active-twitter-users-in-selected-countries/. 42. Lin, Y., Hu, Z., Zhao, Q., Alias, H., Danaee, M., & Wong, L. P. (2020). Understanding COVID19 vaccine demand and hesitancy: A nationwide online survey in China. PLoS Neglected Tropical Diseases, 14(12), e0008961. 43. Lazarus, J. V., Ratzan, S. C., Palayew, A., Gostin, L. O., Larson, H. J., Rabin, K., Kimball, S., & El-Mohandes, A. (2021). A global survey of potential acceptance of a COVID-19 vaccine. Nature Medicine, 27, 225–228. 44. Maftei, A., & Holman, A. C. (2021). SARS-CoV-2 Threat Perception and Willingness to Vaccinate: The mediating role of conspiracy beliefs, brief research report article. Frontiers in Psychology https://doi.org/10.3389/fpsyg.2021.672634. 45. Escandón, K., Rasmussen, A. L., Bogoch, I. I., Murray, E. J., Escandón, K., Popescu, S. V., & Kindrachuk, J. (2021). COVID-19 false dichotomies and a comprehensive review of the evidence regarding public health, COVID-19 symptomatology, SARS-CoV-2 transmission, mask wearing, and reinfection. BMC Infectious Diseases, 21, 710. 46. Martinez-Bravo, M., & Stegmann, A. (2021). In vaccines we trust? The effects of the CIA’s vaccine ruse on immunization in Pakistan. Journal of the European Economic Association, https://doi.org/10.1093/jeea/jvab018. 47. Dabla-Norris, E., Khan, H., Lima, F., & Sollaci, A. (2021). Who doesn’t want to be vaccinated? Determinants of vaccine hesitancy during COVID-191. International Monetary Fund. 48. Loomba, S., de Figueiredo, A., Piatek, S. J., de Graaf, K., & Larson, H. J. (2021). Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA. Nature Human Behaviour, 5, 337–348. 49. Murphy, J., Vallières, F., Bentall, R. P., Shevlin, M., McBride, O., Hartman, T. K., McKay, R., Bennett, K., Mason, L., Gibson-Miller, J., Levita, L., Martinez, A. P., Stocks, T. V. A., Karatzias, T., & Hyland, P. (2021). Psychological characteristics associated with COVID-19 vaccine hesitancy and resistance in Ireland and the United Kingdom. Nature Communications, 12, 29. 50. Rowthorn, R., & Toxvaerd, F. (2020). The optimal control of infectious diseases via prevention and treatment. https://doi.org/10.17863/CAM.52481. 51. Dizioli, A. G., & Radzikowski, A. (2021).a pandemic forecasting framework: an application of risk analysis. International Monetary Fund. 52. Van der Schaar, M., Alaa, A. M., Floto, A., Gimson, A., Scholtes, S., Wood, A., McKinney, E., Jarrett, D., Lio, P., & Ercole, A. (2020). How artificial intelligence and machine learning can help healthcare systems respond to COVID-19. Machine Learning, 110, 1–14.
COVID-19 Forecasting Based on an Improved Interior Search Algorithm and Multilayer Feed-Forward Neural Network Rizk M. Rizk-Allah and Aboul Ella Hassanien
Abstract COVID-19 is a novel coronavirus that emerged in December 2019 in Wuhan, China. Given its severe, rapidly growing outbreak in all parts of the globe, forecasting and analyzing confirmed cases (CS) has become a vital and challenging task. In this study, a new forecasting model is presented to analyze and forecast the CS of COVID-19 for the coming days, based on data reported since 22 January 2020. The proposed forecasting model, named ISACL-MFNN, integrates an improved interior search algorithm (ISA) based on a chaotic learning (CL) strategy into a multilayer feed-forward neural network (MFNN). The ISACL incorporates the CL strategy to enhance ISA's performance and avoid trapping in local optima. This methodology is intended to train the neural network by tuning its parameters to optimal values and thus achieve a high accuracy level in the forecasted results. The ISACL-MFNN model is investigated on the official COVID-19 data reported by the World Health Organization (WHO) to analyze the confirmed cases for the upcoming days. The performance of the proposed forecasting model is validated and assessed through several indices, including the mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE), and comparisons with other optimization algorithms are presented. The proposed model is investigated on the most affected countries (i.e., USA, Italy, and Spain). The experimental simulations illustrate that the proposed ISACL-MFNN provides more promising performance than the other algorithms in the forecasting task for the candidate countries. Keywords COVID-19 · Feed forward neural network · Forecasting · Interior search algorithm · Hybridization R. M. Rizk-Allah (B) Faculty of Engineering, Department of Basic Engineering Science, Menoufia University, Shebin El-Kom, Egypt URL: http://www.egyptscience.net A. E. Hassanien Faculty of Computer and AI, Cairo University and Scientific Research Group in Egypt (SRGE), Cairo, Egypt URL: http://www.egyptscience.net © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 A. E. Hassanien et al. (eds.), Medical Informatics and Bioimaging Using Artificial Intelligence, Studies in Computational Intelligence 1005, https://doi.org/10.1007/978-3-030-91103-4_8
1 Introduction

A novel coronavirus, named COVID-19, emerged in December 2019 from Wuhan in central China. It causes an epidemic of pneumonia in humans and poses a serious threat to global public health. The virus manifests through a range of symptoms involving shortness of breath, cough, and fever [1]. Despite the drastic containment measures taken by the governments of different countries, it swiftly spread to hit other parts of several countries. China was the mainland of the outbreak of this epidemic, while the USA found itself the country with the worst outbreak. Some governments decreed entire lockdowns of parts of their countries due to the exponential daily increase in infected people. As the confirmed cases increase daily, and given this virus's dangerousness, drastic policies and plans must be explored. In this sense, developing a forecasting model to predict the upcoming days is vital for officials in devising drastic protection measures. Recently, some efforts have been presented to address COVID-19. Zhao et al. [2] developed a statistical analysis based on the Poisson concept to estimate the real number of COVID-19 cases that had not been reported in the first half of January 2020. They estimated that the unreported cases reached 469 from 1 to 15 January 2020, and that those after 17 January 2020 had increased 21-fold. Nishiura et al. [3] presented a statistical estimation model to determine the infection rate of COVID-19 among 565 Japanese citizens evacuated from Wuhan, China, on three chartered flights (from 29 to 31 January 2020). They estimated the infection rate at 9.5% and the death rate at 0.3% to 0.6%. Tang et al. [4] developed a likelihood-based estimation model to estimate the transmission risk of COVID-19; they concluded the reproduction number could be effectively reduced by isolation and intensive contact tracing. In [5], the transmission risk of COVID-19 through human-to-human contact was studied on 47 patients. Fanelli et al. [6] developed a differential equation model to analyze the exponential growth of COVID-19 in three countries, including China, Italy, and France, in the time window from 22/01 to 15/03/2020. Accordingly, the literature involves some models that were developed for forecasting other epidemics. These include a compartmental model proposed by DeFelice et al. [7] to forecast the transmission and spillover risk of the human West Nile (WN) virus; they applied their model to the historical data reported from the mainland of this virus, Long Island, New York, from 2001 to 2014. In [8], forecasting patterns via time series models based on time-delay neural networks, multilayer perceptron (MLP), auto-regressive models, and radial basis functions are proposed to gauge and forecast hepatitis A virus infection, where these models are investigated on thirteen years of reported data from Turkey. The authors affirmed that the MLP outperforms the other models. In [9], a forecasting model using an ensemble adjustment Kalman filter is developed to address outbreaks of seasonal influenza, employing seasonal data of New York City from 2003 to 2008. In [10], a dynamic model based on the Bayesian inference concept is presented to forecast
Ebola’s outbreaks in some African countries, including Liberia, Guinea, and Sierra Leone. Massad et al. [11] developed a mathematical model to forecast and analyze the SARS epidemic, while Ong et al. [12] presented a forecasting model for influenza A (H1N1-2009). Moreover, a probability-based model is proposed by Nah et al. [13] to predict the spreading of the MERS. The feed-forward neural network (FNN) [14] presents one of the most commonly used artificial neural networks (ANNs) that has been applied in a wide range of forecasting applications with a high level of accuracy. The FNN possesses some distinguishing features that it makes valuable and attractive for forecasting problems. The first feature is a data-driven self-adaptive technique with few prior assumptions about the model. The second is that it can generalize. The third is a universal function approximation that provides a high degree of performance while approximating a large class of functions. Finally, it possesses nonlinear features. Due to these advantages of FNN, it has drawn overwhelming attention in several felids of prediction or forecasting tasks. For example, FNN was presented with two layers for approximating functions [14]. Isa et al. [15] demonstrated the FNN based on multilayer perceptron (MLP) that is conducted on the data set from University of California Irvine (UCI) repository. Lin et al. [16] developed a modified FNN based on quantum radial basis function to deal with the UCI repository dataset. Malakooti et al. [17] and Hornik et al. [18] proposed the FNN to obtain the approximation function’s optimum solution. In [19, 20], the back-propagation (BP) method has been employed as a training technique for FNN. However, these methods perceive some limitations, such as the falling in a local minimum and slow convergence. To alleviate these limitations, many researchers have been proposed combining meta-heuristic algorithms to improve the performance and the utility of the FNN. In this context, Zhang et al. [21] developed a combined particle swarm optimization (PSO) based on the BP method. Bohat et al. [22] proposed the gravitational search algorithm (GSA) and PSO for training an FNN. At the same time, Mirjalili et al. [23] introduced a combination of the PSO and GSA to optimize the FNN, where they presented an acceleration coefficient in GSA to improve the performance of the overall algorithm. Si et al. [24] developed a differential evolution (DE)-based method to search for the ANN’s synaptic weight coefficients’ optimal values. Shaw and Kinsner [25] presented the simulated annealing (SA) based on a chaotic strategy to train FNN, to mitigate the possibility of sticking in the local optima. Still, this algorithm suffered from slow convergence [26]. Furthermore, Karraboga [27] presented the artificial bee colony (ABC) for training FNN, where it suffers from local exploitation. Irani and Nasimi [28] proposed ant colony optimization (ACO) to optimize the weight vector of the FNN, where it deteriorates the global search. Apart from the previously proposed approaches based on FNN, the literature is wealthy, with many recent optimization models that have been employed for training tasks [29–32]. Although many related studies seem to be elegant for the forecasting tasks, they may deteriorate the diversity of solutions and get trapped in local optima. Furthermore, these methods may be problem-dependent. 
Therefore, these limitations can deteriorate the forecasting performance and may yield an unsatisfactory and imprecise quality of outcomes. This motivated
us to present a promising alternative model for forecasting tasks that achieves more accurate outcomes and avoids the previous limitations by integrating the strengths of a chaotic-based parallelization scheme and the interior search method. The interior search algorithm (ISA) is a novel meta-heuristic inspired by the beautification of objects with mirrors; it was proposed by Gandomi in 2014 [33] for solving global optimization problems. It contains two groups, a mirror group and a composition group, to attain the optimum placement of the mirrors and objects for the most attractive view. A prominent feature of ISA is that it involves only one control parameter. ISA has been applied to many engineering optimization fields [34–38]. However, ISA may get stuck in local optima when implemented for complicated and/or high-dimensional optimization problems. Therefore, to alleviate these shortcomings, ISA needs improvement strategies that can greatly impact its performance. In this paper, an improved interior search algorithm based on a chaotic learning (CL) strategy, named ISACL, is proposed. The ISACL starts with the historical COVID-19 dataset. This dataset is then sent to the MFNN model to perform the configuration process based on the weight and bias parameters. In this context, the ISACL is invoked to improve these parameters as the solutions, starting with ISA to explore the search space and the CL strategy to enhance the local exploitation capabilities. Through this methodology, the intent is to improve solution quality and alleviate falling into local optima. The quality of solutions is assessed according to the fitness value. The algorithm continues updating the solutions (parameters) iteratively until the stop condition is reached. The achieved best parameters are then used to configure the structure of the MFNN model to forecast and analyze the number of confirmed cases of COVID-19. The contribution points of this work can be summarized as follows:
A brilliant forecasting model has proposed to deal with the COVID-19, and the analysis for the upcoming days is performed based on the previous cases. An improved configuration based-MFNN model is presented using the ISACLMFNN algorithm. The proposed model is compared with the original MFNN and other metaheuristic algorithms such as GA, PSO, GWO, SCA, and ISA. The performance of the ISACL-MFNN affirms its efficacy in terms of the reported results and can achieve accurate analysis for practical forecasting tasks.
The rest of this paper is organized as follows. Section 2 introduces the preliminaries of the MFNN model and the original ISA. Subsequently, the proposed ISACL-MFNN framework is introduced in Sect. 3. Section 4 shows the experiments and simulation results. Finally, conclusions and remarks are provided in Sect. 5.
2 Preliminaries

This section provides the basic concepts of the multilayer feed-forward neural network and the original interior search algorithm.
2.1 Multilayer Feed-Forward Neural Network

The artificial neural network (ANN) is one of the most widely employed artificial intelligence methods for forecasting tasks. Its structure involves an input layer, hidden layers, and an output layer. The ANN simulates the human brain and contains several neurons; it performs training and testing scenarios on the input data [39]. In the present work, the input data are the days, and the output data are the daily number of cases. The ANN iteratively updates the network weights that connect the input and output layers to minimize the error on the input data. The ANN has several advantages: it can quickly learn and make decisions, and it depicts a relationship between the inputs and the output data without requiring a mathematical formulation. Furthermore, the ANN is easy to implement and flexible when employed for modeling. However, the ANN also has some disadvantages: it may generate errors during the forecasting process, the training process may reach inconsistent results, and it involves high-dimensional parameters (weights) that need to be found optimally. Furthermore, slow convergence and small sample sizes are two common shortcomings of ANNs. To construct the neural network model, the input and output data are determined. The number of neurons in the hidden layers must be carefully chosen because it influences the training accuracy. The ANN-based forecasting pattern in the present work uses the MFNN with two hidden layers. The input layer involves a number of neurons equal to the network inputs (days); N and M neurons are adopted for the first and second hidden layers, respectively, and one neuron is assigned to the output layer [40, 41]. In this context, the transfer function of the hidden neurons is the sigmoid function, and the transfer function of the output is a linear activation function. The associated output for a particular hidden neuron (the jth) is determined as follows:

$$y_j = \frac{1}{1 + e^{-\left(\sum_{i=1}^{n}(\nu_{ji} x_i - b_j)\right)}}, \quad j = 1, 2, \ldots, N \tag{1}$$
where $\nu_{ji}$ represents the weight between the ith input neuron and the jth neuron of the first hidden layer, $x_i$ denotes the ith input, and $y_j$ defines the first hidden layer output. Here, $b_j$ defines the bias of the first hidden layer. The output induced by the second hidden layer, denoted by $\theta_k$, is calculated as follows.
$$\theta_k = \frac{1}{1 + e^{-\left(\sum_{j=1}^{N}(\nu_{kj} y_j - b_k)\right)}}, \quad k = 1, 2, \ldots, M \tag{2}$$
where $\nu_{kj}$ represents the weight between the jth neuron of the first hidden layer and the kth neuron of the second hidden layer, $y_j$ denotes the jth input, and $\theta_k$ defines the second hidden layer output. Here, $b_k$ defines the bias of the second hidden layer. The overall output $OPT_l$ from the lth output neuron is computed as follows.

$$OPT_l = \sum_{k=1}^{M} \nu_{lk}\,\theta_k, \quad l = 1, 2, \ldots, H \tag{3}$$
where $\nu_{lk}$ denotes the weight between the kth second hidden neuron and the lth output neuron. The MFNN is trained using the error back-propagation algorithm. In this context, the algorithm's performance during the training process is assessed by means of the error, which equals the difference between the output of the MFNN and the target. Here, the mean square error (MSE) is considered:

$$MSE = \frac{1}{n}\sum_{i=1}^{n}\left((Y_{actual})_i - (OPT)_i\right)^2 \tag{4}$$
where n defines the number of training patterns, and $(OPT)_i$ and $(Y_{actual})_i$ are the output obtained by the MFNN and the target (actual) output, respectively. The fitness value for the training process is computed as follows.

$$Fitness = \min(MSE) = \min\,\frac{1}{n}\sum_{i=1}^{n}\left((Y_{actual})_i - (OPT)_i\right)^2 \tag{5}$$
In this sense, a gradient-based back-propagation algorithm is used to update the weights and thereby minimize the MSE between the target output and the output computed by the MFNN. The MSE is also used to update the biases from the output layer towards the hidden layers [40]. Therefore, the weights and biases can be updated as follows:
$$\nu_{ji}(t+1) = \nu_{ji}(t) + \Delta\nu_{ji}, \quad \Delta\nu_{ji} = \mu\,(y_j - x_i)$$
$$b_j(t+1) = b_j(t) + \Delta b_j, \quad \Delta b_j = \mu\,(y_j - x_i) \tag{6}$$
where μ defines the learning rate, $\Delta\nu_{ji}$ represents the change in the weight that connects the inputs to the first hidden neurons, and $\Delta b_j$ denotes the change in the bias. The proposed model considers the days as the input variables for the MFNN model (Fig. 1). During these months, the cumulative confirmed people infected by
Fig. 1 Structure of the presented MFNN pattern for forecasting task
COVID-19 are recorded as the neural network's target output. As the MFNN may suffer from weak performance during the search process, it is liable to fall into local optima. To solve this dilemma, meta-heuristic algorithms are usually used to improve the MFNN model's performance. In this study, an improved interior search algorithm based on a chaotic learning (CL) strategy (ISACL) is introduced to enhance the learning capability of the MFNN network.
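For illustration, the forward pass of Eqs. (1)–(3) and the MSE of Eq. (4) can be sketched in a few lines of Python (the chapter's experiments were coded in Matlab; the layer sizes and random weights below are placeholders, not the trained values):

```python
import numpy as np

def mfnn_forward(x, V1, b1, V2, b2, V3):
    """Forward pass of the two-hidden-layer MFNN.
    x: inputs (n,); V1: (N, n), b1: (N,); V2: (M, N), b2: (M,); V3: (H, M)."""
    y = 1.0 / (1.0 + np.exp(-(V1 @ x - b1)))      # Eq. (1): first hidden layer (sigmoid)
    theta = 1.0 / (1.0 + np.exp(-(V2 @ y - b2)))  # Eq. (2): second hidden layer (sigmoid)
    return V3 @ theta                             # Eq. (3): linear output layer

def mse(y_actual, y_pred):
    """Eq. (4): mean square error over the training patterns."""
    return np.mean((np.asarray(y_actual) - np.asarray(y_pred)) ** 2)

# Hypothetical sizes: 1 input (day), N = M = 8 hidden neurons, 1 output (cases).
rng = np.random.default_rng(0)
V1, b1 = rng.normal(size=(8, 1)), rng.normal(size=8)
V2, b2 = rng.normal(size=(8, 8)), rng.normal(size=8)
V3 = rng.normal(size=(1, 8))
print(mfnn_forward(np.array([5.0]), V1, b1, V2, b2, V3))
```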
2.2 Interior Search Algorithm

ISA involves two main stages that simulate architectural decoration and interior design. The first is the composition stage, in which the composition of elements (solutions, from an optimization viewpoint) is altered to attain a more beautiful environment, which represents a better fitness. The second is the mirror search stage, which aims to explore better views between the elements of this stage and the fittest one. More details about ISA are described as follows.

(1) Generate a population of elements randomly between the upper and lower search bounds, $x_{upper}$ and $x_{lower}$, and record each element's fitness value.
(2) Obtain the fittest element, which corresponds to the minimum objective function in the case of minimization. The fittest element is denoted by $x_{gb}^k$ for the kth iteration, where the suffix gb defines the global best.
(3) Divide the population elements randomly into two groups: the composition group (CG) and the mirror group (MG). This can be accomplished by defining the parameter λ. For the ith element, if $r_i \le \lambda$, the mirror stage is performed; otherwise, the element carries out the composition stage. Here $r_i$ denotes a random value in the range 0 to 1.
(4) In the composition stage, each element is updated in a random manner within a limited search space, formulated as

$$x_i^k = x_{lower}^k + r_2\,(x_{upper}^k - x_{lower}^k) \tag{7}$$

where $x_i^k$ defines the ith element at the kth iteration and $r_2$ denotes a random value in the range 0 to 1. Here $i = 1, 2, \ldots, PS$ (PS: population size) and $k = 1, 2, \ldots, T$ (T: maximum number of iterations).
(5) For the mirror group, a mirror is placed between each element and the best one among the group (the global best). The position of the mirror for the ith element at the kth iteration is expressed as follows:

$$x_{m,i}^k = r_3\,x_i^{k-1} + (1 - r_3)\,x_{gb}^k \tag{8}$$

where $r_3$ defines a random value within the interval 0 and 1. The image or virtual position of the element depends on the mirror location, and is obtained as follows:

$$x_i^k = 2\,x_{m,i}^k - x_i^{k-1} \tag{9}$$

(6) To enhance the position of the global best, a random walk is implemented as a local search to make a slight change to the global best position. This strategy is formulated as follows:

$$x_{gb}^k = x_{gb}^{k-1} + rn \times \gamma \tag{10}$$

where rn represents a vector of normally distributed random numbers of the same size as the element, and γ is a user-defined scaling factor that depends on the search-space size. Here, γ is taken as $0.01\,(x_{upper}^k - x_{lower}^k)$.
(7) Obtain the fitness value for each new position and update the element's location if it is improved. This can be expressed as

$$x_i^k = \begin{cases} x_i^k & \text{if } f(x_i^k) < f(x_i^{k-1}) \\ x_i^{k-1} & \text{else} \end{cases} \tag{11}$$

(8) If the stopping criteria are satisfied, the procedure terminates; otherwise, repeat from step 2.

The pseudo-code for the traditional ISA is portrayed in Fig. 2.
Fig. 2 The framework of the ISA
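To make the steps concrete, a minimal Python sketch of the traditional ISA described above follows (the chapter's experiments were coded in Matlab, and the sphere objective in the usage line is purely illustrative):

```python
import numpy as np

def isa(f, lower, upper, pop_size=10, iters=500, lam=0.2, seed=0):
    """Traditional Interior Search Algorithm for minimization, steps (1)-(8)."""
    rng = np.random.default_rng(seed)
    dim = len(lower)
    X = lower + rng.random((pop_size, dim)) * (upper - lower)    # step 1
    fit = np.array([f(x) for x in X])
    gamma = 0.01 * (upper - lower)                               # scaling factor of Eq. (10)
    for _ in range(iters):
        j = int(np.argmin(fit))                                  # step 2: fittest element
        gb = X[j].copy()
        cand = gb + rng.normal(size=dim) * gamma                 # step 6: random walk, Eq. (10)
        fc = f(cand)
        if fc < fit[j]:
            X[j], fit[j], gb = cand, fc, cand
        for i in range(pop_size):
            if rng.random() <= lam:                              # step 3: mirror group if r_i <= lambda
                r3 = rng.random()
                m = r3 * X[i] + (1.0 - r3) * gb                  # Eq. (8): mirror position
                new = 2.0 * m - X[i]                             # Eq. (9): virtual (image) position
            else:                                                # composition group
                new = lower + rng.random(dim) * (upper - lower)  # Eq. (7)
            new = np.clip(new, lower, upper)
            fn = f(new)
            if fn < fit[i]:                                      # step 7: greedy update, Eq. (11)
                X[i], fit[i] = new, fn
    j = int(np.argmin(fit))
    return X[j], fit[j]                                          # step 8: iteration cap reached

# Illustrative usage on the sphere function in 5 dimensions:
best, val = isa(lambda x: float(np.sum(x ** 2)), np.full(5, -10.0), np.full(5, 10.0))
```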
2.3 The Motivation of This Work

The coronavirus epidemic, COVID-19, is an outbreak that has hit several countries globally, causing considerable turmoil among people. Thus, the practical intent of the proposed work is to assist officials in estimating a realistic picture of the timing and peak of the epidemic (i.e., estimating and forecasting the maximum number of infected individuals using a forecasting model), which can help officials develop drastic containment measures to avoid the epidemic spread of this virus. Regarding the proposed methodology, an improved interior search algorithm (ISA) is enhanced with a chaotic learning (CL) strategy to improve the seeking ability and avoid trapping in local optima; the result is named ISACL. The ISACL plays an optimization role in achieving an optimal configuration of the MFNN through the training process, by tuning its parameters. Accordingly, the simulation results demonstrate the effectiveness and robustness of the proposed forecasting model on the investigated forecasting tasks.
3 Proposed ISACL Algorithm

The proposed ISACL algorithm is developed via two improvements: first, the composition group is updated based on individuals' experience to emphasize the population's diversity; second, a chaotic learning strategy is carried out on the best solution to improve its quality during the optimization process. The details behind the ISACL are elucidated as follows.
3.1 Composition-Based Experience Strategy

In ISA, the elements of the composition group are updated in a random manner, which may deteriorate the algorithm's acceleration and population diversity. Hence, an experience strategy is introduced to improve both. In this sense, two individuals $x_l^k$ and $x_r^k$ are chosen randomly from the population. The likely search direction is then used to update the current element as follows:

$$x_i^k = \begin{cases} x_i^k + r_2\,(x_r^k - x_l^k) & \text{if } f(x_r^k) < f(x_l^k) \\ x_i^k + r_2\,(x_l^k - x_r^k) & \text{otherwise} \end{cases} \tag{12}$$
3.2 Chaotic Learning Based Local Searching Mechanism

The main merit of chaotic behavior lies in its sensitivity to initial conditions, which allows the iterative search to proceed at higher speed than a conventional stochastic search thanks to its ergodicity and mixing properties. To effectively increase the algorithm's superiority and robustness, a parallelized chaotic learning (CL) strategy is introduced. The CL strategy starts the search from different initial points and thus enhances the convergence rate and overall speed of the proposed algorithm. The steps of the CL strategy can be described as follows.

Step 1. Generation of chaotic values: In this step, an N × D matrix $C^k$ of chaotic values is generated according to N maps as follows:

$$C^k = \begin{bmatrix} \alpha_{11}^k & \alpha_{12}^k & \cdots & \alpha_{1D}^k \\ \alpha_{21}^k & \alpha_{22}^k & \cdots & \alpha_{2D}^k \\ \vdots & \vdots & \ddots & \vdots \\ \alpha_{N1}^k & \alpha_{N2}^k & \cdots & \alpha_{ND}^k \end{bmatrix}_{N \times D} \tag{13}$$
where D is the number of dimensions of the decision (control) variables, and $\alpha_{jd}^k$ denotes the chaotic number generated within the range (0, 1) by the jth chaotic map at the kth iteration on the dth dimension. The chaotic numbers in (13) are generated using the functions introduced in [42].

Step 2. Mapping of the candidate solution: A candidate solution $x = (x_1, x_2, \ldots, x_D)$ with D dimensions can be mapped as follows.

$$X_{candidate}^k = \begin{bmatrix} x^k \\ x^k \\ \vdots \\ x^k \end{bmatrix}_{N \times 1} \tag{14}$$
$$X_{FIC}^k = x_{lower} + C^k\,(x_{upper} - x_{lower}) \tag{15}$$

$$Z_{chaotic}^k = \lambda\,X_{candidate}^k + (1 - \lambda)\,X_{FIC}^k \tag{16}$$

where $X_{candidate}^k$ denotes the matrix of the individual $x^k$ repeated N times, λ = k/T identifies the weighting parameter, and $X_{FIC}^k$ defines the feasible individuals generated chaotically. In this regard, $X_{candidate}^k$ is taken as the best-so-far solution.

Step 3. Updating the best solution: If $f(Z_{chaotic}^k) < f(x_{gb}^k)$, then set $x_{gb}^k = Z_{chaotic}^k$; otherwise, maintain $x_{gb}^k$.

Step 4. Stopping the chaotic search: If the maximum number of iterations for the chaotic search phase is reached, stop this phase.
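A Python sketch of this CL phase follows, with the logistic map standing in for the N chaotic maps of [42] (the map choice, like the function names, is an assumption for illustration):

```python
import numpy as np

def chaotic_learning(f, best, lower, upper, n_maps=5, cl_iters=50, k=1, T=500):
    """Parallelized chaotic learning around the best-so-far solution (Eqs. 13-16)."""
    D = best.size
    alpha = np.random.default_rng(1).random((n_maps, D))   # initial chaotic values in (0, 1)
    gb, fgb = best.copy(), f(best)
    for _ in range(cl_iters):
        alpha = 4.0 * alpha * (1.0 - alpha)                # Eq. (13): logistic-map update
        X_fic = lower + alpha * (upper - lower)            # Eq. (15): feasible chaotic individuals
        lam = k / T                                        # weighting parameter lambda = k/T
        Z = lam * gb + (1.0 - lam) * X_fic                 # Eq. (16): blend with candidate (Eq. 14)
        for z in Z:                                        # Step 3: keep any improvement
            fz = f(z)
            if fz < fgb:
                gb, fgb = z.copy(), fz
    return gb, fgb
```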
3.3 Framework of ISACL

Based on the abovementioned improvements, the framework of ISACL is described by the pseudo-code in Fig. 3 and the flowchart in Fig. 4. The ISACL starts its optimization process by initializing a population of random solutions. After that, ISA updates the solutions, and the best solution is refined using the CL phase. The superior solution then feeds the next iteration. The procedure continues until a stopping criterion is met.
Fig. 3 The framework of the proposed ISACL
4 Experiment and Results

This section presents the description of the COVID-19 dataset, the parameter settings for the presented algorithms, the performance measures, the results and modeling analysis, and the discussion.
Fig. 4 Training process of the MFNN based on the proposed ISACL
4.1 Description of the COVID-19 Dataset

This subsection describes the COVID-19 dataset used in this study. The dataset is gathered from the WHO website [43] and includes the confirmed cases. The MFNN is set up with input data represented by the days and output (target) data characterized by the reported cases, where about 75% of the reported cases are employed for training the model. The rest are utilized to validate the model and then generalize to any upcoming input cases. In this study, the data of the three countries with the largest infected populations, the USA, Italy, and Spain, are selected, referring to the period 22/1/2020 to 3/4/2020.

Table 1 The parameter structures of the implemented algorithms

Algorithm | Parameters
PSO | PS = 10; inertia weight: wmin = 0.1, wmax = 0.4; acceleration coefficients: c1 = c2 = 2
GA | PS = 10; crossover probability: cr = 0.25; mutation probability: pm = 0.2
GWO | PS = 10; a = 2–0; r1 = random; r2 = random; A = 2·a·r2 − a; C = 2·r1
SCA | PS = 10; c2 ∈ [0, 2π]; c3 ∈ [0, 2]; c1 = 1–0 (linearly decreasing)
ISA & ISACL | PS = 10; tuning parameter: α = 0.2
MFNN | Epochs = 2000; no. of hidden neurons = 8; no. of training samples = 74
4.2 Parameter Settings

To assess the proposed ISACL-MFNN on forecasting COVID-19, it is compared with different algorithms, including GA [29], PSO [44], GWO [44], SCA [44], and the standard ISA. Because of the high degree of randomness associated with meta-heuristic algorithms, and to ensure fairness, each algorithm is carried out over several independent runs, and the best result is reported along with the performance measures. To attain fair comparisons, the common operation parameters, the maximum number of iterations and the population size, are set to 500 and 10, respectively, while the other parameters associated with each algorithm are set as reported in its corresponding literature; the overall parameters are tabulated in Table 1. All algorithms are coded in Matlab 2014b on Windows 7 (64-bit) with a Core i5 CPU and 4 GB RAM.
4.3 Indices for Performance Assessment To further assess the accuracy and quality of the presented algorithms, some performance indices are employed as follows.
4.3.1 Root Mean Square Error (RMSE)

$$RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left((Y_{actual})_i - (Y_{model})_i\right)^2} \tag{17}$$

where N represents the sample size of the data.

4.3.2 Mean Absolute Error (MAE)

$$MAE = \frac{1}{N}\sum_{i=1}^{N}\left|(Y_{actual})_i - (Y_{model})_i\right| \tag{18}$$

4.3.3 Mean Absolute Percentage Error (MAPE)

$$MAPE = \frac{1}{N}\sum_{i=1}^{N}\left|\frac{(Y_{actual})_i - (Y_{model})_i}{(Y_{model})_i}\right| \tag{19}$$

4.3.4 Root Mean Squared Relative Error (RMSRE)

$$RMSRE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\frac{(Y_{actual})_i - (Y_{model})_i}{(Y_{model})_i}\right)^2} \tag{20}$$

4.3.5 Coefficient of Determination (R²)

$$R^2 = 1 - \frac{\sum_{i=1}^{N}\left((Y_{actual})_i - (Y_{model})_i\right)^2}{\sum_{i=1}^{N}\left((Y_{actual})_i - \bar{Y}_{model}\right)^2} \tag{21}$$

where $\bar{Y}_{model}$ denotes the mean of $(Y_{actual})_i$ over all i. The smaller the values of these metrics (i.e., RMSE, MAE, MAPE, and RMSRE), the higher the accuracy of the forecasting model, while a higher value of R² denotes a high level of correlation; thus, the closer the value of R² is to 1, the better the result of the candidate method.
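For reference, the five indices of Eqs. (17)–(21) translate directly into code; a minimal NumPy sketch (illustrative, since the chapter's experiments were coded in Matlab):

```python
import numpy as np

def forecast_metrics(y_actual, y_model):
    """Eqs. (17)-(21): RMSE, MAE, MAPE, RMSRE, and R2."""
    a, m = np.asarray(y_actual, float), np.asarray(y_model, float)
    err = a - m
    rmse = np.sqrt(np.mean(err ** 2))                         # Eq. (17)
    mae = np.mean(np.abs(err))                                # Eq. (18)
    mape = np.mean(np.abs(err / m))                           # Eq. (19)
    rmsre = np.sqrt(np.mean((err / m) ** 2))                  # Eq. (20)
    r2 = 1.0 - np.sum(err ** 2) / np.sum((a - a.mean()) ** 2) # Eq. (21)
    return {"RMSE": rmse, "MAE": mae, "MAPE": mape, "RMSRE": rmsre, "R2": r2}
```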
4.4 Results and Discussions

To forecast the COVID-19 confirmed cases of the three countries (USA, Spain, and Italy) most affected as of 3/4/2020, six optimization algorithms are implemented, among which ISACL is the proposed one. The results of the ISACL are compared with those of the other competitors. The algorithms are assessed by the indices reported in Table 2. Based on the reported results, it can be observed that the proposed ISACL outperforms the other models: it provides the lowest values for the overall indices, including RMSRE, RMSE, MAPE, and MAE, and achieves the highest value of R², nearly 1, indicating an accurate correlation between the target and the forecasted results. On the other hand, all algorithms' forecasted results for the three countries are depicted in Figs. 5, 6 and 7. These figures depict the presented algorithms' training on the historical COVID-19 data and the forecasted values for twelve days. Also, the forecasted cases for the upcoming days, from 4/4/2020 to 15/4/2020, are presented in Table 3, where the predicted data are evaluated against the real ones in terms of the relative absolute error.

Table 2 The results of the error analysis on the training set (22/1/2020–30/3/2020): RMSE, MAE, MAPE, RMSRE, and R² for each method (MFNN, GA-MFNN, PSO-MFNN, GWO-MFNN, SCA-MFNN, ISA-MFNN, ISACL-MFNN) on the USA, Italy, and Spain data; ISACL-MFNN attains the lowest error values and the R² closest to 1 for each country
Fig. 5 The actual data (target) and the data forecasted by the presented methods for the USA. Actual: days from January 22, 2020 to April 3, 2020 (days 1–73); forecasted: April 4, 2020 to April 15, 2020 (days 74–85)
Fig. 6 The actual data (target) with the obtained (forecasted data) by the presented methods for Spain
For further validation, the forecasting results for the test set's confirmed cases, 31/3/2020–3/4/2020, are depicted in Fig. 8, where the training set is taken from 22/1/2020 to 30/3/2020. From Fig. 8, it can be noted that the proposed ISACL is closer to the actual historical data than the other algorithms.
Fig. 7 The actual data (target) with the obtained (forecasted data) by the presented methods for Italy
Table 3 Forecasting data for the coming days of the confirmed cases in the studied countries: 4/4/2020–15/4/2020

Date | USA real | USA predicted | Italy real | Italy predicted | Spain real | Spain predicted | Rel. abs. error USA | Rel. abs. error Italy | Rel. abs. error Spain
4/4/2020 | 297,386 | 304,000.7 | 124,620 | 126,775.3 | 190,513 | 121,728.6 | 0.0222 | 0.0173 | 0.3610
5/4/2020 | 318,928 | 332,772.8 | 128,938 | 130,618.7 | 194,026 | 124,844.1 | 0.0434 | 0.0130 | 0.3566
6/4/2020 | 348,003 | 360,440.2 | 132,537 | 134,156.3 | 197,848 | 127,212.2 | 0.0357 | 0.0122 | 0.3570
7/4/2020 | 378,447 | 386,487 | 135,574 | 137,392.7 | 201,433 | 128,986.4 | 0.0212 | 0.0134 | 0.3597
8/4/2020 | 406,612 | 410,492.9 | 139,408 | 140,336.8 | 205,310 | 130,301.5 | 0.0095 | 0.0067 | 0.3653
9/4/2020 | 436,815 | 432,150.7 | 143,612 | 143,001 | 209,036 | 131,268.8 | 0.0107 | 0.0043 | 0.3720
10/4/2020 | 467,424 | 451,271.2 | 147,562 | 145,400.6 | 212,715 | 131,977.2 | 0.0346 | 0.0146 | 0.3796
11/4/2020 | 492,024 | 467,775.5 | 152,259 | 147,552.4 | 215,850 | 132,495.3 | 0.0493 | 0.0309 | 0.3862
12/4/2020 | 514,192 | 481,680.6 | 156,353 | 149,474.6 | 218,321 | 132,875.8 | 0.0632 | 0.0440 | 0.3914
13/4/2020 | 535,889 | 493,078.9 | 159,506 | 151,185.7 | 220,967 | 133,158.2 | 0.0799 | 0.0522 | 0.3974
14/4/2020 | 560,308 | 502,117.5 | 162,479 | 152,704.3 | 223,736 | 133,372 | 0.1039 | 0.0602 | 0.4039
15/4/2020 | 579,157 | 508,978.9 | 165,145 | 154,048.6 | 226,551 | 133,539.6 | 0.1212 | 0.0672 | 0.4106
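As a quick consistency check on the relative absolute error columns of Table 3, take the USA entry for 4/4/2020: |297,386 − 304,000.7| / 297,386 = 6,614.7 / 297,386 ≈ 0.0222, which matches the tabulated value.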
Fig. 8 Forecasting results (31/3/2020–3/4/2020) of the confirmed cases for the studied countries (test set): (a) USA, (b) Spain, (c) Italy; curves show Actual, GA, PSO, GWO, SCA, ISA, and ISACL
Finally, based on the obtained results, it can be observed that the proposed ISACL-MFNN provides accurate results and has a high ability to forecast the COVID-19 dataset for the studied cases. In this sense, the limitations of the traditional MFNN are avoided due to the integration with the ISACL methodology, where the CL strategy enhances the local exploitation capabilities and yields high-quality solutions.
5 Conclusions

This paper presented an improved interior search algorithm based on a chaotic learning (CL) strategy, named ISACL, which is implemented to improve the performance of the MFNN by finding its optimal structure in terms of weights and biases. The developed ISACL-MFNN model is presented as a forecasting model to deal with the novel coronavirus, COVID-19, which emerged in December 2019 in Wuhan, China. The proposed ISACL-MFNN model is investigated to predict the confirmed cases for the upcoming days. The performance of the proposed ISACL-MFNN is assessed through the RMSE, MAE, MAPE, RMSRE, and R² metrics. The obtained results affirm the effectiveness and efficacy of the proposed model for the prediction task. Accordingly, the proposed ISACL-MFNN can present a promising alternative forecasting model for practical forecasting applications. We hope the presented model provides a quantitative picture to help researchers in a specific country forecast and analyze the spread of this epidemic over time.
References 1. Li, R., Qiao, S., & Zhang, G. (2020). Analysis of angiotensin-converting enzyme 2 (ACE2) from different species sheds some light on cross-species receptor usage of a novel coronavirus 2019-nCoV. Journal of Infection, 80(4), 469–496. 2. Zhao, S., Musa, S. S., Lin, Q., Ran, J., Yang, G., Wang, W., Lou, Y., Yang, L., Gao, D., He, D., et al. (2020). Estimating the unreported number of novel coronavirus (2019-nCoV) cases in china in the first half of January 2020: A data-driven modelling analysis of the early outbreak. Journal of Clinical Medicine, 9, 388. 3. Nishiura, H., Kobayashi, T., Yang, Y., Hayashi, K., Miyama, T., Kinoshita, R., Linton, N. M., Jung, S. M., Yuan, B., & Suzuki, A. et al. (2020). The rate of underascertainment of novel Coronavirus (2019-nCoV) infection: Estimation using japanese passengers data on evacuation flights. Journal of Clinical Medicine, 9, 419. 4. Tang, B., Wang, X., Li, Q., Bragazzi, N. L., Tang, S., Xiao, Y., & Wu, J. (2020). Estimation of the transmission risk of the 2019-nCoV and its implication for public health interventions. Journal of Clinical Medicine, 9, 462. 5. Thompson, R. N. (2020). Novel Coronavirus outbreak in Wuhan, China, 2020: Intense surveillance is vital for preventing sustained transmission in new locations. Journal of Clinical Medicine, 9, 498. 6. Fanelli, D., & Piazza, F. (2020). Analysis and forecast of COVID-19 spreading in China, Italy and France. Chaos, Solitons & Fractals, 134, 109761. 7. DeFelice, N. B., Little, E., Campbell, S. R., & Shaman, J. (2017). Ensemble forecast of human West Nile virus cases and mosquito infection rates. Nature Communications, 8, 1–6. 8. Ture, M., & Kurt, I. (2006). Comparison of four different time series methods to forecast hepatitis a virus infection. Expert Systems with Applications, 31, 41–46. 9. Shaman, J., & Karspeck, A. (2012). Forecasting seasonal outbreaks of influenza. Proceedings of the National academy of Sciences of the United States of America, 109, 20425–20430. 10. Shaman, J., Yang, W., & Kandula, S. (2014). Inference and forecast of the current West African Ebola outbreak in Guinea, Sierra Leone and Liberia. PLoS Curr. 2014, 6. https://doi.org/10. 1371/currents.outbreaks.3408774290b1a0f2dd7cae877c8b8ff6. 11. Massad, E., Burattini, M. N., Lopez, L. F., & Coutinho, F. A. (2005). Forecasting versus projection models in epidemiology: The case of the SARS epidemics. Medical Hypotheses, 65, 17–22. 12. Ong, J. B. S., Mark, I., Chen, C., Cook, A. R., Lee, H. C., Lee, V. J., Lin, R. T. P., Tambyah, P. A., & Goh, L. G. (2010). Real-time epidemic monitoring and forecasting of H1N1–2009 using influenza-like illness from general practice and family doctor clinics in Singapore. PLoS ONE, 5. https://doi.org/10.1371/journal.pone.0010036. 13. Nah, K., Otsuki, S., Chowell, G., & Nishiura, H. (2016). Predicting the international spread of Middle East respiratory syndrome (MERS). BMC Infectious Diseases, 16, 356. 14. Irie, M. (1988). Capabilities of three-layered perceptrons. In 2004 IJCNN, 641, pp. 641–648. 15. Mat Isa, N. A., & Mamat, W. M. F. W. (2011). Clustered-Hybrid multilayer perceptron network for pattern recognition application. Applied Soft Computing, 11, 1457–1466. 16. Lin, C. J., Chen, C. H., Lee, C. Y. (2004). A self-adaptive quantum radial basis function net- work for classification applications. In 2004 IJCNN (IEEE Cat. No.04CH37541), 3264, pp. 3263–3268. 17. Malakooti, B., & Zhou, Y. (1998). 
Approximating polynomial functions by feed-forward artificial neural networks: Capacity analysis and design. Applied Mathematics and Computation, 90, 27–51. 18. Hornik, K., Stinchcombe, M., & White, H. (1989). Multilayer feed-forward networks are universal approximators. Neural Networks, 2, 359–366. 19. Zhang, N. (2009). An online gradient method with momentum for two-layer feed-forward neural networks. Applied Mathematics and Computation, 212, 488–498. 20. Hagan, M. T., & Menhaj, M. B. (1994). Training feed-forward networks with the Marquardt algorithm. IEEE Transactions on Neural Networks, 5, 989–993.
21. Zhang, J. R., Zhang, J., Lok, T. M., & Lyu, M. R. (2007). A hybrid particle swarm optimization–back-propagation algorithm for feed-forward neural network training. Applied Mathematics and Computation, 185, 1026–1037. 22. Bohat, V. K., & Arya, K. V. (2018). An effective gbest-guided gravitational search algorithm for real-parameter optimization and its application in training of feed-forward neural networks. Knowledge-Based Systems, 143, 192–207. 23. Mirjalili, S., Mohd Hashim, S. Z., & Moradian Sardroudi, H. (2012). Training feed-forward neural networks using hybrid particle swarm optimization and gravitational search algorithm. Applied Mathematics and Computation, 218, 11125–11137. 24. Si, T., Hazra, S., & Jana, N. (2012). Artificial neural network training using differential evolutionary algorithm for classification. In: S. Satapathy, P. Avadhani, & A. Abraham (Eds.), Proceedings of the International Conference on Information Systems Design and Intelligent Applications 2012 (INDIA 2012) held in Visakhapatnam, India, January, Springer, Berlin/Heidelberg, pp. 769–778. AISC 132. 25. Shaw, S., & Kinsner, W. (1996). Chaotic simulated annealing in multilayer feed-forward networks. In Proceedings of the Canadian Conference on Electrical and Computer Engineering, pp. 265–269. 26. Zhang, J. R., Zhang, J., Lock, T. M., & Lyu, M. R. (2007). A hybrid particle swarm optimisation–back-propagation algorithm for feed-forward neural network training. Applied Mathematics and Computation, 128, 1026–1037. 27. Karaboga, D., Akay, B., & Ozturk, C. (2007). Artificial bee colony (ABC) optimization algorithm for training feed-forward neural networks. MDAI, 7, 318–319. 28. Irani, R., & Nasimi, R. (2012). An evolving neural network using an ant colony algorithm for a permeability estimation of the reservoir. Petroleum Science and Technology, 30(4), 375–384. 29. Tang, R., Fong, S., Deb, S., Vasilakos, A. V., & Millham, R. C. (2018). Dynamic group optimisation algorithm for training feed-forward neural networks. Neurocomputing, 314, 1–19. 30. Wu, H., Zhou, Y., Luo, Q., & Basset, M. A. (2016). Training feed-forward neural networks using symbiotic organisms search algorithm. Computational Intelligence and Neuroscience. 31. Huang, M. L., & Chou, Y. C. (2019). Combining a gravitational search algorithm, particle swarm optimization, and fuzzy rules to improve the classification performance of a feed-forward neural network. Computer Methods and Programs in Biomedicine, 180, 105016. 32. Xu, F., Pun, C. M., Li, H., Zhang, Y., Song, Y., & Gao, H. (2019). Training feed-forward artificial neural networks with a modified artificial bee colony algorithm. Neurocomputing. 33. Gandomi, A. H. (2014). Interior search algorithm (ISA): A novel approach for global optimization. ISA Transactions, 53, 1168–1183. https://doi.org/10.1016/j.isatra.2014.03.018 34. Gandomi, A. H., & Roke, D. A. (2014). Engineering optimization using interior search algorithm. In Swarm Intelligence (SIS), 2014 IEEE Symposium, Orlando, FL, USA, pp. 1–7. IEEE. 35. Moravej, M., & Hosseini-Moghari, S.-M. (2016). Large scale reservoirs system operation optimization: The interior search algorithm (ISA) approach. Water Resources Management, 30, 3389–3407. https://doi.org/10.1007/s11269-016-1358-y 36. Kumar, M., Rawat, T. K., Jain, A., Singh, A. A., & Mittal, A. (2015). Design of digital differentiators using interior search algorithm. Procedia Computer Science, 57, 368–376. https://doi.org/10.1016/j.procs.2015.07.351 37. Yıldız, B. S. (2017).
Natural frequency optimization of vehicle components using the interior search algorithm. Materials Testing, 59, 456–458. https://doi.org/10.3139/120.111018 38. Rajagopalan, A., Kasinathan, P., Nagarajan, K., Ramachandaramurthy, V. K., Sengoden, V., & Alavandar, S. (2019). Chaotic self-adaptive interior search algorithm to solve combined economic emission dispatch problems with security constraints. International Transactions on Electrical Energy Systems, 29(8), e12026. 39. Singh, P., Dwivedi, P., & Kant, V. (2019). A hybrid method based on neural network and improved environmental adaptation method using Controlled Gaussian Mutation with real parameter for short-term load forecasting. Energy.
40. Leema, N., Khanna Nehemiah, H., & Kannan, A. (2016). Neural network classifier optimization using differential evolution with global information and back propagation algorithm for clinical datasets. Applied Soft Computing, 49, 834–844. 41. Khan, M. M., Masood Ahmad, A., Khan, G. M., & Miller, J. F. (2013). Fast learning neural networks using Cartesian genetic programming. Neurocomputing, 121, 274–289. 42. Rizk-Allah, R. M., Hassanien, A. E., & Bhattacharyya, S. (2018). Chaotic crow search algorithm for fractional optimization problems. Applied Soft Computing, 71, 1161–1175. 43. https://www.who.int/emergencies/diseases/novel-coronavirus-2019/situation-reports/. 44. Rizk-Allah, R. M., & Hassanien, A. E. (2019). A movable damped wave algorithm for solving global optimization problems. Evolutionary Intelligence, 12(1), 49–72.
Development of Disease Diagnosis Model for CXR Images and Reports—A Deep Learning Approach Anandhavalli Muniasamy, Roheet Bhatnagar, and Gauthaman Karunakaran
Abstract Medical images play an important role in diagnosing diseases effectively. Chest radiography, chest X-ray, or CXR is the most common diagnostic imaging test in medicine and provides information about chest-related diseases. Globally, CXR images are analyzed visually by clinical doctors or radiologists, which is time-consuming and prone to bias due to the images' complexity. So, it is challenging to classify and predict the various chest diseases shown in CXR images. There is a huge collection of CXR imaging studies and radiological reports available in the Picture Archiving and Communication Systems (PACS) of various modern hospitals. As there is a shortage of expert radiologists to cover more patients, there is a need to advance computer-aided diagnosis of chest diseases on CXR with the help of automated algorithms. This chapter aims to propose a model for diagnosing the disease information available in both images and radiology reports, which will help in decision-making. The findings showed that the convolutional neural network model DenseNet produces the highest accuracy compared to MobileNet. In a nutshell, this chapter focuses on the automatic detection of chest diseases from CXR images through image classification by class labels using deep learning techniques, and on searching for disease keywords in radiology reports based on text similarity in natural language descriptions. Keywords Chest radiography · Chest diseases · Radiology report · Convolution model · Natural language processing · Text similarity
A. Muniasamy (B) College of Computer Science, King Khalid University, Abha, Saudi Arabia R. Bhatnagar Department of CSE, Manipal University Jaipur, Jaipur, India e-mail: [email protected] G. Karunakaran Government Pharmacy College, Sikkim University, Sikkim, India © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 A. E. Hassanien et al. (eds.), Medical Informatics and Bioimaging Using Artificial Intelligence, Studies in Computational Intelligence 1005, https://doi.org/10.1007/978-3-030-91103-4_9
1 Introduction

Chest X-ray, popularly known as CXR, is one of the universal kinds of clinical radiology tests. CXR is used mainly in clinical practice for medical diagnostics and the treatment of lung and cardiovascular diseases, covering fourteen common diseases: Atelectasis, Cardiomegaly, Consolidation, Effusion, Edema, Emphysema, Fibrosis, Hernia, Infiltration, Nodule, Mass, Pneumothorax, Pneumonia, and Pleural Thickening. So, the analysis of CXR has an essential role in diagnosing and treating chest-related diseases. Given the massive continuous growth of CXRs and the unavailability of trained radiologists in rural areas [1], there is a demand for computer-aided diagnosis systems that support radiologists in diagnosing chest-related diseases from CXR images. In the medical industry, artificial intelligence (AI) techniques have been applied to detect and interpret chest-related diseases from CXR medical images. Today's deep learning (DL) models can analyze and segment an image [2] with high accuracy. DL techniques on CXR are gaining popularity, as CXRs are cheap, widely available, and easy-to-access images for training various DL models. The high-level abstractions in CXR images can be successfully captured by the general learning procedures of DL models [3]. Recently, DL-based decision support systems have been recommended for various medical problems, such as brain-computer interfaces [4], tumor and lesion detection in medical images [5, 6], and computer-aided diagnostics [7, 8], using medical images. Convolutional neural networks (CNNs), one of the DL techniques, have been applied to medical problems [9] like brain tumor detection and segmentation, disease classification in X-ray images, and breast cancer detection. Therefore, the CNN is the model commonly adopted by research groups [10] to achieve promising image classification results. Chest-related diseases are one of humanity's major health threats, and CXR is the prototypal radiological examination. For interpreting the information present in CXR images, a medical imaging report is usually associated with them, available as an Extensible Markup Language (XML) file for the respective images. Deep learning techniques are suitable for image similarity searching applications, but understanding and linking complicated medical visual contents with accurate natural language descriptions remains challenging. The joint analysis of CXR images and radiology reports is expected to inform advances elsewhere in radiology. So, this chapter aims to apply and develop deep learning CNN models for chest-related diseases on CXR images and CXR reports, which will lead to proper diagnosis and help in decision-making. Following the introduction section, we provide the necessary background for this research work. Then we cover the related work on applications of deep learning and CNN techniques using CXR images. The methodology section explains the dataset, the preprocessing procedures, and the deep learning models used for building the research framework. The model evaluation & results analysis
section reports analysis and findings of the classifier models and the achieved classification results. Finally, the chapter ends with the summary and conclusion of this research.
2 Background Study

2.1 CXR Images and Radiology Reports

CXR is a common radiological test for the examination and diagnosis of chest-related diseases. Each CXR image contains a huge combination of anatomical and pathological information in a single projection. The fourteen common chest pathologies identified using CXR images (refer to Fig. 1) are [25]: Atelectasis, Consolidation, Infiltration, Pneumothorax, Edema, Emphysema, Fibrosis, Effusion, Pneumonia, Pleural Thickening, Cardiomegaly, Nodule, Mass, and Hernia. Superimposed dense structures such as bones, and the density difference between adjacent anatomical structures, may hide the causes and effects of diseases (pathology) such as cardiac problems and lung nodules, which can make disease detection and interpretation difficult using CXR data.
Fig. 1 Visuals of CXR images of chest diseases
Fig. 2 The architecture of Convolutional Neural Network (CNN)
2.2 Convolutional Neural Networks (CNNs)

CNNs are simple feed-forward artificial neural networks (ANNs). They are built from two constituents, namely neurons and their respective weights. To preserve the spatial structure of the image, neurons in the same filter are usually connected only to a local region of the image. The model's parameters can be reduced by sharing the neuron weights across the complete network [21]. A CNN consists of three building blocks, as shown in Fig. 2:
• a convolution layer for learning the features;
• a subsampling (max-pooling) layer for downsampling the image, so that the computational effort is reduced along with the image dimensionality;
• a fully connected layer for providing classification support in the network.
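As an illustration of these three building blocks, a minimal Keras sketch follows; the layer sizes, the 227 × 227 grayscale input, and the 14-unit multi-label head are assumptions for this example, not the chapter's exact configuration:

```python
from tensorflow.keras import layers, models

# Convolution -> max-pooling -> fully connected, as in Fig. 2.
model = models.Sequential([
    layers.Input(shape=(227, 227, 1)),         # grayscale CXR input (resized size used later)
    layers.Conv2D(32, 3, activation="relu"),   # convolution layer: feature learning
    layers.MaxPooling2D(2),                    # subsampling layer: downsampling
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),      # fully connected layer
    layers.Dense(14, activation="sigmoid"),    # multi-label output for 14 diseases
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```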
2.3 Pre-trained Convolutional Neural Networks (CNNs)

In this study, the task of detecting 14 diseases from CXRs has been performed using pre-trained deep learning CNNs, namely MobileNet [11] and DenseNet [11].

2.3.1 DenseNet
The Dense Convolutional Network (DenseNet) (Fig. 3) is very narrow, with 12 filters. It has comparatively fewer network parameters than a conventional CNN model, so the transition layer in DenseNet does not learn redundant feature maps [22]. DenseNet has a 1 × 1 convolution kernel and a 2 × 2 average pooling layer, as shown in Fig. 3. The convolution kernel reduces the number of input feature maps in the
Fig. 3 The architecture of dense convolutional neural network (DenseNet)
transition layer. The pooling layer halves the number of input feature maps. Each layer has direct access to the original input image. So, DenseNet is a suitable model for image classification, as there is a significant reduction in computational cost [23].
2.3.2 MobileNet
MobileNet (Fig. 4) follows a streamlined architecture. It uses depth-wise separable convolutions to build lightweight deep neural networks, and it has hyperparameters that efficiently trade off between latency and accuracy. MobileNet has depth-wise and point-wise convolution layers, and it uses a convolution layer in place of the pooling layer to reduce the feature map. Batch normalization is also performed in each convolution layer. It directly convolves the output feature map of the previous point convolution layer with stride 2. The final output layer in the dense blocks has a 1 × 1 point convolution, which further reduces the number of feature maps [24].
Fig. 4 Architecture of MobileNet
2.4 Radiology Report

Radiology plays an important role in communicating imaging information in patient and disease management systems. CXR images are generated along with radiological reports and stored in PACS for research studies. XML documents (Fig. 5) are used to store radiology reporting results in a compact document representation scheme, so that CXR radiology reports can be shared via the Web as universally understandable, self-defining documents [26]. SPIDER (Structured Platform-Independent Data Entry and Reporting) is a developmental system for radiology reporting. It is useful for organizing and representing report contents as web entry forms with graphical objects like radio buttons, checkboxes, and text entry windows. It supports three types of data items, namely binary tags, numeric tags, and text tags. Reporting applications in SPIDER are designed to receive data from the computer system; for example, a patient's radiology report can contain laboratory test results obtained by querying the lab information system for the results of the latest blood test. In a radiology report stored as an XML document, XML parser programs extract the patient's name, identification number, indication, findings, consolidation details, and CXR image-related information recorded in the respective fields.
Fig. 5 CXR Report in XML format
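As a sketch of how such an XML report can be read programmatically, the snippet below uses Python's standard ElementTree parser; the tag name 'AbstractText', its 'Label' attribute, and the example file name follow common CXR report layouts and are assumptions that may need adjusting to the actual files:

```python
import xml.etree.ElementTree as ET

def read_report(path):
    """Parse one CXR radiology report stored as XML (tag names assumed)."""
    root = ET.parse(path).getroot()
    report = {}
    for node in root.iter("AbstractText"):     # sections such as INDICATION, FINDINGS
        label = node.get("Label", "UNKNOWN")
        report[label] = (node.text or "").strip()
    return report

# Example (hypothetical file name):
# fields = read_report("CXR1_1_IM-0001.xml"); print(fields.get("FINDINGS"))
```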
2.5 Natural Language Processing in Producing the Radiology Report

Natural language processing (NLP) is one of the branches of artificial intelligence (AI) used for understanding, interpreting, and manipulating human languages with computers. The combined result of all NLP steps, shown in Fig. 6, involves the following:
1. Feature Extraction
2. Feature Processing
3. System Tasks (Report Classification & Information Retrieval)
4. Performance Evaluation
5. Implementation

Fig. 6 NLP Pipeline
During the feature extraction step, the NLP features are collected from the free-text report through processes like identifying the report sections (segmentation), tokenization (also called boundary detection of the textual contents), and stemming and correction for word normalization, followed by syntax and semantic analysis of concepts, such as finding a disease label and detecting its presence or absence using negation detection. The collected NLP features can be further processed using machine learning algorithms, rule-based algorithms, or both, to produce the features required by the research. Based on the features and the respective reports, the researcher can apply report classification and information retrieval to the reports and produce the models. The models can be further evaluated as per health-care validation procedures, and the developed models are implemented based on the research requirements. Radiologists and medical doctors search by keyword or by similarity based on text classification or information extraction from these features [27]. NLP techniques allow automatic identification and extraction of information using keywords. In NLP, the Term Frequency-Inverse Document Frequency (TF-IDF) technique is useful for document search and information retrieval. It is calculated as follows:
• Term Frequency = (number of repetitions of a word in a document) / (number of words in the document)
• Inverse Document Frequency = log(number of documents / number of documents containing the word)
In machine learning with text analysis, TF-IDF is used for sorting and extracting keywords from text data and XML files. The highest-scoring words are the most relevant to a document and serve as its keywords.
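A short scikit-learn sketch of TF-IDF keyword scoring; the toy corpus is illustrative, not taken from the dataset:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "no focal air space opacity suggest pneumonia",
    "right lower lobe pneumonia no pleural effusion",
    "normal heart size and mediastinal contour",
]
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(corpus)   # rows: reports, columns: terms
terms = vectorizer.get_feature_names_out()

# The top-scoring terms of the first report are its keywords.
row = tfidf[0].toarray().ravel()
keywords = [terms[i] for i in row.argsort()[::-1][:3]]
print(keywords)
```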
3 Literature Review

This section covers the related applications of deep learning (DL) and CNN techniques to CXR images. The application of DL models in chest X-ray radiology is a challenging task due to the complexity of the images and their related radiology reports. Popular CNN architectures like AlexNet [10], VGGNet [12], Inception (GoogLeNet) [13], ResNet [14], and DenseNet [15] are often used for CXR detection and segmentation. Krizhevsky et al. developed the model AlexNet [10] for classifying more than 1000 different classes. The model was built with eight deep layers and 60 million parameters and achieved a 63.3% accuracy rate. Simonyan et al. [12] proposed the model VGGNet using 16 layers consisting of 138 million parameters and achieved a 92.3% accuracy rate.
Szegedy et al. [13] built the model GoogLeNet using 48 layers consisting of 27 million parameters and reported 78.8% accuracy. The authors of [14] reported a ResNet model with 152 layers and 60 million parameters with 78.6% accuracy. The Huang et al. [15] research team built the DenseNet model with 264 layers and 22 million parameters and achieved 77.9% accuracy. P. Rajpurkar et al. used the pre-trained model CheXNet to detect all 14 chest-related diseases on the ChestX-ray14 dataset, and their results covered all 14 diseases [16]. Wang et al. [17] proposed the ChestNet model, which performs classification for diagnosing chest diseases on CXR radiographs. Anavi et al. [18] built a network model for rating the other CXR images in their database based on their similarity to a query image; they reported that a 5-layer convolutional neural network was much more efficient than image-descriptor-based similarity. Chhikara et al. [19] applied deep CNNs and transfer learning to detect pneumonia, a kind of chest disease, in CXR images. Wang [20] used the MobileNet model to classify various images and achieved higher recognition accuracy with fewer parameters and computation costs. The studies reviewed in this section show the importance of CNN models and their applications in diagnosing chest diseases on CXR images. This study aims to apply the DenseNet and MobileNet CNN models to diagnose 14 chest diseases based on CXR image classification, and to analyze radiology reports using disease-keyword similarity with the TF-IDF algorithm.
4 Methodology

Generally, deep learning model development involves the following tasks:
• Collecting, preparing, and analyzing the dataset.
• Preparing the deep learning model. This stage has three main steps:
  – DL model training.
  – DL model testing.
  – Model evaluation using measures.
4.1 Dataset Description

For this study, the NIH ChestX-ray14 [28, 29] dataset from the National Institutes of Health (NIH) has been used. It contains 112,120 CXR images with disease labels from 30,805 patients, and 13 columns of metadata. These labels were created using NLP text analysis of the disease classifications in the radiology reports. The description of the dataset is shown in Table 1.
Table 1 Description of the NIH ChestX-ray14 dataset

Files | Attributes | Class labels
1. Image file | 112,120 total images with size 1024 × 1024 | Atelectasis, Consolidation, Infiltration, Pneumothorax, Edema, Emphysema, Fibrosis, Effusion, Pneumonia, Pleural_Thickening, Cardiomegaly, Nodule, Mass, Hernia, No Findings
2. Patient's data file | Image Index, Finding Labels, Follow-up #, Patient ID, Patient Age, Patient Gender, View Position, OriginalImage[Width, Height], OriginalImagePixelSpacing[x, y] | (same class labels as above)
4.2 Exploratory Data Analysis

First, an exploratory analysis of the data has been carried out based on the patient's data file. The statistical summarization and the plotted analysis of the patients' diseases are discussed in this section. The statistical analysis of the number of patients in each of the fourteen disease classes is shown in Fig. 7. Figure 7 shows the patients' disease classification from the CXR dataset: the majority of the dataset is normal and falls in the No-Finding class, the Infiltration class is at the second level, and the combined Infiltration-with-Nodule class has the least count. Figure 8 shows the gender and age information of the patients. It has been observed that both genders have almost the same distribution. The ratio between single and multiple diseases in the patient details is shown in Fig. 9. The combination of Infiltration with other diseases is in first position, and Hernia combined with other diseases has the least count.

Secondly, an analysis has been carried out based on the CXR report files in XML format. When extracting data from the XML documents, it was found that the images have different heights and widths, so they are resized to a common size of 227 × 227. Also, 997 rows have been dropped from the dataset because no value is given for the 'Findings' attribute. In addition, most individuals have only two X-ray images, while the maximum number of scan views is four for some patients. In these records, the images of a patient have the same file name except for the last 4 digits; those replicated records are considered a single report based on the patient ID.
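This exploratory step can be reproduced with pandas; a minimal sketch, assuming the patient's data file is the Data_Entry_2017.csv of the NIH release (the file name is an assumption):

```python
import pandas as pd

df = pd.read_csv("Data_Entry_2017.csv")   # patient's data file (assumed name)

# Number of patients per disease class (multi-label entries are '|'-separated).
counts = df["Finding Labels"].str.split("|").explode().value_counts()
print(counts)                             # 'No Finding' dominates, as in Fig. 7

# Gender and age distributions (Fig. 8).
print(df["Patient Gender"].value_counts())
print(df["Patient Age"].describe())

# Ratio between single- and multiple-disease records (Fig. 9).
multi = df["Finding Labels"].str.contains(r"\|")
print(multi.value_counts(normalize=True))
```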
Fig. 7 The number of patients versus diseases classes
Fig. 8 Patient’s gender & age information
Fig. 9 Ratio between single and multiple diseases
Thirdly, the following preprocessing steps have been applied for text cleaning on the CXR report files in XML format, to make structured data ready for the modeling process (a code sketch follows the list):
• convert to lowercase;
• remove punctuation;
• remove numbers and irrelevant text like xxxx;
• remove words of fewer than two characters, except 'no' and 'ct';
• remove multiple full stops from the text.
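These cleaning rules translate into a small function; the regular expressions below are one reasonable reading of the rules above, not the chapter's exact implementation:

```python
import re

def clean_report_text(text):
    """Apply the text-cleaning steps listed above to one report string."""
    text = text.lower()                                    # lowercase
    text = re.sub(r"[^\w\s.]", " ", text)                  # remove punctuation (keep full stops)
    text = re.sub(r"\bxxxx\b|\d+", " ", text)              # remove numbers and 'xxxx' placeholders
    text = " ".join(w for w in text.split()
                    if len(w) > 2 or w in ("no", "ct"))    # drop short words except 'no'/'ct'
    return re.sub(r"\.{2,}", ".", text)                    # collapse multiple full stops
```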
A word cloud of the reports, a visual representation that gives more prominence to words appearing frequently in the reports, has been created for further analysis. Before simulating the models for the classifiers' implementation, the datasets are preprocessed to obtain training data by class label.
4.3 Building Models

We employed CNN models for the detection of chest-related patient diseases using CXR images and reports (Fig. 10). Data preprocessing has been carried out before inputting the CXR images into the classifier models. Classification models are then developed to predict the 14 chest-related diseases from the CXR images. We used two
Fig. 10 Overview of methodology
models, namely DenseNet and MobileNet, implemented in Python with the Keras library.

• CXR Image Classifier:
  – Input the CXR images with their labels.
  – Preprocess the CXR images to enhance the performance of the CNN.
  – Using the pre-trained deep CNN models, extract the image features and obtain a fully connected layer with output features matching the number of disease classes. The image features are investigated by comparing the activated convolutional layers on the image area with the matching regions in the original images. The input image was applied to the DenseNet and MobileNet models, and the output of the first convolution layer was tested; the results are shown in Fig. 11, where the labels (i), (ii), (iii), and (iv) represent the first convolutional layer, the strongest activation channel, the deep-layer image set, and the deep convolutional layer of each model.
• Keywords Search in Radiology Reports: searching for the reports mentioning a particular disease or finding may be of interest to radiologists and treating physicians.
  – Read and preprocess the radiology reports.
  – Build the TF-IDF matrix for the corpus using sklearn in Python, and vectorize the query via a vectorizer function.
  – Compute the cosine similarity of the query against all the reports.
  – Return the top_k most similar radiology reports.

After that, the dataset was divided into training and testing sets. For the classification models, the two CNN models have been applied, and the performance of every classifier is analyzed using the evaluation metrics. We used classification accuracy and the area under the ROC (receiver operating characteristic) curve as the evaluation measures.
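A sketch of the classifier construction with pre-trained Keras backbones follows; the 224 × 224 RGB input expected by the ImageNet weights, the frozen base, and the compile settings are assumptions for illustration:

```python
from tensorflow.keras import layers, models, applications

def build_classifier(backbone="densenet", n_classes=14):
    """Pre-trained CNN backbone with a new fully connected head for 14 diseases."""
    base_cls = (applications.DenseNet121 if backbone == "densenet"
                else applications.MobileNet)
    base = base_cls(weights="imagenet", include_top=False,
                    input_shape=(224, 224, 3), pooling="avg")
    base.trainable = False                        # keep pre-trained features fixed
    out = layers.Dense(n_classes, activation="sigmoid")(base.output)
    model = models.Model(base.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_classifier("densenet")              # or build_classifier("mobilenet")
```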
Fig. 11 Activation Map for DenseNet and MobileNet models
5 Model Evaluation and Results Analysis

The primary goal of this work is to diagnose 14 chest-related diseases using CXR images. As described in Sect. 4.3, the proposed model has been trained separately for the two models, MobileNet and DenseNet. The model classifiers are then evaluated using classification accuracy and the area under the ROC curve (Table 2). The performance of the CNN models is shown in Fig. 11. It can be noted that DenseNet produces the highest accuracy (91%) compared to the MobileNet accuracy (88%). Table 3 shows the percentages of actual and predicted disease diagnoses. The DenseNet model shows good results for the actual and predicted percentages of disease diagnosis. For the prediction process, based on the results given in Table 3, the performance of MobileNet is comparatively high for most of the disease predictions and lower for a few diseases. The ROC area under the curve (AUC) for the MobileNet model, with false positive and true positive rates for the 14 classes, is shown in Fig. 12; the achieved average AUC score is 60.67%.

Table 2 Description of metrics

Name of the metric | Formula / Description
Classification accuracy | (TruePositives + TrueNegatives) / (Positives + Negatives)
ROC curve | A measure of the usefulness of a test and a tool for diagnostic test evaluation; a more useful test has a curve with a greater area
Table 3 Actual and predicted disease diagnosis

Disease class | DenseNet Actual (%) | DenseNet Predicted (%) | MobileNet Actual (%) | MobileNet Predicted (%)
Atelectasis | 23.11 | 15.66 | 22.27 | 21.06
Cardiomegaly | 5.73 | 4.99 | 4.39 | 6.02
Consolidation | 9.53 | 16.48 | 9.28 | 7.33
Edema | 4.86 | 5.84 | 4 | 4.68
Effusion | 26.50 | 35.65 | 26.66 | 20.71
Emphysema | 4.98 | 1.62 | 5.08 | 4.16
Fibrosis | 3.46 | 2.35 | 2.83 | 3.70
Infiltration | 38.01 | 42.80 | 36.82 | 40.02
Mass | 11.53 | 16.62 | 11.13 | 10.04
Nodule | 12.44 | 14.17 | 12.5 | 17.22
Pleural-Thickening | 7.27 | 12.97 | 6.93 | 5.77
Pneumonia | 2.90 | 2.82 | 3.03 | 3.28
Pneumothorax | 10.34 | 5.61 | 10.45 | 3.28
Fig. 12 Area under ROC for MobileNet model
Based on the results shown in Fig. 12, it has been found that the MobileNet model achieved a 75% AUC score for Edema and 60% to 70% AUC scores for the remaining 13 chest-related diseases. Figure 13 shows the AUC of the ROC curve for the DenseNet model, with false positive and true positive rates for the 14 disease classes; the achieved average AUC score is 70.50%.
Fig. 13 Area under ROC for DenseNet model
Based on the results shown in Fig. 13, it has been found that the DenseNet model showed an 83% AUC score for cardiomegaly and AUC scores of 60% to 70% for the remaining 13 chest-related diseases. The DenseNet model comparatively exhibits the highest accuracy, so this model should be recommended for developing a prototype that classifies the results into a diagnosis of the fourteen CXR diseases.

Report Searching by Keyword Process: NLP techniques have been applied to the radiology reports to enable automatic identification and extraction of information using keywords related to chest diseases. The Term Frequency–Inverse Document Frequency (TF-IDF) algorithm is used for radiology report search and information retrieval. For the input query 'pneumonia', the top-5 results shown in Table 4 were retrieved; the most similar report appears as Top 1 because it has the highest similarity score. The disease keyword searching process reveals that the dataset contains patients with the following diseases, ordered from high count to low count: Atelectasis, Effusion, Cardiomegaly, Infiltration, Pneumonia, Pneumothorax, Mass, and Nodule. The presence of patients with the remaining chest diseases is comparatively low.
Table 4 Report searching using keyword

Rank | Similarity score | Body-text
Top 1 | 0.45892 | The heart pulmonary and mediastinum are within normal limits; No pleural effusion pneumothorax; No focal air space opacity suggest pneumonia
Top 2 | 0.43167 | The lungs are mildly hyperexpanded there no focal airspace consolidation suggest pneumonia no pleural effusion pneumothorax normal heart size and mediastinal contour
Top 3 | 0.41022 | The cardiomediastinal silhouette normal size and configuration pulmonary vasculature within normal limits there right lower lobe pneumonia no pleural effusion no pneumothorax
Top 4 | 0.401913 | The heart pulmonary and mediastinum are within normal limits; No pleural effusion pneumothorax; No focal air space opacity suggest pneumonia; the patient was shielded
Top 5 | 0.390011 | The heart size upper limits normal the pulmonary and mediastinum are within normal limits; No pleural effusion pneumothorax; No focal air space opacity suggest pneumonia
6 Conclusion

Medical practitioners today mainly depend on the image analysis reported by radiologists. Due to the advancement of imaging technology, patients can also easily monitor their CXR image results. This chapter focuses on a model for diagnosing chest-related diseases from CXR images, based on image classification and on searching for disease keywords in radiology reports using deep learning techniques. MobileNet and DenseNet were tested on the NIH ChestXray dataset and evaluated using classification accuracy and the ROC-AUC value. The achieved results confirm that the DenseNet model exhibits better performance than the MobileNet model. The research findings indicate that deep learning CNN models can simplify the diagnostic process for CXR images and their reports and improve disease management.

Acknowledgements We acknowledge the NIH ChestXray data repository for the datasets.
References

1. Carlos, R. A. G., Marangoni, A., Leong, L., et al. (2019). The global future of imaging. London, United Kingdom: British Institute of Radiology.
2. Liu, N., Wan, L., Zhang, Y., Zhou, T., Huo, H., & Fang, T. (2018). Exploiting convolutional neural networks with deeply local description for remote sensing image classification. IEEE Access, 6, 11215–11228.
3. Bakator, M., & Radosav, D. (2018). Deep learning and medical diagnosis: A review of literature. Multimodal Technologies and Interaction, 2, 47.
4. Zhang, X., Yao, L., Wang, X., Monaghan, J., & McAlpine, D. (2019). A survey on deep learning based brain computer interface: Recent advances and new frontiers. arXiv:1905.04149.
5. Litjens, G., Kooi, T., Bejnordi, B. E., Setio, A. A. A., Ciompi, F., Ghafoorian, M., van der Laak, J. A. W. M., van Ginneken, B., & Sánchez, C. I. (2017). A survey on deep learning in medical image analysis. Medical Image Analysis, 42, 60–88.
6. Brunetti, A., Carnimeo, L., Trotta, G. F., & Bevilacqua, V. (2019). Computer-assisted frameworks for classification of liver, breast and blood neoplasias via neural networks: A survey based on medical images. Neurocomputing, 335, 274–298.
7. Asiri, N., Hussain, M., Al Adel, F., & Alzaidi, N. (2019). Deep learning based computer-aided diagnosis systems for diabetic retinopathy: A survey. Artificial Intelligence in Medicine, 99.
8. Zhou, T., Thung, K., Zhu, X., & Shen, D. (2018). Effective feature learning and fusion of multimodality data using stage-wise deep neural network for dementia diagnosis. Human Brain Mapping, 40, 1001–1016.
9. Kallianos, K., Mongan, J., Antani, S., Henry, T., Taylor, A., Abuya, J., & Kohli, M. (2019). How far have we come? Artificial intelligence for chest radiograph interpretation. Clinical Radiology, 74, 338–345.
10. Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2017). ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60, 84–90.
11. ResNet, AlexNet, VGGNet, Inception: Understanding various architectures of convolutional networks. Available online: https://cv-tricks.com/cnn/understand-resnet-alexnet-vgg-inception/. Accessed 23 December 2019.
12. Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556.
13. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., et al. (2014). Going deeper with convolutions. arXiv:1409.4842.
14. He, K., Zhang, X., Ren, S., & Sun, J. (2015). Deep residual learning for image recognition. arXiv:1512.03385.
15. Huang, G., Liu, Z., van der Maaten, L., & Weinberger, K. Q. (2016). Densely connected convolutional networks. arXiv:1608.06993.
16. Rajpurkar, P., Irvin, J., Zhu, K., Yang, B., Mehta, H., Duan, T., et al. (2017). CheXNet: Radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv:1711.05225.
17. Wang, H., & Xia, Y. (2018). ChestNet: A deep neural network for classification of thoracic diseases on chest radiography. arXiv, pp. 1–8.
18. Anavi, Y., Kogan, I., Gelbart, E., Geva, O., & Greenspan, H. (2015). A comparative study for chest radiograph image retrieval using binary texture and deep learning classification. Conference Proceedings of the IEEE Engineering in Medicine and Biology Society, pp. 2940–2943.
19. Chhikara, P., Singh, P., Gupta, P., & Bhatia, T. (2020). Deep convolutional neural network with transfer learning for detecting pneumonia on chest X-rays. In Advances in Bioinformatics, Multimedia, and Electronics Circuits and Signals (pp. 155–168). Springer, Singapore.
20. Wang, W., Li, Y., Zou, T., Wang, X., You, J., & Luo, Y. (2020). A novel image classification approach via dense-MobileNet models. Mobile Information Systems, 2020. https://doi.org/10.1155/2020/7602384.
21. Bailer, C., Habtegebrial, T., Varanasi, K., & Stricker, D. (2018). Fast feature extraction with CNNs with pooling layers. arXiv:1805.03096.
22. Huang, G., Liu, Z., Van Der Maaten, L., & Weinberger, K. Q. (2017). Densely connected convolutional networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017 (pp. 2261–2269).
23. DenseNet: Better CNN model than ResNet. Available online: http://www.programmersought.com/article/7780717554/. Accessed 23 December 2019.
24. Howard, A. G., et al. (2017). MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv:1704.04861.
25. Wang, X., Peng, Y., Lu, L., Lu, Z., Bagheri, M., & Summers, R. M. (2017). ChestX-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. IEEE CVPR, pp. 3462–3471.
26. Wang, C., & Kahn, C. E. (2000). Potential use of extensible markup language for radiology reporting: A tutorial. RadioGraphics, 20(1).
27. Pons, E., et al. (2016). Natural language processing in radiology: A systematic review. Radiology, 279(2).
28. NIH Chest X-Ray Dataset. https://nihcc.app.box.com/v/ChestXray-NIHCC/folder/36938765345.
29. NIH Chest X-Ray Dataset. https://www.kaggle.com/nih-chest-xrays/data.
Intelligent Drug Descriptors Analysis: Toward COVID-19 Drug Repurposing Aya Salama Abdelhady, Yaseen A. M. M. ElShaier, Mohamed S. Refaey, Ahmed Elsyaed Elmasry, and Aboul Ella Hassanien
Abstract COVID-19 is the most critical challenge that all countries worldwide are trying to overcome. Institutions in all fields are trying to tackle COVID-19's different effects and treatments in depth. Finding the most effective medication or vaccination is the only refuge to get back to normal life before reaching an economic crisis that could never be handled. All medicines have impact factors that indicate whether a medicine is effective in curing certain diseases. In this paper, we studied the characteristics of different COVID-19 medicines in order to predict the effectiveness of new drugs in curing the new coronavirus. Another goal of this paper is to find the most dominant descriptor of the studied medicines; for this reason, different sets of descriptors have been tested. According to the conducted experiments, the lowest prediction error (a mean absolute error of 18.9856) was achieved by an optimized SVM with the PUK kernel. Moreover, it was found that the most powerful feature that can best represent the COVID-19 drugs is the number of rings, followed by the number of non-hydrogen atoms (NHA), the number of chiral centers, and the number of rotatable bonds. These findings will help medicinal chemists design and repurpose new drugs that alleviate SARS-CoV-2.
A. S. Abdelhady (B) Faculty of Computer and Innovation, Universities of Canada in Egypt, Cairo, Egypt e-mail: [email protected] URL: http://www.egyptscience.net Y. A. M. M. ElShaier Faculty of Pharmacy, Organic and Medicinal Chemistry Department, University of Sadat City, Sadat City 32897, Egypt M. S. Refaey Faculty of Pharmacy, Pharmacognosy Department, University of Sadat City, Sadat City 32897, Egypt A. E. Elmasry Faculty of Pharmacy, Al-Azhar University, Assuit Branch, Cairo, Egypt A. E. Hassanien Faculty of Computer and AI, Cairo University and Scientific Research Group in Egypt (SRGE), Cairo, Egypt © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 A. E. Hassanien et al. (eds.), Medical Informatics and Bioimaging Using Artificial Intelligence, Studies in Computational Intelligence 1005, https://doi.org/10.1007/978-3-030-91103-4_10
Keywords Machine learning · SVM · Clustering · Neural networks · Regression · COVID-19 · Drug repurposing · Drug discovery · SARS-CoV-2
1 Introduction

Drug design is a sophisticated process that consumes huge amounts of time and effort to fight targeted viruses. Drug repositioning, also known as drug repurposing, reprofiling, redirecting, re-tasking, and therapeutic switching, has been a promising alternative drug discovery approach for decades. The basic idea of drug repositioning is to find new uses for existing or failed drugs with well-known safety and pharmacokinetic profiles [1]. Systems biology continues to make significant progress in addressing fundamental questions in biology and leading to practical applications in medicine and drug discovery. Drug repositioning includes combining drugs with distinct indications, reducing drug resistance, and combating complex diseases more effectively [2, 3]. Most successful examples of drug repositioning come from serendipitous clinical observations. The utilization of computational tools such as structure-based drug design, ligand-based drug design, and artificial intelligence (AI) technology helps accelerate the drug repurposing process [4]. COVID-19 is a serious virus taking millions of souls all over the world each year, and drug repositioning is one of the promising frameworks to circumvent the COVID-19 outbreak [5, 6]. Drugs produce their action by interacting with a small reactive area in the targeted tissues known as a receptor, part of a protein molecule [7]. This target could be a rigid or semi-rigid, biologically important macromolecule such as an enzyme, a nucleic acid (DNA, RNA), or the cell membrane. The receptor reacts with certain pharmacophoric features found in the drug (parts with important chemical and biological effects) to produce the biological influence. Drug-receptor interaction arises through the following pathways [8]: (1) irreversible (e.g., covalent bond formation); and (2) reversible, through ionic bond formation, ion–dipole interaction (known as electrostatic interaction), Van der Waals interactions, and hydrogen bonds (HBs). HBs are formed between electronegative atoms (O, N, S, F, and Cl) and a proton attached to an electronegative atom. Many drugs contain a hydroxyl, amino, carbonyl, or carboxyl group, and in the protein of the receptor there are many replaceable hydrogen atoms. HBs play a crucial role in determining the specificity of ligand (drug) action; since HBs are easily broken, they permit dissociation of the drug-receptor complex. The ability of a drug to elicit a pharmacologic/therapeutic effect is related to the influence of its various physical and chemical (physicochemical) properties. A poor physicochemical profile for any bioactive compound is a known drawback that hinders its approval during the drug development process. Consequently, identifying an excellent profile with the desired pharmacological action is a consistent challenge. Moreover, in many cases, the chemical modifications or synthesis of new drug candidates by installing certain motifs required to improve the desired biological activity (known as pharmacophores) have a negative effect on their physicochemical parameters. Computational methods, especially artificial intelligence, and their applications are expected
to reposition drugs against various diseases effectively. Furthermore, drug repositioning is also spurred by the following observations: additional targets of many drugs have not been discovered, and such drugs are likely to exert additional effects by direct activation/inhibition of those targets. Artificial intelligence in medicine is very trending. AI has had tremendous medical applications, like classifying images of skin lesions to decide whether they are benign or malignant. AI is also growing fast in pharmacy as well as medicine; it has many pharmacy applications, such as drug discovery, dosage decisions, quality control, and many more. Various expert systems have been provided to doctors to help decide the most accurate medical diagnosis; they also assist with drug interactions, drug therapy monitoring, and drug formulary selection [5, 6]. Machine learning is one significant application of AI that enables machines to learn and make decisions from previously learned data, and machine learning as an intelligent approach can deal with drug repositioning [9, 10]. Artificial Neural Networks (ANN) and Support Vector Machines are considered the most famous trends in the machine learning field. However, ANNs are prone to overfitting, as they require tremendous training to learn from data, which can lead to memorizing decisions; as a result, their ability to function on new data is reduced, which is unacceptable in a critical field such as health. For this reason, SVM is preferable in such cases, since SVM does not suffer from iterating millions of times to update weights, obtaining a new different hyperplane each time like Artificial Neural Networks [11, 12]. Hodos et al. [13] provided a comprehensive review of drug repurposing approaches using machine learning; similarity-based, kernel-based ML methods such as the support vector machine (SVM) represent one common approach. In [14], researchers explored deep neural networks, support vector machines (SVM), and linear regression. Drug repurposing using AI techniques like deep convolutional neural networks has also been applied to cancer drugs due to the availability of large data sets of cancer drugs [15]. According to the literature review, COVID-19 drug repurposing is still a novel idea that needs several experiments to prove its feasibility. Therefore, in this paper, we analyze the physicochemical characters of some FDA-approved drugs to test their abilities to alleviate SARS-CoV-2 replication. Furthermore, we aim to figure out the most important feature that controls drug activity as an anti-SARS-CoV-2 agent. The rest of this paper goes as follows. First, the different descriptors of different drugs are studied in Sect. 2 to explain the drug data set. In Sect. 3, the theoretical background of the used machine learning approaches is elaborated. The proposed approach is then discussed in Sect. 4, with its evaluation in Sect. 5. At last, the outcomes of this research are summarized in Sect. 6.
2 Discussion

2.1 Physicochemical Properties and Descriptors

Any bioactive compound or drug consists of different structural features (descriptors) that control its pharmacological activity. Among the wide spectrum of drug descriptors, this study prioritized certain descriptors because of their ability to control other descriptors and their great importance in the drug repurposing approach [16]. These include the "rule of five", which has a great role in oral druggability [17]. tPSA is used to determine the functional groups' contributions, and it is considered a convenient descriptor in 2D-QSAR (2-dimensional quantitative structure-activity relationship) [18]. Furthermore, we decided to add the "number of chiral centers" and the "number of rotatable bonds", because these parameters control the number of isomers (enantiomers/diastereomers) or the number of conformers, and isomerism and stereochemistry have a great role in drug-receptor action. Finally, our analysis incorporated the number of rings to examine the effect of drug flexibility/rigidity on receptor binding. Each descriptor is explained as follows:

• Rule of five descriptors: Lipinski's rule of five, also known as Pfizer's rule of five or simply the rule of five (RO5), is a rule of thumb to evaluate drug-likeness, i.e., to determine whether a compound has properties that would make it a likely orally active drug.
• Lipophilicity and partition coefficient (ClogP) descriptor: Lipophilicity is a main determining factor in a compound's absorption, distribution in the body, metabolism, and excretion (ADME properties). Lipophilic (hydrophobic) drugs/compounds have a higher P-value, whereas lipophobic (hydrophilic) ones have a low value [17]. CLogP (calculated logP) is applied in pharmaceutical companies and in the drug discovery and development process to understand drug behavior in the body [19]. Drug candidates are often screened according to ClogP as a guide in drug selection and analog optimization pipelines. The relative solubility of any drug can be determined practically through the partition coefficient (P; the ratio of the solubility of the compound in an organic solvent to the solubility of the same compound in an aqueous environment, i.e., P = [Drug]lipid/[Drug]aqueous). P is often expressed as a log value.
• Mwt: molecular weight, calculated from the molecular formula based on the compound structure. It has a role in the rule of five, as explained above.
• HBD (hydrogen bond donor): the ability of the drug to participate in HB formation with an amino acid in the receptor as a donor.
• HBA (hydrogen bond acceptor): the ability of the drug to participate in HB formation with an amino acid in the receptor as an acceptor.
• No. of rings: the number of rings describes a general feature of the chemical structure.
• No. of NHA: the number of non-hydrogen atoms. This parameter captures the effect of atoms other than hydrogen, because hydrogen is very common in drug structures.
• tPSA: the polar surface area (PSA), or topological polar surface area (tPSA), of a molecule is defined as the surface sum over all polar atoms, primarily oxygen and nitrogen, including their attached hydrogen atoms. It is a metric for optimizing a drug's ability to permeate cells: if the tPSA is greater than 140 angstroms squared, the molecule tends to penetrate cell membranes poorly, and for drugs acting on the central nervous system (CNS), the PSA should be less than 90 angstroms squared.
• No. of chiral centers: how many chiral centers are inside the compound. A chiral center (also called an asymmetric or stereogenic center) is a carbon atom attached to four different groups. Chiral centers are responsible for the optical activity of drugs and the generation of isomers.
• No. of heavy atoms: the importance of this descriptor lies in its role in the calculation of ligand efficiency (LE) and ligand lipophilicity efficiency (LLE) [20, 21]. Both LE and LLE are among the important druggability parameters used during the drug development process [22, 23]; these metrics help identify the best balance in the drug lipophilicity parameter and avoid "drug obesity".
• IC50: the concentration of drug required to inhibit 50% of the virus.
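Most of the descriptors above can be computed automatically from a chemical structure. The following is a minimal sketch using the open-source RDKit toolkit, which the chapter itself does not name; RDKit's MolLogP is a Crippen estimate and only approximates the CLogP values used in the study, and ibuprofen is used purely as an example input.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski, rdMolDescriptors

smiles = "CC(C)Cc1ccc(cc1)C(C)C(=O)O"  # ibuprofen, as an illustrative example
mol = Chem.MolFromSmiles(smiles)

print("Mwt:", round(Descriptors.MolWt(mol), 1))
print("HBD:", Lipinski.NumHDonors(mol))
print("HBA:", Lipinski.NumHAcceptors(mol))
print("logP (Crippen, stand-in for CLogP):", round(Descriptors.MolLogP(mol), 2))
print("No. of rings:", rdMolDescriptors.CalcNumRings(mol))
print("No. of NHA (non-hydrogen atoms):", mol.GetNumHeavyAtoms())
print("tPSA:", round(Descriptors.TPSA(mol), 2))
print("chiral centers:", len(Chem.FindMolChiralCenters(mol, includeUnassigned=True)))
print("rotatable bonds:", Descriptors.NumRotatableBonds(mol))
```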
2.2 Drug Approved Analysis

In this regard, the departure point is constructing a data set from reported anti-SARS-CoV-2 drugs (see the Appendix, Table 3). Lipinski's rule (rule of five) and the molecular properties for these drugs were calculated. Furthermore, the original pharmacological uses of these drugs are recorded in the data. Table 1 displays selected examples of these drugs. A classification of these drugs based on their original uses is summarized in Fig. 1; they can be categorized as antiviral drugs and non-antiviral drugs.
3 Theoretical Background of Support Vector Machine (SVM)

SVM is a widely used machine learning method due to its simplicity and flexibility for solving almost all types of real decision-making cases. Most importantly, it addresses the eventual problem of all classification techniques, which is overfitting. SVM is based on statistical learning: it separates the decision regions of non-linear problems by converting them into a higher-dimensional space using different types of kernel functions [30]. Therefore, SVM depends on the careful selection of kernel methods, as they provide the transformation from linearity to nonlinearity. For this reason, in this paper, different kernels, such as the linear, polynomial, and RBF kernels, were evaluated.
Table 1 Selected FDA approved drugs that inhibit SARS-CoV-2 with their features and IC50

No | Name/Pharmacological class | Mwt | HBD | HBA | CLog P | No. of rings | No. of NHA | tPSA | No. of chiral centers | No. of rotatable bonds | IC50 (μM) | References
1 | Remdesivir/Antiviral | 602 | 4 | 13 | 1.24929 | 4 | 33 | 201.32 | 1 | 13 | 1.3 | [24]
2 | Lopinavir/Antiviral | 628 | 4 | 5 | 6.0946 | 4 | 57 | 120 | 4 | 15 | 9.1 | [24]
3 | Ivermectin/Parasite infestations | 875 | 3 | 14 | 4.58 | 6 | 62 | 170.09 | 8 | 8 | 2 | [25]
4 | Niclosamide/Anthelmintic | 327 | 2 | 4 | 4.34465 | 2 | 21 | 101.14 | – | 3 | 0.16 | [26, 27]
5 | Nitazoxanide/Antidiarrheal | 307 | 1 | 7 | 1.24282 | 2 | 21 | 119.57 | – | 4 | 1.29 | [26, 27]
6 | Fluticasone Propionate/Treatment of asthma, inflammatory pruritic dermatoses | 538 | 1 | 7 | 1.12932 | 4 | 30 | 74.6 | 9 | 4 | 1.7 | [26]
7 | Hydrocortisone/Treatment of various inflammatory conditions | 460 | 1 | 7 | 0.74 | 4 | 33 | 106.97 | 7 | 5 | 7.1 | [26]
8 | Naphazoline/Relief of redness and itching of the eye, and nasal congestion | 210 | 1 | 2 | 3.826 | 3 | 16 | 24.39 | – | 2 | 9.52 | [28, 29]
9 | Piroxicam/Analgesic and anti-inflammatory | 331 | 2 | 5 | 1.888 | 3 | 23 | 99.07 | – | 2 | 8.21 | [28, 29]
10 | Salmeterol/Bronchodilator | 415 | 4 | 5 | 3.0634 | 2 | 30 | 81.95 | 1 | 16 | 1.5 | [28]
[Fig. 1 groups the repositioned anti-SARS-CoV-2 drugs into classes: antiviral, antimicrobial, anticancer, analgesic and anti-inflammatory, antiallergic-antihistaminic, antiemetic, antihyperlipemic agents, bronchodilators, and CNS drugs (for psychotic disorders, obsessive-compulsive disorder, and schizophrenia).]
Fig. 1 General classes of drugs repositioned for treatment of COVID-19
SVM training is a complex quadratic optimization problem for obtaining the support vectors xᵢ (with class values yᵢ), the coefficients αᵢ, and a threshold value b. The following decision function classifies a test instance x:
ϕ(x) = sgn( Σᵢ αᵢ yᵢ K(xᵢ, x) + b )    (1)
where K(xᵢ, xⱼ) is the kernel function. The polynomial kernel and the RBF are the most popular kernels; however, they differ in the types of features that they target. The polynomial kernel describes the similarity between support vectors over polynomials of the original values; it also captures combinations of features, which correspond to feature interactions in regression. The normalized polynomial kernel, which normalizes the kernel function, has also been tested in this research. The Pearson VII Universal Kernel (PUK) has also been tested; it adapts the behavior of the kernel, so the support vectors behave accurately even if the chosen kernel function is not the best. The polynomial kernel and RBF equations are given in Eqs. 2 and 3, respectively [31]:

Polynomial: K(X, Y) = (1 + X · Y)^d    (2)

Radial Basis Function (RBF): K(X, Y) = exp( −‖X − Y‖² / 2σ² )    (3)
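As a tiny numeric illustration of the decision function in Eq. 1 (with made-up support vectors, class values, coefficients, and threshold, and the RBF of Eq. 3 as the kernel K):

```python
import numpy as np

sv    = np.array([[0.0, 1.0], [2.0, 0.5], [1.0, 2.0]])  # support vectors x_i
y_sv  = np.array([1.0, -1.0, 1.0])                      # class values y_i
alpha = np.array([0.7, 0.4, 0.3])                       # coefficients alpha_i
b     = 0.1                                             # threshold

def rbf(A, x, sigma=1.0):
    # Eq. 3: K(X, Y) = exp(-||X - Y||^2 / (2 sigma^2))
    return np.exp(-np.sum((A - x) ** 2, axis=1) / (2.0 * sigma ** 2))

x = np.array([1.0, 1.0])                                # test instance
phi = np.sign(np.sum(alpha * y_sv * rbf(sv, x)) + b)    # Eq. 1
print("predicted class:", int(phi))
```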
Sequential Minimal Optimization (SMO) has been used in this paper to train the support vector machine by solving the quadratic problem. SMO breaks this large quadratic problem into smaller quadratic subproblems and then solves them analytically [32]. SMO differs from other SVM training algorithms in that it utilizes the smallest possible QP subproblems, which are solved in the least time, leading to better scaling and more effective computation; for this reason, SMO performs well even as problems get bigger. Based on the literature review, SMO proved to be the best training algorithm for SVM in most experiments on different benchmarks [33].
4 Intelligent Drugs Descriptors Analysis Approach

A dataset was collected for several COVID-19 drugs, each described by the different descriptors on which the testing experiments are carried out. The proposed approach is summarized in Fig. 2.
Fig. 2 Intelligent drugs descriptors analysis approach
4.1 Preprocessing Phase

Any data set requires preprocessing before it can be used as input to machine learning and artificial intelligence techniques. For this reason, we had to remove some rows of the data set so that they would not affect the output results. In addition, some of the data contained special characters that affected the classification; these were also removed or normalized.
4.2 Support Vector Regression Phase

As explained before, SVM has different settings, such as the number of validation folds and the kernel to be used. The SVM was optimized to find the best hyperparameters based on the properties of the given drugs, so that it gives the best results. Support vector regression works like the support vector machine; however, in regression, the decision boundary is a curve rather than a hyperplane, and the regression vectors find matches on the curve. The distance between the support vectors and the curve is likewise maximized. The same kernels are applied for regression: a non-linear function maps the non-linear data onto a higher-dimensional space, where the contribution of the support vectors to the regression problem can be computed. Sequential Minimal Optimization (SMO) was used to train the support vector machine in solving the quadratic problem.
4.3 Kernel Selection

Different support vector kernels have been evaluated, since the efficiency of support vector regression depends on the ultimate selection of kernel methods, which allow the transformation from linearity to nonlinearity. Several kernels were therefore tested, as the kernel is the most critical component shaping SVM efficiency: the polynomial kernel, the normalized polynomial kernel, the RBF, and the Pearson VII Universal Kernel (PUK).
4.4 Descriptors Analysis

As mentioned before, the dataset contains different characteristics representing each of the drugs. For this reason, different combinations of characteristics were tested to determine which of the given properties are the most dominant and whether any of them can be removed because their effects are negligible.
5 Results and Discussion

For the SVM, we used an optimized version of SVR with the iterative algorithm called sequential minimal optimization (SMO). The optimized algorithm has two threshold parameters used in the optimization, making the regression more efficient than the ordinary support vector machine. In the first experiment, a polynomial kernel was used. This kernel function is often used with support vector machines, especially for non-linear models; it maps the training vectors into a feature space over polynomials of the original variables. The attributes used are HBD, HBA, CLogP, No. of rings, Mwt, No. of NHA, tPSA, No. of chiral centers, No. of rotatable bonds, and IC50, with tenfold cross-validation to avoid overfitting; the number of kernel evaluations was 1830 (96.595% cached), and the time taken to build the model was 0.02 s. The Pearson VII function-based Universal Kernel (PUK) was also examined on the same attributes; PUK is widely regarded as a strong general-purpose kernel. With tenfold cross-validation, the number of kernel evaluations was 1830 (95.047% cached), and the time taken to build the model was 0.01 s. The radial basis function (RBF) kernel, also called the Gaussian kernel, was also tested; it is defined as K_RBF(x, x′) = exp(−γ‖x − x′‖²). With tenfold cross-validation, the number of kernel evaluations was 1830 (80.178% cached) and the time taken to build the model was 0.02 s. The normalized polynomial kernel was also tested, where the normalization applies to the kernel function; the number of kernel evaluations was 1830 (93.858% cached), and the time to build the model was 0.02 s. We also report the results without optimization: for the polynomial kernel, 1830 kernel evaluations (99.889% cached) and 0.06 s to build the model; for the PUK kernel, 1830 kernel evaluations (99.985% cached) and 0.14 s; and for RBF, 1830 kernel evaluations (99.978% cached) and 0.15 s. The correlation coefficient and the mean absolute error were used for the evaluation, since this study has no discrete classes; instead, we predict the nearest IC50 value for newly tested drugs. The correlation coefficient r examines the variation in x and y jointly (numerator) compared to the variation in each variable separately (denominator), as illustrated in Eq. 4:

r = Σ(x − x̄)(y − ȳ) / √( Σ(x − x̄)² · Σ(y − ȳ)² )    (4)
Mean absolute error (MAE) is a measure of errors between paired observations, as shown in Eq. 5.
Table 2 Evaluation using different SVR kernels

Kernel type | Correlation coefficient | Mean absolute error
Optimized SVR with polynomial kernel | 0.0818 | 27.8059
Optimized SVR with PUK kernel | 0.589 | 18.9856
Optimized SVR with RBF kernel | 0.2116 | 24.5451
Optimized SVR with normalized polynomial kernel | 0.2028 | 25.5536
SVR with normalized polynomial kernel | 0.0013 | 26.4671
SVR with polynomial kernel | 0.0725 | 28.9731
SVR with PUK kernel | 0.5894 | 18.9625
SVR with RBF kernel | 0.2156 | 24.5756
Linear regression | 0.3405 | 31.029
MAE = Σ( |Forecast Errors| / Actual ) / Number of Forecasts    (5)
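The kernel comparison of Table 2 can be sketched as follows. This is not the authors' exact SMO setup; it is a minimal scikit-learn analogue with a hand-written PUK kernel, tenfold cross-validation, and MAE scoring. The data are random placeholders shaped like the study's descriptor matrix (60 drugs, consistent with the reported 1830 = 60·61/2 kernel evaluations).

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score
from sklearn.metrics.pairwise import euclidean_distances

def puk_kernel(X, Y, sigma=1.0, omega=1.0):
    # Pearson VII function-based Universal Kernel:
    # K(x, y) = 1 / (1 + (2*sqrt(2**(1/omega) - 1) * ||x - y|| / sigma)**2)**omega
    d = euclidean_distances(X, Y)
    return 1.0 / (1.0 + (2.0 * np.sqrt(2.0 ** (1.0 / omega) - 1.0) * d / sigma) ** 2) ** omega

rng = np.random.default_rng(0)
X, y = rng.random((60, 9)), rng.random(60) * 40   # placeholder descriptors and IC50

for name, model in [("polynomial", SVR(kernel="poly", degree=3)),
                    ("RBF", SVR(kernel="rbf")),
                    ("PUK", SVR(kernel=puk_kernel))]:
    mae = -cross_val_score(model, X, y, cv=10,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{name}: MAE = {mae:.4f}")
```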
As Table 2 shows, SVR with the PUK kernel, whether optimized or not, gave the least mean absolute error (about 18.98), which proves the feasibility of the proposed approach in predicting the IC50 of new COVID-19 drugs. Furthermore, the optimized SVR was less prone to error than plain SVR across the studied kernels. As mentioned before, the data set consists of characteristics that powerfully represent the studied drugs. In addition to the experiments conducted to predict the IC50 of the testing set, it was also important to identify the most dominant descriptor. Therefore, a filtering method was applied: each property of the drugs was removed in turn to test its effect on the predicted IC50 (a sketch of this procedure is given below). It was not surprising that removing some features improved the predicted output, as they can be considered noisy features; however, removing some other features increased the forecasting error. These experiments were conducted using SVR with the PUK kernel, as it gave nearly the least error. Figure 3 shows the results after removing each characteristic in turn. By studying the results presented in Fig. 3, we can conclude that HBD can be considered a noisy feature, since removing it improves the prediction error (17.00736), whereas removing the number-of-rings characteristic increases the error (20.0049), which means that it is the most dominant feature and best represents the tested drugs with the help of the other rule-of-five descriptors.
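A minimal sketch of this descriptor-filtering (ablation) experiment: remove one descriptor at a time, retrain SVR, and compare the error. RBF keeps the snippet self-contained; the puk_kernel from the previous sketch can be passed instead, matching the study's best configuration. The data are placeholders.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

descriptors = ["Mwt", "HBD", "HBA", "CLogP", "No. of rings",
               "No. of NHA", "tPSA", "Chiral centers", "Rotatable bonds"]
rng = np.random.default_rng(0)
X, y = rng.random((60, len(descriptors))), rng.random(60) * 40

for i, name in enumerate(descriptors):
    X_drop = np.delete(X, i, axis=1)                    # drop one descriptor
    mae = -cross_val_score(SVR(kernel="rbf"), X_drop, y,
                           cv=10, scoring="neg_mean_absolute_error").mean()
    print(f"without {name}: MAE = {mae:.4f}")           # compare to the full-set MAE
```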
Fig. 3 Filtering the most dominant descriptors in drugs repurposing
The number of NHA, the number of chiral centers, and the number of rotatable bonds are also considered effective descriptors, since removing them affected the prediction significantly, unlike the rest of the descriptors. The remaining descriptors are neutral, as removing them did not affect the results.
6 Conclusion and Future Work

Drugs are a refuge for people during any pandemic; therefore, testing the effectiveness of any drug to be used to combat a virus is critical. For this reason, in this paper, we proved the feasibility of testing the effectiveness of new drugs by testing their similarity with the characteristics of already tested medicines that have proved their ability to fight the virus. SVR and linear regression were tested, as they are well suited to finding the relation between the different attributes describing the given drugs. Different kernels were adopted with SVR, and the PUK kernel gave the least prediction error; for this reason, the PUK kernel was used in testing for the most powerful descriptors of the studied drugs. By studying the results, it was concluded that HBD can be considered a noisy feature that affected the prediction, and that the number-of-rings characteristic can be considered the most dominant feature that best represents any of the drugs with the help of the other descriptors.
7 Appendix See Table 3.
Table 3 FDA approved drugs that inhibit SARS-CoV-2 with their features and IC50

No | Name | Pharmacological class | Mwt | HBD | HBA | CLog P | No. of rings | No. of NHA | tPSA | No. of chiral centers | No. of rotatable bonds | IC50 (μM) | References
1 | Remdesivir | Antiviral | 602 | 4 | 13 | 1.24929 | 4 | 33 | 201.32 | 1 | 13 | 1.3 | [24]
2 | Lopinavir | Antiviral | 628 | 4 | 5 | 6.0946 | 4 | 57 | 120 | 4 | 15 | 9.1 | [24]
3 | Ivermectin | Parasite infestations | 875 | 3 | 14 | 4.58 | 6 | 62 | 170.09 | 8 | 8 | 2 | [25]
4 | Amodiaquine dihydrochloride dihydrate | Antimalarial | 355 | 2 | 4 | 5.46528 | 3 | 25 | 47.86 | – | 6 | 2.59 | [28, 29, 34]
5 | Amodiaquine hydrochloride | Antimalarial | 392 | 2 | 4 | 5.46528 | 3 | 25 | 47.86 | – | 6 | 2.36 | [28, 29, 34]
6 | Benztropine mesylate | Adjunct therapy in treatment of parkinsonism | 307 | 0 | 2 | 3.52 | 3 | 23 | 12.47 | 1 | 4 | 13.8 | [28]
7 | Chloroquine phosphate | Antimalarial | 319 | 1 | 3 | 5.06028 | 2 | 22 | 27.63 | 1 | 8 | 42.03 | [28, 29, 34]
8 | Chlorpromazine hydrochloride | Treatment of schizophrenia | 355 | 0 | 3 | 5.2996 | 3 | 22 | 6.48 | – | 4 | 3.14 | [28]
9 | Clomipramine hydrochloride | Treatment of obsessive–compulsive disorder | 314 | 0 | 2 | 5.9208 | 3 | 22 | 6.48 | – | 4 | 5.63 | [28]
10 | Fluphenazine dihydrochloride | Management of manifestations of psychotic disorders | 437 | 1 | 5 | 4.1175 | 4 | 30 | 29.95 | – | 6 | 6.36 | [28]
11 | Fluspirilene | Treatment of schizophrenia | 475 | 1 | 3 | 5.416 | 5 | 35 | 35.58 | 1 | 7 | 3.16 | [28]
Imatinib mesylate
Mefloquine hydrochloride
Promethazine hydrochloride
Tamoxifen citrate
Terconazole
Thiethylperazine maleate
Toremifene citrate
Triparanol
Amikacin sulphate
Amoxicillin
Chloramphenicol
14
15
16
17
18
19
20
21
22
23
24
Antibiotic
Antibiotic
Antibiotic
Antilipemic agent
Treatment of breast cancers
Treatment of nausea and vomiting
Antifungal
Estrogen receptor positive metastatic breast cancer
Treatment of allergic conditions, nausea and vomiting, and motion sickness
Antimalarial
Treatment number of leukemias
Hydroxychloroquine Antimalarial sulfate
323
365
585
438
405
399
532
371
284
378
493
335
3
4
13
1
0
0
0
0
0
2
2
2
5
6
17
3
2
5
8
2
3
3
7
4
3
− 1.8715
1
3
- − 6.2983 1.283
3
3
4
5
3
3
3
5
2
20
25
40
31
30
27
27
28
20
26
37
23
–
–
–
1
–
1
1
–
1
121.37 3
132.96 4
6
4
10
10
9
6
8
8
3
2
7
9
16.94
16.12
16.81
4.68
4.77
7.09
11.92
34.12
9.21
7.11
3.24
9.21
No. of No. of IC50 chiral rotatable (μM) center bonds
331.94 1
32.7
12.47
9.72
62.13
12.47
6.48
44.62
84.69
47.86
No. No. tPSA of of rings NHA
6.7068
6.5308
4.552
4.8136
6.8178
4.3998
3.66889
4.52871
4.11588
Mwt HBD HBA CLog P
Pharmacological class Rule of five
13
No Name
Table 3 (continued)
(continued)
[24, 26]
[26]
[26]
[28]
[28]
[28]
[28]
[28]
[28]
[28, 29, 34]
[28]
[28, 29, 34]
References
Ceftriaxone
Cefoperazone
Ceftazidime
Clindamycin
Ciprofloxacin
Doxycycline
Flucloxacillin
Levofloxacin
Linezolid
Moxifloxacin
Nitrofurantoin
Neomycin
Niclosamide
Nitazoxanide
Nystatin
Acetyl Salicylic acid Analgesic, antipyretic 180 and anti-inflammatory
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
Antifungal
Antidiarrheal
Anthelmintic
Antibiotic
Antibiotic
Antibiotic
Antibiotic
Antibiotic
Antibiotic
Antibiotic
Antibiotic
Antibiotic
Antibiotic
Antibiotic
Antibiotic
926
307
327
496
238
401
337
361
453
444
331
424
546
645
554
347
455
27
Antibiotic
Cephalexin
Antibiotic
Cefotaxime
26
1
12
1
2
13
1
2
1
1
2
6
2
4
3
4
4
3
3
3
17
7
4
10
6
6
5
6
6
9
5
7
11
13
15
5
11
2 3
− 3.12249
1
3
− 6.36085 1.0235
2
1.24282
2
5
− 1.050075 − 0.467 4.34465
3
0.168099
4 4
2.33996
4
− 1.50097
0.460259
2 3
4
− 7.34102 2.09457
34
5
− 0.240761
− 0.725242
36
− 0.6496417 4
13
65
21
21
34
17
29
24
26
30
32
24
27
37
24
3
30
3
− 2.50958
1
2
1
1
2
63.6
–
262.26 19
119.57 –
101.14 –
248.39 13
122.81 –
82.11
71.11
73.32
108.3
181.62 6
72.88
102.26 8
189.82 2
216.98 2
208.45 2
112.73 3
2
3
4
3
9
3
4
4
2
4
2
3
7
9
9
8
4
7
[26, 35]
[24, 26]
[26]
[26]
[26]
[26]
[26]
[26]
References
[26, 27]
[26, 27]
[26, 27]
[26]
[26]
[26]
[24, 26]
12.16
(continued)
[26, 27]
160.85 [26]
1.29
0.16
18.12
16.22
12.23
16.3
13.84
157.78 [26]
5.1
61.62
15.67
46.14
12.36
16.14
13.17
42.72
No. of No. of IC50 chiral rotatable (μM) center bonds
172.98 2
No. No. tPSA of of rings NHA
− 1.01812
Mwt HBD HBA CLog P
Pharmacological class Rule of five
25
No Name
Table 3 (continued)
Chlorpheniramine maleate
Dexamethasone
Diclofenac
Fluticasone Propionate
Formoterol Fumarate Bronchodilator
Hydrocortisone
Indomethacin
Ibuprofen
45
46
47
48
49
50
51
52
392
274
540
381
460
344
538
Analgesic, antipyretic 206 and anti-inflammatory
Analgesic, antipyretic 257 and anti-inflammatory
treatment of various inflammatory conditions
Treatment of asthma, inflammatory pruritic dermatoses,
Analgesic, antipyretic 296 and anti-inflammatory
treatment of various inflammatory conditions
Treatment of allergy
treatment of nasal symptoms associated with seasonal
Ciclesonide
44
Analgesic and anti-inflammatory
Celecoxib
1
1
1
4
1
2
3
0
1
1
4
4
7
5
7
2
5
2
7
4
3.679
4.18
0.74
1.2604
1.12932
4.72624
0.134719
3.148
5.87159
4.37201
Mwt HBD HBA CLog P
Pharmacological class Rule of five
43
No Name
Table 3 (continued)
1
3
4
2
4
4
4
2
6
3
15
25
33
25
30
19
28
19
40
26
2
9
–
8
1
8
–
37.3
66.84
2
–
4
4
5
9
4
4
2
5
5
3
[26]
[26]
[26, 27]
References
88.71
8.51
7.1
71.8
1.7
96.24
(continued)
[26, 27]
[26, 27]
[26]
[26]
[26]
[26, 27]
122.55 [26, 36]
3.6
4.2
13.02
No. of No. of IC50 chiral rotatable (μM) center bonds
106.97 7
90.82
74.6
49.33
94.83
15.6
99.13
75.76
No. No. tPSA of of rings NHA
Ketoprofen
Ketorolac Tromethamine
Metamizole sodium
Montelukast
Meloxicam
Methylprednisolone
Naphazoline
Piroxicam
Salmeterol
53
54
55
56
57
58
59
60
61
No Name
Table 3 (continued)
374
351
586
Bronchodilator
Analgesic and anti-inflammatory 415
331
relief of redness and 210 itching of the eye, and nasal congestion
Treatment inflammation or immune reactions
Analgesic and anti-inflammatory
Bronchodilator
Analgesic, antipyretic 311 and anti-inflammatory
Analgesic, antipyretic 255 and anti-inflammatory
Analgesic, antipyretic 254 and anti-inflammatory
4
2
1
3
2
2
1
1
1
5
5
2
5
6
4
6
3
2
3.0634
1.888
2
3
3
4
− 0.818401
3.826
3
2.2924
5
2
− 1.66098 8.472
3
2
30
23
16
27
23
41
21
19
17
81.95
99.07
24.39
94.83
99.07
69.89
81.16
57.61
54.37
No. No. tPSA of of rings NHA
1.622
2.761
Mwt HBD HBA CLog P
Pharmacological class Rule of five
1
–
–
8
–
1
–
1
1
16
2
2
2
2
12
4
3
4
[26, 27]
References
1.5
8.21
9.52
90.44
12.4
2.7
14.97
[28]
[28, 29]
[28, 29]
[26]
[26, 27]
[26, 27]
[26]
153.42 [26, 27]
21.5
No. of No. of IC50 chiral rotatable (μM) center bonds
References

1. Gns, H. S., et al. (2019). An update on drug repurposing: Re-written saga of the drug's fate. Biomedicine & Pharmacotherapy, 110, 700–716.
2. Alimadadi, A., et al. (2020). Artificial intelligence and machine learning to fight COVID-19. American Physiological Society, Bethesda.
3. Álvarez-Machancoses, Ó., & Fernández-Martínez, J. L. (2019). Using artificial intelligence methods to speed up drug discovery. Expert Opinion on Drug Discovery, 14(8), 769–777.
4. Sahoo, B. M., et al. (2021). Drug repurposing strategy (DRS): Emerging approach to identify potential therapeutics for treatment of novel coronavirus infection. Frontiers in Molecular Biosciences, 8, 35.
5. Hashimoto, K. (2021). Repurposing of CNS drugs to treat COVID-19 infection: Targeting the sigma-1 receptor. European Archives of Psychiatry and Clinical Neuroscience, pp. 1–10.
6. Ableton Live Digital Audio Workstation (2020). Available from: https://www.ableton.com/en/live/.
7. Basic principles of drug action and drug interactions (2016). Available from: https://nursekey.com/2-basic-principles-of-drug-action-and-drug-interactions/.
8. Suvarna, B. (2011). Drug-receptor interactions. Kathmandu University Medical Journal, 9(3), 203–207.
9. Huddleston, S. H., & Brown, G. G. (2018). Machine learning. In INFORMS Analytics Body of Knowledge (pp. 231–274).
10. Libbrecht, M. W., & Noble, W. S. (2015). Machine learning applications in genetics and genomics. Nature Reviews Genetics, 16(6), 321–332.
11. Wang, P.-W., & Lin, C.-J. (2014). Support vector machines. In Data Classification: Algorithms and Applications (p. 18). Chapman and Hall/CRC.
12. Gholami, R., & Fakhari, N. (2017). Support vector machine: Principles, parameters, and applications. In Handbook of Neural Computation (pp. 515–535). Curtin Research Publications.
13. Hodos, R. A., et al. (2016). In silico methods for drug repurposing and pharmacology. Wiley Interdisciplinary Reviews: Systems Biology and Medicine, 8(3), 186–210.
14. Rodriguez, S., et al. (2021). Machine learning identifies candidates for drug repurposing in Alzheimer's disease. Nature Communications, 12(1), 1–13.
15. Issa, N. T., et al. (2021). Machine and deep learning approaches for cancer drug repurposing. In Seminars in Cancer Biology. Elsevier.
16. Vallianatou, T., Giaginis, C., & Tsantili-Kakoulidou, A. (2015). The impact of physicochemical and molecular properties in drug design: Navigation in the "drug-like" chemical space. In GeNeDis 2014 (pp. 187–194). Springer.
17. Bergström, C. A., & Porter, C. J. (2016). Understanding the challenge of beyond-rule-of-5 compounds. Advanced Drug Delivery Reviews, 101, 1–5.
18. Prasanna, S., & Doerksen, R. (2009). Topological polar surface area: A useful descriptor in 2D-QSAR. Current Medicinal Chemistry, 16(1), 21–41.
19. Leo, A., Hansch, C., & Elkins, D. (1971). Partition coefficients and their uses. Chemical Reviews, 71(6), 525–616.
20. Schultes, S., de Graaf, C., Haaksma, E. E. J., de Esch, I. J. P., Leurs, R., & Krämer, O. (2010). Ligand efficiency as a guide in fragment hit selection and optimization. Drug Discovery Today: Technologies, 7, e157–e162.
21. Arnott, J. A., Kumar, R., & Planey, S. L. (2013). Lipophilicity indices for drug development. Journal of Applied Biopharmaceutics and Pharmacokinetics, 1(1), 31–36.
22. Jabeen, I., et al. (2012). Structure–activity relationships, ligand efficiency, and lipophilic efficiency profiles of benzophenone-type inhibitors of the multidrug transporter P-glycoprotein. Journal of Medicinal Chemistry, 55(7), 3261–3273.
23. Islam, M. S., et al. (2019). Synthesis of new thiazolo-pyrrolidine–(spirooxindole) tethered to 3-acylindole as anticancer agents. Bioorganic Chemistry, 82, 423–430.
24. Singh, T. U., et al. (2020). Drug repurposing approach to fight COVID-19. Pharmacological Reports, pp. 1–30.
25. Caly, L., et al. (2020). The FDA-approved drug ivermectin inhibits the replication of SARS-CoV-2 in vitro. Antiviral Research, 178, 104787.
26. Mostafa, A., et al. (2020). FDA-approved drugs with potent in vitro antiviral activity against severe acute respiratory syndrome coronavirus 2. Pharmaceuticals, 13(12), 443.
27. Sultana, J., et al. (2020). Challenges for drug repurposing in the COVID-19 pandemic era. Frontiers in Pharmacology, 11, 1657.
28. Weston, S., et al. (2020). Broad anti-coronavirus activity of Food and Drug Administration-approved drugs against SARS-CoV-2 in vitro and SARS-CoV in vivo. Journal of Virology, 94(21), e01218-20.
29. Vincent, M. J., et al. (2005). Chloroquine is a potent inhibitor of SARS coronavirus infection and spread. Virology Journal, 2(1), 1–10.
30. Elangovan, K., et al. (2017). Fault diagnosis of a reconfigurable crawling–rolling robot based on support vector machines. Applied Sciences, 7(10), 1025.
31. Jan, S. U., et al. (2017). Sensor fault classification based on support vector machine and statistical time-domain features. IEEE Access, 5, 8682–8690.
32. Gandhi, R. (2018). Support vector machine—introduction to machine learning algorithms. Towards Data Science.
33. Fan, S. (2018). Understanding the mathematics behind support vector machines. Available from https://shuzhanfan.github.io/2018/05/understanding-mathematics-behind-support-vector-machines/.
34. Al-Bari, M. A. A. (2017). Targeting endosomal acidification by chloroquine analogs as a promising strategy for the treatment of emerging viral diseases. Pharmacology Research & Perspectives, 5(1), e00293.
35. Sargiacomo, C., Sotgia, F., & Lisanti, M. P. (2020). COVID-19 and chronological aging: Senolytics and other anti-aging drugs for the treatment or prevention of corona virus infection? Aging (Albany NY), 12(8), 6511.
36. Wootton, D. (2021). Dexamethasone in hospitalized patients with COVID-19. New England Journal of Medicine, 384(8), 693–704.
Emerging Technologies and Applications
Analysis of Aortic Valve Using a Finite Element Model Kadry Ali Ezzat, Lamia Nabil Mahdy, and Ashraf Darwish
Abstract The major focus of this study was the leaflet/aortic root interaction during the cardiac cycle, including the stresses generated during that interaction. Finite element analysis was used together with a geometrically accurate model of the aortic valve and sinuses. Shell elements, along with appropriate contact conditions, were used in the model. Pressure patterns during the cardiac cycle were given as input, and a linear elastic model was assumed for the material. We found that aortic root dilation begins before leaflet opening and is significant by the time the leaflet opens; dilation of the root alone contributes approximately 20% of the leaflet opening. The equivalent stress pattern shows a brief increase in stress at the coaptation surface during closure, and stresses increase as the line of attachment is approached from the free edge. The complex interplay of the geometry of the valve system can be successfully analyzed using an advanced dynamic finite element model.

Keywords Aorta · Analysis · Valve · Finite element method · Stress
1 Introduction

In recent decades, advances in computational science have enabled the study of biomedical problems and structures characterized by complex geometries and heterogeneous materials, and whose function is determined by multiple concomitant factors [1]. The application of modern computational procedures in the field of biomechanics is a challenging research activity, since it may lead (i) to an improvement of the basic knowledge of physiology, (ii) to the development of methods, diagnostic tools, therapeutic devices, materials, and innovative prostheses for
therapeutic applications and, last but not least, (iii) to the prediction of surgical outcomes. These aspects may contribute to progressive advancement in medicine and, in particular, in cardiovascular surgery. Cardiovascular disease (CVD) is, in fact, the leading cause of death both in Europe and in the United States. Each year CVD causes 4.3 million deaths (nearly half of all deaths) in Europe, with an estimated cost of €192 billion a year [2], while in the United States, on the basis of 2006 mortality data, about 2300 Americans die of CVD each day, an average of 1 death every 38 s, implying a total direct and indirect cost of around $503.2 billion [3]. Valvular heart disorders represent a meaningful contribution to CVD, even though they are frequently underestimated. The aortic valve has been widely studied over the last few decades [4]. However, with increasing surgical sophistication in aortic root surgery, including valve-sparing root replacements, the simplistic assumption that the valve consists of leaflets that open and close due to pressure differentials on either side is insufficient. A modern investigation of the complex interplay between the aortic root and the leaflets, and of the stresses and strains developed within the leaflet during opening and closing, is now possible using powerful computational techniques. This may aid in a much better understanding of the dynamics of the aortic valve complex and, ideally, will help in designing better substitutes for replacing the diseased aortic valve. More importantly, such analyses might prove invaluable in predicting patterns of failure of replacement devices or repair techniques in the bench setting, rather than encountering them in the clinical situation with disastrous results. There have been several attempts to understand the mechanics and function of the aortic valve and its surrounding geometric structures through finite element analysis, a powerful mathematical tool [1–3]. These models have used static finite element analysis. Dynamic analysis was used to study stresses in an artificial leaflet by Thornton [5], but only a small portion of the cardiac cycle was studied, which does not bring out the dynamic aspects of the leaflet/aortic root interplay. As pointed out by Thubrikar in the discussion of the work by David and colleagues [6], there is a need to understand the valve's geometry in a dynamic state. Also, no other author has studied the aortic valve using a geometrically accurate model of the leaflets and sinuses with a dynamic finite element model. The work undertaken in this study brings out certain important features that previous experimental models do not capture. In the context of predictive medicine, cutting-edge computational strategies play an essential role and, in particular, finite element analysis (FEA), a powerful, popular, and well-established technology for performing digital computer-based simulations, represents the key to predicting pathologic configurations and surgical outcomes.
The first engineering and mathematical research on the aortic valve dates back to the 1970s with the pioneering works by a group at Washington University, which first characterized the mechanics of the human aortic valve by computing the stress/strain distribution throughout the leaflet structure [7–9] and by developing original mathematical
models of the valve leaflets [10, 11]. In 1983, Sauren developed a theoretical model to gain insight into the factors that regulate the mechanical behavior of the natural aortic valve after closure [12]. In the 1980s, several authors made significant contributions to the study of aortic valve mechanics, focusing on the design of bioprostheses from geometric and constitutive perspectives: Christie and Medland analyzed the nonlinear finite element loading of bioprosthetic aortic valves [13], and Sabbah et al. used a finite element model of a three-layer porcine bioprosthesis to study stress locations and their correlation with calcification [14, 15]. Russo et al. (1988) included viscoelastic material properties in their closed bioprosthesis models [16]. In 1990, Thubrikar [17] published a detailed book on the aortic valve, which is still a reference; it proposed a geometric model of the valve and discussed other aspects of valve physiology, dynamics, and pathology. In the past two decades, many other computational studies have been conducted to simulate the material or geometric aspects of the aortic valve and the effect of valve pathology on valve function. The leaflets of the natural aortic valve are strongly nonlinear and anisotropic [18], and the effect of anisotropy was studied to evaluate the stress distribution in the leaflets of polymer prostheses [19]. In addition, finite element analysis shows that orthotropic anisotropy must be considered when manufacturing bioprostheses, because neglecting it negatively affects valve displacement and load distribution [20]. Driessen et al. proposed a numerical representation of the mechanically induced collagen fiber structure in aortic valve leaflets [21–23], and Freed et al. developed a transversely isotropic nonlinear constitutive model that accounts for the experimentally observed dispersion of collagen fibers [24]. Finally, Koch et al. [25] performed a static finite element analysis of the entire aortic root in the diastolic state to study the effects of nonlinear and anisotropic material properties [25]. Regarding the geometry of the human tricuspid aortic valve, Labrosse et al. proposed a new method to account for the various dimensions observed in normal human aortic valves based on fully three-dimensional analysis [26]. Clift et al. focused on synthetic valves [27], while Knierbein et al. used finite element models to improve the design of polyurethane valves [28]. Recently, Xiong and colleagues emphasized the importance of leaflet geometry for stentless pericardial aortic valves [29]. Numerical and computational techniques have also been used to study aortic valve disease; in particular, we cite the work of Grande-Allen et al., who used magnetic resonance imaging-based models to link aortic root dilation with valve regurgitation. Gnaneshwar et al. [30] used a numerical model to examine the dynamic behavior of the aortic valve during the cardiac cycle, modeling the entire cycle to analyze the interaction between the aortic root and the leaflets [30]. Conti et al. [31] carried out a dynamic finite element analysis, obtaining data such as the elongation rate of the valve leaflets, their coaptation length and motion, and the opening and closing times of the aortic valve leaflets, all in line with experimental data in the literature [31]. Finally, the
finite element aortic valve model is suitable for predicting the outcome of surgery, especially for minimally invasive aortic valve techniques. The chapter is organized as follows. Section 2 discusses the materials and methods used in this work. Section 3 discusses the obtained results, and Sect. 4 presents the conclusion.
2 Materials and Methods

The principle of the finite element method (FEM) is to divide a complex object into a finite number of small elements suitable for mechanical analysis. The process discretizes the object and replaces the original continuum with a finite element model in which elements of different types, connected at nodes, carry the material properties. Through a detailed mechanical analysis of this structure, the model's internal displacements and deformations under external forces can be obtained. Finite element modeling is very useful in valve design because kinematics, dynamics, and stresses can be analyzed far beyond what manual calculations allow. Because the simulation is realistic and accurate, design iterations can be performed faster and more cheaply than with traditional laboratory or cadaver testing. Although laboratory and cadaver tests are still important for validating FE model data, simulation can significantly reduce the total time required to develop an aortic valve design. A detailed representation of the valve surfaces was created using the SolidWorks CAD software. Previous work on the aortic valve and its basic geometric details is well documented [1], and these measurements have been used in follow-up research [3]. The method described by Thubrikar [1] was used to create the current model; the measured values are listed in Table 1. The model also accounts for the variation in leaflet thickness specified in [1]. The modeled surfaces were used to create a matching finite element shell representation; a total of 15,498 shell elements were created. Figure 1 shows the resulting model, whose mesh was optimized to reduce damage

Table 1 Dimensions of valve used in study
Part name                            Dimensions (mm)
Commissures radius, Rc               11.0
Bases radius                         11.0
Valve height                         18.0
Leaflet free edge length             30.0
Leaflet length in radial direction   17.8
Sinus depth                          18.32
Sinus height                         22.11
Fig. 1 Axial view of the designed aortic valve with sinuses
and distortion. Symmetry was not exploited in this procedure because of the type of analyses that must be performed. One of the main problems in modeling the aortic valve is determining its relaxed state. As Xie and colleagues showed [7], the unloaded organ (in their case, the vein) is not in a stress-free state. Further experimental work would be necessary to verify a stress-free state or to measure the residual stress state of the aortic valve. The highly nonlinear behavior of the real tissue has the greatest impact on the residual stress state. Here, the leaflet is assumed to be stress-free in the open position. Although the finite element model is complete and detailed, the tissue in this analysis is modeled as purely elastic. Real tissue samples are anisotropic and would ideally be modeled as hyperelastic; however, the available material data [1] are too scattered to fit an Ogden or Mooney-Rivlin model with sufficient accuracy [6]. The lack of exact tissue properties is the main limitation of this work. Nevertheless, the moduli used in this study were chosen to approximate the real behavior as closely as possible. In this work, the elastic modulus of the leaflet is assumed to be 1.5 MPa [1]. The stress-strain relationship is essentially bilinear with a transition point [1, 8]. According to the data, the modulus changes by a factor of about 150 between the stresses before and after the transition [1]. The variation of the modulus from one location to another is also unusually large. Grande et al. [8] expected that the leaflets would operate beyond the transition and evaluated them using the corresponding modulus. In any case, certain areas of the leaflet experience stresses within the transition region or even below it, so the selection of a single modulus value is problematic. From the data given in [1], 1.5 MPa is a sensible choice. The sinus modulus is assumed to be 3 MPa. A study of the porcine aortic root [9] showed that the behavior of the aortic root is strongly anisotropic; therefore, a value of 2.5 MPa is assumed for it in this work, as shown in Fig. 2. This assumption has no real effect on the loading, because the deformation is controlled by the pressure placed on the aortic skeleton. In either case, the calculated displacements scale with the chosen modulus.
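As a concrete reading of the bilinear stress-strain behavior just described, the following sketch evaluates a piecewise-linear law whose modulus changes by the reported factor of about 150 at a transition point. The pre-transition modulus and transition strain below are hypothetical values chosen purely for illustration; only the 1.5 MPa post-transition modulus comes from the text.

```python
import numpy as np

E_pre = 0.01e6   # pre-transition modulus (Pa) -- hypothetical illustration
E_post = 1.5e6   # post-transition modulus (Pa), the leaflet value cited from [1]
eps_t = 0.15     # transition strain -- hypothetical illustration

def bilinear_stress(strain):
    """Bilinear stress-strain law: the stiffness jumps at the transition strain."""
    strain = np.asarray(strain, dtype=float)
    sigma_t = E_pre * eps_t                      # stress at the transition point
    return np.where(strain <= eps_t,
                    E_pre * strain,
                    sigma_t + E_post * (strain - eps_t))

print(bilinear_stress([0.05, 0.15, 0.30]))       # stresses in Pa
print(E_post / E_pre)                            # modulus ratio of 150
```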
Fig. 2 Fixtures and forces applied on designed aortic valve sinuses
A Poisson's ratio of 0.3 was chosen, since values approaching 0.5 cause numerical problems. The leaflet density is taken as 1.3 g/ml and the sinus density as 2.5 g/ml. The analysis requires the pressure loads and constraints as input; in this work, the time dependence of the pressure is taken from [10]. The model surface is divided into the aortic surface, the ventricular surface, and the leaflet surface. Figure 2 shows that the pressure on the aortic surface is modeled as two ramps: the first ramp represents a pressure drop from 125 to 82 mmHg, and the second represents a pressure increase from 82 back to 125 mmHg. There are two main steps in creating the finite element model of the aortic valve: generating the model mesh and applying constraints to the model. The constructed model was imported into the COSMOSM program, and FEM was used for the numerical analysis. The static study was performed on a 3D mesh of tetrahedral elements. The numerical model consists of 220,996 finite elements, 322,727 nodes, and 914,361 degrees of freedom, as shown in Fig. 3.
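The FEM workflow described in this section (mesh the geometry, apply constraints and loads, solve for nodal displacements) can be illustrated with a deliberately tiny example. The sketch below assembles and solves a one-dimensional bar of linear elastic elements under an end load; it is a minimal pedagogical stand-in, not the COSMOSM valve model, and every number in it is illustrative.

```python
import numpy as np

# Minimal static FEM sketch: 1D elastic bar, fixed at one end, loaded at the other.
n_elem = 10                   # number of elements (illustrative)
L, E, A = 1.0, 1.5e6, 1e-4    # length (m), modulus (Pa), cross-section (m^2)
F_end = 5.0                   # applied end load (N)

n_nodes = n_elem + 1
k = E * A / (L / n_elem)      # stiffness of one element

# Assemble the global stiffness matrix from 2x2 element matrices
K = np.zeros((n_nodes, n_nodes))
for e in range(n_elem):
    K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])

# Load vector and essential boundary condition (node 0 fixed)
f = np.zeros(n_nodes)
f[-1] = F_end
free = np.arange(1, n_nodes)

# Solve K u = f on the free degrees of freedom
u = np.zeros(n_nodes)
u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])
print(u[-1])                  # tip displacement; equals F*L/(E*A) analytically
```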
3 Results

Different experts have used different methods to conduct in-depth research on the aortic valve complex, including mathematical modeling using finite element analysis. Grande-Allen and colleagues [32] used a finite element model to study the effect of the sinuses on cusp loading during valve-sparing aortic root replacement. Other studies have been conducted to clarify various aspects of aortic valve function [33, 35]. Cacciola et al. [36], using a similar method, showed that the cusp stress in a stentless tricuspid valve is 75% lower than that in a stented valve. However, all these
Fig. 3 The finite element mesh of the model
studies were conducted using static finite element models. A dynamic finite element model of the aortic valve reveals some previously unrecognized features, especially regarding the complex interaction between the aortic root and the valve leaflets. We studied two main aspects of aortic valve function over a typical cycle: the deformation of the aortic root and leaflets, and the stresses in different parts of the leaflet. The most significant finding regarding aortic root and valve leaflet deformation is that the valve leaflets begin to open before a positive transvalvular pressure develops, mainly due to the expansion of the aortic root. Before the aortic valve opens, its diameter increases significantly during contraction. This surprising finding is supported by a separate pseudo-analysis in which pressure was applied only to the aortic root and not to the valve leaflets. The ease of testing such hypotheses is indeed one of the advantages of this technology. Interestingly, animal models and ultrasound measurements have independently confirmed the effect of aortic root expansion on leaflet opening. Although the complex interaction between the aortic valve leaflets and the aortic root has long been appreciated, this is the first time the relationship has been demonstrated in a computer simulation model. In fact, as the wall compliance was increased during the parametric study, the leaflets began to lose coaptation, a behavior known in patients with a dilated aortic annulus, where it leads to aortic insufficiency. Therefore, it is not surprising that leaflet thickening has been found when David's technique is used to fix the aortic valve in a rigid cylinder [32]. Another notable finding is that the valve leaflets flutter strongly during opening and closing, so it is not surprising that valve
leaflet replacements made of different materials fail over time at the aortic root when used as replacement devices. Another aspect studied was the development of stress in different parts of the leaflet. An important observation is that during the closing process, the stress on the coaptation surface increases immediately. The stress increases toward the attachment line of the leaflet at the root and decreases toward the free edge. The results can be directly compared with those of Grande-Allen and colleagues [32]; the stress states are the same as those described in their work. We focused on two aspects of aortic valve function:

– the deformations of the aortic root and valve leaflets that occur during the cardiac cycle,
– the stresses arising in different parts of the valve during the cardiac cycle.
3.1 Aortic Root and Leaflet Deformation

One of the main points discussed in several articles is the influence of the sinus geometry on the efficiency of aortic valve opening. To test this effect, a pseudo-analysis was performed in which no pressure was applied to the valve leaflets while the other pressure loads on the aortic segments were retained, so that any opening of the valve leaflets is due entirely to the expansion of the aortic wall. The deformed leaflet opened by 3 mm, which accounts for almost 20% of the total displacement of 16 mm (Fig. 4). It is also interesting that the diameter of the aorta increases significantly at the commissural level before the valve opens. Figure 5 shows the commonly observed deformation modes; the red grid outline marks the initial open position of the leaflet.

Fig. 4 The displacement of the leaflet
Fig. 5 The deformation of the leaflet
In previous models, this wave-like mode of valve deformation was not observed.
3.2 Von Mises Stress in the Leaflets

Another aspect examined was the development of the von Mises stress. The equivalent (von Mises) stress evolves over time at multiple locations. An unexpected result obtained from the analysis is that the closing stress increases as one moves radially from the free edge toward the attachment line of the leaflet at the root; the maximum stress points are, for example, 0.1 MPa, 0.2 MPa, and 0.4 MPa, respectively. This trend is due to the cantilever effect when the leaflets are closed and under pressure. A similar trend can be observed in the circumferential direction when comparing the maximum stresses; this increase appears in the coaptation area, whereas the lower part of the leaflet is not affected. Figure 6 shows that the leaflet belly is unaffected by closure, while the coaptation region is subjected to a very high stress level during the closure of the leaflet.
Fig. 6 The von Mises stress of the leaflet
4 Conclusion and Future Work

With the increasing complexity of the surgical treatment of aortic root diseases, including the use of valve replacement and aortic valve repair, it is clear that a thorough understanding of the complex interactions of the various components of the aortic root is required. Surgical treatment of valvular heart disease has historically been based on developing a valve replacement procedure or strategy, applying it in a clinical setting, and then waiting, sometimes for years, to see whether it works. With sophisticated testing methods, such outcomes could be predicted in advance. With powerful computers and supporting software widely available, the use of finite element analysis and other precise mathematical modeling techniques enables the aortic valve to be modeled and inspected in a dynamic state. Using these methods, even the smallest changes in the valve and root, and the stresses within the valve, can be observed, which cannot be achieved in animal or other models. These methods can be used to test the development of new devices, new leaflets, and new types of valve stents, as well as benchtop repair techniques, before clinical use, in order to obtain better long-term results.
References 1. Leon, M. B., Smith, C. R., Mack, M., Makkar, R., Svensson, L. G., Kodali, S., Thourani, V. H., Tuzcu, E. M., Miller, D. C., Herrmann, H. C., et al. (2016). Transcatheter or surgical aortic valve replacement in intermediate-risk patients. New England Journal of Medicine, 374, 1609–1620. 2. Reardon, M. J., Van Mieghem, N. M., Popma, J. J., Kleiman, N. S., Søndergaard, L., Mumtaz, M., Adams, D. H., Deeb, G. M., Maini, B., Gada, H., et al. (2017). Surgical or transcatheter aortic valve replacement in intermediate-risk patients. New England Journal of Medicine, 376, 1321–1331.
3. Mack, M. J., Leon, M. B., Thourani, V. H., Makkar, R., Kodali, S. K., Russo, M., Kapadia, S. R., Malaisrie, S. C., Cohen, D. J., Pibarot, P., et al. (2019). Transcatheter aortic-valve replacement with a balloon-expandable valve in low-risk patients. New England Journal of Medicine, 380, 1695–1705. 4. Popma, J. J., Deeb, G. M., Yakubov, S. J., Mumtaz, M., Gada, H., O'Hair, D., Bajwa, T., Heiser, J. C., Merhi, W., Kleiman, N. S., et al. (2019). Transcatheter aortic valve replacement with a self-expanding valve in low-risk patients. New England Journal of Medicine, 380, 1706–1715. 5. Makkar, R. R., Fontana, G., Jilaihawi, H., Chakravarty, T., Kofoed, K. F., De Backer, O., Asch, F. M., Ruiz, C. E., Olsen, N. T., Trento, A., et al. (2015). Possible subclinical leaflet thrombosis in bioprosthetic aortic valves. New England Journal of Medicine, 373, 2015–2024. 6. Chakravarty, T., Søndergaard, L., Friedman, J., De Backer, O., Berman, D., Kofoed, K. F., Jilaihawi, H., Shiota, T., Abramowitz, Y., Jørgensen, T. H., et al. (2017). Subclinical leaflet thrombosis in surgical and transcatheter bioprosthetic aortic valves: An observational study. Lancet, 389, 2383–2392. 7. Khalique, O. K., Hahn, R. T., Gada, H., Nazif, T. M., Vahl, T. P., George, I., Kalesan, B., Forster, M., Williams, M. B., Leon, B., et al. (2014). Quantity and location of aortic valve complex calcification predicts severity and location of paravalvular regurgitation and frequency of post-dilation after balloon-expandable transcatheter aortic valve replacement. JACC. Cardiovascular Interventions, 7, 885–894. 8. Morganti, S., Conti, M., Aiello, M., Valentini, A., Mazzola, A., Reali, A., & Auricchio, F. (2014). Simulation of transcatheter aortic valve implantation through patient-specific finite element analysis: Two clinical cases. Journal of Biomechanics, 47, 2547–2555. 9. Spadaccio, C., Mazzocchi, L., Timofeva, I., Macron, L., De Cecco, C. N., Morganti, S., & Auricchio, F. (2020). Nappi bioengineering case study to evaluate complications of adverse anatomy of aortic root in transcatheter aortic valve replacement: Combining biomechanical modelling with CT imaging. Bioengineering, 7, 121. 10. Nappi, F., Mazzocchi, L., Timofeva, I., Macron, L., Morganti, S., Singh, S. S. A. A., Attias, D., Congedo, A., & Auricchio, F. (2020). A finite element analysis study from 3D CT to predict transcatheter heart valve thrombosis. Diagnostics, 10, 183. 11. De Backer, O., Dangas, G. D., Jilaihawi, H., Leipsic, J. A., Terkelsen, C. J., Makkar, R., Kini, A. S., Veien, K. T., Abdel-Wahab, M., Kim, W.-K., et al. (2020). GALILEO-4D investigators. Reduced leaflet motion after transcatheter aortic valve replacement. New England Journal of Medicine, 382, 130–139. 12. Sauren, A. A. H. J. (1983). The mechanical behavior of the aortic valve. Ph.D. thesis, Technische Hogeschool Eindhoven. 13. Christie, G. W., & Medland, I. C. (1982). Finite elements in biomechanics. Wiley. 14. Sabbah, H. N., Hamid, M. S., & Stein, P. D. (1985). Estimation of mechanical stresses on closed cusps of porcine bioprosthetic valves: Effects of stiffening, focal calcium and focal thinning. American Journal of Cardiology, 55, 1091–1096. 15. Sabbah, H. N., Hamid, M. S., & Stein, P. D. (1986). Mechanical stresses on closed cusps of porcine bioprosthetic valves: Correlation with sites of calcification. Annals of Thoracic Surgery, 42, 93–96. 16. Rousseau, E. P. M., van Steenhoven, A. A., Janssen, J. D., & Huysmans, H. A. (1988). A mechanical analysis of the closed Hancock heart valve prosthesis. Journal of Biomechanics, 21, 545–562. 17. Thubrikar, M. J. (1990). The aortic valve. CRC Press. 18. Billiar, K. L., & Sacks, M. S. (2000). Biaxial mechanical properties of the natural and glutaraldehyde treated aortic valve cusp-Part I: Experimental results. Journal of Biomechanical Engineering, 122, 23–30. 19. Liu, Y., Kasyanov, V., & Schoephoerster, R. T. (2007). Effect of fiber orientation on the stress distribution within a leaflet of a polymer composite heart valve in the closed position. Journal of Biomechanics, 40, 1099–1106. 20. Arcidiacono, G., Corvi, A., & Severi, T. (2005). Functional analysis of bioprosthetic heart valves. Journal of Biomechanics, 38, 1483–1490.
21. Driessen, N. J. B., Boerboom, R. A., Huyghe, J. M., Bouten, C. V. C., & Baaijens, F. P. T. (2003). Computational analyses of mechanically induced collagen fiber remodeling in the aortic heart valve. Journal of Biomechanical Engineering, 125, 549–557. 22. Driessen, N. J. B., Bouten, C. V. C., & Baaijens, F. P. T. (2005). Improved prediction of the collagen fiber architecture in the aortic heart valve. Journal of Biomechanical Engineering, 127, 329–336. 23. Driessen, N. J. B., Bouten, C. V. C., & Baaijens, F. P. T. (2005). A structural constitutive model for collagenous cardiovascular tissues incorporating the angular fiber distribution. Journal of Biomechanical Engineering, 127, 494–503. 24. Freed, A. D., Einstein, D. R., & Vesely, I. (2005). Invariant formulation for dispersed transverse isotropy in aortic heart valves. Biomechanics and Modeling in Mechanobiology, 4, 100–117. 25. Koch, T. M., Reddy, B. D., Zilla, P., & Franz, T. (2010). Aortic valve leaflet mechanical properties facilitate diastolic valve function. Computer Methods in Biomechanics and Biomedical Engineering, 13, 225–234. 26. Labrosse, M. R., Beller, C. J., Robicsek, F., & Thubrikar, M. J. (2006). Geometric modeling of functional trileaflet aortic valves: Development and clinical applications. Journal of Biomechanics, 39, 2665–2672. 27. Clift, S. E., & Fisher, J. (1996). Finite element stress analysis of a new design of synthetic leaflet heart valve. Proceedings of the Institution of Mechanical Engineers, Part H, 210, 267–272. 28. Knierbein, B., Mohr-Matuschek, U., Rechlin, M., Reul, H., Rau, G., & Michaeli, W. (1990). Evaluation of mechanical loading of a trileaflet polyurethane blood pump valve by finite element analysis. International Journal of Artificial Organs, 13, 307–315. 29. Xiong, F. L., Goetz, W. A., Chong, C. K., Chua, Y. L., Pfeifer, S., Wintermantel, E., & Yeo, J. H. (2010). Finite element investigation of stentless pericardial aortic valves: Relevance of leaflet geometry. Annals of Biomedical Engineering, 38, 1908–1918. 30. Gnyaneshwar, R., Kumar, R. K., & Balakrishnan, K. R. (2002). Dynamic analysis of the aortic valve using a finite element model. Annals of Thoracic Surgery, 73, 1122–1129. 31. Conti, C. A., Votta, E., Della Corte, A., Del Viscovo, L., Bancone, C., Cotrufo, M., & Redaelli, A. (2010). Dynamic finite element analysis of the aortic root from MRI-derived parameters. Medical Engineering & Physics, 32, 212–221. 32. Grande-Allen, K. J., Cochran, R. P., Reinhall, P. G., & Kunzelman, K. S. (2001). Finite-element analysis of aortic valve-sparing: Influence of graft shape and stiffness. IEEE Transactions on Biomedical Engineering, 48, 647–659. 33. Iqbal, J., & Serruys, P. W. (2014). Comparison of Medtronic CoreValve and Edwards Sapien XT for transcatheter aortic valve implantation. JACC. Cardiovascular Interventions, 7, 293–295. 34. Kiefer, P., Gruenwald, F., Kempfert, J., Aupperle, H., Seeburger, J., Mohr, F. W., & Walther, T. (2011). Crimping may affect the durability of transcatheter valves: An experimental analysis. Annals of Thoracic Surgery, 92, 155–160. 35. Alavi, S. H., Groves, E. M., & Kheradvar, A. (2014). The effects of transcatheter valve crimping on pericardial leaflets. Annals of Thoracic Surgery, 97, 1260–1266. 36. de Hart, J., Cacciola, G., Schreurs, P. J. G., & Peters, G. W. M. (1998). Collagen fibers reduce stresses and stabilize motion of aortic valve leaflets during systole. Journal of Biomechanics, 31, 629–638.
Fundus Images Enhancement Using Gravitational Force and Lateral Inhibition Network for Blood Vessel Detection Kamel K. Mohammed and Ashraf Darwish
Abstract High blood pressure is a condition that produces various negative effects on the human eye, including choroidopathy, retinopathy, and optic neuropathy. The pathology of the human retina that results from high blood pressure is hypertensive retinopathy. Hypertension causes narrowing of retinal veins and arteries, leading to retinal cotton wool spots, hemorrhages, arteriovenous nipping, and even papilledema. Hypertensive retinopathy is a syndrome without any visible signs that can lead to serious vision loss or even death (due to severe papilledema). The scores or grades of hypertensive retinopathy vary based on its seriousness. Several eye examination methods exist for the detection of hypertensive retinopathy; fundus fluorescein angiography, ophthalmoscopy, and fundus photography are some widely used techniques. Various researchers are designing completely integrated decision support systems that detect hypertensive retinopathy. The accurate segmentation of retinal blood vessels affects the consistency of retinal image analysis used in modern ophthalmologic diagnostics. One of the most important steps in any retinal blood vessel segmentation method is image enhancement, since the reliability of the segmentation depends on the accuracy of the contrast across the picture. We offer an image-enhancing technique that addresses these challenges based on the gravitational force (GF) and lateral inhibition network (LIN). According to our findings, the method is well adapted for vessel detection. Keywords Hypertensive retinopathy · Contrast enhancement · Vessel detection
K. K. Mohammed (B) Center for Virus Research and Studies, Al-Azhar University, Cairo, Egypt A. Darwish Faculty of Science, Helwan University, Helwan, Egypt © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 A. E. Hassanien et al. (eds.), Medical Informatics and Bioimaging Using Artificial Intelligence, Studies in Computational Intelligence 1005, https://doi.org/10.1007/978-3-030-91103-4_12
1 Introduction

In the past 30 years, rising health and treatment costs have driven health and therapeutic investigators and authorities to explore ways of improving care while reducing operating costs. Computer simulations can demonstrate a system's variability, interactions, and complexities. Tortuosity, nicking, narrowing, and dilation of the retinal vessels are linked to diseases like diabetic retinopathy, arteriosclerosis, hypertension, and cardiovascular risk factors [1, 2]. When the disease is still in its early stages, early discovery and identification of these abnormalities will aid patients in choosing the best treatment option [3]. Fundus fluorescein angiography, ophthalmoscopy, and fundus photography are widely used as screening modalities. Hypertension causes narrowing of retinal veins and arteries, leading to retinal cotton wool spots, hemorrhages, arteriovenous nipping, and even papilledema. A highly important part of retinal disease screening is automated segmentation and vessel structure analysis to prevent visual impairment. Low-contrast images can be caused by various factors, including weak or uneven illumination, nonlinearity, or a limited dynamic range of the imaging sensor, which means that lighting is distributed unevenly throughout the image. Arteries and veins have the same shape. Furthermore, the retina's bent structure and poor lighting might result in non-uniform illumination across retinal images during the imaging process. In this situation, variations in the color of the retina from person to person, owing to their biological properties, pose an additional issue. In any related image processing technique, contrast enhancement is required [4–9]. The contrast limited adaptive histogram equalization, decorrelation stretch transform, adaptive histogram equalization, and histogram equalization are different approaches that can be utilized to improve image contrast. One of the most often used approaches for blood vessel segmentation is contrast limited adaptive histogram equalization [10–14]. Noise caused by warm object radiation and atomic thermal vibration during the acquisition, storage, and processing of images is also an issue with fundus pictures [15]. Based on gravitational force (GF) and lateral inhibition network (LIN), this research describes an approach for improving image quality. This method is based on fine-tuning image pre-processing to detect blood vessels.
2 Material and Methods

2.1 Description of Dataset

The hypertensive retinopathy dataset has recently been designed for the research community [16]. It contains 100 retinal fundus images acquired with a TOPCON camera and interpreted by specialist ophthalmologists from the AFIO in Pakistan. These 100 photos, 1504 × 1000 pixels in size, are annotated for the arteriovenous ratio, the entire vascular anatomy, the veins, and the retinal arteries.
Fig. 1 Illustration of Newton's law of gravity
2.2 Background of Model

2.2.1 Gravitational Force
The GF between two point-like masses M1 and M2 is directly proportional to the product of their masses and inversely proportional to the square of the distance between them, as shown schematically in Fig. 1:

$$F = G \, \frac{M_1 M_2}{R^2} \quad (1)$$
Here F represents the gravitational force in newtons (N), M1 and M2 represent the masses in kilograms (kg), G is the gravitational constant of about 6.67 × 10⁻¹¹ N m²/kg², and R represents the distance between the two objects in meters (m).
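Equation (1) is simple enough to state directly in code. The sketch below first evaluates the physical law and then the image analogy used later in this chapter, where two pixel intensities play the role of masses and their grid spacing the role of the distance; the gray-level values are hypothetical, and G = 10 anticipates the setting reported in Sect. 5.

```python
import math

def gravitational_force(m1, m2, r, G=6.67e-11):
    """Eq. (1): F = G * M1 * M2 / R^2."""
    return G * m1 * m2 / (r * r)

# Physical case: two 1 kg masses, 1 m apart.
print(gravitational_force(1.0, 1.0, 1.0))        # ~6.67e-11 N

# Image analogy: pixel intensities as masses, grid spacing as distance.
center, neighbor = 120.0, 95.0                   # hypothetical gray levels
r = math.sqrt(2.0)                               # diagonal neighbor in a 3x3 mask
print(gravitational_force(center, neighbor, r, G=10.0))
```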
2.2.2 Theory of Lateral Inhibition Network
The hypothesis of the lateral inhibition network was established in physiological electrical investigations of Limulus vision, in which every Limulus eye is regarded as a receptor. The excitability of a receptor decreases when it absorbs an intense light excitation, and this behavior is known as lateral inhibition (LIN). Assume that two receptors C and D are illuminated with incident light strengths hC and hD. Under simultaneous illumination of C and D, their light radiation strengths are reduced accordingly to fC and fD; this change is due to the mutual inhibition of receptor C by receptor D and of receptor D by receptor C. Equations (2) and (3) define the lateral inhibitory effect mathematically:

$$h_C = f_C - k_{DC} (f_D - f_{CD}) \quad (2)$$

$$h_D = f_D - k_{CD} (f_C - f_{DC}) \quad (3)$$
In Eqs. (2) and (3), hC and hD denote the illumination radiation strengths of the two receptors under individual illumination, while fC and fD represent the illumination radiation strengths of the two
receptors under the mutual lateral inhibitory effect. The coefficients of lateral inhibition of the two receptors are represented by kCD and kDC, respectively, whereas the LI threshold values are represented by fCD and fDC, respectively. To be used in image processing, the LIN needs to be extended to two dimensions. By omitting the threshold value, the classic lateral inhibition technique uses a non-repetitive LIN model:

$$F(x, y) = Z(x, y) - \sum_{i=-l}^{l} \sum_{j=-l}^{l} g(i, j) \cdot Z(x+i,\, y+j) \quad (4)$$
In Eq. (4), Z(x, y) denotes the input picture, F(x, y) the output picture, g(i, j) the inhibition coefficient matrix, and l the inhibition area. The typical LIN is noise-sensitive, so it is frequently combined with median filtering to remove picture noise and obtain a cleaner result:

$$F(x, y) = \bar{Z}(x, y) - \sum_{i=-l}^{l} \sum_{j=-l}^{l} g(i, j) \cdot \bar{Z}(x+i,\, y+j) \quad (5)$$

$$\bar{Z}(x, y) = \operatorname*{median}_{(r,s) \in (-Z, Z)} \big[ Z(x+r,\, y+s) \big] \quad (6)$$
In Eq. (6), $\bar{Z}(x, y)$ denotes the median value of Z(x, y), and Z is the size of the median filtering window.
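A compact sketch of the two-dimensional LIN of Eqs. (4)–(6) follows: the image is first median-filtered (Eq. 6), and the weighted neighborhood sum is then subtracted from each pixel (Eq. 5). A uniform coefficient k stands in for the inhibition matrix g(i, j); that uniform choice and the window size are illustrative assumptions rather than the exact settings of [17].

```python
import numpy as np
from scipy.ndimage import median_filter

def lateral_inhibition(img, l=1, k=0.05):
    """Eqs. (5)-(6): median filtering, then subtraction of the weighted
    neighborhood sum. k is a uniform, illustrative inhibition coefficient."""
    z_bar = median_filter(img.astype(float), size=2 * l + 1)      # Eq. (6)
    out = z_bar.copy()
    h, w = z_bar.shape
    for x in range(l, h - l):
        for y in range(l, w - l):
            window = z_bar[x - l:x + l + 1, y - l:y + l + 1]
            # subtract the inhibition from the neighbors (center excluded)
            out[x, y] = z_bar[x, y] - k * (window.sum() - z_bar[x, y])
    return out

img = np.random.default_rng(0).uniform(0, 255, (64, 64))
print(lateral_inhibition(img).shape)   # (64, 64)
```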
3 GF and LIN Pattern to Improve the Image Contrast of a Vessel Image

This study mainly aims to minimize noise in the color fundus picture in every dimension, adjust the brightness and contrast, and efficiently reveal information about the edges and regions of the image. In the GF & LIN method, the total GF is computed for each dimension of the RGB color image to build one data matrix per channel. These data matrices are then subjected to two threshold values, after which contrast adjustment, noise reduction, and edge clarification are performed. The GF & LIN model processes are listed in [17]. The flowchart of the proposed approach is summarized in Fig. 2. The main phases of the proposed technique are outlined as follows:

1. The vessel image is divided into two-dimensional pictures of red, green, and blue, and the GF between each pixel and its neighbors is calculated within a 3 × 3 mask [17].
2. The three resulting GF matrices (R, G, B) are then subjected to two threshold values in the second stage [17].
Fig. 2 GF and LIN flow chart
When the three gravitational force values at a pixel are smaller than the threshold T1, the mathematical formulas (7)–(9) are used:

$$F(x, y)_R = \frac{1}{9} \sum_{n=-l}^{l} \sum_{m=-l}^{l} Z(x+m,\, y+n)_R \quad (7)$$

$$F(x, y)_G = \frac{1}{9} \sum_{n=-l}^{l} \sum_{m=-l}^{l} Z(x+m,\, y+n)_G \quad (8)$$

$$F(x, y)_B = \frac{1}{9} \sum_{n=-l}^{l} \sum_{m=-l}^{l} Z(x+m,\, y+n)_B \quad (9)$$

In Eqs. (7)–(9), $F(x, y)_R$, $F(x, y)_G$, and $F(x, y)_B$ denote the output picture in each dimension of the color picture, whereas $Z(x, y)_R$, $Z(x, y)_G$, and $Z(x, y)_B$ denote the original picture. Because a pixel is not strongly associated with its surrounding pixels when its gravitational force information is smaller than T1, the noise-softening (mean filtering) technique is applied there. If the GF data fall between T1 and T2, the mathematical formulas (10)–(12), based on the LIN, are used:

$$F(x, y)_R = Z(x, y)_R - \sum_{n=-l}^{l} \sum_{m=-l}^{l} g(m, n) \cdot K(x+m,\, y+n)_R \quad (10)$$

$$F(x, y)_G = Z(x, y)_G - \sum_{n=-l}^{l} \sum_{m=-l}^{l} g(m, n) \cdot K(x+m,\, y+n)_G \quad (11)$$

$$F(x, y)_B = Z(x, y)_B - \sum_{n=-l}^{l} \sum_{m=-l}^{l} g(m, n) \cdot K(x+m,\, y+n)_B \quad (12)$$
In Eqs. (10)–(12), Z(x, y) denotes the median-filtered image, while g(m, n) denotes the inhibition coefficient, taken as the constant 1.425 in this work; $K(x+m, y+n)_R$, $K(x+m, y+n)_G$, and $K(x+m, y+n)_B$ denote the central gravitational force acquired for each pixel. If the three gravitational force values are greater than T2, the contrast and brightness within the region are modified using the mathematical formulas (13)–(15):

$$F(x, y)_R = Z(x, y)_R \cdot \varepsilon - \sum_{n=-l}^{l} \sum_{m=-l}^{l} K(x+m,\, y+n)_R \quad (13)$$

$$F(x, y)_G = Z(x, y)_G \cdot \varepsilon - \sum_{n=-l}^{l} \sum_{m=-l}^{l} K(x+m,\, y+n)_G \quad (14)$$

$$F(x, y)_B = Z(x, y)_B \cdot \varepsilon - \sum_{n=-l}^{l} \sum_{m=-l}^{l} K(x+m,\, y+n)_B \quad (15)$$
3. In Eqs. (13)–(15), ε is the contrast coefficient, taken as 0.35 during the experiment, and F(x, y) represents the resulting output image. The final RGB output is acquired once the three color dimensions are blended. A simplified sketch of this three-branch rule is given after this list.
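The sketch announced above: for each pixel of each color channel, the total GF toward its 3 × 3 neighbors selects one of the three branches. The GF normalization, the damping of the inhibition term, and the contrast surrogate in the last branch are simplifications made only for illustration; the exact formulation is in [17].

```python
import numpy as np

G, T1, T2 = 10.0, 0.01, 0.7      # constants reported in Sect. 5
g_coef, eps = 1.425, 0.35        # inhibition and contrast coefficients

def enhance_channel(ch):
    """Three-branch GF & LIN rule of Eqs. (7)-(15), heavily simplified."""
    z = ch.astype(float) / 255.0                    # normalize intensities
    d2 = np.array([[2., 1., 2.], [1., np.inf, 1.], [2., 1., 2.]])  # squared distances
    out = z.copy()
    h, w = z.shape
    for x in range(1, h - 1):
        for y in range(1, w - 1):
            win = z[x-1:x+2, y-1:y+2]
            gf = (G * z[x, y] * win / d2).sum() / 8.0   # mean neighbor "gravity"
            if gf < T1:                                 # Eqs. (7)-(9): noise softening
                out[x, y] = win.mean()
            elif gf <= T2:                              # Eqs. (10)-(12): lateral inhibition
                # 0.01 is an illustrative damping factor to keep values in range
                out[x, y] = z[x, y] - g_coef * 0.01 * (win.sum() - z[x, y])
            else:                                       # Eqs. (13)-(15): contrast branch
                out[x, y] = z[x, y] * (1.0 + eps)       # simplified surrogate
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)

rgb = np.random.default_rng(1).integers(0, 256, (32, 32, 3), dtype=np.uint8)
enhanced = np.dstack([enhance_channel(rgb[..., c]) for c in range(3)])
print(enhanced.shape)   # final RGB output after blending the three channels
```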
4 Metrics for Evaluating the Improved Image

The image quality must be measured in many image processing applications; an objective evaluation methodology was therefore used to assess the quality of the images obtained using GF and LIN. The first criteria selected for assessment were the contrast [18], edge intensity [18], and sharpness [18] of the original and enhanced fundus images.
$$\text{Contrast} = \frac{1}{n(n-1)} \left( n \sum_{ij} g_{ij}^{2} - \Big( \sum_{ij} g_{ij} \Big)^{2} \right) \quad (16)$$

$$\text{Edge intensity} = \sum_{ij} h * g_{ij} \quad (17)$$

$$\text{Sharpness} = \sum_{ij} \begin{bmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{bmatrix} * g_{ij} \quad (18)$$
where n is the number of pixel values in the image, g is the pixel value, and h in Eq. (17) denotes the edge-detection kernel. Secondly, the normalized cross-correlation (NCC) and the Structural Similarity Index (SSI) measures were applied to each enhanced image and its original.
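A sketch of these quality metrics follows, reading Eq. (16) as the gray-level variance and Eq. (18) as the summed absolute Laplacian response. The Sobel operator used for the edge intensity is an assumption, since the kernel of Eq. (17) is not reproduced legibly in the source.

```python
import numpy as np
from scipy.ndimage import convolve, sobel

def contrast(g):
    """Eq. (16): unbiased variance of the pixel values."""
    g = g.astype(float).ravel()
    n = g.size
    return (n * (g**2).sum() - g.sum()**2) / (n * (n - 1))

def edge_intensity(g):
    """Eq. (17) with an assumed Sobel kernel: sum of gradient magnitudes."""
    gx, gy = sobel(g.astype(float), 0), sobel(g.astype(float), 1)
    return np.hypot(gx, gy).sum()

def sharpness(g):
    """Eq. (18): summed absolute response of the 3x3 Laplacian kernel."""
    lap = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], float)
    return np.abs(convolve(g.astype(float), lap)).sum()

img = np.random.default_rng(2).integers(0, 256, (64, 64)).astype(float)
print(contrast(img), edge_intensity(img), sharpness(img))
```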
5 Results and Discussion

In the suggested GF & LIN approach, the gravity constant G was set to 10, while the threshold values T1 and T2 for classifying the generated gravity force data were taken as 0.01 and 0.7. The LIN inhibition constant k was set to 1.425, whereas the contrast factor for the local contrast modification was set to 0.35. These two constants were kept the same throughout all color spaces. Figure 3a, c, e, g show the original images, and Fig. 3b, d, f, h show the enhancement in contrast with GF&LIN, respectively. The details and contrast of the images were improved, as shown in Fig. 3b, d, f, h: information about the vessels and regions of the picture is more obvious, and the vessel data of the fundus is emphasized in the image. The image contrast values, edge intensity values, and sharpness values were all improved by the method, as shown in Table 1. The vessel and area information in the image can be observed to be more visible, and the increased contrast is also beneficial for performance. To assess the quality of the improvement procedure and the picture quality, we used two further assessment measures in our research. When these factors are taken into account, the superiority of our strategy becomes clear. The SSI values in Table 1 (up to about 0.83) reflect how close the improved images are to the originals. The high NCC values indicate that the enhanced images exhibit limited illumination change relative to the original images, so the original pixel values were not significantly altered during the procedure. As can be observed in Fig. 4, the contrast procedure in our technique ensured that the content in the dark values of the G and B scales was increased to a significant level. Moreover, the suggested technique brings the bright red levels to a constant state during the contrast phase. As illustrated in Fig. 4, the contrast increases have reached satisfactory levels, mainly in the R and G dimensions.
6 Conclusion

An image is said to be sharply defined if a maximum amount of detail can be recognized. A feature in an image, on the other hand, is only apparent if it contrasts with a neighboring detail. When dealing with RGB images, we can take the contrast to equal the difference in brightness. The suitability of a recently devised contrast enhancement method, GF&LIN, for retinal blood vessel detection in fundus pictures was evaluated in this chapter. The qualitative and quantitative analysis revealed that GF&LIN enhances retinal images for blood vessel detection.
Fig. 3 a, c, e, g Original images, b, d, f, h Enhanced images with GF&LIN
Fig. 3 (continued)

Table 1 Results of the evaluation parameters for fundus images (one block per image pair of Fig. 3; NCC and SSI compare each enhanced image with its original)

Metric assessment   Original image   Enhanced image
Contrast            0.26             0.56
Edge intensity      5.4e+03          2.43e+04
Sharpness           1.1e+04          3.5e+04
NCC                                  1.617
SSI                                  0.76

Contrast            0.24             0.46
Edge intensity      4.633e+03        1.78e+04
Sharpness           9.8592e+03       2.59e+04
NCC                                  1.5068
SSI                                  0.7942

Contrast            0.221            0.384
Edge intensity      2.231e+03        7.075e+03
Sharpness           5.67e+03         1.201e+04
NCC                                  1.4464
SSI                                  0.8284

Contrast            0.24             0.453
Edge intensity      3.224e+03        1.244e+04
Sharpness           7.641e+03        1.94e+04
NCC                                  1.5274
SSI                                  0.8118
Fig. 4 RGB histograms for the vessel image
References 1. Wasan, B., Cerutti, A., Ford, S., & Marsh, R. (1995). Vascular network changes in the retina with age and hypertension. Journal of Hypertension, 13(12), 1724–1728. 2. Wong, T. Y., & McIntosh, R. (2005). Hypertensive retinopathy signs as risk indicators of cardiovascular morbidity and mortality. British Medical Bulletin, 73(1), 57–70.
3. Porwal, P., Pachade, S., Kokare, M., Deshmukh, G., & Sahasrabuddhe, V. (2018). Automatic retinal image analysis to detect diabetic retinopathy. Biomedical signal and image processing in patient care (pp. 146–161). IGI Global. 4. Furtado, P., Travassos, C., Monteiro, R., Oliveira, S., Baptista, C., & Carrilho, F. (2017). Segmentation of eye fundus images by density clustering in diabetic retinopathy. IEEE EMBS International Conference on Biomedical & Health Informatics (BHI) 2017, 25–28. 5. Franklin, S. W., & Rajan, S. E. (2014). Computerized screening of diabetic retinopathy employing blood vessel segmentation in retinal images. Biocybernetics and Biomedical Engineering, 34(2), 117–124. 6. Intaramanee, T., Rasmequan, S., Chinnasarn, K., Jantarakongkul, B., & Rodtook, A. (2016). Optic disc detection via blood vessels origin using morphological end point. International Conference on Advanced Informatics: Concepts, Theory and Application (ICAICTA), 2016, 1–6. 7. Salazar-Gonzalez, A., Kaba, D., Li, Y., & Liu, X. (2014). Segmentation of the blood vessels and optic disk in retinal images. IEEE Journal of Biomedical and Health Informatics, 18(6), 1874–1886. 8. Feng, P., Pan, Y., Wei, B., Jin, W., & Mi, D. (2007). Enhancing retinal image by the Contourlet transform. Pattern Recognition Letters, 28(4), 516–522. 9. Saranya, M., & Selvarani, A. G. (2016). Fundus image screening for diabetic retinopathy. Indian Journal of Science and Technology, 9(25). 10. Singh, N. P., & Srivastava, R. (2016). Retinal blood vessels segmentation by using Gumbel probability distribution function based matched filter. Computer Methods and Programs in Biomedicine, 129, 40–50. https://doi.org/10.1016/j.cmpb.2016.03.001. 11. Geetha Ramani, R., & Balasubramanian, L. (2016). Retinal blood vessel segmentation employing image processing and data mining techniques for computerized retinal image analysis. Biocybernetics and Biomedical Engineering, 36(1), 102–118. https://doi.org/10.1016/j.bbe.2015.06.004. 12. Azzopardi, G., Strisciuglio, N., Vento, M., & Petkov, N. (2015). Trainable COSFIRE filters for vessel delineation with application to retinal images. Medical Image Analysis, 19(1), 46–57. https://doi.org/10.1016/j.media.2014.08.002 13. Agurto, C., Yu, H., Murray, V., Pattichis, M. S., Nemeth, S., Barriga, S., et al. (2015). A multiscale decomposition approach to detect abnormal vasculature in the optic disc. Computerized Medical Imaging and Graphics, 43, 137–149. https://doi.org/10.1016/j.compmedimag.2015.01.001 14. Vostatek, P., Claridge, E., Uusitalo, H., Hauta-Kasari, M., Fält, P., & Lensu, L. (2016). Performance comparison of publicly available retinal blood vessel segmentation methods. Computerized Medical Imaging and Graphics. https://doi.org/10.1016/j.compmedimag.2016.07.005. 15. Sonali, S. S., Singh, A. K., Ghrera, S. P., & Elhoseny, M. (2019). An approach for denoising and contrast enhancement of retinal fundus image using CLAHE. Optics & Laser Technology, 110, 87–98. https://doi.org/10.1016/j.optlastec.2018.06.061 16. Akram, M. U., Akbar, S., Hassan, T., Khawaja, S. G., Yasin, U., & Basit, I. (2020). Data on fundus images for vessels segmentation, detection of hypertensive retinopathy, diabetic retinopathy and papilledema. Data in Brief, 29, 105282. 17. Katırcıoğlu, F., Çay, Y., & Cingiz, Z. (2019). Infrared image enhancement model based on gravitational force and lateral inhibition networks. Infrared Physics & Technology, 100, 15–27. 18. Sharawy, A. A., Mohammed, K. K., Aouf, M., & Salem, M. A.-M. (2018).
Ultrasound transducer quality control and performance evaluation using image metrics. Proceedings of the International Conference on Advanced Intelligent Systems and Informatics 2018.
Multivariate Fuzzy Logic Based Smart Healthcare Monitoring for Risk Evaluation of Cardiac Patients Ridhima Mehta
Abstract In this chapter, a multi-dimensional fuzzy logic design is implemented to develop smart and pervasive healthcare application systems for determining the risk status of cardiac patients. This heuristic decision-making and fuzzy reasoning mechanism is based on the medical observations of several miscellaneous physiological conditions: systolic blood pressure, blood sugar level, serum cholesterol, resting heart rate, and body mass index. These disparate attributes controlling the heart risk factor jointly serve as the input variables to the presented fuzzy inference unit. The deployed sample record comprising the health data of particular heart patients is processed by the fuzzy control scheme and is subsequently analyzed through rigorous simulation experiments. A wide range of error and fairness indicators is employed to corroborate the accuracy of the fuzzy output parameter of heart disease risk level. Specifically, we applied the normalized root-mean-square error, mean Poisson deviance, absolute logarithmic quotient error, relative absolute error, symmetric mean absolute deviation, entropy, and Jain's fairness index for empirical substantiation of the proposed method in relation to the actual heart disease observational statistics. From the simulation analysis results, it was established that the optimal solution is obtained with a normalized root-mean-square error of 0.459 and a mean Poisson deviance of 0.244. In addition, the absolute log quotient error, relative absolute error, and symmetric mean absolute deviation measures are computed as 0.73, 4.6, and 0.65, respectively. Besides, the Shannon entropy and fairness indices for the statistical analysis of the proposed soft computing based biomedical application are improved by almost 3.337% and 13.24%, respectively, as compared to the actual heart disease dataset under surveillance. Keywords Cardiac risk level · Error metrics · Entropy · Fuzzy logic design · Jain's fairness index
R. Mehta (B) School of Computer and Systems Sciences, Jawaharlal Nehru University, New Delhi, India © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 A. E. Hassanien et al. (eds.), Medical Informatics and Bioimaging Using Artificial Intelligence, Studies in Computational Intelligence 1005, https://doi.org/10.1007/978-3-030-91103-4_13
1 Introduction

The reinforced advances in the field of artificial intelligence have provided considerable opportunities for remote healthcare services to facilitate the diagnosis and treatment of patients. This course of action requires minimal dependence on the classical medical infrastructure and essential in-person clinical assistance. In this context, cognitive artificial intelligence solutions for monitoring human health based on knowledge of symptoms and physical conditions are capable of supporting patients' treatment and management through automated means. This has become especially significant during the dangerous spread of the current coronavirus pandemic across the world, which has remarkably affected the primary healthcare sector and the technology industry. Such an intelligent system design is imperative to prevent and control cardiac failure, the most common heart disease, which to date remains a major cause of death worldwide. For determining the potential heart conditions and body fitness level of specific patients, various fundamental health-based measurements and testing procedures are employed as the input attributes for joint decision-making. Multiple classes for these parameters are contemplated through the input categorization into diverse linguistic terms. The distinctive parameters determining a person's heart health status monitor the patient's situation remotely in a collaborative way. This ensures continuous distant surveillance of the patient's cardiac health while reducing the frequency of hospital visits for consultation with medical experts. The current health state of the patient, evaluated from the related background reports, facilitates forecasting of the probability of heart illness for seeking the required medical treatment and estimating future hospital expenditures. This generalized patient care and monitoring system is addressed and implemented with an intelligent fuzzy logic design and the associated knowledge base for determining the conditions of the considered sample population. The automatic investigation of cardiac health state through the proposed collection and analysis of the associated data of specific patients helps to achieve secure and quick medical attention and the prevention of relevant health threats in the future. The fuzzy reasoning approach for knowledge representation necessitates prior learning of the inference rules [15]. This self-adapting soft computing technique presents a novel mathematical framework for modeling real-world verbal explication by precise numerical structures [16]. The rest of this chapter is organized as follows. The next section discusses the related work in the area of healthcare tracking network systems. The basic architecture of the fuzzy logic system is outlined in Sect. 3. Section 4 presents the proposed fuzzy logic conceptual model together with the simulation setup framework. The simulation results incorporating a detailed analysis of the accuracy of the developed fuzzy modeling healthcare approach are provided in Sect. 5. The chapter is concluded in Sect. 6, followed by some future implications of the conferred research domain.
2 Related Work

In recent years, many research endeavours have been carried out in the field of medical structures provisioning and healthcare monitoring schemes. The work in [7] surveyed the operation of wireless sensor networks and miniature medical sensing devices in the healthcare application domain for guaranteeing the security and privacy of medical data collected through computer-assisted clinical laboratory settings. Multiple-patient health monitoring through a wireless body sensor network architecture is implemented in [1] by data forwarding from the supervisory sensing node connected to the human body to the envisaged base station. Health improvement strategies for chronic disease control, supplemented by the immensely variable symptoms, are developed with the integrated design of mobile health and the Internet of Things architecture [4]. The application of wireless body area networks is explored in [9] for the remote detection and cure of serious illness-causing diseases by using embedded physical sensor devices. Wireless simulation of a heterogeneous network comprised of mobile cognitive radio and physiological sensors is executed for the effective real-time monitoring of maternal health issues during the indispensable period of gestation [12]. A cross-layer protocol for heterogeneous body area networks is designed based on the source node synergetic paradigm to address human health management issues across the developing world with its primitive, resource-restrained medical amenities [11]. In addition, body temperature and heart rate measuring sensors are employed for assessment by health personnel by using various types of economical hardware devices such as an Arduino microcontroller interface, an LCD output layout, and temperature and pulse sensors [3]. Recently, the authors in [8] discussed the various types of commonly occurring diseases, such as neurological disorders, cardiovascular disease, and diabetes, from the remote health detection and monitoring perspective. This developed system is implemented wirelessly for the patients' health data acquisition, with sensors attached to the human body as well as ambient sensors measuring the environmental impact. Xie et al. [14] presented the security and privacy protection analysis during sensitive data transmission of patients' health information using an elliptic curve cryptosystem applied to a wireless sensor network. Work in [2] predicted cardiac attack plausibility by employing a cloud-based Internet of Medical Things framework. This scheme aims to collect and process specific patient data for the subsequent recommendation of essential therapeutic care and feasible disease prevention. Besides, the application of the advanced 6G communications technology is discussed to support healthcare monitoring systems in crucial disaster stages with security and privacy enforcement of the patients' vital information [6]. In contrast to these previous models, our modern machine learning-based heart healthcare system helps in the identification of the disease and autonomic treatment of patients remotely through the multi-dimensional processing of the patient's data. The proposed fuzzy-based modeling scheme is analyzed and applied to sample real-world datasets using a novel medical decision-support system. This smart healthcare
system is particularly effective for monitoring the condition of patients affected by persistent illness together with physical disabilities, mobility, and/or age-related issues. The reliability of the developed method for heart disease observation and intelligent management is essentially measured using multiple error, data entropy, and fairness indicators. The deployed training database stores the patient's medical history and laboratory testing reports, which can consequently be applied to newly observed patients' information. Analysis of this heart disease-related health data can be examined by the hospital staff to form effective treatment arrangements for the particular person under observation.
3 General Layout of Fuzzy Logic Theory

A fuzzy rule-based expert technique applies multi-dimensional decision-making logic for automated reasoning with intelligent evaluation and control of nonlinear complex systems. This qualitative data processing framework provides a feasible soft computing method for approximate modeling of human experiences and cognitions with imprecise concepts. The domain-specific, knowledge-intensive implementation is predicated on the notion of a rule set depository and a linguistic inference engine. Figure 1 illustrates the fundamental architecture of the fuzzy logic design unit. This data flow representation depicts the basic operational components of the fuzzy control paradigm. In this mechanism, the preliminary step is to identify and specify the system model in terms of the real input and output attributes of the fuzzy interpretation process. In particular, the crisp input data is fed to the fuzzification interface to yield the fuzzified input set that serves as the antecedent part of the fuzzy IF–THEN rules. This fuzziness in the fuzzy set theory is completely characterized by the arbitrary membership curves that efficiently determine the partial mapping of each element in the input space to the positive membership grade. In many-valued fuzzy logic, the membership function denotes the intermediate degree
Fig. 1 Basic structure of the fuzzy logic designer
of belongingness of a real-valued fuzzy set over the continuous interval of values ranging from 0.0 to 1.0. These indicator functions for each input and output variable in a fuzzy inference scheme are concisely expressed by the mathematical formulas with the distinctive shape types executed for specific applications. Each deployed fuzzy set has an associated universe of discourse with differing units and indefinite boundaries. The fuzzified inputs from the fuzzification module are fed to the core inference mechanism of the fuzzy decision-making unit, whose task is to implement the control rules stored in the fuzzy knowledge base. Similarly, the classical output data is generated from the defuzzification module with the cooperation of output membership functions. This database collection consisting of the defuzzified crisp outputs can be subsequently employed for the presented model verification and analysis.
4 Proposed Fuzzy-Based Cardiac Patient Monitoring Model

In this section, a multiple-attribute fuzzy design operation for the surveillance of the cardiovascular disease risk factor is modeled and mathematically formulated. This health monitoring system for the assessment of cardiac patients takes into consideration diverse parameters determining generic physical fitness and medical conditions. The consolidated effects of the miscellaneous criteria leading to eventual heart defects contemplated in this work incorporate the systolic human blood pressure, fasting blood sugar level, serum cholesterol, resting heart rate, and body mass index. These medical tests can be conducted manually or through automated technology-supported clinical laboratory procedures within healthcare facilities. The developed fuzzy logic-based cardiac patient monitoring system with five input fuzzy variables and one output fuzzy variable is demonstrated in Fig. 2. Thus, we consider the multiple-input single-output (MISO) fuzzy inference methodology, in which a multitude of input variables are fed to the fuzzy logic module to generate the crucial
Fig. 2 The proposed fuzzy logic controller architecture
cardiac health output. The employed heterogeneous health-related input dimensions represent the distinguishing features influencing the dynamic cause of potential congestive heart failure. Processing of these inputs takes place in accordance with the stored knowledge base that is acquired by the expert controller schemes, which significantly influences the performance of the proposed healthcare monitoring system. The fuzzy control architecture can be perceived as an efficient design approach for the combined assessment of various disparate parameters associated with controlling the likelihood of heart failure risk. This adaptive multi-dimensional decision-making phenomenon is modeled with the imprecise and non-numerical medical data ranging over the allowable predetermined interval.
4.1 Simulation Environment Setting

The graphical outline of the membership curves for each of the fuzzy inputs with typically overlapping regions is depicted in Fig. 3. Likewise, the membership function associated with the singleton fuzzy output is shown in Fig. 4. It is delineated by the four linguistic terms of {Low, Medium, High, Very High} coupled with the heart risk factor. We employ several forms of the widely applied membership function distributions, such as triangular, trapezoidal, generalized bell-shaped, Gaussian, and Gaussian combination membership functions, to plot the congregation of the exploited fuzzy inputs and output. The set of individual linguistic labels used to refer to each input and output fuzzy variable, in conjunction with the respective membership function types and the corresponding fuzzy numbers, is listed in Table 1. The first four input attributes of blood pressure, blood glucose level, cholesterol, and heart rate are each described by three linguistic concepts. Besides, the body mass index fuzzy metric is represented by two linguistic terms. Thus, the single stipulated fuzzy output of the heart risk indicator involves an overall collection of 3⁴ × 2 = 162 fuzzy rules for the different combinations of the input variables. Table 2 enumerates these fuzzy control rules comprised of the relationship between the input and the output linguistic parameters for the expert knowledge representation and qualitative evaluation of the patient health status. In algebraic terms, the deployed membership functions for each input and output attribute are mathematically formulated as shown in the generic Eqs. (1)–(4). Here, α is the input feature vector for which the corresponding membership grades are computed using the suitable membership functions. Triangular- and trapezoidal-shaped characteristic fuzzy sets, identified by linear membership curves and non-smooth crossover transitions, are specified using three (x1, x2, x3) and four (x1, x2, x3, x4) parameters, respectively. In addition, the non-linear generalized bell-shaped membership function numerically represented in Eq. (3) is described in terms of the controlling variables of the standard deviation x1, the mean x3, and the non-negative parameter x2 representing the slope of this particular distribution. Similarly, the Gaussian (and Gaussian combination) membership function in Eq. (4) is graphically denoted by a non-linear indicator subroutine in terms of the centre x1 and
Fig. 3 Membership functions associated with the five input fuzzy sets of systolic blood pressure, fasting blood sugar, serum cholesterol, heart rate and body mass index
the breadth x2 of the membership curve. The aforementioned specific parameters for both the generalized bell and Gaussian membership function types ensure the smooth consistency of the non-linear procedure around the curvature shifts.

$$\mu(\alpha; x_1, x_2, x_3) = \begin{cases} 0 & \alpha \le x_1 \\ (\alpha - x_1)/(x_2 - x_1) & x_1 \le \alpha \le x_2 \\ (x_3 - \alpha)/(x_3 - x_2) & x_2 \le \alpha \le x_3 \\ 0 & x_3 \le \alpha \end{cases} \quad (1)$$
Fig. 4 Membership function associated with the output fuzzy set of cardiac arrest risk factor
$$\mu(\alpha; x_1, x_2, x_3, x_4) = \begin{cases} 0 & \alpha \le x_1 \\ (\alpha - x_1)/(x_2 - x_1) & x_1 \le \alpha \le x_2 \\ 1 & x_2 \le \alpha \le x_3 \\ (x_4 - \alpha)/(x_4 - x_3) & x_3 \le \alpha \le x_4 \\ 0 & x_4 \le \alpha \end{cases} \quad (2)$$

$$\mu(\alpha; x_1, x_2, x_3) = \frac{1}{1 + \left| \dfrac{\alpha - x_3}{x_1} \right|^{2x_2}} \quad (3)$$

$$\mu(\alpha; x_1, x_2) = e^{-\frac{(\alpha - x_1)^2}{2 x_2^2}} \quad (4)$$
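For concreteness, the four membership-function families in Eqs. (1)–(4) can be transcribed directly into code. The sketch below is illustrative only (plain NumPy, not the MATLAB toolbox used in this chapter); the example parameter values are taken from the systolic blood pressure rows of Table 1.

```python
import numpy as np

def trimf(a, x1, x2, x3):
    # Triangular membership function, Eq. (1)
    return np.maximum(np.minimum((a - x1) / (x2 - x1), (x3 - a) / (x3 - x2)), 0.0)

def trapmf(a, x1, x2, x3, x4):
    # Trapezoidal membership function, Eq. (2)
    sides = [(a - x1) / (x2 - x1), np.ones_like(a), (x4 - a) / (x4 - x3)]
    return np.maximum(np.minimum.reduce(sides), 0.0)

def gbellmf(a, x1, x2, x3):
    # Generalized bell membership, Eq. (3): x1 width, x2 slope, x3 centre
    return 1.0 / (1.0 + np.abs((a - x3) / x1) ** (2 * x2))

def gaussmf(a, x1, x2):
    # Gaussian membership, Eq. (4): x1 centre, x2 breadth
    return np.exp(-((a - x1) ** 2) / (2 * x2 ** 2))

# Membership grades of a systolic pressure of 160 mmHg (parameters from Table 1)
bp = np.array([160.0])
print(trapmf(bp, 5.5, 69.5, 90, 110))      # Low
print(trimf(bp, 100, 135.08, 170))         # Medium
print(trapmf(bp, 155, 190.317, 215, 250))  # High
```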
Besides, our model uses the commonly applied centre-of-area (centroid) mechanism for the execution of the defuzzification phase, introduced by Sugeno in 1985, in which the optimal defuzzified output value g* is evaluated as:

$$g^* = \frac{\int_{g_{\min}}^{g_{\max}} \mu(g)\, g \, dg}{\int_{g_{\min}}^{g_{\max}} \mu(g)\, dg} \quad (5)$$
This estimated value, using the continuous scaled membership function and the deployed defuzzification module, provides the real output value measured as the abscissa of the centre of gravity of the fuzzy set delimited over the universe of discourse g of the defuzzified output variables. The definite integral in the denominator, $\int_{g_{\min}}^{g_{\max}} \mu(g)\, dg$, ascertains the combined overall area of the region bounded by the membership curve across the interval (g_min, g_max). The fuzzy inference technique utilized in this numerical analysis scheme is the min–max Mamdani modeling procedure, in which the output variables are designated by the corresponding discrete fuzzy sets.
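As a worked illustration of Eq. (5), centroid defuzzification reduces to a ratio of two numerical integrals over a sampled universe of discourse. The following is a minimal sketch assuming the aggregated output membership has already been computed; the toy membership below is the "High" output triangle of Table 1 clipped at an assumed firing strength of 0.6.

```python
import numpy as np

def centroid_defuzzify(g, mu):
    """Centre-of-gravity defuzzification, Eq. (5).

    g  : sampled universe of discourse (1-D array)
    mu : aggregated output membership grades on g
    """
    return np.trapz(mu * g, g) / np.trapz(mu, g)

# Toy aggregated output: the 'High' risk triangle [0.4, 0.6468, 0.88], clipped at 0.6
g = np.linspace(0.0, 1.0, 1001)
tri = np.maximum(np.minimum((g - 0.4) / 0.2468, (0.88 - g) / 0.2332), 0.0)
mu = np.minimum(tri, 0.6)
print(centroid_defuzzify(g, mu))  # abscissa of the centre of gravity
```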
Table 1 Fuzzy linguistic terms and their corresponding membership functions and fuzzy numbers for different input and output variables

| Variable | Linguistic term | Membership function | Fuzzy numbers |
|---|---|---|---|
| Systolic blood pressure | Low | Trapezoidal | [5.5 69.5 90 110] |
| | Medium | Triangular | [100 135.08 170] |
| | High | Trapezoidal | [155 190.317 215 250] |
| Blood glucose fasting | Low | Triangular | [40.4 54.98 70.1] |
| | Normal | Generalized bell-shaped | [18.24 1.53 100] |
| | High | Gaussian | [8.203 147] |
| Serum cholesterol | Low | Trapezoidal | [62.2 90.38 112 152] |
| | Normal | Gaussian combination | [31.725 205.38 31.8 194] |
| | High | Trapezoidal | [239.95 272.95 291.95 368.95] |
| Heart rate | Low | Generalized bell-shaped | [12.986 2.5 51.2] |
| | Normal | Triangular | [67.6 85.2 105.853] |
| | Very high | Generalized bell-shaped | [13.155 2.5 120] |
| Body mass index | Normal | Gaussian | [1.924 20.5] |
| | High | Gaussian | [1.25 28] |
| Risk level (Output) | Low | Triangular | [−0.0675 0.11 0.25] |
| | Medium | Triangular | [0.1 0.3 0.5] |
| | High | Triangular | [0.4 0.6468 0.88] |
| | Very high | Triangular | [0.774 0.9378 1.15] |
These mathematical fuzzy numbers, representing the vague qualitative data, subsequently require the defuzzification mechanism for mapping the fuzzy output variables to crisp, quantifiable resulting output variables. Moreover, this paradigm incorporates multi-attribute decision-making concerning the input parameter categorization and synchronous intelligent data processing relevant to the medical service platform. In Fig. 5, we plot the core inference rule viewer structure such that, given the inputs Systolic Blood Pressure = 160 mmHg, Fasting Blood Glucose = 90.6 mg/dL, Serum Cholesterol = 254 mg/dL, Heart Rate = 110 beats per minute, and Body Mass Index = 26.6, the corresponding output Risk Level = 0.571, which is indicative of a "High" risk level for the specified inputs.
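The same inference exercise can be reproduced outside the MATLAB GUI. The sketch below uses the open-source scikit-fuzzy package, which is an assumption about tooling rather than the toolchain of this chapter; it encodes only two of the five inputs and two loosely projected rules (cf. the full rule base in Table 2), so its numeric output will not match the 0.571 reported above. The Gaussian-combination parameter ordering for the "Normal" cholesterol term is also an assumption.

```python
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Two of the five inputs and the single output, with parameters from Table 1
bp = ctrl.Antecedent(np.arange(0, 251, 1), 'systolic_bp')
chol = ctrl.Antecedent(np.arange(0, 370, 1), 'cholesterol')
risk = ctrl.Consequent(np.linspace(0, 1, 101), 'risk')

bp['medium'] = fuzz.trimf(bp.universe, [100, 135.08, 170])
bp['high'] = fuzz.trapmf(bp.universe, [155, 190.317, 215, 250])
chol['normal'] = fuzz.gauss2mf(chol.universe, 194, 31.8, 205.38, 31.725)
chol['high'] = fuzz.trapmf(chol.universe, [239.95, 272.95, 291.95, 368.95])
risk['medium'] = fuzz.trimf(risk.universe, [0.1, 0.3, 0.5])
risk['high'] = fuzz.trimf(risk.universe, [0.4, 0.6468, 0.88])

rules = [
    ctrl.Rule(bp['high'] & chol['high'], risk['high']),
    ctrl.Rule(bp['medium'] & chol['normal'], risk['medium']),
]
sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input['systolic_bp'] = 160
sim.input['cholesterol'] = 254
sim.compute()
print(sim.output['risk'])  # centroid-defuzzified risk level
```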
Table 2 Fuzzy logic inference rules for the proposed model implementation of cardiac patient monitoring

| Rule no | Systolic blood pressure | Blood glucose fasting | Serum cholesterol | Heart rate | Body mass index | Risk level |
|---|---|---|---|---|---|---|
| 1 | Low | Low | Low | Low | Normal | Low |
| 2 | Low | Low | Low | Low | High | Medium |
| 3 | Low | Low | Low | Normal | Normal | Low |
| 4 | Low | Low | Low | Normal | High | Medium |
| 5 | Low | Low | Low | Very high | Normal | Medium |
| 6 | Low | Low | Low | Very high | High | Medium |
| 7 | Low | Low | Normal | Low | Normal | Low |
| 8 | Low | Low | Normal | Low | High | Medium |
| 9 | Low | Low | Normal | Normal | Normal | Low |
| 10 | Low | Low | Normal | Normal | High | Medium |
| 11 | Low | Low | Normal | Very high | Normal | Medium |
| 12 | Low | Low | Normal | Very high | High | Medium |
| 13 | Low | Low | High | Low | Normal | Medium |
| 14 | Low | Low | High | Low | High | High |
| 15 | Low | Low | High | Normal | Normal | Medium |
| 16 | Low | Low | High | Normal | High | High |
| 17 | Low | Low | High | Very high | Normal | High |
| 18 | Low | Low | High | Very high | High | High |
| 19 | Low | Normal | Low | Low | Normal | Low |
| 20 | Low | Normal | Low | Low | High | Medium |
| 21 | Low | Normal | Low | Normal | Normal | Low |
| 22 | Low | Normal | Low | Normal | High | Medium |
| 23 | Low | Normal | Low | Very high | Normal | Medium |
| 24 | Low | Normal | Low | Very high | High | Medium |
| 25 | Low | Normal | Normal | Low | Normal | Low |
| 26 | Low | Normal | Normal | Low | High | Medium |
| 27 | Low | Normal | Normal | Normal | Normal | Low |
| 28 | Low | Normal | Normal | Normal | High | Medium |
| 29 | Low | Normal | Normal | Very high | Normal | Medium |
| 30 | Low | Normal | Normal | Very high | High | High |
| 31 | Low | Normal | High | Low | Normal | Medium |
| 32 | Low | Normal | High | Low | High | High |
| 33 | Low | Normal | High | Normal | Normal | High |
| 34 | Low | Normal | High | Normal | High | High |
| 35 | Low | Normal | High | Very high | Normal | High |
| 36 | Low | Normal | High | Very high | High | High |
| 37 | Low | High | Low | Low | Normal | Medium |
| 38 | Low | High | Low | Low | High | Medium |
| 39 | Low | High | Low | Normal | Normal | Medium |
| 40 | Low | High | Low | Normal | High | Medium |
| 41 | Low | High | Low | Very high | Normal | Medium |
| 42 | Low | High | Low | Very high | High | High |
| 43 | Low | High | Normal | Low | Normal | Low |
| 44 | Low | High | Normal | Low | High | Medium |
| 45 | Low | High | Normal | Normal | Normal | Low |
| 46 | Low | High | Normal | Normal | High | Medium |
| 47 | Low | High | Normal | Very high | Normal | Medium |
| 48 | Low | High | Normal | Very high | High | Medium |
| 49 | Low | High | High | Low | Normal | High |
| 50 | Low | High | High | Low | High | High |
| 51 | Low | High | High | Normal | Normal | Medium |
| 52 | Low | High | High | Normal | High | High |
| 53 | Low | High | High | Very high | Normal | High |
| 54 | Low | High | High | Very high | High | High |
| 55 | Medium | Low | Low | Low | Normal | Low |
| 56 | Medium | Low | Low | Low | High | Medium |
| 57 | Medium | Low | Low | Normal | Normal | Low |
| 58 | Medium | Low | Low | Normal | High | Medium |
| 59 | Medium | Low | Low | Very high | Normal | Medium |
| 60 | Medium | Low | Low | Very high | High | Medium |
| 61 | Medium | Low | Normal | Low | Normal | Low |
| 62 | Medium | Low | Normal | Low | High | Medium |
| 63 | Medium | Low | Normal | Normal | Normal | Low |
| 64 | Medium | Low | Normal | Normal | High | Medium |
| 65 | Medium | Low | Normal | Very high | Normal | Medium |
| 66 | Medium | Low | Normal | Very high | High | Medium |
| 67 | Medium | Low | High | Low | Normal | Medium |
| 68 | Medium | Low | High | Low | High | High |
| 69 | Medium | Low | High | Normal | Normal | Medium |
| 70 | Medium | Low | High | Normal | High | High |
| 71 | Medium | Low | High | Very high | Normal | High |
| 72 | Medium | Low | High | Very high | High | High |
| 73 | Medium | Normal | Low | Low | Normal | Low |
| 74 | Medium | Normal | Low | Low | High | Medium |
| 75 | Medium | Normal | Low | Normal | Normal | Low |
| 76 | Medium | Normal | Low | Normal | High | Medium |
| 77 | Medium | Normal | Low | Very high | Normal | Medium |
| 78 | Medium | Normal | Low | Very high | High | Medium |
| 79 | Medium | Normal | Normal | Low | Normal | Low |
| 80 | Medium | Normal | Normal | Low | High | Medium |
| 81 | Medium | Normal | Normal | Normal | Normal | Low |
| 82 | Medium | Normal | Normal | Normal | High | Medium |
| 83 | Medium | Normal | Normal | Very high | Normal | Medium |
| 84 | Medium | Normal | Normal | Very high | High | Medium |
| 85 | Medium | Normal | High | Low | Normal | Medium |
| 86 | Medium | Normal | High | Low | High | Medium |
| 87 | Medium | Normal | High | Normal | Normal | Medium |
| 88 | Medium | Normal | High | Normal | High | Medium |
| 89 | Medium | Normal | High | Very high | Normal | Medium |
| 90 | Medium | Normal | High | Very high | High | High |
| 91 | Medium | High | Low | Low | Normal | Low |
| 92 | Medium | High | Low | Low | High | Medium |
| 93 | Medium | High | Low | Normal | Normal | Low |
| 94 | Medium | High | Low | Normal | High | Medium |
| 95 | Medium | High | Low | Very high | Normal | Medium |
| 96 | Medium | High | Low | Very high | High | Medium |
| 97 | Medium | High | Normal | Low | Normal | Low |
| 98 | Medium | High | Normal | Low | High | Medium |
| 99 | Medium | High | Normal | Normal | Normal | Low |
| 100 | Medium | High | Normal | Normal | High | Medium |
| 101 | Medium | High | Normal | Very high | Normal | Low |
| 102 | Medium | High | Normal | Very high | High | Medium |
| 103 | Medium | High | High | Low | Normal | Medium |
| 104 | Medium | High | High | Low | High | Medium |
| 105 | Medium | High | High | Normal | Normal | Medium |
| 106 | Medium | High | High | Normal | High | Medium |
| 107 | Medium | High | High | Very high | Normal | Medium |
| 108 | Medium | High | High | Very high | High | High |
| 109 | High | Low | Low | Low | Normal | Medium |
| 110 | High | Low | Low | Low | High | Medium |
| 111 | High | Low | Low | Normal | Normal | Medium |
| 112 | High | Low | Low | Normal | High | Medium |
| 113 | High | Low | Low | Very high | Normal | Medium |
| 114 | High | Low | Low | Very high | High | High |
| 115 | High | Low | Normal | Low | Normal | Medium |
| 116 | High | Low | Normal | Low | High | Medium |
| 117 | High | Low | Normal | Normal | Normal | Medium |
| 118 | High | Low | Normal | Normal | High | Medium |
| 119 | High | Low | Normal | Very high | Normal | Medium |
| 120 | High | Low | Normal | Very high | High | Medium |
| 121 | High | Low | High | Low | Normal | Medium |
| 122 | High | Low | High | Low | High | Medium |
| 123 | High | Low | High | Normal | Normal | Medium |
| 124 | High | Low | High | Normal | High | High |
| 125 | High | Low | High | Very high | Normal | High |
| 126 | High | Low | High | Very high | High | Very high |
| 127 | High | Normal | Low | Low | Normal | Medium |
| 128 | High | Normal | Low | Low | High | Medium |
| 129 | High | Normal | Low | Normal | Normal | Medium |
| 130 | High | Normal | Low | Normal | High | Medium |
| 131 | High | Normal | Low | Very high | Normal | Medium |
| 132 | High | Normal | Low | Very high | High | Medium |
| 133 | High | Normal | Normal | Low | Normal | Medium |
| 134 | High | Normal | Normal | Low | High | Medium |
| 135 | High | Normal | Normal | Normal | Normal | Medium |
| 136 | High | Normal | Normal | Normal | High | Medium |
| 137 | High | Normal | Normal | Very high | Normal | Medium |
| 138 | High | Normal | Normal | Very high | High | Medium |
| 139 | High | Normal | High | Low | Normal | High |
| 140 | High | Normal | High | Low | High | High |
| 141 | High | Normal | High | Normal | Normal | High |
| 142 | High | Normal | High | Normal | High | High |
| 143 | High | Normal | High | Very high | Normal | High |
| 144 | High | Normal | High | Very high | High | Very high |
| 145 | High | High | Low | Low | Normal | Medium |
| 146 | High | High | Low | Low | High | Medium |
| 147 | High | High | Low | Normal | Normal | Medium |
| 148 | High | High | Low | Normal | High | Medium |
| 149 | High | High | Low | Very high | Normal | Medium |
| 150 | High | High | Low | Very high | High | Medium |
| 151 | High | High | Normal | Low | Normal | Medium |
| 152 | High | High | Normal | Low | High | Medium |
| 153 | High | High | Normal | Normal | Normal | Medium |
| 154 | High | High | Normal | Normal | High | Medium |
| 155 | High | High | Normal | Very high | Normal | Medium |
| 156 | High | High | Normal | Very high | High | Medium |
| 157 | High | High | High | Low | Normal | High |
| 158 | High | High | High | Low | High | High |
| 159 | High | High | High | Normal | Normal | High |
| 160 | High | High | High | Normal | High | High |
| 161 | High | High | High | Very high | Normal | High |
| 162 | High | High | High | Very high | High | Very high |
5 Simulation Results and Analysis

In this section, the designed intelligent healthcare expert system using the multiple-criteria fuzzy logic model is implemented in the Matlab simulation software [10]. Concretely, we utilize the Fuzzy Logic Toolbox Graphical User Interface (GUI) for modeling and analyzing the evolved system behaviour, followed by the linguistic rule specification and interpretation in the key fuzzy inference unit. Figure 6 exhibits the three-dimensional decision surface plots depicting the presented system's output for various combinations of the contemplated input variables.
Fig. 5 Fuzzy inference rule viewer in the fuzzy logic toolbox module
The specified fuzzy modeling configuration is trained with a sample dataset retrieved from an explicit healthcare medical database observed over a given time interval. The model is substantiated and the system accuracy verified by varying the size of the data record from 1 to 30. Table 3 demonstrates the discrete values of the characteristic inputs for the provided dataset, linked with the output values measured in accordance with the conventional technique and the heart risk output predicted by the proposed fuzzy logic framework. The scrutinized health information samples related to human heart functioning and normal blood circulation anatomy constitute the simulated real data. This can be used for the optimal
Fig. 6 Three-dimensional decision surfaces for the risk level fuzzy output implemented in the linguistic inference engine
model establishment to ameliorate the system design performance on successive unseen data analogous to new patient observations. Figure 7 graphically represents the actual and the corresponding fuzzy estimated values of the heart risk metric for each dataset instance in the considered sample. On average, the risk level associated with heart failure symptoms computed using the proposed fuzzy strategy is 46.6% lower than with the standard modeling technique. This substantiates the improved execution of the developed fuzzy healthcare application with intelligent computational monitoring in the potential prediction of the risk associated with the concerned illness acquisition. Next, we evaluate the accuracy performance of the proposed fuzzy-based cardiovascular patient assessment model using various error metrics. These include the normalized root-mean-square error,
Table 3 Sample dataset for the proposed model implementation of cardiac patient illness status

| S. no | Systolic blood pressure | Blood glucose fasting | Serum cholesterol | Heart rate | Body mass index | Actual risk level | Predicted risk level |
|---|---|---|---|---|---|---|---|
| 1 | 114.2 | 81.6 | 142.2 | 96.63 | 25.29 | 0.4569 | 0.27 |
| 2 | 103.5 | 70.32 | 165.8 | 97.6 | 27.02 | 0.4647 | 0.402 |
| 3 | 162.8 | 107 | 242.5 | 108.2 | 24.52 | 0.69195 | 0.473 |
| 4 | 178 | 85.83 | 183.5 | 92.79 | 27.98 | 0.6325 | 0.3 |
| 5 | 161.3 | 116.9 | 227.8 | 76.44 | 24.71 | 0.6125 | 0.243 |
| 6 | 108.1 | 104.2 | 207.1 | 75.48 | 23.75 | 0.46738 | 0.127 |
| 7 | 88.35 | 83.01 | 183.5 | 87.02 | 19.9 | 0.35487 | 0.113 |
| 8 | 95.95 | 109.8 | 210.1 | 70.67 | 16.63 | 0.35216 | 0.12 |
| 9 | 117.2 | 123.9 | 242.5 | 99.52 | 24.13 | 0.6183 | 0.232 |
| 10 | 137 | 132.4 | 269 | 92.79 | 26.44 | 0.70268 | 0.342 |
| 11 | 181 | 115.4 | 289.7 | 105.3 | 23.56 | 0.758 | 0.631 |
| 12 | 190.1 | 97.12 | 248.4 | 95.67 | 26.83 | 0.722 | 0.51 |
| 13 | 176.5 | 73.14 | 272 | 104.3 | 24.52 | 0.668 | 0.569 |
| 14 | 196.2 | 102.8 | 230.7 | 111.1 | 27.6 | 0.7785 | 0.3 |
| 15 | 165.8 | 123.9 | 266.1 | 94.71 | 19.71 | 0.648 | 0.53 |
| 16 | 143 | 111.2 | 224.8 | 87.02 | 22.6 | 0.569 | 0.113 |
| 17 | 150.6 | 83.01 | 189.4 | 103.4 | 25.1 | 0.5767 | 0.251 |
| 18 | 129.4 | 107 | 266.1 | 100.5 | 23.94 | 0.6285 | 0.283 |
| 19 | 120.3 | 74.55 | 218.9 | 95.67 | 22.4 | 0.479 | 0.152 |
| 20 | 100.5 | 97.12 | 186.5 | 88.94 | 22.6 | 0.4445 | 0.116 |
| 21 | 121.8 | 131 | 245.4 | 99.52 | 21.63 | 0.608 | 0.239 |
| 22 | 105.1 | 115.4 | 218.9 | 110.1 | 24.52 | 0.5956 | 0.36 |
| 23 | 115.7 | 128.1 | 195.3 | 114.9 | 22.6 | 0.603 | 0.276 |
| 24 | 141.5 | 111.2 | 236.6 | 100.5 | 21.06 | 0.592 | 0.196 |
| 25 | 162.8 | 73.14 | 280.8 | 95.67 | 24.9 | 0.635 | 0.512 |
| 26 | 137 | 60.45 | 233.7 | 103.4 | 23.75 | 0.53358 | 0.264 |
| 27 | 168.9 | 95.71 | 263.1 | 96.63 | 24.9 | 0.673 | 0.507 |
| 28 | 179.5 | 105.6 | 242.5 | 115.9 | 28.56 | 0.7916 | 0.384 |
| 29 | 158.2 | 87.24 | 227.8 | 94.71 | 24.52 | 0.5995 | 0.23 |
| 30 | 187.1 | 105.6 | 251.3 | 90.87 | 22.98 | 0.6708 | 0.529 |
mean Poisson deviance, absolute logarithmic quotient error, relative absolute error, and symmetric mean absolute percent deviation. These metrics are mathematically expressed as shown in the Eqs. (6)–(11). Here, N is the total number of samples contemplated for the associated error assessment, ξ is the normalized mean of the measured data δ observed over the specified interval of sample instances, δi is the
Fig. 7 Actual and fuzzy estimation values associated with the output variable of heart risk level for each individual sample of the considered data collection
actual value of the ith risk level output, $\ddot{\delta}_i$ is the corresponding value estimated through the proposed fuzzy model designed and implemented via the simulation experiments, and $\bar{\delta}$ is the mean of the observed standard values.
$$\text{Normalized Root-Mean-Square Error} = \frac{1}{\xi}\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\ddot{\delta}_i - \delta_i\right)^2} \quad (6)$$

$$\text{Mean Poisson Deviance} = \frac{1}{N}\sum_{i=1}^{N} 2\left(\delta_i \ln\frac{\delta_i}{\ddot{\delta}_i} - \delta_i + \ddot{\delta}_i\right) \quad (7)$$

$$\text{Absolute Log Quotient Error} = \left|\ln\frac{\ddot{\delta}_i}{\delta_i}\right|, \quad \forall i = (1, 2, \ldots, N) \quad (8)$$

$$\text{Symmetric Mean Absolute \% Deviation} = \frac{100\%}{N}\sum_{i=1}^{N}\frac{\left|\ddot{\delta}_i - \delta_i\right|}{\left(\left|\ddot{\delta}_i\right| + \left|\delta_i\right|\right)/2} \quad (9)$$

$$\text{Relative Absolute Error} = \frac{\sum_{i=1}^{N}\left|\ddot{\delta}_i - \delta_i\right|}{\sum_{i=1}^{N}\left|\delta_i - \bar{\delta}\right|} \quad (10)$$

$$\bar{\delta} = \frac{1}{N}\sum_{i=1}^{N}\delta_i \quad (11)$$
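A direct transcription of Eqs. (6)–(11) into code may make the bookkeeping clearer. The sketch below is written for this exposition (plain NumPy, not taken from the original experiments), and it assumes the normalizing constant ξ is the mean of the observed data; the sample values are the first rows of Table 3.

```python
import numpy as np

def error_metrics(actual, predicted):
    """Eqs. (6)-(11): actual = observed risk levels, predicted = fuzzy estimates."""
    a, p = np.asarray(actual, float), np.asarray(predicted, float)
    xi = a.mean()                                   # assumed normalizing constant ξ
    nrmse = np.sqrt(np.mean((p - a) ** 2)) / xi     # Eq. (6)
    mpd = np.mean(2 * (a * np.log(a / p) - a + p))  # Eq. (7), mean Poisson deviance
    alqe = np.mean(np.abs(np.log(p / a)))           # Eq. (8), averaged over samples
    smapd = 100 * np.mean(np.abs(p - a) / ((np.abs(p) + np.abs(a)) / 2))  # Eq. (9)
    rae = np.sum(np.abs(p - a)) / np.sum(np.abs(a - a.mean()))  # Eqs. (10)-(11)
    return nrmse, mpd, alqe, smapd, rae

actual = [0.4569, 0.4647, 0.69195, 0.6325]   # first rows of Table 3
predicted = [0.27, 0.402, 0.473, 0.3]
print(error_metrics(actual, predicted))
```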
Fig. 8 Normalized root-mean-square error and mean Poisson deviance metrics of the predicted fuzzy output plotted as a function of the varying number of data samples

In Fig. 8, the progression of the normalized root-mean-square error and the mean Poisson deviance metrics is plotted as a function of the size of the deployed sample dataset. The expected values of these precision indicators for the developed fuzzy model are measured to be 0.46 and 0.244, respectively. Besides, the evolution of the absolute logarithmic quotient error for the varying number of specific healthcare record cases is demonstrated in Fig. 9. The average value of this quantitative evidence measure is estimated as 0.7325, which is 37.2 and 66.69% higher in proportion than the preceding two error metrics, respectively. Further, Fig. 10 illustrates the convergence of the relative absolute error computed using the presented fuzzy system for heart risk management. The mean value of
Fig. 9 Evolution of the absolute logarithmic quotient error assessed for the heart risk output depicted as a function of the varying number of record samples
Fig. 10 Evolution of the relative absolute error measured with the heart risk output represented as a function of the increasing number of data record samples
this indicator is estimated as 4.598, with the rough approximation gradually converging towards the fixed value of 3.25. Likewise, the variation of the symmetric mean absolute deviation of the proposed computational framework is shown in Fig. 11 for the advancing number of patient cardiac health record samples. The average accuracy level related to this error measurement is evaluated as 64.24%, around 86.03% lower than the relative absolute error metric. All these error
Fig. 11 Convergence of the symmetric mean absolute % deviation for the heart risk output depicted as a function of the distinct collection of records
metrics are employed to validate the performance of the proposed fuzzy healthcare model for preliminary risk prediction on subsequent unseen physiological information associated with the medical diagnosis system. In addition, Jain's fairness index measure for the actual and predicted values of chronic heart disease probability σ with the evolvement of the sample dataset size is displayed in Fig. 12. Proposed by Rajendra K. Jain, Jain's fairness index [5] is quantitatively computed as:

$$\text{Jain's Fairness Index} = \frac{\left(\sum_{i=1}^{N}\sigma_i\right)^2}{N\sum_{i=1}^{N}\sigma_i^2} \quad (12)$$
This classification metric for fair resource distribution in terms of the distinguished parameters' performance is bounded between 0 and 1. It can be observed that the expected measure of this fairness indicator is 0.96 for the classical implementation of the presented system design. By contrast, the mean fairness valuation for the fuzzy-based heart patient monitoring system is estimated to be 0.834, converging to 0.817, with the integrity efficiency of the proposed fuzzy optimization model mitigated by a nominal factor of 1.1526. Furthermore, the effectiveness of the developed fuzzy model is examined by analyzing the average entropy measure associated with the observed and predicted values of the deployed output. Introduced as an information uncertainty criterion, the entropy hypothesis was proposed by Shannon and Weaver in 1949 [13]. The entropy η(ρ) of a specific probability distribution measures the randomness of the distribution and is given as:
Fig. 12 Evolution of the Jain’s fairness index measure evaluated for the heart risk level fuzzy output contrasting with the series of sample data observations
Fig. 13 Plot of the entropy measure histogram per sample dataset instance for the fuzzy computed and the actual values of the heart risk status
$$\eta(\rho) = -\sum_{i=1}^{N}\rho_i \log_2 \rho_i \quad (13)$$
This distribution vector $\rho = (\rho_1, \rho_2, \ldots, \rho_N)$, where $\rho_i \le 1, \forall i = (1, 2, \ldots, N)$ and $\sum_{i=1}^{N}\rho_i = 1$, is defined in terms of each individual sample record instance $\sigma_i$ in the following way:

$$\rho_i = \frac{\sigma_i}{\sum_{i=1}^{N}\sigma_i} \quad (14)$$
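Equations (12)–(14) are straightforward to compute; a short illustrative sketch (plain NumPy, with toy values drawn from the actual risk column of Table 3) is:

```python
import numpy as np

def jain_fairness(sigma):
    # Eq. (12): fairness index of the risk values
    s = np.asarray(sigma, float)
    return s.sum() ** 2 / (len(s) * (s ** 2).sum())

def shannon_entropy(sigma):
    # Eqs. (13)-(14): normalize samples into a distribution, then take entropy
    s = np.asarray(sigma, float)
    rho = s / s.sum()
    return -np.sum(rho * np.log2(rho))

risk = np.array([0.4569, 0.4647, 0.69195, 0.6325, 0.6125])
print(jain_fairness(risk), shannon_entropy(risk))
```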
For the given set of heart patients' data, the entropy histogram for each record instance is plotted in Fig. 13. The average entropy measure for the predicted cardiac abnormality risk output using the proposed model is 3.34% lower than that of the observed heart risk level, indicating the reduced uncertainty associated with the execution of the proposed computationally intelligent diagnosis of heart patients.
6 Conclusions In this work, we propose a novel adaptive healthcare monitoring system for cardiac patients based on the multivariate fuzzy logic controller architecture. Distinctive health status characteristics involving several variables with indeterminate qualitative data formulation are modeled and assessed with the efficient fuzzy computing framework. Based on the Mamdani fuzzy optimization design and the knowledge
rule-based algorithm, these diverse fuzzified inputs with different units and ranges of values evaluate the singular fuzzy output of the associated heart risk susceptibility. The proposed computationally intelligent modeling approach can be employed to assess the validity of the uncertain information regarding the human health database of practical data sources. A factual sample dataset relevant to the general health issues concerning computer-assisted proper heart functioning is used to generate a model verification tool. This is aimed at providing the innovative advancement of the proposed artificial intelligence-oriented system performance on subsequent, topical, unknown information of the future medical diagnosis model in its preliminary stages. The accuracy of the developed heart disease detection model is assessed in terms of various conventional error and data entropy metrics. It can be deduced from the simulation analysis results that the normalized root-mean-square error and mean Poisson deviance metrics provide optimal solutions with up to 46 and 25% error approximations. Besides, the absolute log quotient error, relative absolute error, and symmetric mean absolute deviation measures are estimated to be around 0.73, 4.6, and 0.65, respectively. The Shannon entropy index for the empirical analysis of the proposed biomedical application based on the soft computation method is alleviated by a factor of 1.0345 in relation to the actual heart disease observational record. Also, the standard technique possesses 13.24% higher fairness accompanied by a more than 46% increased heart failure risk level. A larger sample data model with more heart-related complication features can be employed in future to obtain simulation results with enhanced statistical efficacy and higher system accuracy. Additionally, future research might adopt programmed rule establishment and optimization for smart patient monitoring through advanced machine learning strategies. Furthermore, it is suggested to utilize evolutionary heuristic procedures, including genetic algorithms, particle swarm optimization, ant colony optimization, etc., for implementing the multi-dimensional decision-making model to explore the general applicability of the proposed intelligent technique for assessing the complex conditions of critical patients under remote observation.
References 1. Aminian, M. (2013) A hospital healthcare monitoring system using wireless sensor networks. Journal of Health & Medical Informatics, 4. https://doi.org/10.4172/2157-7420.1000121. 2. Arunpradeep, N., Niranjana, G., & Suseela, G. (2020). Smart healthcare monitoring system using IoT. International Journal of Advanced Science and Technology, 29(06), 2788–2796. 3. Azizulkarim, A. H., Jamil, M. M. A., & Ambar, R. (2017). Design and development of patient monitoring system. In IOP Conference Series: Materials Science and Engineering, International Research and Innovation Summit (IRIS2017) (Vol. 226), Melaka, Malaysia. https://doi. org/10.1088/1757-899X/226/1/012094. 4. Gómez, J., Oviedo, B., & Zhuma, E. (2016). Patient monitoring system based on Internet of Things. Procedia Computer Science, 83, 90–97. https://doi.org/10.1016/j.procs.2016.04.103
5. Jain, R., Chiu, D., & Hawe, W. (1984). A quantitative measure of fairness and discrimination for resource allocation in shared systems. Digital Equipment Corporation, Technical Report DEC-TR-301. 6. Janjua, M. B., Duranay, A. E., & Arslan, H. (2020). Role of wireless communication in healthcare system to cater disaster situations under 6G vision. Frontiers in Communications and Networks, 1, 1–10. https://doi.org/10.3389/frcmn.2020.610879 7. Ko, J., Lu, C., Srivastava, M. B., Stankovic, J. A., Terzis, A., & Welsh, M. (2010). Wireless sensor networks for healthcare. Proceedings of the IEEE, 98(11), 1947–1960. https://doi.org/10.1109/JPROC.2010.2065210 8. Malasinghe, L. P., Ramzan, N., & Dahal, K. (2019). Remote patient monitoring: a comprehensive study. Journal of Ambient Intelligence and Humanized Computing, 10, 57–76. https://doi.org/10.1007/s12652-017-0598-x. 9. Manirabona, A., Fourati, L. C., & Boudjit, S. (2017). Investigation on healthcare monitoring systems: Innovative services and applications. International Journal of E-Health and Medical Communications, 8(1), 1–18. https://doi.org/10.4018/IJEHMC.2017010101 10. MATLAB [Online]. Retrieved from http://www.mathworks.com/products/matlab/description1.html. 11. Salam, H. A., & Khan, B. M. (2016). Use of wireless system in healthcare for developing countries. Digital Communications and Networks, 2(1), 35–46. https://doi.org/10.1016/j.dcan.2015.11.001 12. Santhi, S., Gandhi, A. P., Geetha, M., & Nirmala, K. (2016). Smart maternal healthcare monitoring system using wireless sensors network. BioTechnology: An Indian Journal, 12(12), 1–7, 116. 13. Shannon, C., & Weaver, W. (1949). The mathematical theory of communication. University of Illinois Press. 14. Xie, Y., Li, X., Zhang, S., & Li, Y. (2019). iCLAS: An improved certificateless aggregate signature scheme for healthcare wireless sensor networks. IEEE Access, 7, 15170–15182. https://doi.org/10.1109/ACCESS.2019.2894895 15. Zadeh, L. A. (1965). Fuzzy sets. Information and Control, 8, 338–353. 16. Zimmerman, H. J. (2001). Fuzzy set theory and its applications (4th edn.). Kluwer Academic Publishers Group.
Ensemble Machine Learning Model for Mortality Prediction Inside Intensive Care Unit Nora El-Rashidy, Shaker El-Sappagh, Samir Abdelrazik, and Hazem El-Bakry
Abstract The intensive care unit (ICU) admits the most seriously ill patients, who require extensive monitoring. Early mortality prediction is a crucial issue in intensive care: if a patient's likelihood of survival or mortality is predicted early enough, the patient can be given proper and timely care to save his or her life. In recent decades, various severity scores and machine learning models have been developed for mortality prediction; nevertheless, mortality prediction is still an open challenge. The main objective of this study is to provide a new framework for mortality prediction based on an ensemble classifier. The proposed ensemble classifier amalgamates five different classifiers: linear discriminant analysis, decision tree, multilayer perceptron, K-nearest neighbor, and logistic regression. Data is vertically divided according to expert medical opinion into six feature sets, and the most accurate classifier is chosen for each subset of features. The framework was evaluated on benchmark data from the Medical Information Mart for Intensive Care (MIMIC-III) database using the first 24 h of data collected for each patient. The performance was validated using standard metrics, including precision, recall, F-score, and area under the ROC curve. Results show an F-score of 91.02% with precision of 92.34% and recall of 90.14% for the MLP classifier, and an F-score of 93.7%, precision of 96.4%, recall of 91.1%, and AUROC of 93.3% for the proposed ensemble classifier, which outperforms the classical classifiers. The results show the validity of the proposed system as an assistant system for physicians in the ICU.

Keywords Ensemble classifier · Intensive Care Unit · Machine learning · Mortality prediction · Data fusion

N. El-Rashidy (B) Faculty of Artificial Intelligence, Machine Learning and Information Retrieval Department, Kafrelsheikh University, Kafr El-Sheikh 33516, Egypt e-mail: [email protected]
S. El-Sappagh Centro Singular de Investigación en Tecnoloxías Intelixentes (CiTIUS), Universidade de Santiago de Compostela, 15782 Santiago de Compostela, Spain
S. Abdelrazik · H. El-Bakry Faculty of Computers and Artificial Intelligence, Information Systems Department, Benha University, Banha 13518, Egypt
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 A. E. Hassanien et al. (eds.), Medical Informatics and Bioimaging Using Artificial Intelligence, Studies in Computational Intelligence 1005, https://doi.org/10.1007/978-3-030-91103-4_14
1 Introduction

The ICU is a special department in the healthcare sector that usually helps people with life-threatening injuries and illnesses [1]. Patients in the ICU need constant supervision from medical staff and caregivers to ensure the patient's health stability. Therefore, early prediction tools based on reliable prediction would be a useful caregiving aid. Mortality prediction is one of the most important issues in critical care research [2, 3]. Several scoring systems have been developed in the recent past, such as the Acute Physiology and Chronic Health Evaluation system (APACHE) [4] and the Simplified Acute Physiology Score (SAPS) [5]. The majority of these scores are not appropriate enough for all patients in the ICU, since they were developed as global offline models and thus yield unsatisfactory results. Therefore, a more general and efficient model for mortality prediction is highly required, which is the focus of our study.

The growth of Electronic Health Records (EHR) provides a great source for developing predictive and analytical systems, such as mortality prediction systems in the ICU. Various studies use single machine learning algorithms (e.g., decision trees) [6]; most of these studies provide unstable performance in terms of mortality prediction. This is due to the variety of data types in the ICU (i.e., categorical, numerical, time series) and the varying distances between them, which affect classifier performance. Accordingly, no single model can handle all datasets completely and accurately. Ensemble classifiers add a further step in this direction by developing a more efficient classification model, taking advantage of the strengths of the used classifiers and avoiding their limitations. An ensemble combines various base models, pools the predictions of the various models, and takes the majority-voted class. This way of learning is intuitive since it simulates human behavior of considering several viewpoints before making a final decision. A single classifier serves as the solution for the problem in traditional training, but multiple classifiers work together to solve the problem in the ensemble learning process. Two major groups of ensemble classifiers are:
– (1) homogeneous ensembles using the same classifier, such as RF (a collection of DTs);
– (2) heterogeneous ensembles using various classifiers, such as SVM and DT [18].
When constructing an ensemble classification, the sequence of work of all base classifiers and the method for combining individual decisions are critical when building the final ensemble classifier. The main objective of our study is to develop an ensemble classifier for mortality prediction that could handle ICU data challenges, and to compare the performance of single classifiers (KNN, LDA, MLP, LR, DT) against the proposed ensemble classifier. To test the proposed model's performance, several experiments have been conducted using the Medical Information Mart for Intensive Care III (MIMIC-III) clinical database. The feature set was extracted with three different window sizes (6 h, 12 h, 24 h) from 10,373 patients' EHRs recorded during the first 24 h of ICU admission. Multiple preprocessing steps were
applied to the dataset to handle challenges such as missing values, outliers, and imbalanced classes. To build our proposed ensemble classifier, the data is vertically divided into feature subsets according to specialist medical opinion. Every feature subset is trained using all algorithms; then, for each subset, we choose the most accurate algorithm to build the final prediction model. The results of the proposed ensemble learning model are encouraging, showing the effectiveness of the developed model in mortality prediction using data collected during the first 24 h. The rest of the article is organized as follows: Sect. 2 introduces the literature review, Sect. 3 describes the used dataset, the proposed framework is discussed in Sect. 4, Sect. 5 discusses the results, and the paper is concluded in Sect. 6.
2 Related Work

2.1 Scoring System

In the last decades, several models have been developed to assist in mortality prediction. Acute Physiology and Chronic Health Evaluation (APACHE) is considered the most widely used and well-known in critical care; APACHE was then extended to versions II and III in 1991 and 1993, respectively. Another scoring system for mortality prediction is SAPS, which was developed in 1993; SAPS is a score that ranges from 0 to 163 based on specific features. Most mortality prediction models were developed for adult patients; further information about these scores is available in [5, 7]. Overall, conventional score-based systems try to forecast mortality based on a certain number of vital signs and measures using similar methods. Consequently, there is no up-to-date scoring system that can be used effectively to predict mortality in general. This led to the need for other methods that could contribute to earlier mortality prediction.
2.2 Machine Learning Systems

The second category of works developed mortality prediction models based on data mining. There are several problems in this group too. (1) From the point of view of selecting the best algorithm: various studies suggest the early prediction of ICU mortality based on logistic regression (LR); for example, in [8] R. S. Anand et al. used LR to predict mortality among heart and coronary care patients, and another precise algorithm for mortality prediction is based on SVM [9]. (2) From the point of view of improving forecast precision, as in [10]: to manage the spectrum of the ICU and enhance prediction accuracy, authors have tried to develop mortality forecasting models for particular patient groups (e.g., patients with renal failure). While the findings of these studies are satisfactory in their domains, they were
confined to a particular domain. (3) Regarding the issue of data unavailability in the first 24 h, some researchers tried to resolve it by developing methods that correlate mortality with specific measurements; for example, Krishnan et al. [11] used only laboratory findings with an optimized neural network and improved over the state of the art. However, in terms of data availability, lab events may not be recorded for all ICU patients. Other studies have developed ensemble models to improve mortality-prediction accuracy. For example, Johnson et al. [12] provided an ensemble for survival prediction using a Bayesian ensemble schema consisting of 500 weak learners (DTs); it achieved an area under the receiver operating characteristic curve (AUROC) of 86%. Using an ensemble of the same learner may not be practical and may not deliver the best accuracy. In [13], Awad et al. designed an ensemble model for mortality prediction, which includes RF, DT, and naive Bayes (NB). They applied their ensemble model to 20 features extracted in the first 6 h period; it achieves an AUROC score of 82%. It may not provide the best performance in the first 6 h, as many values may not be available during this initial period. These inconsistencies in the results and performance reported in the literature clarify that no single algorithm outperforms the others in terms of prediction, and none of the developed systems is commonly used for prediction due to its low power of discrimination. Therefore, there is a requirement to address the above challenges to provide more accurate hospital mortality prediction. Hence, our main objective is to provide answers to the following questions. Q1: What are the most important factors that could help in the early prediction of ICU mortality? Q2: Do data mining techniques outperform the mortality scoring systems in early mortality prediction?
3 Data Set

3.1 Dataset Description

The Medical Information Mart for Intensive Care (MIMIC-III) is a database published in September 2016. It consists of EHRs for ICU patients of the Beth Israel Deaconess Medical Center in Boston (BIDMC). MIMIC-III includes data for 53,423 separate ICU admissions admitted between 2001 and 2012. The cohort is chosen from the benchmark dataset by two key inclusion criteria (see the sketch after this list):
– (1) the age criterion: only adult patients (age > 15) were selected;
– (2) to avoid potential data leakage during the study, only the first admission into the ICU was chosen for each patient.
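As a minimal sketch of how such a cohort filter might look, the following pandas snippet assumes the public MIMIC-III PATIENTS and ICUSTAYS tables with their standard column names; it is an illustration of the criteria above, not code from the original study.

```python
import pandas as pd

patients = pd.read_csv('PATIENTS.csv', parse_dates=['DOB'])
icustays = pd.read_csv('ICUSTAYS.csv', parse_dates=['INTIME'])

# Criterion (2): keep only each patient's first ICU admission
first_stays = (icustays.sort_values('INTIME')
                        .drop_duplicates('SUBJECT_ID', keep='first'))

# Criterion (1): keep adults only (age > 15 at ICU admission)
cohort = first_stays.merge(patients[['SUBJECT_ID', 'DOB']], on='SUBJECT_ID')
age_years = cohort['INTIME'].dt.year - cohort['DOB'].dt.year  # coarse age estimate
cohort = cohort[age_years > 15]
print(len(cohort), 'patients in cohort')
```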
3.2 Table Selection

MIMIC-III consists of 26 tables. Some relate to demographic patient information; others relate to vital signs, medical tests, input/output fluids, and billing items. Data collected from the chart events, output events, and laboratory events tables were used for patient tracking and future forecasting.
4 Proposed Framework

This system is primarily designed to create a model for mortality prediction using the most significant factors that indicate patient health. Data is extracted from the MIMIC-III dataset according to age and the presence of at least 3 measurements of each time-series feature. The output variable is binary, with the value 0 for survival and 1 for death. We select a feature set with 80 characteristics that significantly indicate the patient's health condition and predict the risk. These features are chosen to predict risks among the most common diseases; for example, we chose Blood Urea Nitrogen (BUN) and urea nitrogen for patients with renal failure; creatine kinase, heart rhythm, and cardiac enzymes for patients with heart disease; albumin for hepatic failure; partial thromboplastin time (PTT) for hemophilia, etc. Table 2 lists the features used in our study. Three window sizes were derived from the selected feature set (6 h, 12 h, 24 h). (1) The first module constructs a classification model using the entire feature set with a single classifier (KNN, LR, DT, LDA, MLP). (2) The other module builds an ensemble: the chosen feature set is divided into six feature subsets, each subset is classified using all classifiers, statistical tests are then applied to choose the most suitable algorithm for each subset, and the winning algorithms are combined using stacking to build a new ensemble classification model. Figure 1 clarifies the flow of data in the proposed framework.
4.1 Data Preprocessing

MIMIC-III was recorded during patients' stays in the ICU at Beth Israel Deaconess Medical Center. It contains missing values and outliers; this noise may be due to network errors, sensor errors, equipment changes, etc. In cases where most of a patient's data is missing, it is considered not recorded due to system or network errors and is therefore deleted. In the other cases, where some vital signs are missing and others are available, a few preprocessing steps are used to recover the missing values, as discussed in the following points.
Fig. 1 The proposed framework
• Sample selection: Attributes were chosen based on ICU medical expertise and on attributes commonly used in the literature. For each patient, only the first 24 h are used to build the training model [14, 15].
• Data balance: Class imbalance is a big problem on the medical side [16]. In the MIMIC-III dataset, the number of patients who died during ICU admission is comparatively small compared with the number of patients who survived. To solve the class imbalance problem, we randomly removed patients from class A (surviving patients) and ended up with 10,644 patients, with percentages of 57% and 43% for alive and dead patients, respectively.
• Unification of measurement units: As mentioned before, MIMIC-III was collected from both the MetaVision and CareVue systems; therefore, some features have inconsistent units. To handle this issue, for every feature with multiple units, if one unit accounts for >90% of the records, we keep only the records with this major unit and discard the others. For the remaining features with multiple units and no major unit, all values are converted to a single unit according to a conversion rule.
• Removing outliers: An outlier in data science is an observation too far from the other values; it constitutes noise that affects classification performance. In our study, the acceptable range of every feature was determined by medical specialists. All values outside the acceptable range are removed and then imputed as described in the following point.
• Missing values: Time-series features such as arterial blood pressure and temperature, which have between 40 and 60% missing data, were not deleted because of their importance. For this reason, in the sample selection step, only patients with at least 3 records for the time-series features were selected. Missing values of time-series features are filled using forward filling, then backward filling, and then the mean of the same patient's data; other values are filled using the expectation–maximization algorithm in the SPSS statistics tool.
Fig. 2 Extracting features according to time frames
• Data normalization: The independent variables vary widely in range; this variety adds noise during training, and multiple classifiers will not work accurately without normalization. For our study, rescaling (min–max normalization) is used. It is the simplest way to normalize, rescaling all features to the range (0–1) using Eq. (1):

$$X = \frac{x - \min(x)}{\max(x) - \min(x)} \quad (1)$$

where X is the scaled value and x is the original value of the feature (a minimal preprocessing sketch follows below).
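The outlier removal, gap filling, and rescaling steps above can be chained on a dataframe. The sketch below is a simplified, hypothetical version of such a pipeline (column names and ranges are invented for illustration; the full pipeline would apply the fills per patient):

```python
import pandas as pd

def preprocess(df, valid_ranges):
    """Sketch of the cleaning steps above.

    valid_ranges: {column: (low, high)} acceptable ranges from medical specialists
    """
    out = df.copy()
    for col, (lo, hi) in valid_ranges.items():
        # outliers outside the acceptable range become missing values
        out.loc[(out[col] < lo) | (out[col] > hi), col] = None
        # fill gaps: forward fill, then backward fill, then the column mean
        out[col] = out[col].ffill().bfill().fillna(out[col].mean())
        # min-max rescaling to [0, 1], Eq. (1)
        out[col] = (out[col] - out[col].min()) / (out[col].max() - out[col].min())
    return out

vitals = pd.DataFrame({'heart_rate': [82, 310, None, 95, 88]})  # 310 is an outlier
print(preprocess(vitals, {'heart_rate': (20, 250)}))
```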
Evaluation metrics are used in both the learning (training) phase and the testing phase of data classification; in the training phase they serve to refine the classification algorithm. For our binary classification, various evaluation metrics were used, including precision, recall, accuracy, specificity, F1-score, and area under the ROC curve, as given in Eqs. (2)–(7). The method utilized to extract the features with respect to the time frames is investigated in Fig. 2.

$$\text{Accuracy} = ACC = \frac{T_p + T_n}{T_p + F_p + T_n + F_n} \quad (2)$$

$$\text{Specificity} = SP = \frac{T_n}{T_n + F_p} \quad (3)$$

$$\text{Area under the ROC curve} = AUC = \frac{S_p - N_p(N_p + 1)/2}{N_p N_n} \quad (4)$$

$$\text{Precision} = P = \frac{T_p}{T_p + F_p} \quad (5)$$

$$\text{Recall} = R = \frac{T_p}{T_p + F_n} \quad (6)$$

$$\text{F1-score} = \frac{2 T_p}{2 T_p + F_p + F_n} \quad (7)$$
where Tp is the true positive, Tn is the true negative, Fp is the false positive, and Fn is the false negative value of the confusion matrix. On the other hand, SP represents the percentage of the negative class cases that are correctly classified, while Nn and Np are the numbers of negative and positive class cases, respectively; in Eq. (4), Sp denotes the sum of the ranks of the positive class cases.
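In practice these metrics need not be hand-coded. An illustrative computation with scikit-learn (an assumption about tooling, not stated in the chapter; the labels and probabilities are toy values) is:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, confusion_matrix)

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])          # 0 = survived, 1 = died
y_prob = np.array([0.1, 0.4, 0.8, 0.7, 0.2, 0.9, 0.6, 0.3])
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
specificity = tn / (tn + fp)                          # Eq. (3)
print(accuracy_score(y_true, y_pred),                 # Eq. (2)
      roc_auc_score(y_true, y_prob),                  # Eq. (4)
      precision_score(y_true, y_pred),                # Eq. (5)
      recall_score(y_true, y_pred),                   # Eq. (6)
      f1_score(y_true, y_pred),                       # Eq. (7)
      specificity)
```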
5 Results and Discussion

5.1 Single Classifier Results

In this section, we evaluate the proposed single classifier model using the chosen feature set with three sliding windows (6, 12, 24 h). Figure 2 depicts the different ways features can be sampled; for the three inputs (6, 12, 24 h), the event of interest is considered the prediction time. In this section, nine different experiments are constructed using five ML algorithms. Each patient record is represented by a feature vector that encodes summary information about the patient's health status over the chosen time; these summarization values allow us to consider the differences between values during the chosen time. We carefully chose machine learning algorithms commonly used on the medical side, including KNN, MLP, LDA, LR, and DT. Table 1 shows the experiment performance through evaluation metrics (F-measure, sensitivity, specificity, precision, recall, AUROC, etc.). As shown in Table 1, the LDA and MLP achieved the best testing results using the 6 and 12 h datasets, respectively.

Table 1 Single classifier results

| Time frame | Model | CV accuracy | F1 | P | R | ACC | SP | AUC |
|---|---|---|---|---|---|---|---|---|
| First 6 h | KNN | 0.861 ± 0.391 | 0.837 | 0.822 | 0.852 | 0.847 | 0.795 | 0.824 |
| | DT | 0.792 ± 0.021 | 0.787 | 0.758 | 0.766 | 0.728 | 0.745 | 0.788 |
| | LR | 0.870 ± 0.011 | 0.889 | 0.893 | 0.889 | 0.889 | 0.919 | 0.863 |
| | MLP | 0.901 ± 0.018 | 0.890 | 0.896 | 0.890 | 0.890 | 0.917 | 0.874 |
| | LDA | 0.891 ± 0.032 | 0.901 | 0.915 | 0.899 | 0.901 | 0.919 | 0.823 |
| First 12 h | KNN | 0.912 ± 0.029 | 0.877 | 0.877 | 0.868 | 0.877 | 0.867 | 0.877 |
| | DT | 0.802 ± 0.019 | 0.825 | 0.833 | 0.806 | 0.855 | 0.958 | 0.810 |
| | LR | 0.892 ± 0.022 | 0.874 | 0.876 | 0.878 | 0.878 | 0.913 | 0.863 |
| | MLP | 0.916 ± 0.033 | 0.885 | 0.884 | 0.887 | 0.889 | 0.925 | 0.874 |
| | LDA | 0.901 ± 0.073 | 0.891 | 0.910 | 0.861 | 0.890 | 0.891 | 0.865 |
| First 24 h | KNN | 0.917 ± 0.021 | 0.904 | 0.916 | 0.891 | 0.889 | 0.906 | 0.892 |
| | DT | 0.811 ± 0.012 | 0.876 | 0.866 | 0.868 | 0.828 | 0.837 | 0.832 |
| | LR | 0.880 ± 0.038 | 0.890 | 0.905 | 0.861 | 0.880 | 0.902 | 0.898 |
| | MLP | 0.929 ± 0.072 | 0.912 | 0.922 | 0.893 | 0.902 | 0.917 | 0.901 |
| | LDA | 0.901 ± 0.098 | 0.902 | 0.894 | 0.872 | 0.882 | 0.903 | 0.891 |
Table 2 Feature subsets used in the ensemble classifier

| Subset # | Features |
|---|---|
| Subset (1) | Arterial blood pressure diastolic, arterial blood pressure mean, arterial blood pressure systolic, glucose finger stick, heart rate, HR alarm [high], HR alarm [low], respiratory rate, high resp-rate, NBP alarm [low], NBP alarm [high], SpO2 alarm [high], SpO2 alarm [low], temperature C, heart rhythm, mean airway pressure, skin color, skin condition, skin integrity, skin temperature |
| Subset (2) | Arterial CO2 (Calc), arterial O2 pressure, arterial paCO2, HCO3 (serum), PAO2, calculated total CO2, arterial pH |
| Subset (3) | Anion gap, BUN, CVP, haematocrit, haemoglobin, cortisol, creatine kinase, bicarbonate, chloride, platelet count, prothrombin time, RCW, O2 flow (lpm), protein, glucose, ketone, lipase, oxygen, cardiac index, carbon dioxide, chloride (serum) |
| Subset (4) | GCS total, dorsal ped pulse [right], dorsal ped pulse [left], eye-opening, pupil response left, pupil response right, verbal response, level of consciousness, motor response, arterial line site appear, INV2 wave form appear |
| Subset (5) | Calcium, magnesium, phosphorous, sodium, phosphate, sodium urine, sodium whole blood, potassium, creatinine, RBC, alkaline phosphatase, urea nitrogen |
| Subset (6) | Tidal volume, dextrose, fentanyl, fresh frozen plasma, gastric meds, insulin, LR, norepinephrine, propofol, PO intake, solution, amiodar, epinephrine-k, KCL, neosynphrine-k, midazolam, Foley, urine out void, haemoglobin, albumin, alveolar-arterial gradient, urobilinogen, urine color, creatinine urine, urine [appearance], urine color |
6 and 12 h datasets, respectively. MLP test results are F1 = 0.890 and AUC = 0.874. Using the 12 h data, the test results of both MLP and LDA improved by about 2–3% compared with the 6 h data. In Table 2, we list the feature subsets utilized in the proposed ensemble classifier. The best result, F1 = 0.905 and AUC = 0.922, is obtained using the 24 h dataset. We have noticed the following from these experiments:
– (i) the improvement in the performance of each model correlates with the expansion of the time window;
– (ii) the feature set provides good results compared to the previous studies in all the models;
– (iii) our results demonstrate the importance of the ensemble model compared to the single classifier.
Figure 3 compares the performance of the full feature set on the time frames of 6, 12, and 24 h.
5.2 Ensemble Classifier Results The ensemble classifier is a technique used to build a new classifier based on several base classifiers to improve the accuracy of the model. Figure 4 shows the
Fig. 3 Charts of a single classifier
Fig. 4 Stacking ensemble model
stacking ensemble architecture, with the steps of the enhanced stacking algorithm given in Fig. 5. To build our proposed model, we divided the feature set into six subsets according to medical expert opinion. Each subset is tested with all algorithms, as discussed in the following subsections:
Fig. 5 Enhanced stacking algorithm. Inputs: the n × d training dataset D with class labels L = {0, 1} (0: alive, 1: died) and the set of C chosen heterogeneous base classifiers M = {MLP, KNN, LDA, LR, DT}. Output: the composite trained models and the new output class Z. The steps train each base classifier on its feature subset, then classify an unseen instance x with each base model and return the stacked prediction Z
a. Classification performance testing: This step is intended to quantify the difference between the test accuracies of the KNN, LDA, MLP, LR, and DT models on each subset. The most suitable algorithm for each subset is chosen based on this comparison.
b. Statistical significance testing: The main issue encountered when choosing the suitable algorithm for each subset is the degree of confidence in each selected model. Since our dataset is balanced, the statistical significance of the discrepancy between the tested models is measured based on reliable findings and statistical testing; we depend on the Wilcoxon signed-rank test [17] for this analysis (a toy example follows below). The grid search technique has been used to set the optimized hyperparameters for each algorithm. We ended up with the most appropriate algorithm for each subset; Table 3 shows the selected algorithm for each subset and the tuned hyperparameters, while the comparison between the proposed stacking model and the most recent stacking algorithms is shown in Table 4.
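The per-subset model selection in step b can be backed by a paired significance test; a toy example using SciPy's Wilcoxon signed-rank test (illustrative fold accuracies, not the study's numbers) is:

```python
from scipy.stats import wilcoxon

# Paired CV accuracies of two candidate models on the same folds (toy numbers)
acc_model_a = [0.91, 0.89, 0.92, 0.90, 0.93, 0.88, 0.91, 0.90, 0.92, 0.89]
acc_model_b = [0.87, 0.88, 0.89, 0.86, 0.90, 0.85, 0.88, 0.87, 0.89, 0.86]

stat, p_value = wilcoxon(acc_model_a, acc_model_b)
print(f'W={stat:.2f}, p={p_value:.4f}')  # small p -> difference is significant
```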
Our novel proposed algorithm is based on the stacked generalization (stacking) ensemble model. It aims to combine different heterogeneous learners into an efficient model in which every base-level learner performs well on its own subset. Figure 5 shows the enhanced stacking algorithm; the results of the proposed algorithm
Table 3 Optimized hyperparameters for the selected algorithms

| Subset no | Algorithm | Hyperparameters |
|---|---|---|
| St1 | DT | criterion = 'entropy', max_depth = 3, random_state = 33 |
| St2 | KNN | neighbors = 4, weights = 'uniform', algorithm = 'auto' |
| St3 | LDA | n_components = 3, solver = 'svd', tol = 0.0001 |
| St4 | DT | criterion = 'entropy', max_depth = 2, random_state = 33 |
| St5 | LR | penalty = 'l2', solver = 'sag', C = 1.0, random_state = 33 |
| St6 | MLP | activation = 'tanh', solver = 'lbfgs', learning_rate = 'constant', early_stopping = False, alpha = 0.0001, hidden_layer_sizes = (100, 3), random_state = 33 |
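Given the winning algorithm per subset (Table 3), the per-subset stacking design can be approximated with scikit-learn's stacking utilities. The sketch below is an illustration of the idea rather than the authors' code: the column indices are hypothetical, some hyperparameters are abridged, and the LR meta-learner follows the design stated in Sect. 6.

```python
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier

# Hypothetical column indices for the six expert-defined feature subsets
subsets = {'st1': [0, 1, 2], 'st2': [3, 4], 'st3': [5, 6, 7],
           'st4': [8, 9], 'st5': [10, 11], 'st6': [12, 13, 14]}
# Winning algorithm per subset (cf. Table 3, hyperparameters abridged)
winners = {'st1': DecisionTreeClassifier(criterion='entropy', max_depth=3),
           'st2': KNeighborsClassifier(n_neighbors=4),
           'st3': LinearDiscriminantAnalysis(solver='svd'),
           'st4': DecisionTreeClassifier(criterion='entropy', max_depth=2),
           'st5': LogisticRegression(penalty='l2', solver='sag'),
           'st6': MLPClassifier(activation='tanh', solver='lbfgs')}

# Each base learner sees only its own feature subset
estimators = [(name,
               make_pipeline(ColumnTransformer([(name, 'passthrough', cols)]),
                             winners[name]))
              for name, cols in subsets.items()]

# LR meta-learner pools the base predictions (stacked generalization)
ensemble = StackingClassifier(estimators=estimators,
                              final_estimator=LogisticRegression(), cv=5)
# ensemble.fit(X_train, y_train); ensemble.predict(X_test)
```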
Table 4 Comparison of the proposed stacking model with other stacking models

| Classifier | Time frame (h) | CV accuracy | F1 | P | R | ACC | SP | AUC |
|---|---|---|---|---|---|---|---|---|
| Stacking | 6 | 0.902 ± 0.036 | 0.87 | 0.88 | 0.86 | 0.90 | 0.92 | 0.89 |
| | 12 | 0.901 ± 0.062 | 0.90 | 0.93 | 0.87 | 0.91 | 0.95 | 0.90 |
| | 24 | 0.927 ± 0.021 | 0.91 | 0.93 | 0.89 | 0.93 | 0.95 | 0.91 |
| Proposed algorithm | 6 | 0.932 ± 0.041 | 0.91 | 0.94 | 0.88 | 0.93 | 0.92 | 0.92 |
| | 12 | 0.942 ± 0.032 | 0.92 | 0.95 | 0.87 | 0.94 | 0.94 | 0.92 |
| | 24 | 0.959 ± 0.088 | 0.94 | 0.96 | 0.91 | 0.94 | 0.94 | 0.93 |
confirmed the theoretical result that many decorrelated "weak learners" can asymptotically form a "strong learner". To ensure our model's effectiveness, we compared the performance of our proposed ensemble stacking model with the single classifiers and with traditional stacking in which the whole dataset is classified. Table 4 depicts the performance. In the case of 6 h, the proposed model achieved F1 = 0.911 and AUC = 0.917; compared to traditional stacking, this represents increments of 0.027 and 0.038 in AUC and F1, respectively. For 12 h, the proposed ensemble model achieved increments of 0.021 and 0.022 over traditional stacking in terms of AUC and F1, respectively. In the case of the 24 h dataset, the proposed model achieved F1 = 0.937 and AUC = 0.933; compared to the traditional stacking model, these are increments of 0.025 and 0.022 in F1 and AUC. Figure 6 shows the ROC curves for traditional stacking and our ensemble model in terms of various metrics. The proposed ensemble classifier performs better than all single classifiers, with increments between 0.045 and 0.142 in terms of AUC and between 0.035 and 0.15 in terms of F1, which confirms the core of our study. The results are considered acceptable and very encouraging from a medical point of view and will help physicians avoid deterioration in the patient's condition and sudden death.
Fig. 6 Comparison between ROC curves
6 Conclusion and Future Work

This chapter presented a heterogeneous and medically intuitive ensemble classifier for ICU mortality prediction. With the guidance of a medical expert, the feature list was divided into six different subsets. A comprehensive analysis was conducted to select the best model for each subset of features. We developed our proposed stacking ensemble classifier by using LR as a meta-learner. The performance of the proposed ensemble model was evaluated and compared with single classifier models and traditional ensemble models. The evaluation process was completed using k-fold cross-validation. Our proposed ensemble classifier achieved an encouraging performance (F1 = 0.937, ACC = 0.944, and AUC = 0.933) that improved the accuracy of the state-of-the-art studies by 2–3%. The proposed model is medically more intuitive because it is based on a comprehensive list of patient features. In addition, the model has been designed under the guidance of a medical expert.
References 1. Rouleau, G., Gagnon, M. P., & Côté, J. (2015). Impacts of information and communication technologies on nursing care: An overview of systematic reviews (protocol). Systematic Reviews, 4(1), 1–8. 2. Karunarathna, K. M. D. M. (2018). Predicting ICU death with summarized patient data. In 2018 IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC 2018) (Vol. 2018, pp. 238–247), January 2018. 3. Sadeghi, R., Banerjee, T., & Romine, W. (2018). Early hospital mortality prediction using vital signals. Smart Heal, 9–10, 265–274. 4. Lee, C., & Rainer, T. (2002). Application of APACHE II in the assessment, classification of severity and predictive ability of chinese patients presenting to an emergency department resuscitation room. Hong Kong Journal of Emergency Medicine, 9(4), 188–194. 5. Jeong, S. (2018). Scoring systems for the patients of intensive care unit. Acute Critical Care, 33(2), 102–104.
6. Todd, J., Gepp, A., Richards, B., & Vanstone, B. J. (2019). Improving mortality models in the ICU with high-frequency data. International Journal of Medical Informatics, 129(July), 318–323. 7. Vincent, J. L., et al. (1996). The SOFA (Sepsis-related Organ Failure Assessment) score to describe organ dysfunction/failure. On behalf of the working group on sepsis-related problems of the European society of intensive care medicine. Intensive Care Medicine, 22(7), 707–710. 8. Anand, R. S., et al. (2018). Predicting mortality in diabetic ICU patients using machine learning and severity indices. In AMIA Jt. Summits Translational Science, Proceedings (Vol. 2017, pp. 310–319). 9. Arzeno, N. M., Lawson, K. A., Duzinski, S. V., & Vikalo, H. (2015). Designing optimal mortality risk prediction scores that preserve clinical knowledge. Journal of Biomedical Informatics, 56, 145–156. 10. Bayrak, S. (2016). Intensive Care Unit—Clinical decision support system, pp. 41–44. 11. Krishnan, G. S., & Sowmya Kamath, S. (2019). A novel GA-ELM model for patient-specific mortality prediction over large-scale lab event data. Applied Soft Computing Journal, 80, 525–533. 12. Johnson, A. E. W., Dunkley, N., Mayaud, L., Tsanas, A., Kramer, A. A., & Clifford, G. D. (2010). Patient specific predictions in the intensive care unit using a Bayesian ensemble. Computing in Cardiology, 39(Mimic), 249–252. 13. Awad, A., Bader-El-Den, M., McNicholas, J., & Briggs, J. (2017). Early hospital mortality prediction of intensive care unit patients using an ensemble learning approach. International Journal of Medical Informatics, 108(October), 185–195. 14. Kalid, N., Zaidan, A. A., Zaidan, B. B., Salman, O. H., Hashim, M., & Muzammil, H. (2018). Based real time remote health monitoring systems: A review on patients prioritization and related 'Big Data' using body sensors information and communication technology. Journal of Medical Systems, 42(2). 15. Muzammil, H. (2018). Based real time remote health monitoring systems: A review on patients prioritization and related 'Big Data' using body sensors information and communication technology. Journal of Medical Systems, 42(2). 16. Ph, D., Cooper, G. F., Ph, D., & Clermont, G. (2014). NIH public access, 46(1), 47–55. 17. Molenberghs, A. G., & Hasselt, U. (2005, October). Models for discrete longitudinal data. Models for Discrete Longitudinal Data, 0–2. 18. Kayal, P., & Kannan, S. (2017, March). An ensemble classifier adopting random subspace method based on fuzzy partial mining, 10.