Artificial Intelligence for Data-Driven Medical Diagnosis
Intelligent Biomedical Data Analysis (IBDA)
Edited by Deepak Gupta, Nhu Gia Nguyen, Ashish Khanna, Siddhartha Bhattacharyya
Volume 3
Artificial Intelligence for Data-Driven Medical Diagnosis Edited by Deepak Gupta, Utku Kose, Bao Le Nguyen, Siddhartha Bhattacharyya
Editors Assist. Prof. Dr. Deepak Gupta Dept. of Computer Science and Engineering, Maharaja Agrasen Institute of Technology, Sector-22, Rohini, Delhi, India https://sites.google.com/view/drdeepakgupta/home [email protected] Assoc. Prof. Dr. Utku Kose Dept. of Computer Engineering, Faculty of Engineering, Suleyman Demirel University, Isparta, Turkey http://www.utkukose.com/ [email protected]
Dr. Bao Le Nguyen Dept. of Computer Science, Duy Tan University, Nguyen Van Linh 254, Danang City, Vietnam [email protected] Prof. Dr. Siddhartha Bhattacharyya Dept. of Computer Science and Engineering, CHRIST (Deemed to be University), Bangalore, India http://www.drsiddhartha.net/ [email protected]
ISBN 978-3-11-066781-3 e-ISBN (PDF) 978-3-11-066832-2 e-ISBN (EPUB) 978-3-11-066838-4 ISSN 2629-7140 Library of Congress Control Number: 2020946995 Bibliographic information published by the Deutsche Nationalbibliothek The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de. © 2021 Walter de Gruyter GmbH, Berlin/Boston Cover image: gettyimages/thinkstockphotos, Abalone Shell Typesetting: Integra Software Services Pvt. Ltd. Printing and binding: CPI books GmbH, Leck www.degruyter.com
Dr. Deepak Gupta would like to dedicate this book to his father, Shri. R. K. Gupta, his mother, Smt. Geeta Gupta, for their constant encouragement, his family members including his wife, brothers, sisters and kids and to his students close to his heart. Dr. Utku Kose would like to dedicate this book to his father, Zafer Yilmaz Kose, his mother, Saduman Kose, his brother, Umut Kose, his beautiful wife, Gamze Kose, lovely Misha and to his students having a great interest in the field of artificial intelligence. Dr. Siddhartha Bhattacharyya would like to dedicate the book to Dr. Iven Jose, Dean (Engineering), CHRIST (Deemed to be University), Bangalore.
Preface
The use of intelligent systems in medical diagnosis has long been a research topic at the intersection of artificial intelligence and the health sciences. Driven by the need for more accurate and faster diagnostic systems, researchers have taken a great interest in designing and developing artificial intelligence-based solutions. Here, the most important solution mechanism is provided by the subfield of machine learning, which includes many algorithms that can learn a problem from known samples. Since a medical diagnosis is a combination of medical state analyses and predictions, machine learning has become a popular approach for innovative solutions. It has since evolved into deep learning, in which more advanced forms of neural networks are used for better analysis of bigger and more complex data. Medical imaging and medical data from different sources, in particular, have been key components for building deep learning-oriented intelligent systems. The potential success of such "learning" systems is always tied to the data used. As data is central in the age of informatics, careful collection of data is the key point for medical diagnosis. Since medical data can come in different forms, it has always been essential to focus on the data, to model the target medical diagnosis solution according to that data and even to form hybrid systems with preprocessing modules. Nowadays, cancer and other mortal diseases attract particular interest, since it is important to automate early diagnosis approaches for them. Furthermore, effective intelligent medical diagnosis systems are also useful for planning further treatment.
With this motivation (and, of course, for scientific reasons), we would like to introduce our edited book Artificial Intelligence for Data-Driven Medical Diagnosis, a collection of 14 chapters covering recent research directions and advancements in the context of artificial intelligence and medical diagnosis. In detail, we have included the findings of different chapters on different data-oriented diagnosis solutions in which artificial intelligence plays a central role. Our book is supported by a foreword by Prof. Dr. Aboul Ella Hassanien, with his valuable perspectives and future insights. As he has indicated, we are currently going through the serious COVID-19 pandemic, and this book is a timely approach to understanding the current role of intelligent systems in medical diagnosis. We can briefly describe each of the chapters included in the book as follows:
Chapter 1 provides research on the use of convolutional neural network (CNN) and LightGBM techniques for the diagnosis of cancerous lung nodules. In detail, the chapter presents an alternative automated deep learning solution for predicting lung cancer in a patient from chest computed tomography scans.
Chapter 2 considers the use of deep learning for cellular image analysis. In detail, the chapter gives a general summary of the widely used deep learning schemes in cellular imaging and discusses major tasks, potential applications and both opportunities and challenges in this regard.
Chapter 3 follows the use of CNN and a recent deep learning technique, "CapsNet" (capsule neural network), for metastatic breast cancer diagnosis. The chapter reports findings for both CNN and CapsNet models and also considers alternative models such as ResNet, AlexNet and GoogLeNet.
Chapter 4 asks whether machine learning can be effective enough at predicting cancer and provides a general analysis of different machine learning techniques for cancer diagnosis. The chapter considers techniques such as logistic regression, k-nearest neighbor (k-NN), support vector machines, naïve Bayes, decision trees and random forest classification, and reports their results on cancer diagnosis.
Chapter 5 focuses on the concept of AIM, the abbreviation for artificial intelligence in medicine, and describes how intelligent systems are applied in medicine, covering the topics of diagnosis, prognosis and therapy applications.
Chapter 6 provides a general view of the use of neural networks for disease diagnosis. The chapter evaluates the performance of different neural network models over datasets such as the Cleveland heart disease, Pima Indian diabetes, hepatitis and breast cancer datasets.
Chapter 7 introduces a neutrosophic median filter approach for computed tomography (CT)/magnetic resonance (MR) images, with the denoised image then used in Gaussian mixture model segmentation. The authors report findings/results on real-time abdomen CT and brain MR images.
Chapter 8 focuses on the remarkable topic of data privacy in healthcare and proposes an artificial intelligence-based approach to data collection and data privacy in healthcare applications. The authors give a general review, discuss future uses of integrating blockchain with intelligent systems and introduce a multiplatform interoperable scalable architecture model for healthcare data sharing.
Chapter 9 addresses the diagnosis of noncommunicable diseases (NCDs) and gives a general perspective on the change from conventional to intelligent healthcare. The chapter discusses the rapid rise of the Internet of health things as an effective response to the vital NCD burden.
Chapter 10 proposes automated gastric cancer detection using machine learning techniques. The authors perform a classification-oriented study considering techniques such as random forest, decision tree, k-NN and adaptive boosting.
Chapter 11 starts from a general view of artificial intelligence-based medical diagnosis and shows a way to use three-dimensional printing technologies in this context. The chapter opens many doors for readers to think about the future of medical equipment development as well as automated diagnosis.
Chapter 12 considers the diagnosis of breast cancer using deep neural networks. Here, the solution is structured with a transfer learning strategy applied to histopathological image data. In detail, the research employs five different models.
Chapter 13 follows an ancient Indian health system called Ayurvedic therapy. The chapter employs a machine vision-based tongue diagnosis technique in the context of that therapy and its medicine-oriented treatment approach, running machine learning techniques for better classification-based analysis of the diagnosis.
Chapter 14 provides a remarkable work on breast cancer diagnosis and the use of neural networks for it. The chapter brings in the vine copula method to overcome the complexity of higher-dimensional medical data and provides findings/results for both the copula graphical model and the artificial neural network model.
As mentioned, all the chapters offer different perspectives on data-driven diagnosis via intelligent systems. We hope every chapter will bring many future insights to readers. Our valuable readers are all welcome to read each chapter in depth to obtain recent information about the literature. We also seek valuable feedback and constructive ideas from all readers on how we can further improve data-driven intelligent medical diagnosis systems, as the currently experienced cases of the COVID-19 virus are changing the future of artificial intelligence and healthcare greatly. As editors, we would like to thank all readers of this book and invite everyone to read our future works as well.
https://doi.org/10.1515/9783110668322-202
Acknowledgment As editors, we would like to thank Aneta Cruz-Kąciak and the De Gruyter team for their valuable efforts and great support in organizing the content and publishing the book.
https://doi.org/10.1515/9783110668322-203
Contents
Preface VII
Acknowledgment XI
List of contributors XV
Subrato Bharati, Prajoy Podder
1 Performance of CNN for predicting cancerous lung nodules using LightGBM 1
Kerem Delikoyun, Ersin Cine, Muge Anil-Inevi, Oyku Sarigil, Engin Ozcivici, H. Cumhur Tekin
2 Deep learning-based cellular image analysis for intelligent medical diagnosis 19
Nazan Kemaloğlu, Turgay Aydoğan, Ecir Uğur Küçüksille
3 Deep learning approaches in metastatic breast cancer detection 55
Sonia Singla, G. Veeramalai, S. Aswath, Vinod Kumar Pal, Vikas Namjoshi
4 Machine learning: an ultimate solution for diagnosis and treatment of cancer 85
Sahar Qazi, Naiyar Iqbal, Khalid Raza
5 Artificial intelligence in medicine (AIM): machine learning in cancer diagnosis, prognosis and therapy 103
Omer Deperlioglu
6 Diagnosis disease from medical databases using neural networks: a review 127
A. Lenin Fred, S. N. Kumar, Parasuraman Padmanabhan, Balazs Gulyas, H. Ajay Kumar
7 A novel neutrosophic approach-based filtering and Gaussian mixture modeling clustering for CT/MR images 143
Enis Karaarslan, Enis Konacaklı
8 Decentralized solutions for data collection and privacy in healthcare 167
Kurubaran Ganasegeran
9 Navigation from conventional to intelligent healthcare: adoption of Internet of health things for noncommunicable disease screening, diagnosis, monitoring and treatment in community settings 191
Prajoy Podder, Subrato Bharati, M. Rubaiyat Hossain Mondal
10 Automated gastric cancer detection and classification using machine learning 207
Bekir Aksoy, Koray Özsoy, Osamah Salman, İrem Sayin
11 Artificial intelligence applications for medical diagnosis and production with 3D printing technologies 225
Kerim Kürşat Çevik, Emre Dandil, Süleyman Uzun, Mehmet Süleyman Yildirim, Ali Osman Selvi
12 Detection of breast cancer using deep neural networks with transfer learning on histopathological images 245
Harshvardhan Tiwari, Shivani S. Pai, N. S. Sumanth, Arundhati S. Hegde
13 A machine vision technique-based tongue diagnosis system in Ayurveda 265
Vilda Purutçuoğlu, Hajar Farnoudkia
14 Vine copula and artificial neural network models to analyze breast cancer data 287
Index 305
List of contributors Muge Anil-Inevi Department of Bioengineering Izmir Institute of Technology Izmir, Turkey [email protected]
Ersin Cine Department of Computer Engineering Izmir Institute of Technology Izmir, Turkey [email protected]
Bekir Aksoy Faculty of Technology Mechatronics Engineering Isparta University of Applied Sciences Isparta, Turkey [email protected]
Emre Dandil Department of Computer Engineering Faculty of Engineering Bilecik Seyh Edebali University Bilecik, Turkey [email protected]
S. Aswath Department of Computer Science PES University Bangalore, India [email protected]
Kerem Delikoyun Department of Computer Engineering Izmir Institute of Technology Izmir, Turkey [email protected]
Turgay Aydoğan Department of Computer Engineering Engineering Faculty Suleyman Demirel University Isparta, Turkey [email protected]
Omer Deperlioglu Department of Computer Technology Afyon Kocatepe University Afyonkarahisar, Turkey [email protected]
Subrato Bharati Ranada Prasad Shaha University Narayanganj, Bangladesh Institute of Information and Communication Technology Bangladesh University of Engineering and Technology Dhaka, Bangladesh [email protected] Kerim Kürşat Çevik Department of Management Information Systems Faculty of Social Sciences and Humanities Akdeniz University Antalya, Turkey [email protected]
https://doi.org/10.1515/9783110668322-205
Hajar Farnoudkia Department of Statistics Middle East Technical University Ankara, Turkey [email protected] A. Lenin Fred Mar Ephraem College of Engineering and Technology Elavuvilai, Tamil Nadu, India [email protected] Kurubaran Ganasegeran Clinical Research Centre Seberang Jaya Hospital Ministry of Health Malaysia Penang, Malaysia [email protected]
Balazs Gulyas Lee Kong Chian School of Medicine Nanyang Technological University Singapore [email protected] Arundhati S. Hegde Information Science and Engineering Department Jyothy Institute of Technology Bengaluru, Karnataka, India [email protected] Naiyar Iqbal Department of Computer Science and IT Maulana Azad National Urdu University Hyderabad, India [email protected] Enis Karaarslan Department of Computer Engineering Muğla Sıtkı Koçman University Merkez, Muğla, Turkey [email protected] Nazan Kemaloğlu Information Technology Application and Research Center Mehmet Akif Ersoy University Burdur, Turkey [email protected] Enis Konacaklı Department of Computer Engineering Eskişehir Technical University Eskişehir, Turkey [email protected] Ecir Uğur Küçüksille Department of Computer Engineering Engineering Faculty Suleyman Demirel University Isparta, Turkey [email protected] H. Ajay Kumar Mar Ephraem College of Engineering and Technology Elavuvilai, Tamil Nadu, India [email protected]
S. N. Kumar Amal Jyothi College of Engineering Kanjirappally, Kerala, India [email protected] M. Rubaiyat Hossain Mondal Institute of Information and Communication Technology Bangladesh University of Engineering and Technology Dhaka, Bangladesh [email protected] Vikas Namjoshi Balaji Institute of Modern Management Pune, India [email protected] Engin Ozcivici Department of Bioengineering Izmir Institute of Technology Izmir, Turkey [email protected] Koray Özsoy Senirkent Vocational School Isparta University of Applied Sciences Isparta, Turkey [email protected]
Parasuraman Padmanabhan Lee Kong Chian School of Medicine Nanyang Technological University Singapore [email protected] Shivani S. Pai Information Science and Engineering Department Jyothy Institute of Technology Bengaluru, Karnataka, India [email protected] Vinod Kumar Pal Balaji Institute of International Business Pune, India [email protected]
Prajoy Podder Institute of Information and Communication Technology Bangladesh University of Engineering and Technology Dhaka, Bangladesh [email protected] Vilda Purutçuoğlu Department of Statistics Middle East Technical University Ankara, Turkey [email protected] Sahar Qazi Department of Computer Science Jamia Millia Islamia New Delhi, India [email protected] Khalid Raza Department of Computer Science Jamia Millia Islamia New Delhi, India [email protected] Osamah Salman Faculty of Technology Mechatronics Engineering Isparta University of Applied Sciences Isparta, Turkey [email protected] İrem Sayin Faculty of Technology Mechatronics Engineering Isparta University of Applied Sciences Isparta, Turkey [email protected] Oyku Sarigil Department of Bioengineering Izmir Institute of Technology Izmir, Turkey [email protected] Ali Osman Selvi Department of Computer Technology Vocational School
Bilecik Şeyh Edebali University Bilecik, Turkey [email protected] Sonia Singla University of Leicester United Kingdom [email protected] N. S. Sumanth Information Science and Engineering Department Jyothy Institute of Technology Bengaluru, Karnataka, India [email protected] H. Cumhur Tekin Department of Bioengineering Izmir Institute of Technology Izmir, Turkey [email protected] Harshvardhan Tiwari Centre for Incubation, Innovation, Research and Consultancy Jyothy Institute of Technology, Bengaluru Karnataka, India [email protected] Süleyman Uzun Department of Computer Engineering Faculty of Engineering, Bilecik Seyh Edebali University Bilecik, Turkey [email protected] G. Veeramalai Department of Mathematics M. Kumarasamy College of Engineering Karur, India [email protected] Mehmet Süleyman Yildirim Department of Computer Technology Söğüt Vocational School Bilecik Şeyh Edebali University Bilecik, Turkey [email protected]
Subrato Bharati, Prajoy Podder
1 Performance of CNN for predicting cancerous lung nodules using LightGBM
Abstract: Lung cancer is a common type of cancer. The objective of this chapter is to predict lung cancer in a patient from chest computerized tomography scans. A convolutional neural network (CNN), pretrained on ImageNet, is used to generate features from the dataset. The extracted features are then fed into the proposed classifiers in order to train a good classifier. A boosted tree, LightGBM (light gradient boosting machine), is used to perform image classification on the validation dataset. Receiver operating characteristic (ROC) curves and log loss are evaluated on the training and validation datasets using LightGBM on top of features generated by two models, VGG19 (Visual Geometry Group 19) and ResNet50. The log loss of the residual network on the validation dataset is lower than that of VGG. The ROC curves of the VGG19 and ResNet50 architectures are compared. This research work also emphasizes the multiple-instance nature of detecting nodules across many scanned images. In the simulation results, the area under the ROC curve is larger for the CNN-based approach than for the recurrent neural network. Keywords: VGG, ResNet, LightGBM, CNN, LSTM
1.1 Introduction
Lung cancer is the second leading cause of death worldwide. According to a report of the World Health Organization, there were approximately 150,781 cancer patients in Bangladesh in 2018, about 55.52% of them males and 44.48% females. According to the analysis of the International Agency for Research on Cancer, 12,374 (8.2%) of these people were affected by lung cancer. Therefore, lung cancer is common in Bangladesh. Recovery and survival rates can be improved if early detection of cancer is possible (Bharati et al., 2020a, Bharati et al., 2020c). At present, chest scans are performed to diagnose lung cancer. A lung scan can generate a few dozen to a few hundred cross-sectional two-dimensional (2D) images of the chest for each patient. These images must be examined by the radiologist. Subrato Bharati, Ranada Prasad Shaha University, Narayanganj, Bangladesh; Institute of Information and Communication Technology, Bangladesh University of Engineering and Technology, Dhaka, Bangladesh Prajoy Podder, Institute of Information and Communication Technology, Bangladesh University of Engineering and Technology, Dhaka, Bangladesh https://doi.org/10.1515/9783110668322-001
The radiologist can notice shape changes between neighboring slices and recognize lung nodules, and can then decide whether a lung nodule is malignant (cancerous) or benign. A lung nodule is a small growth on the lungs, normally less than 3 cm in size; a nodule larger than 3 cm is suspicious (Wei et al., 2015). Many machine learning methods have already been proposed to identify cancerous nodules. Many of them use a two-step procedure consisting of region generation and classification. The region generation step recognizes regions of interest that might contain pulmonary nodules, and the classification step estimates the probability that a nodule is cancerous (Awais et al., 2015, Rushil et al., 2016). To address limited data, time and resources, networks ideally take advantage of transfer learning. While lung cancer scanning seems highly specialized, there have been successful projects that use pretrained convolutional neural networks (CNNs) and fine-tune them to detect lung cancer nodules (Bharati et al., 2020d). For example, Ramaswamy et al. (2016) demonstrated the efficacy of transfer learning from general image classification: they used AlexNet and GoogLeNet pretrained on ImageNet and, by fine-tuning, were able to achieve good scores on predicting whether a nodule was cancerous (Ramaswamy et al., 2016). More pertinent to this project, Tripathi and Tu (2018) showed that by using a residual network (ResNet) pretrained on ImageNet as the feature generator, they were able to train a boosted tree that achieved reasonable results in predicting whether the complete set of a patient's scans shows any cancer (with no specific nodule detection). The problem with these models is that they often cannot take advantage of the three-dimensional (3D) information across sections (e.g., they simply average across sections; Ramaswamy et al., 2016). Hence, 3D CNNs have been used on chest scans (Ramaswamy et al., 2016). Ypsilantis et al. (2016) achieved good results with a model that added a recurrent neural network (RNN) to a traditional 2D CNN. They used their network on patches that might contain cancer nodules (the size of the recurrence was always seven slices, which created a sort of fixed-size voxel). Bharati et al. (2020a) proposed a vanilla neural network, CNN, modified VGG (Visual Geometry Group) and capsule network for detecting several lung diseases from lung X-ray images. The purpose of our research is to find the probability that a given set of scans shows any cancer. We take the key idea of transfer learning and add recurrent long short-term memory (RNN LSTM) layers. The RNN LSTM layers allow the transfer learning model to learn 3D information across slices, not only handling the multiple-instance nature of lung nodule detection but also working with a variable number of slices per patient.
Full images are used in our experiment; there are no patches. The RNN is run over all slices with a single output. The benefit of adding LSTM, and its performance relative to the boosted tree model, is also described. The main contributions of our work are as follows: (i) Transfer learning is adopted with a CNN. (ii) The CNN is pretrained on ImageNet to generate features from the dataset; the two pretrained models used as featurizers are ResNet50 and VGG19, with TensorFlow as the backend. (iii) The extracted features are fed into the proposed classifiers in order to train a good classifier; a boosted tree, LightGBM (light gradient boosting machine), is used to perform image classification on the validation dataset. (iv) Log loss is evaluated for the VGG19 and ResNet50 architectures. (v) A comparison of receiver operating characteristic (ROC) curves for both models is presented in this chapter.
1.2 Literature reviews
In the 1980s, computer-aided detection (CAD) systems were designed to detect lung nodules, but they were not very successful because there were not enough computational resources to implement advanced image processing. With the invention of CNNs, the implementation and performance of CAD-based advanced image processing techniques became very fruitful. Many researchers have proposed deep learning models, but relatively few deep learning model-based image analyses have been proposed for lung nodule detection. Some relevant deep learning-based lung cancer detection and classification methods are mentioned in this literature review, focusing on recent works. A 3D CNN was proposed in Setio et al. (2017) for the purpose of reducing the false positive rate (FPR) when classifying lung nodules; the 3D network was designed to exploit the 3D nature of computerized tomography (CT) images. Zhu et al. (2018) proposed a faster region-based CNN, also known as Faster R-CNN, for cancerous lung nodule detection; Faster R-CNN shows very good results for object detection. Three-dimensional dual-path blocks and a GBM were used for classifying lung nodules (Zhu et al., 2018), and a U-Net encoder–decoder architecture was also used for detecting nodules (Zhu et al., 2018), with LIDC-IDRI as the dataset. Ren et al. (2017) proposed a special CNN-based model, named region proposal networks, for efficient object detection on an input image of any size. According to Jiang et al. (2017), a CNN was applied to the LIDC-IDRI dataset in a CAD system where the grade level of lung nodules was 4; they used 2D images as input. A deep fully convolutional network was proposed by Masood et al. (2018) for diagnosing lung cancers from images in the LISS database. The basic input image size
of DFC was 512 × 512 with a gray channel, but the images were resized to 100 × 100. A 3D deep CNN applied to segmented images was proposed for lung nodule detection (Gu et al., 2018); with rich features, the performance of a 3D CNN is much better than that of a 2D CNN. Huang et al. (2017) proposed a densely connected CNN in order to extend the performance of deep CNNs; the dense block had five layers, and CIFAR was used as the dataset. Nasrullah et al. (2019) proposed a mixed link architecture (MixNet), in which four mixed link blocks were used on the ImageNet dataset; the input image size was 224 × 224, and the CIFAR and SVHN datasets were also used. Recently, COVID-19 has become a global pandemic. CNNs can also play an important role in detecting COVID-19 in patients from chest X-ray and CT images, because affected patients may develop pneumonia and experience failure of multiple organs, leading to possible death in many cases. COVID-19 spreads by respiratory droplets from patients (Mondal et al., 2020).
1.3 Theoretical description
1.3.1 Visual Geometry Group
The Visual Geometry Group (VGG) network was introduced by Simonyan and Zisserman (2014) and has been used in many studies (Bharati et al., 2020b). At training time, the input to the VGG model is an RGB image of size 224 × 224. The input image is passed through a series of convolutional layers, which use filters with a 3 × 3 receptive field. The convolution stride is fixed at one pixel, and the spatial padding is one pixel for the 3 × 3 convolution layers. Five maxpooling layers implement spatial pooling; maxpooling is performed over a window of 2 × 2 pixels. FC-4096 denotes a fully connected layer of 4,096 nodes. The VGG configurations are illustrated in Figure 1.1.
1.3.2 ResNet
ResNet was first introduced by He et al. (2015). Unlike VGG, ResNet consists of several microarchitecture modules. ResNet architectures have been demonstrated with 50, 101 and even 152 layers, and the deeper the ResNet, the better its performance. Figure 1.3 illustrates a basic ResNet structure, which can be broken into components such as convolution, batch normalization and rectified linear unit (ReLU) operations. ResNet50 is used in this chapter as a pretrained model. The size of the input image is 224 × 224 with a depth of 3. The output size of the first layer is 112 × 112 and the output size of the last layer is 7 × 7. Table 1.1 describes the layers of the ResNet50 architecture.
Figure 1.1: VGG configurations (ConvNet configurations A, A-LRN, B, C, D and E with 11, 11, 13, 16, 16 and 19 weight layers, respectively; each configuration stacks 3 × 3 convolution layers of depth 64 to 512 separated by maxpooling layers, applied to a 224 × 224 RGB input and followed by FC-4096, FC-4096, FC-1000 and softmax layers).
Figure 1.2: Basic diagram of VGG19 (blocks conv1_1–conv1_2 with depth 64, conv2_1–conv2_2 with depth 128, conv3_1–conv3_4 with depth 256, and conv4_1–conv4_4 and conv5_1–conv5_4 with depth 512, each followed by maxpooling, then fully connected layers FC1 and FC2 of size 4,096 and a softmax output of size 1,000).
Figure 1.3: Residual network block (input → convolution → batch norm → ReLU → convolution → batch norm, summed with a 1 × 1 convolution shortcut of the input and passed through a final ReLU).
Table 1.1: ResNet50's layers.

Name of the layer    Output size    No. of layers
conv1                112 × 112      1
conv2.x              56 × 56        9
conv3.x              28 × 28        12
conv4.x              14 × 14        18
conv5.x              7 × 7          9
1.4 Approaches
In this section, we describe the various approaches used for detecting and classifying lung cancer.
1.4.1 LightGBM on top of ResNet50 features and VGG19 features
This section provides a detailed description of training a LightGBM model (Lin et al., 2019) on top of VGG19 (Simonyan and Zisserman, 2014; Abiyev and Ma'aitah, 2018) and ResNet50 (He et al., 2015) features. The basic structure for image classification is as follows: features are created using a pretrained model, and a boosted tree, LightGBM, is then trained to calculate predictions on the validation dataset.
1.4.1.1 Basic architecture of the model
Several deep learning-driven applications use pretrained models. For image classification, the first several layers of a CNN generally capture low-level features, whereas the last several layers capture high-level features geared toward the specific classification task. We apply two pretrained models with the same overall architecture: we discard the last layer of the pretrained CNN models and feed the outputs of the penultimate layers into the prediction function. Once the features are created, a boosted tree, Microsoft LightGBM, is used to classify the images. Figure 1.4 demonstrates the basic ResNet architecture, where 224 × 224 is the input image size with a depth of 3, corresponding to the three color (RGB) channels. The image passes through convolutions in each internal layer, shrinking in size while growing in depth. The last layer provides the output, where the predicted class of the network is the element with the highest probability in a vector over the 1,000 ImageNet classes.
Figure 1.4: Demonstration of ResNet (k patches of 224 × 224 × 3 images pass through the first N − 1 ResNet layers, and the output of the penultimate layer is taken as the features).
1.4.1.2 Extract features
To extract features, VGG19 and ResNet50 are used as pretrained CNN models in this chapter. The proposed model is shown in Figure 1.5. The input image size of a patient's scanned image is 224 × 224 × 3. The input images are fed into the network model in batches and convolved in every layer up to the penultimate layer, whose output is converted into features using the Keras feature extraction function (a minimal code sketch follows Figure 1.5). The extracted features differ for every patient because each scanned image has a different number of slices. The feature shapes for the VGG19 and ResNet50 architectures are (k, 7, 7, 512) and (k, 1, 1, 2048), respectively, where k denotes the number of batches fed into the neural network. If a patient's scan has m slices, then k is equal to m/3. LightGBM classifies the extracted images into two classes: cancer or no cancer.
Figure 1.5: Workflow of the proposed solution (k patches of 224 × 224 × 3 images pass through the first N − 1 layers of the CNN; the penultimate-layer features are fed to the LightGBM boosted tree, which outputs cancer (yes/no)).
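As a concrete illustration of this feature-extraction step, a minimal sketch using the Keras applications API is given below; the variable names and preprocessing pipeline are illustrative assumptions, and the exact feature shapes depend on the Keras version and pooling settings.

```python
# Minimal sketch of penultimate-layer feature extraction with Keras.
# Assumes the CT slices of one patient have already been grouped into
# k arrays of shape (224, 224, 3); names here are illustrative.
import numpy as np
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input

# include_top=False drops the final classification layers, so the model
# outputs penultimate-layer features of shape (k, 7, 7, 512) for VGG19.
# ResNet50 can be swapped in analogously for (k, 1, 1, 2048)-style
# features, depending on the pooling configuration.
featurizer = VGG19(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))

def extract_features(patient_batches):
    """patient_batches: array of shape (k, 224, 224, 3) for one patient."""
    x = preprocess_input(patient_batches.astype("float32"))
    return featurizer.predict(x)  # (k, 7, 7, 512)
```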
1.4.1.3 Train LightGBM classifier
GBM uses tree-based learning methods. Features generated from VGG19 and ResNet are fed into the LightGBM classifier. Two approaches are used to train this classifier (a minimal code sketch is given after Table 1.2):
Approach 1: the average of each patient's features is used as the training data.
Approach 2: every image feature of every patient is used.
We performed cross-validation to train the LightGBM classifiers, with 80% of the data for training and 20% for validation. A comparison of the two approaches is given in Table 1.2. The second approach needs more training time than the first for both ResNet and VGG, and approach 2 has a lower training log loss than approach 1.

Table 1.2: Comparison and evaluation of the two approaches, reporting the train data shape, train time (s), stopping rounds and final train log loss for ResNet (approaches 1 and 2), VGG (approaches 1 and 2) and the combination of ResNet and VGG.
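As a sketch of approach 1 under the assumptions above, the following trains a binary LightGBM classifier on patient-averaged features; the hypothetical `patient_features` and `patient_labels` arrays, the 80/20 split and the hyperparameter values are illustrative rather than the chapter's exact settings.

```python
# Minimal sketch of approach 1: average each patient's CNN features and
# train a LightGBM binary classifier with an 80/20 split and early
# stopping. patient_features / patient_labels are hypothetical inputs.
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import train_test_split

# One row per patient: mean over the k feature maps, flattened to a vector.
X = np.stack([f.mean(axis=0).ravel() for f in patient_features])
y = np.asarray(patient_labels)  # 1 = cancer, 0 = no cancer

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2)
train_set = lgb.Dataset(X_tr, label=y_tr)
val_set = lgb.Dataset(X_val, label=y_val, reference=train_set)

params = {"objective": "binary", "metric": "binary_logloss",
          "learning_rate": 0.05, "num_leaves": 31}
booster = lgb.train(params, train_set, num_boost_round=1000,
                    valid_sets=[val_set],
                    callbacks=[lgb.early_stopping(stopping_rounds=50)])
```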
1.4.2 CNN and LSTM
In this section, a network combining a CNN and LSTM is illustrated. The proposed hybrid model can predict that a patient has cancer based on the existence of at least one cancerous nodule. The RNN LSTM layers allow the network to capture the 3D dependencies among slices and to handle the fact that cancerous nodules appear on only a few slices, while still producing a single output per patient. The proposed hybrid model (CNN–LSTM) also permits a variable "depth", so we do not need to up- or downsample the data to a consistent size. This is particularly suitable since the number of slices varies considerably across patients.
1.4.2.1 Model
The CNN and LSTM model is shown in Figure 1.6. First, the input passes through two convolutional layers, each followed by maxpooling with stride 2, Leaky ReLU and batch normalization, and then through a fully connected layer whose output is fed into the LSTM block. One of the reasons for choosing LSTM is that it has been recognized to diminish the vanishing gradient problem. For the RNN block, two stacked LSTM layers of size 100 are used. The output block is fully connected to our final prediction layer, which goes through softmax. We used Adam for optimization with a momentum of 0.9 and a learning rate of 0.001, with weights initialized with Glorot normal (Diederik et al., 2014; Shakeel et al., 2020). Loss is computed as the log loss on the final softmax output.

Figure 1.6: Overview of the CNN and LSTM model (input → CNN 1 → CNN 2, each with maxpooling with stride 2, Leaky ReLU and batch normalization → fully connected layer → two stacked LSTM layers → fully connected output; layer sizes of 4,500 and 2,048 appear in the diagram).
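A minimal Keras sketch of this CNN + LSTM idea follows; the filter counts, grayscale slice shape and some layer sizes are illustrative assumptions rather than the chapter's exact configuration.

```python
# Sketch of a per-slice CNN whose outputs are aggregated across a
# variable number of slices by stacked LSTMs into one patient-level
# prediction. Layer sizes are illustrative.
from tensorflow.keras import layers, models, optimizers

slice_shape = (224, 224, 1)
inputs = layers.Input(shape=(None, *slice_shape))  # None = variable slices

per_slice_cnn = models.Sequential([
    layers.Conv2D(16, 3, padding="same", input_shape=slice_shape),
    layers.MaxPooling2D(pool_size=2, strides=2),
    layers.LeakyReLU(),
    layers.BatchNormalization(),
    layers.Conv2D(32, 3, padding="same"),
    layers.MaxPooling2D(pool_size=2, strides=2),
    layers.LeakyReLU(),
    layers.BatchNormalization(),
    layers.Flatten(),
    layers.Dense(2048),
])

x = layers.TimeDistributed(per_slice_cnn)(inputs)  # features per slice
x = layers.LSTM(100, return_sequences=True)(x)
x = layers.LSTM(100)(x)  # final state summarizes the whole scan
outputs = layers.Dense(2, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer=optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy")  # log loss on softmax
```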
1.4.2.2 Variations
Some variations of this simple model (Figure 1.6) were tested. We first use the output of the last layer of the combined LSTM and CNN block, since by that point the block has seen all of the data. We also tried taking the max and the mean of all LSTM block outputs. Finally, we tested the performance of eliminating the CNN layers, so that the input is fed directly into the LSTM block.
1.4.2.3 High mean
This chapter also tests a new layer that serves a purpose comparable to taking the max or mean of the combined LSTM and CNN outputs, but should be a better fit for our multiple-instance problem. Since cancerous nodules exist in only some slices, the mean should not include the slices that are not cancerous. Two operations are performed to obtain the desired result. The first reduces the mean of the layers to a single number. The second takes the mean across the slice axis and then keeps only the entries that exceed this mean (i.e., the entries for which the mean matrix is smaller, roughly the top half of the values). A comparison and evaluation are shown in Table 1.4.
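The exact selection rule is not fully specified; a minimal NumPy sketch of one plausible reading, averaging only over the entries that exceed the per-feature mean across slices, is given below.

```python
# One plausible reading of the "high mean": average only over slice
# outputs that exceed the mean across the slice axis, so that
# low-responding (likely nodule-free) slices do not dilute the signal.
import numpy as np

def high_mean(slice_outputs):
    """slice_outputs: array of shape (num_slices, num_features)."""
    mean = slice_outputs.mean(axis=0, keepdims=True)
    mask = slice_outputs > mean  # keep the high-responding entries
    kept_sum = np.where(mask, slice_outputs, 0.0).sum(axis=0)
    kept_count = np.maximum(mask.sum(axis=0), 1)  # avoid division by zero
    return kept_sum / kept_count
```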
1.5 Experiments and results
1.5.1 Dataset
In this chapter, the stage 1 CT image data is used. This dataset was collected from the Kaggle 2017 competition and was prepared by the National Cancer Institute. There are 285,380 CT images from 1,595 patients in this dataset. Each patient's record comprises a set of 2D slices, and the number of slices is not the same for each patient. Every slice has the same shape of (512, 512). Figure 1.7 shows the distribution of the number of slices per patient. The images in the dataset have to be resized before being fed to the model: since the model input size is (224, 224, 3), we resize the data to (224, 224) and then group three slices together to get the shape (224, 224, 3) before feeding them into our model (a minimal sketch of this step follows Figure 1.7). The data includes a label for each patient, where 1 represents cancer and 0 represents no cancer. Validation data is also collected from Kaggle and includes CT images from 198 patients. The proportion of cancerous patients is the same in the training and validation data.

Figure 1.7: Distribution of slice counts per patient (histogram of the DICOM file count per patient; x-axis: number of DICOM files, 100–500; y-axis: number of patients, 0–700).
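A minimal sketch of this resize-and-group step is shown below, assuming OpenCV for resizing; the function and variable names are illustrative.

```python
# Resize each 512 x 512 CT slice to 224 x 224 and stack consecutive
# triples of slices as the three channels, giving k = m // 3 inputs of
# shape (224, 224, 3), as described above.
import numpy as np
import cv2  # assumed available for resizing

def prepare_patient(slices):
    """slices: array of shape (m, 512, 512) for one patient."""
    resized = np.stack([cv2.resize(s, (224, 224)) for s in slices])
    k = len(resized) // 3
    grouped = resized[:3 * k].reshape(k, 3, 224, 224)
    return grouped.transpose(0, 2, 3, 1)  # (k, 224, 224, 3)
```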
1.5.2 Evaluation metrics
1.5.2.1 Evaluation using log loss
Log loss is used as an evaluation metric to compare the proposed models:

\[
\text{Log loss} = -\frac{1}{n}\sum_{j=1}^{n}\left[\, y_j \log\left(y_{pj}\right) + \left(1 - y_j\right)\log\left(1 - y_{pj}\right) \right] \tag{1.1}
\]
where $n$ is the number of tested patients, $y_j$ is the true label of patient $j$ and $y_{pj}$ is the predicted probability that patient $j$ has cancer.
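Equation (1.1) translates directly into code; a small self-contained check (with probabilities clipped to avoid log(0)) might look like:

```python
# Direct NumPy translation of eq. (1.1).
import numpy as np

def log_loss(y_true, y_pred, eps=1e-15):
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1 - eps)
    y_true = np.asarray(y_true, dtype=float)
    return -np.mean(y_true * np.log(y_pred)
                    + (1 - y_true) * np.log(1 - y_pred))

print(log_loss([1, 0, 1], [0.9, 0.2, 0.6]))  # ~0.28
```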
1.5.2.2 Evaluation using ROC curve
The ROC curve is used to evaluate the performance of the classifiers and trained models. An ROC curve is produced by plotting the true positive rate (TPR) against the FPR at varying threshold values, which can be specified by the user.

Figure 1.8: Comparing ROC curves (TPR versus FPR, with example curves labeled worthless, good and excellent).
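In practice, the ROC curve and AUC can be computed from the per-patient predicted probabilities, for example with scikit-learn; the toy labels and probabilities below are illustrative.

```python
# ROC curve and AUC from predicted cancer probabilities.
from sklearn.metrics import roc_curve, roc_auc_score

y_val = [1, 0, 1, 0, 1]            # illustrative true labels
probs = [0.8, 0.3, 0.6, 0.4, 0.9]  # illustrative predicted probabilities

fpr, tpr, thresholds = roc_curve(y_val, probs)  # one point per threshold
print(f"AUC = {roc_auc_score(y_val, probs):.2f}")
```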
1.5.3 Results
1.5.3.1 GBM results
Table 1.3 shows the log loss on the training and validation datasets using LightGBM on top of features generated by the two pretrained models, VGG19 and ResNet50. Comparing the results of VGG19_1 with VGG19_2, and of ResNet50_1 with ResNet50_2, we find that approach 2 has a lower log loss on the training dataset for both models. As mentioned earlier, approach 2 trains LightGBM using every image feature generated from the training dataset, which means more data to train a better classifier; of course, it also takes longer. We expected approach 2 to provide better performance when predicting on the validation data, but surprisingly approach 1 has the lowest log loss among all the results for both models. (To give approach 2 a fairer comparison, we take the average, maximum and minimum probability for every patient to find a better variant.) We have also combined the two models; the log loss is then 43.6% and 56% for the training and validation datasets, respectively. In Table 1.3, VGG19_1 means that average features are used for every patient scan when training the classifier, and VGG19_2 means that every feature is used for every patient scan when training the classifier; the same naming holds for ResNet50_1 and ResNet50_2, where ResNet represents the residual network.

Table 1.3: Log loss comparison on the training and validation datasets (rows: VGG19_1; VGG19_2 with average, maximum and minimum probability; ResNet50_1; ResNet50_2 with average, maximum and minimum probability; and the combination of the two models).

Figure 1.9 shows the ROC curves using LightGBM on top of VGG19 features; from left to right and top to bottom, the first is the ROC curve for approach 1, and the second to last are the ROC curves for approach 2. A comparison of these curves, again with LightGBM on top of VGG19-generated features, is shown in Figure 1.10. An ROC curve can be viewed as a probability curve, and the area under the curve (AUC) signifies the degree of separability: it measures how proficient a model is at differentiating between classes, so a higher AUC indicates a classification model that better distinguishes affected from unaffected patients. In Figures 1.9 and 1.10, the TPR is on the y-axis and the FPR is on the x-axis. From Figure 1.10, it can be noticed that approach 1 has a larger AUC than approach 2: the AUC of approach 1 is 0.69, whereas the average-probability AUC of approach 2 is 0.64. The first approach also has a lower log loss than the second; log loss is a performance measure of the classifiers. In approach 2, the lowest AUC for VGG19 is 0.59, obtained in the maximum-probability condition. The ROC curves for LightGBM on top of ResNet50 features are shown in Figure 1.11.
Figure 1.9: VGG19 ROC curves (four panels: approach 1, area = 0.69; approach 2 with average probability, area = 0.64; approach 2 with maximum probability, area = 0.59; approach 2 with minimum probability, area = 0.66).

Figure 1.10: VGG19 ROC curve comparison (approach 1, area = 0.69; approach 2 avg, area = 0.64; approach 2 max, area = 0.59; approach 2 min, area = 0.66).
A comparison of the ROC curves with LightGBM on top of ResNet50-produced features is shown in Figure 1.12. Comparing the AUC of the two approaches, approach 1 has a larger AUC than approach 2: the AUC of approach 1 is 0.65, whereas the average-probability AUC of approach 2 is 0.62. The first approach also has a lower log loss than the second, as illustrated in Table 1.3; log loss is a performance measure of the classifiers. In approach 2, the AUC in the maximum-probability condition is 0.60 and the AUC in the minimum-probability condition is 0.58.
Figure 1.11: ResNet50 ROC curves (four panels: approach 1, area = 0.65; approach 2 with average probability, area = 0.62; approach 2 with maximum probability, area = 0.60; approach 2 with minimum probability, area = 0.58).
From Figure 1.13, we can see that the AUC of the ResNet50 architecture is 0.65, whereas the AUC of the VGG19 architecture is 0.69. Table 1.4 compares our proposed models in terms of loss and AUC, where CNN and mean provides good results. Our RNN/CNN models were developed building on the experiments in which we established the boosted tree baseline.
Figure 1.12: ResNet50 ROC curve comparison (approach 1, area = 0.65; approach 2 avg, area = 0.62; approach 2 max, area = 0.60; approach 2 min, area = 0.58).
Figure 1.13: ResNet50 versus VGG19 ROC comparison (ResNet50 ROC curve, area = 0.65; VGG19 ROC curve, area = 0.69; two models combined, area = 0.68).

Table 1.4: Loss and AUC of the CNN/RNN models (rows: CNN and mean; CNN and RNN; CNN, RNN and mean; RNN and mean; RNN; columns: loss and AUC).
1.6 Conclusion
In this chapter, the VGG19 and ResNet50 architectures were employed for diagnosing cancerous lung nodules, with LightGBM used as a classifier. Two approaches were compared for the analysis of CNN and RNN: the first approach is CNN with LightGBM, and the second is CNN with LSTM. The CNN models provide a better AUC than the RNN, and the CNN has a lower log loss than the RNN. A higher AUC indicates a classification model that better distinguishes affected from unaffected patients. Our future work is to implement a mixed link network using this dataset.
References
Abiyev, R.H. and Ma'aitah, M.K.S. (2018). Deep convolutional neural networks for chest diseases detection, Journal of Healthcare Engineering, 2018, 4168538. Doi: 10.1155/2018/4168538.
Anirudh, R. et al. (2016). Lung nodule detection using 3D convolutional neural networks trained on weakly labeled data. In: SPIE Medical Imaging, International Society for Optics and Photonics.
Bharati, S., Podder, P. and Rubaiyat Hossain Mondal, M. (2020a). Hybrid deep learning for detecting lung diseases from X-ray images, Informatics in Medicine Unlocked, article in press.
Bharati, S., Podder, P., Mondal, R., Mahmood, A. and Raihan-Al-Masud, M. (2020b). Comparative performance analysis of different classification algorithm for the purpose of prediction of lung cancer, Advances in Intelligent Systems and Computing, Springer, 941, 447–457. Doi: 10.1007/978-3-030-16660-1_44.
Bharati, S., Podder, P. and Mondal, R.H.M. (2020c). Artificial neural network based breast cancer screening: a comprehensive review, International Journal of Computer Information Systems and Industrial Management Applications (ISSN 2150-7988), 12, 125–137.
Diederik, P.K. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Available from: http://arxiv.org/abs/1412.6980 [Accessed: 20 May 2020].
Gu, Y., Lu, X., Yang, L., Zhang, B., Yu, D. et al. (2018). Automatic lung nodule detection using a 3D deep convolutional neural network combined with a multi-scale prediction strategy in chest CTs, Computers in Biology and Medicine, 103, 220–231.
He, K., Zhang, X., Ren, S. and Sun, J. (2015). Deep residual learning for image recognition. arXiv. [Preprint] Available from: http://arxiv.org/abs/1512.03385 [Accessed: 20 May 2020].
Huang, G., Liu, Z., Van Der Maaten, L. and Weinberger, K.Q. (2017). Densely connected convolutional networks. In: IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, pp. 2261–2269. Available at: https://ieeexplore.ieee.org/document/8099726 [Accessed: 20 May 2020].
Jiang, H., Ma, H., Qian, W., Gao, M. and Li, Y. (2017). An automatic detection system of lung nodule based on multigroup patch-based deep learning network, IEEE Journal of Biomedical and Health Informatics, 22(4), 1227–1237.
Lin, H. et al. (2019). A super-learner model for tumor motion prediction and management in radiation therapy: development and feasibility evaluation, Scientific Reports, 9(1), 14868. Doi: 10.1038/s41598-019-51338-y.
Mansoor, A. et al. (2015). Segmentation and image analysis of abnormal lungs at CT: current approaches, challenges, and future trends, RadioGraphics, 35(4), 1056–1076.
Masood, A., Sheng, B., Li, P., Hou, X., Wei, X., Qin, J. and Feng, D. (2018). Computer-assisted decision support system in pulmonary cancer detection and stage classification on CT images, Journal of Biomedical Informatics, 79, 117–128. Doi: 10.1016/j.jbi.2018.01.005.
Mondal, M.R.H., Bharati, S., Podder, P. and Podder, P. (2020). Data analytics for novel coronavirus disease, Informatics in Medicine Unlocked, 20, 100374.
Nasrullah, N., Sang, J., Alam, M.S. and Xiang, H. (2019). Automated detection and classification for early stage lung cancer on CT images using deep learning. In: Pattern Recognition and Tracking XXX, International Society for Optics and Photonics, Bellingham, WA, USA.
Petros-Pavlos, Y. and Montana, G. (2016). Recurrent convolutional networks for pulmonary nodule detection in CT imaging. arXiv preprint arXiv:1609.09143.
Ramaswamy, S. and Truong, K. (2017). Pulmonary nodule classification with convolutional neural networks. Available at: http://cs231n.stanford.edu/reports/2016/pdfs/324_Report.pdf [Accessed: 20 May 2020].
Ren, S., He, K., Girshick, R. and Sun, J. (2017). Faster R-CNN: Towards real-time object detection with region proposal networks, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 39, 1137–1149.
Setio, A.A.A. et al. (2017). Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: The LUNA16 challenge, Medical Image Analysis, 42, 1–13. Doi: 10.1016/j.media.2017.06.015.
Shakeel, P.M., Tolba, A., Al-Makhadmeh, Z. et al. (2020). Automatic detection of lung cancer from biomedical data set using discrete AdaBoost optimized ensemble learning generalized neural networks, Neural Computing and Applications, 32, 777–790.
Shen, W., Zhou, M., Yang, F., Yang, C. and Tian, J. (2015). Multi-scale convolutional neural networks for lung nodule classification. In: Ourselin, S., Alexander, D., Westin, C.F. and Cardoso, M., eds, Information Processing in Medical Imaging, IPMI 2015, Lecture Notes in Computer Science, Vol. 9123, Springer, Cham.
Simonyan, K. and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv. [Preprint] Available from: http://arxiv.org/abs/1409.1556 [Accessed: 20 May 2020].
Tripathi, S. and Tu, R. (2018). Towards deeper generative architectures for GANs using dense connection. arXiv. [Preprint] Available from: http://arxiv.org/abs/1804.11031v2 [Accessed: 20 May 2020].
Zhu, W., Liu, C., Fan, W. and Xie, X. (2018). DeepLung: Deep 3D dual path nets for automated pulmonary nodule detection and classification. In: IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, IEEE, pp. 673–681. Available at: https://ieeexplore.ieee.org/iel7/8345804/8354104/08354183.pdf [Accessed: 20 May 2020].
Kerem Delikoyun, Ersin Cine, Muge Anil-Inevi, Oyku Sarigil, Engin Ozcivici, H. Cumhur Tekin
2 Deep learning-based cellular image analysis for intelligent medical diagnosis
Abstract: Advances in microscopy have been revealing the mysteries of the biological sciences since the first microorganism was observed with a simple compound microscope in the seventeenth century, which led to an understanding of the origin of diseases and the development of treatments for human longevity. Since then, progress in optics has been transformative for gathering a wide range of information about the molecular infrastructure of the cell, and increasingly sophisticated instruments have been developed for investigating biological entities from different perspectives. The morphological and functional heterogeneity of human cells is the greatest challenge for detection in the biomedical sciences. Moreover, quantification of the acquired image data is critical for clinical translation. Conventional image-processing algorithms require a series of strict rules about the data of interest (e.g., segmentation of single cells in 2D cell culture) and a high level of user interpretation to tune the parameters for satisfactory results. This procedure is labor-intensive, although it performs relatively well for routine tasks. However, research and development-oriented image processing of cells cannot be reduced to a categorization based on a single range of visual parameters (e.g., the circularity or intensity of blood cells). Thus, in the last decade, software tools that learn from the data itself have gained great attention in the biological sciences. Although classical machine learning models have had a certain level of success in these cases, the major problem is that features still need to be designed for each specific application. Since a model inspired by the deep learning architecture achieved unprecedented success in the 2012 ImageNet Large Scale Visual Recognition Challenge, several medical applications have arisen, owing to its human-comparable ability to learn and infer. Thus, a method called transfer learning, in which a network trained to recognize low-level features of objects within an image (e.g., object edges) is retrained on specialized medical datasets, has been used for diagnosis. There are four main fields in cellular imaging for medical diagnostics that might effectively benefit from
Acknowledgment: The authors would like to thank The Scientific and Technological Research Council of Turkey (119M052) for financial support. Kerem Delikoyun, Muge Anil-Inevi, Oyku Sarigil, Engin Ozcivici, H. Cumhur Tekin, Department of Bioengineering, Izmir Institute of Technology, Izmir, Turkey Ersin Cine, Department of Computer Engineering, Izmir Institute of Technology, Izmir, Turkey https://doi.org/10.1515/9783110668322-002
deep learning: (i) image classification based on the features of a group of cells for clinical diagnostics (e.g., malignant or benign cell detection), (ii) image segmentation, which partitions an image to group each object present (e.g., identifying cells in an image), (iii) visualization of changes in a cell or group of cells over a limited time scale (e.g., bacterial cell growth or the motility of an embryo) through object tracking for cellular image analysis and (iv) fusing different imaging modalities, such as brightfield microscopy, fluorescence microscopy and digital holographic microscopy, to reveal the hidden information in each domain. This chapter introduces the latest achievements in deep learning-based image processing techniques for the identification and/or sorting of cells of interest that could potentially be applied in clinics for diagnosis and treatment. These techniques have become accurate, sensitive, rapid and cost-effective over time, and it is therefore expected that they will be translated to assist medical professionals and improve healthcare outcomes in the future. Keywords: cellular imaging, microscopy, deep learning, artificial neural networks, diagnostics
2.1 Introduction
Advances in technology have enabled the production of a large amount of knowledge in translational medicine (Densen, 2011, Kelly, 2018). The rapid rise in data production has brought new scientific challenges and the opportunity to reveal hidden relationships. Deep learning has become a substantial tool in various fields of medical research, from biomedical imaging systems (e.g., computed tomography and magnetic resonance imaging) (Song et al., 2017, Wang et al., 2016), tissue histopathology (Chen and Chefd’Hotel, 2014, Hartono, 2018, Pan et al., 2015) and single-cell analysis (Godinez et al., 2017, Lugagne et al., 2019) to genetics (Yue and Wang, 2018). In this chapter, we focus on applications of deep learning to analyze cellular images at the single-cell level that could potentially be utilized in medical diagnosis and biological research. Identification of cells based on their features and determination of particular types with conventional techniques is extremely labor-intensive and time-consuming. Deep-learning applications can learn and automatically extract high-level properties of cells (e.g., morphological and physiological status) and classify them at the same time, providing an opportunity to automate the analysis of labeled cell images (Segebarth et al., 2018). Moreover, deep learning allows identification of differences in cellular properties in a label-free manner, alleviating the need for the additional time required to label cells and for cell fixation processes that can affect the morphological and functional status of cells. Image-based diagnosis
can be improved by processing cell images obtained with different microscopy techniques such as brightfield microscopy (Sadafi et al., 2019), quantitative phase imaging (QPI) (Kim et al., 2019b) and digital holographic microscopy (Gupta et al., 2019). These improvements can be achieved through deep-learning tasks including segmentation (Lee et al., 2019), classification (Kandaswamy et al., 2016) or cell tracking in image time series (Wen et al., 2018). Here, we review translational biomedical applications in diagnostics developed through deep learning. These applications include detection of infected cells in the blood, especially for malaria (Vijayalakshmi, 2019), classification of white blood cells (up to 40 categories), which is crucial in diagnostics (Qin et al., 2018), and classification of red blood cells with respect to morphological alterations to detect hematological disorders (Xu et al., 2017). Moreover, in cancer research, deep-learning strategies are used for the determination of leukemia and classification of its subtypes in blood (Shafique and Tehsin, 2018) or bone marrow (Rehman et al., 2018), detection of solid cancer-type cells in a heterogeneous population (Chen et al., 2016a, Rubin et al., 2019) and detection of circulating tumor cells (Huttinga, 2017). In addition to these more common uses, researchers have begun to apply deep learning in more advanced cases such as the assessment of T-cell activation state for monitoring immune responses (Karandikar et al., 2019) and the identification of pathogens (Kim et al., 2019a). Following the rapid rise of deep learning and its immediate applications in the biological and medical sciences, a variety of surveys have introduced the latest achievements from different perspectives. Deep learning could potentially transform our understanding of the human body and change the way we treat patients (Mamoshina et al., 2016, Topol, 2019). Since radiology and histopathology are fields with wide applications in diagnostics, reviews have been written to show possible outcomes of utilizing deep learning in these fields (Serag et al., 2019). Cancer diagnosis is another field that can greatly benefit from deep learning (Munir et al., 2019) and liquid biopsy (Paeglis et al., 2018). More recently, large sets of information can be gathered at the cellular level, so that cellular imaging is perhaps one of the most important fields to which deep learning will contribute (Gupta et al., 2019, Kan, 2017, Lu et al., 2017, Moen et al., 2019, Razzak et al., 2018, von Chamier et al., 2019). Here, we aim to introduce studies showing impressive performances of deep learning that reach or surpass human-level interpretation in cellular imaging with cutting-edge technologies.
2.2 Neural networks and cellular images
Deep learning (LeCun et al., 2015, Schmidhuber, 2015) has recently achieved tremendous success in a wide range of domains, including virtually all computer vision tasks and thus the analysis of cellular images.
In recent years, it has become the most popular approach to the data-driven paradigm of artificial intelligence in both academia and industry. Unlike classical machine learning models, which use tedious and suboptimal handcrafted features for prediction, deep-learning models are trained directly on raw data in an end-to-end fashion, performing implicit automatic feature extraction as an intermediate step. This approach provides useful models for different domains of data, such as convolutional neural networks (CNNs) (Krizhevsky et al., 2012, LeCun et al., 1990) for spatial data (e.g., images) or recurrent neural networks (RNNs) (Greff et al., 2016, Hochreiter and Schmidhuber, 1997) for temporal data (e.g., time series). These classes of deep neural networks exploit the spatial locality and temporal continuity of data, respectively. For analyzing spatiotemporal data (e.g., videos), these two types of neural networks can be fused into one (e.g., Sainath et al., 2015). Another class of prevalent neural networks, called generative adversarial networks (GANs) (Goodfellow et al., 2014, Salimans et al., 2016), is widely used for generating data. For the most basic task of GANs, generating random data similar to a given dataset without any condition, training is completely unsupervised.
2.2.1 Tasks
In computer vision, there are many tasks, most of which are closely related. Figure 2.1 illustrates the main tasks in the cellular image domain: classification, detection, tracking and segmentation. Image classification can be considered the most basic among them. An image classifier assigns a given image to one of several predefined classes. These classes should be collectively exhaustive and mutually exclusive in the relevant domain. Object detection is another standard task in computer vision. The aim of an object detector is, in a given image, to detect the minimum bounding box (e.g., Faster R-CNN (Ren et al., 2015)) or fine-grained pixel mask (e.g., Mask R-CNN (He et al., 2017)) of each instance of predefined object classes. Object detectors are typically designed as extensions of image classifiers. They often rely on existing classifiers (e.g., EfficientDet (Tan et al., 2019)), but using custom classifiers can improve performance (Chen et al., 2019). On the other hand, image classifiers can utilize the outputs of object detectors to improve classification accuracy as well (Sermanet et al., 2013). Sometimes the task with bounding boxes is referred to as object detection and the task with pixel masks as instance segmentation. Semantic segmentation is another kind of image segmentation, in which object classes are identified at the pixel level without considering the boundaries of individual objects. This task is sometimes treated as an application of pixel-wise classification (Ronneberger et al., 2015), image-to-image translation (Haering et al., 2018) or style transfer (Hollandi et al., 2019). The task of object tracking concerns locating the bounding box of a particular object in each frame of a video, given the ground truth in the initial frame. The main assumption of most object trackers is that the position, size and appearance of the object do not change rapidly in consecutive frames.
Figure 2.1: General tasks performed using deep-learning algorithms. (a) Classification: predicting the class of an input among multiple classes, such as live versus dead D1 bone marrow stem cells. (b) Detection: identifying both the class and the position of objects, such as live and dead cells. (c) Tracking: recognizing alterations in position and morphology over time (t0, t1, t2). (d) Segmentation: identifying instances within an image, such as a sea urchin embryo.
In practice, objects are often detected and tracked simultaneously, with each task contributing to the performance of the other (Feichtenhofer et al., 2017, Girdhar et al., 2018).
2.2.2 CNN
CNNs are composed of multiple layers of computational units, most of which are sparsely interconnected and perform spatially local operations (such as convolution or pooling) (Figure 2.2). They can automatically learn accurate data representations with many levels of abstraction from a dataset of natural images. Once an accurate representation is acquired, it is possible to build a predictive (e.g., image classification or object detection) or descriptive (e.g., image clustering or dimensionality reduction) model for any given computational task. Moreover, any CNN model with a successful application provides a reusable architecture for other problems with little or no modification. The most notable of these architectures include AlexNet (Krizhevsky et al., 2012), VGGNet (Simonyan and Zisserman, 2014), ResNet (He et al., 2016), ResNeXt (Xie et al., 2017), DenseNet (Huang et al., 2017) and EfficientNet (Tan and Le, 2019).

Figure 2.2: A convolutional neural network (CNN). Each layer recognizes a different feature of the input data (e.g., a cell image). After features are learned in the convolution layers, classification takes place in the fully connected layers, and the output is the predicted class (e.g., dead cell) of a given sample.
The problems that can be solved with these architectures span not only other types of samples (from biological samples at different levels, such as cells, tissues or organs, to everyday and astronomical objects) but also data from a variety of sources (such as a brightfield microscope, a medical X-ray imager, a magnetic resonance imaging machine, a digital single-lens reflex camera or a reflecting telescope) and other tasks (for instance, segmentation or semantic segmentation). Furthermore, these models provide not only an empty architecture but also pre-trained parameters that can be adapted to similar problems with little extra training (Zamir et al., 2018). Recent object detectors use class and bounding box prediction networks that are based on pyramids of CNN features (e.g., NAS-FPN (Ghiasi et al., 2019) and EfficientDet (Tan et al., 2019)). Segmentation networks typically extract the essence of images using CNNs and then enlarge the result back to the original size using an inverse CNN consisting of inverse operations such as transposed convolution and unpooling, as in U-net (Ronneberger et al., 2015) and its improved forms (Gómez-de-Mariscal et al., 2019, Lee et al., 2019, Zhou et al., 2018). The architectures for object tracking vary; however, using a CNN as the backbone in the early stage for predicting important features is, to our knowledge, common to all recent successful methods (Bhat et al., 2019, Danelljan et al., 2019, Li et al., 2019b, Liu et al., 2019). With their successes and all these favorable features, CNNs are our main tools for analyzing cellular images.
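To make the feature-extraction-then-classification structure of Figure 2.2 concrete, the following is a minimal, illustrative PyTorch sketch of a small CNN for a binary live/dead cell classifier. The architecture, input size and label order are our own illustrative assumptions, not a model from any of the cited works.

```python
import torch
import torch.nn as nn

class SmallCellCNN(nn.Module):
    """Minimal CNN for a hypothetical binary live/dead cell classifier."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Feature extraction: stacked convolution + pooling layers,
        # each operating on a spatially local neighborhood.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
        )
        # Classification: fully connected layers on the learned features.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# A single-channel 64x64 crop of one cell (random stand-in data).
cell = torch.randn(1, 1, 64, 64)
logits = SmallCellCNN()(cell)
print(logits.argmax(dim=1))  # 0 = live, 1 = dead (hypothetical label order)
```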
2.2.3 RNN
CNNs perform inference on a snapshot: most often a single image, occasionally a fixed, small set of images, such as several images taken from different angles or distances, or video frames in a short time window. When a temporal dimension is involved in the data, RNNs are widely used.
This kind of neural network aggregates information at every time step and updates its prediction considering all clues from the past. In non-real-time applications, this arrow of time can be reversed by simply reversing the input data along the time axis, or it can even point in both directions: clues from both past and future can be used with bidirectional RNNs (Schuster and Paliwal, 1997, Zhang et al., 2016). RNNs can be designed to predict a single output (e.g., video classification (Yue-Hei Ng et al., 2015)) or multiple outputs. These multiple outputs can be predicted simultaneously with the input data feed (e.g., video semantic segmentation (Siam et al., 2017)) or after the input data is completely fed (e.g., future trajectory prediction (Xue et al., 2018)). Their ability to capture the temporal continuity of data makes RNNs a critical extension to CNNs for analyzing cellular image sequences (Arbelle and Raviv, 2019, Chen et al., 2016b, Payer et al., 2018, Stollenga et al., 2015, Su et al., 2017).
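As a hedged illustration of fusing a CNN with an RNN for cellular image sequences, the sketch below runs a small CNN over each frame and aggregates the per-frame features with a bidirectional LSTM; the sizes and the clip-level classification task are illustrative assumptions rather than any published design.

```python
import torch
import torch.nn as nn

class CellSequenceClassifier(nn.Module):
    """CNN per frame + bidirectional LSTM across frames (illustrative)."""
    def __init__(self, feat_dim: int = 32, hidden: int = 64, num_classes: int = 2):
        super().__init__()
        self.cnn = nn.Sequential(                 # per-frame feature extractor
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Flatten(), nn.LazyLinear(feat_dim),
        )
        # bidirectional=True aggregates clues from both past and future frames
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        b, t = frames.shape[:2]                   # frames: (batch, time, 1, H, W)
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)                  # hidden state per time step
        return self.head(out[:, -1])              # one prediction per clip

video = torch.randn(2, 10, 1, 64, 64)             # 2 clips, 10 frames each
print(CellSequenceClassifier()(video).shape)      # torch.Size([2, 2])
```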
2.2.4 GAN
Besides CNNs and RNNs, which are useful for data analysis (e.g., image or video analysis), there is another class of neural networks called GANs, which is useful for data generation (e.g., image or video generation). These networks are composed of two main components, each of which can be considered a standalone neural network: (i) a generator network, which tries to generate fake but realistic data by mimicking a dataset, and (ii) a discriminator network, which tries to discriminate between real (i.e., recorded) and fake (i.e., generated) data via binary classification. These two networks are trained competitively in a zero-sum game, each forced to improve as the other improves. Main applications of GANs include image generation with (Mirza and Osindero, 2014) or without (Radford et al., 2015) conditions, image-to-image translation in paired (Isola et al., 2017) or unpaired (Zhu et al., 2017) settings and different image restoration tasks such as deblurring (Kupyn et al., 2018), denoising (Chen et al., 2018) or super-resolution (Ledig et al., 2017). Despite their success on everyday images, applications of GANs in critical domains such as medical imaging have generally been limited due to the hallucinative nature of their mechanisms. Still, there is an increasing number of such works (Arbelle and Raviv, 2019, Baniukiewicz et al., 2019, Han and Yin, 2018, Majurski et al., 2019, Rivenson et al., 2019a, Rivenson et al., 2019b, Rubin et al., 2019, Theagarajan et al., 2018, Wang et al., 2019, Zanjani et al., 2018, Zhang et al., 2019).
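The two-network structure and the opposing losses can be summarized in a minimal sketch. The fully connected generator and discriminator below are deliberately simplistic stand-ins (practical image GANs such as DCGAN use convolutional architectures), and only the loss computation of one alternating training step is shown.

```python
import torch
import torch.nn as nn

# Generator: noise vector -> fake flattened 64x64 single-channel image.
G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(),
                  nn.Linear(256, 64 * 64), nn.Tanh())
# Discriminator: image -> probability that it is real (recorded) data.
D = nn.Sequential(nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

bce = nn.BCELoss()
real = torch.rand(8, 64 * 64)                 # stand-in batch of real images
fake = G(torch.randn(8, 100))                 # generated batch

# Discriminator step: separate real from fake (fake detached so only D learns).
d_loss = bce(D(real), torch.ones(8, 1)) + bce(D(fake.detach()), torch.zeros(8, 1))
# Generator step: fool the discriminator into labeling fakes as real.
g_loss = bce(D(fake), torch.ones(8, 1))
```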
2.2.5 Cellular images
A major challenge in working with cellular images is the high rate of label noise: datasets contain many examples with incorrect desired outputs. These wrong labels can lead to inefficient training and unreliable testing.
However, deep neural networks have been reported to be especially robust to such labels (Rolnick et al., 2017). An even bigger challenge is that, in the cellular image domain, both data acquisition and labeling are very costly compared to the general image domain. Researchers need special setups and machines (e.g., a special microscope) instead of a simple camera, biological samples (e.g., blood smears) instead of everyday objects, and domain experts (e.g., laboratory technicians) instead of cheap crowdsourcing. Datasets typically contain many more raw features (e.g., number of pixels in an image) than examples (e.g., number of images). These difficulties result in a lack of publicly available large datasets and thus a need for improvements and innovations during model training. Common solutions to this problem of small datasets include the following.
2.2.5.1 Delegating feature extraction
Extracting relevant features with respect to a task requires considering visual cues at multiple levels of abstraction. These visual cues are extracted as a hierarchy of features, including lower-level features, such as particular kinds of edges, corners or textures, and higher-level features composed of them, such as parts of objects or classes of shapes. Sometimes these cues involve not only intensity but also color and depth information. Learning how to extract such features consumes data. Thus, in the absence of sufficient data, it is reasonable to delegate this step to other data sources.

2.2.5.1.1 Transfer learning
In its most basic form, transfer learning involves using the lower-level layers of a network that is pre-trained on another large dataset (such as ImageNet (Deng et al., 2009)), replacing only the upper-level layers with new ones, and training only these new layers. In this way, the model has fewer parameters to learn and thus can be trained using a smaller dataset (Yosinski et al., 2014); a sketch follows at the end of this section.

2.2.5.1.2 Explicit feature extraction
When domain experts have sufficient insight into the important features of the data with respect to a particular task, it is possible to use specialized algorithms to extract these handcrafted features and train a shallow but dense neural network (e.g., a multilayer perceptron) on these features instead of raw inputs (Kim et al., 2019b). In this case, it is also possible to use other machine learning models that work on tabular data (Ota et al., 2018).
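A minimal sketch of the basic transfer-learning recipe of Section 2.2.5.1.1, assuming the publicly available ImageNet weights shipped with a recent torchvision (version 0.13 or later); the two-class head (e.g., live versus dead cells) is a hypothetical example.

```python
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 pre-trained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pre-trained lower-level layers: they keep their generic
# edge/corner/texture detectors and are not updated during training.
for param in model.parameters():
    param.requires_grad = False

# Replace only the upper-level (classification) layer with a new one;
# these are the only parameters learned from the small cellular dataset.
model.fc = nn.Linear(model.fc.in_features, 2)   # hypothetical live/dead head

trainable = [p for p in model.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable))        # only the new head's weights
```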
2 Deep learning-based cellular image analysis for intelligent medical diagnosis
27
2.2.5.1.3 Learning from sparsely labeled data
When there is excess unsupervised data, it is possible to pre-train the feature extractor of the model using it (Erhan et al., 2010). Furthermore, in the presence of extra data labeled with more noise, the classification performance can be improved by utilizing this data (Veit et al., 2017, Xie et al., 2019), as sketched below.
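One simple way to exploit excess unlabeled data is self-training with pseudo-labels, in the spirit of the noisy-label approaches cited above (Xie et al., 2019). The sketch below keeps only predictions the current model is confident about; `model` and `unlabeled_images` are hypothetical placeholders for a trained classifier and an unlabeled image batch.

```python
import torch

def pseudo_label(model: torch.nn.Module, unlabeled_images: torch.Tensor,
                 threshold: float = 0.95):
    """Self-training sketch: keep only unlabeled images the current model
    classifies with high confidence, and reuse them as extra training data."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(unlabeled_images), dim=1)
    confidence, labels = probs.max(dim=1)
    keep = confidence > threshold          # discard uncertain predictions
    return unlabeled_images[keep], labels[keep]
```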
2.2.5.2 Avoiding overfitting
Learning a model that does not overfit (i.e., one that generalizes beyond the training data, favoring a deep understanding of the task rather than memorization of the training set) is required for predictive tasks. In applications for cellular images, where the data can be greatly diverse, this property is especially important. The most effective way of dealing with this problem is obtaining larger and more diverse datasets. However, this becomes impractical beyond a certain threshold, which is significantly low in the medical domain. The solutions include the following (a combined sketch follows this list of measures).

2.2.5.2.1 Simpler architectures
Models with too many parameters are prone to overfitting. Simpler architectures with fewer parameters require less data to train. The problem with this option is that if the model has less expressive power than the task requires, it will underfit, failing to model even the training set accurately.

2.2.5.2.2 Regularization
For regularizing a model, it is most common to add a regularization term to the objective function besides the actual prediction error. One way of doing this is penalizing large parameters to avoid the dominance of certain neurons (i.e., the basic units of a neural network that receive one or more inputs) – for example, L2 regularization. It is also typical to remove random neurons or connections temporarily during training to avoid strong dependence on them (e.g., Dropout (Srivastava et al., 2014)). It is worth mentioning that bigger models need more regularization, because their larger number of parameters overfits more easily.

2.2.5.2.3 Early stopping
Stopping training when the validation error starts to saturate, even if the training error still has room to improve, results in models that generalize better (Caruana et al., 2001).
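The following toy sketch combines the three measures above: L2 regularization via the optimizer's weight decay, Dropout, and early stopping on a validation loss. The data, layer sizes and thresholds are illustrative assumptions.

```python
import torch
import torch.nn as nn

x, y = torch.randn(64, 1, 28, 28), torch.randint(0, 2, (64,))    # toy training set
xv, yv = torch.randn(32, 1, 28, 28), torch.randint(0, 2, (32,))  # validation split

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(),
                      nn.Dropout(p=0.5),        # drop random neurons in training
                      nn.Linear(128, 2))
# weight_decay adds an L2 penalty on the parameters to the objective.
opt = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=1e-4)
loss_fn = nn.CrossEntropyLoss()

best_val, patience, bad = float("inf"), 5, 0
for epoch in range(200):
    model.train()
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
    model.eval()
    with torch.no_grad():
        val = loss_fn(model(xv), yv).item()
    if val < best_val - 1e-4:
        best_val, bad = val, 0
    else:
        bad += 1
        if bad >= patience:    # early stopping: validation error saturated
            break
```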
28
Kerem Delikoyun et al.
2.2.5.3 Exploiting invariant representations
Tasks on images are usually invariant to several properties. For example, a slightly shifted, rotated, zoomed, darkened or noised version of an image should be classified as the same image. Image classifiers can be equipped with this capability either explicitly, via data augmentation, or implicitly, via invariant networks.

2.2.5.3.1 Data augmentation
Increasing the diversity of data synthetically without collecting new data, known as data augmentation, is a de facto standard for learning invariant representations (Perez and Wang, 2017). For cellular images, researchers augment images by warping them using only transforms that do not require interpolation or extrapolation, such as rotating images by 90, 180 or 270 degrees and reflecting them on one or both of the axes (see the sketch after this section). When the dataset is sufficiently large, it is even possible to learn appropriate augmentation parameters for a particular dataset (Cubuk et al., 2018, Cubuk et al., 2019). For example, on a digit recognition dataset, it is possible to automatically detect that the model should not be provided with images rotated by 180 degrees; otherwise, the model could confuse the digit “6” with the digit “9”. A non-trivial alternative to simply warping images is using a GAN to generate similar images and adding them to the training set (Antoniou et al., 2017, Frid-Adar et al., 2018, Rubin et al., 2019).

2.2.5.3.2 Invariant networks
An alternative to data augmentation is using an invariant neural network (e.g., a rotation-invariant CNN (Cheng et al., 2016)). These networks have special structures that allow them to learn particular classes of data invariants implicitly, without augmenting the dataset in the pre-processing stage. One reason this technique is not as common as data augmentation is that there can be many different kinds of invariant neural networks, and theoretically studying all of them and optimizing the software framework for each is extremely costly.
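A minimal sketch of the interpolation-free augmentation described above: the eight symmetries of a square image obtained from quarter-turn rotations and reflections.

```python
import torch

def augment_interpolation_free(img: torch.Tensor) -> list:
    """All 8 symmetries of a square image (rotations by 0/90/180/270 degrees,
    each with an optional reflection) -- warps needing no interpolation."""
    views = []
    for k in range(4):                                 # quarter-turn rotations
        rot = torch.rot90(img, k, dims=(-2, -1))
        views.append(rot)
        views.append(torch.flip(rot, dims=(-1,)))      # horizontal reflection
    return views

cell = torch.randn(1, 64, 64)                          # stand-in cell image
print(len(augment_interpolation_free(cell)))           # 8 augmented copies
```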
2.3 Applications
Data-driven approaches have become widely applied in the biological and medical sciences. For instance, in cancer or drug research, single-cell profiling for clinical diagnostics generates data far beyond human capacity to comprehend and interpret. Recently, novel approaches powered mainly by deep-learning models have enabled dynamic systems that can recognize hidden patterns and relations in the data that are not obvious to human perception. Here, we cover image-based applications in cellular analysis
(Table 2.1) with diverse imaging modalities (e.g., brightfield or fluorescence microscopy) that could potentially be used in clinical routines that are currently labor-intensive and time-consuming manual processes. The ability of machines to “learn” can shed light on the complex nature of diseases, from the cellular level to physiological consequences; thus, more rapid and effective therapies and regular screening techniques for diagnostic, prognostic and preventive purposes could be developed.
2.3.1 Blood cell analysis for clinical decision-making
Recent data published by the American Medical Association (AMA) indicate that over 41 million complete blood count (CBC) tests were ordered in the USA in 2015 alone (BRIEF, 2015). CBC is a routine and comprehensive medical test that gives physicians a patient’s overall health status at a glance, so that any detected abnormality can be further investigated with more specific testing procedures. Thus, identification of each cell within the blood is of great importance for accurate and rapid diagnosis of several diseases. Since 1 mL of blood contains approximately 5.5 billion blood cells, including over 5 billion red blood cells (RBCs), 4–11 million white blood cells (WBCs) and 150–400 million platelets (PLTs) (Dean, 2005), identification and counting of blood cells are very challenging due to the sub-populations and morphological heterogeneity of blood cells. Traditionally, smear tests, Coulter counters and flow cytometers have been widely used for the CBC test. A blood smear is examined through high-power microscope objectives, typically 100×, by a trained physician to evaluate the number and morphologies of each type of stained blood cell. For counting blood cells using Coulter counters, the impedance difference of each cell, which is directly related to cell size, is used. Flow cytometers, on the other hand, utilize forward and back-scattering of a laser beam from the cells in order to classify them. Cells can also be stained with fluorescent dyes to identify the cell of interest. Furthermore, the detected cells can be sorted in highly sophisticated instruments such as fluorescence-activated cell sorting (FACS). Traditional methods rely on rule-based processing of signals to detect cells. However, deep-learning-based algorithms can be adapted to traditional methods to enhance their classification accuracy without increasing analysis time.
2.3.1.1 Flow cytometry-based cell analysis
The next generation of flow cytometers uses fluorescent and brightfield images of each cell; these instruments are called imaging flow cytometers (IFC). By using machine learning in these systems, high-throughput (100,000 particles/s) analysis with a low false positive rate (one in a million) could be achieved (Goda et al., 2012). It was demonstrated that deep learning automates and improves the detection and sorting of
different cell lineages, including erythrocytes, leukocytes, platelets and circulating EpCAM+ cells, in a microfluidic channel (Nitta et al., 2018). Furthermore, IFC can be integrated with other measurement techniques that provide more detailed information on single cells. For instance, a deep-learning-based imaging flow cytometer was integrated with stimulated Raman scattering (SRS) for morphological and structural cell screening with a throughput of 140 cells/s (Suzuki et al., 2019). Combining different modalities in this way reduces or eliminates the need for fluorescent labeling and sample preparation that may negatively affect cells. Hence, this technique allows whole blood cell analysis and marker-free cancer detection from blood. The trained model achieved 98.8% and 93.4% accuracy in identifying blood cells and cancer cells (HT29), respectively. Deep cytometry utilized direct signal processing through a neural network architecture by illuminating cells with rainbow flashes (Li et al., 2019d). Since acquiring images is often cumbersome and costly in terms of data storage and processing, raw signals generated from an imaging sensor were utilized in this approach. These signals could be classified through a neural network with a more manageable data rate than acquired images (Li et al., 2019d). This system achieved an accuracy of 95.74% with an F1 score (i.e., a measure of accuracy considering precision and recall) of 95.71% for detection of SW-480 epithelial cancer cells at a cell flow rate of 1.3 m/s.
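For reference, the F1 score mentioned above is the harmonic mean of precision and recall:

$$
F_1 = 2\cdot\frac{\mathrm{precision}\cdot\mathrm{recall}}{\mathrm{precision}+\mathrm{recall}},
\qquad
\mathrm{precision}=\frac{TP}{TP+FP},
\quad
\mathrm{recall}=\frac{TP}{TP+FN},
$$

where TP, FP and FN denote true positives, false positives and false negatives, respectively.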
2.3.1.2 Microscopy-based cell analysis
2.3.1.2.1 Erythrocytes
Erythrocyte morphology can reveal much about underlying diseases, indicating pathophysiological conditions such as parasitic infections or metabolic diseases. Thus, microscopic examination of cell morphologies from blood smears is routinely performed by physicians in hematology clinics to detect blood-related disorders such as iron-deficiency anemia, reticulocytosis and hereditary spherocytosis (Adewoyin et al., 2019). QPI was used to detect such hematological disorders by feeding a multilayer perceptron (MLP) with morphological (volume, surface area and sphericity), chemical (hemoglobin content and concentration) and mechanical (membrane fluctuation) properties of single erythrocytes to classify the pathological condition of a patient (Kim et al., 2019b). These classes include the healthy condition and various unhealthy conditions, such as iron-deficiency anemia, reticulocytosis, hereditary spherocytosis and diabetes mellitus. A classification accuracy of 80% was obtained in identifying unhealthy erythrocytes, and over 98% accuracy by averaging the predictions of 10 erythrocytes from the same individual, indicating that larger datasets will enable end-to-end learning using neural networks. Moreover, blood smear images for detecting sickle cell anemia were used to train a deep CNN that classified small image patches into three classes (normal, sickle cell anemia and others) with 83.96% accuracy (Alzubaidi et al., 2018). Their improved version of the CNN, which replaces the simple SoftMax function with a multi-class support vector machine, increased the accuracy to 86.34%. Majority voting over the classifications of multiple patches from the same image increased the accuracy further to 92.06%. Deep CNNs can also identify malaria-infected erythrocytes from patient blood under a regular microscope with a rapid and precise processing scheme. For instance, malaria detection achieved accuracies of 95.79%, 96.18% and 98.13% with AlexNet, LeNet-5 and GoogleNet, respectively (Dong et al., 2017).
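The patient-level aggregation used in the two studies above (averaging predictions over 10 erythrocytes, or majority voting over image patches) reduces to a simple vote once per-cell predictions are available; the following is a minimal sketch with hypothetical labels.

```python
from collections import Counter

def patient_level_prediction(cell_predictions: list) -> int:
    """Majority vote over per-cell (or per-patch) class predictions
    from the same individual or image."""
    return Counter(cell_predictions).most_common(1)[0][0]

# Hypothetical example: 10 erythrocytes from one individual,
# 0 = normal morphology, 1 = abnormal morphology.
print(patient_level_prediction([1, 1, 0, 1, 1, 1, 0, 1, 1, 1]))  # -> 1
```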
2.3.1.2.2 Leukocytes
Classification of the subtypes of white blood cells (WBCs) provides a wide spectrum of information that physicians use to diagnose a variety of abnormalities, from infection to leukemia. Classification and counting of WBCs are traditionally performed through a manual process that is labor-intensive, time-consuming and error-prone. Although there are automated cellular image analysis programs relying on traditional digital image processing algorithms, such as the creation of a binary mask through thresholding and segmentation (Ulman et al., 2017), they do not achieve the sensitivity of experienced eyes that can discriminate differences in the morphologies of WBC types under the microscope. Recent studies show that deep-learning-based classifiers can compete with a trained physician in identifying WBC sub-populations. For instance, subtypes of WBCs were classified with more than 99% accuracy for stained samples (Hegde et al., 2019). The authors trained a CNN with hand-crafted features and also applied transfer learning without any human intervention. Thus, they could successfully classify six subtypes of WBCs, namely, lymphocytes, monocytes, neutrophils, eosinophils, basophils and abnormal cells. Screening the immune response by quantification of T and B cells provides insight into the adaptation capability of the immune system against adversarial factors. Thus, the quantification of these cells is of great importance for the diagnosis of immunodeficiency disorders such as viral infections and auto-immune abnormalities (Raje and Dinakar, 2015). CNNs can greatly improve identification of these cells from microscopic images, with an accuracy of 98%, specificity of 99% and sensitivity of 97% (Turan et al., 2018). Additionally, it was shown that T-cell activation could be recognized by a CNN architecture trained with phase images of cells obtained using diffraction phase microscopy, a label-free imaging modality for measuring a variety of biophysical properties including morphology and biomechanics (Karandikar et al., 2019). Furthermore, a deep CNN trained with microscopic images of WBCs allowed identification of acute lymphoblastic leukemia (ALL) patients with 96.6% accuracy (Thanh et al., 2018). Fine-tuning of a pretrained AlexNet model resulted in an accuracy of 99.50% for ALL detection and an accuracy of 96.06% for detection of ALL subtypes (Shafique and Tehsin, 2018).

2.3.1.2.3 Platelets
Platelets, which are produced in the bone marrow, are primarily important for preventing bleeding at the sites of damaged blood vessels (Nurden et al., 2008) but also play roles in inflammation, tissue regeneration and cancer metastasis (Etulain, 2018). Moreover, platelet aggregates formed in the blood stream of patients cause a variety of atherothrombotic diseases, namely, myocardial infarction and stroke (Mauri et al., 2014, Roe et al., 2012). However, the detection of platelet aggregates is challenging. Optofluidic time-stretch microscopy can be used to detect platelet aggregates from blood cell images (Zhou et al., 2019b). The acquired cell images can then be classified using a deep CNN combined with an encoder–decoder pair for the elimination of artifacts (such as background noise) on images. The accuracy of the proposed classifier for aggregates formed by adenosine diphosphate (ADP), collagen, thrombin receptor activator peptide-6 (TRAP-6), U46619 and a control ranges from 58% to 99% depending on the class.

2.3.1.2.4 Rare cells
2.3.1.2.4.1 Cancer cells
Over 90% of cancer deaths are caused by metastasis of the primary tumor to distant organ systems (Cheung et al., 2016). Circulating tumor cells (CTCs) are suspected to have a major role in metastasis (Liu et al., 2009). However, the main challenge in capturing CTCs is that they are present in extremely low quantities, that is, 1–1,000 CTCs in 1 mL of a blood sample containing more than 5 billion blood cells (Dorsey et al., 2015), and exhibit diverse morphologies. Recent methodology mostly relies on sorting CTCs by utilizing either biochemical methods based on surface–antigen interactions or solely biophysical properties such as size, density or membrane charge difference compared to other blood cells (Miyamoto et al., 2014). Since these rare cells exhibit quite heterogeneous morphological differences, it is extremely challenging for a physician to identify them from microscope images. Deep neural networks have recently been shown to detect CTCs with impressive accuracy and high throughput (Mao et al., 2015). Fluorescence microscope images of cells including wild-type breast cancer cells (MCF7-wt) were segmented by CellProfiler to evaluate morphological feature parameters of the nuclei and the cells themselves. After identifying morphological features, a transfer-learning-based deep neural network was utilized to discriminate wild-type breast cancer cells from the rest of the population. Results using the deep neural network show a significant improvement in accuracy (i.e., 87.62%) compared to a classical machine learning algorithm using a linear support vector machine with an accuracy of 20.95% (Kandaswamy et al., 2016).
Deep residual networks also proved highly effective for automatic detection of stained CTCs, with an accuracy of 99.8%, while U-Net achieved 98.2% accuracy (Li et al., 2019a).

2.3.1.2.4.2 Stem cells
Stem cells are able to differentiate into various cell lineages determined by their potency. Pluripotent stem cells may differentiate into any type of cell, whereas multipotent and unipotent cells can only give rise to specific cell types and tissues. Owing to their renewal capacity, stem cells are a highly promising source for various fields, from regenerative and precision medicine to drug screening (Singh et al., 2016). Stem cells, with their therapeutic potential, are already used in applications for health problems such as skin defects, neurodegenerative diseases and leukemia (Watt and Driskell, 2010). For practical applications, it is crucial to identify the potency of stem cells, which is traditionally performed using either immunostaining or lineage tracing. However, these techniques involve costly and time-consuming labeling procedures and complex instrumentation. On the other hand, the morphologies of cells can carry characteristic properties reflecting gene expression specific to that cell. In general, morphologic analysis relying on microscopic examination by an expert is prone to human error. However, several studies show that deep CNNs can recognize some hidden cell patterns that are vague to the human eye (Serag et al., 2019). Thus, CNNs can provide label-free and low-cost identification of stem cell potency using simple microscopy settings (Kusumoto and Yuasa, 2019). For instance, brightfield microscope images of stem and progenitor cells were used for classification of cell lineages up to three generations before molecular markers could identify these cells, with an area under the curve (AUC, where AUC = 1.0 and AUC = 0.5 indicate perfect classification and random assignment, respectively) of 0.87 ± 0.01 (Buggenthin et al., 2017). Moreover, brightfield images were utilized for the identification of reprogrammed cells that can undergo differentiation into induced pluripotent stem (iPS) cells using a CNN, resulting in a top-1 prediction accuracy of 90.8% and a top-2 prediction accuracy of 99.16% (Chang et al., 2017). Phase contrast microscopy also enabled recognition of the differentiation of C2C12 progenitor cells from myoblasts into myotubes using a deep CNN with an accuracy of 91.3% (Niioka et al., 2018). In addition, the viability of D1 bone marrow stem cells was assessed using lensless digital inline holographic microscopy with transfer learning, and the achieved accuracies were 97.4% for VGGNet-19 and 98.0% for ResNet-50 (Delikoyun et al., 2019). Embryonic stem cells (ESCs) are derived from the inner cell mass of blastocysts at a developmental stage, can divide for a prolonged period of time while maintaining an undifferentiated state (self-renewal) and can give rise to any cell type in the body (pluripotency) (Yu and Thomson, 2006).
These properties make ESCs an important resource for regenerative medicine to treat diseases such as spinal cord injuries (Shroff and Gupta, 2015), cardiovascular disease (Fernandes et al., 2015) and osteoarthritis (Cheng et al., 2014), and for basic developmental research (Dvash et al., 2006). Different formations of human embryonic stem cells (hESCs), such as cell clusters or attached cells, show similar texture (Theagarajan et al., 2018). However, fusing a CNN and a Triplet CNN increased the classification accuracy from 86.14% to 91.71% under 2-fold cross-validation on real hESC images. A deep convolutional generative adversarial network (DCGAN) was also designed to generate synthetic hESC images due to the lack of data. The same fused network architecture was trained with synthetic hESC images generated by the DCGAN and tested on real hESC images. By doing so, the accuracy was increased to 94.07% (Theagarajan et al., 2018).
2.3.2 Other applications using cell imaging analysis
2.3.2.1 Infection analysis
The classification of bacteria is essential for diagnosing and treating infectious diseases. Detection of bacteria can take several hours to days with polymerase chain reaction (PCR) and conventional culture techniques. An easy and reliable screening method for pathogens using digital holographic microscopy was proposed, relying on a trained CNN as a fast and highly accurate detection scheme. Five different pathogens, including Bacillus anthracis, were used to train the deep classifier, and the binary classification performance for Bacillus anthracis against the other species reached an accuracy of 96.3% (Jo et al., 2017). Antimicrobial susceptibility testing (AST) can also be performed using deep learning on urine samples flowing through a microfluidic channel (Yu et al., 2018). Further studies enable the identification of bacteria with a variety of modalities, such as light sheet fluorescence microscopy (Hay and Parthasarathy, 2018) and Raman spectroscopy (Ho et al., 2019). In addition, human fungal infection can be predicted through a deep CNN with a high validation accuracy of 96% using GoogleNet (Zhou et al., 2019a).
2.3.2.2 Urinalysis
Urinalysis is a substantial tool for detecting several disorders, such as urinary tract infections and kidney diseases, through physical analysis, chemical examination and microscopic analysis of urine sediment (Misdraji and Nguyen, 1996). Deep learning can provide important opportunities for quick and accurate analysis of urine sedimentation images. Deep CNNs provide rapid and highly accurate urine sedimentation analysis to detect a variety of particles and cells found in urine, such
as RBCs, WBCs, epithelial cells, casts or crystal-like microparticles (Li et al., 2019c, Liang et al., 2018, Sanghvi et al., 2019, Xiang et al., 2019, Xu et al., 2019).
2.3.2.3 Reproductive medicine
Approximately 30% of infertility cases originate from male factors, and male infertility is becoming an important global health issue (Nosrati et al., 2017). Any infertility case is subject to pre-screening of sperm quality according to WHO criteria (WHO, 2010). The most widely used criterion for sperm quality is a morphological examination performed either by an experienced laboratory technician or by computer-assisted semen analysis (CASA) devices (Agarwal et al., 2016, Luna et al., 2015). Performing morphological analysis of sperm cells manually is extremely labor- and time-intensive for medical experts and gives rise to human error and subjectivity in the results. On the other hand, identifying the difference between healthy and abnormal sperm cells with an automated system relies on serial steps of image processing. Recently, a pre-trained deep CNN architecture, VGGNet-16, achieved a 94.1% true positive rate for sperm head classification (Riordon et al., 2019). Further analysis of the DNA integrity of sperm cells requires labeling procedures such as the sperm chromatin structure assay (SCSA). SCSA is performed using a flow cytometer, which gives the DNA fragmentation index (DFI) as the ratio of damaged cell count to total cell count (Marzano et al., 2019). A deep-learning-based classification and scoring scheme was recently proposed for the determination of high-quality sperm without any labeling procedure. The method relied purely on training a CNN with microscope images of sperm cells known to have intact DNA. The model enables selection of sperm with a low DFI, which is associated with high DNA integrity, and it was shown that there was a significant correlation between actual and predicted DFI, so that sperm selection can be performed within the 86th percentile (McCallum et al., 2019). The selection of oocytes is of great importance for a healthy birth. However, selection protocols rely on the expertise of the embryologist and, being subjective, the repeatability and success of the procedure vary greatly (Fauser et al., 2005, Gnoth et al., 2011, Lemmen et al., 2016, Malizia et al., 2009). Deep-learning-based classification was deployed to assess the morphological quality of oocytes with a maximum predictive power of 86.0% (Thirumalaraju et al., 2019). In the in vitro fertilization (IVF) procedure, blastocysts to be implanted are selected by visual inspection by embryologists evaluating the morphological integrity and quality of the embryo (Kragh et al., 2019). The success rate of the IVF procedure can easily vary with the subjective evaluation of blastocysts. A deep-learning-based approach for identifying high-quality human blastocysts from transmitted light microscopy images supports successful conception with an accuracy of 97.53% (Khosravi et al., 2019).
2.3.2.4 Segmentation
Segmentation is a very challenging task for traditional image processing algorithms, particularly for medical data containing cell images. Cellular segmentation is also crucial for tracking changes in the morphology of a cell in response to different processes (e.g., drug response or toxicity). Additionally, cell segmentation is important for identifying abnormalities that change cell properties, such as the texture of the nucleus and cytoplasm, as observed in cervical cancer (Araújo et al., 2019). Cell segmentation can be obtained in a microscopic image using traditional algorithms, namely Otsu’s thresholding (Otsu, 1979) and the watershed algorithm (Malpica et al., 1997). However, these techniques require objects in the image to have well-defined pixel boundaries. Deep-learning methods, on the other hand, approach the problem from a different angle, classifying each pixel as background, boundary or foreground to identify whether it is part of an object (Caicedo et al., 2019), as sketched at the end of this section. In silico labeling (ISL) enables different parts (e.g., nuclei, cell boundaries) of unlabeled fixed or live cell images acquired from transmitted-light microscopy to be predicted by training the deep-learning model with fluorescent images (Christiansen et al., 2018). Thus, predicting fluorescent labels from label-free transmission light microscopy accelerates the imaging process by eliminating not only the staining procedure but also some defects caused by photobleaching and phototoxicity. Image style transfer enables augmentation of training data so that nucleus segmentation performance can be increased even for unseen and unlabeled data (Hollandi et al., 2019). Label-free imaging techniques thus eliminate the need for fluorescent agents and, with the improved performance of deep-learning models, allow 3D cell visualization and 4D cell nucleus segmentation through time-lapse optical diffraction tomography (Lee et al., 2019). Furthermore, U-Net (Ronneberger et al., 2015) and DeepCell (Van Valen et al., 2016) achieved segmentation accuracies of 89.8% and 85.8%, respectively, while CellProfiler (McQuin et al., 2018), a package relying on thresholding algorithms, achieved only 81.8% accuracy for cell nucleus segmentation (Caicedo et al., 2019).
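A minimal, illustrative encoder–decoder sketch of the pixel-wise classification view of segmentation described above (background/boundary/foreground). It is a toy stand-in rather than U-Net itself, which additionally uses skip connections between the encoder and decoder.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Toy encoder-decoder for pixel-wise classification into
    background / boundary / foreground (illustrative only)."""
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(           # extract the essence of the image
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                    # downsample 128 -> 64
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(           # enlarge back to original size
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),  # 64 -> 128
            nn.Conv2d(16, num_classes, 1),      # per-pixel class scores
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

img = torch.randn(1, 1, 128, 128)               # stand-in microscopy image
mask_logits = TinySegNet()(img)                 # (1, 3, 128, 128)
mask = mask_logits.argmax(dim=1)                # per-pixel class labels
```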
2.3.2.5 Tracking
Tracking of cell migration, morphological change, differentiation and engraftment is important for cell-based therapy studies (Sutton et al., 2008). Results of the Cell Tracking Challenge suggest that utilizing prior and contextual information improves tracking performance (Ulman et al., 2017). Furthermore, when the image resolution and frame rate are sufficiently high to capture cell overlaps in consecutive frames, contour evolution algorithms generally perform better than tracking-by-detection algorithms. Otherwise, the tracking-by-detection paradigm outperforms the alternatives (Ulman et al., 2017).
Linking detected objects across frames can be optimized using methods such as linear programming (Berclaz et al., 2011, Haubold et al., 2016). In recent years, many deep-learning-based general-purpose object trackers were introduced (Marvasti-Zadeh et al., 2019). In the cellular image domain, a recent work (Zhou et al., 2019c) proposed jointly performing detection and segmentation using deep learning and utilizing the results for multi-cell tracking.
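Frame-to-frame linking can be illustrated with a minimal assignment-based sketch: treat linking as a bipartite matching that minimizes the total centroid displacement, solved here with SciPy's Hungarian-algorithm routine. This is a tiny stand-in for the linear programming formulations cited above, not a reimplementation of any of them.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(prev_centroids: np.ndarray, curr_centroids: np.ndarray):
    """One-to-one linking of detections between two consecutive frames by
    minimizing total centroid displacement."""
    # Cost matrix: pairwise Euclidean distances between detections.
    cost = np.linalg.norm(prev_centroids[:, None] - curr_centroids[None, :],
                          axis=-1)
    rows, cols = linear_sum_assignment(cost)     # optimal assignment
    return [(int(r), int(c)) for r, c in zip(rows, cols)]

prev = np.array([[10.0, 12.0], [40.0, 41.0]])    # cell centroids in frame t
curr = np.array([[41.0, 43.0], [11.0, 13.0]])    # cell centroids in frame t+1
print(link_frames(prev, curr))                   # [(0, 1), (1, 0)]
```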
2.4 Discussion
In traditional image processing, data is processed at the single-pixel level with strict mathematical relations; thus, relations among neighboring pixels dominate the meaningful outcomes at the image level. Since neural networks are able to process data by recognizing patterns within the image, they outperform the traditional methodology. A major advantage of applying deep learning to biological and medical image-based data is the capability of neural networks to reveal visual relations hidden to the human eye. The diagnostic abilities of many neural networks have been reported to be comparable or superior to their human counterparts. However, they are not widely used in hospitals, since these models are not sufficiently robust: their performance can drop significantly when they are tested on data from different distributions. Both the imaging device and the operator taking the images can dramatically reduce the performance of a neural network that was previously trained on a particular dataset. Because of this, data standardization warrants great attention. Solving this standardization issue will also lead to widely used and scalable deep-learning products deployed in both research and clinical settings. Another obstacle to shipping a product in such a decision-critical domain is that these neural networks are usually not interpretable. Building interpretable models rather than black-box models should also help in building more robust systems and in improving other favorable properties such as causality, privacy and fairness (Doshi-Velez and Kim, 2017). Label-free techniques offer great simplicity for medical imaging. However, this simplicity generally reduces the sensitivity of the imaging modality by losing distinguishing features within the biological entity. While fluorescent labels bring extra cost and labor to cell analysis, they can unveil information at the molecular level. It is difficult and impractical to image a sample with a variety of imaging modalities simultaneously. Although multimodal imaging-based microscopes combine multispectral data with impressive spatial resolution, their complexity limits widespread application globally (Vinegoni et al., 2006). Generative models are therefore trained to generate and fuse images of different modalities into one information-rich data source and to predict from a given single modality. More clearly, a generative model can be trained to learn the structural
relations between, for example, fluorescent and brightfield microscope images of cells (Christiansen et al., 2018). Hence, it can generate a fluorescent image in response to a given brightfield image even though the input image does not contain any fluorescence staining. There is a substantial effort to bring multimodal and spectral cellular imaging together, fusing information from different domains into a single scheme. For instance, holographic imaging was enhanced by introducing corresponding brightfield images (Wu et al., 2019). Furthermore, GANs can generate three-dimensional fluorescence images from a single transmitted light microscope image (Ounkomol et al., 2018). Resolution enhancement of microscopic images captured with optical settings of limited resolving power can also be achieved through GANs (Wang et al., 2019). As a result, fusing the multimodal nature of imaging brings about the concept of augmented microscopy, which can expand the possible outcomes of recent imaging modalities and democratize high-quality cellular microscopic examination for accurate medical diagnostics. Analysis of cellular images in medicine and biology is a considerable challenge because it requires extensive effort by medical or biological experts and is prone to human error and subjectivity. Analytical and post-analytical errors lead to wrong results and interpretations in laboratory medicine (De la Salle, 2019, Shukla et al., 2016). Automated systems can reduce the number of errors, but this is still not enough to eliminate errors entirely. The inadequacy of automated systems makes these analyses time-consuming and reduces the reproducibility of the results. Automation of the analysis processes offers a great opportunity to decrease the time and labor required and the large variations across different human experts (Huang and Xu, 2017). Deep learning has only recently been applied to biological image analysis, so researchers and physicians have concerns about the consequences of applying it to decision-making due to the uninterpretable nature of the process and the complexity of how the output is achieved (Hinton, 2018). However, deep learning not only enhances consistency and reproducibility via rapid interpretation but also reduces medical errors and allows detection of sub-visual clues that may be new potential signatures in cellular image analysis (Topol, 2019). Cellular image analysis using deep learning can be successfully applied in many fields, such as analysis of blood cells to detect disorders including anemia (Alzubaidi et al., 2018), malaria (Dong et al., 2017) and leukemia (Thanh et al., 2018), determination of lineage choice in stem cells (Buggenthin et al., 2017), determination of sperm (McCallum et al., 2019) and blastocyst (Khosravi et al., 2019) quality in fertility studies, and detection of circulating cancer cells (Zeune et al., 2018) (Table 2.1). Through cellular images, cells can be identified by specific markers expressed on the cell surface or by differences in the phenotypic character of the cells. It is possible to increase accuracy and sensitivity in identifying cells whose surface markers are labeled with different-colored agents, by alleviating misclassification due to the variability of cells in size, morphology and color intensity (Turan et al., 2018), and to improve morphology-dependent
cellular recognition (Niioka et al., 2018). Besides, to eliminate the requirement of cell labeling, which is not applicable to cell types whose surface markers frequently vary, new label-free protocols are being generated by analyzing images obtained from label-free techniques such as quantitative phase imaging (Kim et al., 2019b), holographic microscopy (Jo et al., 2017) and Raman scattering microscopy (Suzuki et al., 2019) through deep learning. Neural networks can thus recognize visual information hidden to the human eye even in unlabeled microscopic data, discriminating cell lineages (Buggenthin et al., 2017), DNA fragmentation (McCallum et al., 2019) or fluorescent signals (Christiansen et al., 2018).
2.5 Conclusion
Lately, efforts focusing on developing automated systems that can learn from the data itself have been widely adopted in the biological and medical sciences due to the tremendous need for objective, standardized and quantitative analysis. This results in either eliminating human error in clinical decision processes or increasing efficiency with highly accurate and rapid analyses, which could also lead to new discoveries and improvements in the biological sciences. The complex and high-dimensional nature of medical data coincides well with the effective representational capability of neural networks. One of the most promising directions in the biological and medical fields is to use neural networks for image analysis in cases where the human eye can be insensitive, for a variety of applications including objective analysis of morphological cell images (e.g., for detection of rare cells or time-lapse imaging of stem cell differentiation). Needless to say, deep-learning-based computer-aided diagnosis (CAD) systems will revolutionize both research and clinical applications, considering that these approaches have already been employed in different imaging analyses at the cellular level, including classifying cells, detecting target cell lineages, identifying specific morphological changes, and extracting and fusing information from miscellaneous domains; they outperform traditional methodologies, and some particular applications can even exceed human-level performance. Much of the attention has been given to transfer learning, which has accelerated research through low-cost access to state-of-the-art architectures; with little data and relatively low computational power, one can generally manage the training of such models. Although we have experienced great advances in several areas of biology and medicine thanks to deep learning, it may be a little too early to expect real-life applications of such systems before overcoming some of the most fundamental challenges: the interpretability of those systems, mostly known as the black-box problem; accessing huge amounts of data without violating patient privacy rights; and interoperability and reliable inference that do not ignore the high-dimensional nature of medical data in assistance and decision-making systems.
Table 2.1: Deep learning-based cellular applications.

Application | Imaging modality | Cell type | Staining* | Task | Dataset* | Reference

Cytometry
Cell detection and/or sorting | Frequency-division-multiplexed microscopy | Microalgal and blood cells | + | Detection | http://www.goda.chem.s.u-tokyo.ac.jp/intelligentIACS/software.zip | (Nitta et al., 2018)
Cell detection and/or sorting | Raman scattering microscopy | Microalgal, peripheral blood mononuclear cells (PBMCs), human colon cancer (HT) cells and human T lymphoma (Jurkat) cells | + | Detection | – | (Suzuki et al., 2019)
Cell detection and/or sorting | Signal based | Colon cancer and white blood cells | – | Detection | https://zenodo.org/record/ | (Li et al., 2019d)

RBC
Anemia | Brightfield microscopy | Erythrocyte | NR | Classification | https://www.nhlbi.nih.gov/health-topics/sickle-cell-disease; http://sicklecellanaemia.org/ | (Alzubaidi et al., 2018)
Hematological disorders | Quantitative phase microscopy | Erythrocyte | – | Classification | Unspecified | (Kim et al., 2019b)
Malaria | Brightfield microscopy | Erythrocyte | + | Classification | http://peir-vm.path.uab.edu/about.php | (Dong et al., 2017)

WBC
Leukocyte classification | Brightfield microscopy | Leukocyte | + | Classification | NR | (Hegde et al., 2019)
Detection of T and B cells | Fluorescence microscopy | T and B cells | + | Detection | NR | (Turan et al., 2018)
T-cell activation | Diffraction phase microscopy | T cells | – | Classification | NR | (Karandikar et al., 2019)
Leukemia | Brightfield microscopy | Leukocyte | + | Classification | Dataset ALL-IDB, https://homes.di.unimi.it/scotti/all | (Thanh et al., 2018)
Leukemia | Brightfield microscopy | Leukocyte | + | Classification | https://homes.di.unimi.it/scotti/all | (Shafique and Tehsin, 2018)

Platelet
Platelets | Optofluidic time-stretch microscopy | Platelet | NR | Detection | Unspecified | (Zhou et al., 2019b)

Rare cells
Cancer cells | Fluorescence microscopy | Breast cancer | + | Classification | http://www.broadinstitute.org/bbbc, accession BBBC | (Kandaswamy et al., 2016)
Cancer cells | Fluorescence microscopy | NR | + | Detection | NR | (Li et al., 2019a)
Cancer cells | Fluorescence microscopy | Prostate and lung cancer | + | Classification | NR | (Zeune et al., 2018)
Prediction of stem cell differentiation | Brightfield microscopy | Hematopoietic stem and progenitor cells (HSPCs) | – | Classification | https://github.com/QSCD/HematoFatePrediction | (Buggenthin et al., 2017)
Prediction of stem cell differentiation | Brightfield microscopy | Human cord blood CD34+ cells | – | Classification | NR | (Chang et al., 2017)
Prediction of stem cell differentiation | Phase-contrast microscopy | Myoblast (C2C12) cells | – | Classification | NR | (Niioka et al., 2018)
Classification of stem cells | Phase-contrast microscopy | Human embryonic stem cells | – | Classification | Upon request | (Theagarajan et al., 2018)

Miscellaneous
Bacterial infection | Quantitative phase microscopy | Bacillus anthracis | – | Detection | NR | (Jo et al., 2017)
Identification of bacteria | Light sheet fluorescence microscopy | Bacteria in the intestines of larval zebrafish | + | Classification | https://github.com/rplab/BacterialIdentification | (Hay and Parthasarathy, 2018)
Identification of pathogenic bacteria | Raman spectroscopy | Common bacterial pathogens | – | Classification | “Methods” for full dataset | (Ho et al., 2019)
Detection of fungal conidia | Brightfield microscopy | Fungus | + | Detection | NR | (Zhou et al., 2019a)
Antimicrobial susceptibility testing (AST) | Phase-contrast microscopy | Escherichia coli | – | Detection | NR | (Yu et al., 2018)
Sperm quality evaluation | Brightfield microscopy | Sperm | – | Classification | https://figshare.com/articles/Deep_learning-based_selection_of_human_sperm_with_high_DNA_integrity/ | (McCallum et al., 2019)
Oocyte quality evaluation | NR | Oocyte | + | Classification | NR | (Thirumalaraju et al., 2019)
Blastocyst quality evaluation | Brightfield microscopy | Blastocyst | – | Classification | NR | (Thomas et al.)

* +/–: available/not available; NR: not reported.
44
Kerem Delikoyun et al.
Perhaps the most important criterion for adopting these technologies to assist medical experts relies not only on the performance of the models used but also on meeting the demand for interpretable results in critical applications. As a result, artificial neural networks have recently attracted great attention, showing unprecedented performance over traditional image processing techniques in applications ranging from image-based assays to high-content cellular imaging, and many experts believe that such systems could soon be deployed, mostly for diagnostic applications, once some major challenges are overcome. In this chapter, we summarized the main deep learning schemes used in cellular imaging, their major tasks, their potential applications, and the opportunities and challenges they present for the biological and medical sciences.
References

Adewoyin, A.S., Adeyemi, O., Davies, N.O. and Ogbenna, A.A. (2019). Erythrocyte Morphology and Its Disorders. Erythrocyte. IntechOpen. Agarwal, A., Borges, J.A. and Setti, A.S. (2016). Non-invasive sperm selection for in vitro fertilization, Springer. Alzubaidi, L., Al-Shamma, O., Fadhel, M.A., Farhan, L. and Zhang, J. (2018). Classification of Red Blood Cells in Sickle Cell Anemia Using Deep Convolutional Neural Network. International Conference on Intelligent Systems Design and Applications. Springer. Antoniou, A., Storkey, A. and Edwards, H. (2017). Data augmentation generative adversarial networks. arXiv preprint arXiv:1711.04340. Araújo, F.H., Silva, R.R., Ushizima, D.M., Rezende, M.T., Carneiro, C.M., Bianchi, A.G.C. and Medeiros, F.N. (2019). Deep learning for cell image segmentation and ranking, Computerized Medical Imaging and Graphics, 72, 13–21. Arbelle, A. and Raviv, T.R. (2019). Microscopy cell segmentation via convolutional LSTM networks. 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019). IEEE. Baniukiewicz, P., Lutton, J.E., Collier, S. and Bretschneider, T. (2019). Generative adversarial networks for augmenting training data of microscopic cell images, Frontiers in Computer Science, 1, 10. Berclaz, J., Fleuret, F., Turetken, E. and Fua, P. (2011). Multiple object tracking using k-shortest paths optimization, IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(9), 1806–1819. Bhat, G., Danelljan, M., Van Gool, L. and Timofte, R. (2019). Learning Discriminative Model Prediction for Tracking. arXiv preprint arXiv:1904.07220. BRIEF, H.O.D. (2015). Medicare Payments for Clinical Laboratory Tests in 2014: Baseline Data. Buggenthin, F., Buettner, F., Hoppe, P.S., Endele, M., Kroiss, M., Strasser, M., Schwarzfischer, M., Loeffler, D., Kokkaliaris, K.D. and Hilsenbeck, O. (2017). Prospective identification of hematopoietic lineage choice by deep learning, Nature Methods, 14(4), 403. Caicedo, J.C., Roth, J., Goodman, A., Becker, T., Karhohs, K.W., Broisin, M., Csaba, M., McQuin, C., Singh, S. and Theis, F. (2019). Evaluation of deep learning strategies for nucleus segmentation in fluorescence images. BioRxiv:335216. Caruana, R., Lawrence, S. and Giles, C.L. (2001). Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping. Advances in neural information processing systems.
Chang, Y.-H., Abe, K., Yokota, H., Sudo, K., Nakamura, Y., Lin, C.-Y. and Tsai, M.-D. (2017). Human induced pluripotent stem cell region recognition in microscopy images using convolutional neural networks. 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE. Chen, C.L., Mahjoubfar, A., Tai, L.-C., Blaby, I.K., Huang, A., Niazi, K.R. and Jalali, B. (2016a). Deep learning in label-free cell classification, Scientific Reports, 6, 21471. Chen, J., Chen, J., Chao, H. and Yang, M. (2018). Image blind denoising with generative adversarial network based noise modeling. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Chen, J., Yang, L., Zhang, Y., Alber, M. and Chen, D.Z. (2016b). Combining fully convolutional and recurrent neural networks for 3d biomedical image segmentation. Advances in neural information processing systems. Chen, T. and Chefd’Hotel, C. (2014). Deep learning based automatic immune cell detection for immunohistochemistry images. International workshop on machine learning in medical imaging. Springer. Chen, T.-J., Zheng, W.-L., Liu, C.-H., Huang, I., Lai, H.-H. and Liu, M. (2019). Using deep learning with large dataset of microscope images to develop an automated embryo grading system, Fertility & Reproduction, 1(01), 51–56. Cheng, A., Kapacee, Z., Peng, J., Lu, S., Lucas, R.J., Hardingham, T.E. and Kimber, S.J. (2014). Cartilage repair using human embryonic stem cell‐derived chondroprogenitors, Stem Cells Translational Medicine, 3(11), 1287–1294. Cheng, G., Zhou, P. and Han, J. (2016). Rifd-cnn: Rotation-invariant and fisher discriminative convolutional neural networks for object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Cheung, K.J., Padmanaban, V., Silvestri, V., Schipper, K., Cohen, J.D., Fairchild, A.N., Gorin, M.A., Verdone, J.E., Pienta, K.J. and Bader, J.S. (2016). Polyclonal breast cancer metastases arise from collective dissemination of keratin 14-expressing tumor cell clusters, Proceedings of the National Academy of Sciences, 113(7), E854–E863. Christiansen, E.M., Yang, S.J., Ando, D.M., Javaherian, A., Skibinski, G., Lipnick, S., Mount, E., O’Neil, A., Shah, K. and Lee, A.K. (2018). In silico labeling: Predicting fluorescent labels in unlabeled images, Cell, 173(3), 792–803.e19. Cubuk, E.D., Zoph, B., Mane, D., Vasudevan, V. and Le, Q.V. (2018). Autoaugment: Learning augmentation policies from data. arXiv preprint arXiv:1805.09501. Cubuk, E.D., Zoph, B., Shlens, J. and Le, Q.V. (2019). Randaugment: Practical automated data augmentation with a reduced search space. arXiv preprint arXiv:1909.13719. Danelljan, M., Bhat, G., Khan, F.S. and Felsberg, M. (2019). Atom: Accurate tracking by overlap maximization. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. De la Salle, B. (2019). Pre‐and postanalytical errors in haematology, International Journal of Laboratory Hematology, 41, 170–176. Dean, L. (2005). Blood groups and red cell antigens. National Center for Biotechnology Information. Delikoyun, K., Cine, E., Anil-Inevi, M., Ozuysal, M., Ozcivici, E. and Tekin, C.T. (2019). Deep Convolutional Neural Networks For Viability Analysis Directly From Cell Holograms Captured Using Lensless Holographic Microscopy. 23rd International Conference on Miniaturized Systems for Chemistry and Life Sciences. Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K. and Fei-Fei, L. (2009).
Imagenet: A large-scale hierarchical image database. 2009 IEEE conference on computer vision and pattern recognition. IEEE.
Densen, P. (2011). Challenges and opportunities facing medical education, Transactions of the American Clinical and Climatological Association, 122, 48. Dong, Y., Jiang, Z., Shen, H., Pan, W.D., Williams, L.A., Reddy, V.V., Benjamin, W.H. and Bryan, A.W. (2017). Evaluations of deep convolutional neural networks for automatic identification of malaria infected cells. 2017 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI). IEEE. Dorsey, J.F., Kao, G.D., MacArthur, K.M., Ju, M., Steinmetz, D., Wileyto, E.P., Simone, C.B. and Hahn, S.M. (2015). Tracking viable circulating tumor cells (CTCs) in the peripheral blood of non–small cell lung cancer (NSCLC) patients undergoing definitive radiation therapy: Pilot study results, Cancer, 121(1), 139–149. Doshi-Velez, F. and Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608. Dvash, T., Ben-Yosef, D. and Eiges, R. (2006). Human embryonic stem cells as a powerful tool for studying human embryogenesis, Pediatric Research, 60(2), 111. Erhan, D., Bengio, Y., Courville, A., Manzagol, P.-A., Vincent, P. and Bengio, S. (2010). Why does unsupervised pre-training help deep learning?, Journal of Machine Learning Research, 11 ((Feb)), 625–660. Etulain, J. (2018). Platelets in wound healing and regenerative medicine, Platelets, 29(6), 556–568. Fauser, B.C., Devroey, P. and Macklon, N.S. (2005). Multiple birth resulting from ovarian stimulation for subfertility treatment, The Lancet, 365(9473), 1807–1816. Feichtenhofer, C., Pinz, A. and Zisserman, A. (2017). Detect to track and track to detect. Proceedings of the IEEE International Conference on Computer Vision. Fernandes, S., Chong, J.J., Paige, S.L., Iwata, M., Torok-Storb, B., Keller, G., Reinecke, H. and Murry, C.E. (2015). Comparison of human embryonic stem cell-derived cardiomyocytes, cardiovascular progenitors, and bone marrow mononuclear cells for cardiac repair, Stem Cell Reports, 5(5), 753–762. Frid-Adar, M., Diamant, I., Klang, E., Amitai, M., Goldberger, J. and Greenspan, H. (2018). GANbased synthetic medical image augmentation for increased CNN performance in liver lesion classification, Neurocomputing, 321, 321–331. Ghiasi, G., Lin, T.-Y. and Le, Q.V. (2019). Nas-fpn: Learning scalable feature pyramid architecture for object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Girdhar, R., Gkioxari, G., Torresani, L., Paluri, M. and Tran, D. (2018). Detect-and-track: Efficient pose estimation in videos. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Gnoth, C., Maxrath, B., Skonieczny, T., Friol, K., Godehardt, E. and Tigges, J. (2011). Final ART success rates: a 10 years survey, Human Reproduction, 26(8), 2239–2246. Goda, K., Ayazi, A., Gossett, D.R., Sadasivam, J., Lonappan, C.K., Sollier, E., Fard, A.M., Hur, S.C., Adam, J. and Murray, C. (2012). High-throughput single-microparticle imaging flow analyzer, Proceedings of the National Academy of Sciences, 109(29), 11630–11635. Godinez, W.J., Hossain, I., Lazic, S.E., Davies, J.W. and Zhang, X. (2017). A multi-scale convolutional neural network for phenotyping high-content cellular images, Bioinformatics, 33(13), 2010–2019. Gómez-de-Mariscal, E., Maška, M., Kotrbová, A., Pospíchalová, V., Matula, P. and Muñoz-Barrutia, A. (2019). Deep-learning-based segmentation of small extracellular vesicles in transmission electron microscopy images, Scientific Reports, 9(1), 1–10. 
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A. and Bengio, Y. (2014). Generative adversarial nets. Advances in neural information processing systems.
Greff, K., Srivastava, R.K., Koutnίk, J., Steunebrink, B.R. and Schmidhuber, J. (2016). LSTM: A search space odyssey, IEEE Transactions on Neural Networks and Learning Systems, 28(10), 2222–2232. Gupta, R.K., Chen, M., Malcolm, G.P., Hempler, N., Dholakia, K. and Powis, S.J. (2019). Label-free optical hemogram of granulocytes enhanced by artificial neural networks, Optics Express, 27 (10), 13706–13720. Haering, M., Grosshans, J., Wolf, F. and Eule, S. (2018). Automated segmentation of epithelial tissue using cycle-consistent generative adversarial networks. BioRxiv:311373. Han, L. and Yin, Z. (2018). A Cascaded Refinement GAN for Phase Contrast Microscopy Image Super Resolution. International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer. Hartono, P. (2018). A transparent cancer classifier. Health informatics journal:1460458218817800. Haubold, C., Aleš, J., Wolf, S. and Hamprecht, F.A. (2016). A generalized successive shortest paths solver for tracking dividing targets. European Conference on Computer Vision. Springer. Hay, E.A. and Parthasarathy, R. (2018). Performance of convolutional neural networks for identification of bacteria in 3D microscopy datasets, PLoS Computational Biology, 14(12), e1006628. He, K., Gkioxari, G., Dollár, P. and Girshick, R. (2017). Mask r-cnn. Proceedings of the IEEE international conference on computer vision. He, K., Zhang, X., Ren, S. and Sun, J. (2016). Deep residual learning for image recognition. Proceedings of the IEEE conference on computer vision and pattern recognition. Hegde, R.B., Prasad, K., Hebbar, H. and Singh, B.M.K. (2019). Comparison of traditional image processing and deep learning approaches for classification of white blood cells in peripheral blood smear images, Biocybernetics and Biomedical Engineering, 39(2), 382–392. Hinton, G. (2018). Deep learning – a technology with the potential to transform health care, Jama, 320(11), 1101–1102. Ho, C.-S., Jean, N., Hogan, C.A., Blackmon, L., Jeffrey, S.S., Holodniy, M., Banaei, N., Saleh, A.A., Ermon, S. and Dionne, J. (2019). Rapid identification of pathogenic bacteria using Raman spectroscopy and deep learning. arXiv preprint arXiv:1901.07666. Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory, Neural Computation, 9(8), 1735–1780. Hollandi, R., Szkalisity, A., Toth, T., Tasnadi, E., Molnar, C., Mathe, B., Grexa, I., Molnar, J., Balind, A. and Gorbe, M. (2019). A deep learning framework for nucleus segmentation using image style transfer. BioRxiv:580605. Huang, G., Liu, Z., Van Der Maaten, L. and Weinberger, K.Q. (2017). Densely connected convolutional networks. Proceedings of the IEEE conference on computer vision and pattern recognition. Huang, J. and Xu, Z. (2017). Cell detection with deep learning accelerated by sparse kernel, Deep Learning and Convolutional Neural Networks for Medical Image Computing, Springer, 137–157. Huttinga, N. (2017). Insights into deep learning methods with application to cancer imaging, University of Twente. Isola, P., Zhu, J.-Y., Zhou, T. and Efros, A.A. (2017). Image-to-image translation with conditional adversarial networks. Proceedings of the IEEE conference on computer vision and pattern recognition. Jo, Y., Park, S., Jung, J., Yoon, J., Joo, H., Kim, M.-H., Kang, S.-J., Choi, M.C., Lee, S.Y. and Park, Y. (2017). Holographic deep learning for rapid optical screening of anthrax spores, Science Advances, 3(8), e1700606. Kan, A. (2017). 
Machine learning applications in cell image analysis, Immunology and Cell Biology, 95(6), 525–530.
Kandaswamy, C., Silva, L.M., Alexandre, L.A. and Santos, J.M. (2016). High-content analysis of breast cancer using single-cell deep transfer learning, Journal of Biomolecular Screening, 21(3), 252–259. Karandikar, S.H., Zhang, C., Meiyappan, A., Barman, I., Finck, C., Srivastava, P.K. and Pandey, R. (2019). Reagent-free and rapid assessment of t cell activation state using diffraction phase microscopy and deep learning, Analytical Chemistry, 91(5), 3405–3411. Kelly, S. (2018). The continuing evolution of publishing in the biological sciences. bio037325. Khosravi, P., Kazemi, E., Zhan, Q., Malmsten, J.E., Toschi, M., Zisimopoulos, P., Sigaras, A., Lavery, S., Cooper, L.A. and Hickman, C. (2019). Deep learning enables robust assessment and selection of human blastocysts after in vitro fertilization, Npj Digital Medicine, 2(1), 21. Kim, G., Ahn, D., Kang, M., Jo, Y., Ryu, D., Kim, H., Song, J., Ryu, J.S., Choi, G. and Chung, H.J. (2019a). Rapid and label-free identification of individual bacterial pathogens exploiting threedimensional quantitative phase imaging and deep learning. bioRxiv:596486. Kim, G., Jo, Y., Cho, H., Min, H.-S. and Park, Y. (2019b). Learning-based screening of hematologic disorders using quantitative phase imaging of individual red blood cells, Biosensors and Bioelectronics, 123, 69–76. Kragh, M.F., Rimestad, J., Berntsen, J. and Karstoft, H. (2019). Automatic grading of human blastocysts from time-lapse imaging, Computers in Biology and Medicine, 115, 103494. Krizhevsky, A., Sutskever, I. and Hinton, G.E. (2012). Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems. Kupyn, O., Budzan, V., Mykhailych, M., Mishkin, D. and Matas, J.Ã. (2018). Deblurgan: Blind motion deblurring using conditional adversarial networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Kusumoto, D. and Yuasa, S. (2019). The application of convolutional neural network to stem cell biology, Inflammation and Regeneration, 39(1), 14. LeCun, Y., Bengio, Y. and Hinton, G. (2015). Deep learning, Nature, 521(7553), 436–444. LeCun, Y., Boser, B.E., Denker, J.S., Henderson, D., Howard, R.E., Hubbard, W.E. and Jackel, L.D. (1990). Handwritten digit recognition with a back-propagation network. Advances in neural information processing systems. Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J. and Wang, Z. (2017). Photo-realistic single image super-resolution using a generative adversarial network. Proceedings of the IEEE conference on computer vision and pattern recognition. Lee, J., Kim, H., Cho, H., Jo, Y., Song, Y., Ahn, D., Lee, K., Park, Y. and Ye, S.-J. (2019). Deep-learning -based label-free segmentation of cell nuclei in time-lapse refractive index tomograms, IEEE Access, 7, 83449–83460. Lemmen, J., Rodriguez, N., Andreasen, L., Loft, A. and Ziebe, S. (2016). The total pregnancy potential per oocyte aspiration after assisted reproduction – in how many cycles are biologically competent oocytes available?, Journal of Assisted Reproduction and Genetics, 33(7), 849–854. Li, B., Ge, Y., Zhao, Y. and Yan, W. (2019a). Automatic detection of circulating tumor cells with very deep residual networks. Proceedings of the 2019 9th International Conference on Biomedical Engineering and Technology. Li, B., Wu, W., Wang, Q., Zhang, F., Xing, J. and Yan, J. (2019b). Siamrpn++: Evolution of Siamese visual tracking with very deep networks. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Li, Q., Yu, Z., Qi, S., He, Z., Li, S. and Guan, H. (2019c). A recognition method of urine cast based on deep learning. 2019 International Conference on Systems, Signals and Image Processing (IWSSIP). IEEE.
Li, Y., Mahjoubfar, A., Chen, C.L., Niazi, K.R., Pei, L. and Jalali, B. (2019d). Deep cytometry: Deep learning with real-time inference in cell sorting and flow cytometry, Scientific Reports, 9(1), 1–12. Liang, Y., Tang, Z., Yan, M. and Liu, J. (2018). Object detection based on deep learning for urine sediment examination, Biocybernetics and Biomedical Engineering, 38(3), 661–670. Liu, M.C., Shields, P.G., Warren, R.D., Cohen, P., Wilkinson, M., Ottaviano, Y.L., Rao, S.B., EngWong, J., Seillier-Moiseiwitsch, F. and Noone, A.-M. (2009). Circulating tumor cells: a useful predictor of treatment efficacy in metastatic breast cancer, Journal of Clinical Oncology, 27(31), 5153. Liu, W., Song, Y., Chen, D., He, S., Yu, Y., Yan, T., Hancke, G.P. and Lau, R.W. (2019). Deformable Object Tracking with Gated Fusion. IEEE Transactions on Image Processing. Lu, L., Zheng, Y., Carneiro, G. and Yang, L. (2017). Deep learning and convolutional neural networks for medical image computing, Advances in Computer Vision and Pattern Recognition, Springer, New York, NY, USA. Lugagne, J.-B., Lin, H. and Dunlop, M.J. (2019). DeLTA: Automated cell segmentation, tracking, and lineage reconstruction using deep learning. bioRxiv:720615. Luna, D., Hilario, R., Dueñas-Chacón, J., Romero, R., Zavala, P., Villegas, L. and García-Ferreyra, J. (2015). The IMSI procedure improves laboratory and clinical outcomes without compromising the aneuploidy rate when compared to the classical ICSI procedure, Clinical Medicine Insights: Reproductive Health, 9(CMRH), S33032. Majurski, M., Manescu, P., Padi, S., Schaub, N., Hotaling, N., Simon, J.C. and Bajcsy, P. (2019). Cell Image Segmentation Using Generative Adversarial Networks, Transfer Learning, and Augmentations. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. Malizia, B.A., Hacker, M.R. and Penzias, A.S. (2009). Cumulative live-birth rates after in vitro fertilization, New England Journal of Medicine, 360(3), 236–243. Malpica, N., De Solórzano, C.O., Vaquero, J.J., Santos, A., Vallcorba, I., García‐Sagredo, J.M. and Del Pozo, F. (1997). Applying watershed algorithms to the segmentation of clustered nuclei, Cytometry: The Journal of the International Society for Analytical Cytology, 28(4), 289–297. Mamoshina, P., Vieira, A., Putin, E. and Zhavoronkov, A. (2016). Applications of deep learning in biomedicine, Molecular Pharmacy, 13(5), 1445–1454. Mao, Y., Yin, Z. and Schober, J.M. (2015). Iteratively training classifiers for circulating tumor cell detection. 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI). IEEE. Marvasti-Zadeh, S.M., Cheng, L., Ghanei-Yakhdan, H. and Kasaei, S. (2019). Deep learning for visual tracking: A comprehensive survey. arXiv preprint arXiv:1912.00535. Marzano, G., Chiriacò, M.S., Primiceri, E., Dell’Aquila, M.E., Ramalho-Santos, J., Zara, V., Ferramosca, A. and Maruccio, G. (2019). Sperm selection in assisted reproduction: A review of established methods and cutting-edge possibilities, Biotechnology Advances, 107498. Mauri, L., Kereiakes, D.J., Yeh, R.W., Driscoll-Shempp, P., Cutlip, D.E., Steg, P.G., Normand, S.-L.T., Braunwald, E., Wiviott, S.D. and Cohen, D.J. (2014). Twelve or 30 months of dual antiplatelet therapy after drug-eluting stents, New England Journal of Medicine, 371(23), 2155–2166. McCallum, C., Riordon, J., Wang, Y., Kong, T., You, J.B., Sanner, S., Lagunov, A., Hannam, T.G., Jarvi, K. and Sinton, D. (2019). 
Deep learning-based selection of human sperm with high DNA integrity, Communications Biology, 2(1), 1–10. McQuin, C., Goodman, A., Chernyshev, V., Kamentsky, L., Cimini, B.A., Karhohs, K.W., Doan, M., Ding, L., Rafelski, S.M. and Thirstrup, D. (2018). CellProfiler 3.0: Next-generation image processing for biology, PLoS Biology, 16(7), e2005970. Mirza, M. and Osindero, S. (2014). Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784.
Misdraji, J. and Nguyen, P.L. (1996). Urinalysis: when – and when not – to order, Postgraduate Medicine, 100(1), 173–192. Miyamoto, D.T., Sequist, L.V. and Lee, R.J. (2014). Circulating tumour cells: Monitoring treatment response in prostate cancer, Nature Reviews Clinical Oncology, 11(7), 401. Moen, E., Bannon, D., Kudo, T., Graf, W., Covert, M. and Van Valen, D. (2019). Deep learning for cellular image analysis. Nature methods:1. Munir, K., Elahi, H., Ayub, A., Frezza, F. and Rizzi, A. (2019). Cancer diagnosis using deep learning: A bibliographic review, Cancers, 11(9), 1235. Niioka, H., Asatani, S., Yoshimura, A., Ohigashi, H., Tagawa, S. and Miyake, J. (2018). Classification of C2C12 cells at differentiation by convolutional neural network of deep learning using phase contrast images, Human Cell, 31(1), 87–93. Nitta, N., Sugimura, T., Isozaki, A., Mikami, H., Hiraki, K., Sakuma, S., Iino, T., Arai, F., Endo, T. and Fujiwaki, Y. (2018). Intelligent image-activated cell sorting, Cell, 175(1). 266–276. e213. Nosrati, R., Graham, P.J., Zhang, B., Riordon, J., Lagunov, A., Hannam, T.G., Escobedo, C., Jarvi, K. and Sinton, D. (2017). Microfluidics for sperm analysis and selection, Nature Reviews Urology, 14(12), 707. Nurden, A.T., Nurden, P., Sanchez, M., Andia, I. and Anitua, E. (2008). Platelets and wound healing, Frontiers in Bioscience: A Journal and Virtual Library, 13, 3532–3548. Ota, S., Horisaki, R., Kawamura, Y., Ugawa, M., Sato, I., Hashimoto, K., Kamesawa, R., Setoyama, K., Yamaguchi, S. and Fujiu, K. (2018). Ghost cytometry, Science, 360(6394), 1246–1251. Otsu, N. (1979). A threshold selection method from gray-level histograms, IEEE Transactions on Systems, Man, and Cybernetics, 9(1), 62–66. Ounkomol, C., Seshamani, S., Maleckar, M.M., Collman, F. and Johnson, G.R. (2018). Label-free prediction of three-dimensional fluorescence images from transmitted-light microscopy, Nature Methods, 15(11), 917. Paeglis, A., Strumfs, B., Mezale, D. and Fridrihsone, I. (2018). A review on machine learning and deep learning techniques applied to liquid biopsy. Pan, H., Xu, Z. and Huang, J. (2015). An effective approach for robust lung cancer cell detection. International Workshop on Patch-based Techniques in Medical Imaging. Springer. Payer, C., Štern, D., Neff, T., Bischof, H. and Urschler, M. (2018). Instance segmentation and tracking with cosine embeddings and recurrent hourglass networks. International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer. Perez, L. and Wang, J. (2017). The effectiveness of data augmentation in image classification using deep learning. arXiv preprint arXiv:1712.04621. Qin, F., Gao, N., Peng, Y., Wu, Z., Shen, S. and Grudtsin, A. (2018). Fine-grained leukocyte classification with deep residual learning for microscopic images, Computer Methods and Programs in Biomedicine, 162, 243–252. Radford, A., Metz, L. and Chintala, S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434. Raje, N. and Dinakar, C. (2015). Overview of immunodeficiency disorders, Immunology and Allergy Clinics, 35(4), 599–623. Razzak, M.I., Naz, S. and Zaib, A. (2018). Deep learning for medical image processing: Overview, challenges and the future, Classification in BioApps, Springer, 323–350. Rehman, A., Abbas, N., Saba, T., Rahman, S.I.U., Mehmood, Z. and Kolivand, H. (2018). 
Classification of acute lymphoblastic leukemia using deep learning, Microscopy Research and Technique, 81(11), 1310–1317. Ren, S., He, K., Girshick, R. and Sun, J. (2015). Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems.
Riordon, J., McCallum, C. and Sinton, D. (2019). Deep learning for the classification of human sperm, Computers in Biology and Medicine, 103342. Rivenson, Y., Liu, T., Wei, Z., Zhang, Y., de Haan, K. and Ozcan, A. (2019a). PhaseStain: The digital staining of label-free quantitative phase microscopy images using deep learning, Light: Science & Applications, 8(1), 23. Rivenson, Y., Wang, H., Wei, Z., de Haan, K., Zhang, Y., Wu, Y., Günaydın, H., Zuckerman, J.E., Chong, T. and Sisk, A.E. (2019b). Virtual histological staining of unlabelled tissueautofluorescence images via deep learning, Nature Biomedical Engineering, 3(6), 466. Roe, M.T., Armstrong, P.W., Fox, K.A., White, H.D., Prabhakaran, D., Goodman, S.G., Cornel, J.H., Bhatt, D.L., Clemmensen, P. and Martinez, F. (2012). Prasugrel versus clopidogrel for acute coronary syndromes without revascularization, New England Journal of Medicine, 367(14), 1297–1309. Rolnick, D., Veit, A., Belongie, S. and Shavit, N. (2017). Deep learning is robust to massive label noise. arXiv preprint arXiv:1705.10694. Ronneberger, O., Fischer, P. and Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. International Conference on Medical image computing and computerassisted intervention. Springer. Rubin, M., Stein, O., Turko, N.A., Nygate, Y., Roitshtain, D., Karako, L., Barnea, I., Giryes, R. and Shaked, N.T. (2019). TOP-GAN: Label-free cancer cell classification using deep learning with a small training set. Medical Image Analysis. Sadafi, A., Koehler, N., Makhro, A., Bogdanova, A., Navab, N., Marr, C. and Peng, T. (2019). Multiclass Deep Active Learning for Detecting Red Blood Cell Subtypes in Brightfield Microscopy. International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer. Sainath, T.N., Vinyals, O., Senior, A. and Sak, H. (2015). Convolutional, long short-term memory, fully connected deep neural networks. 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE. Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A. and Chen, X. (2016). Improved techniques for training gans. Advances in neural information processing systems. Sanghvi, A.B., Allen, E.Z., Callenberg, K.M. and Pantanowitz, L. (2019). Performance of an artificial intelligence algorithm for reporting urine cytopathology, Cancer Cytopathology, 127(10), 658–666. Schmidhuber, J. (2015). Deep learning in neural networks: An overview, Neural Networks, 61, 85–117. Schuster, M. and Paliwal, K.K. (1997). Bidirectional recurrent neural networks, IEEE Transactions on Signal Processing, 45(11), 2673–2681. Segebarth, D., Griebel, M., Duerr, A., von Collenberg, C.R., Martin, C., Fiedler, D., Comeras, L.B., Sah, A., Stein, N. and Gupta, R. (2018). DeepFLaSh, a deep learning pipeline for segmentation of fluorescent labels in microscopy images. bioRxiv:473199. Serag, A., Ion-Margineanu, A., Qureshi, H., McMillan, R., Saint Martin, M.-J., Diamond, J., O’Reilly, P. and Hamilton, P. (2019). Translational AI and Deep Learning in Diagnostic Pathology. Frontiers in Medicine 6. Sermanet, P., Eigen, D., Zhang, X., Mathieu, M., Fergus, R. and LeCun, Y. (2013). Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv:1312.6229. Shafique, S. and Tehsin, S. (2018). Acute lymphoblastic leukemia detection and classification of its subtypes using pretrained deep convolutional neural networks, Technology in Cancer Research & Treatment, 17, 1533033818802789.
Shroff, G. and Gupta, R. (2015). Human embryonic stem cells in the treatment of patients with spinal cord injury, Annals of Neurosciences, 22(4), 208. Shukla, D.K.B., Kanetkar, S.R., Gadhiya, S.G. and Ingale, S. (2016). Study of pre-analytical and post-analytical errors in hematology laoratory in a tertiary care hospital, Journal of Medical Science and Clinical Research, 4(12), 14964–14967. Siam, M., Valipour, S., Jagersand, M. and Ray, N. (2017). Convolutional gated recurrent networks for video segmentation. 2017 IEEE International Conference on Image Processing (ICIP). IEEE. Simonyan, K. and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Singh, V.K., Saini, A., Kalsan, M., Kumar, N. and Chandra, R. (2016). Describing the stem cell potency: The various methods of functional assessment and in silico diagnostics, Frontiers in Cell and Developmental Biology, 4, 134. Song, Q., Zhao, L., Luo, X. and Dou, X. (2017). Using deep learning for classification of lung nodules on computed tomography images, Journal of Healthcare Engineering, 2017. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. and Salakhutdinov, R. (2014). Dropout: A simple way to prevent neural networks from overfitting, The Journal of Machine Learning Research, 15(1), 1929–1958. Stollenga, M.F., Byeon, W., Liwicki, M. and Schmidhuber, J. (2015). Parallel multi-dimensional LSTM, with application to fast biomedical volumetric image segmentation. Advances in neural information processing systems. Su, Y.-T., Lu, Y., Chen, M. and Liu, -A.-A. (2017). Spatiotemporal joint mitosis detection using CNNLSTM network in time-lapse phase contrast microscopy images, IEEE Access, 5, 18033–18041. Sutton, E.J., Henning, T.D., Pichler, B.J., Bremer, C. and Daldrup-Link, H.E. (2008). Cell tracking with optical imaging, European Radiology, 18(10), 2021–2032. Suzuki, Y., Kobayashi, K., Wakisaka, Y., Deng, D., Tanaka, S., Huang, C.-J., Lei, C., Sun, C.-W., Liu, H. and Fujiwaki, Y. (2019). Label-free chemical imaging flow cytometry by high-speed multicolor stimulated Raman scattering, Proceedings of the National Academy of Sciences, 116(32), 15842–15848. Tan, M. and Le, Q.V. (2019). EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv preprint arXiv:1905.11946. Tan, M., Pang, R. and Le, Q.V. (2019). Efficientdet: Scalable and efficient object detection. arXiv preprint arXiv:1911.09070. Thanh, T., Vununu, C., Atoev, S., Lee, S.-H. and Kwon, K.-R. (2018). Leukemia blood cell image classification using convolutional neural network, International Journal of Computer Theory and Engineering, 10(2), 54–58. Theagarajan, R., Guan, B.X. and Bhanu, B. (2018). DeephESC: An automated system for generating and classification of human embryonic stem cells. 2018 24th International Conference on Pattern Recognition (ICPR). IEEE. Thirumalaraju, P., Bormann, C.L., Kanakasabapathy, M.K., Kandula, H. and Shafiee, H. (2019). Deep learning-enabled prediction of fertilization based on oocyte morphological quality, Fertility and Sterility, 112(3), e275. Thomas, T., Gori, F., Khosla, S., Jensen, M.D., Burguera, B. and Riggs, B.L. (1999). Leptin acts on human marrow stromal cells to enhance differentiation to osteoblasts and to inhibit differentiation to adipocytes, Endocrinology, 140(4), 1630–1638. Topol, E.J. (2019). High-performance medicine: the convergence of human and artificial intelligence, Nature Medicine, 25(1), 44. 
Turan, B., Masuda, T., Noor, A.M., Horio, K., Saito, T.I., Miyata, Y. and Arai, F. (2018). High accuracy detection for T-cells and B-cells using deep convolutional neural networks, ROBOMECH Journal, 5(1), 29.
Ulman, V., Maška, M., Magnusson, K.E., Ronneberger, O., Haubold, C., Harder, N., Matula, P., Matula, P., Svoboda, D. and Radojevic, M. (2017). An objective comparison of cell-tracking algorithms, Nature Methods, 14(12), 1141. Van Valen, D.A., Kudo, T., Lane, K.M., Macklin, D.N., Quach, N.T., DeFelice, M.M., Maayan, I., Tanouchi, Y., Ashley, E.A. and Covert, M.W. (2016). Deep learning automates the quantitative analysis of individual cells in live-cell imaging experiments, PLoS Computational Biology, 12(11), e1005177. Veit, A., Alldrin, N., Chechik, G., Krasin, I., Gupta, A. and Belongie, S. (2017). Learning from noisy large-scale datasets with minimal supervision. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Vijayalakshmi, A. (2019). Deep learning approach to detect malaria from microscopic images. Multimedia Tools and Applications:1–21. Vinegoni, C., Ralston, T., Tan, W., Luo, W., Marks, D.L. and Boppart, S.A. (2006). Multi-modality imaging of structure and function combining spectral-domain optical coherence and multiphoton microscopy. Coherence Domain Optical Methods and Optical Coherence Tomography in Biomedicine X. International Society for Optics and Photonics. von Chamier, L., Laine, R.F. and Henriques, R. (2019). Artificial Intelligence for Microscopy: What You Should Know. Wang, H., Rivenson, Y., Jin, Y., Wei, Z., Gao, R., Günaydın, H., Bentolila, L.A., Kural, C. and Ozcan, A. (2019). Deep learning enables cross-modality super-resolution in fluorescence microscopy, Nat. Methods, 16, 103–110. Wang, S., Su, Z., Ying, L., Peng, X., Zhu, S., Liang, F., Feng, D. and Liang, D. (2016). Accelerating magnetic resonance imaging via deep learning. 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI). IEEE. Watt, F.M. and Driskell, R.R. (2010). The therapeutic potential of stem cells, Philosophical Transactions of the Royal Society B: Biological Sciences, 365(1537), 155–163. Wen, C., Miura, T., Fujie, Y., Teramoto, T., Ishihara, T. and Kimura, K.D. (2018). Deep-learning-based flexible pipeline for segmenting and tracking cells in 3D image time series for whole brain imaging. bioRxiv:385567. WHO. 2010. WHO laboratory manual for the examination and processing of human semen. Wu, Y., Luo, Y., Chaudhari, G., Rivenson, Y., Calis, A., de Haan, K. and Ozcan, A. (2019). Bright-field holography: cross-modality deep learning enables snapshot 3D imaging with bright-field contrast using a single hologram, Light: Science & Applications, 8(1), 25. Xiang, H., Chen, Q., Wu, Y., Xu, D., Qi, S., Mei, J., Li, Q. and Liu, X. (2019). Urine Calcium Oxalate Crystallization Recognition Method Based on Deep Learning. 2019 International Conference on Automation, Computational and Technology Management (ICACTM). IEEE. Xie, Q., Hovy, E., Luong, M.-T. and Le, Q.V. (2019). Self-training with Noisy Student improves ImageNet classification. arXiv preprint arXiv:1911.04252. Xie, S., Girshick, R., Dollár, P., Tu, Z. and He, K. (2017). Aggregated residual transformations for deep neural networks. Proceedings of the IEEE conference on computer vision and pattern recognition. Xu, M., Papageorgiou, D.P., Abidi, S.Z., Dao, M., Zhao, H. and Karniadakis, G.E. (2017). A deep convolutional neural network for classification of red blood cells in sickle cell anemia, PLoS Computational Biology, 13(10), e1005746. Xu, X.-T., Zhang, J., Chen, P., Wang, B. and Xia, Y. (2019). Urine Sediment Detection Based on Deep Learning. International Conference on Intelligent Computing. Springer. 
Xue, H., Huynh, D.Q. and Reynolds, M. (2018). SS-LSTM: A hierarchical LSTM model for pedestrian trajectory prediction. 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE.
Yosinski, J., Clune, J., Bengio, Y. and Lipson, H. (2014). How transferable are features in deep neural networks? Advances in neural information processing systems. Yu, H., Jing, W., Iriya, R., Yang, Y., Syal, K., Mo, M., Grys, T.E., Haydel, S.E., Wang, S. and Tao, N. (2018). Phenotypic antimicrobial susceptibility testing with deep learning video microscopy, Analytical Chemistry, 90(10), 6314–6322. Yu, J. and Thomson, J.A. (2006). Embryonic Stem Cells. Yue, T. and Wang, H. (2018). Deep learning for genomics: A concise overview. arXiv preprint arXiv:1802.00810. Yue-Hei Ng, J., Hausknecht, M., Vijayanarasimhan, S., Vinyals, O., Monga, R. and Toderici, G. (2015). Beyond short snippets: Deep networks for video classification. Proceedings of the IEEE conference on computer vision and pattern recognition. Zamir, A.R., Sax, A., Shen, W., Guibas, L.J., Malik, J. and Savarese, S. (2018). Taskonomy: Disentangling task transfer learning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Zanjani, F.G., Zinger, S., Bejnordi, B.E., van der Laak, J.A. and de With, P.H. (2018). Stain normalization of histopathology images using generative adversarial networks. 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). IEEE. Zeune, L.L., de Wit, S., Berghuis, A.S., IJzerman, M.J., Terstappen, L.W. and Brune, C. (2018). How to agree on a CTC: Evaluating the consensus in circulating tumor cell scoring, Cytometry Part A, 93(12), 1202–1206. Zhang, H., Fang, C., Xie, X., Yang, Y., Mei, W., Jin, D. and Fei, P. (2019). High-throughput, highresolution deep learning microscopy based on registration-free generative adversarial network, Biomedical Optics Express, 10(3), 1044–1063. Zhang, K., Chao, W.-L., Sha, F. and Grauman, K. (2016). Video summarization with long short-term memory. European conference on computer vision. Springer. Zhou, Y., Feng, Y. and Zhang, H. (2019a). Human Fungal Infection Image Classification Based on Convolutional Neural Network. Chinese Conference on Image and Graphics Technologies. Springer. Zhou, Y., Yasumoto, A., Lei, C., Huang, C.-J., Kobayashi, H., Wu, Y., Yan, S., Sun, C.-W., Yatomi, Y. and Goda, K. (2019b). Classification of platelet aggregates by agonist type. Zhou, Z., Siddiquee, M.M.R., Tajbakhsh, N. and Liang, J. (2018). Unet++: A nested u-net architecture for medical image segmentation, Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, Springer, 3–11. Zhou, Z., Wang, F., Xi, W., Chen, H., Gao, P. and He, C. (2019c). Joint Multi-frame Detection and Segmentation for Multi-cell Tracking. arXiv preprint arXiv:1906.10886. Zhu, J.-Y., Park, T., Isola, P. and Efros, A.A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. Proceedings of the IEEE international conference on computer vision.
Nazan Kemaloğlu, Turgay Aydoğan, Ecir Uğur Küçüksille
3 Deep learning approaches in metastatic breast cancer detection

Abstract: Breast cancer is one of the most common and most fatal diseases in the world. Although it is usually observed in women, breast cancer is, rarely, also observed in men. Nowadays, thanks to developing technology, the early diagnosis and treatment of breast cancer have resulted in a decrease in mortality due to this disease. Although cancer occurs in a particular organ, it can then spread from that organ to other organs. This condition, called metastasis, is defined as the spread of cancerous cells to other areas, directly or through blood or lymph vessels, outside the tissue in which they are located. Cancerous cells can move to other parts of the body, settle there and quickly spread to new areas. As they spread to new parts of the body, abnormal cells of the same type develop and keep the name of the first tumor. For example, if breast cancer has spread to the lung, the cells there are essentially breast cancer cells, and the tumor formed in the lung is called "metastatic breast cancer" rather than "lung cancer." For metastatic breast cancer detection, adjacent lymph nodes are examined microscopically. In this procedure, performed by pathologists, the detection of lymph nodes with very small tumors in particular is quite difficult and time-consuming. Therefore, computer-assisted metastasis detection increases the sensitivity and speed of the procedure. The data obtained from medical images can be processed in models using various artificial intelligence algorithms, and the results obtained at the end of this procedure provide guidance to specialists in the diagnosis and treatment of the disease. Therefore, the use of computer vision systems in the diagnosis and treatment of the disease is becoming widespread. Furthermore, the deep learning approach, which has recently become popular, has accelerated this field. The convolutional neural network (CNN), a deep learning approach used especially to classify images, is a highly successful model. However, CNN models are insufficient to determine properties of objects in the image such as location, orientation, position and angular value. For this reason, capsule neural networks have been designed to capture such image properties. In this section, breast metastases to axillary lymph nodes, a publicly available breast cancer dataset, were classified with CNN and capsule neural networks.
Nazan Kemaloğlu, Information Technology Application and Research Center, Mehmet Akif Ersoy University, Burdur, Turkey Turgay Aydoğan, Ecir Uğur Küçüksille, Engineering Faculty, Department of Computer Engineering, Suleyman Demirel University, Isparta, Turkey https://doi.org/10.1515/9783110668322-003
Using this publicly available dataset, which consists of 130 pathological images from 78 patients, a study was performed to detect metastases in the images. In the classification stage, a CNN and a capsule neural network developed specifically for this study, as well as the pretrained neural network models ResNet, AlexNet and GoogleNet, were used; their success on breast cancer images was measured, and the results are presented.

Keywords: neural networks, breast cancer classification, deep neural network, convolutional neural network
3.1 Introduction

Cancer is the uncontrolled growth of body cells. As a result of this growth, adjacent tissues are invaded, or cancerous cells can spread to a place farther away from the organ from which they originate (Kızılbey and Akdeste, 2013). The incidence of cancer and mortality due to cancer are rapidly increasing worldwide. Breast cancer is the type of cancer causing the highest number of deaths among women worldwide (Bray et al., 2018). According to American Cancer Society data from January 2019, more than 3.8 million US women have a history of breast cancer (ACS, 2019). It is possible to reduce the risk of disease progression with the help of diagnosis and treatment at an early stage of breast cancer, which is so common and causes so many deaths. Accordingly, people will have a better chance of survival and a better quality of life. However, after breast cancer treatment, the risk of disease progression is still high, and there are some treatment challenges. Therefore, innovation is still needed in the early treatment of breast cancer (Elmore et al., 2019). Cancer cells can reach the surrounding tissues, and they can spread to other tissues and organs of the body through the blood circulation, the lymphatic system and body cavities. This spread is called metastasis (Kızılbey and Akdeste, 2013). Breast cancer can metastasize all over the body (Parlar et al., 2005). The most common metastasis locations of breast cancer are the lymph nodes, bone, lung, liver and brain (Bostanci et al., 2017). The first place to which metastatic breast cancer is likely to spread is considered to be the lymph nodes. A lymph node is surgically removed and examined microscopically to check for signs of metastasis in these nodes. With advances in digital pathology, better detection and analysis can be performed on high-resolution images obtained from lymph nodes (Grover and Singh, 2018). Metastasis detection is performed by pathologists; this process requires intensive labor and is open to mistakes (Liu et al., 2017). Computer-aided diagnosis (CAD) uses computers and software to interpret medical information, with the aim of increasing the diagnostic accuracy of diseases. CAD is used in many areas of the health field, one of which is the medical diagnosis of breast cancer (Mohebian et al., 2017).
Deep learning is widely used in CAD-based studies on breast cancer diagnosis. Information about some studies in the literature on breast cancer diagnosis is given later. Wang et al. presented a deep-learning-based approach to identify cancer metastasis from the images of breast sentinel lymph nodes. They stated that with the use of the model developed by the researchers as a result of their studies, the accuracy and clinical evaluation of pathological diagnoses could be improved (Wang et al., 2016). Chen et al. established a deep learning model using AlexNet architecture to detect metastases in lymph nodes. With the model they developed, they stated that it could be used to detect metastases in lymph nodes and could be expanded for all kinds of images related to breast cancer (Chen et al., 2016). Wollman and Rohr developed an automated measurement application for the detection of breast cancer metastases in the lymph nodes. In their study, they first performed preprocessing using color thresholding and morphological operators from regions of interest described in the images. Afterward, they carried out the classification process by using deep-learning-based models (Wollmann and Rohr, 2017). Liu et al. studied the detection of metastasis in breast cancer. They developed the LYmph Node Assistant algorithm, which is based on deep learning and can be applied on images, and applied it to the Camelyon2016 dataset. They observed the performance of the analyses obtained from this dataset. They emphasized that computer-aided analyses will provide more detailed sectioning and more accurate measurements of tumor sizes (Liu et al., 2018). In order to develop decision support systems for pathology, Campanella et al. presented a multisample learning-based deep learning system. They used basal cell carcinoma, prostate cancer and breast cancer pathology images that metastasized to lymph nodes for testing the system they presented (Campanella et al., 2019). Arevalo et al. presented an approach to the classification of breast cancer mass lesions in mammography images. First, they performed preprocessing to increase the details in the images, and then, using CNN, they performed supervised training to learn both the features and the classifier of breast imaging lesions (Arevalo et al., 2016). Wahaba et al. presented a two-stage model to mitigate the problem of class bias during the classification of mitotic and nonmitotic nuclei using CNN for breast cancer (Wahaba et al., 2017). Rasti et al. proposed a new CAD system for breast contrast-increased magnetic resonance imaging, which has two main stages. In the first stage of the proposed system, an automated solution is proposed for the sectioning of areas that may have a tumor based on the density and morphological information of masses in the image, while in the second stage, a new deep learning model is proposed to classify breast tumors which they called a mixture ensemble of CNN (Reza Rasti, 2017). Dhungel et al. proposed an integrated methodology for detection, segmentation and classification of breast cancer masses from mammographies with minimal user
intervention. In their methodology, they used CNN and the random forest classification method for the classification process (Dhungel et al., 2017). Sun et al. developed a graph-based semisupervised learning scheme using a deep CNN for breast cancer diagnosis. In their scheme, the use of a small amount of labeled data is sufficient, rather than using a large amount of labeled data during the CNN training process. Their system consists of four modules: data weighing, feature selection, dividing cotraining data labeling and CNN (Sun et al., 2017). Chougrad et al. developed a CAD system using CNN models to assist radiologists in classifying mammography mass lesions in breast cancer screening (Chougrad et al., 2018). Xu et al. designed a classifying solution for breast cancer diagnosis. They used a common optical tomography system appropriate for recurring measurements in mass screening. They classified the optical tomographic images obtained with this system (Xu et al., 2019). Singla et al. applied a predetermined Inception-v3 CNN model with inverse active learning to classify healthy and malignant breast tissue using optical coherence tomography images. They stated that the algorithm they developed in their study could be used for real-time, rapid, in vivo, intraoperative margin assessment on large surface areas while preserving the tissue structure (Singla et al., 2019). Alom et al. conducted a study that proposed methods of breast cancer diagnosis with a dual class and multiclass. For the diagnosis method, they made classification using the inception recurrent residual CNN approach. They used two datasets, BreakHis and Breast Cancer (BC Classification Challenge 2015), to test the classifications (Alom et al., 2019). In computerized analysis methods, diagnostic procedures based on deep learning were performed, and positive results were obtained. By using deep learning algorithms, accurate results can be obtained by interpreting pathological images at the human performance level (Bejnordi et al., 2017). In this study, a deep learning approach for the detection of metastatic breast cancer in lymph nodes is presented. In the second section, information about CNN architecture is given.
3.2 Convolutional neural network

People use machine learning in all areas in order to reduce the workload and effort of their daily work and to obtain better results (Deperlioğlu and Köse, 2018). One of the computer-based decision-making machine learning techniques is artificial neural networks (ANN). ANN are widely used in science, technology, industry and health. In health applications,
they are generally used as a tool to help healthcare personnel make a more accurate diagnosis and assessment. They are used for diagnosis and assessment in many areas, such as heart diseases, diabetes and chest diseases, by evaluating archived patient information and making decisions about the disease status (Yıldız, 2019). The CNN is a special type of multilayer neural network that has recently been used in the field of computer vision in deep learning (Bayar and Stamm, 2016). The CNN consists of three basic layers: the convolutional layer; the subsampling/downsampling or pooling layer; and the fully connected layer, the training layer (Shan et al., 2017; Lin and Chen, 2013). The structure of a CNN model is given in Figure 3.1. The basic operations in the convolutional and pooling layers of a CNN aim to reveal important features of the image; furthermore, sending unnecessary parts of the image to the ANN is prevented. Thus, the speed and accuracy of the system are increased (Arı and Hanbay, 2019).
Figure 3.1: Convolutional neural network model (Hatipoğlu and Bilgin, 2015). Input images pass through a convolutional layer, a pooling layer, a second convolutional layer and a second pooling layer (feature extraction), followed by a fully connected layer and an output layer (classification).
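As a minimal sketch of this layered structure, the model below mirrors the Figure 3.1 pipeline in Keras; the framework choice, input size (64 × 64 × 3) and layer widths are illustrative assumptions rather than values specified in this chapter:

# A minimal sketch of the Figure 3.1 pipeline: two convolution/pooling stages
# for feature extraction, followed by a fully connected classifier.
# Input size and layer widths are illustrative assumptions only.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),                 # h x w x d input image
    layers.Conv2D(16, (3, 3), activation="relu"),    # convolutional layer
    layers.MaxPooling2D((2, 2)),                     # pooling layer
    layers.Conv2D(32, (3, 3), activation="relu"),    # convolutional layer
    layers.MaxPooling2D((2, 2)),                     # pooling layer
    layers.Flatten(),
    layers.Dense(64, activation="relu"),             # fully connected layer
    layers.Dense(2, activation="softmax"),           # output layer (two classes)
])
model.summary()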
3.2.1 Convolutional layer

When an image is taken as input in the CNN model, the computer reads the pixels of the image as an array, with the value of each element varying between 0 and 255 (Prisilla and Iyyanki, 2018). Depending on the image resolution, the size of the array representing an image can be expressed as h × w × d, where h represents the image height, w represents the image width and d represents the depth of the image, that is, the number of channels it has. If the image is a color image, there are three color channels (red, green and blue), so d is taken as 3; color images are thus expressed as three arrays of h × w dimensions (h × w × 3). Gray-level images contain only one color channel and are expressed as a single h × w array (h × w × 1). Figures 3.2 and 3.3 show examples of representing images digitally.
Figure 3.2: Color image representation (a 6 × 6 × 3 color image decomposed into three 6 × 6 × 1 channel arrays).

Figure 3.3: Gray-level image representation (a single 6 × 6 × 1 array).
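As a concrete illustration, the following minimal sketch (assuming NumPy; the array contents are random and purely illustrative) builds the two representations just described:

```python
import numpy as np

h, w = 6, 6
# Color image: h x w pixels, each with three channel values (red, green, blue)
color_image = np.random.randint(0, 256, size=(h, w, 3))
# Gray-level image: h x w pixels with a single channel value
gray_image = np.random.randint(0, 256, size=(h, w, 1))

print(color_image.shape)  # (6, 6, 3)
print(gray_image.shape)   # (6, 6, 1)
```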
In this layer, new values are obtained, and feature maps are created by applying various filters smaller than the size of the images to the images used as input data (Kapadia and Paunwala, 2018). The images used as input data are convolved as a result of the filters applied in this layer. A CNN model can have more than one convolutional layer. These convolutional layers are designed to learn features from previous layers. To calculate new feature maps, the feature maps in the previous layer are convolved with a few convolution kernels, which are also called filters (Wu et al., 2018). Figure 3.4 shows an example layer, the kernel applied to this layer and the resulting feature map (Huang et al., 2015). The kernel size indicates the filter size to be applied to the layer.
Figure 3.4: Creation of a feature map in the convolutional layer (Huang et al., 2015). [A kernel (filter) slides along the x and y directions of the previous convolutional layer to produce the feature map.]
Figure 3.5: Creation of a feature matrix. [The 3 × 3 filter matrix

 1  0 −1
 0  0  0
−1  0  1

applied to the 6 × 6 input image

1 0 1 0 1 0
0 1 0 1 1 0
1 0 1 0 0 1
0 0 1 1 1 1
1 1 0 0 1 0
0 1 0 1 0 1

yields the 4 × 4 feature matrix

 0  0 −1  1
 1  1 −1  1
−1 −1  2 −1
−1 −1  0  0]
Figure 3.5 shows the values of the feature map obtained when a 3 × 3 filter is applied to one color channel of the 6 × 6 image. Some of the filters applied are identity, edge detection, sharpen and Gaussian blur filters (Albawi et al., 2017). The filter matrix shown in Figure 3.5 slides over the input image array, and this produces the feature matrix. Here, the feature matrix element array_FM[0][0] is calculated from the input image array_II and the filter array_F as follows:

$$\mathrm{array_{FM}}[0][0] = \sum_{i=0}^{2} \left( \mathrm{array_{II}}[0][i] \, \mathrm{array_{F}}[0][i] + \mathrm{array_{II}}[1][i] \, \mathrm{array_{F}}[1][i] + \mathrm{array_{II}}[2][i] \, \mathrm{array_{F}}[2][i] \right) \tag{3.1}$$
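Generalizing eq. (3.1) to every output position gives the whole feature matrix. The following minimal sketch (assuming NumPy; the function name is ours) reproduces the Figure 3.5 example:

```python
import numpy as np

def convolve2d_valid(image, kernel):
    # Slide the kernel over the image (stride 1, no padding) and take the
    # element-wise product sum at each position, as in eq. (3.1)
    m, n = image.shape
    p, q = kernel.shape
    fm = np.zeros((m - p + 1, n - q + 1), dtype=int)
    for r in range(m - p + 1):
        for c in range(n - q + 1):
            fm[r, c] = np.sum(image[r:r + p, c:c + q] * kernel)
    return fm

image = np.array([[1, 0, 1, 0, 1, 0],
                  [0, 1, 0, 1, 1, 0],
                  [1, 0, 1, 0, 0, 1],
                  [0, 0, 1, 1, 1, 1],
                  [1, 1, 0, 0, 1, 0],
                  [0, 1, 0, 1, 0, 1]])
kernel = np.array([[1, 0, -1],
                   [0, 0, 0],
                   [-1, 0, 1]])
print(convolve2d_valid(image, kernel))
# [[ 0  0 -1  1]
#  [ 1  1 -1  1]
#  [-1 -1  2 -1]
#  [-1 -1  0  0]]
```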
Assuming that the image size is m × n and the size of the filter matrix is p × q, the size of the feature matrix resulting from the k filters used is determined using the following equation (Kapadia and Paunwala, 2018):

$$r = k \times (m - p + 1) \times (n - q + 1) \tag{3.2}$$

Accordingly, r = 1 × (6 − 3 + 1) × (6 − 3 + 1) = 16, so the feature matrix is calculated as a 16-element array. When the example in Figure 3.5 and the result of eq. (3.2) are considered, the size of the matrix obtained at the output is small compared to the input matrix. In order to prevent this shrinkage of the output matrix, values can be inserted into the input matrix, that is, the image. This insertion process is called padding. The number of pixels to insert into the input matrix by padding is calculated according to the following equation (Gulati et al., 2018):

$$p = \frac{F - 1}{2} \tag{3.3}$$

where F represents the filter size applied to the input image. As a result of applying padding to the input matrix, the new output matrix size is calculated according to the following equation (Gulati et al., 2018):

$$l = m + 2p - F + 1 \tag{3.4}$$
where m represents the matrix size of the input image, F represents the size of the applied filter and p represents the padding value. When the padding operation is performed with p = 1 on the matrix of the input image given in Figure 3.5, the new matrix in Figure 3.6 is obtained. Padding is performed in two different ways. The first is “same” padding, and the second is “valid” padding. The valid padding process does not have any insertion operation. In the same padding process, pixels are inserted into the input matrix (Shah and Kapdi, 2017). Zero padding is most commonly used for padding operations with pixel insertion (Zhang et al., 2017). Zero is inserted to the edges of the matrix according to the p-value calculated in zero padding.
Figure 3.6: Application of the padding process. [With padding = 1, the 6 × 6 input image

1 0 1 0 1 0
0 1 0 1 1 0
1 0 1 0 0 1
0 0 1 1 1 1
1 1 0 0 1 0
0 1 0 1 0 1

becomes the 8 × 8 zero-padded image

0 0 0 0 0 0 0 0
0 1 0 1 0 1 0 0
0 0 1 0 1 1 0 0
0 1 0 1 0 0 1 0
0 0 0 1 1 1 1 0
0 1 1 0 0 1 0 0
0 0 1 0 1 0 1 0
0 0 0 0 0 0 0 0]
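A minimal sketch (assuming NumPy) of the padding computation in eqs. (3.3) and (3.4), reproducing the Figure 3.6 example:

```python
import numpy as np

F = 3                 # filter size
p = (F - 1) // 2      # eq. (3.3): p = 1 for a 3 x 3 filter
image = np.array([[1, 0, 1, 0, 1, 0],
                  [0, 1, 0, 1, 1, 0],
                  [1, 0, 1, 0, 0, 1],
                  [0, 0, 1, 1, 1, 1],
                  [1, 1, 0, 0, 1, 0],
                  [0, 1, 0, 1, 0, 1]])
padded = np.pad(image, p)     # zero padding around all edges, as in Figure 3.6
print(padded.shape)           # (8, 8)
m = image.shape[0]
print(m + 2 * p - F + 1)      # eq. (3.4): output size 6, i.e. "same" padding
```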
The number of steps by which the filter is shifted over the input matrix is defined as the stride. If the stride value is 2, the filter is applied by shifting 2 pixels from its previous position (Öztürk and Akdemir, 2019); if the stride value is 1, the filter is applied by shifting 1 pixel from the point of its previous application. It is recommended that the stride value be equal to or smaller than the size of the filter; otherwise, important parameters or features of the image will disappear from the feature map after the filter is applied (Velandia et al., 2010). If the stride value is different from one, the output matrix size calculated with eq. (3.4) is instead calculated according to eq. (3.5) (Hossain and Sajib, 2019):

$$l = \frac{m + 2p - F}{s} + 1 \tag{3.5}$$
Here, m represents the matrix size of the input image, F represents the size of the applied filter, p represents the padding value and s represents the stride value. Finally, in the convolution layer, an activation function is applied to the feature map. This activation function plays a role in better solving complex problems. The most popular activation functions are Sigmoid, TanH and ReLU (Ye et al., 2018).
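The size relations in eqs. (3.2), (3.4) and (3.5) can be wrapped in one small helper (a sketch; the function name is ours, and a square image and filter are assumed):

```python
def conv_output_size(m, F, p=0, s=1):
    # eq. (3.5): l = (m + 2p - F) / s + 1; with s = 1 this reduces to
    # eq. (3.4), and with p = 0, s = 1 to the (m - F + 1) factor of eq. (3.2)
    return (m + 2 * p - F) // s + 1

print(conv_output_size(6, 3))            # 4: the 4 x 4 feature matrix of Figure 3.5
print(conv_output_size(6, 3, p=1))       # 6: "same" padding keeps the size
print(conv_output_size(6, 3, p=1, s=2))  # 3: stride 2 roughly halves the map
```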
3.2.2 Pooling layer

After the convolution layer, the feature map contains a large number of features, which may lead to computational complexity. The pooling layer is applied to reduce the size of the feature map and thereby reduce this computational complexity. Two methods are generally used: average pooling and max pooling (Wang et al., 2018). Sliding a mask matrix of a chosen size over the feature map matrix and averaging the values in the area covered by the mask is called average pooling; taking the maximum value in the areas covered by the mask is called max pooling (Ye et al., 2018). Figure 3.7 shows an example of average pooling and max pooling; a short sketch follows the figure.
Figure 3.7: Average pooling and max pooling application example (Ye et al., 2018). [A 4 × 4 feature map

1 1 5 6
3 2 1 2
2 7 1 3
4 8 0 4

is pooled with a 2 × 2 mask; max pooling yields

3 6
8 4

and average pooling is shown alongside.]
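A minimal sketch (assuming NumPy; the helper name is ours) of non-overlapping pooling on the Figure 3.7 feature map:

```python
import numpy as np

def pool2d(x, size=2, mode="max"):
    # Split x into non-overlapping size x size blocks and reduce each block;
    # assumes both dimensions of x are divisible by `size`
    h, w = x.shape
    blocks = x.reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3)) if mode == "max" else blocks.mean(axis=(1, 3))

fm = np.array([[1, 1, 5, 6],
               [3, 2, 1, 2],
               [2, 7, 1, 3],
               [4, 8, 0, 4]])
print(pool2d(fm, mode="max"))  # [[3 6]
                               #  [8 4]]
print(pool2d(fm, mode="avg"))  # [[1.75 3.5 ]
                               #  [5.25 2.  ]]
```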
3.2.3 Fully connected layer

The classical neural network learning process is carried out in this layer in order to classify the features obtained in the previous operations (Li et al., 2017). All neurons in this layer are connected to all of the neurons in the pooling layer (Gowda and Rasheed, 2017). The fully connected layer uses the backpropagation algorithm: first, the features are propagated in the feedforward direction; then, by comparing the actual output with the expected output, the error is backpropagated. Through this process, the weight matrix is adjusted to minimize the error (Kapadia and Paunwala, 2018).
3.3 Capsule neural network

The CNN is a deep learning algorithm that is often used in image recognition and image classification. Although this algorithm is used successfully in these fields, it has some deficiencies. With the pooling process used in CNN, the image is summarized in a smaller area, and the size of the data is reduced. Thanks to the pooling layer, small changes in the position of features in the image do not change the resulting feature map. Although this may sound pleasant, it is a fact that information is lost through pooling (Patrick et al., 2019, Towards DataScience, 2019). Geoffrey Hinton, one of the pioneers of deep learning, confirms the problem by stating that: "The pooling operation used in convolutional neural networks is a big mistake, and the fact that it works so well is a disaster!" Moreover, CNN does not take into account the features of objects such as size, position and orientation when performing image recognition. For example, when a facial recognition application is built with CNN, it is difficult for the CNN to notice if the eyes, mouth and nose are moved (Heartbeat.Fritz, 2019, Towards DataScience, 2019, Paoletti et al., 2018). Sabour et al. (2017) proposed capsule neural networks in 2017 to solve these problems. Capsules in a capsule neural network are groups of neurons (Hinton et al., 2011). The activity of the neurons in an active capsule represents characteristics of a given entity in the image, such as size, location and orientation (Sabour et al., 2017). In capsule neural networks, the pooling layer is replaced by a so-called routing-by-agreement criterion. With the help of this criterion, the result in the first layer is transmitted to the parent capsules in the next layer, and the capsule neural network may increase or decrease the connection strength for each parent (Sabour et al., 2017, Hoogi et al., 2019). The second major innovation in capsule
neural networks is the use of the squash function as the activation function. This function is given in the following equation (Sabour et al., 2017):

$$v_j = \frac{\|s_j\|^2}{1 + \|s_j\|^2} \cdot \frac{s_j}{\|s_j\|} \tag{3.6}$$
where vj is the vector output of capsule j, and sj is the total input of capsule j. This function is used to normalize the size of vectors. The outputs from this function determine how the data will be routed between the various capsules trained to learn different concepts (Heartbeat.Fritz, 2019, Sabour et al., 2017). A simple capsule neural network architecture is shown in Figure 3.8.
Figure 3.8: A simple capsule neural network architecture (Sabour et al., 2017). [A 9 × 9 ReLU Conv1 layer with 256 channels produces 20 × 20 feature maps that feed the PrimaryCaps layer (32 maps of 6 × 6 eight-dimensional capsules, built with 9 × 9 convolutions); routing with weight matrices W_ij = [8 × 16] connects PrimaryCaps to the ten 16-dimensional DigitCaps, whose L2 lengths give the 10 class scores.]
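A minimal sketch (assuming NumPy) of the squash nonlinearity in eq. (3.6):

```python
import numpy as np

def squash(s, eps=1e-8):
    # eq. (3.6): rescale s to a length in [0, 1) while preserving its
    # direction; long vectors approach length 1, short ones approach 0
    norm_sq = np.sum(s ** 2)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

v = squash(np.array([3.0, 4.0]))                      # input length 5
print(np.linalg.norm(v))                              # ~0.96
print(np.linalg.norm(squash(np.array([0.1, 0.0]))))   # ~0.01
```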
This architecture consists of three layers: a convolutional layer, a primary capsule layer and a classification capsule layer. The convolutional layer converts pixel intensities into the activities of local feature detectors, which are then used as input to the first capsules. The primary capsule layer is the lowest level of multidimensional entities and, from an inverse graphics perspective, corresponds to inverting the rendering process. The third layer is the classification capsule layer, shown as DigitCaps in Figure 3.8, where the classification is performed (Sabour et al., 2017, Hoogi et al., 2019).
3.4 The pioneering CNN models

In this section, information about well-known ready-made network architectures is given.
3.4.1 AlexNet

AlexNet is a broad and deep neural network architecture developed by Alex Krizhevsky, Ilya Sutskever and Geoffrey E. Hinton to classify the 1.2 million high-resolution images of the ImageNet ILSVRC-2010 competition into 1,000 different classes. AlexNet consists of five convolutional layers and three fully connected layers. The architecture contains 60 million parameters and 650,000 neurons and is shown in Figure 3.9. It takes between 5 and 6 days to train the network on two GTX 580 3 GB GPUs (Krizhevsky et al., 2012). The innovations presented in this network are described in the following subsections.
3.4.1.1 ReLU nonlinearity

The ReLU nonlinearity is applied after all convolution and fully connected layers. In the first and second convolution layers, the ReLU nonlinearity is followed by a local normalization step before pooling. However, the researchers later found that normalization is not very useful (Krizhevsky et al., 2012, Learn Opencv, 2019).
3.4.1.2 Training on multiple GPUs

A single GTX 580 graphics processing unit (GPU) has only 3 GB of memory, which limits the maximum size of the network that can be trained. Because the dataset in the competition is too large to be processed on a single GPU, the designed network was distributed across two GPUs, which were run in parallel (Krizhevsky et al., 2012).
3.4.1.3 Local response normalization

The reason for using local response normalization (LRN) is to promote lateral inhibition. In neurobiology, lateral inhibition refers to the ability of a neuron to reduce the activity of its neighbors (Towards DataScience, 2019). Here, LRN suppresses the surroundings of a local maximum to create contrast in that region, thereby increasing the perception power (Medium, 2019). Thus, local contrast enhancement is performed so that the locally maximal pixel values serve as stimulation for the next layers (Towards DataScience, 2019).
Figure 3.9: AlexNet architecture (Krizhevsky et al., 2012). [The 224 × 224 × 3 input is convolved with 11 × 11 kernels at a stride of 4, producing 55 × 55 feature maps split across two GPUs (48 channels each); subsequent 5 × 5 and 3 × 3 convolutions with max pooling yield 27 × 27 × 128 and 13 × 13 × 192/128 maps per GPU, followed by dense layers of 2,048 + 2,048 units and a final 1,000-way output.]
The mathematical formula of LRN is as follows (Krizhevsky, 2019):

$$b^{i}_{x,y} = \frac{a^{i}_{x,y}}{\left( k + \alpha \sum_{j=\max(0,\, i-n/2)}^{\min(N-1,\, i+n/2)} \left( a^{j}_{x,y} \right)^{2} \right)^{\beta}} \tag{3.7}$$
where b^i_{x,y} refers to the regularized output for kernel i at position (x, y), a^i_{x,y} refers to the source output for kernel i applied at position (x, y), N refers to the total number of kernels and n refers to the size of the normalization neighborhood. On the other hand, α, β and k are hyperparameters (Krizhevsky, 2019, Towards DataScience, 2019).
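A minimal sketch (assuming NumPy; the hyperparameter values shown are the ones reported for AlexNet) of eq. (3.7):

```python
import numpy as np

def lrn(a, k=2.0, alpha=1e-4, beta=0.75, n=5):
    # a: activations with shape (N_kernels, height, width); each channel is
    # divided by a power of the summed squares of its n neighboring channels
    N = a.shape[0]
    b = np.empty_like(a)
    for i in range(N):
        lo, hi = max(0, i - n // 2), min(N - 1, i + n // 2)
        denom = (k + alpha * np.sum(a[lo:hi + 1] ** 2, axis=0)) ** beta
        b[i] = a[i] / denom
    return b

activations = np.random.rand(8, 13, 13)
print(lrn(activations).shape)  # (8, 13, 13)
```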
3.4.1.4 Overlapping pooling

Overlapping pooling contributes to a higher classification accuracy because more of the original information is retained. For example, let the pooling window dimensions be z × z and the stride be s. If s = z, the traditional local pooling commonly used in CNN is obtained; if s < z, overlapping pooling is obtained. Models with overlapping pooling are slightly more resistant to overfitting during training (Li, M., 2019).
3.4.2 GoogleNet

GoogleNet is a deep neural network architecture that uses 12 times fewer parameters and is significantly more accurate than AlexNet (proposed by Krizhevsky et al.). It was prepared by Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan and Vincent Vanhoucke for the ILSVRC-2014 competition (Szegedy et al., 2015). One of the key features of GoogleNet is that it is a very deep network, with a depth of 22 layers when only the layers with parameters are counted and 27 layers when the pooling layers are also counted. Another key feature of GoogleNet is the introduction of a new local inception module in CNN; the inception module is basically designed to find the most suitable local structure. One of the most important advantages of GoogleNet is that it allows the number of units at each stage to be increased significantly without an uncontrolled growth in computational complexity. Thus, the CNN can be trained not only very deeply but also efficiently (Zhong et al., 2015). Using the GoogleNet architecture, the 60 million parameters of AlexNet have been reduced to 4 million. The network is designed with computational efficiency and practicality in mind; therefore, it can be run on individual devices, especially those with a small memory footprint and limited computing resources (Szegedy et al., 2015).
3.4.3 ResNet

One of the biggest problems in deep networks is the degradation problem: as the depth increases, adding layers to a model of appropriate depth results in a higher training error. This degradation in training accuracy shows that not all systems are similarly easy to optimize. With ResNet, a deep residual learning framework is introduced as a solution to the degradation problem. Figure 3.10 shows the classical CNN structure, and Figure 3.11 shows the residual CNN structure (He et al., 2016).
Figure 3.10: Classical CNN. [Input x passes through two weight layers with ReLU activations, producing H(x).]

Figure 3.11: Residual CNN. [Input x passes through two weight layers producing F(x), while an identity shortcut carries x around them; the block outputs ReLU(F(x) + x).]
In classical CNN, the section from the input to the output can be mapped by a nonlinear H(x) function. In the ResNet architecture, this section is mapped by another nonlinear function defined as F(x): = H(x) – x instead of H(x). Furthermore, by making a shortcut connection from the input to the output, the x (input) value is added to the F(x) function arithmetically. The function F(x) + x is then passed through ReLU. In other words, by adding input to the end of the second layer, it is aimed to transmit the values from the previous layers to the next layers more strongly. Experiments based on this hypothesis showed that the same function was achieved in both approaches, but the ease of training the network was different. In the network structure shown in Figure 3.11, the connection between input and output is called “shortcut connections” (He et al., 2016, Medium.com, 2019). The 152-layer network created is the deepest network that has been run on the ImageNet dataset to date (Medium.com, 2019).
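A minimal sketch (assuming TensorFlow/Keras; layer sizes are illustrative, not a specific published configuration) of the residual block in Figure 3.11:

```python
from tensorflow.keras import layers

def residual_block(x, filters):
    # Two weight layers compute F(x); the identity shortcut re-adds x before
    # the final ReLU, giving ReLU(F(x) + x). Assumes x already has `filters`
    # channels so that the shortcut matches F(x) in shape.
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.Add()([y, x])              # shortcut connection: F(x) + x
    return layers.Activation("relu")(y)
```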
3.5 Breast metastases to axillary lymph dataset

The dataset used in the analysis consists of 130 pathological images of 78 patients. Metastasis was detected in 36 images taken from 27 patients, while the remaining 94 images were metastasis free. Images were obtained at the Memorial Sloan
Kettering Cancer Center, and the class labels of each positive or negative image for breast carcinoma were obtained from the pathology report of the corresponding image (Clark et al., 2013, Campanella et al., 2019). Metastatic and nonmetastatic images are shown in Figures 3.12 and 3.13.
Figure 3.12: Metastatic pathology image.
Figure 3.13: Nonmetastatic pathology image.
3.6 Experimental results

3.6.1 Experimental results with CNN

In this study, the performance of CNN was measured on 130 pathological images. The images used in the study have two different class labels: 36 of the images contained cancerous cells with metastasis, while 94 contained healthy cells. Cross-validation was used in all models due to the imbalance between the labels and the small size of the dataset. The dataset was analyzed with 10-fold cross-validation on the CNN model. The CNN model architecture is given in Table 3.1.

Table 3.1: CNN model summary.

Layer                          Output shape       Param#
conv2d (Conv2D)                (None, …, …, …)    …
max_pooling2d (MaxPooling2D)   (None, …, …, …)    0
dropout (Dropout)              (None, …, …, …)    0
conv2d (Conv2D)                (None, …, …, …)    …
max_pooling2d (MaxPooling2D)   (None, …, …, …)    0
dropout (Dropout)              (None, …, …, …)    0
conv2d (Conv2D)                (None, …, …, …)    …
max_pooling2d (MaxPooling2D)   (None, …, …, …)    0
dropout (Dropout)              (None, …, …, …)    0
flatten (Flatten)              (None, …)          0
dense (Dense)                  (None, …)          …
dense (Dense)                  (None, …)          …
dense (Dense)                  (None, …)          …

Total parameters: ≈3 million
Trainable parameters: ≈3 million
Nontrainable parameters: 0
While the "ReLU" activation function was used in all layers other than the output layer, the "softmax" activation function was used in the output layer since the output labels are categorical. The developed CNN model contains approximately 3 million parameters in total. The model was run for 10 epochs, with images processed in batches of four. The "adam" optimization method, which adapts the learning rate used to minimize the error, was employed. After the model was configured, the dataset was analyzed by 10-fold cross-validation. The test accuracy and loss values obtained at each step of the cross-validation are given in Figure 3.14.
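Before turning to the results, a minimal sketch (assuming Keras and scikit-learn; layer sizes are illustrative, not the exact architecture of Table 3.1) of the training setup just described, with ReLU/softmax activations, the "adam" optimizer, 10 epochs, batch size 4 and 10-fold cross-validation:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from tensorflow.keras import layers, models

def build_cnn(input_shape, n_classes=2):
    # Conv/pool/dropout stacks followed by dense layers, ending in softmax
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def cross_validate(X, y, folds=10):
    # X: (n_images, h, w, channels) array; y: integer labels (0/1)
    accs = []
    y_cat = np.eye(2)[y]  # one-hot labels for the softmax output
    for train_idx, test_idx in StratifiedKFold(folds, shuffle=True).split(X, y):
        model = build_cnn(X.shape[1:])
        model.fit(X[train_idx], y_cat[train_idx],
                  epochs=10, batch_size=4, verbose=0)
        _, acc = model.evaluate(X[test_idx], y_cat[test_idx], verbose=0)
        accs.append(acc)
    return float(np.mean(accs))
```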
Figure 3.14: Test accuracy and loss values for the CNN model. [Two panels plot the CNN accuracy and CNN loss obtained at each of the 10 cross-validation folds.]
As a result of the analysis, a total success of 72.31% was obtained in 10-fold cross-validation.
3.6.2 Experimental results with AlexNet

The images used in the study have two different class labels: 36 of the images contained cancerous cells with metastasis, while 94 contained healthy cells. Cross-validation was used in all models due to the imbalance between the labels and the small size of the dataset. The dataset was analyzed with 10-fold cross-validation on the AlexNet model, whose architecture is given in Table 3.2. The developed AlexNet contains approximately 90 million parameters in total. The model was run for 50 epochs, with images processed in batches of four. The "adam" optimization method, which adapts the learning rate used to minimize the error, was employed. While the "softmax" function was used for the output layer, which is the last layer, the "ReLU" activation function was used in all other layers. After the model was configured, the dataset was analyzed by 10-fold cross-validation. The test accuracy and loss values obtained at each step of the cross-validation are given in Figure 3.15. As a result of the analysis, a total success of 66.92% was obtained in 10-fold cross-validation.
Table 3.2: AlexNet model.

Layer                                       Output shape       Param#
conv2d (Conv2D)                             (None, …, …, …)    …
max_pooling2d (MaxPooling2D)                (None, …, …, …)    0
batch_normalization (BatchNormalization)    (None, …, …, …)    …
conv2d (Conv2D)                             (None, …, …, …)    …
max_pooling2d (MaxPooling2D)                (None, …, …, …)    0
batch_normalization (BatchNormalization)    (None, …, …, …)    …
conv2d (Conv2D)                             (None, …, …, …)    …
batch_normalization (BatchNormalization)    (None, …, …, …)    …
conv2d (Conv2D)                             (None, …, …, …)    …
batch_normalization (BatchNormalization)    (None, …, …, …)    …
conv2d (Conv2D)                             (None, …, …, …)    …
max_pooling2d (MaxPooling2D)                (None, …, …, …)    0
batch_normalization (BatchNormalization)    (None, …, …, …)    …
flatten (Flatten)                           (None, …)          0
dense (Dense)                               (None, …)          …
dense (Dense)                               (None, …)          …
dense (Dense)                               (None, …)          …

Total parameters: ≈90 million
Trainable parameters: …
Nontrainable parameters: …
Figure 3.15: Test accuracy and loss values for the AlexNet model. [Two panels plot the AlexNet accuracy and AlexNet loss obtained at each of the 10 cross-validation folds.]
3.6.3 Experimental results with GoogleNet

The dataset was analyzed with 10-fold cross-validation on the GoogleNet model. The model architecture is given in Table 3.3.
Table 3.3: GoogleNet model architecture. [The table lists the complete layer stack in order: an input layer followed by initial Conv2D, MaxPooling2D and BatchNormalization layers; then a sequence of inception modules, each combining parallel Conv2D branches and a MaxPooling2D branch through a Concatenate layer; and finally a MaxPooling2D layer, Flatten, Dropout, a Dense layer and the "main" Dense output layer, together with their output shapes and parameter counts. Total parameters: ≈22 million.]
The developed GoogleNet contains approximately 22 million parameters in total. The model was run for 10 epochs, with images processed in batches of four. The "adam" optimization method, which adapts the learning rate used to minimize the error, was employed. While the "softmax" function was used for the output layer, the "ReLU" activation function was used in all other layers. After the model was configured, the dataset was analyzed by 10-fold cross-validation. The test accuracy and loss values obtained at each step of the cross-validation are given in Figure 3.16.
Figure 3.16: Test accuracy and loss values for the GoogleNet model. [Two panels plot the GoogleNet accuracy and GoogleNet loss obtained at each of the 10 cross-validation folds.]
As a result of the analysis, a total success of 73.85% was obtained in 10-fold cross-validation.
3.6.4 Experimental results with ResNet

The dataset was analyzed with 10-fold cross-validation on the ResNet model. The ResNet model architecture is given in Table 3.4.
Table 3.4: ResNet model. [The table lists the complete layer stack: an input layer; an initial Conv2D–BatchNormalization–Activation–MaxPooling2D block; a sequence of residual blocks, each combining Conv2D, BatchNormalization and Activation layers with an Add layer implementing the shortcut connection; and finally AveragePooling2D, Flatten and a Dense output layer, together with their output shapes and parameter counts. Total parameters: ≈11 million.]
The developed ResNet contains approximately 11 million parameters in total. The model was run for 10 epochs, with images processed in batches of four. The "adam" optimization method, which adapts the learning rate used to minimize the error, was employed. While the "softmax" function was used for the output layer, which is the last layer, the "ReLU" function was used in all other layers.
After the model was configured, the dataset was analyzed by 10-fold cross-validation. The test accuracy and loss values obtained at each step of the cross-validation are given in Figure 3.17.
Figure 3.17: Test accuracy and loss values for the ResNet model. [Two panels plot the ResNet accuracy and ResNet loss obtained at each of the 10 cross-validation folds.]
As a result of the analysis, a total success of 86.92% was obtained in 10-fold cross-validation.
3.6.5 Experimental results with capsule neural network

The dataset was analyzed with 10-fold cross-validation on the capsule neural network model. The capsule model architecture is given in Table 3.5.
Table 3.5: Summary of capsule neural network model.

Layer                           Output shape       Param#
input (InputLayer)              (None, …, …, …)    0
conv2d (Conv2D)                 (None, …, …, …)    …
primarycap_conv2d (Conv2D)      (None, …, …, …)    …
primarycap_reshape (Reshape)    (None, …, …)       0
primarycap_squash (Lambda)      (None, …, …)       0
digitcaps (CapsuleLayer)        (None, …, …)       …
input (InputLayer)              (None, …)          0
mask (Mask)                     (None, …)          0
capsnet (Length)                (None, …)          0
decoder (Sequential)            (None, …)          …

Total parameters: ≈8 million
Trainable parameters: …
Nontrainable parameters: 0
The developed capsule neural network contains approximately 8 million parameters in total. The model was run for 10 epochs, with images processed in batches of four. The "adam" optimization method, which adapts the learning rate used to minimize the error, was employed. While the "softmax" function was used for the output layer, which is the last layer, the "ReLU" function was used in all other layers. The test accuracy and loss values obtained at each step of the cross-validation are given in Figure 3.18.
Figure 3.18: Test accuracy and loss values for the capsule neural network model. [Two panels plot the capsule network accuracy and loss obtained at each of the 10 cross-validation folds.]
As a result of the analysis, a total success of 71.54% was obtained in 10-fold cross-validation.
3.6.6 General evaluation

In this study, pathological images of breast cancer were analyzed by deep learning methods. The capsule neural network, ResNet, AlexNet and GoogleNet models,
which were developed based on the CNN architecture, and the classical CNN, frequently used in the computer vision field with significant success, were tested on a dataset containing 130 images in total. According to the results obtained, the highest accuracy was achieved by ResNet with 86.92%. Furthermore, when the operating time is taken into consideration, the AlexNet model took the longest to train at approximately 70 min, while the classical CNN model took the shortest at approximately 4 min. When the loss values obtained on the test set are compared, the CNN model gives the best result with a loss of 0.56. The prominent results are shown in Table 3.6.

Table 3.6: Comparison of the models.

Model        Loss     Accuracy (%)    Time (s)
CNN          0.56     72.31           ≈240
CAPSULE      …        71.54           …
ResNet       …        86.92           …
AlexNet      …        66.92           ≈4,200
GoogleNet    …        73.85           …
References

ACS. (2019). Breast Cancer Facts & Figures 2019-2020, Atlanta, American Cancer Society. Albawi, S., Mohammed, T.A. and Al-Zawi, S. (2017). Understanding of a Convolutional Neural Network. In 2017 International Conference on Engineering and Technology, 1–6. Alom, M.Z., Yakopcic, C., Nasrin, M.S., Taha, T.M. and Asari, V.K. (2019). Breast Cancer Classification from Histopathological Images with Inception Recurrent Residual Convolutional Neural Network, Journal of Digital Imaging, 605–617. Alpaslan, N. (2019). Meme Kanseri Tanısı İçin Derin Öznitelik Tabanlı Karar Destek Sistemi, Selcuk Univ. J. Eng. Sci. Tech, 213–227. Arevalo, J., González, F.A., Ramos-Pollán, R., Oliveira, J.L. and Lopez, M.A. (2016). Representation learning for mammography mass lesion classification with convolutional neural networks, Computer Methods and Programs in Biomedicine, 248–257. Arı, A. and Hanbay, D. (2019). Tumor detection in MR images of regional convolutional neural networks, Journal of the Faculty of Engineering and Architecture of Gazi University, 1395–1408. Bayar, B. and Stamm, M.C. (2016). A deep learning approach to universal image manipulation detection using a new convolutional layer. Proceedings of the 4th ACM Workshop on Information Hiding and Multimedia Security, 5–10. Bejnordi, B.E., Veta, M., Diest, P.J., Ginneken, B.V., Karssemeijer, N., Litjens, G. . . . Consortium, C. (2017). Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer, Jama, 2199–2210.
Bostanci, H., Nasirov, M., Buyukkasap, C., Dikmen, K., Koksal, H. and Kerem, M. (2017). Breast Cancer Metastasis Mimicking Cholangiocarcinoma: A Case Report, Gazi Medical Journal, 218–219. Bray, F., Ferlay, J., Soerjomataram, I., Siegel, R.L., Torre, L.A. and Jemal, A. (2018). Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries, CA: A Cancer Journal for Clinicians, 394–424. Campanella, G., Hanna, M.G., Brogi, E. and Fuchs, T.J. (2019). Breast Metastases to Axillary Lymph Nodes [Data set], The Cancer Imaging Archive. Doi: https://doi.org/10.7937/ tcia.2019.3xbn2jcc. Campanella, G., Hanna, M.G., Geneslaw, L., Miraflor, A., Silva, V.W., Busam, K.J. and Fuchs, T.J. (2019). Clinical-grade computational pathology using weakly supervised deep learning on whole slide images, Nature Medicine, 1301–1309. Clark, K., Vendt, B., Smith, K., Freymann, J., Kirby, J., Koppel, P. . . . Tarbox, L. (2013). The Cancer Imaging Archive (TCIA): maintaining and operating a public information repository, Journal of Digital Imaging, 26(6). 1045–1057. Chen, R., Jing, Y. and Jackson, H. (2016). Identifying Metastases in Sentinel Lymph Nodes with Deep Convolutional Neural Networks. arXiv preprint arXiv:1608.01658, 1–5. Cheng, H., Lee, H. and Ma, S. (2018). CNN-Based Indoor Path Loss Modeling with Reconstruction of Input Images. In 2018 International Conference on Information and Communication Technology Convergence, 605–610. Chougrad, H., Zouaki, H. and Alheyane, O. (2018). Deep Convolutional Neural Networks for breast cancer screening, Computer Methods and Programs in Biomedicine, 19–30. Deperlioğlu, Ö. and Köse, U. (2018). Diabetes Determination Using Retraining Neural Network. In 2018 International Conference on Artificial Intelligence and Data Processing (IDAP), 1–5. Dhungel, N., Carneiro, G. and Bradley, A.P. (2017). A deep learning approach for the analysis of masses in mammograms with minimal user intervention, Medical Image Analysis, 114–128. Elmore, N., King, S., Exley, J., Rodriguez-Rincon, D., Larkin, J., Jones, M.M. and Manville, C. (2019). Findings from a systematic review to explore the patient and societal impacts of disease progression in women who were treated for early breast cancer : Implications for future research, policy and practice., Cambridge, F. Hoffmann-La Roche Ltd. Grover, P. and Singh, R.M. (2018). Automated Detection of Breast Cancer Metastases in Whole Slide Images. First International Conference on Secure Cyber Computing and Communication, 111–116. Gulati, A., Aujla, G.S. and Chaudhary, R. (2018). Deep Learning-based Content Centric Data Dissemination Scheme for Internet of Vehicles. In 2018 IEEE International Conference on Communications, 1–6. Hatipoğlu, N. and Bilgin, G. (2015). Segmentation of Histopathological Images with Convolutional Neural Networks using Fourier Features. In 2015 23nd Signal Processing and Communications Applications Conference, 455–458. He, K., Zhang, X., Ren, S. and Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770–778) Heartbeat.Fritz, https://heartbeat.fritz.ai, 2019. Hernández-Arteaga, A., Nava, J.D., Kolosovas-Machuca, E.S., Velázquez-Salazar, J.J., Vinogradova, E., José-Yacamán, M. and Navarro-Contreras, H.R. (2017). Diagnosis of breast cancer by analysis of sialic acid concentrations in human saliva by surface-enhanced Raman spectroscopy of silver nanoparticles, Nano Research, 3662–3670. 
Hinton, G.E., Krizhevsky, A. and Wang, S.D. (2011, June). Transforming auto-encoders. In International Conference on Artificial Neural Networks (pp. 44–51). Springer, Berlin, Heidelberg.
Hoogi, A., Wilcox, B., Gupta, Y. and Rubin, D.L. (2019). Self-Attention Capsule Networks for Image Classification. arXiv preprint arXiv:1904.12483. Hossain, A. and Sajib, S.A. (2019). Classification of Image using Convolutional Neural Network, Global Journal of Computer Science and Technology, 12–18. Huang, J., Zhou, W., Li, H. and Li, W. (2015). Sign Language Recognition using 3D convolutional neural networks. In 2015 IEEE international conference on multimedia and expo (ICME), 1–6. Kapadia, M.R. and Paunwala, C.N. (2018). Improved CBIR system using Multilayer CNN. Proceedings of the International Conference on Inventive Research in Computing Applications, 840–845. Kızılbey, K. and Akdeste, Z.M. (2013). Melanoma Cancer, Journal of Engineering and Natural Sciences, 555–569. Krizhevsky, A., Sutskever, I. and Hinton, G.E. (2012). Imagenet classification with deep convolutional neural networks, Advances in Neural Information Processing Systems, 1097–1105. Learn Opencv, https://www.learnopencv.com, 2019. Li, M. (2019). DC-Al GAN: Pseudoprogression and True Tumor Progression of Glioblastoma multiform Image Classification Based on DCGAN and AlexNet. arXiv preprint arXiv:1902.06085. Li, S., Jiang, H. and Pang, W. (2017). Joint multiple fully connected convolutional neural network with extreme learning machine for hepatocellular carcinoma nuclei grading, Computers in Biology and Medicine, 156–167. Lian, Z., Powell, A., Ersoy, I., Poostchi, M., Silamut, K., Palaniappan, K. . . . Thoma, G. (2016). CNN-based image analysis for malaria diagnosis. 2016 IEEE International Conference on Bioinformatics and Biomedicine, 493–496. Lin, M. and Chen, Q. (2013). Network In Network. arXiv preprint arXiv:1312.4400. Liu, Y., Gadepalli, K., Norouzi, M., Dahl, G.E., Kohlberger, T., Boyko, A. . . . Stumpe, M.C. (2017). Detecting Cancer Metastases on Gigapixel Pathology Images. arXiv:1703.02442, 1–13. Liu, Y., Kohlberger, T., Norouzi, M., Dahl, G.E., Smith, J.L., Mohtashamian, A. . . . Stumpe, M.C. (2018). Artificial Intelligence–Based Breast Cancer Nodal Metastasis Detection: Insights Into the Black Box for Pathologists, Archives of Pathology & Laboratory Medicine, 859–868. Medium.com, https://medium.com, 2019. Gowda, M.N. and Rasheed, A.I. (2017). Hardware Implementation of Hybrid Classifier to Detect Cancer Cells. In 2017 14th IEEE India Council International Conference, 1–5. Mohapatra, P., Panda, B. and Swain, S. (2019). Enhancing Histopathological Breast Cancer Image Classification using Deep Learning, International Journal of Innovative Technology and Exploring Engineering, 2024–2032. Mohebian, M.R., Marateb, H.R., Mansourian, M., Mañanas, M.A. and Mokarian, F. (2017). A Hybrid Computer-aided-diagnosis System for Prediction of Breast Cancer Recurrence (HPBCR) Using Optimized Ensemble Learning, Computational and Structural Biotechnology Journal, 75–85. Öztürk, Ş. and Akdemir, B. (2019). A convolutional neural network model for semantic segmentation of mitotic events in microscopy images, Neural Computing and Applications, 3719–3728. Paoletti, M.E., Haut, J.M., Fernandez-Beltran, R., Plaza, J., Plaza, A., Li, J. and Pla, F. (2018). Capsule networks for hyperspectral image classification, IEEE Transactions on Geoscience and Remote Sensing, 57(4). 2145–2160. Parlar, S., Kaydul, N. and Ovayolu, N. (2005).
Meme Kanseri ve Kendi Kendine Meme Muayenesinin Önemi, Atatürk Üniv. Hemşirelik Yüksekokulu Dergisi, 72–83. Patrick, M.K., Adekoya, A.F., Mighty, A.A. and Edward, B.Y. (2019). Capsule Networks–A survey, Journal of King Saud University-Computer and Information Sciences.
Prisilla, J. and Iyyanki, V.M. (2018). Convolution Neural Networks: A Case Study on Brain Tumor Segmentation in Medical Care. In International Conference on ISMAC in Computational Vision and Bio-Engineering, 1017–1027. Reza Rasti, M.T. (2017). Breast cancer diagnosis in DCE-MRI using mixture ensemble of convolutional neural networks, Pattern Recognition, 381–390. Sabour, S., Frosst, N. and Hinton, G.E. (2017). Dynamic routing between capsules, Advances in Neural Information Processing Systems, 3856–3866. Shah, M. and Kapdi, R. (2017). Object Detection Using Deep Neural Networks. International Conference on Intelligent Computing and Control Systems, 787–790. Shan, K., Guo, J., You, W., Lu, D. and Bie, R. (2017). Automatic Facial Expression Recognition Based on a Deep Convolutional-Neural-Network Structure. In 2017 IEEE 15th International Conference on Software Engineering Research, Management and Applications (SERA), 123–128. Singla, N., Dubey, K. and Srivastava, V. (2019). Automated assessment of breast cancer margin in optical coherence tomography images via pretrained convolutional neural network, Biophotonics, 1–8. Sun, W., Tseng, T.-L., Zhang, J. and Qian, W. (2017). Enhancing deep convolutional neural network scheme for breast cancer diagnosis with unlabeled data, Computerized Medical Imaging and Graphics, 4–9. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V. and Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1–9). Towards.com, https://towardsdatascience.com, 2019. Velandia, N.S., Beleno, R.D. and Moreno, R.J. (2010). Applications of Deep Neural Networks, International Journal of Signal System Control and Engineering Application, 61–76. Wahaba, N., Khana, A. and Leeb, Y.S. (2017). Two-phase deep convolutional neural network for reducing class skewness in histopathological images based breast cancer detection, Computers in Biology and Medicine, 86–97. Wang, D., Khosla, A., Gargeya, R., Irshad, H. and Beck, A.H. (2016). Deep Learning for Identifying Metastatic Breast Cancer. arXiv preprint arXiv:1606.05718, 1–6. Wang, S.-H., Tang, C., Sun, J., Yang, J. and Huang, C. (2018). Multiple Sclerosis Identification by 14-Layer Convolutional Neural Network With Batch Normalization, Dropout, and Stochastic Pooling, Front. Neurosci, 1–11. Wollmann, T. and Rohr, K. (2017). Automatic breast cancer grading in lymph nodes using a deep neural network. arXiv:1707.07565, 1–4. Wu, H., Huang, Q., Wang, D. and Gao, L. (2018). A CNN-SVM combined model for pattern recognition of knee motion using mechanomyography signals, Journal of Electromyography and Kinesiology, 136–142. Xu, Q., Wang, X. and Jiang, H. (2019). Convolutional neural network for breast cancer diagnosis using diffuse optical tomography, Visual Computing for Industry, Biomedicine, and Art, 1–6. Ye, J., Shen, Z., Behrani, P., Ding, F. and Shi, Y.-Q. (2018). Detecting USM image sharpening by using CNN, Signal Processing: Image Communication, 258–264. Yıldız, F. (2019). 1940, NM Fiber Lazer Kaynağının Karaciğer Dokusundaki Isıl Hasarının Yapay Sinir Ağları İle Tahmini, Uludağ University Journal of the Faculty of Engineering, 583–594. Zhang, H., Liu, D. and Xiong, Z. (2017). CNN-Based Text Image Super-Resolution Tailored for OCR. In 2017 IEEE Visual Communications and Image Processing, 1–4. Zhong, Z., Jin, L. and Xie, Z. (2015, August). 
High performance offline handwritten Chinese character recognition using GoogleNet and directional feature maps. In 2015 13th International Conference on Document Analysis and Recognition (ICDAR) (pp. 846–850). IEEE.
Sonia Singla, G. Veeramalai, S. Aswath, Vinod Kumar Pal, Vikas Namjoshi
4 Machine learning: an ultimate solution for diagnosis and treatment of cancer

Abstract: Cancer is a disease caused by uncontrolled cell growth and division. African American men have the highest likelihood of dying of cancer (approximately 239.9 deaths per 100,000 individuals), compared with about 88.3 per 100,000 among Asian women. Currently, about 15,270 children aged between 0 and 19 are expected to be diagnosed with cancer. It is therefore important to ask: can machine learning (ML) predict future cancers? ML, a branch of artificial intelligence that learns from available data, can recognize solutions to problems and make decisions with negligible human intervention. In the past, ML solutions have produced many successful results when applied to different medical diseases. In this chapter, we show the performance results of different ML solutions for various types of cancer.

Keywords: cancer, artificial intelligence, pathology
4.1 Introduction

Cancer is one of the most serious medical issues in the world. It occurs in various forms depending on the cell of origin, the location and familial alterations. This disease has spread across the world because of changes in personal habits, for example, increased use of tobacco, deterioration of dietary habits and lack of exercise. Recovering from this ailment has become easier in comparison with earlier days because of advances in medicine. Fundamentally, the level of harm determines the kind of cancer treatment to be pursued. Detection of cancer frequently involves radiological imaging, which is used to check the spread of malignancy and the progress of treatment; it is additionally used to screen for disease. Oncological imaging is continually becoming more varied and precise. Different imaging techniques aim to determine the most suitable treatment option for every patient, and imaging procedures are regularly used in combination to acquire adequate information.

Sonia Singla, University of Leicester, UK
G. Veeramalai, Department of Mathematics, M. Kumarasamy College of Engineering, Karur, India
S. Aswath, Department of Computer Science, PES University, Bangalore, India
Vinod Kumar Pal, Balaji Institute of International Business, Pune, India
Vikas Namjoshi, Balaji Institute of Modern Management, Pune, India
https://doi.org/10.1515/9783110668322-004
Detection of malignancy has always been a significant issue for pathologists and medical experts in diagnosis and treatment planning. Recognizing cancer from microscopic biopsy images is subjective in nature and may vary from expert to expert depending on their experience and other factors, including the lack of specific and precise quantitative measures for classifying a biopsy image as normal or cancerous. Cancer forms when our body system stops working normally: old cells, instead of being removed, start growing abnormally and form tumors. In other words, cancer is simply an abnormal growth of cells. Some tumors are not harmful, but some can be dangerous; for example, a carcinogenic tumor is very harmful, as it can spread to other body parts. Several treatments are available for cancers, such as surgery, chemotherapy and radiation therapy, which enable recovery at an early stage. There are basically four main types of cancer, namely:

Carcinomas: This type of cancer affects the cells that line the internal organs, for example, the liver and kidney, and can prove fatal if it remains untreated. It occurs mostly from ultraviolet rays from the Sun that damage the DNA of the cells, which then start growing abnormally.

Sarcomas: This is the cancer mostly found in bones, muscles and other body parts such as fat, nerves and blood vessels.

Leukemia: This is a malignant growth of the blood. It starts when healthy blood cells change and grow uncontrollably. The four fundamental kinds are acute lymphocytic leukemia, chronic lymphocytic leukemia, acute myeloid leukemia and chronic myeloid leukemia.

Lymphomas: This is a blood cancer that affects lymphocytes, the cells of our immune system, and can be fatal.

Because of the advantages of artificial intelligence (AI) and machine learning (ML), many researchers from biomedical, bioinformatics and other fields are trying to utilize these to cure cancer, with models like artificial neural networks (ANNs), Bayesian networks (BNs), support vector machines (SVMs) and decision trees (DTs). Although it is obvious that the use of ML systems can improve our understanding of disease progression, an appropriate level of validation is required for these techniques to be considered in routine clinical practice (Kourou et al., 2015). Every year, pathologists diagnose 14 million new cancer patients around the world, that is, a huge number of individuals who will face long periods of uncertainty. Pathologists have been performing cancer diagnosis and prognosis for a long time, and most pathologists have a 96%–98% success rate for diagnosing cancer. The graphs shown in Figures 4.1 and 4.2 were developed by data visualization of the dataset in R (A Comprehensive Guide to Data Visualisation in R for Beginners, 2020).
Figure 4.1: Showing the abnormal growth of cells.

Figure 4.2: Cancer in women in 2018 worldwide (Global cancer data by country | World Cancer Research Fund, 2020). [Plot of age-standardized rates per 100,000 (roughly 250–350) against a country index.]
Figure 4.3: Cancer in both sexes worldwide in 2018 (Global cancer data by country | World Cancer Research Fund, 2020). [Bar chart of age-standardized rates per 100,000 (roughly 250–350) for countries including Argentina, Belarus, Bulgaria, Cyprus, Estonia, Germany, Iceland, Italy, Latvia, Luxembourg, New Zealand, Puerto Rico, Slovakia, Spain, the UK and the US.]
4.2 Cancer worldwide

Research shows that cancer is a leading cause of death worldwide. Around 14.1 million new cases were seen in the year 2012, with about 8.2 million deaths in the world, and the number of new cases is expected to reach 23.6 million by 2030. In India (the second largest population in the world), cancer is the second main cause of death, with 0.3 million deaths every year due to many reasons (Leukemia: Definition, Risk Factors, Causes, Symptoms, and Treatment, 2020). According to the latest research by Oslo University, the accuracy of prognoses is 60% for pathologists. A prognosis is the part of a biopsy report that comes after malignancy has been diagnosed; it predicts the development of the disease.
Figure 4.4: Worldwide deaths in 2018 due to various cancers. [Bar chart titled "Number of cancer deaths worldwide in 2018", covering oesophagus, breast, liver, stomach, colorectum and lung cancers; lung cancer has the highest count at 1,761,007 deaths.]
As shown in Figure 4.4, the number of deaths due to lung cancer is the highest according to the data collected by the World Health Organization. In India, the most common new cases in 2018 were breast cancer (14%), followed by lip and oral cavity (10.4%), cervix uteri (8.4%), lung (5.9%), stomach (5%) and other cancers, whereas lung cancer ranks first worldwide. However, at ages 0–14 years, the number of deaths was highest for leukemia, as shown in Figure 4.5.
Figure 4.5: The number of death cases due to leukemia in 2018 is highest at ages 0–14 years. [Bar chart of the share of cancer deaths at ages 0–14 by type: leukemia, brain/CNS, kidney, liver and other.]
4.3 Causes of cancer

Cancer is a disease brought about by genetic changes leading to uncontrolled cell growth and tumor formation. The primary cause of sporadic cancers is DNA damage and genomic alterations; a minority of cancers are due to inherited genetic mutations. Most cancers are related to environmental, lifestyle or behavioral exposures. Cancer is generally not contagious in humans; however, it can be caused by oncoviruses and cancer-causing bacteria. The term "environmental," as used by cancer researchers, refers to everything outside the body that interacts with humans; it is not limited to the biophysical environment but also includes lifestyle and behavioral factors.
4.4 Types of cancer

Leukemia is a malignant growth of the blood or bone marrow. Bone marrow produces blood cells, and leukemia can develop because of a problem with blood cell production. As a rule, it affects the leukocytes, or white blood cells (WBCs). WBCs are a vital part of your immune system: they shield your body from invasion by bacteria, viruses and fungi, as well as from abnormal cells and other foreign substances. In leukemia, the WBCs do not function like typical WBCs; they can also divide too rapidly and eventually crowd out normal cells. WBCs are generally produced in the bone marrow; however, specific types of WBCs are also made in the lymph nodes, spleen and thymus gland. Once formed, WBCs circulate throughout your body in your blood and lymph (the fluid that flows through the lymphatic system), accumulating in the lymph nodes and spleen. Several studies propose that AI algorithms can extract prognostically relevant information from tumor histology, supplementing the currently used prognostic factors in breast cancer disease. There are two parts to tackling malignancy: prediction/prognosis and detection/diagnosis. In prediction/prognosis, there are three focuses:
– prediction of cancer susceptibility (e.g., risk assessment)
– prediction of cancer recurrence
– prediction of cancer survivability
Acute lymphoblastic leukemia (ALL) is the most widely recognized malignancy among children. Current risk-adapted treatments and supportive care have increased the survival rate to over 90% in developed countries. Be that as it

Figure 4.6: Showing gene expression of the leukemia dataset (Jagadev and Virani, 2017). [Heatmap of gene expression values across samples.]
may, around 20% of children who relapse have a poor prognosis, making ALL the leading cause of cancer mortality in pediatric disorders. A noteworthy challenge in childhood ALL management is to classify patients into appropriate risk groups for better management. Stratifying chemotherapeutic treatment through the early recognition of relevant outcomes is critically important to mitigate poor disease courses in these patients (Leukemia AML ALL, 2020). Now, let us focus on the genes with a high absolute correlation with the classes.
Figure 4.7: PCA using FactoMineR separates AML and ALL (PC 1, 14.99%; PC 2, 11.98%).
Figure 4.8: Hierarchical clustering (Jagadev and Virani, 2017).
Principal component analysis (PCA) is run using the FactoMineR package. The plot shows that acute myeloid leukemia (AML) and ALL are, for the most part, easy to separate (Jagadev and Virani, 2017). The dataset originates from a proof-of-concept study published in 1999 by Golub et al. It demonstrated how new cases of cancer could be classified by gene expression monitoring (via DNA microarray), thereby providing a general approach to identifying new cancer classes and assigning tumors to known classes. These data were used to classify patients with AML and ALL.
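Since the chapter shows this step only as a FactoMineR plot, here is a minimal sketch of the same projection in Python with scikit-learn; the file name golub.csv and its layout are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal PCA sketch, assuming a hypothetical file "golub.csv" whose rows
# are patient samples, whose columns are gene-expression values and whose
# last column "label" is "AML" or "ALL".
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

data = pd.read_csv("golub.csv")             # hypothetical path
X = data.drop(columns=["label"]).values     # gene-expression matrix
y = data["label"].values                    # AML / ALL class labels

# Standardize genes so highly expressed genes do not dominate the PCs.
X_scaled = StandardScaler().fit_transform(X)

# Project the samples onto the first two principal components.
pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)

# Variance explained by PC1 and PC2 (the chapter's plot reports roughly
# 14.99% and 11.98%).
print(pca.explained_variance_ratio_)

# Samples of the two leukemia types should form largely separable clouds.
for label in ("AML", "ALL"):
    print(label, scores[y == label].mean(axis=0))
```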
4.4.1 Detection of leukemia and its types using image processing and AI
About 220 blood-smear images of patients with and without leukemia were taken for the study. The image segmentation algorithms used were k-means clustering, marker-controlled watershed and HSV color-based segmentation. The morphological features of normal and leukemic lymphocytes differ considerably; therefore, various features were extracted from the segmented lymphocyte images. Leukemia is further classified into its types and subtypes using the SVM classifier, an ML classifier, to detect leukemia and decide whether it is AML, chronic myelogenous leukemia (CML) or ALL; this classification step takes the research one step further in the field (Negm, Hassan and Kandil, 2018). The recognition of acute leukemia blast cells in colored microscopic images is a difficult task. The first major step in the automatic recognition of this disease, image segmentation, is considered the most critical one. Another study presents a decision support system that incorporates panel selection and segmentation using k-means clustering to identify the leukemia cells, along with feature extraction and image refinement. After the decision support system successfully identifies the cells and their internal structure, the cells are classified by their morphological features. The decision support system was tested using an open dataset designed to benchmark segmentation systems for identifying specific cells, and the results of this study were compared with those of other methods, proposed by other researchers, applied to the same data. The algorithm was then applied to another dataset, extracted under the supervision of an expert pathologist from a local hospital; the total dataset comprised 757 images gathered from two datasets. The images of the datasets are labeled with three different labels, which represent three types of leukemia cells: blast, myelocyte and segmented cells. The labeling of these images was verified by the expert pathologist. The algorithm testing using
Figure 4.9: Top 10 genes expressed in AML and ALL patients (Negm, Hassan and Kandil, 2018).
Figure 4.10: Random forest prediction error versus number of trees (Negm, Hassan and Kandil, 2018).
this dataset showed an overall accuracy of 99.517%, a sensitivity of 99.348% and a specificity of 99.529% (http://aircconline.com/mlaij/V2N4/2415mlaij01.pdf, 2020). The results show that the set contains three distinct types of cells: blast, myelocyte and segmented cells, which had previously been labeled by an expert and extracted from 624 AML images; the algorithm successfully identified 802 cells with a sensitivity and specificity of 100% and 99.747%, respectively, and an accuracy of 99.76%. The classifiers in Weka are organized into various groups, for example, Bayes, functions, lazy, rules and tree-based classifiers. A good mix of algorithms has been chosen from these groups, including naïve Bayes (from Bayes), k-nearest neighbor (k-NN), SVM, random forest, bagging and AdaBoost (https://www.ijser.org/researchpaper/Detection-of-Breast-Cancer-using-Data-Mining-Tool-WEKA.pdf, 2020).
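The segmentation pipeline above is described only in prose; the following is a minimal sketch of its k-means color-clustering step in Python. The image path smear.png, the choice of three clusters and the darkest-cluster heuristic for nuclei are illustrative assumptions, and the marker-controlled watershed refinement is omitted.

```python
# Minimal k-means color segmentation sketch for a blood-smear image.
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

img = Image.open("smear.png").convert("HSV")      # hypothetical path
pixels = np.asarray(img, dtype=float).reshape(-1, 3)

# Cluster pixels into k=3 groups, roughly: background, cytoplasm, nuclei.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
labels = kmeans.labels_.reshape(np.asarray(img).shape[:2])

# Heuristic: the cluster with the lowest mean V (brightness) is taken as
# the darkly stained nuclei, from which morphological features could then
# be extracted for the SVM classifier.
darkest = np.argmin(kmeans.cluster_centers_[:, 2])
nuclei_mask = labels == darkest
print("nuclei pixels:", int(nuclei_mask.sum()))
```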
4.4.2 Breast cancer
Breast cancer has become a primary cause of death in women in developed countries and is the second most common cause of death in women worldwide. The incidence of breast cancer in women has increased substantially during the last few decades. In this chapter, we discuss various data mining approaches that have been used for the early detection of breast cancer. The best way to reduce breast cancer deaths is to detect the disease earlier. A good amount of research on breast cancer datasets is found in the literature, and many of these studies show good classification accuracy (Breast Cancer Prediction using Machine Learning | Kaggle, 2020).
=== Classifier model (full training set) ===
Random forest
Time taken to test model on training data: 0.08 s
=== Summary ===
Correlation coefficient          0.9902
Mean absolute error              0.002
Root-mean-square error           0.003
Relative absolute error          14.8676%
Root relative squared error      16.4741%
Total number of instances        569

Linear regression
=== Summary ===
Correlation coefficient          0.9733
Mean absolute error              0.0029
Root-mean-square error           0.0041
Relative absolute error          21.4463%
Root relative squared error      22.9741%
Total number of instances        569
Figure 4.11: Correlation of breast cancer attributes (GitHub – gscdit/Breast-cancer-detection: Breast cancer detection using machine learning, 2020).
A mammogram is an X-ray image of the breast. It is used to screen for breast cancer in women who have no signs or symptoms of the disease, and it can also be used when there is a lump or other indication of breast cancer. Screening mammography checks for cancer before symptoms appear and can help reduce the number of deaths from breast cancer among women aged 40–70. However, it can also have drawbacks: mammograms can sometimes flag something that looks abnormal but is not cancer, which prompts further testing and can cause anxiety; sometimes they can miss cancer that is present; and they expose the patient to radiation. Patients should discuss the advantages and drawbacks of mammograms with their primary care physician (https://www.ijitee.org/wp-content/uploads/papers/v8i6/F3384048619.pdf, 2020). A variety of these techniques, including ANNs, BNs, SVMs and DTs, have been widely applied in cancer research for the development of predictive models, resulting in effective and accurate decision making. Breast cancer is the second most dangerous cancer after lung cancer. In 2018, according to statistics from the World Cancer Research Fund, it is estimated that more than 2 million new cases were recorded, out of which 626,679 deaths were approximated. Of all the cancers, breast cancer comprises 11.6% of new cancer cases and 24.2% of cancers among women. In case of any sign or symptom, people typically visit a doctor promptly, who may refer them to an oncologist if required. The oncologist can diagnose breast cancer by taking a thorough medical history, performing a physical examination of both breasts and also checking for swelling or hardening of any lymph nodes in the armpit (Lg and At, 2013). SVM is an emerging, powerful ML technique for classifying cases. SVM has been used on a range of problems and has already been successful in recognition tasks in bioinformatics and cancer diagnosis (Torre, Siegel, Ward and Jemal, 2016). Several trials have demonstrated that SVM performs best for predictive analysis, with an accuracy of 92.7%. We infer from several studies that SVM is the appropriate algorithm for prediction, and in general k-NN is outperformed by SVM (Lg and At, 2013).
4.4.3 Lung cancer
According to research studies, lung cancer death rates among women have increased in countries like the USA, UK and Australia, where women took up smoking at earlier ages. Lung cancer can also be caused by certain occupational exposures, as well as by air pollution, both indoor (from cooking and heating using coal or combustible materials) and outdoor. Exposure to indoor air pollution is thought to account for unexpectedly high rates of lung cancer among certain populations with a low smoking prevalence (Coudray et al., 2018). Adenocarcinoma (LUAD) and squamous cell carcinoma (LUSC) are two major types of lung tumors that need to be visually examined by pathologists, and it has been found that deep learning (DL) models can help pathologists accomplish tasks that were otherwise considered impossible (What Is Deep Learning? | How It Works, Techniques & Applications – MATLAB & Simulink, 2020). Models are trained using a large set of labeled data and neural network architectures that contain many layers. DL is an important branch of AI in which computers take in new information and produce an output without human intervention; facial recognition software and self-driving vehicles are a few examples (Deep Learning for Cancer Diagnosis: A Bright Future, 2020). In other words, DL is a member of the larger machine learning (ML) and artificial intelligence (AI) family. It has been applied in many fields such as computer vision, speech recognition, natural language processing, object detection and audio recognition. DL architectures, including deep neural networks (DNNs) and recurrent neural networks (RNNs), have been continuously improving the state of the art in drug discovery and disease diagnosis. DL can achieve high accuracy for the diagnosis of various kinds of tumors, for example, breast, colon, cervical and lung cancer. It constructs an effective algorithm based on multiple processing layers of neurons (see Figure 4.1). Nonetheless, the output (e.g., accuracy) of any deep learning model depends on various factors including, but not limited to, data type (numeric, text, image, audio and video), data size, architecture and data ETL (extract, transform, load, etc.). The human genome is a complex sequence of nucleic acids. It is encoded as DNA within 23 pairs of chromosomes. It is well known that the expression of genes changes according to the situation, and such changes direct many biological functions. Interestingly, certain genes change only under specific pathological conditions (like cancer) or with treatment. These genes are called biomarker(s) for a tumor. Recently, a group of researchers from Oregon State University used a deep learning approach to identify certain critical genes for the diagnosis of breast cancer. Another group of researchers from China applied a deep
learning model for high-level feature extraction between combinatorial SMPs (somatic point mutations) and cancer types (Xu et al., 2019). In another investigation, it was shown that DL can integrate imaging scans at multiple time points to improve clinical outcome predictions. Computer intelligence-based noninvasive radiomic biomarkers can have a significant impact in the clinic, given their low cost and minimal requirements for human input (Levine et al., 2019). DL refers to a set of computer models that have recently been used to make unprecedented progress in the way computers extract information from images. These algorithms have been applied to tasks in numerous medical specialties, most extensively radiology and pathology, and in some cases have achieved performance comparable to that of human experts. Moreover, it is conceivable that deep learning could be used to extract knowledge from medical images that would not be apparent from human examination and could be used to inform on molecular status, prognosis or treatment sensitivity. In this review, we outline the current developments and state of the art in applying deep learning for cancer diagnosis and discuss the challenges in adapting the technology for widespread clinical deployment (Example of Logistic Regression in Python – Data to Fish, 2020). The success of ML lies in the right set of features, and feature engineering plays a crucial role. If we handcraft the right set of features to predict a particular outcome, the ML algorithms can perform well, but finding and building the right set of features is not an easy task. With DL, we do not have to handcraft such features: since deep ANNs use several layers, they learn the complex intrinsic features and a multilevel abstract representation of the data by themselves. Let us explore this with an analogy. Suppose we want to perform an image classification task: learning to recognize whether an image contains a dog. With classical ML, we have to handcraft features that let the model decide whether the image contains a dog. We send these handcrafted features as inputs to ML algorithms, which then learn a mapping between the features and the label (dog). However, extracting features from an image is a tedious task. With deep learning, we just need to feed a large number of images to the deep neural network, and it will automatically act as a feature extractor by learning the right set of features. As we have learned, an ANN uses multiple layers; in the first layer, it will learn the basic features of the image that characterize the dog, say, the body structure of the dog, and in the succeeding layers, it will learn more complex features. Once it learns the right set of features, it will look for the presence of those features in the image. If those features are present, then it says that the given image contains a dog. In
this manner, unlike classical ML, with DL we do not need to manually design the features; rather, the network will itself learn the right set of features required for the task.
Figure 4.12: Image of lung cancer.
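To make the feature-learning idea above concrete, here is a minimal sketch of a small convolutional network in Python with TensorFlow/Keras; it is not the chapter's model, and the input patch size, grayscale channel and binary malignant/benign output are assumptions for illustration.

```python
# Minimal CNN sketch: early layers learn simple features (edges, blobs),
# later layers learn composite shapes, so no handcrafted features are needed.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),           # grayscale patch (assumed size)
    layers.Conv2D(16, 3, activation="relu"),     # early layer: simple features
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),     # later layer: composite features
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),       # probability the patch is malignant
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# Training would then be: model.fit(train_images, train_labels, epochs=...)
```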
There are various types of classification algorithms in ML:
1. Logistic regression
2. k-NN
3. SVM
4. Kernel SVM
5. Naïve Bayes
6. DT algorithm
7. Random forest classification
We used the breast cancer dataset to calculate accuracy using these algorithms. SVM is a simple yet powerful supervised ML algorithm that can be used for building both regression and classification models. The SVM algorithm can perform really well with both linearly separable and nonlinearly separable datasets. Even with a limited amount of data, the support vector machine algorithm does not fail to show its magic.
Accuracy – 0.90
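A minimal sketch of such an SVM run in Python with scikit-learn follows; the chapter does not specify its exact dataset variant, split or preprocessing, so the Wisconsin breast cancer data bundled with scikit-learn is assumed, and the resulting accuracy will only be in the ballpark of the reported 0.90.

```python
# Minimal SVM sketch on the (assumed) Wisconsin breast cancer dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)    # 569 samples, 30 features
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", gamma="scale")        # nonlinear (RBF kernel) SVM
clf.fit(X_train, y_train)
print("SVM accuracy:", clf.score(X_test, y_test))
```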
Figure 4.13: Confusion matrices of actual versus predicted benign (B) and malignant (M) classes for logistic regression (What is Logistic Regression? – Statistics Solutions, 2020).
Logistic regression
Logistic regression is an ML algorithm used for classification, mainly binary classification (two classes), although it can also be extended to multiple classes. It is normally used to model the relationship between a dependent variable (target) and one or more independent variables.
Accuracy – 0.62
k-Nearest neighbor
The k-NN algorithm is used for both classification and regression problems. It works well with small datasets and is easy to use; however, with large datasets it may not perform well enough. Finally, k-NN is powerful because it assumes nothing about the data, other than that a distance measure can be computed consistently between any two examples. As such, it is called nonparametric or nonlinear, as it does not assume a functional form (Implementing SVM and Kernel SVM with Python's Scikit-Learn, 2020).
Accuracy – 0.8601398601398601
DT algorithm
Advantages: DTs are easy to explain, they result in a set of rules, and they follow an approach similar to the one humans generally follow when making decisions.
Accuracy – 85.96491228070175%
Random forest
Random forest is an ensemble classification algorithm and was found to be the best model for prediction.
Accuracy – 0.9298245614035088
Naïve Bayes
Accuracy – 0.89 (Naive Bayes Algorithm in Python – CodeSpeedy, 2020)
Of all the algorithms, SVM and random forest show accuracies of 90% or above.
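For completeness, the remaining accuracies above could be reproduced along the following lines; the dataset, split and hyperparameters are assumptions, so the numbers will only approximate those reported.

```python
# Minimal comparison sketch for the remaining classifiers, on the same
# assumed breast cancer dataset and split as in the SVM sketch above.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=5000),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "naive Bayes": GaussianNB(),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: accuracy = {model.score(X_test, y_test):.4f}")
```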
4.5 Conclusion
Cancer, as is well known, is due to the abnormal growth of cells and can spread to different parts of the body, making it fatal. Pathologists often take more than 10 days for a biopsy, whereas a machine takes a few seconds to complete thousands of such tasks and gives accurate results; it would therefore not be surprising if, in the future, ML takes over part of the pathologist's role. SVM and random forest were found to be the best algorithms, showing accuracy rates of about 90% or more compared with the other algorithms. Further improvement of classification can give machine learning a major role in various diseases such as cancer.
References
Anon, 2020. Breast Cancer Prediction using Machine Learning | Kaggle. [online] Available at: [Accessed 21 May 2020].
Anon, 2020. Deep Learning for Cancer Diagnosis: A Bright Future. [online] Available at: [Accessed 21 May 2020].
Anon, 2020. Example of Logistic Regression in Python – Data to Fish. [online] Available at: [Accessed 21 May 2020].
Anon, 2020. GitHub – gscdit/Breast-Cancer-Detection: Breast Cancer Detection Using Machine Learning. [online] Available at: [Accessed 21 May 2020].
Anon, 2020. Global cancer data by country | World Cancer Research Fund. [online] Available at: [Accessed 20 May 2020].
Anon, 2020. http://aircconline.com/mlaij/V2N4/2415mlaij01.pdf. [online] Available at: [Accessed 21 May 2020].
Anon, 2020. https://www.ijitee.org/wp-content/uploads/papers/v8i6/F3384048619.pdf. [online] Available at: [Accessed 21 May 2020].
Anon, 2020. https://www.ijser.org/researchpaper/Detection-of-Breast-Cancer-using-Data-Mining-Tool-WEKA.pdf. [online] Available at: [Accessed 21 May 2020].
Anon, 2020. Implementing SVM and Kernel SVM with Python's Scikit-Learn. [online] Available at: [Accessed 21 May 2020].
Anon, 2020. Leukemia AML ALL. [online] Available at: [Accessed 21 May 2020].
Anon, 2020. Leukemia: Definition, Risk Factors, Causes, Symptoms, and Treatment. [online] Available at: [Accessed 20 May 2020].
Anon, 2020. Naive Bayes Algorithm in Python – CodeSpeedy. [online] Available at: [Accessed 21 May 2020].
Anon, 2020. What Is Deep Learning? | How It Works, Techniques & Applications – MATLAB & Simulink. [online] Available at: [Accessed 21 May 2020].
Anon, 2020. What is Logistic Regression? – Statistics Solutions. [online] Available at: [Accessed 21 May 2020].
Coudray, N., Ocampo, P.S., Sakellaropoulos, T., Narula, N., Snuderl, M., Fenyö, D., Moreira, A.L., Razavian, N. and Tsirigos, A. (2018). Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning, Nature Medicine, 24(10), 1559–1567.
Jagadev, P. and Virani, H.G. (2017). Detection of leukemia and its types using image processing and machine learning. In: 2017 International Conference on Trends in Electronics and Informatics (ICOEI). IEEE, pp. 522–526.
Kourou, K., Exarchos, T.P., Exarchos, K.P., Karamouzis, M.V. and Fotiadis, D.I. (2015). Machine learning applications in cancer prognosis and prediction, Computational and Structural Biotechnology Journal, 13, 8–17.
Levine, A.B., Schlosser, C., Grewal, J., Coope, R., Jones, S.J.M. and Yip, S. (2019). Rise of the machines: advances in deep learning for cancer diagnosis, Trends in Cancer, 5(3), 157–169.
Lg, A. and At, E. (2013). Using three machine learning techniques for predicting breast cancer recurrence, Journal of Health & Medical Informatics, 04, 02.
Negm, A.S., Hassan, O.A. and Kandil, A.H. (2018). A decision support system for acute leukaemia classification based on digital microscopic images, Alexandria Engineering Journal, 57(4), 2319–2332.
Torre, L.A., Siegel, R.L., Ward, E.M. and Jemal, A. (2016). Global cancer incidence and mortality rates and trends – an update, Cancer Epidemiology, Biomarkers & Prevention, 25(1), 16–27.
Xu, Y., Hosny, A., Zeleznik, R., Parmar, C., Coroller, T., Franco, I., Mak, R.H. and Aerts, H.J.W.L. (2019). Deep learning predicts lung cancer treatment response from serial medical imaging, Clinical Cancer Research, 25(11), 3266–3275.
Sahar Qazi, Naiyar Iqbal, Khalid Raza
5 Artificial intelligence in medicine (AIM): machine learning in cancer diagnosis, prognosis and therapy

Abstract: Artificial intelligence (AI) is the science of mimicking human intelligence, and it is widely employed to solve complex computational problems in different fields of study such as the sciences and medicine. Applications of AI have brought a drastic change to the healthcare industry, backed by rapid advancement in biosensor technology, data management in the cloud and various computational intelligence approaches, including deep learning. This chapter takes a general look at the applications of AI in the context of medicine, that is, the concept of AIM. As the essential way of applying AI, machine learning is the focus while analyzing developments in medicine; in detail, the perspective is directed at cancer diagnosis, prognosis and therapy applications. As AI-based devices are critical among the latest advances, they are discussed as well.

Keywords: artificial intelligence, medicine, medical, machine learning, cancer diagnosis
Acknowledgments: Sahar Qazi was supported by the INSPIRE Fellowship of the Department of Science and Technology, Government of India.
Sahar Qazi, Khalid Raza, Department of Computer Science, Jamia Millia Islamia, New Delhi, India
Naiyar Iqbal, Department of Computer Science & IT, Maulana Azad National Urdu University, Hyderabad, India
https://doi.org/10.1515/9783110668322-005

5.1 Introduction
Artificial intelligence (AI) is the science that lets machines simulate human intelligence tasks, which require learning, reasoning and self-correction. The basic idea of AI lies in mimicking human cognitive behavior, which mainly includes thinking and acting rationally. AI has been widely employed to solve many computational, industrial, business, engineering and biomedical problems, as it provides wide opportunities for inferring new solutions or improving existing ones. Moreover, because of its multifaceted performance and high robustness, it has now paved a way into the healthcare fraternity as well, and this is collectively termed artificial
intelligence in medicine (AIM) (Coiera, 1996, Szolovits, 2019). AIM has driven a drastic change in the healthcare sector, backed by the rapidly increasing availability of data and the continual improvement of analytic technologies (Miller, 1986). Healthcare data can be obtained in both structured and unstructured forms, and both must be analyzed. Widely used AI methods for structured data are support vector machines (SVMs), artificial neural networks (ANNs) and the deep learning approach. Many different AI techniques are available that have the potential to solve complex medical problems. For instance, ANNs have been employed for the diagnosis of various diseases, including cancers of breast, gastric, thyroid and oral epithelial cells, for radiographic and histopathological image analysis (computed tomography (CT) and magnetic resonance imaging (MRI) scans) and for the interpretation of data in intensive care units (ICUs) (Penã-Reyes and Sipper, 1999, Penã-Reyes and Sipper, 2000). Another intelligent system, fuzzy logic (FL), has been used to diagnose several cancers, including lung cancer, using tumor profile markers (Schneider et al., 2002). The evolution-based genetic algorithm has been studied to predict outcomes for illnesses such as lung cancer and melanoma and response to drugs (such as warfarin) (Güler et al., 2005). Researchers working in the field of AIM bear the onus of developing reliable and relatable evidence that these techniques work on reasonable and rational grounds. Nevertheless, there is still a gap between conventional and AI-based techniques in medical practice. However, much documentation proves that AIM offers better efficiency and hence deserves to become part and parcel of individual healthcare. This chapter highlights the existing AIM systems that are being employed to diagnose, prognose and treat myriad cancers such as blood, breast, ovarian, cervical, prostate, lung, skin and liver cancer. Furthermore, this chapter seeks to present the extensive use of AIM systems as "medical intelligence" for efficient point of care (PoC) for various types of cancer. Let us first define the term "intelligence." Human intelligence is a combination of many diversified traits that encapsulate the basics of problem understanding, problem solving, understanding the behavior of the problem and how one perceives it. The basic crux of human intelligence lies in adaptation and the learning of phenomena. For instance, if a person touches a hot pan, he/she will automatically remove his/her hand from the pan. This stimulus made the person learn that the hot pan must not be touched, as it can burn his/her hand. We must understand that learning can be derived from two sources, namely education and experience. Thus, humans tend to learn from their experiences; using learning from past experiences in new situations is referred to as generalization (Lamson, 2018). Learning, rationality, problem solving and perception are the main concerns dealt with by AI to make machines more "human-like." There are many forms of learning used in AI. The simplest form of learning is the trial-and-error method (https://searchenterpriseai.techtarget.com/definition/AI-Artificial-Intelligence).
For example, a computer program for solving “mate-in-one” chess problems moves at random until it finds the “mate.” The program then stores all the probable position solutions so that
the next time the program faces such a situation, it recalls the stored position solution. This simple learning is termed rote learning, which is easy for the computer program to grasp. The second aspect of AI is rationality. Alan Turing wrote a seminal paper in the 1950s on AI wherein he questioned the hot topic of his time – "Can machines think?" (Harnad, 1992, Harnad, 2001). AI allows machines to "think" rationally and accordingly make inferences and decisions. True rationality is not restricted to making inferences but extends to making them relevant to the solution of a specific problem. This inferencing rationale is one of the biggest problems in AI and is being actively worked on. Problem solving is simply a systematic search employing myriad actions so as to reach the goal node along the best and most optimal path possible. Problem-solving approaches in AI can be either for a special purpose or for a general purpose. A special-purpose approach is specific to a problem and cannot be generalized. However, a general-purpose problem-solving approach can be employed for a myriad of problems; for example, in the case of a robot, problem solving could be composed of PICKUP, PUTDOWN, MOVEFORWARD, TRACEBACK, MOVELEFT and MOVERIGHT actions until the solution is achieved. Finally, perception refers to viewing the environment by means of various sensory units (organs), wherein the visual scene is broken into separate objects in various spatial relationships. The study tends to get complex as an object's appearance may vary from one angle to another, as do the density and intensity of the background. AI has gained a lot of appreciation with the advent of the latest technology. It is only getting better and is being used not only in engineering, industry and marketing but in healthcare fraternities as well. The next section highlights the rise of AIM and how a new horizon began for computational biologists and bioinformaticians.
5.1.1 History of AI in medicine
Back in ancient times, Aristotle first tried to describe "logical/rational thinking" using his syllogisms. Taking inspiration from these initial studies, Turing's explanation of logical thinking and his understanding of "Can machines think?" established the main definition of contemporary rational thinking. Computational programs that can simulate human-like cognitive intelligence are known as artificially intelligent systems. Alan Turing, a British mathematician and one of the founders of modern-day computer science and AI (1950), defined human-like behavior in a computer as the capacity to successfully attain human-level performance in day-to-day tasks, which is known as the "Turing test" (Turing, 1950). Over the past century, computational researchers have dug deep in order to determine various applications of AIM (Lusted, 1955, Ledley and Lusted, 1959, Ramesh et al., 2004). The employment of AIM was first successfully achieved by Gunn (1976) by determining the chances of diagnosing acute abdominal pain with computational approaches (Gunn, 1976). This gigantic leap of computational approaches into the medical fraternity has enabled much more work in
medical AI. Currently, the healthcare industry is facing acute problems in acquiring, analyzing and applying the humongous information required to solve complex clinical problems. The spontaneous and rapid growth of AIM has been driven by the progression of AI programs that aid consultants in the diagnosis of diseases and, in turn, in developing accurate prognosis and treatment strategies for them. The main aim of AIM is to design an appropriate intelligent healthcare system that can help medical practitioners develop a robust healthcare regimen without relying on unpredictable, unrealistic knowledge. The intelligent systems utilized for AIM include ANNs, fuzzy systems, evolutionary algorithms (EA) and hybrid approaches using neural networks and EA (Ramesh et al., 2004).
5.1.2 Overview of machine learning algorithms
Machine learning (ML) is one of the applications of AI, which facilitates a system to automatically learn and improve from past experience (i.e., historical data) without being explicitly programmed. Tom M. Mitchell defined ML algorithms as follows: "a computer program is said to learn from experience E with respect to some class of task T and performance measure P if its performance at tasks in T, as measured by P, improved with experience E" (Mitchell, 1997). The learning starts with looking for patterns in past example data so that accurate decisions can be made in the future based on new example data. The main aim of an ML algorithm is to enable computers to learn automatically without human intervention. In other words, ML algorithms construct a mathematical model based on past example data, called "training data" (Jabeen et al., 2018). ML algorithms are mainly classified into four categories: supervised learning, unsupervised learning, semisupervised learning and reinforcement learning (Bazazeh and Shubair, 2016). Supervised learning is applied for solving classification and regression problems. Classification algorithms are applied when the output contains a limited set of values (i.e., nominal value prediction), while regression algorithms are known for continuous value prediction. In this approach, supervision in the learning derives from the labeled samples in the training dataset, with the goal of learning a general rule to map inputs to outputs. Some of the supervised learning algorithms are ANNs, SVMs, linear regression, logistic regression, naïve Bayes, decision tree, k-nearest neighbor and linear discriminant analysis. Unsupervised learning algorithms are applied when the data used for training are unlabeled or uncategorized. Such an algorithm attempts to infer a function in order to describe a hidden structure from unlabeled data, like clustering or grouping data. Unsupervised learning not only infers hidden patterns in the data and groups it, but these insights can also be used for unbiased, expert-independent decision making (Raza and Singh, 2018). Some of the commonly used unsupervised learning algorithms are
clustering, principal component analysis (PCA), singular value decomposition (SVD), autoencoders, deep belief networks (DBNs) and generative adversarial networks (GANs). Semisupervised learning is a mixture of the supervised and unsupervised learning approaches, in which both labeled and unlabeled examples are used to tackle classification and regression problems. The learning approach of this hybrid technique is that the labeled samples are used to learn class models and the unlabeled samples are used to refine the boundaries between classes (Zhu, 2005). Usually, it involves a very small volume of labeled data and a huge volume of unlabeled data. The basic procedure of semisupervised learning is that similar data are first clustered using an unsupervised learning algorithm and then the labeled data are applied to label the remaining unlabeled data (Zhu and Goldberg, 2009). In the reinforcement learning approach, feedback is given to the system; the algorithm is trained to interact with a dynamic environment with the aim of achieving a certain goal without a guide explicitly telling it whether it has attained its goal or not (Sutton et al., 1998). It takes suitable actions to maximize reward in a particular situation (e.g., a chess game). In reinforcement learning, the reinforcement agent decides what to do to perform the given task. Some of the reinforcement learning algorithms are Q-learning (Watkins and Dayan, 1992), state–action–reward–state–action (Hausknecht and Stone, 2015), deep Q-network (Hester et al., 2018) and deep deterministic policy gradient (Casas, 2017, Kaelbling et al., 1996).
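As a brief illustration of the Q-learning mentioned above, the following is a minimal tabular sketch; the five-state environment, its rewards and all hyperparameters are invented purely for demonstration.

```python
# Minimal tabular Q-learning sketch (Watkins and Dayan, 1992): the agent
# improves its action-value table from (state, action, reward, next state)
# feedback gathered while interacting with a toy environment.
import random

n_states, n_actions = 5, 2
alpha, gamma, epsilon = 0.1, 0.9, 0.2        # learning rate, discount, exploration
Q = [[0.0] * n_actions for _ in range(n_states)]

def step(state, action):
    """Hypothetical environment: action 1 moves forward; reward 1 at the goal."""
    next_state = min(state + action, n_states - 1)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

for _ in range(1000):                         # training episodes
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda act: Q[s][act])
        s2, r = step(s, a)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max Q(s',.) - Q(s,a))
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print(Q)  # the greedy policy should prefer the forward action in every state
```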
5.1.3 Transition from industry to medicine: motivation of AI in healthcare
AI is quickly moving into the healthcare industry, and its techniques will play a vital role in supporting diverse therapeutic capabilities (LaRosa and Danks, 2018). Healthcare AI and robotic autonomy are planned either for current use or for the near future: triage and assessment of patients, execution of essential medical procedures, well-defined tasks within more complex procedures, monitoring of patients' physical and mental health in short- and long-term care facilities, basic physical interventions to improve patient autonomy during physical or mental breakdown (e.g., a physical escort, or a reminder to take medicines), independent patient mobility (e.g., speech-recognition wheelchairs) and even specific tasks requiring physical intervention in clinical settings (Jiang et al., 2017, Vayena et al., 2018).
5.2 Becoming of artificial intelligence in medicine system
5.2.1 Medical data
The advancement in medical and health information technology has resulted in a many-fold increase in medical data in the era of big data, cloud computing and the Internet of things. Medical data are a valuable source that can be utilized for various types of analysis with the potential to improve disease diagnosis, prognosis and treatment. Electronic health records (EHRs), or computerized patient records, are needed to improve the quality of the current healthcare system. Medical data are used by all the stakeholders of the healthcare system, including administrators, doctors, researchers and accountants, for their own purposes. These data not only help to improve healthcare quality but also aid in new eHealth product development and health policymaking by the government and other organizations. Medical data refers to any health-related information associated with regular patient care, including (i) general numerical information such as vital signs (e.g., heart rate, respiratory rate and temperature), (ii) diagnostic information such as blood tests, genetic tests, culture results and radiological images, (iii) treatment information that includes medication, doses and dose durations and (iv) other clinical data such as administrative data, claims data, patient disease registries and health services. Hence, medical data are available in both structured and unstructured forms. The types of medical data include narrative, textual, numerical, drawings, images and device inputs. Some of the sources of medical data are hospital records, clinic records, disease records, medical literature, web-based databases, survey reports and real-time sensor data. The various sources of medical/health data are depicted in Figure 5.1.
5.2.2 AI medical devices
AI is on par with human intelligence when it comes to medical diagnosis. It has opened possibilities within the medical device industry to develop AI-enabled and AI-optimized devices, enabling doctors and medical practitioners to provide relevant and effective care quickly. AI-assisted medical devices offer healthcare consumers more control with a variety of features. For instance, the "One Drop System" (https://onedrop.today/) is a digitized glucose meter linked to a mobile app to automatically capture the blood sugar parameter, and it helps diabetic patients keep track of diet, exercise and medications (Figure 5.2a). Smart biosensors (sensors combined with AI) are being manufactured in order to track vital signs and alert consumers in case of a medical emergency.
Figure 5.1: Various sources and users of health data.
Figure 5.2: Some of the AI-assisted medical devices: (a) One Drop System, (b) smart wearable ECG/EKG monitor and (c) smart wireless blood pressure monitor.
The smart wearable ECG/EKG monitor by Qardio (https://www.getqardio.com/) tracks the electrocardiograph trace for deeper heart-health insights, stores the data on the smartphone and also automatically shares these data with doctors (Figure 5.2b). The smart blood pressure monitor QardioArm (https://www.getqardio.com/) is a US Food and Drug Administration (FDA)-approved and clinically validated wireless device that measures systolic–diastolic blood pressure and heart rate, and detects irregular heartbeat (Figure 5.2c). Patients at high risk of heart attacks, stroke and other critical health issues could seek medical attention before the occurrence of a critical emergency, and these devices would have great importance in such situations. AI is also extensively applied to give diagnostic information for various diseases and cancers, including skin cancer. In fact, AI has been a boon to the healthcare industry with countless applications, whether it is being utilized for discovering links between genetic codes, to power surgical robots or even to maximize efficiency in the healthcare industry. AI techniques are being integrated to make medical devices more reliable, accurate and automated. For instance, the field of medical imaging is steadily gaining traction, but clinically validated wearable devices are still emerging. The development and improvement of devices to manage and treat chronic diseases will continue to be a major area of focus.
5.2.3 Diseases spanned by AIM systems
It is evident that AIM is an autodidact. AI has been widely appreciated across the healthcare fraternity. We live in times wherein we discuss whether AIM can replace human involvement, such as consultants, nurses and laboratory technicians, in the future. In the current scenario of AIM, this cannot yet be confidently asserted. However, AIM can surely assist consultants to make accurate clinical decisions in critical and functional areas of medical well-being. The abundance of medical data, along with the development of humongous data analytic approaches, has successfully enabled AIM-based applications in the progression of efficient healthcare (Miller and Brown, 2018). Some of the diseases spanned for detection, diagnosis and therapeutics are gynecological disorders including breast cancer (Penã-Reyes and Sipper, 1999, Kahn et al., 1995), cervical cancer (Mat-Isa et al., 2008), ovarian cancer (Tan et al., 2005, Wang and Ng, 2006), prostate cancer (Saritas et al., 2003), diabetes mellitus (Kalaiselvi and Nasira, 2014), hematology disorders (Payandeh et al., 2009), genital problems of men (Abu Naser and Al Hanjori, 2016), liver diseases (Lin, 2009), oral cancers (de Bruijn et al., 2011), lung cancers (Zhou et al., 2002), hypertension (Kaur and Bhardwaj, 2014), cardiovascular and coronary disorders (Vijaya et al., 2010) and neurological disorders (Fernandez-Blanco et al., 2012, Amato et al., 2013).
5.2.4 AIM in oncology
AIM has been pivotal for many diseases, as mentioned in the previous section. This section discusses AIM-based approaches for the diagnosis, detection and treatment of various cancers. Cancer is one of the most challenging diseases and has been tagged as one of the most lethal of all. AIM has been employed widely to improve clinical decisions by consultants. This section presents a brief account of AIM in oncology. Wang et al. (2004) discerned that a modified self-organizing Takagi–Sugeno–Kang fuzzy neural network (FNN), namely a six-layered multi-input single-output feedforward system, plays a crucial role in ovarian cancer diagnosis. This AIM-based hybrid model provides a simple platform for performing reasoning and inference and can generate four classifiers for diagnosing ovarian cancer at five different stages, with an accuracy of approximately 84%. FNNs have been employed to diagnose cancers because of their accuracy and genuine reasoning process. The complementary learning FNN (CLFNN) is an improved version of the standard FNN that can derive fuzzy sets and formulate fuzzy rules rapidly; as it utilizes both positive and negative learning, it decreases the dimensionality and can provide commendable classification performance. The pseudo-associative complementary learning
approach is similar to CLFNN and has been successfully used as a diagnostic indicator for ovarian cancer (Tan et al., 2005). AIM applications also span digital images and use deep learning approaches to recognize and interpret heart diseases or other oncologic disorders. Generally, the input for such an intelligent system is plain X-rays, angiograms, computed tomography (CT or CAT), positron emission tomography (PET) or MRI scans. The Professor Fidelio system, an intelligent heuristic system based on defined diagnostic patterns, was examined on 366 patient specimens from patients suffering from lymphoproliferative disorders, leukemias and lymphomas. Fidelio's interpretations were close to consultant decisions in most cases, agreeing in 300/366 samples (Zini, 2005). Another study on lung cancer, executed by Zhou et al. (2004), stated that neural ensemble-based detection can be employed to determine lung cancer in patients using images of needle biopsies. Many hybrid approaches encapsulating EA and neural networks have been developed for disease classification and detection. A widely used and appreciated hybrid technique is the genetic algorithm-based artificial neural network (GA-ANN), which has shown the potential to classify cancer as benign (not cancerous) or malignant (severely cancerous). GA-ANN hybrid intelligence has been developed for breast cancer detection (Ahmad et al., 2010). Raza and Hasan (2015) performed a comprehensive evaluation of 10 ML techniques for prostate cancer prediction using microarray gene expression data.
5.3 Major components of AIM systems
5.3.1 Classical machine learning algorithms
ML builds data analytics models to extract attributes from data. Inputs to ML approaches consist of patient characteristics and sometimes health outcomes of interest. A patient's characteristics usually consist of baseline data (e.g., age, gender and disease history) and disease-specific data (e.g., diagnostic imaging, gene expression, electrophysiological tests, physical examination results, clinical symptoms and medication). In addition to these characteristics, the medical outcomes of patients are frequently gathered in medical investigations. These comprise disease indicators, patient survival times and quantifiable disease levels (Jordan and Mitchell, 2015). Based on whether the outcomes are incorporated, ML systems can be classified into two broad groups, namely unsupervised learning and supervised learning. Unsupervised learning is used to draw inferences from unlabeled datasets, while supervised learning is appropriate for developing a prediction model that maps an input to an output, inferring a function from labeled training datasets. Additionally, semisupervised learning has been developed as a hybrid of unsupervised
learning and supervised learning that is appropriate for situations where the outcome is absent for some samples (Jiang et al., 2017).
5.3.2 Support vector machines
The SVM was first proposed by Vapnik and Chervonenkis in 1963, but Boser and collaborators devised a way to create nonlinear classifiers (Boser et al., 1992). It is a kind of ML system that can be applied to solve classification and regression problems. It requires fundamental mathematical background in areas such as vector geometry and Lagrange multipliers (Fletcher, 2009). SVM has two main variants, supporting linear and nonlinear problems. The linear SVM works without a kernel and searches for the maximum-margin hyperplane that solves the particular problem. Support vector machines with kernels are applied when linear separation is not possible. In the linear SVM, the underlying problem space is separated in a linear manner: a hyperplane is drawn by the model that maximizes the margin between the classes. The borderline nodes in the attribute space are known as support vectors. Based on their relative positions, the largest margin is inferred and the optimal hyperplane is drawn at its center. The nonlinear SVM approach is applied when the attribute space cannot be separated linearly. A kernel is applied to map every training sample into a new space. The distribution of labels in the new space is such that the training dataset becomes linearly separable. Thereafter, a linear boundary is applied for the classification of the labels in that space. When the classification outcomes are projected back to the original attribute space, a nonlinear solution is achieved (Meyer, 2001, Hearst et al., 2018).
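A minimal sketch of this linear-versus-kernel contrast follows, assuming scikit-learn's two-moons toy data as a stand-in for an attribute space that cannot be separated linearly.

```python
# Minimal sketch: linear SVM vs. RBF-kernel SVM on nonlinearly separable data.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

linear_svm = SVC(kernel="linear").fit(X_train, y_train)
rbf_svm = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)  # kernel trick

# The RBF kernel implicitly maps points into a space where a separating
# hyperplane exists, so it should clearly outperform the linear variant here.
print("linear SVM:", linear_svm.score(X_test, y_test))
print("RBF SVM:   ", rbf_svm.score(X_test, y_test))
```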
5.3.3 Artificial neural networks
An ANN is a biologically inspired computational system assembled from many single elements, called artificial neurons, linked by coefficients (weight values) that establish the neural structure. As the neurons process information, they are also called processing units. Every processing unit has input weights, an activation function and one output value. The processing unit is basically an equation that balances input and output vectors. ANNs are also known as connectionist systems because the connection weights embody the memory of the model (Iqbal and Islam, 2019). Even though a single neuron can accomplish only some simple computation, the power of neural computation comes from networks of neurons. The hypothetical intelligence of ANNs is a matter of contention. ANNs rarely have
more than a few hundred or thousand processing units, while the human brain has approximately a hundred billion neurons. The human brain is considerably more multifaceted and, unfortunately, several of its cognitive capacities are still not well understood. ANNs are capable of handling broad measures of information and, nevertheless, of making predictions that are at times remarkably accurate (Agatonovic-Kustrin and Beresford, 2000).
5.3.4 Deep learning: a new vision in AIM systems
Deep learning is a more recently developed approach of ML that imitates the human brain by applying numerous layers of ANNs. Even though there are no clear measures on the depth threshold that distinguishes shallow from deep learning, the latter is conventionally defined as having several hidden layers (Lee et al., 2017). Deep learning applies deep networks with several intermediate layers of artificial neurons between the input and the output and, similar to the visual cortex, these artificial neurons learn a hierarchy of increasingly complex feature detectors. By learning feature detectors that are optimized for classification, deep learning can significantly outperform models that rely on features supplied by domain specialists or designed manually. Deep learning excels at modeling enormously complex associations between inputs and outputs. This approach can be applied to work as complex as forecasting future medical cases from previous cases. Deep learning is already reaching outcomes that are equivalent to or exceed those of human specialists (Hinton, 2018). Datasets are continuously growing and machines are becoming more powerful. The outcomes reached by deep learning will keep getting better, even with no enhancement in the elementary learning methods, although these methods, too, are being upgraded. The neural networks in the human brain learn from a smaller amount of data and develop a deeper, increasingly conceptual comprehension of the world. In contrast to ML techniques that depend on huge volumes of labeled data, human reasoning is able to find structure in unlabeled data, a method usually named unsupervised learning. The formation of a variety of complicated feature detectors on the basis of unlabeled data seems to set the stage for humans to learn a classifier from only a small quantity of labeled data. In what manner the brain does this is still a secret, but it will not remain so. As novel unsupervised learning techniques are developed, the data efficiency of deep learning will be significantly improved in the future, and the effectiveness of its applications in medical care and additional areas will rise quickly (Lakhani and Sundaram, 2017).
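As a small illustration of the several-hidden-layers idea, here is a minimal sketch using scikit-learn's multilayer perceptron; the digits dataset and the layer sizes are stand-ins, not a medical model.

```python
# Minimal deep MLP sketch: stacked hidden layers act as increasingly
# abstract feature detectors between input and output.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)          # 8x8 handwritten digit images
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

deep_net = MLPClassifier(hidden_layer_sizes=(128, 64, 32),  # three hidden layers
                         max_iter=500, random_state=0)
deep_net.fit(X_train, y_train)
print("deep MLP accuracy:", deep_net.score(X_test, y_test))
```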
5.3.5 Natural language processing
Natural language processing (NLP) is significant for advancing medical care since it is required to transform the relevant information locked in free text into structured data that can be used by AI processes intended to improve patient care and advance treatment (Duch et al., 2008, Friedman et al., 2013). NLP applications are commonly trained and validated on a reference set, which is a collection of reports manually annotated by one or more specialists in, for example, medical radiology. Frequently the annotated outcomes are binary: whether the condition is present, or whether the finding is significant. The reference set is usually split into a training set and a validation set. The validation set is withheld at the time of training and only applied to evaluate the performance of the application. An alternative common method is cross-validation, in which the reference set is divided into various subsets. The model is repeatedly trained, leaving out a different subset for validation in every phase. Hence, the entire data are used for training, whereas the validation outcomes are averaged over the various phases of cross-validation (Pons et al., 2016).
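A minimal sketch of the cross-validation scheme just described follows, with a generic classifier and dataset standing in for a real NLP pipeline and annotated reference set.

```python
# Minimal k-fold cross-validation sketch: the reference set is split into
# k folds, the model is trained k times, each time leaving one fold out
# for validation, and the fold scores are averaged.
from sklearn.datasets import load_breast_cancer   # stand-in reference set
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, KFold

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000)

folds = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=folds)   # one score per left-out fold

print("fold scores:", scores)
print("mean validation accuracy:", scores.mean())  # averaged over the folds
```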
5.3.6 Evolutionary systems
The area of evolutionary computation encompasses the investigation of the fundamentals and applications of computational approaches based on the principles of nature-inspired evolution. Evolution in nature is accountable for the design of every single living being on Earth, and for the strategies they use to interact with one another. Evolutionary algorithms utilize this powerful design idea to discover answers to difficult problems (Pena-Reyes and Sipper, 2000). Reproduction, mutation and selection are the three elementary mechanisms of natural evolution. These mechanisms act on the chromosomes holding the genotype, that is, the hereditary information of the individual. Reproduction is the procedure by which new individuals are introduced into a population. In sexual reproduction, crossover occurs, transmitting chromosomes that are a blend of both parents' hereditary material to the offspring. Mutation brings slight variations into the hereditary chromosomes; it is frequently caused by replication errors during the reproduction process. Selection is the procedure of survival of the fittest: the fittest individuals are those best adjusted to their environmental conditions, who therefore survive and reproduce (Tan et al., 2003). Evolutionary computation is the computational utilization of this nature-inspired evolution. Based on this metaphor, a problem assumes the role of an environment in which a population of individuals lives, each one representing a candidate answer to the problem. The level of adjustment of each individual, that is, candidate solution, to its environment is expressed by a competence measure called the fitness function. As with evolution in nature,
evolutionary techniques progressively yield increasingly improved solutions to the problem. This is possible thanks to the consistent introduction of novel genetic material into the population by applying so-called genetic operators, which are the computational parallels of natural evolutionary mechanisms. There are numerous kinds of evolutionary models, among which the best known are genetic algorithms, genetic programming, evolution strategies and evolutionary programming. Although they differ in their details, they are all based on identical general principles.
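To make the reproduction/mutation/selection loop concrete, here is a minimal toy genetic algorithm; the bit-string "OneMax" fitness and all parameters are illustrative assumptions.

```python
# Minimal genetic algorithm sketch: evolve bit-string chromosomes whose
# fitness is simply the number of ones (the classic "OneMax" toy problem).
import random

GENES, POP, GENERATIONS, MUT_RATE = 20, 30, 50, 0.01

def fitness(chrom):                      # fitness function: adaptation measure
    return sum(chrom)

def crossover(a, b):                     # reproduction: mix parents' material
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(chrom):                       # mutation: small random variations
    return [g ^ 1 if random.random() < MUT_RATE else g for g in chrom]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    # selection: the fittest half survives and reproduces
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP // 2]
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP - len(survivors))]
    population = survivors + children

print("best fitness:", fitness(max(population, key=fitness)), "of", GENES)
```

In a real application, the toy fitness would be replaced by, for example, the classification accuracy obtained with a candidate feature subset or network configuration.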
5.3.7 Fuzzy expert systems

FL offers powerful reasoning approaches that can deal with uncertainty and imprecision. Fuzzy expert systems (FES) describe imprecise information and provide a linguistic representation that approximates medical texts well. FL is a technique that specifically addresses what is imprecise in the medical domain (Saritas et al., 2003). An FES is a composite of an expert and a fuzzy system: it involves an expert individual, a knowledge engineer and a fuzzy system. The fuzzy system itself comprises four components: the fuzzy rule base, the fuzzy inference engine, fuzzification and defuzzification (Neshat et al., 2008; Raza, 2019).
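The four components can be sketched in a few lines of Python, as in the following minimal Mamdani-style inference example. The membership functions, the single glucose input and the risk output are invented for illustration and are not clinical rules.

    # Minimal fuzzy inference sketch: fuzzification, rule evaluation and
    # centroid defuzzification. All values here are illustrative.
    import numpy as np

    def tri(x, a, b, c):
        # Triangular membership function with peak at b.
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0)

    glucose = 150.0                      # crisp input value
    low = tri(glucose, 50, 80, 120)      # fuzzification of the input
    high = tri(glucose, 100, 180, 260)

    universe = np.linspace(0, 1, 101)    # output universe: risk in [0, 1]
    # Rule base: IF glucose is low THEN risk is low;
    #            IF glucose is high THEN risk is high (min implication,
    #            max aggregation, i.e., the fuzzy inference engine).
    agg = np.maximum(np.minimum(low, tri(universe, 0.0, 0.2, 0.5)),
                     np.minimum(high, tri(universe, 0.5, 0.8, 1.0)))
    # Defuzzification: centroid of the aggregated output set.
    risk = (universe * agg).sum() / agg.sum()
    print(round(float(risk), 2))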
5.4 Advantages and disadvantages of the AIM system

AIM has become a ubiquitous part of healthcare today. Without AIM, medical practitioners could hardly imagine accurate PoC decision systems supporting effective healthcare strategies for patients (Qazi and Raza, 2019). However, it is all the more necessary not to turn a blind eye to the loopholes that still exist. So far, we have discussed AI-based algorithms such as ANN, SVM and deep learning. The major advantage of deep learning algorithms is that they can read CT scans more rapidly than humans, while NLP algorithms can work through unstructured medical data in EHRs. SVMs and ANNs are widely used for disease diagnosis and prognosis. Furthermore, AIM has improved rural medical care by bridging the gap between urban and rural areas with advanced technology (Qazi and Raza, 2019). The applications of AIM are indeed umbrella wide. Howbeit, every technology faces some harsh criticism from skeptics, some of whom seek to downplay the potential of AIM in delivering a robust PoC. One of the chief challenges of AIM is the lack of normalized datasets needed to develop supervised models for disease diagnosis predictions. In the healthcare community especially, experienced consultants and practitioners find it hard to accept novel changes in
healthcare. A simple example justifies this point. Electronic medical records (EMR), a technology of the past decade, were devised to make billing tasks easy: they were user-friendly, highly efficient and time saving. However, for lack of well-trained personnel, medical billing tasks were mishandled and, thus, EMRs were appreciated but not widely adopted (Brown, 2018). Another major obstacle to the complete adoption of AIM in healthcare is that people are not ready to share their personal data. Usually, people are afraid that their private medical data may be misused or mishandled, or raise ethical issues. Such serious and genuine concerns must be acknowledged by the AIM community in healthcare. AIM-based approach developers should work more vigorously toward gaining patients' and practitioners' trust (Bresnick, 2018).
5.5 Applications of AIM systems for point of care

AIM can help consultants make better clinical decisions in critical and functional areas of medical well-being. This section discusses the vital applications of AIM for PoC well-being. AIM-based applications in the progression of efficient healthcare (Miller and Brown, 2018) cover detection, diagnosis and therapeutics in various diseases such as gynecological disorders including breast cancer (Pena-Reyes and Sipper, 1999; Kahn et al., 1995), cervical cancer (Mat-Isa et al., 2008), ovarian cancer (Tan et al., 2005; Wang and Ng, 2006), prostate cancer (Saritas et al., 2003), diabetes mellitus (Kalaiselvi and Nasira, 2014), hematology disorders (Payandeh et al., 2009), genital problems of men (Abu Naser and Al Hanjori, 2016), liver diseases (Lin, 2009), oral cancers (de Bruijn et al., 2011), lung cancers (Zhou et al., 2002), hypertension (Kaur and Bhardwaj, 2014), cardiovascular and coronary disorders (Vijaya et al., 2010) and neurological disorders (Fernandez-Blanco et al., 2012; Amato et al., 2013). Progress in information technology, along with the huge amount of data produced in healthcare systems, makes many medical problems ripe for AIM applications. In 2018, researchers at Seoul National University Hospital and College of Medicine devised an AIM-based algorithm called deep-learning-based automatic detection (DLAD), which could analyze chest radiographs and detect cancers. The algorithm achieved great success: when its predictions were compared with those of experienced practitioners, it had a better accuracy rate. Another recent application came from Google researchers, who developed a learning algorithm named lymph node assistant (LYNA). This algorithm analyzes histology-stained sections in order to identify breast cancer metastases in lymph nodes. LYNA was observed to be remarkably accurate, distinguishing tumors in lymph nodes that were almost impossible for the human eye to catch. The
algorithm was tested on cancerous and normal datasets, and LYNA was able to distinguish cancerous cells from noncancerous ones with an accuracy rate of 99% (Greenfield, 2019).
5.6 AIM system: a central bottleneck for clinicians?

So far we have discussed almost every basic aspect of AIM approaches in healthcare. It is also essential to mention how the latest technologies in medical care remain a central bottleneck for medical consultants. Traditional experts who are unaware of the benefits of AIM cannot accept the idea that machines might replace human intervention. AIM approaches and medical experts have long been at odds over disease prediction and clinical decision-making for effective patient treatment.
5.6.1 Medical practitioners versus AIM systems

The tension between medical practitioners and AIM systems has existed for the past decade. About 100 years ago, medical consultants were household figures, and finding the best doctor, competent in medical practice and caring toward his or her patients, was a big challenge. Consultants had to diagnose so many patients that rush and stress sometimes led to mistakes. Such irreversible errors by medical practitioners rose drastically, becoming the third leading cause of death in the USA. Howbeit, the advent of AI, expanding its branches into the medical fraternity, led to a gigantic change in medical practice. AI is simply a set of intelligent algorithms that reduce human intervention in delivering robust solutions to complex problems. AIM systems have undoubtedly marked a revolutionary era, changing the slow dynamics of medical healthcare to become fast and highly accurate, with efficient disease diagnosis and robust treatment decision potential. They have also drastically reduced medical malpractice, though at a heavy increase in expenditure. It must be stated very clearly that AIM systems are not meant to replace human consultants, but to assist them in making better medical decisions that support better patient treatment strategies and healthcare management (Broida, 2018).
5.6.2 Reasons for distrust in AIM systems

Traditional knowledge can never fully confide in the latest technology. The basic reason that medical consultants or patients cannot completely rely on AIM systems is the
fact that they are not ready to share medical data. They raise ethical, legal and social issues (ELSI) and are afraid that their private data may be misused, mishandled or even leaked for various purposes. AIM-based approach developers should work more vigorously toward gaining patients' and practitioners' trust (Bresnick, 2018). With rapidly increasing awareness of AIM systems, it seems that AI will occupy a big chunk of space in several areas of medicine and will also contribute to research. The fear of "getting replaced" is another major issue that leads medical consultants to express dejection about AIM systems (Chawla, 2018).
5.7 Challenges and future directions for AIM systems: still a long way to trot!

AIM is the key to the dynamic transformation of current medical healthcare toward PoC-based healthcare. By now, we can discern that AIM systems hold the potential to deliver medical equipment, infrastructure and decision strategies that are smarter, more efficient and quicker. Medical imaging and laboratory results become far faster, and examinations approach high accuracy. With a few loopholes fixed, AIM will reach its zenith very soon. Future prospects for AIM systems in medical healthcare are bright, and research is ongoing. The first and foremost future direction for AIM systems is to expand into personalized medicine, using strong decision systems that aggregate various datasets in a cost-effective, user-friendly manner for disease diagnosis and treatment. Moreover, AIM systems could do wonders if employed for medical imaging. This does not mean that radiologists will be replaced; rather, it will raise the demand for beneficial imaging examinations and will prevent diagnostic errors that are usually caused by oversights of radiologists or consultants. It will also enhance the performance of medical imaging for serious diseases such as cancer (https://www.siemens-healthineers.com/news/mso-dossier-ai.html).
5.8 Discussions

Human intelligence is a combination of many diverse traits that encapsulate the basics of understanding a problem, solving it, understanding its behavior and perceiving it. The crux of human intelligence lies in adaptation and the learning of phenomena. We must understand that learning can be derived from two sources: education and experience. Human intelligence tends to learn from experience, which is referred to as generalization, that is, when
we apply learning from past experiences to new situations (Lamson, 2018). Learning, rationality, problem solving and perception are the main concerns addressed by AI to make machines more "human-like." AI holds the potential to simulate, by machines, tasks of human intelligence involving learning, reasoning and self-correction. The crux of AI lies in modeling human cognitive behavior, which includes two things: thinking and acting rationally. One of the gargantuan challenges of AIM is the lack of normalized datasets needed to develop supervised models for disease diagnosis predictions (Brown, 2018). Another obstacle to the complete adoption of AIM in healthcare is that people are not ready to share their personal data: they fear that their private medical data may be misused or mishandled, or raise ethical issues. AIM-based approach developers should work more vigorously toward gaining patients' and practitioners' trust (Bresnick, 2018). Even though AIM systems have a few limitations, there are myriad applications, especially (Miller and Brown, 2018) in the detection, diagnosis and therapeutics of various diseases. In 2018, researchers at Seoul National University Hospital and College of Medicine devised an AIM-based algorithm, DLAD, which could analyze chest radiographs and detect cancers. Another recent application came from Google researchers, who developed LYNA, an algorithm that analyzes histology-stained sections in order to identify breast cancer metastases in lymph nodes (Greenfield, 2019). It is often quoted: "Traditional knowledge can never confide in the latest technology!" Medical consultants or patients cannot completely rely on AIM systems, as they are not ready to share medical data. They raise ELSI and are afraid that their private data may be misused, mishandled or even leaked for various purposes (Bresnick, 2018). With rapidly increasing awareness of AIM systems, it seems that AI will occupy a big chunk of space in several areas of medicine and will also contribute to research. The fear of "getting replaced" is another major issue that leads medical consultants to express dejection about AIM systems (Chawla, 2018). However, traditional medical practitioners must understand that AIM systems are here to help them make better clinical decisions and prescribe effective PoC healthcare regimens for their patients. AIM systems are developed to make human lives easier; they do not intend to "replace" anyone! If medical practitioners understand this and allow the gusty winds of change to blow, surely our healthcare policies will change for the better!
5.9 Conclusions

AI is the science of mimicking human intelligence and is widely employed to solve complex computational problems in different fields of study, including the health sciences and medicine, where it is generally termed AIM. AIM has brought a drastic change to the healthcare industry, backed by rapid advancement in biosensor technology, data management in the cloud and various computational intelligence approaches, including deep learning. AIM systems have been widely applied to the diagnosis of various diseases, including cancers of the breast, stomach, thyroid, oral epithelial cells, lung and melanoma, as well as to predicting response to drugs, analyzing radiographic and histopathological images and interpreting data in ICUs. Nevertheless, there is still a gap between traditional and AI-based techniques in the medical fraternity. Due to advancements in biosensors and other technologies, medical data are increasing rapidly. Medical data are a valuable resource that can be utilized for various types of analytics and have the potential to improve disease diagnosis, prognosis and treatment. Medical data are used for their own purposes by all the stakeholders of the healthcare system, including administrators, doctors, researchers and accountants. These data not only help to improve healthcare quality but also aid new eHealth product development and health policymaking by governments and other organizations. AI has opened possibilities within the medical device industry to develop AI-enabled and AI-optimized devices, enabling doctors and medical practitioners to provide relevant and effective healthcare quickly. AI-assisted medical devices offer healthcare consumers more control with a variety of features. Smart wearable biosensors (sensors combined with AI) are being manufactured to track vital signs and alert consumers in case of a medical emergency. The FDA is also actively engaged in their clinical validation and approval. The field of medical imaging is steadily gaining traction, although clinically validated wearable devices are still emerging. The development and improvement of devices to manage and treat chronic diseases will continue to be a major area of focus.
References

Abu Naser, S.S. and Al Hanjori, M.M. (2016). An expert system for men genital problems diagnosis and treatment, International Journal of Medicine Research, 1(2), 83–86.
Agatonovic-Kustrin, S. and Beresford, R. (2000). Basic concepts of artificial neural network (ANN) modeling and its application in pharmaceutical research, Journal of Pharmaceutical and Biomedical Analysis, 22(5), 717–727.
Ahmad, F., Mat-Isa, N.A. et al. (2010). Genetic Algorithm – Artificial Neural Network (GA-ANN) hybrid intelligence for cancer diagnosis. In Second International Conference on Computational Intelligence, Communication Systems and Networks. IEEE.
Amato, F., López, A. et al. (2013). Artificial neural networks in medical diagnosis, Journal of Applied Biomedicine, 11, 47–58.
Bazazeh, D. and Shubair, R. (2016, December). Comparative study of machine learning algorithms for breast cancer detection and diagnosis. In 2016 5th International Conference on Electronic Devices, Systems and Applications (ICEDSA) (pp. 1–4). IEEE.
Boser, B.E., Guyon, I.M. and Vapnik, V.N. (1992, July). A training algorithm for optimal margin classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory (pp. 144–152). ACM.
Bresnick, J. (2018). Arguing the Pros and Cons of Artificial Intelligence in Healthcare. Health IT Analytics. https://healthitanalytics.com/news/arguing-the-pros-and-cons-of-artificial-intelligence-in-healthcare
Broida, J. (2018). Artificial Intelligence VS Medical Doctors, The Health Edge. http://thehealthedge.com/2018/04/19/artificial-intelligence-vs-medical-doctors/.
Brown, T. (2018). Pros and Cons of Artificial Intelligence in Healthcare, JAMA Software. https://www.jamasoftware.com/blog/pros-and-cons-of-artificial-intelligence-in-health-care/.
Casas, N. (2017). Deep deterministic policy gradient for urban traffic light control. arXiv preprint arXiv:1703.09035.
Chawla, B. (2018). Artificial intelligence versus doctors: who wins?, Delhi Journal of Ophthalmology, 28, 4–5.
Coiera, E.W. (1996). Artificial intelligence in medicine: the challenges ahead, Journal of the American Medical Informatics Association, 3(6), 363–366.
de Bruijn, M., Bosch, L. et al. (2011). Artificial neural network analysis to assess hypernasality in patients treated for oral or oropharyngeal cancer, Logopedics Phoniatrics Vocology, 36, 168–174.
Duch, W., Matykiewicz, P. and Pestian, J. (2008). Neurolinguistic approach to natural language processing with applications to medical text analysis, Neural Networks, 21(10), 1500–1510.
Fernandez-Blanco, E., Rivero, D. et al. (2012). Automatic seizure detection based on star graph topological indices, Journal of Neuroscience Methods, 209, 410–419.
Fletcher, T. (2009). Support vector machines explained, Tutorial Paper.
Friedman, C., Rindflesch, T.C. and Corn, M. (2013). Natural language processing: state of the art and prospects for significant progress, a workshop sponsored by the National Library of Medicine, Journal of Biomedical Informatics, 46(5), 765–773.
Greenfield, D. (2019). Artificial Intelligence in Medicine: Applications, Implications & Limitations. http://sitn.hms.harvard.edu/flash/2019/artificial-intelligence-in-medicine-applications-implications-and-limitations/.
Güler, I., Polat, H. and Ergün, U. (2005). Combining neural network and genetic algorithm for prediction of lung sounds, Journal of Medical Systems, 29(3), 217–231.
Gunn, A.A. (1976). The diagnosis of acute abdominal pain with computer analysis, Journal of the Royal College of Surgeons of Edinburgh, 21, 170–172.
Harnad, S. (1992). The Turing Test is not a trick: Turing indistinguishability is a scientific criterion, SIGART Bulletin, 3(4), 9–10.
Harnad, S. (2001). Minds, machines and Turing: The indistinguishability of indistinguishables, Journal of Logic, Language and Information, 9(4), 425–455.
Hausknecht, M. and Stone, P. (2015). Deep reinforcement learning in parameterized action space. arXiv preprint arXiv:1511.04143.
Hearst, M.A., Dumais, S.T., Osuna, E., Platt, J. and Scholkopf, B. (1998). Support vector machines, IEEE Intelligent Systems and Their Applications, 13(4), 18–28.
Hester, T., Vecerik, M., Pietquin, O., Lanctot, M., Schaul, T., Piot, B. . . . Dulac-Arnold, G. (2018, April). Deep Q-learning from demonstrations. In Thirty-Second AAAI Conference on Artificial Intelligence.
Hinton, G. (2018). Deep learning – a technology with the potential to transform health care, JAMA, 320(11), 1101–1102.
Iqbal, N. and Islam, M. (2019). Machine learning for dengue outbreak prediction: A performance evaluation of different prominent classifiers, Informatica, 43(3), 363–371.
Jabeen, A., Ahmad, N. and Raza, K. (2018). Machine learning-based state-of-the-art methods for the classification of RNA-Seq data. In: Dey N., Ashour A., Borra S. (eds) Classification in BioApps. Lecture Notes in Computational Vision and Biomechanics, Springer, 26, 133–172. https://doi.org/10.1007/978-3-319-65981-7_6.
Jiang, F., Jiang, Y., Zhi, H., Dong, Y., Li, H., Ma, S. . . . Wang, Y. (2017). Artificial intelligence in healthcare: past, present and future, Stroke and Vascular Neurology, 2(4), 230–243.
Jordan, M.I. and Mitchell, T.M. (2015). Machine learning: Trends, perspectives, and prospects, Science, 349(6245), 255–260.
Kaelbling, L.P., Littman, M.L. and Moore, A.W. (1996). Reinforcement learning: A survey, Journal of Artificial Intelligence Research, 4, 237–285.
Kahn, C.E., Roberts, L.M. et al. (1995). Preliminary investigation of a Bayesian network for mammographic diagnosis of breast cancer. AMIA. 0195-4210/95.
Kalaiselvi, C. and Nasira, G.M. (2014). A new approach for diagnosis of diabetes and prediction of cancer using ANFIS. In World Congress on Computing and Communication Technologies. IEEE.
Kaur, A. and Bhardwaj, A. (2014). Artificial intelligence in hypertension diagnosis: a review, International Journal of Computer Science and Information Technologies, 5(2), 2633–2635.
Kovalerchuk, B., Triantaphyllou, E. et al. (1997). Fuzzy logic in computer-aided breast cancer diagnosis: analysis of lobulation, Artificial Intelligence in Medicine, 11, 75–85.
Lakhani, P. and Sundaram, B. (2017). Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks, Radiology, 284(2), 574–582.
Lamson, M. (2018). The Role of AI in Learning and Development: Make Way for Artificial Intelligence in Learning and Development. https://www.inc.com/melissa-lamson/the-role-of-ai-in-learning-development.html
LaRosa, E. and Danks, D. (2018, December). Impacts on trust of healthcare AI. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (pp. 210–215).
Ledley, R.S. and Lusted, L.B. (1959). Reasoning foundations of medical diagnosis, Science, 130, 9–21.
Lee, E.J., Kim, Y.H., Kim, N. and Kang, D.W. (2017). Deep into the brain: artificial intelligence in stroke imaging, Journal of Stroke, 19(3), 277.
Lin, H.R. (2009). An intelligent model for liver disease diagnosis, Artificial Intelligence in Medicine, 47, 53–62.
Lusted, L.B. (1955). Medical progress – medical electronics, The New England Journal of Medicine, 252, 580–585.
Mat-Isa, N.A., Mashor, M.Y. and Othman, H.R. (2008). An automated cervical pre-cancerous diagnostic system, Artificial Intelligence in Medicine, 42(1), 1–11.
Meyer, D. (2001). Support vector machines, Porting R to Darwin/X11 and Mac OS X, 1, 23.
Miller, D.D. and Brown, E.W. (2018). Artificial intelligence in medical practice: the question to the answer?, The American Journal of Medicine, 131(2).
Miller, P.L. (1986). The evaluation of artificial intelligence systems in medicine, Computer Methods and Programs in Biomedicine, 22(1), 3–11.
Muhamedyev, R. (2015). Machine learning methods: An overview, Computer Modelling & New Technologies, 19(6), 14–29.
Neshat, M., Yaghobi, M., Naghibi, M.B. and Esmaelzadeh, A. (2008, December). Fuzzy expert system design for diagnosis of liver disorders. In 2008 International Symposium on Knowledge Acquisition and Modeling (pp. 252–256). IEEE.
Payandeh, M., Aeinfar, M., Aeinfar, V. and Hayati, M. (2009). A new method for diagnosis and predicting blood disorder and cancer using artificial intelligence (artificial neural networks), International Journal of Hematology-Oncology and Stem Cell Research, 3(4), 25–33.
Pena-Reyes, C.A. and Sipper, M. (1999). A fuzzy-genetic approach to breast cancer diagnosis, Artificial Intelligence in Medicine, 17(2), 131–155.
Pena-Reyes, C.A. and Sipper, M. (2000). Evolutionary computation in medicine: an overview, Artificial Intelligence in Medicine, 19(1), 1–23.
Pons, E., Braun, L.M., Hunink, M.M. and Kors, J.A. (2016). Natural language processing in radiology: a systematic review, Radiology, 279(2), 329–343.
Qazi, S. and Raza, K. (2019). Smart biosensors for an efficient point of care (PoC) health management. In Smart Biosensors in Medical Care, Academic Press, 1–20. https://doi.org/10.1016/B978-0-12-820781-9.00004-8.
Ramesh, A.N., Kambhampati, C. et al. (2004). Artificial intelligence in medicine, Annals of The Royal College of Surgeons of England, 86, 334–338.
Raza, K. (2019). Fuzzy logic based approaches for gene regulatory network inference, Artificial Intelligence in Medicine, 97, 189–203.
Raza, K. and Hasan, A.N. (2015). A comprehensive evaluation of machine learning techniques for cancer class prediction based on microarray data, International Journal of Bioinformatics Research and Applications, 11(5), 397–416.
Raza, K. and Singh, N.K. (2018). A tour of unsupervised deep learning for medical image analysis. arXiv preprint arXiv:1812.07715.
Saritas, I., Allahverdi, N. and Sert, I.U. (2003, June). A fuzzy expert system design for diagnosis of prostate cancer. In Proceedings of the 4th International Conference on Computer Systems and Technologies – CompSysTech'2003: e-Learning (pp. 345–351). ACM.
Schneider, J., Bitterlich, N., Velcovsky, H.G., Morr, H., Katz, N. and Eigenbrodt, E. (2002). Fuzzy logic-based tumor-marker profiles improved sensitivity in the diagnosis of lung cancer, International Journal of Clinical Oncology, 7(3), 145–151.
Siemens Healthineers. (2018). State and Prospects of Artificial Intelligence. https://www.siemens-healthineers.com/news/mso-dossier-ai.html
Sutton, R.S. and Barto, A.G. (1998). Introduction to Reinforcement Learning, Vol. 2, No. 4, Cambridge, MIT Press.
Szolovits, P. Ed. (2019). Artificial Intelligence in Medicine, Routledge.
Tan, K.C., Yu, Q., Heng, C.M. and Lee, T.H. (2003). Evolutionary computing for knowledge discovery in medical diagnosis, Artificial Intelligence in Medicine, 27(2), 129–154.
Tan, T.Z., Quek, C. and Ng, G.S. (2005). Ovarian cancer diagnosis by hippocampus and neocortex-inspired learning memory structures, Neural Networks, 18, 818–825.
Turing, A.M. (1950). Computing machinery and intelligence, Mind, 59, 433–460.
Vayena, E., Blasimme, A. and Cohen, I.G. (2018). Machine learning in medicine: Addressing ethical challenges, PLoS Medicine, 15(11), e1002689.
Vijaya, K., Khanna Nehemiah, H. et al. (2010). Fuzzy neuro genetic approach for predicting the risk of cardiovascular diseases, International Journal of Data Mining, Modelling and Management, 2(4).
Wang, D. and Ng, G.S. (2006). Ovarian cancer diagnosis using fuzzy neural networks empowered by evolutionary clustering technique. In IEEE Congress on Evolutionary Computation. IEEE, 0-7803-9487-9/06.
Wang, D., Quek, C. and Ng, G.S. (2004). Novel self-organizing Takagi Sugeno Kang fuzzy neural networks based on ART-like clustering, Neural Processing Letters, 20(1), 39–51.
Watkins, C.J. and Dayan, P. (1992). Q-learning, Machine Learning, 8(3–4), 279–292.
Zhou, Z.H., Jiang, Y. et al. (2002). Lung cancer cell identification based on artificial neural network ensembles, Artificial Intelligence in Medicine, 24, 25–36.
Zhu, X. and Goldberg, A.B. (2009). Introduction to semi-supervised learning, Synthesis Lectures on Artificial Intelligence and Machine Learning, 3(1), 1–130.
Zhu, X.J. (2005). Semi-supervised learning literature survey, University of Wisconsin-Madison, Department of Computer Sciences.
Zini, G. (2005). Artificial intelligence in hematology, Hematology, 10(5), 393–400.
Omer Deperlioglu
6 Diagnosis of disease from medical databases using neural networks: a review

Abstract: Developments in information and communication technologies affect our daily lives in every field. Beyond routine work, computers can perform, in a very short time, operations that would take humans a very long time. These developments are also reflected in every unit in the field of medicine. Nowadays, computerized decision support systems can be found in almost every area, such as diagnosis, treatment and patient follow-up. In this chapter, the basic structure of medical decision support systems that use neural networks in their inference or estimation unit is first briefly explained. A brief overview of my previous studies involving both medical data and neural networks is then given. It is shown that, by exploiting different features of artificial neural networks and autoencoder neural networks, the classification accuracy rate can be increased without the need for hybrid algorithms with a heavy workload. Performance increases are demonstrated on the best-known medical databases in the UCI Machine Learning Repository, namely the Cleveland heart disease, Pima Indian diabetes, hepatitis and breast cancer data sets. At the end of the chapter, the results obtained from the classifications are compared with one another. After this evaluation, the best classification performance achieved with neural networks for these medical data sets is discussed. In addition, prominent features of the neural networks used for medical data are given.

Keywords: neural networks, deep neural network, disease diagnosis, classification of medical data

Omer Deperlioglu, Department of Computer Technology, Afyon Kocatepe University, Afyonkarahisar, Turkey
https://doi.org/10.1515/9783110668322-006
6.1 Introduction

Advances in hardware and computer technologies have brought about the development of software, and the development of database management systems from the 1950s onward has facilitated the processing of large data groups. With the growth of the Internet, it has become possible to use databases and data groups at remote locations. Since there is a great deal of data in the field of medicine, it is very difficult for physicians to investigate the specific symptoms and development of each case in detail. Usually,
expert support is needed when evaluating large amounts of information for these types of reviews. The daily workloads of doctors are very heavy, and they need support for such examinations. Software systems have therefore been developed to ease the work of doctors: to examine large data sets, to support decision-making and to find similarities and differences between cases. Lipkin and Hardy first mentioned the concept of decision support in 1958. Later, the first systems that could draw inferences automatically were developed (Ledley and Lusted, 1959; Lipkin and Hardy, 1958). Decision support systems essentially make suggestions by drawing on large databases created by experts and by making inferences with artificial intelligence algorithms in line with user requests. These systems are therefore also called knowledge-based systems or expert systems. Many medical decision support systems have been developed from the 1960s to the present day. Such expert systems are called medical decision-making systems (MDMS); an MDMS is one of the main components of clinical decision support systems, which include many functions. Figure 6.1 shows an example MDMS (Deperlioglu, 2018a). As shown in Figure 6.1, the task of an MDMS is to assist doctors in diagnosis and treatment by processing large amounts of data. The diagnosis and treatment of any patient may involve many different conditions, because the patient's previous illnesses, medications, laboratory test results, medical images and other analyses may differ greatly from those of other patients. In addition, the drugs or treatment methods to be used may have different side effects for each patient. The MDMS makes an inference for clinical decision support for each patient, taking into account all these conditions and similar cases in the databases and electronic records. The most important task in this process belongs to the classification unit. Scientists working on the development of MDMSs mostly focus on increasing classification success (Deperlioglu, 2018b; Deperlioglu, 2019a). Therefore, in order to obtain successful classification results, classification has been performed with different artificial intelligence algorithms. In addition, attempts have been made to increase the classification accuracy rate by feature selection with different optimization algorithms (Deperlioglu, 2018c). In this chapter, studies with neural networks that are widely used in MDMSs are examined. Studies on the classification of four different medical databases with artificial neural networks (ANN) and autoencoder neural networks (AEN) are reviewed. The results obtained in these studies are discussed, and the differences between the neural networks are revealed.
6.2 Literature review

The best-known medical databases in the UCI Machine Learning Repository, namely the Cleveland heart disease, Pima Indian diabetes, hepatitis and breast cancer data sets, have been used in these studies. Some of the previous diagnostic studies with these data sets are described below.
Figure 6.1: The main components of the medical decision-making system (Deperlioglu, 2018b).
For the processing and classification of medical data, the most commonly used artificial intelligence methods are fuzzy logic, support vector machines (SVM) and ANN. Jaganathan and Kuppuchamy suggested a fuzzy entropy-based feature relevance measure for the classification of medical databases. They used the radial basis function network classifier and an ANN for threshold selection to test their proposed method. In their study, prominent features were selected before classification using fuzzy entropy, thereby attempting to increase classification success (Jaganathan and Kuppuchamy, 2013). Likewise, Lekkas and Mikhailov used fuzzy classification to allow online data processing by repeatedly editing the fuzzy rule base on the basis of data streams; they applied this method to two separate medical data sets (Lekkas and Mikhailov, 2010). In the research of Kahramanli and Allahverdi, a hybrid neural network including an ANN and a fuzzy neural network was developed. To demonstrate the applicability of their method, they used the diabetes and heart disease data sets from the UCI Machine Learning Repository. To assess the predictions of the proposed method, they used performance measures widely adopted in the classification of medical data, such as accuracy, specificity and sensitivity (Kahramanli and Allahverdi, 2008). In another study, Polat and Günes attempted to diagnose diabetes using principal component analysis and an adaptive neuro-fuzzy inference system. The proposed system had two stages: in the first stage, the dimension of the eight-feature diabetes data set was reduced to four features using principal component analysis; in the second stage, the diabetes data set was classified by the adaptive neuro-fuzzy inference system to identify diabetes (Polat and Günes, 2007). Parthiban and Subramanian proposed a method based on the coactive neuro-fuzzy inference system (CANFIS) to identify heart disease. This CANFIS model integrates the qualitative approach of fuzzy logic with a genetic algorithm and a neural network to detect the presence of the disease (Parthiban and Subramanian, 2008). Data mining techniques are also widely used in bioinformatics to analyze biomedical data. The discovery of information from medical databases is important for effective medical diagnosis. The purpose of data mining is to extract knowledge from the database and provide clear and understandable descriptions. Parpinelli et al. proposed an ant-colony-based data mining algorithm, Ant-Miner, whose aim is to extract classification rules from the data. The algorithm was inspired both by research on the behavior of real ant colonies and by some data mining concepts and principles. The UCI hepatitis data set was used to evaluate the performance of the proposed method (Parpinelli et al., 2002). Han et al. performed data mining with RapidMiner to analyze the Pima Indian diabetes data set. They focused on data preprocessing, including attribute identification and selection, removal of outliers, data normalization and numerical discretization, as well as on visual data analysis, discovery of hidden relationships and diabetes prediction modeling (Han et al., 2008). In another study conducted with data mining techniques, a new approach to generating association rules on numerical data was presented. A modified
equal-width discretization approach was proposed to make continuous-valued attributes discrete. The approximate width of the desired intervals was selected based on the opinion of a medical expert and provided to the model as an input parameter (Ganji and Abadeh, 2010). Al Jarullah used the decision tree method to predict patients with diabetes using the Pima Indian diabetes data set. The study consisted of two stages: in the first stage, the data were preprocessed, including attribute identification and selection, processing of missing values and numerical discretization; in the second stage, a diabetes prediction model was formed using the decision tree method (Al Jarullah, 2011). Ba-Alwi and Hintaya analyzed hepatitis data using seven different classification algorithms: naive Bayes, updateable naive Bayes, FT tree, KStar, J48, LMT and neural networks. Looking at the classification accuracy and time values, they concluded that the classification performance of naive Bayes on the hepatitis data set was better than that of the other classification techniques (Ba-Alwi and Hintaya, 2013). In recent years, many applications of hybrid artificial intelligence approaches seem to have completely changed traditional approaches. The additional capabilities of hybrid approaches allow them to be used in any system. Processing biomedical data is another area that has undergone a major change over several years, and various new approaches and many new models have been proposed. Kala et al. conducted a literature review to compare and evaluate various recent hybrid approaches, using the Pima Indian data set as an example of biomedical applications. They chose three main hybrid systems alongside the standard backpropagation algorithm: adaptive neuro-fuzzy inference systems, ensemble ANNs and evolutionary ANNs. They concluded that a repetitive design approach, informed by the theoretical and practical aspects of the various hybrid methods, is required to obtain a good artificial intelligence system for problem solving (Kala et al., 2009). There are also various classification studies on the same data sets using SVM. Karatsiolis and Schizas proposed a method that combines RBF networks and k-nearest neighbor approaches efficiently, in the manner of neural networks, in order to increase classification success rates. The proposed algorithm divided the training set into two subsets: the first consisted of consistent data regions, and the second consisted of the data that were difficult to cluster. The first subset was used to train an SVM with a radial basis function kernel, while the second subset was used to train an SVM with a polynomial kernel (Karatsiolis and Schizas, 2012). Köse et al. proposed a diabetes diagnostic system built with an SVM and the cognitive development optimization algorithm. During the training of the SVM, the cognitive development optimization algorithm was used to determine the sigma parameter of the Gaussian kernel function, and then the classification was performed (Köse et al., 2016). Similarly, in another study, the vortex optimization algorithm was used to determine the sigma parameter of the Gaussian kernel function during the training of an SVM, and the classification was performed
for the Pima Indian diabetes data set (Köse et al., 2015). Kumari and Chitra achieved 78% classification accuracy on the Pima Indian data set using an SVM with a radial basis function kernel (Kumari and Chitra, 2013). In another study, a cascade learning system was proposed for the classification of diabetes, using generalized discriminant analysis and a least-squares SVM (Polat et al., 2008). Bhatia and colleagues developed a decision support system to classify heart diseases using an SVM and an integer-coded genetic algorithm. In this study, the genetic algorithm was used to maximize the accuracy of the simple SVM by selecting relevant features and deleting unnecessary ones (Bhatia et al., 2008). Recently, deep learning methods have also been used to classify heart sounds. For example, Potes and colleagues proposed a method based on a classifier ensemble that combines the outputs of AdaBoost and a convolutional neural network (CNN) to classify normal/abnormal heart sounds (Potes et al., 2016). Rubin et al. proposed an automatic heart sound classification algorithm that feeds time-frequency heat map representations into a CNN (Rubin et al., 2017). Deperlioglu used CNNs to classify phonocardiograms, both segmented and nonsegmented, obtained from the PASCAL and PhysioNet heart sound databases (Deperlioglu, 2018b; Deperlioglu, 2019a). In addition, diseases can be diagnosed from medical images. For example, diabetes was diagnosed using 400 retinal fundus images from the MESSIDOR database. That study employed image processing with histogram equalization and contrast-limited adaptive histogram equalization, and used a CNN for the classification of the retinal fundus images (Hemanth et al., 2018). Kose and Deperlioglu proposed a practical retinal fundus image enhancement approach, including HSV-V expansion and histogram equalization. These are well-known image enhancement methods, and the proposed approach is very easy and quick to implement. At the final stage of image processing, the retinal fundus images were filtered with a Gaussian low-pass filter and then classified with a CNN. The performance of the proposed method was evaluated on 400 retinal fundus images from the Kaggle database (Deperlioglu and Kose, 2018, October). As the preceding examples show, many hybrid methods are used in classification studies on medical data sets to improve classification performance. In this chapter, instead of increasing the processing load by using hybrid methods or separate algorithms for feature selection, studies are reviewed which show that classification performance can be increased by using ANNs with different training algorithms or deep neural networks such as autoencoders. At the end of the chapter, the performances of the neural networks are examined.
6.3 Components of the method

In this study, classification studies with ANN and AEN using four different data sets are examined. They are explained in detail below. All classification studies were carried out with MATLAB software.
6.3.1 Medical data sets

In the study, the Pima Indian diabetes, hepatitis, Wisconsin breast cancer and Cleveland heart disease data sets, which are commonly used medical data sets from the UCI repository, were used (UCI, 2007). The characteristics of these data sets are given in Table 6.1.

Table 6.1: The characteristics of data sets.

Data set                  Samples   Number of features   Number of output classes
Pima Indian diabetes      768       8                    2
Wisconsin breast cancer   699       9                    6
Hepatitis                 155       20                   2
Cleveland heart disease   303       13                   5
6.3.1.1 Pima Indian diabetes

The samples in the Pima Indian diabetes data set were selected from a larger database under several constraints; in particular, all patients are women of Pima Indian heritage who are at least 21 years old. There are 768 samples in the data set, with eight features for each sample: (1) number of pregnancies, (2) plasma glucose concentration, (3) diastolic blood pressure, (4) triceps skin fold thickness, (5) serum insulin, (6) body mass index, (7) diabetes pedigree function and (8) age. The data set has two output classes: healthy (diabetes negative) and diabetic (diabetes positive) (Jaganathan and Kuppuchamy, 2013).
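As an illustration, the data set can be loaded in Python with pandas as follows. The local file name and the absence of a header row are assumptions for this sketch, not properties guaranteed by any particular distribution of the data.

    # Illustrative loading of the Pima Indian diabetes data set from a
    # local CSV file (hypothetical file name, no header row assumed).
    import pandas as pd

    columns = ["pregnancies", "glucose", "diastolic_bp", "skin_thickness",
               "insulin", "bmi", "pedigree", "age", "outcome"]
    data = pd.read_csv("pima-indians-diabetes.csv", header=None, names=columns)

    X = data[columns[:-1]]   # the eight input features
    y = data["outcome"]      # 0 = healthy, 1 = diabetes
    print(X.shape, y.value_counts().to_dict())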
6.3.1.2 Wisconsin breast cancer

The Wisconsin breast cancer data set was collected by William H. Wolberg between 1989 and 1991 at the University of Wisconsin Madison Hospitals. The data set includes 699 samples with a total of nine features: (1) clump thickness, (2) uniformity of cell size, (3) uniformity of cell shape, (4) marginal adhesion, (5) single epithelial cell size, (6) bare nuclei, (7) bland chromatin, (8) normal nucleoli and (9) mitoses. All these
are used for predicting malignant or benign growths. In this data set, six tissue classes were studied using newly extracted electrical impedance measurements. These tissue classes (with sample counts) are carcinoma (21), fibroadenoma (15), mastopathy (18), glandular (16), connective (14) and adipose (22) (UCI, 2017).
6.3.1.3 Hepatitis data set

The hepatitis data set was obtained from Carnegie Mellon University. Each example contains 20 features, including (1) age, (2) sex, (3) steroids, (4) antivirals, (5) fatigue, (6) malaise, (7) anorexia, (8) liver big, (9) liver firm, (10) spleen palpable, (11) spiders, (12) ascites, (13) varices, (14) bilirubin, (16) alkaline phosphatase, (17) serum glutamic oxaloacetic transaminase (SGOT), (18) albumin, (19) protime and (20) histology. The data set has two output classes: live or die. There are a total of 155 samples in this data set (UCI, 2007; Ba-Alwi and Hintaya, 2013).
6.3.1.4 Cleveland heart disease

The data set contains 303 samples taken from the Cleveland Clinic Foundation; the 13 properties of each sample were selected from 76 raw features. The features are (1) age, (2) sex, (3) type of chest pain, (4) resting blood pressure, (5) cholesterol, (6) fasting blood sugar, (7) resting electrocardiographic results, (8) maximum heart rate achieved, (9) exercise-induced angina, (10) ST depression induced by exercise relative to rest, (11) slope of the peak exercise ST segment, (12) number of major vessels and (13) a blood disorder called thalassemia (THAL: 3 = normal; 6 = fixed defect; 7 = reversible defect). The data set has five classes: 1, 2, 3 and 4 indicate the presence of heart disease, and 0 indicates its absence (UCI, 2007; Detrano et al., 1989; Aha and Kibler, 1988; Gennari et al., 1989).
6.3.2 Classification

ANN and AEN were used in these studies.
6.3.2.1 Multilayer feedforward network

An ANN consists of a large array of units connected in a specific way so that they can communicate with one another. These units, also called nodes or neurons, are simple processors that operate in parallel. The processing ability of the network is stored in the connection weights, obtained, over a number of training examples, through the
learning process. The multilayer feedforward network is a feedforward ANN with multiple weight layers: between the input and output layers there are one or more layers called hidden layers (Gurney, 2004; Tutorials Point, 2017). Feedforward multilayer networks are networks in which no loops occur in the network path. A learning rule is defined as a procedure for changing the weights and biases of a network in order to minimize the mismatch between the output produced by the network and the desired output for any input. The learning rule, or network training algorithm, is used to adjust the weights and biases of the network so as to move the network outputs closer to the targets. The first training algorithm developed was the classical backpropagation algorithm. In its simplest form, backpropagation updates the network weights and biases in the direction in which the performance function decreases most rapidly (the negative gradient); faster training can be achieved with second-order optimization algorithms such as the conjugate gradient, Levenberg–Marquardt and Bayesian regularization learning algorithms.
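For illustration, a multilayer feedforward network of this kind can be sketched with scikit-learn. Note that the MATLAB training algorithms named above (Levenberg–Marquardt, Bayesian regularization, scaled conjugate gradient) are not available in scikit-learn, so this sketch uses the L-BFGS quasi-Newton solver as a stand-in, and the synthetic data merely mimic the shape of a medical data set.

    # Minimal multilayer feedforward network with one 10-neuron hidden
    # layer, echoing the topology used in the studies reviewed below.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=768, n_features=8, random_state=1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.15,
                                              random_state=1)
    net = MLPClassifier(hidden_layer_sizes=(10,), solver="lbfgs",
                        max_iter=1000, random_state=1)
    net.fit(X_tr, y_tr)
    print("test accuracy:", net.score(X_te, y_te))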
6.3.2.2 Autoencoder network

The AEN was first introduced by Hinton and Salakhutdinov (2006). Structurally, it shows great similarity to feedforward neural networks. It can be described as a network trained to represent its inputs better by encoding them. In the network architecture, the hidden layers generally contain fewer neurons than the input and output layers. Thus, the network can be viewed as an encoder-decoder pair connected through hidden layers containing a small number of neurons, which transmit the encoded representation of the inputs to the output. Each layer computes a function of the previous layers, which suffices to obtain new representations of the information corresponding to the encoding (Valentine and Jeannot, 2012). An AEN is basically a neural network with three main parts: an input layer, hidden (encoding-decoding) layers and an output layer. The AEN is trained to select and transfer the features that best represent its input; it forces the hidden layers to learn the best representations of the inputs. The AEN is therefore an unsupervised machine learning algorithm that applies backpropagation with the target values set equal to the inputs, so the network learns to reproduce its input at its output (Chicco et al., 2014; Chandrayan, 2018). Autoencoders belong to a class of learning algorithms known as unsupervised learning, and unsupervised learning algorithms do not need labeled data. In fact, an ANN with linear hidden layers will learn to project data onto its first principal components; nonlinear hidden layers, as well as additional hidden layers, allow an ANN to learn more complex coding functions (Chicco et al., 2014; Le, 2015).
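A minimal autoencoder along these lines can be sketched in Keras. The layer sizes, training data and optimizer below are illustrative assumptions rather than the configuration used in the studies reviewed here.

    # Minimal autoencoder: an encoder compresses the inputs into a smaller
    # hidden representation and a decoder reconstructs them; the targets
    # are the inputs themselves (unsupervised training).
    import numpy as np
    from tensorflow import keras

    n_features = 20
    inputs = keras.Input(shape=(n_features,))
    encoded = keras.layers.Dense(8, activation="relu")(inputs)              # encoder
    decoded = keras.layers.Dense(n_features, activation="linear")(encoded)  # decoder
    autoencoder = keras.Model(inputs, decoded)
    autoencoder.compile(optimizer="adam", loss="mse")

    X = np.random.rand(500, n_features).astype("float32")
    # Inputs serve as their own targets.
    autoencoder.fit(X, X, epochs=20, batch_size=32, verbose=0)

    # The trained encoder yields compressed features for a classifier.
    encoder = keras.Model(inputs, encoded)
    print(encoder.predict(X[:5], verbose=0).shape)  # (5, 8)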
6.3.3 Performance evaluation

The accuracy, sensitivity and specificity measures, which are widely used to evaluate performance in medical classification studies, were used to evaluate the performance of the proposed methods. These measures are calculated as follows (Gharehbaghi et al., 2015; Zhang et al., 2017; Kahramanli and Allahverdi, 2008):

$$\mathrm{Ac} = \frac{TP + TN}{TP + TN + FP + FN} \tag{6.1}$$

$$\mathrm{Se} = \frac{TP}{TP + FN} \tag{6.2}$$

$$\mathrm{Sp} = \frac{TN}{TN + FP} \tag{6.3}$$
where TP denotes the number of true positives, FP the number of false positives, TN the number of true negatives and FN the number of false negatives. Of these measures, the accuracy rate indicates the model's overall ability to make a correct diagnosis. The sensitivity indicates the model's ability to correctly identify instances of the target class. The specificity indicates the model's capacity to correctly exclude instances outside the target class (Kahramanli and Allahverdi, 2008).
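These three measures follow directly from the confusion-matrix counts, as in this small Python sketch; the counts passed in at the end are invented for illustration.

    # Accuracy, sensitivity and specificity from confusion-matrix counts,
    # directly following eqs. (6.1)-(6.3).
    def evaluate(tp, fp, tn, fn):
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        sensitivity = tp / (tp + fn)   # true positive rate
        specificity = tn / (tn + fp)   # true negative rate
        return accuracy, sensitivity, specificity

    # Illustrative counts for a binary diagnosis task.
    print(evaluate(tp=80, fp=5, tn=60, fn=10))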
6.4 Applications of classification

In the first study with ANN, the network structure used in the multilayer feedforward classification experiments, trained with different learning algorithms, was the same in all applications. There were a total of 10 neurons in the hidden layers of the networks. The Levenberg–Marquardt, Bayesian regularization and scaled conjugate gradient algorithms were used as learning algorithms. The mean square error function was used as the performance function for Levenberg–Marquardt and Bayesian regularization, and the cross-entropy cost function for the scaled conjugate gradient algorithm. In all classification studies, the maximum number of epochs was set to 1,000. Seventy percent of the samples in the data sets were used as training data, 15% as validation data and the remaining 15% as test data. Each classification procedure was run 20 times, and the highest values were selected. The Bayesian regularization learning algorithm gave the best accuracy values in all classification studies performed on the four data sets. The results
of the learning algorithm that achieved the highest accuracy rates in these studies are discussed here (Deperlioglu, 2018d). In the second study with ANN, a retrained ANN was used. The purpose of retraining the network is to increase its generalization and to prevent overfitting. In this study, a classification experiment was carried out to diagnose diabetes with the Pima Indian diabetes data set. Of the 768 samples in the data set, 90% (691 samples) were used as training data and 10% (77 samples) as test data. A multilayer feedforward network with 24 hidden neurons was used, trained with Bayesian regularization and the mean square error function. The ANN was retrained 10 times in a row. After the retraining process, the classification was repeated 20 times, and the average values were taken (Deperlioglu and Kose, 2018, September). In the classification study conducted with AEN, there were 24 hidden neurons in the network. In the encoding phase, the scaled conjugate gradient algorithm was used as the learning algorithm and the cross-entropy cost function as the performance function; in the decoding phase, the Levenberg–Marquardt algorithm and the mean square error method were used. Several trials were carried out with the data set to find the most appropriate parameters for the AEN. With the resulting AEN, 80% of the data set was used as training data and 20% as test data. The proposed classifier model was run 20 times with different training and test data, and the average accuracy, sensitivity and specificity values were taken (Deperlioglu and Kose, 2018, November; Deperlioglu, 2019, April-a; Deperlioglu, 2019, April-b). The results of the classification studies performed with each data set are considered separately. First, we examine the classification studies for diabetes detection using the Pima Indian diabetes data set. The results of these studies are given in Table 6.2, with the highest values shown in bold. The table shows that the best performance evaluation values were obtained with the AEN.
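The repeated-evaluation protocol used in these studies, training on a fresh random split in each of 20 runs and then taking the average or best scores, can be sketched as follows in Python. The synthetic data, the scikit-learn classifier and the 80/20 split are placeholders for the MATLAB setups described above.

    # Sketch of the 20-run repeated classification protocol.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=768, n_features=8, random_state=0)
    scores = []
    for run in range(20):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                                  random_state=run)
        net = MLPClassifier(hidden_layer_sizes=(24,), max_iter=1000,
                            random_state=run)
        net.fit(X_tr, y_tr)
        scores.append(net.score(X_te, y_te))
    print("mean accuracy:", np.mean(scores), "best:", max(scores))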
Table 6.2: Comparison data for the diabetes.

Order no.   Study                                      Method          Accuracy (%)   Sensitivity (%)   Specificity (%)
1           Deperlioglu (2018d)                        ANN             .              .                 .
2           Deperlioglu and Kose (2018, September)     Retrained ANN   .              .                 .
3           Deperlioglu and Kose (2018, November)      AEN             .              .                 .
Second, the results of the classification using the Wisconsin breast cancer data set were examined. In this data set, six tissue classes were classified using electrical impedance measurements, so there are six output classes in the classification. Comparison data for this data set are given in Table 6.3, with the highest values shown in bold. The table shows that the best performance evaluation values were obtained with the AEN. Although classification accuracy usually decreases when there are more than two output classes, quite high accuracy rates were obtained in both classification studies.

Table 6.3: Comparison data for the breast cancer.

Order no.   Study                         Method   Accuracy (%)   Sensitivity (%)   Specificity (%)
1           Deperlioglu (2018d)           ANN      .              .                 .
2           Deperlioglu (2019, April-b)   AEN      .              .                 .
Third, classification studies using the hepatitis data set were examined. Both studies have two output classes: hepatitis and healthy. Comparison data for this data set are given in Table 6.4, with the highest values shown in bold. The table shows that the best performance evaluation values were obtained with the AEN.

Table 6.4: Comparison data for the hepatitis.

Order no.   Study                         Method   Accuracy (%)   Sensitivity (%)   Specificity (%)
1           Deperlioglu (2018d)           ANN      .              .                 .
2           Deperlioglu (2019, April-a)   AEN      .              .                 .
Finally, studies using the Cleveland heart disease data set were examined. All classification studies were carried out for five output classes: four diseased classes and one healthy class. Comparison data for this data set are given in Table 6.5, with the highest values shown in bold. The table shows that the best performance evaluation values were obtained with the AEN. Despite the five output classes, both classification studies achieved very high accuracy rates.

Table 6.5: Comparison data for the Cleveland heart disease data set.

Order no.   Study                 Method   Accuracy (%)   Sensitivity (%)   Specificity (%)
1           Deperlioglu (2018d)   ANN      .              .                 .
2           Deperlioglu ()        AEN      .              .                 .
6.5 Conclusion
In this chapter, neural network classification studies using four of the best-known data sets in the University of California, Irvine (UCI) machine learning repository, namely the Cleveland heart disease, Pima Indian diabetes, hepatitis and breast cancer data sets, were reviewed. One of the prominent points is that the classification accuracy rate could be increased by using different learning methods in ANN, or by using deep learning, without resorting to mixed methods or to feature selection with different optimization methods. In addition, when the values of the performance measures for ANN are analyzed, it is seen that the best results are obtained with the Bayesian regularization algorithm, especially for medical data sets with multiple output classes; Bayesian regularization performs better than other learning algorithms particularly when the number of output classes is more than two. For the same medical data sets, it is observed that the classification success of AEN is better than that of ANN. Another prominent detail is that deep learning methods process faster than ordinary ANNs and save time: although CNNs take some time to learn because they process images, AEN processes faster than both. As a result of the investigations, it can be suggested that AEN can be used as a method for better classification of medical databases without using mixed methods. High accuracy indicates that the proposed model has strong diagnostic and prediction capabilities, a high sensitivity rate indicates that the model can accurately detect the target class, and a high specificity ratio shows that the model has a high target-class separation capacity. Thus, a well-organized AEN can be used to classify all types of medical data with high accuracy.
References
Aha, D. and Kibler, D. (1988). Instance-based prediction of heart-disease presence with the Cleveland database, University of California, 3(1), 3–2.
Al Jarullah, A.A. (2011, April). Decision tree discovery for the diagnosis of type II diabetes. In 2011 International conference on innovations in information technology (pp. 303–307). IEEE.
Ba-Alwi, F.M. and Hintaya, H.M. (2013). Comparative study for analysis the prognostic in hepatitis data: data mining approach, Spinal Cord, 11, 12.
Bhatia, S., Prakash, P. and Pillai, G.N. (2008, October). SVM based decision support system for heart disease classification with integer-coded genetic algorithm to select critical features. In Proceedings of the world congress on engineering and computer science (pp. 34–38).
Chandrayan, P. Deep Learning: Autoencoders fundamentals and types, https://codeburst.io/deeplearning-types-and-autoencoders-a40ee6754663. Last access: 25 January 2018.
Chicco, D., Sadowski, P. and Baldi, P. (2014, September). Deep autoencoder neural networks for gene ontology annotation predictions. In Proceedings of the 5th ACM conference on bioinformatics, computational biology, and health informatics (pp. 533–540). ACM.
Cleveland Heart Disease data set, http://archive.ics.uci.edu/ml/datasets/Heart+Disease. Last access: 10 January 2019.
Deperlioglu, O. (2018a). Intelligent Techniques Inspired by Nature and Used in Biomedical Engineering. Nature-Inspired Intelligent Techniques for Solving Biomedical Engineering Problems, 51–77, Hershey, USA, IGI Global. Doi: 10.4018/978-1-5225-4769-3.ch003.
Deperlioglu, O. (2018b). Classification of phonocardiograms with convolutional neural networks, BRAIN. Broad Research in Artificial Intelligence and Neuroscience, 9(2), 22–33.
Deperlioglu, O. (2018c). Classification of segmented heart sounds with Artificial Neural Networks, International Journal of Applied Mathematics, Electronics and Computers, 6(4), 39–44.
Deperlioglu, O. (2018d). The effects of different training algorithms on the classification of medical databases using artificial neural networks, 2nd European Conference on Science, Art & Culture (ECSAC 2018), Antalya, Turkey, April 19 to 22, 2018.
Deperlioglu, O. (2019, April-a). Hepatitis disease diagnosis with deep neural networks. In 2019 International 4th European Conference on Science, Art & Culture (ECSAC'2019), ISBN: 978-605-7809-73-5, pp. 467–473, April 18 to 21, Antalya.
Deperlioglu, O. (2019, April-b). Using autoencoder deep neural networks for diagnosis of breast cancer. In 2019 International 4th European Conference on Science, Art & Culture (ECSAC'2019), ISBN: 978-605-7809-73-5, pp. 475–481, April 18 to 21, 2019, Antalya.
Deperlioglu, O. (2019a). Classification of segmented phonocardiograms by convolutional neural networks, BRAIN. Broad Research in Artificial Intelligence and Neuroscience, 10(2), 5–13.
Deperlioglu, O. and Kose, U. (2018, September). Diabetes determination using retraining neural network. In 2018 International Conference on Artificial Intelligence and Data Processing (IDAP) (pp. 1–5). IEEE.
Deperlioglu, O. and Kose, U. (2018, October). Diagnosis of diabetic retinopathy by using image processing and convolutional neural network. In 2018 2nd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT) (pp. 1–5). IEEE.
Deperlioğlu, Ö. and Köse, U. (2018, November). Diagnosis of Diabetes mellitus Using Deep Neural Network. In 2018 2nd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT) (pp. 1–5). IEEE.
Detrano, R., Janosi, A., Steinbrunn, W., Pfisterer, M., Schmid, J.J., Sandhu, S. . . . Froelicher, V. (1989). International application of a new probability algorithm for the diagnosis of coronary artery disease, The American Journal of Cardiology, 64(5), 304–310.
Ganji, M.F. and Abadeh, M.S. (2010, May). Using fuzzy ant colony optimization for diagnosis of diabetes disease. In 2010 18th Iranian Conference on Electrical Engineering (pp. 501–505). IEEE.
Gennari, J.H., Langley, P. and Fisher, D. (1989). Models of incremental concept formation, Artificial Intelligence, 40(1–3), 11–61.
Gharehbaghi, A., Borga, M., Sjöberg, B.J. and Ask, P. (2015). A novel method for discrimination between innocent and pathological heart murmurs, Medical Engineering & Physics, 37(7), 674–682.
Gurney, K. (2004). An Introduction to Neural Networks, London, Taylor & Francis e-Library, UCL Press Limited.
Han, J., Rodriguez, J.C. and Beheshti, M. (2008, December). Diabetes data analysis and prediction model discovery using rapidminer. In 2008 Second international conference on future generation communication and networking (Vol. 3, pp. 96–99). IEEE.
Hemanth, D.J., Deperlioglu, O. and Kose, U. An enhanced diabetic retinopathy detection and classification approach using deep convolutional neural network, Neural Computing and Applications, 1–15.
Hinton, G.E. and Salakhutdinov, R.R. (2006). Reducing the dimensionality of data with neural networks, Science, 313(5786), 504–507.
Jaganathan, P. and Kuppuchamy, R. (2013). A threshold fuzzy entropy based feature selection for medical database classification, Computers in Biology and Medicine, 43(12), 2222–2229.
Kahramanli, H. and Allahverdi, N. (2008). Design of a hybrid system for the diabetes and heart diseases, Expert Systems with Applications, 35(1–2), 82–89.
Kala, R., Shukla, A. and Tiwari, R. (2009, December). Comparative analysis of intelligent hybrid systems for detection of PIMA indian diabetes. In 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC) (pp. 947–952). IEEE.
Karatsiolis, S. and Schizas, C.N. (2012, November). Region based support vector machine algorithm for medical diagnosis on pima Indian diabetes dataset. In 2012 IEEE 12th International Conference on Bioinformatics & Bioengineering (BIBE) (pp. 139–144). IEEE.
Kose, U., Guraksin, G.E. and Deperlioglu, O. (2016). Cognitive development optimization algorithm based support vector machines for determining diabetes, BRAIN. Broad Research in Artificial Intelligence and Neuroscience, 7(1), 80–90.
Köse, U., Güraksın, G.E. and Deperlioğlu, Ö. (2015, October). Diabetes determination via vortex optimization algorithm based support vector machines. In 2015 Medical Technologies National Conference (TIPTEKNO) (pp. 1–4). IEEE.
Kumari, V.A. and Chitra, R. (2013). Classification of diabetes disease using support vector machine, International Journal of Engineering Research and Applications, 3(2), 1797–1801.
Le, Q.V. (2015). A tutorial on deep learning part 2: Autoencoders, convolutional neural networks and recurrent neural networks, Google Brain, 1–20.
Ledley, R.S. and Lusted, L.B. (1959). Reasoning foundations of medical diagnosis, Science, 130, 9–21.
Lekkas, S. and Mikhailov, L. (2010). Evolving fuzzy medical diagnosis of Pima Indians diabetes and of dermatological diseases, Artificial Intelligence in Medicine, 50(2), 117–126.
Lipkin, M. and Hardy, J.D. (1958). Mechanical correlation of data in differential diagnosis of hematological diseases, Journal of the American Medical Association, 166(2), 113–125.
Parpinelli, R.S., Lopes, H.S. and Freitas, A.A. (2002). Data mining with an ant colony optimization algorithm, IEEE Transactions on Evolutionary Computation, 6(4), 321–332.
Parthiban, L. and Subramanian, R. (2008). Intelligent heart disease prediction system using CANFIS and genetic algorithm, International Journal of Biological, Biomedical and Medical Sciences, 3(3).
Polat, K. and Güneş, S. (2007). Automatic determination of diseases related to lymph system from lymphography data using principles component analysis (PCA), fuzzy weighting preprocessing and ANFIS, Expert Systems with Applications, 33(3), 636–641.
Polat, K., Güneş, S. and Arslan, A. (2008). A cascade learning system for classification of diabetes disease: generalized discriminant analysis and least square support vector machine, Expert Systems with Applications, 34(1), 482–487.
Potes, C., Parvaneh, S., Rahman, A. and Conroy, B. (2016, September). Ensemble of feature-based and deep learning-based classifiers for detection of abnormal heart sounds. In 2016 Computing in Cardiology Conference (CinC) (pp. 621–624). IEEE.
Rubin, J., Abreu, R., Ganguli, A., Nelaturi, S., Matei, I. and Sricharan, K. (2017). Recognizing abnormal heart sounds using deep learning. arXiv preprint arXiv:1707.04642.
Tutorials Point (2017). Artificial Neural Network, Tutorials Point (I) Pvt. Ltd.
UCI Machine Learning Repository, https://archive.ics.uci.edu/ml/datasets/pima+indians+diabetes, Irvine, CA: University of California, School of Information and Computer Science. Last access: 10 October 2017.
Valentine, A.P. and Trampert, J. (2012). Data space reduction, quality assessment and searching of seismograms: Autoencoder networks for waveform data, Geophysical Journal International, 189(2), 1183–1202.
Zhang, W., Han, J. and Deng, S. (2017). Heart sound classification based on scaled spectrogram and tensor decomposition, Expert Systems with Applications, 84, 220–231.
A. Lenin Fred, S. N. Kumar, Parasuraman Padmanabhan, Balazs Gulyas, H. Ajay Kumar
7 A novel neutrosophic approach-based filtering and Gaussian mixture modeling clustering for CT/MR images
Abstract: Medical imaging modalities like computed tomography (CT), magnetic resonance (MR) imaging, ultrasound and positron emission tomography have revolutionized modern medicine. Computer-aided algorithms are used for the analysis of medical images for disease diagnosis and treatment planning. Noise is unavoidable in medical images: CT images are corrupted by Gaussian noise and MR images by Rician noise. This chapter focuses on a novel neutrosophic median filter for CT/MR images, after which the denoised image is subjected to Gaussian mixture model segmentation. The neutrosophic domain filtering approach was found to be superior to other filters and was validated in terms of performance metrics on phantom images. Real-time abdomen CT and brain MR images were also used for the analysis, and the Gaussian mixture model segmentation results were found to be proficient when coupled with the neutrosophic domain filtering approach.
Keywords: neutrosophic median filter, Gaussian mixture model, segmentation
7.1 Introduction
The computer-aided algorithms are used for the inspection of medical images for disease diagnosis and treatment planning. The medical images acquired are affected by noise, computed tomography (CT) images are affected by Gaussian noise and magnetic
Acknowledgments: The authors would like to acknowledge the support from Nanyang Technological University under NTU Ref: RCA-17/334 for providing the medical images and supporting them in the preparation of the manuscript. Parasuraman Padmanabhan and Balazs Gulyas also acknowledge the support from Lee Kong Chian School of Medicine and Data Science and AI Research (DSAIR) Centre of NTU (Project Number ADH-11/2017-DSAIR) and the support from the Cognitive NeuroImaging Centre (CONIC) at NTU.
A. Lenin Fred, H. Ajay Kumar, Mar Ephraem College of Engineering and Technology, Elavuvilai, Tamil Nadu, India
S. N. Kumar, Amal Jyothi College of Engineering, Kanjirappally, Kerala, India
Parasuraman Padmanabhan, Balazs Gulyas, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
https://doi.org/10.1515/9783110668322-007
resonance (MR) images are affected by Rician noise (Gravel et al., 2004, He and Greenshields, 2008). The role of preprocessing is vital prior to subsequent processes like segmentation, classification and compression. The selection of the segmentation algorithm relies on the modality of the medical image and the nature of the ROI (Kumar et al., 2018a). The neutrosophic set-based filter generates better denoising results that outperform the median and mean filters for the standard Lena image corrupted by Gaussian and salt-and-pepper noise (Guo et al., 2009). The neutrosophic set median filter generates efficient results for simulated MR images from the BrainWeb database corrupted by Rician noise (Mohan et al., 2012b), and robust results were produced when compared with the anisotropic diffusion filter and the unbiased nonlocal mean filter. The neutrosophic domain filtering approach was efficient for the removal of speckle noise and, when coupled with fuzzy c-means (FCM) clustering, lesions were extracted efficiently; the lesion boundary was accurately extracted in breast ultrasound images (Shan et al., 2012). The segmentation results for neutrosophic filter-based clustering were superior to those of the classical active contour and watershed algorithms. The neutrosophic Wiener filtering approach was found to be efficient for the removal of Rician noise in MR images, and the results outperform the anisotropic diffusion, Wiener, nonlocal means and total variation filtering techniques (Mohan et al., 2013a). The accuracy of level set segmentation was improved by incorporating the neutrosophic filtering approach prior to extraction of the ROI (Guo and Şengür, 2013). The expectation–maximization (EM) segmentation algorithm was coupled with the neutrosophic filtering approach for the extraction of the lungs in CT images; postprocessing was performed by morphological operations, and the Jaccard index, average distance and Hausdorff distance were used for performance validation (Guo et al., 2013). The liver segmentation accuracy was improved when the neutrosophic filter was coupled with the FCM algorithm, and oversegmentation was minimized by the filtering approach (Anter et al., 2014). A novel edge detection based on neutrosophy and the alpha-mean operation was proposed for boundary detection in images; efficient results were produced for noisy images and images with complex objects, and the statistical analysis also reveals the proficiency of the neutrosophic set-based edge detection technique (Guo and Şengür, 2014). K-means clustering combined with the neutrosophic filtering approach yields efficient segmentation results (Akhtar et al., 2014). The neutrosophic c-means clustering and indeterminacy filtering approach generate efficient segmentation results for synthetic and natural images (Guo et al., 2017a). The neutrosophic approach was employed in the multilevel thresholding segmentation algorithm to yield better segmentation results; the cricket optimization algorithm was employed for the selection of the threshold value (Canayaz and Hanbay, 2016). The neutrosophic graph cut segmentation approach produces better segmentation results when compared with the neutrosophic clustering approach; the indeterminacy filter was applied after converting the image into the neutrosophic
domain (Guo et al., 2017b). Nguyen et al. (2019) performed a detailed analysis of the application of neutrosophic sets to biomedical diagnosis. The image is converted into the neutrosophic domain, and two operations, α-mean and β-enhancement, were employed for enhancement of the image. The TOPSIS segmentation approach applied to the enhanced image generates better segmentation when compared with the classical segmentation approaches (Xu et al., 2018). The neutrosophic approach was found to be efficient for the enhancement of grayscale images (Salama et al., 2018). The Chan–Vese algorithm was coupled with the neutrosophic approach for the detection of object boundaries (Sert and Alkan, 2019). Denoising was done by the wavelet transform and, for ROI extraction, neutrosophic FCM clustering was employed (Wen et al., 2019). The neutrosophic graph-cut-based segmentation detects cervical cancer efficiently (Devi et al., 2018). Section 7.2 describes the neutrosophic median filter, Section 7.3 describes the Gaussian mixture model segmentation and, finally, results and discussion are highlighted in Section 7.4.
7.2 Neutrosophic logic-based median filter
Neutrosophy is defined as a branch of philosophy that is used to analyze imprecise data more effectively than intuitionistic fuzzy logic (Mohan et al., 2013b). In the neutrosophic domain, a variable is represented by true, false and indeterminate values. The concept of neutrosophy gained its importance in preprocessing as well as in ROI extraction from images (Kumar et al., 2020, Mohan et al., 2013). The image in the spatial domain is converted into the neutrosophic domain, where it is depicted in terms of three membership sets, true (T), indeterminate (I) and false (F):

$$Y(a,b) = Y(T, I, F)$$

In terms of the pixel values, the image in the neutrosophic domain is expressed as follows:

$$Y_{NS}(a,b) = \{T(a,b),\, I(a,b),\, F(a,b)\}$$

where T(a,b), I(a,b) and F(a,b) indicate the probability densities of white pixels, pixels in the indeterminate set and nonwhite pixels, respectively. The equations for T(a,b), I(a,b) and F(a,b) are as follows:

$$T(a,b) = \frac{\bar{h}(a,b) - \bar{h}_{\min}}{\bar{h}_{\max} - \bar{h}_{\min}}$$
where

$$\bar{h}(a,b) = \frac{1}{w \times w} \sum_{m=a-w/2}^{a+w/2}\ \sum_{n=b-w/2}^{b+w/2} h(m,n)$$

$$I(a,b) = \frac{\mu(a,b) - \mu_{\min}}{\mu_{\max} - \mu_{\min}}$$

where

$$\mu(a,b) = \left|h(a,b) - \bar{h}(a,b)\right|, \qquad F(a,b) = 1 - T(a,b)$$

The pixel gray value spread is indicated by the entropy value: a higher entropy value represents a uniform distribution of pixels, and a lower value a nonuniform distribution. The entropy of an image in the neutrosophic set is expressed as follows:

$$E_{NS} = E_T + E_I + E_F$$

where

$$E_T = -\sum_{x=\min(T)}^{\max(T)} P_T(x)\ln P_T(x), \qquad E_I = -\sum_{x=\min(I)}^{\max(I)} P_I(x)\ln P_I(x), \qquad E_F = -\sum_{x=\min(F)}^{\max(F)} P_F(x)\ln P_F(x)$$
The entropy expression reveals that the entropy is a function of the probability density values of the T, I and F pixels. The value of I(a,b) depicts the indeterminacy in $Y_{NS}(a,b)$. The gamma filtering operation for $Y_{NS}(a,b)$, denoted $\hat{Y}_{NS}(\gamma)$, is expressed as follows:

$$\hat{Y}_{NS}(\gamma) = \left\{\hat{T}(\gamma),\, \hat{I}(\gamma),\, \hat{F}(\gamma)\right\}$$

where

$$\hat{T}(\gamma) = \begin{cases} T & I < \gamma \\ \hat{T}_{\gamma} & I \geq \gamma \end{cases}, \qquad \hat{T}_{\gamma}(a,b) = \underset{(m,n)\in S_{a,b}}{\operatorname{median}}\{T(m,n)\}$$

$$\hat{F}(\gamma) = \begin{cases} F & I < \gamma \\ \hat{F}_{\gamma} & I \geq \gamma \end{cases}, \qquad \hat{F}_{\gamma}(a,b) = \underset{(m,n)\in S_{a,b}}{\operatorname{median}}\{F(m,n)\}$$

and the updated indeterminacy is

$$\hat{I}(\gamma)(a,b) = \frac{\mu_{\hat{T}}(a,b) - \mu_{\hat{T}\min}}{\mu_{\hat{T}\max} - \mu_{\hat{T}\min}}, \qquad \mu_{\hat{T}}(a,b) = \left|\hat{T}(a,b) - \bar{\hat{T}}(a,b)\right|$$

$$\bar{\hat{T}}(a,b) = \frac{1}{w \times w} \sum_{m=a-w/2}^{a+w/2}\ \sum_{n=b-w/2}^{b+w/2} \hat{T}(m,n)$$
Stage 4: Pass to stage 5, if Eb ðx + 1Þ − Eb ðxÞ=ðEb ðxÞÞ < μ is satisfied; else, T=Tbγ , Iγ Iγ Iγ return to stage 2. Stage 5: The resultant image is converted into the pixel domain. The filtering approach based on neutrosophic logic outperforms the classical approaches. In the neutrosophic median filter, initial gamma value of 0.1 is chosen and its value is incremented with a step size of 0.005. The iterative procedure stops, when the difference of entropy between previous and current iteration is under the predetermined threshold value.
7.3 Gaussian mixture model segmentation
The input medical images are filtered by the neutrosophic domain median filter prior to segmentation. The pixel gray value in an image reflects the intensity profile and is expressed in terms of a mixture of Gaussian distributions:

$$f(y) = \sum_{i=1}^{k} p_i\, N\!\left(y \mid \mu_i, \sigma_i^2\right)$$
Let Y represent the random variable that depicts the gray values, k the number of classes and $p_i > 0$:

$$N\!\left(y \mid \mu_i, \sigma_i^2\right) = \frac{1}{\sigma_i\sqrt{2\pi}} \exp\!\left(-\frac{(y - \mu_i)^2}{2\sigma_i^2}\right)$$

where $\mu_i$ is the mean and $\sigma_i$ the standard deviation of class i. The image is represented as X, the pixel values denote the lattice data and the GMM is the pixel-based model. The number of regions in the GMM is assumed from the histogram of the lattice data. The EM-MAP algorithm is used here, and the steps are summarized as follows:
Step 1: The input image is observed as a vector $y_j$, $j = 1, 2, \ldots, n$, with label set $i \in \{1, 2, \ldots, k\}$.
Step 2: Initialize the parameters as
$$\theta^{(0)} = \left(p_1^{(0)}, \ldots, p_k^{(0)},\ \mu_1^{(0)}, \ldots, \mu_k^{(0)},\ \sigma_1^{2(0)}, \ldots, \sigma_k^{2(0)}\right)$$
Step 3 (E step):
$$p_{ij}^{(r+1)} = P\!\left(i \mid y_j\right) = \frac{p_i^{(r)}\, N\!\left(y_j \mid \mu_i^{(r)}, \sigma_i^{2(r)}\right)}{f\!\left(y_j\right)}$$
Step 4 (M step):
$$p_i^{(r+1)} = \frac{1}{n}\sum_{j=1}^{n} p_{ij}^{(r+1)}, \qquad \mu_i^{(r+1)} = \frac{\sum_{j=1}^{n} p_{ij}^{(r+1)}\, y_j}{n\, p_i^{(r+1)}}, \qquad \sigma_i^{2(r+1)} = \frac{\sum_{j=1}^{n} p_{ij}^{(r+1)}\left(y_j - \mu_i^{(r+1)}\right)^2}{n\, p_i^{(r+1)}}$$
Step 5: Iterate steps 3 and 4 until a specified error $\sum_i e_i^2$ is reached.
Step 6: Calculate the MAP labels
$$l_j = \operatorname{ArgMax}_i\, p_{ij}^{(\text{final})}$$
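A minimal NumPy version of steps 1 to 6 for one-dimensional gray values is sketched below; the number of classes k, the initialization and the stopping tolerance are illustrative choices rather than the chapter's exact settings.

```python
# EM sketch for a 1-D Gaussian mixture over pixel gray values, following
# steps 1-6 above; k and the tolerance are illustrative assumptions.
import numpy as np
from scipy.stats import norm

def em_gmm(y, k=3, iters=200, tol=1e-8):
    n = len(y)
    p = np.full(k, 1.0 / k)                         # mixing proportions p_i
    mu = np.quantile(y, np.linspace(0.1, 0.9, k))   # spread-out initial means
    sig2 = np.full(k, y.var())
    for _ in range(iters):
        # E step: responsibilities p_ij = p_i N(y_j | mu_i, sig2_i) / f(y_j)
        dens = np.stack([p[i] * norm.pdf(y, mu[i], np.sqrt(sig2[i]))
                         for i in range(k)])
        resp = dens / dens.sum(axis=0, keepdims=True)
        # M step: update proportions, means and variances
        p = resp.mean(axis=1)
        mu_new = (resp * y).sum(axis=1) / (n * p)
        sig2 = (resp * (y - mu_new[:, None]) ** 2).sum(axis=1) / (n * p)
        if np.sum((mu_new - mu) ** 2) < tol:        # squared-error stopping rule
            mu = mu_new
            break
        mu = mu_new
    return p, mu, sig2, resp.argmax(axis=0)         # MAP labels l_j

rng = np.random.default_rng(0)                      # three-region toy "image"
y = np.concatenate([rng.normal(m, 10, 500) for m in (40, 120, 200)])
p, mu, sig2, labels = em_gmm(y)
```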
In binary thresholding, each output pixel is set by comparing the source pixel with a fixed threshold:

$$dst(x,y) = \begin{cases} \text{maxVal} & \text{if } src(x,y) > \text{thresh} \\ 0 & \text{otherwise} \end{cases}$$

(the BINARY_INV variant swaps the two cases).
Figure 13.12: Binary thresholding (original image, BINARY and BINARY_INV results).
13.3.2.3 Otsu binarization
The final step of primary segmentation is Otsu binarization. Otsu binarization, named after Nobuyuki Otsu, performs image thresholding with a single intensity value: pixels are classified into foreground or background using the threshold for which the weighted within-class variance is minimal. Let t be the threshold value; it subdivides the tongue picture into two classes, c0 and c1, and the class variances of the segmented classes c0 and c1 are then calculated. Image thresholding using the binary and Otsu binarization methods is shown in Figures 13.12 and 13.13, respectively; a short OpenCV sketch of both steps follows.
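The sketch below shows both thresholding steps with OpenCV; the file name tongue.jpg and the fixed threshold of 127 are illustrative assumptions.

```python
# Binary and Otsu thresholding sketch; file name and fixed threshold
# value are assumptions for illustration.
import cv2

gray = cv2.imread("tongue.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical file

# Plain binary rule: dst = maxVal if src > thresh else 0 (and its inverse)
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
_, binary_inv = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)

# Otsu picks the threshold that minimizes the within-class variance.
otsu_t, otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("Otsu threshold:", otsu_t)
```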
13.3.3 Secondary segmentation
13.3.3.1 Edge detection
Canny edge detection is a method that extracts useful structural information from different vision objects and dramatically reduces the amount of data to be processed. It has been extensively applied in computer vision systems. Canny found that the requirements for applying edge detection in diverse vision systems are relatively similar; as a consequence, an edge detection approach that addresses those requirements can be used in a wide variety of situations. A short sketch is given below.
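A minimal Canny sketch with OpenCV follows; the smoothing kernel size and the two hysteresis thresholds are assumptions, not the chapter's tuned values.

```python
# Canny edge detection sketch; kernel size and the two hysteresis
# thresholds are illustrative assumptions.
import cv2

gray = cv2.imread("tongue.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file
blurred = cv2.GaussianBlur(gray, (5, 5), 0)            # suppress noise first
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
```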
13.3.3.2 ROI segmentation
ROI segmentation is the primary technique in the secondary segmentation process. The "region of interest", or ROI, is generally decided on the basis of pixel intensity values or user-defined regions (through drawing and subsequent masking). Segmentation refers to a process of isolating objects of interest from uninteresting objects.
Figure 13.13: Otsu binarization with histogram values (grayscale map, histogram and segmentation picture for two sample images).
Figure 13.14: Edge detection (original image and edge image).
Figure 13.15: ROI segmentation.
A segmented picture based on threshold intensity separates uninteresting from interesting pixels. Once segmented, the picture pixels may be reassigned intensity values of either 0 or 1; the results of edge detection are depicted in Figure 13.14. In this operation, the geometry remains identical, but the grayscale image is converted into a binary image containing only the intensities 1 and 0. Morphometrics is a process of counting adjoining pixels with a value of one and ignoring those with a value of zero. If intensity values are required, then the binary image may be used as a mask overlaid on the original image, exposing only the ROI pixels. Once masked, the intensity values of the image can be obtained. Figure 13.15 shows ROI segmentation, and a short masking sketch follows.
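The sketch below illustrates the masking step just described: the Otsu binary image acts as the mask, pixel counting gives the simple morphometric measure, and bitwise masking exposes the ROI intensities. The file name is a placeholder.

```python
# ROI masking sketch: the binary image acts as a mask; pixel counting
# gives the morphometric measure described above.
import cv2
import numpy as np

gray = cv2.imread("tongue.jpg", cv2.IMREAD_GRAYSCALE)    # hypothetical file
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

roi_pixel_count = int(np.count_nonzero(mask))            # adjoining 1-valued pixels
roi_intensities = cv2.bitwise_and(gray, gray, mask=mask) # expose only ROI pixels
```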
13.3.4 Feature extraction The final step of the process is classification of the model into vata, pitta and kapha. Before the process, we have to extract the required features based on color, coating, texture and shape. Accuracy of the model increases with the increase in the number
of features that are used to obtain the model. Texture of the model helps in identifying the dryness of the tongue. Shape of the tongue helps in determining the classification. These feature extractions help in the better classification of the model.
13.3.4.1 RGB color model
Color models refer to a particular way of organizing colors: a color space is effectively the combination of a color model and a mapping function. We need color models because they allow us to represent pixel values as tuples, and the mapping function maps the color model to the set of all colors that can be represented. There are many useful color models; RGB, YUV, HSV and Lab are some of the most popular, and different color models provide different advantages, so we simply choose the one that is right for the given problem. An RGB tongue image is stored as an m × n × 3 data array that defines the red, green and blue color components for each individual pixel. RGB images do not use a palette; the color of each pixel is determined by the combination of the red, green and blue intensities stored in each color plane at the pixel's location. RGB images are stored as 24-bit images, where the red, green and blue components are 8 bits each, which yields a potential of 16 million colors. Since the color of the tongue image is a combination of pink and white, the component nearest to the tongue color is red and, hence, the red color component is extracted from the tongue image (a short sketch follows the figure). Tongue RGB color components are shown in Figure 13.16.
Figure 13.16: RGB color component.
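A one-line extraction of the red plane can look as follows; note that OpenCV loads images in BGR channel order, so red is index 2. The file name is a placeholder.

```python
# Extracting the red plane of the m x n x 3 tongue image; OpenCV loads
# images in BGR order, so the red component is channel index 2.
import cv2

bgr = cv2.imread("tongue.jpg")   # hypothetical file
red = bgr[:, :, 2]               # red component used for tongue analysis
```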
13.3.4.2 Local binary pattern
The local binary pattern (LBP) algorithm parameters control how local binary patterns are computed for each pixel in the input image. The built-in function extractLBPFeatures() in MATLAB helps us extract the number of features required for the tongue image. This function takes parameters, namely the input
Figure 13.17: Features after interlacing.
image and the name-value pair "Interpolation" set to either "Linear" or "Nearest", which selects the interpolation method used to compute pixel neighbors. Using "Nearest" yields a less accurate but faster computation; here we use "Linear" to gain more accuracy and more features. The function extracts the required features, which are the pixel values of the required region, divided into five column sets, each further divided into a number of blocks. In the first column set, the blocks represent the color components of the image as fractional values. In the second column set, each block represents the texture of the tongue image: the first block represents the side blocks of the tongue image, the next four blocks represent the tip, and the last block represents the root of the tongue image (which is not taken into consideration). In the third column set, the blocks represent sinθ, cosθ, x sinθ, y cosθ, y sinθ and x cosθ. In the fourth column set, the first three blocks represent Σ (sigma) values, the next six blocks represent θ values, and the last two blocks are used to check threshold values that detect whether the given image is a tongue or a nontongue image. In the fifth column set, each block represents the shape of the tongue image, divided into three subclasses: the first block represents the width and ratio of the tongue image, the second block represents the half distance and center distance, and the third block represents the circular, square and triangular areas. The feature extraction results are shown in Figure 13.17, and an equivalent Python sketch is given below.
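The chapter's implementation uses MATLAB's extractLBPFeatures(); for illustration, a comparable uniform-LBP texture histogram can be computed in Python with scikit-image. The neighborhood parameters P and R, the bin count and the synthetic image are assumptions.

```python
# LBP texture-histogram sketch (a Python counterpart of MATLAB's
# extractLBPFeatures); P, R and the synthetic image are assumptions.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1):
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist   # normalized uniform-LBP descriptor

gray = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # placeholder image
features = lbp_histogram(gray)
```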
13.3.5 Classification
Classification can be evaluated using the f-measure components, precision and recall, after a contrast-stretching step:

$$f_{out}(x,y) = 255 \times \left(\frac{f_{in}(x,y) - f_{in\min}(x,y)}{f_{in\max}(x,y) - f_{in\min}(x,y)}\right)^{\gamma}$$

$$\text{Precision} = \frac{t_p}{t_p + f_p}, \qquad \text{Recall} = \frac{t_p}{t_p + f_n}$$

Here, $f_{out}(x,y)$ is the color level of the output pixel (x, y) after the contrast-stretching process, $f_{in}(x,y)$ is the color-level input for the pixel (x, y), $f_{in\max}(x,y)$ and $f_{in\min}(x,y)$ are the maximum and minimum color-level values in the input image, and γ is a constant that defines the shape of the stretching curve. A small computational sketch follows.
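The following sketch computes the two formulas directly; γ and the example counts are assumptions for illustration.

```python
# Sketch of the contrast-stretching formula and the precision/recall
# definitions above; gamma and the counts are made-up examples.
import numpy as np

def contrast_stretch(f_in, gamma=1.0):
    f = f_in.astype(float)
    return 255.0 * ((f - f.min()) / (f.max() - f.min() + 1e-12)) ** gamma

def precision_recall(tp, fp, fn):
    return tp / (tp + fp), tp / (tp + fn)

p, r = precision_recall(tp=95, fp=5, fn=7)   # illustrative counts only
```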
13.3.5.1 Support vector machine
A support vector machine is a supervised machine learning algorithm that can be used for both classification and regression challenges; however, it is mostly used for classification problems. In this algorithm, we plot each data item as a point in n-dimensional space (where n is the number of features), with the value of each feature being the value of a particular coordinate. We then perform classification by finding the hyperplane that best differentiates the two classes. Suppose we have a vector w that is always normal to the hyperplane (perpendicular to the line in two dimensions). We can determine how far a sample is from the decision boundary by projecting the position vector of the sample onto w: as a quick refresher, the dot product of two vectors is proportional to the projection of the first vector onto the second. If it is a positive sample, we insist that the decision function (the dot product of w and the position vector of the given sample plus some constant) returns a value greater than or equal to 1. A small training sketch follows.
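A minimal SVM training sketch over extracted feature vectors is shown below; the 59-dimensional synthetic features, the RBF kernel and C are assumptions, while the three classes and 120 images per class follow the text.

```python
# SVM classification sketch over extracted feature vectors; the
# synthetic features and kernel choice are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X = np.random.rand(360, 59)                  # placeholder features, 120 per class
y = np.repeat(["vata", "pitta", "kapha"], 120)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```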
13.3.5.2 Model performance
In the classification process, we have quantified the images with global feature descriptors. There are three important contributing attributes to consider: color, shape and texture. These three can be used separately or combined to quantify images. The Haralick texture descriptor is used to recognize texture in the image, and the model has been trained to learn texture features from images.
Figure 13.18(a): Classification (squared error of LBP histograms for the kapha, pitta and vata classes).
Figure 13.18(b): Least mean square error of LBP histograms (kapha, pitta and vata).
We have used 120 images for each class: vata, pitta and kapha, with 80% of the data set used for training and 20% for testing. Accuracy can be measured by comparing actual test-set values with predicted values; the classification result is shown in Figure 13.18(a). We calculated accuracy as the ratio of correct predictions to the total number of samples and obtained a classification rate of 92.3%. The precision and recall are also measures of accuracy.
Figure 13.18(c): Classification and accuracy results (test image with per-class scores and the predicted nature).
Precision is the proportion of positive
identifications that are actually correct, and recall quantifies the number of positive-class observations made out of all positive samples in the data set. We measured the precision and recall of the predictive model as 95.11% and 93.29%, respectively. We use the least mean square error to reduce the noise of the image: given two images, the pixel-wise difference between them is taken, then the mean of these differences is computed, followed by the variance, and the least value obtained is compared with the expected value. If this least value is close to the expected value, it is taken under consideration; the procedure is repeated a number of times so that the result moves closer to the expected value. Classification accuracy is presented in Figure 13.18(c).
13.4 Conclusion
In this proposed work, we acquired the images and segmented the tongue and nontongue regions. The color mapping was done using the RGB color model. Primary image segmentation was carried out using the decorrelation stretch algorithm, binary thresholding and Otsu binarization; ROI segmentation and edge detection were then carried out. At the beginning of the project, it was decided to build a model and classify images into the tri-dosha using image processing models, and during the course of the project the necessary tools were developed in MATLAB. The least mean square error was used to assign the images to their respective classes, and we achieved classification into the three classes of vata, pitta and kapha.
Vilda Purutçuoğlu, Hajar Farnoudkia
14 Vine copula and artificial neural network models to analyze breast cancer data
Abstract: The relationship between the genes in a biological network is a challenging issue, and there have been many ways to infer the model of the network. This relationship can be described in terms of conditional dependence, which, for a normally distributed data set, is captured by the inverse of the covariance matrix. The copula Gaussian graphical model describes the relationship between the variables, mostly genes in biological networks, through regression models in which the other genes act as regressors. To estimate the model parameter, the precision matrix, the reversible jump Markov chain method is used because the dimension is not fixed. On the other hand, the copula method can find the most appropriate joint distribution based on the marginal distributions, and it can be applied to almost every kind of data set, including biological data. Furthermore, to overcome the complexity faced in higher dimensional data, the vine copula decomposes the model into bivariate cases that can be investigated independently of all other pairs; each connection is called an edge and can be defined according to a copula family. In this study, copula Gaussian graphical models and copula concepts are defined in detail, including their types and families as well as test statistics to select the best family among all possible families for the copula construction. Furthermore, the artificial neural network is defined briefly to show its application to biological data. We then apply the two proposed methods to two real data sets with 23 and 50 genes, respectively, to obtain the estimated relationship matrix and to compute their accuracy by two accuracy measures.
Keywords: biological networks, Gaussian graphical models, vine copula, artificial neural network
14.1 Introduction The relationship between the genes in every biological network is a challenging issue; so, there have been many approaches to understand the structure of the network. One of these special techniques is to mathematically model the system via the copula Gaussian graphical model (CGGM) and to infer the associated model parameters via the reversible jump Markov chain Monte Carlo method (RJMCMC). On
Vilda Purutçuoğlu, Department of Statistics, Middle East Technical University, Ankara, Turkey Hajar Farnoudkia, Department of Statistics and Computer Sciences, Başkent University, Ankara, Turkey https://doi.org/10.1515/9783110668322-014
the other hand, another alternative modeling approach can be the artificial neural network (ANN), whose inference is conducted under distinct optimization methods. RJMCMC is a well-known Bayesian approach that was first introduced by Green (1995), but its application to the construction of networks came somewhat later, being suggested by Dobra and Lenkoski (2011) for data modeled by CGGM. Basically, the copula method is a very convenient mathematical expression that can find the most appropriate joint distribution of high-dimensional random variables by considering their marginal distributions according to Sklar's theorem (1959). In this method, in order to overcome the complexity faced in higher dimensional data, the vine copula decomposes the model into bivariate cases, which provides a chance to see the joint behavior free of all other two-by-two combinations; here, each connection is called an edge and can be defined according to a copula family. On the other side, the ANN is a well-known machine learning approach that describes nonlinear and complex structure either in a regression model or as a classification problem, without using any distributional assumption; therefore, it has a wide range of applications, from economics and finance to the engineering sciences. Hereby, in this study, we consider the ANN as a nonlinear regression model, specifically in order to identify the structure of protein–protein interaction (PPI) networks, hence as a competitive model of CGGM. Accordingly, we use both CGGM and ANN in ovarian cancer data sets that comprise 23 and 50 genes, respectively. In the organization of the chapter, we initially explain CGGM, whose inference is conducted by RJMCMC. Then, the copula concepts are defined, such as its types and families as well as test statistics to select the best family among all possible families. Later, we present the main idea of ANN and its architecture. Finally, we apply both copula approaches and the ANN model in the description of two selected cancer data sets. We summarize our findings in the last part.
14.2 Gaussian graphical model
The conditional dependence between variables is one of the most important issues that has been investigated within the last 20 years through the Bayesian approach. The major challenge in this field is that, in some cases, the number of parameters (also called the dimension of the parameters) resulting in the true model is unknown. In order to resolve this uncertainty, the Bayesian approach is used, and Markov chain Monte Carlo (MCMC) methods are the most popular approaches for the associated parameter estimation. In general, MCMC methods are iterative algorithms that generate, at each iteration, a random variable from the posteriors of the parameters, yielding a Markov chain from which an estimator is computed (Mazet and Brie, 2006). There are several
ways to overcome the problem of model selection and to infer the associated model parameters simultaneously if these methods are implemented in the network form of the biological data modeled by CGGM. The RJMCMC method is one of the well-known approaches to tackle both challenges at the same time (Green, 1995). In each step of RJMCMC, the Metropolis–Hastings method is used under a changeable dimension of the system, in such a way that different dimensions are proposed in every iteration of Metropolis–Hastings in order to search for the optimal number of dimensions while estimating the associated parameters at the same time. Accordingly, in the Metropolis–Hastings algorithm, the move from $(k, \theta^k)$ to $(k', \theta^{k'})$ may not always be possible unless the new proposal space is preferred to the current position. Here, k and k′ refer to the number of edges, that is, the dimension of the precision matrix, and the proposed number of edges, respectively, and $\theta^k$ and $\theta^{k'}$ indicate the associated precision matrices under the k and proposed k′ edges, in order. Thereby, moving from the current k to the proposal k′, the acceptance probability is presented by the following equation:

$$R_{k,k'} = \frac{P\left(k', \theta^{k'} \mid y\right)\, \tilde{q}\left(k, \theta^{k} \mid k', \theta^{k'}\right)}{P\left(k, \theta^{k} \mid y\right)\, \tilde{q}\left(k', \theta^{k'} \mid k, \theta^{k}\right)} \tag{14.1}$$

where y indicates the current position of the normal random variables, P refers to the likelihood function and $\tilde{q}$ is the kernel density. If the condition $\dim(\theta^{k'}, x) = \dim(\theta^{k}, y)$, which is called the "dimension matching" condition, holds, the acceptance probability can be written equivalently as
$$R_{k,k'} = \frac{P\left(k', \theta^{k'} \mid x\right)}{P\left(k, \theta^{k} \mid y\right)} \times \frac{\tilde{q}_1\left(k \mid k'\right)\, \tilde{q}_2(x)}{\tilde{q}_1\left(k' \mid k\right)\, \tilde{q}_2(y)} \times \left|\frac{\partial\left(\theta^{k'}, x\right)}{\partial\left(\theta^{k}, y\right)}\right| \tag{14.2}$$

where x and y are variables from the proposal distribution $\tilde{q}_1$; the expression comes from the multiplication of the likelihood ratio, the ratio of the proposal distributions $\tilde{q}_2(\cdot)$ and the Jacobian term, according to the Metropolis–Hastings algorithm. In eq. (14.2), $\tilde{q}_1\left(k' \mid k\right)$ refers to the kernel for the given random variable, and $\left|\partial\left(\theta^{k'}, x\right)/\partial\left(\theta^{k}, y\right)\right|$ represents the determinant of the Jacobian matrix. In the calculation, we suppose a data matrix with p variables and n samples (n × p), and we aim to obtain the relationship between those variables. In this situation, which is common in social surveys and biological applications, each variable is shown by a node in the graph and the conditional dependence between two nodes is represented by an undirected edge. Now, assume that the vector Y follows a p-dimensional multivariate normal distribution, $N_p(0, K^{-1})$, where K is the inverse of the covariance matrix. Then, for a set with n samples, the likelihood function can be written proportionally as
Vilda Purutçuoğlu, Hajar Farnoudkia
1:n n 1 T 2 p Y jK ∝ jK j exp − tr K U 2
(14:3)
where |.| shows the determinant of the given matrix and U is the trace of the Y ′Y matrix which equals the summation of the Y values. Therefore, a defined graphical model with V has E number of edges, also denoted as (V, E), and p number of nodes from a p-dimensional multivariate normal distribution with a zero mean vector and K − 1 variance–covariance matrix, that is, Np ð0, K − 1 Þ, is called the Gaussian graphical model (GGM). V = 1, 2, . . . , p and E can have at most
n p
elements that
show a full matrix. If the data matrix does not follow a Gaussian distribution, the copula can be applied to combine the data in a way that their joint distribution is Gaussian with the same covariance matrix according to Sklar’s theorem. For binary and ordinal categorical data, Muthen (1984) suggested a continuous latent variable Z with some thresholds in a way that the latent variables are from a Gaussian distribution with the same structure that they had before this transformation. In this way, any p-dimensional network under normal or nonnormal data matrix can be described via GGM by adding latent states and converting categorical/binary data to the continuous range.
14.2.1 Reversible jump Markov chain Monte Carlo method To have a robust estimator for the precision matrix in this study, we use a strategy based on the Bayesian approach. In this method, there is a linear combination of two estimators: estimated parameter based on prior distribution and the maximum likelihood estimators (MLEs). To have a better understanding of the proposed prior distribution for the precision matrix as the inverse of the covariance matrix, it is enough to point the relationship between 1=σ2 and its distribution for one parameter.model. That is, for normally distributed data, that is, Y⁓N μ, σ2 , ðn − 1Þs2 σ2 ⁓χ2ðn − 1Þ where n is the number of samples and s2 is the sample covariance. As a result, it is found that the inverse of σ2 , which is the precision matrix in the multivariate version, has some chi-square distribution. Moreover, G-Wishart is the generalized version of multivariate χ2 that has certain beneficial properties in inference. The G-Wishart distribution with parameters D and δ knowing the graph (G) is in the form of 1 1 T ðδ − 2Þ=2 exp − tr Θ D det Θ pðΘjGÞ = IG ðδ, DÞ 2 where IG ðδ, DÞ is a normalization constant. It is not straightforward to obtain the normalization constant when the graph is not a full graph, that is, a graph composed of both zero and nonzero elements. In this case, a distinct MCMC method
14 Vine copula and artificial neural network models to analyze breast cancer data
291
should be used to estimate it. One of the useful properties of the G-Wishart distribution is its conjugacy with the normal distribution. That is, if Θ⁓Wishartðδ, DÞ and y = ðy1 , . . . , yn Þ⁓MVN ð0, 1Þ, then Θjy⁓Wishartðn + δ, D + U Þ, where U = tr yT y . Furthermore, the Cholesky decomposition can be applied to the matrix to decompose it into two normally distributed matrices utilizing the relationship between normal and chi-square distribution. Therefore, Θ = φT φ, where φ is a lower triangle matrix in which each zero elementsimplies a zero element in Θ for the corresponding variables. As the conditional dependence is the issue in this study that can be realized by an adjacency matrix that consists of one and zero elements only, the goal can change the process in order to see the zero and nonzero elements in the final or estimated graph. Hence, it is easier to work with a decomposed version of the precision matrix that is normally distributed than the G-Wishart distribution with some difficulties like its normalization constant and signed elements. RJMCMC is a one of the best method based on the Bayesian approach that is adaptable for changing the dimension. The estimation procedure by RJMCMC is conducted by three steps, namely, the resampling of the latent data, the resampling of the precision and the resampling of the graph. In the computation, these three steps are repeated iteratively until the convergence of each entry of the estimated precision matrix K is satisfied.
14.2.1.1 Resample the latent data In the first step of the algorithm, we define a latent variable Z instead of the original data Y if Y is nonnormal. Hereby, I is presented as a ðn × pÞ-dimensional matrix, where each variable is represented in a distinct column in a data set which is mainly gene in the biological systems. Then, the minimum and the maximum entries of this matrix are computed per column and are called L and U, respectively, as two vectors with p-dimensions. Later, by using the K matrix and the L matrix as well as the U vector, we simulate another Z which are the values from the truncated normal distribution in the range of Li and Ui to not to be so far from the real values as follows: Zi jZi ni⁓N μi , σ2i (14:4) where μi = −
P y2bdðiÞ
Ki, x Ki, i
zx, j and bdðiÞ = fy 2 ð1, . . . , pÞ: ði, jÞ 2 Eg
when E = fði, jÞjθi, y ≠0, i≠yg is the set of all available edges between nodes and P Ki, x μi = − K zx, j y2bdðiÞ
i, i
which comes from the regression model defined as the base of GGM. Furthermore, $\sigma_i^2 = 1/\theta_{i,i}$ comes from the point that K is the inverse of the covariance matrix, and $\theta_{i,i} = 1/k_{i,i}$
as well. In the second step, these $z_{i,j}$'s will be used in the update of the precision matrix.

14.2.1.2 Resample the precision matrix
In this part, the K matrix is decomposed by the Cholesky decomposition to get a positive definite matrix. By assuming the G-Wishart, a generalized version of the inverse chi-square distribution, as the prior distribution for the precision matrix K, the Cholesky decomposition decomposes the matrix into two multivariate normally distributed matrices. This decomposition, basically, separates the complete matrix into a lower triangular matrix and its transpose in such a way that $K = \varphi^T\varphi$. Here, φ refers to a triangular matrix in which a zero shows that there is no relationship between the corresponding two elements, because any zero element in φ implies a zero element in K. In the calculation of the nonzero diagonal elements of K, a Metropolis–Hastings update of φ is done by sampling a value γ from a normal distribution truncated below zero, with mean $\varphi_{i,i}$ and variance $\sigma_p^2$. Later, γ replaces the related diagonal element of φ, and φ is transformed to φ′ with probability $\min(R_p, 1)$. Here,

$$R_p = R'_p \left(\frac{\gamma}{\varphi_{i,i}}\right)^{\delta + n + nb(i) - 1} \frac{\Phi\left(\varphi_{i,i}/\sigma_p\right)}{\Phi\left(\gamma/\sigma_p\right)} \tag{14.5}$$

In this expression,

$$R'_p = \exp\left\{-\frac{1}{2}\operatorname{tr}\left[\left(K' - K\right)\left(D + Z^T Z\right)\right]\right\}$$

Moreover, $K' = \varphi'^T\varphi'$ and $K = \varphi^T\varphi$. On the other side, each nondiagonal element of φ is replaced by a value γ sampled from $N\left(\varphi_{i,j}, \sigma_p^2\right)$; in these cases, φ is transformed to φ′ with probability $\min\left(1, R'_p\right)$. In our calculation, we set $\sigma_p^2 = 0.1$ for simplicity. Briefly, the zero values in the matrix are replaced, element by element, by new nonzero normally distributed values with some probability.

14.2.1.3 Resample the graph
In the third part of the algorithm, only one element $\varphi_{i,j}$ of the Cholesky matrix is selected randomly. If there is no edge between $X_i$ and $X_j$, it will be replaced by a value from $N\left(\varphi_{i,j}, \sigma_g^2\right)$ in φ with probability $\min\left(1, R'_p\right)$, where

$$R'_p = \sigma_g \sqrt{2\pi\varphi_{i,i}}\; \frac{I_G(\delta, D)}{I_{G'}(\delta, D)} \exp\left\{\frac{\left(\varphi'_{i,j} - \varphi_{i,j}\right)^2}{2\sigma_g^2} - \frac{1}{2}\operatorname{tr}\left[\left(K' - K\right)\left(D + Z^T Z\right)\right]\right\} \tag{14.6}$$
In eq. (14.6), G′ is a graph in which all elements coincide with G except $G_{i,j}$, which represents an edge between the related nodes. In our calculation, we set $\sigma_g^2 = 0.1$ too. If there is an edge between those nodes, then the value is replaced by zero in φ with probability $\min\left(1, R'_p\right)$, as follows:

$$R'_p = \left(\sigma_g \sqrt{2\pi\varphi_{i,i}}\right)^{-1} \frac{I_G(\delta, D)}{I_{G'}(\delta, D)} \exp\left\{\frac{\left(\varphi'_{i,j} - \varphi_{i,j}\right)^2}{2\sigma_g^2} - \frac{1}{2}\operatorname{tr}\left[\left(K' - K\right)\left(D + Z^T Z\right)\right]\right\} \tag{14.7}$$

In eq. (14.7), G′ indicates a graph in which all elements match with G except $G_{i,j}$, whose edge between the related nodes can be eliminated.
14.3 Vine copula
In the literature, the joint distribution function can be captured by the copula method, as it uses only the marginal distribution of each variable. Hereby, we can fit an appropriate copula to construct the joint distribution of all variables in the best form and obtain a model for the whole network via Sklar's theorem (Brechmann et al., 2013). This theorem says that there is always a multivariate copula function that expresses the joint distribution function through the marginal distributions of the variables as $F(x) = C(F_1(x_1), F_2(x_2), \ldots, F_d(x_d))$, where $x = (x_1, x_2, \ldots, x_d)$ and F is the d-dimensional distribution of the random variable x. Accordingly, the multivariate copula formula is

$$C(u_1, u_2, \ldots, u_d) = F\left(F_1^{-1}(u_1), F_2^{-1}(u_2), \ldots, F_d^{-1}(u_d)\right)$$

where d is the number of variables. In this way, we can separate the joint density into its marginals, and the dependence structure between the marginals can be described by a separate function; as a result, the model parameters can be estimated by the likelihood function built from the copula. On the other hand, in most cases the variables are not from the same distribution and it is still not easy to form the whole model from all of the variables simultaneously. A specific type of copula, called the vine copula, can do this by dealing with pairs of variables, without any assumption and independently of the other variables, by writing the joint distribution in terms of pair copulas using Sklar's theorem. Hereby, in the description of the vine copula, we can present the application of Sklar's theorem as follows. Let us take the simple form of the joint distribution for d = 3. The full joint probability function for d = 3 can be written as $f(x_1, x_2, x_3) = f_1(x_1)\, f(x_2 \mid x_1)\, f(x_3 \mid x_1, x_2)$. From this expression, we can present $f(x_2 \mid x_1)$, by the definition of the conditional probability function, as $f(x_1, x_2)/f_1(x_1)$. Thus, by Sklar's theorem, $f(x_1, x_2) = c_{1,2}(F_1(x_1), F_2(x_2))\, f_1(x_1)\, f_2(x_2)$. Accordingly, $f(x_2 \mid x_1) = c_{1,2}(F_1(x_1), F_2(x_2))\, f_2(x_2)$ and $f(x_3 \mid x_1, x_2) = c_{2,3|1}(F(x_2 \mid x_1), F(x_3 \mid x_1))\, c_{1,3}(F_1(x_1), F_3(x_3))\, f_3(x_3)$. Finally, we can define the joint function as
$$f(x_1, x_2, x_3) = c_{1,2}(F_1(x_1), F_2(x_2)) \times c_{2,3|1}(F(x_2 \mid x_1), F(x_3 \mid x_1)) \times c_{1,3}(F_1(x_1), F_3(x_3)) \times f_1(x_1)\, f_2(x_2)\, f_3(x_3) \tag{14.8}$$
Hence, we can write the three-dimensional joint distribution in terms of c₁,₂, c₁,₃ and c₂,₃|₁, and we can denote this structure briefly as {1−2, 1−3, 2−3|1}. Moreover, there are other ways to write it in terms of pair copulas, such as {1−2, 2−3, 1−3|2} or {3−1, 3−2, 1−2|3}. Accordingly, there is a general format of the vine copula called the regular vine copula (RVine). The RVine copula includes the canonical vine copula (CVine) and the drawable vine copula (DVine), which can also be defined graphically, as in Figure 14.1.
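As a quick numerical sanity check of eq. (14.8), the following sketch evaluates a three-dimensional density both directly and through its pair-copula decomposition, using Gaussian pair copulas and standard normal margins; all correlation values below are illustrative, not taken from the chapter's data.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def c_gauss(u, v, rho):
    """Density of the bivariate Gaussian copula."""
    x, y = norm.ppf(u), norm.ppf(v)
    return np.exp(-(rho**2 * (x**2 + y**2) - 2 * rho * x * y)
                  / (2 * (1 - rho**2))) / np.sqrt(1 - rho**2)

def h_gauss(u, v, rho):
    """Conditional distribution F(u | v) for the Gaussian pair copula."""
    return norm.cdf((norm.ppf(u) - rho * norm.ppf(v)) / np.sqrt(1 - rho**2))

x = np.array([0.3, -0.5, 1.1])       # evaluation point, standard normal margins
r12, r13, r23_1 = 0.6, 0.4, 0.5      # r23_1: partial correlation of (2,3) given 1
r23 = r23_1 * np.sqrt((1 - r12**2) * (1 - r13**2)) + r12 * r13

u = norm.cdf(x)
vine = (c_gauss(u[0], u[1], r12)                                  # c_{1,2}
        * c_gauss(h_gauss(u[1], u[0], r12),
                  h_gauss(u[2], u[0], r13), r23_1)                # c_{2,3|1}
        * c_gauss(u[0], u[2], r13)                                # c_{1,3}
        * np.prod(norm.pdf(x)))                                   # f1 f2 f3

R = np.array([[1, r12, r13], [r12, 1, r23], [r13, r23, 1]])
direct = multivariate_normal(np.zeros(3), R).pdf(x)
print(vine, direct)                  # the two evaluations agree
```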
Figure 14.1: CVine (left side, with order {2,4,1,3}) and DVine (right side, with order {2,4,3,1}) structures for the four-variable case.
In this representation, on the other hand, it is seen that each vine is composed of (d − 1) subnetworks called trees. The ith tree is formed of (d − i + 1) nodes and, therefore, (d − i) edges. The edges in the first tree are not conditioned, whereas in the second tree the edges are conditioned on one variable and, in the ith tree, the edges are conditioned on (i − 1) variables. Furthermore, the CVine trees are in the form of a star, while the DVine trees are in the form of a path. Here, expert knowledge or some prior information can be needed to determine which model is appropriate for the data, that is, whether the CVine or the DVine copula structure is suitable. If neither of them matches the data, RVine can give the best vine copula model, but its trees can differ from both of the defined models.
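The tree structure itself is easy to enumerate programmatically. The short sketch below lists the conditioned and conditioning sets of each CVine and DVine tree for d = 4, using the natural order 1, …, 4 rather than the orders shown in Figure 14.1.

```python
def cvine_trees(d):
    """Tree i (i = 1..d-1) joins root i to each later variable, given 1..i-1."""
    return [[(i, j, list(range(1, i))) for j in range(i + 1, d + 1)]
            for i in range(1, d)]

def dvine_trees(d):
    """Tree i joins j and j+i along the path, given the variables in between."""
    return [[(j, j + i, list(range(j + 1, j + i))) for j in range(1, d - i + 1)]
            for i in range(1, d)]

for name, trees in (("CVine", cvine_trees(4)), ("DVine", dvine_trees(4))):
    for t, edges in enumerate(trees, start=1):
        labels = [f"{a},{b}|{','.join(map(str, cond))}" if cond else f"{a},{b}"
                  for a, b, cond in edges]
        print(f"{name} tree {t}: {labels}")
```

Each tree i indeed yields d − i edges conditioned on i − 1 variables: the CVine output is a star around the running root, the DVine output a path.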
14 Vine copula and artificial neural network models to analyze breast cancer data
295
As a result, to complete the model, a suitable root should be chosen for each tree according to the data obtained after conditioning on the other variables, as defined in the following algorithm. It should be noted that the algorithm is given for CVine; for the other vine models, the algorithm is similar, with some changes in the determination of the roots:
1) Compute the empirical distribution function based on the ordered data, which makes the observations uniformly distributed.
2) Compute Kendall's τ for all pairs and determine the first root as the variable with the maximum sum of τ values.
3) Use the Vuong–Clarke test to select the best copula function separately for each pair of variables containing the first root. Then, estimate the remaining parameter(s) by the MLE approach.
4) Update the data by conditioning on the first selected variable via the h function

h(x|υ, θ) := ∂C_{x,υ_j|υ_{−j}}(F(x|υ_{−j}), F(υ_j|υ_{−j}) | θ) / ∂F(υ_j|υ_{−j}),

where θ is the parameter(s) estimated in the previous step, υ is the conditioning vector, υ_j is an arbitrary component of υ and υ_{−j} denotes υ without that component.
5) Go to the second step and continue until the (d − 1)th root is found.

Now the order of the nodes, the structure (RVine, CVine or DVine) and the families for each edge are defined. Then, for every edge, the pair copula is selected independently of the other edges from one of the two main copula families, namely, the elliptical and Archimedean families. The elliptical family includes the Gaussian and the Student-t copula, which are symmetric without and with tail dependence, respectively. On the other hand, the Archimedean family is more flexible, offering one-parameter and two-parameter families with or without (one- or two-sided) tail dependence. Tables 14.1 and 14.2 show some properties of these copula types, such as their functions, parameter ranges, the relationship between the parameter and Kendall's τ, and tail dependence.

Table 14.1: Properties of bivariate elliptical copula families.

Family       Parameter range        Kendall's τ        Tail dependence
Gaussian     ρ ∈ (−1, 1)            (2/π) arcsin(ρ)    0
Student-t    ρ ∈ (−1, 1), ν > 2     (2/π) arcsin(ρ)    2T_{ν+1}(−√(ν + 1) √((1 − ρ)/(1 + ρ)))
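As an illustration of steps 1, 2 and 4 of the algorithm above, the following sketch rank-transforms a toy sample, selects the first root by the Kendall's τ criterion and conditions the remaining variables on it; the Gaussian h function and the inversion ρ = sin(πτ/2) from Table 14.1 stand in for the per-pair family choice of step 3.

```python
import numpy as np
from scipy.stats import norm, kendalltau, rankdata

def pseudo_obs(X):
    """Step 1: empirical distribution transform to (0, 1) pseudo-observations."""
    return np.apply_along_axis(rankdata, 0, X) / (X.shape[0] + 1)

def h_gauss(u, v, rho):
    """Gaussian h function: conditional distribution F(u | v)."""
    return norm.cdf((norm.ppf(u) - rho * norm.ppf(v)) / np.sqrt(1 - rho**2))

rng = np.random.default_rng(1)
cov = [[1.0, 0.7, 0.3], [0.7, 1.0, 0.2], [0.3, 0.2, 1.0]]
U = pseudo_obs(rng.multivariate_normal(np.zeros(3), cov, size=500))

# step 2: the first root maximizes the column sum of |Kendall's tau|
p = U.shape[1]
tau = np.array([[abs(kendalltau(U[:, a], U[:, b])[0]) if a != b else 0.0
                 for b in range(p)] for a in range(p)])
root = int(np.argmax(tau.sum(axis=1)))

# step 4: condition the other columns on the root via the h function;
# rho = sin(pi * tau / 2) inverts the Table 14.1 relation for the Gaussian pair
for j in range(p):
    if j != root:
        rho = np.sin(np.pi / 2 * kendalltau(U[:, j], U[:, root])[0])
        U[:, j] = h_gauss(U[:, j], U[:, root], rho)
# the root column is then set aside and steps 2-4 are repeated (step 5)
```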
Thus, the Gaussian copula can be represented as C^{Gauss}(u₁, u₂) = Φ_ρ(Φ⁻¹(u₁), Φ⁻¹(u₂)), where Φ_ρ denotes the bivariate standard normal distribution function with correlation coefficient ρ and Φ⁻¹ is the univariate standard normal quantile function. For the t-copula (Student's-t copula), the associated equation has the form C^t(u₁, u₂) = t_{ρ,υ}(t_υ⁻¹(u₁), t_υ⁻¹(u₂)), where t_{ρ,υ} describes the bivariate Student's-t distribution with ρ and υ as the correlation coefficient and the degrees of freedom, respectively.
Table 14.2: Properties of bivariate Archimedean copula families.

Name              Generator function                            Parameter range      Tail dependence (lower, upper)
Clayton           (1/θ)(t^{−θ} − 1)                             θ > 0                (2^{−1/θ}, 0)
Gumbel            (−log t)^θ                                    θ ≥ 1                (0, 2 − 2^{1/θ})
Frank             −log[(e^{−θt} − 1)/(e^{−θ} − 1)]              θ ∈ ℝ\{0}            (0, 0)
Joe               −log[1 − (1 − t)^θ]                           θ > 1                (0, 2 − 2^{1/θ})
Clayton–Gumbel    (t^{−θ} − 1)^δ                                θ > 0, δ ≥ 1         (2^{−1/(θδ)}, 2 − 2^{1/δ})
Joe–Gumbel        (−log[1 − (1 − t)^θ])^δ                       θ ≥ 1, δ ≥ 1         (0, 2 − 2^{1/(θδ)})
Joe–Clayton       (1 − (1 − t)^θ)^{−δ} − 1                      θ ≥ 1, δ > 0         (2^{−1/δ}, 2 − 2^{1/θ})
Joe–Frank         −log[(1 − (1 − δt)^θ)/(1 − (1 − δ)^θ)]        θ ≥ 1, δ ∈ (0, 1]    (0, 0)
From Table 14.1, it is seen that there is a one-to-one correspondence between the parameter ρ and Kendall's τ, which can help the parameter estimation in the elliptical family. However, in the copula construction via all of the families (with one or more parameters), the parameters are estimated by the MLE method while the joint distribution of the data is modeled by the copula. In the Archimedean family, the explicit copula formulas can be complicated, specifically for the two-parameter families. Hereby, instead of the copula formulas, Table 14.2 lists the generator functions; the copula is recovered from its generator as C(u₁, u₂) = φ^{[−1]}(φ(u₁) + φ(u₂)), where φ is the generator function and φ^{[−1]} is the pseudoinverse, that is, equal to the inverse of the generator function for nonnegative values and zero otherwise. As shown in Table 14.2, some families have no tail dependence, while others have one-sided or two-sided tail dependence. Accordingly, the two-parameter families can accommodate asymmetry and constraints on the tail dependence, resulting in more flexibility than the one-parameter families.
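The correspondence in Table 14.1 is easy to exploit numerically: ρ can be recovered from an empirical Kendall's τ as ρ = sin(πτ/2), and the Student-t tail-dependence coefficient can then be evaluated directly. The τ and ν values in this sketch are illustrative.

```python
import numpy as np
from scipy.stats import t as student_t

def rho_from_tau(tau):
    """Invert Kendall's tau for the elliptical families of Table 14.1."""
    return np.sin(np.pi * tau / 2)

def t_tail_dependence(rho, nu):
    """Upper/lower tail-dependence coefficient of the Student-t copula."""
    return 2 * student_t.cdf(-np.sqrt((nu + 1) * (1 - rho) / (1 + rho)), df=nu + 1)

rho = rho_from_tau(0.5)                      # tau = 0.5 gives rho ~ 0.707
for nu in (3, 10, 30):
    print(nu, t_tail_dependence(rho, nu))    # shrinks toward the Gaussian value 0
```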
14.4 Artificial neural network

The ANN is a nonparametric model that can successfully describe high correlation and nonlinear relations between variables via an activation function. In the calculation, the information flows through certain interconnected units, namely, the input layer, the hidden layer and the output layer. The input layer is where the raw data are first processed before being transferred to the hidden layer.
Figure 14.2: An example of a simple artificial neural network.
The hidden layer passes the information to the output layer and, finally, the output layer is where the results are obtained. In all these calculations, the network learns from the data, that is, it is adaptive: by means of the known outcomes, it optimizes the unknown coefficients, called weights. In Figure 14.2, we illustrate a simple ANN architecture as an example. In the calculation of the ANN weights, different learning rules can be applied. The least mean squares, gradient descent, Newton's rule and conjugate gradient descent algorithms are some of the commonly used learning rules. These learning rules can be combined with the backpropagation method, in which the error at the output layer is propagated backward: the total error at the output is apportioned to each unit in the hidden layer in proportion to its contribution, and the weights of each connection between the hidden and output layers are then optimized with respect to the errors in each hidden layer unit. This iterative process is called the backpropagation step (Bishop, 1995; Kosko, 1992). Hereby, in this study, we adapt the idea of ANN to the GGM structure and implement ANN as an alternative nonparametric approach to constructing the biological network, in the sense that the response node in the output layer is taken as a gene and the input nodes are taken as the predictor genes of that response, similar to a regression model. Accordingly, the regression coefficients are associated with the weighted connections, and these coefficients can be converted to binary values, zero and one, so that the estimated ANN regression model can be represented as a graphical model in the end. On the other side, since the biological system is composed of many genes, we perform the above ANN construction for each gene sequentially: each variable (gene) is taken as the response or dependent variable, all other nodes are regressed on that response, and this computation is repeated for each node so that we obtain p ANN regression models. Finally, these p equations are converted to binary values. If a weighted connection, that is, a regression coefficient, is less than 0.8 in absolute value, we set it to zero, meaning that there is no connection between the node on the predictor side and the response node. Otherwise, we set it to one and accept that the connection between the same pair of nodes exists. By this transformation, we can describe an estimated representation of the system via an undirected graph. Here,
we select 0.8 as the threshold value for deciding the presence of an edge between two nodes, although different threshold values could be used for the above transformation. On the other hand, it is known that there is a proportional relation between regression coefficients and correlation coefficients in regression models. Hence, a high absolute regression coefficient can be taken as an indication of a strong relationship between the associated terms, while a low absolute value can be ignored: since biological networks are typically sparse in their edges, most regression coefficients correspond to low or negligible correlations between components.
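To make the recipe concrete, the following is a minimal sketch of the per-gene regression and 0.8-threshold step; a plain least-squares read-out stands in here for the fitted ANN weights, so it illustrates the graph-construction logic rather than the network training itself.

```python
import numpy as np

def network_from_regressions(X, threshold=0.8):
    """Regress each gene on all others; threshold |coefficients| into edges."""
    n, p = X.shape
    X = (X - X.mean(axis=0)) / X.std(axis=0)   # scale so coefficients are comparable
    A = np.zeros((p, p), dtype=int)
    for j in range(p):                          # gene j as the response node
        others = [k for k in range(p) if k != j]
        coef, *_ = np.linalg.lstsq(X[:, others], X[:, j], rcond=None)
        A[j, others] = (np.abs(coef) >= threshold).astype(int)
    return np.maximum(A, A.T)                   # bind the p models into one graph

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 10))                   # stand-in expression matrix
print(network_from_regressions(X))
```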
14.5 Application

In this section, to show the accuracy of the vine copula and ANN, we use two real data sets. Both data sets come from the categorized and normalized data of Mok et al. (2009) with accession number GSE18520 in the GEO database; the data are freely available. The original data set is composed of 47,000 genes of papillary serous ovarian carcinoma from 63 arrays, and 1,191 of them indicate significance between the control group (composed of 10 arrays) and the treatment group (composed of 53 arrays) in a one-channel microarray analysis normalized by the RMA method. In these data, all the selected genes are independently identified in ovarian cancer specimens and/or cell lines by Western, Northern and immunohistochemical techniques. Furthermore, all normal ovarian samples are collected from postmenopausal women, and specimens are gathered by following the IRB (Institutional Review Board)-approved protocols. Hence, in our analyses, we generate two subdata sets from the differentially expressed genes in the underlying data, in such a way that the first subdata set represents a moderate PPI system and the latter can be an example of a large PPI system. So, the first data set, called data 1, and the second data set, called data 2, are composed of 23 and 50 genes, respectively. In the following parts, we list the names of the genes in each data set and represent their quasitrue network structures obtained from the STRING database. In the construction of the true networks, we take the similarity of co-expression and co-occurrence as the evidence of edges between genes in STRING. Then, once the estimated networks are obtained from the vine copula and ANN modeling approaches, we compute the following accuracy measures to compare the methods:

F1 score = 2TP / (2TP + FP + FN) ∈ [0, 1],

where TP and FP denote the numbers of true positives and false positives, in order, and, similarly, TN and FN represent the numbers of true negatives and false negatives, respectively. Moreover, the Matthews correlation coefficient is

MCC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN)) ∈ [−1, 1].
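For reference, the two measures can be written directly as code; the example call uses the confusion counts reported later, in Section 14.5.2, for the RVine fit to data 2 (TP = 2, FP = 1, FN = 6, TN = 1,216).

```python
import math

def f1_score(tp, fp, fn):
    return 2 * tp / (2 * tp + fp + fn)

def mcc(tp, fp, fn, tn):
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom

# confusion counts for the RVine fit to data 2 (Section 14.5.2)
print(f1_score(2, 1, 6), mcc(2, 1, 6, 1216))   # ~0.36 and ~0.41
```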
14.5.1 Data 1

As explained earlier, this data set is composed of 23 genes, listed as “S100A8,” “SOX9,” “UBE2C,” “RGS1,” “C7,” “MTSS1,” “PAPSS2,” “CAV1,” “DAPK1,” “GLS,” “GATM,” “ALDH1A3,” “CDK1,” “ST3GAL5,” “PDLIM5,” “BAMBI,” “CAV2,” “PSD3,” “EZH2,” “MAD2L1,” “TBC1D4,” “NELL2” and “PDGFRA,” and these genes indicate a very sparse quasitrue network structure from the result of the STRING database, as shown in Figure 14.3.
Figure 14.3: The quasitrue network structure of 23 ovarian cancer genes in data 1.
As shown in Figure 14.3, the quasitrue graph has eight significant edges. Initially, we examine CVine and DVine to recover these edges, but we observe that there are many extra positive edges. Indeed, as is clear from the quasitrue graph, the true network differs from the basic CVine and DVine architectures shown in Figure 14.1. Accordingly, we implement the RVine copula, since it is the general format of the vine copula. Through RVine, we detect four true positives (TP), two false positives (FP), four false negatives (FN) and 243 true negatives (TN). Therefore, the F1 score and MCC are found to be 0.57 and 0.59, respectively. The corresponding nodes of the realized edges, together with the selected copula
families, are shown in Table 14.3. On the other hand, when we infer the system via the ANN model, we compute TP = 2, FP = 12, FN = 6 and TN = 233, resulting in an F1 score and MCC of 0.13 and 0.15, in order. Hereby, we can conclude that the copula approach gives a better fit than ANN in the estimation of this system.
Table 14.3: The pairs of genes indicating the four true positive edges, their best appropriate pair copulas and the associated Kendall's τ values for data 1.

Pair of genes        Copula family                 Kendall's τ
MAD2L1 and CDK1      180° rotated Joe–Clayton      .
EZH2 and UBE2C       Gumbel                        .
CDK1 and UBE2C       t                             .
TBC1D4 and CAV       Joe                           .
14.5.2 Data 2

This data set consists of 50 genes, listed as follows: “NKX3-1,” “SERPINB9,” “CHRDL1,” “CD24,” “ZNF330,” “VLDLR,” “RIPOR2,” “TSPAN5,” “SLIT2,” “SLC16A1,” “TGFB2,” “GATA6,” “IDH2,” “TPX2,” “PMP22,” “PLA2G4A,” “SF1,” “TNFAIP8,” “FHL1,” “TPD52L1,” “PTGER3,” “SCTR,” “DEFB1,” “CLEC4M,” “CDK1,” “WSB1,” “TFPI,” “LGALS8,” “DAB2,” “DHRS7,” “ELF3,” “PLPP1,” “TPM1,” “IL6ST,” “TCEAL2,” “CASP1,” “SULT1C2,” “DPP4,” “HSPA2,” “LRIG1,” “LAMB1,” “MDFIC,” “HBB,” “TRO,” “DCN,” “IGFBP5,” “CD44,” “C1R,” “ADGRG1” and “SPTBN1.” Similar to the previous analysis, we initially construct the quasitrue structure of the network from the STRING database. Then, we perform the two models and compute the accuracy measures. According to the quasitrue graph represented in Figure 14.4, we can detect eight significant edges. In the application of the CVine and DVine copula models, we observe that the number of FP edges is large, since these models catch many more edges than the quasitrue network contains. On the other hand, it is clear from the quasitrue graph that the structure differs from the CVine and DVine copula structures. Thus, as for data 1, we apply the RVine copula approach due to its flexibility. As a result, from the findings of RVine, we compute TP = 2, FP = 1, FN = 6 and TN = 1,216, and thus calculate F1 score = 0.36 and MCC = 0.41. The corresponding nodes of the realized edges, together with the selected copula families, are shown in Table 14.4. On the other hand, when we repeat the analyses under ANN, we obtain TP = 2, FP = 33, FN = 6 and TN = 1,184, and compute F1 score = 0.09 and MCC = 0.11. From these analyses, it is seen that the RVine approach is more successful than ANN in the construction of this system, similar to the analyses with data 1.
Figure 14.4: The quasitrue network structure of 50 ovarian cancer genes in data 2.
Table 14.4: The pairs of genes indicating the two true positive edges, their best appropriate pair copulas and the associated Kendall's τ values for data 2.

Pair of genes      Copula family    Kendall's τ
TPX2 and CDK1      Joe–Clayton      .
PMP22 and LAMB1    Joe              .
14.6 Conclusions

In this chapter, we have described the structure of two models, namely, the copula graphical model and the ANN model. For the copula model, we have specifically explained the vine copula approach. We have explained the DVine and CVine copulas
with their architectures and then presented the RVine copula, which is the generalization of both vine copulas. On the other hand, in modeling with the ANN, we have adapted its original application to the construction of the biological system and performed it as a regression model. We have applied this technique for each node, that is, gene, in the system separately, and then bound these regressions together in an undirected graphical model. From this application, we have observed that the neural network model adds edges more readily than the copula approach, whereas both can capture the true links with similar performance. However, as the copula approach is based on the GGM, which is specifically designed for sparse network structures, modeling via the copula is more conservative in the estimation of edges. In contrast, the neural network is designed to capture nonlinear relationships under all conditions; hence, the model has a greater tendency to link nodes than the copula model. Therefore, we have seen higher accuracy under the copula model than under the neural network approach. Additionally, we think that the performance of the neural network is highly sensitive to the scaling value, the threshold selection in the construction of the binary graph and the percentage of the training set used in the inference. In this chapter, we have not tuned these selections particularly for sparse systems. Hence, in our future studies, we will consider controlling these criteria, resulting in stricter control over putting edges between nodes and an increase in the accuracy of the estimates under sparse biological networks. Furthermore, we think that both suggested approaches can be promising alternatives for better understanding systemic diseases such as cancer and heart disease, since they can deal with complex structures under highly dependent observations. Thus, we believe that the implementation of these models in such systems can open new avenues for researchers.
References

Atay-Kayis, A. and Massam, H. (2005). A Monte Carlo method for computing the marginal likelihood in nondecomposable Gaussian graphical models, Biometrika, 92(2), 317–335.
Bishop, C.M. (1995). Neural Networks for Pattern Recognition, Oxford University Press.
Brechmann, E. and Schepsmeier, U. (2013). CDVine: Modeling dependence with C- and D-vine copulas in R, Journal of Statistical Software, 52(3), 1–27.
Czado, C., Brechmann, E.C. and Gruber, L. (2013). Selection of vine copulas, Copulae in Mathematical and Quantitative Finance, Springer, Berlin, Heidelberg, 17–37.
Dobra, A. and Lenkoski, A. (2011). Copula Gaussian graphical models and their application to modeling functional disability data, The Annals of Applied Statistics, 5, 969–993.
Fan, L. and Yang, Y. (2005). Using modified Lasso regression to learn large undirected graphs in a probabilistic framework, Proceedings of the National Conference on Artificial Intelligence, 801–806.
Genest, C. and Favre, A.C. (2007). Everything you always wanted to know about copula modeling but were afraid to ask, Journal of Hydrologic Engineering, 12(4), 347–368.
Green, P.J. (1995). Reversible jump Markov chain Monte Carlo computation and Bayesian model determination, Biometrika, 82(4), 711–732.
Kosko, B. (1992). Neural Networks and Fuzzy Systems, Prentice Hall.
Mazet, V. and Brie, D. (2006). An alternative to the RJMCMC algorithm, Proceedings of the IAR Annual Meeting, France, 1–5.
Mohammadi, A. and Wit, E.C. (2015). BDgraph: Bayesian structure learning of graphs in R, Bayesian Analysis, 10, 109–138.
Mok, S.C., Bonome, T., Vathipadiekal, V., Bell, A. et al. (2009). A gene signature predictive for outcome in advanced ovarian cancer identifies a survival factor: Microfibril-associated glycoprotein 2, Cancer Cell, 16(6), 521–532.
Muthen, B. (1984). A general structural equation model with dichotomous, ordered categorical, and continuous latent variable indicators, Psychometrika, 49, 115–132.
Schepsmeier, U., Stoeber, J., Brechmann, E.C., Graeler, B., Nagler, T., Erhardt, T. and Killiches, M. (2015). Package ‘VineCopula’, R package version 2(5).
Sklar, A. (1959). Fonctions de répartition à n dimensions et leurs marges, Publications de l’Institut de Statistique de l’Université de Paris, 8, 229–231.
Wang, T., Ren, Z., Ding, Y., Fang, Z., Sun, Z., MacDonald, M.L. and Chen, W. (2016). FastGGM: An efficient algorithm for the inference of Gaussian graphical model in biological networks, PLoS Computational Biology, 12(2), e1004755.
Index 3D printing 225, 228, 230, 234–235, 237–239 acceptance probability 289 accuracy 207, 214–216, 218–219, 221 AdaBoost 209, 216–221 Adenocarcinoma 97 AI 167–169, 176–183, 185–187, 230, 236, 238 AI business 181 AI datasets 181 AI decision-making 180 AI model 179–181 AI service provider 179–182 AI thinking 180, 183 AIM 122 alcohol 192 AlexNet 66–68, 72–73, 81 anisotropic diffusion filter 149 ANN 137, 231, 233 Archimedean family 295–296 artificial intelligence 128, 130–131, 167, 174, 176, 225–226, 231, 233, 235 Artificial Intelligence in Medicine (AIM) 105, 119 artificial neural networks (ANN) 105 artificial neural networks ANN attitudes 192 AUC 13, 15–17, 215–222 authenticity 175 autonomous code 170, 185 availability 172–173, 186 average distance 144 Ayurveda 265, 267, 269 basal-like 246 behaviors 192 big data 226 Binary Thresholding 275 blackish-brown 271 BLBC 247, 249, 261 bleeding detection 210 Blockchain 167, 169–187 blockchain framework 172 breast cancer 56–57, 58, 80, 246, 249, 257 cancer 21, 28, 30, 32, 36, 38, 40–42 cancerous condition 210 canny edge detection 276
canonical vine copula 294 capsule neural network 65, 79–81 Carcinomas 86 Cardiovascular 192 cell analysis 30, 37 cell detection 40 chain code 170 Chan Vese 145 classification 21–25, 27, 29, 32–36, 56, 58, 128, 130–133, 136–139, 144, 207–210, 213 classification ratio 154 clinical data 174 clinical Trials 174–175 clinics 193 cloud storage systems 200 clustering 158 CNN 2–3, 7–10, 15–17, 22–25, 28, 30–35, 58, 233–234, 236–237, 248–249, 251 community health 194 compression 144 Computer-aided diagnosis 208 conditional dependence 288–289, 291 consensus protocol 170–171, 173, 186–187 consent management 168, 178, 181, 184, 186 control 193 convolutional layers 4, 9 convolutional neural network CNN convolutional neural networks 22 copula Gaussian graphical model 287 cricket optimization 144 cropping 210 cryptoanchor 175 cryptocoin 176 cryptocurrency 170, 178 cryptology 182 CT 235, 237–238 cyberattack 168 DAPP 170 data 200 data auditing 175 data breach 168 data integrity 168, 172, 175, 186 data monetization 178, 181 data pruning 183, 187 data set 130, 132–134, 137–138
data storage 168, 183, 187 dataset 3, 7, 11–13, 17 DDOS 173 decentralized application 170 decentralized identity 184 decentralized intelligence 177 decentralized solutions 169, 186 decision-making 177–180, 186 Decorrelation and stretch algorithm 274, 284 deep learning 20–22, 23, 28–29, 31, 34–40, 57–59, 64, 80, 105, 177, 225–226, 233–234, 236–237 deep neural networks 250, 258 deep-learning 57 diabetes mellitus 192 diagnosis 128, 130, 132, 136, 143, 194 DID 184 diets 192 digital twins 167, 187 distributed denial of service 173 distributed ledger 169–170 DLT 170 drawable vine copula 294 DT 209, 216–221 edge detection 210 electrocardiogram (ECG) 198 elliptical family 295–296 EM-MAP 148 energy consumption 182–183 entropy 146 epidemic disease 174 epidemiology 204 Euclidean distance 231 Evolutionary systems 116 expenditures 192 F1 score 214–216, 218, 220 false negative 215 false positive 214 fault tolerance 171–172, 186 feature extraction 8 feature selection 207, 209, 213, 220 features 2–3, 4, 6–8, 12–13, 15 federated learning 183 filters 212 fraud detection 175 Fuzzy Expert Systems 117
GAN 22, 25, 38, 234 gastric cancer 207–210, 213, 215, 220 Gaussian Mixture Model 147, 165 Gaussian noise 143, 149 generative adversarial network 34 generative adversarial networks GAN genetic algorithm 105 GoogleNet 68, 74, 76, 80 GPU 173, 183 G-Wishart distribution 290–291 Hausdorff distance 144 health record 174–175 heart diseases 192 high-performance computing 182 histopathological 246–247, 249, 261 HPC 182 HSV 274, 279 hybrid blockchain 171 informatics 204 input image 3–4, 7–8, 11 Internet 200 Internet of medical things 175 interoperability 176, 181, 183–184 Introduction 265 IoMT 167, 175, 179–180, 187 IVF 35 Jaccard index 144, 154 K-means clustering 144 k-nearest neighbors k-NN k-NN 216–218, 220, 222, 231 – Neighbor number 232 – weighting 232 KNN 218, 220 ledger 167, 170, 172–173, 178, 180, 182–183, 186 leukemia 86, 90, 92 lifestyle 192 LightGBM 3, 6–9, 12–13, 15 Log loss 3, 13, 15 LSTM 2–3, 9–10, 17 lung cancer 1–3, 6, 17 lung disorders 192 Lymphomas 86
machine learning 22, 26, 29, 32, 176–177, 183, 208–209, 225–226, 231, 233, 235 machine-to-machine 179 malignancies 192 mammogram 96 Manhattan distance 232 median filter 149 medical assistive robot 180 medical device 168, 175 medical diagnosis 167–168, 185 medical information system 168–169, 178 medical supervisory system 180 Methodology 271 microscopy 21, 29, 31–36, 38–40, 43 Minkowski distance 232 model selection 177 morbidities 191 mortality 191 MPISA 169, 183, 187 multichain 184
peer-to-peer 169 penultimate layer 8 performance metrics 165 personalized medical prescription 178, 186 phantom 151 phantom images 149 pharma 174 physical activities 192 policy 192 post-quantum computing 183 prakruti. 266 precision 214–216, 218 predictive analytics 178 pretrained 248–250, 257–258, 260–261 privacy 168–169, 172–173, 181–182, 184–186 privacy-preserving 177 private ledger 171, 173 public ledger 173
Nadi 267 NAE 150 Naive Bayes 231–232 Natural language processing (NLP) 116 NCC 149 neural network 234 neural networks 20, 22, 24–26, 28, 30, 32, 37, 39 neutrosophic domain median filter 165 Neutrosophic Median Filter 145 neutrosophic set median filter 144 neutrosophic set-based filter 144 neutrosophic wiener filtering 144 Neutrosophy 145 noise 210, 212 non-cancerous condition 210 nonlinear scalar diffusion filter 149 nonlinear tensor diffusion filter 149 nonlocal median filter 149 NS-GMM 154, 165
radiology 179 Radiomics 149 Rand index 154 Ranking 213, 215 recall 214–216, 218 Recurrent neural network RNN recurrent neural networks 22 regular vine copula 294 reinforcement learning 108 resilience 178, 186 ResNet 69, 77–80 ResNet50 3, 8, 13, 17 reversible-jump Markov chain Monte Carlo method 287 RF 209, 216, 218–221 Rician noise 144, 149 RNN 22, 24–25, 233–234 RNN LSTM 2, 9 robotic 180, 186 ROC 3, 12–15, 16, 215–222 ROI 273, 276, 278, 284
Oncology 112 Ostha 268 Otsu Binarization 276, 279 P2P 169–170 Pairs plot relationship 214 patient record 167–168
quantum computing 182–183, 187
Sarcomas 86 scalability 182 screenings 192 security service 169, 172, 186 security violation 168
segmentation 21–22, 24–25, 31, 36–37, 144, 154, 210 semisupervised learning 107–108 sensing 195 Sklar's theorem 288, 290, 293 smart consensus protocol 182 smart contract 170, 184 smart device 195 social engineering attack 168 socially assistive robot 180 societal 192 specificity 216, 218 stem cell 23, 33, 39, 42 stem cells 38, 42 structural content (SC) 150 supervised learning 107 support vector machines (SVM) 105 Support Vector Machines SVM SVM 231, 233 systems 191 technologies 195 Telemetry 198
three-dimensional printing 226–227, 238–239 threshold intensity 278 tobacco 192 tracking 21–22, 24, 36 trained data 8 transfer learning 26, 31–33, 39, 246, 249, 258, 261 true negative 214 true positive 214 tumor 21, 32 tumors 246–247, 249, 261 unsupervised learning 107 validation data 11–12 vata, pitta and kapha 278 VGG 2–5, 8–9, 17 VGG19 3, 6, 9, 12–16 vine copula 288, 293–295, 298–301 white color 271 wireless 195 yellowish-green 271 YIQ 273
Computational Intelligence for Machine Learning and Healthcare Informatics

Already published in the series

Volume 2: Predictive Intelligence in Biomedical and Health Informatics
Rajshree Srivastava, Nhu Gia Nguyen, Ashish Khanna, Siddhartha Bhattacharyya (Eds.)
ISBN 978-3-11-067608-2, e-ISBN (PDF) 978-3-11-067612-9, e-ISBN (EPUB) 978-3-11-066838-4

Volume 1: Computational Intelligence for Machine Learning and Healthcare Informatics
R. Srivastava, P. Kumar Mallick, S. Swarup Rautaray, M. Pandey (Eds.)
ISBN 978-3-11-064782-2, e-ISBN (PDF) 978-3-11-064819-5, e-ISBN (EPUB) 978-3-11-067614-3
www.degruyter.com