Deep Learning for Personalized Healthcare Services 9783110708127, 9783110708004

This book uncovers the stakes and possibilities involved in realizing personalized healthcare services through efficient and effective deep learning algorithms.


English | 268 pages | 2021


Deep Learning for Personalized Healthcare Services

Intelligent Biomedical Data Analysis (IBDA)

Edited by Deepak Gupta, Nhu Gia Nguyen, Ashish Khanna, Siddhartha Bhattacharyya

Volume 7

Deep Learning for Personalized Healthcare Services Edited by Vishal Jain, Jyotir Moy Chatterjee, Hadi Hedayati, Salah-ddine Krit, Omer Deperlioglu

Editors Vishal Jain Department of Computer Science and Engineering School of Engineering and Technology, Sharda University Greater Noida, Uttar Pradesh, India [email protected] Jyotir Moy Chatterjee Lord Buddha Education Foundation, Kathmandu, Nepal [email protected]

Salah-ddine Krit Ibn Zohr University, Nouveau Complexe Universitaire, 80000 Agadir, Morocco [email protected] Omer Deperlioglu Afyon Kocatepe University, Gazligöl Yolu Rektörlük E Blok 03200 Afyonkarahisar, Turkey [email protected]

Hadi Hedayati Kabul University, Kabul University Campus, Jamal Mina, Afghanistan [email protected]

ISBN 978-3-11-070800-4 e-ISBN (PDF) 978-3-11-070812-7 e-ISBN (EPUB) 978-3-11-070817-2 ISSN 2629-7140 Library of Congress Control Number: 2021941374 Bibliographic information published by the Deutsche Nationalbibliothek The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de. © 2021 Walter de Gruyter GmbH, Berlin/Boston Cover image: gettyimages/thinkstockphotos, Abalone Shell Typesetting: Integra Software Services Pvt. Ltd. Printing and binding: CPI books GmbH, Leck www.degruyter.com

To our parents and well-wishers

Preface

The present generation is witnessing big leaps in the growth and advancement of the health information industry. This book is an attempt to unveil the hidden potential of health information and technology. Through this book, we combine numerous compelling views, guidelines, and frameworks on enabling personalized healthcare services through the successful application of deep learning (DL) frameworks. The progress of the healthcare sector will be incremental as it learns from the associations between data over time through the application of suitable artificial intelligence (AI) and deep network frameworks and patterns. The major challenge that healthcare faces is the effective and accurate learning of unstructured clinical data through the application of precise algorithms. Incorrect input data, leading to erroneous outputs with false positives, is intolerable in healthcare, as patients' lives are at stake. This book was formulated with the intent to uncover the stakes and possibilities involved in realizing personalized healthcare services through efficient and effective DL algorithms. In this book, we explore the impact of DL on personalized healthcare services and point out the challenges faced by the healthcare industry that can be solved by AI.

Chapter 1 discusses novel applications of DL in improving health and medicine: it first introduces machine learning (ML) and DL, then highlights their applications in health, and finally links their relevance to the future perspectives of modern health. Chapter 2 is an effort by the research team to discuss human bioelectricity and to measure the biophysical factors behind different symptoms of illness; the authors apply health sensors to capture massive data and process it through Internet of things applications and concepts.

Chapter 3 presents state-of-the-art DL schemes applied to healthcare services, discusses the factors that affect the operation of DL in healthcare, evaluates the potential and weaknesses of existing works, and explores advanced applications of DL in healthcare. In Chapter 4, the authors examine the energy distribution across different organs among various subjects. Chapter 5 discusses ensemble ML techniques, which showed the best performance on two cancer datasets (Coimbra and lung cancer), while a deep neural network showed the best results for the Wisconsin dataset, with the ensemble technique a close second. Chapter 6 discusses the analysis of clinical images with DL techniques for clinical decision making and also reviews patient management systems, wearable technology, standardization, and the processing of information from sensor nodes using DL approaches. https://doi.org/10.1515/9783110708127-202


Chapter 7 presents a brief overview of patient health record systems. The main focus of Chapter 8 is to analyze various DL algorithms and find the performance measure that gives the maximum accuracy for the prediction of cervical cancer. Chapter 9 presents a DL technique to classify images and detect skin cancer at an early stage. Chapter 10 describes a diagnostic model constructed using a series of sensitivity analyses over two DL algorithms, namely, the recurrent neural network and the convolutional neural network. Chapter 11 aims to provide an updated overview of DL, applying its essential concepts in the field of medicine, as well as its relationships with frameworks that have meaningful intelligent properties, enabling cost-effective personalized digital services as a primary goal of modern healthcare.

Finally, we would like to thank all the authors for their valuable contributions, which made this book possible. Among those who have influenced this project are our family and friends, who sacrificed much of their time and attention to ensure that we remained motivated throughout the completion of this book.

Acknowledgments

I would like to acknowledge the most important people in my life: my grandfather, late Shri Gopal Chatterjee; my grandmother, late Smt. Subhankori Chatterjee; my late mother, Ms. Nomita Chatterjee; my father, Shri Aloke Moy Chatterjee; and my uncle, Shri Moni Moy Chatterjee. This book has been my long-cherished dream, which would not have turned into reality without the support and love of these amazing people. They continuously encouraged me, despite my failure to give them proper time and attention. I am also grateful to my friends, who have encouraged and blessed this work with their unconditional love and patience.

Jyotir Moy Chatterjee
Department of IT
Lord Buddha Education Foundation (Asia Pacific University of Technology and Innovation)
Kathmandu 44600, Nepal

https://doi.org/10.1515/9783110708127-203

Contents

Preface VII
Acknowledgments IX
Short Biography of Editors XIII
List of Contributors XV

Rehab A. Rayan, Imran Zafar, Christos Tsagkaris
Deep learning for health and medicine 1

Rohit Rastogi, Mamta Saxena, Sheelu Sagar, Neeti Tandon, T. Rajeshwari, Priyanshi Garg
Exploring Indian Yajna and mantra sciences for personalized health: pandemic threats and possible cures in twenty-first-century healthcare 17

Bhanu Chander
Advanced deep learning techniques and applications in healthcare services 37

Rohit Rastogi, Mamta Saxena, Devendra Kr. Chaturvedi, Mayank Gupta, Akshit Rajan Rastogi, Vrinda Kohli, Pradeep Kumar, Mohit Jain
Visualizations of human bioelectricity with internal symptom captures: the Indo-Vedic concepts on Healthcare 4.0 67

Wasiur Rhmann, Babita Pandey
Early cancer predictions using ensembles of machine learning and deep learning 91

D. A. Janeera, G. Jims John Wesley, P. Rajalakshmy, S. Shalini Packiam Kamala, P. Subha Hency Jose, T. M. Yunushkhan
Deep learning in patient management and clinical decision making 115

Neha Mehta, SVAV Prasad
Patient health record system 141

S. Jaya, M. Latha
Prediction of multiclass cervical cancer using deep machine learning algorithms in healthcare services 161

Sidharth Purohit, Shubhra Suman, Avinash Kumar, Sobhangi Sarkar, Chittaranjan Pradhan, Jyotir Moy Chatterjee
Comparative analysis for detecting skin cancer using SGD-based optimizer on a CNN versus DCNN architecture and ResNet-50 versus AlexNet on Adam optimizer 185

Mildred J. Nwonye, V. Lakshmi Narasimhan, Zablon A. Mbero
Coronary heart disease analysis using two deep learning algorithms, CNN and RNN, and their sensitivity analyses 205

Ana Carolina Borges Monteiro, Reinaldo Padilha França, Rangel Arthur, Yuzo Iano
An overview of the technological performance of deep learning in modern medicine 225

Index 245

Short Biography of Editors

Dr. Vishal Jain is an associate professor at Sharda University, Greater Noida, India. He obtained his Ph.D. in computer science and engineering in 2016 from Lingaya's University. His research areas include information retrieval, semantic web, ontology engineering, data mining, ad hoc networks, and sensor networks. He received the Young Active Member Award for 2012–13 from the Computer Society of India, the Best Faculty Award for 2017, and the Best Researcher Award for 2019 from BVICAM, New Delhi. He first joined BVICAM as an assistant professor; before that, he worked for several years at the Guru Premsukh Memorial College of Engineering, Delhi, India. He has more than 350 citations on Google Scholar (h-index 9, i10-index 9). He has authored more than 70 research papers in reputed conferences and journals, including those indexed in Web of Science and Scopus, and has authored or edited more than 10 books with various reputed publishers, including Springer, Apple Academic Press, Scrivener, Emerald, and IGI Global.

Jyotir Moy Chatterjee is an assistant professor in the information technology department at the Lord Buddha Education Foundation (LBEF), Kathmandu, Nepal. He is a young ambassador of the Scientific Research Group in Egypt (2020–2021). Before joining LBEF, he worked as an assistant professor in the Department of Computer Science and Engineering at GD Rungta College of Engineering and Technology (CSVTU), Bhilai, India. He received his M.Tech. in computer science and engineering from the Kalinga Institute of Industrial Technology, Bhubaneswar, Odisha, and his B.Tech. in computer science and engineering from the Dr. MGR Educational and Research Institute, Chennai. He is credited with 50 international research paper publications, 2 conference papers, 5 authored books, 9 edited books, 11 book chapters, and 1 patent. His research interests include the Internet of things and machine learning. He is a member of various national and international professional societies, including STRA, IFERP, ASR, IRSS, IAA, MEACSE, MIAE, IRED, IAOIP, ICSES, SDIWC, ISRD, IS, SEI, IARA, and CCRI. He serves on the editorial boards of various reputed IGI Global journals and as a reviewer for reputed journals and international conferences of Elsevier, Springer, and IEEE.

Dr. Mohammad Hadi Hedayati currently works as Deputy Minister in the Ministry of Communication and IT, Afghanistan, and teaches in the IT department at Kabul University. His research covers information technology, information systems (business informatics), computer communications (networks), and computer architecture. He defended his Ph.D., titled "Ontology-Driven Reference Model for the Vocational ICT Curriculum Development," at Tallinn University, Estonia.

Dr. Salah-ddine Krit is an associate professor at the Polydisciplinary Faculty of Ouarzazate, Ibn Zohr University, Agadir, Morocco. He is the director of the Engineering Science and Energies Laboratory and head of the Department of Mathematics, Informatics and Management. He received his degrees in software engineering from Sidi Mohammed Ben Abdellah University, Fez, Morocco, in 2004 and 2009. From 2002 to 2008, he worked as an engineering team leader in audio and power management integrated circuit (IC) research, designing, simulating, and laying out analog and digital blocks for mobile phone and satellite communication systems using Cadence, Eldo, Orcad, and VHDL-AMS technology. He has authored or co-authored over 130 journal articles, conference proceedings, and book chapters published by IEEE, Elsevier, Springer, Taylor & Francis, IGI Global, and Inderscience. His research interests include wireless sensor networks, network security, smart homes, smart cities, the Internet of things, business intelligence, big data, digital money, microelectronics, and renewable energies.

https://doi.org/10.1515/9783110708127-205


Dr. Omer Deperlioglu received his bachelor's degree in electrical and electronics education from Gazi University in 1988, his master's degree in computer science from Afyon Kocatepe University in 1996, and his Ph.D. from the Department of Computer Science, Gazi University, in 2001. He is currently an associate professor in the Department of Computer Technologies at Afyon Kocatepe University, Afyon Vocational School. His research interests include artificial intelligence applications, signal processing, and image processing in biomedical engineering. He has published 10 books, 30 articles, and more than 40 papers, and has attended more than 40 conferences. He has served on the international technical committees of six conferences and workshops and is an associate editor of the IET journal The Journal of Engineering.

List of Contributors

Rehab A. Rayan Department of Epidemiology, High Institute of Public Health, Alexandria University, Egypt [email protected]

Mayank Gupta IT and System Analyst, Tata Consultancy Services, Japan [email protected]

Imran Zafar Department of Bioinformatics and Computational Biology, Virtual University of Pakistan Punjab, Pakistan [email protected]

Pranav Sharma B.Tech. Civil, Second Year, DEI, Agra, India [email protected]

Christos Tsagkaris Faculty of Medicine, University of Crete, Greece [email protected]

Rohit Rastogi Dept. of CSE, ABES Engineering College, Ghaziabad, Uttar Pradesh, India [email protected]

Neha Gupta Dept. of CSE, ABES Engineering College, Ghaziabad, Uttar Pradesh, India [email protected]

Sunny Yadav Dept. of CSE, ABES Engineering College, Ghaziabad, Uttar Pradesh, India [email protected]

Deepanshu Rustagi Dept. of CSE, ABES Engineering College, Ghaziabad, Uttar Pradesh, India [email protected]

Mamta Saxena Director General, Ministry of Statistics, Govt. of India, Delhi, India [email protected]

Devendra Kr. Chaturvedi Professor, Dept. of Electrical Engineering, DEI, Agra, India [email protected]

https://doi.org/10.1515/9783110708127-206

Bhanu Chander Department of Computer Science and Engineering, Pondicherry University, Pondicherry 609605, India [email protected]

Rohit Rastogi Dept. of CSE, ABES Engineering College, Ghaziabad, Uttar Pradesh, India [email protected]

Mamta Saxena Director General, Ministry of Statistics, Govt. of India, Delhi, India [email protected]

Devendra Kr. Chaturvedi Professor, Dept. of Electrical Engineering, DEI, Agra, India [email protected]

Mayank Gupta IT and System Analyst, Tata Consultancy Services, Japan [email protected]

Akshit Rajan Rastogi Dept. of CSE, ABES Engineering College, Ghaziabad, Uttar Pradesh, India [email protected]

Vrinda Kohli Dept. of CSE, ABES Engineering College, Ghaziabad, Uttar Pradesh, India [email protected]


Pradeep Kumar Dept of CSE, ABES Engineering College, Ghaziabad, Uttar Pradesh, India [email protected]

P. Subha Hency Jose Biomedical Engineering, Karunya Institute of Technology and Sciences, Coimbatore, Tamil Nadu, India [email protected]

Mohit Jain Dept of CSE, ABES Engineering College, Ghaziabad, Uttar Pradesh, India [email protected]

T. M. Yunushkhan King Khalid University, Abha, KSA [email protected]

Wasiur Rhmann Babasaheb Bhimrao Ambedkar University, Satellite Campus, Amethi, Uttar Pradesh, India [email protected]

Neha Mehta Chhattisgarh Swami Vivekanand Technical University, Bhilai Chhattisgarh, India [email protected]

Babita Pandey Babasaheb Bhimrao Ambedkar University, Satellite Campus, Amethi, Uttar Pradesh, India [email protected]

SVAV Prasad ECE Department, Lingaya’s Vidyapeeth, Faridabad, Haryana, India [email protected]

D. A. Janeera Department of ECE, Sri Krishna College of Engineering and Technology, Coimbatore, Tamil Nadu, India [email protected]

G. Jims John Wesley Aerospace Engineering, Karunya Institute of Technology and Sciences, Coimbatore, Tamil Nadu, India [email protected]

P. Rajalakshmy Robotics Engineering, Karunya Institute of Technology and Sciences, Coimbatore, Tamil Nadu, India [email protected]

S. Shalini Packiam Kamala Department of Science and Humanities, Nehru Institute of Engineering and Technology, Coimbatore, Tamil Nadu, India [email protected]

S. Jaya Department of Computer Science, Sri Sarada College for Women (Autonomous), Salem 16, Tamil Nadu, India [email protected]

M. Latha Department of Computer Science, Sri Sarada College for Women (Autonomous), Salem 16, Tamil Nadu, India [email protected]

Sidharth Purohit School of Computer Engineering, Kalinga Institute of Industrial Technology, Bhubaneshwar, Odisha, India [email protected]

Shubhra Suman School of Computer Engineering, Kalinga Institute of Industrial Technology, Bhubaneshwar, Odisha, India [email protected]


Avinash Kumar School of Computer Engineering, Kalinga Institute of Industrial Technology, Bhubaneshwar, Odisha, India [email protected]

Sobhangi Sarkar School of Computer Engineering, Kalinga Institute of Industrial Technology, Bhubaneshwar, Odisha, India [email protected]

Chittaranjan Pradhan School of Computer Engineering, Kalinga Institute of Industrial Technology, Bhubaneshwar, Odisha, India [email protected]

Jyotir Moy Chatterjee Department of IT, LBEF, Kathmandu, Nepal [email protected]

Mildred J. Nwonye University of Botswana, Gaborone, Botswana [email protected]

V. Lakshmi Narasimhan University of Botswana, Gaborone, Botswana [email protected]


Zablon A. Mbero University of Botswana, Gaborone, Botswana [email protected]

Ana Carolina Borges Monteiro School of Electrical Engineering and Computing (FEEC) – State University of Campinas (UNICAMP), Campinas, São Paulo, Brazil [email protected]

Reinaldo Padilha França School of Electrical Engineering and Computing (FEEC) – State University of Campinas (UNICAMP), Campinas, São Paulo, Brazil [email protected]

Rangel Arthur Faculty of Technology (FT) – State University of Campinas (UNICAMP), Limeira, São Paulo, Brazil [email protected]

Yuzo Iano School of Electrical Engineering and Computing (FEEC) – State University of Campinas (UNICAMP), Campinas, São Paulo, Brazil [email protected]

Rehab A. Rayan, Imran Zafar, Christos Tsagkaris

Deep learning for health and medicine

Abstract: Lately, a data-centered era has emerged as the rapidly growing health and medical fields yield big data such as patient examinations, texts, speech, and images; hence the need for technology upgrades in current systems and processes. Machine learning (ML), a subset of artificial intelligence (AI), builds models that computer systems use to learn from data by adjusting the weights assigned to parameters, without explicit instructions, via increasingly advanced algorithms, especially those for deep learning (DL). DL can automatically develop models of several processing layers for learning data representations at several levels of abstraction. Research suggests that DL algorithms are gaining attention for surpassing those of classical ML in predictive power; they are promising for learning models and for retrieving parameters from complex data. With many representation levels and outcomes that have outpaced human efficiency, DL has been adopted widely in health and medical informatics, for example, in molecular diagnostics for pharmacogenomics and in recognizing variants of pathogens; in explaining experimental data for sequencing DNA and splicing genes; in classifying protein structures; and in predictions for medical imaging and drug development, among others. This chapter discusses the novel applications of DL in improving health and medicine: it first introduces ML and DL, next highlights their applications in health, and finally links their relevance to the future perspectives of modern health.

Keywords: deep learning, machine learning, artificial intelligence, health informatics, medicine

Rehab A. Rayan, Department of Epidemiology, High Institute of Public Health, Alexandria University, Egypt, e-mail: [email protected], https://orcid.org/0000-0003-3852-5710
Imran Zafar, Department of Bioinformatics and Computational Biology, Virtual University of Pakistan, Punjab, Pakistan, e-mail: [email protected], https://orcid.org/0000-0002-9246-0850
Christos Tsagkaris, Faculty of Medicine, University of Crete, Greece, e-mail: [email protected]
https://doi.org/10.1515/9783110708127-001

1 Introduction

Lately, there have been dramatic advances in data acquisition techniques in the life sciences, along with developments in computational biology and automatic storage technologies, that have transformed today's biology into a data-driven discipline. Research now depends on data, and there are many promising interventions for biological issues. Bioinformatics aims at evaluating such big data in several domains by maintaining, extracting, or exploring it. Computational biology algorithms can also help manage data extraction. Such techniques, known as data mining, can reveal useful relationships, trends, and patterns in biological data [1]. Lately, a subfield of computational science has gained attention, namely soft computing, which covers techniques that process data smoothly while managing uncertain real-world circumstances. While hard computing targets precision, soft computing operates in the realm of incomplete facts, uncertainty, inadequacy, and estimation to solve problems [2]. Previously, only primitive systems could be accurately modeled and analyzed via computing methodologies; systems in health, biology, managerial research, the humanities, and similarly complicated disciplines were hard to manage with traditional mathematical and analytical approaches. Soft computing technologies complement one another; hence, biological procedures are mimicked more closely through soft computing than through conventional approaches, which mainly apply logic, such as sentential or predicate logic. One of the key components of soft computing is neural networks, which have broad reach in categorizing and representing biological data computationally. Neural networks are robust and demonstrate great learning and abstraction capability in data-driven contexts, where they apply machine learning (ML) algorithms [3].

2 Machine learning for health

ML is the study of algorithms that improve automatically through experience [4]; it is a core subfield of artificial intelligence (AI) [5]. ML algorithms build models from sample data, known as "training data," in order to make predictions or decisions without being explicitly programmed. ML algorithms are used in a variety of applications [6], including healthcare [7], medicine [8], email filtering [9], and computer vision [10]. A subset of ML is closely connected to predictive analytics, which focuses on using computers to make forecasts, but not all of ML is statistical learning. The study of mathematical optimization contributes methods, theory, and application domains to ML. Data mining is a related field of research focused on exploratory data analysis, which relies on unsupervised learning.

The twenty-first century is two decades old, and AI will be one of the most important tools for transforming and enabling human life during this century. AI and related services and networks are well positioned to change global competitiveness, work habits, and lifestyles, and to generate enormous wealth. It is no mystery that this change is largely driven by powerful ML technologies and innovations such as deep convolutional networks, generative adversarial networks, gradient-boosted tree models (GBMs), and deep reinforcement learning (DRL). But conventional industrial and technology sectors are not the only areas affected by AI. Health is considered an ideal domain for implementing AI tools and technology. Established systems such as electronic medical records (EMRs) are now complemented by next-generation healthcare programs equipped to evaluate massive data resources, and AI/ML tools are designed to make this data flow more meaningful. The quality of automation and intelligent decision-making in primary, tertiary, and public health care is expected to improve. AI tools can have the greatest impact where they improve the quality of life for thousands of people around the world.

Machine learning in healthcare (MLH) is usually intended to predict clinical outcomes based on different predictors. MLH shows enormous promise, and ML-based instruments can reach human-level diagnostic and prognostic performance in almost all clinical areas [11]. However, the number of MLH tools used in clinical applications represents only a fraction of the investment in the field, showing that most MLH applications do not advance beyond the first version [12]. On closer inspection, ML researchers seem inclined to conclude that a reliable prognostic model has been demonstrated once domain experts can translate it into clinical practice [13]. With a high error rate of 6.8, translation is complicated, but some potential problems can be handled easily. ML in medicine has recently made significant advances: Google has developed an ML algorithm to identify mammographic cancer tumors; Stanford uses a robust DL algorithm for skin cancer recognition; and a recent JAMA article reported the results of a DL algorithm that diagnosed retinopathy in retinal imaging. ML thus adds another arrow to the clinical decision-making quiver.

For some disciplines, ML is more attractive than for others. Disciplines with reproducible or systematic processes can benefit directly from algorithms, and those with large image datasets, such as radiology, cardiology, and pathology, are promising candidates. For these procedures, ML should be able to look at the images, recognize patterns, and identify the focus areas that need closer attention. In the long term, ML will help the family practitioner or bedside trainee by offering impartial guidance that improves reliability, outcomes, and accuracy.
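The risk-prediction idea discussed above can be made concrete with a minimal sketch: a logistic-regression risk model fitted by stochastic gradient descent to synthetic patient data. The features (age, systolic blood pressure), the threshold, and the data are all invented for illustration and carry no clinical meaning.

```python
# Illustrative sketch: logistic regression trained by gradient descent on
# synthetic patient records. All features, thresholds, and labels are
# hypothetical, chosen only to demonstrate the mechanics of risk modeling.
import math
import random

random.seed(0)

def make_patient():
    age = random.uniform(30, 80)
    sbp = random.uniform(100, 180)           # systolic blood pressure
    # Invented ground truth: risk grows linearly with age and blood pressure.
    risk = 1 if (0.04 * age + 0.02 * sbp) > 5.0 else 0
    return [age / 80.0, sbp / 180.0], risk   # normalized features

data = [make_patient() for _ in range(500)]

w = [0.0, 0.0]
b = 0.0
lr = 0.5

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))        # sigmoid -> probability of risk

for _ in range(2000):                        # stochastic gradient descent
    for x, y in data:
        err = predict(x) - y                 # gradient of the log loss
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

accuracy = sum((predict(x) > 0.5) == (y == 1) for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

Because the invented decision boundary is linear in the features, the model can fit it closely; real clinical risk scores such as Framingham's involve far more careful feature selection and validation.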

2.1 Supervised learning

Supervised learning begins with the goal of predicting a known outcome or target. In ML competitions, where participants are judged on their results on standard datasets, recurring supervised learning problems include handwriting recognition (such as handwritten digit recognition), classification of photographs of objects (for example, cat versus dog), and text classification. Other examples are predicting heart disease from clinical data or interpreting a financial report. All of these are tasks that an intelligent individual would do well, and the machine tries to approximate human performance. Supervised learning includes classification, which means assigning a new data point to one of several subgroups, as well as regression, such as predicting an uncertain quantity like tomorrow afternoon's temperature in San Francisco.

What are examples of supervised learning in medicine? The most familiar example for a cardiologist is automated electrocardiography (ECG) interpretation, with pattern recognition used to select from a few diagnoses (i.e., a classification task). Automated radiology also relies on supervised training, for example when identifying a lung nodule on a chest radiograph. In both situations, the machine does what a skilled professional would do, with great accuracy. Supervised learning is also used for risk modeling: the Framingham risk score for heart disease [14] may be the most familiar example in medicine. Such risk models are useful in atrial fibrillation therapy [15] as well as in decisions about automated defibrillators in hypertrophic cardiomyopathy [16]. The machine approaches doctors' skill in risk modeling without discovering new connections that are not apparent to humans.
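Classification tasks like the ECG example above can be illustrated with one of the simplest supervised learners, k-nearest neighbors. The feature values (a heart rate and a QT-like interval), the cluster centers, and the labels below are hypothetical, chosen only to make the two classes separable.

```python
# Supervised-learning sketch: k-nearest-neighbor classification of
# synthetic two-feature vectors loosely modeled on ECG measurements.
# All numbers are invented for illustration, not clinical values.
import math
import random

random.seed(1)

def sample(label):
    # Hypothetical clusters: "normal" around (60, 0.40), "abnormal" around (95, 0.55).
    if label == "normal":
        hr, qt = random.gauss(60, 5), random.gauss(0.40, 0.02)
    else:
        hr, qt = random.gauss(95, 5), random.gauss(0.55, 0.02)
    return (hr, qt, label)

training = ([sample("normal") for _ in range(50)]
            + [sample("abnormal") for _ in range(50)])

def classify(hr, qt, k=5):
    # Scale each feature so heart rate does not dominate the distance.
    dists = sorted(
        (math.hypot((hr - h) / 40.0, (qt - q) / 0.15), lab)
        for h, q, lab in training
    )
    votes = [lab for _, lab in dists[:k]]
    return max(set(votes), key=votes.count)   # majority vote of k neighbors

print(classify(58, 0.39))   # a point near the "normal" cluster
print(classify(98, 0.56))   # a point near the "abnormal" cluster
```

The classifier simply reproduces the labels a skilled annotator supplied for the training set, which is exactly the sense in which supervised learning approximates human output.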

2.2 Unsupervised learning

In unsupervised learning, by contrast, there is no outcome to predict. We aim instead to discover natural patterns or groups in the data. It is a more nuanced task to determine the value of classes obtained from unsupervised learning, and that value is often measured in later supervised work (i.e., are these new patterns useful in some way?). Where might such approaches be used? The most interesting opportunity is potentially the "precision medicine" program [17]. The inherent heterogeneity of the most common diseases frustrates attempts to redefine diseases through pathophysiological pathways, which might offer alternative possibilities for therapy. However, the pathways of multifactorial complex diseases are difficult to classify. Consider how unsupervised learning could be applied in cardiology, taking a heterogeneous pathology such as myocarditis. Many patients with unexplained acute systemic heart disease could be the starting point of the process. Biopsies would then be performed, and a technique such as immunophenotyping would characterize the cell composition in each sample; for example, counts of T cells, neutrophils, macrophages, and eosinophils would be compiled. Recurring patterns of cell composition could reveal pathways and suggest interventions to be investigated. A related, genomics-based approach led to the discovery of an eosinophilic asthma subtype [18], which responds only to a new treatment against the eosinophilic cytokine IL-13 [19]. Note the difference from supervised learning: there are no outcomes to predict; only patterns are detected in the data. Indeed, a supervised learning task, such as building a model of mortality in heart muscle disease, might overlook these subgroups altogether.
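The myocarditis thought experiment above maps naturally onto a clustering algorithm. The sketch below runs k-means on synthetic "cell composition" vectors (fractions of T cells, neutrophils, macrophages, eosinophils); the two hidden patterns and all numbers are invented for illustration.

```python
# Unsupervised-learning sketch: k-means clustering of synthetic biopsy
# cell-composition vectors. The two underlying "patterns" are invented.
import random

random.seed(2)

def noisy(center):
    v = [max(0.0, c + random.gauss(0, 0.03)) for c in center]
    s = sum(v)
    return [x / s for x in v]   # re-normalize so fractions sum to 1

# Hypothetical pathophysiological patterns hidden in the data:
lymphocytic = [0.60, 0.10, 0.20, 0.10]    # T-cell dominated
eosinophilic = [0.15, 0.10, 0.15, 0.60]   # eosinophil dominated
samples = ([noisy(lymphocytic) for _ in range(30)]
           + [noisy(eosinophilic) for _ in range(30)])
random.shuffle(samples)       # the algorithm never sees the labels

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, iters=20):
    # Farthest-point initialization keeps the two seeds well apart.
    centers = [points[0], max(points, key=lambda p: dist2(p, points[0]))]
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            i = 0 if dist2(p, centers[0]) <= dist2(p, centers[1]) else 1
            groups[i].append(p)
        for i, g in enumerate(groups):
            if g:
                centers[i] = [sum(col) / len(g) for col in zip(*g)]
    return centers

centers = kmeans(samples)
# One recovered center should be eosinophil-rich, the other T-cell-rich.
print(sorted(round(c[3], 2) for c in centers))
```

No labels were used anywhere; the two centers recovered by the algorithm correspond to the hidden subtypes, which is the sense in which clustering could surface disease subgroups.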

2.3 Semi-supervised learning

Semi-supervised learning (SSL) has proved to be a powerful paradigm for utilizing unlabeled data to minimize dependence on large volumes of labeled data. Creating a computer-aided diagnosis (CAD) system to support healthcare professionals requires a significant number of labeled (diagnosed) samples, and collecting expert-annotated data is very expensive, difficult, and time-consuming. SSL addresses this problem by using the huge volume of available unlabeled (undiagnosed) data, along with a minimal amount of labeled data, to train reliable classifiers, requiring less human effort and fewer resources.
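A minimal version of this idea is self-training: fit a simple classifier on the few labeled cases, pseudo-label only the unlabeled cases it is confident about, and refit on the enlarged set. The nearest-centroid classifier, the 1-D "biomarker," and the confidence margin below are all hypothetical.

```python
# Semi-supervised sketch (self-training): a nearest-centroid classifier is
# fitted on a few labeled cases, then confident pseudo-labels from the
# unlabeled pool are added and the model is refitted. Data is invented.
import random

random.seed(3)

def case(mean):   # a 1-D "biomarker" value drawn around a class mean
    return random.gauss(mean, 1.0)

# Only 4 labeled cases per class, but 100 unlabeled cases.
labeled = ([(case(0.0), "healthy") for _ in range(4)]
           + [(case(6.0), "diseased") for _ in range(4)])
unlabeled = [case(0.0) for _ in range(50)] + [case(6.0) for _ in range(50)]

def centroids(data):
    return {lab: sum(x for x, l in data if l == lab)
                 / sum(1 for _, l in data if l == lab)
            for lab in ("healthy", "diseased")}

def predict(c, x):
    return min(c, key=lambda lab: abs(x - c[lab]))

# Self-training round: pseudo-label only the unlabeled cases that are much
# closer to one centroid than the other (a simple confidence margin).
c = centroids(labeled)
pseudo = []
for x in unlabeled:
    d = {lab: abs(x - c[lab]) for lab in c}
    if abs(d["healthy"] - d["diseased"]) > 3.0:
        pseudo.append((x, predict(c, x)))

c2 = centroids(labeled + pseudo)   # refit on labeled + pseudo-labeled data
print(len(pseudo), "cases pseudo-labelled")
```

The refitted centroids are estimated from far more points than the eight labeled cases alone, which is the effort-saving effect SSL aims for; practical SSL systems use the same margin idea with much stronger base classifiers.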

3 Deep learning for health

With DL, the health sector can process data at extraordinary speeds without losing precision. DL is a specialized branch of ML that employs a layered, algorithmic architecture to examine data at astonishing speed [20]. The advantages of DL in health are abundant: it is quick, reliable, and precise, and the ability to learn in multi-layered neural networks offers more than classical ML approaches do. The growing number of DL technologies in healthcare lets one envision a world where evidence, research, and creativity work together to benefit many patients. Soon, ML-based solutions may be deployed in many countries, combined with real-time patient data from various healthcare systems, to improve the efficacy of emerging treatment approaches that were not available before [21].

DL uses mathematical structures that function somewhat like the human brain. Multiple network and infrastructure layers allow unprecedented computational capability and the capacity to examine vast quantities of previously missing, forgotten, or lost data [22]. These deep learning networks can solve difficult challenges and derive insight from massive volumes of healthcare data, a potential that health practitioners have not overlooked. Healthcare's potential has never been more promising: while AI and ML in general may only address very particular market requirements, deep learning in healthcare can be effective in supporting doctors and transforming patient care.
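The "multiple processing layers" idea can be shown with a tiny multi-layer network trained by backpropagation on a toy task (XOR), which no single-layer model can solve. This is an expository pure-Python sketch; real DL systems use dedicated libraries and far larger architectures.

```python
# A minimal multi-layer neural network (2 inputs -> 4 hidden -> 1 output)
# trained with backpropagation on XOR. Expository sketch only.
import math
import random

random.seed(4)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(4)]
b1 = [0.0] * 4
W2 = [random.uniform(-1, 1) for _ in range(4)]
b2 = 0.0

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]   # XOR

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    y = sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

lr = 0.5
initial = loss()
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)                 # output-layer delta
        for j in range(4):
            dh = dy * W2[j] * h[j] * (1 - h[j])    # hidden-layer delta
            W2[j] -= lr * dy * h[j]
            for i in range(2):
                W1[j][i] -= lr * dh * x[i]
            b1[j] -= lr * dh
        b2 -= lr * dy

print(f"loss: {initial:.3f} -> {loss():.3f}")
```

The hidden layer learns an intermediate representation of the inputs, which is the essential mechanism that scaling up to many layers and millions of parameters turns into deep learning.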


Rehab A. Rayan, Imran Zafar, Christos Tsagkaris

4 Healthcare applications of deep learning

DL has significant potential in diagnostics and telemedicine. One of the most significant problems in diagnostics is fatal medical error: according to a large analysis, diagnostic errors account for up to 10% of patients' mortality. DL tools can assist the diagnostic procedure by confirming physicians' diagnoses or red-flagging significant discrepancies [23]. A meta-analysis conducted in UK-based hospitals has confirmed the efficacy of parallel DL diagnosis in cancer and ocular diseases [24]. The same principles apply, to an even greater extent, in radiology [25], where large workloads can overwhelm human capacity. With normal or common diagnoses effectively managed by DL systems, the AI- and DL-based medical imaging market is expected to reach a value of $2 billion by 2023 [26]. The benefits of DL in medical imaging are twofold: not only does the volume of work become manageable, but radiologists also gain more time to devote to complicated cases, and closer collaboration with clinicians and researchers becomes possible [24]. A significant amount of data, mainly from magnetic resonance imaging (MRI), radiographic imaging, positron emission tomography (PET), and histopathology, has already been employed as input for DL algorithms [27, 28]. Research and deployed applications have focused on anomaly classification to diagnose cancer or schizophrenia. Segmentation properties, which ensure that DL algorithms can properly detect microscopic and macroscopic structures, are also studied [25]. A great deal of DL research investigates the interpretation of people's behavior or emotion through MRIs [29]. In the same frame, biomedical signal processing based on recorded electrical activity of the human body has been widely approached in the DL context, with data from electroencephalography (EEG), electrocorticography (ECoG), ECG, electromyography (EMG), and electrooculography (EOG) as inputs.
Distinguishing noise and artifacts in raw signals is essential before using them as input for DL algorithms [30]. Apart from diagnosing patients, DL is useful in diagnosing systems' failures: ML and DL algorithms have been used to predict equipment and software failures by analyzing historical data, and DL algorithms have overcome obstacles that hindered the progress of practitioners and researchers for decades [26, 29]. Beyond that, the interplay between DL and telemedicine is expected to become more concrete in the next few years [31]. Telemedicine has become more relevant than ever; with constant pressure to optimize telemedical features across specialties and countries, disruptive technology including DL is expected to push the field forward. Telemedicine comprises both direct remote communication with patients and remote access to patient records, leading to proper diagnosis and treatment [32]. Recent studies reported a similar level of efficacy between telemedicine and conventional diagnostic methods, and the question is whether this similarity can be extrapolated to the biomedical applications of DL [33, 34]. A study conducted in 2017 at Stanford University reported the results of an algorithm used to diagnose
skin cancer remotely. The algorithm, based on a convolutional neural network, performed on par with 21 trained dermatologists [35]. Tele-dermatological consultations assisted by AI and DL modalities are expected to reduce healthcare expenditure by 18%, and cross-referencing diagnosis and treatment options with a DL algorithm can increase the level of safety for both physicians and patients.
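The core operation inside a CNN such as the one in the Stanford study is the 2D convolution. A bare-bones NumPy version (illustrative only; real frameworks add padding, strides, channels, and learned kernels) shows how a small kernel scans an image for local patterns such as edges:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid convolution (strictly, cross-correlation, as in most DL frameworks)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy image with a vertical boundary: dark left half, bright right half
img = np.zeros((6, 6))
img[:, 3:] = 1.0

# A vertical-edge kernel responds strongly exactly at the boundary
edge_kernel = np.array([[-1.0, 1.0]])
response = conv2d(img, edge_kernel)
print(response)
```

In a trained CNN, many such kernels are learned from labeled images, and their stacked responses feed the classification layers.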

4.1 Detection of biomarkers

A biomarker, or biological marker, is a measurable indicator found in a biopsy or associated with a disease. Biomarkers are also assessed and analyzed to characterize pathophysiological processes or adverse reactions to a clinical intervention, and they are used across many scientific fields [36]. Digital biomarkers are a new, evolving class of deep learning-based biomarkers, typically obtained from smart biosensors with a rule-based methodology [37]. To date, digital biomarkers have concentrated on measuring critical parameters such as accelerometer data and heart rate, and registries of modern noninvasive molecular digital biomarkers are increasingly available. A new wave of optical biomarkers accomplishes sweat monitoring on the skin. The responsible physician can quickly share digital biomarkers, and a deep learning-based system can be built on them for modern diagnostic approaches. Biomarkers in precision medicine form part of a modern set of clinical instruments that use DL techniques. By their therapeutic implementation, they fall into three major classes [38]: molecular, smartphone-derived, and imaging biomarkers. All three kinds serve a clinical function in guiding therapeutic decisions and are, as subcategories, predictive, prognostic, or diagnostic. Diagnostic biomarkers that satisfy the burden of evidence can narrow the differential and lead to a more precise diagnosis of each condition.
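A rule-based digital biomarker of the kind described (heart rate from a wearable sensor) can be sketched as a threshold rule over a sampled stream. The 60-second window and 100 bpm threshold below are illustrative assumptions, not clinical guidance:

```python
import numpy as np

def resting_hr_flag(hr_samples, window=60, tachycardia_bpm=100):
    """Smooth a 1 Hz heart-rate stream with a moving average, take the
    minimum windowed rate as 'resting HR', and flag if it exceeds a threshold."""
    hr = np.asarray(hr_samples, dtype=float)
    kernel = np.ones(window) / window
    smoothed = np.convolve(hr, kernel, mode="valid")
    resting = float(smoothed.min())
    return resting, resting > tachycardia_bpm

# Simulated 10-minute stream sampled once per second (bpm)
rng = np.random.default_rng(1)
stream = 72 + 3 * rng.standard_normal(600)   # normal resting rate
resting, flagged = resting_hr_flag(stream)
print(round(resting), flagged)
```

A DL-based digital biomarker would replace the hand-written rule with a model learned from annotated sensor streams, but the input/output shape of the pipeline is the same.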

4.2 Genomics

In recent years, many applications have demonstrated the potential for a more profound understanding of biological processes. With advances in biotechnology and the adoption of high-throughput sequencing, researchers can generate and interpret vast volumes of sequencing data. Since genomics handles large-scale data [39], most bioinformatics algorithms focus on ML methods and, more recently, on DL to detect patterns, generate forecasts, and model disease development or treatment. Advances in DL have given biomedical informatics unrivalled traction and have opened new fields of study in bioinformatics and computational biology. However, most DL tools developed so far are designed around a standardized dataset and/or model architecture to deal with a specific issue.


Genomics utilizes powerful ML to track data dependencies and to develop new biological hypotheses. However, more expressive ML models are needed to derive new insights from the growing amount of genomic data. The extended application of DL has driven advances in areas such as machine vision and natural language processing by effectively leveraging vast volumes of data [40]. In genomics, DL is also used to forecast the effect of genetic variation on gene-regulatory processes, such as chromatin accessibility and DNA cleavage, across several genomic modelling tasks.
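Before any such model sees a sequence, genomic DL pipelines typically one-hot encode the raw DNA string; a minimal sketch:

```python
import numpy as np

BASES = "ACGT"

def one_hot_dna(seq):
    """Encode a DNA string as a (length, 4) matrix.
    Unknown bases (e.g. 'N') become an all-zero row."""
    idx = {b: i for i, b in enumerate(BASES)}
    out = np.zeros((len(seq), 4), dtype=np.float32)
    for pos, base in enumerate(seq.upper()):
        if base in idx:
            out[pos, idx[base]] = 1.0
    return out

x = one_hot_dna("ACGTN")
print(x)
```

The resulting matrix is what convolutional models for regulatory genomics consume, with kernels scanning along the sequence axis much like image kernels scan pixels.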

4.3 Identification and diagnosis of diseases

One of the most important applications of DL in health care is the identification and diagnosis of diseases and conditions otherwise considered undetectable [41]. This includes cancers that are difficult to catch in their early stages and several hereditary abnormalities. IBM Watson Genomics is an outstanding example of how combining cognitive computing with genome-based tumor sequencing can support a rapid diagnosis. Berg, a global biopharmaceutical firm, has used AI in oncology drug development. P1vital's PReDicT aims to establish a commercially feasible way of diagnosing and treating depression in routine clinical circumstances.

4.3.1 Cancer diagnosis

For many years, oncologists have used medical imaging modalities such as computed tomography (CT), MRI, and X-ray to detect cancer. While these modalities have proven effective in diagnosing or staging certain cancers, a significant percentage of patients have cancers that these examinations cannot correctly detect. A DL system such as a convolutional neural network (CNN) offers hope for future cancer diagnosis: relying on the same diagnostic images, a CNN can detect cancer with fewer misdiagnoses, improving patient outcomes. Researchers have recently studied various DL models that diagnose different forms of cancer with high precision [42].

4.4 Discovery and manufacturing of medicines

With DL, breakthroughs have been made in drug development. These include research and development techniques such as next-generation sequencing and precision medicine, which continue to uncover novel approaches to preventing multifactorial diseases. DL methods include unsupervised learning, which can discover patterns in data without labeled outcomes [43]. For various projects in the


Microsoft-backed Project Hanover, ML-based technology is being used, including the development of AI-based cancer-therapy technology and the adaptation of drug combinations for acute myeloid leukemia.

4.5 Medical imaging

Computer vision, an innovative technology built on ML and DL, has driven progress here. This was shown in Microsoft's InnerEye initiative, which uses image-recognition software for diagnostic imaging. As ML becomes more usable and explainable, more sources of medical images are projected to become part of this AI-driven diagnostic process [44].

4.6 Personalized medicine

Combining personal health data with predictive analytics makes customized treatments more effective, and both are ripe for further research and improved disease assessment. Physicians are usually limited to selecting among therapies or estimating a patient's risk from the symptomatic history and the genetic data available. However, ML in medicine is developing rapidly, and IBM Watson Oncology is at the forefront of this process, using a patient's medical history to establish several approaches to therapy. In the future, drugs and biosensors with specialized medical features will be available on the market, and with leading DL-based medical technology [45], their widespread use is expected to yield additional results.

4.7 DL-based behavioral modification

Behavior therapy is an important part of preventive medicine, and with the spread of DL in health care, numerous start-ups have emerged in fields such as cancer detection and recognition and medical treatment. Somatix, a business-to-business-to-consumer (B2B2C) data-analytics company, has launched an ML app that identifies behaviors in our everyday lives to help us understand and change unconscious habits [46].

4.8 Smart health things

Keeping patient records up-to-date is a systematic task. Although technology has simplified data entry, the fact is that many related processes still take a great deal of time. DL in healthcare plays an important role in facilitating processes


to save money, time, and effort. Vector classification and ML-based optical character recognition have become more popular over the years, given the development of the Google Cloud Vision application programming interface (API) and MATLAB-based handwriting-recognition technology. MIT is today a pioneer in creating the next wave of intelligent health records, including ML-based tools for assisting with diagnosis and clinical care tips [47].

4.9 Clinical trial and research

DL has various potential applications in clinical trials and research. As anyone in the pharmaceutical industry can attest, clinical studies demand a great deal of time and expense and can run for several years. ML-driven predictive analytics for identifying possible study candidates lets researchers draw on a pool of data points such as past doctor visits and social media. DL is also used to give research subjects real-time tracking and data access, for example, determining the best sample size for testing and harnessing the error-reducing power of electronic records [48].

4.10 Crowdsourced data collection

Today, crowdsourcing is a game changer in the medical field, empowering academics and specialists to access a significant share of the information patients hold. Live clinical results let diagnostics and therapeutics be scientifically evaluated and refined. With immersive, ML-based facial-recognition software, Apple's ResearchKit allows clinicians to monitor and attempt to treat Asperger's and Parkinson's disease. IBM has recently worked to decode, compile, and incorporate real-time data on diabetes and insulin use. In applying this evidence, IoT-based innovation has allowed healthcare to identify cases that are difficult to diagnose and to support general progress in diagnosis and medication [49].

4.11 Better radiation therapy

ML plays a major role in radiology. In the analysis of medical images, several variables can appear at any particular moment, and there are accidents, illnesses, and other situations where modelling them with complicated hand-crafted equations is not possible. Diagnosis and variable analysis are now easier because ML-based algorithms are trained with many samples [50]. In medical image review, grouping objects such as lesions into categories such as normal or pathological, and lesion or non-lesion, is a common ML application. Google's DeepMind Health helps UCLH researchers


to routinely build algorithms that distinguish healthy from neoplastic tissue and to enhance radiation therapy accordingly.

4.12 Prediction of epidemics and outbreaks

AI-based technology and DL are used today to monitor and predict epidemics around the globe. Researchers have access to huge volumes of data collected from satellites, real-time social media alerts, website information, and so on. Artificial neural networks help compile this information and predict events from flu outbreaks to myocardial infarction [51] and beyond. Forecasting such outbreaks is particularly useful in low- and middle-income countries (LMICs), which often lack essential medical services and education systems. A characteristic example is the ProMED newsletter, an Internet-based monitoring portal that tracks emerging infections and offers real-time epidemic updates.
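A minimal version of such outbreak forecasting is autoregression on lagged case counts; the sketch below uses synthetic weekly counts and plain linear regression, whereas real surveillance systems fuse many more signals (satellite, social media, web data) and richer models:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic weekly flu-like case counts with a yearly seasonal cycle
t = np.arange(200)
cases = 100 + 50 * np.sin(2 * np.pi * t / 52) \
        + np.random.default_rng(2).normal(0, 3, 200)

# Lag features: predict this week's count from the previous 4 weeks
lags = 4
X = np.column_stack([cases[i:len(cases) - lags + i] for i in range(lags)])
y = cases[lags:]

# Fit on the first 150 weeks, evaluate on the held-out remainder
model = LinearRegression().fit(X[:150], y[:150])
r2 = model.score(X[150:], y[150:])
print(f"held-out R^2: {r2:.2f}")
```

Because a sinusoidal season is well captured by a short autoregression, even this toy model forecasts the held-out weeks accurately; real epidemic curves are far noisier.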

5 The case for deep learning in patients' safety

DL speaks directly to patients' safety, either in a physical context, where it cross-references a diagnosis, or in digital settings, where it can detect methodological flaws and faulty procedures. DL applications have incorporated a considerable amount of patient records, daily evaluations, and vital measurements into their algorithms to make them reliable enough for healthcare providers [30]. An interesting case is that of El Camino Hospital, where researchers used electronic health records, bed-alarm data, and nurse-call data as input for a DL tool that predicts patient falls [52]. Falls are a significant cause of inpatient and outpatient morbidity and mortality among the elderly, and their care overwhelms healthcare providers and carers alike [53]. The DL tool at El Camino Hospital could alert staff about patients at high risk of falling; the hospital's staff was able to act in time, reducing falls by 39% [52]. The impact of this application on local healthcare can be estimated from data published by the Joint Commission Center for Transforming Healthcare: inpatient fall injuries add an average of 6.3 days to a hospital stay and raise hospitalization costs by about $14,000 [54, p. 9]. These results are all the more important because conventional means of decreasing patient falls, such as patient education, vision assessments, and walking aids, have repeatedly failed, leaving practitioners in a stalemate, according to a study from the University of Texas [53, 54].
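The fall-prediction idea can be illustrated with a small classifier on synthetic features; the feature names and distributions below are hypothetical stand-ins, not El Camino Hospital's actual data or model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical EHR/bed-alarm features for 1,000 simulated patients
rng = np.random.default_rng(3)
n = 1000
age = rng.integers(40, 95, n)
bed_alarms_week = rng.poisson(1.5, n)
sedatives = rng.integers(0, 2, n)

# Toy ground truth: fall risk grows with age, alarm frequency, sedative use
risk_score = 0.04 * age + 0.8 * bed_alarms_week + 1.2 * sedatives
fell = (risk_score + rng.normal(0, 0.8, n) > 5.0).astype(int)

X = np.column_stack([age, bed_alarms_week, sedatives])
X_tr, X_te, y_tr, y_te = train_test_split(X, fell, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

A deployed system would output a per-patient risk score on live data and page staff above a tuned threshold, rather than a batch accuracy number.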


6 Challenges

Despite DL's superiority over other ML algorithms in extracting features, recognizing, and categorizing, it also suffers from many limitations. Biological data are very complicated, and DL models trained on them are difficult for either humans or algorithms to interpret because they do not clearly reveal biological relationships [55]; human interaction coupled with computational exploration is still needed to fully investigate specific biological data. This is referred to as the black-box issue. To achieve high precision, DL algorithms need a lot of training data that is not readily available; hence the risk of overfitting, which manifests as higher test errors alongside lower training errors. It is often challenging to choose the specific algorithm that fits the job, and even with selection-aiding tools such as hyperparameter-optimization technologies, it is not easy to decide which framework to apply, particularly with novel ones continually being added. Computing technologies reduce the expense of analyzing data and save time; however, DL algorithms in particular follow a data-intensive learning procedure that consumes time and needs professionals such as programmers. Experts note that DL cannot solve many critical issues. The multiple high-level representations delivered by DL are hard to interpret or modify during any categorization problem. DL cannot be applied to all conditions, especially rare ones. And DL algorithms can evidently be tricked into yielding wrong outputs via minor alterations to the input.
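The overfitting risk described above (low training error, high test error when labeled data are scarce) is easy to reproduce with a high-capacity model on a small sample:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)

# 15 noisy training points from a sine curve; a dense noiseless test grid
X_train = np.sort(rng.uniform(0, 1, 15))[:, None]
y_train = np.sin(2 * np.pi * X_train[:, 0]) + rng.normal(0, 0.1, 15)
X_test = np.linspace(0.05, 0.95, 100)[:, None]
y_test = np.sin(2 * np.pi * X_test[:, 0])

# Degree-12 polynomial on 15 points: near-perfect fit, worse generalization
model = make_pipeline(PolynomialFeatures(degree=12), LinearRegression())
model.fit(X_train, y_train)
train_r2 = model.score(X_train, y_train)
test_r2 = model.score(X_test, y_test)
print(f"train R^2 = {train_r2:.3f}, test R^2 = {test_r2:.3f}")
```

The gap between the two scores is the overfitting signature; in medical DL it motivates regularization, data augmentation, and the semi-supervised techniques discussed earlier.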

7 Conclusions

Nowadays, the volume of biological data produced is far too large to be analyzed solely by humans; hence the need for ML support, particularly DL algorithms, to interpret medical and health-related data efficiently. Recent investigations and advances in medical DL have introduced ubiquitous applications in drug development; genome, proteome, and transcriptomic gene-expression analysis; multiomics and splicing studies; medical image processing; and more. In health, DL is being used in translational bioinformatics, identification of genetic variants, learning the interactions between targets and ligands, medical imaging, assistive wearables, health informatics, and population health, among others. Such DL methods in the biological sciences are still at an early stage, yet the technique holds promise to transform the life sciences dramatically in terms of expense, time, and applications. Its current limitations could be addressed through advances in the biological domain and through collaborative research on the design of DL algorithms for life-science data. Globally, the IT and


pharmaceutical industries are investing heavily in computational-biology research; hence, massive growth in this area is expected in the upcoming years. This chapter describes recent innovations in DL and their existing and expected applications in the health sciences. It reveals the gap in applying DL to the medical sciences, where investigations need to advance dramatically through cooperative efforts.

References

[1] Mittal, S., Hasija, Y. Applications of deep learning in healthcare and biomedicine. In: Deep Learning Techniques for Biomedical and Health Informatics, Dash, S., Acharya, B. R., Mittal, M., Abraham, A., Kelemen, A. (eds.), Cham, Springer International Publishing, 2020, 57–77.
[2] Cao, C. et al. Deep learning and its applications in biomedicine. Genomics, Proteomics & Bioinformatics, 2018, 16(1), 17–32. doi: 10.1016/j.gpb.2017.07.003.
[3] Lecun, Y., Bengio, Y., Hinton, G. Deep learning. Nature, 2015, 521(7553), 436–444. doi: 10.1038/nature14539.
[4] Carleo, G. et al. Machine learning and the physical sciences. Reviews of Modern Physics, 2019, 91(4), 045002. doi: 10.1103/RevModPhys.91.045002.
[5] Zhang, X.-D. A Matrix Algebra Approach to Artificial Intelligence, Singapore, Springer, 2020.
[6] Doupe, P., Faghmous, J., Basu, S. Machine learning for health services researchers. Value in Health, 2019, 22(7), 808–815. doi: 10.1016/j.jval.2019.02.012.
[7] Chen, I. Y., Pierson, E., Rose, S., Joshi, S., Ferryman, K., Ghassemi, M. Ethical machine learning in health care. ArXiv:2009.10576, Oct. 2020. Available: http://arxiv.org/abs/2009.10576 (accessed Dec. 14, 2020).
[8] Rajkomar, A., Dean, J., Kohane, I. Machine learning in medicine. The New England Journal of Medicine, 2019, 380(14), 1347–1358. doi: 10.1056/NEJMra1814259.
[9] Gangavarapu, T., Jaidhar, C. D., Chanduka, B. Applicability of machine learning in spam and phishing email filtering: review and approaches. Artificial Intelligence Review, 2020, 53(7), 5019–5081. doi: 10.1007/s10462-020-09814-9.
[10] Tekir, S., Bastanlar, Y. Deep learning: exemplar studies in natural language processing and computer vision. Data Mining – Methods, Applications and Systems, 2020. doi: 10.5772/intechopen.91813.
[11] Topol, E. J. High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine, 2019, 25(1), Art. no. 1. doi: 10.1038/s41591-018-0300-7.
[12] Ben-Israel, D. et al. The impact of machine learning on patient care: a systematic review. Artificial Intelligence in Medicine, 2020, 103, 101785. doi: 10.1016/j.artmed.2019.101785.
[13] Chalmers, I., Glasziou, P. Avoidable waste in the production and reporting of research evidence. The Lancet, 2009, 374(9683), 86–89. doi: 10.1016/S0140-6736(09)60329-9.
[14] Kannel, W. B., Doyle, J. T., McNamara, P. M., Quickenton, P., Gordon, T. Precursors of sudden coronary death. Factors related to the incidence of sudden death. Circulation, 1975, 51(4), 606–613. doi: 10.1161/01.CIR.51.4.606.
[15] Marinigh, R., Lip, G. Y. H., Fiotti, N., Giansante, C., Lane, D. A. Age as a risk factor for stroke in atrial fibrillation patients: implications for thromboprophylaxis. Journal of the American College of Cardiology, 2010, 56(11), 827–837. doi: 10.1016/j.jacc.2010.05.028.


[16] O'Mahony, C. et al. A novel clinical risk prediction model for sudden cardiac death in hypertrophic cardiomyopathy (HCM risk-SCD). European Heart Journal, 2014, 35(30), 2010–2020. doi: 10.1093/eurheartj/eht439.
[17] National Research Council (US) Committee on A Framework for Developing a New Taxonomy of Disease. Toward Precision Medicine: Building a Knowledge Network for Biomedical Research and a New Taxonomy of Disease, Washington (DC), National Academies Press (US), 2011.
[18] Woodruff, P. G. et al. T-helper type 2-driven inflammation defines major subphenotypes of asthma. American Journal of Respiratory and Critical Care Medicine, 2009, 180(5), 388–395. doi: 10.1164/rccm.200903-0392OC.
[19] Corren, J. et al. Lebrikizumab treatment in adults with asthma. The New England Journal of Medicine, 2011, 365(12), 1088–1098. doi: 10.1056/NEJMoa1106469.
[20] Sahoo, P. K., Thakkar, H. K., Lee, M.-Y. A cardiac early warning system with multi channel SCG and ECG monitoring for mobile health. Sensors, 2017, 17(4). doi: 10.3390/s17040711.
[21] Khan, S., Yairi, T. A review on the application of deep learning in system health management. Mechanical Systems and Signal Processing, 2018, 107, 241–265. doi: 10.1016/j.ymssp.2017.11.024.
[22] Brosch, T., Tam, R. Manifold learning of brain MRIs by deep learning. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2013, Berlin, Heidelberg, 2013, 633–640. doi: 10.1007/978-3-642-40763-5_78.
[23] Gulshan, V. et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA, 2016, 316(22), 2402–2410. doi: 10.1001/jama.2016.17216.
[24] Nagendran, M. et al. Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies. BMJ, 2020, 368, m689. doi: 10.1136/bmj.m689.
[25] Kim, M. et al. Deep learning in medical imaging. Neurospine, 2019, 16(4), 657–668. doi: 10.14245/ns.1938396.198.
[26] Dash, S., Acharya, B. R., Mittal, M., Abraham, A., Kelemen, A. (eds.), Deep Learning Techniques for Biomedical and Health Informatics, Springer International Publishing, 2020.
[27] Min, S., Lee, B., Yoon, S. Deep learning in bioinformatics. Briefings in Bioinformatics, 2017, 18(5), 851–869. doi: 10.1093/bib/bbw068.
[28] Liu, X. et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digital Health, 2019, 1(6), e271–e297. doi: 10.1016/S2589-7500(19)30123-2.
[29] Mahmud, M., Kaiser, M. S., Hussain, A., Vassanelli, S. Applications of deep learning and reinforcement learning to biological data. IEEE Transactions on Neural Networks and Learning Systems, 2018, 29(6), 2063–2079. doi: 10.1109/TNNLS.2018.2790388.
[30] Namuduri, S., Narayanan, B. N., Davuluru, V. S. P., Burton, L., Bhansali, S. Review – deep learning methods for sensor based predictive maintenance and future perspectives for electrochemical sensors. Journal of the Electrochemical Society, 2020, 167(3), 037552. doi: 10.1149/1945-7111/ab67a8.
[31] Maddalena, J. The role of machine learning in the rise of telemedicine. ATOS zData, Apr. 21, 2020. https://zdatainc.com/the-role-of-machine-learning-in-the-rise-of-telemedicine/ (accessed Dec. 14, 2020).
[32] Dimitrov, D. V. Medical internet of things and big data in healthcare. Healthcare Informatics Research, 2016, 22(3), 156–163. doi: 10.4258/hir.2016.22.3.156.
[33] Dicker, A. P., Jim, H. S. L. Intersection of digital health and oncology. JCO Clinical Cancer Informatics, 2018, 2, 1–4. doi: 10.1200/CCI.18.00070.


[34] Hazenberg, C. E. V. B., Bus, S. A., Kottink, A. I. R., Bouwmans, C. A. M., Schönbach-Spraul, A. M., Van Baal, S. G. Telemedical home-monitoring of diabetic foot disease using photographic foot imaging – a feasibility study. Journal of Telemedicine and Telecare, 2012, 18(1), 32–36. doi: 10.1258/jtt.2011.110504.
[35] Esteva, A. et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature, 2017, 542(7639), 115–118. doi: 10.1038/nature21056.
[36] Saha, S. et al. Automated detection and classification of early AMD biomarkers using deep learning. Scientific Reports, 2019, 9(1), Art. no. 1. doi: 10.1038/s41598-019-47390-3.
[37] Coravos, A., Khozin, S., Mandl, K. D. Developing and adopting safe and effective digital biomarkers to improve patient outcomes. npj Digital Medicine, 2019, 2(1), Art. no. 1. doi: 10.1038/s41746-019-0090-4.
[38] Boyapati, R. K., Kalla, R., Satsangi, J., Ho, G.-T. Biomarkers in search of precision medicine in IBD. The American Journal of Gastroenterology, 2016, 111(12), 1682–1690. doi: 10.1038/ajg.2016.441.
[39] Yue, T., Wang, H. Deep learning for genomics: a concise overview. ArXiv:1802.00810, May 2018. Available: http://arxiv.org/abs/1802.00810 (accessed Dec. 15, 2020).
[40] Liu, J., Li, J., Wang, H., Yan, J. Application of deep learning in genomics. Science China Life Sciences, 2020. doi: 10.1007/s11427-020-1804-5.
[41] Bakator, M., Radosav, D. Deep learning and medical diagnosis: a review of literature. Multimodal Technologies and Interaction, 2018, 2(3), Art. no. 3. doi: 10.3390/mti2030047.
[42] Kose, U., Alzubi, J. A. (eds.), Deep Learning for Cancer Diagnosis, Springer Singapore, 2021.
[43] Paul, D., Sanap, G., Shenoy, S., Kalyane, D., Kalia, K., Tekade, R. K. Artificial intelligence in drug discovery and development. Drug Discovery Today, 2020. doi: 10.1016/j.drudis.2020.10.010.
[44] Suzuki, K. Overview of deep learning in medical imaging. Radiological Physics and Technology, 2017, 10(3), 257–273. doi: 10.1007/s12194-017-0406-5.
[45] Zhang, S., Bamakan, S. M. H., Qu, Q., Li, S. Learning for personalized medicine: a comprehensive review from a deep learning perspective. IEEE Reviews in Biomedical Engineering, 2019, 12, 194–208. doi: 10.1109/RBME.2018.2864254.
[46] Arac, A., Zhao, P., Dobkin, B. H., Carmichael, S. T., Golshani, P. DeepBehavior: a deep learning toolbox for automated analysis of animal and human behavior imaging data. Frontiers in Systems Neuroscience, 2019, 13. doi: 10.3389/fnsys.2019.00020.
[47] Simsek, M., Obinikpo, A. A., Kantarci, B. Deep learning in smart health: methodologies, applications, challenges. In: Connected Health in Smart Cities, El Saddik, A., Hossain, M. S., Kantarci, B. (eds.), Cham, Springer International Publishing, 2020, 23–46.
[48] Del Fiol, G., Michelson, M., Iorio, A., Cotoi, C., Haynes, R. B. A deep learning method to automatically identify reports of scientifically rigorous clinical research from the biomedical literature: comparative analytic study. Journal of Medical Internet Research, 2018, 20(6), e10281. doi: 10.2196/10281.
[49] Huang, Y., Ma, X., Fan, X., Liu, J., Gong, W. When deep learning meets edge computing. In: 2017 IEEE 25th International Conference on Network Protocols (ICNP), Oct. 2017, 1–2. doi: 10.1109/ICNP.2017.8117585.
[50] Boldrini, L., Bibault, J.-E., Masciocchi, C., Shen, Y., Bittner, M.-I. Deep learning: a review for the radiation oncologist. Frontiers in Oncology, 2019, 9. doi: 10.3389/fonc.2019.00977.
[51] Punn, N. S., Sonbhadra, S. K., Agarwal, S. COVID-19 epidemic analysis using machine learning and deep learning algorithms. medRxiv, 2020, 2020.04.08.20057679. doi: 10.1101/2020.04.08.20057679.
[52] El Camino Health. Preventing falls. May 13, 2015. https://www.elcaminohealth.org/services/senior-health/specialty-programs/preventing-falls (accessed Dec. 14, 2020).


[53] Morris, R., O'Riordan, S. Prevention of falls in hospital. Clinical Medicine (London), 2017, 17(4), 360–362. doi: 10.7861/clinmedicine.17-4-360.
[54] Wang, L. et al. Preventing inpatient falls with injuries using integrative machine learning prediction: a cohort study. npj Digital Medicine, 2019, 2(1), Art. no. 1. doi: 10.1038/s41746-019-0200-3.
[55] Ravì, D. et al. Deep learning for health informatics. IEEE Journal of Biomedical and Health Informatics, 2017, 21(1), 4–21. doi: 10.1109/JBHI.2016.2636665.

Rohit Rastogi, Mamta Saxena, Sheelu Sagar, Neeti Tandon, T. Rajeshwari, Priyanshi Garg

Exploring Indian Yajna and mantra sciences for personalized health: pandemic threats and possible cures in twenty-first-century healthcare

Abstract: Researchers from scientific domains in every walk of life are now fascinated by Indian science, as it has benefited the human race through its logical temperament and its affinity to various beliefs; it has assimilated the good practices of all streams. The present manuscript is an effort by the author team to present the scientific aspects of Yajna and mantra science and their effects on disease control, pollution control, rainfall, animal and human productivity, mental fitness, and more. Yajna and mantra science have proved to be a boon to the human race, and the present manuscript propounds these facts scientifically. The team analyzed data gathered from different experiments and visualized the effects of these activities in a logical manner. The subjects chosen for the experiments underwent a predefined protocol of mantra, pranayama, and yoga for a specified duration. Python and other popular data-analysis tools were used in the subsequent analysis. A drastic change in air-quality components was recorded across readings taken at different time intervals, which suggests that the Yajna process can be adopted as a complete solution for present threats.

Keywords: Yajna, mantra, stress, depression, anxiety, mental fitness, air quality, PM

Rohit Rastogi, Dept of CSE, ABES Engineering College, Ghaziabad, Uttar Pradesh, India, e-mail: [email protected] Mamta Saxena, Ministry of Statistics, Program and Implementation-PI, e-mail: [email protected] Sheelu Sagar, Amity International Business School, Amity University, Noida, Uttar Pradesh, India, e-mail: [email protected] Neeti Tandon, Vikram University, Ujjain, Madhya Pradesh, India, e-mail: [email protected] T. Rajeshwari, Yagyopathy Researcher and Active Social Volunteer, Kolkata, West Bengal, India, e-mail: [email protected] Priyanshi Garg, ABES Engineering College, Ghaziabad, Uttar Pradesh, India, e-mail: [email protected] https://doi.org/10.1515/9783110708127-002


1 Introduction 1.1 Yajna and mantra: a great research by Indian ancestors The different dimensions of Yajna and mantra are described here. Many researchers have shown scientifically in their articles that these ancient Indian techniques are effective for the complete health and well-being of an individual [1].

1.2 Process of Yajna Simply put, it is a non-complex process in which the sciences of heat, light, and sound are combined in a Yajna Kund at a fire altar, and they work their energies on the oblations (Ahutis) of various herbs that we use today for many modern-day ailments. These herbs undergo processing in the Yajna fire, and their subtle extracts, in the form of volatile oils and other components, waft up, imbuing the upper atmosphere with their goodness [2]. The clouds thus loaded with herbs shower down good health and nourishment for all living on Earth. Does it sound surreal? Not so much, if you have science to back you!

1.3 Meaning of swaha in Yajna The word “swaha” is made up of two components: Svatva Hanana – iti svāhā. It comprises two roots: sva = self and ha = to abnegate or to erase [3, 4]. The meaning is: “I hereby offer myself (self-abnegation) for the benefit and service of all beings!” It is uttered while making oblations into the fire.

1.4 Mantras and their places in Yajna The team of authors selected this topic to explore what mantra is, the way it works on a human's physical, mental, and subtle levels during Yajna, and the way it helps an individual attain cosmic-level oneness. Here, the authors clarify a point: every mantra represents the energy of a god or goddess, and, as per mythology, every divinity has a plant origin [5]. For example, the Lakshmi mantra, “Shreem,” is turmeric-oriented. When chanting Gayatri with the Shreem samphut, Pujyavar Pt. Sriram Sharma Acharya asks us to use turmeric in oil for massaging and in our food. In his great work on Gayatri science, Gayatri MahaVigyan (the super science of Gayatri), he also asks us to imagine
Gayatri in yellow attire. When you consume fruits, oil, seeds, leaves, or roots, the mantra can get integrated into the blood, and if you chant the mantras, the divinity begins to live in the person's consciousness and mantra siddhi happens [6]. Nadi Shastra (vein science) is for those who are unable to perform austerities, like the big anushthan, due to their busy lives. This gave the authors an idea to search for a plant origin for the Gayatri mantra. They searched and found a plant, the Himalayan intellect tree, also known as Jyotishmati, Vanhiruchi, Katumbhi, and so on; in English, it is known as the staff tree. Dr. Vandana Shrivastava of DSVV-Haridwar has prescribed Jyotishmati oil for autistic and MR patients. Ayurveda doctors may also help with it: http://ijyr.dsvv.ac.in/index.php/ijyr/article/view/11/25. In the image below, one can see a volunteer of Yagyopathy explaining the science of Yajna, mantra, and pranayam to common people and making them aware of its scientific aspects (see Figure 1).

Figure 1: A Gayatri Pariwar volunteer explaining the scientific aspects of Yajna and mantra science to the common public in an easy way.

1.5 Acid rain and Yajna effects The rain water that we receive today is highly acidic in nature. Sulfurous compounds load each raindrop that falls on the Earth. This acid rain is stripping the Earth of harvest as well as of large forests each year. We are receiving acid rain, as
the sheer amount of industrial and vehicular pollution is traveling up the atmosphere and is absorbed in the clouds. This results in rains that are acidic in nature. To answer the question, what does one choose for the environment, acid or medicine? The wise will say that the choice is yours! [7].

1.6 Divine shower through Yajna: effect on rain The ambrosial divine showers come through a Yajna. Any greasy substance, when added to water, never enters the water; rather, it spreads on the surface undissolved. In the same way, the fumigations emitted in the Yajna as a greasy substance spread over the clouds as a sheath, the way humans apply oily or greasy lotions as insulation against the cool air of winter. These greasy fumigations of a Yajna likewise prevent the clouds from dispersing, and this prevention becomes the main cause of rain. A greasy substance freezes when it comes in contact with a cool environment along with the water molecules in it, just as ghee (clarified butter) freezes during winter. The ghee offered as oblations during Yajna, when the sheath of clouds increases their density, forces them to shower as rain. Today's environment has been polluted with toxic gases, mainly because of petrol- and diesel-powered vehicles. The only solution is “Yajna.” A video link on this issue is also available for detailed information [8]: https://www.youtube.com/watch?v=nqP3rf50tqk.

1.7 Mantra therapy and cures Antibacterial mantras can help prevent worldwide coronavirus-borne epidemics:
1. Coronavirus destroyer: Dedicate the sacrifice to Aditya for the destruction of the viruses called the worldwide coronavirus.
2. Disease prevention mantra – coronogenic disease prevention: For the prevention of diseases caused by the coronavirus, dedicate sacrifices to the Sun god.
3. Safety mantra – Apad-Rakshartham Ahuti: Dedicate this sacrifice to the global god for protection from crises.
Dedicate the Ahuti while saying swaha the second time itself; by not dedicating the Ahuti with the word swaha mentioned in the mantra, one can experience the power of the cosmic sound over bacterial and fungal infections and diseases (see Figure 2). Innovative use of water sacrifice in water: under special circumstances, special solutions have to be developed [9, 10]. As for the scientific effect of chanting mantras and performing Yajna, different parameters can be recorded by high-end instruments, and stress and other fatigue levels may be cross-checked quantitatively through these measurements.


Figure 2: An article depicting the cure of Covid-19 and the benefits of mantra chanting.

Figure 3: Vitaly Napadow, a neuroscientist at Harvard Medical School and Massachusetts General Hospital, studies how the brain perceives pain. To do that, he uses electroencephalography to track the brain-wave patterns of patients with chronic lower back pain (courtesy: National Geographic Magazine, Jan. 2020). (A World of Pain, National Geographic Magazine, vol. 1, issue 1, pp. 32–57 (January 2020)), Source: https://www.nationalgeographic.com/magazine/2020/01/scientists-are-unraveling-the-mysteriesof-pain-feature/

1.8 Organ chart and different parameters for a healthy lifestyle The organ chart plays an important role in the treatment of illnesses. When acupuncture and acupressure are given to a patient, the up-and-down cycle can help a lot (see Figure 5). The same goes for the month that people are born in, known as the circadian rhythm/clock. If we give treatment as per the up-cycle time, can the chances of healing be increased? Normally, experts suggest doing a Yajna at a fixed time, and they take one or two cases and try this method. When the authors' team takes up cases, their first correction is a lifestyle change, then a positive attitude, followed by understanding the philosophy of Yagyopathy (how, why, and when). Then comes care for a proper diet, and, if necessary, dietary supplements are suggested. It is a circular process where, initially, the lifestyle changes. One should suggest timings


Figure 4: To ease his pain during surgery to remove a pin from his pelvis, Brent Bauer focuses on a virtual reality game called Snow World, which involves throwing snowballs at snowmen and penguins. Orthopedic trauma surgeon Reza Firoozabadi at UW Medicine’s Harborview Medical Center in Seattle was testing the effectiveness of the game, developed by the University of Washington’s Hunter Hoffman, a pioneer in VR for pain relief. Bauer broke numerous bones, including his pelvis, when he fell three stories (courtesy: National Geographic Magazine, Jan. 2020). (A World of Pain, National Geographic Magazine, vol. 1, issue 1, pp. 32–57 (January 2020)), Source: https://www.nationalgeographic.com/magazine/2020/01/scientists-are-unraveling-the-mysteriesof-pain-feature/

according to the circadian rhythm and the Omkara meditation. In Mohammedan beliefs too, Bhramari (a breathing exercise) is suggested, so that there is a higher probability of returning to normal [10, 11].

2 Literature survey To answer the question of how science is connected with Hindu traditions, logic, implications, and scientific derivations can be used to propound and scientifically prove them. These scientific arguments related to Hindu traditions are as follows: (a) No marriage in the same Gotra: In an informative program on genetic diseases on the Discovery channel, an American scientist said that there is no cure for genetic disease except the separation of genes. People of the same Gotra should not marry because genes do not separate among close relatives, and there is a 100% chance of gene-linked diseases, such as hemophilia, color blindness, and albinism. Thus, it is argued, the Hindu religion wrote about genes and DNA thousands of years ago (Rastogi, R. et al., 2018a and 2018b) [12]. There are a total of seven Gotras in the Hindu philosophy, and people of the same Gotra cannot get married, so that their genes remain separate (split). The scientist in the aforesaid program said that today the whole world has to believe that Hinduism is the only religion in the world that is science-based!


Figure 5: Organ chart to display the treatment of illness.

(b) Touching the feet of elders: According to Hindu belief, whenever you meet an elder, you must touch their feet. We also teach this to children so that they respect elders. The scientific reasoning for this is that the energy released from the brain completes a cycle through the hands and legs. This is called the flow of cosmic energy. There are two modes of energy flow: either from the feet of the elder to the hands of the younger, or from the hands of the younger to the feet of the elder. (c) Clapping during Aarti and prayer at Hindu temples: In any Hindu temple, one can see some people clapping at the time of Aarti, and then others join in. The scientific explanation is that a clap strikes the four fingers of the right hand on the palm of the left hand with strong pressure, in such a way that the pressure is complete and the sound is good; this type of clap stimulates the pressure points on the left palm for the lungs, liver, and gall bladder. At the same time, the pressure points of the right-hand fingers press those for the kidneys, small intestine, large intestine, and the sinuses. This causes the blood flow to these organs to become rapid. Such clapping should proceed until the palm turns red. The grand ancient Indian lifestyle and its scientific significance: The world could have chosen Adaab during the SARS-CoV-2 pandemic, but why did they choose
“Namaste”? This is because only one hand is used for “Adaab,” while the other hand could still hold a dagger. With “Namaste,” both hands are in front, so there is no cheating. This is a philosophical approach. Every tradition of Indian culture has a scientific and spiritual basis.

3 Experimental setup and methodology The experiments to support the science of Yajna and mantra were conducted following a well-defined 45 min protocol during which mantra chanting, pranayama, yoga, and fitness activities were carried out. Laughing therapy and some alternative therapies, like acupressure and pranic healing, were also used at different time intervals.

3.1 Protocol followed Readings were recorded quantitatively by the research team members under different parameters and analyzed using various data analytics tools like Python, Tableau, SPSS, and Excel. The subjects were asked to fill in a small consent form to indicate their willingness to participate in the aforesaid experiments, and they were assured that their identity and data would not be commercialized or misused. The whole experiment was conducted over online platforms like Google Meet, Zoom, and MS Teams, as per the convenience of the subjects during the lockdown period, and their scores were recorded. The activities were conducted in India during the peak of the SARS-CoV-2 pandemic, from 1 April 2020 to 31 August 2020, a period of around 150 days. During this period, a strict lockdown was imposed in India to save the masses from third-stage infection of the SARS-CoV-2 spread. The subjects were asked to follow a strict protocol for around 45 min, which included some Asans (sitting, standing, lying front, and lying back postures), a set of mantra chants, breathing exercises (pranayam), and alternative therapies like the laughing technique and Shantipath. The subjects' data were recorded pre and post the experiment, and, accordingly, a comparative study was conducted to check the efficacy of the methodology adopted [13–15].
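The pre/post comparison described above lends itself to a paired analysis. The sketch below (plain Python; the six subject scores are illustrative placeholders, not the study's actual data) shows one conventional way to quantify a pre/post difference with a paired t statistic:

```python
import statistics

def paired_t_statistic(pre, post):
    """Paired t statistic: t = mean(d) / (stdev(d) / sqrt(n)),
    where d are the per-subject (post - pre) differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample standard deviation
    return mean_d / (sd_d / n ** 0.5)

# Hypothetical stress-questionnaire scores for six subjects:
pre_scores = [18, 22, 25, 19, 24, 21]   # before the 45 min protocol
post_scores = [14, 17, 21, 15, 20, 18]  # after the protocol

t = paired_t_statistic(pre_scores, post_scores)
print(round(t, 2))  # -15.49: a large negative t, i.e. scores dropped
```

A large |t|, compared against the t distribution with n − 1 degrees of freedom (e.g. via SPSS or scipy.stats), would indicate that the pre/post difference is unlikely to be chance; the scores above are placeholders only.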


3.2 Mudras used Surya Mudra, Pran Mudra, Apan Mudra, and Gyan Mudra were used, each for 5 min daily. Surya Mudra gives knowledge, name, and fame; Pran Mudra increases the vital power; Apan Mudra helps in the release of unnecessary air and extra waste; and Gyan Mudra helps one concentrate and access knowledge (see Figures 6–9).

Figure 6: Gyan Mudra for increasing brain power.

Figure 7: Apan Mudra for releasing the gaseous issues in human body.

Figures 10 and 11 reflect the group Yagya and Yoga processes; they help in mass rectification. Yajna items/components for different properties: For people with different Satogun, Rajogun, and Tamogun natures, different items and components are used in the Yajna. Satogun – mala: Tulsi; Asan: Kush; flower: white; vessel: copper; cloth: cotton/khadi; direction: east; ghee in the Deepak: cow's; Tilak: Chandan; Samidha in Yajna: peepal, banyan, goolar. Havan Samagri components: white Chandan, agar,


Figure 8: Pran Mudra for energy conduction and eye and immune system improvement.

Figure 9: Surya Mudra for weight loss.

Figure 10: Performing Yajna rituals along with different activities.

Chhoti Ilayachi, Laung, Shankhpushpi, Brahmi, Shatavar, Khas, Sheetal Chini, Aamla, Indrajau, Vanshlochan, Javitri, Giloy, Bach, Netrawala, Mulhathi, Kamalkeshar, Bad ki Jata, coconut, almond, Dakh, Jau, Mishri.


Figure 11: Group yoga and mantra chanting for mental and physical fitness.

No salty items are used in the Yajna. Scientific reasons: In the Yagya, no salt-related elements are used because they convert into chlorine. This chlorine is as harmful for humans as it is for bacteria. Ghee is mainly used in the Yagya. It has two jobs: it lights the Yajna fire and maintains its temperature at 200–300 °C; secondly, this ghee, after spreading around the whole Yajna fire, gives it more strength to light up. The main benefit of the Yagya is the purification of air. Due to the fire, the air around the Yajna becomes light and rises; more purified air then takes its place [16–18].

Figure 12: The chemical composition of salt (NaCl).

4 Results and discussions To see the effect of Yagya on electrical and magnetic radiation, experiments were conducted by the authors' team. Nowadays, not a single day of our life can be imagined without devices such as cell phones, laptops, tablets, and televisions. The reality is that today we are living in an ocean of radiation: all these devices emit electromagnetic radiation, which is harmful and causes ill effects, including cancer, in the body. The WHO's International Agency for Research on Cancer conducted research by collecting data from 13 countries and found an increased risk of brain and neck cancer among people using mobile phones for more than 10 years.


Yagya is a powerful solution explained in Indian scripture to protect us against these cosmic rays. So, under the Dev Sanskriti University, Haridwar, the Yagyopathy team conducted many experiments in which the radiation at different places, before and after Yajna, was measured. It was found that the level of radiation decreased significantly after the Yajna performed with the Paryavaran (environment) mantras. Let us look at the results of some experiments. The experiments to find the effect of Yajna were conducted from time to time by the team; the following results were obtained, which were highly encouraging (see Table 1).

Table 1: The effect of the Yajna process on electromagnetic radiation (experiments conducted in Vikaspuri, Delhi, NCR, India). Columns: distance from the Yajna spot (in feet) and magnetic flux (in microtesla), before and after the Yajna.
Experiment – 1: When a Yajna was performed in a house in the Vikaspuri area of Delhi, it was seen that the radiation, which was 1.1 microtesla at a distance of 5 inches from the mobile phone, decreased to zero after 30 min of the Yajna. The data of this experiment is given in Table 1.

Experiment – 2: Similarly, in January 2020, a 24-kund Gayatri Mahayagya took place in an open area at Vipin Garden Extension. At 50 meters from the Yajna spot, the magnetic radiation was 16 microtesla before the Yajna. As soon as the samagri pouring began, after 2 min, it started to decrease, and it became zero within a few seconds. The radiation continued to be zero till the evening.

Experiment – 3: In an experiment conducted by the Yagyopathy team in Gurugram, it was seen that the radiation kept decreasing even after the samagri pouring for the Gayatri and Mahamrityunjaya mantras in the Yajna was completed. Experiment results are shown through a graph in Figure 13. A significant reduction in energy level with respect to time and distance was observed, and it was at a maximum at the starting time (see Figure 13).
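The before/after readings in these experiments reduce to a simple percentage-drop computation. The helper below is an illustrative Python sketch (not the team's actual analysis code), applied to the two baseline values quoted above:

```python
def percent_reduction(before, after):
    """Percentage drop in a radiation reading after the Yajna."""
    if before == 0:
        raise ValueError("baseline reading must be non-zero")
    return 100.0 * (before - after) / before

# Readings reported in Experiments 1 and 2 (microtesla):
print(percent_reduction(1.1, 0.0))   # Experiment 1: 1.1 uT -> 0 after 30 min (100.0)
print(percent_reduction(16.0, 0.0))  # Experiment 2: 16 uT -> 0 within seconds (100.0)
```

The same function applied to a partial drop (e.g. 16 µT falling to 4 µT) would report 75.0, which is how partial reductions such as those in Experiment 3 can be expressed.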


Figure 13: The effect of the Yajna process at different time instants with respect to distance from the Yajna place.

Radiation was assessed before and after the Yajna at distances of 3 feet, 6 feet, 9 feet, and 12 feet from the Yajna spot. It was seen that there was a significant decrease in the radiation levels after the Yajna compared to before it. Even after 24 h, there was a drop of 54 percent at 3 feet and 77 percent at 12 feet, although zero radiation was not seen here at any time.

Experiment – 4: In another experiment, environmental revision mantras were also invoked along with the Gayatri and Mahamrityunjaya mantras in the Yajna at Gautam Nagar in Delhi, and due to this, there was a significant reduction in radiation levels, which can be seen in the graph shown in Figure 14.

Figure 14: Radiation in microtesla, pre and post the Yajna process (experiment conducted in NCR, India); readings were taken before the Yajna and one day after the Yajna, with a significant reduction in radiation. The red line indicates post-Yajna radiation and the blue line shows pre-event readings.

Readings taken in the morning, afternoon, and evening show that there was more radiation in the evening than in the previous morning; after the Yajna was performed, the levels became lower the next day and almost zero by the evening.


Figure 14 demonstrates the effect of the Yajna, with chanting of Vedic mantras, on the radiation levels inside a house, related to environmental safety.

5 Major experiment: Grahe–Grahe Gayatri Yagya: environmental experiments, Gautam Budh Nagar, Uttar Pradesh Thousands of families performed Yajna between 9:00 AM and 11:00 AM across the entire Gautam Budh Nagar on 30 October 2020 under the Grahe–Grahe Gayatri Yagya, organized by Akhil Vishwa Gayatri Parivar. The smoke from this Yajna was released from 9:00 to 11:00 and went into the atmosphere. In Gautam Budh Nagar, three live monitoring stations, at Sector 1, Sector 62, and Sector 116, took the pollution-related data on the same day, and the results were analyzed. The results are shown in Figure 15. Here, the day is divided into three parts: the pollution level is shown before the Yajna at 8:00 AM, and after the Yajna at 2:00 PM and at 8:00 PM. PM 2.5, PM 10, NO, NO2, and NOx saw a significant decline compared to the level before the Yagya. In Sector 116, at all three time points, the levels of NO, NO2, and NOx were almost equal; that is, the smoke of the Yagya had no effect on the level of these polluting gases. The decrease in PM level shows the same effect. Hence, the misleading assumption that Yajna adds to pollution is not confirmed by this study; on the contrary, there was a drop in pollution levels. It may be noted that this data was taken from the government monitoring stations. The readings in Figure 15 indicate that the Yajna significantly reduced the different parameters responsible for pollution in air quality. Around 24 thousand families performed this ritual to purify the atmosphere and diminish the negativity during the SARS-CoV-2 pandemic (see Figure 16).
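The three-time-point comparison above can be sketched in Python as follows; the readings used here are illustrative placeholders standing in for the monitoring-station values plotted in Figure 15:

```python
# Hypothetical monitoring-station readings at one sector;
# 8:00 is before the Yajna, 14:00 and 20:00 are after.
readings = {
    "PM2.5": {"8:00": 321.8, "14:00": 179.9, "20:00": 142.9},  # ug/m3
    "PM10":  {"8:00": 475.6, "14:00": 379.4, "20:00": 258.9},  # ug/m3
}

def drop_since_morning(series):
    """Percent change of each later reading relative to the 8:00 baseline."""
    base = series["8:00"]
    return {t: round(100.0 * (v - base) / base, 1)
            for t, v in series.items() if t != "8:00"}

for pollutant, series in readings.items():
    print(pollutant, drop_since_morning(series))
```

Negative percentages indicate a drop relative to the 8:00 AM (pre-Yajna) baseline; the same computation can be repeated per sector and per pollutant for NO, NO2, and NOx.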

6 Recommendations The scientific aspects of Indian culture are recommended here. The surprising culture of India and its significance have been widely accepted across the globe. Millions of people in India join together at the Kumbh Mela, the Pushkar fair, Vaishno Devi Dham, the Golden Temple, the Jagannath Rath Yatra, Tirupati, Sabarimala, Badrinath, Kedarnath, Rameswaram, and Gangasagar; they take a holy bath in the Ganges, perform Durga Puja, travel to Kavar,

Figure 15: Emissions of PM2.5 (µg/m³), PM10 (µg/m³), NO (µg/m³), NO2 (µg/m³), and NOx (ppb) at Sector 1, Sector 62, and Sector 116 of Noida, measured at 8:00, 14:00, and 20:00 on 30 October 2020 (the experiment was carried out during the Grahe–Grahe Gayatri movement – Yajna at every home – during the pandemic, at different sectors of Noida, NCR, India).

Figure 16: The volunteers of Gayatri Parivaar created a record by performing the Yajna ceremony on the same day at the same time, 30 October 2020, at different places in the NCR and nearby regions.

perform Navratri, Chardham, visit Faith Vinayak, Siddhi Vinayak, the 12 Jyotirlingas, etc., for thousands of festivals and fairs. They travel like a river, and at one place, millions of people live together, eat together, take a holy bath, and use the washroom at the same time. There is not a single virus spread, no typhoid or coli epidemic, or any cholera outbreak. This is India, an incredible India! One should be open-minded and curiously try to find the scientific aspects of this civilization that has been continuing for thousands of years. Strange eating habits of some countries should be stopped. The author team in India is proud that they are born in nature, are nature lovers, practice Indian culture, and live in a virtuous land. If you do not respect nature, then nature will destroy you. Only Sanatana Dharma and the Sanatana way of life can save the world. Look to eternal life, and the world, nature, and your family will all be safe [19–23].

7 Novelties Novel sciences of the Indian culture and Gayatri have been quantitatively and qualitatively exhibited in the manuscript. All sects of the Indian religion accept the importance of Gayatri and Yajna with one voice. The most accepted basis of Indian
religion is the Vedas and the seed of the Vedas is “Gayatri.” Similarly, every Hindu Yajna is worshipped. From the Brahmin to the Chandal, no one is married without fire, and no Indian religious man ends his existence without Yajnagni (Chita). Holi is the most popular festival of collective sacrifice (Yajna: a holistic healing process, Yugrishi Shriram Sharma Acharya). There is progress of mind, speech, and wisdom when Yajna and Yajnapati are worshipped (Anshjurveda Yajna: An equitable treatment process, Yugrishi Sri Ram Sharma Acharya). Yajna is actually a holistic healing process and, indirectly, it is the first lamavayuka purification of Havan. The air of the “Yajna” space is heated and lighted by the heat of the fire; the pure air from around it comes to fill the vacancy and germs of diseases die from the heat of the fire (Yugrishi Shriram Sharma Acharya, Yajna ka Gyan Vigyan) [24–25].

8 Future research directions “Meditation with Pranayam” is a weapon to awaken the inner command. The power of our brain keeps scattering; to concentrate it in one place is meditation. If the rays of the sun are gathered through a small glass lens, a fire starts, and this fire can burn down an entire city. In the same way, the powers of our brain keep falling apart; if we collect them, they become a sum beyond counting (Yugrishi Shriram Sharma Acharya, Dhyaan Ki Prakriya) [26–28].

9 Conclusions In Indian culture, the teaching is “Weakness is Sin.” Most people suffer from physical weakness, but even more suffer from mental weakness, the ultimate weakness of mind, because its powers are the more frustrated (Yugrishi Shriram Sharma Acharya, Gayatri – The Power of Today). The manuscript demonstrates the scientific aspects of spiritual activities and rituals. “Gayatri” is the goddess of wisdom, and “Yajna” is the god of Satkarma. Gayatri is the power of God; he who goes near Gayatri remains pure. Purification of mind is necessary for autism, and the Gayatri mantra is unique for purification of the mind. Gayatri chanting should be considered the first step to attain God (Gayatri MahaVigyan by Yugarishi Shriram Sharma Acharya).


References
[1] Cameron, A. J., Zimmet, P. Z., Dunstan, D. W., Dalton, M., Shaw, J. E., Welborn, T. A., Owen, N., Salmon, J., Jolly, D. Overweight and obesity in Australia: the 1999–2000 Australian Diabetes, Obesity and Lifestyle Study (AusDiab), 2003. https://doi.org/10.5694/j.1326-5377.2003.tb05283.x
[2] Chaturvedi, D. K., Satsangi, R. The correlation between student performance and consciousness level. In International Conference on Advanced Computing and Communication Technologies (ICACCT-2013), Asia Pacific Institute of Information Technology SD India, Panipat (Haryana), Souvenir p. 66, proc., 2013, pp. 200–203.
[3] Chaturvedi, D. K., Arya, M. A study of correlation between consciousness level and performance of worker. Industrial Engineering Journal, 2013a, 6(8), 40–43.
[4] Chaturvedi, D. K., Arya, M. Correlation between human performance and consciousness. In IEEE International Conference on Human Computer Interaction, 23–24 Aug. 2013, Saveetha School of Engineering, Saveetha University, Thandalam, Chennai, India.
[5] Gunavathi, C., Premalatha, K. A comparative analysis of swarm intelligence techniques for feature selection in cancer classification. The Scientific World Journal, 2014, 14, 12.
[6] Jain, G. Blog: Hawan for Cleansing the Environment, 2017. https://medium.com/@giftofforest192/hawan-for-cleansing-the-environment-a9e1746e38e0
[7] Kim, K. J., Tagkopoulos, I. Application of machine learning in rheumatic disease research. The Korean Journal of Internal Medicine, 2019, 34, 2.
[8] Lahoty, P., Rana, M. Agnihotra organic farming. Popular Kheti, 2013, 1(4), 49–54.
[9] Mahajan, P. Application of pattern recognition algorithm in health and medicine: a review. International Journal of Engineering and Computer Science, 2016, 5(5), 16580–16583.
[10] Shenwai, M. R., Tare, K. N. Integrated approach towards holistic health: current trends and future scope. International Journal of Current Research and Review, 2017, 9(7), 11–14.
[11] Mistry, R., Tanwar, S., Tyagi, S., Kumar, N. Blockchain for 5G-enabled IoT for industrial automation: a systematic review, solutions, and challenges. Mechanical Systems and Signal Processing, 2020, 135, 1–19.
[12] Rastogi, R., Chaturvedi, D. K., Sharma, S., Bansal, A., Agrawal, A. Audio visual EMG & GSR biofeedback analysis for effect of spiritual techniques on human behaviour and psychic challenges. In Proceedings of the 12th INDIACom (INDIACom-2018), 2018a, pp. 252–258.
[13] Rastogi, R., Chaturvedi, D. K., Verma, H., Mishra, Y., Gupta, M. Identifying better? Analytical trends to check subjects' medications using biofeedback therapies. International Journal of Applied Research on Public Health Management (IJARPHM), 2020a, 5(1), Article 2, 14–31. doi: 10.4018/IJARPHM.2020010102. https://www.igi-global.com/article/identifying-better/240753
[14] Rastogi, R., Gupta, M., Chaturvedi, D. K. Efficacy of study for correlation of TTH vs age and gender factors using EMG biofeedback technique. International Journal of Applied Research on Public Health Management (IJARPHM), 2020b, 5(1), Article 4, 49–66. doi: 10.4018/IJARPHM.2020010104.
[15] Rastogi, R., Chaturvedi, D. K., Satya, S., Arora, N., Gupta, M., Verma, H., Saini, H. An optimized biofeedback EMG and GSR biofeedback therapy for chronic TTH on SF-36 scores of different MMBD modes on various medical symptoms. In Studies in Computational Intelligence, vol. 841: Hybrid Machine Intelligence for Medical Image Analysis, S. Bhattacharya et al. (eds.), 2020c. doi: 10.1007/978-981-13-8930-6_8.
[16] Saxena, M., Sengupta, B., Pandya, P. Controlling the microflora in outdoor environment: effect of Yagya. Indian Journal of Air Pollution Control, 2008, 8(2), 30–36.


[17] Saxena, M., Kumar, B., Matharu, S. Impact of Yagya on particulate matters. Interdisciplinary Journal of Yagya Research, 2018, 1(1), 01–08.
[18] Saxena, M., Sharma, S. K., Muralidharan, S., Beriwal, V., Rastogi, R., Singhal, P., Sharma, V., Sangam, U. Statistical analysis of efficacy of Yagya therapy on type-2 diabetic mellitus patients on various parameters. In Proceedings of the 2nd International Conference on Computational Intelligence in Pattern Recognition (CIPR-2020), Institute of Engineering and Management, Kolkata, West Bengal, India.
[19] Sharma, S. R. Shabd Brahma – Naad Brahm, BrahmVarchas. Shantikunj, 2015a, 98.
[20] Sharma, S. R. Shabd Brahma – Naad Brahm, BrahmVarchas. Shantikunj, 2013, 55.
[21] Sharma, S. R. Gayatri MahaVigyan, BrahmVarchas. Shantikunj, 2015b, 235.
[22] Sharma, S. R. Shabd Brahma – Naad Brahm, BrahmVarchas. Shantikunj, 2015c, 34.
[23] Sharma, S. R. Shabd Brahma – Naad Brahm, BrahmVarchas. Shantikunj, 2015d, 61.
[24] Shrivastava, V., Batham, L., Mishra, A. Yagyopathy (Yagya Therapy) for various diseases – an overview. Ayurveda evam Samagra Swasthya Shodhamala, 2019, 1(1):2, pp. 1–11. https://www.researchgate.net/publication/339484294_Yagyopathy_Yagya_Therapy_for_Various_Diseases_-_An_Overview
[25] Srikanth. 15 Benefits of Machine Learning in Health Care, 2019. Blog article: https://techiexpert.com/benefits-of-machine-learning-in-health-care/
[26] Srivedmata Gayatri Trust, Gayatri Parivar, UK. Yagya's Effect on the Environment, blog article, 2011. https://home.awgpuk.org/index.php/yagya/42-yagya-s-effect-on-the-environment
[27] Strate School of Design, Paris and Singapore. IoT applications in healthcare: supporting robust health and medical practices, 2016. Blog article: https://www.strate.education/gallery/news/healthcare-iot
[28] Verma, S., Mishra, A., Shrivastava, V. Yagya therapy in Vedic and Ayurvedic literature: a preliminary exploration. Interdisciplinary Journal of Yagya Research, 2018, 1(1), 15–20. http://ijyr.dsvv.ac.in/index.php/ijyr/article/view/7/13

Bhanu Chander

Advanced deep learning techniques and applications in healthcare services

Abstract: In recent times, healthcare informatics has become a rising field of interest for researchers worldwide because of its critical impact on society. The healthcare sector is unlike any other industry: it is a high-priority segment in which patients expect the highest standard of care, regardless of cost. In addition, assimilating the various kinds of knowledge that emerge from current biomedical studies (body-sensor readings, text and images, electronic health records, and complex high-dimensional artifacts) and merging them with biomedical data remains a crucial point of discussion in transforming healthcare. The inclusion of traditional statistical, data mining, artificial intelligence (AI), and machine learning (ML) techniques in healthcare has yielded good results. However, there are issues with high-dimensional data: typically, most data must first be processed to obtain valuable features before forecasting or clustering models can be built on it. The latest developments in deep learning (DL) methodologies offer novel, efficient ways to learn representations continuously from extensive, complex historical data. DL's success in other real-world applications also promises exciting and useful results in healthcare; it could be a medium for interpreting massive biomedical information for better human health. This chapter presents the state-of-the-art deep learning schemes applied to healthcare services and then explains the factors that affect the adoption of DL in healthcare. Besides, we evaluate the strengths and weaknesses of existing works and explore advanced applications of DL in healthcare. We conclude the chapter with known challenges and discuss possible future directions.

Keywords: healthcare, deep learning, biomedical records, EHR, medical image analysis

Bhanu Chander, Department of Computer Science and Engineering, Pondicherry University, Pondicherry 609605, India, e-mail: [email protected], https://orcid.org/0000-0003-0057-7662
https://doi.org/10.1515/9783110708127-003

1 Introduction

With the increase in the human population, health informatics has emerged as one of the new research areas in the research community because of its real-time impact on society [1–3]. Moreover, owing to continuous innovations in computer software and hardware, vast quantities of healthcare data records are available and accessible from patients, paramedical industries and institutions, medical societies, and many others. This easy access offers a unique opportunity for researchers, scientists, and data science technologies to train, understand, perceive, and improve the quality of health services. We now live in the big data era, where data is more significant than ever before. In healthcare management, patient and treatment records play a crucial role in the diagnosis process [2–6]. Traditionally, healthcare data is collected and stored in well-designed hospital information systems and picture archiving and communication systems, which contain medical records, lab test results, treatment procedures, diagnosis methods, demographic information, and images, along with medications. Rapid technological development in sensor nodes, medical websites, genetic tests, IoT-based wearable devices, and drug discovery strengthens the scope, importance, and classification of healthcare data. Extracting the important features of such data is necessary for analysis, decision-making, disease prediction, treatment, and prescription. Big data and data mining afford healthcare organizations, physicians, and caretakers the opportunity to make analytical judgments that improve patient behavior, healthcare verdicts, and disease supervision [1–3, 5–8]. It is therefore a singular challenge to extract valuable facts and generate robust evidence from healthcare data. Most healthcare experts use data analytics tools for big data services and related applications. Artificial intelligence makes decisions by learning reasoning-based tasks that imitate human intellectual behavior. AI brings a paradigm shift to healthcare through the rapid enhancement of analytical procedures. It is applied to healthcare data in numerous ways, for instance, machine learning (ML) models for structured or labeled data and deep learning (DL) for unstructured or unlabeled data [3–9].
AI models are effectively applied to early detection and diagnosis, treatment, and prognosis forecasting. Because of the high impact of AI-based models on healthcare management, recent health conferences have debated whether AI-based doctors will sooner or later replace human physicians. We believe that humans will not be replaced by machines or machine-based technologies in the foreseeable future [8–10]. However, AI plays a huge role in assisting human physicians to make better clinical decisions, and in the handling of specific parts of a treatment it may even replace human judgment. As mentioned above, the increasing quantity of healthcare data, coupled with continuous enhancements in big data analytics, has made optimistic AI applications in healthcare possible. Highly developed AI models can effectively reveal suitable hidden information in enormous health data records, which can help or assist a human physician to make accurate clinical decisions [9–13]. The main motivations for employing AI in healthcare management are as follows. Fundamentally, AI employs practical sets of rules to learn high-dimensional features from huge amounts of health data records and uses them to support clinical decisions. Moreover, AI can learn and self-correct based on feedback, which enriches model accuracy [1–6].


Additionally, AI models effectively reduce the diagnostic and remedial faults that are inevitable in human medical practice. AI also summarizes helpful information from many patients to make real-time interpretations of health risks and to offer warnings and predictions. Before being employed in healthcare applications, some AI systems need to be trained with data records generated from clinical activities, such as screening and diagnosis, so that they can learn from similar sets of subjects, associations among subject features, and outcomes [4–8]. These clinical facts often exist in, but are not limited to, images, demographics, physical examinations, medical notes, and clinical laboratories. Unstructured narrative notes and images, such as physical inspection notes and clinical laboratory results, are primary data sources that cannot be analyzed directly. The rapid developments in the pharmaceutical industry, medicine, and healthcare monitoring are also significant; they result from improving methodological policies and devices that make the collection of data records possible for preprocessing and analytics. More importantly, these policies and devices offer convenient ways to store massive volumes of data with high-dimensional features for subsequent decision-making. For example, high-quality cameras are now employed to detect patients' movements or to monitor patients, and wearable devices such as necklaces, badges, and smart materials for measuring blood flow create massive data in the form of text, sound, images, and signals. It is estimated that by the year 2025, nearly 40% of the world's data storage will be medical images, and this share will continue to increase in the years to come. Hence, this extensive but undervalued data produced by medical institutions calls for attention [4–10].
Therefore, the scope of medical information is too large for a complete analysis with the existing diagnostic utilities, which cannot fully exploit the facts that exist in big data. Outdated ML procedures have limited capacity to exploit big data; in most cases the result becomes complex and then unacceptable. At present, ML and DL attract the attention of industry and academia, and their importance to healthcare management cannot be overemphasized. DL in particular is employed to solve many long-standing problems where big data needs to be scrutinized. Countless striking outcomes have been verified in dissimilar fields such as speech recognition, natural language processing, electronic health records (EHR), and image processing [1–4, 8–12]. Research results in the healthcare industry over the period 2010–2020 clearly show that DL achieved remarkable results; hence, DL receives increasing attention from various health domains seeking accurate results. Some DL applications in the healthcare sector address cancer treatment, brain activity, hearing treatment, eye-related analysis, gait analysis, and heart diagnosis. These operations make treatment easier for both patients and physicians, with quicker and more productive monitoring. The progression of DL in treatment has led to the use of radionuclear imaging, the integration of thermometers and stethoscopes into computed tomography (CT), ultrasound analytical strategies, radiation therapy, and ventilators, among others. These models have turned orthodox patient care into remarkably adaptive behavior and helped counter several feared viruses [2–8, 10–16]. There is no doubt that in the coming future, healthcare equipment, along with the related treatment, will make significant progress in many spheres, becoming more active with qualitative benefits. Before moving to DL, there is another critical topic that needs to be known: the artificial neural network (ANN), also known as the neural network (NN), a simple example of an ML model. An ANN is constructed with three layers: the input layer, which receives input data; a single hidden layer, which handles data transmission and processing between the input and output layers; and finally the output layer, which produces the results. The main objective of an ANN is to progressively estimate a function that maps an input to a matching output through an iterative method [10–14]. Over time, the ANN's estimation functions evolved from solving simple problems to the innovative concept of DL, which holds more than one hidden layer and can analyze text, images, signals, and other multifaceted data types. ANN and DNN are both active research domains of ML; the ultimate aim is to make technologies and machinery think and reason like humans by mirroring the structure of the human mind, concentrating on learned data representation (DR), which differs from task-specific methods [1–3, 12–16]. DL itself is not a new field of study; its history started two decades back. Nowadays, however, DL beats most ML models, with momentous developments in pattern recognition, natural language processing, and image processing. At present, DL is successfully employed in various domains and offers inspiring outcomes.
The triumph of DL in computer vision (CV) and image processing is especially useful in medical image processing, though it should be remembered that healthcare is more than medical image analysis [1–4, 6–10]. As a result, many researchers and institutions apply DL to analyze different kinds of data and health records to try to obtain more accurate results. The biological and medical strategies, treatments, and applications that are capable of producing large volumes of information in the form of images, sounds, scripts, and charts are mechanisms for producing big data, as discussed in the paragraphs above. Moreover, DL's modernization is an emerging trend in the wake of big data. ML produces good results when most of the data features are recognized by human experts; experts in a specific area have sufficient knowledge about their field. However, human experts are limited by their bias, large disparities across annotators, availability, and fatigue. DL provides excellent results when the available data is enormous: DL's hidden layers extract high-dimensional patterns for exact and accurate results.
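The three-layer ANN described above, an input layer mapped through a hidden layer to an output, can be sketched as a forward pass. This is a minimal illustration; the layer sizes, sigmoid activation, and random weights are hypothetical choices, not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, W1, b1, W2, b2):
    """Three-layer network: input layer -> one hidden layer -> output layer."""
    h = sigmoid(x @ W1 + b1)       # hidden layer transforms the input
    return sigmoid(h @ W2 + b2)    # output in (0, 1), e.g. a risk score

# hypothetical sizes: 4 input features, 3 hidden units, 1 output
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)
x = rng.normal(size=(2, 4))        # a batch of 2 feature vectors
y = mlp_forward(x, W1, b1, W2, b2)
```

A deep network simply stacks more such hidden layers between input and output.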


2 Why deep learning?

ML is a part of the AI module. It learns associations from datasets without having them explicitly defined; the central idea is that the machine can learn without any direct assumptions about the underlying mechanisms. An ML procedure involves four main tasks for any application: data harmonization, representation learning, model fitting, and systematic evaluation. Since the invention of ML models, building an ML system has required careful engineering and domain professionals to convert the raw data into an appropriate internal representation from which learning systems can spot patterns in the dataset. Traditional procedures apply single-layer transformations to the input space, which is inadequate for extracting high-quality information from raw datasets. DL's process is entirely different: DL holds numerous layers with several levels of data abstraction (see Figure 1) [1–4, 6, 12, 15]. The significant differences between traditional ANN and DL models are the presence of several hidden layers, their neuron connectivity, and the abstraction of meaningful features from the input. DL's key feature is that human engineers do not design these layers of features; they are learned from data by means of a general-purpose learning process. In healthcare, detailed analyses of a disease depend upon image discovery and explanation. Of late, image detection devices have improved noticeably; X-ray technology, CT, and MRI are some of the well-established imaging modalities that deliver radiological images with much higher resolutions. Image-processing-based CV has received great interest from healthcare, for example, for detecting tumors in the brain and lungs, because of the wide-ranging variation of data across patients, which primary ML replicas cannot handle efficiently. The term "deep learning" indicates the use of a deep neural network representation [14–18].
The essential computational component in a NN is the neuron, a concept inspired by learning in the human brain. The neuron receives numerous signals as inputs, combines them linearly with weights, and passes the combined signal through a nonlinear function to produce an output signal. Appropriate clinic-ready results have been obtained in healthcare, such as the discovery of diabetic retinopathy in retinal fundus pictures, skin cancer classification, and forecasting the sequence specificities of DNA- and RNA-binding proteins, opening the path to potential DL-based intelligent tools for real-life healthcare (see Figure 2).

3 Deep learning and its variants

DL's innovation started in the early 1940s, and the first official perceptron implementation, a simulated linear classifier, appeared in 1950 [1–3, 6, 8]. Early NNs did not accomplish good performance, and the perceptron has its limits. Research in NNs accelerated when the multilayer perceptron (MLP) was designed with backpropagation (BP) in 1980. In general, an NN comprises three layers, namely, the input layer, which receives the input data points; the hidden layer, which performs operations; and the output layer, which shows the results. LeCun designed the popular convolutional neural network (CNN) with numerous hidden layers, and Hinton formulated the DL concept and procedure. In the meantime, the rapid expansion of the internet and then the mobile internet simplified the collection of big datasets. DL, which denotes an NN with more than two hidden layers, was a triumph in CV and in speech applications. Over the last few years, DL-based NNs have been developed and employed in various domains for improved results (see Figure 2). Training a DNN is tough; furthermore, a DNN is likely to suffer from vanishing gradients under the BP procedure. Gradients (errors) become insignificant by the time they propagate back through numerous layers to the initial few layers; resolving this difficulty paved the way to DL's success for numerous types of DNN.
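The vanishing-gradient problem mentioned above can be seen numerically. In this toy calculation (weights deliberately ignored, an illustrative simplification), the backpropagated error is a product of sigmoid derivatives, each at most 0.25, so it shrinks geometrically with depth:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_factor(n_layers, z=0.0):
    """Product of sigmoid derivatives along a chain of n_layers units.
    Each derivative is at most 0.25 (attained at z = 0), so the factor
    shrinks geometrically as the network gets deeper."""
    s = sigmoid(z)
    return (s * (1.0 - s)) ** n_layers

shallow = backprop_factor(2)    # 0.25 ** 2 = 0.0625
deep = backprop_factor(10)      # 0.25 ** 10, below 1e-6: the gradient "vanishes"
```

This is exactly why the early layers of a deep sigmoid network learn almost nothing under plain BP, and why remedies such as ReLU activations and careful initialization became important.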

3.1 Autoencoder (AE)

The main intention in designing an AE was dimensionality reduction via nonlinear transformation by means of data-driven learning procedures. An AE is trained in an unsupervised style: it reconstructs the input vector rather than assigning a class label. In general, the design of an AE has the same number of neurons in the output and input layers, with full connections between neurons in every layer. Additionally, the number of neurons in the hidden layer is less than in the input and output layers. Numerous AEs can be stacked together to build deep AEs. Many kinds of AEs have been designed over the last few years to handle disparate data designs and perform explicit roles. The denoising AE, designed to boost the AE model's robustness, adds noise to the patterns and pushes the network to capture the input's exact behavior. The sparse AE makes data more separable, the convolutional AE shares weights among nodes to preserve spatial locality, and the contractive AE injects noise to corrupt the input data points and adjusts the error with an analytic contractive cost. AEs emphasize the essential properties of data while reducing its dimensionality. Eraslan et al. [14] designed a novel model called the deep count AE to denoise scRNA-seq data; it accounts for the count distribution, sparseness, and overdispersion of the data using zero-inflated negative binomial noise. Guan et al. [15] fabricated a gene-multiplication detection AE; the model achieves a strong intermediate representation, is robust to partially corrupted data, and yields gains in uncovering the underlying biology associated with tumors. Lee et al. [16] designed a model for tumor detection in which AE techniques preprocess the images to abstract high-level features before feeding them into DL.
Most DL techniques need a huge amount of data for training, but due to the sensitivity of healthcare data, training data is often not well prepared. Seebock et al. [17] proposed a multiple deep denoising AE to detect anomalous data in retinal coherence tomography, and Wang et al. [18] developed a deep stacked sparse AE to detect contextual features for the identification and detection of spine structure.
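The denoising AE idea described above (corrupt the input, encode to a smaller code, reconstruct the clean input) can be sketched as a forward pass. The tied decoder weights, layer sizes, and noise level here are hypothetical simplifications; training would minimize the reconstruction error by gradient descent, which is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def denoising_ae_forward(x, W, b_enc, b_dec, noise_std=0.1):
    """Corrupt the input, encode it to a smaller hidden code, then try to
    reconstruct the *clean* input (tied weights: the decoder reuses W.T)."""
    x_noisy = x + rng.normal(0.0, noise_std, size=x.shape)  # corruption step
    code = sigmoid(x_noisy @ W + b_enc)     # bottleneck: fewer units than input
    recon = sigmoid(code @ W.T + b_dec)     # reconstruction of the clean input
    return code, recon

# 8 input features compressed to a 3-unit hidden code
W = rng.normal(scale=0.5, size=(8, 3))
b_enc, b_dec = np.zeros(3), np.zeros(8)
x = rng.uniform(size=(5, 8))                # 5 samples
code, recon = denoising_ae_forward(x, W, b_enc, b_dec)
recon_error = np.mean((recon - x) ** 2)     # the quantity training would minimize
```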

3.2 Convolutional neural networks (CNN)

The CNN has its origin in biological processes of the human brain, where the connection structure among neurons resembles the organization of the human visual cortex. In general, a CNN includes an input layer, numerous hidden layers, and an output layer. The hidden layers of a CNN contain convolution, pooling, fully connected (FC), and normalization layers. As for the working procedure: the convolutional layer applies various local filters to the provided data, typically a raw dataset. Next, the pooling layer gradually decreases the size of the output to avoid overfitting. FC layers connect the neurons of the preceding layer to the last layer's neurons, decoding the input images into a single vector for categorization; this layer holds the filter that is employed to determine the class of the input image. The normalization layer is added to give an appropriate outcome. Saha et al. [19] proposed a CNN-based model with five convolution layers, four max-pooling layers, and two FC layers, along with four ReLU activation functions, to evaluate biomedical EHR data. Saha and Chakraborty [20] fabricated a CNN model for the categorization of cell membranes and nuclei for the detection of breast cancer; the model contains various layers and holds a trapezoidal LSTM activation function. In reference [21], a CNN with various pooling layers was used to discover red blood cells and composite designs. Gulshan et al. [22] employed CNN features for the identification of diabetic retinopathy in retinal fundus pictures, gaining a high level of understanding from the images to match ophthalmologist annotations. Esteva et al. [23] proposed a CNN model for sorting biopsy-proven clinical images of dissimilar skin cancers.
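The two core CNN operations described above, a convolutional layer sliding a local filter over the input and a pooling layer shrinking the result, can be sketched directly. The 6x6 "image" and the edge filter are toy stand-ins for a scan patch:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """2-D 'valid' convolution (strictly cross-correlation, as in most CNN
    libraries): slide the filter over the image and take dot products."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2(x):
    """2x2 max pooling: halves each spatial dimension."""
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:2 * h, :2 * w].reshape(h, 2, w, 2).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)    # stand-in for a scan patch
edge_filter = np.array([[1.0, -1.0], [1.0, -1.0]])  # responds to vertical edges
feature_map = conv2d_valid(image, edge_filter)      # shape (5, 5)
pooled = max_pool2(feature_map)                     # shape (2, 2)
```

Because the toy image increases by 1 per column, the filter response is a constant -2 everywhere, which is exactly the kind of local pattern a learned filter would detect.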

3.3 Recurrent neural networks (RNN)

RNNs extend feed-forward NNs (FFNNs) with connections among neurons in the hidden layer that form a directed graph along a sequence. This makes the RNN a model of time-based sequential data or dynamic state. This kind of feature is crucial for real-time applications where the output depends on preceding calculations, like the scrutiny of text, sound, DNA sequences, and continuous electric signals from the body. RNNs are built for datasets with interdependencies, preserving information about what happened in prior steps. Thus, the rule of an RNN is to describe a recurrence relation over time steps. Specifically, the recurrent construction in an RNN can capture the complex time-varying aspects in sequential EHR data and formulate them in an ideal manner for several EHR modeling exercises, including consecutive medical event forecasting, disease categorization, and computational phenotyping. Here, the hidden states of the RNN work as its memory; the current state of the hidden layer is computed from the previous hidden state and the input at the current time. This further allows the RNN to handle variable-length input sequences. The RNN has two notable variations, namely, the LSTM unit and the GRU. These are designed to overcome the vanishing gradient challenge and to capture long-term dependences. Finally, an RNN can be viewed as a state machine with a feedback loop. Since RNN building blocks relate closely to time-related problems, they are widely applied to various healthcare issues [24]. The authors of [24] designed an RNN-based model for processing biomedical data sequences and time features; the model achieves great results in sequence-based bias correlation issues in RNA. In [25], a combination of CNN and LSTM was fabricated for identifying prostate cancer with a Gleason score of 7. The model measures the relationship between histopathology images and genomic data with disease recurrence in prostate cancers, to classify prognostic biomarkers by modeling the spatial association of automatically created patches. Wang et al. [26] designed an LSTM-based conditional random field to make connections among biological units for trigger discovery; the authors applied an RNN and sequence annotations for better results. In [27], a modified bidirectional LSTM and CRF grid was executed to identify entities and extract associations among entities in EHRs.
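The recurrence relation described above, where the hidden state acts as memory combining the current input with the previous state, can be sketched as a vanilla RNN forward pass. The sequence length, feature count, and hidden size below are illustrative, e.g. a short run of clinical measurements:

```python
import numpy as np

rng = np.random.default_rng(2)

def rnn_forward(x_seq, Wx, Wh, b):
    """Vanilla RNN: at each time step the hidden state h combines the
    current input x_t with the previous hidden state (the 'memory')."""
    h = np.zeros(Wh.shape[0])
    states = []
    for x_t in x_seq:                       # variable-length sequences are fine
        h = np.tanh(x_t @ Wx + h @ Wh + b)  # the recurrence over time
        states.append(h)
    return np.stack(states)

# e.g. a sequence of 7 clinical measurements, 3 features each, 5 hidden units
Wx = rng.normal(size=(3, 5))
Wh = rng.normal(scale=0.3, size=(5, 5))
b = np.zeros(5)
x_seq = rng.normal(size=(7, 3))
states = rnn_forward(x_seq, Wx, Wh, b)
```

LSTM and GRU cells replace the single `tanh` update with gated updates so that gradients survive over many more time steps.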

3.4 Generative adversarial networks (GAN)

Generative adversarial network (GAN) models are designed for data generation through a game-theoretic procedure. A GAN model contains two parts: a generator and a discriminator. The generator takes random noise as input and tries to create samples from it, while the discriminator takes both generated and real samples as input and attempts to differentiate the two. The networks are trained against each other: the generator to assemble more genuine samples, and the discriminator to attain superior discriminative power. Nowadays, GAN-based models are employed in the healthcare field for generating synthetic medical data.
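The adversarial game described above can be made concrete through the two losses the players optimize; the discriminator outputs below are hypothetical probabilities, not from any trained model:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy the discriminator minimizes: it should score
    real samples near 1 and generated samples near 0."""
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss(d_fake):
    """The generator wins when the discriminator is fooled, i.e. when
    d_fake is pushed toward 1."""
    return -np.mean(np.log(d_fake))

# hypothetical discriminator outputs for a batch of real and generated samples
d_real = np.array([0.90, 0.80, 0.95])   # confident these are real
d_fake = np.array([0.10, 0.20, 0.05])   # confident these are generated
d_loss = discriminator_loss(d_real, d_fake)  # low: the discriminator is winning
g_loss = generator_loss(d_fake)              # high: the generator is losing
```

Training alternates gradient steps on these two losses; at the theoretical equilibrium the discriminator outputs 0.5 everywhere and its loss equals 2 log 2.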

3.5 Deep Boltzmann machine (DBM)

The architecture of the deep Boltzmann machine (DBM) contains regularly connected stochastic visible and hidden units. It is most appropriate for modeling and mining hidden semantic representations from huge quantities of unlabeled data. The Boltzmann machine's original procedure requires randomly initialized Markov chains to attain equilibrium distributions in order to estimate the data-dependent and data-independent expectations over a connected pair of binary variables. The learning process of a DBM is therefore very sluggish in the training phase. A restricted Boltzmann machine (RBM), which has no connections among hidden units, enables efficient learning. The advantage of the RBM is that the conditional distribution of the visible units given the hidden units factorizes; this makes inference tractable, and the RBM feature representation is taken to be a set of posterior marginal distributions obtained by maximizing the likelihood directly. A deep generative model was designed by Jang et al. [28] for automatic heart motion tracking to reduce radiation-induced cardiotoxicity; the model uses MRI images and was trained with a three-layered DBM. Chao et al. [29] designed a DL framework called the DBN-GC (DBN-glia chains) for emotion recognition; the proposed design has high accuracy with low computation while handling high-dimensional datasets.
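The efficiency the RBM gains from having no hidden-hidden connections is visible in one alternating Gibbs step: all hidden units can be sampled in parallel given the visible units, and vice versa. The unit counts and weight scale below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rbm_gibbs_step(v, W, b_h, b_v):
    """One alternating Gibbs step in an RBM: because the hidden units are
    conditionally independent given v (and visible units given h), each
    layer is sampled in a single vectorized pass."""
    p_h = sigmoid(v @ W + b_h)                        # P(h = 1 | v)
    h = (rng.uniform(size=p_h.shape) < p_h) * 1.0     # sample hidden layer
    p_v = sigmoid(h @ W.T + b_v)                      # P(v = 1 | h)
    v_recon = (rng.uniform(size=p_v.shape) < p_v) * 1.0
    return h, v_recon

W = rng.normal(scale=0.1, size=(6, 4))    # 6 visible units, 4 hidden units
b_h, b_v = np.zeros(4), np.zeros(6)
v = (rng.uniform(size=6) < 0.5) * 1.0     # a binary visible vector
h, v_recon = rbm_gibbs_step(v, W, b_h, b_v)
```

Contrastive-divergence training compares statistics of `(v, h)` against `(v_recon, ...)` from such steps, avoiding the long Markov chains the unrestricted Boltzmann machine needs.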

3.6 Deep belief network (DBN)

A DBN can be considered a stack of RBMs. Numerous hidden layers are trained by treating the hidden output of one RBM as the input for training the next RBM layer. In a DBN, undirected connections are present between the top two layers and directed connections run to all subsequent layers. The training approach of a DBN is greedy and layer-wise: the DBN is pre-trained with unsupervised learning, and its parameters are then fine-tuned based on the desired output. The model includes numerous hidden layers of neurons that are trained with the BP procedure. Chu et al. [30] established a decoding structure combining the Lomb–Scargle periodogram and a DBN to handle imperfect EEG signal data, addressing problems of motor imagery involved in classification tasks such as heart-rate variability [31]. An AE-based multiview DBN was used to solve biomedical challenges and to study multichannel EEG signals of epileptic patients.

3.7 Reinforcement learning

Reinforcement learning (RL) is a special kind of ML technique used to train computational agents to interact effectively with their surrounding environment and realize their respective goals. Here, the training procedure follows a trial-and-error approach: as an agent interacts with its environment, an iterative feedback cycle of rewards trains the agent to improve and achieve its goal. Moreover, the learning step may start from an expert's demonstration (supervised learning) or conclude with the expert's objective. However, the issue is that it is hard to train an agent, since it needs a function that takes in all sensory input signals from the environment and produces the next action as output so the agent can take an appropriate step. Here, deep RL models are employed to make learning this function feasible. These techniques are mostly used in robotic-assisted surgery (RAS). For RAS-related tasks, such methods support the automation and speed of extremely tedious, delicate surgical responsibilities. CV approaches use open-wound images and solve path optimization for optimal trajectories under limits and complications. Likewise, image-trained RNNs can learn to tie knots autonomously by learning sequences of actions, in this case physical demonstrations from physicians. Mostly, these methods benefit wholly autonomous robotic surgical treatment or minimally invasive operations. Despite the advantages of deep RL, there are some notable challenges: correct positioning and orientation on the patient's surgical scene, and accurate data collection for deep surgical robotics. Deep imitation learning needs massive training datasets with numerous samples per surgical feat. It is also quite problematic for autonomous systems to adjust to extraordinary, unseen conditions that are too divergent from anything previously seen.
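The trial-and-error reward loop described above can be sketched with tabular Q-learning on a toy chain environment (the environment, state count, and hyperparameters are all hypothetical; deep RL replaces the table with a neural network):

```python
import random

random.seed(0)

def q_learning(n_states=5, n_actions=2, episodes=300,
               alpha=0.5, gamma=0.9, epsilon=0.3):
    """Tabular Q-learning on a toy chain: the agent starts at state 0,
    action 1 moves right, action 0 moves left, and only reaching the
    last state yields a reward. The policy is learned by trial and error."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy: mostly exploit the table, sometimes explore
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda a_: Q[s][a_])
            s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s_next == n_states - 1 else 0.0   # the reward feedback
            # temporal-difference update toward reward + discounted future value
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    return Q

Q = q_learning()   # after training, "move right" dominates in every state
```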

3.8 Generalized deep learning

DL has domain-specific features in CV and reinforcement learning, where the input data receives special treatment. For instance, consider gene experiments that need various DNA sequence features; DL models are heavily used here and help human physicians reach a more precise diagnosis. In general, building a DL scheme in genomics encompasses taking raw data, converting the necessary information into input data tensors, and feeding these tensors through one or more neural nets, which then power precise biomedical applications. Analyzing genome-wide association studies needs procedures that scale to huge patient cohorts and deal with hidden confounders. A basic understanding of a particular disease's genetics permits physicians to recommend treatments and offer more precise diagnoses. However, a physician's critical challenge is determining whether new variants in a patient's genome are medically relevant. This decision depends on predicting the pathogenicity of alterations, which currently uses protein structure features, besides evolutionary conservation, to train the learning processes. Given their superior power and capability to mix dissimilar data forms effectively, DL-based models are likely to deliver more precise pathogenicity forecasts than the current models or techniques.
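The first step described above, converting raw genomic data into input tensors, is typically one-hot encoding of the DNA string: one row per base, one column per position. A minimal sketch (the sequence is a made-up example):

```python
import numpy as np

BASES = "ACGT"

def one_hot_dna(seq):
    """Convert a raw DNA string into the tensor a genomics model consumes:
    a (4, length) array with exactly one 1 per column."""
    idx = {b: i for i, b in enumerate(BASES)}
    x = np.zeros((len(BASES), len(seq)))
    for pos, base in enumerate(seq.upper()):
        x[idx[base], pos] = 1.0
    return x

x = one_hot_dna("ACGTAC")   # shape (4, 6)
```

A convolutional net over this tensor then scans for sequence motifs, much as an image CNN scans for edges.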


4 Factors driving implementation of AI, ANN, and deep learning

Several factors are continuously driving the rapid adoption of ANN and DL.
– Digital images make their impact in every sector, and they are now an essential part of medical healthcare. Traditional human experts fail to capture the various patterns and symmetries in a considerable volume of EHR datasets. DL-based technologies abstract high-dimensional features from large numbers of datasets. Besides, they continuously adapt over time, complementing an intermediary's ability and contributing to improved decision-making for active image-guided therapy.
– Nowadays, EHR data records are stored through digitization procedures, and the distribution of these high-quality, completely labeled, expert datasets is a tricky task. Nevertheless, cooperative primary clinical groups have started sharing. However, integration challenges remain under open-access directions to provide incentives for the development, evaluation, and distribution of dissimilar ANN and DL models.
– As we know, DL is first-rate for the integrative analysis of heterogeneous datasets from different data sources. EHRs hold unlabeled health data such as signals, images, and biomedical datasets. These multifaceted data, collected from various teams and clinical levels, need high-quality analysis approaches.
– DL, with its deep hidden layers, has massive capacity to discover novel associations in health data. Start-up companies are using DL to select or design novel fragments for testing pharmaceuticals or biologics; moreover, researchers across disciplines have found unexpected clusters within datasets by linking the activation patterns of feature detectors in the hidden layers of DL nets.
– DL has shown potential in empowering healthcare physicians, promoting a safer, more caring, and participatory standard for healthcare management.
An increasing volume of research also advocates specific opportunities for reducing errors and improving workflow in the medical setting with suitable AI deployment.
– Collectively, wearable devices and sensors, remote monitoring, digital consultations, and DL- and ANN-based models may overtake the time-honored model of recurrent data gathering and interpretation at the clinical encounter. These developments may encourage more active and well-informed self-care by people.

5 Healthcare applications

Healthcare applications have been continuously improving various health-related sectors over the last decade, and some of them are listed below (see Table 1).


Bhanu Chander

5.1 Electronic health records (EHR)

Across the globe, various health institutions collect bulk data records from patients, also known as EHRs. An EHR contains laboratory test outcomes, clinical notes, patient information, medical prescriptions, medical images, and much more. It is difficult to generate accurate analytical models from EHRs because the data is largely unlabeled and highly heterogeneous. In the past, subject experts fixed data-related issues, but data sizes are now enormous and traditional analytical models no longer work well. DL models continually make a considerable impact on major data-analysis applications, such as computer vision, automation, natural language processing, and image classification. DL for healthcare has been mainly attractive for two reasons. First, for healthcare scholars, DL-based representations produce better performance on numerous tasks than traditional ML while requiring less manual feature engineering. Second, the big and complex datasets available in healthcare allow the training of complex DL models. At the same time, EHR data also creates numerous interesting challenges for DL. Lipton et al. [32] designed a multilabel classification model to categorize nearly 128 diagnoses; the authors used 10,401 ICU episodes with 13 time series-based variables. A model trained on raw time series with an LSTM outperformed existing ANN models, giving the best AUC of 0.807. Che et al. [33] designed a new DL model, GRU-D, to improve the operational performance of a GRU. The proposed model represents missing data effectively and captures long-term temporal dependences in time series. Simulation outcomes on real-world clinical and synthetic datasets demonstrate the model's performance: the AUC of GRU-D is 0.8527 versus 0.8380 for a plain GRU. Mehrabi et al. [34] proposed a scheme to detect patterns and rules for discovering diagnoses.
The authors arranged the patients' data in matrix form using ICD-9 codes; a simulation with a DBM showed acceptable results. Putin et al. [35] fabricated an ensemble of 21 DNNs with various depths, hyper-parameters, and structures to predict human chronological age from a blood test; their model also identified albumin, glucose, and alkaline phosphatase as key markers. Cheng et al. [36] addressed a technique that forecasts congestive heart failure (CHF) risk from stored patient information in the EHR. Simulation results for CHF gave an AUC of 0.7675, the best compared with other existing models. Choi et al. [37] proposed a DL model over temporal relationships between events in EHRs to predict early diagnosis of heart failure. Every medical event in the EHR was encoded as a one-hot vector, with a quantifiable event classification for each EHR. RNN models using GRUs were trained to predict HF onset. The RNN model's AUC reached 0.883, a clear advance over the 0.834 of the baseline procedures. Pham et al. [38] fabricated the novel DeepCare model, which uses EHR data with three levels: a time series-based LSTM, multiscale pooling, and a conventional fully connected layer. The model achieves significant forecasts in disease progression, intervention recommendation, and future risk prediction. Avati et al. [39] designed a DNN model to estimate patient mortality, since accurate forecasts of

Advanced deep learning techniques and applications in healthcare services


mortality would aid medical teams in providing timely palliative care. The dataset included 221,284 selected EHRs. The model attained a recall of 0.34 at a precision of 0.9; the AUC was 0.93. Rajkomar et al. [40] worked broadly on EHRs for inpatient mortality, readmission, length of stay, and discharge diagnosis. The authors implemented an LSTM and a time-aware NN (TANN) for time-based predictions, and the simulation results show an AUC of 0.95 for the DL models versus 0.85 for the base LSTM.
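The input-decay idea behind GRU-D [33] can be illustrated with a short sketch. The function below is a simplified, hypothetical rendition (the real model learns the decay rates jointly with the GRU; the name `decay_impute` is mine): a missing variable is imputed by decaying its last observed value toward the empirical mean, weighted by the time elapsed since the last observation.

```python
import numpy as np

def decay_impute(x, mask, deltas, emp_mean, gamma=0.5):
    """GRU-D-style input decay (simplified sketch, not the trained model).

    x:        (T, D) values, garbage where mask == 0
    mask:     (T, D) 1 = observed, 0 = missing
    deltas:   (T, D) time since the variable was last observed
    emp_mean: (D,)   empirical mean of each variable
    """
    T, D = x.shape
    out = np.zeros_like(x, dtype=float)
    last = emp_mean.copy()                       # last observed value per variable
    for t in range(T):
        w = np.exp(-gamma * deltas[t])           # decay weight in (0, 1]
        est = w * last + (1.0 - w) * emp_mean    # decayed estimate for missing slots
        out[t] = np.where(mask[t] == 1, x[t], est)
        last = np.where(mask[t] == 1, x[t], last)
    return out
```

As the gap since the last observation grows, the imputed value slides from the last reading toward the population mean, which is exactly the inductive bias the paper argues suits clinical time series.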

5.2 Electrocardiography (ECG)

ECG is a harmless, noninvasive procedure that records the heart's electrical activity through electrodes placed on the skin. Variations in the biorhythm of the heart may indicate various heart-related issues, and the ECG visualizes these biorhythms in a useful way. Choi et al. [37] designed a DBN- and RBM-based model for pattern classification of ECG signals. The authors used the MIT-BIH datasets and obtained a precision of 98.83 percent. Yao et al. [39] implemented a multiscale CNN (MSCNN) model and tested it on public and private ECG datasets, achieving the best accuracy of 97.99 percent on the public dataset. Cheng et al. [36] employed an LSTM to estimate the likelihood of the ECG input being normal; the outcome was modeled with a Gaussian distribution to decide whether the input was normal or not. MIT-BIH-based datasets were used, and the simulation results show a precision of 0.975 and a recall of 0.464, respectively. Al Rahhal et al. [41] proposed a feature extraction model for raw ECG signals with a sparsity constraint. Labeled data was used, and a SoftMax regression function was applied to the hidden layer for a better data representation. Acharya et al. [42] fabricated a deep CNN with 11 hidden layers to detect myocardial infarction; the model achieved an accuracy of 93.5% on data with noise and 95.2% without noise. Rajpurkar et al. [43] built a massive ECG dataset, and a CNN with a 34-layer ResNet was designed to detect 14 types of beats, with a combined F1 score of 0.809, compared with an average F1 score of 0.751 for six specialized cardiologists.
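The 1D-convolution building block behind these CNN-based ECG classifiers can be sketched in a few lines. This is an illustrative toy, not any cited architecture: the kernels here are hand-set rather than learned, and the name `tiny_ecg_features` is invented. Each kernel acts as a beat-morphology detector, and global max pooling summarizes the evidence into a position-invariant feature vector.

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1D cross-correlation, the core op of CNN ECG models."""
    n, k = len(signal), len(kernel)
    return np.array([np.dot(signal[i:i + k], kernel) for i in range(n - k + 1)])

def tiny_ecg_features(beat, kernels):
    """One conv layer + ReLU + global max pooling over a single beat.
    The pooled maxima form a feature vector a classifier head would consume."""
    feats = []
    for w in kernels:
        activation = np.maximum(conv1d(beat, w), 0.0)  # ReLU
        feats.append(activation.max())                 # global max pool
    return np.array(feats)
```

A real model stacks many such layers with learned kernels; the point here is only that convolution scans the beat for waveform shapes regardless of where they occur.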

5.3 Electroencephalogram (EEG)

EEG is a special kind of noninvasive technique for recording the electrical activity of the human brain; uncharacteristic electroencephalography indicates problems in brain function. EEG is used to diagnose seizures, epilepsy, head injuries, dizziness, headaches, brain tumors, and sleep disorders, and it is also widely used for simple thought interpretation. Rajpurkar et al. [43] employed a new semi-supervised DL approach for accurate brain-performance analysis. A generative RBM trained with pre-labeled data and unsupervised structural information were used


to produce a joint result on the network output. The proposed technique accomplished a better precision rate. Wulsin et al. [45] addressed a DBN-based model to recognize outliers/anomalies in EEG signal patterns. The authors used a large EEG dataset for training and compared the model with SVM variants; the simulation results show an accuracy of 96.53 and an F1 score of 0.475. Page et al. [46] attempted various representation learnings in ML and DL to detect seizures. Schirrmeister et al. [47] analyzed various deep CNNs with different architectures modeled to decode raw EEG signals; the CNN accuracy of 93% was better than that of various existing models. Sturm et al. [48] employed DNNs with layer-wise relevance propagation (LRP) for EEG signal examination. Through LRP, DNN decisions are transformed into heatmaps representing each data point's relevance to the decision outcome. The single-trial LRP heatmaps disclose neurophysiologically plausible patterns, approximating conventional common spatial pattern-derived maps.

5.4 Community healthcare

Social media information and records on internet sources are also a critical, innovative area for healthcare in this digital era. They are considered an authoritative supplement to traditional health-related data. Social media is helpful for attending to mental health status as well as to the spread of infectious illnesses. Nie et al. [49] gathered information from Q&A threads on a health network. The authors designed a sparsely connected deep structure of three hidden layers under a sparsity constraint, and the collected inference results were tested on a dataset composed from a medical Q&A website. The experimental outcome showed that the technique outperforms SVM, KNN, decision tree, and a deep NN composed of stacked AEs with SoftMax. Zhao et al. [50] formulated a comprehensive social media-mining framework in which Twitter-based information was used to track health conditions in the community. Benton and Hovy [51] proposed a technologically progressive DNN multitask learning (MTL) model for ten prediction tasks, such as suicide risk, seven mental health conditions, neurotypicality, and gender, mapped from Twitter information. The MTL model was compared with single-task learning (STL) models. Results showed that MTL performs significantly better than the other models on this task. The AUC for suicide was 0.848, and for mental health conditions, it was 0.853.

5.5 Wearable devices

Due to rapid innovations in miniature device technologies and the comfort they bring to daily human life, smart and wearable devices receive increasing attention from many fields of the research community. Wearable devices,


attached to the human body, collect valuable real-time data continuously in home and hospital environments. From a hospital point of view, the collected data helps disease monitoring and activity recognition. Aliamiri [52] designed wearable smart devices with inbuilt photoplethysmography technology for low-cost, portable monitoring and detection. The designed model attained over 95% AUC in the quality-assessment task and over 99% AUC in atrial fibrillation (AF) detection. Unterthiner et al. [53] employed a novel tool, DeepTox, which normalizes diverse chemical representations. The authors used a DNN with five hidden layers to predict the toxicity of various compounds. The model, executed on 12,000 environmental chemicals and drugs, was evaluated for 12 different toxic effects; the average AUC over the 12 toxic effects was 0.846. Hammerla et al. [54] evaluated the phases of Parkinson's disease with wearable sensing devices: a smart triaxial sensor node attached to the patient's wrist and an RBM model trained with a SoftMax function. Simulation results showed an F1 score of 0.55. Xu et al. [55] designed a deep autoencoder model for the analysis of molecular representations. The authors employed a multilayer GRU to map the input to a fixed-dimensionality continuous vector, and another deep GRU to map the vector back to the original molecule. They applied AdaBoost and gradient boosting to the continuous feature vector to forecast chemical properties; the precision rate of the designed model was 0.7664. Zhang et al. [56] fabricated a sparse AE to abstract patients' emotion-based features. The authors used respiration data collected from wearable devices for spotting human reactions, and the model's precision was about 80%. Ravi et al. [57] employed a human activity recognition approach based on CNN technology for low-power wearable devices. The proposed work achieved good experimental results on traditional datasets, but it demands high resources from the wearable devices. Ma et al.
[58] addressed a DNN for improving quantitative structure-activity relationships (QSAR), which are used in the pharmaceutical industry for forecasting on- and off-target activities; this kind of work is very helpful during drug discovery procedures. The mean score of the DNN was 0.496, compared with 0.423 for random forest. Huynh et al. [59] analyzed DL architectures for adverse drug effects (ADE) classification. With a convolutional recurrent NN (CRNN) and a CNN with attention (CNNA), the model was tested on a Twitter dataset and obtained an AUC of 0.88. Chaudhary et al. [60] analyzed dissimilar subcategories of hepatocellular carcinoma (HCC) by mixing multiomics information from numerous patient cohorts. The authors designed an autoencoder model trained with data from 360 HCC patients in the Cancer Genome Atlas datasets; in their study, they found multiomics features and cancer biomarkers. [61] proposed a stacked denoising autoencoder (SDAE) to abstract high-dimensional features from gene expression data and evaluated the extracted representation's performance with supervised classification models to prove the utility of the novel features in cancer detection.
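Most of the wearable-sensor studies above segment the raw stream into fixed-length windows before feeding a model. A minimal sketch of that preprocessing step (function names are illustrative, not from any of the cited papers):

```python
import numpy as np

def windows(signal, size, step):
    """Slice a 1D sensor stream (e.g. one accelerometer axis) into
    overlapping fixed-length windows for a downstream classifier."""
    return np.array([signal[i:i + size]
                     for i in range(0, len(signal) - size + 1, step)])

def window_features(wins):
    """Classic hand-crafted per-window features (mean, std); a DL model
    would instead learn such features from the raw windows."""
    return np.stack([wins.mean(axis=1), wins.std(axis=1)], axis=1)
```

The CNN approaches cited here consume the raw windows directly, which is precisely what lets them outperform fixed statistics like these, at the cost of the device resources noted above.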


5.6 Computer vision

CV is one of the newer research fields in every domain. It has had great success with the inclusion of DL models for image segmentation and video frame understanding, as well as object classification and detection. In healthcare, CNN-based DL models are heavily used in CV for image classification and object detection, for example, spotting malignant cancers in patient radiographs. DL schemes are applied to huge numbers of data items or images: the early layers learn basic statistics, such as lines, correlations, and curves, while the higher-level layers of the model are trained to differentiate diagnostic patterns. DL models have achieved impressive accuracy on various health issues, such as detection of moles, heart and brain risks, referrals from dangerous wounds, spinal actions, mammograms, and diabetic retinopathy. Among the new waves in healthcare based on DL with CV, cancer histopathology models can be efficiently trained, and mitotic cells or tumor sections can be detected with CNN variants. In healthcare, sequential DL and language technologies also drive applications within domains such as EHRs. As discussed in various sections, EHRs are rapidly becoming universal data; large EHR-based medical networks may cover the medical records of over 10 million patients. Application of DL models to EHR data is a fast-growing research area. Current uses of DL techniques work on the temporal sequence of recorded events in a patient's records, with CNN and RNN variants forecasting future medical events. Three-fourths of this work relies on the Medical Information Mart for Intensive Care (MIMIC) datasets; ICU patients produce far more EHR information than non-ICU patients, although non-ICU patients significantly outnumber them. However, it is still unclear how well these datasets will generalize to more extensive populations.

Table 1: DL models in healthcare: pros, cons, and applicable fields.

Long short-term memory (LSTM)
– Explanation: LSTM networks are recurrent neural networks proficient in learning order dependence, a necessity in sequence forecasting problems.
– Pros: Suited to sequence and performance analysis, image translation.
– Cons: Vanishing gradients are not fully removed; needs lots of data and time to learn; affected by dissimilar weight functions.
– Preferred fields: EHR event detection, genomics, multisensor prognostics, epileptic seizure prediction.

Deep neural networks (DNN)
– Explanation: A DNN contains more than two layers, which allow complex non-linear relationships. It is employed for classification and regression.
– Pros: Extensively used on huge datasets with great precision.
– Cons: Training is not a small task, since the error is propagated back through the preceding layers, making the learning procedure slow and time-consuming.
– Preferred fields: Multisensor prognostics, limb movement, brain activity detection, bone disease classification.

Convolutional neural networks (CNN)
– Explanation: Produces better results on two-dimensional data; contains convolutional filters to transform 2D into 3D representations.
– Pros: Learns quickly, layer by layer, offering improved accuracy.
– Cons: Needs labeled data for efficient classification, but real-world labeled data is not easy to collect.
– Preferred fields: Parkinson's disease, brain-computer interfaces, cancer detection, cardiac abnormality detection, prosthetic limb movement, brain activity detection, classification of red blood cells, capturing sleep information, tissue volume on computed tomography images.

Deep belief networks (DBN)
– Explanation: DBNs use unidirectional connections and serve supervised and unsupervised ML; the hidden layer of each subnetwork serves as the visible layer for the subsequent one.
– Pros: A greedy tactic is applied to each layer; tractable inference maximizes the probability directly.
– Cons: The computational training procedure is expensive and takes high resources.
– Preferred fields: Biosignal classification, brain-computer interfaces, tumor classification, bone disease classification, heart disease prediction, cellular signal processing, emotion recognition.

Recurrent neural networks (RNN)
– Explanation: RNNs have the ability to learn and acquire sequences; weights are shared across all neurons and time steps.
– Pros: The model efficiently learns chronological events and time dependencies.
– Cons: Numerous problems due to vanishing gradients; also needs big datasets for training.
– Preferred fields: Predicting the mental strength of doctors, prostate cancer differentiation.

Deep autoencoders (D-AE)
– Explanation: Used in unsupervised learning, intended mostly for data abstraction and dimensionality reduction; the number of inputs is identical to the number of outputs.
– Pros: The model learns to represent data in a reproductive manner.
– Cons: Needs a pretraining step; choosing the activation function and threshold values is difficult.
– Preferred fields: Organ detection in images, brain image segmentation, EHR data computation, histopathological pattern detection, spine disease diagnosis.

Transfer learning
– Explanation: Transfer learning is the capability to exploit commonalities among dissimilar tasks or datasets to simplify the learning of a new job that shares some features.
– Pros: Reuses sequences of convolutional layers, mostly for image pattern recognition and visualization.
– Cons: Finding the right source model and selecting the loss function.
– Preferred fields: Organ detection in images, brain image segmentation, biosignal classification, brain-computer interfaces, tumor classification, EHR event detection, genomics.

Deep convolution extreme learning machine (DC-ELM)
– Explanation: Used for the selection of local connections; this network uses a Gaussian likelihood.
– Pros: Extensively employed in real-world applications with good precision; computationally efficient and quick to train.
– Cons: Initialization is likely to work only if the learning task is very simple and the amount of labeled data is small.
– Preferred fields: Spine disease diagnosis, predicting the mental strength of doctors, biosignal classification, brain-computer interfaces.

6 Challenges and limitations

To date, DL models have shown great strength in healthcare management. However, some key challenges remain:


6.1 Data

As discussed in the sections above, ML models produce efficient results when dealing with preprocessed data, but in a real-time environment it is very hard to obtain such datasets. DL requires a huge quantity of data to train its models. In the healthcare sector, data records are sensitive and data collection is not an easy task. Besides, data records in healthcare are mostly disorganized; therefore, building a big, representative dataset is vital, although it is a time-consuming job.

6.2 Interpretability

DL models are used for image and pattern recognition. Authors report the precision they obtain, but rarely ask why and how that precision is achieved; such DL models operate as black boxes that are neither understandable nor explainable. In healthcare, interpretability is far more central than in other applications: there is a need to analyze risk factors and identify which treatment instructions are best suited to a specific illness. Additionally, data interpretation is still the most effective instrument for model explanation.

6.3 Data representation

In traditional ML for image and pattern recognition tasks, data is well prepared. However, healthcare data stored in EHRs is unbalanced, sparse, and full of missing values. A better data representation (DR) could boost accuracy and performance.

6.4 Generalization ability

In traditional approaches, the difference between training and testing datasets is not crucial, since they follow the same distribution. In the healthcare sector, however, training and testing data records often differ, and such a mismatch is not acceptable for reliable models.

6.5 Transfer learning

A designed model is trained on certain datasets. If the same model is to be applied to other datasets, there is a need for reliable modification,


which can be achieved through transfer learning, extending healthcare models to various sectors.
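The usual transfer-learning recipe, keep a pretrained feature extractor frozen and refit only a small head on the new dataset, can be sketched as follows. This is an illustrative toy: the random-projection "extractor" stands in for layers trained on a source dataset, and all names are mine.

```python
import numpy as np

def extract(x, W):
    """Frozen 'pretrained' extractor. In real transfer learning W comes
    from training on a source dataset; here a fixed projection + ReLU
    stands in for it."""
    return np.maximum(x @ W, 0.0)

def fit_head(feats, y, reg=1e-3):
    """Transfer step: leave the extractor untouched and fit only a new
    linear head on the target data (ridge least squares)."""
    A = feats.T @ feats + reg * np.eye(feats.shape[1])
    return np.linalg.solve(A, feats.T @ y)
```

Because only the small head is refit, the target dataset can be orders of magnitude smaller than what training the full model would require, which is the main appeal for data-scarce medical tasks.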

6.6 Computational complexity

Most DL approaches contain complex neuron connections that demand huge time, space, and memory. Numerous parameters must be stored, and many operations take place for the model to run properly. This is not a big deal if medical institutions install advanced systems, but it is problematic for wearable devices or sensor nodes. Energy-efficient hardware, and techniques that reduce the complexity of the designed model, help avoid this problem.

6.7 Handling biomedical data streams

Medical applications deal with real-time online streamed data; moreover, due to rapid miniaturization in the healthcare industry, data originates at high speed. Data records gathered from various real-time resources, such as brain activity, blood sugar and glucose monitoring, oxygen saturation levels, electrography, and MRI images, have grown from hundreds to thousands of terabytes. Physiological data obtained from ECG and EEG carry significant signs from dissimilar parts of the body. Hence, it is important that DL be capable of ingesting huge volumes of continuous input data that changes over time, and of taking into consideration when earlier data has been transformed or become outdated. While many DL architecture variants have tried to offer methods for working around these circumstances, open tasks remain, such as dynamic analysis of fast-moving data, in-memory ingestion of significant streams, feature selection, missing data, and computational complexity. A DL architecture proficient in linking multiple kinds of data simultaneously is desirable for real-world circumstances. Since the inclusion of DL procedures in the healthcare sector, there has been continuous advancement in practical explanations and predictions from relevant data records. Currently, real-time medical records arrive in unstructured formats: sequences (time series, acoustic and audiovisual signals, DNA, and many more), trees (XML documents, parse trees, RNA, and others), and text data (prescription details, tumor details, medical histories, and others), often requiring combinations of these formats. Regrettably, DL methods can only process numeric input data, as ultimately everything is reduced to strings of zeros and ones for computation.


6.8 Analyzing medical big data

As discussed, DL models produce accurate results when trained on high-volume datasets. In the learning phase, dataset features are used to tune the parameters inside the neurons for better predictions; datasets must therefore hold the task-related features required for training. Fine-tuning neuron parameters is attained mainly through validation measurements, besides a high-quality DL construction. The study of medical big data (MBD) has many advantages, such as disease control, handling, and analysis. The primary challenge is obtaining the MBD, owing to the risk of data mismanagement, reluctance to share data, threats to patients' privacy, legal issues, pricey equipment, and a shortage of medical professionals.

6.9 Hardware requirements for medical data

Generally, real-time medical data records are extensive and continually growing, so computing systems must be designed with adequate processing power. To handle such requirements, data researchers and technologists have settled on massively parallel multicore GPUs; these GPUs are expensive, consume a lot of power, and are not easily obtainable for a standard user or medical institute. mHealth big data is pointless without appropriate DL analytic systems to extract meaningful information and hidden patterns from the data.

6.10 Federated inference

In general, every clinical and pharma institution stores its own patient information. Developing a DL model that learns from patient health records held by different institutions without leaking their sensitive information is a critical job. Federated DL approaches, securely combined with other security techniques, can transfer experimental knowledge among institutions without moving raw data. However, more work is needed at the process level.
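The federated idea can be sketched in a few lines: each institution runs gradient steps on its own records, and only the resulting parameters travel to a central server, which averages them weighted by sample counts. This is a toy linear-model illustration of the FedAvg scheme, not a production protocol; it omits the secure-aggregation and encryption layers a real deployment needs.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=20):
    """One institution's gradient steps on its private data
    (linear model, squared loss); raw records never leave the site."""
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fed_avg_round(w, clients):
    """Server step: aggregate only the parameter vectors, weighted by
    each client's sample count (the FedAvg idea, sketched)."""
    updates = [local_update(w.copy(), X, y) for X, y in clients]
    frac = np.array([len(y) for _, y in clients], dtype=float)
    frac /= frac.sum()
    return sum(f * u for f, u in zip(frac, updates))
```

Repeating such rounds lets the shared model fit the pooled data distribution even though no server ever sees a patient record.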

6.11 Model privacy

The privacy of patients is another big task in the healthcare sector. Recent works uncovered various vulnerabilities in ML and DL models that break model and personal confidentiality. Nevertheless, naive approaches may produce outputs that are unusable or cannot offer adequate protection, making the development of practically useful differential privacy schemes non-trivial. The research community must work on all aspects of security measurement for future health applications.
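For intuition, the classic differential-privacy building block, the Laplace mechanism, is easy to state: a count query (say, "number of patients with condition X") is released with noise scaled to sensitivity/ε. This sketch is a generic illustration of the mechanism only, not a full privacy-accounting scheme for DL training.

```python
import numpy as np

def laplace_count(true_count, epsilon, rng):
    """Release a count with Laplace noise of scale 1/epsilon (a count
    query has sensitivity 1), giving epsilon-differential privacy for
    this single query."""
    return true_count + rng.laplace(0.0, 1.0 / epsilon)
```

Smaller ε means stronger privacy but noisier answers; choosing that trade-off for clinical queries is exactly the non-trivial part noted above.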


6.12 Knowledge distillation

Knowledge distillation wraps the knowledge learned by a complex model into a simpler form with ample information to perform the task. Recent progress in mimic learning and knowledge distillation transfers information from a complex model to a plain model. There are current efforts to apply mimic learning to the healthcare field to boost the interpretability of DL approaches through boosted trees. The key idea is to use the complex model to generate extra soft-labeled samples to train a simple model.
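The soft-label idea can be sketched directly: the teacher's logits are softened with a temperature T > 1, and the student is trained to match that distribution. A minimal, framework-free illustration (function names are mine):

```python
import numpy as np

def softened(logits, T=3.0):
    """Soft labels: softmax at temperature T > 1 spreads probability
    mass over classes, exposing the teacher's 'dark knowledge'."""
    z = logits / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distill_loss(student_logits, teacher_logits, T=3.0):
    """Cross-entropy between the softened teacher and student
    distributions (a sketch of the distillation objective)."""
    p = softened(teacher_logits, T)
    q = softened(student_logits, T)
    return float(-(p * np.log(q + 1e-12)).sum())
```

Minimizing this loss pulls the simple student toward the teacher's full output distribution, which carries more information per sample than a hard label alone.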

7 Open research problems and future research directions

7.1 Extensive interorganizational collaboration required

DL achieves great feats and offers possible assistance for disease analysis and treatment by dealing with huge quantities of big data and producing predictions. However, in spite of significant achievements, some additional issues need to be fixed in future models. Wide-ranging collaboration among hospitals, medical doctors, and AI professionals is needed to stimulate and improve the quality of health. Such collaboration effectively solves problems such as the lack of accessible data for AI specialists. To overcome this challenge, sophisticated methods are needed that can efficiently deal with the extensive volume of health records.

7.2 Need to capitalize on big image data

DL-based models work on massive datasets, but annotated medical data is scarce relative to other image domains, because annotation of medical information is expensive, tedious, and inefficient, requiring ample time from professionals. Therefore, sharing data resources among dissimilar healthcare providers will help overcome such problems.

7.3 Enhancement in DL models

From the literature, most DL models employed in healthcare are supervised. However, for rare diseases or in the absence of a capable professional, these models do not produce good results. Hence, the proficiency of unsupervised and semi-supervised methods in healthcare needs to be investigated. Also, there is an open


question of how we can move from supervised systems to learning without labels without disturbing health precision. Another critical area of attention is understanding settings with both labeled and unlabeled data; this arises in several genetic fields, such as proteins and DNA. Modifications of semi-supervised learning from data mining, and active learning approaches such as adaptive and transfer learning, can be exploited to attain enhanced deep reinforcement learning.

7.4 Black boxes in healthcare

DL helps physicians address many health-sector issues; however, there are still problems where DL models have not yet offered a complete solution. Humans do not weigh every parameter when making complex decisions; it is largely a matter of trust. Adoption of DL in the health sector likewise requires trust. Black boxes may be an additional critical obstacle: the legal consequences of black-box functionality may be a barrier, as healthcare professionals would hesitate to rely on it; who would be accountable if the results are wrong? Owing to the sensitivity of this topic, hospitals may not be content with black boxes that cannot explain, say, how a specific result was reached to an ophthalmologist.

7.5 Computation complexities

Computational complexity of DL in the healthcare sector is another issue, specifically in the region of physiological signals such as ECG and EMG. ECG deals with the bioelectrical action of the heart; EMG assesses the working state of the body's muscles and nerves. Automated models for signal analysis show great results and improve the efficiency of treatment. However, to attain these feats, further investigation is required to reduce the complexity of classifier procedures and so improve computational performance. Thus, enhancement is desired to make the DL process fit for hands-on usage. Multitasking DL modules are on the rise, meaning multiple physiological indicators can be captured concurrently and uninterruptedly. The classification and investigation of these signals may need dissimilar DL approaches for diverse tasks. Future research should contemplate a single comprehensive DL technique that can work with numerous taxonomies.

7.6 Improving the proficiency of treatment and continuous health monitoring

It is a known fact that IoT devices are on the increase. Medical IoT, real-time IoT appliances, and big data are considered the main drivers for building a smarter


environment. The exponential growth of sensitive data records from connected objects, such as wearable sensors and smart pulse monitors, makes DL the favored tool for extracting meaning from the collected records. Cloud-based DL becomes a challenge due to connection bottlenecks and an overall decrease in service quality caused by latency problems. Also, edge computing adds a fundamental improvement over cloud computation by processing enormous data volumes before transporting them to the cloud, and by adding computation capability at the edge node, which optimizes the resources needed in the edge/cloud.

7.7 Edge/fog computing

Edge computing moves models and computing services from the central cloud to edge nodes, in close relation with end-users. The rapid development of smart mobiles and wearable intelligent sensor nodes has revolutionized the single-center system into highly personalized healthcare systems. For instance, mHealth envisages wireless IoT connectivity and mobile technology to link patients with healthcare authorities, to make patients active participants in their own health, and to encourage links between specialized physicians and patients. However, security of data transmission and resource allocation for edge-based devices remain tricky tasks.

Figure 1: DL models used in healthcare.

Figure 2: Recently published articles on DL techniques in healthcare.

Advanced deep learning techniques and applications in healthcare services


Bhanu Chander

8 Conclusion

The inclusion of DL-based approaches in the healthcare sector has brought tremendous improvements in disease diagnosis, while also raising numerous challenges and questions. In this chapter, we discussed DNNs, a subfamily of ML tools, in healthcare applications such as the brain, body–machine interfaces, bioimaging, and medical imaging. We also covered the factors that push DL into healthcare and the various DL variants. Moreover, we discussed various healthcare applications of DL techniques. Finally, we outlined the challenges, limitations, and future directions of DL in the healthcare sector.


Rohit Rastogi, Mamta Saxena, Devendra Kr. Chaturvedi, Mayank Gupta, Akshit Rajan Rastogi, Vrinda Kohli, Pradeep Kumar, Mohit Jain

Visualizations of human bioelectricity with internal symptom captures: the Indo-Vedic concepts on Healthcare 4.0

Abstract: The human body, including the brain, runs on a very different framework, one of positively charged ions of elements like potassium, sodium, and calcium. That is how all signals travel within and between the brain and other parts of the body, and it gives us the ability to perform various tasks like thinking, talking, and walking. Examining the mechanisms of cells helps in determining whether each system in which they reside is functioning properly. Consumerism and materialistic culture have many ill effects but also offer the benefit of bringing citizens of the world together; this has also helped strengthen local identities while creating global forms of identity. To effectively address the moral and identity challenges posed by globalization, a Global Consciousness is required. It is the embodiment of both traits, i.e., the connection between people and the differences within humankind, together with a will in young individuals to take proper, honest action on its behalf. It is the ability to understand global, international, and cultural sources, interconnections, and actions. Health sensors have been used to capture bulk data, and statistics have been produced through the Internet of Things. Visualizations of human energy in different body organs are presented.

Keywords: Kirlian photography, human bioelectricity, SARS-CoV-2, satisfaction and peace, gadget radiations

Rohit Rastogi, Dept of CSE, ABES Engineering College, Ghaziabad, Uttar Pradesh, India, e-mail: [email protected], https://orcid.org/0000-0002-6402-7638 Mamta Saxena, Ministry of Statistics, Govt. of India, Delhi, India, e-mail: [email protected] Devendra Kr. Chaturvedi, Dept. of Electrical Engineering, DEI, Agra, India, e-mail: [email protected] Mayank Gupta, Tata Consultancy Services, Japan, e-mail: [email protected] Akshit Rajan Rastogi, Dept of CSE, ABES Engineering College, Ghaziabad, Uttar Pradesh, India, e-mail: [email protected] Vrinda Kohli, Dept of CSE, ABES Engineering College, Ghaziabad, Uttar Pradesh, India, e-mail: [email protected] Pradeep Kumar, Dept of CSE, ABES Engineering College, Ghaziabad, Uttar Pradesh, India, e-mail: [email protected] Mohit Jain, Dept of CSE, ABES Engineering College, Ghaziabad, Uttar Pradesh, India, e-mail: [email protected] https://doi.org/10.1515/9783110708127-004


Rohit Rastogi et al.

1 Introduction

1.1 Introductory content on human bioelectricity and a global understanding of consciousness

By understanding the theory of the flow of current in the human body, its generation pattern, how it is transmitted, and under what conditions, we can map the human brain. Bioelectric oscillations occur in all humans and play a significant role in cognitive activities; neural oscillations help encode messages and transmit them to the surroundings, a highly effective approach, and this phenomenon occurs, at various levels, in every living entity [1, 2].

The pandemic caused by SARS-CoV-2 is due to impurities in the atmosphere. The scientific Vedic practice of yajna is the only cure for this problem: timely sanitization effectively removes the issue from its roots. It is the only efficient mechanism on Earth that can rectify the whole environment, which is why we need to make it acceptable and popular all over the world; this has also been highlighted in the Vedic literature [1].

1.2 Kirlian photography

Kirlian photography was discovered in 1939 by Semyon Davidovich Kirlian. It is used to reveal the visible "aura," or energy, radiated by the object being photographed. Semyon and his wife believed that this photography could be used to detect plant diseases that were otherwise difficult to detect. Between 1970 and 1980, the technique was frequently used to observe leaves and fingertips. It works by passing a high-voltage charge through the object, connected to a photographic plate within a high-frequency electric field, which results in the formation of a colored aura around the object. By using the results of different sweat compositions, it is possible to assess the physical and mental well-being of a person [3].

Kirlian photography of living tissue is performed with the Bio-Well GDV camera, a non-invasive way to quantify the human energy field using a specialized camera and software system. It relies on gas discharge visualization and electrophotonic imaging (EPI), which are highly productive in conducting human energy examinations. The camera records the functional and energetic state of organs and offers a snapshot estimate; it is important to understand that the values obtained are not stable, since the energy measured here reflects the emotions and the mental condition of the subject, which are liable to change. Moreover, this investigation helps to analyze, quantitatively, the actual mental and emotional state, anxiety, and the effect on the human body of exposure to a particular kind of radiation, treatment, or drug [4].
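The Bio-Well's actual processing is proprietary, so purely as an illustration of the kind of analysis described (the threshold value and the synthetic ring-shaped "glow" image are assumptions of this sketch), a captured glow image can be reduced to two simple parameters, a glow area and a mean intensity:

```python
import numpy as np

def glow_parameters(image, threshold=0.2):
    # Pixels above the threshold are treated as part of the glow (corona).
    mask = image > threshold
    area = int(mask.sum())                                 # glow area in pixels
    intensity = float(image[mask].mean()) if area else 0.0 # mean glow brightness
    return area, intensity

# Synthetic "fingertip glow": a bright ring around the image centre.
y, x = np.mgrid[-64:64, -64:64]
r = np.hypot(x, y)
image = np.exp(-((r - 30) ** 2) / 50.0)  # ring of radius ~30 px
area, intensity = glow_parameters(image)
print(area, round(intensity, 2))
```

Parameters like these, tracked over repeated captures, are the sort of quantities a GDV-style system could compare against its standard graphs.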

1.3 Energy measurement

The human body needs energy to perform tasks and carry out its main functions, and this energy is obtained by consuming food. Adenosine triphosphate (ATP) is the form in which the human body is supplied with immediate energy. ATP is obtained from the carbohydrates, protein, and fat present in foods and is considered the key source of energy for all body parts. Since the body holds only small amounts of ATP, and sufficient backup energy is necessary, other stored energy is used to replenish ATP. The amount of energy one requires daily depends on the individual's energy consumption and metabolic energy requirements, which can be estimated from body weight and activity level. If a person takes in excess food, the energy is stored as fat in the body, and excessive fat storage can eventually lead to a high body mass index (BMI) [5].

The technique used for energy measurement is the response-to-stimulus technique. The stimulus is a weak electrical current applied to the subject's fingertip for less than a millisecond, which provokes a specific physiological and psychological reaction. Every person reacts to the stimulus differently, so the output varies for every subject. In response to the stimulus, the fingertip emits electrons that strike and excite air molecules; these molecules create a glow, the glow is captured by a video camera, and the digital image of the glow is processed by software for evaluation. Energy is measured by analyzing these digital images using software and standard graphs [6, 7].
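The BMI mentioned above is simply weight divided by height squared, and one widely used estimate of daily energy need, the Mifflin-St Jeor resting-energy equation scaled by an activity factor, can be sketched as follows (the chapter itself does not name a specific formula, so Mifflin-St Jeor is an assumption of this example):

```python
def bmi(weight_kg, height_m):
    # Body mass index: weight divided by height squared (kg/m^2).
    return weight_kg / height_m ** 2

def daily_energy_kcal(weight_kg, height_cm, age, sex, activity_factor=1.4):
    # Resting energy via the Mifflin-St Jeor equation, scaled by an
    # activity factor (~1.2 sedentary to ~1.9 very active).
    s = 5 if sex == "male" else -161
    bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age + s
    return bmr * activity_factor

print(round(bmi(70, 1.75), 1))                            # 22.9
print(round(daily_energy_kcal(70, 175, 30, "male", 1.4))) # 2308
```

A 70 kg, 1.75 m adult thus lands in the "normal" BMI range, with a moderate-activity daily requirement of roughly 2,300 kcal.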

1.4 Bioelectricity

Bioelectricity refers to the electrical currents generated inside the human body. These currents result from multiple biological processes that take place within the body; the body uses this current to send impulse signals along nerve fibers, to carry out different tissue- and organ-related functions, and to control metabolism. The flow of electricity plays an essential role in physiology and in times of disease: nerves send information and govern the functioning of the human body on the basis of the signals generated by the bioelectric current. The signal or impulse is generated directly by the organic tissue and is carried to the brain via electricity, not through any kind of fluid. Studies have shown how bioelectricity helps in repair and healing by attracting macrophages and neutrophils to the wounded site [8, 9].

The field is closely related to the medical subject of electrophysiology. Bioelectricity, the flow of electricity through the body and the currents the body produces, plays an important role in our health and disease, that is, in physiological and pathophysiological conditions. Bioelectric signals mediate body functions by transmitting electric impulses; the bioelectric current that flows through the body is a flow of ions and is used by cells. The nerves relay this information, which is used to regulate the functions of tissues and other organs [8].
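The ionic basis of these currents can be made concrete with the Nernst equation, which gives the equilibrium potential of a single ion species from its concentrations on either side of the cell membrane (the concentration values below are typical textbook figures for mammalian cells, not measurements from this chapter):

```python
import math

def nernst_mV(z, conc_out_mM, conc_in_mM, temp_C=37.0):
    # Equilibrium (Nernst) potential for one ion species, in millivolts:
    #   E = (R*T / (z*F)) * ln([out]/[in])
    R, F = 8.314, 96485.0            # gas constant J/(mol*K), Faraday constant C/mol
    T = temp_C + 273.15              # body temperature in kelvin
    return 1000 * (R * T) / (z * F) * math.log(conc_out_mM / conc_in_mM)

# Typical concentrations (mM): K+ 5 outside / 140 inside, Na+ 145 outside / 12 inside
print(round(nernst_mV(+1, 5, 140), 1))    # ~ -89 mV (potassium)
print(round(nernst_mV(+1, 145, 12), 1))   # ~ +67 mV (sodium)
```

The roughly -89 mV potassium potential and +67 mV sodium potential are the opposing "batteries" whose ion flows produce the nerve impulses described above.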

1.5 Pranik Urja

Pranik is derived from "prana," which means alive, and Urja means energy; Pranik Urja is thus the energy of life. The three main sources of Pranik Urja are the sun, wind, and ground, which correspond to solar prana, air prana, and ground prana. Solar prana is derived from the sun and promotes the flow of energy in the human body; it can be obtained by sunbathing for about 5–10 min, or by consuming water that has been kept in sunlight. Air prana is gained from energy in the air: while breathing in, our lungs absorb air prana, and more can be absorbed by practicing Pranayama. Ground prana is obtained from energy in the ground and is captured through the soles of our feet; by walking barefoot on the ground, our body absorbs larger amounts of ground prana [10].


1.6 Human body and seven chakras

There are seven chakras inside the human body, each responsible for different characteristics. The colors and roles of the seven chakras are:
The root chakra – red: grounding, that is, connecting all energy with the ground.
The sacral chakra – orange: the energy that motivates one to enjoy the rewards of one's work, including indulging in joyful and pleasing activities.
The solar plexus chakra – yellow: confidence and wisdom.
The heart chakra – green: self-love and love for others.
The throat chakra – blue: the energy to speak comes from this chakra.
The third eye chakra – indigo: said to open the mind to information beyond the material world and the five senses.
The crown chakra – violet: pure consciousness energy; balancing it can be seen as the destination of one's spiritual journey toward pure consciousness [11].

1.7 Aura of the human body

The human aura consists of seven layers of energy, each with particular attributes. The aura is like a halo of energy; it functions in conjunction with the chakras of the body, affects the health of the human body, and can be viewed as the essence of spiritual energy. One's feelings and understandings give colors to the aura and depict how one feels emotionally and physically. Thoughts also add color to the aura: they are the energy one emits during the mental process. Aura colors reflect the energy vibrations that one carries. The seven layers of the human aura are:
1. Archetypal
2. Spiritual
3. Mental
4. Emotional
5. Imaginal
6. Etheric
7. Physical [12]

The human aura is the energy radiated by the human body. All living things have a special energy field, often referred to as the bio-field; it is not limited to our head but extends to all parts of our body. The human aura radiates spiritual, physical, and mental energy and is represented by a mix of colored frequencies, where each color has its own specific function and characteristic. According to scientific research, the aura is an electromagnetic field created around our body; its vibrations are very delicate and can be detected only by fine instruments. The aura is said to be depleted in an unhealthy person. If we want our bodies to absorb oxygen properly, our lungs must function properly; similarly, if we want the body to absorb healthy pranic energy, we have to keep the aura strong. All living beings show changes in their auras, and the auras of people taking drugs or medicines also change significantly. Today, the digital aura scanning system (DAS) is said to detect the aura of the human body and to check for any imbalance [13].

Practitioners profess to be able to see the size, shade, and kind of vibration of an aura, a colored emission that surrounds the human body; its colors are the pattern of energy vibrations that one carries within oneself [14].

1.8 Yin and yang energy

Yin and yang energy is the true example of how two opposite elements can coexist in the same place at the same time. Yin is the quietness and stillness of the night, while yang is the energy of the sun. Yin and yang are a duality forming a whole, represented by the "Taijitu" symbol. The duality can be seen in male and female, day and night, and so on: for one to start, the other has to end, just as, for a day to begin, the night has to end. Hence, all actions depend on one another. Our body is also divided into yin and yang, the upper body being yang and the lower yin. In life, we learn from our problems: during a midlife crisis (yang), the challenges can push a person to achieve success (yin). The person experiences a transformation and can either strive toward success or stay weak. Thus, imbalance in our life also teaches us something [15].

Yin and yang is a structure of dualism, of brightness and darkness, negative and positive, cold and warm; it depicts how two explicitly opposite traits exist together and yet complement one another, depend upon one another, and give birth to each other. It is part of Chinese cosmology and a symbol of harmonious, concurrent energy. Yin is depicted as black, while yang is depicted as white. Yin is described as feminine energy, which is soft, passive, slow-paced, and quiet; yang is portrayed as masculine energy, which is forceful, strong, and focused. The essence of this is that yin and yang energies cannot function independently; balance is required, as between the two parts of life. These two energies influence people's lives, bringing the fundamental harmony and serenity and the much-needed vitality and drive to carry out everyday tasks; that is how the two energies coexist and are essential for humankind [16].

1.9 Quantum consciousness

Quantum consciousness, also known as the quantum mind, is the claim that consciousness requires quantum processes. It conflicts, however, with the prevailing view in neurobiology, which holds that the function of the brain is purely classical. The idea that the human brain works with the help of relevant quantum processes remains unproven; until scientists produce practical evidence of how quantum effects could actually give rise to consciousness, the role of quantum effects in thought remains speculative [17].

Quantum consciousness is, basically, a group of hypotheses proposing that classical mechanics cannot explain consciousness; according to this view, the probability of our brain creating a certain thought is analogous to the probability of the spin of an electron. The work done on this subject is mostly experimental. Many experiments have led to the conclusion that a model of quantum consciousness related to the dynamics of psychological states, during the perception and cognition of ambiguous features, follows quantum mechanics, and that a quantum wave function exists in the brain [18, 20].

When human beings think, many thoughts come up in their minds. Our consciousness is our inner life: it is how we feel and why we feel it. Research suggests that consciousness arises from quantum vibrations and thus emerges from the complex neural computations in our brain. A conscious mind is an aware mind; however, the concept of consciousness is complicated [19].

Rohit Rastogi et al.

1.10 Science of meditation and its effect on human body

Studies show how meditation can affect both the physical and the intellectual health of a person. The purpose of meditation is to understand one's mind and what is going on in the moment. When discussing meditation and science, it is important to keep in mind that the goal is for the person to move from a dull mind to "enlightenment." Meditation also affects our immune system: one study found that people who meditate produce a greater number of antibodies, and meditation influences genes that regulate the body. According to research at Harvard, mindful meditation has shown benefits for patients in a number of key areas, such as depression and anxiety. Meditation has shown many benefits: eight weeks of meditation can reduce ageing, improve memory, and keep us positive. Meditation is to our mind what exercise is to our body [21]. In meditation, an individual uses different strategies to improve focus and attain a clear mind, for example, mindfulness or concentration on an object, a thought, or a movement. The long periods of research devoted to the science behind meditation have provided some evidence-based support for its benefits. Scientific methods and instruments, such as MRI and EEG, have demonstrated that regular meditation affects people, by measuring the bodily changes in the brain. One study shows that mindfulness meditators exhibited better performance when the stimulus to be identified in a task was unexpected than when it was expected, suggesting that attentional resources were more readily available for performing well in the task. Another study shows that subjects with social anxiety disorder displayed diminished activation of negative self-beliefs following an intervention program that included meditation practice [22].

1.11 Science of jap and its effect on human body

There is evidence that chanting helps improve our health in remarkable ways that involve no medicines, supplements, or herbs of any kind. Moreover, it is free of cost and can easily be practiced daily. Chanting spiritual words like "OM" calms our mind and decreases stress. In the book Human Sounds, Jonathan Goldman follows a group of French monks who chanted spiritual words every single day until, one day, they decided to stop; after some time, they started getting sick. Some studies state that chanting on a regular basis helps boost one's immunity, making one less vulnerable to disease. Chanting is claimed to have the power to heal the body; some reports state that it can even heal cancer. Chanting can also boost our energy level and keep us going all day long, in any situation. It can also help create a good mood and relieve depression; according to some reports, chanting

Visualizations of human bioelectricity with internal symptom captures


helps create more endorphins in our body, which, in turn, helps create a better mood [22]. Some great benefits of meditation are:
1. The focus and concentration on a single subject leads to generation of energy.
2. If a deity is remembered, the effect increases, as per religious claims.
3. Each person should perform meditation regularly, in order to purify the mind, take it away from daily chores, and fill it with peace.
4. It energizes the mind and allows one to handle a much greater psychological load without being stressed out [23].
In ancient Indian scriptures, OM is described as the most powerful mantra to chant. Hindus consider it the absolute power, and we offer our prayers using this single-syllable chant. Using a mala for jap helps keep a count of the repetitions. The use of finger and thumb during jap directly connects to the heart (that is, the soul), which relieves our stress. OM is considered a cosmic sound, and chanting it removes stress and psychological pressure. Chanting does not help us escape from a situation; instead, it makes us mentally strong enough to deal with it. Research has found that chanting OM can calm our mind, which further relaxes our body; it helps the mind and the body stay wide awake, removes negative thoughts, and changes the electric signals passing through our living cells. The harmony of chanting echoes through body functions such as the heartbeat and breathing [24]. Scientifically, jap is another form of meditation, consisting of the repetition of a mantra. The mantra can be spoken in any way: either softly enough for the practitioner to hear it, or within the mind. Jap is to be performed while sitting in a meditation posture, which provides increased focus and concentration for a prolonged duration.
These benefits of meditation, from focused energy generation to a mind that handles greater psychological load without stress, have been reported elsewhere as well [25, 26].

1.12 Science of mantra and its effect on human body

The word mantra is a Sanskrit word meaning a word or sound repeated to aid concentration in meditation. A mantra, when chanted or silently recited, is a powerful therapeutic instrument that Indian spiritual leaders have known for several thousand years, and science is now demonstrating its value and significance. In one study, researchers used advanced brain-mapping instruments and confirmed health benefits of chanting a mantra while meditating, for example, its capacity to help free the mind of background noise and quieten the nervous system. Another study found that chanting an authentic Sanskrit mantra, as opposed to a fake (tamas) mantra, leads to better outcomes in the reduction of depression and anxiety levels. An investigation at Duke University found that a four-week daily practice of mantra meditation resulted in significantly reduced stress, anxiety, and symptoms of mental distress, while brightening the mood [27].

1.13 Science of yajna (yagya) and its effect on human body

Yagya is a process in which herbal sacrifices are made in fire, combining the thermal energy of the fire with the sound energy of rhythmically chanted mantras. Yagya is reported to significantly reduce the amount of electromagnetic radiation emitted by various electronic devices, such as laptops, mobiles, and tablets. Yagyagni, the fire produced by this combined energy, may seem to be a purely religious act, but its power and effects are claimed to be extraordinary, reportedly demonstrated through extensive experimentation by scientists over the last two decades. There have also been multiple reported cases in which patients suffering from chronic disease obtained relief; in acute diabetic patients, sugar in the urine was reported to be totally absent after yagya [28].

1.14 Artificial intelligence for health informatics

With massive advancements in technology, artificial intelligence, and neural networks in particular, is emerging as an enabling technology for health informatics. With the availability of large data storage and improvements in computational technology, the requirement for good predictive models [29] has also increased.

1.15 Health sensor data management

Rapidly growing technology has created many opportunities, leading to the introduction of devices that can monitor and collect information both manually and automatically, depending on the requirement. Unlike in earlier times, the modern generation can easily connect with each other through mobile phones and electronic media, and this connectivity has also helped the health sector. Beyond delivering reports online, for information such as blood sugar and blood pressure, devices can be worn on the body to record data from which one's health can be interpreted. The massive growth in these datasets creates both data-manageability and collaboration challenges [30]. Therefore, a service platform that employs cloud computing and big data technology for efficient data management becomes necessary [28].
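As a minimal sketch of the kind of data management such a platform performs, the snippet below groups wearable readings per patient and metric and reports the average. It is illustrative only; the `Reading` record and the metric names are assumptions for the example, not an API described in this chapter.

```python
from dataclasses import dataclass
from statistics import mean
from collections import defaultdict

@dataclass
class Reading:
    patient_id: str
    metric: str      # e.g. "blood_glucose", "systolic_bp" (hypothetical names)
    value: float
    timestamp: int   # epoch seconds

def summarize(readings):
    """Group readings per (patient, metric) and return the average value."""
    buckets = defaultdict(list)
    for r in readings:
        buckets[(r.patient_id, r.metric)].append(r.value)
    return {key: mean(vals) for key, vals in buckets.items()}

data = [
    Reading("p1", "blood_glucose", 105.0, 1),
    Reading("p1", "blood_glucose", 111.0, 2),
    Reading("p1", "systolic_bp", 122.0, 1),
]
print(summarize(data))
# {('p1', 'blood_glucose'): 108.0, ('p1', 'systolic_bp'): 122.0}
```

In a real platform the same grouping step would run over a cloud data store rather than an in-memory list, but the aggregation logic is the same.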

Visualizations of human bioelectricity with internal symptom captures

77

1.16 Multimodal data fusion for healthcare

Multimodal data fusion for healthcare refers to combining electronic records that include both measurements and images. The increasing flow of medical data produced by the healthcare sector, especially data that is extremely large in size, poses new challenges to data mining. To overcome these challenges, we require a model in which information presented in various forms, such as medical images and clinical measurements, can be easily interpreted: a multifunctional, multidimensional model that fulfils all the requirements. Such a model helps combine data from unsupervised data mining and gives a clear, unified representation of it [31].
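A simple form of the combination described above is early fusion: normalize the features extracted from each modality separately, then concatenate them into one patient vector that downstream mining can use. The sketch below is a toy illustration under that assumption; the feature values are invented.

```python
def minmax_normalize(vec):
    """Rescale a feature vector to [0, 1] so modalities are comparable."""
    lo, hi = min(vec), max(vec)
    if hi == lo:
        return [0.0 for _ in vec]
    return [(v - lo) / (hi - lo) for v in vec]

def early_fusion(image_features, measurement_features):
    """Concatenate per-modality normalized features into one patient vector."""
    return minmax_normalize(image_features) + minmax_normalize(measurement_features)

# Hypothetical image-derived and measurement-derived features for one patient
fused = early_fusion([0.0, 5.0, 10.0], [60.0, 120.0])
print(fused)  # [0.0, 0.5, 1.0, 0.0, 1.0]
```

Per-modality normalization matters here: without it, large-valued measurements would dominate the fused vector.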

1.17 Heterogeneous data fusion and context-aware systems for IoT health

With technology advancing every day, the healthcare sector has seen a major shift from its conventional methodology to smart, sensor-based technology. However, the developments in wireless networks, cloud computing, and big data analysis that have driven the growth of the Internet of things (IoT) come with drawbacks of their own: for example, there is no assurance of accuracy, and big data from different sources can get mixed up, causing confusion. Thus data fusion, which extracts information from a heterogeneous set of sources, came into the picture. In this technique, results from multiple sensors are combined to produce a single, more accurate result. IoT applications can benefit from context-aware data fusion by using this information to modify the behavior of the application based on the current situation [32].
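One standard way to combine results from multiple sensors into a single, more accurate estimate is inverse-variance weighting, where the less noisy sensor receives proportionally more weight and the fused variance is lower than either sensor's own. The sketch below illustrates this idea; the sensor names and numbers are hypothetical.

```python
def fuse(estimates):
    """Inverse-variance weighted average of (value, variance) sensor readings.
    Less noisy sensors get proportionally more weight."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total  # fused value and its (reduced) variance

# Hypothetical heart-rate estimates from a chest strap and a wrist sensor
fused_value, fused_var = fuse([(72.0, 1.0), (76.0, 4.0)])
print(round(fused_value, 2), round(fused_var, 2))  # 72.8 0.8
```

Note that the fused variance (0.8) is below the best individual sensor's variance (1.0), which is exactly the accuracy gain that motivates fusing heterogeneous sensors.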

2 Literature survey

Bundzen and his team evaluated health quality on the basis of gas discharge visualization (GDV) parameters (P. Bundzen and K. Korotkov). Their experiments reflected that good judgments, quick decisions, and biomedical diagnostics are essentially supported by a focus on contemporary measurement methods. The assessment, control, and maintenance of the functional capacity of a relatively healthy person (RHP) is an important clinical problem, covering a child's state during the dynamic growth phase and the sick during remission. The search for an easy, cost-effective method, with associated criteria, for the objective evaluation of health status, for both relatively healthy people and sick patients, appears intuitively obvious and necessary. Health quality can be well estimated by measuring a number of physiological parameters, and the development of a


user-friendly, inexpensive method for objective instrumental evaluation and control of one's health level is highly desirable (Bundzen, P. et al., 2000). Bundzen and his team concluded that one solution is the technique of GDV diagrams. The resulting index helps in measuring the average level of homeostasis of the organism of an RHP in a state of calmness and in a relaxed mood. Experiments on large RHP groups from various countries yielded a distribution of the JS parameter, calculated in the GDV diagram program, which follows a "quasi-Gauss" law (Bundzen, P. et al., 2006). Application of the work in this paper: GDV-grams were taken in the USA and Russia. Due to the specific properties of the logarithmic function, inspection of the curve finds it asymmetric relative to zero, so the positive and negative values should be considered separately. By statistical analysis, the sections of data are taken as an aggregate with a normal distribution. The average value is 0.072, which is close to the median of 0.078. The median is located approximately between the 25% and 75% quartiles (–0.078 and 0.217, respectively). About 96% of the values are concentrated within two standard deviations from the average. The database was expanded using examination results from 41 RHP in Sweden and 20 RHP in the USA, collected during the years 1999 and 2000, with the combined results shown in a composite histogram. A total of 135 RHP were tested, and the graph shows that the statistical tendency is preserved: 99% of the tested RHP fell within the range –0.6/+1.0. Taking into account that these 207 people were selected without knowledge of their health state, including people with compensated chronic or latent diseases, the range JS = –0.6/+1.0 can be taken as the value of a relatively normal state of health. This range may be taken as a "good health" range and appears to be similar across various countries, providing a means to cross-check the data.
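The screening logic just described (a mean close to the median, most values within two standard deviations, and a "good health" range of –0.6/+1.0) can be sketched as a short Python check. The sample values below are invented for illustration and are not the study's raw data.

```python
from statistics import mean, median, pstdev

def js_summary(js_values, lo=-0.6, hi=1.0):
    """Summarize JS values: central tendency, the share of values within
    two standard deviations, and the share inside the [lo, hi] range."""
    m, s = mean(js_values), pstdev(js_values)
    within_2sd = sum(1 for v in js_values if abs(v - m) <= 2 * s) / len(js_values)
    in_range = sum(1 for v in js_values if lo <= v <= hi) / len(js_values)
    return {"mean": m, "median": median(js_values),
            "within_2sd": within_2sd, "in_range": in_range}

# Illustrative JS values only, not data from the cited study
sample = [-0.1, 0.0, 0.05, 0.07, 0.08, 0.1, 0.2, 0.3, 0.9, -0.7]
print(js_summary(sample))
```

Running checks of this kind over a subject database is how a "good health" reference range can be verified against newly collected GDV-grams.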
All data discussed above were measured for RHP between 35 and 60 years old (Bundzen, P. et al., 2006). The paper notes, as a future perspective and limitation, that it is impossible to evaluate health state based on GDV diagram data alone: a lot of patients' data fall within the "good health" range, so the data of all other GDV programs should also be taken into consideration. As the results of the t-test show, there was a statistically significant difference between most of the measured GDV parameters for healthy and unhealthy groups, and no significant difference between different groups of patients. On the other hand, as demonstrated in many clinical studies, GDV parameters are very sensitive to changes of organism state after different influences and treatments. This study also supports the statement that "to make an evaluation of health state we should use a complex set of GDV parameters" (Bundzen, P. et al., 2006). The approach employs GDV and EPI, which are very efficient in conducting human energy scans. It records the functional and energetic condition of organs and systems at the moment of measurement; it is necessary to understand that the values taken are not constant: the energy measured here takes into consideration the emotions and the psychological state of the subject, which are subject to change. Furthermore, this


study helps in analyzing quantitatively the actual psychological and emotional state, anxiety, and the effect on the human body of exposure to certain kinds of radiation, treatment, and medication. The output of the study is easy to measure because of the years of research dedicated to this technique; there are standard values and patterns that depict various levels of energy, stress, and multiple conditions of the subject, and this output can be analyzed by non-experienced personnel (Korotkov, K. et al., 2019). The team applied the instrument and dataset as follows: the instruments that contribute to the above study include the Bio-Well GDV camera, to capture the digital image, and the Bio-Well software, to process the image of the glow. The techniques include GDV and EPI. A large dataset is received as output for carrying out in-depth evaluation of the various energy and anxiety levels of a subject (Korotkov, K. et al., 2019). Application of this work: this work is still finding its application in various fields, including the study of a subject under a new medicine or treatment, the effects of prolonged exposure to electromagnetic radiation, the study of changes in various energy levels from practicing meditation, yoga, and other spiritual activities, and the study of stress management in today's conditions (Korotkov, K. et al., 2019). Limitations: as far as limitations are concerned, Bio-Well is not a medical instrument (Korotkov, K. et al., 2019). In conclusion, Kirlian photography of humans is a great way to analyze the various energy levels of a subject, which were not considered quantifiable earlier but dramatically influence the behavior and other characteristics of the human body; this is still a developing field, and new applications of the output of this study are being used in various studies. The evolution of this technology will enable researchers to find methods more efficient than the conventional methodologies used (Korotkov, K.
et al., 2019) hitherto. The work of Gagua and his team in their paper concerns how GDV can be used in the department of oncology. GDV is the analysis of biological objects placed in a high-intensity electromagnetic field. Studies have been done on how Kirlian photography can be used in the diagnosis of cancer. L. W. Konikiewicz was an American researcher who did an elaborate study and correctly identified cystic fibrosis patients. In 1990, research on GDV for the early detection of cancer began. Patients of both genders were selected for the analysis, together with 140 women with stage 3 breast cancer. The patients were compared with healthy controls before and after their treatment. It was found that the little finger showed the most consistent difference [10]. At the same time, clinical trials done on patients reflected uniqueness in the GDV imaging of the lower sector of the little finger. Different people were set different tasks: some did computer image processing of the data; some did statistical computations [10]. The instruments and dataset: 109 patients of both genders with stage 3 lung cancer and 140 women with stage 3 breast cancer were selected for the analysis.


Patients were measured before the treatment and after the treatment (2–3 weeks after). Forty-four healthy people were also measured as a control. For the big group of people, it was found that the distribution of GDV parameters was best described by a Gaussian distribution [10]. Application of the work: the result of the analysis demonstrates that GDV can be useful in monitoring cancer patients before and after treatment and in the course of recovery, which is essential in the healthcare industry. The GDV camera can be accepted as a unique system to monitor the everyday healthcare of the patient. The calculated results also give hope of early detection of cancer, based on GDV evaluations and a system of active testing, which would benefit millions worldwide [10]. They concluded that, as everyone knows, if cancer is detected at an early stage of its development, it can be treated successfully with the technologies available now. GDV is a cheap and noninvasive method that presents an unbiased measure for the detection of cancer as well as for monitoring the patient's health after treatment. Patients visiting the hospital may be easily checked with this system, and the GDV method is easy to apply. The approach should be based on computer data-mining multiparametric comparison with a database of different types of cases [10].
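The t-test comparison mentioned in this survey, a GDV parameter compared between a healthy group and a patient group, can be illustrated with Welch's t statistic, which does not assume equal variances. The group values below are hypothetical, chosen only to show the computation.

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

# Hypothetical GDV parameter values for a healthy and a patient group
healthy = [0.10, 0.05, 0.12, 0.08, 0.09]
patients = [0.40, 0.35, 0.44, 0.38, 0.41]
t = welch_t(healthy, patients)
print(round(t, 2))  # a large |t| suggests the groups differ
```

In practice one would also compute the degrees of freedom and a p-value (for example with `scipy.stats.ttest_ind(..., equal_var=False)`), but the statistic above already conveys the direction and strength of the group difference.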

3 Results and discussion, interpretation, and analysis on healthcare experiments

3.1 Energy distributions

On analysis of different organs, the energy distribution before and after the protocols has been analyzed, as shown in Fig. 1.

3.2 Nervous system

The autonomic nervous system has a pivotal role in the physical response to stress and is divided into two parts. In stressful conditions, the sympathetic nervous system (SNS) induces the "fight or flight" response, that is, the subject either fights off the life threat or flees from the enemy. The SNS signals the adrenal glands to release the hormones epinephrine and norepinephrine, which increase the heart rate and dilate the blood vessels in the legs and arms; the digestive system adjusts to increase glucose levels in the blood to deal with the emergency. Disturbed energy levels of the nervous system are capable of causing serious problems and can even cause death in the case of acute attacks.


Figure 1: Energy distribution through Kirlian of both subjects for their different organ systems before and after the yajna/meditation and mantra protocols.

3.3 Respiratory system

This system is responsible for supplying oxygen to cells and moving carbon dioxide waste out of the body. The airways run from the nose to the lungs, through the trachea and bronchi. Stress conditions or a loss in energy cause shortness of breath as the pathways constrict; studies have shown that acute stressors, such as the sudden death of a loved one, can trigger asthmatic attacks. The yagya results show that the subjects become relaxed, and the improvement in their energy levels indicates that they start operating closer to the body's mean energy level and become better (as per Figure 3).

4 Applications of yagya and mantra

Yajna, or yagya, is a process in which natural offerings are made in fire with the assistance of the thermal energy of the fire and the sound energy that arises from the musical


Figure 2: Energy distribution through Kirlian of both subjects for their nervous systems before and after the yajna/meditation and mantra protocols.

reciting of mantras, as presented in Fig. 2. Scientific research has additionally demonstrated uses of yagya, including its worth in the purification of the environment and in relieving medical conditions. Performing yagya every day, with appropriate reciting of mantras, leads to diminished strain and anxiety levels in the subject; atmospheric pollutants decrease drastically, and the general environment becomes much healthier. Yagya likewise significantly diminishes the amount of electromagnetic radiation discharged by various electronic devices, for example, portable PCs and tablets. Yagyagni, the fire delivered by this combined energy, may appear to be a purely religious act; however, its power and effects are claimed to be extraordinary, reportedly demonstrated through broad and effective experimentation by researchers over the recent two decades. Additionally, there have been various instances of patients experiencing relief from chronic disease; in acute diabetic patients, sugar in the urine was observed to be absent, and glucose levels diminished to normal following a few weeks of attending yagya daily [33]. Mantra: chanting mantras helps focus our attention; when we sit down and try to meditate, our mind wanders, making it difficult to focus. Chanting helps draw our mind away from senseless thoughts and helps in


Figure 3: Energy distribution through Kirlian of both subjects for their respiratory systems before and after the yajna/meditation and mantra protocols.

concentrating the mind. Reports state that when a mantra is recited at the right frequency, with a pure mind and determination, it helps the brain rejuvenate by sending maximum oxygen to it. It regulates the heart rate and blood pressure and acts as a medicine for many health problems. It creates a calm mind and helps achieve immunity from outside mental disturbances (Staples, J. et al., 2009). Also, chanting mantras proves to be an effective way of releasing emotion, and it helps stimulate the endocrine system. Mantras align our vibrations to create awareness, and these vibrations carry a lot of power: each sound has a different and unique vibration, so each mantra has a different effect. Moreover, all our thoughts, feelings, words, and actions impact our consciousness [34].

5 Therapy in Kirlian captures

Kirlian captures employ GDV and EPI, which offer a very efficient way to conduct human energy scans. They record the functional and energetic conditions of organs and systems at the moment of measuring; it is necessary to understand that the values taken


Figure 4: Energy distribution through Kirlian of both subjects for their digestive systems before and after the yajna/meditation and mantra protocols.

are not constant: the energy measured here takes into consideration the emotions and the psychological state of the subject, which are subject to change, as presented in Fig. 4. Furthermore, this study helps in analyzing quantitatively the actual psychological and emotional state, anxiety, and the effect on the human body of exposure to certain kinds of radiation, treatment, and medication. The output of the study is easy to measure because of the years of research dedicated to this technique; there are standard values and patterns that depict various levels of energy, stress, and multiple factors of the subject, and this output can be analyzed by non-experienced personnel [35]. Yin and yang energy: yin and yang is a composition of dualism, of bright and dark, negative and positive, cold and warm; it describes how explicitly opposite features not only coexist but also complement each other, depend upon each other, and give birth to one another. It is a part of Chinese cosmology and a symbol of harmonious, coexistent energy. Yin is portrayed as black, while yang is portrayed as white. Yin is characterized as feminine energy, which is soft, passive, slow-paced, and silent. Yang, on the other hand, is characterized as masculine, which is aggressive, solid, and focused. The essence of this is that yin and yang energies cannot survive in isolation; a balance is needed, as in every other aspect of life. These two


energies influence the life of individuals to a great extent, bringing essential peace and calmness, and the much-required energy and focus to carry out daily tasks; that is how these two energies coexist and are essential for mankind [35].

6 Future perspectives and research directions

This research is capable of yielding a system that can be used for the premature detection of cancer. Special software to monitor the patient's state can be designed, and GDV data could be exchanged over the internet for online analysis. GDV helps make this work easier and can help diagnose several diseases. The future scope of the study is that it has the potential to reach various present fields; it will enable the study of human emotions and psychology and their relation to physical body characteristics, which were not quantifiable until now. These are the parameters that affect human energy more than external factors do.

7 Novelty of the research

Mantra practice decreases stress and anxiety and improves our mood. Mantras slow down the breath, which has a beneficial effect on our heart; they work at both a spiritual and a physical level. Practicing the mahamantra can bring purity and wisdom and reduce the darkness in our lives. It is important to practice a real mantra and not a fake one, because fake mantras do not help balance the three gunas, the forces of nature. A study at Duke University showed significant improvement in mood after a four-week mantra practice. Mantras connect to our inner soul, and when a mantra is accompanied by peace and calmness, it helps bring back positive memories. Thus, research indicates that ancient mantra practice can heal and strengthen our mind and body.

8 Recommendations

The years of research dedicated to the science behind meditation have provided some evidence-based advantages of practicing it. Instruments have measured the brain and bodily changes of individuals and shown how regular meditation affects them [35]. According to one study, when people who meditate on a daily basis were asked to detect an unexpected stimulus, they showed better performance than when they were asked to detect an expected one, which led to the conclusion that


these people had a greater attention span to perform actively in the given tasks. Scientific research has shown significant applications of yagya and has also proved its worth in the purification of the environment. Performing yagya daily, with proper chanting of mantras, led to reduced tension and anxiety levels in the subjects; atmospheric toxins reduced dramatically, and the overall environment was much healthier.

9 Conclusions

Meditation is considered a path towards enlightenment and self-realization. It helps in reducing stress, anxiety, depression, and pain by increasing harmony, awareness, self-introspection, and well-being; some practices reduced stress and decreased muscle tension, and mindful meditation also impacts health in a positive way. Other than meditation, performing yagya is also a good way of improving health. Yagya is essential for refining our lives, and yagyas are considered an important part of our progress and overall happiness. Our four Vedas speak of the Gayathri and yagya in the divine creation. Basically, yagya means sacrificing ego, selfishness, and the like for a noble cause. Performing yagya, through the sound of the Gayathri mantra and the heat from the fire of the yajna, associates us with the two major sources of energy, heat and light. Performing yagya also helps in the removal of insects and bacteria from our environment; cow's ghee offered in the yagya helps kill bacteria. It is also said that performing yajna helps in working on the nerves, thus helping recovery from neurological diseases [35]. We chant Sanskrit mantras while performing yagyas; these send vibrations that ensure proper functioning of our body parts. The cosmic energy associated with the Gayathri mantra is that of the sun. In one study, researchers used advanced brain-mapping tools and confirmed some of the health benefits of chanting a mantra while performing meditation, such as its ability to help free the mind of background noise inside the brain and calm down the nervous system. Another study found that chanting an authentic Sanskrit mantra, rather than a fake (tamas) mantra, led to better results in the reduction of depression and anxiety levels.

References

[1] Chaturvedi, D. K. Relationship between chakra energy and consciousness. Biomedical Journal of Scientific and Technical Research, 15(3), 1–3. doi:10.26717/BJSTR.2019.15.002705. ISSN: 2574-1241, 2005.
[2] Chaturvedi, D. K., Satsangi, R. The Correlation between Student Performance and Consciousness Level. In: International Conference on Advanced Computing and Communication Technologies (ICACCT-2013), Asia Pacific Institute of Information Technology SD India, Panipat (Haryana), 16 Nov. 2013, Souvenir p. 66, proc. pp. 200–203.
[3] Cioca, G. H., Giacomoni, P., Rein, G. A correlation between GDV & HRV measures for well being. In: Korotkov, K. G. (ed.), Measuring Energy Fields: Current Research. Backbone Publishing Co., Fair Lawn, USA, 2004, 59–64.
[4] Laycock, D., Vernon, D., Groves, C., Brown, S. The Power of Mantra. Imagecraft, Canberra, ISBN 0-7316-5794-2, 1989, p. 11.
[5] Gurjar, A. A., Ladhake, S. A., Thakare, A. P. Analysis of acoustics of "OM" chant to study its effect on nervous system. IJCSNS International Journal of Computer Science and Network Security, 2009, 9(1), 363–366.
[6] Johnson, R. D. (ed.). Computational Chemistry Comparison and Benchmark DataBase. National Institute of Standards and Technology, 2002. doi:10.18434/T47C7Z.
[7] Kim, K. J., Tagkopoulos, I. Application of machine learning in rheumatic disease research. Korean Journal of Internal Medicine, 2019, 34, 2.
[8] Konikiewicz, L. W. Introduction to Electrography: A Handbook for Prospective Researchers of the Kirlian Effect in Biomedicine. Leonard's Associates, 1978.
[9] Korotkov, K. G. Bio-Well: analysis of personal energetic homeostasis by measuring energy field. Medical Science Sports Exercise, 2019, 35, 451–475.
[10] Michael, L. The wisdom of the body: Future techniques and approaches to morphogenetic fields in regenerative medicine, developmental biology and cancer. Regenerative Medicine, 2011, 6, 667–673.
[11] Mistry, R., Tanwar, S., Tyagi, S., Kumar, N. Blockchain for 5G-enabled IoT for industrial automation: a systematic review, solutions, and challenges. Mechanical Systems and Signal Processing, 2020, 135, 1–19.
[12] Mouradian, C., Naboulsi, D., Yangui, S., Glitho, H., Morrow, M. J., Polakos, P. A. A comprehensive survey on fog computing: state-of-the-art and research challenges. IEEE, 2017, 20, 416–464. Available: https://ieeexplore.ieee.org/abstract/document/8100873/authors#authors.
[13] Nigal, S. G. Axiological Approach to the Vedas. Northern Book, 1986, 80–81. ISBN 978-8185119182.
[14] Patton, L. In: Mittal, S., Thursby, G. (eds.), The Hindu World. Routledge, 2005, 38–39. ISBN 978-0415772273.
[15] Pang, Z., Yang, G., Khedri, R., Zhang, Y. T. Introduction to the special section: convergence of automation technology, biomedical engineering, and health informatics toward the Healthcare 4.0. IEEE, 2018, 11, 249–259. Available: https://ieeexplore.org/document/8421122.
[16] Jain, R. Hatha Yoga for Teachers & Practitioners: A Comprehensive Guide to Holistic Sequencing, 13 June 2020. Reference: https://www.arhantayoga.org/blog/7-chakras-introduction-energy-centers-effect/
[17] Rastogi, R., Chaturvedi, D. K., Sharma, S., Bansal, A., Agrawal, A. Audio Visual EMG & GSR Biofeedback Analysis for Effect of Spiritual Techniques on Human Behaviour and Psychic Challenges. In: Proceedings of the 12th INDIACom; INDIACom-2018; 14–16 March 2018a, pp. 252–258.
[18] Rastogi, R., Chaturvedi, D. K., Satya, S., Arora, N., Sirohi, H., Singh, M., Verma, P., Singh, V. Which One is Best: Electromyography Biofeedback Efficacy Analysis on Audio, Visual and Audio-Visual Modes for Chronic TTH on Different Characteristics. In: Proceedings of the International Conference on Computational Intelligence & IoT (ICCIIoT) 2018, National Institute of Technology Agartala, Tripura, India, ELSEVIER-SSRN Digital Library (ISSN 1556-5068), 14–15 December 2018b.
[19] Rastogi, R., Chaturvedi, D. K., Satya, S., Arora, N., Saini, H., Verma, H., Mehlyan, K. Comparative Efficacy Analysis of Electromyography and Galvanic Skin Resistance Biofeedback on Audio Mode for Chronic TTH on Various Indicators. In: Proceedings of the International Conference on Computational Intelligence & IoT (ICCIIoT) 2018, National Institute of Technology Agartala, Tripura, India, ELSEVIER-SSRN Digital Library (ISSN 1556-5068), 14–15 December 2018c.
[20] Rastogi, R., Saxena, M., Sharma, S. K., Muralidharan, S., Beriwal, V. K., Singhal, P., Rastogi, M., Shrivastava, R. Evaluation of Efficacy of Yagya Therapy on T2-Diabetes Mellitus Patients. In: Proceedings of the 2nd International Conference on Industry Interactive Innovations in Science, Engineering and Technology (I3SET2K19), JIS College of Engineering, Kalyani, West Bengal, 13–14 Dec. 2019a. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3514326.
[21] Rastogi, R., Saxena, M., Gupta, U. S., Sharma, S., Chaturvedi, D. K., Singhal, P., Gupta, M., Garg, P., Gupta, M., Maheshwari, M. Yajna and Mantra Therapy Applications on Diabetic Subjects: Computational Intelligence Based Experimental Approach. In: Proceedings of the 2nd International Conference on Industry Interactive Innovations in Science, Engineering and Technology (I3SET2K19), JIS College of Engineering, Kalyani, West Bengal, 13–14 Dec. 2019b. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3515800.
[22] Rastogi, R., Saxena, M., Sharma, S. K., Murlidharan, S., Berival, V. K., Jaiswal, D., Sharma, A., Mishra, A. Statistical analysis on efficacy of Yagya therapy for type-2 diabetic mellitus patients through various parameters. In: Das, A., Nayak, J., Naik, B., Dutta, S., Pelusi, D. (eds.), Computational Intelligence in Pattern Recognition (CIPR), Kalyani, West Bengal, 181–197.
Advs in Intelligent Syst., Computing. Vol. 1120. Computational Intelligence in Pattern, Recognition, 13-14 Dec 2019c. 978-981-15-2448-6, 487895_1_En, (15). doi:10.1007/ 978-981-15-2449-3_15 Rastogi, R., Chaturvedi, D. K., Verma, H., Mishra, Y., Gupta, M. Identifying better? analytical trends to check subjects’ medications using biofeedback therapies. IGL Global, International Journal of Applied Research on Public Health Management (IJARPHM, 2020a, 5–1(2), 14–31. ISSN: 2639-7692|EISSN: 2639-7706|. doi:10.4018/IJARPHM, https://www.igi-global.com/arti cle/identifying-better/240753. doi: 10.4018/IJARPHM.2020010102 Rastogi, R., Gupta, M., Chaturvedi, D. K. Efficacy of study for correlation of TTH vs age and gender factors using EMG biofeedback technique. International Journal of Applied Research on Public Health Management (IJARPHM), 2020b, 5(1), 49–66. Article 4. doi:10.4018/ IJARPHM.2020010104 Rastogi, R., Chaturvedi, D. K., Satya, S., Arora, N., Gupta, M., Verma, H., Saini, H. An Optimized Biofeedback EMG and GSR Biofeedback Therapy for Chronic TTH on SF-36 Scores of Different MMBD Modes on Various Medical Symptoms. In: Bhattacharya, S. et al. eds, Studies Comp. Intelligence, Vol. 841,: Hybrid Machine Intelligence for Medical Image Analysis, 978-981-13-8929-0, 468690_1_En, (8), 2020c. doi:https://doi.org/10.1007/ 978-981-13-8930-6_8 Rastogi, R., Chaturvedi, D. K., Satya, S., Arora, N., Trivedi, P., Singh, A. K., Sharma, A. K., Singh, A. Intelligent personality analysis on indicators in IoT-MMBD enabled environment. In: Multimedia Big Data Computing for IoT Applications: Concepts, Paradigms, and Solutions, Tanwar, S., Tyagi, S., Kumar, N. eds, Springer Nature Singapore, 2020d, 185–215.doi:https:// doi.org/10.1007/978-981-13-8759-3_7 Sandra, R.-V., Holgado-Terriza, J. A., Gutiérrez-Guerrero, J. M., Muros-Cobos, J. L. Distributed Service-Based Approach for Sensor Data Fusion in IoT Environments, Multidisciplinary Digital

Visualizations of human bioelectricity with internal symptom captures

[28] [29] [30] [31] [32] [33] [34] [35]

89

Publishing Institute, 14 July, 2014, doi: 10.3390/s141019200, https://www.ncbi.nlm.nih.gov/ pmc/articles/PMC4239875/ Saxena, M., Sengupta, B., Pandya, P. A study of the impact of Yagya on indoor microbial environments. Indian Journal of Air Pollution Control, 2007a, 7(1), 6–15. Saxena, M., Sengupta, B., Pandya, P. Comparative studies of Yagya vs. Non-Yagya microbial environments. Indian Journal of Air Pollution Control, 2007b, VII(1), 16–24. Saxena, M., Sengupta, B., Pandya, P. Effect of Yagya on the gaseous pollutants. Indian Journal of Air Pollution Control, 2007c, VII(2), 11–15. Saxena, M., Sengupta, B., Pandya, P. Controlling the microflora in outdoor environment: effect of Yagya. Indian Journal of Air Pollution Control, 2008, VIII(2), 30–36. Saxena, M., Kumar, B., Matharu, S. Impact of Yagya on particulate matters. Interdisciplinary Journal of Yagya Research, 2018, 1(1), 01–08. Takagi, K., Nakayama, T. Peripheral effector mechanism of Galvanic skin reflex. Japan Journal of Physiology, 1956, 5, 5–7. Thakur, G. S. YAJÑA-A vedic traditional technique for empirical and transcendental and achievement. Indian Streams Research Journal, 2014, 04, 5. Yesha, P., Elonnai, H., Amber, S., Udhbhav, T. Artificial Intelligence in the Healthcare Industry in India, The Centre for Internet and Society, India, 2018.

Wasiur Rhmann, Babita Pandey

Early cancer predictions using ensembles of machine learning and deep learning

Abstract: Cancer is responsible for a large number of deaths globally. According to a 2018 report by the International Agency for Research on Cancer, one man in five and one woman in six worldwide develops cancer during their lifetime, and one in eight men and one in eleven women die of the disease. Lung cancer and breast cancer are the two types of cancer with the highest numbers of new cases; breast cancer is the second most deadly cancer among women, after lung cancer. Survival of cancer patients largely depends on timely and accurate diagnosis of the disease. In the literature, various machine learning (ML) techniques have been applied to breast cancer prediction; ML techniques utilize past data to predict disease. Identifying tumors as benign or malignant is a crucial part of cancer detection and is treated as a classification problem. In this chapter, ensemble-based techniques and deep learning (DL) techniques are used for lung and breast cancer prediction. Two datasets, Wisconsin and Coimbra, obtained from the well-known UCI ML repository, are used for the experiments. Ensemble techniques combine different classifiers, such as logistic regression (LR), random forest (RF), naïve Bayes (NB), and support vector machines, to build more effective classifiers with improved prediction capability. In recent years, DL, a subfield of ML that originated from neural networks, has emerged as a promising area whose techniques have drastically boosted model performance and caught the attention of researchers. Although DL techniques have shown very good performance on different types of problems, they are computationally intensive. In this chapter, ensemble ML techniques show the best performance on the Coimbra and lung cancer datasets, while a deep neural network shows the best results on the Wisconsin dataset, with an ensemble technique a close second.

Keywords: machine learning, ensemble, deep learning, breast cancer, lung cancer

Wasiur Rhmann, KL Deemed to be University, Vaddeswaram, Guntur, Andhra Pradesh, India, e-mail: [email protected] Babita Pandey, Babasaheb Bhimrao Ambedkar University, Lucknow, India, e-mail: [email protected] https://doi.org/10.1515/9783110708127-005


1 Introduction

Treatment of any type of cancer is costly due to its recurrent nature and high mortality rate. Artificial intelligence (AI)-assisted clinical techniques can be very effective aids to pathologists and doctors in the early diagnosis of cancer, and the accuracy of AI-based diagnosis techniques is much better than that of empirical methods [1]. Survival of cancer patients largely depends on the timely and accurate diagnosis of the disease. Identifying tumors as benign or malignant is a crucial part of cancer detection, and different researchers have used machine learning (ML)-based models for it. A large number of women suffer from breast cancer and die each year; lung cancer is also responsible for a large number of deaths. Supervised classification techniques are used extensively for the detection of both.

ML is part of the evolving field of AI-based computational research, with diverse applications in different domains. With the rise in data availability, ML techniques are increasingly used to build prediction models in areas such as software security, network security, software engineering, cryptocurrency price prediction, and medical diagnosis. ML techniques fall into two classes: supervised techniques, which train models on datasets having target labels, and unsupervised techniques, which do not use target labels. With the advancement of technology, researchers in the biological and medical sciences are concerned with improving the accuracy of disease detection.

Many cancer prediction studies have used numeric datasets, while some studies in the literature have used images; but obtaining such images is sometimes difficult, and image-based prediction requires a large amount of computation. Hence, in this study, only numeric datasets are used for cancer prediction. In this chapter, ensembles of ML techniques and a deep learning (DL) technique are explored for the early-stage prediction of two leading types of cancer, breast cancer and lung cancer. Ensemble techniques combine two or more ML techniques for better results; their three main categories are bagging, boosting, and stacking, each with a different strength for boosting prediction performance. DL has its roots in ML and is inspired by the human brain; DL models use several neurons as information processing units. DL techniques involve a large number of computations and require graphical processing units (GPUs) for larger datasets. In recent years, DL techniques have given promising results, from image and video processing to text classification and text summarization. DL techniques usually do not require the manual feature selection needed by ML techniques. Different DL techniques, such as the deep neural network (DNN) [2], recurrent neural network (RNN) [3], long short-term memory (LSTM) [4], and convolutional


neural network (CNN) [5], are suitable for different types of tasks. In this chapter, the DNN is considered for cancer prediction, alongside ensemble-based techniques. The rest of the chapter is organized as follows: Section 2 surveys the ML techniques applied to various cancer predictions; Section 3 discusses ensemble techniques and their types; Section 4 presents the DL technique; Section 5 describes the datasets used for the experiments; Section 6 covers the metrics used to assess the performance of the different techniques; Section 7 presents results and discussion; and Section 8 concludes the chapter with future scope.

2 Machine learning in early prediction of cancer

Supervised and unsupervised learning are the two major categories of ML techniques. Supervised techniques are further classified into classification and regression techniques and are based on a training dataset with features and target variables, while unsupervised techniques do not require target variables to train the model. Most prediction models in the literature use classification techniques. ML and data mining techniques have also been explored for cancer drug response prediction, to help develop effective drugs to treat cancer [6]. With the advancement of technology, researchers are concerned about techniques to improve the accuracy of disease detection. Some of the leading prediction studies are discussed below.

Patrício et al. [7] presented a breast cancer prediction model built on blood analysis and anthropometric data collected from 166 participants, and found that the support vector machine (SVM) was the best-performing model, with sensitivity up to 88% and specificity up to 90%. Kourou et al. [8] reviewed various ML techniques applied to cancer prediction with several input features, including SVMs, artificial neural networks (ANNs), decision trees (DTs), and Bayesian networks (BNs). Patrício et al. [7] also used naïve Bayes and KNN (K-nearest neighbor) for breast cancer classification on the Wisconsin breast cancer dataset, where KNN reached an accuracy of 97.51%. Shen et al. [9] developed a DL-based breast cancer detection technique for mammogram screening, using a convolutional network to classify mammograms; on the INbreast dataset it achieved a sensitivity of 86.7% and a specificity of 96.1%. Ray et al. [10] discussed breast cancer prediction models based on ML techniques, applying random forest, Gaussian naïve Bayes, decision tree, and KNN classifiers to image and numeric data; the image-based models had better precision, recall, and f-measures than the numeric-data-based models. Some studies in the literature have used images [11, 12] for breast cancer prediction. In the present study, only numeric datasets are used, and ensemble-based techniques and DNNs are applied for breast


cancer prediction. Lynch et al. (2017) presented an ML-based lung cancer prediction technique that treats the target variable as continuous and applies regression-based supervised techniques to predict survival time; the gradient boosting method performed best, with a root mean square error of 15.05. Pradeep and Naveen [13] studied the survival rate of lung cancer patients using support vector machines, naïve Bayes, and C4.5. Kadir and Gleeson [14] presented a lung cancer prediction approach using SVM, and the CNN technique of DL has also been used for lung cancer prediction on image datasets. From the available literature, it is clear that a large portion of cancer prediction studies use classification techniques to classify tumors as benign or malignant; commonly used classifiers are random forest, SVM, gradient boosting, naïve Bayes, ANN, and so on. Some cancer prediction studies instead use regression techniques to predict the size of the tumor. The ML techniques applied to cancer prediction are described below.

2.1 Support vector machine (SVM)

SVM is a supervised technique for classification and regression [15]. It shows good speed and performance when the dataset is limited. It easily classifies linearly separable data, and the kernel trick is used for nonlinearly separable data. The classification mechanism of SVM is shown in Figure 1: a hyperplane with the highest margin separates linearly separable data, while a non-separable dataset is transformed into a different dimensional space for classification. SVM works well for datasets with both low and high numbers of features.

Figure 1: Support vector machine.
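The mechanism can be sketched with scikit-learn (the chapter's own experiments use WEKA; the toy data and parameters below are illustrative only):

```python
# Hypothetical sketch of SVM classification; not the chapter's WEKA setup.
from sklearn import svm

# Toy linearly separable data: two features per sample, two classes.
X = [[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]]
y = [0, 0, 1, 1]

# A linear kernel finds the maximum-margin separating hyperplane.
clf = svm.SVC(kernel="linear", C=1.0)
clf.fit(X, y)

print(clf.predict([[0.1, 0.0], [1.0, 0.9]]))
```

For nonlinearly separable data, the same `SVC` class accepts other kernels (e.g. `kernel="rbf"`), which corresponds to the transformation into a different dimensional space described above.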

Early cancer predictions using ensembles of machine learning and deep learning

95

2.2 Random forest

Random forest is a supervised technique that can be used for classification as well as regression. It creates several decision trees from several sample datasets drawn from the original dataset [16], and the predictions of the individual trees are combined by voting, as shown in Figure 2.

Figure 2: Prediction using random forest.
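The voting scheme of Figure 2 can be sketched with scikit-learn (a stand-in for the chapter's WEKA experiments; the synthetic dataset below is illustrative only):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a tumor dataset: 100 samples, 9 numeric features.
X, y = make_classification(n_samples=100, n_features=9, random_state=0)

# Each of the 50 trees is trained on a bootstrap sample of the data;
# their predictions are combined by majority vote across trees.
rf = RandomForestClassifier(n_estimators=50, random_state=0)
rf.fit(X, y)
print(rf.score(X, y))  # accuracy on the training data
```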

2.3 Naïve Bayes

Naïve Bayes is a probability-based classifier. It assumes independent features and uses Bayes' theorem for classification. The posterior and prior probabilities are calculated as

P(class | feature) = P(feature | class) · P(class) / P(feature)

where P(class | feature) is the posterior probability and P(class) is the prior probability. Test data is assigned the label with the highest posterior probability.
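As a worked illustration of the rule above (all numbers are invented for the example): suppose 60% of tumors in a training set are benign, and a given feature value occurs in 20% of benign cases and 80% of malignant cases.

```python
# Toy posterior computation with Bayes' theorem; the probabilities are made up.
p_benign = 0.6                 # prior P(benign)
p_malignant = 0.4              # prior P(malignant)
p_feat_given_benign = 0.2      # likelihood P(feature | benign)
p_feat_given_malignant = 0.8   # likelihood P(feature | malignant)

# Evidence P(feature) by the law of total probability.
p_feat = p_feat_given_benign * p_benign + p_feat_given_malignant * p_malignant

post_benign = p_feat_given_benign * p_benign / p_feat
post_malignant = p_feat_given_malignant * p_malignant / p_feat

# The label with the highest posterior probability wins.
label = "malignant" if post_malignant > post_benign else "benign"
print(label, round(post_malignant, 3))
```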

2.4 Decision tree

A decision tree is a non-parametric supervised technique for classification as well as regression. Its internal nodes are tests on the features, while the leaf nodes represent target class labels. Decision trees are very fast and do not require scaling of the dataset. Information gain decides which feature is used to split when building the tree. A decision tree for the breast cancer dataset is given in Figure 3.

Figure 3: Decision tree (J48) of the breast cancer Coimbra dataset (splits on Glucose, BMI, Resistin, Leptin, and Age thresholds).


2.5 K-nearest neighbors

KNN is a non-parametric supervised technique for classification and regression. It assumes that similar data points are located close to each other and works on the similarity of features, that is, how closely the test data features match the training data. Classification by KNN is illustrated in Figure 4.

Figure 4: K-nearest neighbors.
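A minimal pure-Python sketch of the idea (the toy data, the choice of k, and Euclidean distance are all illustrative assumptions):

```python
import math
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    order = sorted(
        range(len(train)),
        key=lambda i: math.dist(train[i], query),  # Euclidean distance
    )
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

# Toy data: two well-separated clusters labelled "benign" and "malign".
train = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (5.0, 5.0), (5.2, 4.9), (4.8, 5.1)]
labels = ["benign", "benign", "benign", "malign", "malign", "malign"]

print(knn_predict(train, labels, (1.1, 1.0)))  # query near the benign cluster
```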

2.6 Logistic regression

Logistic regression, a classification technique [17], uses a linear equation of the independent variables to predict the dependent variable. This linear combination ranges from −infinity to +infinity, so a sigmoid function is used to map it into binary form:

y = g(x) = 1 / (1 + e^(−x))

The sigmoid function graph is given in Figure 5. The value of y lies between 0 and 1 and is compared to a cutoff, generally taken as 0.5 for classification problems: if the value is above the cutoff, the output is 1; otherwise, it is 0 for binary classification.

Figure 5: Sigmoid function graph in logistic regression.
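The sigmoid-plus-cutoff decision rule in code (the scores passed in are arbitrary illustrative values):

```python
import math

def sigmoid(x):
    """Map a real-valued score into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def classify(x, cutoff=0.5):
    """Binary decision: 1 if the sigmoid output exceeds the cutoff, else 0."""
    return 1 if sigmoid(x) > cutoff else 0

print(sigmoid(0))      # midpoint of the curve
print(classify(2.0))   # large positive score -> class 1
print(classify(-2.0))  # large negative score -> class 0
```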

3 The ensemble of machine learning techniques

Ensemble techniques are combinations of two or more ML techniques [18]. They often improve prediction performance compared to individual classifiers. Heterogeneous ensembles combine different types of classifiers, while homogeneous ensembles combine classifiers of a similar nature. The ensemble-based techniques used here are discussed below.

3.1 Bagging classifier

Bagging combines several ML models of the same type, each trained on a different dataset sampled with replacement from the original data (a bootstrap sample). On average, about two-thirds of the distinct items of the original dataset appear in each bootstrap sample. Predictions of the different models are combined using a majority vote. The bagging classifier reduces the variance of the prediction. The process of the bagging classifier is given in Figure 6.

3.2 Stacking

In stacking, several weak models are combined through a meta-model to improve prediction accuracy. For the breast cancer dataset, the meta-model is logistic regression, and the base models are random forest, logistic regression, and naïve Bayes. Stacking is very good at improving


Figure 6: Bagging classifier of AdaBoost.

the prediction capability of weak classifiers. For the lung cancer dataset, random forest is used as the meta-learner, while the base learners are SMO, naïve Bayes, and LogitBoost, as presented in Figure 7.

Figure 7: Stacking ensemble.
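A scikit-learn sketch of this kind of stacking (the chapter's experiments are in WEKA; here a linear SVC stands in for SMO and gradient boosting for LogitBoost, so the mapping is approximate, and the dataset is synthetic):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = make_classification(n_samples=120, n_features=9, random_state=1)

# Base learners feed their predictions to a meta-learner (here random forest).
stack = StackingClassifier(
    estimators=[
        ("svm", SVC(kernel="linear")),         # stand-in for WEKA's SMO
        ("nb", GaussianNB()),
        ("gb", GradientBoostingClassifier()),  # stand-in for LogitBoost
    ],
    final_estimator=RandomForestClassifier(random_state=1),
)
stack.fit(X, y)
print(stack.score(X, y))
```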

3.3 Voting

In voting, several prediction models participate to make the final prediction. In this study, SMO, AdaBoost, random forest, logistic regression, and naïve Bayes models make the final prediction for the breast cancer dataset, while SMO, random forest, and LogitBoost vote by majority for the lung cancer dataset, as presented in Figure 8.

Figure 8: Voting ensemble.

4 Deep neural network

In the recent literature, DL-based models have shown very good accuracy in wide categories of applications such as image, audio, and video classification. DL is applied in medical fields such as heartbeat detection (Murat), health management applications [19], and so on. A DNN is a simple fully connected neural network containing several hidden layers. A dense layer contains connected neurons, each connected to the neurons of the previous layer. In this chapter, five dense layers with 30, 16, 8, 8, and 4 neurons are used, plus one output layer with 2 neurons. Each neuron computes an activation function, and neurons of preceding layers are connected to successive layers by weighted edges. Weights are assigned randomly at first and then updated after each iteration. DL techniques are computationally intensive, and a large amount of data is needed for good results.

A DNN for cancer is given in Figure 9. There is one layer that takes the input features and one output layer that gives the result; the layers contain connected neurons, each working as an information processing unit [20]. In the present DNN, dense layers are used as the hidden layers.

Figure 9: Deep neural network model for cancer.

A sample neuron is given in Figure 10: the neuron combines its inputs as a weighted sum, z = x1·w1 + x2·w2 + x3·w3 + … + xn·wn + bias, and passes z through an activation function f to produce its output.

Figure 10: Neural unit.

The activation function f takes the combined input and gives the output. It is usually a nonlinear function, which helps correct the classification

error using the back propagation algorithm. Generally, sigmoid and ReLU (rectified linear unit) are applied as activation functions. ReLU uses the function

f(z) = max(z, 0)

so the output of ReLU is the input value if the input is positive and 0 otherwise. The sigmoid function,

f(z) = 1 / (1 + e^(−z))

gives a value in the range of 0 to 1.
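The neural unit and the two activations above can be sketched in a few lines of NumPy (the input values, weights, and bias below are arbitrary illustrative numbers, not trained parameters):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, bias, activation):
    """One neural unit: weighted sum of inputs plus bias, then activation."""
    z = np.dot(x, w) + bias
    return activation(z)

x = np.array([0.5, -1.0, 2.0])   # input features
w = np.array([0.4, 0.3, -0.2])   # randomly initialised weights
bias = 0.1

print(neuron(x, w, bias, relu))     # ReLU output
print(neuron(x, w, bias, sigmoid))  # sigmoid output, always in (0, 1)
```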


5 Datasets of cancers

In this section, the different types of datasets used for cancer prediction are discussed. They are divided into two categories, as follows.

5.1 Numeric dataset

In the present study, the breast cancer Coimbra and Wisconsin datasets from the UCI machine learning repository are used [21]. These datasets are numeric. The Coimbra dataset has nine independent variables, and the target variable is a classification as cancer or no cancer; it contains 116 cases with a total of 10 attributes, of which 64 are cancer and 52 are non-cancer cases. The Wisconsin dataset is used by various researchers for breast cancer prediction [22, 23]. It contains 31 features and 569 records, classified as benign (357 cases) and malignant (212 cases). As this dataset contains a large number of features, features were initially selected using information gain, computed in Weka software [24], for ML and its ensembles: the features fractal_dimension_se, symmetry_se, fractal_dimension_mean, texture_se, and smoothness_se have low information gain. For the final classification, however, all features are kept as input, as reducing the features worsens classification performance. In the lung cancer dataset, there are 32 instances and 56 attributes; the dataset distinguishes three categories of cancer, and the attributes are nominal integer variables ranging from 0 to 3. The most frequent value is used to fill missing values. Tables 1 and 2 show the attributes of the Coimbra and Wisconsin datasets.

Table 1: Coimbra dataset attributes.

S. no. | Attribute      | Description
1      | Age            | Numeric
2      | BMI            | Numeric
3      | Glucose        | Numeric
4      | Insulin        | Numeric
5      | HOMA           | Numeric
6      | Leptin         | Numeric
7      | Adiponectin    | Numeric
8      | Resistin       | Numeric
9      | MCP.1          | Numeric
10     | Classification | Binary (1 = healthy, 2 = patients)

Table 2: Wisconsin dataset attributes.

S. no. | Attribute         | Description
1      | Radius            | Numeric (mean of distances from center to points on the perimeter)
2      | Texture           | Numeric (standard deviation of gray-scale values)
3      | Perimeter         | Numeric
4      | Area              | Numeric
5      | Smoothness        | Numeric (local variation in radius lengths)
6      | Compactness       | Numeric (perimeter² / area − 1.0)
7      | Concavity         | Numeric (severity of concave portions of the contour)
8      | Concave points    | Numeric (number of concave portions of the contour)
9      | Symmetry          | Numeric
10     | Fractal dimension | Numeric ("coastline approximation" − 1)
11     | Diagnosis         | M = malignant, B = benign

The mean, standard error, and worst value of each feature are calculated for each image of the dataset. The datasets are normalized, as different features have widely varying ranges; normalization rescales each feature into the range 0 to 1 using the following equation:

x' = (x − xmin) / (xmax − xmin)  (1)

where xmin and xmax are the minimum and maximum values of the attribute being normalized.
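Equation (1) is easy to sketch in code (the glucose readings below are invented for illustration):

```python
def min_max_normalize(values):
    """Rescale a list of numbers into [0, 1] using equation (1)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Example: a feature with a wide range, e.g. hypothetical glucose readings.
glucose = [70, 85, 100, 130, 190]
print(min_max_normalize(glucose))  # -> [0.0, 0.125, 0.25, 0.5, 1.0]
```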


5.2 Image-based dataset

In recent years, image-based diagnosis of diseases has become prevalent, and various DL techniques are explored for the prediction of different diseases. Several publicly available datasets, with proper instructions for experimental purposes, are useful for researchers. Cancer image data can be found at https://www.cancerimagingarchive.net/. This repository was created by the Division of Cancer Treatment and Diagnosis of the National Cancer Institute.

6 Metrics used for performance evaluation

In the study, positive cases are malignant, while negative cases are benign. As accuracy is not a good measure of classifier performance when the dataset is skewed, precision, recall, and f-measure are used [25]. The confusion matrix for the classification results is given in Table 3.

Table 3: Confusion matrix of cancer prediction.

                 | Predicted malignant | Predicted benign
Actual malignant | True positive (TP)  | False negative (FN)
Actual benign    | False positive (FP) | True negative (TN)

True positive (TP): the number of cases predicted malignant that were actually malignant.
True negative (TN): the number of cases predicted benign that were actually benign.
False positive (FP): the number of cases predicted malignant that were actually benign.
False negative (FN): the number of cases predicted benign that were actually malignant.

The metrics used in the study are discussed below. Accuracy is measured as the percentage of correctly classified instances; it is not a good performance measure when the dataset is imbalanced. It is calculated by the following equation:

Accuracy = (TP + TN) / (TP + TN + FP + FN)  (2)

Precision is the ratio of correctly predicted malignant cases to the total number of cases predicted as malignant by the classifier:

Precision = TP / (TP + FP)  (3)

Recall is the ratio of correctly predicted malignant cases to the total number of malignant cases:

Recall = TP / (TP + FN)  (4)

F-measure is the harmonic mean of the precision and recall values:

F-measure = (2 × Precision × Recall) / (Precision + Recall)  (5)

All these metrics range between 0 and 1.
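Equations (2) through (5) in code, with invented confusion-matrix counts for illustration:

```python
def metrics(tp, tn, fp, fn):
    """Compute accuracy, precision, recall, and f-measure, equations (2)-(5)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f_measure

# Hypothetical counts: 40 TP, 50 TN, 10 FP, 5 FN.
acc, prec, rec, f1 = metrics(tp=40, tn=50, fp=10, fn=5)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
```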

7 Experimental results and discussion

All the experiments are performed in WEKA, an open-source data mining and ML software, using reliable 10-fold cross-validation. The results of the different techniques applied to the Coimbra dataset are presented in Table 4, while the results obtained on the Wisconsin dataset are presented in Table 5.

7.1 Workflow of cancer prediction

In the study, different ML techniques are applied for cancer prediction using 10-fold cross-validation, illustrated in Figure 11.

Figure 11: Ten-fold cross-validation.

Wasiur Rhmann, Babita Pandey

Figure 12: The workflow of the cancer prediction model.

Here, 10-fold cross-validation is applied to the datasets in the WEKA software. Each dataset is divided into 10 parts: 9 parts are used in the training phase, while evaluation is performed on the remaining part. The final prediction performance is obtained by averaging the results of all iterations. The workflow for cancer prediction is given in Figure 12. Feature selection based on information gain is performed for the dataset with a large number of features; the flow of work is presented in Figure 3. Information gain, computed from entropy, can be calculated for each attribute and ranges between 0 and 1: a value of 0 means the attribute carries no information, while 1 means the highest information gain. Attributes with low information gain can be discarded from the final model. Information gain is calculated using the following equation:

Information gain = entropy of the dataset before the split − entropy after the split  (6)

Entropy is a measure of impurity, and it is calculated for a variable x as follows:

H(x) = − Σ p(x_i) log p(x_i)  (7)
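Equations (6) and (7) can be checked with a few lines of Python; the helper names and the toy labels below are illustrative:

```python
from math import log2
from collections import Counter

def entropy(labels):
    """H(x) = -sum p(x_i) * log2 p(x_i), per equation (7)."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(labels, groups):
    """Entropy before the split minus the weighted entropy after it (eq. 6)."""
    n = len(labels)
    after = sum(len(g) / n * entropy(g) for g in groups)
    return entropy(labels) - after

# A perfectly informative binary attribute: the split separates the classes,
# so the gain equals the full initial entropy of 1 bit.
labels = ["benign"] * 4 + ["malignant"] * 4
gain = information_gain(labels, [["benign"] * 4, ["malignant"] * 4])
print(gain)   # → 1.0
```

An attribute whose split leaves each group with the same class mixture as the whole dataset yields a gain of 0 and would be discarded, as described above.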


7.2 Results for breast cancer prediction

Tables 4 and 5 show the prediction capability of different ensemble techniques on the breast cancer datasets.

Table 4: Performances of different techniques on the Coimbra dataset.

Techniques           | Parameter                                                              | Accuracy | Precision | Recall | F-measure
AdaBoost             | Decision stump                                                         | .        | .         | .      | .
Stacking             | Metalearner: LR; base: random forest, logistic regression, naïve Bayes | .        | .         | .      | .
Bagging              | AdaBoost                                                               | .        | .         | .      | .
Vote (majority vote) | SMO, AdaBoost, random forest, logistic regression, naïve Bayes         | .        | .         | .      | .
Random forest        | Default                                                                | .        | .         | .      | .
Naïve Bayes          | Default                                                                | .        | .         | .      | .
Logistic regression  | Default                                                                | .        | .         | .      | .
Deep neural network  | Two dense layers (,); one output layer ()                              | .        | .         | .      | .

Table 5: Performance of different techniques on the Wisconsin dataset.

Techniques           | Parameter                                                              | Accuracy | Precision | Recall | F-measure
AdaBoost             | Decision stump                                                         | .        | .         | .      | .
Stacking             | Metalearner: LR; base: random forest, logistic regression, naïve Bayes | .        | .         | .      | .
Bagging              | Random tree                                                            | .        | .         | .      | .
Vote (majority vote) | SMO, AdaBoost, random forest, logistic regression, naïve Bayes         | .        | .         | .      | .
Random forest        | Default                                                                | .        | .         | .      | .
Naïve Bayes          | Default                                                                | .        | .         | .      | .
Logistic regression  | Default                                                                | .        | .         | .      | .
Deep neural network  | Five dense layers (,,,,); one output layer ()                          | .        | .         | .      | .


For the Coimbra dataset, bagging with AdaBoost showed the best accuracy, outperforming the results of [10] on the same numeric dataset. For the Wisconsin dataset, the DNN achieved the same precision, recall, and F-measure as obtained by Dhahri et al. [26].
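The bagging procedure behind this result is conceptually simple: train each base learner on a bootstrap resample and combine predictions by majority vote. Below is a pure-Python sketch with a trivial one-feature threshold "stump" standing in for AdaBoost as the base learner (a full AdaBoost implementation is beyond this illustration; all names and data are invented):

```python
import random
from statistics import mode

def fit_stump(rows):
    """Choose the threshold and polarity of a one-feature decision stump."""
    best = None   # (errors, threshold, polarity)
    for thr in sorted({x for x, _ in rows}):
        miss = sum((x >= thr) != y for x, y in rows)
        for errors, polarity in ((miss, True), (len(rows) - miss, False)):
            if best is None or errors < best[0]:
                best = (errors, thr, polarity)
    _, thr, polarity = best
    return lambda x: (x >= thr) == polarity

def fit_bagging(rows, n_estimators=25, seed=0):
    """Bagging: fit each stump on a bootstrap resample, vote at prediction."""
    rng = random.Random(seed)
    stumps = [fit_stump([rng.choice(rows) for _ in rows])
              for _ in range(n_estimators)]
    return lambda x: mode(s(x) for s in stumps)

# Toy one-feature dataset: values above 5 are labelled malignant (True).
rows = [(v, v > 5) for v in range(20)]
model = fit_bagging(rows)
print(model(2), model(15))   # → False True
```

The bootstrap resampling is what gives bagging its variance-reduction effect: each stump sees a slightly different sample, and the vote smooths out their individual errors.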

7.3 Results for lung cancer prediction

The performance of different ensemble techniques on the lung cancer dataset is given in Table 6.

Table 6: Results of different techniques on the lung cancer dataset.

Techniques                 | Parameter                                     | Accuracy | Precision | Recall | F-measure
LogitBoost                 | Decision stump                                | .        | .         | .      | .
Stacking                   | Metalearner: RF; base: SMO, naïve Bayes, LogitBoost | .  | .         | .      | .
Bagging                    | REPTree                                       | .        | .         | .      | .
Vote (majority vote)       | SMO, random forest, LogitBoost                | .        | .         | .      | .
Random forest              | Default                                       | .        | .         | .      | .
Naïve Bayes                | Default                                       | .        | .         | .      | .
Simple logistic regression | Default                                       | .        | .         | .      | .
Deep neural network        | Four dense layers (,,,); one output layer ()  | .        | .         | .      | .

From Figure 13, it is clear that bagging performs best on the Coimbra dataset for breast cancer prediction, while the DL technique performs poorly; naïve Bayes is the worst technique on this dataset. From Figure 14, it is clear that DL performs best for breast cancer prediction on the Wisconsin dataset, where naïve Bayes is again the worst among all the techniques. From Figure 15, it is clear that bagging is the best technique for the lung cancer dataset, while the DL technique performed poorly and random forest showed the worst accuracy. Figure 16 shows that the accuracy of all techniques is good on the Wisconsin dataset and low on the lung cancer dataset.


Figure 13: Comparisons of different techniques on Coimbra dataset.

Figure 14: Comparisons of different techniques on Wisconsin dataset.


Figure 15: Comparisons of different techniques on lung cancer dataset.

Figure 16: Comparisons of different techniques using accuracy.

The precision of all techniques is very good on the Wisconsin dataset, while it is worst on the lung cancer dataset, as shown in Figure 17. Bagging is the best technique on all datasets in terms of precision. Recall is high for all techniques on the Wisconsin dataset, as presented in Figure 18; the recall obtained by DL is the best on this dataset. Bagging, stacking, and logistic regression show good performance on all datasets, while the DNN is not good in terms of F-measure, as shown in Figure 19.


Figure 17: Comparisons of different techniques using precision.

Figure 18: Comparisons of different techniques using recall.

Figure 19: Comparisons of different techniques using F-measure.

8 Conclusions and future scope of work

Cancer is a deadly disease that has been studied in the medical field since ancient times, but with the rise of AI, the diagnosis of different types of cancer has caught the attention of the AI community. Earlier, diagnosis models based on data mining and statistical techniques were proposed in the literature; in recent years, DL techniques have attracted researchers, aided by advances in hardware and processing power. DL techniques are very efficient on large datasets, from text to images, and various ML and DL models have been explored in different medical fields. In this chapter, ensembles of machine learning and deep learning techniques were applied to numeric datasets for the prediction of cancer. It was observed that the bagging technique with an AdaBoost base performed best on the Coimbra dataset, while the DL technique, a DNN, performed best on the Wisconsin dataset. The DL technique performed poorly on the Coimbra dataset, which has a low number of features, whereas ensembles of ML techniques performed well with reduced features. DL techniques are also computationally intensive, requiring a long time for training, while the performance of ensemble techniques is very close to, or better than, that of a DNN. In this study, some useful ensemble techniques for cancer prediction were identified. The results could be useful for medical professionals, who can apply these techniques to numeric breast cancer data obtained from clinical processes and blood analysis. The bagging technique performed best for lung cancer prediction, while DL techniques performed poorly. The findings suggest the use of ensemble techniques for cancer prediction on numeric datasets. In this chapter, only a DNN based on densely connected layers was explored; various other architectures, such as CNNs and autoencoders, which are usually applied to image datasets, can also be explored on numeric datasets. There is also the possibility of exploring ensembles of DL models for cancer prediction. In the future, ensemble techniques can be explored on image-based datasets for the identification and prediction of diseases at an early stage.


References

[1] Huang, S., Yang, J., Fong, S., Zhao, Q. Artificial intelligence in cancer diagnosis and prognosis: Opportunities and challenges. Cancer Letters, 2020, 471, 61–71.
[2] Schmidhuber, J. Deep learning in neural networks: An overview. Neural Networks, 2015, 61, 85–117.
[3] Samuel, D. A thorough review on the current advance of neural network structures. Annual Reviews in Control, 2019, 14, 200–230.
[4] Hochreiter, S., Schmidhuber, J. Long short-term memory. Neural Computation, 1997, 9(8), 1735–1780. doi: 10.1162/neco.1997.9.8.1735.
[5] Goodfellow, I., Bengio, Y., Courville, A. Deep Learning, MIT Press, 2016, 326.
[6] Vougas, K. et al. Machine learning and data mining frameworks for predicting drug response in cancer: An overview and a novel in silico screening process based on association rule mining. Pharmacology & Therapeutics, 2019, 203, 107395.
[7] Patrício, M., Pereira, J., Crisóstomo, J., Matafome, P., Gomes, M., Seiça, R., Caramelo, F. Using Resistin, glucose, age and BMI to predict the presence of breast cancer. BMC Cancer, 2018, 18, 29. doi: 10.1186/s12885-017-3877-1.
[8] Kourou, K., Exarchos, T. P., Exarchos, K. P., Karamouzis, M. V., Fotiadis, D. I. Machine learning applications in cancer prognosis and prediction. Computational and Structural Biotechnology Journal, 2015, 13, 8–17.
[9] Shen, L., Margolies, L. R., Rothstein, J. H. et al. Deep learning to improve breast cancer detection on screening mammography. Scientific Reports, 2019, 9(12495). doi: 10.1038/s41598-019-48995-4.
[10] Ray, R., Abdulla, A. A., Mallick, D. K., Ranjan Dash, S. R. Classification of benign and malignant breast cancer using supervised machine learning algorithms based on image and numeric datasets. Journal of Physics: Conference Series, 2019, 1372, 012062. doi: 10.1088/1742-6596/1372/1/012062.
[11] Aswathy, M. A., Jagannath, M. Detection of breast cancer on digital histopathology images: Present status and future possibilities. Informatics in Medicine Unlocked, 2017, 8, 74–79.
[12] Janowczyk, A., Madabhushi, A. Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases. Journal of Pathology Informatics, 2016, 7.
[13] Pradeep, K. R., Naveen, N. C. Lung cancer survivability prediction based on performance using classification techniques of support vector machines, C4.5 and naive Bayes algorithms for healthcare analytics. Procedia Computer Science, 2018, 132, 412–420.
[14] Kadir, T., Gleeson, F. Lung cancer prediction using machine learning and advanced imaging techniques. Translational Lung Cancer Research, 2018, 7(3), 304–312.
[15] Cortes, C., Vapnik, V. Support-vector networks. Machine Learning, 1995, 20(3), 273–297.
[16] Ho, T. K. Random decision forests. Proceedings of the 3rd International Conference on Document Analysis and Recognition, Montreal, QC, 1995, 14–16.
[17] Tolles, J., Meurer, W. J. Logistic regression: Relating patient characteristics to outcomes. JAMA, 2016, 316(5), 533–534.
[18] Zhou, Z. H. Ensemble Methods: Foundations and Algorithms, New York, 2012.
[19] Fink, O., Wang, Q., Svensen, M., Dersin, P., Ducoffe, M. Potential, challenges and future directions for deep learning in prognostics and health management applications. Engineering Applications of Artificial Intelligence, 2020, 92, 103678.
[20] Ketkar, N. Deep Learning with Python: A Hands-on Introduction, India, Apress, 2018.
[21] Dua, D., Graff, C. UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA, University of California, School of Information and Computer Science, 2019.


[22] Brown, G. Diversity in Neural Network Ensembles, The University of Birmingham, CRC Press, Taylor & Francis Group, 2004.
[23] Antos, A., Kégl, B., Linder, T., Lugosi, G. Data-dependent margin-based generalization bounds for classification. Journal of Machine Learning Research, 2002, 3, 73–98.
[24] Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., Witten, I. H. The WEKA data mining software: An update. SIGKDD Explorations, 2009, 11(1). https://www.cs.waikato.ac.nz/ml/weka/.
[25] Witten, I. H., Frank, E., Hall, M. A. Data Mining: Practical Machine Learning Tools and Techniques, 4th edition, San Francisco, CA, Morgan Kaufmann, 2016.
[26] Dhahri, H., Maghayreh, E. A., Mahmood, A., Elkilani, W., Nagi, M. F. Automated breast cancer diagnosis based on machine learning algorithms. Journal of Healthcare Engineering, 2019, Article ID 4253641. doi: 10.1155/2019/4253641.
[27] Lynch, C. M., Abdollahi, B., Fuqua, J. D., Alexandra, R., Bartholomai, J. A., Balgemann, R. N., et al., Frieboes, H. B. Prediction of lung cancer patient survival via supervised machine learning classification techniques. International Journal of Medical Informatics, 2017, 108, 1–8.

D. A Janeera, G. Jims John Wesley, P. Rajalakshmy, S. Shalini Packiam Kamala, P. Subha Hency Jose, T. M. Yunushkhan

Deep learning in patient management and clinical decision making

Abstract: With the rapid progression of technology, several hospitals have moved toward digital maintenance of patient information and records. Patient outcomes are significantly affected by the time-constrained, complex, uncertain, and high-stakes decisions made by surgeons. Heuristics, individual judgement, and hypothetico-deductive reasoning generally dominate clinical decisions. Traditional decision-support systems and predictive analysis face certain challenges, leading to suboptimal accuracy and manual management of information that is tedious and time consuming; this may lead to preventable harm, error, or bias. These challenges can be overcome by implementing artificial intelligence models that use deep learning for automation, where the outputs of mobile healthcare devices are fed to electronic health records as a live stream of patient data. In such systems, the decision-making process involves human intuition, preservation of bedside assessment, detailed monitoring and implementation, model interpretability, data standardization, and consideration of the ethical challenges in error accountability and algorithm bias. Right from patient appointment to surgery and recovery, this online platform can be initiated and optimized. During treatment, medication, and surgery, deep learning technology can guide the surgeon as well as the patient. In complex and debilitating cases, this technology uses statistical and quantitative models, enabling better decision making. Infections can be detected, tracked, investigated, and controlled using this model. In this chapter, we discuss the analysis of clinical images with deep learning techniques for clinical decision making. We also review the patient management system, wearable technology, standardization, and the processing of information from sensor nodes using deep learning approaches.

D. A Janeera, Department of ECE, Sri Krishna College of Engineering and Technology, e-mail: [email protected] G. Jims John Wesley, Aerospace Engineering, Karunya Institute of Technology and Sciences, e-mail: [email protected] P. Rajalakshmy, Robotics Engineering, Karunya Institute of Technology and Sciences, e-mail: [email protected] S. Shalini Packiam Kamala, Department of Science and Humanities, Nehru Institute of Engineering and Technology, e-mail: [email protected] P. Subha Hency Jose, Biomedical Engineering, Karunya Institute of Technology and Sciences, e-mail: [email protected] T. M. Yunushkhan, King Khalid University, Abha, KSA, e-mail: [email protected] https://doi.org/10.1515/9783110708127-006


This system offers improved solutions, explanation, and transparency when compared to traditional clinical schemes.

Keywords: artificial intelligence, deep learning, clinical decision making, patient management, decision support system, electronic health record

1 Introduction

The medical field has been revolutionized by technological development and is adopting and experimenting with automation to a large extent. Smart technologies, such as artificial intelligence (AI) and deep learning (DL), provide enhanced solutions in personalized therapies, uniquely composed drugs, and several targeted and customized treatments [1]. Technologies such as AI help in maintaining patient records in a digital format and in performing patient checkups from time to time. The availability of voluminous information for data analytics and the growing requirements for computational power and storage have led to advancements in DL technology. Its applications are expanding across various fields, including medicine, with efficient and promising opportunities. The healthcare ecosystem has been greatly impacted by developments in DL research [2]. Various issues in medicine are solved by the wide range of algorithms and technical solutions offered by DL. Digitization of patient records, patient management, and clinical decision making are some of the key areas in which DL algorithms play a major role. Existing technologies, such as virtual reality, natural language processing (NLP), and image processing, can be combined with DL to offer promising solutions to various challenges in the medical field. This not only helps in automation but also enormously improves the quality of service of clinical healthcare [3]. Classification and prediction on large datasets, a key function of various smart systems, can be performed efficiently using DL techniques. These systems comprise a sequence of embedded blocks, in which data flows between the blocks, leading to optimal decision making as information is condensed. Tele-health and digital health systems have become a necessity due to the outbreak of COVID-19 [4].
The importance of remote treatment, consultation, and diagnosis has been re-emphasized in this situation. Over the past few years, advancements in telehealth have been slowed by behavioral and regulatory obstacles. The contributions of AI and DL will help in transforming telemedicine to a large extent. A holistic treatment can be obtained from clinical practice and healthcare with smart management and decision making [5]. Prevention of disabilities and diseases, as well as prediction of health status for patient assistance, is the primary goal of quantitative models and precision medicine. In predictive analysis and clinical research acceleration, electronic health records (EHRs) offer a promising prospect. Patient recruitment for clinical trials, autism spectrum disorders, co-morbidity cluster discovery, type 2 diabetes subgroup identification, and drug interaction and effect prediction based on data-driven models and EHRs are becoming more popular and accurate. Despite these advantages, few workflow and clinical decision-support systems make use of these DL schemes and predictive models. Systematic biases, random errors, incompleteness, sparseness, heterogeneity, noise, and high dimensionality are some of the challenges in modeling and representing EHR data. Data representation and feature selection largely affect the realization of predictive algorithms. However, there has been steady growth in the use of EHRs in ambulatory and hospital care settings.

2 Deep learning in healthcare

Scientists, researchers, and medical experts are trying to implement several new technologies and deliver impactful outcomes. In traditional schemes, explicit patterns are determined from raw data and input features based on the domain knowledge and expertise of the practitioner; this process of selecting, analyzing, and creating appropriate features is time consuming and laborious. DL offers various innovative applications and solutions in healthcare [6]. The voluminous information from patient insurance records, medical reports, and patient records is gathered and processed by neural networks to produce optimal outcomes. Despite being in the early stages of implementation, DL has delivered significant results in various dimensions [7]. Some of the most common applications of DL in healthcare include drug discovery, medical imaging, insurance fraud detection, early disease detection, and genomics. Diagnostics and image analysis involve convolutional neural networks (CNNs) that aid in the detection of melanoma and other conditions. The training and learning abilities of DL have placed this technology in the limelight as the future of healthcare. Workflow optimization, maximized clinician support, and radiology technology are successful outcomes of the implementation of the DL-based Aidoc model [8]. Boards of directors, managers, nurses, doctors, pathologists, and several other people are involved in the healthcare system, and both clinical and non-clinical decisions are made by this group. Strategic planning, budgets, resource allocation, and so on relate to non-clinical decisions, whereas medical prescription, treatment, therapy, and diagnosis relate to clinical decisions [9]. Clinical decision making involves technology-driven arrangements that help the medical practitioner, physician, or surgical robot in the process of better decision making.
DL can assist in both clinical and non-clinical decision making. Shared decisions with respect to utilization of resources, postoperative management, identifying and mitigating modifiable risk factors, the informed consent process, and augmentation of operational decisions are the potential benefits of clinical decision making with AI [10]. The complexity of clinical decisions is high, as their attributes include both measurable and non-measurable parameters. Previous health records, diagnostic reports, education level, experiences, lifestyles, patient beliefs, and so on contribute to the attributes of clinical decision making. It is essential to have quantifiable and measurable attributes as inputs to the DL algorithm to improve its efficiency in the decision-making process. To provide a better understanding to the reader, Figure 1 represents a block diagram of a simple clinical decision-making system. The primary and secondary patient information, which includes medical records, diagnostic reports, observed symptoms, patient rights, food and drinking habits, allergies, and so on, is combined with a knowledge base consisting of other patient records that serves as a reference in decision making. Microsoft Cognitive Toolkit (CNTK), Caffe, PyTorch, Torch, Keras, Theano, and TensorFlow [11] are some of the open-source tools and frameworks for implementing DL algorithms. Multilayer perceptrons, CNNs, recurrent neural networks (RNNs), and auto-encoders (AEs) are some of the DL models that work with supervised or unsupervised learning for information extraction, representation learning, and outcome prediction [12].

Figure 1: A simple clinical decision-making system.
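As a toy illustration of the flow in Figure 1, patient data can be merged with a knowledge base of reference cases before a decision rule fires. Every field name, rule, and "treatment" below is invented for the example and is not clinical guidance or part of any real system:

```python
def make_decision(patient, knowledge_base):
    """Combine patient data with reference cases and return a decision."""
    # Knowledge-base lookup: past cases sharing at least one symptom.
    similar = [case for case in knowledge_base
               if set(case["symptoms"]) & set(patient["symptoms"])]
    # Respect constraints from the secondary patient data (e.g., allergies).
    unsafe = set(patient.get("allergies", []))
    options = [case["treatment"] for case in similar
               if case["treatment"] not in unsafe]
    return options[0] if options else "refer to specialist"

knowledge_base = [
    {"symptoms": ["fever", "cough"], "treatment": "antiviral-A"},
    {"symptoms": ["fever"], "treatment": "antipyretic-B"},
]
patient = {"symptoms": ["fever"], "allergies": ["antiviral-A"]}
print(make_decision(patient, knowledge_base))   # → antipyretic-B
```

A real decision-support system would replace the hand-written lookup with a trained DL model, but the shape of the pipeline, patient data plus knowledge base feeding a decision step, is the same as in the figure.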

Improvement in patient outcomes, healthcare cost optimization, and personalization of medicine with high objectiveness and accuracy are some of the benefits of AI in clinical decision making [8]. Healthcare data exceeds the data generated from any other source. Despite legal ramifications and ethical discussions, there is a promising future for this technology if it is combined with the voluminous available data in the right manner.

2.1 Deep learning in patient management

Self-correction, or arriving at definite conclusions, can be performed by reasoning over the data acquired through learning in AI processes [13]. Under-diagnosis barriers can be overcome by identification of toxicity, rare diseases, and other health system barriers. Possible correlations, trends, and patterns can be computationally revealed by examining structured and unstructured data using predictive modeling, computation, machine learning, DL, and AI. Machine learning, AI, and big data are used in a wide range of applications in the health sector [14, 15]. Patient management is revolutionized by the advancements in these technologies. Detection of out-of-norm or fraudulent behavior, quick recalculation of risk portfolios, near-real-time detection of defects and issues, determination of the root causes of failures, strategic business planning, smart decision making, optimized offerings, development of new products, time saving, and cost reduction can all be provided by a combination of DL and big data [16–18]. Figure 2 represents the integration of DL in clinical decision-making tasks with respect to data acquisition, feature extraction, interpretation, and decision making. Voluminous health data is analyzed by Google, leveraging the strengths of DL, to become a global innovation leader in the healthcare sector [17]. These techniques can also be used for data management in the health sector. An annual increase of 5.5% is expected in US health spending, based on reports from the Centers for Medicare and Medicaid Services. Close to 10% of the global gross domestic product is contributed by health expenditure, which approximates to over 7 trillion dollars [19].
Figure 2: Integration of deep learning in clinical decision-making tasks.

Nurses, doctors, and patients can use mobile phone apps or Apple devices to transmit and receive data in a secure manner, ensuring the safety of patients while administering certain medications, viewing radiology images and lab results, and, further, learning about patient treatment and conditions [20]. Diabetes control, stroke-related speech impediments, hearing tests, vision tests, Parkinson's disease, and atrial fibrillation can be tracked by detecting irregular heartbeats using the Apple Watch. The drug non-adherence problem, which costs around 300 billion dollars, is addressed by the Medisafe app company with the help of the Apple HealthKit application programming interface, assisting people to manage their medication with personalized technology [21]. To assist global researchers and health professionals, IBM Watson Health was developed for making informed patient-care decisions by translating knowledge and information into insights. The use of Watson in healthcare is supported by a growing body of evidence from over 300 healthcare organizations and hospitals in cancer care and oncology. Last-mile delivery, medical records, and cancer research projects are being developed by the Amazon Grand Challenge team. A new healthcare industry has been formed by the Berkshire Hathaway Company, the JP Morgan Chase Company, and Amazon to provide technology solutions to their employees, at a reasonable cost, for quality care access and profit-free incentives. A vast amount of data is acquired from over 1.2 million employees across various markets. A healthcare-focused department has been set up by Microsoft, utilizing cloud computing, machine learning, and AI, at its Cambridge research laboratory [22]. This tech giant focuses on diabetes research and patient monitoring solutions to improve the healthcare industry. Emotions of Facebook users are analyzed using a baseline comparison algorithm, according to a patent filed by the company in 2017; depression or anger is identified based on the typing speed and the strength with which the keyboard is tapped. A combination of predictive modeling, computing, DL, ML, and AI is proposed to be integrated into the curriculum of medical and science schools in the future, in order to expand education beyond statistics [23]. Health management is made possible through potential support via wearable devices and health apps. Awareness regarding data confidentiality and privacy is enhanced, enabling doctors and patients to be involved in overcoming security barriers. Drug discovery techniques are transformed by the application of computational schemes using the available drug-related data and the outpouring of public disease data. The efficiency of the analyses can be improved by leveraging the several available tools and resources. The limitations regarding data security and privacy can be overcome using appropriate technologies for masking the personal data of patients. Considering the increase in the volume of data, security schemes must be strengthened to offer efficient patient management [24].

2.2 Data extraction schemes

Abbreviation expansion, relation extraction, temporal event extraction, and single-concept extraction are the major subtasks in the extraction of clinical information using DL [25]. Structured medical concepts, such as procedures, treatments, and diseases, can be extracted from free clinical text using the single-concept extraction scheme. Varying levels of success can be achieved with traditional NLP techniques; however, the complexity of clinical notes leaves large scope for improvement in this domain. In a clinical note, each word can be assigned a clinically relevant tag, framing concept extraction as a sequence-labeling problem. Disease severity, indication, adverse drug event, medication route, dosage, drug name, and other relevant tags are available for each category of disease and medication. Conditional random fields (CRFs) and long short-term memory (LSTM) networks are combined with bidirectional LSTMs or gated recurrent units in RNN-based DL architectures [9–11]. Clinical concepts are extracted from text using state-of-the-art techniques and compared with baseline CRFs for experimental purposes. The experimental results show that the CRF baselines are outperformed by the RNN variants by a wide margin in terms of disease severity, medication frequency, duration, and other subtle attributes. Clinical informatics are not available in the clinical code structure related to billing, despite their high level of importance. Named entity recognition (NER) is another significant clinical-concept extraction application based on DL; in this model, CRF baselines are improved using CNNs over pre-trained word embeddings. In extracted EHRs, assigning a notion of time is a complex issue, which can be addressed by temporal event extraction.
Information from two large clinical corpora is analyzed, and the text is used for pre-training a model in which word vectors (word2vec), along with a standard RNN, are used as a framework on clinical notes for the extraction of medical events and their associated times [8]. Prediction and structured relationships can be handled using the DeepDive application developed at Stanford; manual engineering is required in this model when a shared task is to be performed, providing large scope for improvement. Relations such as "test X reveals medical issue Y" or "X causes, worsens, or improves condition Y" can be extracted using relation extraction [25]. This is in contrast to the association of a date or time span with the corresponding clinical event, as seen in temporal event extraction. Mapping of words to concepts, as in the Unified Medical Language System (UMLS), and other text pre-processing schemes are used in feature generation with sparse AEs that provide input to CRF classifiers. This model outperforms the existing EHR-relation extraction models. These techniques are represented in Figure 3.

Figure 3: Information extraction from electronic health record.

122 D. A Janeera et al.

Deep learning in patient management and clinical decision making


Clinical text contains over 150,000 distinct medical abbreviations. In order to perform extraction, structured concept mapping must be done, followed by expansion. Abbreviation expansion is a challenging task, as several possible expansions may be available for each abbreviation [23]. Word embedding techniques can be used to overcome this issue. Clinical texts from books, journals, medical articles, Wikipedia, and intensive care units are used to pre-train a word2vec model that creates customized word embeddings. The DL NLP tasks require word embedding models, even though the embeddings themselves are not deep models. The embedding-based model outperforms the baseline abbreviation expansion scheme several times over, and the highest-accuracy embeddings are obtained by combining all background information sources.
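The embedding-based disambiguation idea above can be sketched as choosing the candidate expansion whose vector is closest to the context. The vectors and the abbreviation below are tiny toy values; real embeddings would come from word2vec trained on the clinical corpora described.

```python
import math

# Sketch: disambiguating a clinical abbreviation with embedding similarity.
# Vectors are toy values, not real trained embeddings.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def expand(context_vec, candidates):
    """Pick the expansion whose embedding is closest to the context vector."""
    return max(candidates, key=lambda name: cosine(context_vec, candidates[name]))

# Toy embeddings for two possible expansions of the abbreviation "MS".
candidates = {
    "multiple sclerosis": [0.9, 0.1, 0.0],
    "mitral stenosis":    [0.1, 0.8, 0.3],
}
context = [0.85, 0.15, 0.05]   # e.g. averaged embeddings of surrounding words
print(expand(context, candidates))
```

In practice the context vector would be built from the words around the abbreviation in the note, so a cardiology note and a neurology note resolve "MS" differently.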

2.3 Deep learning in clinical decision making

Recent studies show that clinical decision-support systems rely either on hybrid systems combining domain knowledge with data-driven techniques or on purely data-driven AI, with data-driven AI predominating. Appropriate steps to integrate and conjugate the available knowledge create hybrid approaches that enable collaboration between multiple AI categories. The major unresolved issues facing this collaboration, which demand explanation and transparency, can be addressed by medical informatics and AI. Surgeons make high-stakes, complex decisions while conducting surgeries, optimizing resource utilization, managing complications, and addressing variable risk factors. Much of the resulting surgical harm can be prevented by overcoming judgmental and diagnostic errors [26]. Individual judgement and hypothetico-deductive reasoning largely influence surgical decision making; these factors cannot be given a standard remedy because they are highly variable. Figure 4 represents a flowchart of clinical decision making, in which laboratory tests, imaging, electrocardiogram acquisition, simple measurements such as blood pressure, height, and weight, demographics, patient clinical history, and other related data are acquired. Physicians combine the acquired data with reference data and human knowledge for comparison and interpretation [19]. Measurement uncertainties and information completeness are also considered to validate data reliability. Interpretable decisions are then made, so that the reasoning behind them can be easily understood by tracing back. The resulting decision helps in planning follow-up observation, offering immediate treatment, or sending the patient home.
Risk mitigation and reduced variability can be achieved by implementing the National Surgical Quality Improvement Program, the Surgical Risk Calculator, and other conventional clinical decision support tools. However, data acquisition and entry into these systems must be performed manually, which is time consuming and offers

Figure 4: Flowchart of clinical decision making.


suboptimal accuracy that hinders clinical decision making. Research into strategies for improving clinical decision making is limited, despite the importance of the task. This chapter offers a model in which live data from the patient's EHR is fed to smart models that aid clinical decision making alongside human intuition [19]. The system can also be integrated with bedside health systems. Combining data on tumor stage and spread from cytological notes with medical images of tumors can greatly assist in understanding the disease and the patient's condition. DL and AI improve the reliability of clinical decision support systems.

2.4 Other applications in hospital setup

DL, ML, and AI schemes may be used for improving operational efficiency, patient monitoring, prediction of patient flow, clinical decision making, management and analysis of EHR, and various other purposes in the clinical environment. Patient satisfaction can be improved and hospital resources optimized by reducing appointment delays and waiting times through prediction algorithms. Radiography, ultrasound, magnetic resonance imaging (MRI), and computed tomography facilities are monitored to identify delay times. This data, along with walk-in times, can help in predicting waiting time. Several variables are extracted from the radiology database. Classification and regression trees, bagging, gradient boosting machines, k-nearest neighbors, multivariate adaptive regression splines, elastic net, support vector machines (SVM), random forests, neural networks, and similar learning algorithms can be used to fit the training data and fine-tune the parameters. The predictive accuracy of the algorithms can be measured using the root mean square error. For delay and waiting time prediction, the elastic-net algorithm achieves the most efficient and accurate results. In hospital wards and departments where emergency diagnostic decisions must be taken, the process can be sped up by automated diagnostic decision support applications [27]. When unplanned transfer to intensive care is required for at-risk patients, AI models that recognize features across several prediction variables are well suited. Healthcare system costs can be reduced substantially by predicting hospital readmission within a specific time window using AI algorithms. Unattended ambulatory patients can be monitored using the Scalable Medical Alert Response Technology integrated wireless system.
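The model-selection criterion described above, root mean square error (RMSE), can be sketched as follows. The waiting times and the two predictors' outputs are illustrative numbers, not results from a real radiology dataset.

```python
import math

# Sketch: scoring waiting-time predictors with root mean square error (RMSE).
# All values are illustrative.

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

actual_wait = [12.0, 30.0, 18.0, 45.0, 22.0]   # observed waits, minutes
model_a     = [10.0, 28.0, 20.0, 40.0, 25.0]   # e.g. an elastic-net predictor
model_b     = [20.0, 20.0, 20.0, 20.0, 20.0]   # naive constant baseline

scores = {"model_a": rmse(actual_wait, model_a),
          "model_b": rmse(actual_wait, model_b)}
best = min(scores, key=scores.get)             # lower RMSE is better
print(best, round(scores[best], 2))
```

The algorithm with the lowest RMSE on held-out data would be the one deployed, as the chapter describes for the elastic net.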
Caregivers are provided with a wireless interface to this system, along with additional features such as targeted alerting, signal processing, geo-positioning, and wirelessly integrated oxygen saturation and electrocardiography patient monitors. The emergency department and other critical departments can benefit in terms of improved patient outcomes from healthcare information systems with embedded AI algorithms. Patients' quality of life can be improved and treatment costs reduced using web-based patient support systems. Health monitoring centered on clinicians and


decision making centered on patients are the key factors in web-based patient support systems. In an emergency department, patients with chest pain are analyzed for acute cardiac complications and prediction of cardiac arrest using scoring algorithms. Efficient patient management in a healthcare facility and prediction of patient flow are achieved by applying AI-based patient flow management schemes. This enables forecasting visits to emergency departments, avoiding preventable visits and calls to health centers, customizing treatment plans, and managing patient flow efficiently. In hospitals, patient flow management and the access and allocation of hospital beds are challenging tasks. Bed availability and patient discharge can be predicted using the patient's clinical data as well as bed usage and related administrative data. Bed availability information can be used for scheduling surgeries and organizing the hospital system efficiently, which increases revenue and enables efficient utilization of hospital manpower and resources. Predictive analytics provides solutions for categorizing ward-wise data. Hospital patient flow is determined not only by hospital and patient factors but also by environmental factors; ambient temperature and other calendar variables can also be used to forecast patient flow. Weekly seasonality often prevails over monthly seasonality. Seasonal autoregressive integrated moving average (SARIMA) models, generalized estimating equations, and generalized linear models (GLM) are time-series models used for forecasting [23–25]. Increased patient satisfaction, better allocation and planning of hospital resources, healthcare quality improvement, and operational planning improvement are the benefits of patient flow prediction.
Artificial neural networks, time series regression, exponential smoothing, SARIMA, and linear regression are some of the patient flow prediction and forecasting schemes.
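Of the forecasting schemes just listed, simple exponential smoothing is the easiest to sketch. The daily arrival counts and the smoothing factor below are illustrative; SARIMA or GLM models would replace this for seasonal data.

```python
# Sketch: simple exponential smoothing, one of the patient-flow forecasting
# schemes listed above. Daily arrival counts are illustrative.

def exp_smooth_forecast(series, alpha=0.5):
    """Return the one-step-ahead forecast after smoothing the whole series.

    alpha close to 1 tracks recent values; close to 0 averages the history.
    """
    level = series[0]
    for value in series[1:]:
        level = alpha * value + (1 - alpha) * level
    return level

daily_arrivals = [100, 120, 110, 130, 125]   # e.g. emergency-department visits
print(round(exp_smooth_forecast(daily_arrivals), 1))
```

The forecast could feed bed allocation or staffing plans as the chapter describes; seasonal models such as SARIMA add weekly-cycle terms on top of this idea.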

3 Challenges in implementation of deep learning in healthcare

Multitasking issues, the bias–variance tradeoff, flexibility, generalization issues, overfitting and underfitting, high-performance hardware requirements, hyperparameter tuning, data size, and choosing the right model type are some of the inherent challenges in DL algorithms. It is also challenging to set a common benchmark for any specific clinical-support task, as most researchers use private data and are hesitant to share their datasets. Patient privacy concerns and other policies restrict the sharing and utilization of huge medical datasets by DL researchers [28]. EHR documents are available as digital data, printed documents, and handwritten


text. These data must be analyzed and understood by the DL algorithms. The shorthand notations and abbreviations in clinical texts vary from physician to physician.

3.1 Data standardization

Patient care and disease trajectories with high variability can be captured and modeled successfully with EHR. Hospice care, choice of terminal cancer treatment, laboratory test timings, provider protocol variability, cancer location, disease presentation, and other natural differences between patients introduce high variability. This makes it challenging to standardize patient care, despite continuous research efforts. Navigating longitudinal models within the huge search space is difficult given disease progression and the many sources of variability. Compared with traditional machine learning schemes, DL achieves better results and overcomes a large number of healthcare decision-making-related issues on the same dataset. The use cases of clinical decision support tools compound the variability of patient information. A mortality prediction model built from a specific subpopulation of patients across multiple specialties may not perform well on other subpopulations. DL techniques require extensive data, whereas traditional ML can work with smaller amounts; yet in healthcare, adequate data is often lacking. Manual data annotation is challenging with fewer resources or insufficient patient volume in smaller community institutions, leading to inadequate relevant training data. Transfer learning is a possible solution to this data inadequacy [29]. In transfer learning, a new model for a different institution or a different task is trained with the help of the weights and architecture of existing, well-validated models. The weights of an existing model can be updated efficiently with a smaller dataset by annotating only the newly acquired data, instead of building a completely new model. This reduces the effort and cost required for retraining the model or building a new dataset.
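The transfer-learning idea above, reusing validated weights and updating only a small part of the model on local data, can be sketched with a deliberately tiny model: a frozen "pretrained" feature layer plus a logistic output layer fine-tuned by gradient descent. The frozen weights and the four-example dataset are illustrative stand-ins, not a real pretrained network.

```python
import math

# Sketch: freeze a "pretrained" feature extractor, fine-tune only the output
# layer on a small institution-specific dataset. All weights/data are toy values.

FROZEN = [[1.0, 1.0], [1.0, -1.0]]   # stands in for validated pretrained layers

def features(x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in FROZEN]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fine_tune(data, lr=0.5, epochs=200):
    """Train only the output weights; FROZEN is never updated."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            f = features(x)
            err = sigmoid(sum(wi * fi for wi, fi in zip(w, f)) + b) - y
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
            b -= lr * err
    return w, b

# Tiny local dataset: (inputs, label).
data = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([0.9, 0.1], 1), ([0.1, 0.9], 0)]
w, b = fine_tune(data)
predict = lambda x: sigmoid(sum(wi * fi for wi, fi in zip(w, features(x))) + b) > 0.5
print(predict([0.8, 0.2]), predict([0.2, 0.8]))
```

In a real DL framework the same pattern applies: load the validated model, mark the early layers as non-trainable, and train only the final layers on the smaller annotated dataset.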

3.2 Bias and confounding

The Watson supercomputer is a significant example of the advancements in AI, ML, and DL. However, it has provided several flawed cancer treatment recommendations; according to physicians in other geographical regions, the system's suggestions are biased toward US-based routine care. This bias amplifies gaps in health outcomes and restricts the development and growth of personalized medicine for minorities. Certain researchers emphasize that the training data must include all minorities. Biased training datasets have enabled certain algorithms to predict future criminals in a racially biased manner. These are human-like biases inherited by AI, DL, and ML solutions [23]. Post-heart-attack care is another example, of gender bias, where the sexes are treated unequally due to medical misdiagnosis. Based on


the training data and the decisions made by the individuals profiled in the set, DL solutions inherit certain bias factors. Unawareness of the clinical goal and partiality and bias in clinical decision making are some of the existing challenges in DL techniques. Successful and efficient decision-making applications involve finding ways for DL to overcome human bias and learn from past mistakes. Figure 5 represents the clinical decision-making paradigm, comparing effective and ineffective clinical decision-making factors and their outcomes. A common instance of bias in the healthcare sector is that DL algorithms classify asthmatic patients as low risk during pneumonia prognosis and risk assessment, requiring no further intervention. In practice, aggressive treatment is prescribed to asthmatic patients suspected of pneumonia, which reduces their risk compared with non-asthmatic patients. Despite its hazardous effects, this problem still lacks efficient solutions. Debiasing algorithms have been proposed by German and Czech researchers to comprehend human cognitive bias and its effects. Confounding is as problematic as the bias issue. During a study, the input data and outcome must be analyzed for spurious associations. An example of this issue is the detection of pneumonia from chest X-rays using a DL model: the model learns that certain machines are used in certain ways and in certain places on patients who are likely to have pneumonia [28]. This is in contrast to a generalized learning model. Instead of using supervised applications and forcing desirable outputs, unsupervised learning models can be used to overcome this issue; similarity between the input data then acts as the driving factor for arriving at a suitable conclusion. Experiment randomization can be performed if confounding effects still persist.

3.3 Validation and continuous improvement

Extensive data validation, audit, and systematic debugging are essential even when the predictions of algorithms are beyond human limits. Improved financial and patient outcomes must be demonstrated before DL algorithms are implemented in hospitals. Randomized, multicentric prospective trials form the core of this validation; they are required to determine whether models trained at one site can be used at another. External validation is very scarce in real-world clinical environments [30]. A survey shows that out of 516 studies, only 6% of the ML trials were assessed and validated. Prospective validation against radiologists was satisfactory in the implementation of DL for the assessment of mammographic breast density. In metastatic breast cancer, DL-assisted review of lymph nodes was assessed to give pathologists improved decision-making accuracy and shorter review times. Prospective validation of diabetic retinopathy detection with diagnostic systems based on machine learning has been performed by researchers. This model was approved for clinical practice by the Food and Drug Administration (FDA) due to its

Figure 5: Clinical decision-making paradigm.


exceptional results. In the case of congenital cataracts, treatment suggestions and risk stratification are provided along with diagnosis, using the ML platform. For application in real-time colonoscopy, ML models were extensively developed and validated for identifying neoplastic polyps that require resection. The greatest strength of DL models is that they can learn from experience: as the availability of data increases, the performance of the model improves. Catastrophic forgetting is a significant challenge, where the model forgets previously learned data upon learning new information. More resources and time are required for retraining the model over the complete database. The issue can be mitigated by a decentralized computational architecture called federated learning, in which models run locally, for example on mobile phones, and are improved with data from a single user [29]. With this learning strategy, data security can be maintained and brain tumors can be segmented using deep neural networks without sharing patient information. Algorithms are also developed for periodic performance monitoring of the programs, updating, and evaluating regulations, due to the evolving nature of DL models.
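The federated learning strategy just described can be sketched at its core as federated averaging: each site trains locally and shares only model weights, never patient records, and the server averages them weighted by sample counts. The weights and patient counts below are illustrative.

```python
# Sketch: federated averaging. Each "site" contributes locally trained weights;
# only weights (never patient data) leave the site. Values are illustrative.

def federated_average(site_weights, site_sizes):
    """Average model weights, weighting each site by its number of samples."""
    total = sum(site_sizes)
    n_params = len(site_weights[0])
    return [
        sum(w[i] * n for w, n in zip(site_weights, site_sizes)) / total
        for i in range(n_params)
    ]

hospital_a = [0.2, 0.8, -0.1]   # locally trained weights, 300 patients
hospital_b = [0.4, 0.6,  0.1]   # locally trained weights, 100 patients

global_model = federated_average([hospital_a, hospital_b], [300, 100])
print([round(w, 3) for w in global_model])
```

The averaged global model is then sent back to each site for the next round of local training, so patient information never crosses institutional boundaries.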

3.4 Interpretability and explainability

The ability to present and express information in a way that is understandable to humans is called interpretability. In any learning algorithm, letting the user interpret the algorithm's output is a highly valued feature. Especially in medicine and other strictly regulated fields, where DL schemes are not adopted in clinical practice because they lack interpretability, this feature can help in overcoming limitations. Cautious decisions must be taken when translating results from DL algorithms into clinical practice, in applications where the use of large datasets may yield controversial results. Various DL algorithms do not pass the European General Data Protection Regulation (GDPR) requirement to reveal the logic and data involved in every decision [31]. The user should be given access to the path of reasoning that led the algorithm to a specific decision, and various schemes are being developed in DL technology to enable this. Prediction and attention maps are used to improve explainability when DL algorithms analyze CT scan images to diagnose acute intracranial hemorrhage. Figure 6 represents the various clinical decision-making approaches. Physicians are assisted in clinical decision making by a model in which specific data is examined for every individual patient, focusing on the credibility of facts. MRI scans are analyzed by researchers to classify liver lesions using DL approaches, supporting significant decisions. DL systems used during surgery to prevent the development of hypoxemia combine prediction with interpretability via important feature estimates, using gradient boosting machines. Local interpretable model-agnostic explanations


Figure 6: Clinical decision-making approaches.

(LIME) is an ML explanation technique that is popular for its explainability [7]. In LIME, a black-box model is approximated locally with a simple model, rather than approximating the whole model. Prediction changes can be evaluated with LIME by altering the input and varying components that make sense to humans, in order to explain an individual prediction. Cardiorespiratory data is analyzed to detect the risk of developing hypertension using an ML-based prediction algorithm.
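The perturbation idea behind LIME can be sketched in a highly simplified leave-one-out form: zero out each human-interpretable component in turn and record how much the prediction changes. The black-box model and feature meanings below are illustrative stand-ins; the full LIME method fits a weighted local linear surrogate over many random perturbations rather than this one-at-a-time variant.

```python
# Sketch: a leave-one-out simplification of LIME's perturbation idea.
# The "black box" and feature names are illustrative stand-ins.

def black_box(x):
    # Stand-in for an opaque risk model: risk driven mostly by features 0 and 2.
    return 0.7 * x[0] + 0.1 * x[1] + 0.2 * x[2]

def perturbation_importance(instance):
    """Zero each component; importance = how much the prediction drops."""
    baseline = black_box(instance)
    importance = []
    for i in range(len(instance)):
        perturbed = list(instance)
        perturbed[i] = 0.0
        importance.append(baseline - black_box(perturbed))
    return importance

patient = [1.0, 1.0, 1.0]   # e.g. [blood pressure flag, age band, ECG flag]
print([round(w, 2) for w in perturbation_importance(patient)])
```

For this linear stand-in the recovered importances match the model's own coefficients; for a real deep model they instead describe the model's behavior near this one patient, which is exactly the "individual prediction" explanation described above.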

3.5 Security and regulatory

Training DL models requires access to enormous datasets, which raises several privacy and security challenges. Secure data transfer between healthcare organizations is under debate. The high-profile data leakage hazards are not underestimated


by the stakeholders anymore. Data breaches and hacking are prevented by protecting sensitive information using security measures as recommended by the European GDPR. Deliberate hacking of a decision-making system may cause large-scale damage, which is a very serious threat. Patient data privacy and security must be warranted by industry regulations and standards for information technology resources. Irrevocable, secured cryptographic data can be transferred using blockchain technology, providing a higher level of data security. Data access is regulated through smart contracts, and data security and integrity through cryptographic tools; an immutable, public transaction log is provided by the blockchain [16]. Data is not placed in a single place in the network but is made available at each node. This feature makes blockchain hard to scale, costly to maintain, and slower in processing. It is not possible to expel a misbehaving user, despite the sovereignty of users that helps prevent pernicious companies from accessing the data. While preserving individual information, federated learning enables updating the learning model with a decentralized model-training paradigm that safeguards patient data security. Systems with full anonymization defeat themselves and fail when explanations are required; pseudonymization in DL systems is auditable and is a highly desired feature. Learning failures in DL algorithms and the resulting improper clinical decisions fall under medical negligence, leading to legal challenges. There is a lack of appropriate regulation in this sector, and the entity liable for malpractice and other issues is not defined in DL applications [12]. The fast-paced evolution of data and DL models poses unique challenges in defining regulations and evaluating updates.
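The immutable-transaction-log property attributed to blockchain above can be illustrated with a minimal hash chain: each audit record stores the hash of its predecessor, so any tampering breaks verification. This is a single-machine toy, not a distributed blockchain, and the log records are illustrative.

```python
import hashlib
import json

# Sketch: an append-only, hash-chained audit log illustrating the immutable
# transaction-log idea described above. Records are toy data.

def append_record(chain, record):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True) + prev_hash
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return chain

def verify(chain):
    """Recompute every hash; any tampered record breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True) + prev
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

chain = []
append_record(chain, {"actor": "dr_a", "action": "read", "patient": "p001"})
append_record(chain, {"actor": "model_v2", "action": "predict", "patient": "p001"})
print(verify(chain))                 # intact chain verifies
chain[0]["record"]["actor"] = "x"    # tampering with an earlier record...
print(verify(chain))                 # ...is detected
```

A real blockchain adds replication of this log across nodes and a consensus mechanism, which is what makes it costly to maintain and hard to scale, as noted above.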
The quality of training and validation data, the validation process, and the existing standards must be analyzed by the policymakers using these algorithms, in order to generate specific criteria and frame the necessary guidelines. Appropriate implementation of the algorithms for the welfare of people must be ensured by adequate laws.

4 Data analysis

In order to record events and data in EHR, several controlled vocabularies and classification schemes exist in internal hospital administrative environments. RxNorm and other medication codes; Logical Observation Identifiers Names and Codes (LOINC) and similar laboratory observation codes; Current Procedural Terminology (CPT) and other procedure codes; and the International Statistical Classification of Diseases and Related Health Problems (ICD) and other diagnosis codes are used for this purpose. The data and codes vary between healthcare organizations, with Systematized Nomenclature of Medicine – Clinical Terms (SNOMED CT) and UMLS-based resources mapped only partially. Data analysis and harmonization of the large array of inter-institutional data, across terminologies, is a promising research area.


Figure 7: EHR paper publication.

Figure 8: Research trends in deep EHR publications since 2015.

Cross-institutional applications and analysis are performed using DL models on several EHRs in the clinical code representation. Clinical notes, administered and prescribed medication, laboratory test results, sensor measurements, physical exams, diagnoses, demographics, and other patient data are stored in the EHR. Figures 7 and 8


represent the analysis of research publications in deep EHR since 2015. There has been tremendous growth in the number of publications over the past few years. Research related to data selection, concept representation, data extraction, quality assurance, representation learning, phenotyping, and prediction is further classified to reveal the research trend. Randomized clinical trials that use well-defined protocols and follow strict input criteria are considered when collecting data for decision making using DL algorithms. The complexity of clinical practice is higher than that of clinical trials. Formulation or imputation must be performed to deal with the incomplete clinical data faced by DL algorithms. During a DL study, patients may also be given certain decision-making choices for certain investigations, may present at a different disease stage, may be treated under different investigation protocols, and may vary in lifestyle, age, gender, ethnicity, or co-morbidities. Economics, time, and data storage constraints must also be considered when monitoring patients over long time periods. Training the algorithm with such data involves various challenges. Here is a case study of model interpretation based on the Google AI blog. Figure 9 illustrates the attribution schemes over a timeline in a hospital environment [32]. The data points are read by the DL model and patient outcomes are learned. Feed-forward networks and RNNs are used because the number of data points involved is large. Inpatient mortality prediction is also made using attribution schemes within 24 h of admitting a patient to the hospital. Based on the severity of the condition, the risk of death can also be predicted using the DL model. For a single patient, more than 150,000 data points may be considered from the patient record [33]. The EHR prediction outcomes must be evaluated to validate the model.
Standard classification metrics, such as precision, recall, and F1 score, along with other appropriate accuracy and use criteria, can be used to evaluate DL-based outcome prediction schemes for tasks such as diagnosis prediction, hypertension prediction, risk stratification, unplanned readmission prediction, analgesic response prediction, clinical event prediction, bone disease risk factor identification, diagnosis classification, and heart failure prediction. Timed clinical event prediction, disease progression modeling, and temporal diagnosis prediction metrics can be used for temporal prediction tasks.
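The standard classification metrics named above can be sketched directly from the confusion-matrix counts. The label vectors below are illustrative, standing in for, say, actual versus predicted unplanned readmissions.

```python
# Sketch: precision, recall, and F1 from binary labels. Vectors are illustrative.

def precision_recall_f1(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0      # flagged cases that were real
    recall = tp / (tp + fn) if tp + fn else 0.0         # real cases that were flagged
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # e.g. actual unplanned readmissions
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # model predictions

p, r, f = precision_recall_f1(y_true, y_pred)
print(round(p, 2), round(r, 2), round(f, 2))
```

In clinical evaluation, recall (sensitivity) is often weighted more heavily than precision, since a missed high-risk patient is usually costlier than a false alarm.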

5 Future research directions

Several diagnostic inference schemes have been developed using DL algorithms. However, DL prognosis that provides prior knowledge and predicts the post-treatment health situation at an early stage would be a desirable course for future research. In Figure 9, the path of prediction is represented by black boxes due to the lack of explanation in DL algorithms. When life-and-death decisions have to be taken by clinicians,


Figure 9: Patient timeline as represented in the Google AI blog.


explainability provides sufficient trust in DL algorithms [34]. Future research can be directed toward addressing this issue. Medical decisions may be taken irrationally by humans for various reasons; DL, ML, and other AI algorithms, once trained, are rational learners. Game theory-based approaches may be used as a solution for such behavior, although an optimal solution has not yet been attained. In order to improve future models, data transfer between institutions and hospitals must be made possible in large amounts for efficient functioning of the Deep EHR framework, while sensitive patient data is protected with strict privacy policies. Character-enhanced embeddings, bidirectional LSTMs, and RNNs are used for automatic de-identification of patient data using DL; these models operate on lexical features. Result reproducibility is another major challenge in deep EHR, as there is no universal benchmark for algorithm improvement. Privacy preservation and evaluation of the utility of generated data need to be performed in a more quantitative and accurate manner [35]. When supervised data generation is performed, the generated data is often biased toward the prediction task. Data variety is lacking due to the absence of data sharing between hospitals and institutions; these challenges must be overcome for data augmentation. Data leakage or compromise of algorithms is probable when AI algorithms are trained and tested online using a crowd-sourced data pool or a pool of databases. Security issues are a great concern for expert systems, and it is essential to safeguard patient privacy, creating a need for efficient privacy and security algorithms. Researchers are focused on removing the differences between human and artificial intelligence to a large extent. These advancements have increased trust in the use of AI technology in the healthcare industry among clinicians and patients [36].
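For contrast with the DL de-identification approaches mentioned above, a minimal rule-based baseline can be sketched with regular expressions. The patterns and the sample note are illustrative and nowhere near production quality; neural models are used precisely because identifiers rarely follow such clean patterns.

```python
import re

# Sketch: a minimal rule-based de-identification baseline, far simpler than
# the character-enhanced LSTM approaches described above. Patterns and the
# sample note are illustrative.

PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),       # US SSN shape
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),      # dd/mm/yyyy dates
    (re.compile(r"\bDr\.\s+[A-Z][a-z]+\b"), "[PROVIDER]"), # "Dr. Surname"
]

def deidentify(text):
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

note = "Seen by Dr. Smith on 04/12/2020, SSN 123-45-6789, stable on discharge."
print(deidentify(note))
```

A learned model replaces the hand-written patterns with per-token predictions, which is what lets it catch misspelled names, unusual date formats, and institution-specific identifiers.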
Supportive and learning ecosystems have to involve machines to support the daily activities of humans. Imbalanced or skewed datasets should not be used during the training or validation of AI algorithms. To enable unbiased learning, algorithms must be modified or data preprocessed when imbalanced classes of data are used. Most existing works have not addressed this issue, and it must be a focus of future research. Issues of data volume, variability, quality, uncertainty, causal inference, interpretability, and legal and ethical concerns are all directions for future research.
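One simple preprocessing remedy for the class-imbalance problem just described is inverse-frequency class weighting, so that the rare class contributes as much to the loss as the common one. The label counts below are illustrative.

```python
from collections import Counter

# Sketch: inverse-frequency class weights for imbalanced training data.
# Label counts are illustrative.

def class_weights(labels):
    """Weight each class inversely to its frequency so rare classes count more."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: total / (len(counts) * n) for cls, n in counts.items()}

labels = [0] * 90 + [1] * 10   # e.g. 90 healthy vs 10 diseased patients
weights = class_weights(labels)
print({cls: round(w, 2) for cls, w in weights.items()})
```

These weights would multiply each example's loss during training; oversampling the minority class or undersampling the majority class are alternative remedies with the same goal.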

6 Conclusion

DL algorithms have promising applications in patient management and clinical decision making. Accurate and quick decisions can be made with organized data management using AI techniques. The suboptimal accuracy and time-consuming, tedious manual data entry of traditional decision support systems are overcome by DL schemes. Bias, hypothetico-deductive reasoning, decision fatigue, complexity,


uncertainty, and time constraints are some of the factors that affect clinical decision making and lead to preventable harm, which DL schemes can help address. EHR data can be live-streamed to automated AI models. Preservation of human intuition, bedside assessment, attention to ethical challenges through monitoring and careful implementation, advances in model interpretability, and data standardization must all be taken care of for the successful integration of AI into clinical decision making. Data scientists and incisive clinicians must scrutinize prospective clinical applications and retrospectively validate the decisions provided by AI models before deployment for real-time usage. When AI is applied in the right manner, clinical care is transformed by shared decisions with respect to resource utilization, identification and control of complications, recognition and moderation of modifiable risk factors, the informed consent process, and augmentation of decisions to operate.

References

[1] Toh, T. S., Dondelinger, F., Wang, D. Looking beyond the hype: applied AI and machine learning in translational medicine. EBioMedicine, 2019, 47, 607–615.
[2] Liu, F., Weng, C., Yu, H. Advancing clinical research through natural language processing on electronic health records: traditional machine learning meets deep learning. In: Clinical Research Informatics, Cham, Springer, 2019, 357–378.
[3] Leite, H., Hodgkinson, I. R., Gruber, T. New development: 'Healing at a distance' – telemedicine and COVID-19. Public Money & Management, 2020, 1–3.
[4] Obeid, J. S., Davis, M., Turner, M., Meystre, S. M., Heider, P. M., Lenert, L. A. An AI approach to COVID-19 infection risk assessment in virtual visits: a case report. Journal of the American Medical Informatics Association, 2020.
[5] Kumar, E. S., Jayadev, P. S. Deep learning for clinical decision support systems: a review from the panorama of smart healthcare. In: Deep Learning Techniques for Biomedical and Health Informatics, Cham, Springer, 2020, 79–99.
[6] Miotto, R., Wang, F., Wang, S., Jiang, X., Dudley, J. T. Deep learning for healthcare: review, opportunities and challenges. Briefings in Bioinformatics, 2018, 19(6), 1236–1246.
[7] Ojeda, P., Zawaideh, M., Mossa-Basha, M., Haynor, D. The utility of deep learning: evaluation of a convolutional neural network for detection of intracranial bleeds on non-contrast head computed tomography studies. In: Medical Imaging 2019: Image Processing, Vol. 10949, International Society for Optics and Photonics, March 2019, 109493J.
[8] Loftus, T. J., Tighe, P. J., Filiberto, A. C., Efron, P. A., Brakenridge, S. C., Mohr, A. M., . . ., Bihorac, A. Artificial intelligence and surgical decision-making. JAMA Surgery, 2020, 155(2), 148–158.
[9] Deng, X., Huangfu, F. Collaborative variational deep learning for healthcare recommendation. IEEE Access, 2019, 7, 55679–55688.
[10] Shortliffe, E. H., Sepúlveda, M. J. Clinical decision support in the era of artificial intelligence. JAMA, 2018, 320(21), 2199–2200.
[11] Stoyanov, D., Taylor, Z., Carneiro, G., Syeda-Mahmood, T., Martel, A., Maier-Hein, L., . . ., Nascimento, J. C. (Eds.). Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 20, 2018, Proceedings (Vol. 11045). Springer, 2018.
[12] Hu, C., Ju, R., Shen, Y., Zhou, P., Li, Q. Clinical decision support for Alzheimer's disease based on deep learning and brain network. In: 2016 IEEE International Conference on Communications (ICC), IEEE, May 2016, 1–6.
[13] Lakshmanaprabu, S. K., Mohanty, S. N., Krishnamoorthy, S., Uthayakumar, J., Shankar, K. Online clinical decision support system using optimal deep neural networks. Applied Soft Computing, 2019, 81, 105487.
[14] Montani, S., Striani, M. Artificial intelligence in clinical decision support: a focused literature survey. Yearbook of Medical Informatics, 2019, 28(1), 120.
[15] Liu, S., Ngiam, K. Y., Feng, M. Deep reinforcement learning for clinical decision support: a brief survey. arXiv preprint arXiv:1907.09475, 2019.
[16] Echle, A., Rindtorff, N. T., Brinker, T. J., Luedde, T., Pearson, A. T., Kather, J. N. Deep learning in cancer pathology: a new generation of clinical biomarkers. British Journal of Cancer, 2020, 1–11.
[17] Zhu, R., Tu, X., Huang, J. Using deep learning based natural language processing techniques for clinical decision-making with EHRs. In: Deep Learning Techniques for Biomedical and Health Informatics, Cham, Springer, 2020, 257–295.
[18] Shickel, B., Tighe, P. J., Bihorac, A., Rashidi, P. Deep EHR: a survey of recent advances in deep learning techniques for electronic health record (EHR) analysis. IEEE Journal of Biomedical and Health Informatics, 2017, 22(5), 1589–1604.
[19] Gupta, V., Sachdeva, S., Bhalla, S. A novel deep similarity learning approach to electronic health records data. IEEE Access, 2020, 8, 209278–209295.
[20] Poongodi, T., Sumathi, D., Suresh, P., Balusamy, B. Deep learning techniques for Electronic Health Record (EHR) analysis. In: Bio-inspired Neurocomputing, Singapore, Springer, 2020, 73–103.
[21] Kaji, D.
A., Zech, J. R., Kim, J. S., Cho, S. K., Dangayach, N. S., Costa, A. B., Oermann, E. K. An attention based deep learning model of clinical events in the intensive care unit. PloS one, 2019, 14(2), e0211057. Xiao, C., Choi, E., Sun, J. Opportunities and challenges in developing deep learning models using electronic health records data: a systematic review. Journal of the American Medical Informatics Association, 2018, 25(10), 1419–1428. Solares, J. R. A., Raimondi, F. E. D., Zhu, Y., Rahimian, F., Canoy, D., Tran, J., . . . Conrad, N. Deep learning for electronic health records: a comparative review of multiple deep neural architectures. Journal of Biomedical Informatics, 2020, 101, 103337. Si, Y., Du, J., Li, Z., Jiang, X., Miller, T., Wang, F., . . . Roberts, K. 2020. Deep representation learning of patient data from Electronic Health Records (EHR): a systematic review. arXiv preprint arXiv:2010.02809. Rundo, L., Pirrone, R., Vitabile, S., Sala, E., Gambino, O. Recent advances of HCI in decisionmaking tasks for optimized clinical workflows and precision medicine. Journal of Biomedical Informatics, 2020, 103479. Antunes, R., Silva, J. F., Matos, S. 2020, March. Evaluating semantic textual similarity in clinical sentences using deep learning and sentence embeddings. In Proceedings of the 35th Annual ACM Symposium on Applied Computing (pp. 662–669). Xu, D., Hu, P. J. H., Huang, T. S., Fang, X., Hsu, C. C. A deep learning –based, unsupervised method to impute missing values in electronic health records for improved patient management. Journal of Biomedical Informatics, 2020, 111, 103576.

Deep learning in patient management and clinical decision making

139

[28] Huang, S. C., Pareek, A., Seyyedi, S., Banerjee, I., & Lungren, M. P. (2020). Fusion of medical imaging and electronic health records using deep learning: a systematic review and implementation guidelines. NPJ digital medicine, 3(1), 1–9. [29] Rafiq, M., Keel, G., Mazzocato, P., Spaak, J., Savage, C., & Guttmann, C. (2018, July). Deep learning architectures for vector representations of patients and exploring predictors of 30day hospital readmissions in patients with multiple chronic conditions. In International Workshop on Artificial Intelligence in Health (pp. 228–244). Springer, Cham. [30] Vial, A., Stirling, D., Field, M., Ros, M., Ritz, C., Carolan, M., ... & Miller, A. A. (2018). The role of deep learning and radiomic feature extraction in cancer-specific predictive modelling: a review. Transl Cancer Res, 7(3), 803–816. [31] https://ai.googleblog.com/2018/05/deep-learning-for-electronic-health.html [32] Ho, A. (2019). Deep ethical learning: Taking the interplay of human and artificial intelligence seriously. Hastings Center Report, 49(1), 36–39. [33] Wang, F., Casalino, L. P., Khullar, D. Deep learning in medicine – promise, progress, and challenges. JAMA Internal Medicine, 2019, 179(3), 293–294. [34] Chen, D., Liu, S., Kingsbury, P., Sohn, S., Storlie, C. B., Habermann, E. B., . . . Liu, H. Deep learning and alternative learning strategies for retrospective real-world clinical data. NPJ Digital Medicine, 2019, 2(1), 1–5. [35] Li, F., Chen, H., Liu, Z., Zhang, X. D., Jiang, M. S., Wu, Z. Z., & Zhou, K. Q. (2019). Deep learning-based automated detection of retinal diseases using optical coherence tomography images. Biomedical optics express, 10(12), 6204–6226. [36] Shickel, B., Loftus, T. J., Adhikari, L., Ozrazgat-Baslanti, T., Bihorac, A., & Rashidi, P. (2019). DeepSOFA: a continuous acuity score for critically ill patients using clinically interpretable deep learning. Scientific reports, 9(1), 1–12.

Neha Mehta, SVAV Prasad

Patient health record system

Abstract: The patient health record (PHR) system is one of the new platforms for empowering and supporting a new vision in health services. It belongs to the class of new-generation information systems emerging as user-centric healthcare record systems. PHRs have seen a substantial rise with the increase in the implementation of mobile computing. PHRs started as an extension of traditional electronic health records (EHRs); the PHR is still developing and enables patients to manage their records, thus enabling self-care. The fact that an EHR can be easily recorded, computed, and communicated means that it can make a big difference in the timeliness and even the integrity of treatment and quality care. It has been a promising tool for both the patient and the healthcare system, offering affordable benefits for the healthcare system as a whole. The system covers both the chronic conditions and the acute ill-health conditions of patients, along with their prevention. For the effective establishment of ICT (information and communication technology)-based healthcare, it is necessary to concentrate primarily on an acceptable level of safety for patients' personal information and their treatment, with assured post care. ICT plays a great role in maintaining the effectiveness and efficiency of seamless healthcare. This enhancement in healthcare has also focused researchers on e-Health and, thus, on better prediction and prevention of diseases, along with the personalization of healthcare.

Keywords: information system, user-centric, mobile computing, EHR, PHR

1 Introduction

Healthcare promotion, illness-related diagnosis, related therapy, and after-therapy rehabilitation are the building blocks of healthcare process organization. The vision of a patient health record (PHR) system is an interoperable system that intends to improve not only the PHR process but also the core of the health production process (HPP). It interconnects valuable chains of different procedures and tools that are inevitably linked through the supporting processes and tasks included in the role of public health. Thus, increasing patient safety based on regular safety indicators throughout the stages of the healthcare

Neha Mehta, Engineering (Consultant), Chhattisgarh Swami Vivekanand Technical University, Bhilai, Chhattisgarh, India, e-mail: [email protected] SVAV Prasad, ECE Department, Lingaya’s Vidyapeeth, Faridabad, Haryana, India, e-mail: [email protected] https://doi.org/10.1515/9783110708127-007

process is necessitated. The key support role of management allows and provides regular health updates for better prevention and management. ICT (information and communication technology)-related systems and applications form the back end for the ongoing, long-term care of patients. Health promotion is an important element of the core health delivery system, while the PHR and the EHR (electronic health record) are its guiding and supporting tools. Back-end administration, planning, logistics, and organization are the building blocks of healthcare system management. Day-to-day research has also brought up many new and improved ways to promote this system. Continuing medical education (CME) and continuing professional development (CPD) are part of training in the field, while EHRs serve as the repository of electronically maintained information.

In pursuit of these highlights, this chapter focuses on the scheme of traditional EHRs and their extension as PHRs. The conceptual framework of the PHR system, the major categories of PHRs, their evolution over time, and the role of PHR systems in providing and collecting information, along with the maintenance of clinical information, will be illustrated. The functionalities and the preferences to be maintained will be presented. The different levels of EHRs and the different levels of communication through support services will be discussed. Continuous patient tracking and monitoring, along with their socioeconomic impact, are also illustrated. Identifying patient data-related challenges and the potential of data storage in this field will help increase the quality as well as the quantity of services. A brief history of case studies, along with their relevance, will be given.
The concept of interoperability, with the different levels of e-prescription and its socioeconomic effects, will be described. The chapter will also assess the impact of EHRs on the quality of the hospital and the satisfaction level of patients through an evaluation model, and will examine the effect of EMR systems on workflow and efficiency in long-term care. This demonstration will show how the use of an EMR can reduce the number of steps in completing several day-to-day clinical processes, allowing the supporting staff the opportunity to dedicate time to the direct care of resident patients and thus contributing to improved quality of care. The benefits include better clinical decision-making, improvement in clinical outcomes, enhanced communication with patients, fewer medical errors, easier access to medical records, and improved adherence to clinical guidelines. In a systematic review, the impact of ICT-operated health on efficiency, quality, and costs has been found to be large. According to data analysts, the effect of the EMR on patient safety is small, but it is reported to be improving with time, with improved investments and strong evaluation. However, the lack of technical expertise in ICT and in physician practice settings is identified as the primary barrier in hospitals, along with secondary barriers like the cost of implementation, the complexity of implementation


processes, and the lack of evidence that these systems can deliver the claimed benefits to users. Various studies have been reviewed and have reported major benefits. Thus, the healthcare community is moving forward with the adoption of ICT and EMR systems.

2 Healthcare process organization

Healthcare refers to the maintenance of health through diagnosis, treatment, and prevention, or to the process of improving health through proper healthcare delivery by healthcare professionals. Modern healthcare is delivered by a team of interdisciplinary professionals: community health workers, health practitioners, preventive care teams, assisting personnel, etc. The healthcare process is one of interrelated healthcare activities. According to B. Beryman et al. [1], healthcare involves five important processes from the citizen's perspective:
– Prevention
– Detection
– Diagnosis
– Treatment
– Good end

Prevention means keeping the person healthy; it is a mechanism to keep oneself physically fit and includes further improvement at the individual level. Clinical prevention is further classified as: primary prevention, which avoids the manifestation of the disease; secondary prevention, which aims to treat the disease as soon as possible in order to prevent its worsening; and tertiary prevention, which aims to slow down the progression of the disease.

Figure 1 maps the involvement and back end of the different processes in healthcare. Health promotion at the front desk, diagnosis, treatment, and post care are steps in the flow behind every healthcare process, involving a variety of specialists and superspecialists at every stage. Information and communication techniques are the back-end processes that play a critical role in accessing and communicating between individuals, communities, and healthcare professionals. ICT forms the bridge of information between the healthcare sector and communities in terms of research and development. It also improves the efficiency of the health system and prevents medical errors.
The basic qualification of health line workers and other associated staff also forms a basic need of the system. However, the focus on healthcare improvement to ensure quality and access requires well-developed training programs for healthcare professionals [2]. Meanwhile, research and logistics, which may come from research centers, healthcare reports, healthcare or information providers, and the


Figure 1: Basic blocks of healthcare process.

healthcare industries, help the system generate new and improved ways of promotion from the core.

3 ICT-related systems

Improving quality of life has been one of the challenges of these times, and information and communication technology plays a vital role in almost all aspects of healthcare. The incorporation of ICT allows the health sector to develop into an emerging digital health sector. ICT exists in healthcare at different stages: management of records, automated medication delivery, wireless access to handheld digital medical instruments, and much more. The electronic storage of patients' medical data allows medical personnel to retrieve the information at any point in time and, in the same way, the data can be made available to the patient anywhere and at any time.

In other words, an ICT-related healthcare system is also known as patient-centered healthcare. It is the best way of sharing information and finding better specialists wherever the patient wishes, and the patient may even reach for a better, upgraded technology. These days, ICT is also being implemented in the


medication handling pathways, like the automated drug dispensing system, which has drastically reduced errors in drug delivery and improved quality of life. Thus, an ICT-based infrastructure allows a cost-beneficial profile with real-time data collection and acquisition, alongside data delivery with storage capacity [4].

ICT in the patient record allows access to the complete profile, with alerts, medications, and reminders, and is linked with decision support systems. It includes the primary patient record and the secondary patient record [5]. The primary patient record is the record used by the healthcare provider for reviewing the patient's health and the related documents of observations or instructions. The secondary patient record is a combination of data elements useful for nonclinical users and is used for regulation, administration, or payments. The patient record supports the entire health system in quality review, utilization, and quality assurance. These two records are the components of the patient record system that is further stored and used.

4 Electronic health record (EHR)

The EHR is the computerized version of the paper record. EHRs are considered real time and give a broader chart of the patient's history. They contain past medical histories, medications, immunizations, allergies, treatments, and test results, with records of care and treatment dates, and are intended to go beyond normal clinical data. EHRs are maintained by authorized providers so that they can be produced securely and instantly whenever required. EHRs encompass the digitization of the medical record in this digital era to continuously support healthcare, health education, and related research. Supplementary information in EHRs includes lab test reports, imaging reports, details of drugs administered, allergy records, legal reports, and attachments. The comprehensive nature of an EHR allows it to be more efficient and effective in patient care, facilitating the exchange of the patient's information between different healthcare sites, reducing duplication, and speeding up patient care. Thus, the key features of the EHR can be summarized as follows:
– Improved and easier access in patient care
– More participation of the patient
– Better coordination leading to improvement in care
– Improvement in healthcare staff efficiency
– Cost savings due to less duplication
– Improved patient outcomes

This vision and outcome of the EHR has led to the enhancement and strengthening of the core of the HPP. The EHR continues to serve as a central database with all the relevant and updated data stored in it [8].


4.1 EHR implementation

EHR implementation needs to be driven by the enhanced availability of integrated patient data. It intends to improve the doctor–patient relationship, with the stream of healthcare shared by many coworkers. In the earlier stages, when computing systems began to be used in hospitals, they were used only for billing and financial systems; later, they were used to store patient data and were thus extended to become more clinically relevant. Furthermore, hospital laboratories were increasingly computerized so that all test reports became available in digital form and are stored to be accessible from anywhere. This has served a variety of purposes with a single access point, from formal data to informal workspace, and from individual impressions to medicolegal records [9]. The EHR demonstrates the following benefits:
– Comprehensive information
– Better data retention and communication
– Easy access
– Continuous quality reporting
– Complete and precise documentation, with labs and all clinicians
– Convenient e-prescription and direct connection with drug stores
– Improvement in investigation
– Reduced duplication of lab tests

The EHR allows integrated access to patient data. An interface engine [12] serves as a router and technical buffer among the various attachments. There is a two-way connection between the patient database (PD) and the database information (DI), which serves as an interface engine among the associated units, as shown in Figure 2. Effective and efficient coordination of communication among all the blocks leads to timeliness in care. Almost all EHRs are built on a database; there is an overwhelming amount of data present in the PD, and there is an underlying need to structure the unstructured data, which poses a number of challenges.
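The routing role of such an interface engine can be pictured as a minimal publish/subscribe dispatcher that receives an update from one unit and forwards it to every subscribed unit. The following Python sketch is purely illustrative: the class name, message types, and payload fields are hypothetical and are not drawn from any real EHR product.

```python
# Illustrative sketch of an interface engine acting as a router between
# an EHR's patient database and associated units (lab, pharmacy, etc.).
# All names here are hypothetical.

class InterfaceEngine:
    def __init__(self):
        # Map message type -> list of subscriber callbacks
        self.subscribers = {}

    def subscribe(self, message_type, callback):
        self.subscribers.setdefault(message_type, []).append(callback)

    def publish(self, message_type, payload):
        # Route the message to every unit subscribed to this type;
        # return how many units received it.
        delivered = 0
        for callback in self.subscribers.get(message_type, []):
            callback(payload)
            delivered += 1
        return delivered


engine = InterfaceEngine()
lab_inbox, pharmacy_inbox = [], []
engine.subscribe("lab_order", lab_inbox.append)
engine.subscribe("prescription", pharmacy_inbox.append)

engine.publish("lab_order", {"patient_id": "P001", "test": "HbA1c"})
engine.publish("prescription", {"patient_id": "P001", "drug": "metformin"})
```

Because each unit only subscribes to the message types it needs, the engine also acts as the "technical buffer" described above: producers and consumers never communicate directly.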
As a transformation in the practice of medicine, the HITECH Act (Health Information Technology for Economic and Clinical Health) was passed in the year 2009, which led to a big transition from paper prescriptions to EHRs. According to Wallis [13], from 2009 to 2017, the percentage of physicians using EHRs increased from 48% to 85%. This was, however, a combined effort of data scientists and clinicians. Thus, machine learning was implemented in the form of artificial intelligence (AI), which allows computing machines to identify parameters and draw conclusions. This has further improved the level of predictions, with fewer errors, and thus improved patient care. According to Wallis [13], a survey, the "Harris Poll," was conducted for Stanford Medicine in


Figure 2: Different levels of communication in EHR.

2018, which reported on the use of EHRs and the time they consume that could otherwise be spent interacting with patients. However, using the power of AI allows the digitization of patients' records as well as their analysis through predictive models. In this domain, the algorithms are designed and the machines are trained through real-world examples. In 2015, a start-up named Saykara launched a virtual assistant called "Kara." The foundation was laid to combat the amount of time and burden spent on fulfilling the requirements of EHRs. "Kara," as a virtual assistant, listens to the patient–physician conversation, interprets it, and transforms it into notes or referrals. The virtual assistant also enters the data into the EHR. In June 2019, Saykara released its new version, "Kara 2.0," as an AI-empowered assistant. It has served as a breakthrough voice application that transcribes all the discussions between the patient and the physician and converts the text into structured data. It then generates the clinical note with history, assessment, plan, and referrals. It denotes a versatile, augmented, AI-based approach accompanied by human feedback, enabling the system to train itself to meet the accuracy requirements of healthcare. In March 2020, with this conversational AI breakthrough leading to more innovations, Saykara won the Stratus Award for the implementation of cloud computing and practical innovations in the field of AI.


Different acronyms are used, somewhat inconsistently, for the state-of-the-art EHR [3] at five different levels:
– Automated medical record (AMR)
– Computerized medical record (CMR)
– Electronic medical record (EMR)
– Electronic patient record (EPR)
– EHR

The different levels of medical records are also shown in Figure 3.

Figure 3: Different levels of medical records.

The AMR contains about half of the IT-generated information, along with paper-based records and some automated data that may be generated from communication, result reporting, or any available digital recording. The CMR is the digitized form of all the medical records, along with the scanned form of all available paper records. The EMR is IT-generated data prepared along with data management and a decision support system made through interactive guidelines. The EPR is more than just the documentation of medical data; it contains all the disease-related data of a patient and can be established beyond the institution. It is a longitudinal projection of the medical record. The EHR contains the detailed health-related data of an individual: food habits and issues, wellness record, health-related information, allergies, disease record, etc. It is established beyond any institutional framework.


5 Patient health record (PHR)

A usability assessment of VA's My HealtheVet (a web-based PHR system launched by the Veterans Administration, USA, in 2003), conducted as part of national PHR usability and adoption research in the USA [14], examined the following PHR scenarios:
– Registering and logging in to the PHR
– Refilling prescriptions through the PHR
– Tracking health in the PHR
– Searching for health information (already filled in)

On the basis of this research, the following lessons and challenges came to light.
Lesson 1: The patient registration stage of the PHR faces the development challenge of being both simpler and secure.
Lesson 2: Enhancement of the subsets (on the basis of usability) will increase PHR promotion and patient preference as well.
Lesson 3: Making patients' data more accessible (in either printable or downloadable form) will aid visualization.
Lesson 4: Any addition to the PHR should be a value addition to the available functions.

6 Artificial intelligence in EHR

The application of AI makes the EHR system grow rapidly and thus brings an effective improvement in patient care. It also helps in clinical decision-making, risk assessment, and progression prediction. AI has been found productive in multiple domains [10]. The three major techniques are machine learning, deep learning, and natural language processing (NLP).

Machine learning techniques are a combination of computational methods made to learn patterns in two ways. One is supervised learning, in which the model learns from a training data set with output labels and then predicts the output for a new case; the other is unsupervised learning, in which only a training data set is provided and the model learns from it, identifying the underlying patterns within the data. The techniques used in supervised machine learning include support vector machines, random forests, the least absolute shrinkage and selection operator (LASSO), logistic regression, and classification and regression trees, while unsupervised learning includes hierarchical clustering, k-means clustering, and mixture models.

Deep learning, a branch of machine learning, is a technique based on artificial neural networks (ANNs). The technique is known to


mimic the human brain. The architecture of a neural network is a combination of multiple layers, with each layer performing computation on the data received from the previous layer. The network may work forward, as in a feed-forward network, or backward, as in a backpropagation network. The types of neural networks include convolutional neural networks, feed-forward neural networks, multilayer neural networks, and many more. The complexity of EHR implementation in hospitals is studied in [11] on the basis of the EFIR context, content, and implementation process. Similarly, NLP generates structured data from unstructured data. It interprets human language and extracts information [15], for example, determining diagnoses from clinical notes, separating sections, and identifying events. The architecture of machine learning in healthcare is illustrated in Figure 4, which shows back-end supported preprocessing of data with further feature selection and extraction through supervised and unsupervised learning.

Figure 4: Architecture of machine learning in healthcare.
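To make the feed-forward/backpropagation idea above concrete, here is a minimal sketch of a tiny two-layer network trained by gradient descent on synthetic toy data (the logical OR function standing in for binary clinical features). This is a didactic illustration only, not a clinical model; all names and hyperparameters are arbitrary choices.

```python
# Toy feed-forward network (2 inputs, 2 hidden units, 1 output) trained
# by backpropagation on synthetic data. Didactic sketch only.
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic "training set": inputs and target outputs (logical OR).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

# Randomly initialized weights; index 2 of each row is the bias term.
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    # Each layer computes on the outputs of the previous layer.
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    y = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

loss_before = total_loss()
lr = 0.5
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Backpropagation: output delta, then hidden deltas.
        d_out = (y - t) * y * (1 - y)
        d_hid = [d_out * w_out[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            w_out[j] -= lr * d_out * h[j]
            w_hidden[j][0] -= lr * d_hid[j] * x[0]
            w_hidden[j][1] -= lr * d_hid[j] * x[1]
            w_hidden[j][2] -= lr * d_hid[j]
        w_out[2] -= lr * d_out
loss_after = total_loss()
```

After training, the squared-error loss drops sharply, showing the forward pass and the weight updates working together exactly as the text describes, just at toy scale.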

6.1 Training in the field

As the focus on improvement in healthcare increases, quality assurance and better value are the two dedicated needs that lead to the overall improvement of the system. The training themes of CME and CPD are the professional requirements in this field. Development courses can be accommodated as online/offline workshops for on-the-job training, short-term courses, practical projects, distance learning, simulations and role play, and other types of collaboration. In


professional development, quality improvement appears to grow more through multidisciplinary trainings, practical projects, and ad hoc training setups. A wider range of techniques is being used in the improvement of healthcare, clinical audits, record making, service frameworks, etc. The Institute for Healthcare Improvement (IHI), founded in 1991 in the USA and led by Dr. Don Berwick through a National Demonstration Project, is dedicated to quality improvement in healthcare. The IHI has since influenced healthcare improvement with its rapid growth in many nations, like Sweden, Latin America, Ghana, Canada, Singapore, Denmark, and England. The IHI has given eight domains of quality improvement [2]. These are given in Table 1.

Table 1: Eight domains of quality improvement in healthcare.

– Customer knowledge: Identifies the customers or groups who are assessing the healthcare; their demands and preferences are taken into context.
– Healthcare as a process: Acknowledges the interconnectedness of services, procedures, techniques, and activities in the system.
– Variation and related measurement: The variation in performance is measured and is considered in the improvement of designing or reframing the system.
– Leading/making changes in the healthcare: Strategic management is required in this domain for making any variation in methods or in the skills of the people.
– Collaboration among groups: Knowledge and technique may be combined to increase effectiveness among the groups.
– Developing knowledge: Development of new knowledge and recognition of existing knowledge are covered in this domain.
– Social accountability: Gives importance to the social context of healthcare.
– Professional knowledge: A continuous increase in relevant knowledge allows high competency in several domains.

Online learning, distance learning, conferences, and online workshops are becoming more popular for CPD these days. For instance, a class of e-modules was launched by the NHS Clinical Governance Support System for the domain of clinical governance in 2005. The program targeted practice management, along with self-assessment and learning sets supported by full-time facilitators. Another example is a module launched as a continuing education program for nurses in the oncology department in the USA. This program was conducted for 7 months with learning methods like webcasts. As a broader learning strategy, one-to-one training is also among the best ways, sometimes understood as the best way for detailing facts and motivation. However, it is a part of informal teaching and is not a


common method for quality improvement. In England, a good scenario for distance learning was launched, in which ICT (information and communication technology) is implemented through partnership with universities. It has been accredited with work-based learning schedules for nurses. This module incorporated mentorship in the form of distance learning for the nurses and is reported as quality improvement training [2]. Two quality improvement programs, namely, IMPROVE (Improving Prevention through Organization, Vision, and Empowerment) and IDEAL (Improving Diabetes Care through Empowerment, Active Collaboration and Leadership), were started in the USA using didactic instruction over a 2-year period, and telephone call support was also provided as an improvement initiative.

7 Vision of PHR

The PHR is a lifelong resource of the electronic information of an individual, maintained and owned by that individual. PHRs have received increased attention in both the medical sector and the IT industry. EHRs differ from PHRs in that EHRs are maintained by healthcare organizations, whereas in PHRs the individuals manage and own their health-related information, which comes from the different healthcare providers. However, PHRs need to be efficiently managed and used on the ground, which requires an initial understanding of their history and trends, along with reforms [16]. The basic requirements of a PHR system are:
1. Open-source software (OSS): An OSS provides restriction-free software, free from any proprietary and distribution-related limitations. An OSS gives the ability to change or modify the software through full access to its source code, with a free license so that it can be copied or distributed according to requirements. In this way, the PHR system can be implemented to meet the needs of the healthcare environment.
2. Remote access and interoperability: Since the PHR system is web based, it is flexible and interoperable in nature, so that it can be accessed anywhere and at any time. The web-based nature of the PHR system does not require downloading or installing any software [6].
3. Vital functions: The PHR needs to be fully functional and should have higher quality functions. On the basis of vital functions, the health system can be classified according to the PHR-S functional model [6].
4. Architectural requirement: A PHR can be of two types on the basis of its architecture. One is the PHR system as a standalone application, in which the patients keep their records updated themselves; they are self-responsible, and the application needs to be installed on a personal computer [7].
The second is the tethered PHR, in which the PHR is not a standalone application but part of an EHR system, linked to a clinically controlled infrastructure that is internally connected

Patient health record system

153

to the healthcare system; the records are managed and transferred through that system. This type of architecture also supports collaboration with EMRs, EHRs, and so on, and it contains data from multiple repositories.

7.1 Functional aspects of PHR

According to researchers, PHRs originated many years ago, but realistic implementations are much more recent [16]. The Health Level Seven (HL7) organization, founded in 1987, is a non-profit, ANSI-accredited healthcare IT standards development organization, and it has approved a functional model for PHR systems, the PHR System Functional Model (PHR-S FM), released as a Draft Standard for Trial Use (DSTU) [18]. The model, known as HL7 PHR-S FM, defines the set of functions for PHR systems and the guidelines for health information exchange between various PHR systems and between various EHR and PHR systems. The aspects of the PHR may be data types or standards, with different functional aspects such as user profile, mode of interaction, data source, and goals [17]. Three significant efforts that specify the functional standardization of the PHR system are summarized and distinguished in [21]:
(a) ISO/IEC 25000, in which the International Organization for Standardization introduced the SQuaRE (Software Product Quality Requirements and Evaluation) series of international standards [21];
(b) the HL7 functional model of the PHR, given by the HL7 organization [22], which specifies the functions along with the security features of the PHR system under three categories, that is, Personal Health, Supportive Services, and Information Infrastructure;
(c) the meaningful use criteria, given by the National Coordinator for Health IT [23], which provide a 15-item set of criteria that should be met in order to implement a health record system as an effective practice tool.
According to the functional model described by Genitsaridi et al. [20], there are five functional categories of the PHR.

7.2 PDT category (problem, its diagnostics, and its treatment)

In this category, a complete record of the problem with its diagnosis and treatment is maintained. The problem part includes the disease record, for example, the incident for which the patient has registered; the diagnosis part covers the diagnostic procedures, tests, history, diagnostic documents, and finally the diagnosis results.

154

Neha Mehta, SVAV Prasad

Self-monitoring category: this category includes the functions that help patients self-monitor their health and regularly record their daily-life observation parameters. These parameters are unofficial but are crucial from the wellness-management point of view.
Communication management: this category covers communication services such as appointment making, scheduling, and generating reminders. Service messages such as re-prescriptions and drug refills are also covered in this category.
Security and access control: in this category, controlling functions such as authentication, authorization, audit, delegation, data access, and data security are covered.
Intelligent factors: in this category, intelligent behavioral functions are included, such as intelligent alerts, priority recommendations, clinical-trials-related information, intelligent data export, and presentation.
There is a wide range of application software, personal healthcare devices, sensors, and monitoring devices. They have greatly widened the domain of remote healthcare, as presented in the study "Remote patient monitoring: a comprehensive study" [24, 25], with the advantage of tracking: the patient is allowed to move freely and change position, while GPS data tracks the patient. However, early experiences with PHRs were, traditionally, bundles of papers at hospitals and healthcare offices [26]. Maintaining health records increases the quality of care and the satisfaction level of patients. Later on, clinicians adopted the NHIN (Nationwide Health Information Network), which increased the demand from patients for online access to health records. Furthermore, a public survey across different age groups [27] reported that when patients are provided with their summary health records, they are more satisfied, with better understanding of their disease and more motivation toward their treatment plans. Similar reports showing the impact of improved self-care in chronic diseases are given in [28, 29].
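As an illustration of the security and access control category, the sketch below shows a minimal owner/delegate read check for a PHR entry. All class, field, and user names here are hypothetical assumptions for the example and are not taken from the HL7 PHR-S FM or any real system.

```python
# Minimal sketch of PHR access control (hypothetical names,
# not taken from the HL7 PHR-S FM or any real system).
from dataclasses import dataclass, field

@dataclass
class PHREntry:
    category: str              # e.g. "PDT" or "self-monitoring"
    owner: str                 # the patient who owns the record
    data: dict
    delegates: set = field(default_factory=set)  # users granted read access by the owner

def can_read(entry: PHREntry, user: str) -> bool:
    """Authorization check: the owner can always read; others only if delegated."""
    return user == entry.owner or user in entry.delegates

entry = PHREntry("PDT", owner="alice", data={"diagnosis": "hypertension"})
entry.delegates.add("dr_bob")
print(can_read(entry, "alice"), can_read(entry, "dr_bob"), can_read(entry, "mallory"))
# True True False
```

A real PHR system would layer authentication and audit logging on top of such a check, as the category list above indicates.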

8 Challenges in healthcare system

The healthcare industry has seen advances in the past two decades; with the increasing involvement of IT in this sector, healthcare data and systems face many challenges in terms of data storage, processing, and management. These challenges are:
– Due to the increase in intelligent and professional healthcare services, there is an extensive increase in the volume of data.
– Due to the wide number of diseases and parameters, a wide variety of data is generated about the health of an individual, and thus there is a wide variety of data from multiple sources.


For example, a patient makes different types of visits to different departments at different times and with different diseases. Therefore, medical imaging data, lab reports, surgery reports, and hospitalization records all need to be stored and managed. Similarly, these data come in different formats, such as image, text, audio, or video. Moreover, since the departments are equipped by different service providers with different installed equipment, the data standards and device parameters also differ. All of this makes the entire system very complex.
– Rapid and accurate processing of medical data helps healthcare staff provide rapid and effective treatment to the patient for an early, risk-free response.
– Mining data in the healthcare sector is comparatively more challenging than in other sectors; thus, it demands more research and serious attempts (such as AI) at data analytics and related services.
A comprehensive setup for the healthcare system is necessary for its effective management. The demand of this field is to manage the multisource, heterogeneous data end to end using appropriate technical means.
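To make the multi-source, heterogeneous-data problem concrete, the toy sketch below maps department-specific field names onto one shared schema. The field names and sample records are invented for illustration, not drawn from any real hospital system.

```python
# Toy normalization of heterogeneous department records into a common schema.
# Field aliases and sample records are invented for this example.
def normalize(record: dict) -> dict:
    """Rename department-specific keys to the shared schema's keys."""
    aliases = {"pt_id": "patient_id", "PatientID": "patient_id",
               "dept": "department", "Dept": "department"}
    return {aliases.get(key, key): value for key, value in record.items()}

lab = {"pt_id": "P-001", "dept": "laboratory", "result": "HbA1c 6.1%"}
imaging = {"PatientID": "P-001", "Dept": "imaging", "result": "chest X-ray"}
print(normalize(lab)["patient_id"], normalize(imaging)["department"])
# P-001 imaging
```

Real integration additionally has to reconcile units, coding systems, and media formats (image, audio, video), which is what makes the problem hard in practice.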

9 Technologies involved: from traditional to extension

A wide range of technologies focus on data analysis, data storage, and data mining in the healthcare sector.
Mobile computing: this is an emerging field of information technology dedicated to providing quality information services, in the form of data storage and computation through mobile phones or laptops, to users at one location or at different locations. The data may be transmitted without a physical connection. Advances in this field have allowed healthcare professionals to use mobile applications in various domains of clinical practice. For example, if a patient only needs advice about continuing his medication, he may communicate and get counseled through a mobile application, via a health record manager, which saves the time of both the patient and the doctor. In a similar way, numerous applications have been developed that can assist medical staff and service providers in important tasks such as information management, counseling support, clinical decision-making, health record maintenance, patient monitoring, and medical training.
Internet of things (IoT): this is the network of objects or things that are equipped with technologies so that, when connected, they can exchange data among themselves over the internet. IoT is basically a connection between
the things or objects, rather than people. As an emergency service, IoT refers to techniques that collect data, integrate it, and make it available to healthcare professionals via telemedicine. These days, IoT is also commonly used for home healthcare, which has improved the experience and service efficiency of home healthcare services. For example, in Apple's HomeKit, home devices are controlled through an iOS application connected to devices such as the Apple Watch and iPhone. Another example is an application named "Elder Care," a smart home application for the elderly and individuals with disabilities.
Cloud computing: this technology provides internet-based services through virtualized resources. It uses the mobile internet and combines with big data technology across different mobile terminals.
Wearable computing and devices: this includes devices worn by an individual for real-time monitoring of health-related parameters. It has been part of emerging healthcare technology through wearable devices such as wristbands, wrist watches, and sometimes smartphones. Wearable devices are also capable of computing, recording, and detecting data at any instant.
IoMT (Internet of medical things): this is a further application of IoT, specifically designed for medical purposes. It covers the acquisition, monitoring, and analysis of medical data. Such devices range from blood pressure monitors to heart rate monitors.
How does AI enhance healthcare?
Data extraction: using applications called "abstractors," clinician-provided notes are reviewed and structured data is pulled out using AI. The role of AI in this context is to recognize key biological terms and to uncover insights.
Similarly, Amazon Web Services has also introduced a cloud service applying AI, with functionality to index data from the clinical notes provided.
Data entry and documentation: after clinical notes are captured through NLP, AI applications for data entry and clinical note composition free clinicians from data entry, increasing their focus on patients.
Diagnostic and predictive algorithms: big-data-based prediction models are involved in the diagnosis and prediction of health conditions. Google and Enlitic are two renowned start-ups developing prediction and interpretation algorithms. Such applications warn clinicians of high-risk conditions.
Decision support: clinical decision support applications help recommend treatment strategies that may be generic or rule based, built on algorithms that learn from new data.
AI applied to health record systems tends to improve data discovery and, moreover, data extraction. In the personalization of treatment and recommendations, AI has increased the potential of EHRs and made them more user friendly. The customization of EHRs thus makes things easier for clinicians by saving a lot of
their time. However, system rigidity is one of the unavoidable obstacles in this domain. The combination of AI with machine learning has gained user preference, and thus there is an improvement in clinical outcomes and quality of life; but this capability needs to be integrated more deeply with EHRs to be effective. Continuous advances in the capabilities of NLP, clinical decision support, telemedicine, telehealth, and automated image analysis are on track in this field. Despite its complexity, AI has also enabled potential cost savings across several domains:
– Medical image analysis
– Assisted medical diagnosis
– Assisted drug delivery
– Risk prediction
– Assisted robotic surgery
– Medical data security
– Virtual nursing assistant
– Assisted administrative workflow
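The "abstractor"-style data extraction described above can be imitated, very crudely, with a keyword dictionary. Real abstractor tools use trained NLP models; the term list and note text below are invented purely for the example.

```python
# Crude stand-in for an NLP "abstractor": pull known medical terms
# out of a free-text clinical note. Term list and note are invented.
import re

TERMS = {"hypertension", "diabetes", "asthma"}  # hypothetical term dictionary

def extract_terms(note: str) -> list:
    """Return the known terms that occur in the note, sorted alphabetically."""
    words = set(re.findall(r"[a-z]+", note.lower()))
    return sorted(words & TERMS)

note = "Patient reports asthma since childhood; family history of diabetes."
print(extract_terms(note))  # ['asthma', 'diabetes']
```

The gap between this sketch and a production abstractor (negation handling, abbreviations, context) is exactly where the trained NLP models mentioned above come in.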

10 E-prescription

An e-prescription is an electronically generated, system-based medical prescription. It allows the physician, patient, pharmacist, nurses, and assisting staff to transmit or receive a prescription digitally. In this system, the prescriber is a clinician or member of the clinical staff, who searches the existing database of the patient record, reviews the medical information, and finally uploads the new prescription to the medical file. All the data in an e-prescription is handled through the Transaction Hub, which channels this data to other departments such as pharmacy, imaging, and laboratory. There are six levels of e-prescription:
Level 1: Electronic reference – basic information including drug and dose information; long-term data is also available at this stage.
Level 2: Prescription writer – this level searches the drug information given in level 1 and creates the prescription; long-term data is not available at this stage.
Level 3: Supporting data – this level makes support data available, for example, allergy history, demographics, and formulary data.
Level 4: Medication – medication management is done at this level, on the basis of the level 3 data combined with prior medication; renewal of medication is done here.
Level 5: Connectivity – interconnectivity between different services such as pharmacy, nursing, and imaging is established here.
Level 6: EMR integration – the collected information is integrated with the EMR, which completes the record.


Level 6 completes e-prescribing and medication management, which is then available for access at the patient's site.
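The Transaction Hub's role of channeling prescription data to other departments can be sketched as a simple message router. The class name, department names, and message fields below are illustrative assumptions, not a real e-prescribing protocol.

```python
# Illustrative sketch of the Transaction Hub routing described above.
# Department names and the prescription dict are invented for the example.
class TransactionHub:
    def __init__(self):
        self.queues = {"pharmacy": [], "imaging": [], "laboratory": []}

    def submit(self, prescription: dict) -> None:
        """Route one e-prescription message to its target department's queue."""
        dept = prescription["department"]
        if dept not in self.queues:
            raise ValueError(f"unknown department: {dept}")
        self.queues[dept].append(prescription)

hub = TransactionHub()
hub.submit({"patient": "P-001", "drug": "amoxicillin 500 mg", "department": "pharmacy"})
print(len(hub.queues["pharmacy"]))  # 1
```

A production hub would also validate the prescription against formulary and allergy data (levels 3–4) before routing it onward.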

10.1 Socio-economic impacts of E-prescription

– There is a reduction in prescription as well as dispensing errors.
– E-prescription has decreased the workload compared to paper prescription.
– Avoidance of drug interactions and chemical reactions.
– It is a reliable platform for offering drug substitutes, which may also be checked against the approved list of the insurance provider.
– Better compliance with medication.
– Less risk of damage to prescriptions and reports.
– Reduction in drug abuse, and better drug surveillance.
The socio-economic impacts of e-prescription are shown in Figure 5, and the implementation of AI in healthcare faces certain pros and cons, which are listed in Table 2.

Figure 5: Socioeconomic impacts of E-prescription.


Table 2: Pros and cons of using AI in healthcare.

Pros:
– Accuracy in diagnosis and decision making
– No tiring, no wear and tear
– Rationality in decision making
– It is selfless and thus works with no breaks
– It is robust and scalable
– Increased efficiency

Cons:
– High cost
– May lead to unemployment of medical staff
– No experience-based improvement
– No space for creativity
– Dependency on machines

In conclusion, PHR systems are static repositories combining data, records, knowledge, and software tools [30], which allow patients to become active role players in their care. Also, the combination of a PHR with an EHR system can work as a standalone application for consumers. However, issues related to technology infrastructure are key parameters in the design of PHR systems. Wald and Bloom [31] depicted the functional issues and the impact of ICT-operated health on efficiency, quality, and costs. The effect of the EMR on patient safety is small but, according to data analysts, is reported to be improving with time. Studies have also suggested the need for investment and evaluation in this domain. However, the requirement of technical expertise in medical ICT, physician practice settings, the cost and complexity of implementation, and the lack of supporting evidence are a few barriers to the delivery of healthcare benefits to users.

References
[1] B. Bergman, D. Neuhauser and L. Provost, "Five main processes in healthcare: a citizen perspective", BMJ Qual Saf, 2011, 20(Suppl 1), pp. 141–142.
[2] Evidence scan: Quality improvement training for healthcare professionals. 2012. The Health Foundation. pp. 1–51.
[3] Dobrev, A. 2008. Report on: The conceptual framework of interoperable electronic health record and ePrescribing systems. www.ehr-impact.eu. pp. 1–50.
[4] Borthne, K. Challenges of healthcare and health ICT. Features, 2007, 2(2). Available at: Healthmanagement.org.
[5] Dick, E. B. S., Detmer, D. E. The Computer-Based Patient Record: An Essential Technology for Health Care, Revised, Washington DC, National Academy Press, 1997. Available at: http://www.nap.edu/catalog/5306.html.
[6] Genitsaridia, H. K., Koumakisa, L., Mariasa, K., Tsiknakisa, M. Towards intelligent personal health record systems: review, criteria and extensions. Procedia Computer Science, 2013, 21, 327–334.
[7] Shukla, M. S. K., Sadriwala, T. Electronic health records (EHR): in clinical research and patient recruitment. International Journal of Medical and Health Research, 2017, 3(1), 23–28.
[8] Seymour, D. F., Graeber, T. Electronic Health Records (EHR). American Journal of Health Sciences – Third Quarter, 2012, 3(3), 201–210.
[9] Grimson, W. G., Hasselbring, W. The SI challenge in health care. Communications of the ACM, 2000, 43(6), 49–55.
[10] Kim, M. I. Personal health records: evaluation of functionality and utility. Journal of the American Medical Informatics Association, 2002, 9, 171–180.
[11] Boonstra, A. V., Vos, J. F. J. Implementing electronic health records in hospitals: a systematic literature review. BMC Health Services Research, 2014, 14, 370.
[12] Tang, C. J. M. Electronic Health Record Systems. In: Shortliffe, E. H., Cimino, J. J. (eds.) Biomedical Informatics. Health Informatics, New York, Springer, 2006, 447–475. doi: https://doi.org/10.1007/0-387-36278-9_12.
[13] Wallis, C. How artificial intelligence will change medicine. Nature, 2019, 576, 19–26.
[14] Haggstrom, J. J., Saleem, A. L., Russ, J. J., Russell, S. A., Chumbler, N. R. Lessons learned from usability testing of the VA's personal health record. Journal of the American Medical Informatics Association, 2011, 1–5.
[15] Lin, J. S. C., Chiang, M. F., Hribar, M. R. Applications of artificial intelligence to electronic health record data in ophthalmology. TVST, Special Issue, 9(2), 1–15.
[16] Kim, H. J., Bates, D. W. History and trends of "personal health record" research in PubMed. Case report.
[17] Fylan, L. C., Cartwright, A., Fylan, B. Making it work for me: beliefs about making a personal health record relevant and useable. BMC Health Services Research, 2018, 18(445), 1–12.
[18] Report: Health Level Seven's Personal Health Record Functional Model Approved as a Draft Standard for Trial Use, For Immediate Release. Health Level Seven Inc.
[19] Ma, Z. W., Zhou, S., Wen, H., Zhang, Y. Intelligent healthcare systems assisted by data analytics and mobile computing. Hindawi Wireless Communications and Mobile Computing, 2018, Article ID 3928080, 1–16.
[20] Genitsaridi, H. K., Koumakis, L., Marias, K., Tsiknakis, M. Evaluation of personal health record systems through the lenses of EC research projects. Computers in Biology and Medicine, 2015, 59, 175–185.
[21] Report: Software Engineering – Software Product Quality Requirements and Evaluation (SQuaRE) – Guide to SQuaRE. 2005. ISO/IEC 25000.
[22] http://www.hl7.org/
[23] http://www.healthit.gov/
[24] Malasinghe, N. R., Dahal, K. Remote patient monitoring: a comprehensive study. Journal of Ambient Intelligence and Humanized Computing, 2019, 10, 57–76.
[25] Kamel, George, L. E. Remote patient tracking and monitoring system. International Journal of Computer Science and Mobile Computing, 2013, 2(12), 88–94.
[26] Halamka, K. D. M., Tang, P. C. Early experiences with personal health records. Journal of the American Medical Informatics Association, 2008, 15, 1–7.
[27] Liu. 2011. Barriers to the adoption and use of personal health record systems. In: Proceedings of the 2011 iConference, pp. 363–370.
[28] Badran, A. B., Khan, S., Sii, F., Shah, P. Barriers to adoption of a personal health record in an ophthalmic setting: lessons from implementation of a Glaucoma Patient Passport. Clinical Ophthalmology, 2019, 13, 1369–1375.
[29] Newell, C. C., Punshon, G., Leary, A. Severe asthma: emergency care patient driven solutions. International Journal of Health Care Quality Assurance, 2017, 30, 628–637.
[30] Tang, J. S. A., Bates, D. W., Overhage, J. M., Sands, D. Z. Personal health records: definitions, benefits, and strategies for overcoming barriers to adoption. Journal of the American Medical Informatics Association, 2006, 13, 121–126.
[31] Wald, B. M., Bloom, A. A patient-controlled journal for an electronic medical record: issues and challenges. Medinfo, 2004, 11, 1166–1170.

S. Jaya, M. Latha

Prediction of multiclass cervical cancer using deep machine learning algorithms in healthcare services

Abstract: Cervical cancer is a dangerous tumor that develops in a woman's cervix. Cervical cells appear in two conditions, normal and abnormal, and both show similar patterns of nucleus and cytoplasm. There are seven classes of cervical cells: superficial squamous, intermediate squamous, columnar epithelial, mild dysplastic, moderate dysplastic, severe dysplastic, and carcinoma in situ. Predicting the stage of a cervical cancer is very difficult, let alone whether a person is affected by a tumor or not. In such cases, statistical analytics and machine learning algorithms are very helpful for predicting the state of the cancer in its early stages. The main focus of this chapter is analyzing various deep machine learning algorithms and finding the performance measure that gives the maximum accuracy for the prediction of cervical cancer. Statistical models of size, shape, and structure are most important for feature extraction from Pap smear images. From the dataset, 237 samples were collected and used to develop an applied mathematical model to analyze texture variation and study the correlation between shape and texture. Various classifiers were implemented and compared under machine learning algorithms to measure accuracy. The proposed work is analyzed from three kinds of environments: data analytics, big data, and digital image processing. The entire implementation of this chapter was carried out in MATLAB R2016a.

Keywords: image processing, cervical cancer, feature extraction, classifiers, machine learning, Pap smear images

1 Introduction

1.1 General concepts of data analytics using learning algorithms

Analytics is "the science of analysis." Data analytics focuses on the science of examining and researching original data to obtain useful structure. The main process of data analytics is the extraction of meaningful information from huge amounts of data, structured or unstructured. Data analytics has spread across most business enterprises to support sound decision-making. It refers to assumption and prediction methods intended to bring solutions at distinct levels. The original data can be filtered and analyzed together with raw data sources, and statistical models applied, to make better decisions in various fields such as business, industry, medicine, healthcare, banking, customer interactions, and energy management. An important aspect of analytics is decision support, which provides information to support the human decision-making process.

Acknowledgments: The author(s) have not received any financial support to publish the chapter.
S. Jaya, M. Latha, Department of Computer Science, Sri Sarada College for Women, Salem 16, Tamil Nadu, India
https://doi.org/10.1515/9783110708127-008

Figure 1: Types of data analytics.

Figure 1 shows the four types of data analytic techniques that provide benefits to various applications.

1.1.1 Descriptive analytics

Descriptive analytics is used to summarize raw data and transform it into a pattern that humans can understand. This type of analytics tells us what happened in the past; it is an easy way to explore, in brief, content that happened earlier. It is helpful for deriving a pattern from past events so as to set strategies for the future. For example, in the medical field, the past reports of women with cervical cancer will be helpful for further treatment. This is one of the techniques often used by analysts to make decisions based on past information.

1.1.2 Diagnostic analytics

Diagnostic analytics helps an expert understand an issue and the state of the problem, which leads to overcoming the initial problem quickly. Diagnostic analytics assists in analyzing why something happened in the past. For example, in health care,
the doctor can reach a clear conclusion about the state of the patient's disease by analyzing symptoms such as fever, dizziness, and fatigue.

1.1.3 Predictive analytics

Predictive analytics began in the 1940s as governments started using early computers. In an organization or business, people introduce a product at a specific time by using prediction-based analytics. Various prediction algorithms are available to predict a future state using machine learning: support vector machines, artificial neural networks, Bayesian networks, decision trees, and regression models. For example, predictive analysis of cervical cancer can predict the cancerous state and whether a patient will be affected by a tumor or not. It can predict the future based on the current shape, size, and structure of the cells, as well as from past reports of patients. This type of analytics takes existing data and loads it into machine learning models that predict what will happen next.
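A minimal predictive model in this spirit can be a one-feature decision stump: learn a threshold from past records, then predict future cases. The feature values and labels below are synthetic, purely for illustration.

```python
# Minimal predictive model: learn a single threshold on one feature from
# past records and predict future cases. Feature values and labels are
# synthetic, invented for this illustration.
def fit_stump(xs, ys):
    """Pick the threshold that best separates the two classes."""
    best_t, best_acc = None, -1.0
    for t in xs:
        acc = sum((x >= t) == y for x, y in zip(xs, ys)) / len(xs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

xs = [0.2, 0.4, 0.6, 0.8]        # e.g. a normalized nucleus-size feature
ys = [False, False, True, True]  # past outcome: abnormal cell or not
t = fit_stump(xs, ys)
print(t, 0.7 >= t)  # 0.6 True
```

The algorithms named above (SVM, neural networks, decision trees) generalize this idea to many features and more flexible decision boundaries.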

1.1.4 Prescriptive analytics

Prescriptive analytics is a collection of data, combining both descriptive and predictive references, for its models, and is applied in decision-making systems. Furthermore, it is useful for evaluating a decision against various possible future assumptions. Prescriptive analytics is the last step in business analytics, in which a final decision is taken.

1.2 Applications of data analytics and big data

1.2.1 Banking sectors

In the banking and financial services sector, management accesses large amounts of customer-related data. Related information about products and services is then communicated to each customer individually. These products and services include loan details, interest, the arrival of new policies, bill details, and outstanding debts. Such banking transactions can be processed with the help of data analytics.

1.2.2 Communications, media, and entertainment

The main role of analytics in media and entertainment is the analysis of consumer insights: audience feedback about a particular business run by a company. The best way to analyze
human behavior and thinking is to learn whether the company easily understands customers' mentality about a product: what customers feel, and how they feel. With such feedback, the company can improve the sales of its products.

1.2.3 Policing and security

Data analytics has been used for nearly a decade to predict crime rates, and the technology is used to find the rate of crime at specific locations. It is used to analyze whether the crime rate has increased or decreased over previous years, and which type of crime has increased most. Various types of crime can be predicted using advanced data analytics; useful inputs include crime type, date, time, and location.

1.2.4 Transportation

Data analytics is useful in a country's transportation industry. It can be used to generate day-to-day updates for routing optimization. It is used to solve traffic problems, select profitable networks for transport services, and predict incidents in time. In modern society, goods are constantly transported from one place to another.

1.2.5 Manage risk

In insurance, data analytics is used on customer claim data. A company collects data and policies that help the customer make decisions. Evaluation is done by an agent before an individual is insured, and then proper insurance is provided. Nowadays, analytical software is used to flag dishonest claims; it is very important to bring such fraudulent claims to the management's attention.

1.2.6 Delivery logistics

A number of logistics companies work all over the world, such as Flipkart, Amazon, DHL, and FedEx. These companies find the best and shortest routes for shipping to achieve the best delivery times. GPS is one of the most useful tracking systems used with data analytics for delivering goods.


1.2.7 Web provision

Web provision is the fastest way to deliver information to people. Useful web provisions include Facebook, Instagram, and Twitter. Through these social media platforms, information is spread to all parts of the world. Data analytics plays a vital role in managing the network bandwidth needed to reach this goal.

1.2.8 Healthcare – medical image analysis

Data analytics is one of the most important tools for handling patient reports in a hospital. Reports are maintained at every stage of a patient's visit; based on those reports, a doctor can assess the patient's disease state and whether they are at a critical or an initial stage.

1.2.9 Fraud and risk detection

To assess a customer's creditworthiness, banks apply data analytics to inputs provided by the customer at various stages, including PAN card, CIBIL scores, and cheque transaction details. Fraud here includes the possibility that a customer obtains loans from multiple banks while carrying outstanding balances; in such cases, analytics helps analyze the customer's past history.

1.3 Statistical models

The main idea of data analytics is to study business data using statistical models and techniques to discover and understand historical patterns and predict the future state of the business. Mathematical functions and methods are applied to the dataset to arrive at a solution. Statistical and mathematical models are used in various fields to improve business through analysis of the company's past reports. Such analysis contains distinct variables that are used to explore the relationships between other variables. To test inference and validate data, one should use confidence intervals and hypothesis testing. One of the best-known methods in statistical modelling is regression analysis, and several types of regression analysis are used in real-time applications. For example, this chapter considers the diagnosis of multiclass cervical cancer on Pap images with a feature-extraction dataset.
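The regression analysis mentioned above can be illustrated with ordinary least-squares simple linear regression in a few lines; the (x, y) points below are made up for the example.

```python
# Ordinary least-squares simple linear regression, the kind of statistical
# model discussed above. The (x, y) data points are invented for illustration.
def fit_line(xs, ys):
    """Return (slope, intercept) minimizing the sum of squared residuals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

xs = [1, 2, 3, 4]
ys = [2, 4, 6, 8]
slope, intercept = fit_line(xs, ys)
print(slope, intercept)  # 2.0 0.0
```

Multiple and logistic regression extend the same least-squares idea to several predictors and to categorical outcomes, which is how regression enters the cervical cancer classification task.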


1.4 Machine learning algorithms for predicting analysis

Machine learning is the method of instructing the computer to learn like humans; such learning can exceed human capability. When there is so much data that a normal person cannot comprehend the patterns and models in the dataset, a machine learning algorithm helps take decisions and make them very clear. Humans cannot always predict whether a particular event will occur; using machine learning, one can predict future incidents based on past experience and past dataset reports. The input we give to the computer should be understandable by the computer, so the dataset should be very clear and realistic. Machine learning offers various types of classifiers for prediction. The predictive analysis algorithms used for classification are:
1. Supervised learning: pattern recognition and regression
2. Unsupervised learning: clustering, association rules, dimensionality reduction
3. Reinforcement learning: genetic algorithms, SARSA, Q-learning, DQN
4. Ensemble learning: stacking, bagging, boosting
5. Neural networks and deep learning: CNN, ANN
These machine learning classifiers can be used in various applications and fields, such as medicine, transport, rainfall prediction, self-driving cars, satellite forecasting, face and fingerprint recognition, and forensic detection.

1.4.1 Supervised learning
Regression and classification play important roles in this type of learning. Under regression, we have the various regression methods explained in Section 1.3, Statistical models. Classification may be based on K-nearest neighbor, naive Bayes, SVM (support vector machine), or decision tree.
(i) KNN (K-nearest neighbor)
KNN is a supervised machine learning algorithm. It is easy to implement and flexible enough to handle both classification and regression. Because the dataset is framed with classes and labels, it is called a supervised learning algorithm. Based on the dataset, one has to teach the computer what the data is and how it should differentiate each data point; only then can the computer classify the dataset (yes or no, this or that). Feature selection plays a vital role in pre-processing the dataset. The system classifies data based on the distance between data points.

Prediction of multiclass cervical cancer using deep machine learning algorithms


KNN implementation steps:
1. Load the dataset of cervical cancer.
2. Assign the K value to select the number of neighbors, for example, K = 3.
3. Find the distance between the query instance and each sample in the training dataset – calculate the Euclidean distance.
4. Order the collection of distances in ascending order, from the smallest to the largest distance.
5. Select the first K entries from the ordered collection; these are the points nearest to the query.
6. Assign labels from the K entries.
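The KNN steps above can be sketched in a few lines of Python (the chapter's experiments use MATLAB); the feature vectors and labels below are hypothetical toy data, not the real cervical cancer dataset.

```python
from collections import Counter
from math import dist

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    `train` is a list of (feature_vector, label) pairs.
    """
    # Step 3: Euclidean distance from the query to every training sample,
    # Step 4: ordered ascending
    neighbors = sorted(train, key=lambda s: dist(s[0], query))
    # Step 5: take the K closest entries
    labels = [label for _, label in neighbors[:k]]
    # Step 6: assign the majority label among the K neighbors
    return Counter(labels).most_common(1)[0][0]

# Toy 2-D data: two well-separated classes
train = [((1.0, 1.0), "normal"), ((1.2, 0.9), "normal"), ((0.9, 1.1), "normal"),
         ((5.0, 5.0), "abnormal"), ((5.2, 4.8), "abnormal"), ((4.9, 5.1), "abnormal")]
print(knn_predict(train, (1.1, 1.0)))
```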

Naive Bayes implementation steps:
Task 1: Calculate the prior probability for the cervical cancer class labels.
Task 2: Calculate the conditional probability of every attribute for every class.
Task 3: Apply these values to the Bayes theorem to find the posterior probability.
Task 4: Report the highest-probability class for the given input attributes (features).
(ii) SVM (support vector machine)
The principal aim of the SVM classifier is to determine the best hyperplane, the one that separates the data points of one class from those of another. A good hyperplane for an SVM is the one that separates the class labels with the widest margin between the two classes. The attributes are called data points; based on the data points, each class is placed and the SVM separates the classes with a line.

Figure 2: Work flow of support vector machine.

Figure 2 shows the workflow of SVM, proceeding sequentially from data collection to performance evaluation. For example, the cervical cancer dataset has six class variables (superficial squamous, intermediate squamous, and columnar epithelial under normal cells; mild dysplastic, severe dysplastic, and carcinoma in situ under abnormal cells) along with some essential attributes. The work has been implemented


in the MATLAB environment, and the obtained SVM results show it to be one of the most effective classifiers for medical diagnosis.
(iii) Decision tree
The decision tree classifier is used effectively as a categorization technique in various pattern recognition situations, addressing challenges such as image recognition and character or text identification. This classifier generally succeeds at complex classification problems. A decision tree classifier shows a high ability to handle heterogeneous datasets, whether numerical or categorical. Decision tree classifiers are basically non-parametric and are ideal for handling nonlinear relations between feature attributes and classes. The structure resembles a tree. Generally, a decision tree has three elementary components:
1. Root node (parent node)
2. Hidden (internal) nodes
3. Terminal nodes (otherwise known as leaf/child nodes)
The root node should have at least one child node. The diagram below shows a decision tree structure based on the cervical cancer dataset with the attributes area and standard deviation. Here, only two values are shown, but the original dataset has 17 nodes. These are all continuous values; there is no categorical data.

Figure 3: Model of decision tree.

Figure 3 explains a decision tree for the cervical cancer dataset by using two features, area and standard deviation.
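The split criterion behind such a tree can be illustrated with the Gini impurity, a common (though here assumed, not the chapter's stated) measure for choosing thresholds; the rows and the "area" values below are hypothetical toy data.

```python
def gini(labels):
    """Gini impurity of a set of class labels (0 for a pure set)."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def split_quality(rows, feature, threshold):
    """Weighted Gini impurity after splitting `rows` on feature <= threshold.

    Lower is better; a perfect split returns 0.
    """
    left = [label for feats, label in rows if feats[feature] <= threshold]
    right = [label for feats, label in rows if feats[feature] > threshold]
    n = len(rows)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

# Hypothetical rows: ({feature: value}, label)
rows = [({"area": 120}, "normal"), ({"area": 140}, "normal"),
        ({"area": 300}, "abnormal"), ({"area": 320}, "abnormal")]
print(split_quality(rows, "area", 200))  # 0.0 - a perfect split
```

A tree learner would evaluate many candidate thresholds per feature and grow the tree from the splits with the lowest impurity.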

1.4.2 Unsupervised learning
Unsupervised learning is a machine learning technique in which there is no need to supervise the model; instead, the model is left to work on the content of the data by itself. It primarily handles data without class labels. The unsupervised learning model is a more open-ended method compared with other learning approaches. The main reasons for using unsupervised learning algorithms are given below:


– Find various unknown patterns in data
– Search for features that are helpful for classification
– The input dataset is analyzed and label fitting is done by the user
(i) Clustering – hierarchical clustering
Clustering is one of the important concepts of unsupervised learning. The main task of clustering is to find structure in a group of unclassified information. Clustering algorithms analyze the data and join similar data into groups (clusters). There are various types of clustering in unsupervised learning algorithms. In this method, each item of information is sorted by type, and the user can set the number of clusters according to their dataset. Examples include hierarchical clustering and K-means clustering. Hierarchical clustering is a popular technique among machine learning algorithms; the term "cluster" implies collecting similar data points from the corresponding dataset. The types of hierarchical clustering techniques are:
1. Agglomerative
2. Divisive
Agglomerative hierarchical clustering technique – At first, every data point is defined as its own cluster. At each iteration, the closest clusters are merged until K clusters remain. The elementary rules of agglomeration are:
– Calculate the proximity matrix
– Consider every data point as a cluster
– Iterate: merge the two nearest clusters and update the proximity matrix
– Stop when a single cluster (or the desired number of clusters) is formed
The hierarchical clustering method is represented by a dendrogram, a tree-like structure that records the series of merges or splits.
Divisive hierarchical clustering technique – This is not popularly used in the real world. The divisive method starts from one cluster containing every data point, and the data points are then successively split into individual groups; because the groups are formed by division, this approach is called divisive hierarchical clustering.
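The agglomerative rules above can be sketched as follows, a minimal single-linkage implementation in Python (not the chapter's MATLAB code) on hypothetical 2-D points:

```python
from math import dist

def agglomerative(points, k):
    """Single-linkage agglomerative clustering down to k clusters.

    Start with one cluster per point, then repeatedly merge the two
    closest clusters, as in the rules above.
    """
    clusters = [[p] for p in points]          # every data point is its own cluster
    while len(clusters) > k:
        # find the pair of clusters with the smallest point-to-point distance
        i, j = min(((a, b) for a in range(len(clusters))
                    for b in range(a + 1, len(clusters))),
                   key=lambda ab: min(dist(p, q)
                                      for p in clusters[ab[0]]
                                      for q in clusters[ab[1]]))
        clusters[i] += clusters.pop(j)        # merge and shrink the list
    return clusters

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11)]
print(sorted(len(c) for c in agglomerative(pts, 2)))  # [2, 3]
```

Recording the order of merges (instead of stopping at k) would yield the dendrogram described above.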
(ii) K-means clustering
K-means is an iterative clustering algorithm that refines its solution at every iteration. First, a specific number of clusters K is picked. The main task of this clustering is to group the data points into K classes; the result of the algorithm is a set of "labels" that assigns each data point to one of the K groups. In this clustering technique, every group is characterized by a center


of mass, called the centroid. Computing the centroid is an essential step for each cluster, as it attracts the points closest to that cluster.
(iii) Dimensionality reduction
Dimensionality reduction is the operation of decreasing the number of random variables by obtaining a set of principal variables. Feature selection and feature extraction are its two components:
– Feature selection is one of the best methods to improve the accuracy of a dataset by selecting the exact class variables or attributes.
– Feature extraction is acquiring the characteristics of an object/dataset; elaborating the statistical values of the original dataset yields the features.
The different methods applicable for dimensionality reduction are:
(i) Principal component analysis
(ii) Singular value decomposition
(iii) Independent component analysis
Principal component analysis (PCA): The primary concept of PCA is to trim the dimensionality of a real dataset that has too many variables related to each other. It is a non-parametric method for extracting the relevant information from the dataset, a way to identify a pattern and to highlight similarities and differences. Finding a pattern in high-dimensional data is difficult, and for analyzing a huge dataset PCA is a powerful tool. Compression is one of its uses in various applications, reducing storage space and redundant data.
Singular value decomposition (SVD): Image compression is one of the best examples. SVD takes a rectangular matrix with any number of rows and columns; it finds the eigenvalues and eigenvectors (for instance, of a 2 × 2 or a 4 × 4 matrix) and factorizes the matrix as

A = USV^T (1)

where
– A refers to an m × n matrix
– U refers to an m × n orthogonal matrix
– S refers to an n × n diagonal matrix
– V refers to an n × n orthogonal matrix
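The eigenvector computation underlying PCA can be sketched in plain Python via power iteration on a 2 × 2 covariance matrix — a minimal illustration of the idea, assuming toy 2-D data rather than the chapter's image features (the chapter's experiments use MATLAB):

```python
def principal_component(data, iters=100):
    """First principal component of 2-D data via power iteration on the
    covariance matrix - a minimal sketch of the PCA idea."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    centered = [(x - mx, y - my) for x, y in data]
    # 2x2 covariance matrix entries
    cxx = sum(x * x for x, y in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    cyy = sum(y * y for x, y in centered) / n
    v = (1.0, 1.0)                                  # arbitrary starting vector
    for _ in range(iters):
        # multiply by the covariance matrix, then normalize
        v = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (v[0] ** 2 + v[1] ** 2) ** 0.5
        v = (v[0] / norm, v[1] / norm)
    return v

# Points spread mainly along the y = x direction
data = [(1, 1.1), (2, 1.9), (3, 3.2), (4, 3.9), (5, 5.1)]
vx, vy = principal_component(data)
print(round(vx, 2), round(vy, 2))   # close to (0.71, 0.71)
```

Projecting the centered data onto this vector reduces the two features to one while keeping most of the variance.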

Independent component analysis (ICA) is a machine learning technique for separating independent sources from a mixed signal. The aim of independent component analysis is to decompose a mixed signal into its additive subcomponents, under the assumption of mutual independence of the components. In medical applications it plays a vital role in biomedical signal processing,


such as signals of the heart and brain, where the aim is to separate the signals into subcomponents in order to identify the activity of the different signal sources.
(iv) Association rule
This unsupervised technique refers to discovering inter-relationships between variables in large databases. Association rule mining is not well suited to numerical data; it is appropriate for non-numerical, categorical data. Applications of association rules:
– A set of multiclass cancer patient reports are clustered by their gene expression measurements.
– Groups of sales outlets depend on frequent buying history.
– Variables are grouped by the ratings given by movie viewers.
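The two core quantities of association rule mining, support and confidence, can be sketched in a few lines of Python; the categorical records below (symptom sets per patient) are entirely hypothetical.

```python
def support(transactions, itemset):
    """Fraction of transactions containing every item in `itemset`."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(transactions, antecedent, consequent):
    """Confidence of the rule antecedent -> consequent:
    support(antecedent and consequent) / support(antecedent)."""
    joint = set(antecedent) | set(consequent)
    return support(transactions, joint) / support(transactions, antecedent)

# Hypothetical categorical records (attributes noted per patient)
records = [{"smoker", "hpv_positive"},
           {"smoker", "hpv_positive", "dysplasia"},
           {"hpv_positive", "dysplasia"},
           {"smoker"}]
print(confidence(records, {"hpv_positive"}, {"dysplasia"}))
```

Algorithms such as Apriori search for all rules whose support and confidence exceed user-chosen thresholds.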

1.4.3 Reinforcement learning
Reinforcement learning is a kind of machine learning algorithm in which an agent learns about its environment by trial and error, taking actions and drawing on its own experience. The goal of reinforcement learning is to discover the best policy of actions, the one that maximizes the total cumulative reward of the agent.
(i) Genetic algorithm
Genetic algorithms are random search algorithms that work on a population of solutions (pop size). Every solution is otherwise called an individual, and each individual has a chromosome. Basically, the chromosome is described as a set of attributes or genes that specifies an individual solution. Each chromosome has a group of genes, and every gene is described as a series of 0s and 1s, as in Figure 1. The process of genetic algorithms is somewhat slow until it reaches the best solution.
(ii) State–action–reward–state–action (SARSA)
The SARSA algorithm is a small variant of the popular Q-learning algorithm. The policies of a reinforcement learning agent are of two types:
1. On policy: "The learning agent learns the value function based on the current action from the currently used policy."
2. Off policy: "The learning agent learns the value function based on the action derived from another policy."


(iii) Q-learning
The term Q stands for quality. Q-learning can be thought of as evaluating the quality of taking a particular action in a given state "S," rather than evaluating the state alone. The Q-table is a simple look-up table that records, for each state and action, the maximum expected total future reward; the initial values are 0. This Q-table is very useful for finding the best action at each state; it acts as a reference table for selecting the best action.
(iv) DQN
DQN expands to deep Q-network, one of the first deep reinforcement learning methods, introduced by DeepMind. Many games have benefited from this technique, and DQN can play some of them more effectively than humans. For example, in a game, the duty of DQN is to learn, automatically from raw images, how to read scores, shoot the enemy, and rescue users.
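The Q-table update described above can be sketched on a toy problem — a 5-state corridor where only the rightmost state gives a reward. This is a minimal Python illustration of tabular Q-learning, not the chapter's experiment, and every detail of the environment is assumed.

```python
import random

def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a 5-state corridor: actions 0 (left) / 1 (right),
    reward 1 only for reaching the rightmost state."""
    rng = random.Random(seed)
    n_states, goal = 5, 4
    q = [[0.0, 0.0] for _ in range(n_states)]      # Q-table initialized to 0
    for _ in range(episodes):
        s = 0
        while s != goal:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.choice([0, 1])
            else:
                a = max((0, 1), key=lambda x: q[s][x])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == goal else 0.0
            # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
# After training, "right" should dominate in every non-goal state
print([max((0, 1), key=lambda a: q[s][a]) for s in range(4)])
```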

1.4.4 Ensemble learning
The purpose of ensemble learning is to improve machine learning results by combining several predictive models: to decrease variance (bagging), to decrease bias (boosting), or to improve predictions (stacking).
Bagging, boosting, and stacking
The term bagging stands for bootstrap aggregating. One of the best ways to reduce variance is to average the predictions: train different trees on different subsets of the data and combine them into an ensemble model.
– Bagging is used to control the variance.
– Boosting is used to reduce the bias.
– Stacking is used to improve the forecasting power of the classifier.
Most model errors arise from three factors – variance, noise, and bias – and ensemble methods address them to improve the stability of the final model.
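Bagging can be sketched as follows: fit a very simple base learner (here an assumed 1-D threshold "stump") on several bootstrap resamples and combine them by majority vote. The data and the stump rule are hypothetical, chosen only to illustrate the bootstrap-plus-vote pattern.

```python
import random

def fit_stump(sample):
    """Fit a 1-D threshold stump: threshold at the midpoint of the class means."""
    xs0 = [x for x, y in sample if y == 0]
    xs1 = [x for x, y in sample if y == 1]
    return (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2

def bagged_predict(data, x, n_models=25, seed=1):
    """Bagging: fit stumps on bootstrap resamples, then majority-vote."""
    rng = random.Random(seed)
    votes = 0
    for _ in range(n_models):
        boot = [rng.choice(data) for _ in data]   # bootstrap sample (with replacement)
        if not any(y == 0 for _, y in boot) or not any(y == 1 for _, y in boot):
            continue                              # skip degenerate resamples
        votes += 1 if x > fit_stump(boot) else 0
    return 1 if votes > n_models / 2 else 0

data = [(0.5, 0), (1.0, 0), (1.5, 0), (3.5, 1), (4.0, 1), (4.5, 1)]
print(bagged_predict(data, 1.2), bagged_predict(data, 3.8))
```

Because each stump sees a slightly different resample, their individual errors partly cancel in the vote, which is exactly the variance reduction bagging is used for.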

1.4.5 Neural networks and deep learning
ANN and CNN
Artificial neural networks are able to learn nonlinear functions; activation functions are the source of the nonlinear properties of an ANN. The activation function helps the network learn the complex relationship between input and output data and is, in effect, the backbone of an ANN. ANNs can be applied to various kinds of data, such as tabular, image, and text datasets. Convolutional


neural network (CNN) models are used in multiple applications, especially in image and video processing. The main advantage of a CNN is that it learns its filters automatically, without explicit instructions; the job of a filter is to extract the relevant features of the image. A CNN takes an image dataset as input and identifies objects accurately in the classification phase.
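The role of the activation function can be sketched with the smallest possible ANN — a single sigmoid neuron trained by gradient descent. This is a hedged toy illustration in Python (the chapter's networks are built in MATLAB): the task (logical AND), learning rate, and epoch count are all assumptions.

```python
from math import exp

def sigmoid(z):
    """The activation function: squashes any input into (0, 1)."""
    return 1 / (1 + exp(-z))

def train_neuron(samples, lr=0.5, epochs=2000):
    """Train one sigmoid neuron (two weights and a bias) by gradient descent."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = sigmoid(w1 * x1 + w2 * x2 + b)
            err = out - target          # gradient of the cross-entropy loss
            w1 -= lr * err * x1
            w2 -= lr * err * x2
            b -= lr * err
    return w1, w2, b

# Logical AND: linearly separable, so one neuron suffices
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_neuron(samples)
preds = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in samples]
print(preds)  # [0, 0, 0, 1]
```

A full ANN stacks many such neurons in layers; a CNN additionally ties the weights into sliding filters over the image.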

1.5 Overview of cervical cancer and its types
Cervical cancer is a very dangerous disease worldwide. Nearly 288,000 women between the ages of 21 and 65 are affected by cervical cancer; it rarely occurs under the age of 21. It starts in a woman's cervix, the lower part of the uterus, and mostly affects the surface area of the cervix. If any abnormal symptoms are seen in the cervix, the patient can consult a doctor about the causes, and every woman should undergo the screening procedure. The Pap smear test and the HPV (human papillomavirus) test are the recommended screening tests for identifying the state of the cervix. The initial test is the Pap test; if an abnormal cell is identified, the patient can proceed to the HPV test. The collected sample tissue is sent to the pathology laboratory, where the cells are examined under a microscope. After confirmation that the tissue is abnormal, it may be classified into six cell types. Every cell is a combination of a nucleus and cytoplasm, and based on the size, shape, and structure we can determine the type of cancer cell; various statistical features are evaluated to find the cancer stage. In this chapter, seventeen features were extracted from sample Pap smear images of normal and defective cells. Normal cells are classified into three types – superficial squamous, intermediate squamous, and columnar epithelial – and abnormal cells into mild dysplastic, severe dysplastic, and carcinoma in situ. The pathologist determines the cancer stage based on these types. The numerical features are extracted using statistical and geometric functions, and all features are fed to machine learning algorithms as continuous data series. Various machine learning algorithms are applied for detecting cervical cancer at an advanced stage by predictive analysis.

2 Literature review
The authors of [1] determined that supervised learning techniques give good performance and can be implemented in the stock market field for predicting business tasks; they compared and analyzed several machine learning algorithms. Another chapter focuses on big data analytics using machine learning algorithms [2]. In a further article, the authors established a suitable tree, based on variable linear regression, for the purpose of evaluating prediction algorithm performance; the method


and techniques work well for numerical and nominal data obtained from real-world environments [3]. New artificial intelligence systems are flexible for sensing and identifying various kinds of diseases; the executed plan of action – the related health-check representation process and connected component data – helps the doctor make the right decision while classifying the disease, and the AI methodology can improve decision-making and decrease fault prediction rates [4]. Guo-Zheng Li, a Chinese author, reviewed the state of affairs in data analytics research for TCM (traditional Chinese medicine), applying the latest algorithms for feature selection, multiclass, and multilabel techniques in the machine learning field; he concentrated on introducing discriminative symptom selection and multi-syndrome learning to increase the performance of the research works [5]. One article considers a current medical diagnosis system for swine flu using data mining and neural networks; the implementation took 12 features of swine flu and executed them in a neural network, as well as in SVM and naive Bayes classifiers [6]. Electronic health data may suffer from quality issues such as missing values, misclassification, and null values; machine learning may help answer research-based questions, and ensemble algorithms, which run a collection of algorithms and take the best one, can be used to predict disease and update delirium risk prediction [7]. Another chapter discusses improvements to machine learning, the performance of supervised and unsupervised linear methods, and work on Bayesian inference in the domain of biomedical diagnostics, implemented in applications for the perception and identification of disease in biomedicine [8]. The primary aim of a further chapter is predicting blood diseases from blood tests.
The authors explored several classifiers with sample blood test reports and obtained a classification accuracy of 98.16%; they also mention that future work is required on deep learning techniques with IoT methodology [9]. Other authors suggested using machine learning algorithms to increase classification percentages for neurodegenerative diseases; in their study, they worked with gait biomarkers from a public dataset and trained an ANN (artificial neural network) [10]. Another chapter concerns supervised and unsupervised medical data used by machine learning decision trees under the domain of big data; compared with many algorithms, the authors obtained an accuracy of 94.8% with CNN-based unimodal disease risk prediction [11]. Paul Sajda described biomedical image statistical datasets used with SVM, CNN, PNN, NN, ANN, etc. for cancer prediction, such as mammograms for breast cancer [12]. Another author developed an approach to predict cardiac diseases using machine learning concepts with logistic regression, naive Bayes, KNN, K-means clustering, and backpropagation, showing that backpropagation achieves the best result at 98.2% [13]. One article reviews machine learning algorithms in big data analytics [14]. Another author predicted different kinds of disease by applying various machine learning algorithms with performance measures [15]. A further article analyzed AI methodologies in the field of bio-medicine


applications, clearly declaring the efficiency of AI from different aspects [16]. Other chapters review machine learning algorithms in the healthcare sector and concentrate on the prediction of cervical cancer using ML, executed on a database of 145 patients, concluding that the decision tree algorithm gives better predictions [17–18]. One author used four kinds of stage-wise prediction on analog data from a patient repository and found that the decision tree performs best on F-measure, true-positive, and true-negative cases [19]. Data mining technology has been used to analyze the cervical cancer state by applying the Synthetic Minority Oversampling Technique (SMOTE) [20]. Another article explains a case study on determining cervical cancer, trained and tested for best accuracy with the Bayes Net algorithm [21]. A further author discussed the prediction of various cancer types implemented with deep ML and proposed a survey paper; CNN was used for the execution [22]. ML algorithms have been applied for predicting cervical cancer, and another author enabled automatic detection of cervical cancer by using deep CNN algorithms on a dataset of 3,000 patient records of pathology cell images [23–24]. The authors of [25] carried out a survey of Pap smear images with various image processing technologies and also collected ML algorithms. SVM, FNN, and KNN were implemented for predicting the cervical cancer state [26]. One article clearly showed that the Bayes Net algorithm gives the best accuracy for identifying cervical cancer [27]. Another chapter devised multiple measurements for detecting cervical cancer and compared them with other algorithms [28]. A final author concentrated fully on performance analysis using image processing technologies with precision, recall, and F1-score [29].

3 Proposed methodology
3.1 Data collection
The proposed methodology uses a dataset of 237 samples of cervical cancer with 6 stages and 17 feature attributes, to which statistical models and machine learning algorithms are applied. The dataset was collected from the MDE Laboratory (Management Development and Engineering) via a Google source link.

3.2 Feature selection
Feature selection is an efficient method for analyzing the raw, original data: the important features are picked out from the many attributes to improve performance in classification and regression analysis. In this proposed work, various features are taken from the cervical cancer Pap smear images. From these, the authors selected only the effective statistical and geometrical attributes


that provide the best accuracy. The essential features comprise geometrical features, statistical functions, and GLCM (gray level co-occurrence matrix) features. The authors eliminated some features, such as centroid, convex, convex hull, surface of the cell, compactness, and min–max value, from the real dataset.

Table 1: Nucleus features of the cervical cell.

Shape features:
1. Area: Number of pixels measuring the object area.
2. Eccentricity: Provides an extent of the closest path of the image pixel, circle, and sphere position.
3. Perimeter: Measures the micro-size of the cell's particle in the nucleus.
4. Diameter: The shortest length of a circle that circumscribes the whole nucleus.
5. Radius: The outward distance from the center of the nucleus.
6. Minor axis length: The length of the minimum axis; it varies based on the cell type.
7. Major axis length: The length of the maximum axis; it varies based on the cell type.
8. Orientation: The angle between the x-axis and the maximum extent of the object in a cell.

Texture features:
1. Solidity: Describes the relation between the area and the convex area of the nucleus.
2. Energy: Indicates the similarity of the image; it changes based on the type of the nucleus region.
3. Mean: Spatial filtering used for noise reduction in the cell.
4. Standard deviation: Measures the spread of data around the mean.
5. RMS: Root mean square values to obtain a visualization of structural imperfectness.
6. Contrast: Shows the brightness of the nucleus.
7. Correlation: How correlated a pixel is to its nearest neighbor over the entire image.
8. Homogeneity: Describes the similarity of the cell.

Table 1 above provides definitions of the attributes used for predicting cervical cancer.

3.3 Data preprocessing
Data preprocessing is a data mining technique that transforms real data into understandable information. A real-time dataset is frequently incomplete, inconsistent, and has missing or null data; data preprocessing solves such problems. Data cleaning is the first preprocessing step: filling null values, removing noisy data, and eliminating redundancy. The data can also be normalized and reduced in size during preprocessing. Data preprocessing is not applied to the cervical cancer dataset here because the


dataset has been taken from real Pap smear images and does not contain any null values. The information taken from an image consists of pixel and intensity values, and every pixel value is important for the regression analysis to be done later. Detection of cervical cancer is based only on the pixel and intensity values of the cell. Each cell contains a nucleus and a cytoplasm, and the cancer stage is determined according to the form, properties, and anatomical shape of the nucleus.

3.4 Prediction of multiclass-label cervical cancer using classifiers
Classifiers analyze the training and testing data so that the computer can learn automatically from the classes and labels. The classifiers may follow supervised or unsupervised learning models, with or without labels. The dataset has 237 records of women with cervical cancer in six stages; various classifiers are used to detect cervical cancer and to analyze the performance measures.

4 Results and discussion
4.1 Data collection
The cervical cancer dataset considers six stages of cancerous images. Table 2 lists the two types of cells, normal and abnormal, covering the six types of cervical cancer. This is the initial step of data collection from the database.

Table 2: Data extracted from the Pap smear images.

Normal cell: superficial squamous, intermediate squamous, columnar epithelial
Abnormal cell: mild dysplastic, severe dysplastic, carcinoma in situ


4.2 Result of predicting multiclass cervical cancer using classifiers
4.2.1 Supervised learning – SVM
The result of supervised learning follows from fixing the class labels and statistical attributes.

Figure 4: SVM – result of detecting cervical cancer using six class.

Preset: linear SVM; kernel function: linear; accuracy: 87.3%; prediction speed: ~1,300 obs/s; training time: 4.9253 s. For Figure 4, the cervical cancer dataset was separated into training and testing phases for classification; a maximum accuracy of 87.3% was reached.

4.2.2 Supervised learning – KNN
The K value (number of neighbors) is 10; based on this K value, the neighboring points are represented.


Figure 5: KNN – result of detecting cervical cancer using six class.

Preset: weighted KNN; number of neighbors: 10; distance metric: Euclidean; accuracy: 81.4%; prediction speed: ~6,000 obs/s; training time: 0.4779 s. Figure 5 shows the classification results for the six types of cervical cancer, obtaining an accuracy of 81.4%.

4.2.3 Supervised learning – naive Bayes
Naive Bayes algorithm implementation steps:
Step 1: Read the dataset and load the data file.
Step 2: Make a cross-validation structure object that defines the folds.
Step 3: Load a training set to train the model.
Step 4: Create a test set to test the data.
Step 5: Declare the number of class-independent variables.
Step 6: Get the predicted output for the test set.
Step 7: Compare the predicted result with the actual result from the test data.
Step 8: Build the confusion matrix.
Step 9: Find the accuracy.
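The training/testing/accuracy cycle in these steps can be sketched in Python (the chapter's execution is in MATLAB) with a minimal Gaussian naive Bayes classifier; the single-feature toy data below are assumptions, not the chapter's dataset, and the cross-validation folds are omitted for brevity.

```python
from statistics import mean, stdev, NormalDist

def fit_gnb(rows):
    """Gaussian naive Bayes: a per-class prior plus a normal density per feature."""
    model = {}
    for label in {y for _, y in rows}:
        feats = [x for x, y in rows if y == label]
        model[label] = (len(feats) / len(rows),                 # prior
                        NormalDist(mean(feats), stdev(feats)))  # likelihood
    return model

def predict_gnb(model, x):
    # Posterior is proportional to prior * likelihood; pick the largest (Bayes theorem)
    return max(model, key=lambda c: model[c][0] * model[c][1].pdf(x))

# Steps 3-4: a hypothetical training set and test set (one feature per sample)
train = [(10, "normal"), (12, "normal"), (11, "normal"),
         (30, "abnormal"), (32, "abnormal"), (31, "abnormal")]
model = fit_gnb(train)
test = [(11.5, "normal"), (29, "abnormal")]
# Steps 6-9: predict, compare with the actual labels, compute accuracy
accuracy = sum(predict_gnb(model, x) == y for x, y in test) / len(test)
print(accuracy)  # 1.0
```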


Figure 6: Naive Bayes classifier using cervical cancer.

Figure 6 above shows the result of naive Bayes on the cervical cancer dataset, as a confusion matrix produced in the MATLAB tool, with lower accuracy.

4.2.4 Supervised learning – decision tree

Figure 7: Decision tree – detection of six class cervical cancer.

Preset: complex tree; minimum number of splits: 100; accuracy: 85.2%; prediction speed: ~8,300 obs/s; training time: 0.5513 s. Figure 7 displays the execution result of the decision tree when split by 100, with 85.2% accuracy.


4.2.5 Dimensionality reduction
Null values, missing values, and repeated or duplicated data would be eliminated during dimensionality reduction, and the size of the entire dataset would also be compressed. In this proposed work, the authors did not use these concepts, to avoid loss of information: the dataset has been taken from the Pap smear microscopic images and is the original, effective information. With dimensionality reduction, the quality of the real information might be lost, and we could not predict the exact result (prediction of cervical cancer) expected from the dataset.

Figure 8: Result of dimensionality reduction.

Figure 8 shows the result of data compression techniques in dimensionality reduction in the case of cervical cancer.

4.2.6 Neural network – ANN (artificial neural network)
Figure 9 shows the ANN workflow, which follows training, testing, and validation of the six classes (left side). During training, the epoch and gradient values (right side) were recorded. Ten hidden layers worked behind the network during the training and testing phases.



Figure 9: Output for the multiclass cervical cancer ROC curve and epoch state ANN – accuracy of 78%.


5 Conclusion
Healthcare analytics has the power to bring down the cost of treatment, predict disease stages, help prevent disease, and save lives by learning from patients' historical data. Machine learning and artificial intelligence algorithms are frequently used in various applications, especially in healthcare, to predict the disease state. In this chapter, we applied a few supervised and unsupervised machine learning techniques for the prediction of multiclass cervical cancer from pathology images. The main purpose is to analyze prediction models and explore the accuracy levels of the classifiers using machine learning algorithms. The cervical cancer dataset has been compiled from the Pap smear images generally used by a pathologist.

References
[1] Sakhare, N., Vishwakarma, S. S. Performance analysis of regression based machine learning techniques for prediction of stock market movement. International Journal of Recent Technology and Engineering, March 2019, 7(6). ISSN: 2277-3878.
[2] Chang, K. Y. Machine learning algorithms for predictions. International Journal of Advances in Science Engineering and Technology, Feb 2017, 5(1), Spl. Issue-2, http://iraj.in. ISSN: 2321-9009.
[3] Doan, T., Kalita, J. Selecting machine learning algorithms using regression models. IEEE 15th International Conference on Data Mining Workshops, 978-1-4673-8493-3/15. doi: 10.1109/ICDMW.2015.43.
[4] Dinu, A. J., Ganesan, R., Joseph, F., Balaji, V. A study on deep machine learning algorithms for diagnosis of diseases. International Journal of Applied Engineering Research, 2017, 12(17), 6338–6346. ISSN 0973-4562.
[5] Guo-Zheng, L. Medical diagnosis by using machine learning techniques. China Academy of Chinese Medical Sciences, Dec 2013. doi: 10.1007/978-3-319-03801-8-3.
[6] Raval, D., Bhatt, D., Kumhar, M. K., Parikh, V., Vyas, D. Medical diagnosis system using machine learning. International Journal of Computer Science, Sept 2015–March 2016, 7(1), 177–182. www.csjournals.com.
[7] Rose, S. Machine learning for prediction in electronic health data. August 3, 2018, 1(4), e181404. doi: 10.1001/jamanetworkopen.2018.1404.
[8] Sajda, P. Machine learning for detection and diagnosis of disease. Department of Biomedical Engineering, Columbia University, New York, NY, 10027. doi: 10.1146/8.061505.095802.
[9] Alsheref, F. K., Gomaa, W. H. Blood diseases detection using classical machine learning algorithms. 2019, 10(7).
[10] Sánchez-Dela, E., Pozos-Parra, P. Machine learning-based classification for diagnosis of neurodegenerative diseases. https://www.tensorflow.org.
[11] Vinusha, H., Sajini, S., Vinitha, S., Sweetlin, S. Disease prediction using machine learning over big data. Computer Science & Engineering: (CSEIJ), February 2018, 8(1). doi: 10.5121/cseij.2018.8101.


S. Jaya, M. Latha

[12] Sajda, P. Machine learning for detection and diagnosis of disease. Columbia University, bioeng.annualreviews.org. doi: 10.1146/annurev.bioeng.8.061505.095802.
[13] Ravindhar, N. V., Anand, H., Shanmugasundaram, R., Winster, G. Intelligent diagnosis of cardiac disease prediction using machine learning. The International Journal of Innovative Technology and Exploring Engineering, September 2019, 8(11). ISSN: 2278-3075.
[14] Bhargavi, P., Singaraju, J. Machine learning algorithms in big data analytics. January 2018, 6(1). doi: 10.26438/IJCSE/v6i1.6370.
[15] Ferdous, M., Debnath, J., Chakraborty, N. R. Machine learning algorithms in healthcare: a literature survey. IEEE Xplore, July 1–3, 2020, IIT Kharagpur.
[16] Rong, G., Mendez, A., Assi, E. B., Zhao, B., Sawan, M. Artificial intelligence in healthcare: review and prediction case studies. Engineering, 2020, 6, 291–301. Elsevier Ltd on behalf of the Chinese Academy of Engineering and Higher Education Press Limited Company. doi: 10.1016/j.eng.2019.08.015.
[17] Shailaja, K., Seetharamulu, B., Jabbar, M. A. Machine learning in healthcare: a review. Proceedings of the 2nd International Conference on Electronics, Communication and Aerospace Technology (ICECA 2018), IEEE Xplore. ISBN: 978-1-5386-0965-1.
[18] Asadi, F., Salehnasab, C., Ajori, L. Supervised algorithms of machine learning for the prediction of cervical cancer. Journal of Biomedical Physics and Engineering, Aug 2020, 10(4), 513–522. doi: 10.31661/jbpe.v0i0.1912-1027.
[19] Singh, J., Sharma, S. Prediction of cervical cancer using machine learning techniques. International Journal of Applied Engineering Research, 2019, 14(11), 2570–2577. ISSN 0973-4562.
[20] Alam, T. M., Afzal Khan, M. M., Iqbal, M. A., Wahab, A., Mushtaq, M. Cervical cancer prediction through different screening methods using data mining. International Journal of Advanced Computer Science and Applications (IJACSA), 2019, 10(2).
[21] Suman, S. K., Hooda, N. Predicting risk of cervical cancer: a case study of machine learning. Journal of Statistics and Management Systems. ISSN: 0972-0510 (Print) 2169-001.4.
[22] Debelee, T. G., Kebede, S. R., Schwenker, F., Shewarega, Z. M. Deep learning in selected cancers' image analysis – a survey. Journal of Imaging, 2020, 6, 121. doi: 10.3390/jimaging6110121.
[23] Parikh, D., Menon, V. Machine learning applied to cervical data. International Journal of Mathematical Sciences and Computing (IJMSC), Jan 2019, 5(1). ISSN: 2310-9025.
[24] Miao, W., Yan, C., Liu, H., Liu, Q., Yin, Y. Automatic classification of cervical cancer from cytological images by using convolutional neural network. Bioscience Reports, 2018. doi: 10.1042/BSR20181769.
[25] William, W., Ware, A., Basaz, A. H. Machine learning techniques for automated cervical cancer screening from pap-smear images. Computer Methods and Programs in Biomedicine. doi: 10.1016/j.cmpb.2018.05.034.
[26] Yong, Q., Zhao, Z., Zhang, L., Liu, H., Lei, K. A classification diagnosis of cervical cancer medical data based on various artificial neural networks. Advances in Intelligent Systems Research, 147.
[27] Unlersen, M. F., Sabanci, K., Özcan, M. Determining cervical cancer possibility by using machine learning methods. International Journal of Latest Research in Engineering and Technology (IJLRET). ISSN: 2454-5031.
[28] Lu, J., Song, E., Ghoneim, A., Alrashoud, M. Machine learning for assisting cervical cancer diagnosis: an ensemble approach. Future Generation Computer Systems, 2020, 106, 199–205. doi: 10.1016/j.future.2019.12.033.
[29] Singh, S. K., Goyal, A. Performance analysis of machine learning algorithms for cervical cancer detection. International Journal of Healthcare Information Systems and Informatics, April–June 2020, 15(2). doi: 10.4018/IJHISI.2020040101.

Sidharth Purohit, Shubhra Suman, Avinash Kumar, Sobhangi Sarkar, Chittaranjan Pradhan, Jyotir Moy Chatterjee

Comparative analysis for detecting skin cancer using SGD-based optimizer on a CNN versus DCNN architecture and ResNet-50 versus AlexNet on Adam optimizer Abstract: Skin cancer is regarded as a cardinal cause of morbidity and mortality globally, with the death count increasing at an alarming rate. It has a higher chance of being cured if diagnosed in its initial stages, so proper diagnosis of skin cancer is crucial to enable proper treatment. Highly skilled dermatologists and skin specialists are capable of accurately detecting skin cancer at an early stage, but expert dermatologists are limited in number, so systems that automatically detect cancerous growth at an early stage with high performance are a useful tool. This study therefore presents a deep learning (DL) technique to classify images and detect skin cancer at an early stage. We trained our model using images of harmless, that is, benign lesions and of tumor-based lesions; we used a convolutional neural network (CNN) on those images to classify whether an image is a suspect of skin cancer or not. The proposed approach achieves an accuracy of 86% and is compared to the DCNN model introduced prior to our work. In addition, an approach using ResNet-50, a 50-layer deep CNN, has been implemented, which further improves the accuracy to over 90%. Keywords: Benign, CNN, malignancy, SGD classification model

Sidharth Purohit, School of Computer Engineering, Kalinga Institute of Industrial Technology, Bhubaneshwar, Odisha, e-mail: [email protected] Shubhra Suman, School of Computer Engineering, Kalinga Institute of Industrial Technology, Bhubaneshwar, Odisha, e-mail: [email protected] Avinash Kumar, School of Computer Engineering, Kalinga Institute of Industrial Technology, Bhubaneshwar, Odisha, e-mail: [email protected] Sobhangi Sarkar, School of Computer Engineering, Kalinga Institute of Industrial Technology, Bhubaneshwar, Odisha, e-mail: [email protected] Chittaranjan Pradhan, School of Computer Engineering, Kalinga Institute of Industrial Technology, Bhubaneshwar, Odisha, e-mail: [email protected] Jyotir Moy Chatterjee, Department of IT, LBEF, Kathmandu, Nepal, e-mail: [email protected] https://doi.org/10.1515/9783110708127-009


1 Introduction The uncontrolled growth of abnormal skin cells is known as skin cancer. It occurs when nonrepaired DNA damage to skin cells (most often caused by ultraviolet radiation from sunshine) triggers mutations or defects in genes that make the skin cells proliferate rapidly and form malignant tumors. The important types of skin cancer are melanoma, squamous cell carcinoma, and basal cell carcinoma. In the past 15 years, the numbers of malignant melanomas and non-melanoma carcinomas have increased dramatically throughout the world, above all among people with fair complexions. Melanoma has long been among the most commonly occurring cancers in men and women [1], with nearly 300,000 new cases in 2020 alone. Non-melanoma carcinoma is the third most commonly occurring cancer in men and women, with over one million cases diagnosed worldwide [1]. The diagnosis of melanoma depends upon subjective tests based on symmetry, diameter, color, and evolution, performed by a skin cancer oncologist [2]. A definitive diagnosis of malignant melanoma is made only when the infected portion is removed and examined with a histopathology-based procedure. Cure rates are high when treatment of a skin tumor begins as soon as it is detected. A critical constraint in skin cancer detection is the absence of reliable broad-spectrum techniques that can spot the cancer quickly and accurately, with high detection rates and low miss rates [3, 4]. People residing in regions with excessive light exposure, and people whose jobs demand spending time under scorching heat without preventive measures such as sun lotions and creams, are at greater risk for this disease. Vulnerability increases further in people who had incessant sunburn as children. DL is a type of machine learning (ML) that solves problems that were unsolvable by ML.
Although ML techniques are also applied to image data problems, we are required to label the images with binary values, that is, 1 and 0, which might lead to inconsistency in the results [5–7]. Thus, DL techniques are used, as they offer scope for image augmentation and for clustering similar-looking graphics and pixels. State-of-the-art DL is helping medical professionals and research enthusiasts find new opportunities in medical data analysis and serve the community with solutions. DL in healthcare provides doctors with accurate analyses of diseases. Hence, using the technology of classification systems and neural networks, we can predict whether a particular image is a suspect of skin cancer or not. ML and DL differ in the way they process data. In ML, data has to be labeled or structured, but DL can classify images using the layers of an artificial neural network (ANN). DL classifies an image by learning its features, whereas in ML it is only the labeled dataset that does the job. Even with a large amount of data, DL performs better and gives an accurate


result. In this research work, we try to help oncologists and radiologists get a good overview of cancerous and noncancerous images, and of the medical practices suited to a particular skin cancer malignancy type, so that they can spread awareness. This work is organized as follows. Section 2 surveys the literature on previous state-of-the-art methods applied to similar data. Section 3 describes the background study, and Section 4 explains the methodology applied in this manuscript. Section 5 covers the result analysis with the comparisons [8–10] involved here, and Section 6 provides information on the test bench used. Section 7 covers the data leak issues that create barriers in this study, and Section 8 presents the conclusions. Finally, Section 9 gives the algorithms for the whole process.

2 Literature survey The process used in this chapter hovers around the idea of finding patterns within an image that help in identifying its class. Our processes and techniques can be employed not only for skin cancer but for any multiple-class image classification. When looking at the medical report of an infected patient, doctors tend to physically look at the lobes, unusual cell growth, and curves that may or may not cause issues in an organism; similarly, if we aspire to find the results computationally, we need to segment an image in accordance with the various label patterns occurring in it – also called image segmentation. The purpose of image segmentation is to create a set of segments that collectively cover the entire image, or a set of contours extracted from the image (as also performed in edge detection in computer graphics). A similar analysis is performed by Antony et al. and Jaleel et al. [1, 2] using a simple ANN architecture, where they feed the neural network with malignant and benign skin cancer images. Here, we have applied both existing ML-based hyperparameter tuning techniques and state-of-the-art transfer learning techniques to solve this problem. Choudhari et al. [3] also employed an ANN-based model, which used the Adam optimizer and unlabeled images; automatic imaging followed by lesion detection is the key to the research presented by Alcón et al. [4]. A wholesome guideline that helps in the prognosis of skin cancer, with early detection and discovery measures for the contemporary world, was worked out by Rigel et al. [5]; we found this a comprehensive text for understanding the basics of disease detection and its mathematical modelling. That research discusses the formation, cause, and prevention of malignant melanoma. An aided device that helps in the detection and diagnosis of skin cancer nodules was studied by Giotis et al. [6]; they discussed


this methodology using a non-dermoscopic image cluster. Kawahara et al. [7] explained the deep features used in exploring skin lesions; a profound study using the CNN architecture and the transfer learning technique was consulted for this manuscript. Brinker et al. [11] used the Adam-based optimizer to decrease the loss and improve the accuracy of a deep CNN-based model; this study is widely effective but limited to a narrow use case. A transfer learning approach using AlexNet was performed by Hosny et al. [12] to study augmentation in detail, using the then state-of-the-art model [12]. Here, we use the recent state-of-the-art model ResNet-50, a 50-layer deep CNN, to increase the accuracy compared to the AlexNet [12] architecture, and existing machine learning methodologies are hyperparameterized using the stochastic gradient descent (SGD)-based optimizer to further improve the accuracy manyfold. The following sub-sections explain the applications of image pre-processing and image segmentation, followed by classification in image processing tasks and SGD-based hyperparameter tuning, and how they were performed earlier.

2.1 Image pre-processing Image pre-processing followed by image segmentation are essential and crucial steps towards better results from an otherwise distorted image. Each of the pixels in a region is similar with respect to some characteristic or computed property, such as color, intensity, or texture, while adjacent regions differ significantly with respect to the same characteristic(s). When applied to a stack of typical medical images, the contour lines obtained after image segmentation delineate regions within an image, such that pixels with the same label share certain characteristics and are easy to frame as required. Image data are usually classified on the basis of commonly repeated features, patterns, lobes, curves, patches, and so on [1, 2]. Before processing any image, we remove any unwanted artifacts it may have. The aim of the image pre-processing task is to improve the image data or enhance image features for further processing. Images captured by a camera or satellite have errors in geometry (such as shape and size) and in the brightness values of pixels. These errors are corrected using statistical or mathematical models such as the scale-invariant feature transform (SIFT, which acts on local region gradients; a feature detection algorithm in computer vision used to detect and describe local features in images) and the speeded-up robust features technique, a patented local feature detector and descriptor that can be used in tasks such as object recognition, image registration, classification, or 3D reconstruction. For cancer detection, pre-processing acts as a fundamental step in improving the precision of the segmentation step.
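As a toy illustration of this kind of brightness correction, here is a pure-Python sketch of contrast stretching, one of the simplest pre-processing operations; the function name, the flat pixel list, and the 0–255 grayscale range are our own simplifying assumptions, not part of the original pipeline:

```python
def contrast_stretch(pixels, new_min=0, new_max=255):
    """Linearly rescale grayscale pixel values to [new_min, new_max]."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                      # flat image: nothing to stretch
        return [new_min] * len(pixels)
    scale = (new_max - new_min) / (hi - lo)
    return [round(new_min + (p - lo) * scale) for p in pixels]

# A dim, low-contrast strip of pixels is stretched to the full 0-255 range.
stretched = contrast_stretch([50, 60, 70, 80])
print(stretched)  # [0, 85, 170, 255]
```

A stretched image gives the subsequent segmentation step a wider dynamic range to work with, which is exactly why the text calls pre-processing fundamental to segmentation precision.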


2.2 Image segmentation In image segmentation, we partition the image into various parts called segments. An image is a collection of different pixels, and we group together those pixels that have similar attributes. The techniques performed are inspired by Rehman et al. [11]. Here, the generalized Gaussian distribution is used to classify cancer. In this technique, all training images were divided into R, G, and B color channels to individually determine the extent of malignancy [11], and the intensities of the malignant area were obtained. In this chapter, training images are used. The prime objective of this model is to compute the threshold value at the position where the sum of the foreground and background spreads is minimal. Our observations found that the method displayed in [12] can be deployed to perform histogram-based image thresholding, that is, to transform a gray-level image into a binary image. If the histogram is a bimodal distribution, then the method in [12] exhibits relatively good performance.
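Picking the threshold where the combined spread of foreground and background is minimal is, in essence, Otsu's method (equivalently, maximizing between-class variance). A pure-Python sketch, assuming an 8-bit grayscale image given as a flat list of pixel values; the bimodal toy image below is our own:

```python
def otsu_threshold(pixels):
    """Return the threshold t that minimizes within-class variance
    (equivalently maximizes between-class variance) over a histogram."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg = 0.0
    w_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]                 # background weight (pixels <= t)
        if w_bg == 0:
            continue
        w_fg = total - w_bg             # foreground weight
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# A clearly bimodal image: dark pixels around 30, bright pixels around 200.
img = [30, 32, 28, 31] * 10 + [200, 198, 202, 201] * 10
t = otsu_threshold(img)
binary = [1 if p > t else 0 for p in img]  # the binarized image
```

On a bimodal histogram like this one, the threshold lands between the two modes, which matches the "relatively good performance" claim above.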

2.3 Image classification The methods of Brinker et al. inspired us to do systematic research on classifying skin lesions using a CNN, as proposed in [13, 14]; we also used 399 images from a standard camera for classification. The classification was carried out using a k-nearest-neighbour classifier [15] with cosine distance metrics, but the task was time consuming, and a broader perspective can be drawn provided there is an enhancement in the technology. Bhavya Sai V et al. [16] designed a deep convolutional neural network (DCNN) system for skin classification [17, 18]. Our approach, with its variations, has proved more suitable. Classification problems are very subjective in ML-based optimization tasks [19, 20]. The process involves splitting the data into train and test sets; prior to that, the data should be easily distinguishable into two (binary classification) or more than two (multiple-classification-based) classes [21, 22].
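The k-nearest-neighbour step with cosine distance can be sketched in plain Python; the toy two-dimensional "feature vectors" and labels below are invented for illustration and stand in for real image features:

```python
from collections import Counter
from math import sqrt

def cosine_distance(a, b):
    """1 - cosine similarity: 0 for parallel vectors, up to 2 for opposite."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def knn_predict(train, labels, query, k=3):
    """Label a query vector by majority vote of its k cosine-nearest
    training vectors."""
    ranked = sorted(range(len(train)),
                    key=lambda i: cosine_distance(train[i], query))
    votes = Counter(labels[i] for i in ranked[:k])
    return votes.most_common(1)[0][0]

# Toy features: with cosine distance, direction (not magnitude) decides.
train = [[1.0, 0.1], [2.0, 0.3], [0.1, 1.0], [0.2, 2.1]]
labels = ["benign", "benign", "malignant", "malignant"]
print(knn_predict(train, labels, [3.0, 0.2], k=3))  # benign
```

Note the design choice cosine distance implies: two lesions with the same feature profile but different overall intensity are treated as neighbours, which is often what is wanted for image descriptors.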

2.4 Stochastic gradient descent-based optimizer SGD stands for stochastic gradient descent, where stochastic means random and gradient basically means the slope of a surface. SGD is an approach that works in an iterative manner to minimize a target function with certain smoothness properties. The loss is calculated along the gradient one sample at a time, and the model is updated with some learning rate. When the data has zero mean and unit variance, we prefer to use the default learning rate. Data generally represented as sparse arrays of floating-point values are the ideal datasets for SGD. We can control the model with parameters such as the loss parameter.


Generally, SGD is used to fit an SVM (support vector machine) [23, 24]. We can shrink the model parameters towards the zero vector by adding a penalty, called a regularizer, to the loss function [23]. The main advantages of SGD are its efficiency and ease of implementation. There are a few drawbacks: it requires a number of hyperparameters, such as the regularization parameter and the number of iterations, and it is also sensitive to the scaling of features [25, 26].
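A minimal pure-Python sketch of what this section describes: SGD fitting a linear SVM-style classifier (hinge loss plus an L2 regularizer that shrinks the weights toward zero). The learning rate, regularization strength, epoch count, and toy data are arbitrary illustrative choices, not values from the chapter:

```python
import random

def sgd_hinge(X, y, lr=0.1, reg=0.01, epochs=50, seed=0):
    """Train a linear classifier w.x + b with hinge loss via SGD.
    `reg` is the L2 regularizer that shrinks w toward zero each step."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    b = 0.0
    idx = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(idx)                      # "stochastic": random sample order
        for i in idx:
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            for j in range(len(w)):
                grad = reg * w[j]             # regularization gradient
                if margin < 1:                # hinge loss is active
                    grad -= y[i] * X[i][j]
                w[j] -= lr * grad
            if margin < 1:
                b += lr * y[i]
    return w, b

# Linearly separable toy data with labels in {-1, +1}.
X = [[2.0, 2.0], [3.0, 1.5], [-2.0, -1.0], [-1.5, -2.5]]
y = [1, 1, -1, -1]
w, b = sgd_hinge(X, y)
preds = [1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else -1 for x in X]
```

The inner loop shows both drawbacks named above in miniature: `lr`, `reg`, and `epochs` all have to be tuned, and because the update mixes raw feature values into the gradient, unscaled features would dominate the step sizes.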

3 Background techniques The processes and code examples below describe our research approach in Python; however, a generalized image classification task can be performed using these steps, represented in the flow chart in the Methodologies section.

3.1 Dataset collection Figure 1 shows images derived from the Kaggle data source; the algorithms in Section 9 show how the image dataset used here is read, separated into malignant and benign train and test data. Figure 2 represents the distribution of the dataset. train_benign contains 1,800 images, test_malignant contains 1,500 images, and test_benign contains 1,800 images. The largest parts of the pie chart are train_benign and test_benign. The pie chart shows that the dataset is well-formed and can be used directly. Overall, the pie chart displays the distribution of images, in proportion to the pie, of patients diagnosed with melanoma or squamous cell carcinoma. The dataset file contains a balanced set of images of benign skin moles and malignant skin moles. All rights to the data are bound to the ISIC archive rights. The dataset file has a set of 6,600 images split into two folders, a train dataset and a test dataset, each of which comprises Malignant and Benign subfolders, with 1,800 benign and 1,500 malignant images each [27].
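A sketch of how such a train/test × benign/malignant folder layout can be tallied before training; the directory names mirror the layout described above, but the miniature structure and file counts built here are a synthetic stand-in, not the actual ISIC data:

```python
import tempfile
from pathlib import Path

def count_images(root):
    """Count .jpg files per <split>/<class> folder under `root`."""
    counts = {}
    for split_dir in sorted(Path(root).iterdir()):
        for class_dir in sorted(split_dir.iterdir()):
            key = f"{split_dir.name}_{class_dir.name}"
            counts[key] = sum(1 for f in class_dir.iterdir()
                              if f.suffix == ".jpg")
    return counts

# Build a tiny stand-in for the train/test x benign/malignant layout.
with tempfile.TemporaryDirectory() as root:
    for split, cls, n in [("train", "benign", 3), ("train", "malignant", 2),
                          ("test", "benign", 2), ("test", "malignant", 1)]:
        d = Path(root) / split / cls
        d.mkdir(parents=True)
        for i in range(n):
            (d / f"img_{i}.jpg").touch()
    counts = count_images(root)

print(counts)
# {'test_benign': 2, 'test_malignant': 1, 'train_benign': 3, 'train_malignant': 2}
```

Such a tally is what feeds a distribution plot like Figure 2 and is a cheap check that no class folder is empty or wildly imbalanced before training starts.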

4 Methodologies 4.1 Cancer classification using SGD classifier The methodology in this study involves the use of convolutional neural network (CNN) analysis. In this analysis, the Keras module with the TensorFlow back end is used, as shown in Figure 3.


Figure 1: Benign and malignancy images.

The steps to redefine the contrast and RGB standards of the image, as part of pre-processing and image segmentation, follow the algorithm displayed in Section 9, applicable to image data generation for any standard image classification case. TensorFlow, a symbolic data library with provisions for performing data analysis, image data generation, fixing deep convolution layers, etc., is used here. Cross-entropy loss, or log loss, measures the performance of a classification model whose output is a probability value between 0 and 1 [28]. Here, binary cross-entropy is used, along with the binary confusion metrics. The optimizer used is SGD; it optimizes an objective function with suitable smoothness properties [29] (e.g., differentiable or subdifferentiable), and it has support for momentum, learning rate decay, and Nesterov momentum. It accepts arguments in the form of a learning rate, momentum, and decay:
lr: float ≥ 0. Learning rate (the alpha in the gradient descent algorithm; here 0.02).
momentum: float ≥ 0. Parameter that accelerates SGD in the relevant direction and dampens oscillations (here 0.9).
decay: float ≥ 0. Learning rate decay over each update.


Figure 2: Image dataset distribution [27] (pie chart: train_benign: 1,800; test_benign: 1,800; test_malignant: 1,500).

Figure 3: Flow chart of the SGD-based proposed approach (input image → redefining the contrast and RGB standards of the image → CNN model with convolutional layers and a dense layer → batch normalization and ReLU on the output layer → model checkpoint for best accuracy → model compilation with SGD classification → result).

nesterov: boolean. Whether to apply Nesterov momentum [17] for the given binary classification problem.
Also, the keras.callbacks ModelCheckpoint API is used here to specify which measure to monitor (out of loss and accuracy on the training data) and whether the score improves by increasing or decreasing. The ModelCheckpoint functionality saves the network weights, avoiding a fresh training run every time the model is executed; it is set up to save only improving scores while performing the analysis on the given dataset:
from keras.callbacks import ModelCheckpoint
Dropout, in simple terms, is a process of removing random neural nodes from training. Their role in deciding the activation is discarded upon performing the forward pass, so no weight updates occur on them in the backward pass. In the given model, dropout = 0.5 is used, which means that 50% of the neurons are removed from the network layer in the forward pass. The summary of the CNN layers and activation functions is shown in Table 1. Figure 4 shows a training screenshot of the model after a total of 100 epochs, with the accuracy, the valid/test/train scores, and the loss. The images in the model follow the 112 × 112 × 3 format (each image being a square, with 3 representing the RGB channels). The model has six convolution layers and four max pool layers, along with a dense layer at the end that acts as a binary classifier. The model has a total of 1,839,969 parameters, out of which 1,837,953 are trainable and the remaining are non-trainable.

Table 1: Summary table – the CNN layers and the activation functions. The layers appear in the following order: input layer; six Conv2D → BatchNormalization → ReLU blocks, with each of the first four blocks followed by a MaxPooling2D layer; then Flatten → Dense → ReLU → Dropout → Dense.

Figure 5 shows the train loss and test loss that occurred while doing the experiment; the best accuracy found here is 86.82%.
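The dropout mechanism used in this model can be sketched in plain Python as "inverted" dropout, the variant commonly implemented in DL frameworks; the scaling of survivors by 1/(1 − rate) so that inference needs no rescaling is a standard detail assumed here, not spelled out in the text:

```python
import random

def dropout_forward(activations, rate=0.5, seed=42, training=True):
    """Inverted dropout: zero each activation with probability `rate`
    during training and scale survivors by 1/(1-rate), so the expected
    output matches inference, where dropout is a no-op."""
    if not training:
        return list(activations)
    rng = random.Random(seed)
    keep = 1.0 - rate
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

acts = [0.5, 1.2, 0.8, 2.0, 1.1, 0.3]
dropped = dropout_forward(acts, rate=0.5)
# Roughly half the activations are zeroed; survivors are scaled up by 2x.
print(dropped)
```

With rate = 0.5, as in the model above, each forward pass trains a different random half-network, which is what gives dropout its regularizing effect.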

4.2 Cancer classification using ResNet-50-based CNN on Adam optimizer In this research, we also propose a method generalized for cancer-based analysis of images, using the ResNet-50 architecture. A residual neural network (ResNet) [17] is an artificial neural network loosely analogous to the memory structures of the cerebral cortex: it takes advantage of skip connections, pre-set jumps that let the signal bypass intermediate layers. ResNet models are generally deployed with the ReLU activation function in an attempt to deal with the vanishing gradient problem with ease. Networks with frequent skips in the weights coined the term highway nets. In our problem statement, ResNet is employed to compete with popular transfer learning techniques such as the AlexNet architecture, which had earlier provided 88% accuracy on the skin cancer classification task. Our objective in this research is to deploy a more robust transfer learning model, ResNet-50, to counter the vanishing gradient problem in the previous approaches and thus improve the accuracy to over 90%. The architecture of a typical ResNet with 50 dense layers is shown in Figure 6. Clearly, for the given image classification task, the approach involving a residual neural network with a dense 50-layer architecture works better than the SGD-based CNN model with limited layers, as shown in the algorithm section at the end. The overall accuracy with this process came out to be 0.905252525252, and the confusion matrix thus obtained had a greater number of true-positive predictions. This implementation can be found in Section 9.
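The skip connection at the heart of ResNet reduces to a one-line idea: the block outputs F(x) + x, so if the learned transform F collapses to zero, the block is simply the identity and gradients still flow. A toy numeric sketch (the "transforms" below are stand-ins for real convolutional sub-blocks):

```python
def residual_block(x, transform):
    """The core ResNet idea: the block learns a residual F(x) and the
    skip connection adds the input back, giving F(x) + x."""
    return [f + xi for f, xi in zip(transform(x), x)]

# A transform that happens to output all zeros: the block is the identity.
zero_transform = lambda x: [0.0] * len(x)
print(residual_block([1.0, 2.0, 3.0], zero_transform))  # [1.0, 2.0, 3.0]

# A transform that learns a small correction on top of the identity.
small_correction = lambda x: [0.1 * xi for xi in x]
print([round(v, 2) for v in residual_block([1.0, 2.0, 3.0], small_correction)])
# [1.1, 2.2, 3.3]
```

This identity fallback is why stacking 50 such blocks does not degrade as plain deep stacks do: each block only has to learn a correction, not the whole mapping.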


Figure 4: The epoch table.

Figure 5: Confusion matrix of the SGD-based approach (rows = actual, columns = predicted): benign predicted as benign: 1,002; benign predicted as malignant: 70; malignant predicted as benign: 61; malignant predicted as malignant: 845.

Figure 6: ResNet-50-based architecture: input 224×224×3 → conv1: 7×7, 64 (output 112×112) → conv2: [1×1, 64; 3×3, 64; 1×1, 256] ×3 (56×56) → conv3: [1×1, 128; 3×3, 128; 1×1, 512] ×4 (28×28) → conv4: [1×1, 256; 3×3, 256; 1×1, 1024] ×6 (14×14) → conv5: [1×1, 512; 3×3, 512; 1×1, 2048] ×3 (7×7) → 2048-d features → FC 1000 → output (1 × nClasses).


5 Result analysis Figure 7 shows the train loss and valid loss. The X-axis of the graph is labeled as the number of epochs, and the Y-axis as loss. The blue line represents valid loss, whereas the red line represents train loss. As the number of epochs increases, the train loss decreases non-linearly. Figure 8 shows the train accuracy and valid accuracy. The X-axis of the graph is labeled as the number of epochs, and the Y-axis as accuracy. The blue line represents valid accuracy, whereas the red line represents train accuracy. As the number of epochs increases, the train accuracy increases substantially. The application is developed on the Spyder IDE (part of ANACONDA 3) and the Jupyter notebook, where we achieved an accuracy of 86.82%. We trained our model with a large amount of data and predicted the cancerous and noncancerous cells present in the dataset using the CNN and SGD classifier algorithms. We built a prediction system by the use of rules: the input is collected as an image, and the training set formed is precisely classified [30]. This model can predict under any circumstance and for any variation of skin malignancy. Our results are compared with the paper on classification of skin cancer using TensorFlow and Inception V3 by Sai V et al. [16], where the authors used the DCNN on a CPU; their proposed approach achieved 85% accuracy. Such techniques, although effective, are costlier to implement, both in terms of time and resources. This model, on the other hand, is trained on the relatively free-to-use Google Colab, which provides its own GPUs. In terms of accuracy measures, the technique employed by us uses the SGD classifier, which has a better prediction accuracy, that is, 86.82%, compared to the DCNN model used there. Also, the technique with a different architecture in the form of ResNet-50 performs much more effectively and efficiently, with an accuracy of over 90.52%. The accuracy percentages are shown in Figure 9.


Figure 7: Train comparison of loss – SGD-based approach (train loss and valid loss versus epochs, 0–50).

Figure 8: Train comparison of accuracy – SGD-based approach (train accuracy and valid accuracy versus epochs, 0–50).


Figure 9: Accuracy percentage of the different methods – ResNet-50 (our approach): 90.52%; SGD (our approach): 86.82%; compared approach [16]: 85%.

6 Test bench We tested the model on an HP X-360 PC with an 8 GB NVIDIA GeForce 960M GPU and on a computer with a 12 GB NVIDIA TITAN X GPU. Here, the 40 epochs took 2 h of training. The high-end PC helped us converge to the results at a relatively faster rate.

7 Data leak Data leakage [28] is a hazardous issue that may lead to loss of integral information from the image data. We must handle this issue using various techniques to ensure that our model is robust. Many of the training images had artifacts such as markings, bright lights at the edges, and color patches, as shown in Figure 10. These were examined with utmost care, and such patched images were removed from the data used for training the model.
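One possible screening heuristic for the bright-edge artifacts described above is to flag images whose border pixels are unusually bright on average; this heuristic, its threshold, and the toy images below are our own illustrative choices, not the chapter's actual filtering procedure:

```python
def has_bright_border(image, threshold=200):
    """Flag a 2-D grayscale image (list of rows) whose border pixels
    are, on average, brighter than `threshold` (an arbitrary cutoff)."""
    h, w = len(image), len(image[0])
    border = []
    border += image[0] + image[h - 1]                 # top and bottom rows
    border += [row[0] for row in image[1:h - 1]]      # left column
    border += [row[w - 1] for row in image[1:h - 1]]  # right column
    return sum(border) / len(border) > threshold

clean = [[40] * 5 for _ in range(5)]
flashed = ([[250] * 5]
           + [[250, 40, 40, 40, 250] for _ in range(3)]
           + [[250] * 5])
print(has_bright_border(clean), has_bright_border(flashed))  # False True
```

Running a cheap check like this over the training folder lets the obviously flashed or marked images be set aside for manual review before training, rather than leaking artifact cues into the model.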


Figure 10: Images with marking and patches.

8 Conclusion In this chapter, we have discussed the applications of image segmentation and have trained DL models on an SGD-based CNN network and on a residual network with 50 dense layers. We have tried to generalize the concept for all image classification based problems. We have also compared our results with an existing DCNN model on similar data. The accuracy of the model trained in this chapter was around 87% for the first approach and around 90.52% for the latter. In future work, we aim to extend this model to more types of cancers whose images can be obtained in the form of CT scans and MRIs; even pneumonia and COVID-19 report analysis can be performed using this process. The problem statement could incorporate a working model for the same image classification and cancer fatality detection task. Now, as global public health and personal hygiene are assuming immense importance, these techniques can help assure community well-being. As future work, we can fine-tune the parameters of the ConvNets, increase the number of samples in the dataset to improve accuracy, perform data augmentation, and try cropping and removal of background artifacts.


Sidharth Purohit et al.

9 Proposed algorithms

The proposed algorithms are provided as Algorithm 1 and Algorithm 2.

Algorithm 1: Skin cancer prediction using an SGD optimizer-based CNN.

Input: Image files in separate malignant and benign folders.
Output: Classified images with over 86% accuracy.

Begin
* Read the dataset comprising image files labeled Malignant and Benign.
    for i in range(1, columns * rows + 1):
        ax = fig.add_subplot(rows, columns, i)
        if y_train[i] == 0:
            ax.title.set_text('Benign')
        else:
            ax.title.set_text('Malignant')
        plt.imshow(X_train[i], interpolation='nearest')
    plt.show()
* Perform image segmentation and pre-processing.
    from keras import optimizers
    from keras_preprocessing.image import ImageDataGenerator
    train_generator = ImageDataGenerator(rescale=1./255, rotation_range=10,
                                         zoom_range=0.1, width_shift_range=0.1,
                                         height_shift_range=0.1)
    test_generator = ImageDataGenerator(rescale=1./255)
* Fine-tune the SGD-based optimizer employed in the model.
    from keras.optimizers import SGD
    model.compile(SGD(lr=0.02, momentum=0.9, nesterov=True),
                  loss='binary_crossentropy', metrics=['accuracy'])
End

Algorithm 2: Skin cancer prediction using a ResNet-50-based transfer learning architecture with the Adam optimizer.

Input: Image files in separate malignant and benign folders.
Output: Classified images with over 90% accuracy.

Begin
* Read the dataset comprising image files labeled Malignant and Benign.
    for i in range(1, columns * rows + 1):
        ax = fig.add_subplot(rows, columns, i)
        if y_train[i] == 0:
            ax.title.set_text('Benign')
        else:
            ax.title.set_text('Malignant')
        plt.imshow(X_train[i], interpolation='nearest')
    plt.show()
* Deploy the ResNet50 architecture.
    from keras.applications import ResNet50
    base_model = ResNet50(include_top=False, weights="imagenet",
                          input_shape=(224, 224, 3))
* Compile the model.
    model.compile(optimizer=optimizers.Adam(lr=0.0001),
                  loss="categorical_crossentropy", metrics=["accuracy"])
* Find the accuracy and the confusion matrix.
    print(accuracy_score(y_test, y_pred))
    # 0.905252525252
    from sklearn.metrics import confusion_matrix
    cm = confusion_matrix(y_test, y_pred)
    print(cm)
    # [[2177  102]
    #  [ 228  793]]
End
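The confusion matrix printed at the end of Algorithm 2 can be turned into summary metrics directly. The sketch below assumes the usual scikit-learn layout (rows are true classes, columns are predicted classes) with benign as class 0 and malignant as class 1; that labeling is an assumption, not stated in the algorithm:

```python
import numpy as np

# Confusion matrix printed at the end of Algorithm 2
# (assumed layout: rows = true class, columns = predicted class,
#  benign = class 0, malignant = class 1)
cm = np.array([[2177, 102],
               [228, 793]])

tn, fp = cm[0]
fn, tp = cm[1]

accuracy = (tp + tn) / cm.sum()      # fraction of correct predictions
sensitivity = tp / (tp + fn)         # recall on the malignant class
specificity = tn / (tn + fp)         # recall on the benign class
print(round(accuracy, 3), round(sensitivity, 3), round(specificity, 3))
```

Sensitivity and specificity are worth reporting alongside accuracy here, since the two classes are imbalanced in this matrix.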

References
[1] Skin Cancer Statistics. https://www.wcrf.org/dietandcancer/cancer-trends/skin-cancer-statistics.
[2] Godoy, S. E., Hayat, M. M., Ramirez, D. A., Myers, S. A., Steven Padilla, R., Krishna, S. Detection theory for accurate and non-invasive skin cancer diagnosis using dynamic thermal imaging. Biomedical Optics Express, NCBI, 2017, 8(4), 2301–2323.
[3] Antony, A., Ramesh, A., Sojan, A., Mathews, B., Varghese, T. A. Skin cancer detection using artificial neural networking. International Journal of Innovative Research in Electrical Electronics Instrumentation and Control Engineering, 2016, 4(4), 305–308.
[4] Abdul Jaleel, J., Sibi Salim, R. B. A. Artificial neural network based detection of skin cancer. International Journal of Advance Research in Electrical Electronics and Instrumentation Engineering, 2012, 1(3), 200–205.
[5] Choudhari, S., Biday, S. Artificial neural network for skin cancer detection. International Journal of Emerging Trends and Technology in Computer Science, 2014, 3(5), 147–153.
[6] Alcon, J. F., Ciuhu, C., Ten Kate, W., Heinrich, A., Uzunbajakava, N., Krekels, G., Siem, D., De Haan, G. Automatic imaging system with decision support for inspection of pigmented skin lesions and melanoma diagnosis. IEEE Journal of Selected Topics in Signal Processing, 2009, 3(1), 14–25.
[7] Rigel, D. S., Carucci, J. A. Malignant melanoma: prevention, early detection and treatment in the twenty-first century. CA: A Cancer Journal for Clinicians, 2000, 50(4), 215–236.
[8] Giotis, I., Molders, N., Land, S., Biehl, M., Jonkman, M. F., Petkov, N. MED-NODE: A computer-assisted melanoma diagnosis system using non-dermoscopic images. Expert Systems with Applications, Elsevier, 2015, 42(19), 6578–6585.
[9] Kawahara, J., BenTaieb, A., Hamarneh, G., "Deep features to classify skin lesions", International Symposium on Biomedical Imaging, IEEE, Czech Republic, 2016, pp. 1397–1400.
[10] The International Skin Imaging Collaboration: Melanoma Project, https://isic-archive.com/.
[11] Ur Rehman, M., Khan, S. H., Rizvi, S. M. D., Abbas, Z., Zafar, A., "Classification of skin lesion by interference of segmentation and convolutional neural network", International Conference on Engineering Innovation, IEEE, Thailand, 2018, pp. 81–85.
[12] El Khoukhi, H., Filali, Y., Yahyaouy, A., Sabri, M. A., Aarab, A., "A hardware implementation of OTSU thresholding method for skin cancer image segmentation", International Conference on Wireless Technologies, Embedded and Intelligent Systems, IEEE, Morocco, 2019, pp. 1–5.
[13] Brinker, T. J., Hekler, A., Utikal, J. S., Grabe, N., Schadendorf, D., Klode, J., Berking, C., Steeb, T., Enk, A. H., Von Kalle, C. Skin cancer classification using convolutional neural networks: systematic review. Journal of Medical Internet Research, 2018, 20(10).
[14] Hosny, K. M., Kassem, M. A., Foaud, M. M. Classification of skin lesions using transfer learning and augmentation with Alex-net. PLoS ONE, 2019, 14(5).
[15] Vera-Rodriguez, R., Fierrez, J., Morales, A. Progress in Pattern Recognition, Image Analysis, Computer Vision and Applications, Springer, 2018, ISBN 9783030134693.
[16] Bhavya Sai, V., Narasimha Rao, G., Ramya, M., Sujana, S. Y., Anuradha, T. Classification of skin cancer images using TensorFlow and Inception v3. International Journal of Engineering & Technology, 2018, 7(2), 717–721.
[17] Theckedath, D., Sedamkar, R. R., "Detecting affect states using VGG16, ResNet50 and SE-ResNet50 networks", SN Computer Science, 2020.
[18] Naeem, A., Farooq, M. S., Khelifi, A., Abid, A. Malignant melanoma classification using deep learning: datasets, performance measurements, challenges and opportunities. IEEE Access, 2020, 8, 110575–110597.
[19] Gaidarski, I., Kutinchev, P., "Using big data for data leak prevention", Big Data, Knowledge and Control Systems Engineering, IEEE, 2020, pp. 1–5.
[20] Jinnai, S., Yamazaki, N., Hirano, Y., Sugawara, Y., Ohe, Y., Hamamoto, R. The development of a skin cancer classification system for pigmented skin lesions using deep learning. Biomolecules, MDPI, 2020, 10(8), 1–13.
[21] Hosny, K. M., Kassem, M. A., Foaud, M. M., "Skin cancer classification using deep learning and transfer learning", Cairo International Biomedical Engineering Conference, Egypt, 2018, pp. 90–93.
[22] Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature, 2017, 542, 115–118.
[23] Murugan, A., Nair, S. A. H., Kumar, K. P. S. Detection of skin cancer using SVM, random forest and kNN classifiers. Journal of Medical Systems, 2019, 43(8).
[24] Kumar, A., Sarkar, S., Pradhan, C., "Malaria disease detection using CNN technique with SGD, RMSprop and ADAM optimizers", Deep Learning Techniques for Biomedical and Health Informatics, Springer, 2020, pp. 211–230.
[25] Bumrungkun, P., Chamnongthai, K., Patchoo, W., "Detection skin cancer using SVM and snake model", International Workshop on Advanced Image Technology, IEEE, Thailand, 2018, pp. 1–4.
[26] Winters, D. W., Van Veen, B. D., Hagness, S. C. A sparsity regularization approach to the electromagnetic inverse scattering problem. IEEE Transactions on Antennas and Propagation, 2010, 58(1), 145–154.
[27] Skin Cancer: Malignant vs. Benign, https://www.kaggle.com/fanconic/skin-cancer-malignant-vs-benign.
[28] Crimaldi, M., Cristiano, V., De Vivo, A., Isernia, M., Ivanov, P., Sarghini, F., "Neural network algorithms for real time plant diseases detection using UAVs", Innovative Biosystems Engineering for Sustainable Agriculture, Forestry and Food Production, Springer, 2020.
[29] Hu, F., Zhu, Y., Liu, J., Li, L., "An efficient long short-term memory model based on Laplacian Eigenmap in artificial neural networks", Applied Soft Computing, Elsevier, vol. 91, 2020.
[30] Kumar, A., Sarkar, S., Pradhan, C., "Recommendation system for crop identification and pest control technique in agriculture", International Conference on Communication and Signal Processing, India, 2019, pp. 185–189.

Mildred J. Nwonye, V. Lakshmi Narasimhan, Zablon A. Mbero

Coronary heart disease analysis using two deep learning algorithms, CNN and RNN, and their sensitivity analyses Abstract: Heart disease is the number one cause of death, killing approximately 17.9 million people per year, of which 85% are due to heart attack and stroke, with more than 75% of the deaths occurring in third-world countries. Lifestyle, lack of exercise, and an unhealthy diet are the main reasons why people develop heart disease and end up going to hospitals or medical centers for treatment. Risk factors, such as hypertension, diabetes, smoking, and excessive consumption of alcohol, are causes of heart-related disease. An analysis would help detect and facilitate the diagnosis of coronary heart disease at an early stage. This chapter describes a diagnostic model constructed using a series of sensitivity analyses with two deep learning algorithms, namely, the recurrent neural network (RNN) and the convolutional neural network (CNN). The sensitivity analysis is performed over the number of input layers (NOIL), perturbation of the training-versus-testing ratio, and the number of epochs. The features selected are based on the risk factors of coronary heart disease, using a dataset from the Kaggle database. This dataset is from the Framingham Heart Study on coronary heart disease and contains 15 attributes and one target variable. The result of the sensitivity analysis indicates that the RNN model yields an accuracy of 85.98% when the dataset is split 50:50, whereas the CNN model yields an accuracy of 85.79% when the dataset is split 55:45, in predicting whether a person will develop the disease within ten years. Keywords: heart disease, coronary heart disease, CNN, RNN, NOIL, perturbation analysis, number of epochs

1 Introduction Life depends on the proper functioning of the heart. Any disorder that affects the heart’s ability to function properly is known as heart disease or cardiovascular disease (CVD). CVDs, such as rheumatic heart disease, cerebrovascular heart disease, coronary heart disease (CHD), congenital heart disease, peripheral arterial disease, pulmonary embolism, and deep vein thrombosis [1] are groups of heart and

Mildred J. Nwonye, University of Botswana, Gaborone, Botswana, e-mail: [email protected] V. Lakshmi Narasimhan, University of Botswana, Gaborone, Botswana, e-mail: [email protected] Zablon A. Mbero, University of Botswana, Gaborone, Botswana, e-mail: [email protected] https://doi.org/10.1515/9783110708127-010


blood vessel disorders. Various factors, such as age, gender, diet, blood pressure (BP), alcohol intake, smoking, obesity, physical inactivity, and family history, contribute to an individual developing CVD. Nowadays, people work hard to make ends meet and, coupled with their busy lives, neglect their health; further, when one joins the workforce after obtaining a diploma or degree, one's lifestyle and diet change. This change in lifestyle and diet, together with increased stress levels and pressure, raises blood pressure and, if not taken care of, can develop into heart disease.

The World Health Organization (WHO) stated in 2016 that CVD is the major cause of death globally, killing at least 17.9 million people, an estimated 31% of all global deaths. In the USA, heart disease is the major cause of death, in which CHD accounts for 1 in 7 deaths, killing over 360,000 people a year [2]. Each year, about 790,000 people have heart attacks, of which 114,000 die. From 2004 to 2014, the annual death rate of CHD declined by 35.5%, but the risk factors remain alarming. Heart attacks and CHD were two of the ten most expensive hospital discharge diagnoses, and between 2013 and 2030, medical costs for the treatment of coronary heart disease are expected to rise by 100% [3]. In Botswana, non-communicable diseases (NCDs) are responsible for 37% of the deaths, with CVD being the number one cause of death [4], while CHD is responsible for 1,1310 deaths [5].

People find out about their health issues only when they are forced to visit a hospital. But with advancements in technology, such as the Internet of things (IoT) [6], artificial intelligence [7], and machine learning approaches, physicians can use an efficient diagnostic model to make meaningful and accurate medical decisions.
Various techniques, such as support vector machine (SVM), artificial neural network (ANN), K-nearest neighbor (KNN), and fuzzy logic, have been employed to tackle the issue of inaccurate diagnosis of various heart diseases; some of them have proved to be successful, while others require more validation on datasets. In this chapter, we use convolutional neural networks (CNN) and recurrent neural networks (RNN) on risk factors (age, current_smoker, sysBP, diabetes, glucose, and total cholesterol) from the dataset, and vary the number of epochs, the data ratios, and the number of input layers (NOIL).

This chapter is organized as follows: Section 2 reviews what other researchers have done, while Section 3 explains the methodology used. Section 4 gives details of the modified CNN and RNN algorithms, and Section 5 describes the perturbation, in addition to the sensitivity analysis. Section 6 discusses the effect of perturbing the three parameters (NOIL, the number of epochs, and the data ratios) on the RNN and CNN algorithms, while the conclusions summarize the chapter and provide pointers for future work in this area.


2 Related works

Most researchers have used several machine learning algorithms to predict the diagnosis of heart diseases, including adding feature-selection methods to increase the accuracy of the algorithms used. Peter and Somasundaram [8] used naïve Bayes (NB), decision tree (DT), K-nearest neighbor (KNN), and neural network (NN) algorithms to detect the association between independent and dependent variables. Correlation-based feature selection (CFS) subset, chi-squared, consistency subset, gain ratio, filtered attribute, filtered subset, information gain, latent semantic, one attribute, and relief were used as attribute- or feature-selection methods. Their result showed that the NB classifier had the highest accuracy of 85.5% for heart disease prediction when the CFS technique was applied, compared with the other attribute-selection methods.

Hasan et al. [9] used data mining techniques to detect hidden patterns, together with correlated features, to predict heart disease without the support of medical specialists, using the dataset from the Cleveland heart disease UCI machine learning repository [10]. This dataset contains 14 attributes and 303 records. The information gain feature-selection technique was employed to select highly correlated features. Classification techniques such as KNN, DT using the Iterative Dichotomiser 3 (ID3) [11], logistic regression (LR), Gaussian NB, and random forest (RF) were applied to the dataset. LR showed better performance than the other techniques, with an accuracy of 89.5% for all 14 attributes and 92.76% for 10 attributes.

Kohli and Arora [12] proposed the use of classification algorithms in recognition systems that can facilitate the early detection of chronic diseases and increase the patient's survival rate. The three datasets used were heart disease, breast cancer, and diabetes, available in the UCI machine learning repository [10], with LR [13], DT [14], RF [15], SVM [16], and adaptive boosting [17] algorithms.
The result showed that linear regression was the best for heart disease prediction, with an accuracy of 87.1%; SVM had the highest performance for diabetes, with an accuracy of 85.71%; and the AdaBoost classifier showed an accuracy of 98.57% for breast cancer.

Yan et al. [18] employed the multilayer perceptron (MLP) architecture to create a diagnostic support system for the five most common heart diseases.1 It reduced the diagnosis time and improved diagnostic accuracy. The result showed that, amongst the five heart diseases, chronic cor pulmonale [19] had the highest classification accuracy of 94.9%, coronary and rheumatic valvular heart disease had the lowest, with 90.4% on the training set, and hypertension had 89.5% on the test data; values from the confusion matrix were used to calculate the accuracy.

Zhang et al. [20] proposed a deep learning classifier built on a multilayer 1-D CNN, using the relevant information extracted from electrocardiogram (ECG) signals. The MIT-BIH arrhythmia database [21] was used, and performance evaluation was carried out using the F1 score, positive predictive value (PPV), and sensitivity, which showed values of 0.976, 0.977, and 0.976, respectively.

Baccouche et al. [22] proposed an ensemble-learning framework of different neural networks – gated recurrent unit (GRU), long short-term memory networks (LSTM), bidirectional GRU (BiGRU), bidirectional LSTM (BiLSTM), MLP, and CNN – to accurately predict various types of heart diseases. A model-based selection method [22], along with a recursive feature elimination method [22], was used for feature selection on a dataset from the "Medica Norte Hospital" in Mexico, which contained 800 records and 141 features. The performance metrics used were accuracy (calculated using the values from the confusion matrix), precision, recall, area under the curve (AUC), and F1 score. For Target 1, LSTM had the highest precision value of 0.86, while BiGRU had the highest recall, F1, accuracy, and AUC values of 0.87, 0.87, 0.81, and 0.79, respectively. For Target 2, BiGRU had the highest precision, F1, and AUC values of 0.82, 0.81, and 0.82, respectively, while BiLSTM had recall and accuracy values of 0.84. For Target 3, BiLSTM and BiGRU had the highest precision value of 0.96; for BiLSTM, the recall, F1, accuracy, and AUC values were 0.8, 0.87, 0.8, and 0.88, respectively. Lastly, for Target 4, MLP had the highest precision value of 0.96, while BiGRU had the highest recall, F1, accuracy, and AUC values of 0.87, 0.82, 0.87, and 0.85, respectively.

Tu et al. [23] employed a bagging algorithm [23], which was compared with the DT algorithm [14]. Since the bagging algorithm combines the outputs from different models, it increases the performance over a single model. The dataset comprised four sets from the UCI machine learning repository [10], with a total of 920 records and 76 attributes. Sensitivity, specificity, and accuracy were used as performance measures.

1 Hypertension, coronary heart disease (CHD), rheumatic valvular heart disease, chronic cor pulmonale, and congenital heart disease.
The result showed that the bagging algorithm performed better than the decision tree, with an accuracy of 81.41%, making it very effective.

Yuan et al. [24] proposed the use of "fuzzy logic" and a bootstrap aggregating (bagging) algorithm, based on gradient boosting decision tree (GBDT) prediction algorithms [24]. The dataset used was from the UCI repository [10], and had 303 records with 14 parameters. The bagging algorithm was introduced into the fuzzy-GBDT, and metrics such as accuracy, recall, and AUC were used as performance metrics to avoid overfitting. The result of bagging-fuzzy-GBDT was compared with the fuzzy-GBDT, DT, GBDT, and "bagging-GBDT" models to validate the effectiveness of the model. It indicated that "fuzzy logic and bagging algorithm", when combined with GBDT, increased the AUC value to 0.87.

Junwei et al. [25] proposed the use of LSTM to control the problem of irregular time intervals. The dataset used was collected from hospital information systems (HIS), and continuous, discrete, and missing values were replaced with the mean and most frequently occurring values. The authors used the z-score standardization technique to pre-process the input data. Micro-averaged precision, recall, and F1 (Precision_micro, Recall_micro, F1_micro), along with AUC, were used as performance metrics on both the improved LSTM (T-LSTM-TR) and the standard LSTM, with 120 hidden layers, an α of 0.5, and a dropout rate of 0.4. The result showed that the performance of T-LSTM-TR is significantly higher than that of the other techniques, with a recall of 0.811, an F1 of 0.608, and an AUC of 0.896.

Li et al. [26] proposed the use of machine learning algorithms, such as SVM, LR, ANN, KNN, NB, and DT, to develop an efficient and accurate system based on classification algorithms. They employed feature-selection algorithms such as relief [27], minimal redundancy maximal relevance (MRMR) [28], least absolute shrinkage selection operator (LASSO) [29], local learning-based feature selection (LLBFS) [30], and the "fast conditional mutual information (FCMIM) feature selection (FS) algorithm" [31]. The Cleveland Heart dataset [10] had 303 instances and 75 attributes, but after removing missing values, 297 instances and 13 features were used. The performance evaluation metrics used were accuracy, sensitivity, specificity, precision, and the Matthews correlation coefficient (MCC). With FCMIM, SVM achieved the highest accuracy of 92.37% and a specificity of 98%; LR had the highest sensitivity of 98%, and the MCC was 91%, with an execution time of 0.01 s.

Karaolis et al. [32] developed a system based on DT to assess the risk factors of CHD and help decrease the number of patients. The events studied were "myocardial infarction" (MI), "percutaneous coronary intervention" (PCI), and "coronary artery bypass graft" (CABG) surgery, which had different risk factors. The MI and CABG risk factors were "age", "smoking", and "history of hypertension"; for PCI, they were "family history" (FH), "history of hypertension", and "history of diabetes". The data used was collected from 1,500 patients with CHD between 2003 and 2006, and in 2009, under the supervision of J. Moutiris, a cardiologist at the Paphos General Hospital in Cyprus. The performance was measured using correctly classified (%CC), "true positive rate" (%TP), "true negative rate" (%TN), "false negative rate" (%FN), "sensitivity", "specificity", and "support".
The following splitting criteria were used: information gain (IG), Gini index (GI), likelihood ratio chi-squared statistics (X2), gain ratio (GR), and distance measure (DM). The result showed that MI, PCI, and CABG had the highest %CC of 66%, 75%, and 75%, respectively.

Mu et al. [33] proposed the use of a bidirectional RNN to remember all patient visits, both past and future, and to include some patient attributes. The dataset is from the Hainan People's Hospital, and contains the diagnosis ID, time, diagnostic code, order execution time, stop time, drug codes, and patient profile. An evaluation was done against other models, such as KNN, SVM, RNN (the basic LSTM model), RNNy (the proposed model), and RNNi (which uses side information as input). The number of layers used for the experiment was 2, with 128 hidden units in each layer and a dropout rate of 0.4. Based on the result obtained from the accuracy metric over 100 iterations, RNNy performed better, with an accuracy of 64.75% after 10 iterations. RNNi can significantly improve the model's performance: side information is used as the first input units, with subsequent changes thereby affecting the judgment of the model. When RNNi is compared with RNNy, there is an improvement as the sequence length increases, indicating that a combination with BiLSTM can improve characteristic discovery of the patients, and the method can be used for diagnosis.

Javeed et al. [34] found that most of the methods developed by other researchers had an overfitting problem on the testing data. They developed a system that uses two algorithms: a random search algorithm (RSA) for feature selection and the RF algorithm for heart failure prediction. The Cleveland heart disease dataset from the UCI machine learning repository [10] was used; it contained 303 instances, with 6 missing instances, and 76 features (of which 13 are published). The RSA is used to search for a subset of features carrying complementary information about heart failure prediction. The two algorithms were combined into one hybrid learning system, RSA-RF. The performance was measured using accuracy, sensitivity, specificity, and MCC. They achieved an accuracy of 93.33%, compared with [35], which had 85% accuracy.

Chang et al. [37] proposed the use of RF, SVM, eXtreme Gradient Boosting (XGB) [38], and eXtreme Gradient Boosting SVM (XGBSVM) to determine whether hypertensive patients will develop hypertensive heart disease within three years. The dataset used is from a hospital in Beijing, China, with a total of 1,357 records, later reduced to 372 after the removal of missing data. AUC and "improved normalized entropy" (INE) were the two evaluation metrics used. The highest AUC value, 0.939, was obtained from the XGBSVM model; for INE, the value was 0.868, also from XGBSVM.

Ali et al. [39] proposed an automated decision support system using ANN for heart disease diagnosis. The χ2 statistical model was used for feature selection and feature elimination; χ2 statistics were computed for each non-negative feature. The Cleveland heart disease dataset from the UCI machine learning repository was used. The metrics used were accuracy, sensitivity, specificity, and the Matthews correlation coefficient (MCC).
For DNN, the accuracy on all features was 90%, with specificity, sensitivity, and MCC values of 91.83%, 87.8%, and 0.798, respectively; χ2-DNN performed better by 3.33%. For ANN, the accuracy on all features was likewise 90%, with specificity, sensitivity, and MCC values of 91.83%, 87.8%, and 0.798, respectively; χ2-ANN performed better by 1.11%. Under k-fold cross-validation, an accuracy of 91.57%, a sensitivity of 89.78%, a specificity of 93.12%, and an MCC value of 0.83 were achieved. Lastly, when compared with other models, such as SVM, AdaBoost, and RF, the proposed model performed better, with an accuracy of 93.33%.

3 Proposed approach

The approach used here is shown in Figure 1. It involves pre-processing of the dataset to handle missing values, and the use of principal component analysis (PCA) [40] feature selection to remove irrelevant features. The CNN and RNN algorithms are employed, along with enhancements, to cater for three perturbation parameters: i) the data ratio, ii) the number of epochs, and iii) the NOIL.


Figure 1: Methodology.

3.1 Description of the dataset

The dataset used in this chapter is obtained from the Framingham database available on Kaggle [41], a Google-owned platform that data scientists use to publish ML-based prediction competitions using publicly available databases. The dataset was obtained from a study conducted in Massachusetts, USA, and contains 4,238 records with 16 attributes: 15 independent variables and 1 target variable. Table 1 describes the feature values, whereas Table 2 describes the target variable. The target variable indicates the 10-year occurrence of CHD, noted as TenYearCHD, and contains a value of 0 or 1, where 0 indicates no risk of heart disease and 1 indicates a high risk of heart disease. Note that the dataset needs some pre-processing before it can be used.

Table 1: Description of the features of the dataset.

No.  Name                             Type
1    Male (gender)                    bool
2    Age                              int
3    Education                        int
4    currentSmoker                    bool
5    cigsPerDay (estimated average)   int
6    BPMeds                           bool
7    PrevalentStroke                  bool
8    prevalentHyp                     bool
9    Diabetes                         bool
10   totChol                          int
11   sysBP                            int
12   DiaBP                            int
13   BMI                              float
14   heartRate                        int
15   glucose                          int

Table 2: Description of the target or output class.

Classname    Type   Description
TenYearCHD   bool   10-year prevalence of coronary heart disease (CHD)

3.2 Data pre-processing

Data pre-processing is a technique that changes raw data into a useful and efficient format. The steps involved are: data cleaning, which handles missing values and noisy data; data transformation, which involves normalization, attribute selection, and discretization; and data reduction. The dataset used here has some missing values, which are substituted with the mean value of the feature using scikit-learn [42]. To select relevant features, feature selection was employed.
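As a minimal sketch of the mean-substitution step (the chapter uses scikit-learn for this; the NumPy version below is an illustrative equivalent, and the toy array values are made up):

```python
import numpy as np

def impute_mean(X):
    """Replace NaNs in each column with that column's mean,
    mirroring scikit-learn's SimpleImputer(strategy='mean')."""
    X = np.asarray(X, dtype=float).copy()
    col_means = np.nanmean(X, axis=0)          # per-feature mean, ignoring NaNs
    nan_rows, nan_cols = np.where(np.isnan(X))
    X[nan_rows, nan_cols] = col_means[nan_cols]
    return X

# Illustrative toy data: two features, one missing value in each
X = np.array([[39.0, np.nan],
              [46.0, 250.0],
              [np.nan, 230.0]])
print(impute_mean(X))
```

Mean imputation is simple but can bias variance estimates, which is one reason the chapter also applies feature selection afterwards.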

3.3 Feature selection

The performance of the model is influenced by the features used. Feature selection is the process of automatically or manually selecting the features that contribute the most to the target or output variable of the dataset. Feature selection reduces overfitting, improves accuracy, and reduces training time. The feature-selection method used here is PCA, which is a linear dimensionality reduction technique for mapping a high-dimensional representation of the dataset into a low-dimensional representation [40]. PCA is also a statistical method for deciding the important variables in a high-dimensional dataset that explain the differences in observations, simplifying analysis and visualization without much loss of information [40].
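As a rough sketch of the PCA mapping described above (not the chapter's exact pipeline), the principal components of a centered data matrix can be obtained from its singular value decomposition:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project X onto its top principal components via SVD.
    A minimal sketch of PCA-based dimensionality reduction."""
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean(axis=0)                   # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T           # scores in the reduced space

# Toy example: 5 samples with 3 features reduced to 2 dimensions
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
Z = pca_reduce(X, 2)
print(Z.shape)   # (5, 2)
```

The rows of Vt are the principal directions ordered by explained variance, so keeping the first n_components rows retains the most informative linear combinations of the original risk factors.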

4 Deep learning algorithms

4.1 Convolutional neural networks (CNN)

CNN is a type of ANN [43] in which the neurons self-optimize while learning; each neuron in the system receives input and performs some computations to calculate the output [44]. The CNN architecture consists of an input layer, convolutional layers, pooling layers, and fully connected layers, as shown in Figure 2, and the steps involved in the computations are shown in Table 3. The hidden layers of the CNN comprise the convolutional layers, pooling layers, and fully connected layers. The input to the network shown in Figure 2 is a grayscale image of dimension 28 × 28 × 1, in which the number of channels is one. The input data is mapped to two convolutional layers that have a kernel size of 5 × 5. The first convolutional layer, labeled Conv_1, and the second convolutional layer, labeled Conv_2, have n1 and n2 channels, respectively, which refer to the number of output feature maps, or the number of filters, that the layers have. Pooling is applied to each convolutional layer in the form of max pooling, to reduce the size of the feature map. Max pooling is useful because the dimensionality of the data is reduced, thus lowering the number of parameters in future steps [45]. Two fully connected layers flatten the feature maps: fc_3, with ReLU activation, passes its output to fc_4 with n3 units, and each is followed by a dropout layer. The output layer produces the output by applying a simple linear activation.

Figure 2: Basic structure of a CNN.
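The max-pooling step described above can be illustrated in a few lines of NumPy; this is a sketch of the operation itself, not the Keras pooling layer used in the chapter:

```python
import numpy as np

def max_pool_2x2(fmap):
    """2x2 max pooling with stride 2 on a single-channel feature map.
    Assumes both spatial dimensions are even."""
    h, w = fmap.shape
    # Group the map into (h/2, 2, w/2, 2) blocks and take the max per block
    return fmap.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.array([[1, 2, 0, 1],
                 [3, 4, 1, 0],
                 [5, 0, 2, 2],
                 [1, 1, 0, 9]])
print(max_pool_2x2(fmap))   # [[4 1]
                            #  [5 9]]
```

Each 2 × 2 window is replaced by its maximum, halving both spatial dimensions and thereby reducing the number of parameters downstream, as noted above.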


Table 3: CNN algorithm modified for sensitivity analysis.

Input: Feature values = {"Age", "currentSmoker", "sysBP", "diaBP", "diabetes", "totChol"}
Output: Target variable = {"TenYearCHD"}

Begin
  Set Feature X = {X1, X2, ..., Xn};
  Set Number_of_Iterations T = {200, 500, 700, 1000, 1500};
  Set Convolution_Layers C = {2, 4, 6, 8, 10};
  int Fully_Connected_Layers FC = 10;
  Set NOIL N = {2, 4, 6, 8};
  foreach i in N {
      Apply PCA(Xi);
  }
  Set dataset D(x:y) = {"80:20", "75:25", "70:30", "65:35", "60:40", "55:45", "50:50"};
  // where x is the training ratio and y is the test ratio
  foreach d in D {            // for each data ratio in D
      foreach t in T {        // for each number of iterations in T
          foreach c in C {    // for each number of convolutional layers in C
              input = x.shape[0], x.shape[1];
              filter = 10;
              Apply max-pooling;
              Apply Dropout(0.3);
          }
      }
      FC;
  }
End
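The nested loops in Table 3 amount to evaluating a grid over the perturbation parameters. A plain-Python sketch of enumerating that grid follows; the parameter values are taken from Table 3, while train_and_score is a hypothetical stand-in for building and training the CNN:

```python
from itertools import product

# Perturbation parameters from Table 3
data_ratios = ["80:20", "75:25", "70:30", "65:35", "60:40", "55:45", "50:50"]
epochs = [200, 500, 700, 1000, 1500]
conv_layers = [2, 4, 6, 8, 10]

def train_and_score(ratio, n_epochs, n_conv):
    """Hypothetical stand-in: would build/train the CNN and return accuracy."""
    raise NotImplementedError

# Every (ratio, epochs, layers) configuration the sensitivity analysis visits
grid = list(product(data_ratios, epochs, conv_layers))
print(len(grid))   # 7 * 5 * 5 = 175 configurations
```

Enumerating the grid up front makes it easy to log one accuracy per configuration and then slice the results along any single parameter for the sensitivity plots.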

Coronary heart disease analysis using CNN and RNN, and their sensitivity analyses

4.2 Recurrent neural network (RNN)

RNN uses its internal memory and recurrent connections in every unit to estimate the current output based on past information [46]. Since the units are interconnected sequentially, the values flowing through them become either smaller or bigger, depending on the computation, and the gradients may shrink toward zero or grow toward infinity: the vanishing (or exploding) gradient problem [22]. LSTM is designed to solve the problem imposed by RNN, namely, the vanishing gradient problem. The building block of an LSTM is a memory cell that represents the hidden layer. The modified RNN comprises an input gate, an output gate, a memory cell, and a forget gate. The structure of the LSTM recurrent cell unit is shown in Figure 3. Table 4 shows the steps involved in its implementation. To obtain the cell state at the current time step, C_t, the cell state from the previous time step, C_{t-1}, is modified without being multiplied directly by any weighting factor. In Figure 3, ⊙ represents element-wise multiplication and ⊕ element-wise summation. x_t is the input data at time t, and h_{t-1} is the vector of hidden units at time t − 1. Matrix-vector multiplication is applied to the input using the sigmoid function (σ), the hyperbolic tangent (tanh), and weight values. The forget gate (f_t) decides which information passes through the cell and which information is withheld. It is computed as follows:

    f_t = σ(W_{xf} x_t + W_{hf} h_{t-1} + b_f)    (1)

The input gate (i_t) and input node (g_t) are used to bring the cell state up to date. They are computed as:

    i_t = σ(W_{xi} x_t + W_{hi} h_{t-1} + b_i)    (2)

    g_t = tanh(W_{xg} x_t + W_{hg} h_{t-1} + b_g)    (3)

The cell state at time t is calculated as C_t = (C_{t-1} ⊙ f_t) ⊕ (i_t ⊙ g_t). The output gate (o_t) decides how to update the value of the hidden layers:

    o_t = σ(W_{xo} x_t + W_{ho} h_{t-1} + b_o)    (4)

Figure 3: Structure of an LSTM.
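A minimal NumPy sketch of one LSTM time step following equations (1)–(4) is given below. The stacked weight layout, the sizes, and the variable names are our illustrative choices, not the chapter's: each W[k] stacks [W_xk | W_hk] horizontally, so W[k] @ z equals W_xk x_t + W_hk h_{t-1}.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x_t, h_prev, c_prev, W, b):
    # z stacks the current input x_t with the previous hidden state h_{t-1}
    z = np.concatenate([x_t, h_prev])
    f_t = sigmoid(W["f"] @ z + b["f"])   # forget gate, eq. (1)
    i_t = sigmoid(W["i"] @ z + b["i"])   # input gate, eq. (2)
    g_t = np.tanh(W["g"] @ z + b["g"])   # input node, eq. (3)
    c_t = c_prev * f_t + i_t * g_t       # cell state: (C_{t-1} ⊙ f_t) ⊕ (i_t ⊙ g_t)
    o_t = sigmoid(W["o"] @ z + b["o"])   # output gate, eq. (4)
    h_t = o_t * np.tanh(c_t)             # new hidden state
    return h_t, c_t

rng = np.random.default_rng(0)
n_in, n_hid = 6, 4   # six input features, four hidden units (illustrative sizes)
W = {k: rng.normal(scale=0.1, size=(n_hid, n_in + n_hid)) for k in "figo"}
b = {k: np.zeros(n_hid) for k in "figo"}
h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_cell(rng.normal(size=n_in), h, c, W, b)
```

Because f_t multiplies the previous cell state directly, rather than through a weight matrix, gradients can flow through c_t across many time steps, which is how the LSTM avoids the vanishing-gradient problem described above.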

5 Perturbation and sensitivity analysis

In perturbation analysis, we want to find out how changing the x values affects y; in sensitivity analysis, we want to find out how the x values affect the y values for a particular variable. Perturbation analysis is done over data ratios and the number of epochs versus accuracy, with the various values of NOIL as the sensitivity parameter. The following data ratios are used: 80:20, 75:25, 70:30, 65:35, 60:40, 55:45, and 50:50, and the number of epochs is set to the following values: 200, 500, 700, 1,000, and 1,500.
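These settings define the experimental grid; the small Python sketch below (variable names are ours) enumerates every (ratio, epochs, NOIL) configuration explored in the perturbation study:

```python
from itertools import product

ratios = ["80:20", "75:25", "70:30", "65:35", "60:40", "55:45", "50:50"]
epochs = [200, 500, 700, 1000, 1500]
noil_values = [2, 4, 6, 8]

# every (ratio, epochs, NOIL) combination evaluated in the perturbation study
configs = list(product(ratios, epochs, noil_values))

print(len(configs))  # 7 ratios x 5 epoch settings x 4 NOIL values = 140 runs
```

Each table in the sections that follow fixes one of the three axes (e.g. 1,000 epochs, or the 55:45 ratio) and reports accuracy over the remaining two.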

5.1 Results from the perturbation on CNN algorithm

The results shown in Table 5 are obtained from an execution of the CNN algorithm on the target variable, by varying the training versus testing data ratios for 1,000 epochs. The table records the accuracies obtained when the NOIL is varied from 8, 6, and 4, to 2; the accuracy fluctuates as the NOIL decreases across the data division ratios, as shown in Figure 4. The lowest accuracy is recorded as 83.86% at a ratio of 55:45, with NOIL = 6, while the highest accuracy is recorded as 85.79% at a ratio of 55:45, with NOIL = 4. As the NOIL decreases across each data division, the accuracy of the algorithm fluctuates. This shows that varying the data division ratios has a negative effect on the performance of the algorithm in predicting CHD for 1,000 epochs.

Table 4: RNN (LSTM) algorithm modified for sensitivity analysis.

    Input:  Feature values = {"Age", "currentSmoker", "sysBP", "diaBP", "diabetes", "totChol"};
    Output: Target variable = {"TenYearCHD"};
    Begin
        Set Feature X = {X1, X2, . . ., Xn};
        Set Number of Iterations T = {200, 500, 700, 1000, 1500};
        Set LSTM_Layer L = {2, 4, 6, 8, 10};
        int Number_Of_Layers NOL = 10;
        Set NOIL N = {2, 4, 6, 8};
        foreach i in N {
            Apply PCA(Xi);
        }
        Set dataset D(x:y) = {"80:20", "75:25", "70:30", "65:35", "60:40", "55:45", "50:50"};
        // where x is the training_ratio and y is the test_ratio
        foreach d in D {                  // for each element in D
            foreach t in T {              // for each element in T
                foreach l in L {          // for each element in L
                    input = x.shape[0], x.shape[1];
                    Apply Dropout(0.2);
                } End foreach
                NOL;
            } End foreach
        } End foreach
    End


Table 5: Results for executing CNN algorithm on ratio versus accuracy, with NOIL as sensitivity parameter, for 1,000 epochs. Rows: training:testing data ratios 80:20, 75:25, 70:30, 65:35, 60:40, 55:45, and 50:50; columns: NOIL = 8, 6, 4, and 2; entries: accuracy (%).

Figure 4: Perturbation graph for data ratio versus accuracy for CNN algorithm, with NOIL as sensitivity parameter, for 1,000 epochs.

The results shown in Table 6 are obtained from an execution of the CNN algorithm on the target variable by varying the number of epochs. The table records the accuracy obtained when the NOIL is varied from 8, 6, and 4, to 2. The recorded accuracy values fluctuate across each epoch setting as the NOIL decreases, as shown in Figure 5. The lowest accuracy is recorded as 83.86% at 1,000 epochs, with NOIL = 6, while the highest accuracy is recorded as 85.79% at 1,000 epochs, with NOIL = 4. As the NOIL decreases across each epoch setting, the accuracy of the algorithm fluctuates. This shows that varying the number of epochs has a negative effect on the performance of the algorithm in predicting CHD for a ratio of 55:45.


Table 6: Results for executing CNN algorithm on number of epochs versus accuracy, with NOIL as sensitivity parameter, for a data ratio of 55:45. Rows: 200, 500, 700, 1,000, and 1,500 epochs; columns: NOIL = 8, 6, 4, and 2; entries: accuracy (%).

Figure 5: Perturbation graph for number of epochs versus accuracy for CNN algorithm, with NOIL as sensitivity parameter, for a data ratio of 55:45.

5.2 Results from the perturbation on RNN algorithm

The results shown in Table 7 are obtained from the execution of the RNN algorithm on the target variable by varying the training versus testing data ratios. The table records the accuracies when the NOIL is varied from 8, 6, and 4, to 2. The recorded accuracy values fluctuate as the data division ratios are varied and the NOIL decreases, as shown in Figure 6. The lowest accuracy is recorded as 84.29% at a ratio of 50:50, while the highest accuracy is recorded as 85.98%, also at a ratio of 50:50. As the NOIL increases across each data division, the accuracy of the algorithm fluctuates. This shows that varying the data division ratios affects the performance of the algorithm.


The results shown in Table 8 are obtained from an execution of the RNN algorithm on the target variable by varying the number of epochs. The table records the accuracies when the NOIL is varied from 8, 6, and 4, to 2. The recorded accuracy values fluctuate as the number of epochs is varied, as shown in Figure 7. The lowest accuracy is 83.95% at epoch 700, while the highest accuracy recorded is 85.98% at epoch 1,000. As the NOIL increases across each epoch setting, the accuracy fluctuates. This shows that varying the number of epochs influences the performance of the algorithm in predicting CHD, for a ratio of 50:50.

Table 7: Results for executing RNN algorithm on ratio versus accuracy, with NOIL as sensitivity parameter, for 1,000 epochs. Rows: training:testing data ratios 80:20, 75:25, 70:30, 65:35, 60:40, 55:45, and 50:50; columns: NOIL = 8, 6, 4, and 2; entries: accuracy (%).

Figure 6: Perturbation graph for ratio versus accuracy for RNN algorithm, with NOIL as sensitivity parameter, for 1,000 epochs.


6 Performance evaluation comparison

For the RNN algorithm, it is observed that both the ratio and the number of epochs yield the same accuracy when the NOIL is 8. For CNN, the ratio and the number of epochs yield the same accuracy of 85.79% when NOIL = 4, as shown in Table 9.

Table 8: Results for executing RNN algorithm on number of epochs versus accuracy, with NOIL as sensitivity parameter, for a data ratio of 50:50. Rows: 200, 500, 700, 1,000, and 1,500 epochs; columns: NOIL = 8, 6, 4, and 2; entries: accuracy (%).

Figure 7: Perturbation graph for number of epochs versus accuracy for RNN algorithm, with NOIL as sensitivity parameter, for a data ratio of 50:50.


Table 9: Performance comparison of CNN and RNN models.

    Model   Ratio   Accuracy (%)   NOIL   Number of epochs   Accuracy (%)   NOIL
    CNN     55:45   85.79          4      1,000              85.79          4
    RNN     50:50   85.98          8      1,000              85.98          8

7 Conclusions

This chapter discusses the effect of perturbation of three parameters (NOIL, number of epochs, and data ratios) on CNN and RNN algorithms used for heart disease modeling, and their respective sensitivity analyses. The results for RNN showed that when the NOIL is 8, the highest perturbation result was achieved, with an accuracy of 85.98% for both the ratio and the number of epochs. The CNN algorithm shows its highest result when the NOIL is 4, for both perturbations, with an accuracy of 85.79%. Therefore, RNN performed better than CNN, because it achieved the highest accuracy of 85.98%. The techniques employed in this research can be used to develop a medical decision support system to assist physicians in diagnosis. In the future, we intend to explore other perturbation and sensitivity parameters, such as the number of hidden layers, the number of neurons in the hidden layers, the activation function, bias, and weights, on RNN, CNN, and MLP. Collecting real data from hospitals and using it with these algorithms to help live patients would be an interesting challenge, both in terms of the analyses and the underlying ethics.

References

[1] World Health Organization, 'Cardiovascular Diseases (CVDs)', May 17, 2017. https://www.who.int/news-room/fact-sheets/detail/cardiovascular-diseases-(cvds) (accessed Jan. 14, 2020).
[2] Blaha, M. J. et al., 'Heart Disease and Stroke Statistics 2017 At-a-Glance', Jan. 2017. Accessed: Mar. 25, 2020. [Online]. Available: https://healthmetrics.heart.org/wp-content/uploads/2017/06/Heart-Disease-and-Stroke-Statistics-2017-ucm_491265.pdf.
[3] Diethrich, E. B., DeBakey, M. E., Oliver, M. F., Godman, M. J., Entman, M. L., Prout, W. G., 'Cardiovascular disease'. Encyclopædia Britannica, Inc., Dec. 06, 2019. Accessed: Mar. 20, 2020. [Online]. Available: https://www.britannica.com/science/cardiovascular-disease.
[4] World Health Organization, 'Botswana leads the way on fighting health threat endangering millions of lives in Africa', WHO | Regional Office for Africa, Sep. 19, 2018. https://www.afro.who.int/news/botswana-leads-way-fighting-health-threat-endangering-millions-lives-africa (accessed Nov. 03, 2020).
[5] World Health Organization, 'Coronary heart disease in Botswana', World Life Expectancy, 2018. https://www.worldlifeexpectancy.com/botswana-coronary-heart-disease (accessed Nov. 03, 2020).
[6] Khan, M. A. An IoT framework for heart disease prediction based on MDCNN classifier. IEEE Access, 2020, 8, 34717–34727. doi: 10.1109/ACCESS.2020.2974687.
[7] Kumar, V. S., Prasad, J., Lakshmi Narasimhan, V., Ravi, S., 'Application of artificial neural networks for prediction of solar radiation for Botswana', in 2017 International Conference on Energy, Communication, Data Analytics and Soft Computing (ICECDS), Aug. 2017, pp. 3493–3501. doi: 10.1109/ICECDS.2017.8390110.
[8] Peter, T. J., Somasundaram, K., 'An empirical study on prediction of heart disease using classification data mining techniques', in IEEE International Conference on Advances in Engineering, Science and Management (ICAESM-2012), Mar. 2012, pp. 514–518. Accessed: Jan. 23, 2020. [Online]. Available: https://ieeexplore.ieee.org/document/6215898.
[9] Hasan, S. M. M., Mamun, M. A., Uddin, M. P., 'Comparative analysis of classification approaches for heart disease prediction', in International Conference on Computer, Communication, Chemical, Material and Electronic Engineering, Sep. 2018, pp. 1–4. doi: 10.1109/IC4ME2.2018.8465594.
[10] Janosi, A., Steinbrunn, W., Pfisterer, M., Detrano, R., 'UCI Machine Learning Repository: Heart Disease Data Set', 2007. https://archive.ics.uci.edu/ml/datasets/heart+disease (accessed Oct. 08, 2020).
[11] Pathan, A. A., Hasan, M., Ahmed, M. F., Farid, D. M., 'Educational data mining: A mining model for developing students' programming skills', in The 8th International Conference on Software, Knowledge, Information Management and Applications (SKIMA 2014), Dec. 2014, pp. 1–5. doi: 10.1109/SKIMA.2014.7083552.
[12] Kohli, P. S., Arora, S., 'Application of machine learning in disease prediction', in 2018 4th International Conference on Computing Communication and Automation (ICCCA), Dec. 2018, pp. 1–4. doi: 10.1109/CCAA.2018.8777449.
[13] Bergerud, W. A. Introduction to Logistic Regression Models: With Worked Forestry Examples, Vol. 7, British Columbia, Ministry of Forests Research Program, 1996.
[14] Mitchell, T. M. Machine Learning, 1st ed., New York, NY, McGraw-Hill, Inc., 1997.
[15] Breiman, L. Random forests. Machine Learning, Oct. 2001, 45(1), 5–32. doi: 10.1023/A:1010933404324.
[16] Cristianini, N., Shawe-Taylor, J. An Introduction to Support Vector Machines and Other Kernel-based Learning Methods, Cambridge, Cambridge University Press, 2000.
[17] Kurama, V., 'A guide to understanding AdaBoost', Paperspace Blog, Feb. 23, 2020. https://blog.paperspace.com/adaboost-optimizer/ (accessed Nov. 04, 2020).
[18] Yan, H., Jiang, Y., Zheng, J., Peng, C., Li, Q. A multilayer perceptron-based medical decision support system for heart disease diagnosis. Expert Systems with Applications, Mar. 2006, 272–281. doi: 10.1016/j.eswa.2005.07.022.
[19] Weitzenblum, E. Chronic cor pulmonale. Heart, Feb. 2003, 89(2), 225–230. doi: 10.1136/heart.89.2.225.
[20] Zhang, W., Yu, L., Ye, L., Zhuang, W., Ma, F., 'ECG signal classification with deep learning for heart disease identification', presented at the 2018 International Conference on Big Data and Artificial Intelligence (BDAI). doi: 10.1109/BDAI.2018.8546681.
[21] Moody, G. B., Mark, R. G., 'MIT-BIH Arrhythmia Database'. physionet.org, 1992. doi: 10.13026/C2F305.
[22] Baccouche, A., Garcia-Zapirain, B., Olea, C. C., Elmaghraby, A. Ensemble deep learning models for heart disease classification: a case study from Mexico. Information, Apr. 2020, 11(4), 207. doi: 10.3390/info11040207.
[23] Tu, M. C., Shin, D., Shin, D., 'Effective diagnosis of heart disease through bagging approach', in 2009 2nd International Conference on Biomedical Engineering and Informatics, Oct. 2009, pp. 1–4. doi: 10.1109/BMEI.2009.5301650.
[24] Yuan, X. et al., 'A high accuracy integrated bagging-fuzzy-GBDT prediction algorithm for heart disease diagnosis', in 2019 IEEE/CIC International Conference on Communications in China (ICCC), Aug. 2019, pp. 467–471. doi: 10.1109/ICCChina.2019.8855897.
[25] Junwei, K., Yang, H., Jun-jiang, L., Zhijun, Y. Dynamic prediction of cardiovascular disease using improved LSTM. Emerald Group Publishing, May 2019, 3, 14–25. doi: 10.1108/IJCS-01-2019-0002.
[26] Li, J. P., Haq, A. U., Din, S. U., Khan, J., Khan, A., Saboor, A. Heart disease identification method using machine learning classification in e-healthcare. IEEE Access, 2020, 8, 107562–107582. doi: 10.1109/ACCESS.2020.3001149.
[27] Peng, H., Long, F., Ding, C. Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. IEEE Transactions on Pattern Analysis and Machine Intelligence, Aug. 2005, 27(8), 1226–1238. doi: 10.1109/TPAMI.2005.159.
[28] Unler, A., Murat, A., Chinnam, R. B. mr2PSO: A maximum relevance minimum redundancy feature selection method based on swarm intelligence for support vector machine classification. Information Sciences, Oct. 2011, 181(20), 4625–4641. doi: 10.1016/j.ins.2010.05.037.
[29] Tibshirani, R. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society: Series B, 1996, 58(1), 267–288. Accessed: Nov. 11, 2020. [Online]. Available: https://www.jstor.org/stable/2346178.
[30] Harrell, F. E., Jr. Ordinal logistic regression. In: Regression Modeling Strategies: With Applications to Linear Models, Logistic and Ordinal Regression, and Survival Analysis, Cham, Springer International Publishing, 2015, 311–325.
[31] Fleuret, F. Fast binary feature selection with conditional mutual information. Journal of Machine Learning Research, Dec. 2004, 5, 1531–1555.
[32] Karaolis, M. A., Moutiris, J. A., Hadjipanayi, D., Pattichis, C. S. Assessment of the risk factors of coronary heart events based on data mining with decision trees. IEEE Transactions on Information Technology in Biomedicine, May 2010, 14(3), 559–566. doi: 10.1109/TITB.2009.2038906.
[33] Mu, Y., Huang, M., Ye, C., Wu, Q., 'Diagnosis prediction via recurrent neural network', Apr. 2018. doi: 10.18178/ijmlc.2018.8.2.673.
[34] Javeed, A., Zhou, S., Yongjian, L., Qasim, I., Noor, A., Nour, R. An intelligent learning system based on random search algorithm and optimized random forest model for improved heart disease detection. IEEE Access, 2019, 7, 180235–180243. doi: 10.1109/ACCESS.2019.2952107.
[35] Ali, L. et al. An optimized stacked support vector machines based expert system for the effective prediction of heart failure. IEEE Access, Apr. 2019, 7, 54007–54014. doi: 10.1109/ACCESS.2019.2909969.
[36] Jabbar, M. A., Samreen, S., 'Heart disease prediction system based on hidden naïve Bayes classifier', in 2016 International Conference on Circuits, Controls, Communications and Computing (I4C), Oct. 2016, pp. 1–5. doi: 10.1109/CIMCA.2016.8053261.
[37] Chang, W., Liu, Y., Wu, X., Xiao, Y., Zhou, S., Cao, W. A new hybrid XGBSVM model: Application for hypertensive heart disease. IEEE Access, 2019, 7, 175248–175258. doi: 10.1109/ACCESS.2019.2957367.
[38] Sakhnovich, A., 'On the GBDT version of the Bäcklund-Darboux transformation and its applications to the linear and nonlinear equations and Weyl theory', arXiv:0909.1537 [math-ph], Sep. 2009. Accessed: Nov. 11, 2020. [Online]. Available: http://arxiv.org/abs/0909.1537.
[39] Ali, L., Rahman, A., Khan, A., Zhou, M., Javeed, A., Khan, J. A. An automated diagnostic system for heart disease prediction based on χ² statistical model and optimally configured deep neural network. IEEE Access, 2019, 7, 34938–34945. doi: 10.1109/ACCESS.2019.2904800.
[40] Parveen, A. N., Inbarani, H. H., Sathishkumar, E. N. Performance analysis of unsupervised feature selection methods. In 2012 International Conference on Computing, Communication and Applications, Feb. 2012, pp. 1–7. doi: 10.1109/ICCCA.2012.6179181.
[41] Aman, A., 'Framingham Heart Study dataset', Jul. 11, 2017. https://www.kaggle.com/dileep070/heart-disease-prediction-using-logistic-regression (accessed Mar. 17, 2020).
[42] 'scikit-learn: machine learning in Python – scikit-learn 0.23.2 documentation'. https://scikit-learn.org/stable/ (accessed Nov. 04, 2020).
[43] Tan, R. S., Lakshmi Narasimhan, V. Time complexity analysis of neural networks on message passing multicomputer systems. International Journal of Engineering Intelligent Systems for Electrical Engineering and Communications, Sep. 1999, 7(3), 137–144. Accessed: Oct. 28, 2020. [Online]. Available: https://www.researchgate.net/publication/290017674_Time_complexity_analysis_of_neural_networks_on_message_passing_multicomputer_systems.
[44] O'Shea, K. T., Nash, R., 'An introduction to convolutional neural networks', ResearchGate, Dec. 2015. Accessed: Sep. 11, 2020. [Online]. Available: https://www.researchgate.net/publication/285164623_An_Introduction_to_Convolutional_Neural_Networks.
[45] Shukla, N. Machine Learning with TensorFlow, Manning, 2017.
[46] Anandajayam, P., Krishnakoumar, C., Vikneshvaran, S., Suryanaraynan, B., 'Coronary heart disease predictive decision scheme using big data and RNN', in 2019 IEEE International Conference on System, Computation, Automation and Networking (ICSCAN), Mar. 2019, pp. 1–6. doi: 10.1109/ICSCAN.2019.8878765.

Ana Carolina Borges Monteiro, Reinaldo Padilha França, Rangel Arthur, Yuzo Iano

An overview of the technological performance of deep learning in modern medicine

Abstract: Artificial intelligence is a technology that uses several layers of data and information, encompassing algorithms, machine and deep learning models, pattern matching, and cognitive computing, learning to digitally assimilate data and gain insights on diagnoses, the variability of medical treatment, and patient outcomes with machine learning (ML). ML programming models are trained on data sets before being implemented, with the ability to create their own rules or questions; they automatically and gradually improve in efficiency, accuracy, and precision with the number of experiments on which these models (algorithms) are trained. Deep learning (DL) replicates the basic structure of biological neurons using complex algorithms, enabling predictive digital health and allowing logical and complex structures to be established without the need for human supervision. In other words, it is a digital architecture that studies, understands, and learns how to interpret information related to the object of study, helping, for example, to identify a tumor more accurately, considering that one does not need to be an expert in the field to perform this type of identification through technology. More pertinent diagnostic recommendations are another example of the application of this type of technology, which can be widely used in various health segments. Because it gives the doctor and the patient more accurate and faster diagnoses, more assertive treatments, better-calculated risks, and the possibility of detecting infections, abnormalities, and diseases in a matter of seconds, the use of deep learning in health is well accepted for reducing the margin of human error. Therefore, this manuscript aims to provide an updated overview of DL, concerning its essential concepts in the field of medicine, as well as its relationships with frameworks with meaningful intelligent properties, enabling cost-effective personalized digital services as a primary goal in healthcare in the current modern scenario.

Ana Carolina Borges Monteiro, School of Electrical Engineering and Computing (FEEC) – State University of Campinas (UNICAMP), Campinas, São Paulo, Brazil, e-mail: [email protected]
Reinaldo Padilha França, School of Electrical Engineering and Computing (FEEC) – State University of Campinas (UNICAMP), Campinas, São Paulo, Brazil, e-mail: [email protected]
Rangel Arthur, Faculty of Technology (FT) – State University of Campinas (UNICAMP), Limeira, São Paulo, Brazil, e-mail: [email protected]
Yuzo Iano, School of Electrical Engineering and Computing (FEEC) – State University of Campinas (UNICAMP), Campinas, São Paulo, Brazil, e-mail: [email protected]
https://doi.org/10.1515/9783110708127-011


Ana Carolina Borges Monteiro et al.

Keywords: Healthcare, digital image, artificial intelligence, deep learning, image processing, healthcare informatics, cognitive healthcare

1 Introduction

Artificial intelligence (AI) technology allows the machine to do "a lot" for the human being: under the AI umbrella there is machine learning (ML), and within ML there is a technology called deep learning (DL). Its applications in several industry segments, such as computer vision for driverless cars, image recognition, and natural language processing, among others, are based on the concept of neural networks, which try to "emulate" the human brain [1].

DL is a branch of ML that allows the computer to think and learn like a human being. In this sense, it is possible to put the "machine" in the doctor's place to make diagnoses of diseases more accurately than the doctors themselves. This is advantageous, as it can reduce occasional medical errors and the need for a second opinion [2, 3].

ML is an area of computer science that has grown out of research on digital pattern recognition and computational learning theory in AI; it explores the structuring of models and algorithms with learning properties that can make predictions and inferences about data [4]. ML is an AI sub-area responsible for one of the greatest revolutions in human history, as machines are taught to make decisions and predictions in an increasingly simple way, through increasingly accurate systems and tools. ML has been successfully used in several areas. In medicine, it allows making predictions about specific diseases and treatments in a personalized way and in a short time; currently, one of the main advantages of ML in medicine is in the area of diagnostics, offering greater and faster accuracy both in analyzing exams and in delivering results [5]. In this context, the ML process, broadly speaking, allows machine algorithms to learn instead of being programmed manually. In medicine, this technology has been employed to analyze lung images and help better diagnose and estimate the prognosis for cancer by identifying hundreds of new features in these medical images [6].

DL uses algorithms inspired by the functioning and structure of the human brain, known as artificial neural networks (ANN), which allow the computer to think and learn like a human being. Digital image recognition is presumably one of the best examples of how far DL technology has advanced, since the error rate of its image recognition algorithms has become lower than that of humans performing the same task [7]. Radiology is a medical discipline often elected for testing DL technology, but there are other areas of medicine where this technology can be used, for example for fracture or cancer detection. It is stipulated that future healthcare will become a software business [8].


Conditions such as diabetic retinopathy can be diagnosed with a higher accuracy rate than that of a human with the help of DL, which analyzes the retinal fundus image and, with greater precision, directs the patient to the most appropriate treatment, with the provision of drugs and the allocation of doctors. Thus, medicine has a lot to gain from the use of DL in terms of efficiency and effectiveness, with disease prevention, research into new drugs, and the reduction of diagnoses made based on the effect rather than the cause of the problem. The secret to this, however, lies in the continuous improvement and refinement of cognitive computing-based algorithms [9].

In the future, humans will not tell computers specifically what to do but will provide examples of what they want computers to achieve, with the expectation that computers will find out how to do it by themselves, thanks to the hands-on work toward success in this new technological wave related to DL [10].

Thus, it is clear that one of the most promising areas for the use of DL technology is medicine. It is possible to use this technology for everything from digital diagnosis through image recognition to the retrieval of unstructured information from electronic patient records. The possibilities of using DL include image processing and recognition, the classification and marking of patterns, digital radiology analysis, culture analysis, analysis of adherence to clinical medication, sizing and suitability of clothing and accessories, and a guarantee of automation quality, among other features and possibilities [11].

With faster diagnostics, more assertive treatments, and better-calculated risks, DL has become a great ally of doctors and patients. The constant publication of news about new discoveries and implementations shows that studies have been done in several related areas, including ophthalmology, cardiology, pathology, and radiology. Currently, the technology is already trained to analyze images, identify problem points, and inform professionals about the areas that need more attention [3, 12].

In addition to agility, the use of DL in health is well accepted for reducing the margin of human error, and in trying to remedy some of these complications it has fulfilled this role with mastery. Among the technological expectations derived from predictive modeling are the ability to identify molecules with the potential to be transformed into drugs that act on biological processes; more precise and detailed screening of patients, including genetic information; and follow-up of laboratory tests done in real time, with results obtained quickly, avoiding unnecessary delays or investments, among others [13].

Therefore, this manuscript aims to provide an updated overview of DL, concerning its essential conception in the field of medicine, from diagnoses of diseases and tumors to personalized medicines created specifically for an individual, as well as its relationships with frameworks with meaningful intelligent properties, enabling cost-effective personalized digital services as a primary goal in healthcare in the current modern scenario, with a concise bibliographic background synthesizing the potential of intelligent technology.


2 Intelligent medical technology 2.1 AI concept AI is a technology standardized according to the neural network of the brain that uses several layers of information, encompassing algorithms, machine and also DL models, pattern matching, and even cognitive computing, learning to digitally assimilate data, gaining insights on diagnoses, or even variability of medical treatment, as also exam patient results with ML support [1, 14]. AI represents a set of software, logic, computing that aims to make computers, machines, and devices perform human functions, considering that an efficient AI solution can “think” faster, recognize facial expressions, or even the meaning in written or spoken language, and process more information than any human brain. However in AI, it is invariably necessary to bring enough computing power to support a reasoning task of the machine. No matter which AI sector or ML techniques are applied, this will be done by combining algorithms designed to create machines that have the same capabilities as humans, both in automating data analysis or in making better business decisions [14]. The use of AI in the field of medicine is through tools that can discover significant resemblance and connection in raw data with the competence to be employed in most fields of medicine, solving complex issues that would be hard, time-consuming, or inefficient for human medical professionals to solve on their own. It also includes treatment decisions, drug development, operational decisions, financial and even patient care, acting as a precious digital resource for medical professionals, allowing these professionals to better employ their experience and add value to the healthcare ecosystem [15]. The advantages of AI in medicine are linked to the potential that technology has in being able to extract significant information from a huge volume of data generation that could be employed in several applications. 
Through this, it is possible to reveal treatment insights by finding knowledge in the unstructured medical literature, supporting treatment decisions. AI tools support user needs and make it possible to search data, helping doctors and specialists find comprehensive health information and leading to better-informed clinicians [16]. These tools can search through unstructured and structured medical records (medical prescriptions, among others), providing the relevant medical history of patients. Besides revealing similarities and patterns in these datasets, they make it possible to recognize patterns that assist researchers in designing dynamic studies and clinical trials [17].

AI-oriented systems are developed to handle the complex information and data generated by modern clinical healthcare, considering predictive models that deal with a wide range of variables in well-organized

An overview of the technological performance of deep learning in modern medicine

healthcare data. Sophisticated ML uses ANN to learn extremely complex similarities and connections, while DL technologies offer digital support that exceeds human skills in performing a few tasks related to medicine [15, 17].

The importance of AI in medicine stems from the fact that it provides contextual relevance: the technology enables healthcare institutions to quickly interpret a huge volume of data, including text data and digital images, which are used to detect anomalies in medical images such as CT or MRI scans and to identify contextually significant information for the patient in an individual, customized way. While AI algorithms do not suffer fatigue or varying moods (compared to human beings), processing huge volumes of data with high velocity and precision is costly [18].

Thus, AI can read the computed tomography exams used to track patients with lung cancer; where late detection, an incorrect diagnosis, or an unnecessary invasive procedure would otherwise cause complications, AI can help avoid this type of occurrence. Even for biopsies performed to confirm whether a simple spot on a patient's skin is a cause for concern, AI applied in this context achieves greater accuracy than diagnoses performed by specialists in the field of dermatology [19, 20].

Also worth highlighting is the well-known use of AI for patient monitoring through the various devices and applications used to track health status and changes in the patient's profile. Smartwatches, bracelets, and other smart wearables track indicators such as blood pressure, blood glucose spikes, and heart rate, and the data generated by these devices is processed by an AI, producing useful information for the physician to assess the patient's health [21].
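As a toy illustration of the wearable monitoring described above, the sketch below flags heart-rate readings that deviate sharply from a recent window. The window size, threshold, and readings are invented for illustration; real monitoring pipelines use far richer models.

```python
from statistics import mean, stdev

def flag_anomalies(heart_rates, window=5, z_threshold=2.5):
    """Flag readings that deviate sharply from the recent window.

    A toy z-score detector; thresholds here are illustrative only.
    """
    alerts = []
    for i in range(window, len(heart_rates)):
        recent = heart_rates[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(heart_rates[i] - mu) / sigma > z_threshold:
            alerts.append(i)
    return alerts

readings = [72, 74, 71, 73, 75, 74, 72, 118, 73, 74]
print(flag_anomalies(readings))  # → [7]  (the spike at index 7)
```

A flagged index would then be forwarded, with context, to the physician rather than acted on automatically.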
Storage and processing of data is one of the main uses of AI in medicine, considering the speed of storing and processing the data and of carrying out a specific analysis, as in the case of an examination result. In the same way, all files can be digitized, facilitating their organization, digital protection, and access, given the sensitivity of these data [22].

The benefits of AI for medicine have increasingly been realized through techniques such as DL, since it allows the extraction of insights even from unstructured data. The main gains of its application are directed to the treatment of diseases, alerts regarding the patient's clinical condition through the identification and association of symptoms, and the precision of the diagnostic result; that is, the adoption at scale of AI-based technologies will allow more patients to access higher-quality health services. In developing countries, where countless regions lack specialized medical care, this type of technology with intelligent tools will give patients in these regions access to extremely high-precision diagnoses [23].
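As a minimal sketch of extracting structure from unstructured records, the snippet below pulls drug and dose out of free-text notes with a regular expression. The note format and the "Rx:" convention are assumptions made up for the example, not a real record format.

```python
import re

# Hypothetical free-text clinic notes (invented for illustration).
notes = [
    "Pt reports chest pain. Rx: atenolol 50 mg daily. Hx: hypertension.",
    "Routine follow-up. Rx: metformin 850 mg twice daily. Hx: type 2 diabetes.",
]

# Named groups capture the drug name and the dose with its unit.
rx = re.compile(r"Rx:\s*(?P<drug>[A-Za-z]+)\s+(?P<dose>\d+\s*mg)")

def extract_prescriptions(texts):
    """Pull (drug, dose) pairs out of unstructured note text."""
    found = []
    for text in texts:
        m = rx.search(text)
        if m:
            found.append((m.group("drug"), m.group("dose")))
    return found

print(extract_prescriptions(notes))
# → [('atenolol', '50 mg'), ('metformin', '850 mg')]
```

Real systems replace the hand-written pattern with trained language models, but the goal is the same: structured history out of free text.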

2.2 Machine learning concept

ML employs algorithms on computers with the properties and characteristics to learn toward expected results from a combination of distinct data; that is, machines gather information, receive it or search for it on the network, and check this data, which can be digital images, numbers (sequences, vectors, ciphers, and derivatives), or any other computable information the technology can identify, arriving at a result autonomously and faster than a human would [5, 6, 23].

The digital structure employed in ML programming is distinct from conventional programming, from the data to be examined, interpreted, and translated to the expected outcomes (results): at the end of the processing, an ML-oriented system has the capacity to create its own rules or questions. In the conventional method, a set of rules and a logical sequence are developed, generating a response (outcome) from this processing. ML models are trained on datasets before being implemented, and they automatically and gradually improve their efficiency, accuracy, and precision with the number of experiments in which these models (algorithms) are placed for training [5, 6, 23].

Training is an iterative process that improves the classes of associations between elements and data present in vast quantity (the dataset); it can also be used in real-time learning, with the data showing greater accuracy in results over time. Given this vast volume of analyzed data, finding the connections and patterns by human effort alone would be ineffective without support from ML technologies [24–26].
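The iterative training process described above can be sketched with a tiny logistic-regression classifier trained by gradient descent on synthetic data: accuracy improves as the weights are refined over repeated passes. All data and hyperparameters are illustrative.

```python
import math
import random

random.seed(0)

# Toy labeled dataset: label 1 when the two features sum above 1
# (a stand-in for e.g. "exam abnormal"); purely illustrative.
points = [[random.random(), random.random()] for _ in range(200)]
data = [(x, 1 if x[0] + x[1] > 1 else 0) for x in points]

w, b, lr = [0.0, 0.0], 0.0, 0.5

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))      # logistic output in (0, 1)

def accuracy():
    return sum((predict(x) > 0.5) == (y == 1) for x, y in data) / len(data)

for _ in range(50):                    # each pass nudges the weights a little
    for x, y in data:
        err = predict(x) - y           # gradient of the log-loss wrt the logit
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

print(round(accuracy(), 2))            # accuracy approaches 1.0 after training
```

The point is the loop itself: the model is not told the rule, it converges toward it by repeated correction of its own errors.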
In reinforcement learning, the algorithm generally discovers patterns and correlations (hidden insights that are not explicit) in the volume of data through trial-and-error strategies, identifying which actions generated the best rewards; it is basically a behavioral learning model. This type of learning is not guided by a sample dataset on which the system is trained. Instead, it learns through trial and error a sequence of correct decisions that is reinforced in the process. Examples of this type of learning are solutions oriented to robotics, games, and navigation [24–26].

Supervised ML relies on labeled examples, usually with a settled dataset and an understanding of how this data is classified, that is, a set of inputs together with the matching correct outputs. This type of learning aims to find similarities and patterns in voluminous data that can be employed in an analytical process; an example is an ML-oriented solution that distinguishes between millions of animal pictures based on written descriptions and digital images. The label features determine the meaning of the data, and the ML algorithm learns by comparing its present output with the correct outputs (supervised learning) to find errors [24–26].

Unsupervised machine learning is applied to data that does not have historical labels; it is employed when the issue requires a large volume of unlabeled data,
that is, the machine learning solution does not know the "right answer" in this case. Social media applications exemplify this type of learning: they hold an extensive volume of unlabeled data, whose meaning the ML solution understands through classification based on the clusters or patterns found. Similarly, there are many variables distinguishing legitimate emails from spam, and ML classifiers based on clusters and membership are employed to identify and classify malicious emails. This type of learning is an iterative process, without human intervention, whose objective is to explore the data and find some structure in it; it works well with transactional data [27].

ML offers potential value for large volumes of data by helping to better comprehend subtle changes in user behavior, preferences, or satisfaction, as well as hidden patterns and anomalies in datasets, and from there to recognize and extract patterns from large volumes of data, building a learning model that also serves for the prediction (insights) of results. Therefore, it is possible to define this intelligent technique as a field of study that gives computers, machines, and devices the digital ability to learn without being explicitly programmed; that is, it is a way to perform tasks better in the future based on experiences of the past [5, 27].

In this sense, considering the growing volume and variety of data available in the most diverse sectors of society, and considering that computational processing is cheaper and more powerful and that data storage is accessible, it is possible to readily produce intelligent models that allow the analysis of more complex data, providing faster and more accurate results.
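The unsupervised clustering idea above can be illustrated with a minimal k-means over unlabeled one-dimensional values: the algorithm finds the two groups without ever being told the "right answer". Data and parameters are invented for the sketch.

```python
import random
from statistics import mean

random.seed(1)

def kmeans_1d(values, k=2, iters=20):
    """Minimal unsupervised clustering: no labels, structure found by iteration."""
    centers = random.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[nearest].append(v)
        # Move each center to the mean of the points assigned to it.
        centers = [mean(c) if c else centers[i] for i, c in enumerate(clusters)]
    return sorted(centers)

# Two obvious groups (e.g. resting vs. elevated heart rates), no labels given.
values = [61, 64, 62, 66, 63, 110, 114, 112, 108, 111]
print(kmeans_1d(values))   # centers settle near the two group means
```

No historical labels were supplied; the structure emerged from the data alone, which is the defining trait of this learning family.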
This results in high-value insights that lead to intelligent actions without human intervention and better decisions in real time, that is, much more accuracy and a more qualified, assertive investment [28, 29].

The proposal of ML in health derives from the fact that its fundamentals and applications are notable and efficient in the clinical environment: it brings precision to care and digital medical diagnosis, forecasts the patient's prognosis, and measures the main therapeutic conducts, in order to considerably minimize the chances of human medical error. The potential of machine learning is based on a compilation of digital data and data inserted by users, facilitating decision-making; likewise, data generated by equipment can provide more accurate conclusions than if each source were analyzed in isolation, bringing more security to the health professional when making decisions for the patient, as well as cross-checking patients' clinical data with the epidemiological references obtained from governmental health sites and formulating hypotheses about a clinical event, as seen during the COVID-19 epidemic [29, 30].

ML can be applied by means of a tool that analyzes digital images and infers from the data of individuals with diabetes the possibility of their developing diabetic retinopathy, a condition that can lead to blindness and significantly compromise quality of life. Other fields of medicine such
as cardiology, pathology, and radiology are also investigating the application of ML techniques to identify flaws in the interpretation of the images obtained [31]. With the full insertion of technological resources into the health field, such as the integration of digital systems and the statistical study of clinical problems, the ML tool provides differentiated learning for patient care professionals, reducing the chances of diagnostic and conduct errors and facilitating faster communication between all those involved in the health chain [32–34].

Through this digital integration, it is also possible to bring together people who are geographically far apart by means of more efficient communication resources, and to predict long-term clinical results on a scientific and statistical basis. This technology is a revolution in the fields related to diagnosis, follow-up, and the establishment of clinical conducts, and even an aid to survival, with more assertive treatments for patients with little expectation of cure, since it makes it possible to take advantage of the available clinical and technological resources and to constantly evaluate the effectiveness of treatments or infer the necessary changes to them [35–37].

2.3 Deep learning concept

The technique known as DL replicates the basic structure of our neurons, allowing logical and complex structures to be established without the need for human supervision. Just like human intelligence, the neural networks behind a DL architecture need to study, understand, and learn how to interpret information related to the proposed object of study, helping the human medical professional to identify a tumor more accurately, for example, even when he/she is not an expert in the field required to perform this type of identification [38].

DL uses complex algorithms that obtain digital knowledge, imitating the neural network of the human brain, providing advanced facial and voice recognition, and allowing predictive digital health. More pertinent diagnostic recommendations are an example of the application of this type of technology, which can be widely used in various health segments [3, 39].

DL is a specific ML method that encompasses ANN in successive layers; it is especially useful when trying to learn unstructured data patterns iteratively, dealing with ill-defined abstractions and problems, and is often used for digital image recognition, digital speech recognition, and computer vision. DL is a specialization of ML, as a DL model is constantly learning without the need for human intervention. With that, it unites all the knowledge obtained as input and processes it in search of an answer (insight), or output, at a speed no human can achieve [2, 39].

In addition, DL has the property of "learning" according to the context in which it is used, through neural networks performing speech recognition, computer vision, and language processing. It is already used, mainly in the health industries, powered by the multitude of data generated at all times, thus being able to decipher the
natural language and relate terms, generating meaning. DL uses algorithms that do not require pre-processing and automatically generate properties that are invariant across their hierarchical representation layers; since these layers apply nonlinear transformations to the data, they allow a complex and abstract representation that forms an ordered classification [40].

The high performance possible with DL makes it possible to quickly identify problems and patterns in data, optimizing decision-making and reporting, in addition to reducing the need for rework and manual operational activities. In this way, the productivity of employees increases considerably; the self-learning capacity of DL also brings more precise results and faster processing, resulting in a great competitive advantage [41].

The versatility of DL is a great attraction, since it is applicable for various purposes, whether in the corporate environment or in industries in segments such as health, education, and e-commerce. In short, with sufficient computational power, machines can recognize objects in digital images and translate voice in real time, thus making AI applicable to the routine and business of any segment [39, 41].

A digital model trained on the principles of DL can identify anything from faces, for facial recognition, to features that indicate diseases such as cancer and tumors in medical digital imaging exams. This is processed by neural networks organized in "layers" on which the algorithms build their representation: the pixels of an image, for example, form the first layer, and the output of each layer becomes the input of the next. In this sense, building many layers of this neural network is the basic principle of DL algorithms [42].
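The layer-on-layer principle can be made concrete with a tiny forward pass in which the output of one fully connected layer becomes the input of the next. The weights here are hand-set assumptions; a real deep network stacks many more layers and learns its weights from data.

```python
def relu(v):
    """Elementwise nonlinearity: negative activations are clipped to zero."""
    return [max(0.0, x) for x in v]

def dense(inputs, weights, biases):
    """One fully connected layer: each output mixes every input."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

# Hypothetical hand-set weights for a tiny 2-layer network (illustrative only).
layer1_w, layer1_b = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1]
layer2_w, layer2_b = [[1.0, 2.0]], [-0.5]

x = [0.8, 0.3]                             # raw input (e.g. pixel intensities)
h = relu(dense(x, layer1_w, layer1_b))     # layer 1 output...
y = dense(h, layer2_w, layer2_b)           # ...becomes layer 2 input
print(y)
```

Depth comes from repeating the `h = layer(h)` step; the nonlinearity between layers is what lets the stack represent more than a single linear map.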
A neural network is a weighted connectionist paradigm; that is, the ability to develop intelligent models lies in the connections of a significant number of artificial neurons (and not in the neurons themselves), and the arithmetic operations grow exponentially in deep networks. What differentiates a simple ANN from a DL-oriented neural network is the significantly larger number of artificial neurons and connections: a simple neural network has up to five layers, whereas a deep neural network has more than five. The processing capacity required to train these deep networks, depending on the neural network architecture used, creates the need for parallel processing techniques and for a GPU (graphics processing unit) [43].

DL can be used in the most diverse applications, but the most common are digital image processing and NLP, with applications in medicine aimed at the recognition of medical images related to breast cancer, Alzheimer's disease, cardiovascular diagnosis, skin cancer, or stroke. It also comprises applications that help in the development of genomic drugs; its applications are widely varied, and its results and achievements are promising in terms of revolutionizing the way AI has been influencing decision-making in the modern world [44].

CNNs manage to obtain superior performance in tasks of high complexity, enabling autonomous cars, video classification solutions, and more. Besides, CNNs
are not restricted to visual digital perception (computer vision over digital images); they are also efficacious in tasks aimed at natural language processing (NLP). The most important building block of this type of DL network is the convolutional layer, in which digital neurons are not connected to every pixel of the input image, but only to the pixels in their receptive areas. These artificial neurons are organized in three dimensions (height, width, and depth); note that depth here refers to the third dimension of an activation volume, not to the depth of a complete neural network, which is the total number of layers belonging to a specific network [45].

Since each artificial neuron in sequential convolutional layers is connected only to specific artificial neurons in the previous layer, a CNN focuses on low-level features in the first hidden layer and then gathers them into higher-level features in the following hidden layers, sequentially. This hierarchical structure is common in real digital images, which is why CNNs work so well in image recognition. A simple CNN is thus composed of a sequence of layers, where each layer transforms one volume of activations into another through a differentiable function, using a few types of layers such as the convolutional layer, flatten, dropout, pooling layer, and fully connected layer [43, 45].

Thus, DL models are trained to gain insights into one or more clinical events and to summarize relevant past medical events (patient health problems, conditions, test results, medications, and treatments, among other aspects) at a given patient's entry into a healthcare institution [44, 45].
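A minimal sketch of the convolutional layer's receptive-field idea: each output value depends only on a small patch of the input, and an edge-detecting kernel responds strongly where intensity changes. Image and kernel values are illustrative, and the sliding-window product below is the cross-correlation form used by most CNN libraries.

```python
def conv2d(image, kernel):
    """Valid 2-D convolution: each output neuron sees only a small
    receptive field of the input, never the whole image."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A vertical-edge kernel applied to a tiny image with a bright right half.
image = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
kernel = [[-1, 1],
          [-1, 1]]
print(conv2d(image, kernel))
# → [[0, 18, 0], [0, 18, 0], [0, 18, 0]]
```

The strong responses line up exactly with the intensity edge; stacking such layers with learned kernels is how a CNN builds its feature hierarchy.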

3 Performance of technology in medicine

With AI, more objective data evaluation is possible, allowing for reduced time and greater efficiency and effectiveness in the evaluation of health processes. AI, ML, and DL techniques will not reach only a single specialty but will impact the whole of medicine, resulting in a digital health revolution that encompasses the need to empower patients' health care and extend better treatment to them [46].

Through data collected from wearable sensors capable of transmitting vital patient data to the doctor's smartphone, warning the professional (even remotely) when something is not right, AI allows the general practitioner to be a caregiver of the patient (and not of the disease only), personalizing care and evidencing the need for health data in a digital analysis free of the bias of each patient's routine activities [47].

In obstetrics, also through the use of sensors, AI can monitor the vital signs of mother and baby, ensuring that the necessary care is provided quickly in emergencies. In radiology, cognitive computers with AI can scan thousands of radiological images in seconds, each scan more assertive than the earlier one. In oncology, AI will allow personalized medicine through the
possibility of genetic sequencing and the study of biomarkers; tumors may be diagnosed early without the need for a surgical procedure, and more precision is achievable in the dose of chemotherapy for each patient, reducing side effects [48–51].

In dermatology, through skin images and AI algorithms that detect skin cancer, a broader and faster assessment of population groups becomes possible, leading to the diagnosis and treatment of early disease painlessly and at relatively low cost. Infectious diseases can be fought with new antibiotics that emerge from ML analysis, as in the case of Halicin for infections by multiresistant bacteria. In surgery, surgeons can achieve a better level of precision and efficiency through ML-supported robotic arms that take the place of the surgeon's hands during operations [48–51].

With regard to geriatrics, with the advent of AI it is natural for average life expectancy to increase, whether because quality of life improves, because chronic diseases are understood in more detail, because medications that prolong life are discovered, or because disabling diseases are prevented. Intensive care is aided by AI through ML that understands the physiological, pathological, and laboratory data generated every millisecond for a patient in the ICU, allowing the prediction of conditions that would take patients to an intensive care unit, or allowing them to be treated in a more refined and complete way [48–51].

In the development of algorithms, it is possible to achieve more accuracy in medical imaging exams, such as MRI exams with automatic segmentation of tumor lesions, identifying the lesions, mapping them, and delimiting them, contributing to the accuracy of the medical report.
Algorithms can also analyze magnetic resonance images and indicate whether or not prostate cancer is present, and its malignancy, or interpret skull (brain) resonances and separate those that present alterations, prioritizing them for analysis by a neuroradiologist [48–51].

The personalization of care and patient care uses an increasing amount of data, making it impossible for a doctor to analyze everything in real time. Big data technology with AI makes it possible to obtain a more complete history, aiming at a database that reveals trends and history and supports the tracking and monitoring of diseases and risk factors, supporting large-scale preventive actions and helping doctors get to know each individual better and choose the procedures, medications, and recommendations that best suit their profile [52, 53].

In exams transmitted via telemedicine, ML can be used to screen patients and place emergencies first in the analysis queue, whether by digital reading compared to other exams or by visual recognition to filter exams with possibly critical diagnoses; when analyzing exam data and images, the AI learns, using the available information to improve itself automatically, and can distinguish urgencies in more critical exams, helping doctors [54].
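The screening-queue idea above can be sketched with a priority queue ordered by a risk score. The exam identifiers and scores are hypothetical stand-ins for the output of an upstream model.

```python
import heapq

# Hypothetical risk scores from an upstream screening model (made up);
# Python's heapq is a min-heap, so risk is negated to pop highest first.
incoming = [("exam_021", 0.35), ("exam_022", 0.91),
            ("exam_023", 0.12), ("exam_024", 0.78)]

queue = []
for exam_id, risk in incoming:
    heapq.heappush(queue, (-risk, exam_id))

reading_order = []
while queue:
    neg_risk, exam_id = heapq.heappop(queue)
    reading_order.append(exam_id)      # most critical exams come out first

print(reading_order)
# → ['exam_022', 'exam_024', 'exam_021', 'exam_023']
```

Arrival order no longer dictates reading order; the likely emergencies jump the queue, which is exactly the triage behavior described above.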

With text and image analysis tools provided by AI solutions with ML techniques capable of analyzing data and detecting anomalies in exams, it is also possible to report almost immediately whether exam quality is compromised and whether the exam needs to be redone. This is especially useful, as the doctor does not waste time on an exam that will not provide relevant information and can avoid recalling patients to repeat it [24, 25, 54].

Through an ML tool, it is possible to communicate via chatbot about guidance, referrals, and diagnoses; medical appointments can be scheduled online in a cognitive AI system working 24 hours a day, every day, which can include teleconsultation with the doctor and sending prescriptions to the patient [24, 25, 54].

ML tools in health can act as assistants in the first aid services of health institutions, making the screening of patients more accurate and detailed, including the follow-up of laboratory tests, with results obtained quickly and unnecessary delays avoided; through predictive modeling, they can identify the patients needing the highest priority of care, optimizing numerous processes and, consequently, improving the quality of care of those who are waiting for treatment [55].
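A crude sketch of the exam-quality check described above: a nearly uniform image has low contrast and is flagged for redoing. The contrast metric and threshold are illustrative assumptions, not a clinical criterion.

```python
from statistics import pstdev

def quality_flag(pixels, min_contrast=5.0):
    """Crude proxy: a nearly uniform image has low pixel spread and is
    likely unusable; the threshold here is illustrative only."""
    flat = [p for row in pixels for p in row]
    return "redo" if pstdev(flat) < min_contrast else "ok"

sharp = [[10, 200], [190, 20]]          # strong contrast, readable
washed_out = [[120, 122], [121, 119]]   # nearly uniform, likely unusable
print(quality_flag(sharp), quality_flag(washed_out))
# → ok redo
```

Flagging the exam before the patient leaves is what saves the recall; the real check would of course use learned features rather than a single statistic.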

4 Discussion

From a historical point of view, in past centuries countless people died of diseases that today are controlled and cured with antibiotics and vaccines, and a world more prone to wars and violence provided many of the starting points for the need for surgery. Even a few decades ago, video laparoscopic surgery was almost surreal, and removing a gallbladder through this type of resource was considered inconceivable. This context demonstrates that the medical sciences have evolved in exponential steps over time; in recent years in particular, this development has been supported by technology, in comparisons, analyses, and evidence, bringing resources that favor digital progress.

The technological revolution should not be seen as the dehumanization of medicine; quite the contrary, it brings resources that generate new digital technological instruments, such as the intelligent techniques ML and DL, which increase the security and resolution of the medical problems currently faced. It is necessary to understand AI within an evolutionary context, considering algorithms and models that resemble human intelligence: given the complexity of artificial neural systems and learning processes and the immensity of available information, it provides digital recognition of images.

These DL and AI techniques have been used in modern hospitals, where they are more accurate than human assessment for diagnostics in the medical field; these institutions are generally equipped with medical devices that gather and share large amounts of
data applied to clinical analysis using AI in digital information systems. This ranges from urgent cases that require quick medical intervention, using DL for rapid data analysis so that, within a few minutes, a medical measure can be taken to avoid potential sequelae and even the death of patients, to the identification of medical complications such as cardiovascular risks, breast lesions, and spinal problems, among countless other examples from digital images, acting in the prediction of diseases.

The revolutionary leap in the dynamics of health care enabled by the expanded capabilities of AI is notable, even though there is still no consensus on what is expected from the technology. IT capacity has long been exploited for the financial and administrative management of health institutions, but the same correlation had not yet been found in the area of clinical data management, given the variability in the quality of the data and even in the form of the medical report, with handwritten prescriptions that previously prevented standardization; these obstacles are already being overcome and have been made feasible by the techniques and methods of DL.

In clinical practice, efficiency in the clinical decision-making process is what has been sought and researched; the integration of AI into the clinical workflow tends to significantly decrease the cognitive burden faced by human clinical teams, a burden that has led to greater stress, less efficiency, and worse patient care.
In this same sense, research on the diagnostic capacity of AI over digital images has also shown itself to be very promising, as in the case of images of retinopathy, or of patches on patients' skin in the sensitive area of melanoma diagnosis, where the accuracy of AI in diagnosing malignant lesions rests on a bank of hundreds of thousands of photos of skin lesions. The application of AI to the digital processing of mammography images in oncology results in excellent accuracy for breast cancer screening; likewise, lung tomography images, blood images, and the use of AI to classify brain tumors, given the inter-observer variability in the histopathological diagnosis of hundreds of types of tumors affecting different areas of the central nervous system (CNS), represent the strong impact that this AI diagnostic methodology has made possible.

The requirement for quality health care continues to rise exponentially, as does the volume of laboratory tests, and DL techniques have successfully managed to meet the increased demand for services while enhancing the quality and safety of these exams. The use of this technique has improved significantly on statistical benchmark tasks thanks to the availability of high-speed GPU computing, the integration of CNNs, and the optimization of digital learning on ever-larger datasets, reaching an inflection point where the significant increase in the diagnosis of pathology in laboratory medicine has gone through digitization and automation, such as image convolution and neural networks based on digital image learning. Digital images are not as abundant in clinical laboratories as in other diagnostic specialties such as radiology, which may limit applications of digital image-based technology in laboratory medicine.

It is also worth mentioning that at the forefront of daily practice, AI is a satisfactory instrument, considering that nothing has yet made intelligent technology go beyond the phase of forecasting to that of action. In this aspect, in the doctor-patient relationship and in the social rites between doctors, nurses, and other health professionals, medical professionals must seek a partnership in which the machine foresees with demonstrably greater precision and the human explains and decides the action; AI currently remains a technological instrument, leaving the judgment of caring to the human professional.

AI is not exactly an algorithm or even a set of lines of instruction code; it consists more of a mathematical formulation that influences the digital dynamics of an algorithm, causing it to change, that is, to learn from previous experiences, develop and draw conclusions (extracting insights concealed by the large volume of information) without being previously programmed for it, and even make decisions and predictions based on data analysis, hence ML or DL.

5 Trends

GANs (generative adversarial networks) are algorithmic tools of DL that have been employed in medicine to help detect and identify cancer by generating new realistic images. Two neural networks define a GAN's operation: one deep neural network acts as the generator, and the other, known as the discriminator, evaluates its output. These deep neural networks (DNN) compete against each other (the "adversarial" characteristic) in an attempt to outperform one another: the generator tries to deceive the discriminator, while the discriminator tries to distinguish between authentic data and the inauthentic data made by the generator. Through this feedback and competition, both DNNs improve their operation; GANs imitate counterfactual reasoning and human creativity [56, 57].

In medicine, GANs focus on the segmentation or synthesis of medical digital images, presenting impressive performance when creatively discovering information missing in the diagnosis of heart disease, for example. A GAN-based approach can provide robust classification, and GANs have also been employed in protein folding and drug discovery applications, and even in odontology [56–58].

Transfer learning starts from the premise of better using the knowledge learned by models already trained in DL (borrowing knowledge): creating DL models from scratch can be a complicated task that requires many computational resources and much training data, whereas transfer learning benefits from the knowledge and information learned by well-known models trained on millions of training examples [59, 60]. Transfer learning can be positive, the expected behavior of a neural network with transfer learning, where before any training process the result is already better than

An overview of the technological performance of deep learning in modern medicine


the random initialization, with less learning time and a superior final result compared to a random start. Transfer can also be negative, the reverse effect, with worse performance both at initialization and in the final result of the neural network; this can occur when the previously trained model comes from a problem that is not sufficiently similar to the target, so it is worth verifying whether the network's performance is still satisfactory (state of the art) when the source task of the pretrained network is "more difficult" than the target task [59–61]. In scenarios with little data and high similarity, it is possible to use the pretrained network as a feature extractor, given that the problems are similar, or to retrain only the last layers of the classifier (fine-tuning). In scenarios with little data and low similarity (the worst case), it is necessary to look for another pretrained network or to consider another approach (classical ML or hand-crafted features may yield more satisfactory results). In scenarios with a lot of data and low similarity, training will have to be done from scratch. In scenarios with a lot of data and high similarity (the best case), the transfer learning technique can be applied, for instance, to the classification of biomedical images, depending on the scenario [59–61].
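The "little data, high similarity" scenario can be sketched concretely: freeze a pretrained feature extractor and train only the final classification layer. The sketch below is a minimal illustration under invented assumptions; the "pretrained" extractor is simulated by a fixed random projection (standing in for, say, frozen convolutional layers), and the data are synthetic two-dimensional blobs rather than biomedical images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained feature extractor: a frozen projection followed
# by a ReLU. In practice this would be a network trained on a large corpus,
# with its early layers frozen.
W_frozen = rng.normal(size=(2, 16))

def extract_features(x):
    return np.maximum(0.0, x @ W_frozen)  # frozen: never updated

# Small, similar target dataset: two Gaussian blobs (synthetic stand-in).
x0 = rng.normal(loc=-1.0, size=(50, 2))
x1 = rng.normal(loc=+1.0, size=(50, 2))
X = np.vstack([x0, x1])
y = np.array([0] * 50 + [1] * 50)

# Fine-tuning: train only the last (logistic) layer on the frozen features.
F = extract_features(X)
w = np.zeros(F.shape[1])
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # sigmoid predictions
    grad = p - y                              # dLoss/dlogits (cross-entropy)
    w -= lr * F.T @ grad / len(y)
    b -= lr * grad.mean()

acc = np.mean(((1.0 / (1.0 + np.exp(-(F @ w + b)))) > 0.5) == y)
print(f"training accuracy of the fine-tuned head: {acc:.2f}")
```

Only `w` and `b` are ever updated, which is why this regime needs far less data and computation than training the whole network from scratch.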
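The generator-versus-discriminator competition described at the start of this section can likewise be sketched in miniature. The example below is a toy, not a medical imaging GAN: the "data" are one-dimensional Gaussian samples, both networks are single linear units with hand-derived gradients, and the non-saturating generator objective (ascending on log D of the fakes) is one common variant; all of these choices are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Real data the generator must imitate (stand-in for, e.g., medical images):
# samples from a 1-D Gaussian centred at 4.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Generator: maps noise z to a sample, x = wg*z + bg (parameters learned).
wg, bg = 1.0, 0.0
# Discriminator: D(x) = sigmoid(wd*x + bd), probability that x is real.
wd, bd = 0.0, 0.0

lr, n = 0.05, 64
for _ in range(2000):
    z = rng.normal(size=n)
    xr, xf = real_batch(n), wg * z + bg

    # --- discriminator ascent on log D(xr) + log(1 - D(xf)) ---
    pr, pf = sigmoid(wd * xr + bd), sigmoid(wd * xf + bd)
    wd += lr * np.mean((1 - pr) * xr - pf * xf)
    bd += lr * np.mean((1 - pr) - pf)

    # --- generator ascent on log D(xf), the non-saturating objective ---
    pf = sigmoid(wd * xf + bd)           # re-score fakes against updated D
    dxf = (1 - pf) * wd                  # d log D(xf) / d xf
    wg += lr * np.mean(dxf * z)
    bg += lr * np.mean(dxf)

fake_mean = np.mean(wg * rng.normal(size=1000) + bg)
print(f"generator output mean after training: {fake_mean:.2f} (real mean is 4)")
```

The two gradient steps per iteration are the "feedback and competition" the text describes: the discriminator sharpens its boundary, then the generator moves its output distribution toward whatever the discriminator currently accepts as real.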

6 Conclusions

It is worth noting that the doctor will never be replaced by AI or by its techniques such as DL, since there will always be a need for a medical professional capable of conducting the whole digital process. Human and artificial intelligence together can contribute considerably to reducing unnecessary surgeries and inaccurate diagnoses, and can make up for the shortage of professionals in the face of growing demand in places where no specialists are available. Models for interpreting digital images will give doctor and patient more accurate diagnoses, offering both greater security; the analysis of slides under the microscope, among other applications, supports the understanding that the medical professional will remain in place, using the machine as augmented intelligence, an extension of the scalpel. Because it offers faster diagnoses, more assertive treatments, better-calculated risks, and the possibility of detecting infections, abnormalities, and diseases in a matter of seconds, in addition to its agility, the use of DL in health is very well accepted for reducing the margin of human error. The practice of medicine involves converting the data that describe patients (diagnoses and prognoses) into knowledge that helps doctors and specialists and guides a large part of clinical decisions; in practice, that knowledge is often inconsistent and incomplete, with skills, training, and experience varying from one professional to another. Consequently, this generates gross variation in health care, clinical decisions, treatments, and even outcomes. The amount of data available for physicians


Ana Carolina Borges Monteiro et al.

to evaluate is vast; the volume, variety, and speed of acquisition of these data keep increasing, usually in a time-constrained environment and against the known limits of human data management, which challenges the practice of intensive health care. Techniques and tools for intelligent learning such as DL are therefore useful for finding patterns and extracting information from these huge volumes of data. Often, the data of a current patient are compared with data from the same patient at a previous moment, through experience classified into recognizable patterns such as diagnoses or even syndromes. Diagnosis depends on recognizing complex and distinct signs related to symptoms, laboratory tests, and the natural history of the disease, and it forms the basis for prognosis and, potentially, therapy. Inconsistency in clinical conduct thus generates not only suboptimal or failed clinical decisions, translating into greater long-term morbidity, failure to prevent neglected diseases, and a higher national cost of disease, but also a higher cost to the health of the patient and to the health system as a whole. With so much information in the health area, in the era of AI, the use and application of DL techniques is therefore crucial to support more consistent clinical decision making based on data, evidence, and values.

References

[1] Joshi, P. Artificial Intelligence with Python, Packt Publishing Ltd, 2017.
[2] França, R. P. et al. An overview of deep learning in big data, image, and signal processing in the modern digital age. Trends in Deep Learning Methodologies: Algorithms, Applications, and Systems, 2020, 4, 63.
[3] Monteiro, A. C. B. et al. Deep learning methodology proposal for the classification of erythrocytes and leukocytes. Trends in Deep Learning Methodologies: Algorithms, Applications, and Systems, 2020, 129.
[4] Raschka, S., Mirjalili, V. Python Machine Learning, Packt Publishing Ltd, 2017.
[5] Harrington, P. Machine Learning in Action, Manning Publications Co., 2012.
[6] Bishop, C. M. Pattern Recognition and Machine Learning, Springer, 2006.
[7] Goodfellow, I. et al. Deep Learning, Vol. 1, No. 2, Cambridge, MIT Press, 2016.
[8] Wang, F., Lawrence, P. C., Khullar, D. Deep learning in medicine – promise, progress, and challenges. JAMA Internal Medicine, 2019, 179(3), 293–294.
[9] Moises, A. A. et al. Algorithm for predicting macular dysfunction based on moment invariants classification of the foveal avascular zone in functional retinal images. Research on Biomedical Engineering, 2017, 33(4), 344–351.
[10] Lacey, G., Graham, W. T., Areibi, S. "Deep learning on FPGAs: Past, present, and future." arXiv preprint arXiv:1602.04283 (2016).
[11] Tao, B., Dickinson, B. W. Texture recognition and image retrieval using gradient indexing. Journal of Visual Communication and Image Representation, 2000, 11(3), 327–342.


[12] Monteiro, A. C. B. "Proposta de uma metodologia de segmentação de imagens para detecção e contagem de hemácias e leucócitos através do algoritmo WT-MO." (2019).
[13] Xu, Y. et al. Deep learning for molecular generation. Future Medicinal Chemistry, 2019, 11(6), 567–597.
[14] Dunjko, V., Briegel, H. J. Machine learning & artificial intelligence in the quantum domain: a review of recent progress. Reports on Progress in Physics, 2018, 81(7), 074001.
[15] Maddox, T. M., Rumsfeld, J. S., Payne, P. R. O. Questions for artificial intelligence in health care. JAMA, 2019, 321(1), 31–32.
[16] Contreras, I., Vehi, J. Artificial intelligence for diabetes management and decision support: literature review. Journal of Medical Internet Research, 2018, 20(5), e10775.
[17] Jordão, K. C. P. et al. "Smart City: A Qualitative Reflection of How the Intelligence Concept with Effective Ethics Procedures Applied to the Urban Territory Can Effectively Contribute to Mitigate the Corruption Process and Illicit Economy Markets." Proceedings of the 5th Brazilian Technology Symposium. Springer, Cham.
[18] Hosny, A. et al. Artificial intelligence in radiology. Nature Reviews Cancer, 2018, 18(8), 500–510.
[19] Kapoor, R., Whigham, B. T., Al-Aswad, L. A. Artificial intelligence and optical coherence tomography imaging. The Asia-Pacific Journal of Ophthalmology, 2019, 8(2), 187–194.
[20] Negrete, P. D. M. et al. "Classification of dermoscopy skin images with the application of deep learning techniques." Proceedings of the 5th Brazilian Technology Symposium. Springer, Cham.
[21] Langen, P. A. et al. "Remote monitoring of high-risk patients using artificial intelligence." U.S. Patent No. 5,357,427. 18 Oct. 1994.
[22] França, R. P. et al. "Potential model for improvement of the data transmission in healthcare systems." (2019).
[23] Yeasmin, S. "Benefits of artificial intelligence in medicine." 2019 2nd International Conference on Computer Applications & Information Security (ICCAIS). IEEE, 2019.
[24] Mohseni, S., Zarei, N., Ragan, E. D. "A survey of evaluation methods and measures for interpretable machine learning." arXiv preprint arXiv:1811.11839 (2018).
[25] Murdoch, W. J. et al. Definitions, methods, and applications in interpretable machine learning. Proceedings of the National Academy of Sciences, 2019, 116(44), 22071–22080.
[26] Du, M., Liu, N., Hu, X. Techniques for interpretable machine learning. Communications of the ACM, 2019, 63(1), 68–77.
[27] Jaeger, S., Fulle, S., Turk, S. Mol2vec: unsupervised machine learning approach with chemical intuition. Journal of Chemical Information and Modeling, 2018, 58(1), 27–35.
[28] Das, K., Behera, R. N. A survey on machine learning: concept, algorithms and applications. International Journal of Innovative Research in Computer and Communication Engineering, 2017, 5(2), 1301–1309.
[29] França, R. P. et al. An overview of the machine learning applied in smart cities. Smart Cities: A Data Analytics Perspective, 91–111.
[30] Yadav, M., Perumal, M., Srinivas, M. Analysis on novel coronavirus (COVID-19) using machine learning methods. Chaos, Solitons & Fractals, 2020, 139, 110050.
[31] Roy Chowdhury, S., Koozekanani, D. D., Parhi, K. K. DREAM: diabetic retinopathy analysis using machine learning. IEEE Journal of Biomedical and Health Informatics, 2013, 18(5), 1717–1728.
[32] Monteiro, A. C. B. et al. "A comparative study between methodologies based on the Hough transform and watershed transform on the blood cell count." Brazilian Technology Symposium. Springer, Cham, 2018.
[33] Monteiro, A. C. B., Iano, Y., França, R. P. "An improved and fast methodology for automatic detecting and counting of red and white blood cells using watershed transform." VIII Simpósio de Instrumentação e Imagens Médicas (SIIM)/VII Simpósio de Processamento de Sinais da UNICAMP (2017).


[34] Monteiro, A. C. B. et al. Development of a laboratory medical algorithm for simultaneous detection and counting of erythrocytes and leukocytes in digital images of a blood smear. In: Deep Learning Techniques for Biomedical and Health Informatics, Academic Press, 2020, 165–186.
[35] Monteiro, A. C. B. et al. Metaheuristics applied to blood image analysis. In: Metaheuristics and Optimization in Computer and Electrical Engineering, Cham, Springer, 117–135.
[36] Monteiro, A. C. B. et al. Medical-laboratory algorithm WTH-MO for segmentation of digital images of blood cells: a new methodology for making hemograms. International Journal of Simulation Systems, Science & Technology, 2019, 20(Suppl 1), 19–1.
[37] Monteiro, A. C. B. et al. Hematology and digital image processing: Watershed transform-based methodology for blood cell counting using the WT-MO algorithm. Medical Technologies Journal, 2020, 4(3), 576–576.
[38] Balas, V. E. et al. (eds.) Handbook of Deep Learning Applications, Vol. 136, New York, Springer, 2019.
[39] Zhong-Qiu, Z. et al. Object detection with deep learning: A review. IEEE Transactions on Neural Networks and Learning Systems, 2019, 30(11), 3212–3232.
[40] Voulodimos, A. et al. Deep learning for computer vision: A brief review. Computational Intelligence and Neuroscience, 2018, 2018.
[41] Fawaz, H. I. et al. Deep learning for time series classification: a review. Data Mining and Knowledge Discovery, 2019, 33(4), 917–963.
[42] Deng, L., Liu, Y. (eds.) Deep Learning in Natural Language Processing, Springer, 2018.
[43] Rashid, T. Make Your Own Neural Network, CreateSpace Independent Publishing Platform, 2016.
[44] Miotto, R. et al. Deep learning for healthcare: review, opportunities and challenges. Briefings in Bioinformatics, 2018, 19(6), 1236–1246.
[45] Yamashita, R. et al. Convolutional neural networks: an overview and application in radiology. Insights into Imaging, 2018, 9(4), 611–629.
[46] Monteiro, A. C. B. et al. Health 4.0: Applications, management, technologies and review. Medical Technologies Journal, 2018, 2(4), 262–276.
[47] Bonato, P. "Advances in wearable technology and applications in physical medicine and rehabilitation." (2005): 1–4.
[48] Ramesh, A. N. et al. Artificial intelligence in medicine. Annals of The Royal College of Surgeons of England, 2004, 86(5), 334.
[49] Hamet, P., Tremblay, J. Artificial intelligence in medicine. Metabolism, 2017, 69, S36–S40.
[50] Kaul, V., Enslin, S., Gross, S. A. "The history of artificial intelligence in medicine." Gastrointestinal Endoscopy (2020).
[51] Dimitris, V. et al. Artificial intelligence, machine (deep) learning and radio(geno)mics: definitions and nuclear medicine imaging applications. European Journal of Nuclear Medicine and Molecular Imaging, 2019, 1–8.
[52] França, R. P. et al. Big data and cloud computing: a technological and literary background. In: Advanced Deep Learning Applications in Big Data Analytics, IGI Global, 2020, 29–50.
[53] França, R. P. et al. "A proposal based on discrete events for improvement of the transmission channels in cloud environments and big data." Big Data, IoT, and Machine Learning: Tools and Applications (2020): 185.
[54] França, R. P. et al. A methodology for improving efficiency in data transmission in healthcare systems. In: Internet of Things for Healthcare Technologies, Singapore, Springer, 49–70.
[55] Rajkomar, A., Dean, J., Kohane, I. Machine learning in medicine. New England Journal of Medicine, 2019, 380(14), 1347–1358.


[56] Talha, I., Ali, H. Generative adversarial network for medical images (MI-GAN). Journal of Medical Systems, 2018, 42(11), 231.
[57] Xin, Y., Walia, E., Babyn, P. Generative adversarial network in medical imaging: A review. Medical Image Analysis, 2019, 58, 101552.
[58] Jie, G. et al. "A review on generative adversarial networks: Algorithms, theory, and applications." arXiv preprint arXiv:2001.06937 (2020).
[59] Fuzhen, Z. et al. "A comprehensive survey on transfer learning." Proceedings of the IEEE (2020).
[60] Chuanqi, T. et al. "A survey on deep transfer learning." International Conference on Artificial Neural Networks. Springer, Cham, 2018.
[61] Matthew, T. E., Stone, P. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 2009, 10(7).

Index abnormal 167, 173, 177 AdaBoost 51 AE 42, 45, 50–51, 54 agglomerative 169 AI 226 algorithms 161, 163, 166, 169, 171, 173, 175, 183 analyzing medical big data 57 ANN 186, 229 anonymization 132 application programming interface 119 applications 162, 165–166, 170, 173, 183 Area 168, 176 artificial intelligence (AI) 116 artificial intelligence 146–147, 149, 156 artificial neural network (ANN) 40, 149, 206 artificial neural network 163, 172, 174, 181 association rule 166 AUC 48, 50–51 auto-encoders (AE) 118 automated medical record 148 automatic programming interface – API 10 backpropagation network 150 bagging 92, 98–99, 107–108, 110, 112, 166, 172 Bayesian network 163 behaviour 164 benign 187 big data 235 biomedical data stream 56 black-boxes in healthcare 59 blood pressure (BP) 206 boosting 166, 172 business 162–163, 165, 173 business-to-business-to-consumer – B2B2C 9 cancer 93, 102, 104, 111 cancer diagnosis as a neural network – CNN 8 carcinoma 167, 173, 177 cardiovascular disease (CVD) 205 categorical 168, 171 cervical cancer 167–168, 173, 175–176, 180–181, 183 class 165–173, 175, 177–181, 183

https://doi.org/10.1515/9783110708127-012

classification 166, 168–169, 173, 175, 178 classifiers 161, 177–178 clinical decision 38 clinical decision making 116–120, 123–125, 128, 130, 136 clinical healthcare 116 cloud computing 147, 156 clustering 166, 169, 174 CME (continuous medical education) 142 CNN 42–44, 49–53, 188, 190, 194, 233 cohorts 46, 51 columnar epithelial 177 computer vision 52 computer-assisted diagnostic system (CAD) – CAD 5 computerized medical record 148 conclusion 163 conditional random fields (CRF) 121 confusion matrix 179 congestive heart failure 48 contrast 176 convolution neural networks (CNN) 117 convolutional neural networks (CNN) 206, 213 convolutional neural networks 42–43, 53 coronary heart disease (CHD) 205, 207, 211 correlation 176 correlation-based feature selection (CFS) 207 CPD (continuous professional development) 142 cytoplasm 173 Data analytics 161, 164–165, 173 data harmonization 41 data representation 55 database information (DI) 146 dataset 41, 43, 45, 48–50, 54–55, 166–172, 174–181, 183 DBM 44–45, 48 decision support 156 decision tree (DT) 207 decision tree 163, 166, 168, 180 decision-making 162 deep learning 37–38, 40–41, 46–47, 91, 92–93, 100, 108, 110–112, 116–119, 121, 123, 126–128, 130–134, 136–137, 166, 172, 232–233, 238


– DL deep learning (DL) 116 deep neural network 92 descriptive analysis 162 diagnosis 38–39, 48–49, 52, 54, 62, 165 diagnostic analytical 162 diameter 176 digital image 226 dimensionality reduction 166, 170, 181 disease 163, 173–174, 183 divisive 169 DL 226 eccentricity 176 EEG 45, 49–50, 56 EHR 37, 44, 47–48, 52, 55 electrocardiography – ECG 4 electrocorticography – ECoG 6 electrodes 49 electroencephalography 49 – EEG 6 electromyography – EMG 6 electronic health record (EHR) 125 electronic health record 142, 145, 147–148 electronic health records (EHRs) 117 electronic medical record 148 electronic medical records – EMRs 3 electronic patient record 148 electrooculography – EOG 6 elementary 168–169 energy 162, 176 ensemble 91, 92–93, 98, 100 ensemble learning 166, 172 E-prescription 158 feature extraction 161, 170 feature selection 175, 170 feed-forward network 150 fingerprint 166 fuzzy logic 206

GAN 44, 238 Gaussian distribution 49 geometrical features 176 gradient driven tree models – GBM 3 gross domestic product 119 health 229 health production process (HPP) 141, 145 healthcare 38, 141, 143–145, 147, 151–156, 159, 161, 165, 183 healthcare applications 47 healthcare organizations 38 Heuristics 115 HITECH ACT (Health Information Technology for Economic and Clinical Health) 146 homogeneity 176 IoMT (Internet of medical things) 156 image classification 48 image detection 41 implementation 167, 174, 179 informatics 37 information 161, 163, 165, 169–170, 177, 181 intermediate squamous 173, 177 intermediate squamous Internet of things (IoT) 155, 206 interpretability 55 introduction 37 K-nearest neighbor (KNN) 206–207 labels 166–167, 169, 177–178 learning models 163, 177 long–short-term memory networks are combined with bidirectional LSTMs 121 low- and middle-income countries – LMICs 11 LSTM 43–44, 48–49, 52 lung cancer dataset 100, 108 machine learning 91, 93, 112, 163, 166, 168–170, 172–173, 175, 183, 226 machine learning in healthcare – MLH 3 magnetic resonance imaging – MRI 6


major axis length 176 malignancy 187 matrix 169–170, 180 mean 174, 176 medical big data 57 medical datasets 47 medical healthcare 47 medical image 165 medical information 39 melanoma 186 mild 167, 173, 177 miniature device 50 minor axis length 176 ML 226 mobile computing 155 naïve Bayes (NB) 207 named entity recognition (NER) 121 National Surgical Quality Improvement Program 123 natural language processing (NLP) 116, 121, 150 neural network (NN) 207 neural network 163, 166, 172, 174, 181, 233 neural networks 117–118, 134 non-communicable disease (NCD) 206 normal 166–167, 173, 176–177 nucleus 173, 176 number of input layers (NOIL) 206 numeric 168, 171, 173–174 open research problems 58 open-source software (OSS) 152 orientation 176 original 161, 168, 170, 175, 181 Pap smear image 161, 177, 183 patient database (PD) 146 patterns 165, 169 performance 167, 173, 175, 177 perimeter 176 pixel value 177 positron emission tomography – PET 6 prediction 162–163, 166, 172, 174, 181, 183 preprocessing 176 prescriptive analytics 163 probability 167 prostate cancers 44


quality of service 116 radius 176 RBM 45, 49, 51 recognition 166, 168 recurrent neural network (RNN) 213 recurrent neural networks (RNN) 118, 206 regression 163, 166 reinforcement learning 45–46, 166 relationship 165, 171–172 ResNet-50 194 RNN 43–44, 48, 52–53 robotic-assisted surgery (RAS) 46 seasonal autoregressive integrated moving average (SARIMA) 126 severe 167, 173, 177 SGD 189, 194 shape 163, 173, 177 skin cancer 186 Social mass media 50 solidity 176 stacking 92, 98, 110, 166, 172 Standard Deviation 168, 176 statistical function 173, 176 superficial squamous 173, 177 supervised learning 166 support vector machine (SVM) 206–207 support vector machine 163 SVM (support vector machine) 166 tomography 39, 43, 53 transfer learning helpful 55 tumors 92 Unified Medical Language System (UMLS) 121 unsupervised learning 166, 168 variables 165, 170–171, 179 voting 95, 99 wearable computing and devices 156 why deep learning 41 World Health Organization (WHO) 206 X-ray technology 41

Already published in the series

Volume 6: Computational Intelligence and Predictive Analysis for Medical Science
Poonam Tanwar, Praveen Kumar, Seema Rawat et al. (Eds.)
ISBN 978-3-11-071498-2, e-ISBN (PDF) 978-3-11-071527-9, e-ISBN (EPUB) 978-3-11-071534-7

Volume 5: Computational Intelligence for Managing Pandemics
Aditya Khamparia, Rubaiyat Hossain Mondal, Prajoy Podder et al. (Eds.)
ISBN 978-3-11-070020-6, e-ISBN (PDF) 978-3-11-071225-4, e-ISBN (EPUB) 978-3-11-071227-8

Volume 4: Nature Inspired Optimization Algorithms
Deepak Gupta, Nhu Gia Nguyen, Ashish Khanna, Siddhartha Bhattacharyya (Eds.)
ISBN 978-3-11-067606-8, e-ISBN (PDF) 978-3-11-067611-2, e-ISBN (EPUB) 978-3-11-067615-0

Volume 3: Artificial Intelligence for Data-Driven Medical Diagnosis
Deepak Gupta, Utku Kose, Bao Le Nguyen, Siddhartha Bhattacharyya (Eds.)
ISBN 978-3-11-066781-3, e-ISBN (PDF) 978-3-11-066832-2, e-ISBN (EPUB) 978-3-11-066838-4

Volume 2: Predictive Intelligence in Biomedical and Health Informatics
Rajshree Srivastava, Nhu Gia Nguyen, Ashish Khanna, Siddhartha Bhattacharyya (Eds.)
ISBN 978-3-11-067608-2, e-ISBN (PDF) 978-3-11-067612-9, e-ISBN (EPUB) 978-3-11-066838-4

Volume 1: Computational Intelligence for Machine Learning and Healthcare Informatics
R. Srivastava, P. Kumar Mallick, S. Swarup Rautaray, M. Pandey (Eds.)
ISBN 978-3-11-064782-2, e-ISBN (PDF) 978-3-11-064819-5, e-ISBN (EPUB) 978-3-11-067614-3

www.degruyter.com