Foundations of Artificial Intelligence in Healthcare and Bioscience: A User Friendly Guide for IT Professionals, Healthcare Providers, Researchers, and Clinicians 0128244771, 9780128244777



English Pages 558 [551] Year 2020



Table of contents:
Foundations of Artificial Intelligence in Healthcare and Bioscience
Copyright
Dedication
Contents
Section I Artificial Intelligence (AI): Understanding the technology
Section II Artificial Intelligence (AI): Applications in Health and Wellness
List of Illustrations
Foreword by Adam Dimitrov
Foreword by Ernst Nicolitz
Preface
Postscript
Acknowledgments
Artificial intelligence (AI): Understanding the technology
Introduction
References
1 The evolution of artificial intelligence (AI)
1.1 Human intelligence
1.2 Defining artificial intelligence (AI)
References
2 The basic computer
2.1 Layers of basic computers
2.1.1 Input layer
2.1.2 Inner (hidden) layer
2.1.3 Output layer
2.2 Basic computer language and programming
2.3 Basic computer hardware
2.4 Basic computer software
2.5 Servers, internet and world wide web (www)
2.5.1 Servers
2.5.2 Internet
2.5.3 World wide web (www)
References
3 The science and technologies of artificial intelligence (AI)
3.1 The theory and science of artificial intelligence (AI)
3.2 Artificial neural network (ANN) model of artificial intelligence (AI)
3.3 AI software (algorithms)
3.3.1 Machine learning
3.3.1.1 Supervised (labeled) data
3.3.2 Neural networking and deep learning
3.3.2.1 Unsupervised (unlabeled) data
3.3.2.2 Reinforcement learning
3.4 AI hardware
3.4.1 RAM (random access memory)
3.4.2 Computer servers (file, mail, print, web, game, apps)
3.4.3 Central processing unit (CPU)
3.4.4 Graphic processing unit (GPU)
3.4.5 Accelerators [61]
3.4.6 Quantum processors using “qubits” (vs digital binary code)
3.4.7 Neuromorphic chips (“self-learning” microchips)
3.4.8 Application specific integrated circuit (ASIC)
3.4.9 Field-programmable gate array (FPGA) integrated circuit with hardware description language (HDL)
3.5 Specialized AI systems
3.5.1 Natural language processing (NLP)
3.5.2 Natural language generation (NLG)
3.5.3 Expert systems
3.5.4 “Internet of things” (IoT)
3.5.5 Cyber-physical system (CPS)
3.5.6 Big data analytics
3.5.7 Blockchain
3.5.8 Robotics
3.6 Sample AI scenarios
3.6.1 “Why is the Mona Lisa smiling?” [103]
3.6.2 The “great steak” experience [106]
References
Artificial intelligence (AI): Applications in health and wellness
Introduction
References
4 AI applications in the business and administration of health care
4.1 AI applications in government agencies (GOVs), non-governmental organizations (NGOs) and third-party health insurers
4.1.1 Primary AI applications to GOVs, NGOs, and third-party health insurers (1, 2, 3)
4.1.2 Additional AI applications to GOVs, NGOs, and third-party health insurers (4, 5, 6)
4.2 Big data analytics in health care [Text #1]
4.2.1 Primary AI literature reviews of big data analytics (1, 2, 3)
4.2.2 Additional AI literature reviews of big data analytics (4, 5, 6)
4.3 Blockchain in health care [Text #2]
4.3.1 Primary AI literature reviews of blockchain (1, 2, 3)
4.3.2 Additional AI literature reviews of blockchain (4, 5, 6)
4.4 Health information and records (electronic health record or EHR) [Text #3]
4.4.1 Primary AI literature reviews of health information and records (EHR) (1, 2, 3)
4.4.2 Additional AI literature reviews of health information and records (EHR) (4, 5, 6)
4.5 Population health [Text #4]
4.5.1 Primary AI literature reviews of population health (1, 2, 3)
4.5.2 Additional AI literature reviews of population health (4, 5, 6)
4.6 Healthcare analytics (descriptive, diagnostic, predictive, prescriptive, discovery) [78] [Text #5]
4.6.1 Descriptive analytics [Text #6] [84]
4.6.2 Diagnostic analytics [Text #7] [85]
4.6.3 Predictive analytics [Text #8, page 99] [78]
4.6.4 Prescriptive analytics [Text #9, page 100] [83]
4.6.5 Primary AI literature reviews of health analytics (1, 2, 3)
4.6.6 Additional AI literature reviews of health analytics (4, 5, 6)
4.7 Precision health (aka precision medicine or personalized medicine) [Text #10]
4.7.1 Primary AI literature reviews of precision medicine/health (1, 2, 3)
4.7.2 Additional AI literature reviews of precision medicine/health (4, 5, 6)
4.8 Preventive medicine/healthcare [Text #11]
4.8.1 Primary AI literature reviews of preventive medicine/healthcare (1, 2, 3)
4.8.2 Additional AI literature reviews of preventive medicine/healthcare (4, 5, 6)
4.9 Public health [Text #12]
4.9.1 Primary AI literature reviews of public health (1, 2, 3)
4.9.2 Additional AI literature reviews of public health (4, 5, 6)
References
5 AI applications in diagnostic technologies and services
5.1 Major diagnostic technologies [4] and their AI applications
5.1.1 Diagnostic imaging
5.1.1.1 Categories of diagnostic imaging
5.1.1.1.1 AI’s influence on conventional radiography [15]
5.1.1.1.2 Literature reviews re AI’s influence on conventional radiography
5.1.1.1.3 AI’s influence on mammography
5.1.1.1.4 Literature reviews re AI’s influence on mammography
5.1.1.1.5 AI’s influence on fluoroscopy [33]
5.1.1.1.6 Literature reviews re AI’s influence on fluoroscopy
5.1.1.1.7 AI’s influence on radiomics
5.1.1.1.8 Literature reviews re AI’s influence on radiomics
5.1.1.1.9 AI’s influence on computed tomography (CT or CAT) scans [53]
5.1.1.1.10 Literature reviews re AI’s influence on computed tomography (CT or CAT) scans
5.1.1.1.11 AI’s influence on MRI scans
5.1.1.1.12 Literature reviews re AI’s influence on MRI scans
5.1.1.1.13 AI’s influence on nuclear medicine scans [74]
5.1.1.1.14 Literature reviews re AI’s influence on nuclear medicine scans
5.1.1.1.15 AI’s influence on ultrasound (sonography) [78]
5.1.1.1.16 Literature reviews re AI’s influence on ultrasound (sonography)
5.1.1.1.17 AI’s influence on endoscopy [90]
5.1.1.1.18 Literature reviews re: AI’s influence on endoscopy
5.1.1.1.19 AI’s influence on fundus imaging [97]
5.1.1.1.20 Literature reviews re AI’s influence on fundus imaging
5.1.1.1.21 AI’s influence on medical (clinical) photography
5.1.1.1.22 Literature reviews re AI’s influence on medical (clinical) photography
5.1.2 Laboratory (clinical diagnostic) testing
5.1.2.1 AI’s influence on laboratory testing
5.1.3 Genetic and genomic screening and diagnosis
5.1.3.1 The science
5.1.3.2 Cytogenetics
5.1.3.3 Genetic testing [128]
5.1.3.4 Big data analytics in genomics [130]
5.1.3.5 AI in genetic cancer screening
5.1.3.6 AI in immunogenetics (see also Immunology, Chapters 6 and 7)
5.1.3.7 Genetics, precision medicine and AI
5.1.3.8 Literature reviews re AI’s influence on genetics and genomics
5.2 Additional diagnostic technologies and their AI applications
5.2.1 Vital signs
5.2.2 Electrodiagnosis
5.2.3 Telemedicine (aka telehealth)
5.2.4 Chatbots
5.2.5 Expert systems
5.2.5.1 Literature reviews re AI’s influences on “additional diagnostic technologies”
References
6 Current AI applications in medical therapies and services
6.1 Medical care (primary, secondary, tertiary, quaternary care)
6.1.1 Big data analytics and AI in medical care
6.1.2 Health information and records (EHR) and AI in medical care
6.1.3 Research/clinical trials and AI in medical care
6.1.4 Blockchain and AI in medical care
6.1.5 Internet of Things (IoT) and AI in medical care [15]
6.1.6 Telehealth and AI in medical care [16]
6.1.7 Chatbots and AI in medical care [16]
6.1.8 Natural language processing (NLP) and AI in medical care
6.1.9 Expert systems and AI in medical care
6.1.10 Robotics and AI in medical care
6.1.11 Population health (demographics and epidemiology) and AI in medical care
6.1.12 Precision medicine/health (personalized health) and AI in medical care
6.1.13 Healthcare analytics and AI in medical care
6.1.14 Preventive health and AI in medical care
6.1.15 Public health and AI in medical care
6.1.16 Access and availability and AI in medical care
6.2 Pharmaceutical and biopharmaceutical care
6.2.1 Big data analytics and AI in pharmaceutical care
6.2.2 Health information and records (EHR) and AI in pharmaceutical care
6.2.3 Research/clinical trials and AI in pharmaceutical care
6.2.4 Blockchain and AI in pharmaceutical care
6.2.5 Internet of Things (IoT) and AI in pharmaceutical care
6.2.6 Telehealth and AI in pharmaceutical care
6.2.7 Chatbots and AI in pharmaceutical care
6.2.8 Natural language processing (NLP) and AI in pharmaceutical care
6.2.9 Expert systems and AI in pharmaceutical care
6.2.10 Robotics and AI in pharmaceutical care
6.2.11 Population health (demographics and epidemiology) and AI in pharmaceutical care
6.2.12 Precision medicine/health (personalized health) and AI in pharmaceutical care
6.2.13 Healthcare analytics and AI in pharmaceutical care
6.2.14 Preventive health and AI in pharmaceutical care
6.2.15 Public health and AI in pharmaceutical care
6.2.16 Access and availability and AI in pharmaceutical care
6.3 Hospital care
6.3.1 Big data analytics and AI in hospital care
6.3.2 Health information and records (EHR) and AI in hospital care
6.3.3 Research/clinical trials and AI in hospital care
6.3.4 Blockchain and AI in hospital care
6.3.5 Internet of Things (IoT) and AI in hospital care [15]
6.3.6 Telehealth and AI in hospital care [114]
6.3.7 Chatbots and AI in hospital care
6.3.8 Natural language processing (NLP) and AI in hospital care
6.3.9 Expert systems and AI in hospital care
6.3.10 Robotics and AI in hospital care
6.3.11 Population health (demographics and epidemiology) and AI in hospital care
6.3.12 Precision medicine/health (personalized health) and AI in hospital care
6.3.13 Healthcare analytics and AI in hospital care
6.3.14 Public health and AI in hospital care [136]
6.3.15 Access and availability and AI in hospital care
6.4 Nursing care
6.4.1 Big data analytics and AI in nursing care
6.4.2 Health information and records (EHR) and AI in nursing care
6.4.3 Research/clinical trials and AI in nursing care
6.4.4 Blockchain and AI in nursing care
6.4.5 Internet of Things (IoT) and AI in nursing care
6.4.6 Telehealth and AI in nursing care
6.4.7 Chatbots and AI in nursing care
6.4.8 Natural language processing (NLP), and AI in nursing care
6.4.9 Expert systems and AI in nursing care
6.4.10 Robotics and AI in nursing care
6.4.11 Population health (demographics and epidemiology) and AI in nursing care
6.4.12 Precision medicine/health (personalized health) and AI in nursing care
6.4.13 Healthcare analytics and AI in nursing care
6.4.14 Preventive health and AI in nursing care
6.4.15 Public health and AI in nursing care
6.4.16 Access and availability and AI in nursing care
6.5 Home health care, nursing homes and hospice care
6.5.1 Big data analytics and AI in home health, nursing homes, and hospice care
6.5.2 Health information and records (EHR) and AI in home health, nursing homes, and hospice care
6.5.3 Research/clinical trials and AI in home health, nursing homes, and hospice care
6.5.4 Blockchain and AI in home health, nursing homes, and hospice care
6.5.5 Internet of Things (IoT) and AI in home health, nursing homes, and hospice care
6.5.6 Telehealth and AI in home health, nursing homes, and hospice care
6.5.7 Chatbots and AI in home health, nursing homes, and hospice care
6.5.8 Natural language processing (NLP) and AI in home health, nursing homes, and hospice care
6.5.9 Robotics and AI in home health, nursing homes, and hospice care
6.5.10 Population health (demographics and epidemiology) and AI in home health, nursing homes, and hospice care
6.5.11 Precision medicine/health (personalized health) and AI in home health, nursing homes, and hospice care
6.5.12 Healthcare analytics and AI in home health, nursing homes, and hospice care
6.5.13 Preventive health and AI in home health, nursing homes, and hospice care
6.5.14 Public health and AI in home health, nursing homes, and hospice care
6.5.15 Access and availability and AI in home health, nursing homes, and hospice care
6.6 Concurrent medical conditions (“comorbidity,” aka “multimorbidity”)
6.6.1 Big data analytics and AI in concurrent medical conditions (“comorbidity”)
6.6.2 Health information and records (EHR) and AI in concurrent medical conditions (“comorbidity”)
6.6.3 Research/clinical trials and AI in concurrent medical conditions (“comorbidity”)
6.6.4 Blockchain and AI in concurrent medical conditions (“comorbidity”)
6.6.5 Telehealth and AI in concurrent medical conditions (“comorbidity”)
6.6.6 Chatbots and AI in concurrent medical conditions (“comorbidity”)
6.6.7 Natural language processing (NLP) and AI in concurrent medical conditions (“comorbidity”)
6.6.8 Expert systems and AI in concurrent medical conditions (“comorbidity”)
6.6.9 Robotics and AI in concurrent medical conditions (“comorbidity”)
6.6.10 Population health (demographics and epidemiology) and AI in concurrent medical conditions (“comorbidity”)
6.6.11 Precision medicine/health (personalized health) and AI in concurrent medical conditions (“comorbidity”)
6.6.12 Healthcare analytics and AI in concurrent medical conditions (“comorbidity”)
6.6.13 Preventive health and AI in concurrent medical conditions (“comorbidity”)
6.6.14 Public health and AI in concurrent medical conditions (“comorbidity”)
6.6.15 Access and availability and AI in concurrent medical conditions (“comorbidity”)
6.7 Medical/surgical robotics
6.7.1 Big data analytics and AI in medical/surgical robotics
6.7.2 Health information and records (EHR) and AI in medical/surgical robotics
6.7.3 Research/clinical trials and AI in medical/surgical robotics
6.7.4 Blockchain and AI in medical/surgical robotics
6.7.5 Internet of Things (IoT) and AI in medical/surgical robotics
6.7.6 Telehealth and AI in medical/surgical robotics
6.7.7 Chatbots and AI in medical/surgical robotics
6.7.8 Natural language processing (NLP) and AI in medical/surgical robotics
6.7.9 Expert systems and AI in medical/surgical robotics
6.7.10 Precision medicine/health (personalized health) and AI in medical/surgical robotics
6.7.11 Healthcare analytics and AI in medical/surgical robotics
6.7.12 Preventive health and AI in medical/surgical robotics
6.7.13 Public health and AI in medical/surgical robotics
6.7.14 Access and availability and AI in medical/surgical robotics
6.8 Stem cells and regenerative medicine
6.8.1 The basic bioscience of stem cells and regenerative medicine [276]
6.8.2 Big data analytics and AI in stem cells and regenerative medicine
6.8.3 Research/clinical trials and AI in stem cells and regenerative medicine
6.8.4 Blockchain and AI in stem cells and regenerative medicine
6.8.5 Internet of Things (IoT) and AI in stem cells and regenerative medicine
6.8.6 3-D bioprinting and AI in stem cells and regenerative medicine
6.8.7 Chatbots and AI in stem cells and regenerative medicine
6.8.8 Natural language processing (NLP) and AI in stem cells and regenerative medicine
6.8.9 Expert systems and AI in stem cells and regenerative medicine
6.8.10 Robotics and AI in stem cells and regenerative medicine
6.8.11 Precision medicine/health (personalized health) and AI in stem cells and regenerative medicine
6.8.12 Healthcare analytics and AI in stem cells and regenerative medicine
6.8.13 Preventive health and AI in stem cells and regenerative medicine
6.8.14 Public health and AI in stem cells and regenerative medicine
6.8.15 Access and availability and AI in stem cells and regenerative medicine
6.9 Genetics and genomics therapies
6.9.1 Big data analytics and AI in genetics and genomics
6.9.2 Health information and records (EHR) and AI in genetics and genomics therapies
6.9.3 Research/clinical trials and AI in genetics and genomics
6.9.4 Blockchain and AI in genetics and genomics
6.9.5 Internet of Things (IoT) and AI in genetics and genomics
6.9.6 Telehealth and AI in genetics and genomics
6.9.7 Chatbots and AI in genetics and genomics
6.9.8 Natural language processing (NLP) and AI in genetics and genomics
6.9.9 Expert systems and AI in genetics and genomics
6.9.10 Robotics and AI in genetics and genomics
6.9.11 Population health (demographics and epidemiology) and AI in genetics and genomics
6.9.12 Precision medicine/health (personalized health) and AI in genetics and genomics
6.9.13 Healthcare analytics (and bioinformatics) and AI in genetics and genomics
6.9.14 Preventive health and AI in genetics and genomics
6.9.15 Public health and AI in genetics and genomics
6.9.16 Access and availability and AI in genetics and genomics
References
7 AI applications in prevalent diseases and disorders
7.1 Immunology and autoimmune disease
7.1.1 Pathogenesis and etiologies of immunology and autoimmune disease
7.1.2 Clinical presentations in immunology and autoimmune disease
7.1.3 Current treatment approaches and AI applications in immunology and autoimmune disease
7.1.3.1 Stem cell transplantation
7.1.3.2 CRISPR-Cas9 (gene editing)
7.1.3.3 CAR-T cell (gene replacement)
7.1.4 Research and future AI considerations in immunology and autoimmune disease
7.2 Genetic and genomic disorders
7.2.1 Description and etiology of genetic and genomic disorders
7.2.2 Clinical presentations in genetic and genomic disorders
7.2.3 Current treatment approaches and AI applications in genetic and genomic disorders
7.2.4 Research and future AI considerations in genetic and genomic disorders
7.3 Cancers
7.3.1 Description and etiology of cancers
7.3.2 Clinical presentations in cancers
7.3.3 Current treatment approaches and AI applications in cancers
7.3.4 Research and future AI considerations in cancers
7.4 Vascular (cardiovascular and cerebrovascular) disorders
7.4.1 Description and etiology of cardio and cerebrovascular disorders
7.4.1.1 Structures of the cardiovascular systems
7.4.1.2 Structures of the cerebrovascular system
7.4.1.3 Diseases and disorders of the cardiovascular system
7.4.1.4 Diseases and disorders of the cerebrovascular system
7.4.2 Current treatment approaches and AI applications in vascular disorders
7.4.3 Research and future AI considerations in vascular care
7.4.3.1 Diagnostic and screening considerations in vascular care
7.4.3.2 Emerging AI applications in vascular treatment and prevention
7.5 Diabetes (type 1 and 2)
7.5.1 Description and etiology of diabetes (type 1 and 2)
7.5.1.1 Type 1 diabetes
7.5.1.2 Type 2 diabetes (mellitus)
7.5.2 Clinical presentations in diabetes (type 1 and 2)
7.5.2.1 Type 1 diabetes
7.5.2.2 Type 2 diabetes mellitus
7.5.3 Current treatment approaches to diabetes (type 1 and 2)
7.5.3.1 Type 1 diabetes [163]
7.5.3.2 Type 2 diabetes [164]
7.5.4 Research and future AI applications in diabetes (type 1 and 2)
7.5.4.1 Type 1 diabetes
7.5.4.2 Type 2 diabetes
7.6 Neurological and sensory disorders and diseases
7.6.1 Neuroanatomy, etiologies, clinical considerations associated with neurological and sensory disorders
7.6.1.1 The central nervous system (CNS) neuroanatomy [177]
7.6.1.2 Central nervous system (CNS) clinical considerations (by etiology) [178]
7.6.1.3 Peripheral nervous system (PNS) neuroanatomy [179]
7.6.1.4 Peripheral nervous system (PNS) clinical considerations (by etiology) [180]
7.6.1.5 Sensory systems [181]
7.6.2 Research and AI considerations in neurological and sensory disorders
7.7 Musculoskeletal disorders (MSDs)
7.7.1 Musculoskeletal disorders (MSD) and diseases and associated AI applications
7.8 Integumentary system and exocrine glands
7.8.1 Dermatology
7.8.2 Integumentary system disorders and diseases and associated AI applications
7.9 Endocrine glands
7.9.1 Endocrine disorders and diseases and associated AI applications
7.10 Digestive and excretory systems
7.10.1 Digestive and excretory disorders and diseases and associated AI applications
7.11 Renal system and urinary system
7.11.1 Renal and urinary disorders and diseases and associated AI applications
7.12 Respiratory (pulmonary) system
7.12.1 Respiratory system diseases and disorders and associated AI applications
7.13 Reproductive systems
7.13.1 Female reproductive system [366]
7.13.2 Female reproductive cycle
7.13.2.1 Disease conditions of the female reproductive system with recent, related AI programs
7.13.3 Male reproductive system [366]
7.13.3.1 Male reproductive process
7.13.3.2 Functional disorders of the male reproduction system with recent, related AI programs
7.13.4 Disease conditions of the male reproduction system with recent AI programs
7.14 Physical injuries, wounds and disabilities
7.14.1 Fatal injury data
7.14.2 Nonfatal injury data
7.14.3 Disabilities
7.15 Infectious disease
7.16 Human development, aging, degeneration and death
7.17 Chronic disease
7.18 Mental and behavioral disorders
7.19 Nutrition and exercise (preventive care)
7.19.1 Physical exercise
7.19.2 Nutrition
References
8 SARS-CoV-2 and the COVID-19 pandemic
8.1 Background
8.1.1 Definitions
8.1.2 History of pandemics
8.1.2.1 Historical overview
8.1.2.2 Recent history
8.1.3 Incidence and prevalence of COVID-19
8.2 Pathogenesis and bioscience considerations for SARS-CoV-2
8.2.1 Mechanisms
8.2.2 Theories
8.2.3 Life cycle of SARS-CoV-2
8.2.4 Review of AI regarding the pathogenesis of SARS-CoV-2
8.3 Clinical considerations regarding SARS-CoV-2 infection
8.3.1 Clinical manifestations (signs and symptoms)
8.3.2 Diagnostic testing
8.3.2.1 Antigen testing
8.3.2.2 Molecular genetic test (PCR test)
8.3.2.3 Antibody testing
8.4 Treatment and management strategies
8.4.1 General measures
8.4.1.1 Basic preventive steps
8.4.1.2 Mitigation
8.4.1.3 Contact tracing
8.4.1.4 Modeling
8.4.1.5 Herd immunity and R naught (R0)
8.4.2 Therapeutics
8.4.2.1 Monoclonal antibodies
8.4.2.2 Convalescent plasma (serum)
8.4.2.3 Hydroxychloroquine (Plaquenil®) combined with azithromycin (Zithromax®)
8.4.2.4 Remdesivir
8.4.2.5 Dexamethasone (and corticosteroids)
8.4.2.6 RNA screening
8.4.3 Vaccine (immunization)
8.4.4 CRISPR-Cas13 and RNA screening
8.4.5 Immunoinformatics
8.4.6 Review of AI for clinical considerations for coronavirus infections
8.5 Epidemiology and public health considerations in COVID-19
8.5.1 Current epidemiologic considerations
8.5.2 Review of AI for epidemiology and public health considerations
Conclusion
References
Epilogue
References
Glossary of terminology
Glossary of abbreviations
Index

Foundations of Artificial Intelligence in Healthcare and Bioscience

Foundations of Artificial Intelligence in Healthcare and Bioscience: A User Friendly Guide for IT Professionals, Healthcare Providers, Researchers, and Clinicians

Louis J. Catania, Nicolitz Eye Consultants, Jacksonville, FL, United States

Academic Press is an imprint of Elsevier
125 London Wall, London EC2Y 5AS, United Kingdom
525 B Street, Suite 1650, San Diego, CA 92101, United States
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom

Copyright © 2021 Elsevier Inc. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.

This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary.

Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress

ISBN: 978-0-12-824477-7

For information on all Academic Press publications visit our website at https://www.elsevier.com/books-and-journals

Publisher: Mara Conner
Acquisitions Editor: Chris Katsaropoulos
Editorial Project Manager: Rafael G. Trombaco
Production Project Manager: Niranjan Bhaskaran
Cover Designer: Christian J. Bilbow
Typeset by MPS Limited, Chennai, India

Dedication
To my wife Stephanie, Thank you for your love and support
To all the health care workers who have served us all during the COVID-19 pandemic and every day, Thank you
To the families of all those lost to the COVID-19 pandemic, My deepest sympathies


List of Illustrations Figure. Intro 1.1 Moore’s Law. Processing power for computers will double every two years while costs will be halved. Figure 1 1 Turing Test. One human functions as the questioner while a second human and a computer function as hidden respondents. The questioner interrogates the respondents using a specified format and context. After a preset length of time or number of questions, the questioner is asked to decide which respondent was human and which was a computer. Figure 2 1A Basic computer model (input layer). Input data is made up of information or data provided by an external source called an input device (Table 2 3). This external source of data (“user input”) is characterized schematically by a series of red dots representing “data points” (or “data nodes” when they produce a network of data points). Figure 2 1B Basic computer model (inner/hidden layer). In the computer input layer (red), the hardware (keyboard and monitor) are functioning as the input and output devices respectively, i.e., an I/O (input/output) process and the application programming interface (API) and central processing unit (CPU) software (blue) are functioning as the data processing unit of the inner (hidden) layer. Figure 2 2A Basic computer model (inner layer framework). Target code or binary (machine) code information becomes the information that will populate the inner (hidden) layer (blue) and be used by its software (OS, API, CPU, servers and software apps) for data processing. Figure 2 2B Basic computer model (inner layer functions). The object code file(s) (red) also directs coded instructions to the API and appropriate servers and/or software apps in the inner (hidden) layer (blue). Figure 2 3A Basic computer model (output layer). Input data points (red) and the inner (hidden) layer (blue) represent the transfer of the machine code target data to the inner (hidden) layer and output layer (green). Figure 2 3B Basic computer model (outer layer devices). Each data point (or node) in the input layer (blue) has the potential to be executed by any (or all) software programs in the inner (hidden) layer. Figure 3 1 Artificial intelligence schematic. The broadest classification of AI includes the subcategories of machine learning (ML) and deep learning (DL) within which artificial neural networks (ANN) and deep learning are subsets of machine learning. Figure 3 2 The neuron. The neuron is the basic unit of the human neural network that receives and sends trillions of electrical signals throughout the brain and body. It consists of a cell body, an axon, and dendrites. Figure 3 3 Neurotransmitter chemicals across synaptic cleft. Chemical messenger transmits signals across a synapse (synaptic cleft) such as a neuromuscular junction from one neuron (nerve cell) to another “target” neuron. Figure 3 4 Mathematical model of neuron. Schematic diagram of the mathematical model of a neuron where input “weights (w)” are activated and summed (Σ) by cell body producing an output (similar to the computer model). Figure 3 5 Schematic of the cortical neural network (CNN). A vast network of one hundred billion interconnecting neurons creates a complex in the human brain called the “cortical neural network.” This network has the potential of producing one hundred trillion neural connections.

5 9

15

16

17

17

18

19

31

31

32

33

33

xxv

xxvi

List of Illustrations

Figure 3 6 Figure 3 7

Figure 3 8

Figure 3 9

Figure 3 10

Figure 3 11

Figure 3 12

Figure 3 13

Figure 3 14

Figure 3 15

Figure 3 16

Figure 3 17

Figure 3 18

Neural network with 3 computer layers. This model demonstrates input nodules (red) to inner (hidden) layer nodes (blue), in relationship to the cerebral cortex, and output (green). Deep neural network. The inner (hidden) layer nodes. From (Fig. 3 6) are distributed throughout the brain as progressive cortical centers or layers (blue) to create the deep neural network (DNN). Convolutional neural network (CNN). The deep neural network (DNN) is graphically represented in a linear dimensional, multilayered distribution of nodes (blue) demonstrating the hundred trillion deep neural network connections that create the “convolutional neural network (CNN).” This neural process is analogous to the deep learning algorithms in AI. The limbic system (hippocampus, amygdala, thalamus). The subcortical limbic system (hippocampus, amygdala, and thalamus) in the human midbrain serves as relay stations between input layer data (red) and the inner (hidden) layers (blue nodes) of the deep neural network (DNN). The cerebral functions of each of these brain nuclei have direct analogous implications in the AI process. Neural layer transmissions from limbic system to higher cortical centers. Signals from the limbic system are transmitted to the higher cortical neural centers (blue arrows, #3) for cognitive interpretations at which point they are “cataloged” as memory in the limbic system and corresponding progressive cortical layers. Finally, the impulses are generated as output (green, #4) representing some aspect of human intelligence. Natural language processing (NLP) and natural language generations (NLG). Once NLP unlocks the verbal context (red) and translates it into human language (blue), NLG takes the output and analyzes the text in context (blue) and produces audio or text output (green). Expert system. The basic structure of an expert system consists of a human expert (e.g., doctor) and knowledge engineer (e.g., related expert) as input (red); a knowledge base (related database[s]), inference engine (AI algorithm), explainable AI (AI algorithm) and user interface (NLP audio or text) as inner (hidden) layers (blue); and the user (from input, i.e., the doctor) as output recipient (green). Forward and backward chaining. In forward chaining the inference engine follows the chain of conditions and derivations and finally deduces the outcome. In Backward Chaining, based on what has already happened, the inference engine tries to find conditions that could have occurred in the past for this result. Artificial intelligent robots. Artificial intelligent robots are simply robots that are controlled by AI programs. This difference is illustrated through this Venn diagram. The overlap between the two technologies represents the category of “Artificial Intelligent Robots.” Case study (example) of “Why is Mona Lisa smiling?” (input layer). Leonardo da Vinci’s masterpiece, “Mona Lisa” can serve as the sensory visual input stimulus (red) to ask the AI computer the classic question, “Why is she smiling?”. Case study (example) of “Why is Mona Lisa smiling?” (inner layer). Labeled information is “neural networked” (blue number 3 arrows) to relevant, preexisting, higher cortical neural layers and their unlabeled data related directly and indirectly to the user’s question (“Why is she smiling?”). Case study (example) of “Why is Mona Lisa smiling?” (processing). 
Nerve impulses from the subcortical and cortical levels are transmitted through associated cortical and optic radiations (green number 4 arrows) to the visual cortex (area V1and V5). Case study (example) of “Why is Mona Lisa smiling?” (output). Through logical inference rules (green), the brain and an AI computer can evaluate reasonable deductions, probabilities, and conclusions to the question, “Why is she smiling?” And most likely, Explainable AI (XAI) will conclude as have art experts over the years, “Only Leonardo knows for sure.”.

34 34

35

35

36

52

53

54

60

64

65

66

66

List of Illustrations

Figure 4 1 Figure 4 2 Figure 4 3

Figure 4 4

Figure 4 5

Figure 4 6

Figure 4 7 Figure 5 1

Sources of waste in American health care. Over $500 billion in health care spending is excess from supplier time wastefulness, waste, fraud and misuse. (p. 82)

Population health (independent variables, small scale). Example of a small population model where demographic and epidemiological factors (independent variables) are measured. (p. 92)

Population health (dependent variables, small scale). AI algorithms (regression analysis, Bayesian probabilities and inference logic) analyze the statistical results of independent variables against one another as well as against other potential dependent variables (e.g., health and wellness, risks of aging, etc.). (p. 92)

Population health (interventions, small scale). AI algorithms measure positive and negative correlations between dependent and independent variables allowing appropriate professionals and caregivers the ability to introduce corrective interventions. (p. 93)

Population health (large scale). The population health concept can be used in large-scale assessments where AI analysis of demographic and epidemiologic independent variables produces the dependent variable outcomes and the interventions, similar to the small-scale models. (p. 93)

Healthcare analytics. Data mining techniques in health care analytics fall under 4 categories: (1) descriptive (i.e., exploration and discovery of information in the dataset); (2) diagnostic (i.e., why something happened); (3) predictive (i.e., prediction of upcoming events based on historical data); and (4) prescriptive (i.e., utilization of scenarios to provide decision support). (p. 98)

The public health system. Public health is a network of vertically integrated and interoperable systems delivering and assessing the provision of public health services. (p. 113)

Convolutional neural networks (CNNs). Similar to the convolutional neural networks described in Chapter 3 (Figure 3–5), AI's deep learning CNN process is used to classify the image for diagnosis. In this example, 5 convolutional layers are followed by 3 fully connected layers, which then output a probability of the image belonging to each class. These probabilities are compared with the known class (stroke in the training example) and can be used to measure how far off the prediction was (cost function), which can then be used to update the weights of the different kernels and fully connected parameters using back-propagation. When the model training is complete and deployed on new images, the process will produce a similar output of probabilities, in which it is hoped that the true diagnosis will have the highest likelihood. (p. 130)

Figure 5–2  Comparison of machine learning vs. deep learning CNNs. AI systems look at specific labeled structures and also learn how to extract image features either visible or invisible to the human eye. The comparison in this figure between classic machine learning and deep learning approaches is applied to a classification task. Both approaches use an artificial neural network organized in the input layer (IL), hidden layer (HL) and output layer (OL). The deep learning approach avoids the design of dedicated feature extractors by using a deep neural network that can represent complex features as a composition of simpler ones. (p. 131)

Figure 5–3  Number of diagnostic clinical algorithms by technology. Diagnostic imaging currently more than doubles all other forms of AI diagnostic algorithms due primarily to advanced (GPU) image recognition software. However, it is felt that genetic testing will grow significantly in the coming years as a principal diagnostic testing modality. (p. 147)

Figure 5–4  Chromosome. In the nucleus of each cell, the DNA molecule is packaged into thread-like structures called chromosomes. Within each DNA helix are "sequences" ("genetic code") made up of four nitrogen base compounds, paired as "base pairs" (adenine paired with thymine and guanine paired with cytosine). Together, a base pair along with a sugar and phosphate molecule is called a nucleotide. (p. 169)

Figure 5–5  Normal human karyotype. The overall number and shape of all your chromosomes is called a karyotype. (p. 170)

Figure 5–6  The cellular biology of the human genome. There are a number of elements that make up what is referred to as the human genome including the cellular biology of genetics which includes the cell, its nucleus, chromosomes within the nucleus, the DNA strands within the chromosomes and the base compounds of the genes within the chromosomes. (p. 170)

Figure 6–1  Comorbidities by age. Combined conditions ("comorbidities") encompass physical as well as mental disorders in patients with their greatest frequency for occurrence in the elderly population. This makes the issue of comorbidities a demonstrable public health issue. (p. 242)

Figure 6–2  Nanorobotic technology. Medical robots (micro-bots, nanorobots, nanobots) use near-microscopic mechanical particles to localize a drug or other therapy to a specific target site within the body. (p. 254)

Figure 6–3  Stages of human (and stem cells) development. Stem cells have the potential to develop into many different cell types during early (pluripotent) life and growth. (p. 259)

Figure 7–1  Chronic inflammation. Through its inflammatory mediators and cellular components that damage tissue throughout the body, especially the blood vessel (perivasculitis) walls (with infiltration and diapedesis) supporting virtually every organ system, chronic inflammatory disease is considered the progenitor or originating cause of all (emphasis on all) the major human disease categories. (p. 297)

Figure 7–2  Genetic modification & stem cell therapy. The patient's own stem cells are used in a procedure known as autologous (from "one's self") hematopoietic stem cell transplantation. (p. 303)

Figure 7–3  CRISPR-Cas9. CRISPR guide RNAs target specific spots in the genome for the Cas9 enzyme ("genetic scissors") to cut (scissor), forming a double-strand break. A machine learning algorithm predicts which types of repairs will be made at a site targeted by a specific guide RNA. Possibilities include an insertion of a single base pair, a small deletion, or a larger change known as a microhomology deletion [25]. (p. 304)

Figure 7–4  Immunogenics (immunotherapy): Chimeric Antigen Receptor T cells (CAR-T). CAR-T-cell therapy begins by removing a patient's lymphocytes and transducing them with a DNA plasmid vector (a DNA molecule distinct from the cell's DNA used as a tool to clone, transfer, and manipulate genes) that encodes specific tumor antigens. These modified and targeted lymphocytes are then reintroduced to the patient's body through a single infusion to attack tumor cells. (p. 306)

Figure 7–5  Telomeres. At the ends of chromosomes ("tips") are stretches of DNA called telomeres which are chains of chemical code made up of the four nucleic acid bases (guanine, adenine, thymine, and cytosine). (p. 419)

Figure 8–1  SARS-CoV-2 life cycle. The life cycle of the novel coronavirus (SARS-CoV-2) begins when its spike protein attaches to an ACE2 receptor on a cell membrane (1) and penetrates the cell wall where it replicates a genomic RNA (2–4), then produces 'subgenomic RNAs' (5–6), synthesizes various spike proteins through translation (7) and new genomic RNA becomes the genome of a new virus particle (8). This combines with the strand genomic RNA, merges in the endoplasmic reticulum-Golgi apparatus into a complete virus particle within a vesicle (9), and the new viral particles are released (exocytosis) to the extracellular region (10). (p. 451)

Foreword by Adam Dimitrov

A primary care perspective on "Artificial Intelligence in Health Care"

We are on the brink of a transformative era in medicine. Yes, the health-care system clearly has its flaws. We are reminded daily of the high cost of health care in this country, its suboptimal clinical outcomes, and dissatisfaction on the part of patients and clinicians with their experience of care. Some would argue that we don't even have a true health-care system in this country, but rather a health-care market which drives expensive and uncoordinated care, leaving patients too often to navigate their own care. The growing reliance on prescription medications to control chronic conditions has led to a culture of sickcare rather than wellness. American health care itself needs healing.

These shortcomings sometimes blind us to the fact that we are experiencing amazing advances in the field of medicine itself. Targeted cancer treatments such as immunotherapy continue to evolve, in some cases making cancer a chronic rather than a terminal disease. 3D printers are being developed to create tissue and organs for transplant patients. Our knowledge of pharmacokinetics is expanding, allowing us to prescribe medications for patients specific to their genetic and metabolic makeup, thereby reducing the number of side effects. Robotic surgery continues to allow physicians to help patients in ways never before possible. Medicine is advancing at an exponential pace. On the forefront of that change comes Artificial Intelligence or AI.

Ironically, many doctors in this generation grew up quite familiar with AI years before their medical training. In between the long hours of studying and extracurricular activities, many an aspiring physician spent their downtime trying to defeat an AI opponent . . . sometimes for hours. I am speaking of the immersive world of video games. For the past two or three decades, young men and women have known all too well the challenges of trying to defeat a computer-generated opponent, whether it be chess against the computer or a final battle against a computerized adversary on a video game console. Every one of us "gamers" can recall the satisfaction of getting past a particularly difficult level in a game or finally winning against that monster that seemed to know our every move. The video game industry has grown to become a multibillion-dollar industry as millions of gamers try to "outsmart" their computerized opponents armed with the power of artificial intelligence.

Just as we see artificial intelligence develop in various industries, there is no doubt that AI will play an increasing role in the transformation of health care. Clinical decision support already helps to guide physicians on which drugs to prescribe for hospitalized patients with certain conditions, and perhaps more importantly, prevent medication errors. Digital imaging tools allow primary care doctors to screen for diabetic retinopathy in their own office, a significant resource in areas where access to eye care may not be as prevalent. Many believe that the specific causes of particular conditions such as autism will be discovered not by


scientists conducting clinical studies, but rather by computer algorithms that identify an environmental or genetic cause. It just so happens that as I write this foreword, we are in the middle of the coronavirus (COVID-19) pandemic. There is no doubt that when the pandemic dissipates, we will learn of various ways in which AI was called upon to combat this novel pathogen. Due to lack of available testing, we have already heard how agencies are using Global Positioning Systems on users’ smartphones to predict hot spots and spread of the disease. Computer models are surely being used to study the genetic makeup of the virus to help develop a vaccine or effective medication. Many of the predictive models that are presented to local governments and health-care systems use AI as a means of tabulating the data. Which is why this book by Dr. Lou Catania comes at such an opportune time. Dr. Catania has compiled a resource that is both introductory and comprehensive to anyone who wishes to explore the world of artificial intelligence, whether a seasoned clinician or a layperson outside of the medical field. This book touches upon all aspects of AI . . . from neural networks to population health management, from the Mona Lisa to the reduction of waste in health care. It is clear that Dr. Catania has poured as much dedication into this book as he did for thousands of patients over his many years of practice in the field of optometry. As a distinguished clinician, author, and speaker, his insights are a welcome addition to the emerging field of AI in the health-care arena. As I type these last words this evening, I ready myself for somewhat of an uncertain day in my primary care practice tomorrow. Due to the COVID-19 pandemic, we have rapidly deployed virtual visits, with more than 80% of our clinical encounters over the past month taking place via video chat. The overnight evolution of telemedicine will certainly be one of the silver linings that comes out of this pandemic. But for now, I think I will put my laptop down and see if I can finally beat the computer tonight in a game of poker. Adam Dimitrov, MD, FAAFP Family Medicine Physician, Baptist Health, Jacksonville, FL, United States

Foreword by Ernst Nicolitz

A medical specialist's perspective on Artificial Intelligence in Health Care

As a medical specialist (specifically, an ophthalmic surgeon) for over 40 years, I find the rate at which artificial intelligence (AI) is evolving challenging. I'd venture a guess that most of my colleagues feel the same. But, Dr. Lou Catania, an associate of mine for more than 20 years with Nicolitz Eye Consultants, our multidisciplinary practice in Jacksonville, FL, has challenged me further. He has invited me to write a Foreword for the book he has been working on for over a year. Lou has asked me to provide you, the reader, with a specialist's perspective on AI. Undoubtedly, there are many different specialists' perspectives on such a disruptive technology, but thanks to Lou's efforts in writing this excellent text, I feel I have a bit of a preferential advantage.

For me, the first section of the book was invaluable in providing just enough technical information about basic computing as the foundation of AI technology. Lou does go a little deep (at least for most health professionals, I would presume), but he aptly justifies it with Einstein's famous line, "Everything should be made as simple as possible, but not simpler." I felt that the computer information he presents does indeed prove to be necessary to fully appreciate AI's enormous role in health care as covered in Section 2.

Most medical professionals who follow the progress in AI are familiar with its extraordinary ability to evaluate imagery of any kind through pattern recognition. A "graphic processing unit" studies (and learns) from among millions of patterns (images) and then uses machine learning to differentiate normal from abnormal. This is precisely what we as medical specialists (like ophthalmologists, radiologists, dermatologists, pathologists, and others) do in everyday diagnosis and even during treatment (i.e., surgical) procedures. We "learn" through training and experience and then identify abnormal conditions in our patients. What's more, as Lou so methodically outlines throughout this book, sensitivity and specificity study results suggest equal or superior comparisons between machine (computer) AI findings and those of the respective specialists in the field. Given this accuracy and breadth of AI, one can easily see it (as some do) as an existential threat to the human factor in medical specialty care. I don't quite see it that way as much as an adjunct to, and verification of, my 40 years of experience and, hopefully, my willingness to always give a second thought to my clinical decisions. Nothing wrong with a "curbside consult," be it from a colleague or an "intelligent machine." I don't know of any medical professional who would ignore a suggestion coming from an analysis of a database of hundreds of millions of sources ("Big Data Analytics"). But again, Lou manages to skillfully demonstrate the immeasurable benefits that AI brings to specialty care through a balance of "big data," robotics and efficient technology versus irreplaceable real-world considerations (i.e., the value of increased personal time and interactions with a patient versus strictly quantitative


analysis). Lou's frequent references to the AI algorithm "Explainable AI" (XAI), which gives the user a rationale for AI's conclusions and recommendations, exemplify the synergies of a partnership, if you will, between AI and the medical specialist. What I find so valuable in this book is the way Lou has assessed AI's enormous role in the critical issues related to administrative health care and public health, as well as the most prevalent clinical issues we face today. As an active associate in our practice, I am very aware of Lou's keen interest in immunology and genetics. In these two areas in the text, he shows perhaps AI's greatest contributions to current and future health and wellness. He skillfully describes AI's influences and applications in the diagnostic aspects of these two fields (in Chapters 5 and 6) and then expands the discussion (in Chapter 7) to the vital therapeutic role immunology and genetics are playing in health care and how AI is making it all possible.

After 40-plus years as a medical/surgical specialist, I am thrilled to see a technology like AI entering my domain of practice, ophthalmology. I am certain that specialists in all fields feel the same as we reap the benefits of such a powerful and disruptive technology. I am also thrilled to see the book that Lou has produced from his extensive background, expertise and knowledge base in computers, AI, and in health care. It will help us all make the transition to a new level of health and wellness care for our patients.

Ernst Nicolitz, MD, FACS
Senior Ophthalmic Surgeon, Nicolitz Eye Consultants, Jacksonville, FL, United States

Preface

The three principal subjects of this book are artificial intelligence (AI), health care and related biosciences, three rather complex topics. Their interrelationship, integration, and interoperability (a word you'll be hearing a lot more about) add to the complexity of the subject matter. Meanwhile, the audience at which the book is aimed includes multiple disciplines of health care providers, researchers, administrators, scientists, information technologists (IT), educators, students, and interested laypersons. Among each of these groups will be individuals with varying levels of expertise and experience in either or all of the topics to be discussed. And regarding the interrelationships of the three topics, few individuals in any discipline command the full breadth of the evolving and disruptive spheres of AI, healthcare, and the biosciences.

This book hopes to reach readers through understandable explanations of the biosciences, AI, and health, and mostly, their integration and interoperability (there it is again). Please understand that certain portions of the text in your area of expertise might be developed at an introductory level to help educate the reader with less or no experience in that area. No discussion at any level is intended to be patronizing or condescending, and I hope no one will interpret it as such. Rather, my goal in writing this book is simply to provide each group of readers with a comfortable base of relevant information in all the related areas and their current and evolving synergies that this book strives to address.

The mathematical formulae and equations associated with AI and their applications in current and evolving healthcare concepts (business and clinical) can be considered, at the very least, challenging to all. The author (moi) is a clinical and academic health care professional with over 50 years' experience, and a student of AI for the past 10 years. While comfortable with the many levels of business and clinical health care, I will be the first to acknowledge unadulterated terror when confronted with Bayesian theorems, linear algebra, multivariable calculus, the stuff of which AI algorithms are made. It also took me years to begin to understand the applications of data science and the complex computer programming used in integrating AI into health care. Thus I will be taking author's privilege (aka a bit of cowardice) to avoid in-depth coverage of some of these areas in favor of the more practical, less theoretical applications of AI in health care and bioscience. I have strived to remain acutely aware that the discussions and the text of this book, albeit challenging, should be no more difficult to understand and follow than the contents of a simple traveler's guide for all interested readers regardless of their background. I use this analogy of a "traveler's guide" because that's the way I try to unfold your journey through AI and its applications in health, bioscience, and wellness. I will never discuss a subject or specific


topic wherein I haven't first presented its fundamental concepts and elements. And as often as possible, I will refer you back to those basic explanations when necessary (with page references) on any advancing, complex topic. I have even color-coded certain concepts in the text and illustrations to make the journey a little more colorful and easier to follow. While I refer to this book as a "guide book," I also see it as a reference book to which I hope you will return when reading or hearing other information on AI and health care that may leave you with questions.

Whereas the subject areas of the book, AI, health care, and bioscience are each relatively complicated topics, the greater goal of this book is to present an appreciation of the increasingly intimate relationship between these sciences and technologies. To accomplish that goal, after descriptions of the basics of computers (Chapter 2) and AI technology (Chapter 3), I spend considerable time describing the business aspects of health care (indeed a "very big" business) in Chapter 4. Then I do a deep dive into AI applications in the clinical and bioscience aspects of health care (especially my passion for immunology and genetics) in Chapters 5–7. Admittedly, I do use considerable medical terminology, which is necessary for a fuller understanding of the condition(s) to be discussed. But hopefully, the glossary at the end of the book will serve as an aid when needed. For the AI-related aspects of the book, I try to represent the profound relationship AI has with health care (clinical and business) in understandable terms. But the significant basis of all the discussions is my reliance on the most recent AI literature that I have exhaustively researched in each area with discussion, excerpt, and references to over 1600 AI, health and bioscience research, and technical papers. And, I must admit, and apologize in advance, that some of the information provided in these literature reviews may be challenging at times with the authors' and researchers' use of specific technical terms and descriptions that border on, and admittedly, cross the line of "simple."

There are lots of books written on AI and health care, but too many of them presuppose or assume more than the reader's experience provides. This book aims to provide an understandable picture of AI in health care and related biosciences, in its current state at the date of publication, and to do so in a manner that accommodates health care and IT professionals, technologists, technicians, and even interested laypersons through an organized, methodical, and simple approach. But let's remember the words of Albert Einstein: "Everything should be made as simple as possible, but not simpler."

Postscript

The manuscript for this book took over a year to organize and construct. During the final weeks of those efforts (February 2020) the world was attacked by the novel coronavirus (SARS-CoV-2). A virtually instant need for, and new applications of, AI were introduced into health care. The clinical, scientific, public health, and AI world responded. So too have I, by adding a Chapter 8, SARS-CoV-2 and COVID-19 Pandemic, to the completed manuscript. I hope it offers a helpful summary (albeit dated to future readers) of COVID-19 and its associations with AI.

Acknowledgments

To paraphrase an old proverb, "It took a village to write this book." When you combine the relationship of two subjects, artificial intelligence (AI) and health care, we all know that the magnitude of each fills countless volumes of text and terabytes of databases. It's quite obvious that completing such a comprehensive analysis can't be done alone. In spite of only one author's name on the cover of this book, it should truly be an exhaustive list of: (1) researchers; (2) AI and health care experts; and (3) editors and friends who collectively helped make it a reality. While I can't enumerate all of the specific entities and individuals in each of these groups, allow me to briefly mention how, together, they made up the "village" that contributed so much in putting my name on the cover.

First, the amount of research required to even approximate an overview of AI's influence and applications in health and wellness is proliferating daily and requires enormous amounts of time. But more so, it means diligent review of countless technical, biomedical, and clinical literature papers, books, journals, as well as weekly and monthly magazines (e.g., Forbes, Harvard Business Review, Time, Newsweek, and others) and daily newspaper articles. Coverage of this abundance of information resources would be impossible without the invaluable search assistance of PubMed, Medline, and MedlinePlus databases provided by the US National Library of Medicine, National Institutes of Health, and Centers for Disease Control and Prevention, and by the powerful search engines, Google Scholar, ScienceDirect, and Scopus. Combined, these resources provided far more information than could ever be included in covering any one subject, let alone two.

The second group in "the Village" I want to thank are those "behind the scenes" contributors. Specifically, I speak of the experts upon whom I relied indirectly and directly. Those whom I consider to have been among the greatest assets in my ability to complete this book include experts in AI, most of whom don't even know I exist and very well may never even be aware of me acknowledging them for their contribution. These behind-the-scenes individuals are the AI instructors in MIT's CSAIL, Johns Hopkins', and Stanford's online AI courses, Udacity, edX, and others. Over an 8-year period, they have taught me the science and technology (maybe not as much the math!) that I needed to understand AI and to be able to present it (hopefully) in a cogent manner in this book.

And then there are the professional and personal "Villagers" who helped me and supported me editorially, cerebrally, and emotionally through this long editorial journey as well as over many years of tutelage and friendship. I have always leaned on Craig Percy, the editor of my first book, back in 1986, for guidance and often, for brutal honesty, but mostly for a deep friendship which has lasted for more than 30 years. Of course, contemporarily, I am


grateful to Chris Katsaropoulos, my Elsevier acquisition editor, for guiding me through the early stages of the publishing process. Chris is a first-class professional with a personal manner that goes a long way during the sometimes stressful, emotional phases of the approval process. And certainly, Rafael Trombaco, the Elsevier Project Manager (EPM), Niranjan Bhaskaran, the Elsevier Production Manager, and Venkateswari Kalaiselvan, copyeditor, were the professionals who transformed my "scattered manuscript" into a beautiful book. They managed the complex pieces of the editing production process as a conductor would lead a symphony orchestra. And we can all thank them for the exceptional literary composition they produced.

You probably noticed two Forewords in the front matter of the book, which is not typical. I did so because I wanted the readers to get a real-world perspective on AI from a primary care provider's and medical specialist's viewpoint. Dr. Ernst Nicolitz is the senior surgeon in our multidisciplinary eye care practice, Nicolitz Eye Consultants in Jacksonville, FL, and a respected colleague of mine for over 25 years. World class surgeons like Ernie are sometimes less than humble. But among Ernie's greatest strengths are his humility, warmth, and kindness, which are reflected in his patient care. I knew he would give an honest and objective clinical and personal specialist's feeling about AI and he did. And on the primary care side, I wanted that same humility, warmth, and kindness, and there was only one physician I was certain had it all, my own personal family doctor, Adam Dimitrov. I asked Adam to write the Foreword in late 2019, fortuitously just prior to the coronavirus pandemic. Knowing what a difficult and stressful time the following months would turn into for him as the pandemic worsened, I fully expected and understood his need to cancel. Instead, he wrote a beautiful and thoughtful piece on AI's role in primary care and the view of a primary care physician, literally from the trenches.

Others, who again never even knew they were helping me through the challenge of weaving together this complex of AI, business, medical, and public health information, deserve a shout-out. They include a math genius and good friend, John Paulos, whose lucid and, believe it or not, humorous "simplifications" (his New York Times best-seller "Innumeracy" is a must-read) of some of the most complex mathematics on planet earth. I relied on his explanations regularly in trying to explain (and often avoiding, with apologies to the reader) the multitude of linear algebra, statistics, and regression analyses that are part and parcel of AI. On the business side of health care, I am indebted to a very special longtime friend, advisor, and financial wizard, Bob Jackson, for guiding me through many of the nuances involved in governmental and nongovernmental bureaucracies. And I can't forget my friend, Brian Armitage, who was the first person to proofread the rough draft of my manuscript (over 900 pages) and gave me a thumbs up, which meant more than he'll ever know.

I dedicated the book "to all the health care workers who have served us all during the COVID-19 pandemic and every day," all of whom deserve our enduring thanks. Also, the dedication includes "the families of all those lost to the COVID-19 pandemic." Their losses should never be forgotten. And finally, I offer a very special thank you and all my love to Stephanie, my wife of 52 years (at the time of this writing).
She was a great mother to our three children, one of whom was disabled and whose loss to cancer we suffered together. She is my inspiration, my moral and emotional support, my strength, and my cheerleader. Without her there would be no book.

SECTION I

Artificial intelligence (AI): Understanding the technology

"It is not the strongest of the species that survives, nor the most intelligent; it is the one most adaptable to change." Charles Darwin

Introduction

Change is disruptive. By definition, it never leaves us in the same place we started. It moves us to another place. Sometimes that other place is familiar and comfortable, and sometimes it is new and different. When it is a new place, it requires us to learn and understand it. If it is uncomfortable or disruptive, it leaves us with the choice to reject it and retreat to the same place we started or to accept the disruption, understand it, and adapt to it. Health care is a place where change is continuously occurring in the interest of improving the human condition. These changes include new therapies, new technologies, new procedures, and new methods of communication. These changes take us to a new place in the art and science of caring for people and require health care providers to continually learn and understand these new places where health care is moving. When the changes are disruptive to the status quo, health care providers do not have the option to reject them and retreat to the more comfortable place they started. To do so would be to deprive their patients of new and hopefully better ways and means for providing care and, indeed, a better level of health care. So, adapting to disruptive changes and technologies in health care is not an option but rather a requirement. Disruption and adaptation are an intrinsic part of the process of health care.

Clayton Christensen, a professor at the Harvard Business School, introduced the term disruptive innovation. He defined it as "a process (in this case, health care) by which a product or service takes root initially in simple applications at the bottom of a market and then relentlessly moves upmarket, eventually displacing established competitors" [1]. Artificial intelligence (AI) is a disruptive technology. It can be defined simply as a branch of computer science dealing with the simulation of intelligent behavior in computers, the capability of a machine to imitate intelligent human behavior [2]. Albeit simplistic, the implications of that definition nonetheless suggest an existential change in all areas where human intelligence currently controls decision-making. Thus, the possibilities for AI are endless (Table Intro 1.1), from models and algorithms that can predict the effects of climate


Table Intro 1.1  Limited list of AI applications.
1. Agriculture
2. Aviation
3. Computer science
4. Education
5. Finance
6. Algorithmic trading
7. Market analysis and data mining
8. Personal finance
9. Portfolio management
10. Underwriting
11. History
12. Government and military
13. Heavy industry
14. Hospitals and medicine
15. Human resources and recruiting
16. Job search
17. Marketing and advertising
18. Media and e-commerce
19. Music
20. News, publishing and writing
21. Online and telephone service
22. Power electronics
23. Sensors
24. Telecommunications
25. Toys and games
26. Transportation
27. Artificial life
28. Automated reasoning
29. Bio-inspired computing
30. Concept mining
31. Data mining
32. Knowledge representation
33. Email spam filtering
34. Robotics
35. Automotive
36. Cybernetics
37. Developmental robotics
38. Evolutionary robotics
39. Hybrid intelligent system
40. Intelligent agent and control
41. Video games
42. Litigation

change to those that help researchers improve health care. It should be no surprise that AI will soon become the world's most disruptive technology [3]. An interesting parlor game might be to have a group of friends identify a time span and list what they would consider to be the top 10 disruptive technologies during that time period. Each individual's selections would probably make for interesting discussion. The common denominators among the listings would undoubtedly produce a compelling list of advances during the identified period that the group of friends could reflect on. A discussion of how long, how difficult, how comfortable or uncomfortable they felt in adapting to the disruption(s) produced by the changes associated with each technology would reveal interesting personal perspectives. Just to test the validity of this proposed "game," presented below is a list (author's choices) of the top 10 disruptive technologies since the year 1900 to the present. But, before reading my list, make your own list and then see if there are any common denominators between lists and among the different items which you think deserve a place in "the final listing."

1. Flight (1903)
2. Refrigeration (1920)
3. Antibiotics (the 1930s)
4. Microprocessor Computers (1947)
5. Space exploration (the 1960s)
6. The Internet and World Wide Web (1983)
7. The Human Genome (2003)
8. Immunogenetics and immunotherapies (since 2000)
9. The smartphone (2007)
10. Artificial intelligence (since 2007)

Another interesting option in this little parlor game would be to identify a specific topic with or without a timeframe and create your top 10 list. Many such listings (top 10, top 5, etc.) are found on the Internet on an array of categories, both general and specific. One such defined listing, as an example and relevant to the subject of this book, is a list of the "Top 10 Medical Innovations of 2019" selected and prioritized by a panel of Cleveland Clinic physicians and scientists [4].

1. Pharmacogenomic testing
2. Artificial intelligence
3. Treatment of acute stroke
4. Immunotherapy for cancer treatment
5. Patient-specific products achieved with 3-D printing
6. Virtual and mixed reality
7. Visor for prehospital stroke diagnosis
8. Innovation in robotic surgery
9. Mitral and tricuspid valve percutaneous replacement and repair
10. RNA-based therapies

To illustrate how the nature of top 10 or top 5 lists can vary with their creators, consider the top 5 list Forbes Magazine published in 2017 of the "Five Technologies That Will Disrupt Healthcare By 2020." [5]

1. Artificial intelligence
2. Immunotherapies/genetics
3. Big Data Analytics
4. CRISPR
5. 3D Printing

Notwithstanding the predictable parochial nature of the priorities and initiatives of the Cleveland Clinic (medically oriented) compared to the Forbes Magazine (business-oriented) listings, it’s noteworthy that 4 out of the top 10 selections (#2, #4, #8 and #10) for the Cleveland Clinic and 4 out of 5 in the Forbes list relate directly to AI. Further, as will be reflected in Section 2 of this book (“AI Applications in Health Care”), most of the other items on both lists also rely heavily on AI. The fact is, universal presence and high prioritization of AI appear on virtually every top 10 listings on the Internet, be it related to business, economy, technology, and indeed, to health care. We will revisit this “game” of Top 5 and Top 10 lists in Section 2 of this book when we discuss AI applications in an array of business, administrative and clinically related health issues. The amount of AI technologies disrupting health care is so vast and expanding so rapidly that being all-inclusive is an impossibility. Research for this book included exhaustive


Table Intro 1.2  Sources of references (in alphabetical order).
• Academic sites (Johns Hopkins Univ., Stanford, Harvard, MGH)
• Center for Disease Control and Prevention (CDC)
• Commercial IT industry websites
• Google Scholar
• Government (GOV) and Non-government (NGO) health, AI and IT organizations
• Health and wellness websites
• Institute of Medicine
• Major business journals and newspapers (Forbes, Fortune)
• Major newspapers (New York Times, Washington Post)
• Medical websites
• Medline
• National Academy of Science
• National Institute of Health
• PubMed
• Research Institutes and Foundations (Brookings, RWJ, Kaiser)
• Science Direct

reading and reviews of literally thousands of topic related articles and papers from an array of respected and recognized sources (Table Intro 1.2). Final selections of materials for inclusion in the book are based on relevancy to the topic, the currency of information (unless historically pertinent), and the authenticity of the source. So, what we will be doing (in Section 2) is highlighting for discussion and review the top 3 “Primary” AI technologies and/or programs in each of the health care categories to be covered. Then we will include a listing of 3 “Additional” related AI technologies (with citations) in current use or development. The reader will be able to select any topic from that list that they find interesting, or that may have direct relevance to them personally or professionally for their additional reading and research. And finally, every 2 3 years, we will present subsequent editions of this book wherein we will offer a new list of “Primary” top 3 listings of AI technologies for discussion and an updated “Additional” list (of 3) for your review and reference. It is apparent from both retrospective and prospective analysis that AI as a disruptive technology will represent a significant factor in most areas of life and certainly in health care moving into the future. It is also likely that Moore’s Law will prevail. This Law states that processor speeds or overall processing power for computers will double every 2 years, while costs will be halved [6]. Given this proven axiom and its general rate of human adaptability, New York Times’ economist, Thomas Friedman’s projected estimate (graphically illustrated [Fig. Intro 1.1] in his book “Thank You for Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations” [7]) may prove accurate as to our general delayed adaptability to change. There is nothing more powerful than an idea whose time has come (Victor Hugo). AI is a disruptive concept and a science whose time has come. The goal of this book is to first provide a general understanding of AI as a science and technology (Section 1) and then,


FIG. INTRO 1.1 Moore's Law. Processing power for computers will double every two years while costs will be halved. Adapted from Friedman TL. Thank you for being late: an optimist's guide to thriving in the age of accelerations. 2016.

its applications in health care’s business and clinical domains (Section 2). Depending on your current level of understanding of AI, you may view discussions in this book as anywhere from an elementary level through an intermediate or even advanced level discourse of the subject. But, given the exponential rate of progress and applications in this science and technology, especially in health care, in 5 years you may likely find the book’s discussions, especially in health care applications of AI, “old news.” You may have derived from the book’s title and descriptor (“Foundations of Artificial Intelligence in Healthcare and Bioscience: A User-Friendly Guide for IT Professionals, Healthcare Providers, Researchers and others interested in AI in health and wellness”) that my goal is to approach this highly complex subject in as “comfortable and casual” a literary manner as possible. As such, I have attempted to limit overly scholarly, academic, technical, or clinical jargon except where necessary. Further, in that effort, as well as a product of the dynamic nature of the subject matter, some of the cited materials used in the book (all cross-referenced and validated) are not all from traditional, academic, scientific publications. Instead, they are taken from timely news articles, current monthly, even weekly periodicals, and commercial and lay media websites and magazines. And finally, despite the objective nature of the computer-related science and technologies defining AI, the most challenging aspects in adapting to this disruptive technology may well be the subjective and ethical issues it generates. “Is AI creating an existential threat to the integrity, uniqueness, even the essence of humankind?” Is it creating a “humanoid” existence? [8] And specifically to health care, will AI replace doctors or make them better? [9] (More on these perplexing questions in the Epilogue.) Some respected scientists and futurists like the late Stephen Hawking [3], Elon Musk et al. say, “. . .such powerful systems would threaten humanity” [10]. Other equally respected scientists like Bill Gates (“AI is the holy grail”) [11], Ray Kurzweil et al. claim, “Artificial intelligence will reach human levels by around 2029” [12]. Meanwhile, scientific institutions like MIT [13], Stanford University [14], Brookings Institute [15], et al., and expert business sources


like Harvard Business Review [16], Forbes [17], NY Times [18], McKinsey [19], etc. are enthusiastically supporting and encouraging AI’s continued growth. This book will provide an objective presentation and an understanding of AI and its applications in health care. However, it will not address or opine on the associated subjective, ethical, and emotional questions regarding AI technology and its effects on health care going forward. That will be for the bioethicists to debate and for you, the reader, to decide for yourself.

References

[1] Christensen C. Disruptive innovation. Harvard Business Review; 2012.
[2] Merriam-Webster's unabridged dictionary. 2019.
[3] Laura HG. Why AI will be the world's most disruptive technology. DueDigital.com; 12 March 2018.
[4] Saleh N. Cleveland Clinic unveils top 10 medical innovations for 2019. MDLinx; 2018.
[5] Press G. Top 10 hot artificial intelligence (AI) technologies. Forbes Tech; 2017.
[6] Kenton W. Moore's Law. Investopedia; 2019.
[7] Friedman TL. Thank you for being late: an optimist's guide to thriving in the age of accelerations. Farrar, Straus, and Giroux; 2016.
[8] Purkayastha P. Artificial intelligence and the threat to humanity. NewsClick; 2017.
[9] Parikh R. AI can't replace doctors. But it can make them better. MIT Technol Rev 2018.
[10] Lewis T. A brief history of artificial intelligence. Live Science; 2014.
[11] Ulanoff L. Bill Gates: AI is the holy grail. Mashable Tech 2016.
[12] Marr B. 28 best quotes about artificial intelligence. Forbes 2017.
[13] Kiron D. What managers need to know about artificial intelligence. MIT Sloan Manage Rev 2017.
[14] Parker CB. One hundred year study on artificial intelligence (AI100). Stanford News Service; 2016.
[15] Desouza K, Krishnamurthy R, Dawson GS. Learning from public sector experimentation with artificial intelligence. Brookings Institute Think Tank; 2017.
[16] Brynjolfsson E, McAfee A. The business of artificial intelligence. Harvard Business Review (cover story); 2017.
[17] Columbus L. How artificial intelligence is revolutionizing business in 2017. Forbes Tech Big Data; 2017.
[18] Lewis-Kraus G. The great AI awakening. The New York Times Magazine; 2016.
[19] Burkhardt R, Hohn N, Wigley C. Leading your organization to responsible AI. McKinsey and Comp; 2019.

1 The evolution of artificial intelligence (AI)

1.1 Human intelligence

"I don't think there's anything unique about human intelligence." Bill Gates

To understand "artificial" intelligence, a review of "bona fide" human intelligence is necessary. The definitions for human intelligence are extraordinarily complex and range from evolutionary to biological to neurological to psychological explanations. Large bodies of research and literature exist from studies of each type of intelligence and their relationships to human intelligence. A 2006 paper entitled "A Collection of Definitions of Intelligence" [1] lists an astounding 71 definitions for "intelligence." They conclude that ". . .scan(ning) through the definitions pulling out commonly occurring features, we find that intelligence is:

• A property that an individual agent has as it interacts with its environment or environments;
• Intelligence is related to the agent's ability to succeed or profit concerning some goal or objective; and
• It depends on how able the agent is to adapt to different objectives and environments."

Putting these key attributes together, they adopted their informal definition of intelligence to be: "Intelligence measures an agent's ability to achieve goals in a wide range of environments." After extensive research on the meaning of human intelligence for this discussion, I settled on a definition found in an article in a neuroscience journal (Nature Reviews Neuroscience) entitled "The Neuroscience of Human Intelligence Differences." In it, a study was cited [2] wherein 52 prominent researchers agreed on a broad definition of intelligence. The reason I have selected this neuroscience-related definition is the intimate relationship between AI and neuroscience that will be discussed in detail in Chapter 3's discussion on neural networking and "deep learning." Thus, the definition of human intelligence to be used in comparison with AI is as follows: "Intelligence is a very general capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. . . it reflects a broader and more profound capability for comprehending our surroundings, 'catching on,' 'making sense' of things, or 'figuring out' what to do" [3]. The structural elements outlining such a definition of intelligence include the general categories of reasoning, learning, perception, problem-solving, and communication.


Table 1–1  The general categories of human intelligence represent the array of cognitive and noncognitive features of human thinking and acting. These features are emulated and reproduced through algorithms in artificial intelligence.

Intelligence category    Category features
Reasoning                Inductive; Deductive
Learning                 Auditory; Experience; Motor; Sensory; Observational; Memory; Perceptual; Pattern recognition; Relational; Spatial; Stimuli
Perception               Visual; Hearing; Cerebral
Problem solving          Assessment; Decision-making; Solutions; Alternatives
Communication            To speak; and To recognize and understand spoken and written language

Each of these categories includes subcategories that define their functional attributes (Table 1–1). These categories and subcategories are revisited in Chapter 3, "The Science and Technologies of Artificial Intelligence," where their analogs in AI are identified and compared.

1.2 Defining artificial intelligence (AI)

Given the complexity of defining human intelligence, it's no wonder that the definition of artificial intelligence (AI) presents its own set of intricacies. The modern dictionary definitions of AI also focus on the technology as being a sub-field of computer science and how machines can imitate human intelligence (being human-like rather than becoming human) [4]. This feature of AI is demonstrated by a test developed by an early pioneer in computer science, Alan Turing (Fig. 1–1) [5]. As early as 1936, this British mathematician had a profound influence on the science of computing [6] in his writings, research, and his code-breaking activities during World War II (the excellent movie "The Imitation Game" chronicles his enormous achievements and his tragic life). He is also considered a founding father of AI from his theorizing that the human brain is, in large part, a digital computing machine [7] (see the Neural Networking discussion in Chapter 3, page 31) and measurable by the "Turing Test" that he developed in 1950. The Turing Test is a method of testing a machine's (computer's) ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. During the test, one human functions as the questioner while a second human and a computer function as hidden respondents (Fig. 1–1). The questioner interrogates the respondents using a specified


FIGURE 1–1 Turing Test. One human functions as the questioner while a second human and a computer function as hidden respondents. The questioner interrogates the respondents using a specified format and context. After a preset length of time or number of questions, the questioner is asked to decide which respondent was human and which was a computer.

format and context. After a preset length of time or number of questions, the questioner is asked to decide which respondent was human and which was a computer. The test is repeated numerous times, and if the questioner makes the correct determination in half of the test runs or less, the computer is considered to have artificial intelligence. That is, the questioner regards it as "just as human" as the human respondent [8]. The challenges of defining AI began to be addressed in earnest at a conference at Dartmouth College ("Dartmouth Summer Research Project on Artificial Intelligence") in 1956, when researchers John McCarthy and Marvin Minsky coined the term artificial intelligence [9]. The studies from that conference concluded, ". . .every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." That conclusion evolved into a modern Merriam-Webster's definition of AI (stated previously in the Introduction) as "a branch of computer science dealing with the simulation of intelligent behavior in computers; the capability of a machine to imitate intelligent human behavior" [10]. The concept of AI, as envisioned and articulated by the genius of Turing, McCarthy, and Minsky, seemed to have preceded the actual computer hardware and software necessary for its full maturation, especially its cornerstone and most profound component, deep learning. Slow but steady progress was indirectly stimulated in 1982 when the Japanese government invested $400,000,000 in the "Fifth Generation Computer Project" (FGCP), which did not meet its ambitious goals of revolutionizing computer processing and improving AI through logic programming over the following 10 years [11]. But it did inspire a new generation of computer engineers and scientists [12]. As a result, through the 1990s to 2000s, even in the absence of major funding, AI development thrived and began to meet the original goals identified by FGCP. Increasing international interest grew exponentially with AI's dramatically increasing memory capacity, processing speeds, natural language processing (NLP), and machine


learning (ML). Applications in banking and the financial market vagaries of the 1990s and early 2000s began to produce mainstream and serendipitous press attention to AI [12]. In 1997, the reigning world chess champion and grandmaster Garry Kasparov was defeated by IBM's Deep Blue, a chess-playing computer program [13]. AI computers continued to become "smarter," proving it to the world with IBM's "Watson" defeating "Jeopardy" champions Ken Jennings and Brad Rutter in 2011 [14]. Also, Google's DeepMind AlphaGo program defeated the Korean Go champion Lee Sedol in 2016 and the Chinese champion Ke Jie in 2017 [13]. After these events, appreciation of AI grew from curiosity to excitement as human adaptability converged with Moore's Law. The introduction of AI, specifically into health care, experienced resistance in the 1970s and 1980s, due in large part to practitioner conservatism and uncertainties in this evolving technology. Among those concerns was AI systems' inability to explain how decisions are reached. A cognitive scientist (Daniel Dennett) at Tufts University put it best. "Since there are no perfect answers, we should be as cautious of AI explanations as we are of each other's. No matter how clever a machine seems, if it can't do better than us at explaining what it's doing, then don't trust it" [15]. Indeed, early iterations of AI computing did not provide software "explanation facilities" for computed conclusions. Since then, however, algorithms have grown to include "Explanation Facilities" (now referred to as "Explainable AI" or XAI) in their output information. Mass General Hospital has already developed and tested "Explanation Facilities" that aim to give AI systems used in medical diagnoses the ability to explain their reasoning rather than leaving the algorithmic conclusions a mystery [16]. Along with inherent human adaptability, reticence to disruptive technologies, and the limited funding resulting from the FGCP experience through the '80s and '90s, it wasn't until the 1990s that AI was seriously incorporated into health care. Even then, and to this day, concerns regarding professional liability in diagnosis and treatment continue to limit AI's use to augmenting clinicians' decisions in clinical health care rather than providing decision-making itself [17]. Notwithstanding the perceived limitations in AI's applications in clinical health care, there was an early awareness of the value of AI's big data analytics and blockchain technologies in the management of health care information and data. The processing speeds and volumes of data manageable with evolving AI hardware and software now provide dramatic improvements in areas of health care information previously considered unattainable. These big data analytics capabilities have provided enormous benefits in the areas of "precision health care," "population health," medical research (especially in immunology and genetics), medical imaging analysis, and robotics. AI's applications in each of these areas of health care are discussed in depth in Section 2 of this book. To wit, the introduction and evolving applications of AI in health care information management and clinical care can only be understood and appreciated through a comfortable understanding of what AI is and how it works. That is the goal of Chapter 2 ("The Basic Computer") and Chapter 3 ("The Science and Technologies of Artificial Intelligence") of this book.


References

[1] Legg S, Hutter M. A collection of definitions of intelligence. Cornell University; 2007. arXiv:0706.3639v1 [cs].
[2] Gottfredson LS. Mainstream science on intelligence: an editorial with 52 signatories, history, and bibliography. Intelligence 1997;24:13–23.
[3] Deary IJ, Penke L, Johnson W. The neuroscience of human intelligence differences. Nat Rev Neurosci 2010;11:201–11.
[4] Stanton A. AI develops human-like number sense – taking us a step closer to building machines with general intelligence. Phys.org 2019; May 10.
[5] Turing A. Can machines think? Mind 1950;59:433–60.
[6] Copeland BJ. Alan Turing: British mathematician and logician. Encyclopedia Britannica. Updated January 23, 2019.
[7] Aggarwal A. Genesis of AI: the first hype cycle. scryanalytics.com/articles; January 2018.
[8] Rouse M. Turing test. Search Enterprise AI; December 2017.
[9] Marr B. The key definitions of artificial intelligence (AI) that explain its importance. Forbes Magazine; February 14, 2018.
[10] Merriam-Webster Dictionary. Merriam-Webster Inc.; 2019.
[11] Pollack A. 'Fifth generation' became Japan's lost generation. New York Times Archives; June 5, 1992.
[12] Anyoha R. The history of artificial intelligence. Harvard University SITNBoston; August 28, 2017.
[13] Press G. The brute force of IBM Deep Blue and Google DeepMind. Forbes; February 7, 2018.
[14] http://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html
[15] Knight W. The dark secret at the heart of AI. MIT Technology Review; April 11, 2017.
[16] Kreps GL, Neuhauser L. Artificial intelligence and immediacy: designing health communication to personally engage consumers and providers. Patient Educ Couns 2013;92:205–10. https://doi.org/10.1016/j.pec.2013.04.014.
[17] Uckun S. AI in medicine – an historical perspective. Rowanalytics; February 8, 2018.

2 The basic computer

This chapter (Two) on the basic computer, like the next chapter (Three), is meant to give you a practical understanding of computer systems from their essential functions through the more complex, algorithm-driven AI computer functions. They are not meant to present an exhaustive analysis of computer science and technology, but rather to provide a "user-friendly," general explanation with as little technical computer jargon as possible in this very complex topic. In other words, the goal of these next 2 chapters is to create a comfortable level of knowledge of the computer process, specifically AI computing, so that you fully understand the relevance, reasons, and usefulness of AI in health care. The word "algorithm" has already been mentioned numerous times up to this point, and it takes center stage in most of the AI descriptions going forward. Thus, an up-front definition of its literal meaning, as well as its complex relationship to AI, is valuable at this point. Simply defined, an algorithm is a procedure or formula for solving a mathematical problem, based on conducting a sequence of specified actions or steps that frequently involves repetition of an operation [1]. The fundamental framework (a platform for developing software applications [2]) of all AI computing is mathematics-based, including linear algebra, multivariable calculus, Bayesian logic, statistics, and probabilities, all of which algorithms utilize extensively [3]. As will be discussed, AI deals with enormous volumes of data (billions of data points) requiring incredible amounts of calculations, computations, and iterations (in the trillions) [4] done at almost instantaneous speeds. The development of sophisticated algorithms (to be discussed in Chapter 3, "AI Software") in conjunction with the rapid development of powerful AI hardware (Chapter 3, "AI hardware") capable of generating and delivering these incomprehensible amounts of calculations at near-instantaneous speeds is responsible for the explosion of AI computing [5]. The hardware and software of both basic and AI computer systems operate within 3 functional categories: input layer(s) and their related hardware; inner (hidden) layers and their related software; and output layer(s) and their related software and hardware. This book, as its title ("Foundations of Artificial Intelligence in Healthcare and Bioscience") implies, attempts to provide an "illustrated guide" to the components of basic and AI computer science and technologies and their applications in health care. To achieve this goal, the illustrations throughout Chapters 2 ("The basic computer") and 3 ("The science and technologies of artificial intelligence") are color-coded (Table 2–1) for clarity, cross-referencing and ease in following the explanations of each computing system. Next, we describe how each of the 3 layers of computing (input layer, inner [hidden] layer, and output layer) operate and interrelate to produce the basic computer process. After that, we discuss the key elements (language, programming, hardware, and software) of basic

Table 2–1 Color coding guide for multicolored illustrations in Chapters 2 and 3.


2.1 Layers of basic computers

2.1.1 Input layer

Input data to the input layer of a computer system is made up of information or data provided by an external source called an input device (Table 2–2). This external source of data (“user input”) may be a single command or a complex of information and facts. This input is characterized schematically in Fig. 2–1A by a series of red dots representing “data points” (or “data nodes” when they produce a network of data points). Collectively, these data points (or nodes) introduced by the input hardware device constitute the input layer(s) of the computer process (Fig. 2–1A). This illustrated model of the input layer uses a text command to a standard keyboard as its user input. This input process activates the operating system (OS) compiler that translates the user input into computational digital code (machine code). This code is sent to the RAM drive. Then the keyboard’s device driver (in the control unit [CU] of the central processing unit [CPU]; more on the CPU below and in Chapter 3, Page 47) activates the monitor’s I/O

Table 2–2 Common computer input devices (alphabetical).
• Barcode reader
• Cameras
• Digital camera
• Gamepad
• Graphics tablets
• Joystick
• Keyboard
• Microphone
• Mouse (pointing device)
• Optical character reader (OCR)
• Scanner
• Touchpads
• Trackballs
• Video capture hardware
• Webcam, USB

FIGURE 2–1A Basic computer model (input layer). Input data is made up of information or data provided by an external source called an input device (Table 2–2). This external source of data (“user input”) is characterized schematically by a series of red dots representing “data points” (or “data nodes” when they produce a network of data points).

(Input/Output, all discussed further under “Basic computer hardware”) device controller (in the CU of the CPU) that transmits the machine code to the monitor (and Application Programming Interface [API], if applicable). So, in this particular computer process of the input layer, the hardware (keyboard and monitor) are functioning as the input and output devices, respectively, i.e., an I/O (input/output) process, as illustrated in Fig. 2–1B. The API and CPU software are functioning as a data processing unit of the inner (hidden) layer.
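As a rough sketch of the input layer → inner (hidden) layer → output layer flow just described (keyboard in, processing out of view, monitor out), the few lines below read a line of text, “process” it, and display the result. This is an illustrative analogy only; the variable names are invented for the example and nothing here corresponds to actual device-driver or compiler code.

```python
# Input layer: the keyboard supplies the "user input."
user_input = input("Type a command: ")

# Inner (hidden) layer: the program processes the data out of the user's view.
processed = user_input.strip().upper()

# Output layer: the result is sent to the monitor (standard output).
print(f"Processed result: {processed}")
```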

FIGURE 2–1B Basic computer model (inner/hidden layer). In the computer input layer (red), the hardware (keyboard and monitor) are functioning as the input and output devices, respectively, i.e., an I/O (input/output) process; the application programming interface (API) and central processing unit (CPU) software (blue) are functioning as the data processing unit of the inner (hidden) layer.

This I/O, API, and CPU data processing unit explains the meaning of “hidden” as an alternate description of the inner (hidden) layer. The “inner” label explains the internal operational aspects of the software framework relative to the computer system at large. The “hidden” label implies computer activity removed from direct observation by the user. This hidden aspect also explains the concept of “back-end programming,” which is described ahead under “Basic computer language and programming.”

2.1.2 Inner (hidden) layer

The source code (user input) from the input layer is “translated” by the compiler software into target code, and, as part of the I/O process, it is transmitted to the output device (monitor in this example). This target code or binary (machine) code information also becomes the information that populates the inner (hidden) layer and is used by its software (OS, API, CPU, servers, and software apps) for data processing (Fig. 2–2A). The array of arrows between the input data points and the inner (hidden) layer (Figs. 2–3A and 2–3B) represents the transfer of the machine code target data to the inner (hidden) layer. Each data point (or node) in the input layer has the potential to be executed by any (or all) software programs in the inner (hidden) layer. This information transfer is referred to as “compilation.” In this compilation process, the target data is assembled (by “assembler” software) into an executable object file that is transmitted from the input layer to the inner (hidden) layer through the RAM drive (a hardware component). This process occurs in milliseconds

FIGURE 2–2A Basic computer model (inner layer framework). Target code or binary (machine) code information becomes the information that will populate the inner (hidden) layer (blue) and be used by its software (OS, API, CPU, servers and software apps) for data processing.

followed instantaneously by the execution of object code instructions to the CPU (microprocessor). This execution produces a series of stored programmed instructions to device controllers and device drivers to perform specific tasks, and to the CPU’s ALU (Arithmetic Logic Unit) for mathematical and logical operations on the information in the RAM drive. The object code file(s) also directs coded instructions to the API and appropriate servers and/or software apps in the inner (hidden) layer (Fig. 2–2B).

FIGURE 2–2B Basic computer model (inner layer functions). The object code file(s) (red) also directs coded instructions to the API and appropriate servers and/or software apps in the inner (hidden) layer (blue).

Table 2–3 Common computer output devices (alphabetical).
• Braille embosser
• Braille reader
• CPU (Central Processing Unit)
• Flat panel
• GPS (Global Positioning System)
• Headphones
• Monitor
• Plotter
• Printers
  a. Dot matrix printer
  b. Inkjet printer
  c. Laser printer
  d. 3-D printer
• Projector
• Sound card
• Speakers
• SGD (Speech-generating device)
• TV
• Video card

2.1.3 Output layer

The function of the inner (hidden) layer software is to interpret the user input and generate the data, information, and answers to the user’s queries and commands. This generated data and information is disseminated and displayed through multiple output devices (Table 2–3) constituting the output layer. The most common output device in basic computing is the monitor (used in this example, Fig. 2–3A). In the expansion to AI computing, the output becomes far more complex relative to the information provided and to the autonomous devices in robotics, where primary computer output serves as input for a robotic operating system (ROS). Lots more on robotics in the Chapter 3 AI discussion.

FIGURE 2–3A Basic computer model (output layer). The arrows between the input data points (red) and the inner (hidden) layer (blue) represent the transfer of the machine code target data to the inner (hidden) layer and on to the output layer (green).

FIGURE 2–3B Basic computer model (outer layer devices). Each data point (or node) in the input layer (blue) has the potential to be executed by any (or all) software programs in the inner (hidden) layer.

The output is produced from data processing in the form of a software response such as a mathematical calculation result, an electronically delivered text message, or audio or visual information. It also can be provided in a physical form such as a printout or graphics. It can be text, graphics, tactile, audio, and video (Fig. 2–3B). Virtually all data can be parsed by a suitably equipped and API-programmed computer into human-readable format. The alternative to a human-readable representation is a machine-readable format or medium: data primarily designed for reading by electronic, mechanical, or optical devices, or computers. Examples include barcodes, HTML code, graphics, and other data forms. Beyond the traditional output devices like monitors and printers, audio and speech synthesizers have become increasingly popular and are considered the preferable output option by many users and systems. Natural language processing (NLP) and generation (NLG), to be discussed in Chapter 3, are significantly advanced beyond basic computing’s sound synthesis and are now an integral part of AI computing.
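The distinction between human-readable and machine-readable output can be illustrated with a small sketch: the same record rendered once as display text for a person and once as a compact, machine-readable byte string (JSON here stands in for formats such as barcodes or HTML mentioned above; the field names are invented for the example).

```python
import json

record = {"patient_id": 1042, "systolic_bp": 128, "diastolic_bp": 82}

# Human-readable output, e.g., for a monitor or printout.
print(f"Patient {record['patient_id']}: "
      f"BP {record['systolic_bp']}/{record['diastolic_bp']} mmHg")

# Machine-readable output: a compact byte string another program can parse.
machine_readable = json.dumps(record).encode("utf-8")
print(machine_readable)   # b'{"patient_id": 1042, "systolic_bp": 128, ...}'
```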

2.2 Basic computer language and programming

Computer languages are called code, and it is through this code that the computer receives instructions (is “programmed”) to solve problems or perform a task. Because the goals of computer programs are so diverse, so too are the languages used (Table 2–4) [6]. Programmers (“coders”) select the languages that are most suited to address the goals of the program in use. Expert coders are skilled in multiple languages, and after they assess the problem or the task the program is attempting to answer, they can choose the most appropriate code to use [7]. Computers are machines and thus require a means of converting human language and commands (source code) from keyboards, spoken word (audio), or visual imagery into

Table 2–4 10 most popular programming languages (in order of popularity).
• Python
• Java
• C/C++
• JavaScript
• Golang
• R
• Swift
• PHP
• C#
• MATLAB

machine code (or computational language) for the input layer. This conversion is referred to as analog-to-digital conversion (ADC) and is accomplished by hardware and software called compilers. All computers understand only binary numbers [8]. The digital language the basic computer uses is called “binary code,” which has 2 discrete values, zero and 1. Each of these discrete values is represented by the OFF and ON status of an electronic switch called a transistor. The compiler, through complex electronic processes, converts the human-readable format into machine language digital format called machine code. It can also convert audio by sampling sound wavelength frequencies at precise intervals into machine language digital format. And compilers can also convert visual input by digitizing its matrix of color and light pixels. Other human senses, including hearing, sight, taste, and touch, are potential sources of input capable of being converted to digital code. There are numerous types of compilers with the capacity to translate any source code into executable mathematical, computational target code [9].

Computer programming is simply computer code (language) written electronically (by a programmer or coder) onto a semiconductor integrated circuit (a microchip), producing a set of instructions for an operating system (e.g., Windows, MacOS, Linux) or a specific software application. These instructions are written to allow the program to perform an endless variety of functions repeatedly and efficiently. When a microchip is programmed, it is called a microprocessor (e.g., the CPU and GPU discussed below), the central functional unit of computer software. Semiconductor companies (Table 2–5) [10] manufacture the microchips that computer hardware and software companies use to program their operating systems and applications.

Program code can be developed as “front-end” programming, where the program code is visible to the user. Front-end programming is typical in websites and browsers. “Back-end” programming is associated more with databases and servers. Other types of frameworks include (but are not limited to) programs such as Bootstrap, AngularJS, and EmberJS. These types of programs control how content looks on different devices such as smartphones and tablets [11].
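To see what “the computer understands only binary numbers” means in practice, the short sketch below shows how a compiler-like step might represent ordinary text as the zeros and ones the transistors switch OFF and ON. The encoding shown (8-bit ASCII) is just one illustrative convention, not the actual output of any particular compiler.

```python
def to_binary(text):
    """Represent each character of text as an 8-bit binary number."""
    return " ".join(format(ord(character), "08b") for character in text)

print(to_binary("Hi"))   # 01001000 01101001
```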

Table 2–5 The world’s top 10 semiconductor companies.
1. Intel
2. Samsung
3. Taiwan Semiconductor
4. Qualcomm
5. Broadcom
6. SK Hynix
7. Micron Technology
8. Texas Instruments
9. Toshiba
10. NXP
Walton J. The world’s top 10 semiconductor companies. Investopedia; May 13, 2019.

2.3 Basic computer hardware

Basic computer hardware consists of the physical elements of a computer system. All computers have different hardware specifications, but there are some essential components common to all of them. They include the devices and peripherals that are used for input, processing, output, and storage of data. One of the most critical pieces of hardware in a basic computer is the RAM drive, or random-access memory drive. It is a high-speed type of computer memory that temporarily stores all input information as well as the application software instructions the computer needs immediately and soon after. RAM is read from any layer of the computer at almost the same speed. As a form of dynamic, short-term memory, it loses all of its memory when the computer shuts down [12]. RAM should not be confused with ROM (read-only memory), which stores the program required to initially boot the computer and retains its content even when the device is powered off [13].

There are 4 categories of computer hardware [14]:

1. Input devices (Table 2–2) [15] transfer raw data into the input layer. Devices include (but are not limited to) keyboard, touchpad, mouse, joystick, microphone, digital camera, video, and webcam. The raw (or source) data includes anything that provides information and direction to a computer system. This would include (but not be limited to) information, commands, questions, audio, video, barcodes, imagery, facts, material, figures, details, data, documents, statistics, and sensory stimuli.

2. Processing devices or microprocessors (integrated circuits, e.g., the Intel Pentium chip) participate in all 3 layers of the computing process (and are sometimes termed I/O devices, explained below). They use a central processing unit (CPU) and, for AI, a graphic processing unit (GPU). The CPU or GPU is considered “the brain of the computer” (lots more on CPUs and GPUs in Chapter 3). The CPU consists of 4 main parts: (1) it receives a set of instructions as digital

Table 2–6 Computer data storage devices.
• Hard drive
• Magnetic strip
• SuperDisk
• Tape cassette
• Zip diskette
• Blu-ray disc
• CD-ROM disc
• CD disc
• DVD
• USB flash drive
• CF (CompactFlash)
• Memory card
• MMC
• NVMe
• SDHC Card
• SmartMedia Card
• SD card
• SSD
• Cloud storage
• Network media

code from RAM; (2) the control unit (CU) decodes and executes instructions; (3) the Arithmetic Logic Unit (ALU) performs calculations and logical operations from which it makes decisions; and (4) it sends output to the RAM drive and the hard drive for storage [16]. Using software called compilers (or translators) [17], the CPU transforms the raw data into binary code information (described in the “Basic computer language and programming” section above) that can be manipulated and stored as memory (caches) in RAM. This coded information is retained as memory only while the computer power is on. It is removed from RAM and stored on the hard drive when the computer is shut down.

3. Output devices (Table 2–3) [18] are part of the output layer(s) and disseminate and display data and information through multiple means including (but not limited to) digital monitor text, images, video cards, sound cards, printouts (2-D and 3-D), and robotics.

4. Storage devices (Table 2–6) [19] are retention and memory storage devices used for permanent storage (<SAVE> on the keyboard). They include (but are not limited to) peripheral devices such as hard drives, optical disk drives, and thumb drives.

Another essential addition to the hardware and software discussion is a group of hardware found in the CU of the CPU that controls communications between the multiple computer layers as well as between the computer input layer and external devices. These communicating hardware devices are called I/O (Input/Output) units. They include (but are not limited to) mouse, keyboards, monitors, touchpads, disk drives, display adapters, USB devices, bit-mapped screens, LEDs, analog-to-digital converters, on/off switches, network connections, audio I/O, and printers. I/O units typically consist of a mechanical component and an electronic component, where the electronic component is called the device controller. The device controller serves as an interface between a device and its device driver (software in the CPU that communicates with a specific device). As an example, input to a keyboard (mechanical component) is translated by a compiler to digital code that communicates with a specific device driver in the CPU, which activates a specific I/O device controller for a selected output device (e.g., a monitor or printer; see Table 2–3) [20].
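The 4 CPU steps listed above (fetch a coded instruction from RAM, decode it in the CU, compute in the ALU, write the result back) can be mimicked with a toy simulation. This is a deliberately simplified sketch assuming an invented three-instruction machine; real instruction sets and microarchitectures are far richer.

```python
# A toy "RAM" holding data values and a tiny program of instructions.
ram = {"a": 7, "b": 5, "result": None}
program = [("LOAD", "a"), ("ADD", "b"), ("STORE", "result")]

accumulator = 0                      # a single ALU register

for instruction in program:          # 1. fetch the next instruction
    opcode, operand = instruction    # 2. control unit decodes it
    if opcode == "LOAD":             # 3. ALU / data path executes it
        accumulator = ram[operand]
    elif opcode == "ADD":
        accumulator = accumulator + ram[operand]
    elif opcode == "STORE":          # 4. result written back to "RAM"
        ram[operand] = accumulator

print(ram["result"])                 # 12
```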

2.4 Basic computer software

The term “software” refers to the set of electronic program instructions or data a computer’s microprocessor reads in order to perform a task or operation. Based on what the instructions

accomplish, the software is categorized into 2 main types [21]. “Systems software” includes the programs that are dedicated to managing the applications and data integration into the computer itself and with its associated hardware. These programs include (but are not limited to) the operating system or “OS” (e.g., MacOS and Windows), compilers, application programming interfaces (APIs), device drivers, file management utilities, and servers. The second type of software is called “application software” or, more commonly, “applications” or “apps.” These user-specific programs are installed and managed through hardware input devices and microprocessors and enable the user to complete an enormous array of tasks ranging from basic educational, to recreational, to business and administrative health care computing and beyond. Examples include (but are not limited to) creating documents, spreadsheets, databases, and publications, doing online research, sending an email, designing graphics, running business and administrative functions, and even playing games.

Software applications in the inner (hidden) layer of a computer are organized in a somewhat abstract functional collection called a “framework.” The machine learning and deep learning categories of AI referred to frequently in Chapter 3 can be considered frameworks [2]. These platforms have their own specific (but changeable) language to communicate, interrelate, and interface with other general, specific, and application software programs. They can also be used to develop and deploy specific (e.g., research theories, financial models) or general (e.g., art, music, ecological) software applications.

Finally, the last software program discussed here, the “application programming interface” or API, is perhaps the most essential part of any computer system. It is a software intermediary (i.e., a “software to software” program) whose function is to specify how software components should interact or “integrate” with databases and other programs [22]. Most operating environments, such as MS-Windows, Google Maps, YouTube, Flickr, Twitter, Amazon, and many software applications and websites (millions), provide their APIs, allowing programmers to write applications consistent with their operating environment. A simple analogy for understanding how an API operates is to think of the API as a restaurant waiter taking your order (your input), communicating the order(s) to the kitchen (processing), and then delivering the food to your table (output). Effectively, the API software communicates, interacts, and integrates the entire computing process.
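Continuing the waiter analogy, the sketch below shows what “placing an order” with a web API typically looks like from the programmer’s side, using only Python’s standard library. The URL and the fields in the response are entirely hypothetical; each real API publishes its own endpoints, parameters, and formats.

```python
import json
import urllib.request

# Hypothetical endpoint; a real API defines its own URL and parameters.
url = "https://api.example.com/v1/patients/1042"

with urllib.request.urlopen(url) as response:   # the "waiter" carries the request
    payload = json.loads(response.read().decode("utf-8"))

print(payload)   # the "kitchen's" answer, delivered back to the caller
```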

2.5 Servers, internet and world wide web (www)

2.5.1 Servers

The term “server” simply means a machine (computer) that “serves” other machines; thus the name “server machine” [23]. Whereas processors and CPUs are “the brain of the computer,” servers are its “heart.” They are computers themselves (hardware supporting software) designed to process requests and deliver data to another computer, a local network, or over the Internet. There are multiple types of servers (Table 2–7) [24], including (but not limited to) file servers (local data to an intranet), printer servers, email servers, database servers, and web servers, which support web browsers (also called “clients”) [25].

Table 2–7 Types of servers.
• Server platform: Hardware or software system that acts as an engine that drives the server; used synonymously with operating system (OS).
• Application server: Computing and connecting database servers and the end user.
• Audio/video server: Multimedia capabilities to websites.
• Chat server: Comprises different network servers that enable the users to connect to each other.
• FTP server: File transfer protocol provides secure file transfer between computers.
• Groupware server: Enables users to work together in a virtual atmosphere irrespective of the location, through the Internet or a corporate intranet.
• IRC server: Comprises different network servers that enable the users to connect to each other through an IRC network.
• List server: Manages mailing lists for open interactive or one-way lists.
• Mail server: Transfers and stores mail over corporate networks through LANs, WANs and across the Internet.
• News server: Distribution and delivery source for many public news groups, approachable over the USENET news network.
• Proxy server: Mediator between client program and external server to filter requests, improve performance and share connections.
• Telnet server: Enables users to log on to a host computer and execute tasks as if they are working on a remote computer.
• Web server: Provides static content to a web browser by loading a file from a disk and transferring it across the network to the user’s web browser, intermediated by the browser and the server using HTTP.
Data from: Breylan Communications, 2020.

By definition, a server refers to a large, powerful computer that stores (hosts) websites and communicates with other computers, internet resources, software applications, databases, and printers. When you type (input) a URL (uniform resource locator) address into your computer browser (e.g., Safari, Chrome, Explorer), your computer communicates with the server hosting that website. This process allows the website data to be transmitted and displayed on your computer. The communication steps between a computer and a server, “the heart” of electronic computing, include the following components [23]:

1. The hypertext transfer protocol, or HTTP: This is the language that browsers and Web servers use to speak to each other.
2. The server name (“www.DomainName.com”): The Domain Name System, or DNS, translates the “human-readable” domain name that you provide into a numerical internet protocol (IP) address. The browser uses this IP address to connect to the Web server.
3. The filename (“web-server.html”): The file name relates to all of the files, like images, computer language, fonts and more, that are relevant to the particular website being visited.
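Two of the components listed above (DNS resolving a human-readable domain name to a numerical IP address, and HTTP then being spoken to the server at that address) can be sketched with the standard library as follows. The domain “www.example.com” is used purely as a placeholder.

```python
import socket
import urllib.request

domain = "www.example.com"            # placeholder domain name

# Component 2: DNS translates the domain name into a numerical IP address.
ip_address = socket.gethostbyname(domain)
print(f"{domain} resolves to {ip_address}")

# Component 1: the browser then speaks HTTP to the server hosting that domain.
with urllib.request.urlopen(f"http://{domain}/") as response:
    html = response.read().decode("utf-8", errors="replace")
print(html[:80])                      # first few characters of the returned HTML
```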

An IP address is assigned to your computer by your internet service provider (ISP) each time you log on, whereas a server’s IP address remains the same (static). All servers have an IP address. This IP address is how the browser, using a domain name (a human-readable stand-in for the IP address), communicates with a web server’s IP address to access the website’s specific HTML code (HyperText Markup Language, the universal website language) and pull up the site. Once connected, the browser uses the URL to request a specific file or page on the server’s website. The server then sends all the HTML text for the Web page you requested to your browser. From here, using the computer’s compiler, the browser converts the data from the Web page and formats the data onto the computer screen. All this happens in milliseconds.

In the example above, a URL uses the input source from a browser to demonstrate the relationship of the input to a web server. However, any input source (other computers, internet resources, software applications, databases, printers, as mentioned above) can be used to communicate with an appropriate server. There are numerous types of servers (Table 2–7), and over 80 million servers connected to the Internet [26], most of them now cloud-based.

2.5.2 Internet

The Internet, sometimes called “The Net,” is a global, decentralized network connecting millions of computers globally. It is a computer network infrastructure or “a network of networks,” with each computer capable of communicating with all the other computers connected to the Internet network. These connections are made through languages known as protocols, the most popular of which is HTTP (mentioned above), implemented through user computer hardware or software [27]. The amount of information transferred over the Internet per day (as of 2019) exceeds 5 exabytes [28].

The Internet is seen as having 2 major components. First is its software, or network protocols, responsible for translating the alphabetic text of a message into electronic signals that are transmitted over the Internet, and then back again into legible, alphabetic text. The protocols, such as the TCP/IP (Transmission Control Protocol/Internet Protocol), present sets of rules that devices must follow in order to complete tasks. The second principal component, the hardware of the Internet, includes everything from the computer or smartphone that is used to access the Internet to the cables that carry information from 1 device to another. These various types of hardware are the connections within the network. Devices such as computers, smartphones, and laptops are endpoints, or clients, while the machines that store the information are the servers. The transmission lines that exchange the data can either be wireless signals from satellites or 4G (soon 5G broadband) and cell phone towers, or physical lines, such as routers, cables, and fiber optics.

Each computer connected to the Internet is assigned a unique IP address that allows it to be recognized. When 1 computer attempts to send a message to another, the data is sent as a digital “packet” with an assigned IP address and port number (the address for the endpoint computer). A router will identify the port number and send the packet to the appropriate computer, where it is translated back into alphabetic text [28].
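A minimal sketch of the “packet addressed to an IP address and port” idea described above, using a plain TCP socket from the standard library. The IP address is a placeholder, and a real exchange of course requires a server listening on the other end.

```python
import socket

destination_ip = "93.184.216.34"   # placeholder IP address
destination_port = 80              # the conventional HTTP port

# Open a TCP connection; the operating system splits the data into packets,
# stamps them with the destination IP address and port, and hands them to routers.
with socket.create_connection((destination_ip, destination_port), timeout=5) as connection:
    connection.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    reply = connection.recv(1024)

print(reply.decode("ascii", errors="replace"))
```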

2.5.3 World wide web (www)

Sometimes confused with the Internet, the World Wide Web (WWW or “the web”) is a way of accessing, viewing, and sharing information over the Internet. That information, be it text, music, photos, videos or whatever, is written on the web pages (numbering in the billions) served up by a web browser (a software program to present and explore content on the Web) [29]. Browsing the web on the Internet remains a primary source of obtaining information. However, as will be demonstrated in Section 2 of this book (“AI applications in diagnostic and treatment technologies”), the rise of software apps has become increasingly popular. Computer users increasingly browse www sites and their hyperlinks to obtain news, messages, weather forecasts, videos, health information, and the like. These www sites now number greater than 1,275,000,000 [23].

Given the basics of computers that you have been introduced to in this chapter, you are now ready to enjoy their relationship and their extraordinary evolution into the science and technologies of artificial intelligence. I think you will quickly begin to understand the meaning of “disruptive technologies” as you appreciate the incredible elevation of 1 level of computing to the transitional horizon of AI’s immeasurable potential.

References

[1] WhatIs.com. Algorithm. TechTarget; March 2019.
[2] Software terms: framework definition. TechTerms; 2013.
[3] Prabhakar A. Mathematics for AI: all the essential math topics you need. Medium; August 9, 2018.
[4] Paruthi A. Artificial intelligence hardware. Medium; December 16, 2018.
[5] Singh H. Hardware requirements for machine learning. eInfochips; February 24, 2019.
[6] Goel A. 10 Best programming languages to learn in 2019. Hackr.IO; March 18, 2019.
[7] McCandless K. What is computer programming? Code Academy Insights; June 13, 2018.
[8] Rafiquzzaman M. Chapter 1, Introduction to digital systems. In: Fundamentals of digital logic and microcontrollers. 6th ed. Safari Books Online; 2019.
[9] Gedikli A. Development of programming learning environment with o techniques. J Invest Eng Technol 2018;1(2):14–18.
[10] Walton J. The world’s top 10 semiconductor companies. Investopedia; May 13, 2019.
[11] Park JS. The status of JavaScript libraries & frameworks: 2018 & beyond. Medium; March 29, 2018.
[12] Martindale J. What is RAM? Digital Trends; February 19, 2019.
[13] RAM vs. ROM. Diffen; 2019.
[14] Amuno A. The four categories of computer hardware. TurboFuture; January 28, 2019.
[15] Input. Computer Hope; November 13, 2018.
[16] Durden O. The importance of a computer CPU. Chron; March 20, 2019.
[17] Bolton D. What is a programming compiler? ThoughtCo. David Bolton; May 8, 2018.
[18] Output Devices. Computer Hope; April 1, 2018.


[19] Storage Devices. Computer Hope; January 31, 2019.
[20] Silberschatz A, Gagne G, Galvin PB. Operating system concepts. Google Books.com; 2018. p. 7–11.
[21] Kabir J. What is software and its types? Quora.com; July 12, 2018.
[22] Beal V. API - application program interface. Webopedia; 2019.
[23] Brain M. How web servers work. How Stuff Works; 2019.
[24] What are some of the different kinds of servers? Breylan Communications; 2019.
[25] Mitchell B. Servers are the heart of the internet. Lifewire; December 14, 2018. p. 309.
[26] Kučera L. How many servers exist in the world? Quora; 2016.
[27] Beal V. The difference between The Internet and The World Wide Web. Webopedia; August 7, 2018.
[28] Sample I. What is the Internet? The Guardian; October 22, 2018.
[29] Browser. Computer Hope; October 2, 2018.


3 The science and technologies of artificial intelligence (AI)

“Artificial intelligence is the new electricity.”
Andrew Ng

Let’s start this discussion about AI with an analogy (somewhat prosaic perhaps) that should set the tone for the Chapter. Most everyone is somewhat familiar with the historic Model T Ford automobile and the contemporary and arguably disruptive Tesla automobile of the 21st century. Both are automobiles. Both have 4 tires, a steering wheel, brakes, axles, internal combustion engines (oops, make that just the Model T), and so on. Despite the numerous features they share in common, nobody would deny that the dramatic differences beyond their generic similarities make them uniquely different technologies. So too is the case to be made between the generic (basic) computer technology used today (discussed in Chapter 2) and the AI computer “disruptive” technology to be addressed in this Chapter.

Notwithstanding unique differences, the information in Chapter 2 (“The basic computer”) is essential because there are some common denominators between basic computing and AI computing that must be understood. Most relevant among them include the 3 fundamental categories of the input layer, the inner (hidden) layer, and output layer. The devices associated with these layers remain similar between basic computing and AI computing. AI, however, introduces advances in some, expansion of others, and an array of new, additional hardware and software devices from natural language processing (NLP) to neural networks to AI robotics. The functions and goals of the input layer and output layer remain relatively similar in both computing technologies (with some additions in the output layer). Still, the inner (hidden) layer between basic and AI computing presents substantial differences in structure (hardware) and function (software). Those significant modifications in the inner (hidden) layer effectively render AI a unique technology. Please also note that through this discussion on AI computing, the color-coding introduced in Chapter 2 (Table 2–1) will continue to be used throughout this Chapter (3) as well.

3.1 The theory and science of artificial intelligence (AI)

The definition of AI presented in this book’s introduction stated: “(AI) can be defined simply as a branch of computer science dealing with the simulation of intelligent behavior in


computers; the capability of a machine to imitate intelligent human behavior” [1]. The breadth and depth of the words in this simple definition bespeak the magnitude of the science of AI. For “. . . a machine to imitate intelligent human behavior” is to effectively “mimic” the functions of the human brain or restated, in biologic terms, to mimic neuroscience. These neurological functions are related to the control centers, progressive cortical layers, and the neural networking of the human brain. Thus, AI must simulate the structures of these layers and neural centers and their function of neural networking [2]. That neural networking is precisely what the science of AI attempts to do. It means that at the very least, AI computing must simulate the neurologic (brain) functions responsible for (but not limited to):

• Speech recognition and natural language processing;
• Visual perception and pattern recognition;
• Analysis and planning;
• Problem-solving and decision-making;
• Learning, reasoning, and remembering.

In Chapter 1 (“The evolution of artificial intelligence”), a discussion of human intelligence resolved a set of categories and subcategories that define the features of human intelligence (see Chapter 1, Table 1–1). Each of these features must be assimilated and demonstrated in an electronic system to accurately call it “artificial intelligence.” This goal is attained in AI through multiple algorithms achieving higher levels of learning, reasoning, problem-solving, and communicating. These levels are classified into 3 subcategories of AI that include: “machine learning,” “neural networking” and “deep learning.” So schematically, the broadest classification is AI, and the associated subcategories of AI are machine learning, neural networking and deep learning (Fig. 3–1). These 3 subcategories of AI introduce unique aspects to the computing process that must be achieved electronically to reproduce the qualities of human intelligence (enumerated in Table 1–1) produced by the human brain. To accomplish this overwhelming task, the science of AI uses “algorithms” (defined previously in Chapter 2, “Basic Computing”) to simulate the progressive layers of neuronal functions and neural networking in the human brain.

The best way to “get one’s head around” the extraordinarily complex discussion of the neuroscience model of AI is to present a basic (illustrated and color-coded) model. This model can show how the AI process simulates the neurobiology of the human brain in the AI subcategories of machine learning and deep learning (from Fig. 3–1). Such an AI neural network model incorporates an array of analogous, compelling hardware components and highly sophisticated software (algorithm) programs. A preliminary discussion of these critical elements allows a better understanding of how AI utilizes the human neural model in what is referred to in AI as the “Artificial Neural Network (ANN)” [3]. With this understanding, an analysis of each step of the machine learning, neural networking, and deep learning AI subcategories, with their relevant software and supporting hardware, can then be presented in conjunction with their neurobiological analogs. These correlations help to make the complex science of AI more understandable (I hope!).


FIGURE 3–1 Artificial intelligence schematic. The broadest classification is AI, with machine learning (ML) as its subcategory; artificial neural networks (ANN) and deep learning (DL) are, in turn, subsets of machine learning.

3.2 Artificial neural network (ANN) model of artificial intelligence (AI)

The fundamental neuroanatomical component of the brain that dictates neural functioning is appropriately called “the neuron” (Fig. 3–2). Scientists estimate that there are approximately 100 billion neurons in the human brain [4]. It is the basic unit of the central nervous system responsible for the transmission of nerve impulses through several threadlike “arms” called axons and dendrites. The nerve impulses travel down axons reaching junctions called synapses, where neurotransmitter chemicals are released across the synaptic cleft activating

FIGURE 3–2 The neuron. The neuron is the basic unit of the human neural network that receives and sends trillions of electrical signals throughout the brain and body. It consists of a cell body, an axon, and dendrites.


FIGURE 3–3 Neurotransmitter chemicals across synaptic cleft. Chemical messenger transmits signals across a synapse (synaptic cleft) such as a neuromuscular junction from one neuron (nerve cell) to another “target” neuron.

other neurons (Fig. 3–3). All of this activity can be reduced to a mathematical model [5,6] (Fig. 3–4). This vast network of one hundred billion interconnecting neurons creates a complex in the human brain called the “neural network” (Fig. 3–5). This network has the potential of producing one hundred trillion neural connections [7] between input data (information to the brain) and the inner (hidden) layers, resulting in output (human intelligence) (Fig. 3–6). Consistent with the neuroscience model, Fig. 3–6 can be restated (re-illustrated) with the inner (hidden) layer nodes distributed through the cerebral cortex as progressive cortical centers or layers (Fig. 3–7). These distributed cortical layers are called the “deep neural network” [8]. This deep neural network can also be graphically represented (Fig. 3–8) in a linear dimensional, multilayered distribution of nodes demonstrating the hundred trillion deep neural network connections that create the “convolutional neural network (CNN)” [9]. This neural process is analogous to the deep learning algorithms in AI discussed below.

The subcortical limbic system (hippocampus, amygdala, and thalamus) in the midbrain serves as relay stations between input layer data and the inner (hidden) layers of the deep neural network (Fig. 3–9). The cerebral functions of each of these brain nuclei have direct


FIGURE 3–4 Mathematical model of neuron. Schematic diagram of the mathematical model of a neuron, where input “weights” (w) are activated and summed (Σ) by the cell body, producing an output (similar to the computer model).
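A minimal sketch of the weighted-sum neuron model in Fig. 3–4: inputs are multiplied by weights, summed, and passed through an activation function to produce an output. The weights and the choice of a simple step activation are illustrative only and do not come from the figure itself.

```python
def artificial_neuron(inputs, weights, bias=0.0):
    """Weighted sum of inputs (the Σ in Fig. 3-4) followed by a simple activation."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 if weighted_sum > 0 else 0.0   # step activation: "fire" or not

# Example: two input signals with illustrative weights.
print(artificial_neuron(inputs=[0.8, 0.3], weights=[0.5, -0.6]))   # 1.0
```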

FIGURE 3–5 Schematic of the cortical neural network (CNN). A vast network of one hundred billion interconnecting neurons creates a complex in the human brain called the “cortical neural network.” This network has the potential of producing one hundred trillion neural connections.

implications on the AI process identified in the AI software discussion that follows. Signals from these limbic cortical centers are transmitted to the higher neural layers for cognitive interpretations, at which point they are “cataloged” as memory in the limbic system as well as in the corresponding progressive cortical layers. Then, they are subsequently generated as output in some aspect of human intelligence (Fig. 3–10) [10].


FIGURE 3–6 Neural network with 3 computer layers. This model demonstrates input nodes (red) to inner (hidden) layer nodes (blue), in relationship to the cerebral cortex, and output (green).

FIGURE 3–7 Deep neural network. The inner (hidden) layer nodes from Fig. 3–6 are distributed throughout the brain as progressive cortical centers or layers (blue) to create the deep neural network (DNN).

Similar to the basic computer model from Chapter 2 (Layers of basic computers), each input data point (or node) in an ANN undergoes the “compilation process” (Chapter 2, page 16) as it connects with each analogous inner (hidden) layer (algorithms, servers, and databases). It is at this point in the process that AI computing begins to diverge from the basic computing process.


FIGURE 3–8 Convolutional neural network (CNN). The deep neural network (DNN) is graphically represented in a linear dimensional, multilayered distribution of nodes (blue) demonstrating the hundred trillion deep neural network connections that create the “convolutional neural network (CNN).” This neural process is analogous to the deep learning algorithms in AI.

FIGURE 3–9 The limbic system (hippocampus, amygdala, thalamus). The subcortical limbic system (hippocampus, amygdala, and thalamus) in the human midbrain serves as relay stations between input layer data (red) and the inner (hidden) layers (blue nodes) of the deep neural network (DNN). The cerebral functions of each of these brain nuclei have direct analogous implications in the AI process.


FIGURE 3–10 Neural layer transmissions from limbic system to higher cortical centers. Signals from the limbic system are transmitted to the higher cortical neural centers (blue arrows, #3) for cognitive interpretations at which point they are “cataloged” as memory in the limbic system and corresponding progressive cortical layers. Finally, the impulses are generated as output (green, #4) representing some aspect of human intelligence.

Here the mathematical modeling of each neuron (from Fig. 3–3) begins to undergo computations and analysis by AI algorithms. These numerous algorithms (Table 3–1 [11–13]) are classified (from Fig. 3–1) as machine learning, neural networking and deep learning. Machine learning is the critical learning process in AI (analogous to the neural process in Fig. 3–6). In contrast, neural networking and deep learning (comparable to the CNN process in Fig. 3–8) are branches of machine learning with some more highly sophisticated, unique characteristics.

3.3 AI software (algorithms)

As defined in Chapter 2, algorithms are procedures or formulae for solving a mathematical problem, based on conducting a sequence of specified actions or steps that frequently involves repetition of an operation. The fundamental framework of all AI computing is mathematics-based, including linear algebra, multivariable calculus, Bayesian logic, statistics, and probabilities, all of which algorithms utilize extensively [14]. The algorithm is the computational process using the language of mathematics. When applied in computer science, it expresses, in words, maximal, practical, efficient solutions to mathematical problems and questions. From the brief neuroscience outline of the human neural network described above, some algorithms can now be identified as specific to the ANN, some as specific to the CNN, and some as common to both (Table 3–1). As previously described in Chapter 2, APIs (Application Programming Interfaces) are software intermediaries (i.e., “software to software” programs) whose functions are to


Table 3–1 Main algorithms used in artificial intelligence.

Algorithms and brief descriptions:
• Artificial neural network (ANN): Nonlinear statistical data modeling tools where the complex relationships between inputs and outputs are modeled or patterns are found.
• Application programming interface (API): Software intermediary whose function is to specify how software components should interact or “integrate” with databases and other programs.
• Regression analysis: Statistical methods that examine the relationship and influence between one or more independent variables on a dependent variable (y = f[x]).
  – Linear: Approach to modeling the relationship between a scalar response (or dependent variable) and one or more explanatory variables (or independent variables).
  – Polynomial: Relationship between the independent variable x and the dependent variable y is modeled as an nth degree polynomial in x.
  – Logistic: Explains the relationship between one dependent binary variable and one or more nominal, ordinal, interval or ratio-level independent variables.
• Support vector machine (SVM): Analyzes data used for classification and regression analysis.
• k-Nearest neighbors: A plurality vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small).
• Random forest: Additive model that makes predictions by combining decisions from a sequence of base models.
• Decision trees: Creates a training model that can be used to predict class or value of target variables by learning decision rules inferred from prior data (training data).
• Naive Bayes: Provides a statistical method to analyze the ideas and possibilities of user input to update their probability as more evidence or information becomes available at each neural layer.
• Inference engine (If/Then deduction): Applies logical rules to the knowledge base to deduce new information. Components of expert systems.
• Natural language generation (NLG): Focuses on generating natural language from structured data such as knowledge base or logical form (linguistics).
• Natural language processing (NLP): Concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data.

Applications in deep learning only:
• Convolutional neural network (ConvNet/CNN): A class of deep neural networks, most commonly applied to analyzing visual imagery.
• k-Means clustering: Identifies k number of centroids, and then allocates every data point to the nearest cluster, while keeping the centroids as small as possible.
• Association rule: If-then statements that help to show the probability of relationships between data items within large data sets in various types of databases.
• Hierarchical clustering: A technique which groups the similar data points such that the points in the same group are more similar to each other than the points in the other groups.
• Hidden Markov models (HMM): A class of probabilistic graphical model that predicts a sequence of unknown (hidden) variables from a set of observed variables.
• Markov decision processes (MDPs): A mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker.
• Q-learning: Uses Q-values (“action values”) to iteratively improve the behavior of the learning agent.
• SARSA (State-Action-Reward-State-Action): In the current state S, an action A is taken, the agent gets a reward R, and ends up in the next state.
• Deep Q-Networks: Takes as input the state of the environment and outputs a Q value for each possible action.
• DDPG (deep deterministic policy gradient): Uses a stochastic (random probability distribution) behavior policy for good exploration but estimates a deterministic (determined by cause) target policy.


specify how software components should interact or “integrate” with databases and other programs [15]. These API software algorithms interpret user input (e.g., text, data, audio, graphics, natural language processing, video, GPUs [graphic processing units], knowledge engineers [experts]) to allow 1 computer to be used (“user input”) by other computer programs (databases). Thus, APIs enable the AI computer algorithms to analyze the main factor(s) of the user input that one is attempting to understand or predict [13].

The frameworks within which algorithms operate are the 3 subcategories of AI described previously (Fig. 3–1): machine learning, ANN (in our neuroscience analog) and deep learning (CNN in our neuroscience analog). The descriptions of their respective algorithms and associated hardware that follow offer a complete picture of the science of AI. And once again, as mentioned early in Chapter 2, the scope of this book (and ergo, of the author) limits in-depth descriptions to only brief samples of the complex mathematical equations, formulae, and technical jargon (i.e., no binary code) related to each algorithm. For those readers interested, detailed explanations and applications of each algorithm are available in multiple mathematical textbooks [17–19].

3.3.1 Machine learning

Machine learning is a framework (a platform for developing software applications [20]) that allows computers to learn directly from examples and experience in the form of data. Given a large amount of data to use as examples (input) to detect patterns and to determine how a task is achieved, the system “learns” how best to produce the desired output [21]. As with the human neural network (Fig. 3–7), in machine learning there are hundreds of layers, each with thousands of nodes, trained upon millions of data points [22].

There are 3 branches of machine learning: supervised (labeled data), unsupervised (unlabeled data), and reinforcement learning. Supervised (labeled data) learning is usually associated with machine learning, while unsupervised (unlabeled data) and reinforcement learning refer more to neural networks and deep learning. These distinctions are not absolute, in that neural networks and deep learning are subcategories of machine learning, yet still part of that category. Thus, certain forms of unsupervised and reinforcement learning are also considered a part of machine learning as well as of ANN and deep learning. So too, supervised learning, part of the machine learning framework, may also be regarded as deep learning in certain forms. Often in the literature, algorithms are classified as both machine and deep learning. Each branch of machine learning is driven by mathematical formulae (algorithms) that analyze data to produce answers, predictions, and insights from which the user can make decisions. Each branch conducts its analysis through different algorithms and data processing functions. A brief description of all 3 branches of machine learning, ANN and deep learning, with practical examples, helps to clarify their meaning [22].

3.3.1.1 Supervised (labeled) data

The majority of practical machine learning uses supervised learning [23]. It is a form of learning where the user has a dataset of “labeled” input variables (x) and an output


variable (Y). The algorithm learns the mapping function from the input to the output by “supervising” the process. It develops “training data” from the variables’ structure or pattern recognition (e.g., a dataset of apples, round and red, versus bananas, long and yellow). As it learns the correct answers, the algorithm iteratively makes predictions on the training data. The algorithm’s self-modifications are called “training.” This process of the ANN is analogous to the human brain’s neural progressive cortical layers’ memory functions. The machine (computer) uses its AI software to process keyboard, visual or natural language “processed” input (NLP); web browsers to communicate with databases; and regression analysis software to analyze, interpret and synthesize these layers of information, layer-by-layer. This is similar to the neural layers of the brain (see Fig. 3–7). From this process, the computer arrives at (“machine learns”) relationships and conclusions [24].

The goal in supervised learning is to approximate the mapping function so well that when you have new input data (x), you can predict the output variables (Y) for that data. Learning stops when the algorithm achieves an acceptable level of performance [25]. Supervised learning problems are grouped into regression and classification problems [26]. A classification problem is when the output variable (y) is a category, such as “disease” and “no disease.” A regression problem is when the output variable (y) is a real value, such as “dollars” or “weight.” The process is mathematically stated as y = f(x). All classification and regression algorithms come under supervised learning. They include (with limited thumbnail descriptions for each due to their mathematical complexities beyond the scope of this book; a short code sketch of these formulas follows this list):

• Regression analysis includes Linear; Logistic; Support vector machine (SVM); k-Nearest Neighbors; Random Forest; Polynomial. These powerful statistical methods examine the relationship and influence between 1 or more independent variables (x) on a dependent variable (y = f(x), as described above). The regression coefficient is mathematically stated as

\[ \beta = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n} (x_i - \bar{x})^2} \]

where \(\bar{x}\) and \(\bar{y}\) are the means of the independent variable x and the dependent variable y (the value we are going to predict), so that y is modeled as a linear function of x [27].

• Decision trees: This algorithm is used for solving regression and classification problems. It can create a training model for predicting the class or value of target variables by learning decision rules inferred from prior data (training data). Mathematically stated:

\[ H(T) = I_E(p_1, p_2, \ldots, p_J) = -\sum_{i=1}^{J} p_i \log_2 p_i \]

where \(p_1, p_2, \ldots, p_J\) are fractions that add up to 1 and represent the percentage of each class present in the node that results from a split in the tree [28].


• Naive Bayes: A straightforward and powerful algorithm for classification problems. Thomas Bayes, an 18th-century mathematician, developed a mathematical formula for determining conditional probability. AI Bayesian algorithms provide a statistical method to analyze the ideas and possibilities of user input to update their probability as more evidence or information becomes available at each neural layer. Mathematically stated:

\[ P(H \mid E) = \frac{P(E \mid H) \cdot P(H)}{P(E)} \]

where P(H) is the probability of hypothesis H being true (known as the prior probability); P(E) is the probability of the evidence (regardless of the hypothesis); P(E|H) is the probability of the evidence given that the hypothesis is true; and P(H|E) is the probability of the hypothesis given that the evidence is there [27].

3.3.2 Neural networking and deep learning

3.3.2.1 Unsupervised (unlabeled) data

Unsupervised learning and semi-supervised learning (limited labeled data, but a large dataset) are generally referred to as “deep learning,” or a subcategory of machine learning. This process utilizes an algorithm where only input data (x) is known, with no corresponding output variables (“unlabeled”). The goal is to group (“cluster”) the structure or distribution of the data to learn more about the data. This deep learning is called unsupervised learning because there are no correct answers. Algorithms are left to their own devices to discover and present the relevant structure in the data [9]. This deep learning process in the ANN maximizes the analogous deep neural network, the hundreds of millions of progressive cortical layer functions (Figs. 3–7 and 3–8) of the human brain. As mentioned in the previous neuroscience discussion, this process is called the convolutional neural network (CNN) [9].

The computer’s software interacts with specific knowledge database(s) (“unlabeled data”), which includes stored data from previously labeled experiences and other preexisting, directly and indirectly related, digitally stored knowledge bases on the worldwide web. This process is similar to the brain’s limbic system (amygdala and hippocampus) and higher cortical neural layers storing past quantified and qualified human experiences (personal or observed behavioral, educational, sensory, emotional). Like the human brain’s hippocampus (storage of long-term memory, including all past knowledge and experiences), the computer learns and stores new information into long-term memory. The stored limbic information from the supervised, labeled data is networked (“neural networking”) at the higher cortical layers with the unlabeled data. This aggregate of information is analyzed, integrated, and transmitted for interpretations, decision-making and interactions, and stored within the limbic system and related cortical centers in the brain. In the corresponding AI system, this information is stored by inference engine software for future deep learning experiences. The information storage allows for AI’s continued and expanding learning potential, similar to that of the human brain.


The unsupervised learning information (active and stored) is programmed into AI software, allowing the computer, using Bayesian deductive reasoning, to employ it in an active, collective, progressive analytical process that extracts synergies, improves performance(s), and continues learning [29]. When humans learn, they alter the way they relate to the world. When algorithms learn, they change the way they process the labeled data. They assess variables and pattern recognition and alter themselves as they are exposed to the data. Through an extensive iterative, analytical process, with minimal human intervention, the algorithm learns from previous computations to produce reliable, repeatable decisions and results [30].

Unsupervised learning problems are grouped into clustering and association problems [31]. A clustering problem is where you want to discover the inherent groupings in the data (e.g., grouping clinical conditions by their symptom pattern). An association rule learning problem is where you want to find rules that describe large portions of your data, such as patients who show symptom pattern x also tending to demonstrate y (thus, y = f(x) as described previously). Association rules work based on "if/then" statements in supervised and unsupervised learning [32]. These statements help to reveal associations between independent data in a database, relational database, or other information repositories. The main applications of association rules are in data analysis, classification, clustering, and many others [33].

All clustering algorithms come under unsupervised learning algorithms. They include (with limited thumbnail descriptions for each due to their mathematical complexities beyond the scope of this book) [30]:

• K-means clustering: The K-means algorithm identifies k centroids and then allocates every data point to the nearest cluster while keeping the centroids as small as possible. Mathematically stated: Given a training set x(1), …, x(m), we want to group the data into a few cohesive "clusters." Here, we are given feature vectors for each data point x(i) ∈ ℝⁿ as usual, but no labels y(i) (making this an unsupervised learning problem). The goal is to predict k centroids and a label c(i) for each data point. The k-means clustering algorithm is as follows [34] (a minimal code sketch appears after this list):

1. Initialize cluster centroids μ_1, μ_2, …, μ_k ∈ ℝⁿ randomly.
2. Repeat until convergence: {
   For every i, set c(i) := arg min_j ||x(i) − μ_j||².
   For each j, set μ_j := Σ_{i=1}^{m} 1{c(i) = j} x(i) / Σ_{i=1}^{m} 1{c(i) = j}.
   }

• Hierarchical clustering: A technique that groups similar data points such that the points in the same group are more similar to each other than the points in the


other groups. The group of related data points is called a cluster. Mathematically stated: Sim(C1, C2) = Min Sim(Pi, Pj) such that Pi ∈ C1 and Pj ∈ C2. For the single linkage algorithm (MIN), the similarity of 2 clusters C1 and C2 is defined as the minimum of the similarity between points Pi and Pj such that Pi belongs to C1 and Pj belongs to C2 [35].
• Hidden Markov models (HMM): A class of probabilistic graphical models that predicts a sequence of unknown (hidden) variables from a set of observed variables. The Markov process assumption is that the "future is independent of the past given that we know the present." A simple example of an HMM is predicting the clinical condition (hidden variable) based on the symptom pattern the patient demonstrates (observed). An HMM is viewed as a Bayes net unrolled through time, with observations made at a sequence of time steps being used to predict the best series of hidden states [36]. Mathematically stated: Consider the situation where you have no knowledge of the outcome when you are examining the patient. The only way for you to know what the outcome (diagnosis) might be is to recognize the symptom pattern during examination. Here, the evidence variable is the symptom pattern, while the hidden variable is the diagnosis. HMM representation:

P(R_0, R_1, …, R_t, U_1, …, U_t) = P(R_0) ∏_{i=1}^{t} P(R_i | R_{i−1}) P(U_i | R_i)
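As flagged in the K-means entry above, the following is a minimal Python sketch of the assignment/update loop. The tiny 2-D data points are invented for illustration; this is not code from this book and not a production clustering library.

```python
import random

def kmeans(points, k, iters=20):
    """Minimal k-means: assign each point to the nearest centroid,
    then recompute each centroid as the mean of its assigned points."""
    centroids = random.sample(points, k)              # random initialization from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in points:                              # assignment: c(i) = argmin_j ||x - mu_j||^2
            j = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(x, centroids[j])))
            clusters[j].append(x)
        for j, members in enumerate(clusters):        # update: mu_j = mean of assigned points
            if members:
                centroids[j] = tuple(sum(dim) / len(members) for dim in zip(*members))
    return centroids

# Hypothetical 2-D "symptom score" data, purely illustrative.
data = [(1, 2), (1, 1), (2, 2), (8, 9), (9, 8), (8, 8)]
print(kmeans(data, k=2))
```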

3.3.2.2 Reinforcement learning

Falling between supervised and unsupervised learning is reinforcement learning, which focuses on learning from experience. It enables a user to learn in an interactive environment by trial and error, using feedback from its actions and experiences, and provides a reward function that tries to optimize the experience. Supervised and reinforcement learning both use mapping between input and output. With supervised learning, the feedback provided to the user is the correct set of actions for performing a task. Reinforcement learning uses rewards and punishment as signals for positive and negative results. The goal of the agent is to learn the consequences of its decisions [37]. The AI software (as with the human cortical centers) tries to maximize the most significant benefit(s) it can receive when interacting with an uncertain environment [38]. It uses a process called "memory networking," which is artificial neural networking (ANN) using RAM (random access memory, analogous to the hippocampus) to differentiate and adjust the connections between unsupervised and supervised learned information [39]. This form of learning in the CNN process (along with supervised and unsupervised learning) accentuates the analogy to the human brain's limbic system. This highly sophisticated AI process mimics the plasticity of the human brain's limbic system and cortex. A comparison of the limbic centers of the brain and their AI computer software analogs in Table 3–2 [40–42] identifies the ways AI "mimics" human neurological functions. As compared to unsupervised learning, reinforcement learning is different in terms of goals. While the goal in unsupervised learning is to find similarities and differences between data points, in reinforcement learning the goal is to find a suitable action model that will maximize the total cumulative reward of the user.

Table 3–2 Limbic centers of the brain and AI computer functional analogs.

Neurological structure | AI computer functional analog
Progressive cortical layers | Supervised learning; Unsupervised learning; Reinforcement learning
Hippocampus | RAM; Memory networking; Plasticity
Amygdala | Reinforcement learning; Robotics
Thalamus | Artificial neural network; Unsupervised learning

Since reinforcement learning requires a lot of data, it is most applicable in domains where simulated data is readily available, like games. It is used widely in AI for playing computer games (e.g., AlphaGo Zero, the first computer program to defeat a world champion in the ancient Chinese game of Go [43]). Another example of reinforcement learning is Google DeepMind's work on deep reinforcement learning for robotic manipulation with asynchronous off-policy updates (more on robotics discussed below). But the simple game of Pac-Man is an ideal example of reinforcement learning. The goal for Pac-Man is to eat the food in the grid while avoiding the ghosts on its way. The grid world is an interactive environment. Pac-Man receives a reward for eating food and punishment if it gets killed by a ghost (and loses the game). To build an optimal policy, Pac-Man faces the dilemma of exploring new states while maximizing its reward at the same time. This is called the "exploration versus exploitation trade-off." The total cumulative reward is Pac-Man winning the game [44].

As with supervised, semi-supervised, and unsupervised learning, reinforcement learning has a set of unique mathematical algorithms. They include (with limited thumbnail descriptions for each due to their mathematical complexities beyond the scope of this book) [40]:

• Markov Decision Processes (MDPs): a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision-maker. The Markov property states that "the future is independent of the past given the present."
• Q-learning: uses Q-values ("action values") to improve the behavior of the learning agent iteratively (a short code sketch appears at the end of this section). Mathematically stated:

Q_new(s_t, a_t) ← (1 − α) · Q(s_t, a_t) + α · [r_t + γ · max_a Q(s_{t+1}, a)]

(the old value weighted by 1 − α, plus the learning rate α times the learned value: the reward r_t plus the discount factor γ times the estimate of the optimal future value)


where r_t is the reward received when moving from the state s_t to the state s_{t+1}, and α is the learning rate (0 < α ≤ 1) [45].
• SARSA (State-Action-Reward-State-Action): An algorithm for learning a Markov decision process policy, used in reinforcement learning. Mathematically stated:

Q(S_t, A_t) := Q(S_t, A_t) + α · [R_{t+1} + γ · Q(S_{t+1}, A_{t+1}) − Q(S_t, A_t)]

where, in the current state S, an action A is taken, the agent receives a reward R and ends up in the next state; the learning rate α determines to what extent the newly acquired information overrides the old information, and the discount factor γ determines the importance of future rewards [46].
• Deep Q-Networks (DQN): Take as input the state of the environment and output a Q value for each possible action. Deep Q-Networks combine deep learning and reinforcement learning to learn how to play video games at superhuman levels [47].

• DDPG (Deep Deterministic Policy Gradient): a policy gradient algorithm that uses a stochastic (random probability distribution) behavior policy for good exploration but estimates a deterministic (determined by cause) target policy.

Finally, algorithms apply Bayesian probability logic (statistical analysis of increasingly available information to confirm the probability of a hypothesis [48]) through "inference engine" software ("if/then" inference rule logic [49]) that can be applied to interpret reasonable expectations. Applications of each of these machine learning and deep learning processes are demonstrated directly in many of the topics addressed in the business, administrative, and clinical aspects of health care in Section 2 of this book. And finally, having drawn analogies between these AI algorithms and neuroscience, we can also categorize them in general terms against the human intelligence categories presented in Table 1–1 and compare them in Table 3–3 relative to their associated neurological analogs [50].

In summary, the universal AI process can be categorized into roughly 7 activities (Table 3–4), all distinct, yet all directly or indirectly interrelated. Together, they provide the matrix of electronic, mathematical logic, reasoning, and decision-making which constitutes the "art and science" of artificial intelligence.
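Returning to the Q-learning entry above, the following is a minimal tabular sketch of the update rule on a toy "corridor" environment. The environment, reward values, and hyperparameters are all invented for illustration; they are not the Pac-Man example described in the text and not a production reinforcement-learning library.

```python
import random
from collections import defaultdict

# Toy 1-D corridor: states 0..4, reward only on reaching state 4.
ACTIONS = [-1, +1]                # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3

Q = defaultdict(float)            # Q[(state, action)], defaults to 0.0

def step(state, action):
    nxt = max(0, min(4, state + action))
    reward = 1.0 if nxt == 4 else 0.0
    return nxt, reward, nxt == 4

for episode in range(300):
    s, done = 0, False
    while not done:
        # Epsilon-greedy exploration vs exploitation trade-off.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s2, r, done = step(s, a)
        # Q-learning update: Q(s,a) <- (1-alpha)Q(s,a) + alpha[r + gamma * max_a' Q(s',a')]
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] = (1 - ALPHA) * Q[(s, a)] + ALPHA * (r + GAMMA * best_next)
        s = s2

# The learned greedy policy should move right (+1) from every state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(4)})
```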


Table 3–3 Comparison of human intelligence categories (from Chapter 1, Table 1–1) and neurological analogs.

Intelligence category | Category features | Neurological analogs
Reasoning | Inductive; Deductive | Frontal lobe
Learning | Auditory; Experience; Motor; Observational; Memory; Perceptual; Pattern recognition | Frontal lobe; Temporal lobe; Limbic system
Perception | Sensory (a. Visual, b. Hearing, c. Cerebral); Relational; Spatial; Stimuli | Parietal lobe; Occipital lobe; Thalamus
Problem solving | Assessment; Decision-making; Solutions; Alternatives | Frontal lobe; Amygdala
Communication | To speak; and to recognize and understand spoken and written language | Broca's area; Wernicke's area

Table 3–4 Summary of AI categories.

Artificial intelligence (AI) | A broad definition which describes the ability of a machine to demonstrate intelligent behavior. AI algorithms exhibit either nonadaptive (e.g., rule-based) or adaptive intelligence (e.g., machine learning).
Machine learning (ML) | A type of AI that uses mathematical models to automatically map input to desired outputs in a way that does not rely on explicit rule-based programming.
Algorithm | The computational process formulated using the language of mathematics. When applied in AI, it expresses maximal practical, efficient solutions in words to mathematical problems and questions.
Supervised machine learning | An approach to training ML algorithms in which a model is provided input (e.g., digital images) that is classified with a label. For example, a model used to distinguish images of cats from dogs would be shown many images of cats, labeled as cats, and many images of dogs, labeled as dogs, a process denoted as training. Following training, a user could show this model an image of a cat or dog without a label, and the model on its own should be able to classify the image.
Unsupervised machine learning (deep learning or CNN) | An approach to training ML models in which the input data has not been labeled, and the algorithm identifies patterns or regularities in the data.
Semi-supervised machine learning (deep learning or CNN) | An approach to training ML models when the number of labeled input data is limited, but the available input dataset is large. A combination of supervised and unsupervised techniques uses both labeled and unlabeled input data, respectively.
Reinforcement learning (deep learning or CNN) | Falling between supervised and unsupervised learning, reinforcement learning focuses on learning from experience. It enables a user to learn in an interactive environment by trial and error using feedback from its own actions and experiences and gives a reward function that tries to optimize the experience.


Table 3–5 AI hardware.

• Computer servers (file, mail, print, web, game, application)
• Computer processing units (CPUs)
• Graphic processing units (GPUs)
• Accelerators
• Quantum processors using "qubits" (vs digital binary code)
• Neuromorphic chips ("self-learning" microchips)
• Application-specific integrated circuit (ASIC)
• Field-programmable gate array (FPGA) integrated circuit with hardware description language (HDL)

3.4 AI hardware

As described above in "AI Software (Algorithms)," the fundamental framework of all AI computing is mathematics-based, including linear algebra, multivariable calculus, Bayesian logic, statistics, and probabilities [14]. These mathematical sciences conduct incredible amounts of calculations, computations, and iterations (in the trillions) at almost instantaneous speeds [51]. The guiding principle and goal of AI hardware technology is to support this enormous volume of data processing and the calculations and computations the AI software algorithms must execute. The input, output, and storage device hardware listings (Tables 2–3, 2–4, and 2–5) from the "Basic Hardware" discussion in Chapter 2, plus the microprocessing devices (RAM and CPU), are all hardware used in AI computing. But the data processing and millisecond speeds associated with basic computer hardware (especially the CPU) are woefully inadequate to accommodate the volume and speed needed for AI's machine learning and deep learning data processing. As such, the most active area of development in the AI industry lies in the dramatic upsurge and advances in AI hardware (Table 3–5). Currently, numerous prominent companies are producing AI-specific hardware (Google, Microsoft, Intel, IBM, Amazon, Alibaba, Baidu, Nvidia) as well as many startups [2]. A review of each of the significant forms of current and evolving hardware provides an understanding of the physical resources that drive the AI computing process and software algorithms.

3.4.1 RAM (random access memory)

As described in Chapter 2, a RAM microchip is a high-speed type of computer memory device that temporarily stores all input information as well as the application software instructions the computer needs immediately and in the near future. RAM is read from any layer of the computer at almost the same speed. As a form of dynamic, short-term memory, it loses all of its memory when the computer shuts down [52]. Sometimes referred to as DRAM (dynamic random-access memory) or SDRAM (synchronous dynamic random-access memory), these terms are generally interchangeable. Another common term, especially in the video game space, is VRAM, or video RAM [52]. It is used to denote the memory available to a graphics chip and GPU (graphic processing unit) processors. As described below, GPUs function at rapid rates and thus need a memory chip that can accommodate their production.


RAM is available in chips with storage capacity as high as 256 GB, although 8–32 GB is more common [53]. RAM speeds range from 800 to 4200 MHz (1 megahertz equals 1 million commands sent to or received from the CPU and GPU per second, referred to as bandwidth). So, an example of a good system for gaming and computing might be 32 GB of RAM storage at a processing speed of 2400 MHz.

3.4.2 Computer servers (file, mail, print, web, game, apps)

In Chapter 2, computer servers were described as "the heart" of the computer. As computers themselves (hardware supporting software), they are designed to process requests and deliver data to another computer, a local network, or over the internet. Table 2–7 lists the multiple server types [50].

3.4.3 Central processing unit (CPU)

The Central Processing Unit (CPU), described in Chapter 2 as the "brain" of the computer, consists of 4 main parts:

1. It receives a set of instructions as digital code from RAM;
2. The control unit (CU) decodes and executes instructions;
3. The Arithmetic Logic Unit (ALU) performs calculations and logical operations from which it makes decisions; and
4. It sends output to the RAM drive and the hard drive for storage [53].

CPUs are built by placing billions of microscopic transistors onto a single computer chip [54]. Those transistors allow it to make the calculations it needs to run programs stored on the system's memory. Additional CPU chips can be added to a computer to increase operating speeds. The most common advancements in CPU technology have been in making transistors smaller and smaller. That has resulted in the improvement of CPU speed over the decades. A multi-core (small processors) CPU can have anywhere from 2 to greater than 32 core processors, which can execute multiple instructions simultaneously at speeds of 5 GHz or higher. (One hertz is the speed of 1 operation per second; 1 GHz equals 1 billion operations per second) [54]. The CPU is the analog to the human brain at large when using our neuroscience analogy to AI computing. It is the processor for all the operations the computer conducts. How your computer operates is based on mathematical operations, and the CPU controls all of them. Information sent from an input device is compiled (from Chapter 2, page 14) and then transferred to the CPU. The CPU's ALU is then responsible for all mathematical and logical operations. It processes instructions using 4 basic functions [55]:

1. Fetch: Each instruction is stored in memory and has its own address. The processor takes the address number from the program counter to track which instructions should execute next.
2. Decode: All programs executed are translated into Assembly instructions. Assembly code is decoded into binary instructions.


3. Execute: a. ALU calculations; b. move data from 1 memory location to another; or c. jump to a different address. 4. Store: Gives output data feedback to RAM.
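To make the fetch-decode-execute-store cycle concrete, here is a toy Python simulation of it. The miniature instruction set (LOAD, ADD, STORE, HALT) is invented purely for illustration and does not correspond to any real CPU architecture.

```python
# Toy illustration of the fetch-decode-execute-store cycle; not a real ISA.
memory = {"x": 2, "y": 3, "out": 0}
program = [("LOAD", "x"), ("ADD", "y"), ("STORE", "out"), ("HALT", None)]

acc = 0          # accumulator: stands in for a CPU register
pc = 0           # program counter

while True:
    op, arg = program[pc]        # fetch: read the instruction at the program counter
    pc += 1
    if op == "LOAD":             # decode + execute
        acc = memory[arg]
    elif op == "ADD":
        acc += memory[arg]       # the ALU's arithmetic step
    elif op == "STORE":
        memory[arg] = acc        # store: write the result back to memory
    elif op == "HALT":
        break

print(memory["out"])  # 5
```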

3.4.4 Graphic processing unit (GPU)

If CPUs are the "brains" of computing, GPUs are the "brawn." A CPU can do anything a computer requires, whereas a GPU is a specialized microprocessor optimized for displaying graphics and doing very specific computer tasks. CPUs and GPUs are both made from hundreds of millions of transistors and can process thousands of operations per second. The difference between them is that a CPU uses between 1 and 4 processing cores clocked anywhere from 1 to 4 GHz. GPUs process tasks in different ways and are best at focusing all their computing abilities on a specific task. The GPU uses thousands of smaller and more efficient cores than the CPU and can handle many streams of parallel data at the same time. An analogy might be moving a large number of boxes from one point to another: a sports car would be fast on each trip but inefficient for the whole job, whereas a large truck, albeit slower per trip, would complete the overall task considerably faster. GPUs are 50–100 times faster in tasks that require multiple parallel processes, such as computer graphics and gaming (for which they were initially developed). But their most significant value lies in their ability to conduct massive loads of iterative computations in machine learning, deep learning, and big data analytics. GPUs process data at rates in the billions of calculations per second, whereas central processing units (CPUs) compute at only millions of processes per second [56]. The most effective data sources utilized in machine learning are graphics (video and images). Thus, the development of the GPU by the AI technology company Nvidia in 1999 is responsible for the breakthrough advancement in AI computing [57]. Then, in 2006, "GPU accelerators" were introduced, supercharging AI computing's capabilities and growth [58]. The microprocessor industry continues to grow with increasingly powerful chip technology and combined technologies. The APU (Accelerated Processing Unit) chip combines the best features of gaming graphic cards and processors [59]. Powerful cloud-based processors (e.g., the TPU, or Tensor Processing Unit, by Google) are now available to researchers and developers (and beta downloads) in need of high-speed machine learning and training of AI models [60].

3.4.5 Accelerators [61]

An AI accelerator is a microchip designed to enable faster processing of AI tasks. Like other dedicated-use accelerators, such as graphics processing units (GPUs), auxiliary power units (APUs), and power processing units (PPUs), AI accelerators are designed to perform their tasks in a way impossible for traditional CPUs to achieve. A purpose-made accelerator delivers greater performance, more features, and greater power efficiency to facilitate its given task.


An example of the value of AI accelerators is demonstrated in Google DeepMind’s AlphaGo project, where the number of possible piece positions in the game made processing the task impossible with a “brute force” approach. Accelerators focus on multicore, simple AI arithmetic functions done in mass quantities and computed through specialized algorithms. The number of such functions required for a task would render traditional computing approaches impossible.

3.4.6 Quantum processors using "qubits" (vs digital binary code)

Quantum computing is an industry goal (i.e., for IBM, Microsoft, Google, Intel, and other tech heavyweights) over the next 5–10 years. Quantum theory is the branch of physics that deals with the world of atoms and the smaller (subatomic) particles inside them. Transistors, the basic unit of computers, are now approaching the point where they'll soon be as small as individual atoms. At this level, wave-particle duality exists, wherein things can exist simultaneously in 2 states of matter and energy (e.g., Schrödinger's cat). In the 1970s and 80s, physicists (Landauer, Bennett, Benioff, Feynman, and Deutsch) outlined the theoretical basis of a quantum computer. When the duality theory is applied to computing, instead of binary bits of 0s and 1s, a quantum computer has quantum bits or "qubits" that can store both 0s and 1s simultaneously, or an infinite number of values in between, in multiple states (i.e., store multiple values) "at the same time!" This ability to hold multiple values at once is called "superposition." A quantum computer can store multiple numbers at once, and so too can process them simultaneously. Thus, instead of working in serial (doing a series of things one at a time in a sequence), it can work in parallel (doing multiple things at the same time). Finally, upon a command, the dual state "collapses" into one of its states and gives the answer to your question. This process of parallel computing would make computing millions of times faster than conventional computing [62]. Intel has already developed a quantum-computing simulator. The next step is to make the qubits. It takes something like 5 trillion transistors to simulate 42 qubits. It likely requires 1 million or more qubits to achieve commercial relevance. But starting with a simulator, you can build the underlying architecture, compilers, and algorithms. Until there are physical systems with a few hundred to a thousand qubits, it's unclear exactly what types of software or applications a quantum computer will be able to run. But let's remember this: the first transistor was introduced in 1947. The first integrated circuit followed in 1958. Intel's first microprocessor, which had only about 2500 transistors, didn't arrive until 1971. Each of those milestones was more than a decade apart. If, 10 years from now, there are quantum computers that have a few thousand qubits, it would undoubtedly change the world in the same way the first microprocessor did [63].
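A simple pen-and-paper way to see what "superposition" means is to simulate a single qubit with a 2-element vector. The short numpy sketch below, offered only as an illustration of the idea and not as quantum-computing software, applies a Hadamard gate to the |0⟩ state, producing an equal superposition in which measurement yields 0 or 1 with probability 0.5 each.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])                      # the qubit starts in state |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate

state = H @ ket0                                 # equal superposition of |0> and |1>
probabilities = np.abs(state) ** 2               # Born rule: measurement probabilities

print(state)          # [0.7071... 0.7071...]
print(probabilities)  # [0.5 0.5]
```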

3.4.7 Neuromorphic chips ("self-learning" microchips)

Neuromorphic computing includes the production and use of neural networks that mirror how the brain performs its functions, not just reaching decisions, but memorizing


information and even deducing facts. The engineering of a neuromorphic device involves the development of components, hardware, and software, whose functions are analogous to parts of the brain and what such parts are believed to do. This concept represents a comprehensive analogy of the input, compilation, and inner (hidden) layers that constitute the ANN discussed previously. The goal is an entirely new class of computers capable of being “trained” (machine learning) to recognize patterns using far fewer inputs (e.g., big data and blockchain discussed below) than a digital neural network would require [64]. Neuromorphic chips do all the processing and functioning without having to send messages back and forth to the cloud. They function similarly to the human brain, conserving energy by only operating when needed. In recent years there has been more emphasis on developing software than hardware. But Prof. Irving Wladawsky-Berger, a Research Affiliate at MIT Sloan School of Management, says, “Neuromorphic computing and chips bring the much-needed evolution in computer hardware,” [65].

3.4.8 Application specific integrated circuit (ASIC)

This semiconductor microchip is designed to be customized for specialized use as opposed to a generic processor. ASIC chips are part of any product that requires specific features that cannot be achieved by off-the-shelf ICs. Examples of ASICs include chips that are designed to run particular devices such as hand-held devices (e.g., cell phones). They are used primarily to add specific features to a product to gain a competitive edge. As such, many companies are using these customized chips for their products [66]. The ASIC, however, can be used with or replaced by the FPGA.

3.4.9 Field-programmable gate array (FPGA) integrated circuit with hardware description language (HDL)

FPGAs are hardware circuits that a user can program to carry out 1 or more logical operations. They are integrated circuits that are sets of circuits on a chip (an "array"). Those circuits, or arrays, are groups of programmable logic gates, memory, or other elements. With these arrays, a user can write software that loads onto a chip and executes functions. That software can later be replaced or deleted, but the hardware chip remains unchanged. FPGAs are useful for prototyping application-specific integrated circuits (ASICs) or processors. The FPGA is reprogrammed until the ASIC or processor design is bug-free, and the actual manufacturing of the final ASIC can then begin. Intel uses this FPGA method to prototype new ASIC chips. In some cases, high-performance FPGAs outperform GPUs used to accelerate inference processing in analyzing large amounts of data for machine learning [67]. Hardware Description Language (HDL) is specialized software that can be used to describe and program FPGA digital circuits in a textual manner. Two types of HDL are Verilog and VHDL (VHSIC Hardware Description Language), both used to describe the structure and behavior of electronic circuits and, most commonly, FPGA digital logic circuits. HDL enables a precise, formal description of an electronic circuit that allows for the automated analysis and simulation of the circuit [68].

Table 3–6 Natural language processing (NLP) and natural language generation (NLG).

Type of NLP | Description/example | Simple terms
Speech recognition | Software that understands or transcribes spoken language (e.g., Dragon Naturally Speaking) | Speech-to-text
Natural language understanding (NLU) | Software that extracts information from written text (e.g., portions of IBM Watson) | Text mining, text analytics
Natural language generation (NLG) | Software that produces narratives and reports, in easy-to-read language (e.g., Arria's NLG Platform) | Data in, language out
Speech synthesis | Software that speaks or reads out text (e.g., CereProc, CereVoice) | Text-to-speech

3.5 Specialized AI systems

3.5.1 Natural language processing (NLP)

Among the technologies of AI that make it truly more user-friendly and are having a profound effect on practical AI applications (in Section 2) are Natural Language Processing (NLP) and its associated Natural Language Generation (NLG) (discussed below and in Table 3–6). NLP is a specialized software application using machine learning (ANN) and computational linguistics, enabling computers to understand and process human languages and to get computers closer to a human-level understanding of language. Recent advances in machine learning and ANNs have allowed computers to do quite a lot of useful things with natural language. Deep Learning (CNN) has also enabled the development of programs to perform things like language translation, semantic understanding, text summarization, and chatbots [69]. The general NLP process includes the following steps:

1. A user (human) talks to the machine (this input can be audio, text, or video);
2. The machine captures the audio;
3. NLP algorithms convert the audio to binary code;
4. The code data is processed (the compilation process; see Chapter 2, page 16);
5. Algorithms convert the data back to audio (this is part of the NLG process);
6. The machine responds to the human by outputting the data as audio or text.

The NLP algorithms apply language-specific syntactic and semantic rules to the input source and convert it to computer code. Syntactic analysis assesses how the natural language input aligns with grammatical rules to derive meaning from it. Semantic analysis must determine the meaning conveyed by a text by interpreting the words and how the sentences are structured. Here, NLP also uses NLG algorithms to access databases to derive semantic intentions and convert them into human language output (Fig. 3–11). This complex, subjective process is one of the problematic aspects of NLP that is being refined. This challenging process is referred to as "natural language understanding (NLU)" and differentiates NLP from basic computing speech recognition (see Chapter 2, page 19) [70].
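To give a flavor of the text-in, language-out loop described above, here is a drastically simplified, rule-based Python stand-in. The keyword table, intents, and replies are invented for illustration; real NLP/NLG systems rely on machine learning rather than a lookup like this.

```python
# Toy illustration of the NLP -> NLG loop; not a real NLP system.
INTENTS = {
    "appointment": "I can help you schedule an appointment.",
    "refill": "I can send a refill request to your pharmacy.",
}

def respond(utterance: str) -> str:
    tokens = utterance.lower().replace("?", "").split()   # crude "syntactic" step
    for keyword, reply in INTENTS.items():                 # crude "semantic" matching
        if keyword in tokens:
            return reply                                   # "NLG": canned natural-language output
    return "Sorry, I did not understand that."

print(respond("Can I get a refill for my prescription?"))
```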


FIGURE 3–11 Natural language processing (NLP) and natural language generation (NLG). Once NLP unlocks the verbal context (red) and translates it into human language (blue), NLG takes the output and analyzes the text in context (blue) and produces audio or text output (green).

Besides its powerful ability to integrate with big data, NLP provides some of the most common AI applications we all use, including:

• Language translation applications (e.g., Google Translate);
• Word processors (e.g., Microsoft Word, Grammarly) used to check the grammatical accuracy of texts;
• Interactive Voice Response (IVR) applications used in call centers to respond to specific users' requests;
• Personal assistant applications and chatbots (e.g., Google, Siri, Cortana, Amazon's Echo, and Alexa).

3.5.2 Natural language generation (NLG)

Natural Language Generation (NLG) is the AI technology that analyzes, interprets, and organizes data into plain, written text or audio output (Fig. 3–11). It converts data into natural-sounding language, the way it is spoken or written by a human. Once NLP unlocks the context hidden in data and translates it into human language, NLG takes the output and analyzes the text in context. You can think of NLG and NLP as engaged in a joint endeavor to provide ready-made conversation [71]. NLG aids the machine in sorting through many variables and putting "text into context," thus delivering natural-sounding sentences and paragraphs that observe the rules of English grammar. While AI machine learning can memorize all the words and grammatical rules in individual languages (most of which keep evolving), there is still a multitude of other things it must factor in when attempting to produce natural language. Thus, as stated for NLP,


FIGURE 3–12 Expert system. The basic structure of an expert system consists of a human expert (e.g., doctor) and knowledge engineer (e.g., related expert) as input (red); a knowledge base (related database[s]), inference engine (AI algorithm), explainable AI (AI algorithm) and user interface (NLP audio or text) as inner (hidden) layers (blue); and the user (from input, i.e., the doctor) as output recipient (green).

NLG and NLU, although already of significant benefit in AI, continue to be refined and improved from their current state [72].

3.5.3 Expert systems

One of the most significant applications of AI in the clinical aspects of health care delivery is the domain of "expert systems." These are AI computer programs utilizing the deep learning process to analyze stored knowledge base(s) to deduce and provide options, alternatives, suggestions, and advice to health care providers through "if/then" rules, inference reasoning, and forward and backward chaining applied to a question, problem, or strategy. This human interface activity is communicated "provider to computer" through NLP processing and "computer to provider" through NLG [73]. It is valuable to dive a little more deeply into expert systems, as well as into the following 2 items (Big Data Analytics and Blockchain), than we have into some other specialized systems. The applications of these 3 AI systems are of enormous importance in the business and administration of health care as well as in its delivery. You'll be able to reference back to this section for review when we discuss them in Section 2, but an initial foundation in their structure and functions is worthwhile. (Additional discussion on Expert Systems related directly to health care is also found in Chapter 5, page 187.)

The basic structure of an expert system consists of the following parts (Fig. 3–12) [74]:

• The knowledge base is used to store expert system expertise, including facts and rules. In the process of building a knowledge base, the knowledge base should be able to acquire new knowledge, expressing and storing knowledge in a way that the computer can accomplish.
• The working memory is responsible for storing the input facts;
• The reasoning machine matches the facts in the working memory with the knowledge and derives new information. The intermediate information obtained during processing is also stored in a storage unit;


FIGURE 3–13 Forward and backward chaining. In forward chaining the inference engine follows the chain of conditions and derivations and finally deduces the outcome. In Backward Chaining, based on what has already happened, the inference engine tries to find conditions that could have occurred in the past for this result. Reproduced with permission of Data Flair, 2020.

• The interpreter is responsible for interpreting the results of the inference engine output, including explaining the correctness of and reason for the conclusion; and
• Finally, the human-computer interaction interface responds to the input user.

The "Rule-Based Expert System," the most common form of an expert system, starts with human experts working with "knowledge engineers" to create a "knowledge database." This database stores both factual, exact information on the given subject matter as well as heuristic (trial and error, intuitive, or "rule of thumb") knowledge. The knowledge engineer then categorizes, organizes, and stores the information in the form of IF-THEN-ELSE rules, to be used by the "inference engine" (an algorithm). A potentially more powerful expert system can provide knowledge in a neural network (ANN). The weakness of such a deep learning approach is that the ANN is limited by its "training set" of stored knowledge and its inability to provide reasoning in an "explanation facility" or "explainable AI (XAI)" (see discussion in Chapter 1, page 10). The inference engine manipulates the knowledge database to arrive at a solution through forward and backward chaining. Forward chaining answers the question, "What can happen next?" Here, the inference engine follows the chain of conditions and derivations and finally deduces the outcome. In backward chaining, the expert system finds the answer to the question, "Why did this happen?" Based on what has already happened, the inference engine tries to find conditions that could have occurred in the past for this result (Fig. 3–13) [74]. One other interesting related system is called the Fuzzy Logic-Based Expert System, which refers to the indiscriminate nature of real things, with a series of transitional states between them and without clear dividing lines. This occurrence is common in medical diagnosis, versus Boolean logic wherein things are entirely true (having a degree of truth of 1.0) or completely false (having a degree of truth of 0.0).


Fuzzy logic uses reasoning about inherently vague concepts, such as subjective or qualitative descriptions of disease (e.g., a complicated medical condition) [75]. Finally, the "user interface" provides a response to the user (usually not the original expert) through NLG or screen text. The responses in expert systems used in health care generally include an "explanation facility" (XAI) to justify the logic used in diagnostic decision-making. Of course, the value of medical expert systems is directly proportional to the quality of the knowledge database (created by "human experts" and thus carrying the risk of "garbage in, garbage out"). Only an XAI can objectively defend the information provided through the user interface. This approach is required in expert systems being used in health care today. Significant progress is being made in the development of "explainable AI (XAI)" algorithms for all forms of AI applications that address this "black box" issue, or "how was the outcome determined?" [76].
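A minimal Python sketch of the forward-chaining idea described above is shown below. The IF-THEN rules and facts are invented for illustration only; they are not clinical guidance and not how any particular commercial expert system is implemented.

```python
# Toy forward-chaining inference engine: fire IF-THEN rules against known
# facts until no new conclusion can be derived ("What can happen next?").
rules = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "shortness_of_breath"}, "refer_for_chest_imaging"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                       # keep iterating until nothing new is derived
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)    # the rule "fires" and adds a new fact
                changed = True
    return facts

print(forward_chain({"fever", "cough", "shortness_of_breath"}, rules))
```

Backward chaining would run the same rules in reverse, starting from a goal conclusion and searching for the conditions that could have produced it.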

3.5.4 “Internet of things” (IoT) Beyond the universal Internet network is an additional “hybrid” system known as the “Internet of Things” (IoT), which is a system of interrelated computing devices, mechanical and digital machines, objects, even people and animals. These objects are provided with unique identifiers (UIDs) and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction [77]. All categories of industries (particularly health care, as discussed frequently in Section 2) are using IoT to operate more efficiently, better understand customers to deliver enhanced customer (patient) service, improve decision-making, and increase the value of the business (clinical care). A “thing” in the Internet of Things (IoT) can be a person with a heart monitor implant, an automobile that has built-in sensors to alert the driver when tire pressure is low or any other natural or human-made object that can be assigned an IP address and can transfer data over a network. The next level of IoT is a sensor network of billions of smart devices (e.g., smartphones) that connect people, systems, and other applications to collect and share data. In healthcare, IoT offers many benefits, including the ability to monitor patients more closely to use the data that’s generated and analyze it. Hospitals often use IoT systems to complete tasks such as inventory management for both pharmaceuticals and medical instruments. There is much more information and applications of IoT in Section 2 of this book.
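As a small illustration of what an IoT "thing" actually transmits, the sketch below packages a simulated heart-rate reading with a unique device identifier and timestamp. The device, field names, and values are invented for illustration; a real medical IoT device would use a secure, standardized transport and data format.

```python
import json
import time
import uuid

# Toy "thing": a simulated heart-rate monitor with a unique identifier (UID).
DEVICE_ID = str(uuid.uuid4())

def make_reading(heart_rate_bpm: int) -> str:
    payload = {
        "device_id": DEVICE_ID,
        "timestamp": time.time(),
        "heart_rate_bpm": heart_rate_bpm,
    }
    return json.dumps(payload)   # what would be sent to an IoT gateway or cloud service

print(make_reading(72))
```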

3.5.5 Cyber-physical system (CPS)

An addition to, or perhaps more appropriately an enhancement of, the IoT is a technology called the Cyber-physical System (CPS) [78]. CPS is the system of collaborating computer entities that are in comprehensive and accelerated connection with the surrounding physical world and its ongoing processes. Furthermore, these systems provide and use data-accessing and data-processing services available on the Internet to achieve the ends mentioned above. CPS integrates with IoT (and cloud computing, plus an array of other technologies) to create what is now called "Industry 4.0" [79]. During the First Industrial Revolution,


manufacturing production facilities were developed with the help of water and steam power. During the Second Industrial Revolution, mass production was realized with the help of electrical energy. During the Third Industrial Revolution, electronic and information technologies were introduced that further increased production automation. The Fourth Revolution, or Industry 4.0, represents the current trend of automation technologies in the manufacturing industry and mainly includes enabling technologies such as cyber-physical systems (CPS), the Internet of Things (IoT), and cloud computing [80]. The healthcare counterpart to Industry 4.0 is called Health 4.0 [81]. It, too, uses cyber-physical systems (CPS), cloud computing, the internet of "Everything" (including things, services, and people), and evolving mobile communication networks (5G). These all support the personalization of healthcare, a goal of modern health care. Much more about Health 4.0 and "personal health" is discussed in Section 2.

3.5.6 Big data analytics

In the Introduction to this Section 1, I presented a 2017 listing by Forbes Magazine of the "Five Technologies that will Disrupt Healthcare By 2020" [43]. Big Data Analytics was number 3 on the list. While we have touched on some of the others on the list and will expand on them in Section 2 of this book, Big Data Analytics may have the most profound influence on the business, administrative, and clinical aspects of health and wellness and, thus, requires an in-depth description here. This full explanation will make you more comfortable and help you understand the magnitude of the effects of this special category of AI. Indeed, big data analytics is "disruptive" in a very positive way.

For any data to be useful, it must be analyzed, interpreted, and then addressed. Thus, algorithms, not databases and datasets, become the critical factor in utilizing data most effectively. As we mentioned in the previous discussion on expert systems, computer-based algorithms in medicine are "expert systems" that use rules, encoded knowledge, and general principles (heuristics and deterministics) and apply them to diagnosis and treatment. Conversely, in machine learning, algorithms analyze vast numbers of variables, looking for combinations that reliably predict outcomes (e.g., regression analysis). But machine learning is most effective in handling enormous numbers of predictors and combining them in interactive ways. Machine learning allows the use of enormous amounts of new data whose volume or complexity would previously have made analyzing them unimaginable [82].

"Big data" is an evolving term that describes a large volume of structured, semi-structured, and unstructured data that has the potential to be mined for information and used in machine learning projects and other advanced analytics applications. It is characterized by "The 10 [sometimes contracted to the first 5 or 3] Vs of Big Data" (Table 3–7), each of which describes a specific property of big data analytics that must be understood to capture the essence of the technology [83].

1. VOLUME: HOW MUCH DATA IS THERE? It is commonly cited that 4.4 zettabytes of data existed globally in 2013. That number is set to grow exponentially to 44 zettabytes (44 trillion gigabytes) by 2020 as it more than doubles each year (Moore's Law).

Table 3–7 The 10 Vs of Big Data.

1. VOLUME: How much data is there?
2. VELOCITY: How quickly is the data being created, moved, or accessed?
3. VARIETY: How many different types of sources are there?
4. VERACITY: Can we trust the data?
5. VALIDITY: Is the data accurate and correct?
6. VIABILITY: Is the data relevant to the use case at hand?
7. VOLATILITY: How often does the data change?
8. VULNERABILITY: Can we keep the data secure?
9. VISUALIZATION: How can the data be presented to the user?
10. VALUE: Can this data produce a meaningful return on investment?

Most of this data is transient, but health care data such as clinical notes, claims data, lab results, gene sequences, medical device data, and imaging studies are information that must be retained. It becomes even more useful when combined in multiple ways to produce brand new insights. Thus, in health care, storage techniques are necessary, on-premises or in the cloud, to handle and retain large amounts of data. They must also ensure that the infrastructure can keep up with the next V on the list without slowing down other critical functions like EHR access or provider communications.

2. VELOCITY: HOW QUICKLY IS THE DATA BEING CREATED, MOVED, OR ACCESSED? Healthcare information accounts for a respectable proportion of the data transmitted in the world, and the figures continue to rise as the Internet of Things (IoT), medical devices, genomic testing, machine learning, natural language processing, and other data generation and processing techniques continue to evolve. Some of this data must update in real time at the point of care (e.g., the ICU) and be displayed immediately. Thus, system response time is a critical metric in health care. Defining which data sources require immediate access versus days, weeks, or months becomes a necessity.

3. VARIETY: HOW MANY DIFFERENT TYPES OF SOURCES ARE THERE? The types of health care information vary widely, but the more they integrate, the more insights can be garnered. Big data analytics addresses the integration of multiple datasets or ones too complex to be handled through traditional processing techniques. One of the most significant barriers to effective data management is the variety of incompatible data formats, non-aligned data structures, and inconsistent data semantics used throughout the health care system. Health IT developers are starting to address the problem by enlisting the help of application programming interfaces (APIs) and new standards such as Fast Healthcare Interoperability Resource (FHIR) technologies.

4. VERACITY: CAN WE TRUST THE DATA? Providers cannot utilize insights that may have been derived from data that is incomplete, biased, or filled with noise. A 2014 New York Times study [84] showed that data scientists in health care spend more than 60% of their time cleaning up data before it can be used.

Data governance and information governance are vital strategies that healthcare organizations must employ to ensure that their data is clean, complete, standardized, and always ready for use.

5. VALIDITY: IS THE DATA ACCURATE AND CORRECT? Similar to veracity, the validity of data is a critical concern for clinicians and researchers. A dataset may be complete, but is its content and values correct and up to date, and was the information generated using accepted scientific protocols and methods? Who is responsible for curating and stewarding the data? Healthcare datasets must include accurate metadata that describes when, how, and by whom the data is created. This data helps to ensure that analysts understand one another, that their analytics are repeatable, and that future data scientists can query the data and find that for which they are looking.

6. VIABILITY: IS THE DATA RELEVANT TO THE USE CASE AT HAND? Understanding which elements of the data are tied to predicting or measuring the desired outcome is essential for producing reliable results. To do this, organizations must understand what elements they have, whether they are robust enough to use for analysis, and whether the results are genuinely informative or just an interesting diversion. Many predictive analytics (Chapter 4, page 99) projects focus on identifying innovative variables for detailing certain patient behaviors or clinical outcomes. Identifying these variables will no doubt be an ongoing process as more datasets become available.

7. VOLATILITY: HOW OFTEN DOES THE DATA CHANGE? Healthcare data changes quickly, by the second in some cases, which raises the question of how long certain data is relevant, which historical metrics to include in the analysis, and how long to store the data before archiving or deleting it. As the volume of data continues to increase, these decisions become increasingly important based on the cost of data storage and are complicated by the fact that HIPAA requires providers to retain specific patient data for at least 6 years.

8. VULNERABILITY: CAN WE KEEP THE DATA SECURE? Speaking of HIPAA, data vulnerability has skyrocketed in the wake of multiple ransomware attacks and a litany of data breaches. Security is a priority in the healthcare industry, especially as storage moves to the cloud and data starts to travel between organizations as a result of improved interoperability. In 2016, close to a third of hospitals said they were spending more money on data security than in the previous year [85].

9. VISUALIZATION: HOW CAN THE DATA BE PRESENTED TO THE USER? Clinicians struggle with their electronic health record interfaces, complaining about too many clicks, too many alerts, and not enough time to get everything done. These issues add to the complexity of information processing that is part of every clinician's daily workflow and sour users further on the potential of health IT. Filtering clinical data intuitively helps to prevent information overload and may help to mitigate feelings of burnout among overworked clinicians. Other valuable tools include interactive dashboards for reporting financial, operational, or clinical metrics to end-users and online mapping tools to visualize public health concerns. Also available are a variety of new apps for desktops, tablets, and even smartphones, giving users ways to interact with data more meaningfully.


10. VALUE: CAN THIS DATA PRODUCE A MEANINGFUL RETURN ON INVESTMENT? Ultimately, the only reason to engage in big data analytics is to extract some value from the information at hand. Many healthcare organizations are still in the early phases of developing the competencies that allow them to achieve these goals and generate actionable insights that apply to real-world problems. But the value is there for those who adhere to strong data governance principles and take a creative approach to disseminating insights to end-users across organizations.

In summary, Big Data is a term used to describe a collection of data that is huge and yet growing exponentially with time. In short, such data is so large and complex that none of the traditional data management tools can store it or process it efficiently. Big data analytics is the process of using advanced algorithms and machine learning for examining, filtering, aggregating, and modeling large datasets (big data). This allows for the discovery of hidden patterns, trends, conclusions, meaningful correlations, and preferences between variables in order to retrieve intelligent insights from the data not achievable through human analysis and to drive decisions. Applied to health care, this concept is advancing the areas of diagnostics, preventive medicine, precision medicine, population health, research, and cost controls [86]. All of these areas are discussed in detail in Section 2, Chapter 4 of this book.
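As a miniature taste of the "filtering and aggregating" step, the Python sketch below groups a few hypothetical claims records by diagnosis code and totals their cost. The records are invented for illustration; real big data analytics would run this kind of operation over millions of rows on distributed infrastructure.

```python
from collections import defaultdict

# Hypothetical, illustrative claims records (ICD-10-style codes, made-up costs).
claims = [
    {"diagnosis": "E11", "cost": 420.0},
    {"diagnosis": "I10", "cost": 150.0},
    {"diagnosis": "E11", "cost": 310.0},
]

totals = defaultdict(float)
for claim in claims:               # aggregate: total cost per diagnosis code
    totals[claim["diagnosis"]] += claim["cost"]

print(dict(totals))  # {'E11': 730.0, 'I10': 150.0}
```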

3.5.7 Blockchain

While different from both big data analytics and AI machine learning, the intersection and convergence of blockchain technology with AI and big data may prove to be one of the strongest (and most disruptive) advances in the future of health care. An understanding of blockchain technology, combined with its applications with AI and big data, quickly demonstrates the potential of these combined technologies.

Blockchain is a distributed database existing on multiple computers at the same time. It is continually growing as new sets of recordings, or "blocks," are added to it. Each block contains a timestamp and a link to the previous block, so they form a chain (ergo, "blockchain"). No single entity manages the database; rather, everyone in the network gets a copy of the whole database. Old blocks are preserved forever, and new blocks are added to the ledger irreversibly, making it impossible to manipulate the ledger by faking documents, transactions, and other information. All blocks are encrypted uniquely so that everyone can have access to all the information, but only a user who owns a special cryptographic (coded) key can add a new record to a particular chain. As long as you remain the only person who knows the key, no one can manipulate your transactions. Also, cryptography (writing or solving codes) is used to guarantee the synchronization of copies of the blockchain on each computer (or node) in the network.

Blockchain is commonly associated with cryptocurrencies and bitcoin, but in the context of health care, we can think of blockchain as a digital medical record. Every record is a block that has a label stating the date and time when the document is entered. Neither the doctor nor the patient should be able to modify the records already made. Nevertheless, the


doctor owns a private key that allows him/her to make new records, and the patient holds a public key that enables them to access the files anytime. This method makes the data both accessible and secure [87]. Blockchain can create a mechanism to manage large quantities of medical information (e.g., medical records, EHRs, insurance data) stored in the cloud. This method of managing big data will increase interoperability while maintaining privacy and security of health care data. It contains inherent integrity and conforms to strict legal regulations [88]. Increased interoperability is beneficial in health care management and health outcomes [89]. For perspective, health-related blockchain spending was estimated to be about $177 million in 2018 but is expected to rise to more than $5.6 billion by 2025. Blockchains are causing old systems to evolve, increasing efficiency while cutting down on costs across the healthcare industry [90].
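The essential "chain of hash-linked blocks" idea can be sketched in a few lines of Python, as below. This is a bare-bones illustration only: it omits keys, consensus, and distribution entirely, the record contents are invented, and it is not how any particular healthcare blockchain product is built.

```python
import hashlib
import json
import time

# Each block stores a timestamp, some data, and the hash of the previous block,
# so altering an old block breaks every later link in the chain.
def make_block(data, prev_hash):
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

chain = [make_block("genesis record", "0" * 64)]
chain.append(make_block("visit note 2024-01-05", chain[-1]["hash"]))
chain.append(make_block("lab result 2024-01-12", chain[-1]["hash"]))

# Verification: each block must point at the hash of its predecessor.
print(all(chain[i]["prev_hash"] == chain[i - 1]["hash"] for i in range(1, len(chain))))
```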

3.5.8 Robotics

Last but certainly not least among the "Specialized AI Systems" is one of AI's most exciting and disruptive applications in all areas of life, from games to motor vehicles to health care. However, it is important to note upfront that robotics and artificial intelligence are not the same thing at all. The 2 fields are almost entirely separate. This difference is classically illustrated in textbooks on both technologies through a Venn diagram (Fig. 3–14). The overlap between the 2 technologies represents the category of "Artificially Intelligent Robots." In other words, artificially intelligent robots are simply robots that are controlled by AI programs [91]. As with any complex topic, definitions can struggle to capture the full meaning of specific terms and concepts. Nonetheless, a few words do require general definitions in the field of robotics. Thus, a good "general" definition of a robot is a programmable machine that physically interacts with the world around it and is capable of carrying out a complex series of actions autonomously (self-governing) or semi-autonomously [92]. The "general" definition of robotics is an interdisciplinary branch of engineering and computer science (AI) that deals with the design, construction, operation, and use of robots, as well as computer systems for their control ("artificially intelligent robots"), sensory feedback, and information processing [93].

FIGURE 3–14 Artificial intelligent robots. Artificial intelligent robots are simply robots that are controlled by AI programs. This difference is illustrated through this Venn diagram. The overlap between the two technologies represents the category of “Artificial Intelligent Robots.”

processing [93]. And one more relevant term: "robotic process automation" (RPA) is defined as the use of software to mimic human actions to perform a sequence of steps, leading to meaningful activity, without any human intervention [94]. Robots are loosely defined as electromechanical devices that are actuated by specific input agents like sensors or transducers and are guided to work by computer circuitry with the standard input layer, inner layer (the processor in this case), and output layer. The inputs to the robot arrive via sensors; the CPU does the processing; then the desired mechanical action output is obtained. The sensory inputs that the robot takes can be anything from smell and touch to visual differences and voice (NLP). The central processing unit is the microprocessor or microcontroller that processes this input quantity, searches for the corresponding function to perform from the previously programmed instruction set, and then sends the signal on to the output port. Upon reception of this signal, the robot will perform the desired action. The inputs to the robot arrive via sensors and transducers, which might include (but not be limited to):

• Contact/touch sensors
• Temperature sensors
• Light sensors
• Sound sensors
• Proximity sensors
• Distance sensors
• Pressure sensors

The processing unit is a microcontroller or microprocessor (described earlier in "AI Software," page 36). This choice, like the choice of output unit, depends on the driving load for the robot. The most common output units include (but are not limited to):

• Actuators
• Relays
• Speakers
• LCD screens

The actuating sources change based on the robotic application. For wired applications, cables and wires are used, while for wireless robots, RF (radio frequency), RFID (radio frequency identification), Wi-Fi, and DTMF (dual-tone multi-frequency) technologies are used. Wireless technology is used in most robots today, from autonomous and semi-autonomous to humanoid robots. The working of a wireless robot is similar to that of a wired robot, except for a change in the circuitry, usually a transmitter-receiver pair for the wireless technology [95]. The overlap in the Venn diagram in Fig. 3–14 represents the category of "Artificially Intelligent Robots." A traditional robot is a machine able to perform repetitive, highly specialized tasks (e.g., industrial robots). However, an "AI intelligent robot" using Machine Learning can extract information from the surrounding environment to make meaningful decisions (a "behavior-based" robot).
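The sensor-to-processor-to-actuator loop described above can be sketched in a few lines of Python. The sensor readings, thresholds, and actuator commands below are entirely hypothetical, standing in for whatever hardware a particular robot carries.

```python
# Minimal sketch (hypothetical hardware): the input -> processing -> output
# loop described above. Sensor readings come in, the "controller" looks up a
# pre-programmed response, and an actuator command goes out.
from dataclasses import dataclass


@dataclass
class SensorReading:
    proximity_cm: float     # distance sensor
    temperature_c: float    # temperature sensor
    touch: bool             # contact/touch sensor


def decide(reading: SensorReading) -> str:
    """Stand-in for the microcontroller's pre-programmed instruction set:
    map sensor input to an actuator command."""
    if reading.touch or reading.proximity_cm < 10:
        return "stop_motors"
    if reading.temperature_c > 60:
        return "sound_alarm"
    return "drive_forward"


def control_loop(readings):
    """One pass per sensor sample: read input, process, emit output."""
    for reading in readings:
        command = decide(reading)
        # In a real robot this would drive relays, motors, or speakers.
        print(f"{reading} -> {command}")


control_loop([
    SensorReading(proximity_cm=120.0, temperature_c=22.0, touch=False),
    SensorReading(proximity_cm=6.5, temperature_c=22.0, touch=False),
])
```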

Two examples of how Machine Learning can improve the performance of robotics systems are Multi-Robot Systems and Swarms. Programming collective behaviors with traditional programming can become an extremely challenging task. Using Machine Learning (Reinforcement Learning) makes it easier and can lead to more innovative solutions not previously considered. The main difference between Multi-Robot Systems and Swarms is that the former have global knowledge of the environment and can use a centralized architecture. The latter, swarms, don't have a global understanding of the environment and use a decentralized architecture. Deep Learning algorithms can provide even more significant benefits in robotics by using Multi-Layer Artificial Neural Networks (ANNs). They can perform incredibly well in tasks such as image recognition, which has critical applications in a robotic system's vision. One problem area for using Deep Learning algorithms in robotics is the current inability to track the algorithm's decision-making process fully. This problem is addressed and corrected by "explanation facilities" [96]. Another variant of robotics is the AI "creation" of "chatbots." (Lots more about chatbots in Chapter 5, page 185.) These are computer programs that simulate human conversation through voice commands, text chats, or both. A chatbot (short for chatterbot) is an AI feature that can be embedded in and used through any major messaging application. There are several synonyms for the chatbot, including "talkbot," "bot," "IM bot," "interactive agent," "conversational or virtual assistant," or "artificial conversation entity" [97]. A chatbot that functions through machine learning is programmed to self-learn as it is introduced to new dialogues and words. In effect, as a chatbot receives new voice or textual conversations through NLP, the number of inquiries it can reply to (NLG) and the accuracy of each response it gives increase. Some examples of chatbot technology are virtual assistants like Amazon's Alexa, Google (Home) Assistant, and Apple's Siri, and messaging apps such as WeChat and Facebook Messenger. A discussion of robotics in this day and age would not be complete without mention of the controversial yet compelling subject of "self-driving (driverless or autonomous) vehicles." The relevance of this topic to the theme of this book, i.e., AI and health care, can only be justified in 2 ways. First, it is highly likely that all of your interactions with the health care system in the future will include your need for transportation, and undoubtedly that will include the use of a hybrid, robotic vehicle. The second justification for a discussion on autonomous vehicles (the first being pretty "thin") is simply that the subject is "pretty cool," especially when talking about disruptive technologies. By definition, self-driving (autonomous) vehicles have been classified into 6 levels, published in 2014 by the Society of Automotive Engineers International and officially adopted as the standard for autonomous vehicle technology in 2016 by the National Highway Transportation Safety Administration [98]. The 6 levels include [99]:

1. Self-Driving Car Automation Level 0: No automation;
2. Self-Driving Car Automation Level 1: Some autonomous functions (e.g., cruise control, automatic braking), but driver assistance always required;

3. Self-Driving Car Automation Level 2: Partial automation that includes pre-programmed or fixed scenarios, but the driver must monitor the environment and keep hands on the wheel at all times;
4. Self-Driving Car Automation Level 3: Conditional autonomy, where the car can safely control all aspects of driving in a mapped environment, but a human is still required to be present, monitoring and managing changes in road environments or unforeseen scenarios;
5. Self-Driving Car Automation Level 4: High automation, where the car has self-driving automation with no driver interaction needed and stops itself if the systems fail;
6. Self-Driving Car Automation Level 5: Fully autonomous, where the program controls the destination with no human involvement nor the ability to intervene. Vehicles will have no steering wheels nor gas and brake pedals.

So how will these increasing levels of automation be achieved? Autonomous vehicles are fitted with numerous sensors, radars, and cameras to generate massive amounts of environmental data. All of these data form a complex similar to the human brain's "sensorium," the totality of neurological centers that receive, process, and interpret sensory stimuli (analogous to the neural networks mimicking neuroscience described previously in this Chapter). These stimuli allow the autonomous vehicle to "see, hear and feel" the road, road infrastructure, other vehicles, and every other object on or near the road, just like a human driver. This data is processed with on-board computers, and data communication systems are used to securely communicate valuable information (input) to the autonomous driving cloud platform (e.g., Tesla's "Autopilot" [100], Waymo's Lidar et al. [101]). The autonomous vehicle communicates the driving environment and/or the particular driving situation to the Autonomous Driving Platform. The Autonomous Driving Platform uses AI algorithms as an "intelligent agent" to make meaningful decisions. It acts as the control policy or the brain of the autonomous vehicle. This intelligent agent connects to a database that acts as a memory where past driving experiences are stored. This data, along with the real-time input coming in through the autonomous vehicle and the immediate environment around it, helps the intelligent agent make accurate driving decisions. The autonomous vehicle now knows precisely what to do in this driving environment and/or particular driving situation. Based on the decisions made by the intelligent agent, the autonomous vehicle detects objects on the road, maneuvers through the traffic without human intervention, and gets to the destination safely. Autonomous vehicles are also being equipped with AI-based NLP, NLG, gesture controls, eye tracking, virtual assistance, mapping, and safety systems, to name a few. These functions are also carried out based on the decisions made by the intelligent agent in the Autonomous Driving Platform. The driving experiences generated from every ride are recorded and stored in the database to help the intelligent agent make much more accurate decisions in the future. This data loop, called the Perception-Action Cycle, takes place repetitively. The more Perception-Action Cycles take place, the more intelligent the intelligent agent

becomes, resulting in a higher accuracy of making decisions, especially in complex driving situations. AI, especially neural networks and deep learning, has become an absolute necessity to make autonomous vehicles function safely and adequately. AI is leading the way for the launch of Level 5 autonomous vehicles, where there is no need for a steering wheel, accelerator or brakes [102].
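The Perception-Action Cycle just described can be reduced to a toy loop: perceive, decide, act, and store the experience for future decisions. The sketch below is purely illustrative; it is not any vendor's driving stack, and the sensor fields, thresholds, and actions are invented for the example.

```python
# Minimal sketch of a Perception-Action Cycle (hypothetical; not any vendor's
# actual driving stack). Each cycle: perceive -> decide -> act -> store the
# experience so future decisions can draw on it.
experience_db = []   # stands in for the platform's memory of past drives


def perceive(raw_sensor_frame: dict) -> dict:
    """Reduce raw camera/radar/lidar input to a simple scene description."""
    return {"obstacle_ahead": raw_sensor_frame["lidar_min_m"] < 15,
            "speed_kph": raw_sensor_frame["speed_kph"]}


def decide(scene: dict) -> str:
    """Intelligent-agent stand-in: pick an action from the current scene
    plus any matching past experiences."""
    similar = [e for e in experience_db if e["scene"] == scene]
    if scene["obstacle_ahead"]:
        return "brake"
    if similar and similar[-1]["action"] == "brake":
        return "proceed_cautiously"   # recent experience tempers the decision
    return "maintain_speed"


def perception_action_cycle(raw_sensor_frame: dict) -> str:
    scene = perceive(raw_sensor_frame)
    action = decide(scene)
    experience_db.append({"scene": scene, "action": action})  # learn from the ride
    return action


print(perception_action_cycle({"lidar_min_m": 8.0, "speed_kph": 40}))   # brake
print(perception_action_cycle({"lidar_min_m": 60.0, "speed_kph": 40}))  # maintain_speed
```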

3.6 Sample AI scenarios

Given the enormous amount of information presented in this Chapter, it may be time to summarize (at least some of it) in a practical (and maybe somewhat fun) manner. So, to illustrate the concepts and workings of AI, let's present 2 sample case studies. The first case study will be the application of AI in answering a profound dilemma of the ages, "Why is the Mona Lisa smiling?"; the second will be a personal struggle and resolution in "The 'Great Steak' experience."

3.6.1 "Why is the Mona Lisa smiling?" [103]

One of the more elegant examples of how AI artificial neural networking (ANN) replicates the human brain is the human visual system [104] and the sensory domain of vision and sight. This case study (Fig. 3–15) uses Leonardo da Vinci's masterpiece, "The Mona Lisa," as the sensory visual stimulus (supervised, labeled information) and asks the AI computer the classic question, "Why is she smiling?" Relevant information regarding the question is provided through keyboard (or audio NLP) input. An image of the painting allows the retina to collect wavelengths of light (in "pattern

FIGURE 3–15 Case study (example) of “Why is Mona Lisa smiling?” (input layer). Leonardo da Vinci’s masterpiece, “Mona Lisa” can serve as the sensory visual input stimulus (red) to ask the AI computer the classic question, “Why is she smiling?”.

recognition" from the painting). It transmits them through optic radiations (#1 [wavelength colored] arrows in Fig. 3–15) to the lateral geniculate nucleus (LGN). This process is analogous to the supervised user input portion (GPU pattern recognition) of an AI software platform in machine learning. Acting as a relay station, neurons from the LGN send signals (#2 [blue] arrows in Fig. 3–16) to the subcortical limbic hippocampus and the amygdala (the neural center for emotions and memory), creating neural layers of data (light frequency, intensity, patterns, configuration). This labeled information is "neural networked" (#3 [blue] arrows in Fig. 3–16) to relevant, preexisting, higher cortical neural layers and their unsupervised, unlabeled data related directly and indirectly to the user's question(s), in this case, vision, art and, specifically, the Mona Lisa. This neural networking is analogous to AI's API computer software probing knowledge databases to address the "smile" dilemma. These collective stimuli are analyzed and compared (by inference deductions, heuristics, probabilities) with the supervised, first level, labeled light wavelength data. Finally, the nerve impulses from these subcortical and cortical levels are transmitted through associated cortical and optic radiations (#4 [green] arrows in Fig. 3–17) to the visual cortex (areas V1 and V5). There, they are cognitively interpreted (pattern recognition) through cerebral reinforcement and "memory networking." This process corresponds to the inference engine "deep learning" level in AI, which yields logical decisions and behavioral, situational, emotional, environmental, and all-inclusive interactions [105]. Through these logical inference rules, the brain and an AI computer can evaluate reasonable deductions, probabilities, and conclusions to the question, "Why is she smiling?" (Fig. 3–18). And AI's answer would probably be the same as that of all the art experts over the centuries: "Only Leonardo knows for sure."
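For readers who want to see the input-to-hidden-to-output flow in code rather than anatomy, the toy feedforward network below mirrors the same idea at a very small scale. Everything in it is made up for illustration: the "pixel" input, the random (untrained) weights, and the output classes; a real image model would be far larger and trained on labeled data.

```python
# Toy sketch of the input -> hidden -> output flow described above, using a
# tiny feedforward network (NumPy). Weights are random and untrained, so the
# output probabilities are meaningless beyond illustrating the mechanics.
import numpy as np

rng = np.random.default_rng(0)

x = rng.random(64)                 # stand-in for retinal input (an 8x8 image patch)
W1 = rng.normal(size=(16, 64))     # input layer -> hidden ("subcortical") layer
W2 = rng.normal(size=(3, 16))      # hidden layer -> output ("cortical decision") layer
labels = ["smiling", "neutral", "frowning"]   # hypothetical output classes

hidden = np.maximum(0, W1 @ x)     # ReLU activation
logits = W2 @ hidden
probs = np.exp(logits) / np.exp(logits).sum()   # softmax over the classes

print(dict(zip(labels, probs.round(3))))
```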

FIGURE 3–16 Case study (example) of “Why is Mona Lisa smiling?” (inner layer). Labeled information is “neural networked” (blue number 3 arrows) to relevant, preexisting, higher cortical neural layers and their unlabeled data related directly and indirectly to the user’s question (“Why is she smiling?”).

FIGURE 3–17 Case study (example) of "Why is Mona Lisa smiling?" (processing). Nerve impulses from the subcortical and cortical levels are transmitted through associated cortical and optic radiations (green number 4 arrows) to the visual cortex (areas V1 and V5).

FIGURE 3–18 Case study (example) of “Why is Mona Lisa smiling?” (output). Through logical inference rules (green), the brain and an AI computer can evaluate reasonable deductions, probabilities, and conclusions to the question, “Why is she smiling?” And most likely, Explainable AI (XAI) will conclude as have art experts over the years, “Only Leonardo knows for sure.”.

Beyond this example of sensory experience (vision and sight) correlated with AI machine learning and neural networking (ANN), an infinite number of scenarios exists, from the most serious to the sublime, as in the next case study.

3.6.2 The "great steak" experience [106]

AI can reach all provinces of human behavior, from the practical to the theoretical to the intellectual to the behavioral and even to the sensory and emotional. This next case study presents a simple illustration of an AI expert system involving multiple domains. The example also demonstrates the 3 levels of machine learning, supervised, unsupervised, and reinforcement learning, operating in a scenario of simultaneous sensory, emotional, and practical applications with logical conclusions and results. The first time you taste a great steak and appreciate its flavor, your brain labels "eating a great steak" as a positive gastronomic experience (let's use the acronym PGE). Your brain has just developed a neural layer of "labeled data" through a "supervised learning" experience (you tasting the steak). Similarly, AI computer input software can create a simulated digital program of your brain's supervised neural layer by analyzing the texture, characteristics, appearance, chemical compounds, fats, and protein that constitute a "great steak." The software program "labels" and remembers this data as a PGE. After a couple of "great steak PGEs," your brain starts asking you, "As much as I enjoy this PGE, can I afford it?" At this stage, the process called "neural networking" begins to develop in your brain using the hippocampus, amygdala, and other preexisting cortical neural layers containing stored financial and economic information and knowledge. This neural networking isn't adding new, labeled data (as in supervised learning) but rather interconnecting (analogous to API software) the previously labeled, supervised (PGE) neural layer with other preexisting "unlabeled data" in limbic and cortical neural layer(s). This networking is considered "unsupervised learning" (also referred to as neuroplasticity). Computer API software simulates this unsupervised learning experience by creating (and using preexisting) decision-making economic and financial datasets that can analyze the "great steak PGE" against traditional and variable economic norms. With each additional "great steak PGE," similar to your brain, inference engine computer algorithm(s) begin to probe (using Bayesian logic) additional directly and indirectly related economic considerations and datasets. This computer activity ("neural training") is the actual beginning of the deep learning AI process. Finally, the dilemma created by the unsupervised PGE learning experience (". . .can I afford it?") causes your brain's hippocampus, amygdala, and cortical layers to assess your budgetary priorities. This assessment leads to a "reinforcement learning" experience with a positive emotional and practical result. The question, "Can I afford it (PGE)?" leads you to logical "if/then" calculations (by inference, heuristics, forward and backward chaining): "If I reorganize my budgetary efficiencies (neuroplasticity) to allow me to enjoy an occasional PGE, then I can enjoy more 'great steak PGEs.'" Using random access memory (RAM), the computer's sensory input and inference engine software can utilize multiple directly and indirectly related (and unrelated) datasets. That process enables analysis and adjustments ("backward chaining") of the complex economic issues needed to answer the question. The result of such AI reinforcement learning is the most significant benefits ("reward") to you, including an organized, balanced budget and more "great steak PGEs."
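The reinforcement-learning piece of this scenario can be sketched as a simple value update that is nudged by the reward experienced after each choice. The sketch below is a toy illustration only: the actions, rewards, and learning rate are invented, and it compresses the "can I afford it?" dilemma into a single budget flag.

```python
# Toy sketch of reinforcement learning for the "great steak" dilemma: a value
# estimate for "order the steak" vs "skip it" is nudged toward the reward
# observed after each choice. All rewards and rates are made up.
import random

values = {"order_steak": 0.0, "skip_steak": 0.0}   # learned value of each action
learning_rate = 0.1


def reward(action: str, budget_ok: bool) -> float:
    """A positive gastronomic experience (PGE) only pays off if the budget allows it."""
    if action == "order_steak":
        return 1.0 if budget_ok else -1.0   # enjoyment vs. budget regret
    return 0.2                              # modest reward for saving money


for episode in range(200):
    budget_ok = random.random() < 0.5       # some months the budget allows a PGE
    # epsilon-greedy choice: usually exploit the better-valued action, sometimes explore
    if random.random() < 0.1:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    # incremental value update toward the observed reward
    values[action] += learning_rate * (reward(action, budget_ok) - values[action])

print(values)
```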

These are just 2 scenarios, albeit lighthearted, from among a virtually infinite number of AI storylines. Machine and deep learning may be able to "train," assimilate, and continue to expand their knowledge bases to address virtually all of life's issues and questions. Such possibilities also include any subjective and objective (serious) areas, such as health and wellness. Section 2 of this book concentrates on health care scenarios. It provides a plethora of examples and applications of current and future AI methods and strategies in the areas of bioscience, health, and wellness. And so, we come to the end of Section 1. I hope it has helped your understanding of the science and technologies of AI. All of the discussions have been in general terms to provide you with a basis for the more specific health-related discussions in Section 2. In that Section, we become rather specific in applying the numerous AI concepts and programs that are changing ("disrupting") the business and administration of health care. Perhaps of more value to some will be an expansion of your understanding of and benefits from the clinical aspects of health and wellness care. I hope you enjoyed Section 1, and I hope Section 2 becomes your AI health and wellness guide and reference text for years to come. Thank you.

References [1] Merriam-Webster Dictionary. Merriam-Webster Inc.; 2019. [2] Shapshak P. Artificial Intelligence and brain. Bioinformation 2018;14(1):3841. , https://doi.org/ 10.6026/97320630014038 . . [3] Dormehl L. What is an artificial neural network? Here’s everything you need to know. Digital Trends 2019. [4] Grafman J. A glossary of key brain science terms. The Dana Foundation; 2106. [5] Karpathy A. Convolutional neural networks for visual recognition. Stanford.edu. cs231n.github.io.; 2019. [6] Diaa AMA. The mathematical model of the biological neuron. The Bridge: Experiments in Science and Art, Virtual Residence, Stefanos & Diaa group, 11th week, posted on November 24, 2018. [7] Zimmer C. 100 trillion connections: new efforts probe and map the brain’s detailed architecture. Sci Am 2011. [8] Miikkulainen R, Liang J, Meyerson E, et al. Artificial intelligence in the age of neural networks and brain computing. Chapter 15 - Evolving deep neural networks. Academic Press; 2019. p. 293312. [9] Saha S. A comprehensive guide to convolutional neural networks — the ELI5 way. Data Science; December 15, 2018. [10] Dormehl L. What is an artificial neural network? Here’s everything you need to know. Digital Trends; 2019. [11] Algorithms. Wikipedia. Edited on April 25, 2019. [12] Li H. Which machine learning algorithm should I use? SAS Blog; April 12, 2017. [13] Dasgupta A, Nath A. Classification of machine learning algorithms. Int J Innovative Res Adv Eng (IJIRAE) 2016;3 ISSN: 23492763 Issue 03. [14] Prabhakar A. The merging of humans and machines, is happening now. DARPA; January 27, 2017.

[15] Beal V. The difference between the internet and the World Wide Web. Webopedia; August 7, 2018 [16] Gallo A. A refresher on regression analysis. Harvard Business Review. No. 4; 2015. [17] McElwee K. From math to meaning: Artificial intelligence blends algorithms and applications. Princeton University. January 2, 2019. [18] Prabhakar A. Mathematics for AI: All the essential math topics you need. Essential list of math topics for Machine Learning and Deep Learning. Towards Data Science. August 9, 2018. [19] Deisenroth MP. Faisal AA. Ong CS. Mathematics for Machine Learning. Cambridge University Press. July 3, 2020. [20] Software terms: framework definition. TechTerms; 2013. [21] Shalev-Shwartz S, Ben-David S. Understanding machine learning: from theory to algorithms. Cambridge: Cambridge University Press; 2014. [22] Paruthi A. Artificial intelligence hardware. Medium; December 16, 2018. [23] Brownlee J. Supervised and unsupervised machine learning algorithms. Machine Learning Mastery; March 16, 2016. [24] LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015;521:43644. [25] Brownlee J. Understanding machine learning algorithms. Machine Learning Mastery; March 16, 2016. [26] Pereira JA, Martin H, Acher M, et al. Learning software configuration spaces: a systematic literature review. GroundAI. arXiv:1906.03018v1. June 7, 2019. [27] Polamuri S. Supervised and unsupervised learning. Datasprint; September 19, 2014. [28] Witten I, Eibe F, Hall M. Data mining. Burlington, MA: Morgan, Kaufmann; 2011. p. 1023. ISBN 978-012-374856-0. [29] Liang P. Semi-supervised learning for natural language. MIT Press; 2005. p. 4452. [30] Soni D. Supervised vs. unsupervised learning. Medium; March 22, 2018. [31] Wen I. Data mining process in R language. Computer Language. October 19, 2018. [32] Sarmah H. Understanding association rule learning & its role in data mining. Analytics India Mag; February 18, 2019. [33] Sarmah H. Understanding association rule learning & its role in data mining. ProLearning. February 18, 2019. [34] Piech CK. Means. Stanford University CS221; 2013. [35] Reddy C. Understanding the concept of hierarchical clustering technique. DataScience; December 10, 2018. [36] Dorairaj S. Hidden Markov models simplified. Medium; March 20, 2018. [37] Mayo M. 5 things to know about machine learning (18:n11). KDnuggets; 2018. [38] Sutton RS, Barto AG. Reinforcement learning an introduction. The MIT Press; 2012. [39] Fan S. Google’s new A.I. gets smarter thanks to a working memory. Singularity Hub; November 01, 2016. [40] Fan S. DeepMind’s new research on linking memories, and how it applies to AI. SingularityHub; September 26, 2018. [41] Admin. Meet the amygdala of the self-driving car. AutoSens; September 12, 2017. [42] Trafton A. How the brain switches between different sets of rules. MIT News; November 19, 2018. [43] Press G. The brute force of IBM deep blue and Google DeepMind. Forbes; February 7, 2018. [44] Bhatt S. Things you need to know about reinforcement learning. KDNuggets; March 2019.

[45] ADL. An introduction to Q-Learning: reinforcement learning. Medium; September 3, 2018. [46] Bhartendu. SARSA reinforcement learning. MathWorks. version 1.0.0.0 (117 KB). [47] Hinzman L. Deep-Q-Networks explained. Toward Data Science; February 15, 2019. [48] Lin H. Chapter 11  Bridging the logic-based and probability-based approaches to artificial intelligence. Academic Press; 2017. p. 21525. , https://doi.org/10.1016/B978-0-12-804600-5.00011-8 . . [49] Suchow JW, Bourgin DD, Griffiths TL. Evolution in mind: evolutionary dynamics, cognitive processes, and bayesian inference. Science Direct. Elsevier 2017;21(7):52230. , https://doi.org/10.1016/j. tics.2017.04.005 . . [50] Kinser PA. Brain structures and their functions. Serendip; May 2, 2018. [51] Singh H. Hardware requirements for machine learning. eInfochips; February 24, 2019. [52] Martindale J. What is RAM? Digital Trends; February 3, 2019. [53] RAM. Computer hope. April 2, 2019. [54] Martindale J. What is a CPU? Digital Trends; March 8, 2018. [55] Silwen. What are the main functions of a CPU? TurboFuture; January 21, 2019. [56] Alena. GPU vs. CPU computing: what to choose? Medium; February 8, 2018. [57] Litjens G, Kooi T, Bejnordi BE, et al. A survey on deep learning in medical image analysis. Med Image Anal 2017;42:6088. [58] Graphics Processing Unit (GPU). Nvidia; March 26, 2016. [59] Leaton R. APU vs. CPU vs. GPU. Which one is best for gaming? WePC; January 4, 2019. [60] Johnson K. Google cloud makes pods with 1,000 TPU chips available in public beta. LogMeIn; May 7, 2019. [61] Rouse MAI. Accelerators. SearchEnterpriseAI; April 2018. [62] Woodford C. Quantum computing. ExplainThatStuff; March 26, 2019. [63] Greenemeier L. How close are we  really - to building a quantum computer? Scientific American; May 30, 2018. [64] Fulton S. What neuromorphic engineering is, and why it’s triggered an analog revolution. ZDNet; February 8, 2019. [65] Muslimi M. Are neuromorphic chips the future of AI and blockchain? Hackernoon; April 2, 2019. [66] Understanding ASIC development. AnySilicon; October 23, 2017. [67] Touger E. What is an FPGA and why is it a big deal? Prowess; September 24, 2018. [68] Wood AM. Get started with VHDL programming: design your own hardware. Who is hosting this? February 12, 2019. [69] Seif G. An easy introduction to natural language processing. Data Science; October 1, 2018. [70] Garbade MJ. A simple introduction to natural language processing. Medium; October 15, 2018. [71] Ghosh P. The fundamentals of natural language processing and natural language generation. Dataversity; August 2, 2018. [72] Joshi N. The state of the art in natural language generation. Allerin; April 8, 2019. [73] ArseneIoan O, DumitracheIoana M. Expert system for medical diagnosis using software agents. Science Direct. doi.org/10.1016/j.eswa.2014. 10.026. [74] Team. What is expert system in artificial intelligence  how it solve problems. Dataflair; November 15, 2018.

[75] Tan H. A brief history and technical review of the expert system research IOP Conf. Ser.: Mater. Sci. Eng. 242 012111. 2017. [76] Tjoa E, Guan C. A survey on explainable artificial intelligence (XAI): towards medical XAI. Cornell University. arXiv:1907.07374 [cs.LG]; October 15, 2019. [77] Rouse M. The Internet of Things (IoT). TechTarget; March 2019. [78] Hermann M, Pentek T, Otto B. Design principles for industrie 4.0 scenarios. In: Proceedings of 2016 49th Hawaii International Conference on Systems Science, January 58, Maui, Hawaii; 2016. doi:10.1109/HICSS.2016.488. [79] Li DX, Eric LX, Ling L. Industry 4.0: state of the art and future trends. Int J Prod Res 2018;56 (8):294162. , https://doi.org/10.1080/00207543.2018.1444806 . . [80] GTAI (Germany Trade & Invest). Industries 4.0-smart manufacturing for the future. Berlin: GTAI; 2014. [81] Thuemmler C, Bai C. Health 4.0: how virtualization and big data are revolutionizing healthcare. Basel: Cham; Springer; 2017. [82] Mullainathan S, Spiess J. Machine learning: an applied econometric approach. J Economic Perspect 2017;31(2):87106. , https://doi.org/10.1257/jep.31.2.87 . . [83] Bresnick J. Understanding the many V’s of healthcare big data analytics. HealthITAnalytics; June 05, 2017. [84] Lohr S. For big-data scientists, ‘Janitor Work’ is key hurdle to insights. New York Times; 2014. [85] Heath S. More hospitals invest spending in healthcare data security. HealthITSecurity; February 1, 2016. [86] Catalyst. Healthcare big data and the promise of value-based care. NEJM; January 1, 2018. [87] Omelchenko D. What is blockchain in layman’s terms. ihodl.com. September 4, 2017. [88] Zaria A, Ekblaw A, Vieira T, Lippman A. Using blockchain for medical data access and permission management. In: 2nd International Conference on Open and Big Data (OBD); August 2224, 2016. [89] Anuraag A, Vazirani AA, Odhran O’Donoghue O, Brindley D. Implementing blockchains for efficient health care: systematic review. J Med Internet Res 2019. [90] Morrissey D. How blockchain technology is helping the healthcare industry evolve. Electronic Communications Network (ECN) Magazine; 2019. [91] Owen-Hill A. What’s the difference between robotics and artificial intelligence? Robotiq; July 19, 2017. [92] Nichols G. Robotics in business: everything humans need to know. Robotics; July 18, 2018. [93] , https://en.wikipedia.org/wiki/Robotics.; 2019. [94] Kappagantula S. Robotic process automation  all you need to know about RPA. Edureka; May 22, 2019. [95] Swetha B. A simple explanation of how robots work. TechSpirited; December 17, 2018. [96] Ippolito PP. Need for explainability in AI and robotics. Toward Data Science; April 18, 2019. [97] Frankenfield J. Chatbot. Investopedia; June 26, 2019. [98] National Highway Transportation Safety Administration (NHTSA). Automated driving systems. Federal Registry. ,https://transportation.gov/.; November 23, 2018. [99] Rhodes MG. Self-driving cars explained. Dryve; November 28, 2017. [100] , https://www.tesla.com/support/autopilot.. [101] Ohnsman A. Self-driving unicorn aurora, backed by Amazon, is buying a laser lidar maker. Forbes; May 23, 2019. [102] Gadam S. Artificial intelligence and autonomous vehicles. Medium; April 19, 2018.

[103] Catania LJ, Nicolitz E. Artificial intelligence and its applications in vision and eye care (Chapter 2). In: Advances in optometry and ophthalmology 2018 textbook. pp 910, 2452-1760/18/a 2018. Elsevier Inc. , https://doi.org/10.1016/j.yaoo.2018.04.001 . . [104] Garvert MM, Frston KJ, Dolan RJ, et al. Part 2: Subcortical amygdala pathways enable rapid face processing. Science Direct, Vol. 102. Elsevier; 2014. p. 30916. [105] Mujica-Parodi LR, Jiook Cha J, Gao J. From anxious to reckless: a control systems approach unifies prefrontal-limbic regulation across the spectrum of threat detection. Frontiers in Systems Neuroscience, Vol. 11. Article 18; April 2017. [106] Catania LJ, Nicolitz E. Artificial intelligence and its applications in vision and eye care (Chapter 2). In: Advances in optometry and ophthalmology 2018 textbook. pp 1113, 24521760/18/a 2018. Elsevier Inc. , https://doi.org/10.1016/j.yaoo.2018.04.001 . .

SECTION

II Artificial intelligence (AI): Applications in health and wellness

Nothing in medicine ever comes easy, and all of the intelligence in the world, artificial or not, won't change that.
Jason Moore, Director of the University of Pennsylvania's Institute for Biomedical Informatics, Perelman School of Medicine

Introduction We are all interested in health and wellness for ourselves personally, our loved ones, and for humanity. We tend to think of health as defined by the World Health Organization (WHO) since 1948 as “. . .a state of complete physical, mental, and social well-being and not merely the absence of disease or infirmity” [1]. Although there is no consensus on the concept of wellness and well-being, there is general agreement that, at a minimum, “. . .well-being includes the presence of positive emotions and moods (e.g., contentment, happiness), the absence of negative emotions (e.g., depression, anxiety), satisfaction with life, fulfillment and positive functioning” [2]. Beyond a personal interest in health and wellness, we must also consider the concept of “public health.” It is “the art and science of preventing disease, prolonging life, and promoting health through the organized efforts and informed choices of society, organizations, public and private communities, and individuals” [3]. Sadly and acutely, the COVID-19 pandemic (Chapter 8) has made the world painfully aware of the devastating implications and direct influence the science of epidemiology and discipline of public health have on our lives. By this definition, we begin to understand that our personal health and wellness is dependent on a system greater than any one individual. Indeed, it is a system in which the whole is greater than the sum of its parts. It is a public system whose goal is to deliver the best care to each individual it serves. That enormous collective goal has created the largest “business” in the world, the business of health care. In the United States alone, health care expenditures in 2018 exceeded $3.65 trillion [4]. Projecting what that expenditure may rise to over the next ten years is not an exercise about which anyone wants to think. But we must. Governmental agencies (GOV) and non-governmental organizations (NGOs) that manage the business and administration of health care in the United States and globally face the formidable challenge of dealing with the economics, financing, access, availability, and delivery of 73

health care. They also must consider the quality of personal and public health and wellness care we all seek and expect. These agencies' toolkits vary widely, including ever-increasing technologies and frameworks, programs, and systems. But the vast array of demographics (e.g., geographic, age, socioeconomics, cultural) and epidemiologic factors (e.g., etiologies of diseases, risk factors, patterns, frequencies) related to health and wellness is overwhelming. Without expanding and "disruptive" technologies, these dedicated and hardworking GOV and NGO agencies will continue to lose ground to the ever-increasing challenges they face. Enter artificial intelligence! Certainly not a "turnkey solution" to the immense task of managing the business of health care, but AI ". . .promises to be truly life-changing, i.e., disruptive. From personal care to hospital care to clinical research, drug development, and insuring care, AI applications are revolutionizing how the health sector works to reduce spending and improve patient outcomes" [5]. We are entering a window in time where the landscape of health will start to be redefined, from treatment to cures, to prevention. This transformation will encompass sweeping changes in the pools of data we rely on; in the functional building blocks of the "work" of healthcare, such as doctors, hospitals, and emergency rooms; and in the economic undercurrents and data streams that will reset how the marketplace rewards value [6]. Section 2 of this book, "AI Applications in Health and Wellness," will deal with the business and administrative aspects of health care (Chapter 4) as well as the associated clinical diagnostic and treatment technologies (Chapters 5–8). Both the business and clinical categories are "vertically integrated," one of two terms we will be using multiple times, especially in Chapter 4, "AI Applications in the Business and Administration of Health Care." A generic definition of "integration," or "to integrate," is simply "to form, coordinate, or blend into a functioning or unified whole" [7]. Vertical integration is more a business term than a health care term, but as established above, health care is indeed a business. Thus, we must examine it as such, that is, a business providing health and wellness services. Therefore, vertical integration is the coordination of (health) services among operating units (i.e., providers, facilities [hospitals, etc.], insurers) that are at different stages or levels of the process of delivery of patient services [8]. By addressing the different stages and levels, vertically integrated systems intend to address the following:

• Efficiency goals
  • manage global capitation;
  • form large patient and provider pools to diversify risk;
  • reduce the cost of payer contracting.
• Access goals
  • offer a seamless continuum of care;
  • respond to state legislation.
• Quality goals

For clarity purposes, by definition, horizontal integration means the coordination of activities across operating units that are at the same stage or level in the process of delivering

services [8]. Our discussion on the business and administration of health care will focus on the broader, more comprehensive issues of vertical integration. The second term and critical concept we will be discussing regarding the business and administration of health care is "interoperability." If you recall, in the first paragraph of the Preface of this book, I mention "interoperability" as "a word you'll be hearing a lot more about." Well, this is the beginning. It can be defined as the ability of different information systems, devices, or applications to connect, in a coordinated manner, within and across organizational boundaries to access, exchange, and cooperatively use data amongst stakeholders to optimize the health of individuals and populations [9]. In other words, interoperability is the practical implementation and application of vertical integration within the health care industry. There are varying degrees of interoperability. Each demonstrates the types of information exchange in which organizations may engage. There are 4 types of exchange [9]:

1. "Foundational" interoperability establishes the inter-connectivity requirements needed for one system or application to share data with and receive data from another;
2. "Structural" interoperability defines the syntax (the arrangement of words and phrases to create well-formed sentences in the language) of the data exchange, where there is a uniform movement of healthcare data from one system to another so that the clinical or operational purpose and meaning of the data is preserved;
3. "Semantic" interoperability is the ability of 2 or more systems to exchange information and to interpret and use that information. This ability supports the electronic exchange of patient data and information among authorized parties via potentially disparate health information and technology systems and products; and
4. "Organizational" interoperability facilitates the secure, seamless, and timely communication and use of data within and between organizations and individuals.

It becomes apparent from the definitions of vertical integration and interoperability that their dual roles are essential in organizational health care and its maximally efficient communications, coordination, delivery, and ultimate quality patient outcomes.

Table Intro 2.1 Health care systems and concepts.

• Big data analytics
• Blockchain
• Public Health
• Health information and records: Electronic Health Record (EHR)
• Population Health
• Precision Health
• Personalized Health
• Predictive Analytics
• Descriptive Analytics
• Prescriptive Analytics
• Preventive Health

The practical implementation of the concepts and tools needed to realize these efficiencies (Table Intro 2.1) lies in the systems AI is making possible through its digital manipulative capabilities of massive amounts of data (i.e., big data analytics). The analysis and examples of the applications of these concepts and tools become the basis for our discussions in the following Chapters 4–7. You will recall the in-depth descriptions of big data analytics and blockchain from Section 1, Chapter 3. They are included in Chapter 3, "The Science and Technologies of Artificial Intelligence (AI)," versus this Chapter 4, "AI Applications in the Business and Administration of Health Care," because they are more AI-related technologies than actual health care technologies. But their vital and fundamental relationship to health care (in numerous systems) will become evident in this Section. Hopefully, the information in the Chapter 3 descriptions will help you better understand the applications of big data analytics and blockchain technologies directly to the business and administration of health care. If at any time these applications and/or benefits are not apparent to you, a quick review of the Chapter 3 explanations may be helpful. Because the AI tools and health care concepts we will be discussing are vertically integrated and interoperable, by definition, many of them will apply to multiple systems to be addressed. As such, it becomes impossible to explain some concepts before they are mentioned relative to another system. What we will do for clarity and continuity in this Chapter 4 is start with a generic discussion of "AI applications in government agencies (GOVs), nongovernmental organizations (NGOs), and third-party health insurers." Then we will follow with numbered discussions [labeled TEXT #1 through TEXT #10] of each of the major AI health care systems and concepts to be covered in the Chapter. Wherever the application of an idea or system is mentioned relative to another idea or system discussed (i.e., interoperability), it is accompanied by its respective [TEXT #, Page #]. If needed for better contextual understanding, at any point, you can read or review the individual concept or system referenced. Finally, the goal of Section 2 of this book is to present as many as possible of AI's applications and influences on the most relevant business and clinical issues in health and wellness care. Because the number of current and evolving applications has become overwhelming, I pledge to try my best to cover the most significant developments in each category. As I mentioned in the introduction to Section 1, what we will be doing in this Chapter 4 is highlighting for discussion the "top 3" AI technologies and/or programs (listed as "Primary [Topics] 1 through 3") in each of the health care categories to be covered. Then we will include a listing (with footnote citations) of 3 additional related AI technologies (listed as "Additional [Topics] 4 through 6") in current use or development, no less significant but impossible to cover in full. The reader will be able to select any of the 3 additional citations that they find interesting or that may have direct relevance to them personally or professionally, for their further referenced reading and research. And finally, every 2–3 years, we will present subsequent editions of this book and submit new "Primary" and "Additional" listings of updated AI technologies for your review and reference.

References [1] World Health Organization. About the World Health Organization. Constitution. ,http://www.who.int/ governance/eb/constitution/en/. [accessed 27.12.17]. [2] Well-being concepts. National Center for Chronic Disease Prevention and Health Promotion, Division of Population Health; 2018. [3] Winslow CEA. Introduction to public health. Office of Public Health Scientific Services, Center for Surveillance, Epidemiology, and Laboratory Services, Division of Scientific Education and Professional Development; 2018. [4] Sherman E. U.S. Health Care Costs Skyrocketed to $3.65 Trillion in 2018. Fortune Magazine; 2019. [5] Intel AI Insights Team. AI and healthcare: a giant opportunity. Forbes; 2019. [6] Laraski O. Technology alone won’t save healthcare, but it will redefine it. Forbes Insights; 2019. [7] Integrate. Merriam-Webster dictionary. 2020. [8] Pan American Health Organization. Integrated delivery networks: concepts, policy options, and road map for implementation in the Americas. 2008. [9] National Academy of Medicine (NAM). Procuring interoperability: achieving high-quality, connected, and person-centered care. Washington, DC: 2018.

4 AI applications in the business and administration of health care

4.1 AI applications in government agencies (GOVs), non-governmental organizations (NGOs) and third-party health insurers

In 2013 the Governor of Ohio, John Kasich, said, "For those that live in the shadows of life, those who are the least among us, I will not accept the fact that the most vulnerable in our state should be ignored. We can help them" [1]. Such is the belief that government (GOV) agencies (Table 4–1 [2]) and non-governmental organizations (NGOs) (Table 4–2 [3]) abide by. Notwithstanding that laudable goal, the administrative complexity of the United States health care system is one reason why the U.S. spends double the amount per capita on health care compared with other high-income countries [4]. GOVs and NGOs are now using AI to automate decision-making and parts of their supply chains, and to refine compliance functions and financial and tax reporting. As AI affects more of the healthcare system, consumers may not even recognize these influences because most happen behind the scenes [5]. Systemwide health care costs in 2017 set the expense at $1.1 trillion. Of that sum, $504 billion is excess from supplier time wastefulness, waste, fraud, and misuse [6]. The second greatest classification of loss ($190 billion), as indicated by the NIH Institute of Medicine, is credited to excessive regulatory costs (Fig. 4–1). The utilization of artificial intelligence in expanding efficiencies and in distinguishing, monitoring, and correcting misuse is basic to addressing this monetary crisis in health care services. AI has effectively identified a few areas for reform, ranging from the design of treatment plans to adjusting repetitive occupations to prescription and medication creation and management [7]. Forbes Magazine predicts that AI for healthcare IT applications will exceed $1.7 billion by 2019. They also predict that introducing AI into healthcare workflows will result in a 10–15% productivity gain over the next 2–3 years [8]. This increased productivity will benefit health care delivery administrative and cost efficiencies. Examples of these GOV, NGO, and third-party AI applications are documented in the following literature reviews.

4.1.1 Primary AI applications GOVs, NGOs, and third-party health insurers (1, 2, 3)

1. Transferring time-consuming human tasks to machines [9]: The "iron triangle" in health care defines 3 interlocking elements: access, affordability, and effectiveness. AI applications can provide conventional strategies for cutting costs, improving treatment,

Table 4–1 United States governmental health agencies.

• ACL Administration for Community Living (https://acl.gov)
• Administration for Children and Families (www.acf.hhs.gov)
• Agency for Healthcare Research and Quality (www.ahrq.gov)
• Agency for Toxic Substances and Disease Registry (www.atsdr.cdc.gov)
• Centers for Disease Control and Prevention (www.cdc.gov)
• Centers for Medicare and Medicaid Services (www.cms.hhs.gov)
• Food and Drug Administration (www.fda.gov)
• Health Care Finance Administration (www.ncbi.nlm.nih.gov/books/NBK218421/)
• Health Resources and Services Administration (www.hrsa.gov)
• Home of the Office of Disease Prevention and Health Promotion (https://health.gov)
• Indian Health Service (www.ihs.gov)
• Inspector General Office, Health and Human Services Department (https://oig.hhs.gov)
• Institute of Medicine website (www.iom.edu)
• Maternal and Child Health Bureau (https://mchb.hrsa.gov)
• Mine Safety and Health Administration (MSHA) (www.msha.gov)
• National Center for Health Statistics CDC (www.cdc.gov/nchs/index.htm)
• National Institutes of Health (www.nih.gov)
• National Library of Medicine (https://www.nlm.nih.gov)
• National Institute of Nursing Research (www.ninr.nih.gov)
• Occupational Safety and Health Administration (https://www.osha.gov)
• Office of Minority Health (OMH) (www.minorityhealth.hhs.gov)
• Substance Abuse and Mental Health Services Administration (www.samhsa.gov)
• The National Institute for Occupational Safety and Health (NIOSH) CDC (www.cdc.gov/niosh/index/htm)
• U.S. Public Health Service Home (www.usphs.gov)
• US Department of Health and Human Services (www.hhs.gov)
• US Environmental Protection Agency (www.epa.gov)
• VA.gov (www.va.gov)

Data from Federal Registry, 2019.

and improving availability. "What we see now is a path to unlocking that triangle so you can improve one side without breaking the other," says Kaveh Safavi, head of Accenture's global health practice. Improving health care's cost-structure issue lies in transferring time-consuming human tasks to machines while empowering patients, where possible, to self-manage their needs. Such an approach can decrease the amount of human time and effort required to improve patients' health. AI could help address some 20% of unmet clinical demand, according to Accenture.
2. Performance-based contracting [10]: When AI serves various stakeholders (providers, insurers/payers, pharma companies, etc.), more innovations are accomplished in healthcare. Challenges in implementing precision medicine [Text #10, page 101] have been addressed through a process called "performance-based contracting." This is a transactional methodology that attempts to align payments with real-world clinical performance. This new methodology shares the risk across a range of stakeholders. Financial incentives for real-world outcomes give stakeholders a share of financial risk and

Table 4–2

Nongovernmental organizations (NGOs) working in global health.

International organizations
• The Global Fund to Fight AIDS, Tuberculosis and Malaria
• Joint United Nations Programme on HIV/AIDS (UNAIDS)
• World Bank
• World Health Organization (WHO)
Scientific organizations
• American Association for the Advancement of Science (AAAS)
• American Society for Microbiology (ASM)
• American Society of Tropical Medicine and Hygiene (ASTMH)
• American Thoracic Society (ATS)
• Coalition for Epidemic Preparedness Innovations (CEPI)
• Consortium of Universities for Global Health (CUGH)
• CRDF Global
• The Global Health Network
• Infectious Diseases Society of America (IDSA)
• International Society for Infectious Diseases (ISID)
• International Diabetes Federation (IDF)
• Planetary Health Alliance
Advocacy/policy organizations
• Center for Strategic and International Studies Global Health Policy Center
• GBCHealth
• The Earth Institute
• Global Alliance for Chronic Diseases (GACD)
• Global Health Council
• Global Health Technologies Coalition (GHTC)
• Kaiser Family Foundation (KFF) U.S. Global Health Policy
• Research!America Global Health R&D Advocacy
Foundations
• Africare
• Bill and Melinda Gates Foundation
• Foundation for NIH (FNIH)
• UN Foundation (UNF)
• Wellcome Trust
Other resources
• Gapminder
• Institute for Health Metrics and Evaluation (IHME)
• Worldmapper

Data from Nongovernmental organizations (NGOs) working in global health research. Fogarty International Center at the National Institutes of Health; 2019.

rewards tied to a specified treatment. Challenges do exist with this contracting methodology, including managing uncertainties, identifying metrics, and forecasting results. The full risk remains an uncertainty, and the concept of betting on the best treatments is a game of chance. Patient prediction rates have become more than 20-fold faster with neural networks than without predictive methodologies, with preliminary accuracy levels of over 80%. By

FIGURE 4–1 Sources of waste in American health care. Over $500 billion in health care spending is excess from supplier time wastefulness, waste, fraud and misuse. From: Institute of Medicine. National Academy of Medicine. NIH. 2020.

leveraging these AI models, uncertainty diminishes by more than 50%, meaning more predictive and reliable performance-based terms. AI capabilities can serve to align value with health outcomes through such methodologies. The value of implementing performance-based agreements at scale across multiple stakeholders will extend to individual patients. More positive and more precise treatment results for patients will be the result of incentivizing health outcomes. Given the cost advantages, shared cost-reduction will occur by mitigating financial risks.
3. Fraud detection [11]: Besides being illegal, fraud is a costly problem for health care organizations and insurers. Its detection relies on computerized (rules-based) and manual reviews of medical claims. Besides being time-consuming, this process is cumbersome and unable to intervene quickly, identifying anomalies only after the incident. AI-supported data mining through neural networks, as an example, can conduct high-speed searches of Medicare claims for patterns associated with medical reimbursement fraud. It is estimated such AI systems could provide $17 billion in annual savings by improving the speed and accuracy of fraud detection.
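As a rough illustration of how machine learning can flag unusual claims for review, the sketch below runs an off-the-shelf anomaly detector (scikit-learn's IsolationForest, assumed to be installed) over made-up claims data. It is not an actual Medicare fraud model; the features, values, and contamination rate are invented for the example, and real systems combine many more features with rules-based and manual review.

```python
# Minimal sketch of anomaly detection on made-up claims data (not an actual
# Medicare fraud model). Assumes scikit-learn is installed; all features and
# thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Columns: billed amount ($), number of procedures on the claim
normal_claims = np.column_stack([
    rng.normal(200, 50, size=500),      # typical billed amounts
    rng.poisson(2, size=500),           # typical procedure counts
])
suspect_claims = np.array([[5000, 1], [150, 40]])   # unusually high amount / count

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_claims)

# predict() returns -1 for anomalies (candidates for human review), 1 for inliers
print(model.predict(suspect_claims))        # expected: [-1 -1]
print(model.predict(normal_claims[:3]))     # mostly 1s
```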

4.1.2 Additional AI applications to GOVs, NGOs, and third-party health insurers (4, 5, 6)

4. "IBM Watson Care" [12]: This Watson program helps organizations unlock and integrate the full breadth of information from multiple systems and care providers, automate care management workflows, and scale to meet the demands of growing populations under management.

5. "Costly problem of dosage errors" [13]: A mathematical formula developed with the help of AI has correctly determined the dose of immunosuppressant drugs to administer to organ transplant patients.
6. "AI for Humanitarian Action" [14]: An NGO (Microsoft) is sponsoring a $40 million, five-year AI initiative focused on 4 priorities: (1) helping the world recover from disasters; (2) addressing the needs of children; (3) protecting refugees and displaced people; and (4) promoting respect for human rights.

4.2 Big data analytics in health care [Text #1]

A comprehensive definition and description of big data analytics as a concept and tool of AI are presented in Chapter 3 (page 29). In this chapter, we will focus on the applications of big data analytics in health care. Perhaps more than with any other concept or technology discussed in the forthcoming chapters, you will find a considerable amount of vertical integration and interoperability associated with big data analytics. Seventy-seven percent of healthcare executives reported that their organizations are accelerating investments in big data analytics and artificial intelligence (AI) [15]. Big data (AI) technology enables healthcare organizations to analyze an immense volume, variety, and velocity of data across a wide range of healthcare networks. This supports evidence-based decision-making and actionable intervention. The various analytical techniques include systems such as descriptive analytics [Text #6, page 98]; diagnostic analytics [Text #7, page 99]; mining/predictive analytics [Text #8, page 99]; and prescriptive analytics [Text #9, page 100]. These methodologies are ideal for analyzing a large proportion of text-based (structured) health documents and other unstructured clinical data (e.g., physicians' written notes and prescriptions, medical imaging) [16]. The main challenge in this field is the enormous and continuing increase in the data generated within the health care domain. Big data analytical tools save lives, develop innovative techniques to diagnose and treat various diseases, and provide innovative healthcare services by extracting knowledge from patients' care data. The following are active examples of this big data analytics effect.

4.2.1 Primary AI literature reviews of big data analytics (1, 2, 3)
1. Clinical Decision Support Services (CDSS) [17]: Big Data technologies provide new, powerful instruments that gather and jointly analyze large volumes of data collected for different purposes, including clinical care, administration, and research. This makes the "Learning Healthcare System Cycle" possible, where healthcare practice and research work together. Providing fast access to the same set of data available for research purposes enables clinical decision support. The proper design of dashboard-based tools enables precision medicine [Text #10, page 101] decision-making and case-based reasoning.


The concept of Clinical Decision Support Services (CDSSs) embedded in Big Data represents a new way to support clinical diagnostics, therapeutic interventions, and research. This information, properly organized and displayed, reveals clinical patterns not previously considered. New reasoning cycles are generated, and explanatory assumptions can be formed and evaluated. CDSSs need to properly model and update the different aspects of clinical care and their relationships. Models of clinical guidelines and care pathways can be very effective in comparing the analytic results with expected behaviors. This may allow effective revision of routinely collected data to gain new insights about patients' outcomes and to explain clinical patterns. These actions are the essence of a Learning Health Care System.
2. Big data and cost of health care [18]: As the aging population increases the burden on the healthcare system, administrators and politicians assess the most effective ways to manage the country's healthcare network. Meanwhile, the health care provider pool continues to decrease. Healthcare administrators are looking seriously at AI as a way to improve the efficiency of care delivery. Eventually, big data technology supported by AI will increase the range and effectiveness of providers and help mitigate the global shortage of medical professionals. Big data and AI can help with the growing care provider shortage in many ways [19]. Healthcare providers will have to leverage big data technology to make the biggest impact. These technologies could support the frameworks for medical technologies ranging from robotic surgical assistants to highly advanced expert diagnostic systems. Healthcare providers will continue to provide care to underserved populations by using big data technology to make treatment more accessible. Big data technologies will also support care providers in making improvements in the delivery of healthcare services [20]. Big data systems, in combination with AI technology, can enhance the ability to analyze tumors and make accurate diagnoses. Medical researchers and the healthcare community are also enthusiastic about AI's potential to serve as a resource for quickly restoring functionality for injured and sick patients. Yet another valuable potential benefit of big data technology is decreasing the cost of caregiving. The management consulting firm Accenture forecasts that care providers could use AI to slash operational costs by $150 billion per year by 2026 [18]. Using AI technology, healthcare organizations will be able to find new opportunities for analyzing the big data collected from patients' Internet-connected medical devices as well as healthcare provider information networks. This approach will allow organizations to cut costs, improve community wellness, and lower the cost of providing care.
3. Big data for critical care [21]: Data dependency is paramount in the critical care department (CCD) in any of its forms: intensive care unit (ICU), pediatric intensive care unit (PICU), neonatal intensive care unit (NICU), or surgical intensive care unit (SICU). This dependency involves very practical implications for medical decision support systems (MDSS) at the point of care [22]. ICUs care for acutely ill patients. Many of these, and particularly SICU patients, are technologically dependent on life-sustaining devices,


including infusion pumps, mechanical ventilators, catheters, and so on. Beyond treatment alone, prognosis is extremely important and is assisted by combining different data sources in critical care. Medical device connectivity is essential for providing a clinical decision support framework in the different types of ICU. While EHRs [Text #3, page 88] offer enormous workflow, documentation, and charting benefits, they are no stronger than the data they convey. Providers' care can be augmented by automated and validated data collection through a seamless form of medical device connectivity and interoperability, supported both inside and outside the clinical environment and capable of following the patient throughout the care process. Although 80% of medical data is unstructured (physician notes, medical correspondence, individual EMRs, lab and imaging systems, CRM systems, finance and claims), it is all clinically relevant [23]. These data can be coordinated across multiple locations for better treatment and prognosis. Big Data technology can capture all of a patient's information to provide a more complete view of the medical data and easier access to this valued data. It can be categorized into clinical and advanced analytics, providing critical building blocks for sustainable, more accessible healthcare systems. Such Big Data Analytics methods are invaluable in an extremely data-intensive environment such as the ICU.

4.2.2 Additional AI literature reviews of big data analytics (4, 5, 6)
4. Big data and reinforcement learning (RL) [24]: Big data and RL provide unique opportunities for optimizing treatments in healthcare.
5. Big data and IoT in a "smart ambulance" system [25]: A novel approach uses a smart ambulance equipped with IoT (Internet of Things) and big data technology to disseminate information to a main center or hospital [25].
6. The promise of big data and AI in clinical medicine and biomedical research [26]: The use of big data and AI to help guide clinical decisions is a central aspect of precision medicine [Text #10, page 101] initiatives.

4.3 Blockchain in health care [Text #2]
The description of blockchain technology in Chapter 3 (page 59) stated that ". . .the intersection and convergence of blockchain technology with AI and big data [Text #1, page 83] may prove to be one of the strongest (and most disruptive) advances in the future of health care." Forty percent of health executives see blockchain as one of their top 5 priorities [27]. Blockchain, in combination with AI and big data, has emerged as a new way to enable secure and efficient transactions and to help health care organizations utilize emerging technologies. With the addition of AI technologies including (but not limited to) computer vision, natural language processing, machine learning, and speech recognition, blockchain technology delivers high accuracy in information treatment and workforce augmentation [28].


Computer vision is used in coordination with natural language processing to convert the millions of health records and notes transmitted as images into digital documents. A machine learning feedback loop continuously retrains the system on incorrectly transcribed information identified through manual intervention, improving accuracy. AI's interoperability with a health care blockchain can then combine distributed records with cognitive technologies in sophisticated and powerful ways, targeting health insights while maintaining continuous secured, encrypted access. AI's ability to identify patterns hidden (to the naked eye) in the semi-structured datasets on a blockchain provides meaningful connections among thousands of disparate entities. Permissioned distribution of health information through blockchain could make a wide array of executable datasets available to these highly specialized, trusted, narrow AI agents. The ultimate goal is not just to aggregate and analyze data, but to improve the care delivered to patients. The following reviews and programs illustrate the potential of AI and blockchain technology in health care.
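The feedback loop just described can be illustrated with an incrementally trained text classifier that is updated whenever reviewers correct a mis-transcribed or mis-coded record. This is only a sketch of the general pattern; the document classes, vectorizer, and model choice are assumptions, not the specific system referenced above.

```python
# Sketch of a human-in-the-loop correction cycle: a streaming text classifier
# is updated whenever reviewers correct a mis-transcribed or mis-coded record.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

LABELS = ["lab_result", "physician_note", "prescription"]  # illustrative classes
vectorizer = HashingVectorizer(n_features=2**16)
classifier = SGDClassifier(loss="log_loss", random_state=0)

def update(texts, labels):
    """Incrementally train on newly corrected examples."""
    X = vectorizer.transform(texts)
    classifier.partial_fit(X, labels, classes=LABELS)

# Initial batch of labeled documents.
update(["hemoglobin a1c 7.2%", "patient reports chest pain", "metformin 500 mg bid"],
       ["lab_result", "physician_note", "prescription"])

# A reviewer corrects a document the system mislabeled; feed it back in.
update(["lisinopril 10 mg daily"], ["prescription"])

print(classifier.predict(vectorizer.transform(["amoxicillin 250 mg tid"])))
```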

4.3.1 Primary AI literature reviews of blockchain (1, 2, 3)
1. Health care industry analysis of blockchain from 2018 to 2025 [29]: The health care system is woefully inadequate at sharing information between hospitals, physicians, institutions, and other health providers. This lack of interoperability makes transmission, retrieval, and analysis of data very difficult, producing concerns regarding security, privacy, storage, and exchange of data. These shortcomings in the system can be addressed effectively by blockchain, in conjunction with AI. First, blockchain allows the complete elimination of third-party intermediaries. Second, it streamlines operational processes and provides significant cost containment through a more transparent procedure. Third, blockchain can also allow for a shift to a more value-based healthcare system versus the existing fee-based healthcare systems. It can use natural language processing (NLP) to improve patient involvement and create opportunities for more consumer-centric product segments and revenue streams. Blockchain can save the healthcare industry up to $100 billion per year by 2025 in data breach-related costs, IT costs, operations costs, support function and personnel costs, counterfeit-related frauds, and insurance frauds. Provider-associated fraud within health insurance is also expected to see up to $10 billion in annual cost reductions. AI and blockchain used for healthcare data exchange will contribute the largest share, reaching a value of $1.89 billion by 2025. The use of blockchain has the potential to solve the widespread problems in healthcare information systems related to interoperability and non-standardization in the industry. Such shifts to blockchain-based solutions will require significant investments and efforts, including seamless integration with the current infrastructure. There may also be provider and patient resistance to blockchain solutions. Also, industry support will be


necessary to bring standardization and promote interoperability between different networks developed by different enterprises and run on different consensus protocols.
2. A new project, Pharmeum (2019) [30]: The Pharmeum project is a system heavily reliant on blockchain and AI technology that allows for coexistence between doctors, pharmacies, regulators, and patients. Once achieved, it can help with predictive health [Text #8, page 99] in many ways, including analyzing patient health history and evolving patterns to provide definitive auditability of prescriptions and medication flows. In transitioning the current digital-analog hybrid system into a fully digital, AI- and blockchain-based system, Pharmeum will help doctors reduce wasted effort and the potential for incorrect diagnosis, treatment, and prescription assignment. Doctors' prescription histories are available at will in the system, and patients' health history can be monitored, preventing incorrect medication or dosage instructions by using tablets and laptops for medical record collection and dissemination through a medication pad. Pharmeum also can resolve pharmacy inefficiencies with blockchain by writing review periods into prescription smart contracts and minimizing patient waiting times for medication. Pharmeum will include an AI function that will let pharmacies reach diagnoses earlier in cases of terminal illness through access to patient history, family history, symptom information, and other vital data. According to the British Medical Journal, 70% of medication-related treatment errors come from the prescribing process alone [31]. Eighty percent of these could be reduced or eliminated with an efficient, digitized AI prescription system. The technologies included in the Pharmeum blockchain include zero-knowledge proofs, prescription tokens, medicine asset tokens, and Pharmeum tokens. Personally Identifiable Information (PII) will be stored off-chain, while non-PII will be stored on-chain for use with AI and other functions.
3. Centers for Disease Control and Prevention (CDC) [32]: The CDC's Office of Technology and Innovation and the National Center for Health Statistics worked with IBM to create a pilot program for an EHR blockchain. The digital ledger nature of blockchain makes it ideal to record transactions for data owners, healthcare providers, the CDC, and other agencies. Thus, all of the players involved, including users and auditors, can be confident that the data is secure and accurate for transferring. It also allows them to determine whether there has been interference with the data. The National Ambulatory Medical Care Survey, a service that collects information about ambulatory care services in hospital emergency rooms and doctors' offices, submits data to the CDC in the pilot program. "There are a number of edits that we conduct on the data. There's a rigorous process on the National Center for Health Statistics (NCHS) side to review the data, then store the data, and then make a public use file. We'll be able to see exactly as the data moves through the lifecycle, who has access to it, and at which point. There are no limitations on the frequency of the data received or the size of it." [32] The blockchain pilot program provides "complete transparency" to


the healthcare providers and CDC officials, while complying with privacy laws like the Health Insurance Portability and Accountability Act (HIPAA). The publication of immunization Clinical Decision Support (CDSi) algorithms by the CDC is another recent development. These algorithms are publicly available and can be embedded directly into patient EHRs [Text #3, page 88]. They enable adaptive immunization forecasting to tailor person-specific vaccine recommendations. They also couple with clinical, claims, social, and immunization registry data to recognize and interpret critical health factors. This ensures the right immunization is offered to the right person at the right time. This technology leveraging CDSi logic gives providers personalized prompts whenever a patient is seen, not just at annual visits [33].
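To make the "digital ledger" idea in the CDC pilot concrete, the toy sketch below hash-chains data-access events so that any later tampering invalidates the chain. It is a deliberate simplification for illustration; production health care blockchains add consensus, permissioning, and off-chain storage, and this is not the CDC/IBM pilot's actual implementation.

```python
# Toy hash-chained audit ledger: each entry commits to the previous one,
# so altering any earlier record invalidates every hash that follows.
import hashlib, json, time

class AuditLedger:
    def __init__(self):
        self.chain = [{"event": "genesis", "prev_hash": "0", "hash": "0"}]

    def add_event(self, actor, action, record_id):
        entry = {
            "timestamp": time.time(),
            "actor": actor,            # who touched the data
            "action": action,          # e.g., "submitted", "edited", "released"
            "record_id": record_id,
            "prev_hash": self.chain[-1]["hash"],
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.chain.append(entry)

    def verify(self):
        """Recompute every hash; returns False if any entry was altered."""
        for prev, entry in zip(self.chain, self.chain[1:]):
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev_hash"] != prev["hash"]:
                return False
            if entry["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest():
                return False
        return True

ledger = AuditLedger()
ledger.add_event("survey_site_42", "submitted", "NAMCS-0001")   # illustrative IDs
ledger.add_event("NCHS_reviewer", "edited", "NAMCS-0001")
print(ledger.verify())  # True until any stored entry is modified
```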

4.3.2 Additional AI literature reviews of blockchain (4, 5, 6)
4. Authenticating medical records [31]: Medical errors are the third leading cause of death in the United States. Blockchain technology is being used to facilitate the authenticity of medical records.
5. Clinical research and data monetization [34]: A major benefit of blockchain technology is moving data ownership from institutions and corporations into the hands of the people who generated the data.
6. Blockchain for diabetes health care [35]: Blockchain-based platforms exist for AI-assisted sharing of cross-institutional diabetic health care data.

4.4 Health information and records (electronic health record or EHR) [Text #3]
Record keeping is the foundation of the health care business. The introduction of the electronic health record (EHR, originally referred to as the EMR or "electronic medical record") has brought that process to new heights. EHR systems have the capacity to store data for each patient encounter, including demographic information, diagnoses, laboratory tests and results, prescriptions, radiological images, clinical notes, and more [36]. Primarily designed for improving healthcare efficiency, the secondary use of the EHR for clinical informatics applications has proven valuable in many studies [37]. Secured, encrypted data can now be stored and processed electronically at all levels of data processing, from private office desktop computers to the most sophisticated database servers. Through IoT, blockchain [Text #2, page 85] technologies, and the "cloud," AI can now be applied to all of these digital electronic systems. In 2009, 12.2% of hospitals and doctors were using EHR systems. By 2017, the percentage had grown to 75.5% [38]. Then, with the inclusion of electronic requirements for reimbursement under the "Administrative Simplification Compliance Act (ASCA)" [39] from CMS (Centers for Medicare and Medicaid Services) and HIPAA (Health Insurance Portability and Accountability Act), by 2019 virtually 100% of all health providers had implemented EHR systems. Beyond the administrative issues associated with EHRs, the dynamic clinical patient


information captured is of far more value. It provides opportunities for research, risk prediction algorithms, and the advancement of population health [Text #4, page 91] and precision medicine [Text #10, page 101] [40]. Notwithstanding the power of AI technology and its relationship to the EHR, the risks and dangers regarding the security of the highly personal and sensitive information in the health records process must also be recognized. Among the EHR systems highlighted below, you will see that security is the major issue with most, and blockchain technology is often the common denominator in providing such protection.

4.4.1 Primary AI literature reviews of health information and records (EHR) (1, 2, 3)
1. Diagnostic and/or predictive algorithms [41]: Leveraging digital health data through AI enables computer information systems to understand and improve care. Routinely collected patient data is approaching a volume and complexity never seen before. Using big data analytics [Text #1, page 83] to sort this data into predictive statistical models provides major benefits in precision medicine [Text #10, page 101], not only for patient safety and quality but also in reducing healthcare costs [42,43]. A problem in this approach to data processing is that 80% of the effort in an analytic model is preprocessing, merging, customizing, and cleaning datasets, not analyzing them for insights. This limits the scalability (expansion) of predictive models [44,45]. Also, the thousands of potential predictor variables in the EHR (e.g., free-text notes from doctors, nurses, and other providers) are often excluded. This exclusion reduces the potential accuracy of the data and may produce imprecise, false-positive predictions [46]. Deep learning and artificial neural networks (ANNs) address these challenges to effectively unlock the information in the EHR. Natural language processing (NLP) is also invaluable in sequencing predictions and resolving mixed data settings. Given these strengths, AI systems are most capable of handling large volumes of relatively "messy" data, including errors in labels and large numbers of input variables [47,48]. These neural networks are able to learn representations of key factors and interactions from the data. Deep learning approaches can incorporate the entire EHR, including free-text notes, and produce predictions for a wide range of clinical problems and outcomes that outperform state-of-the-art traditional predictive models. By mapping EHR data into a highly organized set of structured predictor variables and then feeding them into a statistical model, the system can simultaneously coordinate inputs and predict diagnostics and medical events through direct machine learning [49].
2. Data-mining medical records: According to data from the U.S. Department of Health and Human Services, there is progress in the value-based healthcare delivery system (the healthcare delivery model in which providers, including hospitals and physicians, are paid based on patient health outcomes [50]) in the U.S. This progress runs almost parallel to the significant implementation rate of electronic health records (EHR) [51].


As the volume of patient data increases, finding tools to extract insights could grow more challenging and complex. Researchers are exploring how AI can extract useful information from massive and complex medical data sets [52]. Certain types of AI applications are emerging to improve analyses of medical records and are being implemented in the healthcare market. These AI applications include:
• Diagnostic Analytics [Text #7, page 99]: Defined as "a form of advanced analytics which examines data or content" to determine why a health outcome happened. More time and research will be needed to determine whether diagnostic analytics AI applications gain greater traction in the healthcare industry.
• Predictive Analytics [Text #8, page 99]: Companies and healthcare professionals use machine learning to analyze patient data and determine possible patient outcomes, such as the likelihood of a worsening or improving health condition, or the chances of inheriting an illness in an individual's family. Predictive analytics platforms appear to deliver trends that are proving meaningful to healthcare systems and investors in the healthcare space.
• Prescriptive Analytics [Text #9, page 100]: An example of prescriptive analytics is when research firms develop machine learning algorithms to perform comprehensive analyses of patient data to improve the quality of patient management, such as handling patient cases or coordinating the flow of tasks, such as ordering tests.
3. EHR integration and interoperability: The Office of the National Coordinator for Health Information Technology (ONC) defines interoperability as "the ability of a system to exchange electronic health information with and use electronic health information from other systems without special effort on the part of the user." [53] Users must be able to find and use the information, whether sending or receiving, as well as send information to, and receive information from, third-party systems (independent IT vendors). In practical terms, integration is having automatic access (versus manual entry) to clinical data in the EHR from sources within and outside the health system, and being able to use that information when treating a patient. Interoperability is anticipated to be a key enabler of population-based alternative payment models, delivery reforms, and improved performance measurement. However, integrating third-party information involves more than finding, sending, and receiving. Only the more advanced EHR systems support integration, which is why there has been slower progress toward integration [54]. EHRs face 2 main challenges to integration. First, the technical infrastructure must make relevant information available through the user interface. APIs will be able to integrate relevant information when individual screens are accessed, or provide tabs to link to third-party content for worklists with pertinent details. Second, administrative challenges exist regarding willingness among key players (health systems, insurers, and vendors) to make integration happen. EHR integration will require the healthcare industry to establish standards and prioritize functional integration. Again, APIs can enable platforms that aggregate data across multiple providers or vendors. This could ultimately help facilitate a widely adopted, fully integrated digital healthcare ecosystem. Further integration can


occur as publicly available “open” APIs allow users to access data with few restrictions. The U.S. Department of Health and Human Services will likely create a definition of open APIs in healthcare that will include openly published specifications [55].
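As an illustration of the "open API" integration described above, the snippet below retrieves a patient resource over a FHIR-style REST interface (the FHIR standard is referenced in the list that follows). The base URL and patient identifier are placeholders, not real endpoints; an actual integration would use the vendor's endpoint, authentication, and authorization.

```python
# Sketch of pulling a patient record through a FHIR-style open API.
# BASE_URL and PATIENT_ID are placeholders, not a real server or record.
import requests

BASE_URL = "https://example-ehr.org/fhir"   # hypothetical FHIR server
PATIENT_ID = "12345"                        # hypothetical identifier

resp = requests.get(
    f"{BASE_URL}/Patient/{PATIENT_ID}",
    headers={"Accept": "application/fhir+json"},
    timeout=10,
)
resp.raise_for_status()
patient = resp.json()

# FHIR Patient resources carry demographics in standardized fields.
name = patient.get("name", [{}])[0]
print(name.get("family"), patient.get("birthDate"), patient.get("gender"))
```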

4.4.2 Additional AI literature reviews of health information and records (EHR) (4, 5, 6)
4. Data extraction from faxes, with clinical notes entered into the EHR automatically [56];
5. Recording of the patient/doctor conversation incorporated into the EHR [57];
6. An innovative data pipeline using raw EHR data along with delivering FHIR (Fast Healthcare Interoperability Resources) standard output data [58].

4.5 Population health [Text #4]
Broadly defined, population health is "the health outcomes of a group of individuals, including the distribution of such outcomes within the group" [59]. It is a health research and organizational concept that has advanced enormously with expanding applications of AI. Because of its intimate association with AI and its vertical integration with so many organizational, business, administrative, and clinical health care issues, its elements must be understood individually as well as in their associations with the other health care topics discussed in this chapter. Population health comprises 3 main components: health outcomes, health determinants, and policies [60]. Population health outcomes are the product of multiple "inputs" or determinants of health, including policies, clinical care, public health [Text #12, page 111], genetics, behaviors (e.g., smoking, diet, and treatment adherence), social factors (e.g., employment, education, and poverty), environmental factors (e.g., occupational, food, and water safety), and the distribution of disparities in the population. Thus, population health can be thought of as the science of mathematically analyzing (with AI) the inputs and outputs of the overall health and well-being of a population and using this knowledge to produce desirable population outcomes. A population's health can be analyzed at various geographic levels (e.g., countries, states, counties, or cities) as well as by health disparities based on race or ethnicity, income level, or education level [61]. Population health, in more practical terms, is the influence of social and economic forces combined with biological and environmental factors that shape the health of entire populations. The "affecting influences" in a defined population group are referred to as "determinants" or "independent variables," and the outcomes as "dependent variables." These variables contribute to policies and interventions that can benefit the population. All of these demographic and epidemiological variables occur in small sample populations (e.g., a nursing home) and in large samples (e.g., a country). Fig. 4–2 presents an example of a small population (we'll use a nursing home as our example) where we measure some key demographic and epidemiological factors (independent variables) among the


residents, including age, companionship, physical health, and cognitive memory. AI algorithms (regression analysis, Bayesian probabilities, and inference logic) are used to analyze the statistical results of these independent variables against one another as well as against other potential dependent variables (e.g., health and wellness, risks of aging, etc.); associations and outcomes result (Fig. 4–3). In this small population (nursing home) example, greater age correlated positively with greater risk of falls; less companionship correlated with reduced health; reduced physical health correlated with reduced companionship; and greater cognitive memory correlated positively with increased companionship. These positive and negative correlations allowed appropriate professionals and caregivers to introduce corrective interventions, e.g., more focused physical therapy and the promotion of social interactions and companionship for certain patient cohorts. These evidence-based interventions represent applied population health (Fig. 4–4). Similar to the example above for a small sample, the same approach can be (and is being) used at a global level. Fig. 4–5 illustrates the population health concept expanded to analyze the health of an entire country's population (e.g., Country X).

FIGURE 4–2 Population health (independent variables small scale). Example of a small population model where demographic and epidemiological factors (independent variables) are measured.

FIGURE 4–3 Population health (dependent variables small scale). AI algorithms (regression analysis, Bayesian probabilities and inference logic) analyze the statistical results of independent variables against one another as well as against other potential dependent variables (e.g., health and wellness, risks of aging, etc.).


FIGURE 4–4 Population health (interventions small scale). AI algorithms measure positive and negative correlations between dependent and independent variables allowing appropriate professionals and caregivers the ability to introduce corrective interventions.

FIGURE 4–5 Population health (large scale). The population health concept can be used in large-scale assessments, where AI analysis of demographic and epidemiologic independent variables produces the dependent variable outcomes and the interventions, similar to small models.

Whereas the demographic and epidemiologic independent variables may differ from the small example, the AI analysis process producing the dependent variable outcomes and the intervention process remains similar. The outcomes derived from the analysis of independent against dependent variables provide a road map for interventions and policies that address the negative outcomes and reinforce the positives. In effect, the potential health of the "population," versus individual care, is enhanced.
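A minimal sketch of the small-population analysis illustrated in Figs. 4–2 through 4–4 follows, assuming a synthetic dataset with the nursing-home variables named above; the numbers and coefficients are invented for illustration and carry no clinical meaning.

```python
# Sketch of the small-population analysis described above: correlate
# independent variables (age, companionship, physical health, memory)
# with a dependent outcome (falls), using synthetic illustrative data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120  # residents
df = pd.DataFrame({
    "age": rng.normal(84, 6, n),
    "companionship": rng.normal(5, 2, n),      # hours of social contact/week
    "physical_health": rng.normal(60, 10, n),  # composite score
    "cognitive_memory": rng.normal(25, 4, n),  # screening score
})
# Synthetic outcome: falls rise with age and drop with physical health.
df["falls_per_year"] = (
    0.08 * df["age"] - 0.05 * df["physical_health"] + rng.normal(0, 0.8, n)
)

print(df.corr()["falls_per_year"])             # descriptive correlations

X = sm.add_constant(df[["age", "companionship", "physical_health", "cognitive_memory"]])
model = sm.OLS(df["falls_per_year"], X).fit()  # regression on independent variables
print(model.params)                            # signs suggest where to intervene
```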


It is worthwhile to differentiate here between the terms "population health", "public health" [Text #12, page 111] and "community health." Population health is the health outcome of a group of individuals. In a broader sense, population health includes the health outcomes themselves and the efforts to influence those outcomes. Public health tends to focus primarily on large-scale concerns or threats, such as vaccination and disease prevention, injury and illness avoidance, healthy behaviors, and minimizing outbreaks that jeopardize public health. It tends to be more focused on creating conditions in which individuals can be healthy. Community health shares many similarities with both population health and public health, but it tends to be more geographically based. Community health also addresses a broader spectrum of issues than population health or public health. Such issues include public policy influences, creating shared community resources, and conducting a more holistic approach to health and wellness [62]. One other related term is "Population Health Management (PHM)," a systematic approach whose goal is to enable all people in a defined population to maintain and improve their health. It is a practical application of population health within the medical delivery system [60]. AI applications in population health management are expanding from tabletop exercise software for identifying strengths and weaknesses in the preparation and management of medical disasters [63] to reconfiguring management protocols or strategies at the personal and/or population level. The continuing growth of AI applications in PHM is unlimited, with a projected estimate of more than $100 billion in the next 2 decades [64]. Indeed, Covid-19 (see Chapter 8) is demonstrating the significant value population health will provide in applications to active pandemic management and future related planning. Population health management is a discipline within the healthcare industry that studies and facilitates care delivery across the general population or groups of individuals. Thus, it is within this domain that AI can help reduce health care costs. Also, regulations such as the Health Insurance Portability and Accountability Act (HIPAA) and the HITECH Act have helped advance the digitization of the health care industry and incentivize the development and adoption of electronic health records (EHR) [Text #3, page 88]. However, which of the reams of medical data collected by organizations and companies will ultimately prove the most valuable assets remains to be seen. Population health management tools are currently being used to analyze health care datasets. However, countless patterns and trends are not uncovered because clinicians may not ask the right question. AI's unsupervised learning can help fill this gap with minimal human intervention by analyzing data and discovering common patterns and anomalies. AI algorithms powered by unsupervised learning can assess health records data, financial data, patient-generated data, IoT devices, and other relevant sources to automatically discover patient cohorts that share unique combinations of characteristics. Patterns derived from population health data can help health institutions engage in preventive [Text #11, page 106] and predictive care [Text #8, page 99]. These patterns can result in cost savings when costly and complex diseases are discovered and managed at early rather than late stages.
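A minimal sketch of the unsupervised cohort discovery just described follows, assuming a de-identified, numeric feature table; the features and the number of clusters are illustrative assumptions.

```python
# Sketch: discover patient cohorts with unsupervised clustering.
# Features and the number of clusters are illustrative assumptions.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Hypothetical per-patient features: age, BMI, HbA1c, annual visits, cost.
X = rng.normal(size=(1000, 5))

X_scaled = StandardScaler().fit_transform(X)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_scaled)

# Cohort sizes and centroids summarize shared characteristics per group.
print(np.bincount(kmeans.labels_))
print(kmeans.cluster_centers_.round(2))
```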


Such personalized healthcare [Text #10, page 101] may seem unattainable within the traditional healthcare model because of costs and logistics, such as paying medical professionals and scheduling appointments. Nonetheless, it can lower cost and logistical barriers and complement care delivered by healthcare providers. By reaching many patients and promoting population-based recommendations, it can integrate strategies of PHM while at the same time tailoring care to individual patients, using a model of healthcare based on an "N of 1" [65]. Population health is a moving target whose goals, once achieved, will quickly be replaced with new ones. Because of the complexity and dynamic nature of health management, population health is especially well suited to machine learning, which uncovers hidden patterns and predictive trends in large and disparate sources of data in short timespans.

4.5.1 Primary AI literature reviews of population health (1, 2, 3)
1. Health intelligence: How artificial intelligence transforms population and personalized health [66]: To present a coherent portrait of population health [67], AI methods extract and integrate health and non-health data, at different levels of detail and coordination, about populations and communities, their epidemiology, and their history of chronic disease control. Health intelligence is the critical metric of initiatives such as precision medicine [Text #10, page 101] [68]. Initiatives like the National Institutes of Health (NIH) "All of Us" Research Program [69] seek to introduce new models of patient-powered research for better health care solutions for prevention, treatment of disease, prediction of epidemics and pandemics, and improving the quality of life. Personalized healthcare [Text #10, page 101] is a system that seeks to manage disease by considering individual variability in environment, lifestyle, and genes for each person. In personalized healthcare, information technology is necessary to improve communication, collaboration, and teamwork between patients, their families, healthcare communities, and multidisciplinary care teams. AI is used to process personalized data, to elicit patient preferences, and to help patients (and families) participate in the care process. It also allows physicians to use this participation to provide high-quality, efficient, personalized care by personalizing "generic" therapy plans and connecting patients with information not typically included in their care setting. Clinicians and public health [Text #12, page 111] practitioners use personalized healthcare to deliver "personalized" therapy or interventions based on the best evidence to maintain high-quality patient care and create healthier communities. Acceptance of these technologies, especially for diagnostics in a clinical setting, along with concerns related to scalability, data integration and interoperability, security, privacy, and the ethics of accumulated data, are examples of future challenges. Notwithstanding such limitations, evolving AI tools and techniques are already providing substantial benefits through in-depth knowledge of individuals' health and prediction of population health risks, and their use in medicine and public health is likely to increase substantially in the near future.


2. Transforming diabetes care through artificial intelligence [70]: A recent study of 300,000 patients with type 2 diabetes who were started on medical treatment found that after 3 months, 31% of patients had discontinued their diabetes medications. This increased to 44% by 6 months and 58% by 1 year. Only 40% eventually restarted diabetes medications [71]. Optimal care for persons with diabetes often is hampered by the absence of real-time monitoring of the key health information necessary to make informed choices associated with intensive therapy and tight diabetes control. AI is employed to gather massive amounts of vital information to meet consumer demand in every business, including health care and, specifically, diabetic care. A 2017 survey found that 68% of mobile health app developers and publishers believe that diabetes continues to be the single most important health care field with the best market potential for digital health solutions within the near future, and that 61% see AI as the most disruptive technology shaping the digital health sector [72]. A review of the published literature documents the substantial advances in AI technology over the last 10 years and how it is helping people with diabetes and their clinicians make more informed choices. Studies are underway that represent a diverse and complex set of innovative approaches aimed at transforming diabetic care in 4 main areas: automated retinal screening, clinical decision support, predictive population risk stratification, and patient self-management tools. Today, research suggests that AI approaches in these 4 vital areas of diabetic care are rapidly transforming care. Despite challenges, this review of recently published, high-impact, and clinically relevant studies suggests that diabetes is attracting top health care AI technology companies as well as start-ups that are using innovative approaches like population health and PHM to tackle daily challenges faced by people with diabetes. Many of the applications have already received regulatory approval and are on the market today. Many more are on the way.
3. Inching toward the data-driven future of population health management [73]: The healthcare industry, data analytics, and population health management are being highlighted in 2019. Nonetheless, the industry is at an early stage of population health management. Currently, population health efforts primarily tend to focus on healthy individuals (versus the diabetic example in #2 above) in need of preventive screenings, but little on patients with chronic conditions (see Chapter 7, page 411), and not much beyond that. Population health management starts with data. Fragmented silos of data and communication must be dismantled, and a single, centralized data source is one way to do that. Storing the relevant data in one place means all participants in the system can access the same information. Thus, when providers start working with a patient, they can use a single source to see what the needs and requirements are, leading to more targeted treatments. Providers will be looking at advanced analytics technologies as new sources of data gain more clinical importance. These include [74]:
• Claims data;
• Electronic health record (EHR) data [Text #3, page 88];
• Social and community determinants of health;


• Patient-generated health data (PGHD);
• Prescription and medication adherence data.
Healthcare organizations are focusing more on the social determinants of health, like transportation, housing stability, and food security. AI is ideally suited to analyze that data to help providers identify patients' needs. AI can also power analytic health tools to help doctors make more informed treatment decisions. Analyzing and powering these tools through AI will become increasingly important as analytic health data becomes more widely used. In 2018, CMS issued its final 2019 Physician Fee Schedule and Quality Payment Program, which included plans to reimburse healthcare providers for specific remote AI patient monitoring and telehealth services. These plans suggest a healthcare system that is beginning to embrace virtual care and is looking for new ways to improve population health. The road to data-driven, advanced care delivery in the industry may take some time, but it is gradually making its way. Providers are beginning to recognize the importance of access to a central data source, and the value and power of mobile technologies like remote patient monitoring. As reimbursement and the acceptance of technology increase, as COVID-19 (see Chapter 8) continues to integrate into all areas of care, and as providers gain better access to needed data sources, there will be more adoption of data-driven devices.

4.5.2 Additional AI literature reviews of population health (4, 5, 6)
4. Machine learning in population health: opportunities and threats [75]: AI methods that are explainable (e.g., explainable AI, XAI), that respect privacy, and that make accurate causal inferences will help us take advantage of this opportunity.
5. How mobile health technology and electronic health records will change care of patients with Parkinson's disease [76]: Non-obtrusive data collection will dominate the market as these technologies interoperate with the personal EHR and other potentially health-related electronic databases such as clinical warehouses and population health analytics platforms.
6. Machine learning in radiology: applications beyond image interpretation [77]: Machine learning has the potential to solve many challenges that currently exist in radiology beyond image interpretation, including implications for population health.

4.6 Healthcare analytics (descriptive, diagnostic, predictive, prescriptive, discovery) [78] [Text #5]
The growing healthcare industry generates a large volume of useful data on patient demographics, treatment plans, payment, and insurance coverage, attracting the attention of clinicians and scientists alike. The field of Health Analytics provides tools and techniques that extract information from these complex and voluminous datasets and translate it into knowledge to assist decision-making in healthcare. Health Analytics develops insights through the efficient use of (big) data and the application of quantitative and qualitative analysis. It generates fact-based decisions used for "planning, management, measurement, and learning" purposes [78].


FIGURE 4–6 Healthcare analytics. Data mining techniques in health care analytics fall under 4 categories: (1) descriptive (i.e., exploration and discovery of information in the dataset); (2) diagnostic (i.e., why something happened); (3) predictive (i.e., prediction of upcoming events based on historical data); and (4) prescriptive (i.e., utilization of scenarios to provide decision support).

It is a multidisciplinary field that uses mathematics, statistics, predictive modeling, AI algorithms, and machine learning techniques to discover meaningful patterns and knowledge. AI applications allow health analytics to use data mining, text mining, and big data analytics [Text #1, page 83] to assist healthcare professionals in disease prediction, diagnosis, and treatment. The "Internet of Things" (IoT) serves as a platform to obtain medical and health-related data in numerous healthcare application settings. IoT is a digital technology that connects physical objects that are accessible through the Internet (see Chapter 2, page 23). These physical objects are connected using RFID (radio-frequency identification) technologies, including sensors/actuators and communication [79]. Combining health care analytics with IoT enables all the relevant data to be turned into actionable insight. IoT and analytics become an ideal combination for better healthcare solutions, resulting in an improvement in service quality and a reduction in cost [80]. According to some estimates, the application of data mining can save $450 billion each year in the U.S. healthcare system [81]. Such data mining techniques fall under 4 categories of analytics (Fig. 4–6 [82]), which provide increasing value and difficulty [83]:

1. descriptive (i.e., exploration and discovery of information in the dataset);
2. diagnostic (i.e., why something happened);
3. predictive (i.e., prediction of upcoming events based on historical data); and
4. prescriptive (i.e., utilization of scenarios to provide decision support).

Each of these health analytic types is described below followed by examples (3 primary and 3 additional) of their use in health care programs.

4.6.1 Descriptive analytics [Text #6] [84]
Descriptive analytics is the most basic type. It serves to answer questions like "what has happened?" Descriptive analytics analyzes real-time incoming and historical data to gain


insights on how to approach the future. Basic statistics and mathematical techniques are among the most frequently used methods for descriptive analytics. Health care data is often "siloed" (stored) in repositories that do not communicate easily with one another. Besides the challenge of finding resources to store the large amounts of data required for analysis, gaining access to and extracting information from these different sources can consume as much time and resources as the data analysis itself. The healthcare industry relies largely on traditional analysis that yields merely descriptive analytics, which summarizes raw data to make it easier to understand but requires humans to interpret past behavior and consider how it influences future outcomes. Machine learning addresses these challenges in descriptive analytics.
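A minimal example of the descriptive ("what has happened") layer follows, assuming a simple table of encounters; the columns and values are illustrative.

```python
# Descriptive analytics sketch: summarize what has already happened.
import pandas as pd

encounters = pd.DataFrame({
    "department": ["ED", "ED", "cardiology", "cardiology", "oncology"],
    "length_of_stay_days": [1, 2, 4, 3, 7],
    "cost": [1200, 1800, 5400, 4700, 9100],
})

# Basic statistics and grouped summaries answer "what happened?"
print(encounters.describe())
print(encounters.groupby("department")[["length_of_stay_days", "cost"]].mean())
```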

4.6.2 Diagnostic analytics [Text #7] [85]
Diagnostic health analytics works on answering "why something happened." It needs extensive exploration and focused analysis of the existing data, using tools such as visualization techniques, to discover the root causes of a problem, and it helps users realize the nature and impact of issues. This may include understanding the effects of input factors and processes on performance. Diagnostic data analytics works backward from the symptoms to suggest the cause of what has happened. Doctors continue to be responsible for the final diagnosis, but they can use data analytics to save time and avoid possible errors of judgment. Subsequently, the results of each diagnosis, together with a description of the symptoms and additional contributing factors, are added to the database used for the analytics. This helps the diagnostic data analytics become increasingly accurate [86].

4.6.3 Predictive analytics [Text #8, page 99] [78]
Predictive analytics is a higher level of analytics that serves to answer questions such as "what could happen in the future based on previous trends and patterns?" Thanks to AI, "intelligence" is added to many mainstream healthcare applications, whereas relying on descriptive analytics alone is limited to reactive, rather than proactive, solutions. Predictive analytics, combined with human interpretation, can drive better decision-making. Thus, the advent of AI in health analytics now allows healthcare organizations to benefit from leveraging predictive and prescriptive algorithms to improve operations, reduce fraud, manage patient health, and improve patient outcomes. Predictive analytics involves using empirical methods (statistical and others) to generate data predictions as well as assessing predictive power. It uses statistical techniques, including modeling, machine learning, and data mining, that analyze current and historical data to make predictions. As an example, predictive analytics can identify high-risk patients and provide them treatment early, thus reducing unnecessary hospitalizations or readmissions. Predictive analytics analyzes past data patterns and trends as it tries to predict what could happen in the future. Frequent techniques used for predictive analytics include linear regression, multivariate regression, logistic regression, decision trees, random forest, naïve Bayes, and k-means [84]. At this functional level, a healthcare organization can more effectively monitor operational data and develop predictive analytics to improve operational performance.
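A minimal sketch of the readmission example mentioned above follows, using logistic regression, one of the techniques listed; the predictors and data are synthetic and purely illustrative.

```python
# Predictive analytics sketch: estimate readmission risk with logistic
# regression on synthetic, illustrative features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 2000
# Hypothetical predictors: age, prior admissions, comorbidity count, length of stay.
X = rng.normal(size=(n, 4))
y = (0.9 * X[:, 1] + 0.6 * X[:, 2] + rng.normal(size=n) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)

risk = clf.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, risk), 3))
# High-risk patients can be routed to follow-up programs before discharge.
```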


Big data analytics [Text #1, page 83] goes beyond improving profits and cutting down on waste. It can contribute to predicting epidemics, curing diseases, improving the quality of life, and reducing preventable deaths. Among these applications, “predictive analytics” is being considered the next revolution, both in statistics and medicine around the world. The COVID-19 (see Chapter 8) pandemic puts predictive analytics center-stage in future public health strategies.

4.6.4 Prescriptive analytics [Text #9, page 100] [83]
Prescriptive analytics is an advanced form of analytics. It serves to answer questions like "what should a health organization do?" Prescriptive analytics advises on possible outcomes and results through actions that are likely to maximize critical metrics. Among the prominent techniques used for prescriptive analytics are optimization and simulation. Prescriptive analytics is used when decisions are made regarding a wide range of feasible alternatives. It enables decision-makers to look into the consequences and expected results of their decisions and see the opportunities or problems. It also provides decision-makers with the best course of action to take promptly. Successful prescriptive analytics depends mainly on the adoption of 5 essential elements: (1) utilizing hybrid data; (2) including both structured and unstructured data types; (3) integrating predictions and prescriptions; (4) considering all possible side effects; and (5) using adaptive algorithms that can be tailored easily to each situation, in addition to the importance of robust and reliable feedback mechanisms [87]. Prescriptive analytics offers the potential for optimal future outcomes for healthcare decision-makers. Based on decision optimization technology, such capabilities enable users not only to recommend the best course of action for patients or providers, but also to compare multiple "what if" scenarios to assess the impact of choosing one action over another [88].
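A minimal optimization sketch of the prescriptive ("what should we do") step follows, assuming a toy staffing problem; the costs and coverage requirements are invented for illustration.

```python
# Prescriptive analytics sketch: choose the cheapest mix of nurse shifts
# that still meets required coverage (a toy linear program).
from scipy.optimize import linprog

# Decision variables: number of day, evening, night shifts to staff.
cost = [300, 340, 400]                 # cost per shift (illustrative)

# Coverage constraints, written as A_ub @ x <= b_ub (so >= becomes negated).
A_ub = [[-1, 0, 0],                    # day shifts   >= 10
        [0, -1, 0],                    # evening      >= 8
        [0, 0, -1],                    # night        >= 6
        [-1, -1, -1]]                  # total shifts >= 26
b_ub = [-10, -8, -6, -26]

result = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
print(result.x, result.fun)            # recommended shifts and total cost
```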

4.6.5 Primary AI literature reviews of health analytics (1, 2, 3)
1. Prevent opioid abuse through predictive analytics [89]: Big data analytics in healthcare might be the answer everyone is looking for in the opioid abuse crisis. Data scientists at Blue Cross Blue Shield have started working with analytics experts at Fuzzy Logix to tackle the problem. Using years of insurance and pharmacy data, Fuzzy Logix analysts have successfully identified 742 risk factors that predict, with a high degree of accuracy, whether someone is at risk for abusing opioids. As Blue Cross Blue Shield data scientist Brandon Cosley states in Forbes [90], "It's not like one thing 'he went to the doctor too much' is predictive . . . it's like 'well you hit a threshold of going to the doctor, and you have certain types of conditions, and you go to more than one doctor and live in a certain zip code. . .' Those things add up."
2. Enormous growth in the health care database through analytics and IoT [91]: A recent study by McKinsey reveals that the pieces of content uploaded to Facebook are in the 30 billion range, while the value of big data for the healthcare industry is about $300 billion [92]. This growth is driven by technological changes, including internal and external activities in electronic commerce (e-commerce), business operations, manufacturing, and


healthcare systems. Moreover, recent developments in in-memory databases have increased database performance and made large-scale data collection achievable through Internet of Things (IoT) and cloud computing facilities that provide persistent large-scale data storage and transformation.
3. Chronic illness patients sharing experiences with other patients and doctors [93]: A vertically integrated healthcare analytics system called GEMINI allows point-of-care analytics for doctors when real-time, usable, and relevant information about their patients is required to answer the questions they ask about the patients for whom they are caring. GEMINI extracts various data sources for each patient and stores them as information in a patient profile graph. The data sources are complex and varied, consisting of both structured data (patients' demographic data, laboratory results, and medications) and unstructured data (such as doctors' notes). The patient profile graph provides holistic and comprehensive information on a patient's healthcare profile. GEMINI can infer implicit information useful for administrative and clinical purposes and extract relevant information for performing predictive analytics. At its core, GEMINI acts as a feedback loop that keeps interacting with healthcare professionals to gather, infer, ascertain, and enhance the self-learning knowledge base.
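The "patient profile graph" idea can be sketched as a simple property graph linking one patient node to structured and unstructured sources. This is an illustration of the general data structure only, not GEMINI's actual design; the node names, relations, and attributes are assumptions.

```python
# Sketch of a patient profile graph linking structured and unstructured
# data sources to one patient node (illustration only).
import networkx as nx

g = nx.MultiDiGraph()
g.add_node("patient:001", kind="patient", age=67)

# Structured sources
g.add_edge("patient:001", "lab:hba1c_2024_05", relation="has_result", value=7.4)
g.add_edge("patient:001", "rx:metformin", relation="prescribed", dose="500 mg bid")

# Unstructured source (e.g., a doctor's note), tagged with extracted concepts
g.add_edge("patient:001", "note:2024_06_12", relation="documented_by",
           concepts=["type 2 diabetes", "neuropathy"])

# Point-of-care query: everything currently linked to this patient.
for _, target, data in g.out_edges("patient:001", data=True):
    print(target, data)
```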

4.6.6 Additional AI literature reviews of health analytics (4, 5, 6)
4. AI analytics support the practice of precision medicine [Text #10] [94]: In the challenging setting of chronic diseases characterized by multiorgan involvement, erratic acute events, and prolonged illness progression latencies, AI health analytics is heavily utilized.
5. Real-time alerting using prescriptive analytics [89]: In hospitals, Clinical Decision Support (CDS) software analyzes medical data on the spot, providing health practitioners with advice as they make prescriptive decisions.
6. Risk of hospital readmissions [95]: Data analytics tools, like machine learning algorithms and predictive models, can discover whether patients are at risk of hospital readmission or medication non-adherence, and then take appropriate actions to mitigate that risk.

4.7 Precision health (aka precision medicine or personalized medicine) [Text #10]
According to the CDC's (Centers for Disease Control and Prevention) "Precision Medicine Initiative," this relatively new term is an emerging approach for disease treatment and prevention. It takes into account individual variability in genes, environment, and lifestyle for each person. It allows doctors and researchers to predict more accurately which treatment and prevention strategies for a particular disease work in which groups of people, versus the "one-size-fits-all" approach [96]. Sometimes called "personalized medicine," precision medicine is another vertically integrated concept in health and wellness care relating to numerous organizational, business,


administrative, and clinical health care issues. Its goal is to find unique disease risks and treatments that work best for patients. It includes:
• the use of family history and screening for diseases before you get sick;
• tailoring prevention;
• tailoring treatments;
• looking at your DNA;
• the "All of Us" initiative [97] to track history and physical findings and genetic, behavioral, and environmental factors of 1 million Americans for several years to assess health factors;
• developing health care solutions that make the best decisions to prevent or treat disease;
• predicting epidemics; and
• improving the quality of life.
AI-based precision medicine combines medicine, biology, statistics, and computing. Researchers are taking the first steps toward developing personalized treatments for diseases. They are applying AI and machine learning to multiple data sources, including genetic data, EHRs [Text #3, page 88], sensor data, environmental data, and lifestyle data. The most promising research in the field is characterized by sustained collaboration across disciplines and institutions. AI is being used by large corporations, universities, and government-funded research collectives to develop precision treatments for complex diseases [98].

4.7.1 Primary AI literature reviews of precision medicine/health (1, 2, 3)
1. Translating cancer genomics into precision medicine with artificial intelligence [99]: The foundation of precision medicine is becoming the integration of AI approaches such as machine learning, deep learning, and natural language processing (NLP) to transform big data into a clinically actionable plan. In the field of cancer genomics (see Chapters 5 and 7), the availability of multi-omics data, genotype-phenotype data through genome-wide association studies (GWAS), and literature mining have promoted the development of advanced AI techniques and solutions. This allows medical professionals to deliver personalized care through precision medicine [100]. Next-generation sequencing (NGS) is a valuable method being applied broadly in precision medicine for gaining insights into the genomic profile of tumors. Simultaneously sequencing millions of DNA fragments in a single sample to detect a wide range of aberrations provides a complete profile of the tumor. The adoption of NGS for clinical purposes has grown tremendously due to the comprehensive detection of aberrations, combined with improvements in reliability, sequencing chemistry, pipeline analysis, data interpretation, and cost [101]. NGS supports the discovery of novel biomarkers, including mutation signatures and tumor mutational burden (TMB). Statistical analyses are performed, and patterns are discovered across the millions of mutations detected by NGS [102]. It is revolutionizing
medical research and enabling multi-layer studies that integrate genomic data of high dimensionality, such as DNA-seq and RNA-seq, together with multi-omics data such as the proteome, epigenome, and microbiome. The integrative analysis of multi-omics data provides a better view of biological processes, leading to a fuller understanding of these systems compared to single-layer analysis. The advancement of machine learning (ML) technologies impacts the interpretation of genomic sequencing data, which has traditionally relied on manual curation. The 2 critical limitations of manually curating and interpreting the results from genomics data are scalability and reproducibility. These limitations continue to grow as more genomic data become available. The number of curations and the amount of time experts or variant scientists can dedicate daily to this task are limited. Organizations are working to build and standardize multi-step AI protocols for variant classification to address such limitations. Some of these organizations include the American College of Medical Genetics and Genomics and the Association for Molecular Pathology (ACMG-AMP). Cancer biologists and molecular pathologists are experts in classifying cancer sequence variants for their pathogenicity and clinical relevance. This is a complex process, difficult to compile into a set of rules comprehensive enough to cover all scenarios. To what degree can ML algorithms learn the complex clinical decisions made by individual pathologists and classify the variants automatically? Massachusetts General Hospital (MGH) experimented with this and obtained very promising results. They selected ~500 features, built multiple ML models on ~20,000 clinical sign-out variants reported by board-certified molecular pathologists, and then compared the prediction results to find the best model [103]. The logistic regression model demonstrated the best performance, with only 1% false negatives and 2% false positives, which is comparable to human decisions. Bio-NER (Named Entity Recognition) is the foundation of evidence extraction for precision medicine. NLP tools are being used in cancer genomics for the automated extraction of entities such as genes, genetic variants, treatments, and conditions. An essential step for tumor molecular profiling and downstream gene-protein or gene-disease relationship analysis is identifying genetic variants. Medical providers need accurate identification and interpretation of genetic variation data to design effective personalized treatment strategies for patients. Unfortunately, there is no universal standard for how genetic variants are called out, and there are multiple ways of presenting the same event in the literature and genomic databases. More sophisticated AI learning methods are being developed. AI assists across the medical continuum from research to prognosis, therapy, and post-cancer treatment care. It remains the main driver of healthcare transformation towards precision medicine. How this revolutionary role of AI in healthcare translates into an improvement in the lives of patients remains to be demonstrated and depends heavily on the availability of patient outcome data. 2. From hype to reality: data science enabling personalized medicine [104]: Personalized medicine is deeply connected to and dependent on data science, specifically AI and
machine learning. The key idea is to base medical decisions on individual patient characteristics (including biomarkers) rather than on averages over a whole population. The US Food and Drug Administration uses the term biomarker for any measurable quantity or score that can be used as a basis to stratify patients (e.g., genomic alterations, molecular markers, disease severity scores, lifestyle characteristics, etc.) [105]. The advantages of personalized medicine are summarized as [106]: a. better medication effectiveness, since treatments are tailored to patient characteristics, e.g., genetic profile; b. reduction of adverse event risks through avoidance of therapies showing no apparent positive effect on the disease, while at the same time exhibiting (partially unavoidable) adverse side effects; c. lower healthcare costs as a consequence of optimized and effective use of therapies; d. early disease diagnosis and prevention by using molecular and non-molecular biomarkers; e. improved disease management with the help of wearable sensors and mobile health applications; and f. smarter design of clinical trials due to the selection of likely responders at baseline. Patient characterization or "personalization" is difficult and requires state-of-the-art approaches offered by data science. Specifically, multivariate stratification algorithms using techniques from AI (including machine learning) play an important role. Driven by the increasing availability of large datasets, there is an increasing dependence on such data science-driven solutions. Specifically, 'deep learning' techniques have received a great deal of attention in the area of personalized medicine [107]. Large commercial players entering the field emphasize the perceived potential for machine learning-based solutions [108]. Machine learning algorithms have the potential of integrating multi-scale, multimodal, and longitudinal patient data to make relatively accurate predictions, which may even exceed human performance [109]. However, the current hype around AI and machine learning must be kept in perspective. Major existing bottlenecks include: a. lack of sufficient prediction performance due to a lack of signals in the employed data; b. challenges with model stability and interpretation; c. a lack of validation of stratification algorithms via prospective clinical trials that demonstrate benefit compared to standard of care; and d. general difficulties in implementing a continuous maintenance and updating scheme for decision support systems. Current algorithms cannot yet recommend the right treatment at the right time and dose for each patient. Steps that would bring us closer to this goal include: a. innovative software tools that better link knowledge with machine learning-based predictions from multi-scale, multi-modal, and longitudinal data; b. innovative modeling approaches, such as causal inference techniques and hybrid modeling, which go beyond typical state-of-the-art machine learning; and
c. new computational modeling approaches that allow us to identify critical transitions in a patient's medical trajectory. No algorithm can replace a health care provider. Instead, the idea is to provide the health care provider with a tool that supports their decisions based on objective, data-driven criteria and the wealth of available biomedical knowledge. 3. The future of precision medicine: potential impacts for health technology assessment [110]: Technological progress in precision medicine is expected to continue, spearheaded by programs such as the Precision Medicine Initiative [111] and the 100,000 Genomes Project [112]. This has consequences for the generation of clinical and economic evidence, which means healthcare decision-makers, including health technology assessment (HTA) agencies and guideline producers, should consider how their methods and processes accommodate these new technologies and services. A recent study suggests that precision medicine encompasses more than just pharmacogenetic and pharmacogenomic tests. It covers technologies that offer unique treatment pathways for individual patients [113]. Three major types of precision medicine technology likely to emerge over the next decade are identified as: complex algorithms; digital health applications ('health apps'); and 'omics'-based tests (all covered extensively in Chapters 5, 6 and 7). The experts anticipated increased use of algorithms that use AI to aid clinical decision-making over the next decade [114]. These algorithms require large datasets ('knowledge bases') that include a large number of variables, such as genetic information, sociodemographic characteristics, and electronic health records. Algorithms provide clinicians and patients with predictions on expected prognosis and optimal treatment choices using patient-level characteristics. These algorithms regularly update as new information is added to the knowledge base, an approach termed 'evolutionary testing.' Health apps include a wide range of tools that provide disease management advice, receive and process patient-inputted data, and record physical activity and physiological data such as heart rate. Also, a subset of apps will likely fall under precision medicine, with the most advanced utilizing AI-based technologies. The number of health apps is expected to increase significantly over the next decade. Digital health experts predict that significant developments in this area will also involve apps that analyze social or lifestyle determinants of health, such as socioeconomic status or physical activity, to stratify patients, including apps linked to activity monitoring devices (or wearable technologies). The decision problem presented to HTA agencies and guideline developers becomes increasingly difficult to define when dealing with some precision medicine technologies and services. One expert noted that these issues are particularly relevant for whole-genome sequencing. This can be performed at any point during an individual's lifetime, inform care pathways for a wide range of diseases, and be analyzed using many different methods [115]. The fast pace of innovation in precision medicine may also mean that assessment bodies will face higher volumes of evaluations. Stratification of a patient population may result in smaller sample sizes recruited to trials for precision medicine interventions.
Combined with more complex and variable treatment pathways, this could increase the uncertainty of the cost-effectiveness estimates presented to decision-makers.

Another source of uncertainty is unit costs, which for 'omics'-based tests can vary by laboratory [116]. Such tests may also yield continuous results where thresholds must be set to determine the outcome of testing [117] (a threshold-setting sketch follows below). Resolving some of the issues may require a balanced evaluation of the strengths and potential weaknesses of normative choices within an HTA framework. Any departure from currently established frameworks requires deliberation and co-operation between a wide range of entities across the health system. An appropriate solution depends upon: a. the decision-making context within which the HTA agency exists; b. the stated objectives of the health system as a whole; c. the practicality of the assessment; and d. the relevance of the framework to the technology type [118]. Precision medicine interventions will increase over the next decade and will change the way health care services are delivered and evaluated. Such changes may be driven first by the complexity and uncertainty around delivering therapies that use biomarker data and, secondly, by the innovative nature of AI-based technologies. Worldwide healthcare systems need to consider adjusting their evaluative methods and processes to accommodate these changes.
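The threshold-setting issue mentioned above for continuous 'omics'-based test results can be illustrated with a short sketch: it picks a cut-point from an ROC curve using Youden's J statistic on synthetic scores. Both the data and the choice of Youden's J are assumptions made for illustration; an HTA body might weigh sensitivity and specificity quite differently.

```python
# Sketch of setting a decision threshold for a continuous 'omics'-based test
# score using Youden's J statistic; the scores and labels are synthetic.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
labels = rng.binomial(1, 0.3, 500)                 # 1 = disease present
scores = rng.normal(loc=labels * 1.2, scale=1.0)   # higher score in disease

fpr, tpr, thresholds = roc_curve(labels, scores)
j = tpr - fpr                                      # Youden's J at each threshold
best = thresholds[np.argmax(j)]
print("Chosen threshold:", round(float(best), 2))
# Different cut-points trade sensitivity against specificity, which feeds
# directly into the cost-effectiveness estimates a decision-maker would review.
```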

4.7.2 Additional AI literature reviews of precision medicine/health (4, 5, 6)
4. Radiomics with artificial intelligence for precision medicine in radiation therapy [119]: Radiomics, which emerged from radiation oncology, is a novel approach for addressing how precision medicine can be performed, based on multimodality medical images that are noninvasive, fast, and low in cost. 5. Machine learning in medicine (MLm): Addressing ethical challenges [120]: MLm algorithms use data that are subject to privacy protections, requiring that developers pay close attention to ethical and regulatory restrictions at each stage of data processing. The "All of Us" precision medicine research cohort in the US is intended to fund the development of more representative data sets that can be used for training and validation [121]. 6. The future of personalized care: humanism meets artificial intelligence [122]: Artificial intelligence holds tremendous promise for transforming the provision of health care services in resource-poor settings [123]. AI tools and techniques that are still in their infancy already provide substantial benefits in delivering in-depth knowledge of individuals' health and predicting population health (Text #4, page 91) risks, and their use for medicine and public health is likely to increase substantially in the near future.

4.8 Preventive medicine/healthcare [Text #11]
The holy grail of health care is preventive medicine (also referred to as preventive healthcare). It is something that has been talked about and strived towards for generations. Perhaps it can be postulated that through the greater emphasis on self-care (i.e., exercise, diet, healthier lifestyles), preventive medicine has made some progress. But from a broader perspective, the business and costs of health care suggest that little headway has been achieved.

Nonetheless, preventive health continues to be vertically integrated into virtually all of the health care categories discussed in this chapter. Mainly, preventive medicine focuses on the health of individuals, communities, and defined populations, with the goal of protection, promotion, and maintenance of health and well-being through the prevention of disease, disability, and premature death [124]. The 3 types of prevention generally referred to include primary, secondary, and tertiary. Primary (aka "prevention") refers to methods to avoid the occurrence of disease, either through eliminating disease agents or increasing resistance to disease. Secondary (aka "treatment") refers to methods to detect and address an existing disease prior to the appearance of symptoms. And tertiary (aka "rehabilitation") [125] refers to methods to reduce the negative impact of symptomatic diseases, such as disability or premature death, through rehabilitation and treatment [126]. Among the 3 types of prevention, there are universal concepts that can be ascribed to each. First is the concept of "disease prevention." This includes measures that are aimed at preventing and/or detecting specific conditions. Essential measures in this concept include screening, vaccinations, and preventive medication. Second is the concept of "health promotion." This focuses on promoting and maintaining a healthy lifestyle and a healthy social and physical environment. And third is "health protection," which aims to protect the population against health-threatening factors. Examples are: monitoring the quality of drinking and bathing water, waste disposal, and road safety [127]. All clinical and administrative health professionals involved in preventive health care address an array of components (Table 4–3 [128]) related to the field. There are no less than 4 professional disciplines that address these multiple areas. They include: (1) occupational medicine specialists; (2) aerospace medicine specialists; (3) general preventive medical practitioners (physicians, nurses, social workers, etc.); and (4) public health professionals [Text #12, page 111] [128]. Beyond these general categories of preventive health and medical professionals, there are additional subspecialty categories in the field. They include "Addiction Medicine," which is concerned with the prevention, evaluation, diagnosis, and treatment of persons with the disease of addiction, of those with substance-related health conditions, and of people who show unhealthy use of substances, including nicotine, alcohol, prescription medications, and other licit and illicit drugs.

Table 4–3 Components of preventive medicine.

• Biostatistics and the application of biostatistical principles and methodology;
• Epidemiology and its application to population-based medicine and research;
• Health services management and administration, including: developing, assessing, and assuring health policies; planning, implementing, directing, budgeting and evaluating population health and disease management programs; and utilizing legislative and regulatory processes to enhance health;
• Control of environmental factors that may adversely affect health;
• Control and prevention of occupational factors that may adversely affect health safety;
• Clinical preventive medicine activities, including measures to promote health and prevent the occurrence, progression and disabling effects of disease and injury; and
• Assessment of social, cultural and behavioral influences on health.
Data from Preventive Medicine. American Board of Preventive Medicine. www.theabpm.org.

"Medical Toxicologists" are physicians and PhDs who specialize in the prevention, evaluation, treatment, and monitoring of injury and illness from exposures to drugs and chemicals, as well as biological and radiological agents. "Undersea and Hyperbaric Medicine" physicians deal with decompression illness and diving accident cases. They use hyperbaric oxygen therapy to treat such conditions as carbon monoxide poisoning, gas gangrene, non-healing wounds, tissue damage from radiation, and burns and bone infections [128]. The categories of general preventive medical practitioners and public health professionals are the disciplines most related to the theme of this book, which is AI in health and wellness. The general preventive medical practitioners (physicians, nurses, social workers) deal mostly with the clinical aspects associated with preventive health care and, as such, receive full attention in Chapters 5–7. The category of public health professionals, while heavily related to clinical aspects of preventive care, has profound relevance to the business and administrative aspects of health care. AI's influence on the programs conducted by public health professionals is addressed in this chapter through the Primary and Additional literature reviews discussed below.

4.8.1 Primary AI literature reviews of preventive medicine/healthcare (1, 2, 3)
1. A digital platform to identify health conditions and therapeutic interventions using an automatic and distributed artificial intelligence system [129]: Google has developed a method and system for automatic, distributed, computer-aided, and intelligent data collection/analytics, health monitoring, health condition identification, and patient preventive/remedial health advocacy. The system integrates: (1) distributed patient health data collection devices; (2) centralized or distributed data servers running various intelligent and predictive data analytics engines for health screening, assessment, patient health condition identification, and patient preventive/remedial health advocacy; (3) specifically designed data structures, including measurable health indicator vectors, patient health condition identification matrices, and patient health condition vectors; (4) portal servers to interface with distributed physician terminal devices; and (5) distributed patient terminal devices for delivering health condition identification, health interventions, and patient preventive/remedial health advocacy, and for monitoring and tracking patient activities. The various intelligent and predictive engines are programmed to learn and extract hidden features and correlations from the large amount of (big) data obtained from the distributed data collection devices. Patient health data (PHD) is normally collected and processed onsite in centralized medical centers, hospitals, clinics, and medical labs. The collected data are transmitted to electronic health record (EHR) systems to be examined and analyzed by medical professionals for further health screening, health risk assessment, disease prevention, patient health condition identification (PHCI), and patient preventive/remedial health
advocacy (PPRHA). Patient preventive/remedial health advocacy may also be referred to as patient therapeutic interventions. The term "therapeutic" is used here to refer broadly to prescriptive or nonprescriptive medicine, supplements, self-directed management, at-home care, therapies, medical/biological tests, and referrals based on the patient's health conditions. This system is a method for automatic and intelligent patient health condition identification (PHCI) and patient preventive/remedial health advocacy (PPRHA) by processing circuitry that communicates with a data repository and a communication interface. It is based on computer technologies and designed to use AI tools to solve technical problems associated with computer-aided health screening, risk assessment, PHCI, and PPRHA. 2. Aging well: using precision to drive down costs and increase health quality [130]: Health is a prominent component of aging well (see Chapter 7, page 406). It is considered "a state of an individual characterized by the core features of physiological, cognitive, physical and reproductive function, and a lack of disease" [131]. The high prevalence and burden of chronic illnesses in the aging population also imposes a substantial economic toll on the health care system, which includes direct medical costs and income losses. It was estimated that approximately 3.8 trillion global healthcare dollars were spent on the 4 major diseases (respiratory, cardiovascular, diabetes, and cancer) in 2010 [132]. Precision Medicine [Text #10, page 101] brings the promise of "the right treatment for the right person at the right time." However, this paradigm promotes a dichotomy of high health care costs and low health quality because the majority of diseases of aging are chronic. Thus, an emphasis on precision medicine alone in aging care may aid in solutions responsive to the rapid development of epidemics, but it does not identify the root cause of the problem. Breaking the seemingly irreversible trend of ever-increasing healthcare costs demands rethinking the current approaches for addressing chronic disease risk and management. Public health [Text #12, page 111] serves to shift the emphasis from treatment to prevention, and efforts to do so are increasing. The first step in prevention begins with health promotion and identification of risk factors. Studies show that 10 modifiable risk factors accounted for 90% of the risk of cerebrovascular disease alone [133]. Similarly, researchers have also attributed 42% of all cancers to modifiable risk factors [134]. Findings like these require improvements in risk-level knowledge at the individual as well as the population level to determine what early detection and health promotion efforts might prove useful. While more studies are needed to determine causality, current results underscore the potential of modifying risk factors to protect against disease. This is true at all disease stages, where objectives must include slowing onset, mitigating progression, or averting the disease state altogether. Favorable outcomes flow from preventive schemes successfully executed. New York City's approach to reducing sugary drink consumption is a good case study of this tactic. Using smoking cessation campaigns as a blueprint, the New York City (NYC) Department of Health and Mental Hygiene (DOHMH) implemented a mass media educational campaign identifying the health consequences of sugary drink consumption. Over 7 years,
these initiatives accomplished a 35% decrease among NYC adults and a 27% decrease among public high school students in the consumption of 1 or more sugary drinks a day [135]. Technology and AI are shaping the way we think of health and will improve current medical and public health practice. The economic impact will be substantial and will provide a roadmap for reining in healthcare expenditures and improving health and wellness. Alongside the medical model, a new business ecosystem comprising tens of thousands (if not more) of new technology companies is emerging, centered on providing the right solution to the right person at the right time. Given the global pandemic of chronic diseases, the timely challenge is to bring together these key stakeholders to reduce the timeline for reaching the precision health and preventive health objectives. 3. Artificial intelligence can predict premature death [136]: Computers capable of teaching themselves to predict premature death could significantly improve preventive healthcare. A team of healthcare data scientists and doctors has developed and tested a system of computer-based 'machine learning' algorithms that predict the risk of early death due to chronic disease in a mainly middle-aged population. They found this AI system to be accurate in its predictions, performing better than the current standard approach to prediction developed by human experts [137]. The team used health data from just over half a million people aged between 40 and 69 recruited to the UK Biobank between 2006 and 2010 and followed up until 2016. The AI machine learning models used in the study are 'random forest' and 'deep learning' algorithms. This study builds on previous work by the University of Nottingham, which showed that 4 different AI algorithms, 'random forest,' 'logistic regression,' 'gradient boosting' and 'neural networks,' were significantly better at predicting cardiovascular disease than an established algorithm used in current cardiology guidelines. The Nottingham researchers predict that AI will play a vital role in the development of future tools that can deliver personalized medicine and tailor risk management to individual patients. Further research will be required to verify and validate these AI algorithms in other population groups and explore ways to implement these systems into routine healthcare.
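The following sketch illustrates the general 'random forest' approach described in item 3, using entirely synthetic risk factors and outcomes; it does not reproduce the UK Biobank analysis or its variables.

```python
# Illustrative sketch of a 'random forest' mortality-risk model in the spirit of
# the study above; all features and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.integers(40, 70, n),        # age at recruitment
    rng.normal(27, 4, n),           # body mass index
    rng.integers(0, 2, n),          # current smoker
    rng.normal(5.5, 1.0, n),        # total cholesterol (mmol/L)
])
# synthetic outcome: probability of death rises with age, smoking, and BMI
logit = 0.05 * (X[:, 0] - 55) + 0.8 * X[:, 2] + 0.1 * (X[:, 1] - 27) - 2.5
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # 1 = died during follow-up

model = RandomForestClassifier(n_estimators=300, random_state=0)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("Cross-validated AUC:", auc.mean().round(3))
```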

4.8.2 Additional AI literature reviews of preventive medicine/healthcare (4, 5, 6)
4. AI in early detection of breast cancer [138]: Various factors are driving interest in the application of artificial intelligence (AI) for breast cancer (BC) detection. 5. Role of the primary care provider in preventive health exams [139]: The Affordable Care Act (ACA) of 2010 successfully shifted US health policy to emphasize disease prevention by mandating full coverage of approved preventive services by primary care providers (PCPs). 6. Use of preventive health services among cancer survivors in the U.S. [140]: Although many cancer survivors are in excellent health, the underlying risk factors and side effects of cancer treatment increase the risk of medical complications and secondary malignancies.

4.9 Public health [Text #12]
The CDC Foundation defines "Public Health" as "the science of protecting and improving the health of people and their communities by promoting healthy lifestyles, researching disease and injury prevention, and detecting, preventing and responding to infectious diseases. Overall, public health is concerned with protecting the health of entire populations. These populations can be as small as a local neighborhood, or as big as an entire country or region of the world" [141]. The core science of Public Health is epidemiology. From among 102 definitions of epidemiology reviewed in the literature, the currently accepted definition (World Health Organization [142]) reads as follows: "Epidemiology is the study of the distribution and determinants of health-related states or events (including disease), and the application of this study to the control of diseases and other health problems. Various methods are used to carry out epidemiological investigations: surveillance and descriptive studies can be used to study distribution; analytical studies are used to study determinants." Practically speaking, Public Health attempts to prevent people from getting sick or injured in the first place. It promotes wellness by encouraging healthy behaviors. It conducts scientific research, educates the public about health, and works to assure conditions in which people can be healthy. That can mean educating people about the risks of alcohol and tobacco, vaccinating children and adults to prevent the spread of disease, setting safety standards to protect workers, and developing school nutrition programs to ensure kids have access to healthy food. In its efforts to protect the general population, Public Health works to track disease outbreaks, prevent injuries, and shed light on why some of us are more likely to suffer from poor health than others. The many facets of public health range from speaking out for laws that promote smoke-free indoor air and seatbelt use, to spreading the word about ways to stay healthy and offering science-based solutions to problems. Examples of the many fields of Public Health are listed in Table 4–4 [143]. Beyond the fields and providers of Public Health services, there is a series of public health systems (Table 4–5 [144]), commonly defined as "all public, private, and voluntary entities that contribute to the delivery of essential public health services within a jurisdiction." This concept ensures that all entities that contribute to the health and well-being of the community or state are vertically integrated and interoperable (Fig. 4–7 [144]) in delivering and assessing the provision of public health services. AI has the potential to improve the efficiency and effectiveness of an expanded public health continuum. It can make the concepts of precision/personalized health [Text #10, page 101], predictive health [Text #8, page 99] and preventive health [Text #11, page 106] realities. The use of AI presents a radical expansion in the scope of public health, and many of these activities are vertically integrated and made interoperable through organizations within and beyond established public health institutions [145].

Table 4–4 Fields and providers in public health.
• First responders
• Restaurant inspectors
• Health educators
• Scientists and researchers
• Nutritionists
• Community planners
• Social workers
• Epidemiologists
• Public health physicians
• Public health nurses
• Occupational health and safety professionals
• Public policymakers
• Sanitarians

Data from WHO. Epidemiology. 2017. http://www.who.int/topics/epidemiology/en/.

Table 4–5 10 Essential public health services.
Public health activities that all communities should undertake:
1. Monitor health status to identify and solve community health problems;
2. Diagnose and investigate health problems and health hazards in the community;
3. Inform, educate, and empower people about health issues;
4. Mobilize community partnerships and action to identify and solve health problems;
5. Develop policies and plans that support individual and community health efforts;
6. Enforce laws and regulations that protect health and ensure safety;
7. Link people to needed personal health services and assure the provision of health care when otherwise unavailable;
8. Assure competent public and personal health care workforce;
9. Evaluate effectiveness, accessibility, and quality of personal and population-based health services;
10. Research for new insights and innovative solutions to health problems.
From CDC.org. The public health system & the 10 essential public health services; 2018.

FIGURE 4–7 The public health system. Public health is a network of vertically integrated and interoperable systems delivering and assessing the provision of public health services. From CDC.org. The Public Health System; 2018.

4.9.1 Primary AI literature reviews of public health (1, 2, 3)
1. P4 medicine [146]: Back in the introduction to this chapter, I spoke about the concepts of vertical integration and interoperability and their role in both the business and clinical aspects of health care. Throughout this chapter, the 2 concepts kept reappearing in different forms and functions for many current and evolving programs and strategies. We now arrive at the concept of P4 Medicine, which represents the perfect example of both vertical integration and interoperability in both health care business and clinical delivery. Most of the components to be described as part of the P4 Medicine model (predictive analytics [Text #8, page 99], preventive care [Text #11, page 106], and precision or personalized care [Text #10, page 101]) have already been mentioned numerous times
throughout this chapter. They now come together in an elegant public health and clinical delivery model of integrated health care. The quintessential example of the value of AI and its vertical integration and interoperable capabilities in promoting public health, personal health, and the business and administration of health care (i.e., cost containment, access, and quality) is the development of "P4 Medicine." This strategy facilitates 2 revolutions in medicine: (1) systems medicine (global or comprehensive approaches to disease); and (2) AI data-driven personalized measurements. Together they create a new system of health care that provides 4 ("P4 Medicine") evolving health care concepts: (1) predictive analytics, (2) preventive care, (3) personalized (precision) healthcare, and (4) participatory care [147]. P4 medicine is a plan to radically improve the quality of human life via biotechnology. The premise of P4 Medicine is that, over the next 20 years or so, medical practice will be revolutionized by biotechnology to manage a person's health, instead of managing a patient's disease. Today's medicine is reactive, meaning we wait until someone is sick before administering treatment. Medicine of the future will be predictive and preventive, examining the unique biology of an individual to assess their probabilities of developing various diseases and then designing appropriate treatments, even before the onset of a disease [148]. We are rapidly approaching a point in healthcare evolution where it is possible to treat disease while considering the various factors that affect individual patients so that they may receive customized care. We are transitioning to patient-specific diagnostics and therapeutics. As it relates to predictive medicine (the first tier in P4 Medicine), AI technology is beginning to allow us to understand better not only our genomic DNA but also our
epigenetic response to environmental changes. We are moving further into the analysis of proteomics, transcriptomics, genomics, metabolomics, and lipidomics, which allows us to predict and target diseases [149]. Preventive care (second tier) flows from predictive care, as does precision care (third tier). In precision care, we classify people into subpopulations using their common genetic patterns, lifestyles, drug responses, and environmental and cultural factors. This provides enormous amounts of information that is used to deliver the most efficient treatment or preventive care at the right time, to the patients who will best benefit from it. The P4 system does not design a novel therapeutic intervention for each patient, such as a personalized one-patient drug, which would be financially and technically difficult [150]. The concept of participatory medicine relies on the idea that patients should play a decisive role in their healthcare by actively controlling their health status and by participating in the decision-making process regarding their treatments. Doctors are encouraged to evaluate treatment possibilities while considering the benefits and limitations of each alternative, rather than merely imposing a treatment. Decision-making becomes a much more complicated process with more information available and given the increasing number of variables that are taken into consideration. Innovative ways to practice medicine lead to changing the way medicine is taught. Medical education needs to shift its focus from pure content to the development of integrative skills and competences. Basic subjects such as molecular and cell biology, genetics, and pathophysiology must be better understood to practice precision medicine. Interpersonal skills are another competence of great importance to medical education in the next era. Doctors will be required to communicate and discuss treatment options with their patients as well as with a series of other professionals with whom they must cooperate. It is also a benefit for the next generation of doctors to understand, at least at a basic level, subjects beyond the current medical curriculum, such as coding, AI, blockchain technology, 3-D printing, and other related topics [151]. 2. "geoAI" in environmental epidemiology [152]: The scientific field of geospatial AI (geoAI) has recently been formed from the combination of innovations in spatial science with the rapid growth of methods in AI to glean meaningful information from spatial big data. The innovation of geoAI lies partially in its applications to address real-world problems. These novel geoAI methods are used to address human health-related problems such as environmental epidemiology [153]. Ultimately, one of the higher goals for integrating geoAI with environmental epidemiology is to conduct more accurate and highly resolved modeling of environmental exposures (compared to conventional approaches). This, in turn, would lead to a more accurate assessment of the environmental factors to which we are exposed, and that would improve our understanding of the potential associations between environmental exposures and disease in epidemiologic studies (a toy example of such exposure estimation appears below).
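As a toy illustration of the exposure-modeling idea referenced above, the sketch below estimates a pollutant concentration at a geocoded address by inverse-distance weighting of nearby monitor readings; the coordinates, readings, and the simple planar distance calculation are all assumptions made for brevity, not a production geoAI method.

```python
# Toy sketch of spatial exposure modeling: estimating a pollutant level at a
# geocoded residential address from nearby monitor readings using
# inverse-distance weighting. Coordinates and concentrations are invented.
import numpy as np

monitors = np.array([[40.71, -74.00], [40.73, -73.98], [40.69, -74.02]])  # lat, lon
pm25     = np.array([12.0, 9.5, 15.2])          # PM2.5 readings (ug/m^3)
home     = np.array([40.72, -73.99])            # geocoded address

dist = np.linalg.norm(monitors - home, axis=1)  # simple planar distance
weights = 1.0 / dist**2
exposure = float(np.sum(weights * pm25) / np.sum(weights))
print("Estimated PM2.5 exposure at address:", round(exposure, 1))
# The estimate could then be linked to that person's health-outcome record in
# an epidemiologic analysis.
```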

Given the advances and capabilities in recent research, we can begin to consider how geoAI technologies can be applied explicitly to environmental epidemiology. To determine the factors to which we may be exposed and that may thus influence health, environmental epidemiologists can implement direct methods of exposure assessment, such as biomonitoring (e.g., biomarkers measured in urine), and indirect methods, such as exposure modeling. Spatial science has been enormously valuable in exposure modeling for epidemiologic studies over the past 2 decades. It has enabled environmental epidemiologists to use GIS technologies to create and link exposure models to health outcome data using geographic variables (e.g., geocoded addresses) in their investigation of the effects of factors such as air pollution on the risk of developing diseases such as cardiovascular disease [154]. "geoAI" applications for environmental epidemiology are moving us closer to achieving the goal of providing a more accurate picture of the environmental factors to which we are exposed; this can be combined with other relevant information regarding health outcomes and confounders to investigate whether a particular environmental exposure is associated with a particular outcome of interest in an epidemiologic study. geoAI is an emerging interdisciplinary scientific field that captures the innovations of spatial science, AI (particularly machine learning and deep learning), data mining, and high-performance computing for knowledge discovery from spatial big data. As climate change increasingly threatens the environmental balance and ecosystems of our planet, geoAI and environmental epidemiology have become some of the most essential worldwide considerations in Public Health. Without immediate attention to these environmental factors adversely affecting climate (i.e., CO2, carbon emissions, greenhouse gases, methane gases, and air and water pollution), the health of future generations in the intermediate and long-range future is at stake. 3. Public health research on gun violence: long overdue [155]: Gun violence is a defining and critical public health challenge of our time, particularly in the USA. Public health strategies are built on research to identify patterns of risk, illuminate productive targets for intervention, and assess the effectiveness of interventions. Unfortunately, in the United States, there is a wanton lack of a comprehensive public health approach to gun violence, due in large part to the absence of federal funding for research on gun violence for more than 2 decades. After the 2012 tragedy at Sandy Hook Elementary School, the Centers for Disease Control and Prevention (CDC) asked the Institute of Medicine (IOM) and the National Research Council (NRC) to define a public health research agenda for gun violence. The resulting consensus report, "Priorities for Research to Reduce the Threat of Firearm-Related Violence," laid out the highest-priority research questions to effect progress in a 3–5-year time frame [156]. A provision in a 1996 omnibus spending bill known as the Dickey Amendment forbade the CDC from using its funds to promote or advocate for gun control. This was interpreted as a prohibition on supporting any research on firearms, and the CDC program was dismantled. As a result, we lack even the most basic information about the prevalence and safety of firearms in the United States, or about the effectiveness of interventions aimed at reducing the
probability of injury and death related to their use. The 2013 IOM/NRC report notes that public health research should be integrated with insights from criminal justice and other fields since no single agency or research strategy can provide all of the answers. If implemented, the IOM/NRC public health research agenda report would provide knowledge to inform the U.S. approach to minimizing firearm-related violence and its effects on the health of the American public. Scientific evidence generated by this research would also enable the development of sound policies that support the rights and responsibilities which are central to gun ownership in the United States. It is long overdue to bring the full power of science into this issue of such significant concern to the United States. We need researchers from different disciplines, including public health, social and behavioral sciences, mental health, law enforcement, and AI to work together to tackle this problem. That can only happen if we restore much-needed research funding. It is time to end the counterproductive research freeze on gun violence and its deleterious effects on the public health of the United States.

4.9.2 Additional AI literature reviews of public health (4, 5, 6)
4. Using AI to solve public health problems [157]: A public education program to combat opioid misuse and abuse. 5. AI opens a new frontier for suicide prevention [158]: AI is harnessed to monitor and respond to mental health crises. 6. How to solve homelessness with artificial intelligence [159]: Computer science and social work are merging to battle complex public health and societal problems.
The goal of Chapter 4 is to serve as a "guidebook" to how the applications of AI in the business and administrative aspects of health care have advanced an industry we all depend upon, perhaps more than any other in our lives. The vertical integration and interoperability of big data analytics; blockchain; GOV and NGO organizations; EHRs; population health; healthcare analytics; precision medicine/health (personalized health); preventive medicine/healthcare; and public health have changed our health care system, all for the ultimate betterment of our personal health, the health of our loved ones, and all humanity. The next chapters attempt to expand this "guidebook" as we drill down into the specifics of how AI is applied in the biosciences and in our clinical health care delivery system. They provide an in-depth analysis and review of AI applications in diagnostic technologies and services (Chapter 5), AI applications in medical therapies and services (Chapter 6), and AI applications in prevalent diseases and disorders (Chapter 7). Finally, Chapter 8 will address AI's role in the epidemiology, pathogenesis, clinical and management aspects of the SARS-CoV-2 virus and the COVID-19 pandemic. It is the specific AI-enhanced clinical health care technologies and services presented in the next 4 chapters that we depend upon to protect, prevent, and assure our health and wellness. I hope you found the information in Chapter 4 valuable and that you will find the remaining chapters worthwhile.

References [1] Kasich JR. State of the State Address. Lima: Ohio State Legislature; 2013. [2] FederalRegister.Gov. 2019. [3] Nongovernmental organizations (NGOs) working in global health research. Fogarty International Center at the National Institutes of Health; 2019. [4] Papanicolas I, Woskie LR, Jha AK. Health care spending in the United States and other high-income countries. JAMA 2018;319(10):1024 39. [5] Marbury D. How health systems are using AI and future predictions. Managed Healthc Executives 2018. [6] Woolhandler S, Himmelstiein DU. Single payer reform: the only way to fulfill the President’s pledge of more coverage, better benefits, and lower costs. Ann Intern Med 2017;166(8):587 8. [7] Mesko B. Artificial intelligence will redesign healthcare. Medical Futurist; 2016. [8] Das R. Top 8 Healthcare predictions for 2019. Forbes; 2018. [9] Insights Team. AI and healthcare: a giant opportunity. Forbes Insights; 2019. [10] Muegge C. Generate tangible value in medicine, health economics and outcomes with AI and machine learning. Digital.AI; 2019. [11] Joudak H, Rashidian A, Minaei-Bidgoli B, et al. Using data mining to detect health care fraud and abuse: a review of literature. Glob J Health Sci 2015;7(1):194 202. [12] IBM. Watson Care Manager. IBM.com; 2019. [13] Kalis B, Collier M, Fu R. 10 promising AI applications in health care. Harvard Business Review; 2018. [14] Smith B. Using AI to help save lives. Microsoft; 2018. [15] Davenport TH, Bean R. How big data and AI are accelerating business transformation. NewVantage Partners LLC. Big Data and AI Executive Survey; 2019. [16] Yichuan Wang Y, Kung KA, Byrd TA. Big data analytics: understanding its capabilities and potential benefits for healthcare organizations. Technol Forecast Soc Change 2016. Available from: http://dx.doi.org/ 10.1016/j.techfore.2015.12.019. [17] Dagliati A, Tibollo V, Sacchi L, et al. Big data as a driver for clinical decision support systems: a learning health systems perspective. Front Digit Humanit 2018;5:8. Available from: https://doi.org/10.3389/fdigh. [18] Ayers R. Can big data help provide affordable healthcare? Dataconomy 2019. [19] Ayers R. 4 ways AI is reshaping the future of health care. CoinBase; 2018. [20] Kuch M. Understanding “Big Data” and its role in global health. Medium; 2017. [21] Subías P, Ribas V. Big data for critical care. Big Data CoE. Version 1.0; 2018. [22] High R, Low J. Scientific American blogs (2014); blogs. ,Scientificamerican.com/ mind.guest-blog/ 2014/10/20/expert-cancer-care-may-soon-be-everywhere-thanks-to-watson. . [23] Marr B. How big data is transforming medicine. Forbes; 2016. [24] Omer Gottesman O, Johansson F, Komorowski M, et al. Guidelines for reinforcement learning in healthcare. Nat Med 2019;25:16 18. [25] Dumka A, Sah A. Chapter 6-Healthcare data analytics and management advances in ubiquitous sensing applications for healthcare. Smart ambulance system using concept of big data and internet of things. Academic Press; 2019. p. 155 76. [26] Rodriguez F, Scheinker D, Harrington RA. Promise and perils of big data and artificial intelligence in clinical medicine and biomedical research. Circulation Res 2018;123:1282 4. [27] Fu R. Digital disruption: the transformation of blockchain in healthcare. Healthcare Weekly: 2018.

[28] Quarré F. The advent of AI and blockchain in health care. Forbes; 2019. [29] BIS Research. Global blockchain in healthcare market: focus on industry analysis and opportunity matrix analysis and forecast, 2018–2025. 2018. [30] Brennan B. Pharmeum: the world's first blockchain and AI platform enabling access to affordable, digital healthcare globally. Blockchain Healthcare Review; 2019. [31] Makary M. Medical errors now third leading cause of death in United States. BMJ 2016;353:i2139. [32] Heckman J. Blockchain prototype promises 'complete transparency' between CDC, health providers. Federal News Network; 2018. [33] Siemienczuk J. No, technology can't solve our vaccination problem (but it sure can help). Forbes; 2019. [34] Kamel MN, Wilson JT, Clauson KA. Geospatial blockchain: promises, challenges, and scenarios in health and healthcare. Int J Health Geographics 2018;17:25. [35] Cichosz SL, Stausholm MN, Kronborg T, et al. How to use blockchain for diabetes health care data and access management: an operational concept. J Diab Sci Tech 2018. [36] Birkhead GS, Klompas M, Shah NR. Uses of electronic health records for public health surveillance to advance public health. Annu Rev Public Health 2015;36:345–59. [37] Botsis T, Hartvigsen G, Chen F, et al. Secondary use of EHR: data quality issues and informatics opportunities. AMIA joint summits on translational science proceedings. AMIA Summit Transl Sci 2010;2010:1–5. [38] Gabriel CD, Searcy MT. Adoption of electronic health record systems among U.S. non-federal acute care hospitals: 2008–2014. 2015. [39] Rep. Hobson DL. H.R.3323 - Administrative Simplification Compliance Act, 107th Congress (2001–2002). Congress.gov; 2002. [40] Goldstein BA, Navar AM, Pencina MJ, et al. Opportunities and challenges in developing risk prediction models with electronic health records data: a systematic review. J Am Med Inform Assoc 2017;24(1):198–208. Available from: https://doi.org/10.1093/jamia/ocw042. [41] Davenport TH, Hongsermeier TM, Mc Cord KA. Using AI to improve electronic health records. Harvard Business Review; 2018. [42] Parikh RB, Schwartz JS, Navathe AS. Beyond genes and molecules - a precision delivery initiative for precision medicine. N Engl J Med 2017;376:1609–12.
[43] Bates DW, Saria S, Ohno-Machado L, Shah A, Escobar G. Big data in health care: using analytics to identify and manage high-risk and high-cost patients. Health Aff 2014;33:1123 31. [44] Press, G. Cleaning big data: most time-consuming, least enjoyable data science task, survey says. Forbes; 2016. [45] Lohr, S. For big-data scientists, ‘Janitor Work’ is key hurdle to insights. NY Times; 2014. [46] Chopra V, McMahon Jr. LF. Redesigning hospital alarms for patient safety: alarmed and potentially dangerous. JAMA 2014;311:1199 200. [47] LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015;521:436 44. [48] Wu Y, et al. Google’s neural machine translation system: bridging the gap between human and machine translation. arXi v [cs.CL]; 2016. [49] Bengio Y, Courville A, Vincent P. Representation learning: a review and new perspectives. IEEE Trans Pattern Anal Mach Intell 2013;35:1798 828. [50] What is value-based healthcare? NEJM Catalyst; 2017. [51] Henry J, Pylypchuk Y, Searcy T, et al. Adoption of electronic health record systems among U.S. Non-Federal Acute Care Hospitals: 2008-2015. Healthit.gov; 2016.

[52] Lee CH, Yoon HJ. Medical big data: promise and challenges. Kidney Res Clin Pract 2017;36(1):3 11. Available from: https://doi.org/10.23876/j.krcp.2017.36.1. [53] HHS announces next steps in advancing interoperability of health information. HHS.gov; 2019. [54] Holmgren AJ, Patel V, Adler-Milstein J. Progress in interoperability: measuring US hospitals’ engagement in sharing patient data. 2017. https://doi.org/10.1377/hlthaff.2017.0546 [55] Orenstein D. EHR integration: achieving this digital health imperative. HealthCatalyst; 2018. [56] Knowles M. 4 ways AI can make EHR systems more physician-friendly. HealthIT and CIO Report; 2018. [57] Bresnick J. EHR users want their time back, and artificial intelligence can help. HealthAnalytics; 2018. [58] Rajkomar A, et al. Scalable and accurate deep learning with electronic health records (EHRs). Digital Med 2018;1:18. Available from: https://doi.org/10.1038/s41746-018-0029-1. [59] Kindig D, Stoddart G. What is population health? Am J Public Health 2003;93(3):380 3. [60] Nash DB, Fabius RJ, Skoufalos A, Clarke JL, Horowitz MR. Population health: creating a culture of wellness. 2nd ed. Burlington, MA: Jones & Bartlett Learning; 2016. [61] Jacobson DM, Teutsch S. An environmental scan of integrated approaches for defining and measuring total population health by the clinical care system, the government public health system, and stakeholder organizations. Available at: http://www.improvingpopulationhealth.org. [accessed 08.15.15]. [62] Company Author. Population health vs community health and everything in-between. Eccovia Solutions; 2018. [63] Steward D, Wan TTH. The role of simulation and modeling in disaster management. J Med Syst 2007;31:125 30. [64] Wan TTH. Strategies to modify the risk for heart failure readmission: a systematic review and meta-analysis. In: Wan TTH, editor. Population health management for poly chronic conditions: evidence-based research approaches. New York: Springer; 2018. p. 85 105. [65] Stein N. The future of population health management: artificial intelligence as a cost-effective behavior change and chronic disease prevention and management solution. MOJ Public Health 6(5):00188. Available from: https://doi.org/10.15406/mojph.2017.06.00188. [66] Shaban-Nejad A, Michalowski M, Buckeridge DL. Health intelligence: how artificial intelligence transforms population and personalized health. npj Digital Med 2018;1 Article number: 53. [67] Shaban-Nejad A, Lavigne M, Okhmatovskaia A, Buckeridge DL. PopHR: a knowledge-based platform to support integration, analysis, and visualization of population health data. Ann N Y Acad Sci 2017;1387:44 53. [68] Collins FS, Varmus H. A new initiative on precision medicine. N Engl J Med 2015;372:793 5. [69] The National Institute of Health (NIH). All of Us research program. ,https://allofus.nih.gov/. [accessed 29.08.18]. [70] Dankwa-Mullan I, Rivo M, Sepulveda M, et al. Transforming diabetes care through artificial intelligence: the future is here. Popul Health Manag 2019;22(3). Available from: https://doi.org/10.1089/ pop.2018.0129. [71] Latts L. ADA/IBM Watson Health Study (N . 300,000) finds that nearly 60% of people with T2D discontinue therapy after one year. Presented at: American diabetes association 78th scientific session, June 22 26, 2018. [72] Research 2 Guidance. Top 3 therapy fields with the best market potential for digital health apps. ,https://research2guidance.com/top-3-therapy-fields-with-the-best-marketpotential-for-digital-healthapps/. [accessed 18.07.18]. [73] Kent J. 
Inching toward the data-driven future of population health management. Healthcare Analytics; 2019. [74] Bresnick J. Which healthcare data is important for population health management? HealthIT Analytics; 2017.


[75] Flaxman AD, Vos T. Machine learning in population health: opportunities and threats. PLoS Med 2018;15(11):e1002702. [76] Brundin P, Langston JW, Bloem BR. How mobile health technology and electronic health records will change care of patients with Parkinson’s disease. J Parkinson’s Dis 2018;8(s1):S41 5. [77] Lakhani P, Prater AB, Hutson RK. Machine learning in radiology: applications beyond image interpretation. J Am Coll Radiol 2017. [78] Saiful I, Hasan M, E-Alam N. A systematic review on healthcare analytics: application and theoretical perspective of data mining. Healthc (Basel) 2018;6(2):54. Available from: https://doi.org/10.3390/healthcare6020054. [79] Madakam S, Ramaswamy R, Tripathi S. Internet of Things (IoT): a literature review. J Comput Commun 2015;3:164 73. [80] Tomar D, Agarwal S. A survey on data mining approaches for healthcare. Int J Bio-Sci Bio-Technol 2013;5:241 66. Available from: https://doi.org/10.14257/ijbsbt.2013.5.5.25. [81] Herland M, Khoshgoftaar TM, Wald R. A review of data mining using big data in health informatics. J Big Data 2014;1:2. Available from: https://doi.org/10.1186/2196-1115-1-2. [82] 4 key types of healthcare analytics: is your practice using them? Integrated Medical Partners Blog; 2018.

[83] Khalifa M. Health analytics types, functions and levels: a review of literature. In: Hasman, et al., editors. Data, informatics and technology: an inspiration for improved healthcare. IOS Press; 2018. Available from: http://dx.doi.org/10.3233/978-1-61499-880-8-137. [84] Zerbib LP, Peth L, Outcault S. Leveraging artificial intelligence to improve healthcare organization operations. Mazars 2019. [85] Simpao AF, Ahumada LM, Gálvez JA, et al. A review of analytics and clinical informatics in health care. J Med Syst 2014;38(4):45. [86] HertzI. Healthcare and BI: how can analytics improve patient care? Technology Advice; 2018. [87] Basu A, Five pillars of prescriptive analytics success. Analytics Magazine; 2013. p. 8 12. [88] Kuttappa S. Prescriptive analytics: the cure for a transforming healthcare industry. IBM Big Data Hub and Analytics; 2019. [89] Lebied M. 12 examples of big data analytics in healthcare that can save people. Bus Intell 2018. [90] Marr B. How big data helps to tackle the no 1 cause of accidental death in the U.S. Forbes; 2017. [91] Ifeyinwa AA, Nweke HF. Big data and business analytics: trends, platforms, success factors and applications. Big Data Cogn Comput 2019;3:32. [92] Brynjolfsson E, Hitt LM, Kim HH. Strength in numbers: how does data-driven decision-making affect firm performance? 2011. Available online: http://ssrn.com/abstract 5 1819486. [93] Ling ZJ, Tran QT, Fan J, Gerald CHK. GEMINI: an integrative healthcare analytics system. Proc VLDB Endow 2017;7(13):1766 71. [94] Miller DD, Brown EW. Artificial intelligence in medical practice: the question to the answer? Am J Med 2018;131:129 33. [95] Kent J. 5 ways to ethically use social determinants of health data. Health IT Analytics; 2019. [96] CDC. What is precision medicine? National Institute of Health (NIH). Genetics Home Reference (GHR); 2019. [97] The National Institute of Health (NIH). All of Us research program. 2019. ,https://allofus.nih.gov/. . [98] Insights Team. How machine learning is crafting precision medicine. Forbes Insights; 2019. [99] Xu J, Yang P, Xue S. Translating cancer genomics into precision medicine with artificial intelligence: applications, challenges and future perspectives. Hum Genet 2019;138(2):109 24.


[100] Li Y, Shi W, Wasserman WW. Genome-wide prediction of cis-regulatory regions using supervised deep learning methods. BMC Bioinform 2018;19:202. Available from: https://doi.org/10.1186/s12859-018-2187-1. [101] Pennell NA, Mutebi A, Zhou ZY. Economic impact of next generation sequencing vs sequential singlegene testing modalities to detect genomic alterations in metastatic non-small cell lung cancer using a decision analytic model. ASCO 2018. [102] Steuer CE, Ramalingam SS. Tumor mutation burden: leading immunotherapy to the era of precision medicine? J Clin Oncol 2018;36:631 2. Available from: https://doi.org/10.1200/JCO.2017.76.8770. [103] Zomnir MG, Lipkin L, Pacula M, et al. Artificial intelligence approach for variant reporting. JCO Clin Cancer Inf 2018. Available from: https://doi.org/10.1200/CCI.16.00079. [104] Fröhlich H, Balling R, Beerenwinkel N, et al. From hype to reality: data science enabling personalized medicine. BMC Med 2018;16:150. [105] FDA. ,https://www.fda.gov/ucm/groups/fdagov-public/@fdagov-drugs-n/documents/document/ ucm533161.pdf. . [106] Mathur S, Sutton J. Personalized medicine could transform healthcare. Biomed Rep 2017;7:3 5. Available from: https://doi.org/10.3892/br.2017.922. [107] Beaulieu-Jones BK, Orzechowski P, Moore JH. Mapping patient trajectories using longitudinal extraction and deep learning in the MIMIC-III critical care database. Pac Symp Biocomput 2018;23:123 32. [108] ,http://medicalfuturist.com/innovative-healthcare-companies/. [109] Yu K-H, Zhang C, Berry GJ, Altman RB, Ré C, Rubin DL, et al. Predicting non-small cell lung cancer prognosis by fully automated microscopic pathology image features. Nat Commun 2016;7:12474. Available from: https://doi.org/10.1038/ncomms12474. [110] Love-Koh J, Peel A, Rejon-Parrilla JC. Future precis medicine: potential impacts health technol assess 2018;36(12):1439 51. [111] Ashley EA. The precision medicine initiative: a new national effort. JAMA 2015;313(21):2119 20. Available from: https://doi.org/10.1001/jama.2015.3595. [112] England NHS. 100,000 genomes project: paving the way to personalised medicine. London: NHS England; 2016. [113] Ijzerman MJ, Manca A, Keizer J, Ramsey SD. Implementation of comparative effectiveness research in personalized medicine applications in oncology: current and future perspectives. Comp Eff Res 2015;5:65 72. Available from: https://doi.org/10.2147/CER.S92212. [114] Alemayehu D, Berger ML. Big data: transforming drug development and health policy decision making. Health Serv Outcomes Res Methodol 2016;16(3):92 102. Available from: https://doi.org/10.1007/ s10742-016-0144-x. [115] Dewey FE, Grove ME, Pan C, Goldstein BA, Bernstein JA, Chaib H, et al. Clinical interpretation and implications of whole-genome sequencing. JAMA 2014;311(10):1035 45. Available from: https://doi. org/10.1001/jama.2014.1717. [116] Fugel H-J, Nuijten M, Postma M, Redekop K. Economic evaluation in stratified medicine: methodological issues and challenges. Front Pharmacol 2016;7:113. Available from: https://doi.org/ 10.3389/fphar.2016.00113. [117] Garattini L, Curto A, Freemantle N. Personalized medicine and economic evaluation in oncology: all theory and no practice? Expert Rev Pharmacoecon Outcomes Res 2015;15(5):733 8. [118] Cowles E, Marsden G, Cole A, Devlin N. A review of NICE methods and processes across health technology assessment programmes: why the differences and what is the impact? Appl Health Econ Health Policy 2017;15(4):469 77. Available from: https://doi.org/10.1007/s40258-017-0309-y. 
[119] Arimura H, Soufi M, Kamezawa H. Radiomics with artificial intelligence for precision medicine in radiation therapy. J Radiat Res 2018;60(1):150 7. Available from: https://doi.org/10.1093/jrr/rry077.


[120] Vayena E, Blasimme A, Cohen IG. Machine learning in medicine: addressing ethical challenges. PLoS Med 2018. Available from: https://doi.org/10.1371/journal.pmed.1002689. [121] National Institutes of Health. All of Us research program. [Cited 28 Sept 2018]. https://allofus.nih.gov/. [122] Shah DT. The future of personalized care: humanism meets artificial intelligence Marshall J Med 2018;4(4)Article 1. Available from: http://dx.doi.org/10.18590/mjm.2018.vol4.iss4.1. [123] Wahl B, Cossy-Gantner A, Germann S, et al. Artificial intelligence (AI) and global health: how can AI contribute to health in resource-poor settings? BMJ Glob Health 2018. [124] Preventive Medicine. American College of Preventive Medicine. www.acpm.org; 2019. [125] Goldston SE, editor. Concepts of primary prevention: a framework for program development. Los Angeles, CA: California Department of Mental Health; 1987. [126] Leavell HR, Clark EG. Preventive medicine for the doctor and his community. New York: McGraw-Hill; 1965. [127] Maas P. What is meant by prevention? Eindhoven University of Technology; 2016. [128] Preventive Medicine. American board of preventive medicine.www.theabpm.org. [129] Google Patent. Digital platform to identify health conditions and therapeutic interventions using an automatic and distributed artificial intelligence system. US Patent Application US10327697B1. United States; 2018. [130] Au R, Ritchie M, Hardy S, et al. Aging well: using precision to drive down costs and increase health quality. Adv Geriatr Med Res 2019;1:e190003. Available from: https://doi.org/10.20900/agmr20190003. [131] Fuellen G, Jansen L, Cohen AA, Luyten W, Gogol M, Simm A, et al. Health and aging: unifying concepts, scores, biomarkers and pathways. Aging Dis 2018. Available from: https://doi.org/10.14336/ AD.2018.1030. [132] Bloom DE, Cafiero ET, Jané-Llopis E, Abrahams-Gessel S, Bloom LR, Fathima S, et al. The global economic burden of non-communicable diseases. Geneva: World Economic Forum; 2011. [133] O’Donnell MJ, Chin SL, Rangarajan S, Xavier D, Liu L, Zhang H, et al. Global and regional effects of potentially modifiable risk factors associated with acute stroke in 32 countries (INTERSTROKE): a case-control study. Lancet 2016;388(10046):761 75. Available from: https://doi.org/10.1016/S0140-6736(16)30506-2. [134] Islami F, Goding Sauer A, Miller K, Siegel R, Fedewa S, Jacobs E, et al. Proportion and number of cancer cases and deaths attributable to potentially modifiable risk factors in the United States. CA Cancer J Clin 2018;68(1):31 54. Available from: https://doi.org/10.3322/caac.21440. [135] Kansagra SM, Kennelly MO, Nonas CA, Curtis CJ, Van Wye G, Goodman A, et al. Reducing sugary drink consumption: New York city’s approach. Am J Public Health 2015;105(4):e61 4. Available from: https://doi.org/10.2105/AJPH.2014.302497. [136] University of Nottingham. Artificial intelligence can predict premature death, study finds. ScienceDaily; 2019. [137] Weng SF, Vaz L, Qureshi N, et al. Prediction of premature all-cause mortality: a prospective general population cohort study comparing machine-learning and standard epidemiological approaches. PLoS One 2019;14(3):e0214365. Available from: https://doi.org/10.1371/journal.pone.0214365. [138] Houssami N, Kirkpatrick-Jones G, Noguchi N, et al. Artificial intelligence (AI) for the early detection of breast cancer: a scoping review to assess AI’s’s potential in breast screening practice. Expert Rev Med Devices 2019;16(5):351 62. 
Available from: https://doi.org/10.1080/17434440.2019.1610387. [139] Rao A, Kale MS. Characterizing the role of the primary care provider in preventive health exams: NAMCS 2011 2014. J Gen Intern Med 2019. [140] Gupta BS, Cole AP, Marchese M, et al. Use of preventive health services among cancer survivors in the U.S. Am J Preventive Med 2018;55(6):830 8. [141] CDC Foundation. What is public health. CDC.gov; 2019.


[142] WHO. Epidemiology. 2017. http://www.who.int/topics/epidemiology/en/. 15:40:58. [143] APHA. What is public health. American Public Health Association (APHA.org); 2019. [144] CDC. The public health system. USA.gov; 2018. [145] Trishan Panch T, Pearson-Stuttard J, Greaves F, et al. Artificial intelligence: opportunities and risks for public health. Lancet 2019. Available online: https://doi.org/10.1016/S2589-7500(19)30002-0. [146] Hood LE. P4 medicine and the democratization of health care. Institute for Systems Biology. NRJM Catalyst; 2017. [147] Hood L, Flores M. A personal view on systems medicine and the emergence of proactive P4 medicine: predictive, preventive, personalized and participatory. N Biotechnol 2012;29(6):613 24. Available online: http://dx.doi.org/10.1016/j.nbt.2012.03.004. [148] Hood LE. Genomics and P4 medicine. Genomics for Everyone; 2019. [149] Toledo RA, Sekiya T, Longuini VC, Coutinho FL, Lourenc o̧ Jr DM, Toledo SP. Narrowing the gap of personalized medicine in emerging countries: the case of multiple endocrine neoplasias in Brazil. Clinics. 2012;67(Suppl. 1):3 6. Available from: http://dx.doi.org/10.6061/clinics/2012(Sup01)02. [150] Psaty BM, Dekkers OM, Cooper RS. Comparison of 2 treatment models: precision medicine and preventive medicine. JAMA 2018;320(8):751 2. Available from: http://dx.doi.org/10.1001/jama.2018.8377. [151] Rosa G, Gameiro I, Sinkunas V, et al. Precision medicine: changing the way we think about healthcare. Clinics 2018;73:e723. [152] Trang Vo T, Hart JE, Laden F, et al. Emerging trends in geospatial artificial intelligence (geoAI): potential applications for environmental epidemiology. Environ Health 2018;17:40. Available from: https://doi.org/10.1186/s12940-018-0386-x. [153] Baker D, Nieuwenhuijsen MJ. Environmental epidemiology: study methods and application. New York, NY: Oxford University Press; 2008. [154] Hart JE, Puett RC, Rexrode KM, et al. Effect modification of long-term air pollution exposures and the risk of incident cardiovascular disease in US women. J Am Heart Assoc 2015;4(12). [155] Dzau VJ, Leshner AI. Public health research on gun violence: long overdue. Ann Intern Med 2018. [156] Institute of Medicine, National Research Council. Priorities for research to reduce the threat of firearmrelated violence. Washington, DC: National Academies Press; 2013. [157] Bostic B. Using artificial intelligence to solve public health problems. Becker’s Health IT & CIO Report; 2018. [158] Vogel L. AI opens new frontier for suicide prevention. CMAJ 2018;190(4):E119. Available from: https://doi.org/10.1503/cmaj.109-5549. [159] Stern C. How to solve homelessness with artificial intelligence. OZY; 2019.

5 AI applications in diagnostic technologies and services

Let's begin this chapter on current AI diagnostic and treatment technologies with the "Top 10" parlor game we introduced way back in Section 1 Introduction of this book. The game was merely to identify a timeframe and, within the limits of that timeframe, select your top 10 (or top 5) "most disruptive technologies." Then, have each of your willing friends do the same. The common denominators and differences among your listing and theirs would undoubtedly produce an interesting array of pro and con discussions. Given those game rules, let's try it (again) using as the topics, those of this upcoming Chapter 5 on "diagnostic and treatment technologies." To make it more manageable for this brief example, let's do a "Top 5" listing of the most disruptive "diagnostic and treatment technologies" that will dominate the medical landscape by the year 2025. We'll use some Internet listings (dozens of "Top 5 and Top 10" listings on the Internet, especially healthcare technologies) for you to compare with yours. As in Section 1, I'll take a shot at my "Top 5" list (below). But before you read mine, make 1 of your own so you can compare your choices with mine and more so, with some of the Internet listings we'll review. And don't cheat and look up Internet listings first (I didn't - honestly). And trust me! There's a pony in here somewhere, I promise. My Top 5 list of the most disruptive diagnostic and treatment technologies by 2025 includes:

1. DNA intrauterine (prenatal) diagnoses;
2. Routine genomic health profiling;
3. Population health and precision medicine for preventive health;
4. Diagnostic imaging using AI and deep learning algorithms;
5. Telemedicine Internet diagnoses and remote management.

Now that we've made our lists, let's go to the Internet and find some lists from the professional analysts and futurists that make projections for a living. Not all lists are projected out to 2025 but are often still worth noting because of their interesting choices. Just about every significant listing I reviewed had common denominators between them (mine included), which begins to hint at where I'm going with this exercise. For openers, the prestigious firm of Deloitte Consulting has a list of "Life Sciences and Health Care Predictions" up to the year 2022 (close enough!). Their list includes the following [1]:

1. Managing healthcare through the genome*;
2. "Smart healthcare" (wearables, digital, IoTs - see Chapter 3, page 55)*;
3. AI and machine learning to increase the pace and productivity of care*;
4. Artificial intelligence (AI) in health data analysis*;
5. Precision medicine (Personalized healthcare)*.

Wow! Five out of five of their Top 5 are directly or indirectly related to AI. Tells you something? McKinsey and Company (Healthcare Systems and Services) had a "Top 8" list, so I included them all [2]:

1. Connected and cognitive devices*;
2. Electroceuticals;
3. Targeted and personalized medicine*;
4. Robotics process automation*;
5. 3D printing;
6. Big data and analytics*;
7. Artificial intelligence (AI)*;
8. Blockchain*.

There is an interesting list ("Top 6") by 2025 on a website (Nexter.org) that bills itself as ". . .a new generation of virality. . .ready to offer our readers the latest hot stories and latest news." This somewhat unique list includes:

1. Electronic skins that can measure vital signs;
2. Digital pills with embedded sensor reporting to the prescribing doctor*;
3. 3-D Bioprinting to replace organs with 3-D-printed copies;
4. Recreational cyborgs as human-machine parts and a new fashion trend;
5. Genomics: a medical tool to prevent and cure diseases*;
6. Robots and AI as surgery assistants and remote nurses*.

And finally, a "Top 5" list for 2025 from an economics perspective (The New Economy Journal), including [3]:

1. Gene therapy*;
2. Virtual reality;
3. Immuno-oncology*;
4. Chatbots*;
5. Artificial Intelligence (AI)*.

Yet another four out of five list of AI technologies. Starting to see a pony? Now, finally, to make my point about this little game, whether it's serious consulting firms, healthcare analysts, trendy news sources, or economists, all include AI in their "Top" projections for the coming 3–5 years. And if you count up the items that I identified by an asterisk (*) after them, you will see that 17 of the total 24 predictions or 70.8% are directly or indirectly related to AI. This kind of a review, albeit "very" informal, suggests that AI has the potential to have a dominant influence in our healthcare going forward. That being the case, I would hope the information from the previous Chapter 4 about AI and the business and administration of healthcare, and the information in the remaining Chapters 5–8, you will consider relevant to you, your friends (from the "Top 10" games) and your loved ones.

This Chapter 5 will address AI applications in diagnostic technologies and services. You will certainly be familiar with most, if not all of the technologies and services we'll be discussing, and I think you will find the AI applications regarding them informative and enlightening. Chapter 6 will cover AI applications in what we might consider the "implementation half" of the healthcare delivery system, i.e., medical therapies and services, those which impact the patient directly. Chapter 7 ("AI applications in prevalent diseases and disorders") will address the bioscience and AI's applications in each of the major disease categories, all of which you are well aware of, and some, I'm sure, that have and will continue to impact your life. And finally, as a late but necessary addition to this book, Chapter 8 will address "SARS-CoV-2 and COVID-19 Pandemic." Again, I think you will find the information compelling. Indeed, the combination of these 4 remaining Chapters 5–8 should provide a "launchpad" for your further understanding, appreciation, and management of your health and wellness and of those for whom you care into the future.

We will approach Chapter 5's material in the following "guidebook" format. First, we will address 3 significant categories of AI-assisted diagnostic technologies, including:

1. Diagnostic imaging;
2. Laboratory testing; and
3. Genetic and genomic testing.

Next, we will address some "additional diagnostic technologies," some of which are playing an ever-increasing role in medical diagnosis due to their evolving AI relationships and applications. They include:

1. Vital signs;
2. Electrodiagnosis;
3. Telemedicine (aka Telehealth);
4. Chatbots;
5. Expert systems.

For each of the diagnostic categories and services, we will give a brief description of the relevant technologies and services in the category. Then we will discuss AI’s influences on each of the specific technologies and services. And finally, for each technology and service, I will present 3 literature reviews of recent papers (from among thousands in the literature) on the respective topics, some or all of which will hopefully be of interest to the readers of this guidebook.

5.1 Major diagnostic technologies [4] and their AI applications

Nothing in healthcare is more important than a timely and accurate diagnosis. AI can do nothing more valuable for humanity than to make a diagnosis as good as it can be. Few in healthcare doubt that AI is indeed bringing the art and science of clinical diagnosis to new levels. And in this particular arena of healthcare, the concerns over "replacing the doctor" are moot in that the AI-assisted diagnostic tools are, in fact, tools and technologies "assisted" by AI and managed by, controlled by, and evaluated for decision-making by the "human" doctor, nurse, or appropriate health professional.

The process of AI-assisted diagnostic technologies can be described as a combination of hardware for acquisition and data collection (input layer), AI software algorithms for interpretation (inner layer), and results (output layer) (remember the computer layers from Chapters 2 and 3?). This distinction between the hardware and software of diagnostic technologies will be an excellent way to categorize the vast array of methods and types of clinical testing done in routine healthcare. The categories (Table 5–1), divided by the nature and goals of the general testing category, and then each test within each category, will be sequenced and presented as described in the introduction to this chapter.

Table 5–1 Diagnostic testing procedures.

Diagnostic imaging:
• X-Rays (Conventional radiography);
• Mammography;
• Fluoroscopy;
• Radiomics;
• CT Scan;
• MRI Scan;
• Nuclear medicine scan;
• Ultrasound;
• Endoscopy;
• Fundus imaging;
• Medical photography

Laboratory testing:
• Diabetic testing (a1c, FPG, RPG, OGTT);
• Hepatitis testing;
• Laboratory tests;
• Metabolic panel;
• Thyroid tests;
• Kidney tests;
• Liver function tests

Genetic testing (and cancer screening):
• Biopsy;
• Genetic testing;
• Prenatal testing

Additional diagnostic technologies:
• Vital signs:
a. Blood pressure;
b. Heart rate;
c. Respiratory rate;
d. Temperature
• Electrodiagnosis;
• Telemedicine;
• Concurrent medical conditions ("Comorbidity");
• Expert Systems;
• Chatbots

5.1.1 Diagnostic imaging

Among the most promising innovative areas in healthcare is the application of AI in medical imaging, including, but not limited to, image processing and interpretation [5]. Along with clinical laboratory analysis (blood, urine, cultures), the process of diagnostic imaging is arguably the most valuable clinical information we can gather in the diagnostic process. With the irreversible increase in imaging data and the possibility to identify findings that humans can or cannot detect, radiology is now moving from a subjective perceptual skill to a more objective science [6]. Moreover, deep learning networks (convolutional neural networks - CNNs) have led to more robust models for radiomics (discussed below, page 139). Radiomics is an emerging field that deals with the high-throughput extraction of peculiar quantitative features from radiological images [7].

A staggering 90% of all healthcare data comes from medical imaging [8]. This statistic being the case, it's exciting to realize that AI's deep learning (CNN) greatest strength lies in pattern recognition (e.g., Graphic Processing Unit or GPU hardware and software [see Chapter 3, page 48]) and thus, image analysis. No matter what the image (pattern) may be (Figs. 5–1 and 5–2) [9], using Bayesian probabilities and inference logic algorithms, AI's deep learning CNN process classifies the image for diagnosis. This makes AI-assisted clinical diagnostic image analysis one of, if not the most important advance in clinical AI healthcare applications. AI algorithms look at medical images to identify patterns after being trained using vast numbers of examinations and images. Those systems will be able to give information about the characterization of abnormal findings, mostly in terms of conditional probabilities to be applied to Bayesian decision-making [10]. AI systems look at specific labeled structures and also learn how to extract image features either visible or invisible to the human eye. This approach mimics human analytical cognition, allowing for better performance than that obtained with old CAD (computer-aided detection) software [11]. The current applications of these diagnostic imaging algorithms exceed all of the other approved and clinically utilized AI healthcare applications [12].

FIGURE 5–1 Convolutional Neural Networks (CNNs). Similar to the convolutional neural networks described in Chapter 3 (Figure 3–5), AI's deep learning CNN process is used to classify the image for diagnosis. In this example, 5 convolutional layers are followed by 3 fully connected layers, which then output a probability of the image belonging to each class. These probabilities are compared with the known class (stroke in the training example) and can be used to measure how far off the prediction was (cost function), which can then be used to update the weights of the different kernels and fully connected parameters using back-propagation. When the model training is complete and deployed on new images, the process will produce a similar output of probabilities, in which it is hoped that the true diagnosis will have the highest likelihood. Courtesy of American Journal of Neuroradiology.

Radiological imaging methods are used to investigate regions of the body to detect potential abnormal pathology and aid diagnosis. Through these methods, large volumes of complex digital imaging data are generated from regional or whole-body scanning, which creates a challenge to "reading and interpreting" images. Combined with radiomics, a paradigm shift in radiology and all of medicine is occurring through the use of AI and advanced machine and deep learning algorithms. Neuroradiology focuses on diagnosing conditions of the spine, neck, head, and central nervous system using computed tomography (CT) or magnetic resonance imaging (MRI) machines. In such neuroradiological imaging, rapid diagnosis of acute neurological illnesses such as stroke, cranial aneurysm, hemorrhaging, trauma, etc. is critical. Computer-assisted diagnosis (CAD) using 3-D convolutional neural networks (3D-CNN) to screen head CT scans, learned from a dataset of 37,236 scans, was compared to standard neuroradiologic methods [13]. This AI method to triage radiology workflow and accelerate the time to diagnosis went from minutes to seconds in a clinical environment.

Finally, we must remember that AI mimics human intelligence. Radiologists are key people for several current AI challenges, such as the creation of high-quality training datasets, the definition of the clinical task to address, and interpretation of obtained results. Radiologists may play a pivotal role in the identification of clinical applications where AI methods may make a difference. They represent the final user of these technologies, who knows where they can be applied to improve patient care. For this reason, their point of view is crucial to optimize the use of AI-based solutions in the clinical setting. The application of AI-based algorithms often leads to the creation of complex data that needs to be interpreted and linked to their clinical utility. In this scenario, radiologists may play a crucial role in data interpretation, cooperating with data scientists in the definition of useful results [14].
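To make the pipeline just described a little more concrete, here is a minimal, illustrative sketch (in Python/PyTorch) of a CNN image classifier of the kind shown in Fig. 5–1: stacked convolutional layers followed by a fully connected layer whose outputs are converted to class probabilities. This is not the model from any study cited in this chapter; the class labels, layer sizes, and input size are hypothetical.

```python
import torch
import torch.nn as nn

class TriageCNN(nn.Module):
    """Toy CNN: convolutional feature extraction followed by a linear classifier."""
    def __init__(self, num_classes=3):           # hypothetical classes, e.g., normal / stroke / other
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.features(x)                      # learned image features
        x = torch.flatten(x, 1)
        return self.classifier(x)                 # raw class scores (logits)

model = TriageCNN()
scan = torch.randn(1, 1, 256, 256)                # one grayscale image: (batch, channel, H, W)
probs = torch.softmax(model(scan), dim=1)         # probabilities per class, as in Fig. 5-1
print(probs)
```

In practice, such a network would be trained by back-propagation on many labeled examinations, exactly as the figure describes, before its probability outputs could be trusted for triage.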


FIGURE 5–2 Comparison of machine learning vs. deep learning CNNs. AI systems look at specific labeled structures and also learn how to extract image features either visible or invisible to the human eye. The comparison in this Figure between classic machine learning and deep learning approaches is applied to a classification task. Both approaches use an artificial neural network organized in the input layer (IL), hidden layer (HL) and output layer (OL). The deep learning approach avoids the design of dedicated feature extractors by using a deep neural network that can represent complex features as a composition of simpler ones. Courtesy of American Journal of Neuroradiology.
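Since the discussion above frames AI outputs as conditional probabilities feeding Bayesian decision-making, a small worked example may help. The sketch below (illustrative numbers only, not drawn from any cited study) applies Bayes' theorem to combine a finding's sensitivity and specificity with disease prevalence to obtain the post-test probability of disease:

```python
def post_test_probability(prevalence, sensitivity, specificity):
    """Bayes' theorem: P(disease | positive finding)."""
    p_pos_given_disease = sensitivity
    p_pos_given_healthy = 1 - specificity
    p_positive = (p_pos_given_disease * prevalence
                  + p_pos_given_healthy * (1 - prevalence))
    return p_pos_given_disease * prevalence / p_positive

# A finding detected with 90% sensitivity and 95% specificity, for a condition with
# 1% prevalence, is still more likely to be a false alarm than true disease:
print(round(post_test_probability(prevalence=0.01, sensitivity=0.90, specificity=0.95), 3))  # ~0.154
```

The point of the example is that even a seemingly accurate detector yields mostly false alarms when the condition it flags is rare, which is why prevalence (the prior) matters when interpreting AI outputs.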

5.1.1.1 Categories of diagnostic imaging

• X-Ray (conventional radiography): There are many types of medical imaging procedures that all work on the same principle. An X-ray beam passes through the body, and specific internal structures either absorb or scatter it, with the remaining X-rays producing a pattern sent to a detector on film or a computer screen that is recorded or processed by a computer. Radiography is a technique for generating and recording an x-ray pattern to provide a static image(s) after the termination of the exposure. It is used to diagnose or treat patients by recording images of the internal structure of the body to assess the presence or absence of disease, foreign objects, and structural damage or anomaly. The recording of the pattern may occur on film or through electronic means.

5.1.1.1.1 AI's influence on conventional radiography [15]

From the early days of X-ray imaging in the 1890s to more recent advances in CT, MRI, and PET scanning, medical imaging continues to be a pillar of medical treatment. Moreover, and in contrast to traditional methods based on predefined features, as deep learning algorithms progress, that is, as more data are generated every day with ongoing research efforts, results will provide greater improvements in performance. All these advances promise increased accuracy and reduction in the number of routine tasks that exhaust time and effort.


Enabling interoperability among the multitude of AI applications that are currently scattered across healthcare will result in a network of powerful tools. Utilizing such data to train AI on a massive scale will enable a robust AI that is generalizable across different patient demographics, geographic regions, diseases, and standards of care.

5.1.1.1.2 Literature reviews re AI's influence on conventional radiography

1. Automated triaging of adult chest radiographs with deep artificial neural networks [16]: This study aimed to develop and test an AI system, based on deep convolutional neural networks (CNNs), for automated real-time triaging of adult chest radiographs based on the urgency of imaging appearances. A data set of 470,388 consecutive adult chest radiographs was selected for deep learning. Annotation of the radiographs was automated by developing an NLP system that was able to process and map the language used in each radiology report [17]. NLP performance was excellent, achieving a sensitivity of 98%, specificity of 99%, PPV (positive predictive value) of 97%, and NPV (negative predictive value) of 99% for normal radiographs and a sensitivity of 96%, specificity of 97%, PPV of 84%, and NPV of 99% for critical radiographs. The NLP system was able to extract the presence or absence of almost all the radiologic findings within the free-text reports with a high degree of accuracy. The computer vision algorithms were used to build an automated radiograph prioritization system. AI performance was good, with a sensitivity of 71%, specificity of 95%, PPV of 73%, and NPV of 94% for normal radiographs and a sensitivity of 65%, specificity of 94%, PPV of 61%, and NPV of 95% for critical radiographs. This NLP system was able to extract the presence or absence of radiologic findings within the free-text reports with a high degree of accuracy. It was also able to assign a priority level with a sensitivity of greater than 90% and specificity of greater than 96%, as assessed with the reference standard data set. Similarly, the deep CNN-based computer vision system was able to separate normal from abnormal chest radiographs with a sensitivity of 71%, specificity of 95%, and NPV of 94%. This deep learning system, developed on the institutional data set (470,388) of adult chest radiographs, was able to interpret and prioritize chest radiographs. Thus, abnormal radiographs with critical or urgent findings could be queued for real-time reporting and completed sooner than the current system [16].
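The study above reports its results as sensitivity, specificity, PPV, and NPV. For readers unfamiliar with those terms, the short sketch below shows how each is derived from the four cells of a confusion matrix; the counts are invented for illustration and are not the study's data.

```python
def triage_metrics(tp, fp, tn, fn):
    """Derive the four reporting metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)    # abnormal radiographs correctly flagged
    specificity = tn / (tn + fp)    # normal radiographs correctly passed
    ppv = tp / (tp + fp)            # positive predictive value
    npv = tn / (tn + fn)            # negative predictive value
    return sensitivity, specificity, ppv, npv

# Invented counts for illustration: 96 true positives, 16 false positives,
# 940 true negatives, 4 false negatives.
print(triage_metrics(tp=96, fp=16, tn=940, fn=4))
```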


2. Artificial intelligence rivals radiologists in screening X-rays for certain diseases [18]: A new artificial intelligence algorithm, CheXNeXt, developed at Stanford University, can reliably screen chest X-rays for more than a dozen types of disease in less time than it takes to read this sentence. The algorithm simultaneously evaluates X-rays for a multitude of possible maladies and returns results that are consistent with the readings of radiologists. Scientists trained the algorithm to detect 14 different pathologies. For 10 diseases, the algorithm performed as well as radiologists. For 3, it underperformed compared with radiologists, and for 1, the algorithm outdid the experts. Besides serving underserved areas, algorithms like CheXNeXt could one day expedite care, empowering primary care doctors to make informed decisions about X-ray diagnostics faster, without having to wait for a radiologist. A graduate student, Pranav Rajpurkar, said: "The algorithm has evaluated over 100,000 X-rays so far, but now we want to know how well it would do if we showed it a million X-rays — and not just from 1 hospital, but from hospitals around the world." Scientists used about 112,000 X-rays to train the algorithm. A panel of 3 radiologists then reviewed a different set of 420 X-rays, one by one, for the 14 pathologies. Their conclusions served as a "ground truth" — a diagnosis that experts agree is the most accurate assessment — for each scan. "We should be building AI algorithms to be as good or better than the gold standard of human, expert physicians. Now, I'm not expecting AI to replace radiologists any time soon. Still, we are not truly pushing the limits of this technology if we're just aiming to enhance existing radiologist workflows," Rajpurkar said. "Instead, we need to be thinking about how far we can push these AI models to improve the lives of patients anywhere in the world."

3. Artificial intelligence on the identification of risk groups for osteoporosis [19]: Osteoporosis is an osteometabolic disease characterized by low bone mineral density (BMD) and deterioration of the microarchitecture of the bone tissue, causing an increase in bone fragility and consequently leading to an increased risk of fractures. It affects all bones in the body and shows no signs or symptoms until a fracture occurs. The decrease of bone mineral density occurs with aging, and fracture rates increase over the years, causing morbidity and some mortality [20]. The goal of this paper was to present a critical review of the central systems that use artificial intelligence to identify groups at risk for osteoporosis or fractures. The systems considered for this study were those that fulfilled the following requirements: the range of coverage in diagnosis, low cost, and capability to identify more significant somatic factors. The application of artificial intelligence showed itself to be adequate for the prognosis of the disease or fracture. The critical review concluded that the construction of a hybrid system composed of artificial intelligence with a simplified method of examination of bone mineral density could provide better results. Given the proposal of greater population coverage, the system will have to deal with a high level of data complexity.

• Mammography: Mammography is a type of radiography used to capture images (mammograms) of internal structures of the breasts. Thus, mammography can detect breast cancer in its early, often treatable stages. Two types of procedures include:

1. Screen-film mammography, where x-rays are beamed through the breast to a cassette containing a screen and film that must be developed.
2. Full-field digital mammography, where x-rays are beamed through the breast to an image receptor. A scanner converts the information to a digital picture that is sent to a digital monitor and/or a printer.


5.1.1.1.3 AI's influence on mammography

Radiologic breast cancer screening has witnessed significant changes along with the successes of deep learning in biomedical imaging in general. One such advancement, published in Radiology Journal, was developed by Rodriguez-Ruiz et al. [21]. The authors compared radiologists' performances for reading mammographic examinations unaided versus aided (supported by an AI system) and revealed that radiologists improved their cancer detection performance at mammography when using an AI system. Results also indicated that this benefit was obtained without requiring additional reading time. In a complementary study, the data from 7 countries was curated by 101 radiologists. This broad experimental setting included a total of 2652 exams, and the stand-alone AI system was statistically similar to that of radiologists' interpretations. The sensitivity and specificity of the system were also found to be better than the majority of radiologists, but always worse than the best radiologist, which is not surprising. These results indicate that AI tools can be used in much broader settings than have been used before in the breast cancer diagnosis routine. However, for this to become regular clinical practice, there is still an expectation that a lot more experimentation should be done in both retrospective and prospective settings for independent validations.

The success of deep learning as a tool to build AI systems is pushing the performance even closer to humans in computer-aided diagnosis and screening in radiology rooms. The subjectivity in terms of the underlying data (the type of lesions, racial and age differences, device manufacturers) when training the deep learning models remains a challenge that needs to be carefully addressed. A stand-alone AI system can supplement expert radiologists as a second reader, which can translate into a reduction in reading time. For AI to be used up to its full potential in clinical practice, more studies should be performed in real-world settings. The increasing number of scans for diagnosing breast cancer generates a tremendous workload for radiologists. For efficient screening and precise diagnosis, AI can play its role, as shown in recent studies on breast cancer screening.

5.1.1.1.4 Literature reviews re AI's influence on mammography

1. Reduction of false-positive markings on mammograms [22]: A significant disadvantage in using currently available computer-aided detection (CAD) systems is a high rate of false-positive marks. The usefulness of CAD can be assessed by a count of these marks on each image, which is false positives per image (FPPI). False-positive marks may distract the interpreting radiologist with too much "noise." They could lead to unnecessary workups and biopsies [23]. Therefore, high FPPI is a common complaint of radiologists when reviewing CAD marks. Approximately $4 billion per year are spent in the USA on false-positive recalls and workups. The cost for diagnostic mammogram workups alone is $1.62 billion [24]. For the patient, these false-positive screening mammograms create unnecessary anxiety and may lead the patient to a biopsy that was not needed.
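Before turning to the study's head-to-head comparison (described next), here is a minimal sketch of the FPPI measure just defined: total false-positive marks divided by the number of images read. The per-image mark counts below are hypothetical, not the study's data.

```python
def fppi(marks_per_image):
    """False positives per image: total false-positive marks / number of images."""
    return sum(marks_per_image) / len(marks_per_image)

conventional_cad = [2, 0, 3, 1, 2, 0, 1, 2]   # hypothetical FP marks per image, conventional CAD
ai_cad = [0, 0, 1, 0, 1, 0, 0, 1]             # hypothetical FP marks per image, AI-CAD

baseline, improved = fppi(conventional_cad), fppi(ai_cad)
print(f"FPPI {baseline:.2f} -> {improved:.2f} "
      f"({100 * (baseline - improved) / baseline:.0f}% reduction)")
```

A reduction figure like those quoted below is simply the relative drop between the two systems' FPPI values.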


This study compares the performance of a recently developed AI-CAD algorithm directly to a commercially available conventional CAD software using the same test dataset of clinical cases. A retrospective study was performed on a set of 250 2-dimensional (2D) full-field digital mammograms (FFDM) collected from a tertiary academic institution based in the USA, which specializes in cancer healthcare. All of the mammograms were initially interpreted using the ImageChecker CAD, version 10.0 (Hologic, Inc., Sunnyvale, CA). Inclusion criteria were asymptomatic female patients of all ages and any race with 2D screening mammograms performed at the academic institution between 1 January 2013 and 31 March 2013, and whose mammogram records contain archived CAD markings. Mastectomy and breast implant patients were excluded from the analysis. CAD displayed false marks on 200 of the 242 non-cancer cases (83%) and no marks at all on 42 cases (17%). AI-CAD software displayed wrong marks on 126 cases of the 242 non-cancer cases (52%) and no marks at all on 116 cases (48%). This equates to 37% fewer cases with marks with AI-CAD and simultaneously 64% more mark-free cases with AI-CAD compared with CAD. There was a 69% reduction in overall FPPI with AI-CAD compared to CAD. Evaluation by lesion type shows the outperformance of AI-CAD for both masses and calcifications. There was an overall 83% reduction in FPPI for calcifications with AI-CAD and a 56% reduction for masses. FPPI for masses was higher than FPPI for calcifications for both systems. The significant decrease in FPPI with AI-CAD may translate into fewer false recalls, improved workflow, and decreased costs. The economic impact of the false-positive recall is considered to be one of the significant drawbacks of screening mammography. But more so, the known psychological and physical risks for patients related to false-positive recalls are incalculable [25]. The scare of a potential breast lesion can be a frightening experience. Even when the workup results are benign, the psychological effects of anxiety can last up to 3 years [26].

2. Visual search in breast imaging: a review [27]: The diagnostic accuracy of radiologists' interpretation of mammograms is limited by human factors producing both false-positive and false-negative errors. Identifying causes of diagnostic errors may be achievable by better understanding visual searching in breast images and finding ways to reduce them. Improving education for radiology residents is also essential. Studies showed that 70% of missed lesions on mammograms attract radiologists' visual attention and that a plethora of different reasons, such as the satisfaction of search, incorrect background sampling, and erroneous first impression, can cause diagnostic errors in the interpretation of mammograms. Recently, highly accurate tools, which rely on both eye-tracking data and the content of mammograms, have been proposed to provide feedback to the radiologists. In the past few years, deep learning has led to improving the diagnostic accuracy of computerized diagnostic tools, and visual search studies will be required to understand how radiologists interact with the prompts from these tools, and to identify the best way to utilize them.


Improving these tools and determining the optimal pathway to integrate them into the radiology workflow could be a possible line of future research.

3. Assessing cancer risk from mammograms: deep learning is superior to conventional risk models [28]: On 28 March 2019, the U.S. Food and Drug Administration announced a proposed rule [29] to update the landmark policy passed by Congress in 1992 to ensure the quality of mammography for early breast cancer detection (known as the Mammography Quality Standards Act). Yala et al. [30] retrospectively examined nearly 90,000 consecutive screening mammographic examinations from almost 40,000 women obtained over 4 years (2009–2012) at Massachusetts General Hospital (Boston, Mass). The authors defined 4 breast cancer risk models intended to quantify the probability of discovering breast cancer within 5 years after a mammographic screening examination that was negative for disease. The third model, which used only DL analysis of full-resolution mammography, outperformed the first and second models. This suggested that there was more information about breast cancer risk on the mammograms than in the clinical data used by standard risk models. Because breast density scores are included in the standard models, it follows that those density scores do not reflect all the relevant information on the mammogram. DL results are not easily explainable to humans. DL methods are so-called black boxes (see Chapter 3, page 55) that provide little guidance about why conclusions were reached. Yala does not speculate on what exactly makes 1 mammography image predictive of the future occurrence of breast cancer. The effort to make deep learning (DL) methods fully explainable is an area of active research in academia and industry [31]. At the moment, it is unclear whether DL methods will ever be fully explainable (i.e., explainable AI, or XAI). Are we willing to follow computer recommendations in radiology without completely understanding them if we know that they provide sound advice? There can be little doubt that more deep learning studies will produce further advances of the sort described.

• Fluoroscopy: Fluoroscopy is a continuous X-ray image displayed on a monitor that provides the ability of real-time monitoring of a medical procedure or the course of a contrast agent ("dye") through the body. It is used in a wide variety of examinations and procedures to diagnose or treat patients. Some examples are [32]:
• Barium X-rays and enemas (to view the gastrointestinal tract);
• Catheter insertion and manipulation (to direct the movement of a catheter through blood vessels, bile ducts or the urinary system);
• Placement of devices within the body, such as stents (to open narrowed or blocked blood vessels);
• Angiograms (to visualize blood vessels and organs);
• Orthopedic surgery (to guide joint replacements and treatment of fractures).


There are some risks associated with fluoroscopy, similar to other X-ray procedures. Varying with the given procedure, the patient may receive a relatively high dose of radiation, especially for complex interventional procedures. Methods such as placing stents or other devices inside the body requiring extended periods are at increased risk for excessive radiation doses. The probability, however, that a patient will experience these adverse effects from a fluoroscopic procedure is statistically very small. Fluoroscopy should always be performed with the lowest acceptable exposure for the shortest time necessary to minimize the radiation risk.

5.1.1.1.5 AI's influence on fluoroscopy [33]

Artificial intelligence is a powerful assistant in the fluoroscopy procedure room that allows a physician to integrate historical and real-time imaging information. Moreover, since the human brain's operation inspires AI, it continually learns from each interaction with the operational environment around it, while gradually improving its performance over time without the need for software engineering involvement. The AI system was trained over a set of 51 procedures from 8 clinical sites. Its performance was tested and measured over 18 independent procedures, comprising 398 configurations from 7 clinical sites, to quantify the AI capabilities compared with a trained human operator, offline on prerecorded procedures. The AI component of the navigation and biopsy guidance system demonstrates performance improvement from 80% to 95% in detecting surgical tools on fluoroscopic images. The AI system learned the operational environment, the fluoroscope. Its performance over time, tracking complex anatomy and operational tools on the challenging fluoroscopic imaging, makes the guidance of diagnostic biopsy reliable and meaningful during a procedure. This, in turn, translates into better control of the procedure and high diagnostic results.

5.1.1.1.6 Literature reviews re AI's influence on fluoroscopy

1. Robotics-assisted versus conventional manual approaches for total hip arthroplasty [34]: Several studies have compared robotics-assisted (RA) and conventional manual (CM) approaches for total hip arthroplasty (THA), but their results are controversial. Total hip arthroplasty (THA) is an effective method for the management of severe hip joint disorders. Precise placement of cups and femoral stems is crucial to the efficacy of THA [35]. However, this accuracy is difficult to achieve with the conventional manual (CM) approach. Computer-assisted orthopedic surgery (CAOS) has been performed over the last 30 years. The existing CAOS technologies can be broadly categorized into image-guided (based on computed tomography [CT] or X-ray fluoroscopy) and imageless navigation systems, positioning systems (patient-specific models, self-positioning robots), and semiactive or active robotics-assisted (RA) systems [36].


The advances in computer and artificial intelligence technology have resulted in parallel developments in robot-assisted THA [37]. Optical positioning is the most widely applied method in orthopedics with X-ray fluoroscopy-based navigation. RA-THA achieves the same clinical results as traditional manual techniques, with fewer intraoperative complications and better radiological assessment results. On the other hand, the advantages of the conventional methods are shorter operation time, lower revision rate, and fewer postoperative complications such as dislocation, which may also be related to the surgical approach. Despite some shortcomings and controversies, with the advancement of artificial intelligence technology, we believe that RA hip replacement technology has excellent potential for clinical application.

2. Use of fluoroscopy in endoscopy: indications, uses, and safety considerations [38]: Historically, fluoroscopy was a tool of the radiologist. Interventional cardiologists and vascular surgeons have revolutionized their respective fields by adopting and adapting its use to their practice. This expansion of fluoroscopic utilization has also flourished within the field of endoscopy and continues to evolve. Perhaps the most common use of fluoroscopy in the endoscopy suite is endoscopic retrograde cholangiopancreatography (ERCP). Numerous other therapeutic interventions can be performed, including biliary and pancreatic stent placement, biopsy brushings, and balloon sweeps on the bile duct to remove stones and debris. Fluoroscopy has afforded this great asset in the management of these biliary-pancreatic conditions. A growing indication for fluoroscopy in endoscopy is the placement of enteral stents. These include esophageal, gastric, duodenal, and colonic stents used in the setting of advanced malignancies for the palliative restoration of luminal patency. While the indications for fluoroscopy during endoscopic procedures continue to expand, formal training in radiation exposure and protection is still not widely emphasized during advanced endoscopy training [39]. The risks of adverse radiation effects are almost always outweighed by the patient's benefit from these procedures. However, to improve this risk-to-benefit ratio, mainly since only the patient receives the benefit while both the patient and staff assume exposure to risk, it is imperative that the operator understands the principles behind radiation and how to minimize exposure.

3. Imalogix brings fluoroscopy capabilities to radiation dose management platform [40]: Imalogix, an AI provider of process and workflow solutions, announced the availability of the latest evolution of its platform for diagnostic imaging, interventional radiology, cardiology, and surgery. The Imalogix Platform provides a comprehensive infrastructure to enable healthcare organizations to better understand and manage the process, quality, and safety related to diagnostic imaging services and interventional procedures, and to meet evolving regulatory standards. The Imalogix Platform enables departments utilizing fluoroscopy procedures to manage and enhance the safety and quality of exams through a comprehensive view of peak skin dose (PSD) across their entire patient population. The impending regulations surrounding fluoroscopy from TJC (The Joint Commission) state that this calculation is required for every fluoroscopy exam.


Now, with the ability to instantly calculate PSD by patient and classify this information by exam type, organizations can determine realistic radiation dose estimates for different procedure types to better protect patients and physicians. The Imalogix Platform also automatically alerts designated staff for procedures that emit over 5 Gray (Gy) and tracks cumulative patient dose to help ensure all sentinel events (>15 Gy) are either avoided or flagged for immediate review and patient follow-up.

• Radiomics [41]: The suffix "-omics" has been used numerous times in this book up to this point and will be used extensively through the balance of the book. When used as the suffix of any word in this text, it defines that word as "a field of study in biology" [42] (e.g., genomics, transcriptomics, metabolomics, and radiomics). This may help with the many terms you will encounter with the ending "-omics." In the new era of precision medicine (see Chapter 4, page 101), radiomics is emerging as the research field that will translate associations extracted from qualitative and quantitative clinical images and clinical data, with or without associated gene expression, to support evidence-based clinical decision-making [43]. Dividing the process into separate steps can yield definable inputs and outputs, including image acquisition and reconstruction, image segmentation, feature extraction and qualification, analysis, and model building. Careful evaluation is needed at each step to construct robust and reliable models transferrable into clinical practice for prognosis, non-invasive disease tracking, and evaluation of disease response to treatment.

Different kinds of features can be derived from clinical images. Quantitative traits are descriptors extracted from the images by software implementing mathematical algorithms [44]. They exhibit different levels of complexity and express properties firstly of the lesion shape, describing the shape of the traced region of interest and its geometric properties. Second, textural features [45] calculate the statistical inter-relationships, providing a measure of the spatial arrangement of intensities, and hence of intra-lesion heterogeneity. They can be extracted either directly from the images or after applying different filters or transforms (e.g., wavelet transform). The main innovation of radiomics relies on the omics suffix, created for molecular biology disciplines. This refers to the simultaneous use of large numbers of parameters extracted from a single lesion, which are mathematically processed with advanced statistical methods. The hypothesis is that an appropriate combination of parameters, along with clinical data, can express significant tissue properties, useful for diagnosis, prognosis, or treatment in an individual patient (personalization). Additionally, radiomics takes full advantage of the vast data-analysis experience developed by other -omics disciplines, as well as by big-data analytics (see Chapter 4, page 83) [46].
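To give a flavor of the "quantitative traits" just described, the sketch below computes a few simple first-order intensity features over a segmented region of interest. It is only illustrative: real radiomics pipelines add shape descriptors, gray-level co-occurrence texture measures, and wavelet-filtered variants, and the ROI here is a synthetic stand-in rather than real image data.

```python
import numpy as np

def first_order_features(roi):
    """Simple intensity statistics over a segmented region of interest (2D array)."""
    values = roi.astype(float).ravel()
    counts, _ = np.histogram(values, bins=32)
    p = counts[counts > 0] / counts.sum()
    return {
        "mean": values.mean(),                                   # average intensity
        "variance": values.var(),                                # crude heterogeneity measure
        "skewness": ((values - values.mean()) ** 3).mean() / values.std() ** 3,
        "entropy": float(-np.sum(p * np.log2(p))),               # histogram-based texture measure
    }

roi = np.random.default_rng(0).normal(100, 15, size=(32, 32))    # synthetic stand-in for a lesion ROI
print(first_order_features(roi))
```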

embrace of deep learning has been in the development of economical and increased computational methods with early success in many different areas. For radiomics, the major challenge is the link to biology and function. Deep learning and radiomic methods will transform medical imaging and its application to personalized medicine (see Chapter 4, page 101) in the next 5 years. As deep learning and radiomics methods mature, their use will become part of clinical decision support systems. They can be used to rapidly mine patient data spaces and radiological imaging biomarkers to move medicine towards the goal of precision medicine for patients [46]. 5.1.1.1.8 Literature reviews re AIs influence on radiomics 1. Radiomics with artificial intelligence for precision medicine in radiation therapy [47]: Recently, the concept of radiomics has emerged from radiation oncology. It is a novel approach for solving the issues of precision (personalized) medicine and how it can be performed based on multimodality medical images that are non-invasive, fast, and low in cost. Radiomics is the comprehensive analysis of massive numbers of medical images to extract a large number of phenotypic features (radiomic biomarkers) reflecting cancer traits, and it explores the associations between the characteristics and patients’ prognoses to improve decision-making in precision medicine. Individual patients can be stratified into subtypes based on radiomic biomarkers that contain information about cancer traits that determine the patient’s prognosis. Machine-learning algorithms are boosting the powers of radiomics for prediction of prognoses or factors associated with treatment strategies, such as survival time, recurrence, adverse events, and subtypes. Therefore, radiomic approaches, in combination with AI, may potentially enable practical use of precision medicine in radiation therapy by predicting outcomes and toxicity for individual patients. 2. What can artificial intelligence teach us about the molecular mechanisms underlying disease? [48] While molecular imaging with positron emission tomography (see Nuclear Medicine Scan, PET, page 146) or single-photon emission computed tomography already reports on tumor molecular mechanisms on a macroscopic scale, there is increasing evidence that there are multiple additional features within medical images. These can further improve tumor characterization, treatment prediction, and prognostication. Early reports have already revealed the power of radiomics to personalize and improve patient management and outcomes. What remains unclear is how these additional metrics relate to underlying molecular mechanisms of disease. Furthermore, the ability to deal with increasingly large amounts of data from medical images and beyond in a rapid, reproducible, and transparent manner is essential for future clinical practice. Here AI may have an impact. It encompasses a broad range of ‘intelligent’ functions performed by computers, including language processing (NLP), knowledge representation, problem-solving, and planning. While rule-based algorithms, e.g., computer-aided

diagnosis, have been in use for medical imaging since the 1990s, the resurgent interest in AI is related to improvements in computing power and advances in machine learning (ML). 3. Radiomics in cancer research [15]: Radiographic images, coupled with data on clinical outcomes, have led to the emergence and rapid expansion of radiomics as a field of medical research [49]. Early radiomics studies were largely focused on mining images for a large set of predefined engineered features that describe radiographic aspects of shape, intensity, and texture. More recently, radiomics studies have incorporated deep learning techniques to learn feature representations automatically from example images, hinting at the substantial clinical relevance of many of these radiographic features. Within oncology, multiple efforts have successfully explored radiomics tools for assisting clinical decision making related to the diagnosis and risk stratification of different cancers [50]. Such findings have motivated exploration of the clinical utility of AI-generated biomarkers based on standard-of-care radiographic images [51] with the ultimate hope of better-supporting radiologists in disease diagnosis, imaging quality optimization, data visualization, response assessment, and report generation. • Computed tomography (CT or CAT) scan: CT (CAT) scans are procedures where many X-ray images are recorded as the detector moves around the patient’s body. A radiology technologist performs the CT scan during which you lie on a table inside a large, doughnut-shaped CT machine. The table slowly moves through the scanner, and the X-rays rotate around your body. By repeating this process, several scans are generated in which the computer stacks 1 on top of the other to produce detailed images of organs, bones, or blood vessels. From these cross-sectional images or “slices” a computer reconstructs all the individual pictures of internal organs and tissues. CT scans are used for a long list of diagnostic assessments including (but not limited to) [52]: • detecting bone and joint problems, like complex bone fractures and tumors; • If you have a condition like cancer, heart disease, emphysema, or liver masses, CT scans can spot it or help doctors see any changes; • CT scans show internal injuries and bleeding, such as those caused by a car accident; • CT scans can help locate a tumor, blood clot, excess fluid, or infection; • Used to guide treatment plans and procedures, such as biopsies, surgeries, and radiation therapy; • Doctors can compare CT scans to find out if specific treatments are working. For example, scans of a tumor over time can show whether it’s responding to chemotherapy or radiation. 5.1.1.1.9 AI’s influence on computed tomography (CT or CAT) scans [53] AI has already proven its value to radiologists and pathologists looking to accelerate productivity and improve accuracy. Studies have shown that AI tools can identify features in images quickly and precisely as well, if not better, than humans.
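The stacking of axial slices into a volume described above can be illustrated with a short, self-contained sketch. The array sizes and pixel values below are invented purely for illustration and do not correspond to any real scanner output.

import numpy as np

# Hypothetical stack of axial CT slices: 40 slices of 256 x 256 pixels,
# filled with random attenuation values only so the example runs.
axial_slices = [np.random.rand(256, 256) for _ in range(40)]

# The scanner's computer "stacks" the 2-D slices into a single 3-D volume.
volume = np.stack(axial_slices, axis=0)          # shape: (40, 256, 256)

# Once a volume exists, other cross-sections can be produced by
# re-slicing the same data along different axes.
coronal_slice = volume[:, 128, :]                # a front-to-back section
sagittal_slice = volume[:, :, 128]               # a side-to-side section

print(volume.shape, coronal_slice.shape, sagittal_slice.shape)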

To foster standardized, safe, and effective AI for clinical decision support and diagnostics, the American College of Radiology Data Science Institute (ACR DSI) has released [54] several high-value use cases for artificial intelligence in medical imaging, which will be continuously updated as new opportunities present themselves. They include:
1. Identifying cardiovascular abnormalities;
2. Detecting fractures and other musculoskeletal injuries;
3. Aiding in the diagnosis of neurological diseases;
4. Flagging thoracic complications and conditions;
5. Screening for common cancers.

While more studies will be required to test the utility of AI for these and other use cases, ACR DSI appears confident that medical imaging is ready for artificial intelligence. Supplementing diagnostics and decision-making with AI could offer providers and patients life-changing insights into a variety of diseases, injuries, and conditions that may be difficult to identify with the human eye alone. 5.1.1.1.10 Literature reviews re AI’s influence on computed tomography (CT or CAT) scans 1. Artificial intelligence and cardiovascular computed tomography [55]: Routinely acquired volumetric datasets with large numbers of axial slices are routinely obtained by cardiovascular computed tomography (CT). Review of the axial slices is an integral part of the image analysis, and a critical step is detailed reconstruction of multiple oblique planes and volumes on an advanced 3-D workstation [56]. This is performed by a combination of manual and (semi) automated reconstruction along oblique planes, vascular centerlines, and volumes. This analysis results in a large number of discrete qualitative and quantitative data-points and is increasingly supported by dedicated “smart” software. The combination of large EHRs and automated analysis with ML algorithms allows information gathering, data analysis, and feedback to the individual practitioner and healthcare systems. Similar to the impact of data technology in many aspects of daily life, these changes will impact current models of doctor-patient relationships, with potential benefits for the individual patient and also larger patient populations. It may also have an impact on the work of specialists, including radiologists, but not substituting for them. It will enhance their diagnostic workflow, allowing faster and more precise diagnosis. The role of the future imaging specialist will likely increasingly include generation and management of imaging data, providing access to discrete data beyond the traditional report. This will require close collaboration with IT specialists. 2. Advanced machine learning in action: identification of intracranial hemorrhage on computed tomography scans of the head with clinical workflow integration [57]: Intracranial hemorrhage (ICH) requires prompt diagnosis to optimize patient outcomes. This study hypothesized that machine learning algorithms could automatically analyze computed tomography (CT) of the head, prioritize radiology worklists, and reduce time to diagnosis of ICH. A deep convolutional neural network was trained on 37,074 studies and

subsequently evaluated on 9499 unseen studies. The predictive model was implemented prospectively for 3 months to re-prioritize "routine" head CT studies as "stat" on real-time radiology worklists if an ICH was detected. Time to diagnosis was compared between the re-prioritized "stat" and "routine" studies. AI provided improved timing. Computer-aided diagnosis (CAD) has been an active area of research in the past 5 decades [58]. Starting with the detection of breast cancer on mammograms [59], CAD has been extended to several other diseases such as lung cancer [60], colon cancer [61] and, more recently, several brain disorders such as Alzheimer's disease [62]. Despite all these efforts, only CAD for breast imaging has been widely adopted in clinical practice. Importantly, most clinical CAD systems are based on traditional computer vision techniques and do not utilize deep learning.
3. Disease staging and prognosis in smokers using deep learning in chest computed tomography [63]: Objective CT analysis provides clinically relevant insights into COPD. Still, it is a radiographic method of anatomic and physiologic analysis that relies on the prespecification of radiographic features considered most likely to be associated with specific clinical outcomes. New techniques in computer vision, natural image analysis, and machine learning have enabled the direct interpretation of imaging data, going directly from the raw image data to clinical outcome without relying on the specification of radiographic features of interest [64]. Convolutional neural network (CNN) analysis and other deep learning-based models are trained using large amounts of data from individuals with known outcomes, such as known disease diagnoses or clinical events like death. Once trained, the CNN model can then use data from other individuals to determine their probability for that event, and can rapidly assess risk across large populations without the need for the manual extraction or review of specific clinical or radiographic features [65]. It is hypothesized that deep learning analyses of imaging data could predict clinically relevant outcomes in smokers without the pre-specification of features of interest.
• MRI (Magnetic Resonance Imaging) scan: An MRI scan is a medical imaging procedure using strong magnetic fields and radio waves (radiofrequency energy) to create images. The signal in an MRI image is produced from the protons in fat and water molecules in the body. MRI imaging does not expose you to radiation. During an MRI exam, an electric current is passed through coiled wires to create a temporary magnetic field in a patient's body. Radio waves are sent from and received by a transmitter/receiver in the machine, and these signals are used to make digital images of the scanned area of the body. For some MRI exams, intravenous (IV) drugs given in the arm, such as gadolinium-based contrast agents (GBCAs), are used to change the contrast of the MR image. MRI imaging of the body is performed to evaluate [66]: • organs of the chest and abdomen including the heart, liver, biliary tract, kidneys, spleen, bowel, pancreas, and adrenal glands;

• pelvic organs including the bladder and the reproductive organs such as the uterus and ovaries in females and the prostate gland in males;
• blood vessels (including MR Angiography);
• lymph nodes.
Physicians use an MRI examination to help diagnose or monitor treatment for conditions such as:
• tumors of the chest, abdomen or pelvis;
• diseases of the liver, such as cirrhosis, and abnormalities of the bile ducts and pancreas;
• inflammatory bowel diseases such as Crohn's disease and ulcerative colitis;
• heart problems, such as congenital heart disease;
• malformations of the blood vessels and inflammation of the vessels (vasculitis);
• a fetus in the womb of a pregnant woman.
5.1.1.1.11 AI's influence on MRI scans
One of the greatest strengths of MRIs is their ability to capture enormous amounts of data through their scanning process. To do so, however, requires significant amounts of scanning time while the patient (who frequently is in distress) remains perfectly still inside a chamber that produces claustrophobic effects in many people. The long duration of MRIs is because the machine creates a series of 2D images or slices, which it subsequently stacks to make a 3-D image. NYU has been working on a way to accelerate this process and is now collaborating with Facebook on a project ("FastMRI" [67]) to cut down MRI durations by 90% by applying AI-based imaging tools. AI can capture less data, and therefore image faster, while still preserving, and even enhancing, all the rich information content of the magnetic resonance images. The scan is run faster, collecting less raw data, and a machine learning algorithm trained to do the data-to-image conversion then reconstructs the image at up to 10 times the speed. If the fastMRI data were interpreted traditionally, there would not be enough data to create the 3-D image. The complexity of the MRI instrumentation makes it prone to breakdowns. Many companies are using AI tools to predict maintenance needs before breakdowns occur, avoiding costly downtime. AI algorithms can make proactive predictions about maintenance and prevention. Hospitals use MRI data before, during, and after operations so surgeons can plan how to proceed with their care. At Boston's Brigham and Women's Hospital, an MRI is inside the operating room as part of a more extensive imaging setup called the Advanced Multimodality Image Guided Operating Suite (AMIGO) [68]. With a mass spectrometer added to its AMIGO equipment, machine learning analyzes data collected by that component and compares it to a decade's worth of historical data about brain tumors while looking at a segmented MRI image. Surgeons then benefit from better insights about patients' tumors. As a result, people may undergo fewer future operations because the first attempts are maximally successful. Also, some algorithms are trained on over a million images. That means this process could help physicians feel more confident when making diagnoses, thereby reducing potential mistakes and improper treatment plans.
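A rough sense of the "collect less raw data, then let a trained model do the data-to-image conversion" idea can be conveyed with a toy sketch. The square phantom, the one-in-four sampling of k-space rows, and the zero-filled reconstruction below are illustrative stand-ins, not the actual fastMRI pipeline.

import numpy as np

# Toy "fully sampled" image (a square phantom) standing in for an MR slice.
image = np.zeros((128, 128))
image[32:96, 32:96] = 1.0

# MRI acquires data in k-space (the 2-D Fourier domain of the image).
k_space = np.fft.fft2(image)

# Simulate a faster scan by keeping only about 25% of the k-space rows.
mask = np.zeros_like(k_space, dtype=bool)
mask[::4, :] = True
undersampled = np.where(mask, k_space, 0)

# Naive (zero-filled) reconstruction shows aliasing artifacts; in fastMRI-style
# work, a trained network performs this data-to-image step instead.
naive_recon = np.abs(np.fft.ifft2(undersampled))

rmse = np.sqrt(np.mean((naive_recon - image) ** 2))
print(f"zero-filled reconstruction RMSE: {rmse:.3f}")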

5.1.1.1.12 Literature reviews re AI’s influence on MRI scans 1. Artificial intelligence enhances MRI scans [69]: A research team at Massachusetts General Hospital, Martinos Center for Biomedical Imaging, and Harvard University improved image reconstruction through AI and machine learning. Funded by several NIH components, the work was published on 21 March 2018, in Nature. The researchers used recent advances in technology, including more powerful graphical processing units (GPUs) in computers and artificial neural networks, to develop an automated reconstruction process. They named it AUTOMAP, for automated transform by manifold approximation. To train the neural network, the team used a set of 50,000 MRI brain scans from the NIH-supported Human Connectome Project. The team then tested how well AUTOMAP could reconstruct data using a clinical, real-world MRI machine and a healthy volunteer. They found that AUTOMAP enabled better images with less noise than conventional MRI. The signal-to-noise ratio was better for AUTOMAP than conventional reconstruction (21.6 vs. 17.6). AUTOMAP also performed better on a statistical measure of error known as root-mean-squared-error (6.7% vs. 10.8%). Also, AUTOMAP was faster than the manual tweaking now done by MRI experts. “Since AUTOMAP is implemented as a feedforward neural network, the speed of image reconstruction is almost instantaneous, just tens of milliseconds,” Rosen says. “Some types of scans currently require time-consuming computational processing to reconstruct the images. In those cases, immediate feedback is not available during initial imaging, and a repeat study may be required to identify a suspected abnormality better. AUTOMAP would provide immediate image reconstruction to inform the decision-making process during scanning and could prevent the need for additional visits.” There are many potential applications of AUTOMAP. 2. Artificial intelligence in cancer imaging: clinical challenges and applications [70]: Challenges remain in the accurate detection, characterization, and monitoring of cancers despite improved technologies. Radiographic assessment of disease most commonly relies upon visual evaluations, the interpretations of which may be augmented by advanced computational analyses. In particular, AI promises to make great strides in the qualitative interpretation of cancer imaging by expert clinicians, including volumetric delineation of tumors over time, extrapolation of the tumor genotype and biological course from its radiographic phenotype, prediction of clinical outcome, and assessment of the impact of disease and treatment on adjacent organs. AI may automate processes in the initial interpretation of images and shift the clinical workflow of radiographic detection, management decisions on whether or not to administer an intervention, and subsequent observation to a yet to be envisioned paradigm. However, most studies evaluating AI applications in oncology to date have not been vigorously validated for reproducibility and generalizability. The results do highlight increasingly concerted efforts in pushing AI technology to clinical use and impact future directions in cancer care.

3. Emerging applications of artificial intelligence in neuro-oncology [71]: AI used with radiomics and radiogenomics in neuro-oncologic imaging will improve our diagnostic, prognostic, and therapeutic methods. This is helping to move the field into precision medicine. With the growth of computational algorithms, AI methods are poised to improve the precision of diagnostic and therapeutic approaches in medicine. The area of radiomics in neuro-oncology will likely continue to be at the forefront of this revolution. A variety of AI methods applied to conventional and advanced neuro-oncology MRI data can already delineate infiltrating margins of diffuse gliomas, differentiate pseudo-progression from actual progression, and predict recurrence and survival better than methods used in daily clinical practice. Radiogenomics will also advance our understanding of cancer biology, allowing noninvasive sampling of the molecular environment with high spatial resolution and providing a systems-level understanding of underlying heterogeneous cellular and molecular processes. By providing in vivo markers of spatial and molecular heterogeneity, these AI-based radiomic and radiogenomic tools have the potential to stratify patients into more precise initial diagnostic and therapeutic pathways. This will enable better dynamic treatment monitoring in this era of personalized medicine (see Chapter 4, page 101). Although substantial challenges remain, radiologic practice is set to change considerably as AI technology is further developed and validated for clinical use.
• Nuclear medicine scan: Also called Radioisotope scans or Radionuclide scans, these scans use radioactive substances to see structures and functions inside your body. Nuclear imaging allows for the visualization of organ and tissue structure as well as their function. A radiopharmaceutical absorbed, or "taken up," by a particular organ or tissue can indicate the level of function of the organ or tissue. Nuclear imaging uses a special camera that detects radioactivity. Before the test, a small amount of radioactive material is given as an injection, swallowed, or inhaled. The camera makes images while the patient lies still on a table. Areas where the radionuclide collects in more significant amounts are called "hot spots." Non-absorbed areas appear less bright on the scanned image and are referred to as "cold spots." Nuclear medicine scans are used to diagnose many medical conditions and diseases. Some of the more common tests include the following [72]:
• PET scan (positron emission tomography) is a nuclear medicine exam that produces a 3-dimensional image of functional processes in the body. A PET scan uses a small amount of a radioactive drug to show differences between healthy and diseased tissue. Some common uses of PET include [73]:
- to assess cancer;
- to determine the risk of cancer in a lung nodule;
- to evaluate the brain in some patients for memory disorders, brain tumors, or seizure disorders;
- to assess the heart for blood flow and ischemic heart disease.

• Renal scans. These are used to examine the kidneys and to find any abnormalities. These include abnormal function or obstruction of the renal blood flow.
• Thyroid scans. These are used to evaluate thyroid function or to evaluate a thyroid nodule or mass better.
• Bone scans. These are used to evaluate degenerative and/or arthritic changes in joints, to find bone diseases and tumors, and/or to determine the cause of bone pain or inflammation.
• Gallium scans. These are used to diagnose active infectious and/or inflammatory diseases, tumors, and abscesses.
• Heart scans. These are used to identify abnormal blood flow to the heart, to determine the extent of the damage of the heart muscle after a heart attack, and/or to measure heart function.
• Brain scans. These are used to investigate problems within the brain and/or in the blood circulation to the brain.
• Breast scans. These are often used in conjunction with mammograms to locate cancerous tissue in the breast.
5.1.1.1.13 AI's influence on nuclear medicine scans [74]
Nuclear medicine has been using intelligent systems for some years with technologies such as voice recognition, radiomics, nuclear cardiology, oncology, and neurology. There has been success with automatic edge detection and tumor volume delineation plus automatic detection of anatomy and pulmonary nodules on PET/CT (Fig. 5–3). Predictive algorithms such as FRAX (Fracture Risk Assessment Tool) are already in clinical use, and over the last 2–3 years there has been a growing number of publications covering a variety of clinical AI applications in imaging. The nonclinical parts of nuclear medicine are also benefitting from AI. Improved methods of monitoring doses, dose limit compliance, and algorithms leading to dose reduction are now in regular clinical use.
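The dose-monitoring role described here, together with the 5 Gy alert and 15 Gy sentinel-event thresholds mentioned for the Imalogix platform earlier in this chapter, can be sketched as a simple rule-based check. The patient record and the way the thresholds are applied below are illustrative only and do not represent any vendor's actual logic.

# Illustrative thresholds taken from the fluoroscopy discussion above.
ALERT_DOSE_GY = 5.0       # single-procedure dose that triggers a staff alert
SENTINEL_DOSE_GY = 15.0   # cumulative dose treated as a sentinel event

def review_doses(procedure_doses_gy):
    """Flag procedures and cumulative exposure for one (hypothetical) patient."""
    cumulative = 0.0
    flags = []
    for i, dose in enumerate(procedure_doses_gy, start=1):
        cumulative += dose
        if dose > ALERT_DOSE_GY:
            flags.append(f"procedure {i}: {dose:.1f} Gy exceeds alert threshold")
        if cumulative > SENTINEL_DOSE_GY:
            flags.append(f"cumulative dose {cumulative:.1f} Gy: sentinel-event review")
    return cumulative, flags

total, flags = review_doses([3.2, 6.1, 4.4, 2.8])   # made-up doses in Gy
print(total, *flags, sep="\n")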

FIGURE 5–3 Number of diagnostic clinical algorithms by technology. Diagnostic imaging currently more than doubles all other forms of AI diagnostic algorithms due primarily to advanced (GPU) image recognition software. However, it is felt that genetic testing will grow significantly in the coming years as a principal diagnostic testing modality. Courtesy of Mayo Clinic.

Proper education about ML would benefit nuclear medicine staff along with a willingness to try new things so that we are in an excellent position to embrace all that AI has to offer. The imaging quality could improve along with dose reduction, increased speed, and accuracy of reporting with the prediction of outcomes. AI could be deeply embedded in many aspects of nuclear medicine in the future. In the words of Bill Gates, “We always overestimate the change that will occur in the next 2 years and underestimate the change that will occur in the next 10.” This will undoubtedly be the case in AI and nuclear medicine. 5.1.1.1.14 Literature reviews re AI’s influence on nuclear medicine scans 1. Application of artificial neural networks to identify Alzheimer’s disease using cerebral perfusion SPECT data [75]: This study aimed to demonstrate the usefulness of artificial neural networks in Alzheimer’s disease diagnosis (AD) using data of brain single-photon emission computed tomography (SPECT). The results were compared with discriminant analysis. The study population consisted of 132 clinically diagnosed patients. The sensitivity of Alzheimer’s disease diagnosis detection by artificial neural network and discriminant analysis was 93.8% and 86.1%, respectively, and the corresponding specificity was 100% and 95%. Artificial neural networks and conventional statistics methods (discriminant analysis) are useful tools in Alzheimer’s disease diagnosis. The results of this study indicate that artificial neural networks can discriminate AD patients from healthy controls. The study simulations provide evidence that artificial neural networks can be a useful tool for clinical practice. 2. Scanning the future of medical imaging [76]: As innovation accelerates, the medical device industry is undergoing rapid change in the form of new business models, AI, and as the Internet of Things creates disruptive possibilities in healthcare. Annual patent applications related to medical devices have tripled in 10 years due to innovations, and technology cycle times have halved in just 5 years (Moore’s Law). Connectivity will have exploded by 2021. The world will have more than 3 times as many smart connected devices as people, and more and more medical devices and processes contain integrated sensors. A total of 13 companies had more than 50 transactions with a median value of slightly below $4 million, possibly reflecting the niche nature of markets for the newer nuclear medicine ligands (a molecule that binds to another molecule). In nuclear and PET imaging, the introduction of specific ligands allows for better targeting in cancer diagnosis and reduces the need for tissue sampling. Noninvasive diagnostic techniques will play an increasing role in medicine, especially cancer diagnosis and management. And theranostic platforms, in which the nuclear diagnostic agent is paired with a complementary radiotherapeutic, hold promise for meeting unserved needs in oncology. As nuclear imaging grows and moves away from metabolic imaging for cancer localization to pathology-specific cancer diagnosis (for instance, the combining of ligands specific to the prostate or breast cancer with radioisotopes for definitive diagnosis),

imaging companies will have increasing opportunities to engage in cancer diagnostics, and potentially therapeutics as well.
3. Four future trends in medical imaging that will change healthcare [77]: Artificial Intelligence (AI): AI in the medical imaging market is estimated to rise from $21.48 billion in 2018 to a projected value of $264.85 billion by 2026, according to Data Bridge Market Research's April 2019 report. These vendors will need to prove their ROI in a very competitive and crowded market with hundreds of AI technology solutions being developed for medical imaging. Through its ability to sift through mountains of scans quickly, AI has the potential to revolutionize the medical imaging industry by offering providers and patients life-changing insights into a variety of diseases, injuries, and conditions that may be hard to detect without the supplemental technology. Virtual Reality & 3-D Imaging: Right now, the world can't get enough of virtual reality (VR). As amazing as MRIs and CT scans are, currently, their display in 2D requires physicians to use their imaginations to mentally stitch together a full picture of the 3-D organ or body part. Now, new augmented reality technologies, like EchoPixel True 3-D, have made it possible for radiologists or physicians to take slices of MRI pictures to create a 3-D image. The physicians can then examine the image with 3-D glasses, a VR headset, or even print it using a 3-D printer and special plastic. Nuclear Imaging: With nuclear imaging, a patient is injected with or swallows radioactive materials called radiotracers or radiopharmaceuticals before a medical imaging scan like positron emission tomography (PET) or single-photon emission computed tomography (SPECT). During the scan, the camera focuses on the area where the radioactive material concentrates, showing the doctor what kind of problem exists. These types of scans are particularly helpful when diagnosing thyroid disease, gall bladder disease, heart conditions, cancer, and Alzheimer's disease. Wearable medical devices are not only a top healthcare trend this year, but they are also slated to revolutionize diagnostic imaging in 2019 as well.
• Ultrasound (sonography): Ultrasound (sonography) is a type of imaging that uses high-frequency sound waves to look at organs and structures inside the body. Unlike x-rays, with ultrasound, there is no exposure to radiation. With an ultrasound test, you lie on a table while a special technician moves a device called a transducer over part(s) of your body. The transducer transmits sound waves that bounce off the tissues and structures inside your body. The transducer then captures the waves that bounce back. The ultrasound machine creates images from the sound waves. Images in thin, flat sections of the body are displayed in conventional ultrasound. Ultrasound technology can also produce 3-dimensional (3-D) displays. Doppler ultrasound is another ultrasound technique that allows the doctor to evaluate blood flow through arteries and veins in the abdomen, arms, legs, and neck, or within various body organs such as the liver, kidneys, and/or brain (in infants and children).
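Doppler flow evaluation rests on the standard Doppler-shift relation between transmitted frequency, beam angle, and blood velocity. The transducer frequency, beam angle, and measured shift in the short sketch below are invented numbers used only to show the arithmetic, not values from any study cited here.

import math

SPEED_OF_SOUND = 1540.0     # m/s, a typical assumed value in soft tissue

def flow_velocity(doppler_shift_hz, transmit_freq_hz, beam_angle_deg):
    """Estimate blood flow velocity (m/s) from the measured Doppler shift:
    f_d = 2 * f0 * v * cos(theta) / c  =>  v = f_d * c / (2 * f0 * cos(theta))."""
    theta = math.radians(beam_angle_deg)
    return doppler_shift_hz * SPEED_OF_SOUND / (2 * transmit_freq_hz * math.cos(theta))

# Hypothetical example: 5 MHz probe, 60 degree beam angle, 1.3 kHz measured shift.
print(f"{flow_velocity(1300, 5e6, 60):.2f} m/s")   # about 0.40 m/s for these values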

Table 5–2 Internal organs examined by ultrasound (sonography).
• Heart and blood vessels, including the abdominal aorta and its major branches;
• Liver;
• Gallbladder;
• Spleen;
• Pancreas;
• Kidneys;
• Bladder;
• Uterus, ovaries, and unborn child (fetus) in pregnant patients;
• Eyes;
• Thyroid and parathyroid glands;
• Scrotum (testicles);
• Brain in infants;
• Hips in infants;
• Spine in infants.

Ultrasound is used to examine multiple internal organs in the body (Table 5–2). It is also used to:
• guide procedures such as needle biopsies, in which needles are used to sample cells from an abnormal area for laboratory testing;
• image the breasts and guide biopsy of breast cancer;
• diagnose a variety of heart conditions, including valve problems and congestive heart failure, and to assess damage after a heart attack. (Ultrasound of the heart is commonly called an "echocardiogram" or "echo" for short.)
Doppler ultrasound images can help the physician to see and evaluate:
• blockages to blood flow (such as clots);
• narrowing of vessels;
• tumors and congenital vascular malformations;
• reduced or absent blood flow to various organs;
• greater than normal blood flow to different areas, which is sometimes seen in infections.
5.1.1.1.15 AI's influence on ultrasound (sonography) [78]
It is expected that AI will be producing a distinctly different generation of products in the longer term, with the potential of significant economic value. There are promising signs of value-adding AI solutions in ultrasound imaging as they are brought to market. Vendors will need to make ultrasound systems more straightforward and more intuitive to use without compromising on scan quality. Innovations must include providing real-time support for clinicians during scans and procedures. AI technology and deep learning techniques are being used to address this challenge by using image recognition capabilities and real-time training aids such as probe placement guidance and organ detection. The market is starting to develop solutions to address some of these more technically challenging possibilities. However, they are at an early stage of development, and accurate scan reads, as well as scan quality, remain a concern for regulators. Ultimately, the longer-term outcome is that AI could enable ultrasound to become accessible to anyone and everyone. However, there are sizeable technical challenges and commercial hoops to jump through before this vision can be realized.
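The real-time training aids just described (organ detection and probe-placement guidance during a scan) amount to running an image classifier inside the imaging loop. In the sketch below, the classify_frame model, the label set, and the confidence threshold are hypothetical placeholders, not a shipping vendor feature; the placeholder returns random scores only so the loop runs.

import numpy as np

ORGAN_LABELS = ["liver", "kidney", "gallbladder", "unknown"]   # illustrative label set

def classify_frame(frame):
    """Placeholder for a trained CNN; returns per-label probabilities.
    Random scores are used here purely so the example is runnable."""
    scores = np.random.rand(len(ORGAN_LABELS))
    return scores / scores.sum()

def realtime_guidance(frames, confidence_threshold=0.6):
    """Suggest which organ is in view for each incoming ultrasound frame."""
    for i, frame in enumerate(frames):
        probs = classify_frame(frame)
        best = int(np.argmax(probs))
        if probs[best] >= confidence_threshold:
            yield i, ORGAN_LABELS[best], float(probs[best])
        else:
            yield i, "reposition probe", float(probs[best])

# Simulated stream of 5 grayscale frames.
stream = [np.random.rand(480, 640) for _ in range(5)]
for frame_idx, suggestion, conf in realtime_guidance(stream):
    print(frame_idx, suggestion, round(conf, 2))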

We are now starting to see AI and deep learning capabilities being deployed to provide anatomy-awareness where specific body parts can be automatically recognized. This image recognition capability is opening the possibility of contextually-aware systems that can assist sonographers in real time by suggesting relevant tools as well as providing diagnostic or decision support. For inexperienced users, these real-time aids might be as simple as communicating which organ or body part is being scanned, providing support on how best to position the probe, and guiding the user on the anatomical features relevant to the scan. There are many emerging markets for the use of ultrasound in breast imaging. The main limitations of ultrasound for breast imaging include lengthy exam and reading times, a relatively high number of false positives, and the lack of repeatability of results due to operator dependence. Some vendors have already identified the opportunity to deploy AI-powered solutions to address these limitations, and various propositions have been brought to market. The ultimate step for AI within this part of the workflow will be to arrive at a credible and accurate diagnosis independent of the radiologist. There are still many unanswered technical challenges, including how well algorithms and neural networks trained using localized datasets will perform when applied to more extensive populations.
5.1.1.1.16 Literature reviews re AI's influence on ultrasound (sonography)
1. Automatic classification of pediatric pneumonia [79]: Pneumonia is one of the most prevalent causes of death among children under 5 years of age [80], with its highest case-fatality rate among infants in the post-neonatal period. Thoracic ultrasound is becoming a useful and readily available technique for physicians assessing a variety of respiratory, hemodynamic, and traumatic conditions [81]. To facilitate the diagnostics of pneumonia in low-resource settings, it would be useful to have an automatic system to assist in the interpretation of ultrasound images. Artificial intelligence based on artificial neural networks is a common approach for automated computer learning and classification. Based on the measurable characteristics of a particular phenomenon, after a training and validation step based on a selected dataset, it can assign a classification that can be used as the basis of a diagnosis [82]. This study demonstrates that it is possible to train an artificial neural network to detect evidence of pneumonia infiltrates in ultrasound lung images collected from young, hospitalized children with a diagnosis of pneumonia. The method achieved a sensitivity of 90.9% and a specificity of 100% to detect vectors associated with pneumonia consolidates when compared to the visual recognition performed by an expert analyst. An algorithm like the one developed in this study proves that it is possible to create a methodology to automatically detect lung infiltrates due to pneumonia in young children using ultrasound. Moreover, this technology might be applied to more portable and less expensive ultrasound devices and be taken to remote, rural areas. High sensitivity and specificity were obtained for the classification of characteristic vectors associated with pneumonia infiltrates. This has the possibility of reducing deaths in resource-limited areas of the world if data is adequately processed to make automatic

interpretation feasible for a larger group of patients. In these populations, the lack of diagnostics for pneumonia is critical.
2. Artificial intelligence in breast ultrasound [83]: AI is receiving much attention for its excellent performance in image-recognition tasks. It is increasingly applied in breast ultrasound as a first-line imaging tool for breast lesion characterization for its availability, cost-effectiveness, acceptable diagnostic performance, and noninvasive and real-time capabilities. The use of AI in breast ultrasound has also been combined with other novel technology, such as ultrasound radiofrequency (RF) time series analysis [84], multimodality GPU-based computer-assisted diagnosis of breast cancer using ultrasound and digital mammography images [85], optical breast imaging, QT-based breast tissue volume imaging [86], and automated breast volume scanning (ABVS) [87]. AI has been increasingly applied in ultrasound and proved to be a powerful tool to provide a reliable diagnosis with higher accuracy and efficiency and reduce the workload of physicians. It is roughly divided into early machine learning controlled by manual input algorithms, and DL, with which software can self-study. Soon, it is believed that AI in breast ultrasound can not only distinguish between benign and malignant breast masses but also further classify specific benign diseases, such as inflammatory breast mass and fibroplasia. Besides, AI in ultrasound may even predict Tumor Node Metastasis classification [88], prognosis, and the treatment response for patients with breast cancer.
3. Diagnosis of coronary artery diseases and carotid atherosclerosis using intravascular ultrasound images [89]: Cardiovascular ultrasound examination complements other imaging modalities such as radiography and allows more accurate diagnostic tests to be conducted. This modality is non-invasive and widely used in the diagnosis of cardiovascular diseases. Recently, 2 leading ultrasound-based techniques were used for the assessment of atherosclerosis: B-mode ultrasound used in the measurement of carotid artery intima thickness, and intravascular ultrasound. These techniques provide images in real time, are portable and substantially lower in cost, and use no harmful ionizing radiation in imaging. The processing of ultrasound images plays a significant role in the accurate diagnosis of the disease level. The diagnostic accuracy depends on the time to read the image and the experience of the practitioner to interpret the correct information. AI computer-aided methods for the analysis of intravascular ultrasound images can assist in better measurement of plaque deposition in the coronary artery. In this study, the level of plaque deposition is identified using Otsu's segmentation method, and classification of plaque deposition level is performed using a Back Propagation Network (BPN) and a Support Vector Machine (SVM). The results show that the SVM classifies significantly better than the BPN network.
• Endoscopy: Endoscopy is the generic term for a procedure that uses an instrument called an endoscope, or scope for short. These scopes have a tiny camera attached to a long, thin

tube. The tube is moved through a body passageway or opening to see inside an organ. Sometimes scopes are used for surgery, such as for removing polyps from the colon. There are many different kinds of endoscopy, including (but not limited to):
• Arthroscopy to examine joints;
• Bronchoscopy to examine the lungs;
• Colonoscopy and sigmoidoscopy to examine the large intestine;
• Cystoscopy and ureteroscopy to examine the urinary system;
• Laparoscopy to examine internal structures (e.g., abdomen, pelvis, etc.) and perform surgical procedures on internal structures;
• Upper gastrointestinal endoscopy to examine the esophagus and stomach.
5.1.1.1.17 AI's influence on endoscopy [90]
Computer-aided methods for medical image recognition have been researched continuously for years [91]. Most traditional image recognition models use feature engineering, which is essentially teaching machines to detect explicit lesions specified by experts. As opposed to feature engineering, AI based on deep learning enables recognition models to learn the most predictive features from large data sets of labeled images and perform image classification spontaneously [92]. In this way, AI is now considered more efficient and has become increasingly popular. Recent studies demonstrate that AI would be able to overcome diagnostic subjectivity, which is caused by endoscopists who unconsciously take additional characteristics into account other than microstructures and capillaries. Therefore, it could be a useful real-time aid for nonexperts to provide an objective reference during endoscopy procedures. In the near future, with electronic chromoendoscopy combined with AI, optical diagnosis will achieve diagnostic accuracy that is comparable with a standard histopathologic examination. This will reduce medical costs by avoiding unnecessary resection and pathologic evaluation.
5.1.1.1.18 Literature reviews re AI's influence on endoscopy
1. Artificial intelligence and colonoscopy: current status and future perspectives. Digestive Endoscopy [93]: In the field of gastrointestinal endoscopy, computer-aided diagnosis (CAD) for colonoscopy is the most investigated area, although it is still in the preclinical phase. Because colonoscopy is carried out by humans, it is inherently an imperfect procedure. CAD assistance is expected to improve its quality regarding automated polyp detection and characterization (i.e., predicting the polyp's pathology). It could help prevent endoscopists from missing polyps as well as provide a precise optical diagnosis for those detected. Research on automated polyp detection has been limited to experimental assessments using an algorithm based on ex vivo videos or static images. Performance for clinical use was reported to have >90% sensitivity with acceptable specificity. In contrast, research on automated polyp characterization seems to surpass that for polyp detection.
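Performance figures like the sensitivities and specificities quoted in these reviews come from simple confusion-matrix arithmetic. The counts below are invented to show the calculation and are not data from any of the cited studies.

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical polyp-detection results: 92 detected of 100 true polyps,
# and 8 false alarms among 200 polyp-free segments.
sens, spec = sensitivity_specificity(tp=92, fn=8, tn=192, fp=8)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")   # 92.0%, 96.0%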

2. GI disease screening with artificial intelligence is close [94]: As a tool for the screening and diagnosis of diseases in the gastrointestinal (GI) tract, AI is advancing rapidly. Much of the recently updated literature is on screening colonoscopy. Still, the same principles are relevant and are being pursued for other GI conditions, such as dysplasia screening in patients with Barrett's esophagus and the assessment of mucosal healing in inflammatory bowel disease. "A computer can consider a thousand features when evaluating a polyp, which is way beyond what we can do," said Dr. Michael Byrne, clinical professor in the division of gastroenterology at Vancouver General Hospital. Even with advances to improve visualization in screening colonoscopy, such as improved resolution and better lighting, the reason that AI is expected to prevail is that "the human eye is just not accurate enough." "There are many technologies to improve screening and diagnosis of GI diseases, but I believe these will struggle if they do not also have some kind of built-in machine intelligence," he said. Recognizing dysplasia associated with Barrett's esophagus has parallels with identifying adenomatous polyps in screening colonoscopy. Still, Dr. Byrne also discussed machine learning as an "optical biopsy" for evaluating the mucosa of patients with IBD. No longer a screening approach, the characterization of inflammatory bowel disease (IBD) tissue could help with therapeutic decisions. Overall, there is abundant evidence that "optical biopsy is feasible," Dr. Byrne said. He indicated that clinical applications are approaching quickly.
3. Artificial intelligence in upper endoscopy: location, location, location [95]: Not every endoscopy is done under optimal conditions, and sometimes endoscopists do not clean as thoroughly as possible or take the time and effort required to inspect all locations within the stomach. The result is lesions not being recognized, as the speed of the exam does not allow recognition of subtle changes, lesions are partially or entirely covered, or lesions never were in the field of view. Interval colorectal cancer is most likely related to the skillset of the endoscopist; that is, some have few or no patients who develop interval colorectal cancer, and some have more patients who develop it. The same appears to be the case for upper endoscopic screening for gastric cancer. The critical quality question for upper and lower endoscopy is the same: how can we change the behavior of the endoscopist with more interval lesions toward that of the endoscopist with few or no interval lesions? Inspection quality varies among endoscopists; complete inspection of the stomach requires time. An AI method that verifies a view of every location within the stomach does not in itself guarantee that lesions will not be missed. But it is very likely to improve the odds of not missing lesions by stimulating those with an incomplete inspection to inspect any missed locations.
• Fundus imaging (fundoscopy or ophthalmoscopy): The fundus of the eye is the inner portion that can be seen during an eye examination by looking through the pupil with an instrument called an ophthalmoscope (light source with a condensing magnifying lens). This provides a direct view of the retina, macula, and

optic nerve. One of the most significant values of this test is that it is the only noninvasive way to observe the vasculature and nerve tissue of the body in live activity ("in vivo"). Thus, capturing photographic, video, and/or digital imagery of these tissues in vivo provides a unique opportunity for the exam. It allows for the study of multiple body functions and disorders of the vascular (e.g., cardiovascular, diabetes) and neurological (e.g., central nervous system [CNS] abnormalities, tumors, aneurysms) systems. Among the ways to capture fundus imagery for diagnosis are the following [96]:
• Fundus photography (standard, digital [smartphone] 2-dimensional images);
• Visual field analysis (patterns of retinal nerve fiber function);
• Optical Coherence Tomography (OCT) analyzing retinal layers;
• Fluorescein angiography (analysis of status and functions of retinal vessels).
5.1.1.1.19 AI's influence on fundus imaging [97]
One of the earliest associations of AI's influence in healthcare came with the development of a screening method for diabetic retinopathy using a machine learning algorithm trained to identify abnormal (diabetic) fundi from a database of normal fundus photographs. Deep learning (DL) algorithms have gone well beyond just diabetic screening and are applied to optical coherence tomography and visual fields. They can now achieve robust classification performance in the detection of retinopathy of prematurity, the glaucoma-like disc, macular edema, and age-related macular degeneration. DL in ocular imaging is being used in conjunction with telemedicine as a solution to screen, diagnose, and monitor major eye diseases for patients in primary care and community settings. Due to the strengths of AI convolutional neural networks (CNNs) in pattern recognition, progress in fundoscopy applications for clinical screening and diagnosis of multiple ocular diseases (glaucoma, retinal disease, etc.), as well as systemic diseases (cardiovascular, neurologic, kidney disease) and beyond, is being introduced clinically and continues to advance at a rapid rate.
5.1.1.1.20 Literature reviews re AI's influence on fundus imaging
1. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning [98]: Using deep-learning models trained on data from 284,335 patients and validated on 2 independent datasets of 12,026 and 999 patients, cardiovascular risk factors were predicted that were not previously thought to be present or quantifiable in retinal images. These include age (mean absolute error within 3.26 years), gender (area under the receiver operating characteristic curve [AUC] = 0.97), smoking status (AUC = 0.71), systolic blood pressure (mean absolute error within 11.23 mmHg), and major adverse cardiac events (AUC = 0.70). It was also demonstrated that the trained deep-learning models used anatomical features, such as the optic disc or blood vessels, to generate each prediction.
2. Automated diabetic retinopathy detection in smartphone-based fundus photography using artificial intelligence [99]:

The role of artificial intelligence (AI)-based automated software for detection of diabetic retinopathy (DR) and sight-threatening DR (STDR) by fundus photography was assessed using a smartphone-based device and validated against ophthalmologists' grading. Three hundred and one patients with type 2 diabetes underwent retinal photography with Remidio 'Fundus on phone' (FOP), a smartphone-based device, at a tertiary care diabetes center in India. Grading of DR was performed by the ophthalmologists using the International Clinical DR (ICDR) classification scale. STDR was defined by the presence of severe non-proliferative DR, proliferative DR, or diabetic macular edema (DME). Retinal images of 296 patients were graded. DR was detected by the ophthalmologists in 191 (64.5%) and by the AI software in 203 (68.6%) patients, while STDR was detected in 112 (37.8%) and 146 (49.3%) patients, respectively. The AI software showed 95.8% (95% CI 92.9–98.7) sensitivity and 80.2% (95% CI 72.6–87.8) specificity for detecting any DR, and 99.1% (95% CI 95.1–99.9) sensitivity and 80.4% (95% CI 73.9–85.9) specificity in detecting STDR. Automated AI analysis of FOP smartphone retinal imaging has a very high sensitivity for detecting DR and STDR. Thus, it can be an initial tool for mass retinal screening in people with diabetes.
3. Artificial intelligence in glaucoma [100]: AI techniques can successfully analyze and categorize data from visual fields, optic nerve structure (e.g., optical coherence tomography [OCT] and fundus photography), ocular biomechanical properties, and combinations thereof. This allows for the identification of disease severity, determination of disease progression, and/or recommendation of referral for specialized care. Algorithms continue to become more complex, utilizing both supervised and unsupervised methods of artificial intelligence. These algorithms, however, often outperform standard global indices and expert observers. Artificial intelligence has the potential to revolutionize the screening, diagnosis, and classification of glaucoma, both through the automated processing of large data sets and by earlier detection of new disease patterns. Also, artificial intelligence holds promise for fundamentally changing research aimed at understanding the development, progression, and treatment of glaucoma, by identifying novel risk factors and by evaluating the importance of existing ones.
• Medical (clinical) photography: Finally, both historically and to the present day, clinical photography has been the essence of medical documentation. Photography has changed the way care providers document, discuss, and deliver modern medical care. Medical documentation is now a critical part of patient care. Specialties relying on a visual diagnosis like dermatology have integrated photography into routine practice: rash and lesion appearance and progression can be documented, and patients can even self-document their skin examination for early detection of skin cancer [101]. Photographic documentation encompasses nearly every specialty in medicine. It is crucial in wound management, allowing wound care teams to track the progression of

wound healing. Mobile retinal imaging is changing ophthalmologic evaluation. Pre- and postoperative imaging in plastic and reconstructive surgery is critical for documentation and identification of subtle contour changes. Digital cameras, smartphones, and laparoscopic/endoscopic image capture are now ubiquitous. Documentation of physical abuse mandates the use of photography and is often vital evidence for legal proceedings [102]. Photography is used to educate both the health professional in training and patients. Part of a physician's task is to help patients understand their diseases, and photography plays a significant role. As long as the photograph has existed, images have been used in textbooks, atlases, meeting presentations, and case reports. Images are used to publish newly identified diseases, very rare diseases, or unique presentations of common diseases. Nothing has endured as a healthcare educational and diagnostic tool as much as the clinical photograph [102].
5.1.1.1.21 AI's influence on medical (clinical) photography
As was stated in the opening discussion on "Diagnostic Imaging" (page 129), "One of the most promising areas of health innovation is the application of AI in medical imaging." With pattern recognition and GPU image analysis as one of its greatest strengths, AI is revolutionizing digital imagery as a diagnostic resource in all areas of healthcare. And among image capturing devices in the clinical setting, the medical photograph is perhaps the most popular source of capture. Indeed, in external disease categories such as dermatology, the medical photograph, which has always been a go-to diagnostic tool, is now enjoying expanded use through AI and deep learning (CNN) analysis [103]. There are already several artificial intelligence studies focusing on skin disorders (see Chapter 7, page 355) such as skin cancer, psoriasis, atopic dermatitis, and onychomycosis. Even more significant expansions of AI's applications are occurring in eye care [104]. Major clinical areas for these applications include (and are not limited to) vision and refractive care; blindness prevention; cornea and ocular surface; anterior segment (cataracts, uveitis, etc.); retina (especially diabetic retinopathy); glaucoma; and neuro-ophthalmic disorders.
5.1.1.1.22 Literature reviews re AI's influence on medical (clinical) photography
1. AI in skin cancer [105]: Significant advances in recent years are related to advancements in the utilization of convolutional neural networks (CNNs) for dermatologic image analysis, especially dermoscopy. Recent studies show CNN-based approaches can perform as well as or even better than humans in diagnosing close-up and dermoscopic images of skin lesions. Limitations for AI development include the need for large datasets, "ground truth" diagnoses, and a lack of widely accepted standards. Despite recent breakthroughs, the adoption of AI in clinical settings for dermatology is in the early stages. Close collaboration between researchers and clinicians may provide the opportunity to investigate the implementation of AI in clinical settings to give real benefit to both clinicians and patients.
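The CNN-based approaches discussed in this review, and the transfer-learning ResNet used in the fundus-photograph study that follows, share a common pattern: start from a network pretrained on general images and retrain its final layer for the clinical classes of interest. The minimal PyTorch sketch below shows that pattern with torchvision's ResNet-18; the two-class head, learning rate, and dummy batch are assumptions added for illustration, not details from the cited studies.

import torch
import torch.nn as nn
from torchvision import models

# Start from a ResNet pretrained on ImageNet (transfer learning).
model = models.resnet18(pretrained=True)

# Replace the final fully connected layer with a new head for two classes,
# e.g., "benign" vs "malignant" lesion, or "healthy" vs "glaucomatous" disc.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step on a batch of labeled clinical photographs."""
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch standing in for preprocessed 224 x 224 RGB images.
dummy_images = torch.randn(4, 3, 224, 224)
dummy_labels = torch.tensor([0, 1, 0, 1])
print(train_step(dummy_images, dummy_labels))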

2. Performance of Deep Learning Architectures and Transfer Learning for Detecting Glaucomatous Optic Neuropathy in Fundus Photographs [106]: An extensive database of fundus photographs (n = 14,822) from a racially and ethnically diverse group of individuals (over 33% of African descent) was evaluated by expert reviewers and classified as glaucomatous optic neuropathy (GON) or healthy. Several deep learning architectures and the impact of transfer learning (a technique in which features learned in performing 1 task are applied to other tasks) [107] were evaluated. Results suggest that deep learning methodologies have high diagnostic accuracy for identifying fundus photographs with glaucomatous damage to the optic nerve head (ONH) in a racially and ethnically diverse population. The best performing model was the transfer learning ResNet [108] architecture, which achieved an AUC of 0.91 in identifying GON from fundus photographs. Given the increasing burden of glaucoma on the healthcare system as our population ages and the proliferation of ophthalmic imaging devices, automated image analysis methods will serve an important role in decision support systems for patient management and population and primary care-based screening approaches for glaucoma detection.
3. Doctors' use of mobile devices in the clinical setting: a mixed-methods study [109]: Mobile device use has become almost ubiquitous in daily life and therefore includes use by doctors in clinical settings. A study was conducted to explore how doctors use mobile devices in the clinical setting and understand drivers for use. The study included doctors in a pediatric and adult teaching hospital in 2013. Focus groups explored doctors' reasons for using or refraining from using mobile devices in the clinical setting, and their attitudes about others' use. The survey, completed by 109 doctors, showed that 91% owned a smartphone, and 88% used their mobile devices frequently in the clinical setting. Trainees were more likely to use their mobile devices for learning and accessing information related to patient care, as well as for personal communication unrelated to work. Focus group data highlighted a range of factors that influenced doctors' use of mobile devices in the clinical setting, including convenience for medical photography, as well as factors that limited use. Distraction in the clinical setting due to the use of mobile devices was a critical issue. Personal experience and confidence in using mobile devices affected their use and was guided by role modeling and expectations within a medical team. Doctors use mobile devices to enhance efficiency in the workplace. In the current environment, doctors are making their own decisions based on balancing the risks and benefits of using mobile devices in the clinical setting. There is a need for guidelines around acceptable and ethical use that is patient-centered and that respects patient privacy.
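The AUC values reported in these studies summarize how well a model's continuous output separates two classes. A minimal check with scikit-learn is shown below; the labels and scores are made up solely for the example.

from sklearn.metrics import roc_auc_score

# Hypothetical ground-truth labels (1 = disease) and model probabilities.
y_true = [0, 0, 1, 1, 0, 1, 0, 1, 1, 0]
y_score = [0.10, 0.45, 0.80, 0.30, 0.20, 0.90, 0.40, 0.55, 0.70, 0.35]

print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")   # 0.88 for these made-up values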

5.1.2 Laboratory (clinical diagnostic) testing
Laboratory tests are clinical studies using medical devices that are intended for use on samples of blood, urine, or other tissues or substances taken from the body to help diagnose a disease or other conditions. They are used primarily to support the healthcare practitioner:


• identify changes in your health condition before any symptoms occur;
• diagnose or aid in diagnosing a disease or condition;
• plan your treatment for an illness or condition;
• evaluate your response to treatment; and/or
• monitor the course of a disease over time.

After the sample is collected, it is sent to a laboratory to see if it contains certain substances and in what amounts. Depending on the test, the presence, absence, or amount of an analyte (substance) may mean you do or do not have the particular condition in question. Sometimes laboratories compare your results to results obtained from previous tests to see if there has been a change in your condition. Some types of lab tests show whether or not your results fall within normal ranges. Normal test values are usually given as a range, rather than as a specific number, because normal values vary from person to person. What is normal for one person may not be normal for another person. Other types show whether a particular substance is present or absent, such as a mutation in a gene or an infectious organism, which indicates whether you have a disease or an infection, or may or may not respond to therapy. Some laboratory tests are precise, reliable indicators of specific health problems, while others provide more general information that gives health practitioners clues to your possible health problems. Information obtained from laboratory tests may help decide whether other tests or procedures are needed to make a diagnosis or to develop or revise a previous treatment plan. All laboratory test results must be interpreted within the context of your overall health and should be used along with other exams or tests [110]. The number of specific laboratory tests currently used in healthcare is too large to discuss them all here (350 are listed in Table 5–3). Comprehensive resources regarding all lab tests can be found at the U.S. National Library of Medicine, MedlinePlus, Laboratory Tests (https://medlineplus.gov/laboratorytests.html) and Medical Tests (https://medlineplus.gov/lab-tests/). Also, detailed descriptions (for patients) and normative range values for each test can be found at the American Association for Clinical Chemistry website (https://labtestsonline.org/patient-resources). Discussions in this Chapter 5 (and in Chapter 6) will identify selected diagnostic tests utilized in the more common diagnostic categories.

5.1.2.1 AI's influence on laboratory testing
The clinical laboratory in healthcare has been among the earliest entities to adopt robotics and algorithms into its workflow. AI technologies known as "Expert Systems" (see Chapter 3, page 53) introduced knowledge-based systems that provide sequential laboratory testing and interpretation as early as 1984 [111]. Expert systems don't include the ability to learn by themselves; instead, they make decisions based on the accumulated knowledge with which they are programmed. Computational pathology applies computational models, machine learning, and visualization to make lab output both more useful and more easily understood by the clinical decision-maker. Computational pathology has clinical value in all aspects of medicine via a focus on


Table 5–3 Clinical laboratory tests.
1. 5-HIAA
2. 17-Hydroxyprogesterone
3. Acetaminophen
4. Acetylcholine Receptor (AChR) Antibody
5. Acid-Fast Bacillus (AFB) Testing
6. Activated Clotting Time (ACT)
7. Acute Viral Hepatitis Panel
8. Adenosine Deaminase
9. Adrenocorticotropic Hormone (ACTH)
10. Alanine Aminotransferase (ALT)
11. Albumin
12. Aldolase
13. Aldosterone and Renin
14. ALK Mutation (Gene Rearrangement)
15. Alkaline Phosphatase (ALP)
16. Allergy Blood Testing
17. Alpha-1 Antitrypsin
18. Alpha-fetoprotein (AFP) Tumor Marker
19. AMAS
20. Aminoglycoside Antibiotics
21. Ammonia
22. Amniotic Fluid Analysis
23. Amylase
24. Androstenedione
25. Angiotensin-Converting Enzyme (ACE)
26. Anti-DNase B
27. Anti-dsDNA
28. Anti-LKM-1
29. Anti-Müllerian Hormone
30. Anti-Saccharomyces cerevisiae Antibodies (ASCA)
31. Antibiotic Susceptibility Testing
32. Anticentromere Antibody
33. Antidiuretic Hormone (ADH)
34. Antimitochondrial Antibody and AMA M2
35. Antineutrophil Cytoplasmic Antibodies (ANCA, MPO, PR3)
36. Antinuclear Antibody (ANA)
37. Antiphospholipid Antibodies
38. Antistreptolysin O (ASO)
39. Antithrombin
40. Apo A-I
41. Apo B
42. APOE Genotyping, Alzheimer Disease
43. APOE Genotyping, Cardiovascular Disease
44. Arbovirus Testing
45. Aspartate Aminotransferase (AST)
46. Autoantibodies
47. B Vitamins
48. B-cell Immunoglobulin Gene Rearrangement
49. Bacterial Wound Culture
50. Basic Metabolic Panel (BMP)
51. BCR-ABL1
52. Beta-2 Glycoprotein 1 Antibodies
53. Beta-2 Microglobulin Kidney Disease
54. Beta-2 Microglobulin Tumor Marker
55. Bicarbonate (Total CO2)
56. Bilirubin
57. Blood Culture
58. Blood Gases
59. Blood Ketones
60. Blood Smear
61. Blood Typing
62. Blood Urea Nitrogen (BUN)
63. BNP and NT-proBNP
64. Body Fluid Testing
65. Bone Markers
66. Bone Marrow Aspiration and Biopsy
67. BRCA Gene Mutation Testing
68. Breast Cancer Gene Expression Tests
69. C-peptide
70. C-Reactive Protein (CRP)
71. CA 15-3
72. CA-125
73. Calcitonin
74. Calcium
75. Calprotectin
76. CALR Mutation
77. Cancer Antigen 19-9
78. Carbamazepine
79. Carcinoembryonic Antigen (CEA)
80. Cardiac Biomarkers
81. Cardiac Risk Assessment
82. Cardiolipin Antibodies
83. Catecholamines
84. CD4 Count
85. Celiac Disease Antibody Tests
86. Cell-Free Fetal DNA
87. Cerebrospinal Fluid (CSF) Analysis
88. Ceruloplasmin
89. Chemistry Panels
90. Chickenpox and Shingles Tests
91. Chlamydia Testing
92. Chloride
93. Cholesterol
94. Cholinesterase Tests
95. Chromogranin A
96. Chromosome Analysis (Karyotyping)
97. Chymotrypsin
98. CK-MB
99. Clopidogrel (CYP2C19 Genotyping)
100. Clostridium difficile and C. diff Toxin Testing
101. Coagulation Cascade
102. Coagulation Factors
103. Cold Agglutinins
104. Complement
105. Complete Blood Count (CBC)
106. Comprehensive Metabolic Panel (CMP)
107. Continuous Glucose Monitoring
108. Copper
109. Cortisol
110. Creatine Kinase (CK)
111. Creatinine
112. Creatinine Clearance
113. Cryoglobulins
114. Cyclic Citrullinated Peptide Antibody
115. Cyclosporine
116. Cystatin C
117. Cystic Fibrosis (CF) Gene Mutations Testing
118. Cytomegalovirus (CMV) Tests
119. D-dimer
120. Dengue Fever Testing
121. Des-gamma-carboxy prothrombin (DCP)
122. DHEAS
123. Digoxin
124. Direct Antiglobulin Test
125. Direct LDL Cholesterol
126. Drug Abuse Testing
127. EGFR Mutation Testing
128. Electrolytes
129. Emergency and Overdose Drug Testing
130. Epstein-Barr Virus (EBV) Antibody Tests
131. Erythrocyte Sedimentation Rate (ESR)
132. Erythropoietin
133. Estimated Glomerular Filtration Rate (eGFR)
134. Estrogen Receptor, Progesterone Receptor Breast Cancer Testing
135. Estrogens
136. Ethanol
137. Extractable Nuclear Antigen Antibodies (ENA) Panel
138. Factor V Leiden Mutation and PT 20210 Mutation
139. Fecal Fat
140. Fecal Immunochemical Test and Fecal Occult Blood Test
141. Ferritin
142. Fetal Fibronectin (fFN)
143. Fibrinogen
144. FIP1L1-PDGFRA
145. First Trimester Screening
146. Follicle-stimulating Hormone (FSH)
147. Fructosamine
148. Fungal Tests
149. G6PD
150. Gamma-Glutamyl Transferase (GGT)
151. Gastrin
152. Gastrointestinal Pathogens Panel
153. Genetic Tests for Targeted Cancer Therapy
154. Glucose Tests
155. Gonorrhea Testing
156. Gram Stain
157. Growth Hormone
158. Haptoglobin
159. hCG Tumor Marker
160. HDL Cholesterol
161. Heavy Metals
162. Helicobacter pylori (H. pylori) Testing
163. Hematocrit
164. Hemoglobin
165. Hemoglobin A1c
166. Hemoglobinopathy Evaluation
167. Heparin Anti-Xa
168. Heparin-induced Thrombocytopenia PF4 Antibody
169. Hepatitis A Testing
170. Hepatitis B Testing
171. Hepatitis C Testing
172. HER2
173. Herpes Testing
174. High-sensitivity C-reactive Protein (hs-CRP)
175. Histamine
176. Histone Antibody
177. HIV Antibody and HIV Antigen (p24)
178. HIV Antiretroviral Drug Resistance Testing, Genotypic
179. HIV Viral Load
180. HLA Testing
181. HLA-B27
182. Homocysteine
183. Human Epididymis Protein 4 (HE4)
184. Human Papillomavirus (HPV) Test
185. Human T-cell Lymphotropic Virus (HTLV) Testing
186. IGRA TB Test
187. Immunoglobulins (IgA, IgG, IgM)
188. Immunophenotyping
189. Immunoreactive Trypsinogen (IRT)
190. Influenza Tests
191. Insulin
192. Insulin-like Growth Factor-1 (IGF-1)
193. Interleukin-6
194. Intrinsic Factor Antibody
195. Iron
196. Iron Tests
197. Islet Autoantibodies in Diabetes
198. JAK2 Mutation
199. Kidney Stone Risk Panel
200. Kidney Stone Testing
201. KRAS Mutation
202. Lactate
203. Lactate Dehydrogenase (LD)
204. Lactoferrin
205. Lactose Tolerance Tests
206. LDL Cholesterol
207. LDL Particle Testing (LDL-P)
208. Lead
209. Legionella Testing
210. Leptin
211. Levetiracetam
212. Lipase
213. Lipid Panel
214. Lipoprotein (a)
215. Lithium
216. Liver Panel
217. Lp-PLA2
218. Lupus Anticoagulant Testing
219. Luteinizing Hormone (LH)
220. Lyme Disease Tests
221. Magnesium
222. Marijuana (THC) Testing
223. Maternal Serum Screening, Second Trimester
224. Measles and Mumps Tests
225. Mercury
226. Metanephrines
227. Methotrexate
228. Methylmalonic Acid
229. Mononucleosis (Mono) Test
230. MRSA Screening
231. MTHFR Mutation
232. Mycophenolic Acid
233. Mycoplasma
234. Myoglobin
235. Nicotine and Cotinine
236. Non-High Density Lipoprotein Cholesterol
237. Opioid Testing
238. Osmolality and Osmolal Gap
239. Ova and Parasite Exam
240. Pap Smear (Pap Test)
241. Parathyroid Hormone (PTH)
242. Parietal Cell Antibody
243. Partial Thromboplastin Time (PTT, aPTT)
244. Parvovirus B19
245. PCA3
246. Pericardial Fluid Analysis
247. Peritoneal Fluid Analysis
248. Pertussis Tests
249. Pharmacogenetic Tests
250. Phenobarbital
251. Phenytoin
252. Phosphorus
253. Plasma Free Metanephrines
254. Platelet Count
255. Platelet Function Tests
256. Pleural Fluid Analysis
257. PML-RARA
258. Porphyrin Tests
259. Potassium
260. Prealbumin
261. Pregnancy Test (hCG)
262. Pregnenolone
263. Prenatal Group B Strep (GBS) Screening
264. Procalcitonin
265. Progesterone
266. Prolactin
267. Prostate Specific Antigen (PSA)
268. Protein C and Protein S
269. Protein Electrophoresis Immunofixation Electrophoresis
270. Prothrombin Time and International Normalized Ratio (PT/INR)
271. PSEN1
272. Red Blood Cell (RBC) Antibody Identification
273. Red Blood Cell (RBC) Antibody Screen
274. Red Blood Cell Count (RBC)
275. Red Cell Indices
276. Renal Panel
277. Respiratory Pathogens Panel
278. Respiratory Syncytial Virus (RSV) Testing
279. Reticulocytes
280. Rheumatoid Factor (RF)
281. Rubella Test
282. Salicylates
283. Semen Analysis
284. Serotonin
285. Serum Free Light Chains
286. Sex Hormone Binding Globulin (SHBG)
287. Shiga toxin-producing Escherichia coli
288. Sickle Cell Tests
289. Sirolimus
290. Smooth Muscle Antibody (SMA) and F-actin Antibody
291. Sodium
292. Soluble Mesothelin-Related Peptides
293. Soluble Transferrin Receptor
294. Sputum Culture, Bacterial
295. Stool Culture
296. Stool Elastase
297. Strep Throat Test
298. Sweat Chloride Test
299. Synovial Fluid Analysis
300. Syphilis Tests
301. T-Cell Receptor Gene Rearrangement
302. T3 (Free and Total)
303. T4, Free
304. Tacrolimus
305. Tau Protein and Beta Amyloid
306. TB Skin Test
307. Testosterone
308. Theophylline and Caffeine
309. Therapeutic Drug Monitoring
310. Thiopurine methyltransferase (TPMT)
311. Thrombin Time
312. Thyroglobulin
313. Thyroid Antibodies
314. Thyroid Panel
315. Thyroid-stimulating Hormone (TSH)
316. TORCH
317. Total IgE
318. Total Protein and Albumin/Globulin (A/G) Ratio
319. Toxoplasmosis Testing
320. Trace Minerals
321. Transferrin and Iron-binding Capacity (TIBC, UIBC)
322. Trichomonas Testing
323. Triglycerides
324. Troponin
325. Tryptase
326. Tumor Markers
327. Uric Acid
328. Urinalysis
329. Urine Albumin and Albumin to Creatinine Ratio
330. Urine Culture
331. Urine Metanephrines
332. Urine Protein and Urine Protein to Creatinine Ratio
333. Valproic Acid
334. Vancomycin
335. Vanillylmandelic Acid (VMA)
336. VAP
337. Vitamin A
338. Vitamin B12 and Folate
339. Vitamin D Tests
340. Vitamin K
341. VLDL Cholesterol
342. von Willebrand Factor
343. Warfarin Sensitivity Testing
344. West Nile Virus Testing
345. White Blood Cell (WBC) Differential
346. White Blood Cell Count (WBC)
347. Widal Test
348. Xylose Absorption Test
349. Zika Virus Testing
350. Zinc Protoporphyrin
Source: American Association for Clinical Chemistry; 2019.

computational methods that incorporate clinical pathology, anatomic pathology (including digital imaging), and molecular/genomic pathology datasets (more below under "Genetic testing"). Continuous remote sensing of patients using "wearables" such as glucose monitoring devices, oximetry, temperature, heart rate, and respiratory rate monitors connected to a central computing device via the "Internet of Things" (IoT) will be the norm. AI-enhanced microfluidics and compact, interactive point-of-care testing (POCT) labs are set to alter the way diagnostics are carried out. An example is the "Maverick Detection System" from Genalyte [112]. Biological probes bound on silicon biosensor chips bind macromolecules in the serum. The binding is detected by a change in light resonance, which is determined photometrically. Genalyte plans to detect up to 128 analytes (substances in the serum) from a single sample using disposable chips.


Today's clinical labs are already using advanced robotics to test minute volumes of blood, serum, and other body fluids from thousands of samples in a day. They give highly accurate and reproducible answers to clinical questions, at scales almost too complicated for humans to duplicate. These machines are driven by conventional algorithmic programs, which represent and use data. They iterate repetitively and exhaustively through a decision sequence, using mathematics and equations, finally presenting a number or result within confidence limits. In the future, robots used in the clinical laboratory will be heuristic (self-learning), using Bayesian logic and inferential processes, with numerous ways to derive the best decision possible, even allowing for missing information. Artificial intelligence programs combined with databases, data mining, statistics, mathematical modeling, pattern recognition, computer vision, natural language processing, mixed reality, and ambient computing will change the way laboratories generate and display clinical information in the future. AI and machine learning software are beginning to integrate themselves as tools for efficiency and accuracy within pathology. Software is being developed by start-ups, often in tandem with prominent educational institutions or large hospital research laboratories, addressing different diseases and conditions. A review of the functionalities of AI and machine learning software in the field of pathology reveals predominant usage in whole slide imaging analysis and diagnosis, tumor tissue genomics and its correlation to therapy, and, finally, companion diagnostic devices. The ICU (intensive care unit) of the future will have AI programs that concurrently evaluate the continuous streams of data from multiple monitors and data collection devices. The programs will pool their information and present a comprehensive picture of the patient's health to doctors, autonomously adjusting equipment settings to keep the patient in optimal condition [113]. ML offers significant potential to improve the quality of laboratory medicine. ML-based algorithms in commercial and research-driven applications have demonstrated promising results. Laboratory medicine professionals will need to understand what can be done reliably with the technology and what the pitfalls are, and to establish what constitutes best practices as ML models are introduced into clinical workflows [114].
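As a loose illustration of the ML-based decision support described above, the sketch below trains a scikit-learn classifier on an entirely synthetic laboratory panel. The feature names, the risk label, and the data are fabricated for demonstration; this is not a validated clinical model.

# Illustrative only: a classifier trained on synthetic lab-panel values.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
labs = pd.DataFrame({
    "wbc": rng.normal(8, 3, n),             # white blood cell count (x10^9/L)
    "crp": rng.gamma(2, 10, n),             # C-reactive protein (mg/L)
    "lactate": rng.gamma(2, 1, n),          # lactate (mmol/L)
    "creatinine": rng.normal(1.0, 0.3, n),  # creatinine (mg/dL)
})
# Synthetic outcome loosely tied to the inflammation markers, for demonstration only.
risk = 1 / (1 + np.exp(-(0.05 * labs["crp"] + 0.8 * labs["lactate"] - 3)))
labels = (rng.random(n) < risk).astype(int)

X_train, X_test, y_train, y_test = train_test_split(labs, labels, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

In practice, such a model would be trained on curated, de-identified laboratory data with clinical labels, and validated prospectively before touching any clinical workflow.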

5.1.3 Genetic and genomic screening and diagnosis
The best way to begin a discussion of AI's applications and influences in genetic and genomic screening and diagnosis is to review the relevant science, including anatomy (molecular biology), cytogenetics (examination of chromosomes), and genes and genomes, as well as the related AI categories (i.e., machine learning, big data analytics, precision medicine, predictive analytics, preventive health, EHR, and robotics). Beyond the science discussion, the remaining information in this Chapter 5 will focus on the screening and diagnostic applications of the related science(s). The genetic and genomic therapeutic considerations will be addressed in Chapter 6 ("AI applications in medical therapies and services"). However, all of the basic science discussed herein applies equally to the therapeutic discussions in Chapters 6 and 7. And finally, it's important to note that all of the genetic information throughout these chapters (and, in fact, throughout this book) is a direct result of (and in gratitude to) the Human Genome Project completed in 2003 [115].


FIGURE 5–4 Chromosome. In the nucleus of each cell, the DNA molecule is packaged into thread-like structures called chromosomes. Within each DNA helix are “sequences” (“genetic code”) made up of four nitrogen base compounds, paired as “base pairs” (adenine paired with thymine and guanine paired with cytosine). Together, a base pair along with a sugar and phosphate molecule is called a nucleotide. Credit From: National Library of Medicine, NIH.

5.1.3.1 The science
A gene is a fundamental physical and functional unit of heredity made up of DNA (deoxyribonucleic acid, the carrier of genetic information). It is estimated that humans have between 20,000 and 25,000 genes. Every person has 2 copies of each gene, 1 inherited from each parent. Most genes are the same in all people, but a small number of genes (less than 1% of the total) are slightly different between people. Alleles are forms of the same gene with slight differences in their sequence of DNA base compounds. These small differences contribute to each person's unique physical features (their phenotype) [116]. The human body is composed of trillions of cells. In the nucleus of each cell, the DNA molecule is packaged into thread-like structures called chromosomes (Fig. 5–4). Virtually every single cell in the body contains a complete copy of the approximately 3 billion DNA base pairs (nucleotides or exome), or letters (adenine [As], thymine [Ts], guanine [Gs], and cytosine [Cs]), that make up the human genome. Each chromosome has a constriction point called the centromere, which divides the chromosome into 2 sections, or "arms." The location of the centromere on each chromosome gives the chromosome its characteristic shape and can be used to help describe the location of specific genes. The overall number and shape of all your chromosomes is called a karyotype (Fig. 5–5). Your genotype is the genetic information you carry for a trait, and your phenotype is how that trait is manifested in your physical body. Genetics is defined as a branch of biology (molecular biology) concerned with the study of genes, genetic variation, and heredity in organisms [117]. Genes express specific traits, called the phenotype, which may be physical (e.g., hair, eye color, skin color, etc.), while others may carry the risk of certain diseases and disorders that may pass on from parents to offspring. Thus, genetics is the study of heredity, or how the characteristics of living organisms are transmitted from 1 generation to the next via DNA [118,119]. It is the


FIGURE 5–5 Normal human karyotype. The overall number and shape of all your chromosomes is called a karyotype. Credit From: National Library of Medicine, NIH.

FIGURE 5–6 The cellular biology of the human genome. A number of elements make up what is referred to as the human genome: the cell, its nucleus, the chromosomes within the nucleus, the DNA strands within the chromosomes, and the base compounds of the genes within the chromosomes. Courtesy: Creative Commons License; By Sponk, Tryphon, Magnus, Manske.

bioscience that has the highest potential of influencing virtually every category of health, wellness, and prevention. Genomics is a more recently popularized term that describes the study of all of a person's genes (their genome), including interactions of those genes with each other and with the person's environment. A genome is an organism's complete set of deoxyribonucleic acid (DNA), the chemical compound that contains the genetic instructions to develop and direct the activities of every organism. This science deals with the immense volume of clinical material in the human genome through cellular and molecular biology to advance genetic therapies in the study of the human immune system (immunogenomics) and the treatments, cures, and prevention of disease [119]. Let's summarize the many cellular elements (Fig. 5–6) that collectively make up the Human Genome. They are "relatively straightforward" to understand (kind of) and include [120]:


• The human cell, within which is its nucleus;
• Within the cell nucleus reside chromosomes (23 diploid pairs in somatic cells or 23 single haploid strands in embryonic or germ cells);
• Within each chromosomal strand is a double-stranded spiral (helix) of DNA (deoxyribonucleic acid);
• Within each DNA helix are "sequences" ("genetic code") made up of 4 nitrogen base compounds, paired as "base pairs" (adenine paired with thymine and guanine paired with cytosine);
• Together, a base pair along with a sugar and phosphate molecule is called a nucleotide;
• These nucleotides are held together by hydrogen bonds and arranged in 2 long strands that form the double-stranded spiral mentioned above, called the DNA double helix;
• The mapping of these double helixes in the cell of a living organism is called its "karyotype" (see Fig. 5–5);
• Defined groups (from a few hundred to a few million) of these base-compound paired sequences on a DNA double helix are called genes (humans have between 20,000 and 25,000 genes);
• Pairs or series of inherited genes on a chromosome that determine hereditary characteristics (e.g., hair color, eye color, height, etc.) are called alleles;
• The specific makeup and positioning (loci) of these alleles on a chromosome are called a genotype;
• A pair of alleles in the same gene is either autosomal dominant or recessive. Homozygous means that both copies of a gene or loci match, while heterozygous means that the copies do not match. Two dominant alleles (AA) or two recessive alleles (aa) are homozygous. One dominant allele and one recessive allele (Aa) is heterozygous. An autosomal dominant allele will always be preferentially expressed over a recessive allele;
• The visible or observable expression of the results of the genotype, combined with environmental influences (any bodily adjustment to the environment over time), is called the phenotype.
DNA molecules are made of 2 twisting, paired strands (the double helix). Each strand is made of 4 chemical units, called nucleotide bases (collectively called the human exome). The bases are adenine (As), thymine (Ts), guanine (Gs), and cytosine (Cs). Bases on opposite strands pair specifically: an A always pairs with a T, and a C always with a G. Sequencing DNA means determining the order of the 4 chemical bases that make up the DNA molecule. The sequence tells scientists the genetic information that is carried in a particular DNA segment. Also, sequence data can highlight changes in a gene that may cause disease [121]. Besides dictating the phenotype of the organism, the critical function of the genetic process, and thus the genes, is the production of amino acids, the building blocks of the large, complex protein molecules that play many critical roles in the structure, function, and regulation of the body's tissues and organs. A gene traditionally refers to the unit of DNA that carries the instructions for making a specific protein or set of proteins. Each of the estimated 20,000 to 25,000 genes in the human genome codes for an average of 3 proteins.
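The base-pairing rule just described (A with T, C with G) can be illustrated with a few lines of Python:

# Small illustration of complementary base pairing in DNA.
PAIRING = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(sequence: str) -> str:
    """Return the complementary strand for a DNA sequence."""
    return "".join(PAIRING[base] for base in sequence.upper())

print(complement("ATGCGT"))  # -> TACGCA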


Located on 23 pairs of chromosomes in the nucleus of a human cell, the genes direct production of proteins with the assistance of enzymes and messenger molecules (ribonucleic acid or RNA). During a process called transcription, information stored in a gene's DNA is transferred to a smaller ribonucleic acid molecule called mRNA (messenger RNA), which transfers the information ("message") from the nucleus into the cell cytoplasm. In the next step, translation, the mRNA interacts with a specialized complex called a ribosome, which "reads" the sequence of mRNA bases. This genetic process will be revisited (in a negative light) as representing the life cycle of the SARS-CoV-2 virus in Chapter 8 (page 450). Finally, a type of RNA called transfer RNA (tRNA) assembles amino acids into functional body proteins. This stepped process, in which hereditary information in DNA is used to make proteins, is called the "central dogma of molecular biology," or the science of "transcriptomics" [122]. Beyond transcriptomics, which defines the expression of the genes' proteins (the proteome), "proteomics" studies their biochemistry, functions, and interactions within the body [123]. If a cell's DNA is mutated, an abnormal protein may be produced, which disrupts the body's usual processes and can lead to a disease such as cancer [124]. Chemical compounds added to a single gene can regulate its activity. Such modifications are known as epigenetic changes [125]. The epigenome comprises all chemical compounds that have been added to the entirety of one's DNA (genome) to regulate the activity (expression) of all the genes within the genome. The chemical compounds of the epigenome are not part of the DNA sequence. They exist on or are attached to DNA ("epi-" means above in Greek). Epigenetic modifications remain as cells divide and, sometimes, they can be inherited through generations. Patterns of epigenetic modification vary among individuals, in different tissues within an individual, and even in different cells. Because errors in the epigenetic process, such as modifying the wrong gene or failing to add a compound to a gene, can lead to abnormal gene activity or inactivity, they can cause genetic disorders. Conditions including cancers, metabolic disorders, and degenerative disorders have all been found to be related to epigenetic errors. Examples of these errors will be mentioned below in the discussion of immunogenetics. The description of the genetic components and processes, albeit "challenging," is only part of the genetics and genomics story. The more overwhelming feature of these sciences lies in the astronomical numbers their components represent. Those numbers include ~37.2 trillion (~37.2 × 10¹²) cells in the human body, with ~3 billion (~3.0 × 10⁹) DNA base pairs within the nuclei of the cells, and with 4 base compounds making up the sequences within the DNA (deoxyribonucleic acid) helixes constituting the 20,000 to 25,000 genes. The number of possible combinations within these sequences ("genetic codes") of base compounds is astronomical. And yet, it is among these prodigious numbers of gene sequences that congenital (hereditary) and acquired mutations occur. These mutations (structural changes resulting in a genetic variant) are the underlying cause of all abnormalities in the human organism [116]. Using the standard factorial formula for combinations, nCr = n! / (r!(n − r)!), where 'n' represents the total number of items (in this case, the number of genes, or 25,000) and 'r' represents the number of items being chosen at a time (4 base compounds), the number of possible mutations in the human genome, spread among 37.2 trillion


cells, is 2.5 × 10²⁰. (You can appreciate why we have been using exponential notation instead of whole numbers with "lots and lots of zeros!") Locating and identifying those mutations and their clinical manifestations is a challenge. Enter AI and the Human Genome Project! Since the completion of the Human Genome Project in 2003 and the continuing advances in AI, machine, and deep learning, the scientific community now has the tools (e.g., Next-Generation Sequencing [NGS] [126]) to locate and identify genetic mutations in timeframes of minutes, hours, and days, versus the original "non-AI processes" of weeks, months, and years, if at all. Given the picture of the average human genome, the science of genomics is now able to identify the 2.5 × 10²⁰ potential mutations in a human being and where among the 37.2 trillion somatic cells they may be occurring. This ability has changed the current face of healthcare. It provides a future for continuing, expanding diagnostic capabilities (locating and identifying mutations) and treatment options (engineering gene mutations to reduce their negative effects, presented in Chapter 7). Finally, it introduces real personalized, precision health prevention (more below) by correcting gene abnormalities before they produce their negative effects.

5.1.3.2 Cytogenetics
Cytogenetics involves the examination of chromosomes to identify structural abnormalities. It's hard to get one's head around such testing methodology when we consider the numbers in the previous section ("The science" of genetics). Think about the multiples of "20,000 to 25,000 genes" in "trillions of cells" in the human body, with "3 billion DNA base pairs" in each cell. Exponentially, that sum would be 7.5 × 10²⁵ (I think?), or 75,000,000,000,000,000,000,000,000. And that would be the genome for 1 human being. Finding an abnormality (mutation or variant) in that genome is what "big data analytics" (below) is all about. Progress in genetic testing was long stalled by the complexity and enormity of the data that needed to be evaluated. Now, instead, the extensive datasets (e.g., 7.5 × 10²⁵ data points) in cytogenetics provide training for deep learning (CNN) algorithms, resulting in dramatically faster (than human) and more accurate analysis. With such advances in artificial intelligence and machine learning applications, researchers are now able to interpret data better and faster, and act on genomic data through genome sequencing. Because AI systems can do it faster, cheaper, and more accurately, they gain perspective on the particular genetic blueprint that orchestrates all activities of that organism. These insights help AI make decisions about care, what an organism might be susceptible to in the future, what mutations might cause different diseases, and how to prepare for the future [127].

5.1.3.3 Genetic testing [128]
Genetic testing diagnostic applications are ranked second only to diagnostic imaging.¹ But remember our "Top 5 parlor game" from back in this chapter's introduction. In my list, I included genetic testing as my number 1 and 2 choices and diagnostic imaging as number 4. I stand by my prediction in spite of the published rankings (i.e., genetic testing second and diagnostic imaging first) because almost all medical and non-medical prognosticators agree that genetic testing will surpass all forms of diagnostic tests by 2022 [129]. The downside of that phenomenon will be "buyer beware." Not all genetic testing is created equal, and besides legitimate, medically controlled and supervised testing, there will be a proliferation of proprietary tests (e.g., CRIGenetics, 23andMe, Ancestry) that may or may not be accurate and validated.

¹ Fei Jiang, Yong Jiang, Hui Zhi, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. June 21, 2017;2:e000101. https://doi.org/10.1136/svn-2017-000101.

Genetic testing is a medical test that identifies changes in chromosomes, genes, or proteins. The results of a genetic test can confirm or rule out a suspected genetic condition, or it can help determine a person's risk of developing or passing on a genetic disorder. Available types of testing include the following:
• Newborn screening is used just after birth to identify genetic disorders that may be amenable to early treatment. Millions of babies are tested each year in the United States. All states currently test infants for genetic diseases such as phenylketonuria (a genetic disorder that causes intellectual disability if left untreated) and congenital hypothyroidism (a disorder of the thyroid gland). Some states also test for other genetic diseases.
• Diagnostic testing identifies or rules out a specific genetic or chromosomal condition. In many cases, genetic testing confirms a diagnosis when a particular condition is suspected based on physical signs and symptoms. Diagnostic testing can be performed before birth (prenatal; see below) or at any time during a person's life, but is not available for all genes or all genetic conditions.
• Pharmacogenetics is used to determine what medication and dosage will be most effective and beneficial for you if you have a particular health condition or disease.
• Carrier testing is used to identify people who carry 1 copy of a gene mutation that, when present in 2 copies, causes a genetic disorder. This type of testing is offered to individuals who have a family history of a genetic disorder and to people in certain ethnic groups with an increased risk of specific genetic conditions. If both parents are tested, the test can provide information about a couple's risk of having a child with a genetic condition.
• Prenatal testing is used to detect changes in a fetus's genes or chromosomes before birth. This type of testing is offered during pregnancy if there is an increased risk that the baby will have a genetic or chromosomal disorder. In some cases, prenatal testing can lessen a couple's uncertainty or help them make decisions about a pregnancy. (As mentioned above, this was my #1 choice in my "Top 5" listing of the most disruptive diagnostic and treatment technologies by 2025 at the beginning of this chapter, page 125.)
• Preimplantation testing, also called preimplantation genetic diagnosis (PGD), is a specialized technique that can reduce the risk of having a child with a particular genetic or chromosomal disorder. It is used to detect genetic changes in embryos that were created using assisted reproductive techniques such as in-vitro fertilization.
• Predictive and pre-symptomatic testing is used to detect gene mutations associated with disorders that appear after birth, often later in life. These tests can be helpful to people who have a family member with a genetic disorder, but who have no features of the disease themselves at the time of testing. Predictive analytics testing can identify mutations that increase a person's risk of developing disorders with a genetic basis, such


as certain types of cancer. The results of predictive and pre-symptomatic testing can provide information about a person's risk of developing a specific disorder and help with making decisions about medical care (see Cancers and breast cancer in Chapter 7).
• Forensic testing uses DNA sequences to identify an individual for legal purposes. Unlike the tests described above, forensic testing is not used to detect gene mutations associated with disease. This type of testing can identify crime or catastrophe victims, rule out or implicate a crime suspect, or establish biological relationships between people (for example, paternity).
Determining the order of DNA bases (nucleotides) in an individual's genetic code, called DNA sequencing, has advanced genetics both for research and clinically. Two methods, whole-exome sequencing and whole-genome sequencing, are being used more frequently in healthcare and research to identify genetic variations. Both methods rely on AI and big data analytics, which allow the rapid sequencing of large amounts of DNA. These approaches are known as next-generation sequencing (or next-gen sequencing). The original sequencing technology (the Sanger method) would take years to sequence all of a person's DNA. Next-generation sequencing has sped up the process (taking only days to weeks to sequence a human genome) while reducing the cost.
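For illustration, sequencing pipelines commonly report variants in the tab-separated VCF text format. The sketch below parses a few fabricated records (the positions and annotations are made up, not real variant data) and flags variants in genes of interest:

# Illustrative parsing of fabricated VCF-style variant records.
vcf_lines = [
    "#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO",
    "17\t4300000\t.\tG\tA\t99\tPASS\tGENE=BRCA1",
    "7\t1175000\t.\tATCT\tA\t87\tPASS\tGENE=CFTR",
]

genes_of_interest = {"BRCA1", "BRCA2"}

for line in vcf_lines:
    if line.startswith("#"):
        continue  # skip the header line
    chrom, pos, _id, ref, alt, qual, flt, info = line.split("\t")
    gene = dict(kv.split("=") for kv in info.split(";")).get("GENE")
    if flt == "PASS" and gene in genes_of_interest:
        print(f"Variant of interest: {gene} chr{chrom}:{pos} {ref}>{alt}")

Real pipelines use dedicated libraries and curated annotation databases rather than hand-rolled parsing; this only shows the shape of the data that AI-driven interpretation tools consume.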

5.1.3.4 Big data analytics in genomics [130]
Genomic medicine is a clinical science that attempts to build individual strategies for diagnostic or therapeutic decision-making. It utilizes patients' genomic information to make assessments. Big Data analytics reveals hidden patterns, unknown correlations, and other insights by examining large-scale datasets. Notwithstanding the challenges of integrating and manipulating diverse genomic data and comprehensive electronic health records (EHRs) at Big Data scale, the process also provides an opportunity to develop an efficient and practical approach to identifying clinically actionable genetic variants for individual diagnosis and therapy. Next-generation sequencing (NGS) technologies, such as whole-genome sequencing (WGS), whole-exome sequencing (WES), and/or targeted sequencing, are progressively more applied in bioscience studies and medical practice to identify disease- and/or drug-associated genetic variants and advance precision medicine [131]. Precision medicine (see Chapter 4, page 101) allows scientists and clinicians to predict more accurately which therapeutic and preventive approaches to a specific illness will work effectively in subgroups of patients based on their genetic make-up, lifestyle, and environmental factors [132]. Clinical research leveraging EHRs has become feasible as EHRs have been widely implemented [133]. In the area of healthcare research, clinical data derived from EHRs have expanded from digital formats of individual patient medical records into high-dimensional data of enormous complexity. Big Data refers to novel technological tools delivering scalable capabilities for managing and processing large and diverse data sets. On an individual level, approaches such as natural language processing (NLP) allow incorporation and exploration of textual data alongside structured sources. On a population level, Big Data provides the


possibility of conducting large-scale investigations of clinical consequences to uncover hidden patterns and correlations [134].
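A toy illustration of the idea of combining genomic and EHR-derived data: the pandas sketch below joins a synthetic variant table with synthetic phenotype records and summarizes a simple variant-phenotype association. All names and values are fabricated.

# Illustrative join of synthetic variant data with synthetic EHR phenotypes.
import pandas as pd

variants = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5, 6],
    "gene":       ["TCF7L2", "TCF7L2", "APOE", "APOE", "TCF7L2", "APOE"],
    "variant":    ["risk", "none", "e4", "e3", "risk", "e4"],
})

ehr = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5, 6],
    "type2_diabetes": [1, 0, 0, 0, 1, 0],
})

merged = variants.merge(ehr, on="patient_id")
# Proportion of patients with the phenotype, grouped by gene and variant status.
print(merged.groupby(["gene", "variant"])["type2_diabetes"].mean())

Real analyses operate on millions of variants and longitudinal records, with rigorous statistics and privacy controls; the join-then-aggregate pattern, however, is the same.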

5.1.3.5 AI in genetic cancer screening
Due to its variegated forms across the evolution of the disease, cancer offers a unique context for medical decisions (see also Cancer in Chapter 7, page 314). Radiographic analyses of the disease commonly rely upon visual evaluations, the interpretations of which may be augmented by AI's advanced computational studies. In particular, AI is making great strides in the quantitative interpretation of cancer imaging, including volumetric delineation of tumors over time and extrapolation of the tumor genotype and biological course from its radiographic phenotype. AI can also provide a prediction of clinical outcome and an assessment of the impact of disease and treatment on adjacent organs. It can automate processes in the initial interpretation of images and shift the clinical workflow of radiographic detection and management decisions [135]. The automated capabilities of AI offer the potential to enhance the qualitative expertise of clinicians, including parallel tracking of multiple lesions, translation of intratumoral phenotypic nuances to genotype implications, and outcome prediction through cross-referencing individual tumors against databases of potentially limitless (millions, perhaps billions of) comparable cases. The strengths of AI are well suited to the current generation of targeted and immunotherapies (see also Immunology, Chapters 6 and 7), which can produce a clear clinical benefit that is poorly captured by endpoints based on RECIST (Response Evaluation Criteria in Solid Tumors). These endpoints rely on the assumption that a successful response to therapy will be reflected by tumor shrinkage; in particular, the measurement of response based on tumor diameter assumes that tumors are spherical and undergo uniform spatial change after treatment. Targeted therapies and immunotherapies lead to novel patterns of response that confound current RECIST-based endpoints and may contribute to the high failure rate of clinical trials and the cost of drug development. Thus, the ability of AI and big data to quantify the biological processes associated with a response other than size answers an urgent need in the field [135]. Several firms focus specifically on diagnosis and treatment recommendations for certain cancers based on their genetic profiles. Since many cancers have a genetic basis, human clinicians have found it increasingly complex to understand all genetic variants of cancer and their response to new drugs and protocols. Firms like Foundation Medicine [136] and Flatiron Health [137], both now owned by Roche, specialize in this approach.
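To make the diameter-based endpoint concrete, the sketch below applies simplified RECIST-style percentage cut-offs (roughly a 30% decrease of the summed lesion diameters for partial response and a 20% increase for progression). The full RECIST criteria include additional rules (new lesions, minimum absolute changes, nodal definitions) that are omitted here; this is an illustration, not a clinical tool.

# Simplified, RECIST-style classification of response from summed lesion diameters.
def recist_like_response(baseline_sum_mm: float, followup_sum_mm: float) -> str:
    if followup_sum_mm == 0:
        return "complete response"
    change = (followup_sum_mm - baseline_sum_mm) / baseline_sum_mm * 100
    if change <= -30:
        return "partial response"
    if change >= 20:
        return "progressive disease"
    return "stable disease"

print(recist_like_response(100, 65))   # -35% -> partial response
print(recist_like_response(100, 125))  # +25% -> progressive disease

The point made in the text is that AI-derived volumetric and radiomic measures aim to capture response patterns (e.g., to immunotherapy) that this simple diameter arithmetic misses.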

5.1.3.6 AI in immunogenetics (see also Immunology, Chapters 6 and 7)
Immunogenetics is the study of the genetic basis of the immune response. It includes the study of normal immunological pathways and the identification of genetic variations that result in immune disorders, which may lead to new therapeutic targets for immune diseases [138]. Computational immunogenetics (immunoinformatics) encompasses the use and application of AI, bioinformatics methods, mathematical models, and statistical techniques. Computational approaches are increasingly vital to understanding the implications of the wealth of gene expression and epigenomics data being gathered from immune cells. Dozens of immune databases play a vital role in organizing the vast quantities of experimental data generated by


modern high-throughput technologies [139]. The sum of these mechanisms is fundamental to the regulation of diverse cellular processes through differential transcriptional readout of the same genetic material. The importance of epigenetics is underscored by many diseases that can develop due to mutations in epigenetic regulatory proteins, dysregulation of the epigenetic machinery, and aberrant placement or removal of epigenetic marks [140].

5.1.3.7 Genetics, precision medicine and AI
One of the most exciting prospects of gene technology is the development of precision or personalized medicine (see Chapter 4, page 101). The field, which enables interventions specific to a patient or a population of genetically similar individuals, is expected to reach $87 billion by 2023 [141]. Historically, cost and technology limited the implementation of personalized medicine, but machine learning techniques are helping to overcome these barriers. Machines help identify patterns within genetic data sets, and then computer models can make predictions about an individual's odds of developing a disease or responding to interventions. AI will remain the primary driver of healthcare's transformation toward precision medicine. AI approaches, such as machine learning, deep learning, and natural language processing (NLP), will address the challenges of scalability and high dimensionality. They will transform big data into clinically actionable knowledge, which is expanding and becoming the foundation of precision medicine. Precision medicine, or personalized medicine, addresses disease by tailoring treatment based on the genomic, lifestyle, and environmental characteristics of each patient. With precision medicine and the advancement of next-generation sequencing (NGS), genomic profiles of patients are increasingly used for risk prediction, disease diagnosis, and the development of targeted therapies. The number of publications each year, as indexed in PubMed, has exceeded 1 million since 2011. This volume and veracity of publications indicate that multiple hypotheses are being tested at the same time, which makes it harder for researchers to stay up to date in their field in the absence of some automated assists. It therefore impacts their ability to generate meaningful and coherent conclusions in a timely manner, as required for evidence-based recommendations in the context of precision medicine [142]. In cancer genomics, publications per year can easily run into the tens of thousands, far more than a researcher can keep up with, and this growth in publications has resulted in the rapid growth of application of text mining and NLP techniques. Capturing data on individual variability in genomic, lifestyle, and clinical factors is at the core of precision medicine, which would empower patients to be more engaged in their healthcare. To facilitate patient participation in this AI-empowered digital health transformation, medical professionals should provide robust patient education initiatives related to precision medicine, the benefits and risks of AI, and data sharing and protection. Healthcare providers need to be sensitive to varying degrees of patient preference for privacy and properly obtain consent for patient data collection and use [143].
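As a minimal illustration of the text-mining techniques mentioned above, the sketch below ranks terms in a few invented abstract snippets with TF-IDF (scikit-learn 1.0 or later assumed for get_feature_names_out). The snippets are fabricated, not real publications.

# Toy TF-IDF ranking of terms in fabricated abstract snippets.
from sklearn.feature_extraction.text import TfidfVectorizer

abstracts = [
    "EGFR mutation status predicts response to targeted therapy in lung cancer",
    "Deep learning classifies tumor genomic profiles for precision oncology",
    "BRCA1 variants and risk prediction using next generation sequencing",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(abstracts)

# Show the highest-weighted terms for each snippet.
terms = vectorizer.get_feature_names_out()
for i, row in enumerate(tfidf.toarray()):
    top = sorted(zip(row, terms), reverse=True)[:3]
    print(f"Abstract {i + 1}:", [term for _, term in top])

Production literature-mining systems add entity recognition (genes, drugs, diseases), relation extraction, and large language models on top of this kind of term weighting.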

5.1.3.8 Literature reviews re AI's influence on genetics and genomics
1. The immunogenetics of neurological disease [144]: Genes encoding antigen-presenting molecules within the human major histocompatibility complex (MHC) account for the greatest component of genetic risk for


many neurological diseases, such as multiple sclerosis, neuromyelitis optica, Parkinson's disease, Alzheimer's disease, schizophrenia, myasthenia gravis, and amyotrophic lateral sclerosis. Myriad genetic, immunological, and environmental factors contribute to an individual's susceptibility to neurological disease. Taken together, the findings of human leukocyte antigen (HLA) and killer-cell immunoglobulin-like receptor association studies are consistent with a polygenic model of inheritance, reflecting the heterogeneous and multifactorial nature of complex traits in various neurological diseases. The majority of neurological conditions, such as MS, NMO, PD, AD, SCZ, MG, and ALS, are considerably more frequent among individuals transmitting specific human leukocyte antigen alleles. This further strengthens the decades-long contention of a strong immune component in the determination of clinical outcomes of neurological diseases.
2. The emerging role of epigenetics in human autoimmune disorders [145]: Epigenetic mechanisms, known for their ability to regulate gene transcription and genomic stability, are key players in maintaining normal cell growth, development, and differentiation. Epigenetic dysregulation directly influences the development of autoimmunity (see Autoimmunity, Chapter 7) by regulating immune cell functions [146]. A more in-depth exploration of the complex epigenetic interactions may be useful for the development of promising treatment strategies targeting the epigenome. Epigenetics will very likely aid in providing further progress in the field of autoimmunity.
3. Predicting response to cancer immunotherapy using noninvasive radiomic biomarkers [147]: Cancer immunotherapy has made promising strides as a result of improved understanding of biological interactions between tumor cells and the immune system. The recent emergence of quantitative imaging biomarkers provides promising opportunities. Unlike traditional biopsy-based assays that represent only a sample of the tumor, images reflect the entire tumor burden, providing information on each cancer lesion with a single noninvasive examination. Computational imaging approaches originating from AI have achieved impressive successes in automatically quantifying radiographic characteristics of tumors [15]. Radiomics-based biomarkers have shown success in different tumor types [148], but there is no evidence yet in immunotherapy. Tumor morphology, visualized on imaging, is likely influenced by several aspects of tumor biology. Findings suggest associations between radiomics characteristics and immunotherapy response showing consistent trends across cancer types and anatomical location. Lesions that are more likely to respond to immunotherapy typically tend to present with more heterogeneous morphological profiles with nonuniform density patterns and compact borders.

5.2 Additional diagnostic technologies and their AI applications
Whether consciously or unknowingly, all of us are using and relying on AI every day in countless ways. Things are now taken for granted, like the Internet, email, search engines, social media, chatbots, GPS, online services, voice recognition (natural language processing


[NLP]), and on and on. So too in healthcare, AI is being used extensively by healthcare providers and administrators in medical practice, EHRs, hospital care, third party insurers, emergency medical services (EMS) and rescue, robotics, and again, on and on. Oops, by the way, I almost forgot to mention the ubiquitous smartphone and all those healthcare apps (as the saying goes, “There’s an app for that”). I mention all this in the context of “additional diagnostic technologies” created and supported by and continually being expanded by AI. All of the previously discussed AI applications in diagnostic technologies and services are the skeletal framework of healthcare diagnosis. Ultimately, the human level (human intelligence) is the engine that drives, coordinates, and facilitates the machine learning of AI. It is important to keep in mind that the additional resources, technologies, and services presented in the following section, as with those discussed previously, are supporting assets in a diagnostic armamentarium supplemented, driven and managed by the human “captain of the ship.”

5.2.1 Vital signs
We are all very familiar with the vital signs used in diagnosis at the acute and chronic levels of care. The signs include body temperature (BT), blood pressure (BP), pulse rate (PR), and respiration (breathing) rate (RR). The normal BT range is from 97.8°F (Fahrenheit) or 36.5°C (Celsius) to 99°F (37.2°C). It is obtained (with mercury or electronic thermometers) orally, rectally, in the axilla, or by ear or skin. BP is measured using a manual sphygmomanometer or more modern electronic devices. Systolic BP is the higher number, the pressure inside the artery when the heart contracts and pumps blood through the body. The lower number is diastolic BP, which is the pressure inside the artery when the heart is at rest and is filling with blood. PR measures heartbeat rhythm and strength, with normal at 60–100 beats per minute. It is measured at the wrist (radial artery) or side of the neck (carotid artery). The RR is the number of breaths a person takes per minute. It is usually measured when a person is at rest and simply involves counting the number of breaths for 1 minute by counting how many times the chest rises. Normal RR for an adult at rest ranges from 12 to 16 breaths per minute. Monitoring vital signs is critical in emergencies, hospital care, and the chronically ill patient. In emergency care, "vital signs are vital" is a common refrain. In hospital care and for chronically ill patients, monitoring vital signs often requires waking patients to check the signs (although this routine is being challenged as unnecessary and even potentially harmful [149]). Under any circumstances, AI is considered a viable adjunct for increasing the efficiency of the process as well as assuring timely and accurate monitoring. Devices such as the ViSi (Sotera Wireless) [150] are approved and currently being used by many health systems. ViSi (an IoT device) is a mobile system platform that is designed to detect deterioration earlier and keep clinicians connected to their patients. The system enables highly accurate, continuous monitoring of core vital signs through wearable sensors that allow for freedom of movement. The concept of wearable devices (IoT) in healthcare merges ideally with AI as the system attempts to decentralize. There are many non-acute health decisions that people make daily.
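A trivial rule-based sketch of the kind of automated vital-sign check such systems build upon, using only the normal ranges quoted above (temperature, pulse, and respiration); real monitoring platforms use validated, far richer scoring:

# Flag vital signs that fall outside the normal ranges quoted in the text.
NORMAL_RANGES = {
    "temperature_f": (97.8, 99.0),
    "pulse_bpm": (60, 100),
    "respiration_rpm": (12, 16),
}

def flag_vitals(vitals: dict) -> list:
    """Return the names of any vital signs outside their normal range."""
    flags = []
    for name, value in vitals.items():
        low, high = NORMAL_RANGES[name]
        if not (low <= value <= high):
            flags.append(name)
    return flags

print(flag_vitals({"temperature_f": 101.2, "pulse_bpm": 88, "respiration_rpm": 22}))
# -> ['temperature_f', 'respiration_rpm']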


These decisions do not require a skilled clinician, but they play a significant role in determining a patient's health and, ultimately, the cost of healthcare. Thanks to AI-driven models, patients now have access to interventions and reminders throughout their day to help process decisions based on changes in their vital signs. In southeast England, patients discharged from a group of hospitals serving 500,000 people are being fitted with a Wi-Fi-enabled armband that remotely monitors vital signs [151]. By deploying AI, for instance, the NHS program is not only able to scale up in the U.K. but also internationally. Current Health, the venture-capital-backed maker of the patient monitoring devices used in the program, recently received FDA clearance to pilot the system in the U.S. and is now testing it with New York's Mount Sinai Hospital. It's part of an effort to reduce patient readmissions, which cost U.S. hospitals about $40 billion annually [152]. Work from MIT's CSAIL helps doctors predict if and when a patient will need mechanical ventilation, vasopressors, blood pressure control, and other interventions. Another CSAIL algorithm helps determine the optimal time to transfer a patient out of the ICU, with the objective of reducing hospital stays as well as preventing mortality. Other CSAIL algorithms center on the ICU, lessening the burden on nurses through automated surveillance using a combination of cameras and algorithmic processing of vital signs [153]. There is, however, no FDA-approved device for home use yet. Until we have FDA-approved devices for home use that are automatic, accurate, inexpensive, and integrated with remote monitoring facilities, an obstacle remains.

5.2.2 Electrodiagnosis
Electrodiagnosis (EDX) is a broad term describing methods of medical diagnosis that obtain information about diseases by passively recording the electrical activity of body parts (that is, their natural electrophysiology) or by measuring their response to external electrical stimuli (evoked potentials) [154]. The term EDX includes the following diagnostic tests:
• Electroencephalography (intracranial EEG, stereoelectroencephalography)
• Magnetoencephalography (MEG)
• Evoked potentials
• Electrogastrogram (EGG)
• Magnetogastrography
• Electrocochleography
• Electrooculography (EOG)
• Electroretinography (ERG)
• Electronystagmography (ENG)
• Electrocardiography (ECG)
• Vectorcardiography
• Magnetocardiography
• Electromyography (facial electromyography) (EMG)
• Nerve conduction study (NCS)


Table 5–4 Electrodiagnostic tests.

Central nervous system
• Electroencephalography (intracranial EEG, stereoelectroencephalography): An electrophysiological monitoring method to record electrical activity of the brain.
• Magnetoencephalography (MEG): A functional neuroimaging technique for mapping brain activity by recording magnetic fields produced by electrical currents occurring naturally in the brain.
• Evoked potentials: An electrical potential in a specific pattern recorded from a specific part of the nervous system, especially the brain.

Digestive system
• Electrogastrogram (EGG): Records the electrical signals that travel through the stomach muscles and control the muscles' contractions.
• Magnetogastrography: Recordings of magnetic fields resulting from electrical currents in the stomach.

Ears
• Electrocochleography: A technique of recording electrical potentials generated in the inner ear and auditory nerve in response to sound stimulation.

Eyes
• Electrooculography (EOG): A technique for measuring the corneo-retinal standing potential that exists between the front and the back of the human eye.
• Electroretinography (ERG): Measures the electrical responses of various cell types in the retina, including the photoreceptors, inner retinal cells, and the ganglion cells.
• Electronystagmography (ENG): Records involuntary movements of the eye caused by a condition known as nystagmus; can also be used to diagnose the cause of vertigo, dizziness, or balance dysfunction by testing the vestibular system.

Heart
• Electrocardiography (ECG): A recording (a graph of voltage versus time) of the electrical activity of the heart.
• Vectorcardiography: A method of recording the magnitude and direction of the electrical forces that are generated by the heart.
• Magnetocardiography: Records the magnetic fields generated by the heart.

Peripheral nervous system
• Electromyography (EMG): An electrodiagnostic medicine technique for evaluating and recording the electrical activity produced by skeletal muscles.
• Nerve conduction study (NCS): A medical diagnostic test commonly used to evaluate the function, especially the ability of electrical conduction, of the motor and sensory nerves of the human body.

Table 5–4 lists the major electrodiagnostic tests with thumbnail descriptions of each. Each of these electrodiagnostic studies can help establish diagnoses for disorders affecting nerves and muscles, identify other problems, and define the severity of a problem. An injury or disease can interrupt the electrical signals that travel from the brain to motor nerves to muscles. Indications for electrodiagnostic testing include:


• Entrapment neuropathies, including:
  • Carpal tunnel syndrome
  • Cubital tunnel syndrome
  • Radial nerve palsy
  • Peroneal nerve palsy
  • Tarsal tunnel syndrome
• Radiculopathy in the cervical and lumbar regions
• Brachial and lumbosacral plexopathy
• Peripheral nerve injuries
• Pain, numbness, or weakness of the upper or lower extremities
• Polyneuropathy
• Diseases of the neuromuscular junction, such as myasthenia gravis
• Myopathy
• Motor neuron disease
• Autonomic neuropathy

Relative to the AI algorithms used in clinical diagnosis, electrodiagnosis represents the third leading area of usage. Each of the forms of electrodiagnostic testing listed above has adopted machine learning and deep learning to enhance and refine the information gathered. Much of this has to do with the availability of GPUs, which make parallel processing faster, cheaper, and more powerful. It also has to do with the simultaneous increase in storage capacity and the flood of data generated by each type of testing [155]. An example of deep learning in electrodiagnosis is an early diagnostic system based on Echo State Networks (ESN) that takes specific features of EEG data as input. This input can predict whether the subject is likely to develop Parkinson’s Disease (PD) 10–15 years before any symptoms develop; results have predicted individual PD development with 85% accuracy [156]. More recently, other deep learning techniques achieved similar performance while reducing the computational cost and avoiding the need for feature selection [157]. This would allow preventive treatments to be implemented before the disease develops, rather than after symptoms appear, when it may be too late for effective treatment. (A minimal sketch of the reservoir-computing idea behind an ESN appears below.)
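The published EEG models in [156,157] are far more elaborate, but the core Echo State Network idea (a fixed random "reservoir" of recurrent units whose states are read out by a simple trained layer) can be sketched in a few lines. Everything below, including the dimensions, spectral radius, and synthetic data, is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Reservoir setup: fixed random weights; only the readout is trained ---
n_in, n_res = 8, 200                       # e.g., 8 EEG features per time step
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

def reservoir_state(series):
    """Drive the reservoir with a (time x features) series; return the final state."""
    x = np.zeros(n_res)
    for u in series:
        x = np.tanh(W_in @ u + W @ x)
    return x

# --- Synthetic example: 100 EEG-like recordings, binary "at-risk" label ---
X = [rng.normal(size=(50, n_in)) for _ in range(100)]
y = rng.integers(0, 2, 100)

states = np.array([reservoir_state(s) for s in X])

# Ridge-regression readout (closed form), the only trained component
lam = 1e-2
W_out = np.linalg.solve(states.T @ states + lam * np.eye(n_res), states.T @ y)

pred = (states @ W_out > 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
# High here only because the readout overfits random labels;
# real EEG work requires held-out evaluation on unseen patients.
```

On real EEG data one would feed spectral features over sliding windows and evaluate on held-out patients; the point of the sketch is only the structure: a fixed recurrent reservoir paired with a cheap linear readout.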

5.2.3 Telemedicine (aka telehealth)
Some things just go together: bacon and eggs, peanut butter and jelly, wine and cheese, Ben and Jerry, King Kong and Fay Wray (for seniors), Bert and Ernie (for young. . .and old), Lennon and McCartney (that was fun!). So too, telemedicine and AI just go together. Perhaps even more so, smartphones (cameras) and AI go together. Telemedicine (aka telehealth) is defined by the Health Resources and Services Administration (HRSA) of the US Department of Health and Human Services as the use of electronic information and telecommunications technologies to support and promote long-distance clinical healthcare, patient and professional health education, public health, and health administration. Technologies include videoconferencing, the internet, store-and-forward imaging, streaming media, and terrestrial and wireless communications. Telehealth applications include [158]:


• Live (synchronous) videoconferencing: a 2-way audiovisual link between a patient and a care provider;
• Store-and-forward (asynchronous) videoconferencing: transmission of a recorded health history to a health practitioner, usually a specialist;
• Remote patient monitoring (RPM): the use of connected electronic tools, including the Internet of Things (IoT), to record personal health and medical data in one location for review by a provider in another location, usually at a different time;
• Mobile health (mHealth): healthcare and public health information provided through mobile devices. The information may include general educational information, targeted texts, and notifications about disease outbreaks.

There is some debate over the use of the terms “telemedicine” and “telehealth” [159]. How do telehealth and telemedicine technology differ? In most cases, both rely on the same underlying telecommunications technology. However, because telemedicine always deals with private patient health information, it needs to be secure and HIPAA compliant. Telehealth technology, on the other hand, shares general health information or health education and, thus, does not need to follow the same security requirements. The Federal Communications Commission (FCC) defines telehealth as a broader form of telemedicine, encompassing “remote healthcare services beyond the doctor-patient relationship” [160], such as interactions with nurses, pharmacists, and social services personnel. Thus, for our discussion, we will simply use telehealth, which seems to be the broader term.

Telehealth technology is constantly evolving. Several decades ago, physicians and other health professionals engaged in telehealth using radios and telephones. Since the advent of the Internet Age and the widespread use of smartphones, telehealth technology has changed dramatically. Now, telehealth technology might include a smartphone app or online video conferencing software. And with the growth of mobile medical devices, telehealth equipment is starting to incorporate sophisticated tools that can measure a patient’s vitals or scan health data in the home without supervision by a medical professional. The implementation of telehealth practices and technology is showing increased adoption among healthcare providers and institutions. Results from a 2017 survey [161] of 436 medical professionals conducted by telemedicine software company REACH Health show that 51% ranked telemedicine as a “top” or “high” priority in their practice. To increase clinical and administrative capacity through telehealth, researchers are developing AI-driven technology for healthcare professionals and consumers [162]. To say the least, the 2020 commencement and escalation of the COVID-19 pandemic has driven telehealth to new heights and broad acceptance among healthcare providers worldwide.

AI applications in telehealth fall into the 4 categories of telehealth mentioned above: live (synchronous) videoconferencing, store-and-forward (asynchronous) videoconferencing, remote patient monitoring (RPM), and mobile health (mHealth). Considering the impact of AI within these 4 categories provides a dramatic picture of the extent to which AI-supported telehealth is influencing healthcare.


The value of synchronous and task-oriented computer-generated dialogue has been observed for a broad range of applications [163]. Automated conversational interactions offer many other opportunities across the care spectrum to augment and, in some cases, replace human caregiver tasks. These may include:

• reminders and motivational messages, e.g., for medication, nutrition, and exercise;
• routine condition checks and health maintenance, based on personal monitoring data;
• answering health queries and provision of targeted health information and education;
• providing a personalized means to address social isolation and community involvement;
• acting as an intermediary or broker entity between multiple carers or service agencies.

The nature and complexity of conversational agent (or virtual assistant) solutions can vary considerably. Speech-text conversion utilities and chatbots capable of audio or typed inputs and outputs are examples of such technologies. These solutions are better suited for interactions where the context of the situation and the user are simple and well established. AI mechanisms for these agents are typically rule-based, using expert systems or decision-tree logical constructs [164].

Asynchronous or store-and-forward telehealth is a platform that allows both patients and providers to interact on their own timelines. The patient can enter data through a secure portal by filling out an interactive medical questionnaire and uploading images, labs, or other diagnostic information. The provider logs in at the other end, reviews the information, renders a diagnosis, and issues treatment recommendations. The provider can always communicate with the patient to ask questions, and can also decide that the case is not suitable for a telehealth diagnosis and recommend a visit to the ER or doctor’s office [165]. CaptureProof’s [166] “asynchronous telehealth” uses an advanced computer vision system that allows healthcare providers and doctors to monitor patient progress, provide instant feedback, update health records, and send instructional media. The patient can also monitor healing progress over time on their own and coordinate reports to be shared with other healthcare providers (including home health aides, social workers, physical therapists, etc.) involved with the care. The last World Health Organization global eHealth observatory survey [167] noted 4 exemplary, well-embedded telehealth services: tele-radiology, tele-pathology, tele-dermatology, and tele-psychiatry; of these, the first 3 follow asynchronous models of care, and the fourth is synchronous.

Remote monitoring (or telemonitoring) involves data acquisition using an appropriate sensor, transmission of data from patient to clinician, integration of those data with other data describing the state of the patient (e.g., sourced from the electronic health record), synthesis of an appropriate action, response, or escalation in the care of the patient with associated decision support, and storage of data [168]. AI systems for telemonitoring depend on, and also expand the scope of, other health system ICT (Information and Communication Technology) components. They can potentially outperform humans in many ways. They consistently and mathematically execute their instructions, with a fundamental reliance on inbuilt logic moderated by statistical evidence extracted by machine learning methods from large-scale datasets.


They can immediately incorporate and coordinate data from additional tools such as location finders (GPS), accelerometers, motion sensors, and gyroscopes. Sourcing such additional data by humans is tedious and would require education and training for incorporation into care delivery [164]. Telemonitoring has been evaluated for the remote surveillance of patients with chronic disease, such as chronic heart failure [169], chronic obstructive pulmonary disease (COPD) [170], and diabetes mellitus [171]. In COPD, AI methods have been applied to the management and surveillance of the condition. A Classification and Regression Tree (CART) algorithm for the early identification of patients at high risk of an imminent exacerbation has been validated using telehealth measurement data recorded from patients with moderate/severe COPD living at home [172]. Similar approaches could be used as real-time exacerbation event detectors in several chronic conditions (a minimal sketch of such a tree-based risk classifier appears at the end of this section).

Among the categories of telehealth, perhaps the most versatile is the mobile health (mHealth) platform. The dramatic proliferation of smartphone apps designed to foster health and well-being provides a range of communication options. These include targeted text messages encouraging healthy behaviors, alerts about disease outbreaks, and programs or apps that help patients with reminders and adherence to specific care regimens. Increasingly, smartphones are using cameras, microphones, or other sensors or transducers to capture vital signs for input to apps, bridging into all areas of remote patient monitoring (RPM). A developing AI-supported mobile app platform, “Curbside Consults/Deep Doc (C2D2),” will provide 24/7 stat, ASAP, and routine communications between primary care providers and medical/surgical specialists, as well as audio/visual provider/patient communications, continuing education, search engine functions, and additional professional services [173]. The National Institutes of Health (NIH) and the Office of the National Coordinator for Health Information Technology (ONC) support telehealth activities, the development of mobile technologies (such as remote sensors), and research that assesses the effectiveness of care delivered remotely [174].

AI-enabled telehealth offers contributions in the form of quality improvement and enhancement of existing practice, as well as the instigation of new models of care. Examples of the role of AI in the remote delivery of healthcare include the use of tele-assessment, tele-diagnosis, tele-interactions, and telemonitoring. Most AI methods require a substantial learning phase and can only achieve reliability after a very long time; hence, they should be subject to continuous testing and refinement to better develop the human-AI interchange [164].
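The COPD exacerbation work cited above [172] used a CART model trained on home telehealth measurements. The snippet below is a hedged, minimal sketch of the same general idea using scikit-learn’s decision tree on synthetic data; the feature names, thresholds, and labels are illustrative assumptions, not the published model.

```python
# Minimal sketch: a CART-style classifier for exacerbation risk from
# home telehealth measurements (illustrative synthetic data only).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500

# Hypothetical daily measurements: SpO2 (%), heart rate, respiratory rate
X = np.column_stack([
    rng.normal(94, 3, n),    # oxygen saturation
    rng.normal(85, 12, n),   # heart rate
    rng.normal(18, 4, n),    # respiratory rate
])
# Synthetic label: exacerbation more likely with low SpO2 / high respiratory rate
risk = (94 - X[:, 0]) * 0.4 + (X[:, 2] - 18) * 0.3 + rng.normal(0, 1, n)
y = (risk > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

cart = DecisionTreeClassifier(max_depth=3, random_state=0)  # shallow, interpretable
cart.fit(X_tr, y_tr)

print("held-out accuracy:", cart.score(X_te, y_te))
print(export_text(cart, feature_names=["spo2", "heart_rate", "resp_rate"]))
```

The appeal of CART in this setting is interpretability: the printed rules (e.g., a split on SpO2) can be reviewed by clinicians before the model is used to trigger alerts in a real telemonitoring service.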

5.2.4 Chatbots
The form and function of chatbots were first mentioned in Chapter 3 (page 62) under the “Robotics” discussion and then again multiple times in Chapters 4 and 5. In Chapter 3, a chatbot was defined as a computer program that simulates human conversation through voice commands (NLP), text chats, or both. The name chatbot was described as a contraction of “chatterbot.” There are several synonyms for a chatbot, including talkbot, bot, IM bot, interactive agent, conversational agent, virtual agent, and PACT [175]. (Rather than add yet another acronym to this text, we’ll continue to use chatbots [mostly] in this discussion.) Effectively, a chatbot is an AI feature that can be embedded in and used through any major messaging application.


AI (in the form of natural language processing, machine learning, and deep learning, discussed in Chapter 3, page 51) makes it possible for chatbots to “learn” by discovering patterns in data. Without additional training, these chatbots can then apply the learned pattern to similar problems or slightly different questions. This gives them the “intelligence” to perform tasks, solve problems, and manage information without human intervention. Chatbots date back to 1966, when Joseph Weizenbaum, a professor at the Massachusetts Institute of Technology (MIT), developed the ELIZA program, named after a character in Pygmalion, a play (and Broadway musical/movie, My Fair Lady) about a Cockney girl (Eliza Doolittle) who learns to speak and think like an upper-class lady. (A toy pattern-matching sketch in the spirit of ELIZA appears at the end of this section.) The introduction of machine learning capabilities in bots has vastly improved the humanlike quotient of their conversations.

Given the advancements in artificial intelligence and the considerable amount of time most customers spend on messaging platforms, where many chatbots are deployed, it is not an exaggeration to say that AI chatbots are becoming a necessity in some industries. Talking bots are an excellent choice for systematic, time-consuming tasks. Chatbots are already part of virtual assistants such as Siri, Alexa, Cortana, and Google Assistant. In addition, so-called “social bots” promote issues, products, or candidates. At the cutting edge of the chatbot spectrum are bots performing more complex, nuanced functions in fields where the human touch remains essential, such as law. In marketing, the belief that AI and chatbots are the next big thing is widespread [176].

Chatbots in healthcare may have the potential to provide patients with access to immediate medical information, recommend early diagnoses at the first sign of illness, or connect patients with suitable healthcare providers (HCPs) in their community [177]. Theoretically, in some instances, chatbots may be better suited to help patient needs than a human physician because they have no biological gender, age, or race and elicit no bias toward patient demographics. Early research has demonstrated the benefits of using healthcare chatbots, such as helping with diagnostic decision support [178], promoting and increasing physical activity [179], and delivering cognitive behavioral therapy for psychiatric and somatic disorders [180], providing effective, acceptable, and reasonable healthcare with accuracy comparable with that of human physicians. Patients may also feel that chatbots are safer interaction partners than human physicians and be willing to disclose more medical information and report more symptoms to chatbots [181]. However, despite the demonstrated efficacy and cost-effectiveness of healthcare chatbots, the technology is often associated with poor adoption by physicians and poor adherence by patients [182]. This may be because of a perceived lack of quality or accountability in computerized chatbots as opposed to traditional face-to-face interactions with human physicians. Again, the COVID-19 pandemic is rapidly changing such attitudes.
The areas where physicians believed chatbots would be most helpful were the improvement of nutrition, diet, and treatment compliance, as well as logistical tasks such as scheduling appointments, locating clinics, and providing medication reminders. The major challenges perceived were an inability of chatbots to understand emotions and address the full extent of a patient’s needs.


Physicians believe that healthcare chatbots could replace a substantial role of human HCPs sometime in the future. However, chatbots may be best applied to help physicians rather than replace them [183]. Reverting once again to our “Top 10 Listing” game, the following are the Top 10 chatbot companies as of mid-2019 [184]:

1. Baidu, Inc.: Founded in 2000 and headquartered in Beijing, China, Baidu, Inc. is a technology-based media company engaged in providing Chinese-language internet search through its website Baidu.com.
2. Sensely Inc.: Founded in 2013 and headquartered in California, U.S., Sensely develops clinical assistance platforms that help clinicians manage their patients based on the severity of their symptoms and gain a better understanding of their health conditions.
3. Your.MD Limited: Founded in 2013 and headquartered in London, U.K., Your.MD Limited develops personalized health assistant applications for smartphone users worldwide.
4. Babylon Health Services: Founded in 2013 and headquartered in London, U.K., Babylon Health Services operates a subscription-based mobile healthcare application.
5. HealthTap, Inc.: Founded in 2010 and headquartered in California, U.S., HealthTap, Inc. operates an online platform that connects people looking for health information to a network of doctors who answer their health questions.
6. Buoy Health, Inc.: Founded in 2014 and headquartered in Massachusetts, U.S., Buoy Health, Inc. develops software that analyzes the symptoms communicated by users, produces a list of possible diagnoses, and guides them toward next steps for their care.
7. Infermedica SP: Founded in 2012 and headquartered in Wroclaw, Poland, the company provides artificial intelligence technology solutions for healthcare companies.
8. ADA Digital Health, Ltd.: Incorporated in 2011 and headquartered in Berlin, Germany, the company is a provider of AI-based health applications.
9. PACT Care BV: Incorporated in 2018 and headquartered in Amsterdam, Netherlands, PACT Care BV builds a platform that connects people with the services, resources, and people they need for their health.
10. Woebot Labs, Inc.: Incorporated in 2017 and headquartered in California, U.S., Woebot Labs, Inc. develops healthcare software. The company offers a platform that makes therapy accessible and stigma-free for patients suffering from anxiety, depression, and other mental health issues.
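To make the rule-based end of this spectrum concrete, the following is a minimal, ELIZA-style sketch of keyword pattern matching for a health-reminder bot. It is purely illustrative: the patterns, responses, and triage advice are invented for the example and bear no relation to any of the commercial products listed above.

```python
# Toy ELIZA-style chatbot: keyword/regex rules mapped to canned responses.
# Real healthcare chatbots layer NLP intent models and curated clinical
# content on top of (or in place of) simple rules like these.
import re

RULES = [
    (r"\b(chest pain|can't breathe|cannot breathe)\b",
     "That could be urgent. Please call emergency services now."),
    (r"\b(refill|prescription)\b",
     "I can remind you to request a refill. Which medication is it for?"),
    (r"\b(appointment|schedule)\b",
     "I can help schedule an appointment. What day works best for you?"),
    (r"\b(headache|fever|cough)\b",
     "How long have you had this symptom, and how severe is it (1-10)?"),
]
DEFAULT = "I'm not sure I understand. Could you rephrase that?"

def reply(message: str) -> str:
    """Return the first rule-matched response, or a fallback."""
    text = message.lower()
    for pattern, response in RULES:
        if re.search(pattern, text):
            return response
    return DEFAULT

print(reply("I need to schedule an appointment for my cough"))
# The scheduling rule matches first, so the bot offers to book an appointment.
```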

5.2.5 Expert systems
Our last “additional diagnostic technology or service” in this Chapter 5 is an AI application, expert systems, whose basic structure was described in detail in Chapter 3, page 53. Here we will discuss the expert system’s direct influences in healthcare.


Specifically, we will focus on expert systems as a part of current machine-learning-based approaches [185] that combine the best of both old and new-age approaches, i.e., latent causal relationships that are encoded in such expert systems but are further probabilistically refined and continually self-improving through learning from new data.

Medical diagnosis is difficult for a number of reasons:

• Cost of acquiring information, including testing and talking to the patient;
• Reliance on bias and the need to quickly arrive at a diagnosis [186];
• Lack of appropriate diagnostic tools to gather complete information about the patient for a multitude of diseases and disorders;
• Incomplete patient medical records; and
• Dynamic nature of the diagnostic process.

Expert systems seek to emulate the diagnostic decision-making ability of human experts. They include 2 components: (1) a knowledge base (KB), which encapsulates the evidence-based medical knowledge that is selected and organized by experts; and (2) a rule-based inference engine devised by the expert, which operates on the knowledge base to generate a differential diagnosis (see Figure 3–7). Diagnostic knowledge bases generally consist of diseases, findings (i.e., symptoms, signs, history, or lab results), and their relationships. In many cases, they explicitly lay out the relationships between a set of findings and the things that cause them (diseases). Specifically, the relationship strength (positive predictive value) captures how strongly one should consider a disease if the finding was observed, while the frequency (sensitivity) models how likely it is that a patient with a disease manifests a particular finding. The rule-based inference engine outputs a ranked differential diagnosis by scoring the diseases in the knowledge base as a function of their relationship strengths over all of the input findings. In other words, given a set of findings from a patient, the inference engine examines the strength of the relationships those findings have with each disease in the KB and sorts the diseases based on some defined scoring function [187] (a toy sketch of such a scoring step follows below).

To address the constraints on the expert system (e.g., dedicated experts, lag times in identifying new diseases, environmental “noise”), a new approach has been developed that lets the knowledge encoded in expert systems serve as the prior for learning a new diagnosis model from scratch. The central idea is that the expert system can be used as a data generator, and the generated synthetic medical cases can serve as labeled data for training a model [185]. This machine learning approach to expert systems provides a methodology that not only preserves the properties of the original expert system but also improves accuracy and flexibility. The approach enables combining data generated by an expert knowledge base with data coming from real-world medical records. These types of methods represent an exciting area of AI research, as they harness the massive step-function improvements deep learning has introduced in the past decade to achieve better accuracy and resiliency to noise. This approach can be a valuable tool in settings where several diverse but related data sets can be cobbled together [188].
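As a concrete (and deliberately simplified) illustration of the scoring described above, the sketch below ranks diseases by summing the strengths of their relationships to the observed findings. The tiny knowledge base, the weights, and the additive scoring rule are assumptions made for the example; real diagnostic engines use far larger curated KBs and more sophisticated, often probabilistic, scoring.

```python
# Toy rule-based inference engine: rank diseases by the summed strength of
# their relationships to the patient's observed findings.
# The mini knowledge base below is invented for illustration only.

KNOWLEDGE_BASE = {
    #  disease: {finding: relationship strength (0-1)}
    "influenza":         {"fever": 0.8, "cough": 0.7, "myalgia": 0.6},
    "strep pharyngitis": {"fever": 0.6, "sore throat": 0.9, "cough": 0.1},
    "allergic rhinitis": {"sneezing": 0.8, "itchy eyes": 0.7, "cough": 0.3},
}

def differential(findings: set[str]) -> list[tuple[str, float]]:
    """Score each disease over the input findings and return a ranked list."""
    scores = {}
    for disease, relations in KNOWLEDGE_BASE.items():
        scores[disease] = sum(relations.get(f, 0.0) for f in findings)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

patient_findings = {"fever", "cough", "myalgia"}
for disease, score in differential(patient_findings):
    print(f"{disease}: {score:.2f}")
# Influenza ranks highest for this finding set; the other diseases score lower.
```

The machine-learning approach described in [185] would use an engine like this (with a much richer KB) to generate synthetic labeled cases, which then train a statistical model that can also absorb real-world records.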


5.2.5.1 Literature reviews re AI’s influences on “additional diagnostic technologies”
1. Association of vital signs and process outcomes in emergency department patients [189]: In discharged elderly patients, specific vital sign abnormalities (systolic blood pressure [SBP] < 97 mm of mercury [mmHg], heart rate > 101 beats per minute, body temperature > 37.3 C, and pulse oximetry < 92% SpO2) were associated with twice the odds of admission within 7 days of emergency department discharge (Mayo Clinic Hospital) [190]. If vital sign abnormalities are consistently associated with undesirable process outcomes, AI programs could notify emergency physicians (EPs) prior to final disposition. Recent work has focused on the development of predictive tools based on ED vital signs to assist EPs in identifying patients at risk for decompensation [191]. Despite the associations of vital signs with negative process outcomes, most patients discharged or admitted to the floor with abnormal vital signs did not have adverse results, limiting the utility of vital signs alone as a predictive tool. This suggests a need to incorporate additional factors into any predictive algorithm. Age, serum bicarbonate, and lactic acid have separately been shown to be associated with inpatient deterioration [192]. AI may soon be able to prospectively identify patients at risk of both inpatient and outpatient deterioration. Although vital sign data by themselves were insufficient to create a sensitive and specific algorithm, the addition of other clinical data could lead to the development of a useful AI tool to alert EPs to potentially unsafe dispositions.
2. Recent patient health monitoring platforms incorporating internet of things-enabled smart devices [193]: Synergistic integration of the Internet of Things (IoT), cloud computing, and big data technologies in healthcare has led to the notion of “smart health.” Smart health is an emerging concept that addresses the provision of healthcare services for prevention, diagnosis, treatment, and follow-up management at any time or any place by connecting information technologies and healthcare. As a significant breakthrough in smart healthcare development, IoT-enabled smart devices allow medical centers to carry out preventive care, diagnosis, and treatment more competently. Smart health is a major up-and-coming research topic that is based on emerging ICT (Information and Communications Technology) and has attracted cross-disciplinary researchers. The use of IoT technology helps automate the entire patient care workflow. In other words, IoT-enabled smart devices have started to facilitate care and accurate treatment services and strategies by healthcare providers, including doctors, hospitals, and clinics. Patients can use these devices anywhere and immediately transmit their health conditions and test results using IoT-enabled devices and integrated apps, making it easier to fit testing into daily life. For doctors, real-time, remote patient monitoring makes it easier to stay up-to-date and in contact with patients without in-person visits.
3. Physicians’ perceptions of chatbots in healthcare: cross-sectional web-based survey [183]: Physicians believe in both costs and benefits associated with chatbots, depending on the logistics and specific roles of the technology. Physicians agreed that there were significant risks associated with chatbots, including inaccurate medical information.


These findings suggest that physicians may be comfortable with using chatbots to automate simple logistical tasks but do not believe that chatbots are advanced enough to replace complex decision-making tasks requiring expert medical opinion. This is not to say that healthcare chatbots carry a particular stigma; rather, it suggests that improvements are needed before future use can overcome the risks and challenges related to the technology. Nevertheless, nearly half of the physicians surveyed believed that healthcare chatbots could replace a significant role of human HCPs sometime in the future. However, chatbots may be best applied to help physicians rather than replace them. Chatbots are cost-effective to run and can automate repetitive administrative tasks, thus freeing time for physicians to provide higher quality, personalized, and empathetic care to their patients. This research lays the foundation for future investigations into the factors influencing physician adoption of chatbots. Providing physicians with evidence-based research on the advantages and disadvantages of this emerging technology will help inform them of its most appropriate use to complement their practice rather than impede their work.

So ends our journey through AI’s applications and influences in healthcare’s diagnostic technologies and services. As I said in the opening comments of this chapter, “Nothing in healthcare is more important than a timely and accurate diagnosis.” And indeed, that is so. However, once the diagnosis is established, our job in healthcare is then to provide proper care to ameliorate what is abnormal. This is when medical therapies and services become the capstone of health and wellness. Whereas AI’s applications in diagnostic technologies may be of direct value to the health professional, they are relatively indirect to the patient. In medical therapies, however, the technologies and services “touch” the patient directly, and thus, AI’s applications and influences may be more felt and appreciated. See how you feel about that as you read Chapters 6 and 7.

References [1] The future awakens. Life sciences and health care predictions 2022. Deloitte Consulting; 2017. [2] Singhal S, Carlton S. The era of exponential improvement in healthcare? McKinsey and Company; 2019. [3] Ballard B. Top 5 healthcare innovations shaping the industry’s future. The New Economy; 2018. [4] Unless footnoted separately, all Diagnostic Technologies descriptions in this section come from MedlinePlus. U.S. National Library of Medicine. U.S. Department of Health and Human Services. National Institutes of Health [Updated May 31, 2019]. [5] Lakhani P, Prater AB, Hutson RK, et al. Machine learning in radiology: applications beyond image interpretation. J Am Coll Radiol 2018;15(2):350 9. [6] Jha S, Topol EJ, Jha S, Topol EJ. Adapting to artificial intelligence: radiologists and pathologists as information specialists. JAMA 2016;316(22):2353 4. [7] Lambin P, Leijenaar RTH, Deist TM, et al. Radiomics: the bridge between medical imaging and personalized medicine. Nat Rev Clin Oncol 2017;14(12):749 62.


[8] Patel A. Health technology: the digital revolution - Part 1: AI & imaging. Science Entrepreneur; 2019. [9] Zaharchuk G, Gong E, Wintermark M, et al. Deep learning in neuroradiology. Am J Neuroradiol 2018; 39(10):1776 84. Available from: https://doi.org/10.3174/ajnr.A5543. [10] Sardanelli F, Hunink MG, Gilbert FJ, et al. Evidence-based radiology: why and how? Eur Radiol 2010; 20(1):1 15 Jan. [11] Kohli M, Prevedello LM, Filice RW, et al. Implementing machine learning in radiology practice and research. AJR Am J Roentgenol 2017;208(4):754 60. [12] Jiang F, et al. Stroke Vasc Neurol 2017; svn-2017-000101. [13] Titano JJ, Badgeley M, Schefflein J, et al. Automated deep-neural-network surveillance of cranial images for acute neurologic events. Nat Med 2018;24(9):1337 41. [14] Pesapane F, Codari M, Sardanelli F. Artificial intelligence in medical imaging: threat or opportunity? Radiologists again at the forefront of innovation in medicine. Eur Radiol Exp 2018;2:35. Available from: https://doi.org/10.1186/s41747-018-0061-6. [15] Hosny A, Parmar C, Quackenbush J, et al. Artificial intelligence in radiology. Nat Rev Cancer 2018; 18(8):500 10. Available from: https://doi.org/10.1038/s41568-018-0016-5. [16] Annarumma M, Withey SJ, Bakewell RJ, et al. Automated triaging of adult chest radiographs with deep artificial neural networks. Radiology 2019. Available from: https://doi.org/10.1148/ radiol.2018180921. [17] Zech J, Pain M, Titano J, et al. Natural language-based machine learning models for the annotation of clinical radiology reports. Radiology 2018;287(2):570 80. [18] Armitage H. Artificial intelligence rivals radiologists in screening X-rays for certain diseases. Stanford Medicine; 2018. [19] Cruz AS, Lins H, Medeiro RV, et al. Artificial intelligence on the identification of risk groups for osteoporosis, a general review. Biomed Eng Online 2018;17:12. Available from: https://doi.org/10.1186/s12938-018-0436-1. [20] Kanis JA. Diagnosis of osteoporosis and assessment of fracture risk. Lancet 2002;359(9321):1929 36. Available from: https://doi.org/10.1016/S0140-6736(02)08761-5. [21] Rodríguez-Ruiz A, Krupinski E, Mordang JJ, et al. Detection of breast cancer with mammography: effect of an artificial intelligence support system. Radiology 2019;290:305 14. [22] Cody R, Kent D, Chang L. Reduction of false-positive markings on mammograms: a retrospective comparison study using an artificial intelligence-based CAD. J Digital Imaging 2019;1 7. [23] Alcusky M, Philpotts L, Bonafede M, et al. The patient burden of screening mammography recall. J Women’s Health 2014;23(S1):S-11 9. [24] Siu AL. Screening for breast cancer: U.S. Preventive services task force recommendation statement. Ann Intern, Med 2016;164:279 96. [25] Bond M, Pavey T, et al. Systematic review of the psychological consequences of false-positive screening mammograms. Health Technol Assess 2013;17(13):1 170. [26] Andrykowski MA, et al. Psychological impact of benign breast biopsy: a longitudinal, comparative study. Health Psychol 2002;21(5):485 94. [27] Gandomkar Z, Mello-Thoms C. Visual search in breast imaging: a review. Br J Radiol 2019;20190057. Available from: https://doi.org/10.1259/bjr.20190057. [28] Sitek A, Wolfe JM. Assessing cancer risk from mammograms: deep learning is superior to conventional risk models. Radiology 2019. Available from: https://doi.org/10.1148/radiol.2019190791. [29] To the Mammography Quality Standards Act, FDA; 2019. [30] Yala A, Lehman C, Schuster T, et al. 
A deep learning mammography-based model for improved breast cancer risk prediction. Radiology 2019;292:60 6.


[31] Seah JCY, Tang JSN, Kitchen A, et al. Chest radiographs in congestive heart failure: visualizing neural network learning. Radiology 2019;290(2):514 22. [32] The U.S. Food and Drug Administration. Fluoroscopy. 06/14/2019. [33] Kalanjeri S, et al. State-of-the-art modalities for peripheral lung nodule biopsy. Clin Chest Med 2018;39:125 38. [34] Han P, Chen C, Li P, et al. Robotics-assisted versus conventional manual approaches for total hip arthroplasty: a systematic review and meta-analysis of comparative studies. Int Jour Med Robot 2019. [35] Murayama T, Ohnishi H, Mori T, Okazaki Y, Sujita K, Sakai A. A novel non-invasive mechanical technique of cup and stem placement and leg length adjustment in total hip arthroplasty for dysplastic hips. Int Orthop 2015;39(6):1057 64. [36] Joskowicz L, Hazan EJ. Computer-aided orthopedic surgery: incrementall shift or paradigm change? Med Image Anal 2016;33:84 90. [37] Zheng G, Nolte LP. Computer-assisted orthopedic surgery: current state and future perspective. Front Surg 2015;2:66. [38] Sippey M, Maskal S, Anderson M, Marks J. Use of fluoroscopy in endoscopy: indications, uses, and safety considerations. Ann Laparosc Endosc Surg 2019;4:59. [39] Sethi S, Barakat MT, Friedland S, et al. Radiation training, radiation protection, and fluoroscopy utilization practices among US therapeutic endoscopists. Dig Dis Sci 2019. [40] www.Imalogix.com. Brings fluoroscopy capabilities to radiation dose management platform. Imaging Technology News; 2018. [41] Rizzo S, Botta F, Raimondi S, et al. Radiomics: the facts and the challenges of image analysis. Eur Radiol Exp 2018. Available from: https://doi.org/10.1186/s41747-018-0068-z. [42] “OMICS.” Definitions.net. STANDS4 LLC, 2019. Web. 2019. [43] Gillies RJ, Kinahan PE, Hricak H. Radiomics: images are more than pictures. They are data. Radiology 2016;278(2):563 77. [44] Larue RTHM, van Timmeren JE, de Jong EEC, et al. Influence of gray level discretization on radiomic feature stability for different CT scanners, tube currents, and slice thicknesses: a comprehensive phantom study. Acta Oncol 2017;56(11):1544 53. [45] Ergen B, Baykara M. Texture based feature extraction methods for content-based medical image retrieval systems. Biomed Mater Eng 2014;24(6):3055 62. [46] Parekh VS, Jacobs MA. Integrated radiomic framework for breast cancer and tumor biology using advanced machine learning and multiparametric MRI. NPJ Breast Cancer 2017;3(1):43. [47] Arimura H, Soufi M, Kamezawa H, et al. Radiomics with artificial intelligence for precision medicine in radiation therapy. J Radiat Res 2019;60(1):150 7. Available from: https://doi.org/ 10.1093/jrr/rry077. [48] Cook GJR, Goh V. What can artificial intelligence teach us about the molecular mechanisms underlying disease? Eur J Nucl Med Mol Imaging 2019. [49] Aerts HJ. The potential of radiomic-based phenotyping in precision medicine: a review. JAMA Oncol 2016;2(12):1636 42. [50] Kolossváry M, Kellermayer M, Merkely B, et al. Cardiac computed tomography radiomics: a comprehensive review on radiomic techniques. J Thorac Imaging 2018;33(1):26 34. [51] O’Connor JP, Aboagye EO, Adams JE, et al. Imaging biomarker roadmap for cancer studies. Imaging biomarker roadmap for cancer studies. Nat Rev Clin Oncol 2017;14(3):169 86. [52] Chang L. What is a CT scan? WebMD. National Institute of Biomedical Imaging and Bioengineering: “Computed Tomography” 2018. [53] Bresnick J. Top 5 use cases for artificial intelligence in medical imaging. Health IT Analytics; 2018.


[54] https://www.acr.org/Media-Center/ACR-News-Releases/2018/ACR-Data-Science-Institute-ReleasesLandmark-Artificial-Intelligence-Use-Cases. [55] Schoenhagen P, Zimmermann M. Artificial intelligence, and cardiovascular computed tomography. J Med Artif Intell 2018;1:11. [56] Schoenhagen P, Numburi U, Halliburton SS, et al. 3 -dimensional imaging in the context of minimally invasive and transcatheter cardiovascular interventions using multi-detector computed tomography: from pre-operative planning to intra-operative guidance. Eur Heart J 2010;31:2727 40. [57] Arbabshirani MR, Fornwalt BK, Mongelluzzo GJ, et al. Advanced machine learning in action: identification of intracranial hemorrhage on computed tomography scans of the head with clinical workflow integration. NPJ Digital Med 2018;1:9. Available from: https://doi.org/10.1038/s41746017-0015-z. [58] Doi K. Computer-aided diagnosis in medical imaging: historical review, current status, and future potential. Comput Med Imaging Graph 2007;31:198 211. [59] Winsberg F, Elkin M, Macy Jr. J, Bordaz V, Weymouth W. Detection of radiographic abnormalities in mammograms utilizing optical scanning and computer analysis 1. Radiology 1967;89:211 15. [60] Monnier-Cholley L, et al. Computer-aided diagnosis for the detection of interstitial opacities on chest radiographs. Ajr Am J Roentgenol 1998;171:1651 6. [61] Yoshida H, Masutani Y, Maceneaney P, Rubin DT, Dachman AH. Computerized detection of colonic polyps at ct colonography based on volumetric features: pilot study 1. Radiology 2002;222:327 36. [62] Arbabshirani MR, Plus S, Sui J, Calhoun VD. Single subject prediction of brain disorders in neuroimaging: promises and pitfalls. Neuroimage 2016;145:137 65. [63] González G, Ash SY, Vegas-Sánchez-Ferrero G, et al. Disease staging and prognosis in smokers using deep learning in chest computed tomography. AJRCCM 2018;197(2). [64] Ren S, He K, Girshick R, Sun J. Faster R-CNN: towards real-time object detection with region proposal networks. In: Cortes C, Lawrence ND, Lee DD, Sugiyama M, Garnett R, editors. Advances in neural information processing systems 28 (NIPS 2015). New York: Curran Associates, Inc.; 2015. p. 91 9. [65] Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017;542:115 18. [66] General ultrasound. RadiologyInfo.org. 2018. [67] Verger R. AI could make MRI scans as much as ten times fasterIn medical imaging, fewer data could be better. Popular Science; 2018. [68] Brigham Health. Inside the advanced multimodality image guided operating suite. Brigham and Women’s Hospital; 2019. [69] Piazza G. Artificial intelligence enhances MRI scans. NIH Research Matters; 2018. [70] Wenya L, Hosny A, Schabath MB, et al. Artificial intelligence in cancer imaging: clinical challenges and applications. CA: A Cancer Jour Clinicians 2019. Available from: https://doi.org/10.3322/ caac.21552. [71] Rudie JD, Rauschecker AM, Bryan RN, et al. Emerging applications of artificial intelligence in neuro-oncology. Radiology 2019. Available from: https://doi.org/10.1148/radiol.2018181928. [72] Nuclear Medicine. Johns Hopkins Medicine. Johns Hopkins University, 2019. [73] What is positron emission tomography? RadioInfo.org; 2019. [74] Hall M. Artificial intelligence and nuclear medicine. Nucl Med Commun 2019;40(1):1 2. ´ D, Białowa˛s J. Application of artificial neural networks to identify Alzheimer’s Disease using [75] Swietlik cerebral perfusion SPECT data. 
Int J Env Res Public Health 2019;16(7):1303. [76] Alexander A, McGill M, Tarasova A, et al. Scanning the future of medical imaging. J Am Coll Radiol 2019;16:501 7.


[77] Waldron T. 4 future trends in medical imaging that will change healthcare. Definitive Healthcare; 2019. [78] Kusta S. Artificial intelligence within ultrasound. Signify Research; 2018. [79] Correa M, Zimic M, Barrientos F, et al. Automatic classification of pediatric pneumonia based on lung ultrasound pattern recognition. PLoS One 2018;13(12):e0206410. Available from: https://doi.org/ 10.1371/journal.pone.0206410. [80] UNICEF. Pneumonia and diarrhoea: tackling the deadliest diseases for the worlds most impoverished children. New York: UNICEF; 2012. 2013. [81] Trinavarat P, Riccabona M. Potential of ultrasound in the pediatric chest Eur J Radiol 2014;83 (9):1507 18PMID. Available from: 24844730. [82] Wang S-C. Artificial neural network. Interdisciplinary computing in Java programming. Springer; 2003. p. 81 100. [83] Wu G, Zhou L, Xu J, et al. Artificial intelligence in breast ultrasound. World J Radiol 2019;11(2):19 26. Available from: https://doi.org/10.4329/wjr.v11.i2.19. [84] Uniyal N, Eskandari H, Abolmaesumi P, et al. Ultrasound RF time series for classification of breast lesions. IEEE Trans Med Imaging 2015;34(2):652 61. [85] Sidiropoulos KP, Kostopoulos SA, Glotsos DT, et al. Multimodality GPU-based computer-assisted diagnosis of breast cancer using ultrasound and digital mammography images. Int J Comput Assist Radiol Surg 2013;8(4):547 60. [86] Malik B, Klock J, Wiskin J, Lenox M. Objective breast tissue image classification using quantitative transmission ultrasound tomography. Sci Rep 2016;6:38857. [87] Wang HY, Jiang YX, Zhu QL, et al. Automated breast volume scanning: identifying 3-D coronal plane imaging features may help categorize complex cysts. Ultrasound Med Biol 2016;42:689 98. [88] Plichta JK, Ren Y, Thomas SM, Greenup RA, et al. Implications for breast cancer restaging based on the 8th edition AJCC staging manual. Ann Surg. 2018;271(1):169 76. [89] Archana KV, Vanithamani R. Diagnosis of coronary artery diseases and carotid atherosclerosis using intravascular ultrasound images. Artif Intell 2019;281 8. [90] Wang Z, Zhao S, Bai Y. Artificial intelligence as a third eye in lesion detection by endoscopy. Clin Gastroenterol Hepatol 2018;16(9):1537. [91] Kegelmeyer Jr WP, Pruneda JM, Bourland PD. Computer-aided mammographic screening for spiculated lesions. Radiology 1994;191(2):331 7. [92] LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015;521(7553):436 44. Available from: https:// doi.org/10.1038/nature14539. [93] Kudo S, Mori Y, Misawa M. Artificial intelligence and colonoscopy: current status and future perspectives. Digestive Endosc 2019. Available from: https://doi.org/10.1111/den.13340. [94] Bosworth T. GI disease screening with artificial intelligence is close. GI and Hepatology News; May 14, 2019. [95] de Groen PC. Artificial intelligence in upper endoscopy: location, location, location. gastroenterology. Expert Opinion / Commentary; 2019. [96] Duker JS, Freund B, Sarraf D. Retinal imaging: choosing the right method. Eyenet Mag Amer Acad Ophth 2014. [97] Shu D, Ting W, Pasquale LR, et al. Artificial intelligence and deep learning in ophthalmology. Br J Ophthalmol 2018;103(2). [98] Poplin R, Varadarajan AV, Blumer K, et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat Biomed Eng 2018;2:158 64. [99] Rajalakshmi R, Subashini R, Anjana RM, et al. Automated diabetic retinopathy detection in smartphone-based fundus photography using artificial intelligence. Eye 2018;32:1138 44.


[100] Zheng C, Johnson T, Garg A, et al. Artificial intelligence in glaucoma. Curr Opin Ophthalmol 2019; 30(2):97 103. [101] Molenda SR, Mostow EN. The introduction of skin self-photography as a supplement to skin self-examination for the detection of skin cancer. J Am Acad Dermatol 2014;70:e15. [102] Harting MT, DeWees JM, Vela KM, et al. Medical photography: current technology, evolving issues, and legal perspectives. Int J Clin Pract 2015;69(4):401 9. Available from: https://doi.org/10.1111/ ijcp.12627. [103] Hogarty DT, Su JC, Phan K, et al. Artificial intelligence in dermatology—where we are and the way to the future: a review. Am J Clin Dermatol 2019;1 7. [104] Catania LJ, Nicolitz E. Artificial intelligence and its applications in vision and eye care. Adv Ophthalmol Optometry 2018;3(1):21 38. [105] Reiter O, Rotemberg V, Kose K, et al. Artificial intelligence in skin cancer. Curr Dermatol Rep 2019; 8(3):133 40. [106] Christopher M, Belghith A, Bowd C, et al. Performance of deep learning architectures and transfer learning for detecting glaucomatous optic neuropathy in fundus photographs. Nat Sci Rep 2018;8 Article number: 16685. [107] Weiss K, Khoshgoftaar TM, Wang DD. A survey of transfer learning. J Big Data 2016;3. [108] Szegedy C, Ioffe S, Vanhoucke V. Inception-v4, Inception-ResNet, and the impact of residual connections on learning Pattern Recognit Lett 2016;42Arxiv 12. Available from: https://doi.org/10.1016/ j.patrec.2014.01.008. [109] Nerminathan A, Harrison A, Phelps M, et al. Doctors’ use of mobile devices in the clinical setting: a mixed-methods study. Intern Med J 2017;47(3):291 8. Available from: https://doi.org/10.1111/ imj.13349. [110] U.S. Food and Drug Administration. Tests used in clinical care. 2018. [111] Politakis P, Weiss SM. Using empirical analysis to refine the expert system knowledge bases. Artif Intell 1984;22(1):23 48. [112] Friedman B. Genalyte reveals a POCT system with QC in the cloud. Clinical Lab Industry News; 2018. [113] Menon PK, Medicity T. Effect of artificial intelligence in the clinical laboratory. Medlab Magazine. [email protected]; 2019. [114] Durant TJS. Machine learning and laboratory medicine: now and the road ahead. AACC.org; 2019. [115] Lindor MN, Thibodeau SN, Burke W. Whole-genome sequencing in healthy people. Mayo Clin Proc 2017;92:159 72. Available from: https://doi.org/10.1016/j.mayocp.2016.10.019. [116] Genetics Home Reference (GHR). What is a gene? U.S. Library of Congress; 2019. [117] The definition of genetics. www.dictionary.com. Retrieved October 25, 2018. [118] Genetics Home Reference (GHR). What are genetics and genomics? U.S. Library of Congress; 2019. [119] Genetics vs. Genomics. The Jackson Laboratory; 2019. [120] Genetics Home Reference. What is a genome? U.S. National Library of Medicine. USA.gov; 2019. [121] National Human Genome Institute Research Institute. DNA sequencing factsheet. 2015. [122] Transcriptomics. Nature.com; 2019. [123] Pocket K. No. 15: ‘Omics’ sciences: genomics, proteomics, and metabolomics. International Service for the Acquisition of Agri-biotech Applications (ISAAA); 2019. [124] National Human Genome Institute Research Institute. A brief guide to genomics. 2003. [Updated August 27, 2015]. [125] Genetics Home Reference (GHR). What is epigenetics? U.S. Library of Congress; 2019.


[126] NCI Dictionary of Genetics Terms. U.S. Department of Health and Human Services. National Institutes of Health. National Cancer Institute; 2019. USA.gov. [127] Marr B. The wonderful ways artificial intelligence is transforming genomics and gene editing. Forbes; 2018. [128] Genetic Testing. Lister Hill National Center for Biomedical Communications U.S. National Library of Medicine. National Institutes of Health. Department of Health & Human Services. https://ghr.nlm.nih. gov/; 2019. [129] Brodwin E. Genetic testing is the future of healthcare, but many experts say companies like 23andMe are doing more harm than good. Business Insider; 2019. [130] He KY, Ge D, He MM. Big data analytics for genomic medicine. Int J Mol Sci 2017. [131] Collins FS, Varmus H. A new initiative on precision medicine. N Engl J Med 2015;372:793 5. [132] Vassy JL, Korf BR, Green RC. How to know when physicians are ready for genomic medicine. Sci Transl Med 2015;7:287fs219. [133] Gottesman O, Kuivaniemi H, Tromp G, et al. The electronic medical records and genomics (eMERGE) network: past, present, and future. Genet Med 2013;15:761 71. [134] Peters SG, Buntrock JD. Big data and the electronic health record. J Ambul Care Manag 2014;37:206 10. [135] Bi WL, Hosny A, Schabath MB, et al. Artificial intelligence in cancer imaging: clinical challenges and applications. Cancer J Clin 2019. Available from: https://doi.org/10.3322/caac.21552. [136] Staff Reporter. Roche to acquire outstanding shares of foundation medicine for $2.4B. GenomeWeb; 2018. [137] Das R. The flatiron health acquisition is a shot in the arm for Roche’s oncology real-world evidence needs. Forbes; 2018. [138] Immunogenetics. Nature.com; 2019. [139] Gómez Perosanz M, Pappalardo F. Computational immunogenetics. Encycl Bioinforma Comput Biol 2019. [140] Agarwal SK, Weinstein LS. Epigenetics. Genetics of bone biology and skeletal disease. 2nd ed. 2018. [141] Cision. Precision medicine market size to exceed $87 billion by 2023: Global Market Insights Inc.; 2016, [142] Harmston N, Filsell W, Stumpf MP. What the papers say: text mining for genomics and systems biology. Hum Genom 2010;5:17 29. [143] Xu J, Yang P, Xue S, et al. Translating cancer genomics into precision medicine with artificial intelligence: applications, challenges, and future perspectives. Hum Genet 2019;138(2):109 24. [144] Misra MK, Damotte V, Hollenbach JA. The immunogenetics of neurological disease. Immunology 2017. Available from: https://doi.org/10.1111/imm.12869. [145] Mazzone R, Zwergel C, Artico M, et al. The emerging role of epigenetics in human autoimmune diseases. The emerging role of epigenetics in human autoimmune disorders. Clin Epigenetics 2019; 11 Article number: 34. [146] Moosavi A, Motevalizadeh AA. Role of epigenetics in biology and human diseases. Iran Biomed J 2016;20(5):246 58. [147] Trebeschi S, Drago SG, Birkbak NJ, et al. Predicting response to cancer immunotherapy using noninvasive radiomic biomarkers. Ann Oncol 2019;30(6):998 1004. [148] Coroller TP, Agrawal V, Narayan V, et al. Radiomic phenotype features predict pathological response in non-small cell lung cancer. Radiother Oncol 2016;119(3):480 6. [149] Thompson D. Waking hospital patients to check vital signs may do more harm than good. MedicineNet; 2019. [150] Sotera. ViSi. Sotera Wireless Inc.; 2019.



6 Current AI applications in medical therapies and services

The range and depth of today's medical therapies and services are enormous. Any attempt to discuss them comprehensively and understandably requires that they be divided into generic categories based on their purpose and goal(s). That being said, doing so yields a fairly extensive listing, hopefully one with no major omissions. Thus, the listing I have evolved for this Chapter includes:

A. Medical care (primary, secondary, tertiary, quaternary care);
B. Pharmaceutical and biopharmaceutical care;
C. Hospital care;
D. Nursing care;
E. Home health care, nursing home, and hospice care;
F. Concurrent medical conditions ("Comorbidity," aka "Multimorbidity");
G. Precision medicine;
H. Medical/surgical robotics;
I. Stem cells and regenerative medicine;
J. Genetics and genomics therapies.

One other guidepost I must explain in furtherance of the theme of this book as a user-friendly guide to AI in healthcare and bioscience, and in the interest of a comprehensive and understandable approach to the healthcare-related categories listed above: where applicable, I will reference the AI-related technologies and services presented in the previous Chapters (2-5) to provide a reasonably comprehensive discussion for each of the medical therapy categories above. A listing with Chapter notations for each of the AI-related technologies and services includes:

1. Big data analytics (Chapters 3 and 4);
2. Health information and records (EHR) (Chapters 3 and 4);
3. Medical research and clinical trials (Chapter 4);
4. Blockchain (Chapters 3 and 4);
5. Internet of Things (IoT) (Chapter 3);
6. Telemedicine/telehealth (Chapter 5);
7. Chatbots (Chapter 5);
8. Natural language processing (NLP) (Chapter 3);
9. Expert systems (Chapters 3 and 5);
10. Robotics (Chapters 3 and 6);
11. Population health (demographics and epidemiology) (Chapter 4);
12. Healthcare analytics (Chapter 4);
13. Preventive medicine/healthcare (Chapter 4);
14. Public health (Chapter 4);
15. Access and availability (Chapter 6).

In the interest of "saving trees," I will not repeat the full descriptions of each of the AI technologies and services listed above, all of which have been thoroughly covered in previous Chapters (identified with each listed item). However, I would urge the reader to refer back to the respective Chapter when necessary to review a given topic before reading about its applications in the specific medical therapies or services discussed in this Chapter.

You will also notice throughout the discussions in this Chapter that there are repeated "cross-references" between AI technologies, therapies, and the health-related categories. This is the product of extensive horizontal and vertical integration and interoperability among these technologies, therapies, and categories, all ultimately enhancing the overall delivery of care. In this approach, the business considerations from Chapter 4 carry as much significance as the diagnostic considerations from Chapter 5; thus, both will appear frequently in this comprehensive review.

The applications of AI-related therapies in each of the health care technologies and service categories are extensive and are having profound and beneficial effects for you, for those you care for, and for your loved ones. Many of these AI applications you will recognize from your own health care; many others are embedded and concealed within the delivery of your care. All are providing benefits to you and to the health care system, and yes, all of them are "disruptive" in nature. As we discuss them by category, you will begin to appreciate how many AI therapies are affecting you and have become part of your personal life, your professional career, your care, and your health and wellness.

Finally, one other point regarding this Chapter's categories and content: the total body of related literature is enormous. What I have tried to do is select current information that I hope you will find in most cases (but not all) to be relevant to your interests. Some topics are addressed with descriptive details, and some are presented through clinical studies and research. Once again, if a subject is not discussed adequately for your "need to know," online and/or library research will provide as much additional information as you could want or need. (See the listing of resources I mention in Section 1 Introduction, Table Intro 2.)

6.1 Medical care (primary, secondary, tertiary, quaternary care)

The essence of health care starts with the health care professional providing care to persons in need. That care is usually divided into primary care, secondary care, tertiary care, and quaternary care. While definitions vary widely, there are broad, general concepts that define each level [1]:

• Primary care: This level provider, often referred to as the "gatekeeper," is the first point of medical consultation. A primary care physician (PCP) (also called a general practitioner or family physician) or a nurse practitioner (nurse clinician) usually serves as the patient's entry point to the health care system. Care is provided at the doctor's office, a health center, or an urgent care center. Emergency rooms are also a source (not optimal) of primary care for uninsured patients.
• Secondary care: This level includes medical specialists, surgeons, and other health professionals whose training and services focus primarily on a specific field (e.g., neurology, cardiology, dermatology, oncology, orthopedics, ophthalmology, and other specialized fields) and who typically don't have initial contact with patients. Acute care hospitals are also considered a category of secondary care. This category covers care to admitted patients, the hospital emergency room (ER), childbirth, medical imaging services, and the intensive care unit. Physical therapists, respiratory therapists, speech therapists, occupational therapists, and other allied health professionals often work in secondary care.
• Tertiary care: This level infers a higher level of care in a hospital, usually on referral from a primary or secondary provider. Health professionals and equipment at this level are highly specialized. Tertiary care services include areas such as cardiac surgery, cancer treatment, acute and chronic pulmonary care and management, burn treatment, plastic surgery, neurosurgery, and other complicated treatments or procedures.
• Quaternary care: This level identifies a standard of care more complicated than tertiary care. Highly specialized care, experimental treatments, and complicated procedures are considered quaternary care.

6.1.1 Big data analytics and AI in medical care

There is no AI technology more vital to and integrated into medical care than big data analytics. It is generated from historical clinical activities and has significant effects on medical care and the health care industry. It assists in planning treatment paths for patients, processing clinical decision support (CDS), and improving health care technology and systems [2]. Big data comes from hospital information resources, surgeons' work, anesthesia activities, physical examinations, radiography, magnetic resonance imaging (MRI), computed tomography (CT), patient information, pharmacy, treatment, medical imaging, and imaging reports [3]. Clinical activities generate a large number of records, including patients' identification information, diagnoses, medication care, physicians' notes, and sensor data [3].

AI and biomedical informatics contend with amounts of data beyond the capabilities of standard data processing technologies. AI analytical methods have been developed to deal with such data, combining statistical and computational techniques. Thus, "big data" has always characterized medical care [4], and the need for "big data analytics" has driven the development of corresponding methods. New deep machine-learning methods and data science have been explored by computer scientists, biomedical informaticians, and AI researchers for decades; indeed, some experts consider machine learning and AI nearly synonymous [4].

6.1.2 Health information and records (EHR) and AI in medical care

Important health information from clinical activities includes the electronic health record (EHR)/electronic medical record (EMR), the personal health record (PHR), and medical images. The EMR encompasses structured and unstructured data containing all of a patient's medical activity information and is often used for treatment decisions. The EHR, by contrast, covers broader health-related information, such as medical and financial information closely related to an individual's health care [5]. The EHR focuses on health management, whereas the EMR focuses on the clinical diagnosis of patients. The EHR also contains data on demographics, medical history, medications and allergies, immunization status, laboratory test results, radiology images, vital signs, personal statistics, and billing information. The EMR is the record of the care delivery organization (CDO) and belongs to the CDO, while the EHR is a subset of that record and belongs to the patients or stakeholders [6].

The PHR (personal health record) is derived from a variety of the patient's health and social information. Its primary role is as a data source for medical analysis and clinical decision support [7]. For example, it includes data on allergies and adverse drug reactions (ADRs), chronic diseases, family history, illnesses and hospitalizations, imaging reports, laboratory test results, medications and dosing, prescription records, surgeries and other procedures, vaccinations, nursing home events, and observations of daily living (ODLs). Medical imaging, on the other hand, mainly comes from X-ray, CT, PET, radiography, magnetic resonance imaging (MRI), nuclear medicine, ultrasound, elastography, tactile imaging, photoacoustic imaging, echocardiography, and so on [7].

Even though EHR use is extensive now, receptionists, medical assistants, and doctors still do a lot of manual entry. Meanwhile, voice recognition capabilities (NLP) are increasingly replacing keyboards: the user can now simply speak into the EHR system rather than typing in information. Video-based image recognition capabilities will also supplement EHRs and provide additional insight into patients' conditions that AI is capable of analyzing and humans may miss [8].

6.1.3 Research/clinical trials and AI in medical care

The application of emerging digital technologies has increased our understanding of disease in larger patient cohorts and has fostered the development of personalized therapies. For example, most of the new molecular entities approved by the U.S. FDA in recent years were designed to target disease aberrations involved in disease initiation and maintenance, a foundation of precision medicine [9]. Machine learning and computer vision enhance aspects of human visual perception and identify clinically meaningful patterns in, for example, imaging data [10]. Neural networks are being used for tasks ranging from medical image segmentation, generation, and classification to the prediction of clinical data sets [11]. Perspectives and commentaries have recently highlighted applications of deep neural networks (DNNs) to imaging data sets, pharmaceutical properties of compounds, clinical diagnoses and genomics, computer vision applications for medical imaging, and NLP applied to EHRs. Studies have predominantly focused on data from the primary care or hospital ecosystem and on early drug discovery applications [12].

Recent research in computer science and medicine, a proactive regulatory landscape, and the availability of large data sets offer examples of, and the promise of, delivering faster cures to patients through AI and ML methods. This perspective seeks to engage and inform researchers from fields such as computer science, biology, medicine, engineering, and biostatistics, as well as policymakers, about the value of emerging AI and ML technologies in solving the challenges facing modernization of the current clinical development process [13].

6.1.4 Blockchain and AI in medical care

AI could be incorporated into a unified health information blockchain to accelerate the discovery, development, and delivery of personalized medicine. If the blockchain is populated with codified, abstracted, and distributed health data, with non-alterable control over privacy, AI agents could then be permitted by patients and organizations to search those records, scanning a representation of the patient's databank for health-related red flags, trends, insights, outbreaks, and overlooked cures. Health data reflects who we are at our hereditary level, the care we receive, and its outcomes.

Corporations and universities would find such an accessible arrangement of data on the blockchain valuable; however, the transparency blockchain offers, along with the sheer size of medical records, may make storing the records themselves on-chain impractical. With the correct model, though, there is no reason why the broader learning potential of the data generated in health care can't be utilized, as long as the records are not stored on the blockchain directly. Instead, the blockchain can be used as a mechanism by which access and authorization for medical records are offered.

AI models use semi-structured datasets. They uncover hidden patterns and create meaning by linking thousands of disparate entities. A permissible distribution of health information via blockchain could make available a wide array of executable datasets for these highly specialized, trusted, narrow AI agents for the first time in history. The ultimate goal would be not just aggregating and analyzing data, but bettering the care delivered to patients everywhere [14].
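To make the access-and-authorization idea above more concrete, here is a minimal, hedged sketch in Python of a consent ledger in which only pointers and permissions, never the medical records themselves, are chained together. All identifiers, field names, and the grant format are invented for illustration and do not correspond to any particular blockchain platform.

```python
import hashlib
import json
import time

def make_block(prev_hash, payload):
    """Create a tamper-evident ledger entry; the medical record itself stays off-chain."""
    block = {
        "timestamp": time.time(),
        "payload": payload,          # e.g., a consent grant, not the record
        "prev_hash": prev_hash,
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

# Patient grants a (hypothetical) research AI agent read access to one record category.
genesis = make_block("0" * 64, {"event": "genesis"})
grant = make_block(genesis["hash"], {
    "patient_id": "pseudonym-123",        # pseudonymous pointer, no PHI on-chain
    "grantee": "oncology-research-agent",
    "scope": "imaging-reports",
    "action": "grant-read",
})

# Any later change to the grant would change its hash and break the chain.
print(grant["hash"][:16], "links to", grant["prev_hash"][:16])
```

Because each entry's hash covers the previous entry's hash, retroactively editing or revoking a grant without leaving a trace would be evident to every holder of the ledger, which is the property the section above relies on.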

6.1.5 Internet of Things (IoT) and AI in medical care [15]

The application of the Internet of Things (IoT) in healthcare, as with so many things, is enormous. It includes, at a minimum, remote monitoring, personal healthcare, smart sensors (see CarePredict, Chapter 5, page 125), and medical device integration. Also included are pharmaceutical industry integration, healthcare insurance, healthcare building facilities, robotics, smart pills, and even treatments of diseases. The IoT is keeping patients safe and healthy as well as improving how physicians deliver care.

To some degree, IoT in healthcare is heavily focused on remote monitoring and telemonitoring, and on the tracking, monitoring, and maintenance of assets. A principal function involves the EHR and the concept of computerized health records. Its use with the EMR promises to advance coordination of medical care and nursing home care, facilitate interaction with patients and families, and improve efficiency. This has also become the main idea of telehealth (see below), that is, a pool of technologies delivering virtual medical, health, and education services. The integration of EHR systems with the IoT can create broad personalized healthcare solutions that could enable the following:

• Connecting any wearable or portable device to the cloud, and pulling and analyzing collected patient data in real time;
• Monitoring vital health indicators collected by portable devices;
• Visualizing charts and diagrams based on the collected data;
• Monitoring patients at home with the help of live video and audio streaming;
• Sending intelligent emergency notifications to a physician and/or family.
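As one illustration of the monitoring items in the list above, the following is a minimal sketch of a threshold-based vital-sign alert rule. The metric names, thresholds, and patient identifier are invented for illustration; a deployed system would personalize thresholds per patient and route alerts through clinical workflows rather than printing them.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class VitalReading:
    patient_id: str
    metric: str        # e.g., "heart_rate", "spo2"
    value: float

# Illustrative alert thresholds only; not clinical guidance.
ALERT_RULES: Dict[str, Callable[[float], bool]] = {
    "heart_rate": lambda v: v < 40 or v > 130,
    "spo2": lambda v: v < 90,
}

def check_reading(reading: VitalReading) -> Optional[str]:
    """Return an alert message if the reading breaches its rule, else None."""
    rule = ALERT_RULES.get(reading.metric)
    if rule and rule(reading.value):
        return f"ALERT: {reading.patient_id} {reading.metric}={reading.value}"
    return None

print(check_reading(VitalReading("home-patient-7", "spo2", 87.0)))
```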

6.1.6 Telehealth and AI in medical care [16]

Health care inevitably necessitates some face-to-face clinical interaction, with required follow-up depending on the circumstances. Nonetheless, telehealth and information and communication technology (ICT) tools can be used to address many critical issues in health care, such as the misdistribution of demand versus supply of healthcare services. AI could assist with this issue through algorithms that match the availability of care providers who have the appropriate clinical skillsets to the need for those skillsets in varying locations. Telehealth also introduces several operational issues, such as failures in the technology or the unavailability of a remote care clinician. AI could potentially alleviate such situations by providing mechanisms for human or virtual interactions to occur, thereby addressing difficulties in the timing and availability of clinicians, such as nursing home coverage and the time it takes to understand the patient's problem or take a history.
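A minimal sketch of the kind of matching just described follows. It uses a greedy first-fit assignment over invented clinician and request data; a real scheduler would weigh wait times, licensure, time zones, and many other constraints.

```python
# Invented availability and request data for illustration only.
CLINICIANS = [
    {"name": "Dr. A", "skills": {"cardiology"}, "slots": 1},
    {"name": "Dr. B", "skills": {"dermatology", "primary care"}, "slots": 2},
]

def assign(requests):
    """Greedy first-fit assignment of telehealth requests to available clinicians."""
    assignments = []
    for req in requests:
        for clinician in CLINICIANS:
            if req["specialty"] in clinician["skills"] and clinician["slots"] > 0:
                clinician["slots"] -= 1
                assignments.append((req["patient"], clinician["name"]))
                break
    return assignments

print(assign([
    {"patient": "P1", "specialty": "dermatology"},
    {"patient": "P2", "specialty": "cardiology"},
]))
# [('P1', 'Dr. B'), ('P2', 'Dr. A')]
```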

6.1.7 Chatbots and AI in medical care [16]

Virtual assistants (chatbots) can provide a viable alternative to traditional healthcare delivery models in situations such as supporting cognitively impaired individuals [17], improving the accessibility of online clinical information [18], or providing avatar-based patient agents for the elderly at home and in nursing facilities [19]. These cases rely upon a more sophisticated conversational level and knowledge base, which AI can accommodate through deeper understanding and data accumulation by an AI agent. Sometimes an authentic conversational dialogue must incorporate aspects of affective behavior, using multimodal contextual awareness mechanisms [20], for example, when issues from a patient's past interactions or past medical history need to be considered in making conversational decisions. Here, a personalized model of the individual's context will be required (as in NLP) in addition to the context model for the current conversation.


6.1.8 Natural language processing (NLP) and AI in medical care

There are four primary areas where natural language processing (NLP) improves the delivery of health care [21]:

1. NLP improves EHR data usability: An NLP EHR interface can make it easier for clinicians to access patient information. The interface is organized into sections including (a) words describing patients' concerns during an encounter; (b) information relating to those words (e.g., all mentions of fatigue would show on a timeline with a note about the word); and (c) buried diagnostic data that the interface can surface.
2. NLP enables predictive analytics: NLP enables predictive analytics to address population health concerns. For example, health professionals can use NLP to monitor individuals showing psychological risk factors on social media; a 2018 study [22] showed a 70% suicide prediction rate.
3. NLP boosts phenotyping capabilities: NLP enables analysts and clinicians to extract and analyze unstructured data such as follow-up appointments, vitals, charges, orders, encounters, and symptoms, which enables the creation of phenotypes for patient groups. NLP also empowers analysts to extract pathology reports, medications, and genetic data to answer complex and specific questions.
4. NLP enables health system quality improvement: NLP automates and accelerates the collection of data for the federal government and associated agencies, which require all hospitals and nursing homes to report specific outcome metrics.

Healthcare organizations are increasingly using NLP to get at the simpler diagnostic entities (e.g., "chest pain"), and major tech companies like Amazon are developing health-related, clinical NLP tools [23]. Many open-source tools are also available at no cost for classification, to find phrases, and to look for contextual information that provides clues about family history. But to maximize NLP's potential, healthcare-specific vendor systems must develop programs that integrate into existing workflows.
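As a concrete, if simplified, illustration of the extraction tasks described in points 1 and 3 above, the following sketch pulls symptom mentions with simple negation handling from a free-text note. The symptom list and negation cues are invented placeholders; production systems map text to clinical vocabularies such as SNOMED CT and use far more robust context detection.

```python
import re

# Tiny illustrative term list; real systems use full clinical vocabularies.
SYMPTOMS = ["chest pain", "fatigue", "shortness of breath"]
NEGATIONS = re.compile(r"\b(no|denies|without)\b", re.IGNORECASE)

def extract_symptoms(note: str):
    """Return (symptom, negated) pairs found in a free-text note."""
    findings = []
    for sentence in re.split(r"[.;\n]", note):
        for term in SYMPTOMS:
            if term in sentence.lower():
                findings.append((term, bool(NEGATIONS.search(sentence))))
    return findings

note = "Patient reports fatigue over two weeks. Denies chest pain."
print(extract_symptoms(note))
# [('fatigue', False), ('chest pain', True)]
```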

6.1.9 Expert systems and AI in medical care

Expert or knowledge-based systems are the most frequent type of AI medical (AIM) system in routine clinical use [24]. They contain medical knowledge, usually about a very explicitly defined task, and they can reason with data from individual patients to come up with reasonable conclusions. Although there are many variations, the knowledge in an expert system is typically represented in the form of a set of rules [25]. Expert systems are found most frequently in clinical laboratories and educational settings, in clinical surveillance, and in data-rich clinical environments like intensive care units [26].

There are many types of clinical tasks to which expert systems can be applied. In real-time situations, an expert system attached to a monitor can warn of changes in a patient's condition. In less acute circumstances, an expert system can scan laboratory results or drug orders and send reminders or warnings through an email system. In a more complicated case scenario, or if the person making the diagnosis is simply inexperienced, an expert system can help come up with likely diagnoses based on patient data [27].

Laboratory expert systems usually do not intrude into clinical practice. Instead, they are embedded within the process of care, and clinicians working with patients do not need to interact with them directly. For the ordering clinician, the system can print a report with a diagnostic hypothesis for consideration, but it does not assume responsibility ("garbage in, garbage out") for information gathering, examination, assessment, and treatment. Although there is much practical use of expert systems in routine clinical settings, at present machine learning systems still seem to be used in a more experimental way. Given a set of clinical cases, a machine learning system will produce a systematic description of those clinical features that uniquely characterize the clinical conditions.
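The rule-based reasoning described above can be sketched in a few lines. In this hedged example the rules, thresholds, and messages are invented for illustration only and are not clinical guidance; a production system would draw its rules from curated knowledge bases and route its output into the EHR rather than printing it.

```python
# Each rule pairs a condition over patient data with a suggested message.
RULES = [
    (lambda p: p["creatinine"] > 1.5 and "metformin" in p["medications"],
     "Review metformin dose: elevated creatinine."),
    (lambda p: p["potassium"] > 5.5,
     "Hyperkalemia flag: confirm result and review potassium-sparing drugs."),
]

def run_expert_system(patient):
    """Fire every rule whose condition holds for this patient's data."""
    return [message for condition, message in RULES if condition(patient)]

patient = {
    "creatinine": 1.8,
    "potassium": 4.2,
    "medications": ["metformin", "lisinopril"],
}
print(run_expert_system(patient))
# ['Review metformin dose: elevated creatinine.']
```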

6.1.10 Robotics and AI in medical care

During 2018, more than 5000 surgical procedures, including orthopedic, ophthalmologic, neurologic, and dental procedures, were conducted by AI-assisted robots [28]. The first medical robotic application appeared in 1985, with a robotic surgical arm assisting in a neurosurgical biopsy. The first fully FDA-approved system for laparoscopic surgery (known as the da Vinci surgical system) emerged 15 years later, giving surgeons the ability to control surgical instruments indirectly via a console [29].

Robotics serves mostly as a high-tech surgical assistant, helping doctors perform minimally invasive surgeries, especially in hard-to-reach or microscopic areas. Most of these systems, classified as robotically-assisted surgical (RAS) devices by the FDA, enable surgeons to use a console to operate surgical arms, cameras, and other instruments that perform the procedure. RAS systems can result in fewer and smaller incisions, lowering the likelihood of blood loss and infection, and often translate into less pain and fewer complications for patients. RAS systems are being used increasingly in orthopedic surgery, particularly in knee and hip procedures.

Another interesting development is remotely performed robotic surgery, which could be particularly beneficial for use cases such as battlefield medical treatment and even long space exploration missions. Additionally, artificial limbs and rehabilitation robots are being introduced in various forms. Robots are also expanding into diagnosis, from microscopic bots that travel inside the human body to robots that diagnose diseases, detect abnormalities, or identify potentially at-risk patients (see Chatbots and AI in Surgical/Robotics below, page 206, and Fig. 6-2 [30]). One such procedure, capsule endoscopy, FDA-approved and in use since 2001, involves putting a tiny camera inside a pill-sized casing. Patients swallow the "pill," and while it makes its way through the GI tract, the camera takes images that doctors can use to determine whether there are abnormalities [31].

Medical robotics technology is here to stay, with continuing advances in the field. There are, however, challenges to be overcome for these technologies to impact patient care over the long run. Complex and expensive R&D, and factors such as regulations, pricing, and training for medical professionals, will all affect the evolution of robotics. Emotional and ethical considerations in the field will be a factor as well [32].

6.1.11 Population health (demographics and epidemiology) and AI in medical care

The concept of population health has become a high priority in the delivery of medical care and, thus, is on the radar screen of industry. BaseHealth of Sunnyvale, CA, an industry innovator in predictive analysis of health and disease risk and in interventional management, has established an ongoing partnership with one of the country's largest health systems, Banner Health. It has implemented its precision insights platform to enable Banner Health to identify both rising health risks within populations and actionable clinical interventions. This system allows clinicians and health managers to form customized, proactive care plans for their plan members, designed to intervene in identified risk factors.

BaseHealth explored categories of risk and cost for 100,000 Banner Health members and also implemented its analytic platform on 50,000 of the network's Medicare Advantage members. The platform incorporates data on health indicators for more than 42 common chronic and acute diseases, in particular identifying members transitioning from low to moderate to high risk for chronic and acute conditions, with more information being added on an ongoing basis. "This gives us a much better way to target the right patient with the right intervention for their risk. It provides us with a more proactive approach to healthcare by alerting us to the specific factors driving disease risk. Then, we can plan and implement clinical and health management efforts," said Michael Parris, Senior Director for Business Intelligence and Analytics at Banner Health.

This partnership between industry and a medical delivery system represents a forward-thinking, scientific approach to population health and the management of rising risk. Besides identifying who is at risk, this analytic approach also tells us who can avoid predisposed conditions with specific clinical interventions [33].

6.1.12 Precision medicine/health (personalized health) and AI in medical care

Precision medicine targets medications and treatments based on a patient's medical history, genetic makeup, and data recorded by wearable devices. Corporations, universities, doctors, and government-funded research collectives are using AI to develop precision treatments for complex diseases. Their goal is to gain insight from increasingly massive datasets into what makes patients healthy at the individual level. With those insights, new drugs could be developed, new uses found for old drugs, personalized drug combinations suggested, and disease-risk predictions refined [34].

Dr. Bertalan Meskó, director of the Medical Futurist Institute, has suggested that "there is no precision medicine without AI." Although forward-looking, his point acknowledges that without AI analysis, patient data will remain severely untapped [35]. Through the application of AI and machine learning to multiple data sources such as genetic data, EHRs, sensor data, and environmental and lifestyle data, researchers are taking the initial steps toward developing personalized treatments for diseases like cancer, depression, and more.

A National Institutes of Health (NIH) research program plans to collect data on 1 million patients to study precision medicine. It began enrolling participants in May 2018 to create a massive patient information database that researchers can analyze with various methods, including AI, to develop precision treatments. Eric Topol, a geneticist, cardiologist, and director of the Scripps Research Translational Institute, stated, "Much more can be gleaned, many more strides can be made once we start working with this multi-modal data." AI-based diagnostic tools, such as the FDA-approved imaging tool [36] for diagnosing diabetic eye disease, have already entered hospitals, but AI-based treatments are still at the foundational stage, Topol says.

Dr. Elizabeth Krakow is using machine learning to develop precision cancer treatments at the Fred Hutchinson Cancer Research Center in Seattle, WA. Dr. Krakow treats and studies leukemia patients who have relapsed after stem cell transplant. "Past treatments, the current complexity of the disease, side effects—all that info needs to be integrated to intelligently choose the new treatment," says Krakow. Dr. Krakow and her team assembled the medical data of 350 relapsed patients, 1000 pages per patient, and built a machine-learning algorithm to predict the best treatment sequence for any patient at any point in time. Her study will enable future work by creating a gold standard that accounts for the sequential nature of cancer treatment, which clinical trials have failed to establish [37].

6.1.13 Healthcare analytics and AI in medical care

Transforming big data into useful clinical information is the work of the various forms of healthcare analytics: descriptive, diagnostic, predictive, and prescriptive analytics [38] (see also Chapter 4, page 99). Academic research suggests that the quality and efficiency of health care can improve by 15%-20% by transforming big data into analytics [39]. There are many analytical tools for working with the data, covering analysis, abstraction, processing, integration, management, coordination, and storage. The analytic results can be "visualized" and presented in pictorial or graphical format for understanding complex data and for better decision making, and can then be used to understand patterns and correlations in the data.

Potential application areas include fraud detection, epidemic spread prediction, omics, clinical outcomes, medical device design, and the insurance industry. The data can be applied in almost all areas of personalized patient care, manufacturing, pharmaceutical development, and so on [40]. The application of big data is widely adopted in personalized healthcare, which offers an individual-centric approach [41]. Application to "omics" enables the realization of strategies for diseases and increases the specificity of medical treatments (e.g., "precision medicine") [42]. Healthcare insurance companies and payers are using big data in underwriting, fraud detection, and claims management. Big data implementation allows a wider set of device materials, delivery methods, tissue interactions, and anatomical configurations to be evaluated, and it is used during all phases of pharmaceutical development, particularly drug discovery [43]. Specifically, data-derived insights will prompt suitable updates of diagnostic assistance, clinical guidelines, and patient triage to permit more specific and tailored treatment that advances medical outcomes for patients [44].

The most challenging issues for big data and healthcare analytics are data privacy, data leakage, data security, the efficient handling of large volumes of medical imaging data, and information confidentiality. Misuse of health data or failure to safeguard healthcare information must also be addressed, as must understanding unstructured clinical notes in the right context and extracting the potentially useful information they contain [45].
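To make the predictive branch of these analytics concrete, here is a minimal, hedged sketch that fits a logistic regression to an entirely synthetic cohort. The features, the 30-day readmission label, and the coefficients used to simulate the data are all invented; the point is only to show the shape of a predictive-analytics workflow with scikit-learn, not a validated clinical model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic cohort: columns are [age, prior_admissions, num_medications].
X = np.column_stack([
    rng.normal(65, 12, 1000),
    rng.poisson(1.0, 1000),
    rng.poisson(6.0, 1000),
])
# Synthetic 30-day readmission label loosely tied to the features.
risk = 0.03 * (X[:, 0] - 65) + 0.6 * X[:, 1] + 0.1 * X[:, 2] - 1.5
y = (rng.random(1000) < 1 / (1 + np.exp(-risk))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 2))
```

In practice such a model would be evaluated with calibration and discrimination metrics on real, governed data before it informed any care decision.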

6.1.14 Preventive health and AI in medical care

The health care system has always focused primarily on reacting to disease rather than preventing it, treating people after they become patients. The exploding costs of disease treatment, particularly chronic and late-stage diseases, have led to a renewed focus on disease prevention. The goal of preventive health care is to assist reactive medicine by preempting disease through preventive measures and early detection. Though this is not a new concept [46], modern-day technological advances and AI have created a unique opportunity to combine preventive medicine with personalized data. Utilizing precision health, the practice of personalized health can change the way society perceives health care, so that individuals feel empowered to protect themselves and reduce their risk of disease.

Disease risk has traditionally been evaluated based on patient age, family history, and, more recently, genetic screening. Genome-wide association studies have allowed for a more in-depth exploration of the genotypic landscape, allowing risk assessment to advance beyond monogenic disorders to complex diseases such as diabetes and heart disease [47]. However, many commonly identified genetic variants have only modest effects on disease. Thus, the "exposome," the nongenetic exposures that affect human health and disease, must not be ignored.

The effects and benefits of disease risk prediction and monitoring must be followed with well-designed clinical trials to determine whether there is a true benefit in outcomes. However, the data can be used regardless of the outcome to further our understanding of the disease. Precision and preventive health must be just that and must be applied precisely: not all diseases will benefit from the same degree of monitoring, and not all patients will be monitored the same way for a given disease, given their differing risk [48].

6.1.15 Public health and AI in medical care

Public health researchers collect data from many sources and analyze the incidence and prevalence of different health conditions and related risk factors. AI presents an accurate picture [49] of population health by extracting health and non-health data at different levels to coordinate and integrate information about populations and communities, adding to the evidence about the epidemiology and control of chronic diseases.

Recent AI methodological developments enable multi-level modeling to combine individual-level data with sociomarkers, measurable indicators of social conditions at the group level, to improve disease surveillance, disease prediction, and the implementation and evaluation of population health interventions. Many of these sociomarkers shape the health and well-being of individuals and communities, with roots outside the conventional healthcare system.

The internet and AI have expanded public health research beyond its traditional realm [50]. Surveillance methods collecting data from clinical systems and registries are complemented by tracking Internet-based health indicators. Advances in intelligent web-based applications, online smart devices, and social media platforms are assisting public health practitioners and researchers in disease surveillance, epidemic detection, behavior monitoring, and public health communication and education.

AI is currently used to process personalized data, to elicit patient preferences, and to help patients (and families) participate in the care process. This participation allows physicians to provide high-quality, efficient, personalized care by tailoring "generic" therapy plans and connecting patients with information beyond that available within their care setting [51]. Clinicians and public health practitioners use this AI-based technology to deliver tailored therapy or interventions based on the best evidence to maintain high-quality patient care and create healthier communities.

Patient complexity is increasing, with the average life expectancy in the US on the decline (Table 6-1) [52]. Baby boomers are aging (the 65+ age group will reach 20% of the population by 2029), and multi-morbidity (see Comorbidity below, page 242) affects 60% of this population, a level associated with over twice as many patient-physician encounters. Social and behavioral contexts are critical in the management of these complex patients, and as such, these factors become crucial components of technology-based solutions.

Table 6-1 Life expectancy in the United States compared to other industrial countries. Data from National Center for Health Statistics; 2016.

Despite some limitations, AI tools and techniques, still in their infancy, already provide substantial benefits by offering in-depth knowledge of individuals' health and helping to predict population health risks. Their use in medicine and public health is likely to increase substantially in the near future.

6.1.16 Access and availability and AI in medical care

Adequately integrated and applied, AI could make health care more accessible to underserved communities while lowering costs, both sorely needed in the United States. The U.S. ranks poorly on many health metrics despite an average annual health care cost of $10,739 per person [53]. AI systems could relieve overworked doctors and reduce the risk of medical errors that may kill tens of thousands, if not hundreds of thousands, of U.S. patients each year [54]. Yet critics point out that all such promise could vanish if AI tramples patient privacy rights, overlooks biases and limitations, or fails to provide services that improve health outcomes for most people.

To make the most of AI's predictions in health care, humans must remain involved in decisions that can have health and financial consequences. Because AI systems lack human intelligence, they can make predictions that could prove harmful if blindly followed by physicians and hospitals. "If you follow the model blindly," says Kenneth Jung, a research scientist at the Stanford Center for Biomedical Informatics Research, "then you're hosed. Because the model is saying: 'Oh, this kid with asthma came in and they got pneumonia, but we don't need to worry about them, and we're sending them home with some antibiotics.'"

Just who stands to benefit most from AI health care services isn't exactly clear. There are already health care disparities: according to the World Bank and the World Health Organization, half of the globe's population lacks access [55] to crucial health care services, and nearly 100 million people are pushed into extreme poverty by health care expenses. AI could either improve these inequalities or make them worse. Accenture, a consulting firm, predicts that top AI applications could save the U.S. economy $150 billion per year by 2026. But it is unclear whether patients and health care systems supplemented by taxpayer dollars would benefit, or whether more money might simply flow to the tech companies, health care providers, and insurers. "The question of who is going to drive this and who is going to pay for this is an important question," says Isaac Kohane, a biomedical informatics researcher at Harvard Medical School.

An AI assistant may not sound as exciting as an AI doctor, but it could help free up physicians to spend more time with their patients and improve the quality of care. Family physicians frequently spend more than half of their working days on recording in electronic health records, a main factor behind physical and emotional burnout, which has dire consequences [56], including even patient deaths. Ironically, EHRs were supposed to improve medical care and cut costs; now many experts view them, and AI, more cautiously.

6.2 Pharmaceutical and biopharmaceutical care

6.2.1 Big data analytics and AI in pharmaceutical care

Healthcare databases are being used more and more to support drug discovery [57]. Large repositories of data traditionally used for pharmacovigilance include spontaneous reporting system (SRS) databases and networks of healthcare databases that contain medical records or medical claims. The gold standard for adverse drug reaction (ADR) detection remains the spontaneous suspected ADR reporting system [58]. The Food and Drug Administration (FDA) Adverse Event Reporting System (FAERS) contains publicly available information on adverse events and medication errors submitted to the FDA.

It is essential to highlight the role of healthcare database networks as big data applied to pharmacovigilance. Such networks reflect the attributes of big data in routine pharmacovigilance, but they are a recent development in the field. Database networks are commonly made up of EMRs and claims databases. They could also be linked to genomic data, biological specimens held in biobanks, or social media. The growing use of healthcare database networks has led to an expanding range of analytic methodologies to harness the large volumes of heterogeneous data, and these networks are developing in more than just the building of distributed data networks for analysis.

The availability of large amounts of healthcare data, together with increasingly powerful tools to analyze it, is an opportunity to study drug use and safety on ever wider scales and in greater detail. The combined use of different types of databases in a distributed database network augments the power and potential of drug utilization and safety studies. Nevertheless, more pharmacovigilance activity does not necessarily mean more effective signal detection and strengthening [59].
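Signal detection over SRS data of the kind just described often starts from simple disproportionality statistics. The following hedged sketch computes a reporting odds ratio (ROR) and its conventional 95% confidence interval from a 2x2 table of report counts; the counts themselves are invented and do not come from FAERS.

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """2x2 contingency table of spontaneous reports:
       a: drug & event, b: drug & other events,
       c: other drugs & event, d: other drugs & other events."""
    ror = (a / b) / (c / d)
    # 95% confidence interval on the log scale (standard asymptotic formula).
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    low = math.exp(math.log(ror) - 1.96 * se)
    high = math.exp(math.log(ror) + 1.96 * se)
    return ror, (low, high)

# Invented counts: 40 reports pair the drug of interest with the adverse event.
ror, ci = reporting_odds_ratio(a=40, b=960, c=200, d=19800)
print(f"ROR = {ror:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```

A disproportionality signal like this is only a prompt for clinical and epidemiological review, not evidence of causation, which is why the section above stresses combining databases and methods.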

6.2.2 Health information and records (EHR) and AI in pharmaceutical care

Data integration is a process that consists of retrieving, cleaning, and organizing data, usually obtained from many different sources. Critically important information exists in electronic health records (EHRs) and other sources of medical data. Medical information obtained from the patient's EHR and imaging data can sometimes be a key factor in the success of projects in the pharmaceutical area. All these datasets need to be integrated and organized to become useful for a given objective.

Mining EHRs is a process that can also provide useful information for drug discovery, drug repurposing, and drug safety research. Given the current focus on evidence-driven medicine, EHRs will become one of the most valuable resources hospitals and health institutions have to manage and explore, with the help of the biotechnology industry and academia [60]. Both structured and unstructured data in EHRs can be mined, the latter requiring the use of natural language processing technologies that are just now coming of age. Future developments in pharmacogenomics will be strongly dependent on the ability of companies to integrate the information in EHRs with the genetic information of patients, which will become progressively more common.

Big data, artificial intelligence, and machine learning will become instrumental in all future biotechnology research. More researchers in biotechnology will have to become aware of the methods required to deal with large amounts of data and will have to include in their research teams people with the ability to integrate, organize, and explore these data. Researchers specialized in bioinformatics and EHR data will become vital members of any biotechnology research team.
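Since the section above turns on integrating EHR extracts with other sources such as genomic panels, a minimal sketch of that step may help. The tables, column names, and the pharmacogenomic variant field below are invented for illustration; real integration pipelines also handle identity resolution, coding systems, and missing data far more carefully.

```python
import pandas as pd

# Invented toy tables standing in for an EHR extract and a pharmacogenomic panel.
ehr = pd.DataFrame({
    "patient_id": ["p1", "p2", "p3"],
    "diagnosis": ["T2 diabetes", "hypertension", "T2 diabetes"],
    "on_metformin": [True, False, True],
})
genomics = pd.DataFrame({
    "patient_id": ["p1", "p3"],
    "slc22a1_variant": ["reduced-function", "normal"],
})

# Integrate the sources on the shared identifier; patients without a panel keep NaN.
cohort = ehr.merge(genomics, on="patient_id", how="left")
print(cohort[cohort["on_metformin"]])
```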

6.2.3 Research/clinical trials and AI in pharmaceutical care

AI-based research models are helping clinical trial design and patient recruitment, and AI-based monitoring systems aim to increase study drug adherence and decrease dropout rates. "AI is not a magic bullet and is very much a work in progress, yet it holds much promise for the future of healthcare and drug development" [61]. AI can potentially boost the success rate of clinical trials by:

• Efficiently measuring biomarkers that reflect the effectiveness of the drug being tested;
• Identifying and characterizing patient subpopulations best suited for specific drugs. Less than a third of all phase II compounds advance to phase III, and 1 in 3 phase III trials fail, not because the drug is ineffective or dangerous, but because the trial lacks enough patients or the right kinds of patients.

Start-ups, large corporations, regulatory bodies, and governments are all exploring and driving the use of AI for improving clinical trial design. IBM's Stefan Harrer says, "What we see at this point are predominantly early-stage, proof-of-concept, and feasibility pilot studies demonstrating the high potential of numerous AI techniques for improving the performance of clinical trials." The authors also identify several areas showing the most real-world promise of AI for patients. For example:

• AI-enabled systems might allow patients more access to and control over their data;
• Coaching via AI-based apps could occur before and during trials;
• AI could monitor individual patients' adherence to protocols continuously in real time;
• AI techniques could help guide patients to trials of which they may not have been aware;
• In particular, Harrer says, the use of AI in precision medicine is promising, such as applying technology to advance how efficiently and accurately professionals can diagnose, treat, and manage neurological diseases. "AI can have a profound impact on improving patient monitoring before and during neurological trials," he says.

Potential implications for pharma were also evaluated, including:

• Computer vision algorithms that could potentially pinpoint relevant patient populations through a range of inputs, from handwritten forms to digital medical imagery;
• Applications of AI analysis to failed clinical trial data to uncover insights for future trial design;
• The use of AI capabilities such as machine learning (ML), deep learning (DL), and natural language processing (NLP) for correlating large and diverse data sets such as electronic health records, medical literature, and trial databases to help pharma improve trial design, patient-trial matching, and recruiting, as well as for monitoring patients during trials.

The authors also identified several important takeaways for researchers:

• "Health AI" is a growing field connecting medicine, pharma, data science, and engineering;
• The next generation of health-related AI experts will need a broad array of knowledge in analytics, algorithm coding, and technology integration;
• Ongoing work is needed to assess data privacy, security, and accessibility, as well as the ethics of applying AI techniques to sensitive medical information.

AI methods have been applied to clinical trials for only the past 5-8 years. Thus, it will be several years into a typical 10- to 15-year drug-development cycle before AI's impact can be accurately assessed. During those years, rigorous research and development will be necessary to ensure the viability of these innovations, Harrer says. "Major further work is necessary before the AI demonstrated in pilot studies can be integrated into clinical trial design," he says. "Any breach of a research protocol or premature setting of unreasonable expectations may lead to an undermining of the trust and, ultimately, the success of AI in the clinical sector" [62].

An acute need for AI intervention in drug trials is being felt with the fast-track development of vaccines for the COVID-19 pandemic. The U.S. government has initiated a program, "Operation Warp Speed," to move vaccine trials at a much faster pace than traditionally used in such large double-blind, placebo-controlled studies. Little information has been provided about the trial protocols, but it is more than likely (or should be) that AI is having a significant influence on the process.
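As a small illustration of the patient-trial matching idea raised above, the following sketch applies structured inclusion/exclusion criteria to candidate records. The criteria, fields, and records are invented; real matching systems typically combine such structured filters with NLP over clinical notes and trial protocols.

```python
# Invented eligibility criteria and candidate records for illustration only.
CRITERIA = {"min_age": 18, "max_age": 75, "diagnosis": "type 2 diabetes",
            "exclude_medications": {"insulin"}}

candidates = [
    {"id": "c1", "age": 54, "diagnosis": "type 2 diabetes", "medications": {"metformin"}},
    {"id": "c2", "age": 61, "diagnosis": "type 2 diabetes", "medications": {"insulin"}},
    {"id": "c3", "age": 80, "diagnosis": "type 2 diabetes", "medications": set()},
]

def eligible(person, criteria):
    """Apply structured inclusion/exclusion criteria to one candidate record."""
    return (criteria["min_age"] <= person["age"] <= criteria["max_age"]
            and person["diagnosis"] == criteria["diagnosis"]
            and not (person["medications"] & criteria["exclude_medications"]))

print([p["id"] for p in candidates if eligible(p, CRITERIA)])
# ['c1']
```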

6.2.4 Blockchain and AI in pharmaceutical care

The pharmaceutical sector helps introduce new drugs into the market and assists in ensuring the safety and validity of drugs sold to the end consumer. It also aids in the evaluation and processing of safe medications, which ultimately assists in quicker patient recovery. Drug companies face challenges in tracking their products and face severe risks if counterfeiters compromise their production or introduce fake drugs into the system. During the production and research and development (R&D) of these drugs, blockchain could become a best-fit technology for evaluating, monitoring, and ensuring the production processes of potential drugs. For the effective delivery of reliable and authentic medicines to patients, there is a need for monitoring, evaluating, and ensuring the overall process of pharmaceutical drug development and supply through the use of AI digital technologies.

A digital drug control system (DDCS) [63] could be a solution to the prevention of counterfeit drugs. Using a blockchain-based DDCS, major pharmaceutical companies (Sanofi, Pfizer, and Amgen) launched a joint pilot project for the inspection and evaluation of new drugs [64]. Using blockchain, it would be possible to track the production and location of drugs at any given time, improve the traceability of falsified drugs [65] and the security of the drug supply system [66], and guarantee the quality of drugs supplied to consumers or end-users [63].

Blockchain can also record and report the distribution of pharmaceuticals. This arrangement speeds up data processing, ensures the transparency of document distribution, and reduces the likelihood of loss, damage, or falsification of documents. The technology itself enforces these properties: once created, a block cannot be altered or deleted, so blockchain helps guarantee that data cannot be tampered with [67].
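A hedged sketch of how a blockchain-style ledger can expose a falsified supply-chain record follows. The lot, holders, and hashing scheme are illustrative only and are not drawn from the DDCS pilot mentioned above; the point is simply that recomputing the hash links reveals any altered custody entry.

```python
import hashlib
import json

def entry_hash(entry, prev_hash):
    """Hash a custody entry together with the previous entry's hash."""
    body = {k: v for k, v in entry.items() if k != "hash"}
    return hashlib.sha256(json.dumps([prev_hash, body], sort_keys=True).encode()).hexdigest()

def verify_chain(entries):
    """Recompute each custody entry's hash; tampering breaks the first affected link."""
    prev = "0" * 64
    for entry in entries:
        if entry["hash"] != entry_hash(entry, prev):
            return False
        prev = entry["hash"]
    return True

# Invented chain of custody for one drug lot, from manufacturer to pharmacy.
chain, prev = [], "0" * 64
for step in [{"lot": "A123", "holder": "manufacturer"},
             {"lot": "A123", "holder": "distributor"},
             {"lot": "A123", "holder": "pharmacy"}]:
    step["hash"] = entry_hash(step, prev)
    chain.append(step)
    prev = step["hash"]

print(verify_chain(chain))          # True
chain[1]["holder"] = "unknown"      # simulate a falsified custody record
print(verify_chain(chain))          # False
```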

6.2.5 Internet of Things (IoT) and AI in pharmaceutical care

Digital health innovations are being produced by the convergence of AI, the Internet of Things (IoT), and cyber-physical systems (CPS) in health care and pharma (IoT and CPS are described in Chapter 3, page 55). CPS creates a dynamic digital map of virtually all things, one that can be analyzed in much more sophisticated ways than a bar code scanning system. CPS examples include self-driving cars, wearables for digital monitoring of heart arrhythmias (AliveCor [68]), industrial AI-powered robots in smart factories, and health robots delivering home care services to disabled persons. Another exciting prospect of digital health powered by AI, IoT, and CPS is remote phenotypic (genetic) data characterization of pharmaceutical outcomes in clinical trials meaningful to patients. The IoT could also bring about pharmacy and health services innovation for rural or remote communities with limited access to medical product information [69].

These IoT and CPS innovations have led to what is being referred to as Pharma 4.0. The title comes from the four evolutionary stages of manufacturing, the fourth of which is called Industry 4.0, with a complementary step called Health 4.0 [70]. The concept of Pharma 4.0 integrates the manufacturing control process for drugs with the entire manufacturing process as well as with quality control and business functions. "Vertical integration, manufacturing execution, and enterprise resource planning systems allow real-time, data-driven decisions. Horizontal integration of laboratory systems with the manufacturing process, equipment, and facility systems allows feedforward and backward controls." A primary goal of Pharma 4.0 is making pharmaceutical production safer and more efficient along the whole value chain [71].

6.2.6 Telehealth and AI in pharmaceutical care

Typical data sources in telehealth settings include medical sensor devices such as blood pressure meters and body weight scales. Patients can also enter data regarding their subjective wellbeing and their medication intake [72]. To monitor adherence to medications, a commonly used approach is smartphone apps, where "app" denotes software specifically designed for mobile devices like tablets or smartphones [73].

In hospitals and other inpatient settings, longer lists of monitored drugs may exist. In these scenarios, answering prompts on a touchscreen can provide a more comfortable alternative to physically interacting with every single medication. The prescribed medications can be displayed on screen together with default values for the administration of a particular drug, e.g., 1-3 times daily, 100-200 mg, single dose. Rulesets for medication intake can be applied in telehealth settings through reminders or notification messages, which are triggered by events derived from automated analysis of biosignals [74].
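The ruleset-driven reminders described above can be sketched as a small event check. The prescription, schedule, grace window, and confirmation log below are invented; a deployed telehealth service would trigger such checks from device events or a scheduler and push notifications to the patient's app rather than printing them.

```python
from datetime import datetime, timedelta

# Illustrative prescription ruleset: drug, dose times, and a grace window.
PRESCRIPTIONS = [
    {"drug": "enalapril 10 mg", "times": ["08:00", "20:00"], "grace_minutes": 60},
]

def due_reminders(taken_log, now):
    """Return reminder messages for doses not confirmed within the grace window."""
    reminders = []
    for rx in PRESCRIPTIONS:
        for t in rx["times"]:
            due = now.replace(hour=int(t[:2]), minute=int(t[3:]), second=0, microsecond=0)
            overdue = now - due > timedelta(minutes=rx["grace_minutes"])
            confirmed = (rx["drug"], t) in taken_log
            if overdue and not confirmed:
                reminders.append(f"Reminder: {rx['drug']} dose scheduled at {t} not confirmed.")
    return reminders

log = {("enalapril 10 mg", "08:00")}      # morning dose confirmed via the app
print(due_reminders(log, datetime(2020, 5, 1, 21, 30)))
```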

6.2.7 Chatbots and AI in pharmaceutical care

The benefit of chatbots to patients is that they provide advice and information for a healthy life, interacting with patients on any personal health care query without the patient needing to visit a health care professional or hospital for a small problem. A voice-to-text (natural language processing) analysis bot engages patients in conversation about their medical issues and answers their queries [75].

PharmaBot, developed in 2015 [76], helps clinicians securely file prescription errors into the hospital's system via their smartphones. The bot should, in theory, enable greater security and save time for staff. Medxnote [77], another bot, plugs directly into a hospital's electronic medical records system. If a clinical pharmacist spots an error, they open the app, tell the bot that something is wrong, describe the error, and take a photo of the patient's chart. These data are then logged into the system, on the spot, in a couple of seconds.

6.2.8 Natural language processing (NLP) and AI in pharmaceutical care Researchers have focused on developing and improving the use of natural language processing to recognize and extract information from medical text sources (e.g., detection of medication-related information [78] or patient safety events [79] from patient records; drug–drug interactions from the biomedical literature [80]; or disease information from emergency department free-text reports) [81]. Natural language processing techniques have also been applied to extracting information on adverse drug reactions from the vast amounts of unstructured data available from the discussion and exchange of health-related information between health consumers on social media [82]. The results demonstrate that it is feasible to apply AI to automate safety case processing. Machine-learning algorithms were able to successfully train solely based on AI database content (i.e., no source document annotations), and the multiple combined accuracy metrics allowed adjudication of the different vendor algorithms [83].
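As a toy illustration of the kind of extraction described above, the sketch below pulls drug names and doses out of a free-text note with a small lexicon and a regular expression. Production systems use trained clinical NLP models and curated vocabularies (e.g., RxNorm) rather than this handful of patterns; the lexicon and note are invented.

```python
import re

# Illustrative lexicon only; real systems use curated drug vocabularies.
DRUG_LEXICON = {"warfarin", "aspirin", "metformin"}

DOSE_PATTERN = re.compile(r"\b(\d+(?:\.\d+)?)\s*(mg|mcg|g)\b", re.IGNORECASE)

def extract_medication_mentions(note):
    """Return (drug mentions, dose mentions) found in a free-text clinical note."""
    tokens = re.findall(r"[A-Za-z]+", note.lower())
    drugs = [t for t in tokens if t in DRUG_LEXICON]
    doses = ["".join(match) for match in DOSE_PATTERN.findall(note)]
    return drugs, doses

note = "Patient restarted on warfarin 5 mg daily; continue aspirin 81 mg."
print(extract_medication_mentions(note))
# (['warfarin', 'aspirin'], ['5mg', '81mg'])
```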


6.2.9 Expert systems and AI in pharmaceutical care The introduction of a technology-based expert system to identify drug-related problems marks a new generation in pharmacy. It is based on patient data captured from the pharmacy system and other external data systems. Combined with workflow robotics, this makes for less work for the pharmacist, who can then shoulder more responsibility in identifying serious drug-related problems [84]. Overprescribing of antibiotics continues to be a problem in health care. An expert system, TREAT [85], was developed based on a causal probabilistic network to improve antibiotic therapy in hospitalized patients. A system proposed and in use at the University of South Carolina applies user-centered design techniques to infectious disease diagnosis and antibiotic prescribing in intensive care units so as to prevent the overuse of antibiotics.
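For readers unfamiliar with how such systems encode knowledge, here is a minimal rule-based sketch that flags drug-related problems from pharmacy and laboratory data. The rules, drug names, and thresholds are hypothetical illustrations; systems such as TREAT use causal probabilistic networks, which are considerably more sophisticated than these if-then rules.

```python
# Minimal rule-based expert-system sketch for drug-related problem detection.
patient = {
    "medications": ["vancomycin", "ibuprofen"],
    "labs": {"creatinine_mg_dl": 2.1},   # elevated creatinine suggests renal impairment
    "allergies": ["penicillin"],
}

RULES = [
    # (alert description, predicate over the patient record)
    ("Renally cleared antibiotic with elevated creatinine - review dose",
     lambda p: "vancomycin" in p["medications"] and p["labs"].get("creatinine_mg_dl", 0) > 1.5),
    ("NSAID prescribed with renal impairment - consider alternative",
     lambda p: "ibuprofen" in p["medications"] and p["labs"].get("creatinine_mg_dl", 0) > 1.5),
    ("Documented penicillin allergy - check cross-reactivity before beta-lactams",
     lambda p: "penicillin" in p["allergies"]),
]

for description, predicate in RULES:
    if predicate(patient):
        print("ALERT:", description)
```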

6.2.10 Robotics and AI in pharmaceutical care Collaborative robots, or "cobots," are assisting the efficiency of pharmaceutical research, drug production, and quality control. These cobots offer pharma greater reliability, consistency, and precision. Once a cobot is programmed, repetitive tasks are completed with low error. They are ideal for protecting sterile environments from contamination, and many cobots can work 24 hours a day. As a result, a cobot's return on investment occurs within a year in most industries [86]. With the advances in AI, robots are more trustworthy. As a result, doctors and a large number of institutions are now employing robots along with human supervision to carry out activities previously done by humans. The major advantage of AI is that it reduces the time that is needed for drug development, and it reduces the costs that are associated with drug development. It also enhances the returns on investment and may cause a decrease in price for the end-user [87].

6.2.11 Population health (demographics and epidemiology) and AI in pharmaceutical care Pharmacists can play a dramatic role in improving population health through active chronic disease and drug therapy management. Further, by assuring appropriate patient education and compliance in association with medication therapy, pharmacists can not only improve health outcomes but reduce readmissions, improve patient safety, and ultimately reduce healthcare costs. The literature is replete with examples of pharmacist involvement and leadership on patient care teams [88]. Programs that include medication therapy management, disease state management, wellness promotion through initiatives in areas such as smoking cessation, medication management during care transitions, population health research, and the application of pharmacoeconomics are just a few examples of where pharmacists’ opportunities lie.


As a part of a population health initiative, pharmacists must be wholly integrated with a focus on improving medication use, adherence, and outcomes. As medication experts, pharmacists are uniquely qualified to play an important role in population health management. They must take a leadership position in developing new strategies to deliver comprehensive patient care [89].

6.2.12 Precision medicine/health (personalized health) and AI in pharmaceutical care Medication nonadherence remains a substantial public health problem. Between 25% and 50% of patients worldwide do not take their medications as recommended [90]. In the USA, suboptimal adherence has been associated with 125,000 deaths, 10% of hospitalizations, and costs up to $289 billion annually [91]. Medication adherence is a complex and multifaceted behavior, but the adverse public health effects of nonadherence are preventable. Combining the principles of patient-centered care with "big data" and the predictive analytic tools employed by precision medicine, this dynamic approach to health care will allow researchers to identify patients at greatest risk for adherence problems and enable clinicians to deliver adherence interventions to those who need them. While precision medicine and population health take different approaches, there is a need for both. Aspects of precision medicine, such as tailoring therapies or intervention content to specific patients' needs, can be scaled up and applied in a population health model. Innovative approaches are needed in precision medicine and population health to advance the field of medication adherence, including standardizing taxonomy to describe adherence and report adherence-related study results, integrating data sources to develop innovative measurement of adherence, and considering innovative mechanisms (i.e., behavioral economics principles) to incentivize both patients and payers to engage in improving medication adherence [92].
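To make "identifying patients at greatest risk for adherence problems" concrete, the sketch below trains a simple logistic regression on synthetic features and flags a hypothetical patient for outreach. It assumes scikit-learn is available; the features, data, and 0.5 outreach threshold are invented for illustration and are not drawn from any published adherence model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic features per patient: [daily medication count, monthly copay ($), prior refill gaps]
X = np.array([
    [2, 10, 0], [8, 60, 3], [5, 30, 1], [1, 5, 0],
    [9, 80, 4], [3, 15, 0], [7, 45, 2], [4, 25, 1],
])
y = np.array([0, 1, 0, 0, 1, 0, 1, 0])   # 1 = nonadherent in the following quarter

model = LogisticRegression().fit(X, y)

new_patient = np.array([[6, 50, 2]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Estimated nonadherence risk: {risk:.2f}")
if risk > 0.5:   # hypothetical threshold for a pharmacist adherence intervention
    print("Flag for pharmacist adherence intervention.")
```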

6.2.13 Healthcare analytics and AI in pharmaceutical care Pharmacovigilance involves post-marketing monitoring and detection of adverse drug reactions (ADRs) to ensure patient safety [93]. Pharmacovigilance studies use descriptive analytics to focus on identifying associations between adverse drug effects and medications [94]. Big Data and advanced analytics are driving a paradigm shift in the healthcare and pharmaceutical industry with multiple innovations ranging from precision medicine and digital therapeutics to the adoption of accountable and value-based care models. Drug developers are making substantial investments in Big Data and artificial intelligence-driven drug discovery platforms to shorten the process of successfully discovering promising compounds. In addition, Big Data technologies are increasingly being utilized to streamline clinical trials, enabling biopharmaceutical companies to significantly lower costs and accelerate productive trials [95]. According to McKinsey, "Big data and machine learning in pharma and medicine could generate a value of up to $100 billion annually, based on better decision-making, optimized innovation, improved efficiency of research/clinical trials, and new tool creation for physicians, consumers, insurers and regulators" [96]. The growing applications of ML in pharma and medicine suggest a strong potential for the synchronization of data, analysis, and innovation in the future of pharmacy and health care.
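One widely used descriptive technique for the drug–event association mentioned above is disproportionality analysis of spontaneous-report databases. The sketch below computes a reporting odds ratio (ROR) with an approximate 95% confidence interval from a 2 x 2 table of report counts; the counts themselves are made up for illustration.

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """ROR for a 2x2 table of spontaneous reports:
       a: drug of interest & event of interest, b: drug of interest & other events,
       c: other drugs & event of interest,      d: other drugs & other events."""
    ror = (a * d) / (b * c)
    # Approximate 95% confidence interval on the log scale.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(ror) - 1.96 * se)
    upper = math.exp(math.log(ror) + 1.96 * se)
    return ror, (lower, upper)

# Hypothetical counts: 40 reports pair the drug with the adverse event of interest.
print(reporting_odds_ratio(a=40, b=960, c=200, d=49000))
```

An ROR well above 1 with a confidence interval excluding 1 is treated as a signal for further clinical review, not as proof of causation.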

6.2.14 Preventive health and AI in pharmaceutical care The practice of pharmacy and the pharmaceutical industry are evolving, setting as their priority improving the quality of healthcare and driving their efforts towards positive value-based outcomes. Pharmacies are the most accessible and affordable healthcare providers and have the potential to become preventive and health management centers instead of only medication "counters and pourers." AI programs help practices in the areas of diagnosis, treatment protocol development, drug development, personalized medicine, and patient monitoring and care. AI technology can help pharmacies provide more personalized healthcare and offer advice, guidance, and an expanded suite of services (e.g., immunizations, screenings, MTM, disease state management). The primary aim of AI applications in pharmaceutical care is to analyze relationships between prevention or treatment techniques and patient outcomes. The next generation in pharmacy technology is the introduction of an AI technology-based information expert system to identify timely drug-related problems based on patient data captured from the pharmacy system and other external data systems [97].

6.2.15 Public health and AI in pharmaceutical care Total US pharmaceutical expenditures are reported at $1443 per capita [98]. This finding is consistent with the US Department of Health and Human Services calculations that included medications administered in hospitals, physician offices, and other facilities, and showed that 16.7% of total personal health care spending (an estimated $457 billion in 2015) was attributable to pharmaceuticals [99]. No other category of U.S. health-related spending accounts for as much of the costs as pharmaceuticals. More AI support and data management should encourage the push for regulation of drug prices, but the fundamental problem is political will. However, a concern for public health and wellness and public pressure will begin to drive the political process as rising health care costs fuel more public demand. The bill for health care products and services is mostly not paid by the people receiving the care, but collectively by society through insurance and taxes. Thus, the higher spending on health care by 1 person affects public health by taking away money from the rest of society. The United States can reduce the cost of health care. It will take the inclusion of AI systems and a medical profession, health systems, payers, and policymakers dedicated to controlling costs. In many ways, the future of the US health care system is in their hands [100].


6.2.16 Access and availability and AI in pharmaceutical care The role of artificial intelligence in life sciences has a proven track record as a useful tool in pharmaceutical research. Companies are also rushing to capture the transformational impact it can have on their commercial operations, such as sales and marketing. Machine learning platforms can transform the ability of any life science company to mine big data for the right answers. They can inform formulary decisions and clinical guideline development, and separate customers into highly specific segments. This empowers sales teams to personalize their activity to health providers to a greater degree. Materials can be recommended based on the sales rep's previous meetings with a doctor. For example, if a doctor has previously expressed an interest in diabetes drugs, the sales rep will arrive equipped with information on appropriate medications, ensuring that the interaction is personalized and impactful. Machine learning will not replace marketing operations teams and reps because it cannot think on its own. However, companies and sales teams that use machine learning and AI in life sciences like pharmaceuticals will reach markets and patients that others can't [101].

6.3 Hospital care 6.3.1 Big data analytics and AI in hospital care Innovations in AI healthcare technology are streamlining the patient experience, helping hospital staff process millions, if not billions, of data points faster and more efficiently. A health care company, CloudMedX [102], helps hospitals and clinics manage patient data, clinical history, and payment information by using predictive analytics to intervene at critical junctures in the patient care experience. The Cleveland Clinic is using AI to gather information on trillions of administrative and health record data points to streamline patient experiences [103]. Johns Hopkins Hospital recently announced a partnership with GE to improve the efficiency of patient operational flow through predictive AI techniques [104]. A risk prediction company, KenSci, uses big data with artificial intelligence to predict clinical, financial, and operational risk by taking data from existing sources to predict everything from who might get sick to what's driving up hospital costs. It has partnered with big tech and data science companies, including GE, KPMG, Allscripts, and Microsoft [105]. Other major hospitals and AI companies (Mayo, IBM, Google, Amazon) are working with hospital systems to predict ICU transfers, improve clinical workflows, and even pinpoint a patient's risk of hospital-acquired infections. For infections, by using artificial intelligence to mine health data, hospitals can predict and detect sepsis, which ultimately reduces death rates.

6.3.2 Health information and records (EHR) and AI in hospital care Most hospitals are now integrating electronic health records (EHRs). The growth rate of the data exceeds that coming from any other media [106]. However, EHRs store data in a complex structure from different sources, making it difficult to build large datasets. Integrating EHRs requires first creating a dictionary, or applying NLP, to translate the system codes into text readable by physicians.
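The following is a minimal sketch of the kind of code-to-text dictionary this passage describes. The mapping approach is generic; the local code "SYS:MED-00017" is hypothetical, while the ICD-10 and LOINC entries use standard vocabularies for illustration.

```python
# Illustrative dictionary translating EHR system codes into readable text.
# Real integrations map proprietary codes to standard vocabularies (ICD-10, LOINC, RxNorm).
CODE_DICTIONARY = {
    "ICD10:E11.9": "Type 2 diabetes mellitus without complications",
    "LOINC:4548-4": "Hemoglobin A1c",
    "SYS:MED-00017": "Metformin 500 mg oral tablet",   # hypothetical local medication code
}

def translate(record_codes):
    """Replace coded entries in an EHR record with human-readable labels."""
    return {code: CODE_DICTIONARY.get(code, "UNMAPPED CODE") for code in record_codes}

print(translate(["ICD10:E11.9", "LOINC:4548-4", "LOCAL:XY123"]))
```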


The potential utility of gathering all available data from an EHR for research and AI applications is unimaginable. The availability of millions of data points on a single topic could revolutionize descriptive and epidemiological studies as well as train machine and deep learning algorithms to predict clinical situations and help clinicians in clinical decision-making processes. The rapid data collection from EHRs and algorithm evaluations, real-time predictions, and links with clinical recommendations create an AI smart support system that can be implemented in a hospital to facilitate clinical decision-making processes [107]. The availability of large amounts of EHR data, the use of ML, and the high level of performance of new computers suggest the immense power of AI in improving medical and hospital care [108].

6.3.3 Research/clinical trials and AI in hospital care Clinical trials are a valuable tool for creating new treatments. But finding the right volunteer subjects is difficult and can undermine the effectiveness of the studies. Researchers at Cincinnati Children's Hospital Medical Center designed and tested a new computerized solution that uses AI to identify eligible subjects from Electronic Health Records (EHRs) [109]. Compared to manually screening EHRs to identify study candidates, the study showed the system, called the Automated Clinical Trial Eligibility Screener© (ACTES), reduced patient screening time by 34% and improved patient enrollment by 11.1%. The system also improved the number of patients screened by 14.7% and those approached by 11.1%. The system uses natural language processing (NLP) to analyze vast amounts of linguistic data. Machine learning allows computerized systems to learn and evolve from experience automatically, without being explicitly programmed. This AI ability makes it possible for computer programs to process data, extract information, and generate knowledge independently. The system extracts structured information, including patient demographics and clinical assessments, from EHRs. It also identifies unstructured information from clinical notes, including the patients' clinical conditions, symptoms, treatments, and so forth. The extracted information is matched with the study's eligibility requirements to determine a subject's qualifications for a specific clinical trial [110].
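The final matching step can be illustrated with a short sketch: extracted patient facts are compared against structured inclusion and exclusion criteria. This is only a generic illustration of the approach, not the ACTES implementation, and the criteria and patient record are invented.

```python
# Sketch of eligibility matching between extracted patient facts and trial criteria.
patient = {
    "age": 9,
    "conditions": {"asthma"},   # assumed to be extracted from structured fields and clinical notes
}

trial_criteria = {
    "min_age": 6,
    "max_age": 17,
    "required_conditions": {"asthma"},
    "excluded_conditions": {"pregnancy", "cystic fibrosis"},
}

def is_eligible(p, c):
    """True if the patient meets inclusion criteria and has no excluding conditions."""
    in_age_range = c["min_age"] <= p["age"] <= c["max_age"]
    has_required = c["required_conditions"].issubset(p["conditions"])
    has_exclusion = bool(p["conditions"] & c["excluded_conditions"])
    return in_age_range and has_required and not has_exclusion

print("Eligible" if is_eligible(patient, trial_criteria) else "Not eligible")
```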

6.3.4 Blockchain and AI in hospital care The potential for the use of blockchain technology in hospitals has started to be tested in a pilot platform designed to help the Food and Drug Administration's Office of Translational Sciences explore how to use the technology for healthcare data management. The pilot project is currently being implemented at 4 major hospitals. It is using Ethereum (an open-source, public, blockchain-based distributed computing platform and operating system [111]) to manage data access via virtual private networks. The project is built on the InterPlanetary File System (IPFS) to utilize encryption and reduce data duplication via off-chain cloud components with cryptographic algorithms to create user sharing [112]. Blockchains enable decentralized management, which makes them suitable for applications where healthcare stakeholders (e.g., hospitals, patients, payers, etc.) wish to collaborate without the control of a central management intermediary. Blockchains also provide immutable audit trails; they enable a record of data ownership; they ensure the robustness and availability of data; and they increase the security and privacy of data [113].
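To show why such audit trails are tamper-evident, here is a toy hash-chained log in plain Python. It is a conceptual sketch only, not the Ethereum/IPFS pilot described above, and the event records are invented.

```python
import hashlib, json, time

def make_block(record, previous_hash):
    """Create a block whose hash covers the record, timestamp, and previous block's hash."""
    block = {"record": record, "timestamp": time.time(), "previous_hash": previous_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

chain = [make_block({"event": "genesis"}, previous_hash="0" * 64)]
chain.append(make_block({"event": "patient consented to data sharing"}, chain[-1]["hash"]))
chain.append(make_block({"event": "hospital A accessed record"}, chain[-1]["hash"]))

def verify(chain):
    """Recompute each block's hash and check the links; any edit breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False
    return True

print(verify(chain))                                   # True
chain[1]["record"]["event"] = "tampered entry"         # alter an earlier record
print(verify(chain))                                   # False
```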

6.3.5 Internet of Things (IoT) and AI in hospital care [15] The concept of "smart hospitals" describes a new type of hospital that can optimize, redesign, or build new clinical processes and management systems using an AI digitized networking infrastructure of integrated assets. Smart hospitals rely on optimized and automated means, utilizing the Internet of Things and big data, combining connected devices with cloud computing, big data analytics, and AI. The smart hospital involves 3 essential layers: data, insight, and access. Data is collected daily and fed to analytics or machine learning software to derive a "smart" insight. This valuable insight moves a hospital a step further from being just digital. It makes it truly smart. This insight must be accessible to all potential users, including doctors, nurses, and facility personnel, through an interface like a desktop or a mobile device. This enables them to make critical decisions faster. There are 3 areas that any smart hospital addresses: operations, clinical tasks, and patient centricity:
• Efficiency at operations can be achieved by building automation systems, implementing intelligent asset maintenance and management solutions, improving the internal logistics of mobile assets, and improving control over people flow.
• Efficiency in clinical tasks is concerned with ways to improve doctors' and nurses' work efficiency, especially in the emergency, surgery, and radiology areas. Clinical efficiency also involves improving patient outcomes by ensuring patient engagement and monitoring.
• Patient centricity of smart hospitals means improving the patient experience, such as building a smart patient room, which allows voice-based interactive devices such as Amazon Echo with Alexa or tablets, to call nurses or dim the lights.

6.3.6 Telehealth and AI in hospital care [114] Hospitals are using telehealth to improve access and fill gaps in care, provide services 24/7, and expand access to medical specialists. They offer several types of telehealth services to improve access to services and quality of care. Telehealth delivery platforms fall into 2 main categories: provider-to-provider; and direct-to-consumer. One of the most frequent reasons hospitals use telehealth is to extend access to specialty care. Other reasons for embracing telehealth include efficient post-operation follow-up, lower hospital-readmission rates, better medication adherence, and positive care outcomes. Nine of the most frequent uses of telehealth in hospitals include the following services:
1. Pharmacy services;
2. Chronic care management;
3. Telestroke services;
4. Tele-ICU tools;
5. Specialty telemedicine consults;
6. Diagnostic screening for diabetes-related eye disease;
7. Sleep disorders;
8. Telepsychiatry;
9. Opioid-use disorder (OUD).

Telehealth tools for treating patients are a more effective and efficient way to use limited staff and resources. Regional growth and development of telehealth systems can be leveraged across care sites by connecting hospitals, physician offices, diagnostic centers, and long-term care through telehealth networks. Virtual care technology can improve the timing, quality, and impact of care for more patients by eliminating travel and bringing in specialized care as needed.

6.3.7 Chatbots and AI in hospital care Among the top 5 US hospitals (Mayo, Cleveland Clinic, Mass General, Johns Hopkins, and UCLA), the most popular AI application is chatbots. Their uses range from automating physician inquiries and routing physicians to the proper specialist to an array of patient communication services. Mayo Clinic is using a startup technology, Tempus [115], which focuses on developing personalized cancer care using a machine learning platform. This startup partnership also includes such prestigious hospitals as the University of Michigan, the University of Pennsylvania, and Rush University Medical Center. Microsoft introduced its AI digital assistant, Cortana [116], to the Cleveland Clinic to help the medical center "identify potential at-risk patients under ICU care." Cortana, a type of command center first launched in 2014, is now integrated into Cleveland Clinic's hospital system. NVIDIA announced its affiliation with the Massachusetts General Hospital Clinical Data Science Center as a "founding technology partner" [117]. The Center aims to serve as a hub for AI applications in healthcare for the "detection, diagnosis, treatment and management of diseases." Johns Hopkins Hospital teamed up with GE Healthcare Partners [118] to design the Judy Reitz Capacity Command Center, which receives "500 messages per minute." Besides functioning as a chatbot for the hospital, the system integrates data from "14 different Johns Hopkins IT systems" across 22 high-resolution, touch-screen enabled computer monitors. UCLA researchers designed the Virtual Interventional Radiologist (VIR) [119]. This is a chatbot that "automatically communicates with referring clinicians and quickly provides evidence-based answers to frequently asked questions." The ultimate goal is to expand the functionality of the application for "general physicians interfacing with other specialists, such as cardiologists and neurosurgeons."

6.3.8 Natural language processing (NLP) and AI in hospital care One of the biggest challenges in hospital care and administration is interpreting unstructured data (e.g., doctors' notes). One of the greatest strengths of natural language processing (NLP) is its ability to extract meaningful insights from unstructured data. Thus, the use of NLP in hospital care has become a critical ingredient [120]. NLP can analyze unstructured patient data based on clinical criteria, specific keywords, and text patterns. This accentuates the value healthcare NLP offers hospitals by eliminating the burdensome, time-consuming, and thus costly task of clinical staff having to read through medical notes. There are numerous other valuable uses of NLP's ability to extract useful insights from unstructured patient data. It can detect early signs of mental illness in patients by monitoring their social media posts and alerting a mental health professional. NLP accomplishes this through analysis of social media and by performing sentiment analysis and detection [121]. NLP can identify and extract text from clinical documents, such as physician's notes and dictations. This feature allows healthcare organizations to reveal information from clinical notes, such as prescriptions and medical history. It can also indicate missing information that should be included in clinical reports. Natural language understanding (NLU) [122] is a feature of NLP that is used to assist physicians by converting their spoken words, in real time, into understandable dictated notes, along with discrete data elements such as prescriptions, ICD-10 (International Statistical Classification of Diseases and Related Health Problems) codes, procedures, etc. This information is then populated directly into the appropriate section of the medical chart.
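A toy sketch of that last step follows: a dictated sentence is turned into discrete chart elements (ICD-10 codes and a crude prescription mention) by simple keyword lookup. Real NLU systems rely on trained clinical language models; the keyword table and extraction logic here are illustrative only, although the two ICD-10 codes shown are standard.

```python
# Toy illustration of structuring a dictated sentence into discrete chart elements.
ICD10_KEYWORDS = {
    "hypertension": "I10",        # Essential (primary) hypertension
    "type 2 diabetes": "E11.9",   # Type 2 diabetes mellitus without complications
}

def structure_dictation(text):
    """Extract ICD-10 codes and a crude prescription mention from dictated text."""
    text_lower = text.lower()
    codes = [code for phrase, code in ICD10_KEYWORDS.items() if phrase in text_lower]
    prescription = None
    if "start" in text_lower and "mg" in text_lower:
        prescription = text_lower.split("start", 1)[1].strip().rstrip(".")
    return {"icd10": codes, "prescription": prescription}

print(structure_dictation(
    "Patient has hypertension and type 2 diabetes. Start lisinopril 10 mg daily."
))
# {'icd10': ['I10', 'E11.9'], 'prescription': 'lisinopril 10 mg daily'}
```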

6.3.9 Expert systems and AI in hospital care Today's hospitals are becoming "smart hospitals" (see also page 222 above), digitalizing many of their administrative and clinical functions, including the use of diagnostic expert systems. OSHA (the US Dept. of Labor Occupational Safety and Health Administration) includes expert systems among its "eTools," which are "stand-alone," interactive, Web-based training tools that utilize graphical menus, including expert system modules. These modules enable the user to answer questions and receive reliable advice on how OSHA regulations apply to their worksite. eTools do not create new OSHA requirements [123]. An expert system eliminates the need to consult a human expert. In a Clinical Documentation Improvement (CDI) program, by collecting clinical information, examining its knowledge base, and applying rules, an expert system can make documentation widely available throughout a hospital. The expert system approach has already been shown to be capable of providing several enhancements to a CDI program, where its rules are applied to the clinical data that has been collected. By examining the list of a patient's medications and laboratory results, the expert system can suggest additional diagnoses. An expert system cannot be expected to be better than the human expert, but the system provides the ability to automate expertise in the hospital process. This becomes extremely helpful in improving efficiencies and in training new staff [124].

6.3.10 Robotics and AI in hospital care Hospital robots are proving to be very useful, functional, and probably the next horizon in modern medicine. Robots in hospitals are now aiding patients in ways that were only a few years ago considered science fiction. The PUMA (Programmable Universal Machine for Assembly) robot, known for its mobility and wide range of actions as a general-purpose industrial robot, was first used in 1985 to aid in a brain biopsy. The PROBOT, also a surgical robot, can remove soft tissue from a human body during open surgery. ROBODOC [125] was initially used to aid in hip surgeries and, afterwards, used to help with bone surgeries in general. AESOP [126], the first surgical robot approved by the United States FDA, has more advanced features that provide doctors as well as patients with more options when it comes to surgery, including heart valve surgery. Other robots can now perform deep surgeries in very sensitive parts of the body, like the fallopian tube, lungs, and brain. The most recent hospital robot is the da Vinci Surgical System [127]. It is currently the leading technology in the growing line of hospital robots that are used for surgery. What sets this robot apart is its ability to perform minimally invasive yet complex surgical procedures.

6.3.11 Population health (demographics and epidemiology) and AI in hospital care The passage of MACRA (Medicare Access and CHIP Reauthorization Act of 2015 [128]) was designed to replace fee-for-service hospital reimbursement with value-based care (payment model that reimburses healthcare providers based on the quality they provide to patients rather than the number of patients seen [129]). Participating hospitals are investing in health IT solutions to help support population health management, data analytics, and care coordination capabilities [130]. Results indicate that hospitals with value-based incentives were more likely to have adopted population health management and care coordination technologies than hospitals with no incentives. Population health management and analytics technologies have been shown to help hospitals with risk stratification and predictive modeling, enabling them to deliver preventive, targeted care [130]. “New and emerging technologies, like the Internet of Things (IoT) and blockchain, will likely play an increasing role as well in supporting the capabilities needed under new payment models.” “Many health systems, including Vanderbilt University, Boston Children’s Hospital, and the Mayo Clinic, are already using these technologies to support key business and clinical decisions” [131].

6.3.12 Precision medicine/health (personalized health) and AI in hospital care As patient data in the form of genetic information and electronic health records (EHRs) dramatically increases, it has improved doctors’ assessments of the individual patient and treatments tailored to their precise needs. This type of care is referred to as precision medicine (see Chapter 4, page 101) meaning drugs or treatments designed for small groups rather than large populations.


To achieve this level of care, doctors are using AI to develop precision treatments, with the aim of using the massive data sets now available to gain insight into what makes patients healthy at the individual level. Those insights could lead to new drugs, uncover new uses for old ones, suggest personalized combinations, and predict disease risk. Dr. Bertalan Meskó, director of the Medical Futurist Institute, suggested that "there is no precision medicine without AI" [132]. By applying AI and machine learning to these multiple data sources (genetic data, EHRs, sensor data, environmental and lifestyle data), researchers are taking first steps toward developing personalized treatments for diseases from cancer to depression. AI-based diagnostic tools have already entered hospitals, but AI-based treatments are still at the foundational stage [133]. There are still barriers to AI in precision medicine, one being that the technology simply isn't sophisticated enough. Another hurdle has to do with deficiencies in EHR data. The average hospital in the U.S. uses 16 different EHR platforms [134]. With different formats and permissions, an AI system may not have access to all the information, lacking the proper API it needs to suggest a personalized treatment [135].

6.3.13 Healthcare analytics and AI in hospital care The field of health care analytics covers a broad range of the healthcare industry, with insights into both the macro and micro levels. One of the principal areas addressed by health care analytics is hospital management. It helps hospital managers operate better by providing real-time information that can support decisions and deliver actionable insights. Health care analytics provides a combination of financial and administrative data alongside information that can aid patient care efforts, better services, and improve existing procedures. Research and development, clinical data, and how patients and clients feel about services are crucial aspects of healthcare that, through analysis, can provide new innovative solutions and treatments. Beyond administrators, healthcare analytics can help providers improve upon several areas. One major area where using analytics can optimize efforts is managing hospital and foundation donations and grants. Often, for healthcare providers, donations are the basis of their yearly budgets, so organizing and tracking expenses and activity is vital for setting appropriate goals. Moreover, it can help track donor engagement, retention, and previous contributions. Finally, healthcare analytics allows hospitals to track physician records, patient histories, and needs to ensure the right doctor or professional is directed to the patients in most need. These systems can help improve patient satisfaction and expedite the healing process.

6.3.14 Public health and AI in hospital care [136] When one thinks of public health in relation to hospital care, Bellevue Hospital in New York City is the most recognized and respected, from its historical roots to its current activities. It is the oldest hospital in America, tracing its roots back to 1736. From its humble beginnings as a haven for the indigent, NYC Health + Hospitals/Bellevue has become a major academic medical institution of international renown.


It provides the latest state-of-the-art medical technology along with compassionate care and continued dedication to its public health roots. From its origin, the hospital never turned away a needy patient, regardless of ability to pay. Over the centuries, Bellevue has served as an incubator for major innovations in public health, medical science, and medical education. Often referred to as a national treasure, NYC Health + Hospitals/Bellevue defines the very best traditions of public medicine as a public service vital to the well-being of our civil society.

6.3.15 Access and availability and AI in hospital care A top priority for the health care system is access. According to most health system CEOs, that means providing patients with care where they want it, when they want it, and before they know they need it [137]. Using technology to support convenient access to care certainly isn't a new concept. But there's been a marked uptick in interest recently, said Brian Kalis, Accenture's managing director of digital health. "That trend has accelerated significantly over the past 12–24 months." Nearly half of all patients have used a walk-in or retail clinic to receive healthcare services, according to a recent survey [138] from Accenture. Younger patients seem to be the greatest users, with 24% of "Gen Zers" and 13% of millennials expressing dissatisfaction with the convenience of traditional healthcare services. But most health systems don't use "true artificial intelligence" yet, said Catherine Jacobson, CEO of Milwaukee-based Froedtert Health. She does believe, however, that they are on the way there. "There are algorithms and things that use historical data, like predictive analytics, and that's really what most of us are using right now," she said. There's also room for upstream interventions before the deployment of new technology. Emerging technologies require employees to learn new skills and adapt to changing components of their jobs. This may necessitate a change in the workforce, and that's part of what drives resistance to change [137].

6.4 Nursing care 6.4.1 Big data analytics and AI in nursing care New data-driven, intelligent innovations in healthcare bring the capabilities of adding value to nursing care. AI has entered healthcare in a big way, and nurses can harness it to enhance standard patient care processes and workflows and improve quality of care, impact cost, and optimize the patient and provider experience. AI computer systems can perform tasks that would otherwise require human intelligence. They can enhance and expedite a critical component of nursing care delivery, namely decision making. New nursing technologies collect and analyze healthcare data that can foretell the risk of future events that could hinder patient care. However, even that data can be incomplete, unclean, and is found in disparate systems within organizations. Machine learning training with vast amounts of big data that is readily available from multiple sources can address such inconsistencies.


Because big data is difficult to aggregate and analyze, nurses have yet to grasp its full use, maximize its potential, and reap its many benefits. With a greater comprehension of AI, nurses can be at the forefront of embracing and encouraging its use in clinical practice [139].

6.4.2 Health information and records (EHR) and AI in nursing care A significant percentage of nurses (62%) are satisfied with their overall electronic health record (EHR) experience. This compares to physicians who report only a 16% level of satisfaction. The survey included 70,000 clinicians, including 28,000 nurses [140]. Nurses expressed greater satisfaction than providers in the EHR being helpful in terms of patient safety. Nurses indicated that EHR enables them to deliver high-quality care. Also, they reported that the EHR allowed them to deliver better patient-centered care. More than half of the nurses surveyed reported the need for improved integration with outside organizations. The same number agreed that the EHR improved efficiency and provided them with needed analytics, quality metrics, and reporting [141].

6.4.3 Research/clinical trials and AI in nursing care Nursing education and nursing research will change relative to the role of, and demand for, professional nursing practice with, not for, robots in healthcare. These changes in routine nursing care will be dictated by the ability of robots to perform currently prescribed procedures and accomplish nursing tasks. Notwithstanding the efficiencies derived from robotic nursing tasks, there is inherent risk in the loss of the unique, personalized care human nurses provide. From low-fidelity machineries to high-fidelity technologies with AI assistance, technological advancements have changed the practice of nursing. Advances in technology have been made available to help nurses perform their jobs and care for patients more efficiently and safely. AI can identify clinically significant information underneath a massive amount of data, and this can aid in clinical decision making for nurses [142]. More research is needed to determine the effect of these new technologies in the nursing field and how to hasten their adoption in nursing practice [143]. Nursing education must incorporate technology and AI in the curriculum, with an increased focus in nursing research on investigating the effects of technology in the nursing profession and effective ways of adopting technology in nursing practice.

6.4.4 Blockchain and AI in nursing care To realize the full potential of blockchain in the health care sector, a standard, systematic, end-user approach is needed to create support tools for use in nurses' daily practice. Nurses designing blockchain solutions can serve as joint leaders in reforming health and social ecosystems, leading to a triple win for citizens, industry, and the service provider. Frontline nurses and a more holistic approach to value-based health and social care, placing the patient/citizen (prevention) at the center of the process, might produce an active community partnership. Blockchain can support citizens'/patients' empowerment in the management of their health and social data by guaranteeing that citizens in the chain know how and where their data is being used [144]. Nurses' added value in blockchain relates to boosting the continuity of care, facilitating the communication between the different actors involved to deliver the best outcomes for patients and citizens. In particular, nurses are vital to improving access and outcomes in a people-centered approach, ensuring the continuity of care across the primary and secondary health and social care sectors. Blockchain can enable nurses to deliver on access to care, through AI-supported health and social care. In so doing, it needs to foster integration and continuity of care policies and support nurses in the delivery of a safe and high-quality level of care.

6.4.5 Internet of Things (IoT) and AI in nursing care The development of wearable technology and smart sensors on the Internet of Things (IoT) makes communication with a care team at any moment possible. Such communications are accomplished through a simple touch of a button or tracking essential health data in a way that doesn't require multiple doctor's office visits. Patients can now wear devices that measure vital signs and upload the data to their caregivers. This changes how patients interact with healthcare professionals. Specifically, such devices can assist nurses, who spend the most time interacting with patients. IoT monitoring could eliminate many pricey doctor office visits, offering a low-cost, high-tech way to access care easily. The FDA has approved more than 100 health apps for medical use [145]. Nurses and other healthcare professionals will need to continue to monitor the usage of IoT devices as they proliferate rapidly. As the IoT grows, there are a few things for nurses to consider, according to AdvanceWeb.com [146]:
• How can nurses work to facilitate any necessary patient behavior modification through remote monitoring?
• How can nurses provide the best possible patient and caregiver training on these devices?
With these considerations, along with adopting uniform policies for IoT devices, nurses can create an IoT care plan that has the potential to improve healthcare outcomes and potentially reduce costs [147].
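A minimal sketch of how an uploaded wearable reading might be screened against alert thresholds before a nurse is notified appears below. The payload format, thresholds, and patient identifier are hypothetical and are not taken from any particular device or FDA-approved app.

```python
# Sketch of screening an uploaded wearable reading against alert thresholds.
THRESHOLDS = {
    "heart_rate_bpm": (50, 110),
    "systolic_bp_mmhg": (90, 160),
    "spo2_percent": (92, 100),
}

def screen_reading(reading):
    """Return a list of out-of-range vitals that should notify the care team."""
    alerts = []
    for vital, (low, high) in THRESHOLDS.items():
        value = reading.get(vital)
        if value is not None and not (low <= value <= high):
            alerts.append(f"{vital} = {value} outside [{low}, {high}]")
    return alerts

reading = {"patient_id": "demo-001", "heart_rate_bpm": 118, "spo2_percent": 95}
for alert in screen_reading(reading):
    print("Notify care team:", alert)
```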

6.4.6 Telehealth and AI in nursing care Telehealth nursing is growing rapidly as its services expand as an extension of healthcare providers' offices, hospitals, and health plans. These services include independent nursing practices. Telehealth nursing enables these services to be delivered remotely to improve efficiency and patient access to healthcare [148]. Numerous benefits of telehealth nursing have been identified, as nurses assist with patient retention, decrease on-call hours for healthcare providers, and offer versatility for use during any time interval, including around-the-clock, weekend, or after-hours care [149]. Telehealth nurses' roles also include the ability to guide patients to ED visits, clarify appropriate treatment options, educate about self-care at home, and assist with appointment scheduling [150]. Telehealth nursing programs generally require nurses to have 3–5 years of clinical experience. Nurses undergo training before taking on real-world patient calls. During their training, nurses learn concepts and descriptions to conduct assessments, analyze, and plan using proper decision support tools. These tools include algorithms and protocols that provide specifics to direct the telehealth nurse [151]. The Patient Protection and Affordable Care Act enacted continuity of care through patient-centered medical homes, which has been expanded by telehealth nursing. This has offered the ability for caregivers to call at their convenience, which resulted in risk reduction and cost savings to providers and healthcare organizations, as well as meeting the accessible care standards set by the National Committee for Quality Assurance [152].

6.4.7 Chatbots and AI in nursing care Perhaps the most popular chatbot in healthcare is "Florence," named after Florence Nightingale, the English social reformer and founder of modern nursing. It was developed in 2016 by David Hawig, a German entrepreneur and researcher. It is a personal health assistant that reminds users to take their medication or birth control pills and helps them keep track of their body weight, mood, and other health issues. To use it, one simply starts chatting with Florence inside a messaging platform, like Skype, Kik, or Facebook Messenger [153]. Another avatar-based virtual nurse assistant is called "Molly." Molly communicates with patients through a nurse avatar, offering clinical advice to assess their condition and suggest appropriate follow-up [154]. Using natural language processing (NLP, see below), these chatbots give healthcare professionals the chance to communicate with patients and increase patient engagement interactively. They also help ease the workload of doctors and nurses by performing basic administrative tasks. "Not all patients will receive their care or advice by a doctor, and many will be dealt with by excellent nurses or even by an administrative staff that may hand out a leaflet," stated Nils Hammerla, Ph.D., Machine Learning Specialist at Babylon, in MobiHealthNews. "We see AI as a chance to improve the efficiency of the healthcare system, to support our doctors and other medical staff, and to improve accessibility all around the world to healthcare services" [155].

6.4.8 Natural language processing (NLP) and AI in nursing care Natural language processing (NLP) is an AI program that helps computers, when coupled with Automated Speech Recognition (ASR), to better understand and process human (unstructured) language [156]. Utilizing machine learning, NLP and ASR enable scientists to develop algorithms for language translation, semantic understanding, and text summarization. This makes it easier for providers to understand and perform computations on volumes of text with less effort. Examples of NLP and ASR are virtual assistants, chatbots (mentioned above), and cell phone voice texting/messaging, and nursing applications include extracting EHR text from notes in non-discrete fields, vocal charting, and speech-activated paging devices.


Assessment in the nursing process is the organized collection and verification of data that is analyzed and communicated to members of the healthcare team and patients. NLP and ASR can aid this step through hands-free input of data and mechanical extraction of patient-specific data from notes and non-discrete fields of the EHR to determine the patient's condition more accurately. Diagnosis defines the disease; planning is where nurses set care priorities and define patient goals; and intervention is when the planned actions are initiated and completed [157]. All of these stages are made more precise and are expedited by predictive analytics and machine learning. NLP collects the data, and ASR completes the process, ensuring the most accurate, highest quality of care for each patient throughout the care continuum [158].

6.4.9 Expert systems and AI in nursing care Clinical decision support system (CDSS) functionality offers nurses a means to promote and enhance care delivery by using rules-based tools. AI extends CDSS used in nursing care. The difference is that AI, particularly predictive analytics, adds breadth and precision to decision making for healthier care experiences for those giving and receiving care. The University of Pennsylvania School of Nursing (Penn Nursing) developed, validated, and tested a two-step clinical decision support (CDS) algorithm called Discharge Referral Expert System for Care Transitions (DIRECT). The DIRECT CDS helps clinicians identify patients most in need of PAC (post-acute care) and suggests whether skilled home care or facility level care is best. The researchers developed the DIRECT CDS using values of structured patient data drawn from the electronic health record and knowledge elicitation from clinical experts as they reviewed de-identified case studies of actual patients. “While the proportion of patients referred to PAC between the 2 phases did not change significantly, the algorithm may have identified those patients most in need. This resulted in significantly lower inpatient readmission rates for same-day, 7-, 14- and 30-day intervals,” explained Kathryn H. Bowles, Ph.D., RN, FAAN, FACMI [159]. “The DIRECT CDS indicates potential as a useful tool to optimize PAC decision-making and improve patient outcomes. It may also identify patients who need PAC but are unable to receive it because of policy or insurance barriers. Future studies examining the outcomes of these patients may have policy implications,” said Bowles.
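The published DIRECT rules are not reproduced here, but the general two-step pattern it illustrates can be sketched as follows: step 1 flags whether post-acute care (PAC) appears to be needed, and step 2 suggests a level of care. The criteria, field names, and thresholds below are invented for illustration only.

```python
# Illustration of a two-step discharge-referral decision pattern (criteria are hypothetical,
# not the published DIRECT algorithm's rules).

def needs_post_acute_care(p):
    """Step 1: does the patient appear to need post-acute care (PAC) at all?"""
    return p["recent_hospitalizations"] >= 2 or p["adl_deficits"] >= 3

def suggest_pac_level(p):
    """Step 2: suggest skilled home care versus facility-level care."""
    if not needs_post_acute_care(p):
        return "no PAC referral suggested"
    if p["lives_alone"] and p["adl_deficits"] >= 4:
        return "facility-level care suggested"
    return "skilled home care suggested"

patient = {
    "recent_hospitalizations": 2,
    "adl_deficits": 3,          # deficits in activities of daily living
    "lives_alone": False,
}
print(suggest_pac_level(patient))   # skilled home care suggested
```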

6.4.10 Robotics and AI in nursing care The U.S. Bureau of Labor Statistics reports that there is a shortage of nurses, and the need will increase by 15% between 2016 and 2026 [160]. There is now a robot designed to help with nurse tasks and ease the pressure of understaffing. The robot's name is Moxi, designed and built by Austin-based Diligent Robotics. Moxi will not replace nurses but rather will handle up to 30% of tasks nurses do that don't involve patient interaction. These tasks include, but are not limited to, things like running errands around the floor or delivering specimens to the lab for analysis. "It's hard to argue that we're taking anyone's job. Everyone is trying to make the nurses they have go further," said the robot's developers, Andrea Thomaz and Vivian Chu. Nurses can set up rules and tasks that give the robot a command for a particular errand when certain things change in a patient's record (see Research in Nursing Care above). That means nurses wouldn't have to remember specific tasks that otherwise would be part of their daily job. This reduces the cognitive load on the nurse. The preprogrammed nature of Moxi's tasks doesn't mean that the robot never interacts with people. It is programmed for human-robot social interaction, and carefully designed to be non-threatening and transparent in its actions. Moxi's job is to take as many mundane tasks as possible off nurses' plates so that they could spend more time interacting with patients, but beta trials revealed that patients enjoy interacting with the robot as well [161]. The threat of humanoid robots replacing nurses in nursing practice has become a topic of serious discussions [162]. Nonetheless, there is already a robotic revolution happening in nursing, and these robots have made tasks and procedures more efficient and safer rather than replacing nurses [163].

6.4.11 Population health (demographics and epidemiology) and AI in nursing care The passage of the Affordable Care Act (ACA) refocused health care from individual, patient-specific, episodic care towards population health management, that is, the management of groups of people with an emphasis on primary and preventive care. Population health management assists these groups in attaining and maintaining health, with shared accountability for the environmental, social, and community factors that contribute to chronic disease and cost. The 3 million nurses in the United States [164], because of their role, their education, and respect for their profession, are well-positioned to help shape and improve our nation's health status and care infrastructure. The Robert Wood Johnson Foundation Catalyst for Change report [165] has created an urgent call for harnessing the power of nurses in our communities, schools, businesses, homes, and hospitals to build capacity for population health. Informatics Nurse Specialists (INSs) are trained, prepared, and well-positioned to fulfill roles across practice, research, education, and policy to support this call. INSs are integral in actively supporting the nursing profession to build population health in the 21st century, aligned with the Catalyst for Change white paper [165]. INSs are critical partners to lead population health, care coordination across settings of care, and inter-professional and community stakeholder collaboration.

6.4.12 Precision medicine/health (personalized health) and AI in nursing care As described previously in Chapter 4 (page 101), precision medicine is defined as "the emerging approach for disease treatment and prevention that takes into account individual variability in genes, environment, and lifestyle for each person" [166]. The focus on prevention, management, and alleviation of symptoms is perhaps nursing science's most significant contribution to precision health [167]. As such, the National Institute of Nursing Research's (NINR) Symptom Science Model [168] and NINR-funded P20/P30 Centers are planning to align symptom science within precision health, including "omics" approaches (e.g., epigenetics/genomics and the microbiome). Behavioral science, including self-management, and sociocultural as well as physical environmental factors are also included. Nursing scientists who focus on symptoms and self-management science are incorporating the concept of precision health into their ongoing research. The idea also is significant for the development, testing, and targeting of nursing interventions across the health care continuum [169]. The NINR also emphasizes the development of symptom science and the application of precision health methods to self-management of chronic conditions and to address these major health concerns [170]. To reach the goal of precision health, approaches must be applied throughout research translation from basic science to clinical research and, ultimately, at the population level to improve health and prevent disease. Nurse scientists need to become increasingly more knowledgeable and facile in integrating precision health approaches as they develop symptom and self-management interventions [171].

6.4.13 Healthcare analytics and AI in nursing care Predictive analytics falls under the umbrella of AI healthcare analytics (see Chapter 4, page 97). This type of advanced analytics allows nurses to discover previously unknown patterns in multiple sources of clinical and operational data that can guide better decision making. Through the use of predictive data, nurses can gain actionable insights that enable more accurate, timely, and appropriate interventions in a prescriptive way for both patients and nurses. Predictive analytics can alleviate the untoward effects of taxing patient care that causes nurse dissatisfaction and burnout. Innovations in technology, including predictive analytics applications, can increase nurse satisfaction and improve this facet of the Quadruple Aim (patient-centered approach, improving population health, containing costs, and making a difference on a large scale) [139]. Intelligent computer systems also assist the nursing process and the critical and organized thinking that expedites decision making by synthesizing valuable nursing skills and knowledge. There are compelling reasons why nurses should care about AI innovations. Artificial intelligence and predictive analytics can evolve nurses' thinking about care delivery and operational tasks in functionally disruptive ways to serve the Quadruple Aim. With this shift, nurses can begin to advocate for the adoption and use of AI in patient care delivery [139].

6.4.14 Preventive health and AI in nursing care Nurse practitioners (NPs) are registered nurses with advanced education, certification, and clinical skills to perform comprehensive physical assessments, the ability to diagnose patients' conditions, prescribe treatments and medications, and take charge of a patient's overall care. Moreover, beyond the clinical and personal skills synonymous with nursing, NPs have an added emphasis on prevention in their patient care [172].


By offering education and counseling, nurses significantly advance preventive health efforts nationwide. Preventive health refers to a collection of strategies that health care professionals encourage patients to implement to stay healthy and to reduce the risk of future disease. Preventive measures such as screenings, physical examinations, and immunizations often are performed in accordance with demographic factors like age, gender, and family history [173]. In a study published by the U.S. National Library of Medicine [174], the importance of nursing professionals working on the frontlines of patient care was highlighted as an aid to preventive health care efforts. Nurses achieve this primarily through the dissemination of information that patients can harness to keep themselves as healthy as possible.

6.4.15 Public health and AI in nursing care Public health nursing (PHN) is a distinct and well-recognized area of nursing. It is defined by the American Public Health Association, Public Health Nursing Section, as the practice of promoting and protecting the health of populations using knowledge from nursing, social, and public health sciences [175]. The combination of a clinical nursing background with knowledge from the public health and social sciences provides a sound basis for public health leadership positions. Often used interchangeably with community health nursing, this nursing practice includes advocacy, policy development, and planning, which addresses issues of social justice. Public health nursing practice, utilizing AI tools, focuses on population health (discussed above) to promote health and prevent disease and disability. Public health nurses work with the individuals and families that compose the communities and the systems that affect the communities. They work in a variety of settings such as health departments, schools, homes, community health centers, clinics, correctional facilities, worksites, out of mobile vans, and even dog sleds [176].

6.4.16 Access and availability and AI in nursing care

A low nurse-to-patient staffing ratio [177] (too few nurses for the number of patients) often leads to tired, stressed nurses, affecting the quality of patient care. Projections from the Health Resources and Services Administration’s (HRSA) Health Workforce Simulation Model (HWSM) highlight the inequitable distribution of the nursing workforce across the United States [178]. The HWSM projected a national RN excess of about 8% of demand and a national LPN deficit of 13% by 2030. For RNs, the state-level projections show deficits in several states and significant variations in oversupply in other states. Similarly, national estimates of LPNs in 2030 show maldistribution among states. These findings underscore the potential difficulties in ensuring adequate nursing workforce supply across the United States. Looking to the future, many factors will continue to affect demand for and supply of nurses, including demand for health services broadly and within specific health care settings [179]. Nonetheless, emerging care delivery models such as Accountable Care Organizations (ACOs) could change the way that RNs and LPNs deliver services. However, there is currently insufficient information to project the extent to which these new delivery models will materially affect the demand for nurses.


The value of nursing access and availability could not have been better demonstrated than by nursing’s critical and invaluable role during the COVID-19 pandemic. Without nurses as frontline clinical and supportive caregivers, virus-infected patients, inundated hospitals, and the very health and well-being of nations throughout the world could not have survived. Their dedicated, compassionate service, at the risk of their own health and well-being, will be appreciated and remembered forever by a grateful humanity.

6.5 Home health care, nursing homes and hospice care

6.5.1 Big data analytics and AI in home health, nursing homes, and hospice care

Nursing informatics [180] is a specialized field of research that utilizes medical data to support the practice of nursing, focusing specifically on home healthcare and nursing home care. The goal is to help clinicians make better decisions about patient care and to help home care patients achieve better health outcomes. Protected data within the home health and health care industries present some barriers to adopting new big data analytics. However, the widespread use of EHRs is providing increasing, significant, and rapid change in access to data. In 2010, fewer than 50% of hospitals were using electronic records; today, it is nearly 90%. Legal, regulatory, and legislative issues confronting home health and hospice operators are very complicated and seriously require big data analytic support. Three problems are emerging that the National Association for Home Care & Hospice (NAHC) believes will continue to be critical concerns for some time to come, because of increased bureaucratic interference with service operators [181]:

1) The Patient-Driven Groupings Model (PDGM), which represents the most significant change to the payment system in the 21st century. The PDGM model devised by the Centers for Medicare & Medicaid Services (CMS) will subject home health providers to an 8.01% cut in the base rate;
2) The Hospice “Carve-in,” wherein the Medicare Advantage benefits package would subject hospice care decisions to an additional layer of financial and utilization controls, fragmenting existing hospice benefits and diminishing their value;
3) The Report on Hospice Quality of Care, which provides for the Office of Inspector General to consider recommendations for increasing hospice responsibility related to the potential incidence of abuse.

6.5.2 Health information and records (EHR) and AI in home health, nursing homes, and hospice care

Home healthcare and hospice organizations are planning on expanding interoperability initiatives by 30% in 2019 to improve patient data management, according to a Brightree survey [182]. The survey included 675 home healthcare and hospice providers as well as 440 of their referral sources, asking their opinions on interoperability and electronic referrals. The study generated 4 insights:

1) Seventy percent of home healthcare and hospice organizations said that within the past 1–2 years, there has been an increase in the number of referral sources that request data to be sent electronically;
2) More than half (60%) of referring providers said they would be willing to switch to a new post-acute provider if that organization can accept electronic referrals;
3) Just 4% of home healthcare and hospice organizations said they were able to accept electronic referrals from a referral source’s EHR system;
4) Thirty-one percent of post-acute care providers said they would switch EHRs if they found a vendor that could better support their interoperability needs.

6.5.3 Research/clinical trials and AI in home health, nursing homes, and hospice care

The National Association for Home Care & Hospice conducts in-depth research studies on varying aspects of the home care and hospice industry. The 2 most recent include a study conducted by the Alliance for Home Health Quality & Innovation in conjunction with the Cleveland Clinic, “Optimizing Home Health Care: Enhanced Value and Improved Outcomes” [183]; and a second, sponsored by the Alliance for Home Health Quality & Innovation in conjunction with the research firm Dobson DaVanzo & Associates, “The Clinically Appropriate and Cost-Effective Placement Project” [184]. “Optimizing Home Health Care: Enhanced Value and Improved Outcomes” demonstrated that as the baby-boom population ages, there is an increasing demand for home care as more people choose to age at home. The changing political and fiscal landscape serves as a sociomarker to policymakers and other health care stakeholders as they look to reform post-acute care and make better use of home health. The “Clinically Appropriate and Cost-Effective Placement Project” examines how Medicare’s use of home health can better meet beneficiary needs and improve the quality and efficiency of care provided within the U.S. health care system. The project analyzed a 5% sample of 3 years of Medicare fee-for-service claims data.

6.5.4 Blockchain and AI in home health, nursing homes, and hospice care

The aging population encounters daily difficulties and frustrations in navigating the health care system. They are required to repeat their history and symptoms to provider after provider. Poor reporting causes delays in care and undue anxiety at the complexity of the system. These problems present an opening for blockchain technology. A company (Wellderly [185]), in conjunction with other industry partners, is building the world’s first blockchain-based platform for elderly wellness and care. The platform coordinates health and well-being services to meet the needs of aging citizens. Some of these services include adopting a personalized and integrated approach for the elderly, with the following capabilities and benefits:

• Allow longitudinal data of the elderly to be securely stored on the blockchain, to which they can grant access for the service providers they have engaged;
• Link all product and service providers for eldercare with the elderly, their families, and caregivers to form a seamless ecosystem where relevant information is readily accessible to them;
• Motivate the elderly to actively participate in eldercare education and voluntarism with incentives to earn vouchers for senior citizen products and services.

6.5.5 Internet of Things (IoT) and AI in home health, nursing homes, and hospice care

With its growing interconnectivity, data sharing, and operational advantages, the Internet of Things (IoT) is an ever-increasing force in business and health care. Being able to connect any person, particularly the elderly, with any device in virtually any location is a concept of enormous value in health care. The evolution of IoT is transforming the very nature of remote patient monitoring (RPM). RPM is designed to deliver care via a series of interconnected, home-based devices that capture and monitor patient data. With innovative technologies like smart sensors [186], healthcare facilities, nursing homes, and home care providers now have the power not only to gather and access a wide variety of data but also to instantly leverage that data to deliver the best possible care while reducing time and wasteful costs. The benefits of IoT are already apparent in the improved outcomes enjoyed in progressive hospitals and health facilities. Such benefits are especially recognized throughout many home and home care settings, where unobtrusive but highly responsive devices wire patients directly to their off-site caregivers [187].
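The core of an RPM pipeline of this kind is simple: readings streamed from home devices are compared against clinician-set limits and turned into alerts for off-site caregivers. The following minimal Python sketch illustrates only that flow; the metrics, thresholds, and readings are hypothetical placeholders, not values from any particular RPM product.

```python
from typing import Optional

# Hypothetical clinician-configured limits (low, high) for each monitored metric.
THRESHOLDS = {"heart_rate": (50, 110), "spo2": (92, 100), "glucose_mgdl": (70, 180)}

def check_reading(metric: str, value: float) -> Optional[str]:
    """Return an alert message if a reading falls outside its configured range."""
    low, high = THRESHOLDS[metric]
    if value < low or value > high:
        return f"ALERT: {metric}={value} outside [{low}, {high}]"
    return None

# Simulated stream of readings from home-based devices.
stream = [("heart_rate", 118), ("spo2", 96), ("glucose_mgdl", 64)]
for metric, value in stream:
    alert = check_reading(metric, value)
    if alert:
        print(alert)  # in practice, routed to the off-site caregiver's dashboard
```

In a deployed system, the same comparison would feed a notification service rather than printing, and thresholds would be tailored per patient.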

6.5.6 Telehealth and AI in home health, nursing homes, and hospice care

Insurance companies are increasingly expanding coverage for telehealth and telecare programs. The Centers for Medicare & Medicaid Services (CMS) has also announced limited support of telemedicine [188]. This coverage is excellent news for home healthcare, nursing home, and hospice providers, since telehealth can virtually connect patients with doctors and show them any patient changes or issues. Telehealth has the greatest potential to transform home healthcare and hospice agencies by improving patient care and efficiencies and by streamlining a care provider’s day. Many home care, nursing home, and hospice administrators report that telehealth and kiosks are already helping reduce costs for their agency’s patient population. The technology can also lead to better communication internally and externally, as well as fewer silos (isolations of data) within an organization [189].


6.5.7 Chatbots and AI in home health, nursing homes, and hospice care

What if, at least initially, you didn’t have to talk to another human about your end of life? What if your “end-of-life” conversation was with a machine? A team at Northeastern University in Boston is exploring this [190]. They have begun a trial in which they are introducing terminally ill patients to chatbots able to converse with humans. According to the research team, “Patients tend to be referred to palliative care much too late. Something like a third of patients moved to a hospice die within a week.” Instead, perhaps people with a short life expectancy could use technology with AI to help prepare themselves logistically, emotionally, even spiritually for their deaths. After a voice greeting, the chatbot provides a choice of responses on the touchscreen. The interaction is tightly scripted to keep the conversation focused and avoid the communication breakdowns that can occur with even the most intelligent machines. That also protects the patient from revealing too much personal information. The chatbot also presents the option to expand the conversation beyond the person’s physical condition, perhaps to discuss end-of-life planning and spiritual considerations. The program doesn’t generate documents but enables family members or caregivers to see if, what, and when a patient is ready to talk [191].

6.5.8 Natural language processing (NLP) and AI in home health, nursing homes, and hospice care

NLP can be applied to extensive datasets through data-mining of the EHR, enabling widespread implementation and population-based assessment at low cost. Because of these strengths, NLP methodology can be applied to study documentation in EHR notes against the following recommended processes of care: goals of care conversations, clarifying code status, and assessment for nursing home, hospice, and palliative care consultation. Building a key term library is critical for implementing this NLP concept. Researchers manually reviewed medical records to identify documentation of each of these processes of care. The key term libraries were refined and validated by manual review of notes flagged by NLP, as well as manual review of records not flagged by NLP. This iterative process resulted in improvements in NLP performance over time and a “gold standard review.” Clinical chart reviews, laboratory, and imaging studies were manually performed, and assessment for hospice and palliative care consultation was conducted. NLP was then performed, and results from NLP were compared with findings from the gold standard chart review. The NLP libraries had high sensitivities and specificities that ranged from 93.8% to 100%, and the NLP search abstracted these records and provided a structured dataset in just 26 seconds. By comparison, manual review and data entry required over 20 hours to complete. This study suggests that established palliative care quality benchmarks are applicable in palliative surgery and can be rapidly and accurately implemented using NLP [192].
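At its simplest, the key-term-library approach described above amounts to screening each note against curated term lists and recording which processes of care appear to be documented. The Python sketch below illustrates that idea only; the term lists, process names, and example notes are hypothetical stand-ins, not the validated libraries from the cited study.

```python
import re

# Hypothetical key-term libraries for two processes of care; the study's actual
# libraries were built and validated through iterative manual chart review.
KEY_TERMS = {
    "goals_of_care_conversation": [r"goals of care", r"code status", r"comfort[- ]focused care"],
    "palliative_care_consult": [r"palliative care consult", r"hospice referral", r"hospice evaluation"],
}

def flag_note(note_text: str) -> dict:
    """Return, for one clinical note, which processes of care appear to be documented."""
    text = note_text.lower()
    return {
        process: any(re.search(pattern, text) for pattern in patterns)
        for process, patterns in KEY_TERMS.items()
    }

# Example: screen a small batch of (made-up) admission notes.
notes = [
    "Family meeting held to discuss goals of care; patient remains full code.",
    "Plan: palliative care consult placed for symptom management.",
]
for i, note in enumerate(notes, start=1):
    print(f"note {i}: {flag_note(note)}")
```

The iterative refinement described in the study corresponds to repeatedly comparing these flags against manual chart review and adjusting the term lists until sensitivity and specificity are acceptable.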


6.5.9 Robotics and AI in home health, nursing homes, and hospice care

Continuing advancements in AI and related tools like robots are helping bring the more traditional concept of physical care into reality. The real challenge may be building and cost-effectively deploying such technologies. Advancements have already included combining proven communication capabilities, like video and chatbots, with the capability of a robot to move around remotely, so that the person with the robot doesn’t have to control it or know how it works. These home robots can also serve as the physical interface for numerous digital technologies and interventions, such as socialization chatbots or cognitive exercises for lonely seniors. The growing number of seniors and chronic disease patients is continuing to drive demand for new in-home, nursing home, and consumer-facing care tools. It is anticipated that hardware-based AI will play an increasing role in the industry alongside its software-driven predecessors [193].

6.5.10 Population health (demographics and epidemiology) and AI in home health, nursing homes, and hospice care

The U.S. health care system is moving towards value-based payment models (pay for outcomes rather than fee-for-service), and Medicare Advantage (MA) and other health plans are incentivized to provide higher quality care at a reduced cost. This is done by eliminating non-beneficial and wasteful care [194]. Community-based palliative care programs are associated with decreased hospitalizations, nursing home stays, and costs in the last months of life [195]. An MA study incorporated key components of population health, including proactive identification, multidisciplinary team care management, phone and home visits, emphasis on care coordination, collaboration with physicians and health plans, and leveraging a mobile platform to support workflows and reporting [196]. The clinical team also followed a consistent care process guided by AI-supported standardized assessments, intervention-based care paths, and real-time clinical dashboards. A population health community-based palliative care program staffed by nurses and social workers was associated with lower costs, decreased hospitalizations and ICU days, and increased hospice utilization while improving care quality and member satisfaction. Care that is primarily driven by the values, goals, and preferences of seriously ill individuals and their family members results in more compassionate, affordable, sustainable, and high-quality care [197].

6.5.11 Precision medicine/health (personalized health) and AI in home health, nursing homes, and hospice care

Precision medicine is an emerging approach to disease treatment and prevention that considers differences in people’s lifestyles, environments, and biological makeup, including genes. The All of Us Research Program [198] partners with 1 million diverse people who share information about themselves over many years, intending to enable research to more precisely prevent and treat a variety of health conditions.


Participants are asked to share different types of health and lifestyle information through online surveys and electronic health records (EHRs), which will continue to be collected throughout the program. Participants will be asked to visit a local partner site to provide blood and urine samples and to have basic physical measurements taken, such as height and weight. In the future, participants may be invited to share data through wearable devices and to join follow-up research studies, including clinical trials. The Program is partnering with trusted community partners, such as Diverse Elders Coalition member organization, the National Hispanic Council on Aging (NHCOA) [199], to ensure that their outreach efforts are culturally and linguistically competent. Factors such as the current political climate, which has created fear and unwillingness to reveal personal information to the U.S. government, have been considered by All of Us leadership and protections have been developed.

6.5.12 Healthcare analytics and AI in home health, nursing homes, and hospice care

Useful data and analytics are critical in home health care. Metrics that need frequent checking are the status of claims in the billing process, which claims came back “Return to Provider” (RTP) or rejected, and the reason codes for those claims. If there is still uncertainty about taking necessary action based on your claim data, a comparative analytics solution [200] offers insight into critical metrics like claim processing time, denials, and the amount paid on claims. Beyond the challenges of Medicare billing, home health agencies and nursing homes also must deal with additional rules, deadlines, and possible penalties with Requests for Anticipated Payments (RAPs). Canceled RAPs and paid RAPs are just a few of the numbers to track to keep a better handle on your deadlines and takebacks. The agency must also assess the risk of Z-RAPs [201], when RAPs pay at 0% instead of 60% of the final payment. And of course, home health agencies depend heavily on hospital referrals, and a great way to foster those referrals is to keep your hospital readmission rates low. Regularly monitoring and benchmarking your re-hospitalization rates against CMS best practice standards will put you in a better position to catch and fix any problems before re-admissions get too high.
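The claim metrics named above reduce to straightforward aggregations once claims are in a structured form. The short Python sketch below shows the general shape of such a report; the claim records, status labels, and dates are invented for illustration and do not reflect any specific billing system.

```python
from datetime import date

# Hypothetical claim records for one agency's billing cycle.
claims = [
    {"status": "paid",   "submitted": date(2020, 1, 2),  "resolved": date(2020, 1, 20)},
    {"status": "rtp",    "submitted": date(2020, 1, 5),  "resolved": date(2020, 1, 15)},
    {"status": "denied", "submitted": date(2020, 1, 7),  "resolved": date(2020, 2, 1)},
    {"status": "paid",   "submitted": date(2020, 1, 10), "resolved": date(2020, 1, 28)},
]

total = len(claims)
rtp_rate = sum(c["status"] == "rtp" for c in claims) / total        # Return-to-Provider rate
denial_rate = sum(c["status"] == "denied" for c in claims) / total  # denial rate
avg_days = sum((c["resolved"] - c["submitted"]).days for c in claims) / total  # processing time

print(f"RTP rate: {rtp_rate:.0%}  Denial rate: {denial_rate:.0%}  Avg processing: {avg_days:.1f} days")
```

A comparative analytics tool would compute the same figures across many agencies and benchmark each against its peers and against CMS standards.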

6.5.13 Preventive health and AI in home health, nursing homes, and hospice care

A digital health company, CarePredict [202], has produced an AI-supported preventive healthcare solution for senior adults. The AI platform consists of lightweight sensors and wearables (IoT devices), designed for seniors, that unobtrusively collect rich data sets on seniors’ activities and behavior patterns. These activities include, but are not limited to, eating, drinking, walking, grooming, sleeping, bathing, and toileting. Deep learning models trained on these activity sets are used to surface insights such as signs and symptoms of self-neglect indicative of depression, unusual toileting patterns characteristic of a urinary tract infection (UTI), or increased fall risk due to malnutrition, gait changes, lack of rest, and dehydration. This platform serves as a monitoring solution for parents and grandparents who want to age in the comfort of their own home in a safe, smart, and sustainable way. At the same time, it empowers family members with constant visibility and unparalleled insights into the evolving health of their loved ones. It allows them to make the right decisions well in advance [202].
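One way to see how activity data can surface this kind of early warning, without any deep learning at all, is to compare today’s count of a routine activity against a resident’s own recent baseline. The Python sketch below is a deliberately simplified illustration of that idea (it is not CarePredict’s method); the counts and the two-standard-deviation cutoff are assumptions chosen for the example.

```python
from statistics import mean, stdev

# Hypothetical toileting-event counts per day over the prior week (the baseline).
baseline_days = [6, 7, 5, 6, 7, 6, 8]
today = 13  # today's count, which the text links to possible early UTI signs

mu, sigma = mean(baseline_days), stdev(baseline_days)
z_score = (today - mu) / sigma

# Flag for caregiver review when today's count is far above the resident's own norm.
if z_score > 2:
    print(f"Flag for review: today's count {today} is {z_score:.1f} SDs above the baseline mean {mu:.1f}")
```

Learned models extend the same principle across many activity types at once and account for normal day-to-day and seasonal variation.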

6.5.14 Public health and AI in home health, nursing homes, and hospice care

TRICARE [203] covers hospice care for military members and veterans in the United States, the District of Columbia, and U.S. Territories, administered through an AI-supported big data program under the following guidelines:

• The patient, primary physician, or authorized family can initiate hospice care.
• Hospice care will only start with a doctor’s order.
• The patient must complete an “election statement” and file it with the regional contractor.

Hospice care is not covered in any other overseas areas (outside of U.S. Territories). There are 4 levels of care within the hospice benefit:

1) Continuous home care;
2) General hospice inpatient care;
3) Inpatient respite care;
4) Routine home care.

Care within the 4 levels may include physician services, nursing care, counseling, medical equipment, supplies, medications, medical social services, and physical and occupational therapy services. Also included are speech and language pathology services, and hospice short-term acute patient care related to the terminal illness. Because hospice care emphasizes supportive services, such as pain control and home care, the benefit allows for home health aide services and personal comfort items, which are limited under TRICARE’s main coverage programs. However, services for an unrelated condition or injury, such as a broken bone or unrelated diabetes, are still covered as a regular TRICARE benefit. TRICARE does not cover room and board unless the patient is receiving inpatient or respite care. Patients cannot receive other TRICARE services or benefits (curative treatments related to the terminal illness) unless the hospice care is formally revoked. No care for the illness is covered by TRICARE unless the hospice provides or arranges for the care.

6.5.15 Access and availability and AI in home health, nursing homes, and hospice care

As of 2016, there were 12,200 home health agencies, 15,600 nursing homes, and 4,300 hospice care agencies in the U.S. [204]. The National Center for Health Statistics: Vital and Health Statistics published comprehensive analytical and epidemiological studies in February 2019 [205]. In 2016, about 65,600 paid, regulated, long-term care services providers in 5 major sectors served more than 8.3 million people in the United States. The report provides information on the supply, organizational characteristics, staffing, and services offered by providers, and on the demographic, health, and functional composition of, and adverse events among, users of these services. Service users include residents of nursing homes and residential care communities, patients of home health agencies and hospices, and participants of adult day services centers.

6.6 Concurrent medical conditions (“comorbidity,” aka “multimorbidity”) (See also Chronic Illnesses, Chapter 7, page 411)

Throughout this section’s discussion of comorbidities, you will begin to get a sense of the magnitude of the problem of concurrent medical conditions, especially in the aging population worldwide and now among SARS-CoV-2 infected patients. These combined conditions encompass physical as well as mental disorders in patients. And because these conditions occur disproportionately in the elderly, bodily injury also becomes an instigating complication. These confluent issues make comorbidities a demonstrable public health issue. Fig. 6–1 graphically demonstrates the age distribution of comorbidities and their number and frequency across increasing age ranges.

FIGURE 6–1 Comorbidities by age. Combined conditions (“comorbidities”) encompass physical as well as mental disorders in patients with their greatest frequency for occurrence in the elderly population. This makes the issue of comorbidities a demonstrable public health issue. Source: European Respiratory Journal, Lancet 2007.


Effective January 2018, the National Library of Medicine designated specific MeSH (Medical Subject Headings, the National Library of Medicine’s official vocabulary thesaurus) definitions for concurrent medical conditions. A separate classification term for multimorbidity, distinct from comorbidity, was assigned. Nicholson et al. [206] re-emphasize that this is more than a semantic difference. While both terms focus on the occurrence of multiple chronic conditions within the same individual, the term “comorbidity” refers to the combined effects of additional conditions in relation to a chronic index condition (such as comorbidity in diabetes mellitus, stroke, depression, or cancer). In comparison, the term “multimorbidity” indicates that no single condition holds priority over any of the co-occurring conditions, from the perspective of either the patient or the health care professional.

6.6.1 Big data analytics and AI in concurrent medical conditions (“comorbidity”)

Long-term post-acute care (LTPAC), skilled nursing facilities (SNFs), and home health agencies are major players in helping patients recover from serious events and in supporting medically frail individuals, especially the elderly with comorbidities. Unfortunately, many of these facilities are not integrated digitally with their physician and hospital partners, which leads to incomplete information and fragmented treatment plans. Managing complex, high-risk patients, such as elderly individuals with multiple comorbidities, requires payers, providers, and LTPAC organizations to share more data and collaborate more closely, and doing so is a high priority among many stakeholders. Learning to use more readily available data, like demographics, ICD-10 codes, and ADT alerts, is a vital step toward eventually integrating much more complex and varied big data into comorbid population health management. Healthcare stakeholders that aim to create highly personalized preventive care and chronic disease management initiatives must work with their partners, AI health systems, and peers. This allows them to access and analyze big data that can meaningfully improve the delivery of quality patient care [207].

6.6.2 Health information and records (EHR) and AI in concurrent medical conditions (“comorbidity”)

The EHR and its data sets have proven unequivocally to be effective tools in research studies used to discover patterns of disease susceptibility and comorbidity [208]. Linking the EHR to other -omics data types within a network biology framework has also made significant contributions to better understanding various risk factors in disease etiology. These include genetics [209], environment, demography, and combinations thereof. A study was conducted to determine the relationship between disease comorbidities and commonly shared genetic architecture of disease. Records of pairs of conditions were combined from 2 different electronic medical record (EMR) systems (Columbia, Stanford) and compared to an extensive database of published disease-associated genetic variants (VARIMED). Data on 35 disorders were available across all 3 sources, comprising medical records for over 1.2 million patients and findings from over 17,000 publications. Disease pairs were categorized as having predominantly clinical, genetic (phenotypic), or both kinds of manifestations. Confounding effects of age on disease incidence were controlled for by only comparing diseases when they fall in the same cluster of similarly shaped incidence patterns. This study, using AI, presented a method for integrating clinical EMR and genetics data to better understand the nature of disease comorbidity. The study identified sets of disease pairs that deviate from the assumption of independent co-occurrence in the 2 different EMR systems. By integrating the clinical observations with genetics, the study was able to categorize which disease pairs might be explained by their shared genetics and which might have more of an environmental component [209].
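The core test in such a study is whether two diagnoses co-occur more often than their individual prevalences would predict under independence. The Python sketch below illustrates that comparison on a toy cohort; the patient records and disease names are invented, and real analyses (like the study above) add incidence-cluster matching and other controls for confounding.

```python
from itertools import combinations

# Toy cohort: each patient is represented by the set of diagnoses on their record.
patients = [
    {"diabetes", "hypertension"},
    {"diabetes"},
    {"hypertension", "ckd"},
    {"diabetes", "hypertension", "ckd"},
    {"asthma"},
]

n = len(patients)
diseases = sorted(set().union(*patients))
prevalence = {d: sum(d in p for p in patients) / n for d in diseases}

# Compare observed pairwise co-occurrence with the rate expected under independence.
for a, b in combinations(diseases, 2):
    observed = sum(a in p and b in p for p in patients) / n
    expected = prevalence[a] * prevalence[b]
    lift = observed / expected if expected else float("nan")
    print(f"{a} + {b}: observed={observed:.2f} expected={expected:.2f} lift={lift:.2f}")
```

Pairs whose observed co-occurrence greatly exceeds the expected rate are the candidates then examined for shared genetic variants versus environmental explanations.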

6.6.3 Research/clinical trials and AI in concurrent medical conditions (“comorbidity”)

The previous section on EHRs and comorbidity showed how organic diseases can be paired within an individual based on common clinical and genetic factors. Research studies in mental disorders have revealed some similar patterns. Comorbidity in mental illness can include a situation where a person receives a medical diagnosis that is followed by the diagnosis of a mental disorder (or vice versa), or it can involve the diagnosis of a mental disorder that is followed by the diagnosis of another mental disorder. As with the EHR studies, these comorbidities involving mental disorders were revealed in clinical trials through the use of big data analytics and machine learning algorithms. A large cross-sectional national epidemiological study of comorbidity of mental disorders in primary care in Spain, published in 2009 in the Journal of Affective Disorders [210], showed that among a sample of 7936 adult patients, about half had more than 1 psychiatric disorder. Furthermore, in the U.S. National Comorbidity Survey, 51% of patients with a diagnosis of major depression also had at least 1 anxiety disorder, and only 26% of them had no other mental disorder [211].

6.6.4 Blockchain and AI in concurrent medical conditions (“comorbidity”)

As an example of blockchain usage with comorbidity, consider a patient sent from their physician to a specialist to check for cardiovascular comorbidities such as hypertension, who is then sent on to another specialist to treat or prevent another disorder. Within the blockchain, such patients have a better chance of managing their disease and comorbidities because they would not need to take the time to gather all their health records from multiple doctors to send to their new specialist. All additional specialists in the patient’s care would be added to the existing blockchain, where the patient can access the same information as everyone else already participating in the chain. All the participants in the blockchain have the additional bonus of knowing that the information transmitted between different providers has undergone validation, with a reduction in lost data (e.g., indecipherable handwriting). Beyond ease of use, there are other benefits to be gained from the blockchain, such as enhanced patient-doctor communication, real-time emergency alerts, and increased preventive care through the empowerment of an informed patient [212].

With the aging demographic, there is a growing number of people living with comorbidities and non-communicable diseases and in need of complex care interventions. A nursing co-designed, blockchain-based, value-based healthcare system has been developed with the potential to strengthen continuity of care and nurses’ advanced roles in care pathway coordination. Blockchain technology goes beyond the solutions provided by paper files or electronic health records (EHR), offering a seamless and secure way to capture, track, and share a citizen/patient’s entire health experience. It combines personal data on patients with comorbidities with primary care and public health data. This facilitates and enables the transition from fee-for-service payments, which prioritize the volume of medical actions over effective and efficient people-centered care, towards value-based reimbursement models that prioritize quality outcomes and continuity of care.

The European Commission has recognized the considerable potential of blockchain-inspired technologies for administrations, businesses, and society in general [213]. In this context, the EU has already allocated millions to blockchain-related projects, with potentially more to be committed from 2018 to 2020. The Commission recently launched the EU Blockchain Observatory and Forum [214], mapping existing blockchain initiatives, monitoring related trends and developments, informing policy debates, and inspiring collective actions based on specific use-cases. Blockchain can significantly contribute to enabling nurses to deliver on access to health and social care through the digitalization of health and care. Engaging end-users with comorbidities and local frontline nurses in co-designing ‘fit for purpose’ health and social care systems is the first step forward on this path [215].

6.6.5 Telehealth and AI in concurrent medical conditions (“comorbidity”)

Patients who benefit most from telehealth applications are usually (1) older, (2) affected by multimorbidity, and (3) on polypharmacy (i.e., using 5 or more drugs concurrently). This implies a challenging situation for the patient, caregiver, or healthcare professional [216]. Telehealth has considerable potential to generate real-world evidence when the special needs of the affected patients are considered while setting up a telehealth system. Cognitive and physical impairments should be considered, as older and multimorbid adults may, for example, have usability problems with common smartphone apps [217]. Data completeness on drug therapy includes all drugs that have been prescribed, dispensed, and taken. Linking the monitoring system to a comprehensive EHR system containing medication information can give healthcare professionals a comprehensive overview of the patient’s current medication status [218]. Combining these data with patient-reported medication intake from telehealth systems can prevent adverse effects resulting from, e.g., drug-drug interactions, intolerance, or double prescriptions. Integrated telecare programs implemented for comorbid patients have shown improved clinical outcomes, self-management, and quality of life. However, different patient populations benefit in different ways from these care plans. Thus, continuous evaluation and service adaptation in a real-life environment, with clear outcome metrics, are required for best results [219].

6.6.6 Chatbots and AI in concurrent medical conditions (“comorbidity”)

A project called CONSULT (Collaborative mObile decisioN Support for managing mULtiple morbiditTies) was developed to explore the feasibility of employing a collaborative decision-support tool to help patients suffering from chronic diseases self-manage their treatment plans. By ‘collaborative’ is meant that the patient, caregivers, and medical professionals work as a team to decide on the best treatment plan for the patient [220]. The conversational component of the CONSULT system uses a chatbot integrated with the patient’s EHR. Interactions with the chatbot are supported by argumentation-based dialogue [221]. Additionally, the patient may have questions regarding their current treatment plan (e.g., why a particular medication has been prescribed). All the explanations are generated by the argumentation engine and displayed on the personalized dashboard. The conversational component can also alert the patient to an irregularity in 1 or more of their recent measurements and initiate a conversation. The purpose of such a discourse is to find a possible solution, suggest that the patient review specific clinical findings, or advise the patient to contact their health care provider. Patients with different characteristics in terms of risk factors, comorbidity, or demographic groups are participating in studies to evaluate the usability of the proposed system. Results have shown that argumentation is a promising method for explaining decisions to help patients choose a treatment plan together with their doctor. The use of argument and attack schemes specialized for the medical domain will be a next step toward generating better explanations [222].
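A tightly scripted conversational turn of the kind described above can be pictured as follows: the system notices an out-of-range reading and replies with a fixed prompt plus a small menu of responses, which keeps the dialogue focused. The Python sketch below is a simplified illustration of that pattern only; it is not the CONSULT system, and the reading, range, and menu options are invented for the example.

```python
# Minimal sketch of one scripted chatbot turn triggered by an irregular measurement.
def scripted_turn(reading_name, value, normal_range):
    low, high = normal_range
    if low <= value <= high:
        return "All recent readings look as expected.", []
    prompt = (f"Your {reading_name} reading of {value} is outside the usual range "
              f"({low}-{high}). What would you like to do?")
    options = ["Explain why this matters", "Remind me later", "Contact my care team"]
    return prompt, options

# Example: a hypothetical elevated systolic blood pressure reading.
prompt, options = scripted_turn("systolic blood pressure", 168, (90, 140))
print(prompt)
for i, option in enumerate(options, start=1):
    print(f"  {i}. {option}")
```

In CONSULT itself, the explanation behind each option is produced by the argumentation engine rather than hard-coded text.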

6.6.7 Natural language processing (NLP) and AI in concurrent medical conditions (“comorbidity”)

To effectively identify phenotype-genotype associations, extracting comorbidity information is essential because of comorbidities’ confounding effects on clinical outcomes [223]. To address this challenge, an automated method was developed that accurately determines comorbidities from electronic medical records [224]. Using a modified version of the Charlson comorbidity index (CCI) [225], a reference standard of comorbidities was created through manual review of 100 admission notes. The notes were processed using the MedLEE natural language processing system [226], and queries were written to extract comorbidities automatically from its structured output. Natural language processing (NLP) can thus be used to build a generalizable method for extracting comorbidities from electronic health records (EHR); the goal was to develop an effective, automated, and generalizable NLP-based method that derives comorbidities from narrative records. The system was able to derive comorbidities automatically from narrative admission notes with high accuracy. This method has higher sensitivity and accuracy than determining comorbidities from claims data and has the additional advantage of utility in prospective studies that need to identify phenotypes from medical records and correct for the confounding effects of comorbidities [227].
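Once individual comorbidities have been pulled out of the notes, an index such as the CCI is just a weighted sum over the conditions found. The Python sketch below shows that final scoring step with a handful of illustrative weights; the real Charlson index covers many more condition categories with validated weights, and the extraction step itself (the MedLEE processing above) is far more involved.

```python
# Illustrative subset of Charlson-style weights; the full index covers many more
# condition categories, each with a validated weight.
CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "diabetes_without_complication": 1,
    "moderate_severe_renal_disease": 2,
    "metastatic_solid_tumor": 6,
}

def charlson_score(extracted_conditions):
    """Sum the weights of the comorbidities an NLP pipeline extracted from a note."""
    return sum(CHARLSON_WEIGHTS.get(condition, 0) for condition in extracted_conditions)

# Example: conditions (hypothetically) extracted from one admission note.
conditions = {"diabetes_without_complication", "moderate_severe_renal_disease"}
print(charlson_score(conditions))  # -> 3
```

Validation in the cited study consisted of comparing scores derived this way against the manually built reference standard.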

6.6.8 Expert systems and AI in concurrent medical conditions (“comorbidity”)

The application of expert systems in medicine has broadened and now covers many areas, especially the use of a reasoning process in decision-making. Through the self-learning ability of machine learning, the reliability and accuracy of these programs have improved significantly in multiple disease categories, including mental and physical illnesses, which often are found as comorbidities [228]. An example of an expert system used to manage pulmonary diseases, which commonly present as comorbidities in chronically ill patients, was studied using clinical data from 189 patients. The study compared diagnostic accuracy between practitioners and medical students using the expert system versus diagnostic gold standards. The goal was to help improve the expert system’s diagnostic accuracy while minimizing errors and costs in diagnosing and broadening practitioner knowledge [229]. It was concluded that the accuracy of the system is enhanced as the total amount of data provided for each patient increases. In particular, additional objective and reliable data provide much better accuracy in the output of the expert system, as expected. Further improvement in the performance and accuracy of the system may be obtained by designing the program with AI self-learning ability as well [230].

6.6.9 Robotics and AI in concurrent medical conditions (“comorbidity”)

Socially assistive robots are now being used for patients with comorbidities to help manage their chronic disease condition(s). A study was conducted with patients with chronic obstructive pulmonary disease (COPD) and at least 1 other disease condition. Adherence to medication and availability of rehabilitation were suboptimal in the patient group, increasing the risk of rehospitalization. The study aimed to investigate the effectiveness of a robot delivering telehealth care to increase adherence to medication and home rehabilitation, improve quality of life, and reduce hospital readmission compared with a standard care control group. The study group consisted of 60 randomized patients who were provided with a robot at home for 4 months. The robot group did not show a significant reduction in rehospitalizations. However, 75% of the patients reported appreciating the robot’s capacity to offer companionship, which may provide benefits over other kinds of platforms such as computers or iPads. The intervention did improve adherence to both medication and rehabilitation exercises [231].

6.6.10 Population health (demographics and epidemiology) and AI in concurrent medical conditions (“comorbidity”)

The financial and clinical success of population health management programs takes much more into account than what happens to patients in their doctor’s office. Patients must be assisted in overcoming socioeconomic barriers to truly improve their health. A Robert Wood Johnson Foundation study [232] found that funding must be directed to community improvements that can reduce downstream medical costs. A 20% increase in the median social-to-health spending ratio was equivalent to 85,000 fewer obese adults and more than 950,000 fewer adults with mental illness. The study added that this significantly reduced the associated spending for these conditions and their comorbidities. The World Health Organization defines social determinants as “the conditions in which people are born, grow, work, live, and age and the wider set of forces and systems shaping the conditions of daily life” [233]. Population health management programs that address these manageable pieces of the great American puzzle can successfully change many lives for the better. Some of the more obvious pieces include [234]:

• Safe and secure housing;
• English language proficiency and cultural understanding;
• Health literacy and educational level;
• Transportation access;
• Access to healthy, nutritious food choices;
• Public safety and interpersonal violence;
• Social support and caregiver availability.

6.6.11 Precision medicine/health (personalized health) and AI in concurrent medical conditions (“comorbidity”)

AI in health care is projected to grow 10-fold [235]. As described throughout this text, AI’s applications are almost endless, including robot-assisted surgery, virtual nursing assistants, dosage control, automated workflow administration, etc. And while these exciting technologies have enormous immediate value, they are only scratching the surface of AI’s more significant contribution to health care: intelligent diagnostics and precision medicine. AI’s machine learning process gets smarter as it learns from large volumes of high-quality data. As biometrics improves its data collection capabilities (with the IoT, pill-sized cameras, fitness bracelets, gene expression analysis, and beyond) and healthcare software and algorithms get better at organizing and analyzing data, we are due for a revolution in how diseases are identified and treated. AI will be able to compare a patient’s health against an extensive database of genetics, environment, and behavior to optimize and improve treatments for disease entities and to identify paired entities associated with chronic illness and comorbidity [236].

6.6.12 Healthcare analytics and AI in concurrent medical conditions (“comorbidity”)

Most chronic health conditions, such as diabetes, are accompanied by other complications and symptoms like hypertension, fatigue, etc. Finding associations between these conditions helps us identify comorbidity patterns. The field of data analytics can contribute immensely to health care by analyzing clinical, patient behavior, sentiment, and claims data. A study was conducted to analyze and obtain insight into comorbidity, or the coexistence of several diseases in patients, using clinical data. Most studies that address comorbidity from the data analytics perspective investigate the occurrence of symptoms/diseases among a large population of patients. This method can successfully capture the correlation among diseases, but it is highly prone to ignoring confounders or indirect correlations. This study strives to go beyond the conventional correlational perspective on lab test results and diseases by exploring the sufficient and necessary conditions of a disease based on the lab results. A novel angle was taken by inferring the association patterns using binary Markov random fields (BMRF), recreating the underlying comorbidity network of diseases through bootstrapping and a cofactor elimination technique. Using this method, it is possible to obtain a more realistic picture of comorbidity and disease correlation and to infer accurate comorbidity relationships among patients in different age, gender, and race groups [237].
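A feel for one ingredient of this approach, bootstrapping, can be had from a much simpler setting: resample the patient records many times and check how stable a pairwise co-occurrence estimate is. The Python sketch below does only that; it is not the cited BMRF method (which additionally models the full disease network and eliminates confounding cofactors), and the toy records are invented.

```python
import random

# Toy records: (has_diabetes, has_hypertension) flags for a handful of patients.
records = [(1, 1), (1, 0), (0, 0), (1, 1), (0, 1), (1, 1), (0, 0), (1, 1)]

def cooccurrence_rate(sample):
    """Fraction of patients in the sample with both conditions."""
    return sum(a and b for a, b in sample) / len(sample)

random.seed(0)
estimates = sorted(
    cooccurrence_rate([random.choice(records) for _ in records])
    for _ in range(1000)
)

print("point estimate:", cooccurrence_rate(records))
print("approx. 95% bootstrap interval:", (estimates[25], estimates[974]))
```

In the full method, bootstrapped network fits rather than single pairwise rates are compared, so that only edges stable across resamples survive into the inferred comorbidity network.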

6.6.13 Preventive health and AI in concurrent medical conditions (“comorbidity”)

The Centers for Disease Control and Prevention (CDC) reports [238] that nearly half of adults in the United States with arthritis also have at least 1 other chronic condition. While heart disease is the most common, diabetes, obesity, high cholesterol, and chronic respiratory conditions are high on the list as well. The CDC revealed that in the United States:

• Forty-nine percent of adults with heart disease also had arthritis;
• Forty-seven percent of adults with diabetes also had arthritis;
• Thirty-one percent of adults who are obese have arthritis.

There is no concrete answer regarding why it is common for people with arthritis to have comorbidities. AI analysis, as well as speculation, has pointed to both non-modifiable and modifiable risk factors associated with arthritis and comorbidities. Researchers are increasingly concerned about the rise in comorbidity among people with arthritis [239]. As the U.S. population ages, they are looking at ways to mitigate the effects of treating multiple chronic conditions. Increasing physical activity, coordinating doctor appointments and tests, and properly managing medications are just a few of the suggestions [240].

6.6.14 Public health and AI in concurrent medical conditions (“comorbidity”)

The risk factors for falls and fractures in people with schizophrenia-spectrum disorders are a complex, multifactorial public health problem, compounded by the increased frequency of arthritis in the at-risk population, as mentioned above. AI data analysis suggests that preexisting comorbid physical health conditions, particularly cardiovascular, metabolic, and osteoporosis-related conditions, are associated with future hospital admissions due to falls. An extensive evidence base has demonstrated that people with schizophrenia-spectrum disorders have considerably worse physical health compared to the general population [241]. Despite this, few studies have considered comorbid physical illnesses and falls. Many of the physical comorbidities recorded were associated with both falls and fractures, which is in line with the general falls/fracture literature. The key message is that people with schizophrenia-spectrum disorders should be screened for falls risk, particularly those who are older or who have comorbid medical conditions (particularly cardiovascular disease). Also, in line with the general population, older age, comorbid physical health disorders, and some physical health medications are predictors of falls and fractures in people with schizophrenia-spectrum disorders. Future bone health promotion interventions targeting the reduction of falls and fractures are indicated among people with schizophrenia-spectrum disorders [242].

6.6.15 Access and availability and AI in concurrent medical conditions (“comorbidity”)

Over 92% of US older adults have at least 1 chronic disease or medical condition, and of these, 77% have at least 2. Low-income and uninsured adults in particular experience a higher burden of comorbidities, and the Medicaid expansion provision of the Affordable Care Act was designed to improve access to healthcare in this population group. A study was conducted to determine the distribution of low-income and uninsured adults in expanded versus non-expanded states and to evaluate the prevalence of comorbidities in both groups. Compared with non-expanded Medicaid states, states with expanded Medicaid had a higher proportion of adults with an income of at least $50,000 per year (39.6% vs. 35.5%) and a lower proportion of individuals with no health insurance coverage (15.2% vs. 20.3%). In non-expanded states, among the uninsured, there was a higher proportion of obese individuals (31.6% vs. 26.9%) and a higher average number of comorbidities (1.62 vs. 1.52) compared with expanded states [243]. Overall, the prevalence of comorbidities was higher among participants in states that did not expand Medicaid compared with those that did. States that did not participate in the ACA Medicaid expansion program experience a higher burden of comorbidities as compared with states that participated. Differences in socioeconomic status or healthcare access did not account for this finding. Negative health behaviors in non-expanded states may widen this difference in the coming years as individuals in expanded states obtain better access to preventive and medical care. Only programs designed to improve the prevention and management of disease in non-expanded states will mitigate this adverse condition [244].

6.7 Medical/surgical robotics

6.7.1 Big data analytics and AI in medical/surgical robotics

Medical robots and AI robots (robots using AI) are rapidly growing in the medical industry as well as in people’s daily lives. Telepresence robots, such as the RP-VITA [245], are affecting hospital-based health care, being used in a broad range of medical diagnostic and treatment applications. Beyond the hospital ecosystem, wearable robotic devices such as the ReWalk [246] exoskeleton are helping paralyzed patients become mobile in their home setting. Although robotic use of big data in health care is still in its early stages, algorithms implementing HIPAA-compliant data collection through cloud technologies will advance health care robotics and robotic process automation (RPA) [247]. Under health professionals’ direction, cloud technologies will be able to allow telepresence robots to collect a patient’s health status indicators and, when necessary, provide reminders to enhance medication compliance. Cloud technologies can also enable therapeutic robots to collect statistics on pediatric patients at risk for developmental disabilities, monitor for early warning signs, and transmit suspicious information to clinicians. Surgical robots, such as the da Vinci [248], can connect to the cloud to assist surgeons in the operating room as they mine through the large sets of open MRI data associated with patients with similar medical conditions. Robots will be able to access the personal cloud data stream from a patient’s exercise band or smartphone GPS coordinates in case of an emergency.

The continued growth of robotics and their use of big data and cloud computing technologies will continue to raise security and privacy issues. The mass of data collected by robots and health professionals from multiple sources will include sensitive medical and personal information. Currently, there are few formal standards regarding security and privacy protection as robotics and RPA grow in the healthcare space. FDA regulations regarding access to HIPAA-compliant data will be a focus of concern and federal administrative actions going forward. Solutions will include blockchain technologies (see below) and protocols for patients and doctors to participate in robot interaction, or sequestration of shared information until it is anonymized. Solutions will evolve as big data opportunities in robotic care expand.

6.7.2 Health information and records (EHR) and AI in medical/surgical robotics

As AI in healthcare continues to grow, it will continue to improve business operations and processes, increasing operational efficiency and productivity while reducing expenses. In that effort, robotic process automation (RPA) will be an ever-expanding approach to accomplishing such business objectives. RPA essentially mimics human behavior for repetitive, rule-based tasks, allowing humans to focus on activities that require cerebral thinking. RPA executes routine tasks in a constant, repetitive fashion, and in a fraction of the time it takes a human. It also reduces the risk of human error synonymous with repetitive work. The robot accomplishes tasks through scripted commands and processes that have access to data sources and applications such as the EHR. Thus, the robot processes data through access to input screens, online application programming interfaces (APIs), and structured and unstructured data repositories. Certainly, RPA is not an automated solution for every EHR process, nor will it eliminate all human involvement and costs. But implemented correctly in EHR management, it is a highly effective and efficient tool, producing functional excellence that delivers cost savings and streamlines processes. An example of a typical EHR function would be pre-authorizations: RPA can gather information from websites and other disparate systems and integrate the data directly into the EHR, often even submitting the pre-authorization request itself [249].
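To make the pre-authorization example concrete, the Python sketch below outlines the shape of such a scripted RPA task: gather payer requirements, assemble a request from the patient record, and write the confirmation back into the EHR. Every function, field, and value here is hypothetical; a real bot would drive an actual payer portal and a vendor EHR interface rather than the in-memory stand-ins shown.

```python
# Hypothetical stand-ins for the systems an RPA bot would actually drive.
def fetch_payer_requirements(payer_id, procedure_code):
    # In a real bot, this step would scrape a payer portal or call its API.
    return {"requires_prior_auth": True, "forms": ["PA-101"]}

def build_request(patient_record, procedure_code, requirements):
    # Assemble the pre-authorization request from EHR data and payer requirements.
    return {
        "patient_id": patient_record["patient_id"],
        "procedure": procedure_code,
        "attachments": requirements["forms"],
    }

def submit_and_record(ehr_record, request):
    # Submit the request (simulated here) and write the confirmation back into the EHR.
    confirmation = {"auth_number": "EXAMPLE-12345", "status": "pending"}
    ehr_record.setdefault("prior_auths", []).append(confirmation)
    return confirmation

ehr_record = {"patient_id": "pt-001"}
requirements = fetch_payer_requirements("payer-9", "27447")
if requirements["requires_prior_auth"]:
    request = build_request(ehr_record, "27447", requirements)
    print(submit_and_record(ehr_record, request))
```

The value of RPA lies less in any single step than in running this scripted sequence reliably, at volume, without the transcription errors that accompany manual re-entry.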

6.7.3 Research/clinical trials and AI in medical/surgical robotics

Research in robotics is exploding as new uses for medical robots grow, especially in the field of surgery, along with novel uses of robots for diagnosis (e.g., micro-bots), exoskeleton robots for paralyzed and recovering injured patients, and more. Some examples of where research is taking the field of robotics illustrate its magnitude:

• A team of researchers at the University of California at Berkeley published research on the application of an algorithm for automated suturing performed by robots [250];
• Johns Hopkins University announced that one of its researchers was part of a team that developed a robotic surgical system called STAR, or the Smart Tissue Autonomous Robot. The system integrates 3D computer imaging and sensors to help guide the robot through the suturing process [251];
• A study presented at the 2016 World Congress on Engineering and Computer Science discussed using machine learning to evaluate surgeon performance in robot-assisted minimally invasive surgery [252];
• Researchers at the University of California, San Diego (UCSD) Advanced Robotics and Controls Lab are exploring machine learning applications to improve surgical robotics [253];
• To improve how clinical reports are processed, a team of researchers developed a clinical information extraction system called IDEAL-X. The IDEAL-X adaptive learning platform uses machine learning to understand how a user generates reports. It predicts patterns to improve the speed and efficiency of the process [254].

6.7.4 Blockchain and AI in medical/surgical robotics

Blockchain-based applications such as “smart contracts” are showing great potential to make distributed robotics operations more secure, autonomous, flexible, and even profitable.


Therefore, blockchain promises a way to bridge the gap between purely scientific domains and real-world applications. Moving beyond the classical view of distributed and decentralized AI, robotics improves our understanding of the possibilities of combining autonomous agents (either physical or virtual) with blockchain-based technologies. It also raises a plethora of questions that will have to be answered regarding the integration and interoperability of the 2 technologies [255]:

• What blockchain tools are available to increase the auditability, transparency, and reliability of AI and robotics?
• What kinds of algorithms are suitable for combining both technologies?
• Are there new models and methods to connect autonomous agents to blockchain-based innovations such as “smart contracts”?
• Is blockchain technology a suitable way to achieve emergent aggregations of autonomous and self-adaptive agents?
• Are distributed networks such as Bitcoin, Ethereum, EOS, Tezos, etc. a feasible way to integrate AI and robotics in our society?
• Are there new business models for AI and robotics based on cryptographic algorithms?

6.7.5 Internet of Things (IoT) and AI in medical/surgical robotics
Although the Internet of Things (IoT) and robots have similarities, they are quite different. IoT applications are specific; that is, they are made to deal with narrowly defined problems. They also often process data in vastly different ways than robots do: many IoT applications rely mostly on cloud computing, while robots more often process data locally [256]. Their integration can pair IoT applications that have little intelligence of their own with intelligent robots that are more autonomous and capable of dealing with unique situations as they arise. Thus, looking at the future of health care, there will be ever-increasing use cases where IoT and robotics meet [257]. From 2019 onward, a 50% increase over 2017 is projected in the use of robots in conjunction with IoT devices to carry out tasks such as the delivery of medication, food, and supplies. In other words, they will fulfill routine tasks, freeing up (human) resources [258].

6.7.6 Telehealth and AI in medical/surgical robotics
Telepresence robots (combining telehealth and robotic technology) are an innovative technology in health care. They are known by various terms such as "Skype on wheels," virtual presence robots, or remote presence robots. Using components such as cameras, speakers, microphones, and sensors, telepresence robots provide a platform for remote communications and can be controlled from smartphones and tablets. With the help of these components, people can view, hear, and interact with the robot's operator at a remote location [259].

Telepresence robots are experiencing wide adoption in health care and the medical sector worldwide. Contributing factors include advances in artificial intelligence technology, increasing smartphone usage, and the growing trend among enterprises of bringing automation into operations. These factors are fueling the market growth of telepresence robots. However, the high cost of manufacturing robots, as well as their installation and maintenance, is expected to hamper market adoption of telepresence robots in the coming years [260].

6.7.7 Chatbots and AI in medical/surgical robotics
The medical field is entering a new era of diagnosis, less invasive surgery, and reduced surgical risks. Robotics is revolutionizing health care at both the macro and micro levels. New bot technologies are undertaking diagnostic and therapeutic procedures that previously were non-existent or were done with greater risk and far less accuracy. Endoscopy (see Chapter 5, page 152) is a procedure using a small camera or surgical tool on a long wire, which is passed through an aperture, a bodily "tube," or a natural opening. Its goal is to search for abnormalities, foreign objects, or traces of disease and, when necessary, perform an intra-body surgical procedure. It is a somewhat uncomfortable and delicate procedure that is slowly being replaced by robotics. Slender, flexible robots are directed to the exact spot the doctor needs. They can be held there without the tremor of human hands while they perform anything from taking a biopsy to cauterizing a wound. An even more impressive robotic endoscopic technique is the "capsule endoscopy," in which the patient swallows a pill-sized robot that travels along a particular organ system, gathering data and taking pictures that can be sent directly to a processor for diagnostics. Newly developed medical robots (nanobots) use near-microscopic mechanical particles to localize a drug or other therapy to a specific target site within the body (see Fig. 6–2). This procedure could be used to deliver radiation to a tumor, or simply to reduce the side effects of the medication by confining it to the organ where it might be needed.

FIGURE 6–2 Nanorobotic technology. Medical robots (micro-bots, nanorobots, nanobots) use near-microscopic mechanical particles to localize a drug or other therapy to a specific target site within the body. Source: http://scitechconnect.elsevier.com

The particles get to the target in a variety of ways. New research has generated micro-bots with tiny, helical tails that can be directed by magnetic fields to spin themselves forward through blood vessels to a specific spot in the body [261]. Bioengineers at Boston Children's Hospital reported the first demonstration of a robot able to navigate autonomously inside the body. In an animal model of cardiac valve repair, the team programmed a robotic catheter to find its way along the walls of a beating, blood-filled heart to a leaky valve without a surgeon's guidance [262].

6.7.8 Natural language processing (NLP) and AI in medical/surgical robotics
Robotic process automation (RPA) tools can help healthcare companies retrieve data from digital and physical clinical records. The process of searching through a database for the correct documents and then routing them to the appropriate user can be automated, but it needs a human employee to supply login credentials so that the software can access the network or an EHR system [263]. This type of data extraction can be improved and built upon in ways for which AI alone may not be suited. Natural language processing (NLP) solutions may be useful for digitizing clinical documents and identifying them based on their data. RPA may then be able to recognize and transfer them faster and with fewer credentials. RPA software can be trained to detect metadata such as the filenames of scanned PDFs or specific ID numbers of EMR documents.

6.7.9 Expert systems and AI in medical/surgical robotics
An expert system is made up of 2 components: a knowledge base and an inference engine. The knowledge base represents facts and rules, while the inference engine applies the rules to the known facts to derive new facts. Thus, the system "learns" and expands its knowledge base so that the next time, the same problem is "easier" to solve (see Expert Systems, Chapter 3, page 53). Robotic process automation (RPA) is a combination of 2 different fields of AI. In health care, it is the combination of machine learning and expert systems operating together to produce automated medical procedural or diagnostic results. The process can be applied to routine medical or surgical care; anything of a repetitive nature that can be automated is eligible for the RPA expert system application. The output from that simple expert system is piped into automated testing tool technology that then performs a medical procedure transaction instead of a test transaction. The result is a real medical procedure transaction that requires no further user input. This is why the term "robotic" is used, even though robotics is not involved [264].
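As a concrete, if toy, illustration of the knowledge base/inference engine split described above, the following Python sketch forward-chains over a handful of invented facts and rules; it is not drawn from any clinical expert system, and the "rules" are placeholders rather than medical guidance.

# Minimal forward-chaining inference sketch: a knowledge base of facts and
# if-then rules, and an engine that derives new facts until nothing changes.
facts = {"procedure_is_repetitive", "procedure_is_scripted"}

rules = [
    ({"procedure_is_repetitive", "procedure_is_scripted"}, "candidate_for_automation"),
    ({"candidate_for_automation", "validated_by_clinician"}, "schedule_automated_run"),
]

def infer(known: set, rule_base: list) -> set:
    derived = set(known)
    changed = True
    while changed:                      # keep applying rules until a fixed point
        changed = False
        for conditions, conclusion in rule_base:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer(facts, rules))
# The second rule does not fire because "validated_by_clinician" is absent,
# which is exactly the behavior an inference engine should exhibit.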

6.7.10 Precision medicine/health (personalized health) and AI in medical/surgical robotics
When we think about robots, we think about them doing physical work. But that is not the only way for robots to help people. That is where socially assistive robotics comes in, a form of precision health that helps people in social, not physical, ways. This is the work of Maja Matarić, Ph.D., director of the University of Southern California's Robotics and Autonomous Systems Center [265]. The center was inspired by the need for accessible and useful care for large populations. "When people are engaging in difficult health-changing behaviors, they need companionship and encouragement, and that support is most effective when it comes from a physical agent, such as another person, pet, or a robot," Matarić explains. "We've used the humanoid robot Bandit, which has arms, to show stroke patients how to do rehabilitation exercises [266], elderly users how to do chair aerobics, and children with autism how to imitate movements [267]. In other cases, we needed the robot to be encouraging and fun, but smaller and inherently safer, so we used the owl-like robot, Kiwi, in the homes of children with autism [268]."

6.7.11 Healthcare analytics and AI in medical/surgical robotics
AI plays a significant role in health care data, providing new and improved analytics. AI analytics are useful in the detection, diagnosis, and treatment of many diseases, and in health care they help provide more targeted care for patients fighting medical conditions [269]. Different types of AI technologies improve patient outcomes when used in health care. Robotics is being used for repetitive tasks such as analyzing the results of X-rays, CT scans, and the like. AI robot prototypes are performing health care tasks; they enable humanoid robots to figure out what is being said, and AI allows the humanoid robot to respond [270]. This allows for the use of AI humanoid robots in various health care markets. Health care facilities using AI cognitive technology draw on health data, which helps produce targeted, more personalized treatment [271]. Health care clinicians are using Google's DeepMind Health, which can assess and solve real-world health care problems. It is a type of AI technology that combines learning and neuroscience to create algorithms that mimic the human brain [272].

6.7.12 Preventive health and AI in medical/surgical robotics
A vast number of cyber-physical systems (CPS) are carefully combined through the Internet of Things (IoT), intelligent sensing, and robotics to create digitized healthcare services and enterprises. Health engineering will lead to a revolutionized healthcare system that enables the participation of all people in the early prediction and prevention of diseases. But significant challenges can be foreseen in this emerging interdisciplinary field, including the acceptance of healthcare robotics applications in clinical practice. Research in the convergence of automation technology, artificial intelligence, biomedical engineering, and health informatics is essential to their successful application in health care.

Multiple scenarios of health engineering in primary care, preventive care, predictive technologies, wearable technologies, hospitalization, home care, and occupational health will be needed to determine the future of digital technologies like AI in health care [273].

6.7.13 Public health and AI in medical/surgical robotics
Robots have an excellent ability to see patterns and make predictions from large data sets that would simply overwhelm humans. Because of this ability, epidemiology is a natural and logical target for AI robots and robotic process automation (RPA). These AI systems analyze data on disease outbreaks from doctors in the field and utilize machine learning to cross-reference that data with all available medical databases. From this analysis, the robot can predict when and where an outbreak is happening, as well as how to keep it from spreading. One such system is AIME [274], which has been deployed against outbreaks of dengue fever in Malaysia. It provided a nearly 85% accurate prediction rate, saving thousands of lives and potentially millions of dollars. Similar systems are probably being employed in the COVID-19 pandemic but have not been reported to date.

6.7.14 Access and availability and AI in medical/surgical robotics
The University of Pittsburgh School of Medicine and Carnegie Mellon University (CMU) have each been awarded 4-year contracts totaling more than $7.2 million from the U.S. Department of Defense. The goal is to create an autonomous trauma care system that fits in a backpack and can treat and stabilize soldiers injured in remote locations. The goal of TRAuma Care In a Rucksack (TRACIR) [275] is to develop AI technologies for autonomous trauma medicine. A multidisciplinary team of Pitt researchers and clinicians from emergency medicine, surgery, critical care, and pulmonary fields will provide real-world trauma data and medical algorithms that CMU roboticists and computer scientists will incorporate into the creation of a hard and soft robotic suit into which an injured person can be placed. Monitors embedded in the suit will assess the injury, and AI algorithms will guide the appropriate critical care interventions and robotically apply stabilizing treatments, such as intravenous fluids and medications. "TRACIR could be deployed by drone to hikers or mountain climbers injured in the wilderness; people in submarines or boats could use it; it could give trauma care capabilities to rural health clinics or be used by aid workers responding to natural disasters," the researchers reported. "And, someday, it could even be used by astronauts on Mars" [275].

6.8 Stem cells and regenerative medicine
Among the categories of medical therapies and services discussed in this Chapter, perhaps the ones with the greatest potential for enhancing health and wellness in the coming years are these last 2: stem cells and regenerative medicine, and genetic (and immunogenomic) therapies. Both areas, in conjunction with AI, are likely to introduce "disruptive changes" to the future of medical care and, indeed, preventive health care more so than any other technologies.

As will be presented here and in Chapter 7, these technologies are already changing our approach to health care from a process of disease diagnosis and treatment to methods for the identification of disease risks and their correction and prevention. In this Chapter (6), we will present the basic bioscience of stem cells (genetics and genomics bioscience having been presented in Chapter 5) and how applicable AI categories are assisting in the development of both sciences. Then, in Chapter 7 ("AI applications in prevalent disease categories"), we will discuss how each of these technologies is treating human disorders; "preventing" disease ("the Holy Grail" of health care); and how AI is assisting both.

6.8.1 The basic bioscience of stem cells and regenerative medicine [276]
Stem cells are cells within the body originating during embryologic development (from totipotent to pluripotent embryonic stem cells). During early life and growth, these undifferentiated embryonic stem cells have the potential to develop into many different types of adult (somatic) stem cells found in the body's organs and tissues. They also differentiate into red blood cells (erythrocytes), platelets, and white blood cells (leukocytes or WBCs), including neutrophils, basophils, eosinophils, macrophages, and monocytes, as well as WBCs associated with the immune system, including lymphocytes (T-cells, B-cells, natural killer cells) and plasma cells (Fig. 6–3). The adult stem cells serve as a repair system for the body. In some organs, such as the gut and bone marrow, they regularly divide to repair and replace worn-out or damaged tissues. In other organs, however, such as the pancreas and the heart, stem cells only divide under special conditions. Given their unique regenerative abilities, adult stem cells offer new potential for treating conditions such as immune disorders, cancers, diabetes, and heart disease. When these cells are used in cell-based therapies to treat disease (Chapter 7), this is referred to as regenerative or reparative medicine. The more versatile human embryonic (pluripotent or PSC) stem cells can be harvested from embryos created for reproductive purposes through in vitro fertilization. This method has met with some ethical and political resistance. However, in 2006 researchers made a (Nobel Prize-winning) breakthrough by identifying conditions that would allow specialized adult cells to be "reprogrammed" genetically to assume a stem cell-like state. This new type of stem cell is called an induced pluripotent stem cell (iPSC) and functions similarly to a natural pluripotent stem cell, with the ability to become any cell type of the body [277]. The clinical value of stem cells lies in the differentiation of embryonic (pluripotent) stem cells into differentiated adult stem cells. Whereas this process is essential in the repair and regeneration of normal healthy tissue in the body, it also plays a more sinister role when cells differentiate into disease-oriented progenitors. Cancers, diabetes, congenital disabilities, and many other diseases and human disorders are generated through genetic and molecular processes producing differentiation of embryonic and adult stem cells from normal to abnormal. While understanding their role in the production of abnormal conditions, stem cells also have a number of positive uses in testing the effectiveness and safety of new medications, including anti-tumor therapies and anti-infectives, and in the analysis of a broad range of drugs on different cell types.

FIGURE 6–3 Stages of human (and stem cells) development. Stem cells have the potential to develop into many different cell types during early (pluripotent) life and growth. Courtesy of Maharaj Institute of Immune Regenerative Medicine.

Scientists must be able to precisely control the differentiation of stem cells into the specific cell type on which drugs can be tested. But perhaps the most important potential application of human stem cells is the generation of cells and tissues that could be used for cell-based therapies (regenerative medicine or "stem-cell transplantation," discussed in Chapter 7, page 302). Stem cells, directed to differentiate into specific cell types, offer the possibility of a renewable source of replacement cells and tissues to treat diseases including macular degeneration, spinal cord injury, stroke, burns, heart disease, diabetes, osteoarthritis, and rheumatoid arthritis. These will all be covered in Chapter 7 under "Immunology and Autoimmune Disorders."

6.8.2 Big data analytics and AI in stem cells and regenerative medicine
A detailed molecular understanding of normal and disease development will facilitate the identification of new drug and cell therapy targets for disease treatment. Human pluripotent stem cells can provide a significant in vitro source of human cell types and, in a growing number of instances, also 3-dimensional multicellular tissues called organoids.

Stem cell technology to discover and develop new therapies will be aided by detailed molecular characterization of cell identity, cell signaling pathways, and target gene networks [278]. Big data or 'omics' techniques, particularly transcriptomics and proteomics, facilitate cell and tissue characterization using thousands to tens of thousands of genes or proteins. These gene and protein profiles are analyzed using existing and/or emergent bioinformatics methods, including a growing number of methods that compare sample profiles against compendia of reference samples. Bioinformatic methods that generate comprehensive and integrated combinations of signaling pathways and gene regulatory networks are starting to provide specific molecular disease hypotheses that can be investigated using human PSC-derived cell types. Thus, compendium-based big data approaches to stem cell research present significant opportunities for the development of novel cell and drug therapies [279].
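A minimal sketch of the compendium-style comparison mentioned above might look like the following Python fragment, which correlates a sample's expression profile against invented reference profiles; the genes, values, and cell-type labels are illustrative only and stand in for the thousands of features a real transcriptomic comparison would use.

# Sketch of a compendium-style comparison: correlate a new expression profile
# against reference profiles and report the closest match. Values are made up.
import numpy as np

genes = ["SOX2", "PAX6", "NKX2-5", "ALB"]          # illustrative marker genes
reference = {                                       # hypothetical compendium
    "neural_progenitor": np.array([9.1, 8.7, 0.5, 0.2]),
    "cardiomyocyte":     np.array([0.4, 0.3, 8.9, 0.6]),
    "hepatocyte":        np.array([0.2, 0.1, 0.4, 9.3]),
}
sample = np.array([8.5, 8.9, 0.7, 0.3])            # profile of a PSC-derived sample

def best_match(profile, compendium):
    # Pearson correlation against each reference profile; highest wins.
    scores = {name: float(np.corrcoef(profile, ref)[0, 1])
              for name, ref in compendium.items()}
    return max(scores, key=scores.get), scores

label, scores = best_match(sample, reference)
print(label, scores)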

6.8.3 Research/clinical trials and AI in stem cells and regenerative medicine
The plasticity and pluripotent nature of bone marrow-derived cells have presented a dilemma in the scientific community for decades. Indeed, the true identity of the stem cells responsible for these properties remains to be delineated [280]. VSELs (very small embryonic-like stem cells) could be the ultimate stem cell with this plasticity [281], but more in vivo evidence needs to be generated to support this conclusion. There has been discussion and concerted effort by the scientific community, but more preclinical studies are required to demonstrate the regenerative potential of VSELs. The question remains whether these stem cells should be expanded in vitro and then transplanted for therapeutic purposes, or whether AI strategies could evolve to manipulate these stem cells in vivo to bring about endogenous regeneration [282]. The second alternative seems promising, but currently more research needs to continue in both directions. A better understanding of VSELs will help clarify normal tissue stem cell biology and allow AI and machine learning to offer new insights about their alteration with age and the development of various diseases, including cancers [283].

6.8.4 Blockchain and AI in stem cells and regenerative medicine
GIOSTAR Labs, a company dedicated to stem cell-based technologies, is a subsidiary of a company called Giostar [284] that recently launched the non-profit Stem Cells for All initiative. They have joined forces with the Association of Professional Ball Players of America (APBPA) in this innovative program. The intent is to provide stem cell treatment to athletes, families, and fans at deeply discounted prices. GIOSTAR will be one of the first to apply blockchain in a new way to ensure a successful outcome. Many baseball players endure injuries that not only require extensive surgery that is demanding on the body, but also cause severe financial stress and keep them from playing for long periods, if not end their careers. This pilot initiative will provide members of the APBPA free or deeply discounted therapies through GoldSTAR, which will also offer stem cell services to the underserved.

GIOSTAR Co-Founder Siddarth Bhavsar explains, "Federated permissioned blockchain based on HIPAA compatible nodes allows us to build trust via data integrity and permission collaboration from the ground up." He continues, "Personalized precision medicine like stem cells entails an additional level of data complexity. Thus, an AI and NLP algorithms layer will bring greater structure and extract meaning from the data to improve accuracy" [285].

6.8.5 Internet of Things (IoT) and AI in stem cells and regenerative medicine
Interesting studies have been conducted that correlate data acquired from IoT activity trackers with quality of life (QOL). One such study prospectively investigated patients with advanced lung cancer who were given Fitbit activity trackers to measure daily step counts and who completed multiple validated QOL questionnaires [286]. It was demonstrated that higher physical activity levels (as measured by daily step count) positively correlated with physical functioning, role functioning, emotional functioning, and global QOL in patients undergoing stem cell transplantation. Step counts were positively correlated with the National Cancer Institute's Patient-Reported Outcomes Common Terminology Criteria for Adverse Events and multiple QOL items from PROMIS (Patient-Reported Outcomes Measurement Information System) [287].

6.8.6 3-D bioprinting and AI in stem cells and regenerative medicine
Innovation produces developments in seemingly disparate areas that can sometimes (and often do) intersect. For instance, 3D printing (mentioned in multiple "Top Listings" in previous chapters) of body parts is currently being reviewed by the U.S. Food and Drug Administration (FDA), which historically has not concerned itself with customized (i.e., individualized) therapeutic interventions [288]. 3D bioprinting is, in many ways, conceptually similar to 3D printing generally. Both entail a process whereby the addition of successive layers of material, one on top of the other as a print-head moves back and forth, builds up the finished object, and both are based on a digital model of the part to be reproduced. Building 3-dimensional body parts entails the use of different materials, which may include a person's stem cells. Already, some medical applications allow for the creation of "customized prosthetics, implants, and anatomical models, tissue and organ fabrication" [289]. On the horizon, we may see the ability to create fully functioning organs.

6.8.7 Chatbots and AI in stem cells and regenerative medicine
Biobots modeled after sperm cells can now swim, which means they could one day seed stem cells to deliver drugs, perform minimally invasive surgery, and target cancer.

The biobots are modeled after nature, in particular sperm cells, and are propelled by muscles and nerves derived from rats [290]. Research teams led by Taher Saif and Rashid Bashir worked together "to develop the first self-propelled biohybrid swimming and walking biobots powered by beating cardiac muscle cells derived from rats" [291]. "Our first swimmer study successfully demonstrated that the bots, modeled after sperm cells, could, in fact, swim," Saif said. A future version of these sperm-inspired biobots could one day be swimming through your body to seed stem cells, target illnesses, or administer drugs internally. "The long-term vision is simple. Could we make elementary structures and seed them with stem cells that would differentiate into smart structures to deliver drugs, perform minimally invasive surgery, or target cancer?" Saif said. "Wounds are living environments, and the conditions change quickly as cells and tissues communicate and attempt to repair. An ideal treatment would sense, process, and respond to these changes in the wound state and intervene to correct and speed recovery," said BETR program manager Paul Sheehan, in a statement [290].

6.8.8 Natural language processing (NLP) and AI in stem cells and regenerative medicine
Data on cancer stem cell surface molecular markers from 27 of the most common cancer diseases were analyzed using natural language processing and data mining techniques. The source used for the search was 8933 full-text, open-access, English-language scientific articles available on the Internet. Text mining was based on searching for 3 entities within 1 sentence, namely a tumor name, the phrase "cancer stem cells" or its synonym, and the name of a cluster of differentiation (CD) molecule. As a result, a list of molecular surface markers was formed that included the markers most frequently mentioned in the context of certain tumor diseases. This study illustrates the interoperability of AI and machine learning, through data mining and NLP, in conducting research studies previously not obtainable through standard research protocols [292].
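The sentence-level co-occurrence search described in this study can be pictured with a toy Python sketch like the one below; the sentences, tumor list, and marker pattern are invented for illustration and are far simpler than the pipeline used in the actual study.

# Toy version of the co-occurrence search: flag sentences that mention a tumor
# type, the phrase "cancer stem cells" (or "CSC"), and a CD marker. Text is invented.
import re
from collections import Counter

tumors = {"glioblastoma", "breast cancer", "colorectal cancer"}
text = ("CD133 is enriched in glioblastoma cancer stem cells. "
        "Breast cancer CSC populations frequently express CD44. "
        "CD19 is a routine B-cell marker unrelated to this topic.")

hits = Counter()
for sentence in re.split(r"(?<=[.!?])\s+", text):
    low = sentence.lower()
    marker = re.search(r"\bCD\d{1,3}\b", sentence)          # cluster of differentiation
    tumor = next((t for t in tumors if t in low), None)
    if marker and tumor and ("cancer stem cell" in low or "csc" in low.split()):
        hits[(tumor, marker.group())] += 1

print(hits)   # counts tumor/marker pairs co-occurring with the CSC phrase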

6.8.9 Expert systems and AI in stem cells and regenerative medicine
Microarray data is used extensively in cell cultures because it provides a more comprehensive understanding of genetic variants among diseases. Gene expression samples have high dimensionality, so analyzing the samples manually is time-consuming; thus, an automated system is needed to analyze them. The fuzzy expert system (see Chapter 3, page 53) is superior to machine learning and statistical analyses in this scenario because it offers a precise classification. Knowledge acquisition is a significant concern in the expert system's fuzzy classification, and despite several existing approaches for knowledge acquisition, a great deal of effort is necessary to enhance the learning process. An innovative Hybrid Stem Cell (HSC) algorithm was studied that utilizes Ant Colony Optimization and a stem cell algorithm to design a fuzzy classification system that extracts the informative rules forming the membership functions from the microarray dataset.

The Hybrid Stem Cell (HSC) algorithm used a novel Adaptive Stem Cell Optimization (ASCO) to improve the membership function points, together with Ant Colony Optimization, and produced a near-optimal rule set. To extract the most informative genes from the large microarray dataset, a method called Mutual Information was used. The performance results of the technique on the 5 microarray datasets showed that the proposed Hybrid Stem Cell (HSC) algorithm produces a more precise fuzzy system than existing methodologies [293].
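To give a flavor of the fuzzy-classification idea only (not the HSC/ACO optimization itself), the short Python sketch below applies hand-written triangular membership functions and a two-rule base to a single, invented expression value; a real system would learn the membership points and rules from data, as the study describes.

# Very small illustration of fuzzy membership and rules; numbers are invented.
def triangular(x, a, b, c):
    """Membership rises from a to a peak at b, then falls to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify(expression_value):
    low  = triangular(expression_value, -0.1, 0.0, 0.5)
    high = triangular(expression_value,  0.5, 1.0, 1.1)
    # Rule base: LOW expression -> "normal-like"; HIGH expression -> "tumor-like"
    scores = {"normal-like": low, "tumor-like": high}
    return max(scores, key=scores.get), scores

print(classify(0.82))   # the "tumor-like" rule dominates for this value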

6.8.10 Robotics and AI in stem cells and regenerative medicine
A team of researchers developed a microrobot that can deliver therapeutic stem cells precisely to very specific parts of the brain. Their work demonstrates that neural stem cells can be cultured and differentiated on their robot and that the device can travel from the carotid artery into various parts of the brain. This development may one day provide an approach for treating several brain-related disorders [294]. The device is fabricated using 3D laser lithography and is designed to have a helical shape to travel through the body more easily. It can be manipulated by a magnet, allowing the researchers to move it through the body non-invasively using an external magnetic field. The device is also porous, which helps the attachment and proliferation of the stem cells. Professor Hongsoo Choi, who was involved in the research, said, "Through this research, we hope to increase the treatment efficiency and success rate for Alzheimer's and central neural diseases, which couldn't be approached through the existing method of stem cell treatment. Through continuous follow-up research with hospitals and related companies, we will do our best to develop a microrobot-based precise treatment system that can be used in actual hospital and clinical sites" [295].

6.8.11 Precision medicine/health (personalized health) and AI in stem cells and regenerative medicine
Recent technical advances now allow the direct manipulation of DNA sequences in cells (DNA editing - see Chapter 7, CRISPR, page 303), guided by machine learning. Mutations can be corrected, and genetic variations of interest can be introduced. Using stem cells with this molecular biology technique enables a discovery process wherein genetic analysis can identify a disease; stem cells can then be created from the patient to correct their mutation. This is the essence of precision medicine. Cells can be differentiated into cell types relevant to a disease to assess the inferences genetic analysis could provide. Subsequent study of these cells will identify genes that can modify mutations and reduce or eliminate the disease caused by the mutant gene. This could also lead to the identification of new drug targets. To realize this level of precision medicine, research scientists are collaborating to conduct relevant genetic analyses and downstream stem cell-based vetting of these genes for disease mechanism and molecular interventions.

Such a collaborative effort has been established as "The Stem Cell Core Lab" of Columbia University [296].

6.8.12 Healthcare analytics and AI in stem cells and regenerative medicine
The promises of stem cell and regenerative medicine to generate precise disease models and designer cell samples for personalized therapeutics will require harnessing pheno-genotypic cell-level data quantitatively and predictively in the lab and clinic. Those requirements could be met by developing a Big-Data-driven stem cell bioscience strategy and community [297]. Stem cell research has increasingly been making use of biological Big Data analytics and integration strategies to better define cell types or states and the differences among them. Recently, more sophisticated mathematical and computational methods, including machine learning methods, have begun to be used to address questions about cellular state and identity in a more specific, quantitative, data-driven, and predictive manner. What is needed is a clearer definition of quantitative, Big-Data-driven stem cell bioscience and the joint development of a strong computational stem cell biology oriented toward clinical and industrial requirements. This will be pivotal in helping make the stem cell field quantitative, predictive, and therapeutically suitable, and in helping make the promise of stem cell therapeutics a reality [298].

6.8.13 Preventive health and AI in stem cells and regenerative medicine
In November 1998, the world was introduced to human embryonic stem cells, the blank-slate cells that arise at the earliest stages of development and go on to become any of the scores of cell types that make up a human. With a capacity to replicate endlessly under the right laboratory conditions, the prospect of an inexhaustible supply of replacement cells for ailments such as Parkinson's, diabetes, heart disease, spinal cord injury, and a host of other dire conditions catapulted the cells into the biomedical spotlight, the public imagination, and the hope of preventive healthcare. Today, proven therapies based on AI analysis identifying and swapping out diseased cells for healthy lab-grown cells remain a clinical aspiration. But a growing number of clinical trials, the widespread use of the cells in industry, and a swelling list of primary findings attributable to the "Swiss Army knife" of cells are contributing to a measured, steady realization of the promise that came with the first lab-grown cells 2 decades ago. Less publicized is the use of stem cells to develop high-throughput drug screens. Nearly every major pharmaceutical company now has an AI-based stem cell program where the cells are used to assess promising new drug candidates for safety and efficacy. Also in development is the growing scientific contribution of stem cells, both embryonic and induced pluripotent cells. Induced pluripotent cells are adult cells, such as skin cells, that have been genetically reprogrammed to mimic the qualities of embryonic stem cells [299].

6.8.14 Public health and AI in stem cells and regenerative medicine
As with any medical therapy, precautions against misuse or untoward events must always be weighed, especially with new, innovative therapies like stem cells and regenerative medicine. Transplanted human stem cells are dynamic biological entities that interact intimately with, and are influenced by, the physiology of the recipient. The capabilities to self-renew and differentiate that are inherent to human stem cells also point to their perceived therapeutic potential and to the challenge of assessing their safety. Assessing human stem cell safety requires the implementation of a comprehensive strategy. If you are considering stem cell treatments, you must check to make sure the product you are considering is on the FDA's approved list of stem cell treatments [300]. Whether human stem cells are of embryonic, fetal, or adult origin, donor sources must be carefully screened. Routine AI testing is done on large numbers of infectious agents (bacteria, viruses, etc.) to guard against the inadvertent transmission of infectious diseases [301]. To ensure the integrity, uniformity, and reliability of human stem cell preparations intended for clinical use, it is essential to demonstrate that rigorously controlled, standardized practices and procedures are being followed in establishing and maintaining human stem cell lines in culture. Healthcare providers should discuss testing with their patients on a case-by-case basis [302].

6.8.15 Access and availability and AI in stem cells and regenerative medicine
Even considering the relatively small number of "stem cell clinics," there are considerable variations among their offerings. About 25% focus exclusively on stem cells and regenerative medicine, while many others are orthopedic and sports medicine clinics that have added stem cells and regenerative medicine to their services [303]. There are also differences in the degree of expertise of providers at stem cell clinics. For example, whereas specialists in orthopedics and sports medicine are more likely to restrict stem cell treatments to conditions related to their medical specialties, other providers listing specialties in cosmetic or alternative medicine are more likely to treat a much more extensive range of conditions with stem cells. Just because someone is board certified doesn't necessarily mean they are qualified to provide stem cell treatments; you need to ask what they are board-certified in and whether their medical expertise is well-matched to the condition you are seeking treatment for. More recently, the FDA has begun to tighten its guidelines and restrict the practices of these clinics [303].

6.9 Genetics and genomics therapies
Along with stem cells and regenerative medicine, genetics and genomics are perhaps the most profound and "disruptive" therapeutic health care technologies influenced by AI.

Although they are intimately related (including immunology), the distinction between these biosciences is essential to highlight. All 3 biosciences are enormous in 2 regards. First, the current and future research and applications of therapies in these 3 areas have the most significant potential to dramatically affect virtually every category of health and wellness. Second, relative to the immense sciences of stem cells, genetics, genomics, molecular biology, and immunology, the volume of clinical material and information to be addressed is overwhelming. As regards the first consideration, the potential for genetic therapies, since the mapping of the human genome was completed in 2003 (The Human Genome Study [304]), the research, results, and applications of the genetic information produced have had a profound, albeit nascent, influence on medical diagnosis, therapeutics, and prevention. These applications of genetic therapies will be discussed in relation to AI categories in the balance of this Chapter (6). Then, in Chapter 7, we will cover specific genetic-related treatments and preventive applications in prevalent disease categories. The second consideration, ". . .the volume of clinical material to be addressed. . .in the sciences of stem cells, genetics, genomics, molecular biology, and immunology," may prove to be the most compelling reason why AI has had such a profound influence and thus has changed (i.e., "disrupted") the future of health care. Having reviewed (in Chapter 5) "The Bioscience" and the diagnostic applications genetics and genomics have in health care and the biosciences, let's now discuss their enormous role in medical therapies and services. As we discussed back in Chapter 3, "machine learning allows the use of immeasurable amounts of new data whose volume or complexity would previously have made analyzing them unimaginable" [305]. That statement certainly describes the challenge of "bioinformatics," the science of collecting and analyzing complex biological data such as genetic codes [306]. Now, as we get into the AI categories influencing genetics and genomics (here and in Chapter 7), you will begin to see the profound effects of bioinformatics, the science derived from "data science" and "big data analytics."

6.9.1 Big data analytics and AI in genetics and genomics
Throughout Chapters 3–5, we have presented the many applications of data analytics and AI in health care. But its use in genetics and genomics may be considered its most profound contribution to current and future health care. Advancements in AI technologies are enabling the scientific community to create, store, and analyze enormous volumes of data in hours, work that would have taken years to accomplish only a short time ago [307]. Using machine learning (ML), data-analytical techniques are applied to multi-dimensional datasets, allowing for the construction of predictive models and insights gained from the data [308]. ML helps in studying and understanding complex cellular systems, such as genome or gene editing, allowing for the creation of models that then learn from big data sets to generate predictable outcomes.

In our discussion in Chapter 4 about Big Data Analytics (page 83), there was an explanation and a table (Table 3–7) of the "Vs of Big Data" [309]. Also referred to as the "V Framework" [310], it has been used to evaluate the current state of data in genomics relative to other applications in the data sciences [311]. Examples of the evaluation include:
• Volume: One of the key aspects of genomics as a data science is the sheer amount of data being generated by sequencers. The data growth trend in genomics, however, is greater than in other disciplines. Some researchers have suggested that if the genomics data generation growth trend remains constant, genomics will soon generate more data than applications such as social media, earth sciences, and astronomy [312].
• Velocity: There are 2 widely accepted interpretations of data velocity: (i) the speed of data generation; and (ii) the speed at which data are processed and made available. The sequencing of a human genome now takes less than 24 hours, down from 2 to 8 weeks at the close of the Human Genome Study in 2003. Regarding the speed of data processing, in applications of sequencing technologies such as rapid diagnosis, epidemiology, and microbiome research, sequencing nucleic acids for fast, dynamic tracking of diseases and pathogens is now the preferred approach, with turnaround in as little as minutes to hours [313].
• Variety: Genomics data have a two-sided aspect. On one side is the sequencing data, ordered lists of nucleotides. In human genomics, these are mapped to the genome and are used to generate coverage or variation data. The other side of genomics data is the complex phenotypic data with which the nucleotides are being correlated. Phenotypic data can consist of such diverse entities as unstructured and straightforward text descriptions from electronic health records, quantitative metrics from laboratories, sensors, and electronic trackers, and imaging data.
In the past decade, there has been a fantastic change in the efficiency of DNA sequencing. Using traditional Sanger sequencing [314], the human genome project took 20 years and cost $3 billion. Current next-generation sequencing (NGS) methods now allow a human genome to be sequenced for $1000 in 24 hours [315]. Science and medicine are changing in ways never imagined as DNA sequencing generates volumes of data faster and at a lower cost (remember Moore's Law from Section 1?), all thanks to AI. Next-generation sequencing (NGS), also called massively parallel sequencing, is a high-throughput method used to determine a portion of the nucleotide sequence of an individual's genome. The technique utilizes DNA sequencing technologies that are capable of processing multiple DNA sequences in parallel [316], exponentially increasing the rate at which biological data are generated. Whereas the first human genome was a $3 billion project requiring over a decade to complete, as mentioned above, we are now close to being able to sequence and analyze an entire genome in a few hours for less than a thousand dollars. However, with this increase in computing and internet speed, what is needed is an infrastructure to generate, securely maintain, transfer, and analyze large-scale information in the life sciences. Such an infrastructure must also integrate omics data with other data sets, such as clinical data from patients (mainly from EHRs).
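A quick back-of-the-envelope calculation shows why NGS output grows so fast. The Python sketch below uses rounded, commonly quoted figures (an approximately 3.1-gigabase genome, 30x coverage, 150-base reads); exact values vary by platform, protocol, and file format, so treat the result as an order-of-magnitude illustration only.

# Back-of-the-envelope NGS arithmetic with rounded, illustrative numbers.
GENOME_SIZE_BP = 3.1e9      # approximate haploid human genome length
COVERAGE = 30               # a typical whole-genome sequencing depth
READ_LENGTH_BP = 150        # a common short-read length

total_bases = GENOME_SIZE_BP * COVERAGE
reads_needed = total_bases / READ_LENGTH_BP
# 2 bits can encode one of the 4 nucleotides; real FASTQ files are far larger
# because they also carry per-base quality scores and read identifiers.
min_storage_gb = total_bases * 2 / 8 / 1e9

print(f"{total_bases:.2e} bases, {reads_needed:.2e} reads, "
      f">= {min_storage_gb:.1f} GB before quality data")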

6.9.2 Health information and records (EHR) and AI in genetics and genomics therapies
Electronic health records contain patient-level data collected during and for clinical care. Data within the electronic health record include diagnostic billing codes, procedure codes, vital signs, laboratory test results, clinical imaging, and physician notes. With repeated clinic visits, these data are longitudinal, providing valuable information on disease development, progression, and response to treatment or intervention strategies. The nearly universal adoption of EHRs nationally has the potential to provide population-scale, real-world clinical data accessible for biomedical research, including genetic association studies [316]. For this research potential to be realized, high-quality, research-grade variables must be extracted from these clinical data warehouses. Electronic or computable phenotyping methods and approaches, along with associated evaluation metrics, are now emerging. More sophisticated strategies are being developed, helping ensure that the full potential of the EHR for precision medicine research, including genomic discovery, will be achieved as many envision [317]. A study applying machine learning tools to EHR data was conducted by researchers at the University of Wisconsin-Madison (UW-Madison) and the Marshfield Clinic. Through their EHR methodology, they identified genetic markers related to conditions such as depression, anxiety, mood disorders, sleep apnea, and a host of other conditions. In follow-up studies, EHRs were used to set up a double-blind methodology, where both clinicians and patients were blind to the genotype. This enabled researchers to assess whether premutation carriers differed in their patterns of clinical diagnoses from those who don't have the premutation [318]. These and numerous other studies reported in the literature demonstrate the valuable relationship and resource the EHR is providing in genetic research. The continued identification of genotype and phenotype correlations in patient populations will lead to significant advances in population health and precision medicine.
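A computable phenotype can be as simple as a few rules over billing codes and laboratory values. The Python sketch below is a deliberately simplified, hypothetical example of case/control assignment for a genetic association study; the thresholds and codes are illustrative and do not represent a validated phenotyping algorithm.

# Simplified sketch of a rule-based computable phenotype: decide case/control
# status from ICD-10 codes and HbA1c results. Illustrative rules only.
def t2d_status(icd10_codes, hba1c_values):
    dm_codes = [c for c in icd10_codes if c.startswith("E11")]   # type 2 diabetes codes
    high_a1c = any(v >= 6.5 for v in hba1c_values)
    if len(dm_codes) >= 2 or (len(dm_codes) >= 1 and high_a1c):
        return "case"
    if not dm_codes and hba1c_values and all(v < 5.7 for v in hba1c_values):
        return "control"
    return "exclude"   # ambiguous records are left out of the study

print(t2d_status(["E11.9", "I10", "E11.65"], [7.2]))   # case
print(t2d_status(["I10"], [5.4]))                      # control
print(t2d_status(["E11.9"], []))                       # exclude

Real electronic phenotypes add medication lists, note-derived evidence, and temporal logic, but the structure, explicit and auditable rules over EHR fields, is the same.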

6.9.3 Research/clinical trials and AI in genetics and genomics
The most fertile areas of genetics and genomics lie in health care research. Thanks to advances in ML, coupled with big data analytics, there has been a virtual explosion of activity in cross-disciplinary applications of genomics, medicine, and AI. The integration of machine learning into the clinical workflow would help remove gaps in the data available to healthcare professionals and allow the integration of other datasets (such as genetic information), a vital step in enhancing medical data for a better understanding of patient treatment and care [319]. A strong example of AI application in research has been demonstrated at Rady Children's Institute for Genomic Medicine (RCIGM). They have utilized a machine-learning process and clinical natural language processing (CNLP) to diagnose rare genetic diseases in record time. This new method is speeding answers to physicians caring for infants in intensive care and opening the door to increased use of genome sequencing as a first-line diagnostic test for babies with cryptic conditions [320].

With the increasing emphasis and intensity of genomic research in areas such as genome sequencing, genetic editing (e.g., CRISPR - see Chapter 7, page 303), deep genomic interpretation of genetic variations, pharmacogenomics, and newborn genetic screening tools, the future of genetic research and its health applications is endless.

6.9.4 Blockchain and AI in genetics and genomics
The cost of genome sequencing has already fallen below $1000 per person [321]; soon it will be down to $100. This falling cost and the resultant increasing utilization will provide researchers and health professionals with ever-increasing data for genomic sequencing and EHR analysis. This will lead to better disease assessments and precision outcome analysis. But there are also obstacles to overcome. First, bioscience needs vast genomic and other healthcare datasets to gather meaningful and potentially transformational information. Second, for the promise of precision medicine to be achieved, data must be sharable and interoperable across technological, geographic, jurisdictional, and professional boundaries. Also, there must be an integration of data that allows patients, health professionals, governments, researchers, and providers of health technology access, cooperation, collaboration, networking, and the ability to form partnerships. The world needs a centralized health data hub, an open marketplace where health and genomic data can be shared, borrowed, even sold. Of course, such a market would have to be secure. By utilizing blockchain technology and next-generation cryptography, trust could quickly be built around such an ecosystem, alleviating consumer hesitations about leaving personal data online or in the hands of corporations. By implementing an open, collaborative blockchain platform and marketplace, critical mass can be achieved faster for precision medicine to be realized. Of course, this data hub has to be international, although many ethnic and geographical populations are still woefully underrepresented in public databases [322].

6.9.5 Internet of Things (IoT) and AI in genetics and genomics
Like all other dramatically expanding fields in AI and its related technologies, the Internet of Things (IoT) is moving into areas not even thought about as little as 5 years ago. Indeed, IoT applications (small connected devices, smartphones, wearables, etc.) are limited only by our imagination. The market for IoT devices is projected to grow from $5.8 billion in 2018 to $3.04 trillion in 2020 [323]. A novel and exciting IoT application in related genetic research is the use of wearables for reactive biospecimen acquisition and phenotyping. It is now possible to remotely monitor a participant's activity and sleep patterns, and the participant can be asked to perform a task (e.g., complete a test of cognition) or collect a biospecimen (e.g., a saliva or finger-prick blood sample) based on their activity or sleep. IoT devices and wearables could automate rapid analysis of sleep data upon waking, coupled with a push notification to the study participant's smartphone requesting collection of a finger-prick blood sample [324].
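That wearable-triggered workflow can be sketched in a few lines of Python; the tracker payload, the sleep-efficiency threshold, and the notification function below are hypothetical placeholders for a study app's actual services, not any vendor's API.

# Sketch of the reactive-sampling workflow: when the tracker reports a wake
# event, score last night's sleep and, if it was adequate, request a sample.
def sleep_efficiency(minutes_asleep: int, minutes_in_bed: int) -> float:
    return minutes_asleep / minutes_in_bed if minutes_in_bed else 0.0

def send_push(message: str) -> str:
    # Placeholder for the study app's notification service.
    print("PUSH:", message)
    return message

def on_wake_event(tracker_payload: dict) -> str:
    eff = sleep_efficiency(tracker_payload["minutes_asleep"],
                           tracker_payload["minutes_in_bed"])
    if eff >= 0.85:   # illustrative cut-off, not a validated study criterion
        return send_push("Good morning! Please collect your blood-spot sample now.")
    return send_push("Thanks - no sample needed today.")

on_wake_event({"minutes_asleep": 430, "minutes_in_bed": 470})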

Uses of IoT devices and wearables in research, including genetics and genomics, are far-reaching. The goal should be to incorporate reliable, accurate, valid, and low-cost portable monitors easily into the study design. As the potential uses of IoT and wearables continue to grow, the potential positive impact of these technologies on how we conduct human research should not be underestimated [325].

6.9.6 Telehealth and AI in genetics and genomics
A new concept, Genome Medical, is a service that helps make genomics part of the standard of care through a combination of telehealth technology and services. The service helps healthcare providers and their patients navigate and utilize genetic test results to understand the risk for disease, accelerate disease diagnosis, make informed treatment decisions, and lower the cost of care. This kind of care has the potential to provide a new level of genomic medicine, connecting patients and clinicians to top clinical genomics specialists via a telehealth platform. The service is delivered via telehealth and works with health systems, providers, employers, and health plans to meet ever-growing needs. Services are available to hospitals, health systems, employers, and consumers in all 50 states. It provides on-demand access to genetic experts for virtual visits and provider-to-provider consults, in addition to educational and training services such as patient risk assessment tools and consumer e-learning resources. Health care will see a future in which genomics is a seamless part of everyday care delivery. The current challenge is the gap between the clinical application of genomics and access to genetic expertise. It is felt that technologies like telehealth will eliminate traditional barriers and will allow for immediate access to genetic experts for making informed professional judgments and decisions about genetics and genomics [326].

6.9.7 Chatbots and AI in genetics and genomics
Increasing numbers of patients are getting genetic tests, from assessing cancer risk to answering hereditary questions about passing a disease on to a child. But most doctors aren't adequately trained to interpret genetic results or counsel patients appropriately, and skilled genetic counselors are in short supply. As genetic testing becomes more available to an ever-increasing population, so too does that population's access to mobile broadband technologies. One such technology comes from Clear Genetics [327], a company based in San Francisco, which has collaborated with genetic counselors in a variety of patient-care settings to develop a chatbot named "GIA" (Genetic Information Assistant). This is a clinical-grade chatbot that can assist patients in need of genetic counseling, risk assessment, and testing. It guides users through their results, collects information, reviews options for genetic testing, and answers questions about things like whether the test will be covered by insurance. If additional support is required, it can schedule a consultation with a human expert via video or in person. A similar chatbot technology is provided by Geisinger Clinic in Pennsylvania [328], which serves over 3 million patients. Geisinger and Clear Genetics have collaborated to develop chatbots for communication with patients enrolled in the MyCode Community Health Initiative [329], an extensive research biobank returning genetic test results for genes known to be associated with an increased risk of treatable and preventable heritable heart diseases and cancers.

The tool is compliant with the Health Insurance Portability and Accountability Act of 1996 and can be used on any mobile device. It is designed to record the decision (consent, considering, decline) in the patient's electronic health record. At-risk patients can request a genetic counseling visit via the MyCode chatbot and ask questions. Geisinger patients who have interacted with the family sharing tool and chatbot report positive experiences and confidence in the privacy of their information. They have welcomed a tool that has detailed information and answers that wouldn't be on the tip of the tongue when talking with family members about genetic test results [330].

6.9.8 Natural language processing (NLP) and AI in genetics and genomics
An ambitious study was conducted using natural language processing (NLP) and machine learning approaches to build classifiers that identify genomic-related treatment changes in the free-text visit progress notes of cancer patients. Data from a total of 755 cancer patients who had undergone clinical next-generation sequencing (NGS) testing were analyzed, and an NLP system was implemented to process the free-text data and extract NGS-related information [331]. The project explored how word embedding and deep learning techniques can help extract information efficiently from free-text EHR documents (e.g., progress notes) and evaluate the effectiveness and actionability of genetic testing in assisting cancer patient treatment adjustment. The primary goals of the project were to (1) identify the sections of the progress report that discussed genomic testing results and treatment information; (2) predict whether there was a treatment change (or not) based on the extracted data using deep learning and word embedding; and (3) compare the performance of 4 recurrent neural network (RNN)-based approaches and 5 conventional machine learning algorithms on the text classification task using clinical progress reports. NLP and machine learning methods were successfully applied to extract information from clinical progress reports and classify them into treatment-change and no-treatment-change groups. The goal was to implement an automated annotation system for clinical progress reports that can improve annotation accuracy as well as reduce the time required. An automated genetic alteration interpretation system based on NLP and RNN methods could then be developed to incorporate relevant information from text-based sources such as pathology reports and progress notes.
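For readers who want to picture the "conventional machine learning" baselines the study compared against its RNN models, the toy Python sketch below classifies invented progress-note snippets with bag-of-words features and logistic regression (using scikit-learn); it is a simplified stand-in for illustration, not the study's pipeline, and the snippets and labels are fabricated examples.

# Toy treatment-change classifier: TF-IDF features plus logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "NGS revealed EGFR exon 19 deletion; will switch therapy to osimertinib.",
    "Genomic testing negative for actionable variants; continue current regimen.",
    "Tumor profiling shows BRAF V600E; starting targeted therapy next cycle.",
    "Panel results reviewed with patient; no change to chemotherapy plan.",
]
labels = ["change", "no_change", "change", "no_change"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(notes, labels)
print(clf.predict(["NGS report discussed; plan to start a targeted agent."]))

A deep learning version would swap the TF-IDF features for word embeddings and the logistic regression for an RNN, which is exactly the comparison the study set out to make.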

6.9.9 Expert systems and AI in genetics and genomics
A "decision support system" (DSS) is a system that supports users in unstructured decision-making situations. Most DSS activities to date have focused on innovation tools such as expert systems. With the advent of Electronic Health Record (EHR) systems, clinicians are becoming accustomed to a special kind of DSS called Clinical Decision Support (CDS) [332].

Over the years, these systems have evolved from performing simple medical data processing tasks to complex analytical tasks that support genetically guided cancer management, risk assessment with family history, and operation on massive amounts of clinical data [333]. Recent advances in genomics have generated strong interest in developing genomic decision support systems to improve clinical care [334]. Many types of genomic data are currently available, such as sequencing data, gene expression data, genotype data, and epigenetic data, and several efforts are underway to utilize these large data sets for clinical support. Expert systems [335] have also been developed to encode such knowledge as rules, although most fields of clinical genomics research have not yet reached the level of maturity needed to produce such rules. Pharmacogenomics, however, is one field that has demonstrated that genotype-phenotype rules can be used in a Genome CDS [336].
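
As a concrete illustration of such genotype-phenotype rules, the sketch below encodes a single, simplified pharmacogenomic rule (loosely modeled on the widely cited CYP2C19/clopidogrel example) as a lookup table. The rule text is illustrative only; it is not clinical guidance and does not represent any particular system’s rule base.

# Toy rule base for a Genome CDS. The single pharmacogenomic rule below is a
# simplified illustration (CYP2C19 phenotype vs. clopidogrel), not clinical
# guidance or an actual production rule set.
PGX_RULES = {
    ("CYP2C19", "poor metabolizer", "clopidogrel"):
        "Reduced drug activation expected; consider an alternative antiplatelet agent.",
    ("CYP2C19", "normal metabolizer", "clopidogrel"):
        "Standard dosing per label.",
}

def genome_cds_advice(gene, phenotype, drug):
    """Return rule-based advice for a (gene, phenotype, drug) triple."""
    return PGX_RULES.get(
        (gene, phenotype, drug),
        "No pharmacogenomic rule available; use standard clinical judgment.",
    )

print(genome_cds_advice("CYP2C19", "poor metabolizer", "clopidogrel"))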

6.9.10 Robotics and AI in genetics and genomics

Nanotechnology is the science of engineering molecularly precise structures and, ultimately, molecular machines or “nanobots.” This technology operates at the nanoscale, roughly 1 to 100 nanometers; a nanometer is 1-billionth of a meter, about the width of 5 carbon atoms nestled side by side [337]. Medical nanorobotics is expected to have a revolutionary effect on regenerative medicine, and with diligent effort its distinct influence could begin to appear in clinical treatment as early as the 2020s [338]. Already, a nanorobot called a “chromallocyte” is being researched as a mechanism to extract all existing chromosomes from a diseased or dying cell and insert fresh new ones in their place. This process is called chromosome replacement therapy. The replacement chromosomes are manufactured earlier, outside of the patient’s body, using the patient’s genome as a blueprint. Each chromallocyte is equipped with a single copy of a digitally corrected chromosome set and then injected. Each device travels to its target tissue cell, replaces old or defective genes with new chromosome copies, then exits the cell and is removed from the body. Chromosome replacement therapy is therefore an attractive prospect for correcting the accumulating genetic damage and mutations that lead to aging in every one of our cells [338].

6.9.11 Population health (demographics and epidemiology) and AI in genetics and genomics

The new millennium and the completion of the Human Genome Project in 2003 [304] introduced rapid developments in the core functions and essential services of public health genomics, along with an understanding of the distinction between the terms “genetics” and “genomics” (see “Genetic and genomic therapies” below). These developments have enhanced our knowledge of how human genes interact with each other and with the environment to influence health, and they have yielded increasing recognition of the potential applications of genomic knowledge and related technologies to improve population health. Specifically, genomics can enable the stratification, and subsequent screening, of individuals and subgroups of populations based on their level of genetic risk for developing a disease. This can
then lead to the development of more targeted prevention approaches and reduce the burden of untoward psychological or social effects [339]. Although genomic knowledge, tools, and technologies are expected to benefit population health, they must be applied only when the benefits outweigh the potential harms. New tools and technologies that are introduced prematurely, without evidence demonstrating that they are valid and useful, risk harming individuals, families, and the broader health system. It is therefore essential that existing and emerging knowledge, tools, and technologies be evaluated to determine which are beneficial to population health and how they can be appropriately implemented. Public health genomics bridges this gap between new scientific discoveries and technologies and the application of genomic knowledge to benefit population health [340]. Public health genomics (see below) has been successfully integrated into existing paradigms for the provision of traditional public health services. The continued alignment of genomics with public health promises to deliver more precise, personalized health care to benefit the population. A national, coordinated approach providing centralized governance of decision-making is required to ensure responsible delivery, universality, and equity of access [341].

6.9.12 Precision medicine/health (personalized health) and AI in genetics and genomics

The National Institutes of Health started the Big Data to Knowledge (BD2K) Initiative and the ‘Precision Medicine Initiative’ with the intent of developing genetically guided, personalized, precision medicine for improved prevention, early detection, and treatment of common complex diseases [342]. The program aims to do this by using AI machine learning to gather and link the electronic health records and data of a group of 1 million Americans. The program will categorize and capture entire genome sequences, cell populations, proteins, metabolites, RNA, and DNA, as well as behavioral data [343]. In this era of bioinformatics, the wealth of data that diagnostic tests generate has become a new source of option value, like oil-exploration leases, powering the value and strategy of businesses. Take 23andMe as an example: using genotyping chips, the company offers tests for the genetic blueprint of its customers’ ancestry and health or trait markers. So far, it has gathered data from more than 2 million customers [344], who have the choice of allowing their data to be used for biomedical research. The global market for personalized medicine has grown rapidly since the inception of the Precision Medicine Initiative, announced by Barack Obama during his 2015 State of the Union address. Market research estimated the 2016 global market at $44 billion in revenues, and these revenues are forecast to more than triple, to $140 billion, by 2026 [345].

6.9.13 Healthcare analytics (and bioinformatics) and AI in genetics and genomics

Although bioinformatics has been mentioned numerous times up to this point in this book, its arguably most significant application is in genetics and genomics. By definition,
bioinformatics refers to the use of computer science, particularly AI and machine learning for statistical modeling and algorithmic processing, to understand biological data. It applies AI, modern computing, and big data (healthcare) analytical techniques to biological information [346]. Machine learning (ML) is particularly useful in bioinformatics for prediction and pattern detection based on large datasets. Within genetics and genomics, the most significant applications of AI and ML include DNA sequencing (NGS), protein classification, and the analysis of gene expression on DNA microarrays. The process of gene finding (locating regions of the DNA that encode genes [347]) combines extrinsic and intrinsic searches. In the extrinsic search, “the target genome is searched for DNA sequences that are similar to extrinsic evidence” in the form of known gene-encoding sequences previously discovered and labeled. Gene prediction algorithms then attempt to identify segments of the DNA that could potentially host gene-encoding sequences [348]. Gene expression is the process by which information from a gene is used in the synthesis of a functional gene product [349]. Machine learning is utilized for analysis, pattern identification, and classification of gene expression. In cancer research, the advent of microarrays and RNA sequencing, coupled with state-of-the-art machine learning techniques, has demonstrated the potential for the detection and classification of tumors at a molecular level [350].
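
The kind of expression-based tumor classification described above can be sketched in a few lines. The example below trains a random forest on a small synthetic expression matrix, with random numbers standing in for microarray or RNA-seq measurements, so the data and the “informative” genes are entirely artificial.

# Sketch of ML-based tumor vs. normal classification from gene expression.
# The expression matrix is synthetic; real studies use microarray or RNA-seq
# measurements for thousands of genes across many samples.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_genes = 60, 200
X = rng.normal(size=(n_samples, n_genes))     # simulated expression values
y = rng.integers(0, 2, size=n_samples)        # 0 = normal, 1 = tumor
X[y == 1, :10] += 1.5                         # make the first 10 genes informative

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")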

6.9.14 Preventive health and AI in genetics and genomics

The concept of “preventive genomics” includes the clinical use of AI in DNA sequencing (NGS), interpretation, and reporting to help accelerate precision medicine research. A “Preventive Genomics Clinic” at Brigham and Women’s Hospital in Boston is the first of its kind in the U.S. [351]. The goal of the clinic is to interpret disease-associated genes for healthy adults and children seeking to understand and mitigate their risk of future disease. For over 2 decades, NIH studies in translational genomics have shown more potential medical benefits and fewer risks than previously considered. Genomic technology is therefore offered in a clinical context, under the care of genetics experts, to individuals who wish to be proactive about their health. After a patient has been evaluated to rule out known genetic risks, the patient can choose from a menu of gene panels; quality genome sequencing combined with machine learning techniques enables interpretation and reporting on about 3700 disease-associated genes [351]. Patients then have the opportunity to participate in an NIH-funded follow-up study in which researchers will track outcomes for years. A goal of the clinic and its follow-up studies is to go beyond the typical diagnostic application of genetic testing and to help patients make individualized decisions based on their specific needs. This will accelerate the integration of DNA sequencing into day-to-day medical care, allow patients to communicate with health professionals about their “genetic health,” and advance precision and preventive health.
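
The panel-based reporting step described above amounts to filtering a patient’s variants against the genes on the chosen panel. The trivial sketch below illustrates this with placeholder gene names, variants, and classifications; it is not a real clinical panel or real patient data.

# Trivial sketch of panel-based reporting: keep only variants whose gene is on
# the selected disease-associated panel. Gene names, variants, and
# classifications are placeholders, not real clinical data.
cardiomyopathy_panel = {"MYH7", "MYBPC3", "TNNT2"}   # hypothetical panel

patient_variants = [
    {"gene": "MYBPC3", "variant": "c.1504C>T", "classification": "pathogenic"},
    {"gene": "BRCA2",  "variant": "c.68-7T>A", "classification": "benign"},
]

panel_hits = [v for v in patient_variants if v["gene"] in cardiomyopathy_panel]
for hit in panel_hits:
    print(f"{hit['gene']} {hit['variant']}: {hit['classification']}")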

6.9.15 Public health and AI in genetics and genomics

The CDC Office of Public Health Genomics tracks epidemiologic study results in its weekly horizon scan [352] and Advanced Molecular Detection Clips [352] and deposits relevant publications in the Public Health Genomics Knowledge Base [352]. Many computer tools, AI, and machine learning technologies have contributed significantly to the base of genomic knowledge that has emerged over the past 10 years. However, only a limited amount of this knowledge has so far been fully translated into healthcare and public health practice [353]. The literature cites 2 reasons why this may be so. First, most genetic variants identified in genomic studies appear to contribute only small increases in relative risk for common diseases, and they explain little about the relationship between disease and genetic inheritance [354]. Second, among the tools and techniques based on genomic knowledge that have been developed, there is limited evidence regarding their validity and utility [355]. Public health genomics bridges this gap between new scientific discoveries, computing and AI technologies, and the application of genomic knowledge to benefit population health [356]. The integration of genomic knowledge and technologies into healthcare is revolutionizing the way we approach clinical and public health practice. Building upon the work of public health genomics over the last 20 years, precision public health now enables the integration of genomics into public health strategies within the broader context of other determinants of health, such as socioeconomic, behavioral, and environmental factors. This will lead to more precise individual and population-based interventions [357] and, ultimately, improved population health outcomes [358]. The continued alignment of genomics with public health promises to deliver more precise, personalized health care to benefit the population.

6.9.16 Access and availability and AI in genetics and genomics

As deep genomics expands, relevant ethical and governance questions begin to arise. How will predictive health data be used, and who will have access to it? Do pharmaceutical companies have the right to profit from your health information without giving you any benefits in return? Would it threaten your sense of self-worth if those handling your health data knew a great deal of biological detail about your body? Is it ethical for a prospective employer to ask what your health will look like over the next decade? Answers to these questions are not easy to capture, but their impact on society is profound. A recent report [359] on ethics, design, and AI argues that “as computational power increases with time and AI improves, information that was thought private will be linked to individuals at a later stage in time. Furthermore, as data is stored in terms of summaries rather than as raw observations, and may help training algorithms, keeping track of data usage and potential risks to privacy may be increasingly complex.” Technological advances such as blockchain, originally developed for digital currency systems, allow individuals to hold and secure digital assets without a central authority. Such
technologies are also being used to create new digital property systems that include personal medical data as property [360]. This raises the broader question of how personal data ecosystems would fit into an AI-powered health economy. There is also an urgent need to discuss, concurrently, how the convergence of AI and personal health will challenge the purpose and content of relevant legislation such as the Health Insurance Portability and Accountability Act (HIPAA) and the Genetic Information Nondiscrimination Act (GINA). Models of genomic data protection must be considered a critical biosecurity issue: insights into the functioning of our genomes and their impact on our health and research will be of strategic importance in biotechnology and biodefense [361]. It is important, if not urgent, that the genomics and AI research community start discussions with diverse fields of expertise in biosafety, biosecurity, and biotechnological governance [362]. The volume of information presented in this chapter has been enormous, yet it is a mere fraction of the published literature on the topics and categories addressed herein. As the author, I had to fight the temptation to add more and more on each subject as I conducted the research. I may have erred in some areas with more than is needed in a “guidebook” of this nature, or gone to excess in others. But in any area where you, as the reader, may have felt a bit overwhelmed by more than you thought you needed on any specific item, I can assure you that I only scratched the surface. On the other hand, if you were left wanting greater coverage of a topic, again, I can assure you that there is much more available. To that end, I have tried to include substantial literature citations (footnotes) to support your further research and reading on any topic discussed.

References [1] Health. The difference between primary, secondary, and tertiary health care. eInsure; January 27, 2017. [2] Jee K, Kim GH. The potentiality of big data in the medical sector: focus on how to reshape the healthcare system. Healthc Inform Res 2013;19(2):79 85. [3] Tan SS, Gao G, Koch S. Big data and analytics in healthcare. Methods Inf Med 2015;54(6):546 7. [4] Shortliffe EH. Artificial intelligence in medicine: weighing the accomplishments, hype, and promise. IMIA and Georg Thieme Verlag KG. Yearb Med Inf 2019; , https://doi.org/10.1055/s-0039-1677891 . . [5] Wu PY, Cheng CW, Kaddi CD, Venugopalan J, Hoffman R, Wang MD. Omic and electronic health record big data analytics for precision medicine. IEEE Trans Biomed Eng 2017;64(2):263 73. [6] Garets D, Davis M. Electronic medical records vs. Electronic health records: yes, there is a difference. Zhongguo Yiyuan 2007;11(5):38 9. [7] Poulymenopoulou M, Malamateniou F, Prentza A, Vassilacopous G. Challenges of evolving PINCLOUD PHR into a PHR-based health analytics system. Paper presented at the Proceedings of the European, Mediterranean & Middle Eastern Conference on Information Systems EMCIS 2015. [8] Morgan L. Artificial intelligence in healthcare: how AI shapes medicine. Datamation; March 8, 2019. [9] FDA. Novel drug approvals for 2018. ,https://www.fda.gov/drugs/developmentapprovalprocess/druginnovation/ucm592464.htm.; 2018. [10] Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Commun ACM 2017;60:84 90.
[11] Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med 2019;25:44 56. [12] Esteva A, et al. A guide to deep learning in healthcare. Nat Med 2019;25:24 9. [13] Shah P, Kendall F, Khozin S, et al. Artificial intelligence and machine learning in clinical development: a translational perspective, vol. 2. npj Digital Medicine; July 26, 2019. Article number: 69. [14] Quarré F. The advent of AI and blockchain in health care. Forbes Technology Council. Forbes; January 30, 2019. [15] Sciforce. IoT in healthcare: are we witnessing a new revolution? Medium; March 7 2019. [16] Kuziemsky C, Maeder AJ, John O, et al. Role of artificial intelligence within the telehealth domain. Georg Thieme Verlag KG Stuttgart. Official 2019 Yearbook Contribution by the members of IMIA Telehealth Working Group. Yearb Med Inf 2019;28(01):035 40. [17] Yaghoubzadeh R, Kramer M, Pitsch K, et al. Virtual agents as daily assistants for elderly or cognitively impaired people. In: International workshop on intelligent virtual agents 2013 August 29. Berlin, Heidelberg: Springer; 2013. p. 79 91. [18] Bickmore TW, Utami D, Matsuyama R, et al. Improving access to online health information with conversational agents: a randomized controlled experiment. J Med Internet Res 2016;18(1). [19] Shaked NA. Avatars and virtual agents relationship interfaces for the elderly. Health Technol Lett 2017; (03):83. [20] Riccardi G. Towards healthcare personal agents. In: Proceedings of the 2014 workshop on roadmapping the future of multimodal interaction research, including business opportunities and challenges 2014 November 16. ACM; 2014. p. 53 6. [21] Chapman W. Healthcare NLP: the secret to unstructured data’s full potential. Health Catalyst; April 2, 2019. [22] Glen Coppersmith G, Leary R, Crutchley P, et al. Natural language processing of social media as screening for suicide risk. Biomed Inform Insights 2018;. Available from: https://doi.org/10.1177/ 1178222618792860. [23] , https://aws.amazon.com/comprehend/medical/.; 2019. [24] Kannan A. The science of assisting medical diagnosis: from expert systems to machine-learned models follow. Medium; April 15, 2019. [25] Tutorialspoint.com. Artificial intelligence expert systems. ,https://www.tutorialspoint.com/artificialintelligence/artificial-intelligence-expert-systems.htm.; 2018. [26] Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by imagebased deep learning. Cell 2018;172(5):1122 31. [27] Ravuri M, Kannan A, Tso GJ, et al. From expert systems to machine-learned diagnosis models. Proc Mach Learn Res 2018;85:1 16. [28] MedTech. AI-powered medical robots are becoming reality! ,https://www.medicaltechoutlook.com/.; July 5, 2019. [29] Da Vinci Surgical System | Robotic Technology - Da Vinci Surgery. ,https://www.davincisurgery.com/ da-vinci-systems/about-da-vinci-systems.; 2019. [30] Prodromakis T. Five ways nanotechnology is securing your future. Elsevier SciTech Connect; March 29, 2016. [31] Research Briefs. Paging Dr. Robot: how robotics is changing the face of medicine. CB Insights; July 24, 2019. [32] Rigby MJ. Ethical dimensions of using artificial intelligence in health care. AMA J Ethics 2019. [33] Lough S. Predictive health platform, optimizes health and disease risk management, improves population health outcomes. ,www.bannerhealth.com.; March 7, 2019. [34] Insights Team. How machine learning is crafting precision medicine. Forbes Insights; February 11, 2019.
[35] Mesko B. The role of artificial intelligence in precision medicine. Expert Rev Precis Med Drug Dev 2017;2(5):239-241. Available from: https://doi.org/10.1080/23808993.2017.1380516. [36] FDA permits marketing of artificial intelligence-based device to detect certain diabetes-related eye problems. U.S. Food and Drug Administration; April 11, 2018. [37] Keown S. Using algorithms to personalize patient care. Dr. Elizabeth Krakow, a blood-cancer specialist, and epidemiologist. Fred Hutch News Service; October 1, 2018. [38] Kaisler S, Armour F, Espinosa JA, et al. Big data: issues and challenges moving forward. In: IEEE Computer Society 46th Hawaii International conference on system sciences. IEEE; 2013. p. 995 1004. Available from: https://doi.org/10.1109/HICSS.2013.645. [39] Perrey J, Spillecke D, Umblijs A. Smart analytics: how marketing drives short-term and long-term growth. McKinsey Quarterly; 2013. [40] Wang L, Alexander CA. Big data in medical applications and health care. Am Med J 2015;6:1 8. Available from: https://doi.org/10.3844/amjsp.2015.1.8. [41] Shah R, Echhpal R, Nair S. Big data in healthcare analytics. Int J Recent Innov Trends Comput Commun 2015;10:134 8. [42] Hansen MM, Miron-Shatz T, Lau YS, et al. Big data in science and healthcare: a review of recent literature and perspectives. Contribution of the IMIA Social Media Working Group, Yearb. Med Inf 2014;9:21 6. Available from: https://doi.org/10.15265/IY-2014-0004. [43] Schultz T. Turning healthcare challenges into big data opportunities: a use-case review across the pharmaceutical development lifecycle. Bull Association Inform Sci Technol 2013;39:34 40. Available from: https://doi.org/10.1002/bult.2013.1720390508. [44] Yang J, Aloe AM, Feeley TH. Risk information seeking and processing model: a meta-analysis. J Commun 2014;. Available from: https://doi.org/10.1111/jcom.12071. [45] Bellazzi R. Big data and biomedical informatics: a challenging opportunity. Yearb Med Inf 2014;9:8 13. [46] Terris M. Evolution of public health and preventive medicine in the United States. Am J Public Health 1975;65:161 9. [47] Manolio TA. Bringing genome-wide association findings into clinical use. Nat Rev Genet 2013;14:549 58. [48] Gambhir SS, Ge TJ, Vermesh O, et al. Toward achieving precision health. Sci Transl Med 28 2018;10 (430). Available from: https://doi.org/10.1126/scitranslmed.aao3612. [49] Shaban-Nejad A, Lavigne M, Okhmatovskaia A, et al. A knowledge-based platform to support integration, analysis, and visualization of population health data. Ann N Y Acad Sci 2017; 1387:44 53. [50] Shaban-Nejad A, Brownstein JS, Buckeridge DL. Public health intelligence and the internet. Cham: Springer International Publishing AG; 2017. [51] Wilk S, et al. Comprehensive mitigation framework for the concurrent application of multiple clinical practice guidelines. J Biomed Inf 2017;66:52 71. [52] Xu JQ, Murphy SL, Kochanek KD, Arias E. Mortality in the United States, 2015. NCHS data brief, no 267. Hyattsville, MD: National Center for Health Statistics; 2016. [53] Hsu J. Will artificial intelligence improve health care for everyone? Undark Magazine Smithsonian.com; July 31, 2019. [54] Makary M. Medical errors now third leading cause of death in United States. BMJ 2016;353:i2139. [55] To PRESS RELEASE NO: 2018/092/HD. World Bank and WHO: half the world requires access to essential health services, 100 million still pushed into extreme poverty because of health expenses. The World Bank; December 13, 2017. [56] Physician Burnout Report. 
,https://cdn1.sph.harvard.edu/wp-content/uploads/sites/21/2019/01/.; 2018.
[57] Bate A, Juniper J, Lawton AM, Thwaites RM. Designing and incorporating a real-world data approach to international drug development and use: what the UK offers. Drug Discov Today 2016;21 (3):400 5. [58] Bate A, Pariente A, Hauben M, Begaud B. Quantitative signal detection and analysis in pharmacovigilance. Mann’s Pharmacovigil 2014;331 54. [59] Trifiro G, Sultana J, Bate A, et al. From big data to smart data for pharmacovigilance: the role of healthcare databases and other emerging sources. Drug Saf 2017;41(3):1 7. Available from: https://doi.org/ 10.1007/s40264-017-0592-4. [60] Oliveira AL. Biotechnology, big data, and artificial intelligence. Biotechnol J 2019;. Available from: https://doi.org/10.1002/biot.201800613. [61] Harrer S, Shah P, Antony B, et al. Artificial intelligence for clinical trial design. Trends Pharmacol Sci 2019;. Available from: https://doi.org/10.1016/j.tips.2019.05.005. [62] Press C. Review evaluates how AI could boost the success of clinical trials. ScienceDaily 2019;. [63] Plotnikov V, Kuznetsova V. The prospects for the use of digital technology “Blockchain” in the pharmaceutical market. In: MATEC web of conferences. London: EDP Sciences 2018;vol. 193. [64] Markov A. Use of blockchain in pharmaceutics and medicine. ,https://miningbitcoinguide.com/technology/blokchejn-v-meditsine/.; October 16, 2018. [65] Sylim P, Liu F, Marcelo A, et al. Blockchain technology for detecting falsified and substandard drugs in distribution: pharmaceutical supply chain intervention. JMIR Res Protoc 2018;7:e10163. [66] Trujllo G, Guillermo C. The role of blockchain in the pharmaceutical industry supply chain as a tool for reducing the flow of counterfeit drugs [Ph.D. thesis]. Dublin: Dublin Business School; 2018. [67] Siyal AA, Junejo AZ, Zawish M, et al. Applications of blockchain technology in medicine and healthcare: challenges and future perspectives. Cryptography; January 2, 2019. [68] Hickey KT, Riga TC, Mitha SA, et al. Detection and management of atrial fibrillation using a remote control. Nurse Pract 2018;43(3):24 30. [69] Özdemir V. The big picture on the “AI Turn” for digital health: the internet of things and cyber-physical systems OMICS: A J Integr Biol 2019;236 Review Articles. June 5. Available from: https://doi.org/ 10.1089/omi.2019.0069. [70] Thuemmler C, Bai C. Health 4.0: how virtualization and big data are revolutionizing healthcare. Cham, Basel: Springer; 2017. [71] Markarian A. Pharma 4.0. Pharm Technol 2018;42(4):24. [72] Kastner P, Morak J, Modre R, et al. Innovative telemonitoring system for cardiology: from science to routine operation. Appl Clin Inf 2010;1(2):165 76. [73] Schreier G, Schwarz M, Modre-Osprian R, Kastner P, Scherr D, Fruhwald F. Design and evaluation of a multimodal mHealth based medication management system for patient self-administration. Conf Proc IEEE Eng Med Biol Soc. 2013;2013:7270 3. [74] Kropf M, Modre-Osprian R, Hayn D, et al. Telemonitoring in heart failure patients with clinical decision support to optimize medication doses based on guidelines. Conf Proc IEEE Eng Med Biol Soc 2014;2014:3168 71. [75] Dharwadkar R, Deshpande NA, Pune A. Pharmabot: a recommendation on general medicines Int J Innovative Res Computer Commun Eng 2018;6(6).
[76] Comendador BEV. Pharmabot: a pediatric generic medicine consultant chatbot. J Autom Control Eng 2015;3:137 40. [77] Keane J. Medxnote is building “the perfect use case” for chatbots in healthcare. TechEU; September 11, 2017. [78] Ben Abacha A, Zweigenbaum P. A hybrid approach for the extraction of semantic relations from MEDLINE abstracts. In: 12th international conference on computational linguistics and intelligent text processing. Tokyo, Japan. ,https://rd.springer.com/book/10.1007.; 2011.
[79] Fong A, Hettinger AZ, Ratwani RM. Exploring methods for identifying related patient safety events using structured and unstructured data. J Biomed Inf 2015;58:89 95. [80] Kim S, Liu H, Yeganova L, et al. Extracting drug-drug interactions from literature using a rich featurebased linear kernel approach. J Biomed Inf 2015;58:23 30. [81] Lopez Pineda A, Ye Y, Visweswaran S, et al. Comparison of machine learning classifiers for influenza detection from emergency department free-text reports. J Biomed Inf 2015;58:60 9. [82] Segura-Bedmar I, Martinez P. Pharmacovigilance through the development of text mining and natural language processing techniques. J Biomed Inf 2015;58:288 91. [83] Schmider J, Kumar K, LaForest C, et al. Use of artificial intelligence in adverse event case processing. Innov Pharmacovigilance: ClPharmacol Ther 2019;105(4). [84] Fong DJ. Artificial intelligence in pharmacy: are you ready? Wolters Kluwer; January 22, 2018. [85] Paul M, Andreassen S, Tacconelli E, et al. Improving empirical antibiotic treatment using TREAT, a computerized decision support system: cluster randomized trial. J Antimicrob Chemother 2006;58:1238 45. [86] Robotics Online Team. How collaborative robots are being used in pharma. Robotics Industry Assoc; May 30, 2019. [87] Vyas M, Thakur S, Riyaz B, et al. Artificial intelligence: the beginning of a new era in pharmacy profession. Asian J Pharmaceutics 2018;12(2):72. [88] Chisolm-Burns MA, Kim Lee J, Spivey CA. US pharmacists’ effect as team members on patient care: systematic review and meta-analysis. Med Care 2010;48(923):33. [89] Sanborn MD. Population health management and the pharmacist’s role. Am J Health-System Pharm 2017;74(18):1400 1. Available from: https://doi.org/10.2146/ajhp170157. [90] van Dulmen S, Sluijs E, van Dijk L, et al. Patient adherence to medical treatment: a review of reviews. BMC Health Serv Res 2007;7:55. [91] Viswanathan M, Golin CE, Jones CD, et al. Interventions to improve adherence to self-administered medications for chronic diseases in the United States: a systematic review. Ann Intern Med 2012;157 (11):785 95. [92] Zullig LL, Blalock DV, Dougherty S, et al. The new landscape of medication adherence improvement: where population health science meets precision medicine. Patient Prefer Adherence 2018;12:1225 30. Available from: https://doi.org/10.2147/PPA.S165404. [93] Harpaz R, Vilar S, Dumouchel W, et al. Combing signals from spontaneous reports and electronic health records for detection of adverse drug reactions. J Am Med Inf Assoc 2013;20 (3):413 19. [94] Islam MS, Hasan M, Wang X, et al. A systematic review on healthcare analytics: application and theoretical perspective of data mining. Healthc Jun 2018;6(2):54. Available from: https://doi.org/10.3390/ healthcare6020054. [95] Big Data in the Healthcare & Pharmaceutical: A 1 $7 Billion Opportunity by 2021. Research and Markets; July 31, 2018. [96] Faggella D. 7 applications of machine learning in pharma and medicine. Business Intelligence and Analytics; January 30, 2019. [97] Fong DJ. Artificial intelligence in pharmacy: are you ready? Wolters Kluwer; January 22, 2018. [98] Hartman M, Martin AB, Espinosa N, et al. National health expenditures account team. National health care spending in 2016. [99] , https://aspe.hhs.gov/pdf-report/observations-trends-prescription-drug-spending.; March 8, 2016. [100] Emanuel EJ. The real cost of the US health care system. JAMA 2018;319(10):982 5.
[101] Bennett D. Artificial intelligence in life sciences: marketing, sales, and more. Human intelligence amplified; June 11, 2019. [102] , http://www.cloudmedxhealth.com/.; 2019. [103] Sheil E. Cleveland clinic, IBM collaborate to establish model for cognitive population health management and data-driven personalized healthcare. Cleveland Clinic, Newsroom; December 22, 2016. [104] HopkinsMedicine.org; 2019. [105] AI for Care Variation, Utilization & Hospital Ops. KenSci; 2019. [106] Obermeyer Z, Emanuel EJ. Predicting the future—big data, machine learning, and clinical medicine. N Engl J Med 2016;375:1216 19. [107] Garcia Vidal C, Puerta P, Sanjuan G, et al. Predicting multidrug-resistant Gram-negative infections in hematological patients with high-risk febrile neutropenia using neural networks. Eur Congr Clin Microbiol Infect Dis 2019;13 16. [108] Garcia-Vidal C, Sanjuan G, Puerta-Alcalde P. Artificial intelligence to support clinical decision-making processes. EBioMedicine 2019;46:27 9 EBioMedicine. 2019 Aug; 46: 27 29. [109] Ni Y, Bermudez AA, Kennebeck S, et al. A real-time automated patient screening system for clinical trials eligibility in an emergency department: design and evaluation. JMIR 2019;7(3). [110] News Release. Artificial intelligence solution improves clinical trial recruitment. EurekAlert AAAS; July 24, 2019. [111] Wikipedia. 2019. [112] Cyran MA. Blockchain as a foundation for sharing healthcare data. Blockchain Health Today 2018;1:13. [113] Kuo TT, Kim HE, Ohno-Machado L. Blockchain distributed ledger technologies for biomedical and health care applications. J Am Med Inf Assoc 2017;24(6):1211 20. [114] American Hospital Association. Telehealth: a path to virtual integrated care. ,www.AHA.org/center.; 2019. [115] , https://www.tempus.com/about-us.html.; 2019. [116] Gauher S, Boyle Uz F. Cleveland clinic, to identify at-risk patients in ICU using cortana intelligence. Cleveland Clinic; September 26, 2016. [117] , http://nvidianews.nvidia.com/news/nvidia-massachusetts-general-hospital-use-artificial-intelligence-to-advance-radiology-pathology-genomics.; 2019. [118] Insights. Command center to improve patient flow infographic. Johns Hopkins Medicine; March 1, 2016. [119] Lee E, Seals K. Virtual interventional radiologist. UCLA Health; March 3, 2017. [120] Burns E. Natural language processing (NLP). TechTarget; May 2019. [121] Coppersmith G, Leary R, Crutchley P, et al. Natural language processing of social media as screening for suicide risk. Biomed Inform Insights 2018;10. Available from: https://doi.org/10.1177/ 1178222618792860. [122] Haughn M. Natural language understanding (NLU). TechTarget; October 2017. [123] OSHA. Hospital eTools. U.S. Department of Labor. OSHA QuickTakes; 2019. [124] Elion J. Emerging technologies for clinical documentation improvement. Becker’s Healthcare; August 23, 2010. [125] ROBODOC. ThinkSurgical.com; 2019. [126] Kraft BM, Jäger C, Kraft K, et al. The AESOP robot system in laparoscopic surgery: increased risk or advantage for surgeon and patient? Surg Endosc 2004;18(8):1216 23. [127] About DaVinci systems. Intuitive; 2019. [128] CMS.gov. MACRA. Centers for Medicare & Medicaid Services; June 14, 2019.
[129] CMS.gov. What are the value-based programs? Centers for Medicare & Medicaid Services; July 16, 2019. [130] Burill S, Boozer Cruse C. Beyond the EHR: shifting payment models call for hospital investment in new technology areas. DeLoitte Insights; January 11, 2019. [131] Kent J. Value-based hospitals more likely to adopt population health tools. Health IT Analytics; January 17, 2019. [132] Bertalan M. The role of artificial intelligence in precision medicine. Expert Rev Precis Med Drug Dev 2017;2(5):239 41. Available from: https://doi.org/10.1080/23808993.2017.1380516. [133] Torkamani A, Topol E. Your genome, on-demand how your detailed genetic profile can predict your risk of diseases and improve your health. MIT Technology Review; October 23, 2018. [134] Sullivan T. Why EHR data interoperability is such a mess in 3 charts. Healthcare TI News; May 16, 2018. [135] Insights Team. How machine learning is crafting precision medicine. Forbes Insights; February 11, 2019. [136] Oshinsky D. Bellevue: three centuries of medicine and mayhem at America’s most storied hospital. Doubleday; 2017. [137] Cohen JK. CEO power panel: patient access is the next frontier for health systems. Modern Healthcare 2019;May 18. [138] Meyer H. Younger patients more dissatisfied with traditional healthcare. Modern Healthcare; February 12, 2019. [139] Carroll W. Artificial intelligence, nurses, and the quadruple aim. Online J Nurs Inform (OJNI) 2018;22(2). [140] Davis T, Bice C. Nurses have significantly higher levels of EHR satisfaction than providers. KLAS; March 28, 2019. [141] Clarke M. Nurses have significantly higher levels of EHR satisfaction than providers. Health Leaders; April 15, 2019. [142] Dilsizian SE, Siegel EL. Artificial intelligence in medicine and cardiac imaging: harnessing big data and advanced computing to provide personalized medical diagnosis and treatment. Curr Cardiol Rep 2014;16(1):441. Available from: https://doi.org/10.1007/s11886-013-0441-8. [143] de Veer AJE, Fleuren MAH, Bekkema N, et al. Successful implementation of new technologies in nursing care: a questionnaire survey of nurse-users. BMC Med Inf Decis Mak 2011;11:67. [144] De Raeve P. Blockchain supports nurses in the continuity of health and social care. Open Access Government; August 8, 2018. [145] FDA is rapidly approving medical IoT devices. Sure; February 17, 2019. [146] , http://community.advanceweb.com/blogs/nurses-18/archive/2015/10/14/nurses-will-benefit-fromthe-internet-of-things.aspx.. [147] The Internet of Things and Nursing. Jacksonville University; 2019. [148] American Telehealth Association. Telehealth nursing fact sheet: ATA telehealth nursing special interest group; 2018. [149] Smith W. Three benefits of a nurse telephone triage line. Team Health Medical Call Center; 2014. [150] AmBetter. 24/7 Nurse advice line. ,http://www.ambetterhealth.com/benefits-services/nurse-line. html.; 2019. [151] American Academy of Ambulatory Care Nursing. Telehealth nursing practice scope and standards of practice. ,http://www.aaacn.org/tnp-scope-standards-practice.; 2019. [152] The Patient Protection and Affordable Care Act. ,http://www.gpo.gov/fdsys/pkg/PLAW-111publ148/ pdf/PLAW-111publ148.pdf.; 2010.
[153] Hawig D. The rise of AI-powered chatbots in the healthcare industry. IdeaConnection; May 5, 2019. [154] Hand C. How chatbots are changing communication in healthcare. Inc 5000 Healthcare; January 21, 2019. [155] Lovett L. AI triage chatbots trekking toward a standard of care despite criticism. Mobile Health News; November 2, 2018 [156] Seif G. An easy introduction to Natural Language Processing: using computers to understand human language. Retrieved January 20, 2019. [157] Johansen M, O’Brien J. Decision making in nursing practice: a concept analysis. Nurs Forum 2016;51:40 8. [158] Carroll WM. Artificial intelligence, critical thinking, and the nursing process. J Nurs Informatics: OJNI; Chic 2019;23(1). [159] Bowles KH, Ratcliffe SJ, Holmes JH, et al. Using a decision support algorithm for referrals to post-acute care. JAMDA 2019;20(4):408 13. [160] U.S. Bureau of Labor Statistics, Occupational Outlook Handbook. Registered Nurses; September 4, 2019. [161] Schwab K. A hospital introduced a robot to help nurses. They didn’t expect it to be so popular. Fast Company; July 8, 2019. [162] Eriksson H, Salzmann-Erikson M. The digital generation and nursing robotics: an ethnographic study about nursing care robots posted on social media. Nurs Inq 2017;24(2). Available from: https://doi.org/ 10.1111/nin.12165. [163] Andrew J, Pepito AB, Locsin R. Can nurses remain relevant in a technologically advanced future? Int J Nurs Sci 2019;6(1):106 10. [164] U.S. Department of Labor, Bureau of Labor Statistics. Occupational Outlook: Registered Nurses. ,https://www.bls.gov/ooh/healthcare/registered-nurses.htmtab-1.; 2017 [accessed 28.12.17]. [165] Robert Wood Johnson Foundation Catalyst for change Harnessing the Power of Nurses to Build Population Health for the 21st Century 2017. [166] CDC. What is precision medicine? National Institute of Health (NIH). Genetics Home Reference (GHR); May 28, 2019. [167] Cashion AK, Gill J, Hawes R, et al. National Institutes of Health Symptom Science Model sheds light on patient symptoms. Nurs Outlook 2016;64:499 506. [168] Cashion AK, Grady PA. The National Institutes of Health/National Institute of Nursing Research intramural research program and the development of the National Institutes of Health Symptom Science Model. Nurs Outlook 2015;63:484 7. [169] Cashion AK, Grady PA. Response to the commentary: precision health: using omics to optimize selfmanagement of chronic pain in aging: from the perspective of the NINR Intramural Research Program. Res Gerontology Nurs 2018;11:14 15. [170] NINR. The NINR strategic plan: advancing science, improving lives. ,https://www.ninr.nih.gov/sites/ www.ninr.nih.gov/files/NINR-StratPlan2016-reduced.pdf.; 2016. [171] Hickey KT, Bakken S, Byrne MW, et al. Precision health: advancing symptom and self-management science. Nurs Outlook 2019;67(4):462 75. [172] Howley EK. Can nurse practitioners help ease the growing physician shortage? US News; November 15, 2018. [173] Preventive Care Services. Guideline Number: CDG.016.26. United Healthcare; July 1, 2019. [174] Chiverton PA, Votava KM, Tortoretti DM. The future role of nursing in health promotion. Am J Health Promot 2003;18(2):192 4.
[175] The Definition and Practice of Public Health Nursing. APHA; November 11, 2013. [176] Public Health Nursing. American Public Health Association; 2019. [177] U.S. Department of Health and Human Services Health Resources and Services Administration Bureau of Health Workforce National Center for Health Workforce Analysis. Supply and Demand Projections of the Nursing Workforce: 2014 2030; July 21, 2017. [178] U.S. Department of Health and Human Services, Health Resources and Services Administration, National Center for Health Workforce Analysis. The Future of the Nursing Workforce: National- and State-Level Projections, 2012 2025. Rockville, Maryland; 2014. [179] Institute of Medicine (US). Committee on the future health care workforce for older Americans. Retooling for an aging America: building the health care workforce. National Academies Press; 2008. [180] Orsini M. Utilizing big data to improve patient outcomes in home healthcare share. Axxess; July 2, 2019. [181] Threlkeld T. NAHC Policy Staff. 3 Significant issues facing homecare & hospice. Home Care; August 29, 2019. [182] Brighttree. 60% of referral sources would switch to a home health and hospice provider that accepts electronic referrals, survey reveals. BusinessWire; July 24, 2019. [183] Optimizing Home Health Care. Enhanced value and improved outcomes. Clevel Clin J Med 2019;. [184] Dobson A, El-Gamil A, Heath S, et al. Use of home health care and other care services among medicare beneficiaries clinically appropriate and cost-effective placement (CACEP) project working paper series. Alliance for Home Health Quality & Innovation; 2012. [185] Seng Tan J. Global blockchain platform for elderly wellness and care: white paper. Wellderly; April 29, 2018. [186] Access and leverage smart sensor analytics. Care Innovations, LLC; 2019. [187] 5 Ways healthcare IoT & remote patient monitoring are transforming care delivery. Care Innovations; 2019. [188] Medicare.gov. Telehealth. Official US Government site for Medicare; 2019. [189] Erstling A. 6 Healthcare innovation trends in 2019| what to pursue and what to watch. Formula; January 23, 2019. [190] , http://news.northeastern.edu/2017/09/professor-designs-chatbot-to-help-comfort-patients-in-lastyears-of-their-lives/.. [191] Rieland R. Can a chatbot help you prepare for death? Smithsonianmag.com; September 29, 2017. [192] Lilley EJ, Lindvall C, Lillemoe KD, et al. Measuring processes of care in palliative surgery a novel approach using natural language processing. Ann Surg 2018;267(5):823 5. [193] Muoio D. How chatbots and robots can fill healthcare’s unmet needs. MobiHealthNews; September 25, 2018. [194] Emanuel E. The status of end-of-life care in the United States: the glass is half full. JAMA 2018;320:329. [195] Lustbader D, Mudra M, Romano C, et al. The impact of a home-based palliative care program in an accountable care organization. J Palliat Med 2016;20:23 8. [196] Coalition to Transform Advanced Care (CTAC): Clinical and Payment Models. Washington, DC. ,www.ctac.org/models.; March 2019. [197] Yosick L, Crook RE, Gatto M, et al. Effects of a population health community-based palliative care program on cost and utilization. J Palliat Med 2019;22(9). [198] The future of health, begins with you. All of Us Research Program. U.S. Department of Health & Human Services; 2019. [199] NHCOA is a member of the Combined Federal Campaign (CFC) #44522 © 2016. [200] eSolutions.com. Titan; 2019.
[201] eSolutions. Home health & hospice billing; 2109. [202] CarePredict @Home. CarePredict; January 8, 2019. [203] www.tricare.mil is an official website of the Defense Health Agency (DHA), a component of the Military Health System. Last updated June 28, 2019. [204] Center for Disease Control and Prevention. National Center for Health Statistics; March 11, 2016. [205] National Center for Health Statistics, Vital and Health Statistics. Long Term Care Providers and Services Users in the United States, 2015 2016. Series 3, Number 43. February 2019. [206] Nicholson K, Makovskic TT, Griffith LE, et al. Multimorbidity and comorbidity revisited: refining the concepts for international health research. J Clin Epidem 2019;105:142 6. [207] Bresnick J. Identifying big data sources for population health management. Health IT Analytics; January 2, 2018. [208] Glicksberg BS, et al. An integrative pipeline for multi-modal discovery of disease relationships. Pac Symp Biocomput 2015;407 18. [209] Bagley SC, Sirota M, Chen R, et al. Constraints on biological mechanism from disease comorbidity using electronic medical records and database of genetic variants. PLoS Comput Biol 2016;12(4):e1004885. [210] Roca M, Gili M, Garcia-Garcia M, et al. Prevalence and comorbidity of common mental disorders in primary care. J Affect Disord 2009;119(1 3):52 8. Available from: https://doi.org/10.1016/j. jad.2009.03.014. [211] Cuncic A. Facts about comorbidity. VeryWellMind; September 13, 2019. [212] Schumacher A. Towards a global, blockchain-based precision medicine ecosystem. Blockchain & Healthcare - 2017 Strategy Guide Method; June 2017. [213] , https://ec.europa.eu/digital-single-market/en/content/mid-term-review-digital-single-market-dsmgood-moment-take-stock.. [214] , https://ec.europa.eu/digital-single-market/en/news/european-commission-launches-eu-blockchainobservatory-and-forum.. [215] De Raeve P. Blockchain technology: supporting continuity of care. Health Europa; April 27, 2018. [216] Stegemann S. Developing drug products in an aging society: from concept to prescribing. Cham: Springer; 2016. [217] Schreier G, Schwarz M, Modre-Osprian R, Kastner P, Scherr D, Fruhwald F. Design and evaluation of a multimodal mHealth based medication management system for patient self-administration. Conf Proc IEEE Eng Med Biol Soc 2013;2013:7270 3. [218] Ebner H, Modre-Osprian R, Kastner P, et al. Integrated medication management in mHealth applications. Stud Health Technol Inf 2014;198:238 44. [219] Porath A, Irony A, Borobick AS, Nasser S, et al. Maccabi proactive Telecare Center for chronic conditions - the care of frail elderly patients. Isr J Health Policy Res 2017;6(1):68. [220] Kökciyana N, Chapmanb M, Balatsoukas P, et al. A collaborative decision support tool for managing chronic conditions. UK Engineering & Physical Sciences Research Council (EPSRC); 2019. [221] Sklar E, Azhar MQ. Argumentation-based dialogue games for shared control in human-robot systems. J Human-Robot Interact 2015;4:120 48. [222] Young AP, Kökciyan N, Sassoon L, et al. Instantiating metalevel argumentation frameworks. In: Proceedings of the 7th international conference on computational models of argument. 2018, pp. 97 108. [223] Smoller JW, Lunetta KL, Robins J. Implications of comorbidity and ascertainment bias for identifying disease genes. Am J Med Genet 2000;96:817 22. [224] Salmasian H, Freedberg DE, Friedman C. J Am Med Inf Assoc 2013;20:e239 42.
[225] De Groot V, Beckerman H, Lankhorst GJ, et al. How to measure comorbidity. A critical review of available methods. J Clin Epidemiol 2003;56:221 9. [226] Friedman C, Shagina L, Lussier Y, et al. Automated encoding of clinical documents based on natural language processing. J Am Med Inf Assoc 2004;11:392 402. [227] Kansagara D, Englander H, Salanitro A, et al. Risk prediction models for hospital readmission: a systematic review. JAMA 2011;306:1688 98. [228] Singla J, Grover D, Bhandari A. Medical expert systems for the diagnosis of various diseases. Int J Computer Appl 2014;93(7):36 43. [229] Nohria R. Medical expert system-A comprehensive review. Int J Computer Appl 2015;130(7):44 50. [230] Bursuk E, Demirci S, Korpinar MA, et al. Expert system in medicine and its application at pulmonary diseases. Med Sci Discovery 2016;3(11):342 9. [231] Broadbent E, Garrett J, Jepsen N, et al. Using robots at home to support patients with chronic obstructive pulmonary disease: pilot randomized controlled trial. JMIR 2018;20(2). [232] Bradley Eh, Canavan M, Rogan E, et al. Variation in health outcomes: the role of spending on social services, public health, and health. Health Aff 2016;35(5). [233] Social determinants of health. Healthy housing for a sustainable and equitable future: the WHO Housing and health guidelines. World Health Organization; November 27, 2018.
[234] Bresnick J. What are the social determinants of population health? Health IT Analytics; August 19, 2017. [235] Belcher K. From $600 M to $6 Billion, artificial intelligence systems poised for dramatic market expansion in healthcare. Frost & Sullivan; January 5, 2016. [236] Delaney K. Do the math: what happens when you add the next big thing to the next big thing? Accenture; January 19, 2018. [237] Olathe. Project: analysis of medical care data base to identify comorbidity patterns. K State University; March 27, 2019. [238] The National Center for Chronic Disease Prevention and Health. What can people who have arthritis and comorbidities do? Centers for Disease Control and Prevention; May 16, 2018. [239] Theis KA, Brady TJ, Helmick CG. No one dies of old age anymore: a coordinated approach to comorbidities and the rheumatic diseases. Arthritis Care Res 2016;. Available from: https://doi.org/10.1002/acr.23114. [240] Eustice C. Overview of comorbidity and arthritis. VeryWellHealth; May 22, 2019 [241] Correll CU, Detraux J, De Lepeleire J, et al. Effects of antipsychotics, antidepressants, and mood stabilizers on risk for physical diseases in people with schizophrenia, depression, and bipolar disorder. World Psychiatry: J World Psychiatr Assoc (WPA) 2015;14(2):119 36. [242] Stubbs B, Mueller C, Gaughran F, et al. Predictors of falls and fractures, leading to hospitalization in people with schizophrenia spectrum disorder: a large representative cohort study. Schizophrenia Res 2018;201:70 8. [243] Lankarani MM, Assari S. Association between number of comorbid medical conditions and depression among individuals with diabetes; race and ethnic variations. J Diabetes Metab Disord 2015;14:56. [244] Akinyemiju T, Jha M, Xavier J, et al. Disparities in the prevalence of comorbidities among US adults by state Medicaid expansion status. Prev Med 2016;88:196 202. [245] FDA clears the first autonomous telemedicine robot for hospitals. Kurzweil Accelerating Intelligence; January 28, 2013. [246] About ReWalk Robotics. ReWalk.com; 2019. [247] 4 Ways RPA in healthcare can streamline critical processes. HealthSystem Blog; February 22, 2019. [248] Davincisurgery.com. Intuitive; 2019.
[249] Cullity K. Robotics for the business of healthcare - are you ready for RPA? Culbert Healthcare Solutions; August 21, 2018. [250] Schulman J, Gupta A, Sibi Venkatesan A. A case study of trajectory transfer through non-rigid registration for a simplified suturing scenario. Department of Electrical Engineering and Computer Sciences, University of California at Berkeley. IROS; 2013. [251] Hirsch A. Johns Hopkins scientist programs robot to perform ‘soft tissue’ surgery. HUB; May 6, 2016. [252] Fard MJ, Ameri S, Chinnam RB, et al. Machine learning approach for skill evaluation in robotic-assisted surgery. In: Proceedings of the world congress on engineering and computer science 2016, vol. I. WCECS 2016; October 19 21, 2016. [253] Stuart Campbell. The impact of robotics on neurosurgery. Renishaw; October 11, 2016. [254] Zheng S, Lu JJ, Ghasemzadeh N, et al. Effective information extraction framework for heterogeneous clinical reports using online machine learning and controlled vocabularies. JMIR Med Inform; June 5, 2017. [255] Castello Ferrer E. Decentralized AI and robotics using blockchain technology. MIT Research Topic; 2019. [256] Eastward G. How robots are extending the scope of IoT applications. Innovation Enterprise; June 13, 2019. [257] Internet of Things (IoT) in healthcare: benefits, use cases, and evolutions. I-Scoop; 2018. [258] Allocate A, Burghard C, Clap M. Worldwide healthcare IT 2017 predictions. IDC FutureScape; November 2016. [259] Telepresence Robots Market Research Report - Global Forecast to 2023. Market Research Future; May 2019. [260] Telepresence Robots Market 2019 Global Industry Size, Trends, Gross Margin Analysis, Development Status, Sales Revenue by Leading Players Forecast to 2023. Reuters Plus; February 11, 2019. [261] Tomlinson Z. 15 Medical robots that are changing the world. Interesting Engineering; October 11, 2018. [262] Boston Children’s Hospital. A first in medical robotics: autonomous navigation inside the body. Robotic catheter, using a novel sensor informed by AI and image processing, makes its way to a leaky heart valve. ScienceDaily; April 24, 2019. [263] Mejia N. Robotic process automation (RPA) in healthcare - current use-cases. Emerj; May 30, 2019. [264] ITSC Blog. Robotic process automation (RPA) - it’s not about robotics! Information Technology and Supply Chain; January 30, 2019.
[265] Matari´c MJ. Robotics and autonomous systems center. USC; 2015. [266] Tapus A, Tapus C, Mataric MJ. User-robot personality matching and assistive robot behavior adaptation for post-stroke rehabilitation therapy. Intell Serv Robot: Multidiscip Collaboration Socially Assistive Robotics 2008;1:169 83. [267] Greczek J, Kaszubski E, Atrash A, et al. Graded cueing feedback in robot-mediated imitation practice for children with autism spectrum disorders. IEEE; October 20, 2014. doi:10.1109/ ROMAN.2014.6926312. [268] Armiitage H. Countdown to big data in precision health: robots that are here to help. Stanford Medicine; April 8, 2019. [269] Mills T. AI in health care: the top ways AI is affecting the health care industry forbes technology council; Jun 11, 2019. [270] Singh Dang S. Artificial intelligence in humanoid robots. Cognitive World; February 25, 2019. [271] Szwartz G. The AI will see you now: how cognitive computing is transforming medicine. Wired; 2018.
[272] Wiggers KL. Google’s DeepMind is using neural nets to explore dopamine’s role in learning. AI; May 14, 2018. [273] Special Issue. Enabling technologies in health engineering and informatics for the new revolution of healthcare 4.0. IEEE J Biomed Health Inform 2019;. [274] Jeter Hansen A. Artificial intelligence in medicine—predicting patient outcomes and beyond. Stanford Medicine Scope; May 8, 2018. [275] Poropatich R. Pitt, CMU receive Department of Defense Contracts to create autonomous robotic trauma care system. Carnegie Mellon University. News 2019;. [276] NIH Stem Cell Information Home Page. In stem cell information [World Wide Web site]. Bethesda, MD: National Institutes of Health, U.S. Department of Health and Human Services; September 22, 2019. [277] Scudellari M. How iPS cells changed the world. Nature 2016;. [278] Berg J. Gene-environment interplay. Science 2016;354(6308):15. [279] Kabir H, O’Connor MD. Stems cells, big data, and compendium-based analyses for identifying cell types, signaling pathways, and gene regulatory networks. Biophys Rev 2019;(1):41 50. [280] Bhartiya D. The need to revisit the definition of mesenchymal and adult stem cells based on their functional attributes. Stem Cell Res Ther 2018;9:78. Available from: https://doi.org/10.1186/s13287-0180833-1. [281] Ratajczak MZ, Ratajczak J, Kucia M. Very small embryonic-like stem cells (VSELs). Circ Res 2019;124:208 10. Available from: https://doi.org/10.1161/CIRCRESAHA.118.3. [282] Bhartiya D, Shaikh A, Anand S, Patel H, Kapoor S, Sriraman K, et al. Endogenous, very small embryonic-like stem cells: a critical review, therapeutic potential and a look ahead. Hum Reprod Update 2016;23:41 76. Available from: https://doi.org/10.1093/humupd/dmw030. [283] Bhartiya D. Clinical translation of stem cells for regenerative medicine a comprehensive analysis. CIRCRESAHA; March 14, 2019. Available from: https://doi.org/10.1161/118.313823. [284] Global Institute of Stem Cell Therapy and Research. Goldstar.com; 2019. [285] Coleman L. Here’s how blockchain, sports and science are disrupting the healthcare game. Forbes Magazine; August 25, 2019. [286] Garg S, Williams NL, Ip A, et al. Clinical integration of digital solutions in health care: an overview of the current landscape of digital technologies in cancer care. JCO Clin Cancer Infomatics 2018;2:1 9. [287] Bennett AV, Reeve BB, Basch EM, et al. Evaluation of pedometry as a patient-centered outcome in patients undergoing hematopoietic cell transplant (HCT): a comparison of pedometry and patient reports of symptoms, health, and quality of life. Qual Life Res 2016;25:535 46. [288] Lahm Jr RJ, Lockwood FS. Innovations to come could help address the small business health care access and affordability problem. In: Institute for global business research conference proceedings, vol. 2. Number 1. April 6, 2018. [289] Dodziuk H. Applications of 3D printing in healthcare. Pol J Thorac Cardiovasc Surg 2016;13(3):283 93. Available from: https://doi.org/10.5114/kitp.2016.62625. [290] Aydin O, Zhang X, Nuethong S, et al. Neuromuscular actuation of biohybrid motile bots. PNAS 2019;. Available from: https://doi.org/10.1073/pnas.1907051116. [291] Saif T, Bashir R. Researchers build microscopic biohybrid robots propelled by muscles, nerves. National Science Foundation; September 17, 2019. [292] SuvorovYa RE, Kim SA, Gisina M, et al. Surface molecular markers of cancer stem cells: computation analysis of full-text scientific articles. Bull Exp Biol Med 2018;166(1):135 40. 
[293] Vijay SAA, Ganesh Kumar P. Fuzzy expert system based on a novel hybrid stem cell (HSC) algorithm for classification of microarray data. J Med Syst 2018;42(4):61. Available from: https://doi.org/10.1007/ s10916-018-0910-0.

Chapter 6 • Current AI applications in medical therapies and services

289

[294] Sungwoong J, Kim S, Ha S, et al. Magnetically actuated microrobots as a platform for stem cell transplantation. Sci Robot 2019;4(30):eaav4317. Available from: https://doi.org/10.1126/scirobotics.aav4317. [295] Choi H. Introduced a new paradigm of cell transplantation with scaffold microrobots. DGist Research News; June 6, 2019. [296] Precision Medicine at Columbia University. Stem Cells. Columbia University; 2019. [297] Hockemeyer D, Jaenisch R. Induced pluripotent stem cells meet genome editing. Cell Stem Cell 2016;18:573 86. [298] Del Sol A, Thiesen HJ, Imitola J, et al. Big-data-driven stem cell science, and tissue engineering: vision and unique opportunities. Cell Stem Cell 2017;. Available from: https://doi.org/10.1016/j.stem. [299] Tenneille EL, Kujak A, Rauti A, et al. 20 Years of human pluripotent stem cell research: it all started with five lines. Cell Stem Cell 2018;23(5):644. Available from: https://doi.org/10.1016/j. stem.2018.10.009. [300] Biologics. Approved cellular and gene therapy products. U.S. Food and Drug Administration; March 29, 2019. [301] NIH Stem Cell Information Home Page. In stem cell information [World Wide Web site]. Bethesda, MD: National Institutes of Health, U.S. Department of Health and Human Services; September 24, 2019. [302] Bacterial infections after the use of stem cell products. Center for Disease Control and Prevention; May 9, 2019. [303] Frow EK, Brafman DA, Muldoon A, et al. Characterizing direct-to-consumer stem cell businesses in the Southwest United States. Stem Cell Rep 2019;. Available from: https://doi.org/10.1016/j. stemcr.2019.07.001. [304] The Human Genome Project. National Human Genome Research Institute; Update on January 9, 2019. [305] Mullainathan S, Spiess J. Machine learning: an applied econometric approach. J Economic Perspect 2017;31(2):87 106. Available from: https://doi.org/10.1257/jep.31.2.87. [306] Westry T. New bioinformatics program is the first of its kind in the state. UAB News; July 18, 2018. [307] Hulsen T, Jamuar SS, Moody AR, et al. From big data to precision medicine. Front Med 2019. Available from: https://doi.org/10.3389/fmed.2019.00034. [308] Camacho DM, Collins KM, Powers RK, et al. Next-generation machine learning for biological networks. Cell 2018;173(7):1581 92. Available from: https://doi.org/10.1016/j.cell.2018.05.015. [309] Bresnick J. Understanding the many V’s of healthcare big data analytics. HealthITAnalytics; June 5, 2017. [310] Wamba SF, Akter S, Edwards A, et al. How “big data” can make a significant impact: findings from a systematic review and a longitudinal case study. Int J Prod Econ 2015;165:234 46. [311] Navarro FCP, Mohsen H, Yan C, et al. Genomics and data science: an application within an umbrella. Genome Biol 2019;20. [312] Stephens ZD, Lee SY, Faghri F, et al. Big Data: astronomical or genomic? PLoS Biol 2015;13:e1002195. [313] Quick J, Loman NJ, Duraffour S, et al. Real-time, portable genome sequencing for Ebola surveillance. Nature 2016;530:228 32. [314] DePalma A. Sanger sequencing: still the gold standard? LabManager; November 5, 2018. [315] Chow E. Next-generation sequencing. iBiology; November 2018. [316] Kohane IS. Using electronic health records to drive discovery in disease genomics. Nat Rev Genet 2011;12(6):417 28. Available from: https://doi.org/10.1038/nrg2999. [317] Pendergrass SA, Crawford DC. Using electronic health records to generate phenotypes for research. December 5, 2018. Available from: https://doi.org/10.1002/cphg.80.

290

Foundations of Artificial Intelligence in Healthcare and Bioscience

[318] Movaghar A, Page D, Brilliant M, et al. Data-driven phenotype discovery of FMR1 premutation carriers in a population-based sample. Sci Adv. 21 2019;5(8):eaaw7195. [319] Taylor B. Machine learning in genomics research and healthcare. Medium; January 28, 2019. [320] Clark M, Hildreth A, Batalov S, et al., Diagnosis of genetic diseases in seriously ill children by rapid whole-genome sequencing and automated phenotyping and interpretation. Science Translational Medicine; 2019. [321] Tirrell M. Unlocking my genome: was it worth it? Personalized Medicine. CNBC; December 10, 2015. [322] Schumacher A. Blockchain-powered DNA data marketplace will revolutionize precision medicine. Dataconomy; January 16, 2019. [323] Haghi M, Thurow K, Stoll R. Wearable devices in medical internet of things, scientific research, and commercially available devices. Healthc Inf Res 2017;23:4 15. [324] Mantua J, Gravel N, Spencer RM. Reliability of sleep measures from four personal health monitoring devices compared to research-based actigraphy and polysomnography. Sensors (Basel), 16. 2016. p. 646. [325] Kiana T, Michael A. Wearable technology and wearable devices, everything you need to know. Bloomberg; March 26, 2014. [326] Alderson L. Telehealth bringing Personalized Medicine closer. HealthManagement.org 2019;19:3. [327] , https://www.cleargenetics.com/.. Clear Genetics; 2019. [328] Schmidlen T. Collaborative development of chatbots as an innovative tool in the delivery of scalable genomic counseling. Geisinger Clinic. PAGC Annual Spring Meeting Philadelphia, PA; May 4, 2018. [329] MyCode Community Health Initiative. Geisinger Health; 2019. [330] Dimic K. Chatbots in precision medicine. Medium; June 23, 2019. [331] Guan M, Cho S, Petro R, et al. Natural language processing and recurrent network models for identifying genomic mutation-associated cancer treatment change from patient progress notes. JAMIA Open 2019;2(1):139 49. [332] Cowie MR. Electronic health records to facilitate clinical research. ClRes Cardiology 2017;106:1 9. [333] Sen A, Banerjee A, et al. Clinical decision support: converging toward an integrated architecture. J Biomed Inform 2012;45(5):1009 17. [334] Khoury MJ. Knowledge integration at the center of genomic medicine. Genet Med 2012;14(7):643 7. [335] Tsoukas LI, Liaros A. The use of the expert system of composite risk factors in breast cancer screening. Stud Health Technol Inform 1997;43:859 63. [336] Sena A, Kawamb A, Dattab A. Emergence of DSS efforts in genomics: past contributions and challenges. Decis Support Syst 2019;116:77 90. [337] Poutintsev F. Nanorobots might hold the key to a radically extended life span. MediumPredict; August 4, 2018. [338] Nanotechnology and radically extended life span. ,http://www.lifeextension.com/magazine/2009/1/ Nanotechnology-Radically-Extended-Life-Span/Page-01.; 2017 [accessed 31.10.17]. [339] Cleeran E, Van der Heyden J, Brand A, Van Oyen H. Public health in the genomic era: will Public Health Genomics contribute to significant changes in the prevention of common diseases? Arch Public Health 2011;69:8. Available from: https://doi.org/10.1186/0778-7367-69-8. [340] Khoury M, Bowen M, Burke W, et al. Current priorities for public health practice in addressing the role of human genomics in improving population health. Am J Prev Med 2011;40:486 93. Available from: https://doi.org/10.1016/j.amepre.2010.12.009. [341] Molster CM, Bowman FL, Bilkey GA. The evolution of public health genomics: exploring its past, present, and future. 
Front Public Health 2018;6:247. Available from: https://doi.org/10.3389/ fpubh.2018.00247.

Chapter 6 • Current AI applications in medical therapies and services

291

[342] Big Data to Knowledge. National Institutes of Health. U.S. Department of Health and Human Services; August 2, 2019. [343] Mouratidis Y. AI unlocks the mysteries of clinical data. Forbes; April 2019. [344] Herper M. 23andMe rides again: FDA clears genetic tests to predict disease risk. Forbes; April 6, 2017. [345] Global precision medicine market to reach $141.70 billion by 2026, reports BIS Research. BIS Research, Cision PR Newswire; December 15, 2017. ,prnewswire.com.. [346] Bioinformatics. Techopedia; 2019. [347] Sleator RD. An overview of the current status of eukaryote gene prediction strategies. Gene. 2010;461 (1 2):1 4. Available from: https://doi.org/10.1016/j.gene.2010.04.008. [348] Martins PVL. Gene prediction using Deep Learning. U.Porto; July 22, 2018. [349] Steiling K, Christenson S. Tools for genetics and genomics: gene expression profiling. UpToDate. Wolters Kluwer; February 21, 2019. [350] Blair E, Tibshirani R. Machine learning methods applied to DNA microarray data can improve the diagnosis of cancer. SIGKDD Explorations 2019;5(2):48. [351] Press Release. New preventive genomics clinic launches at the Brigham Brigham Health; August 19, 2019. [352] Public Health Genomics Knowledge Base. Public health genomics and precision health knowledge base (v6.0). CDC.gov; October 3, 2019. [353] Boccia S, McKee M, Adany R, et al. Beyond public health genomics: proposals from an international working group. Eur J Public Health 2014;24:877 9. [354] Burke W, Burton H, Hall A, et al. Extending the reach of public health genomics: what should be the plan for public health in an era of genome-based and “personalized” medicine? Genet Med 2010;12:785 91. [355] Bowen M, Kolor K, Dotson W, et al. Public health action in genomics is now needed beyond newborn screening. Public Health Genomics 2012;15:327 34. [356] Khoury M, Bowen M, Burke W, et al. Current priorities for public health practice in addressing the role of human genomics in improving population health. Am J Prev Med 2011;40:486 93. [357] Khoury MJ, Bowen MS, Clyne M, et al. From public health genomics to precision public health: a 20year journey. Genet Med 2017;20:574 82. [358] Weeramanthri TS, Dawkins HJS, Baynam G, et al. Editorial: precision public health. Front Public Health 2018;6:121. [359] Ethically Aligned Design. IEEE Standards. N.p., 2017. Web; January 27, 2017. [360] Kish LJ, Topol EJ. Unpatients—why patients should own their medical data. Nature.com. N.p.; 2016. [361] Peril and Promise, Emerging Technologies and WMD, Emergence and Convergence Workshop Report, Center for the Study of Weapons of Mass Destruction, National Defense University; October 13 14, 2016. [362] Pauwels E, Vidyarthi A. Who will own the secrets in our genes? A U.S. Intelligence and Genomics. The Wilson Center; November 2017.

China Race in Artificial

7 AI applications in prevalent diseases and disorders

Notwithstanding the relevance and importance of the information in Chapters 1–6 of this book, I would not be at all surprised if you consider the information in this Chapter 7 the most pertinent to your needs and interests. The reason is that "now things get personal." It is virtually certain that "something" in this Chapter, probably multiple things, will relate directly to your career activities, your personal health, and the health and wellness of your loved one(s). Indeed, even if (hopefully) you escape every possible relationship to any of the diseases or disorders we will be discussing, by your very existence you will have an intimate relationship with some of the major categories, such as immunology, genetics, nutrition, and aging, just to name a few.

In the spirit of the "guidebook" approach that we have attempted to follow throughout this book, we will continue with that format of identifying major categories (in the case of this Chapter 7, "prevalent diseases and disorders") and then addressing subdivisions within each category. The major categories that we will discuss in this Chapter include all of the diseases (pathologies and pathophysiologies) and disorders (abnormalities without a direct pathological basis) that affect the human organism. These categories will include:

1. Immunological and autoimmune disease;
2. Genetics and genomics disorders;
3. Cancers;
4. Vascular (cardiovascular and cerebrovascular);
5. Diabetes (Types 1 and 2);
6. Neurological and sensory disorders and diseases;
7. Musculoskeletal system and arthritis;
8. Integumentary system (skin, hair, nails) and exocrine glands (glands with ducts);
9. Endocrine (hormone) glands;
10. Digestive and excretory system;
11. Renal system and urinary system;
12. Respiratory (pulmonary) system;
13. Reproductive system;
14. Physical injuries, wounds and disabilities;
15. Infectious diseases (bacterial, viral, etc.);
16. Human development, aging, degeneration, and death;
17. Chronic disease;
18. Mental and behavioral disorders;
19. Nutrition and exercise (preventive care).


The order of clinical categories in the list above is not random. I have attempted to prioritize it, with the first 2 categories (immunology and genetics) representing universal considerations in current and future health care as well as the 2 areas in health care most influenced by AI. After that, categories 3 through 7, while also heavily influenced by AI, represent the most prevalent diseases for which we are at risk. By virtue of the importance of their AI applications and their morbidity and mortality, we will address them in some depth. Categories 8 through 13 include all of the remaining systems of the human body, which have significant implications in health and wellness as well as numerous AI applications. And finally, categories 14 through 19 represent relevant and universal components of health and wellness, again influenced substantially by AI applications.

In that this book is a guidebook focused more on AI applications and influences in health care (and, de facto, disease care), exhaustive descriptions of each of the subdivisions within each category should be considered beyond the scope of the book. I will, however, provide limited but adequate background information (anatomy, physiology, pathology, clinical descriptions) on each subject to allow for maximal understanding by readers at almost any level of medical knowledge and experience. The AI portions of the discussions should be understandable by relying on the information provided in Chapters 1–3 and, in fact, throughout the entire book.

Finally, as I mentioned above, "now things get personal." Whereas the information in the previous Chapters was strictly technical (hopefully, relatively understandable) and objective, in this Chapter 7 I will try to "soften" the discussion with a somewhat more personal, casual style. By its very nature, the discussion will continue to include a fair amount of medical and scientific terminology, but I will "soften" it with corresponding, non-technical descriptions where possible. Please don't misinterpret any such explanations as condescending or patronizing. They are meant entirely in the spirit of providing a comfortable level of discussion for all.

7.1 Immunology and autoimmune disease

In deciding the order in which prevalent disease categories should be approached, it becomes clear, very quickly, that immunology and genetics are the principal categories among the collective aggregate of topics to be discussed. Both permeate every aspect of human health and disease; both are at the forefront of current and future health care; both are unquestionably the most "disruptive" forces in disease diagnosis, treatment, and current and future research efforts; and finally, both are indelibly tied to current and future AI technologies. I have arbitrarily chosen immunology to start with, I guess, because of its already established, ubiquitous relationship to virtually all (and yes, I mean all) of the conditions of human disease, health, and wellness. That is not to say that genetics and genomics are not also intimately associated with all of them as well, but immunology had a bit of a chronological head start as a human health science. Genetics and genomics can point to their rise in health care with the completion of the Human Genome Project in 2003 [1]. Modern immunology, on the other hand, became deeply involved in human disease and health in 1975 (earlier in some aspects, but let's not quibble) with an expanded understanding of T-lymphocytes and


monoclonal antibodies [2]. That knowledge base, as well as that of genetics and AI, has grown to support a level of health and wellness today, even a fraction of which we could not have accomplished just a short time ago, especially when considering the COVID-19 pandemic.

7.1.1 Pathogenesis and etiologies of immunology and autoimmune disease

I always like to start a discussion on immunology with a maxim and a paradox that together capture the essence of immunology. The maxim is: "Immunology is a battle of 'self' versus 'non-self.'" And the paradox is: "Immunology is our best friend and our worst enemy." Some simple explanations will make these otherwise ambiguous statements understandable.

In this wonderful world of ours, there is "you"...and everything else. Now, think of "you" as "self" and everything else as "non-self." Whether that "non-self" is a substance, chemical, infectious agent (pathogen, e.g., virus), toxin (airborne, ingested, contact), or even a non-substance like physiological, mental, emotional, or physical (injury) stress, virtually anything external to "you," your body ("self") interprets these "non-self" entities as "foreign" or, technically, as an "antigen."

Let's assume you're in good health, well-nourished, and in good physical condition with a sound mind in a sound body. Notwithstanding a world of antigens that you have to deal with 24/7, the power of a healthy (innate) immune system is awesome. It is kind of like a "yin and yang" metaphor where "self" is good and "non-self" (antigen) is evil. Your body does not like non-self antigens. So, it uses its natural ("innate") defenses, like anatomical and chemical features (skin, tears, mucous membranes, anti-infectious barriers, enzymes, cytokines) and a series of cellular elements (antibodies and white blood cells [WBCs]). It also uses humoral components (blood-related serum and fluids that carry WBCs and B and T lymphocytes) and chemical components like cytokines. Together, in an exquisitely complex process, your body recognizes an antigen and forms an "antigen-presenting complex" (the antigen bound to a specialized WBC [macrophage] and a TH [T-helper] lymphocyte); creates antibodies (generated by B cells) and binds them to it; processes it through a series of cellular (T and B cell generated antibody) and humoral (cytokine) functions; and eliminates it. This protection from the constant 24/7 barrage of antigens is called our "adaptive immune response" [3]. Indeed, it is "our best friend."

Fundamentally, that is what your innate (or natural) immune system and immunology are all about, i.e., a battle of self versus non-self antigens. Wouldn't it be great if our body won all the battles? Need I say, life doesn't work that way. Sometimes, the innate immune system can't quite handle the load. Maybe, for some reason, it's compromised ("immunocompromised") or weakened or suppressed ("immunosuppression"). Perhaps the antigen is not being removed effectively (a persistent cause), or it keeps recurring (continuing re-exposure) as the innate system tries to eliminate it. Or maybe the antigen is too abundant or too pernicious (virulent) for the innate immune response to overcome. In such conditions, after a few days to a week of feeling "not so great," the adaptive immune system, the body's next and more aggressive "go-to" system, begins to demonstrate its strength.


Working with the innate response (using all of its tools plus additional cellular and humoral weapons), adaptive immunity processes the antigen in a more specific approach. It utilizes highly specialized TH ("helper"), Ts ("suppressor"), and Tc ("cytotoxic") lymphocytes to produce a progressive attack by an array of neutralizing B lymphocytes, antibodies, and cytokines. It also sets up an identification process (T and B memory cells), which, along with the antibodies, provides lifetime protection (we think?) against the specific antigen. All in all, this acquired immune system is a great defender and protector (a "friend")...to a point.

We can look at acquired immunity as a race to eliminate the "bad guys" (antigens), a competition which, in most cases (given an otherwise healthy person), it wins. If, however, the underlying health of the patient (note how I move from "person" to "patient" here) is not adequate to sustain the activity of the acquired immune response, things could deteriorate or "dysregulate." As we described above, the immune response begins using a lot of tools, i.e., cellular components and chemicals in amounts not normal ("pathophysiological") for your body. Their regular ("physiological") activity works diligently to resolve the "pathological" (disease) process in your body, but those efforts also produce abnormal byproducts called "pro-inflammatory cytokines." Accumulation of those byproducts is a basis for the pathophysiological process called "acute inflammation." (Note: All inflammation is characterized in medical terminology by the suffix "-itis." Any condition mentioned in this text, heretofore or hereafter, under any disease category with the suffix "-itis" should be considered an inflammation, acute or chronic.)

Another type of acute inflammation is caused by a specific form of antigen called an "allergen," which can activate a specialized kind of immune protein (immunoglobulin E or IgE antibody) response, producing the infamous Type 1 (immediate) allergic or hypersensitivity response. Examples of this response include hay fever, eczema, hives, asthma, food allergy, and reactions to dust, pollen, and insect bites and stings, sometimes reaching severe levels called anaphylaxis and anaphylactic shock. Like its antigen cousin, the allergen can be inhaled, ingested, or enter through the skin. After a susceptible person is exposed to an allergen, the body starts producing a large quantity of IgE antibodies. This results in the recurrence of the allergic response (sometimes with increasing intensity) with each re-exposure to the allergen. Included among the cytokines of these Type 1 reactions is one called histamine which, along with the other inflammatory symptoms, produces itching.

Ultimately, an active immune process, in conjunction with the identification of the cause (diagnosis) and removal of that cause, is the "cure." Done promptly, health and wellness will result. But, if not resolved within a reasonable period of time (weeks to months at most), the immune system advances to a condition called "chronic inflammation," different from acute inflammation in its clinical symptoms (often none at all versus those of acute inflammation) and its cellular pathology. It should also be noted that, though poorly understood, chronic inflammation can develop spontaneously, that is, without an antigen and an acute inflammatory precursor (cause) episode.
According to most medical experts, chronic inflammatory disease is the progenitor or originating cause of all (emphasis on all) the major human disease categories [4] (Fig. 7–1). The clinical basis for this belief lies in the destructive nature of the persistent inflammatory


mediators and cellular components damaging tissue throughout the body, especially the blood vessel walls (perivasculitis) supporting virtually every organ system. Over time, leakage of WBCs through the weakened blood vessel walls (infiltration or diapedesis) occurs, disrupting normal tissue function (protein synthesis) and even breaking down the DNA of the tissue, producing loss of bodily integrity and function and manifesting as recognizable chronic diseases (discussed later in this Chapter). Thus, per our stated paradox, immunology (specifically, chronic inflammation) could be "our worst enemy." Sadly, COVID-19 is a painful example of this phenomenon.

Antigens, by definition, are "foreign," but as we learn more about the immune system, it has become apparent that "foreign" may not be entirely synonymous with "non-self." When, for some unknown reason, the body incorrectly identifies itself (self) as foreign (called "autoantigenicity"), it initiates an acquired immune response directed at "itself." Stated another way, the immune system has the potential to produce an "autoimmune response" [5]. Autoimmune disease develops after further immune system dysregulation, in both the innate and acquired immune systems [6].

There are several theories (Table 7–1) as to the cause of this idiosyncratic response. But whatever the cause, the response has created an entirely separate and extensive disease category referred to as "autoimmune disease." Table 7–2 reveals an impressive (and ominous) list of prevalent autoimmune diseases.

FIGURE 7–1 Chronic inflammation. Through its inflammatory mediators and cellular components that damage tissue throughout the body, especially the blood vessel (perivasculitis) walls (with infiltration and diapedesis) supporting virtually every organ system, chronic inflammatory disease is considered the progenitor or originating cause of all (emphasis on all) the major human disease categories.


Table 7–1 Theories on etiologies and pathogenesis of chronic inflammation and autoimmune disease.
1. A prolonged inflammatory process from failure to eliminate an antigen;
2. Part of the patient's genome;
3. Environmental factors to which the patient is exposed over time;
4. Increasing release of pro-inflammatory cytokines;
5. An abnormal immune response to "self":
   a. Disruption of homeostasis (Yin Yang);
   b. Innate autoantigens from the inflammatory process;
   c. "Rogue" antigen-presenting cells (APCs).

Table 7–2 Listing of prevalent autoimmune diseases.
• Ankylosing spondylitis
• Lupus
• Rheumatoid arthritis
• Juvenile arthritis
• Scleroderma
• Dermatomyositis
• Behcet's disease
• Celiac disease
• Crohn's disease
• Ulcerative colitis
• Sjogren's syndrome
• Reactive arthritis
• Mixed connective tissue disease
• Raynaud's phenomenon
• Giant cell arteritis
• Temporal arteritis
• Polymyalgia rheumatica
• Polyarteritis nodosa
• Polymyositis
• Takayasu arteritis
• Granulomatosis with polyangiitis
• Vasculitis
• Alopecia areata
• Antiphospholipid antibody syndrome
• Autoimmune hepatitis
• Type 1 diabetes
• Graves' disease
• Guillain-Barre syndrome
• Hashimoto's disease
• Hemolytic anemia
• Idiopathic thrombocytopenic purpura
• Inflammatory bowel disease
• Multiple sclerosis
• Myasthenia gravis
• Primary biliary cirrhosis
• Psoriasis
• Vitiligo

Source: Autoimmune disease: Johns Hopkins Health; 2019.


7.1.2 Clinical presentations in immunology and autoimmune disease

The signs and symptoms of immune diseases vary widely based on their underlying etiologies (acute inflammation, allergy, chronic inflammation) as well as the variety of disorders associated with the autoimmune disease categories. In the case of acute inflammation, indications of the cause are frequently apparent in the patient's history. Determination and removal of the antigenic ("non-self") source of the inflammation is effectively 90% of the cure. Such removal can range from simple hygiene, to an antibiotic for an infectious antigen (external or internal), to antihistamines and decongestants (for allergy), to injury repair and stress reduction therapies. Somewhat ironically, it is largely the healing process associated with the immune response (i.e., inflammation) that produces the acute or, in the case of chronic inflammation, extended symptoms.


The clinical presentation of acute inflammation is usually of rapid onset (hours to days for acute inflammation and minutes to hours for allergic IgE responses). The classic inflammatory response, as mentioned above, was described originally in 4 simple words in the first century AD by the Roman scholar Celsus [7]:

• Rubor (redness or hyperemia from vasodilation);
• Calor (warmth or fever, cytokine induced);
• Dolor (pain from nociceptor nerve fibers); and
• Tumor (swelling or edema/induration from fluid and cellular infiltration).
(Sometimes also mentioned: functio laesa, or loss of function, in chronic inflammation.)

With the allergic and hypersensitivity response, symptoms can also include itching, sneezing, and congestion (from histamine release). And, as we mentioned above, in their most severe form, allergy or hypersensitivity can produce a life-threatening condition called anaphylaxis and anaphylactic shock [8].

In the case of chronic inflammation, signs and symptoms can vary considerably. As opposed to acute inflammation, chronic inflammation can be difficult to identify without a history of precipitating acute inflammation. A chronically ill patient (chronic illnesses are discussed separately, later in this Chapter) can be assumed to have chronic inflammation. As mentioned previously, "...chronic inflammatory disease is the progenitor or originating cause of all (emphasis on all) the major human disease categories" [4,9] (Fig. 7–1). Non-specific symptoms associated with chronic inflammation include: [10]

• Body pain;
• Fever (often diagnostically referred to as "fever of unknown origin" or FUO);
• Constant fatigue and insomnia;
• Depression, anxiety, and mood disorders;
• Gastrointestinal complications like constipation, diarrhea, and acid reflux;
• Weight gain;
• Frequent infections.

In the case of immunosuppression or immunocompromised disease conditions, the symptoms are usually indirect, as in increased illnesses, risk of infection, blood disorders, digestive problems, and delayed growth and development [11].

Regarding the array of autoimmune diseases (Table 7–2), there are more than 80 identified, affecting more than 50 million Americans (according to the American Autoimmune Related Diseases Association, AARDA), of whom 75% are women [12]. Table 7–2 (above) listed the more prevalent autoimmune diseases, and Table 7–3 identifies the top 10, which are undoubtedly quite familiar to most. Relative to the specific conditions (to be discussed separately throughout this Chapter), symptoms range from no symptoms at all to general malaise to severe illness and risk of death (as in the case of many COVID-19 patients).


Table 7–3 Ten (10) most common autoimmune diseases.
• Rheumatoid arthritis
• Systemic lupus erythematosus (SLE)
• Inflammatory bowel disease (IBD)
• Crohn's disease
• Multiple sclerosis (MS)
• Type 1 diabetes mellitus
• Guillain-Barre syndrome
• Psoriasis
• Graves' disease
• Myasthenia gravis

Source: Autoimmune diseases: types, symptoms, causes, and more. HealthLine; 2019.

The non-specific symptoms most often associated with autoimmune disease include: [13]

• Fatigue;
• Achy muscles;
• Swelling and redness;
• Low-grade fever;
• Trouble concentrating;
• Numbness and tingling in the hands and feet;
• Hair loss;
• Skin rashes.

7.1.3 Current treatment approaches and AI applications in immunology and autoimmune disease

Inasmuch as acute inflammation can occur anywhere, internally or externally, the variety of treatment considerations is extensive, but the first treatment is always removal of the cause (the antigen). Short of that, treatment is palliative and directed to the involved site (joint, muscle, internal organ, skin, etc.) to reduce the inflammatory process (assuming the antigen has been removed) and mitigate the pain. Toward these ends, cold (wet or ice) compresses are enormously valuable for producing vasoconstriction (part of the inflammatory process being vasodilation of the associated blood vessels [i.e., rubor] to supply inflammatory cells to the site). Other palliative measures include analgesics, anti-inflammatories, nutritional and vitamin supplements (more on nutrition later in this Chapter), omega-3 sources, compression, stress reduction, and exercise.

The most popular forms of acute anti-inflammatory therapy are the common analgesic pain relievers: nonsteroidal anti-inflammatory drugs (NSAIDs such as aspirin, ibuprofen, or naproxen), corticosteroids (such as prednisone, or prednisolone for topical application), and acetaminophen (Tylenol). It should be noted, however, that steroidal and non-steroidal medications only suppress symptoms (a "masking effect") and are not curative.


Chronic inflammation and autoimmune diseases often require treatment directed at the tissue(s) and organ system(s) being adversely affected. Some of those organ-specific treatments are used in conditions related directly and indirectly to chronic inflammation and autoimmunities, cancers, type 1 diabetes, and many other autoimmune diseases.

More generalized treatment falls under the category of immunosuppressive and immunomodulating (suppressing or stimulating) therapies, sometimes referred to as "non-specific therapies." These therapies include drugs used to suppress immune system autoantigenicity in autoimmune diseases or to boost the immune response in cancer and anti-tumor therapies. Two main categories of these immune-modulating drug classes are the "disease-modifying anti-rheumatic drugs (DMARDs)" [14], including methotrexate, sulfasalazine, and leflunomide, and the "biologic drugs" [15], such as etanercept, adalimumab, abatacept, and many, many others. Some of the more popular non-specific immunotherapeutic agents include cytokines like interferons; interleukins; anti-TNF agents (TNF being tumor necrosis factor, a pro-inflammatory cytokine); monoclonal antibodies (drugs with the name suffix "-mab"); immunoglobulin antibodies; gene-based delivery systems; checkpoint inhibitors; and other immune system modulators (e.g., hydroxychloroquine, which received considerable attention for its "potential" inhibitory effects on coronavirus, somewhat disproven as of May 2020).

While the biologics include a large number of drug options, all attempt to regulate (increase or decrease) the immune response. The large variety of drugs reflects the extensive number of pro-inflammatory mediators in the chronic inflammatory and autoimmune process [16]. Each of the biologics has a distinct biochemical effect on different mediators. This gives treating physicians the option of "experimenting" with a variety of biologics to obtain a maximal drug effect. It also confuses the hell out of the public when they watch a TV commercial promoting a biologic drug for a specific autoimmune condition (e.g., rheumatoid arthritis) on 1 station, then change channels and see the same drug being promoted for an entirely different condition (e.g., psoriasis). The drugs are specific for individual mediators that occur in all of the autoimmune diseases, and thus the drugs are non-specific for any 1 disease.

As described above (and in Fig. 7–1), autoimmune disease may be organ-specific in its cellular damage (e.g., Crohn's disease, Graves' disease, etc.), or its harm may be disseminated among multiple organ systems throughout the body (e.g., systemic lupus erythematosus, giant-cell arteritis, rheumatoid arthritis). Thus, treatments for autoimmune disease beyond the drug classes mentioned above must be targeted, organ-specific therapies, or treatments disseminated throughout the body via cellular and genetic pathways. This is also the case in cancer therapies (to be discussed next under "Cancers"). Thus, multiple treatment options and approaches are common to both autoimmune diseases and cancers (effectively an autoimmune disease itself). Among the treatment options common to autoimmune diseases and cancers, besides the drug categories mentioned above, stem cell transplantation and immunogenomic (CRISPR-Cas9 and CAR-T cell) therapies have been well received.
They are rapidly approaching the standard of care ("precision medicine") for targeted, organ-specific treatments as well as disseminated and genomic therapies. The bioscience of both stem cells and genomics has been discussed in some depth in Chapters 5 and 6. A quick review by the


reader of the "Basic Bioscience" discussions for each of these topics might be valuable as we now begin to apply those technologies and AI's influences in their clinical applications, treatments, and current and future research.

Effectively, cellular therapies (stem cell transplantation and CAR-T cell replacement therapy) and genetic therapies (gene replacement and CRISPR-Cas9 gene editing) have similar applications, albeit with different therapeutic goals, in autoimmunologic diseases, genetic disorders, cancers, and numerous other congenital, acquired, and chronic conditions to be discussed in this Chapter. In the spirit of efficiency (and, hopefully, best understanding), we will describe each of the 3 principal therapies (stem cell transplantation, CRISPR-Cas9, and CAR-T) step by step in this immunologic section. Then we will revisit them downstream in the "Research and Future Considerations" subdivisions of subsequent disease categories in the Chapter wherein they, along with any other relevant therapies, are considered necessary relative to the specific disease. But it is important to note that these innovative and "disruptive" medical and cellular therapies for autoimmune diseases enjoy benefits that piggyback on the successes of genetic and cancer treatments, and vice versa [17].

7.1.3.1 Stem cell transplantation

Hematopoietic stem cell therapy (previously described in Chapter 6, Figure 6–3, as therapy with a "blood stem cell" that can develop into all types of blood cells found in the peripheral blood and the bone marrow [18]) is now being used effectively to develop new cellular and immunologically based strategies for patients with malignancies and hematological disorders produced or provoked by immunologic or autoimmunologic causes (see below). The objective of stem cell transplantation therapy is to destroy the mature, long-lived, and auto-reactive immune cells and generate a new, properly functioning immune system.

The patient's stem cells are used in a procedure known as autologous (from "one's self") hematopoietic stem cell transplantation (Fig. 7–2). First, patients receive injections of a growth factor, which coaxes large numbers of hematopoietic stem cells to be released from the bone marrow into the bloodstream. These cells are harvested from the blood, purified away from mature immune cells, and stored. After sufficient quantities of these cells are obtained, the patient undergoes a regimen of cytotoxic (cell-killing) drug and/or radiation therapy, which eliminates the mature immune cells. Then, the hematopoietic stem cells are returned to the patient via a blood transfusion into the circulation, where they migrate to the bone marrow and begin to differentiate into mature immune cells. The body's immune system is then restored [19].

Stem cells can be readily harvested from bone marrow and adipose tissue (and other bodily tissues) and converted into undifferentiated induced pluripotent cells (iPSC - see page 303) suitable for transplantation into diseased and degenerated organs and body structures (e.g., diabetes, osteoarthritis, etc.). These cells then regenerate and begin to replace the abnormal cells with new, normal cells and even, potentially, with functioning organs (organ morphogenesis) [20]. Currently, muscle and bone tissue are particularly responsive to stem cell regeneration. All medical treatments have benefits and risks, but unproven stem cell therapies can be particularly unsafe. The FDA will continue to help with the development and licensing of new stem cell therapies where the scientific evidence supports the product's safety and effectiveness [21].


FIGURE 7–2 Genetic modification & stem cell therapy. The patient’s own stem cells are used in a procedure known as autologous (from “one’s self”) hematopoietic stem cell transplantation. Source: National Institute of Medicine, NIH.

The use of stem cells, in combination with other gene therapies, expands the potential value of stem cell therapies to new heights. Biomedical research can generate cell lines that can then act as disease models. Induced pluripotent cells (iPSCs) can be modified using CRISPR-Cas9 technology (discussed next) for disease modeling, gene correction therapy, antiviral therapy, and antitumor therapies [22]. One strategy is to knock out disease-relevant genes in human PSCs using CRISPR-Cas9 to explore the pathogenic mechanism in the derived cells. These cells then could be used as disease models for drug therapies [23]. (Notice the increasing overlap in these 2 biosciences, immunology and genetics.)

7.1.3.2 CRISPR-Cas9 (gene editing)

One of the effective ways of treating autoimmune disease is to identify the "signature" of offending genes (their "gene expression," or the number of RNA molecules they are producing), which is abnormal in autoimmune (and cancer) genes. This identification is accomplished using a


FIGURE 7–3 CRISPR-Cas9. CRISPR guide RNAs target specific spots in the genome for the Cas9 enzyme (“genetic scissors”) to cut (scissor), forming a double-strand break. A machine learning algorithm predicts which types of repairs will be made at a site targeted by a specific guide RNA. Possibilities include an insertion of a single base pair, a small deletion, or a larger change known as a microhomology deletion [25].

technique called "single-cell RNA sequencing" (scRNA-seq) or, more specifically, TIDE (for Tumor Immune Dysfunction and Exclusion) for autoimmune genes [24]. With this information, a procedure called CRISPR-Cas9 (Clustered regularly interspaced short palindromic repeats-associated protein 9), an RNA-guided genome editing technology, is being used to re-engineer T cells. It is worth noting that the 2020 Nobel Prize in Chemistry was awarded to 2 molecular biologists, Emmanuelle Charpentier of the Max Planck Unit for the Science of Pathogens and Jennifer Doudna of the University of California, Berkeley, for the development of this revolutionary genome editing technique, often referred to as "genetic scissors" (Press release: The Nobel Prize in Chemistry 2020. The Royal Swedish Academy of Sciences; October 7, 2020).

The CRISPR-Cas9 system (Fig. 7–3) creates a small piece of RNA with a short "guide" sequence that attaches (binds) to a specific target sequence of DNA identified by AI in a genome. The RNA also binds to the Cas9 enzyme and is used to recognize the DNA sequence. The Cas9 enzyme, acting as a "scissor," cuts the DNA at the targeted location. Once the DNA is cut, the cell uses its DNA repair machinery to add or delete pieces of genetic material, or to make changes to the DNA by replacing an existing segment with a customized DNA sequence [26].

It was first thought that the stitching back together of the genetic material after the CRISPR-Cas9 procedure was random [27]. But subsequent studies using a trained machine learning algorithm called inDelphi to predict the repairs made to DNA snipped with Cas9 showed that guide RNAs could induce a single, predictable repair genotype in the human genome in more than 50% of editing products, proving that the edits aren't random. The model was based on a library of 41,630 guide RNAs and the sequences of the targeted loci before and after repair, a dataset that totaled more than 1 billion repairs in various cell types [28]. The algorithm was then


able to use the sequences that determine each repair to predict Cas9 editing outcomes, the researchers reported in Nature Biotechnology [25]. Further explanation of this CRISPR procedure, RNA sequencing, and the AI applications that make it possible can be found in Chapter 8, "Immunogenetic considerations regarding SARS-CoV-2 (COVID-19)," page 459.
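For readers coming from the IT side, the relationship between the guide RNA, its DNA target, and the "NGG" PAM sequence that Cas9 requires can be made concrete with a few lines of code. The following is a minimal, illustrative Python sketch; it is not a production guide-design tool and is not drawn from the inDelphi work cited above, the sequence and function name are hypothetical, and real guide selection also weighs off-target risk and predicted repair outcomes.

```python
# Illustrative sketch only: list candidate Cas9 target sites on the forward strand of a
# DNA string, defined here as a 20-nt protospacer lying immediately 5' of an "NGG" PAM.
# (Cas9 cuts roughly 3 bp upstream of the PAM; off-target scoring is omitted.)

def find_cas9_sites(dna: str, guide_length: int = 20):
    """Return (start, protospacer, pam) tuples for every NGG PAM preceded by a full-length guide."""
    dna = dna.upper()
    sites = []
    for i in range(guide_length, len(dna) - 2):
        pam = dna[i:i + 3]
        if pam[1:] == "GG":                          # "NGG" = any base followed by two guanines
            protospacer = dna[i - guide_length:i]    # the 20 nt a guide RNA would pair with
            sites.append((i - guide_length, protospacer, pam))
    return sites

if __name__ == "__main__":
    toy_sequence = "ATGCGTACCGTTAGCTAGGCTTACGGATCCGTACGTTAGCAGGTTACGAGG"  # invented sequence
    for start, protospacer, pam in find_cas9_sites(toy_sequence):
        print(f"candidate target at position {start}: {protospacer} | PAM {pam}")
```

A repair-outcome predictor such as the one described above would then take each candidate protospacer (and its surrounding sequence) as input and score the likely insertion, small-deletion, or microhomology-deletion products.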

7.1.3.3 CAR-T cell (gene replacement)

Cancer immunotherapy is a rapidly growing field that has recently demonstrated clinical efficacy in the treatment of solid tumors and hematological malignancies [29]. Numerous clinical approaches have been developed to redirect and/or augment immune function against tumor cells. The application of adoptive cell transfer therapy (ACT therapy) for the treatment of malignant cancers has been expanded by the use of T lymphocytes engineered to express chimeric antigen receptors (CARs) [30]. Chimeric antigen receptor T cells (CAR-T cells) are T cells that have been genetically engineered to give them the new ability to target a specific protein. The receptors are "chimeric" because they combine both antigen-binding and T-cell activating functions into a single receptor. The premise of CAR-T immunotherapy is to modify T cells to recognize cancer cells in order to target and destroy them more effectively [31].

CAR-T-cell therapy (Fig. 7–4) begins by removing a patient's lymphocytes and transducing them with a DNA plasmid vector (a DNA molecule distinct from the cell's DNA used as a tool to clone, transfer, and manipulate genes [32]) that encodes specific tumor antigens. These modified and targeted lymphocytes are then reintroduced to the patient's body through a single infusion to attack tumor cells. Known as autologous CAR-T-cell therapy, this treatment has been in development for more than 25 years, resulting in 4 generations of improving therapy that has generated responses lasting up to 4 years in some studies [33]. There are currently 2 FDA approved CAR-T products used in B cell malignancies [34]. Based upon the high rates of initial cancer remission and the durable responses in many patients receiving CAR-T cell therapy, the ACT field has expanded, with CAR-T cell therapy now being applied against numerous other B cell-associated antigens and with encouraging clinical response data being reported [35].

Again, just as stem cell therapy can be combined with CRISPR-Cas9 (as described above), so too can CAR-T cell therapies be expanded in combination. CRISPR-Cas9 has been used to increase the antitumor efficiency of CAR-T cells by disrupting a programmed death protein [36].

Worth noting here are the costs of CAR-T therapy as well as of the other immunotherapies discussed. Notwithstanding the significant benefits these therapies provide, the costs are exorbitant. FDA approved CAR-T cell therapy and the CRISPR-Cas9 procedure range from $373,000 to $875,000 for a single treatment [37]. Depending on the type of stem cell procedure, prices can range from $5,000 to $25,000 per procedure [38]. Gene therapies are subject not only to the regulatory structure of the FDA, but also to the Office of Biotechnology Activities and the Recombinant DNA Advisory Committee. Excessive regulatory oversight creates an elongated and expensive route to approval. Gene therapies provide those with rare, serious, and possibly terminal conditions with the ability to improve their quality of life significantly.


FIGURE 7–4 Immunogenics (immunotherapy) Chimeric Antigen Receptor T cells (CAR-T). CAR-T-cell therapy begins by removing a patient’s lymphocytes and transducing them with a DNA plasmid vector (a DNA molecule distinct from the cell’s DNA used as a tool to clone, transfer, and manipulate genes) that encodes specific tumor antigens. These modified and targeted lymphocytes are then reintroduced to the patient’s body through a single infusion to attack tumor cells. Source: National Institute of Medicine, NIH.

By one estimate, an approved gene therapy drug costs nearly $5 billion (five times as high as the average cost of FDA approval) to develop [39]. Some insurers are beginning partial coverage of FDA approved gene therapies, but experimental treatments receive no third-party coverage other than limited humanitarian exemptions.

7.1.4 Research and future AI considerations in immunology and autoimmune disease

The diagnosis of autoimmune disease can be challenging for 2 reasons. First, many of the associated diseases share similar symptoms. Second, as described previously, the disease process may be organ-specific or disseminated among multiple body systems. Thus, their


diagnostic evaluation includes a thorough history, physical examination, laboratory testing, and imaging based on suspected organ-system involvement(s).

The cause(s) of autoimmune diseases remain unknown, but research has strongly suggested a "pathogenesis" ("natural history") of the disease progression over time and the damage it produces. The path of the disease includes 5 likely causes (see Table 7–1). They are: (1) a prolonged, untreated inflammatory process; (2) part of the patient's genome; (3) environmental factors to which the patient was exposed; (4) increasing release of pro-inflammatory cytokines; and (5) inherent abnormalities in the immune system [40]. A person's genes are what "predispose" them, or provide the genetic susceptibility, to dysregulate the immune system, which in turn yields chronic inflammation and, in effect, creates the pathological damage to cells, tissues, and organ systems synonymous with all diseases. Finally, the environmental factors (e.g., smoking, pollution), along with the inherited alleles that an individual possesses for a specific gene (the genotype), combine to produce the "phenotype trigger" and the clinical manifestations of the disease state. This recipe for a disease is responsible for immune and autoimmune diseases, for congenital and acquired genetic diseases, for cancers, and, in fact, for just about all the conditions to be discussed in the balance of this Chapter. Thus, throughout our discussions of specific disease entities, you will recognize common denominators in their diagnosis and treatments (as mentioned previously in the discussion on "non-specific" drug therapies in autoimmune disease). What will change for each disorder are the clinical manifestations (phenotypes) of the individual disease categories based on the cellular, tissue, and organ system(s) involved [41].

Unraveling the genetic and environmental underpinnings of autoimmune disease has become a major focus at the National Institute of Environmental Health Sciences of the National Institutes of Health [42,43]. The process of identifying the offending genetic sites (likely multiple mutations) in a person's genome is overwhelming. As you will recall from Chapter 6, the potential for those mutations in the sequencing of the 4 base compounds within the 20,000 to 25,000 genes in the human genome exceeds 2.5 × 10²⁰ possibilities spread among the 37.2 trillion somatic cells. Thanks to AI, more specifically big data analytics and deep learning (convolutional neural networks, CNNs), genetic loci for immune disorders (immunodeficiencies, autoimmune diseases) are now being identified in a timely diagnostic manner (days to weeks versus months to years) [44]. This application of AI to better identify genetic mutations and their associated disease states, combined with the now FDA approved cellular and gene therapies presented above, has created new horizons in the treatment, management, cure, and prevention of disease.

Finally, for the first time, scientists at the Human Vaccines Project are combining systems biology with artificial intelligence to understand one of the most significant remaining frontiers of human health, the human immune system [45]. Perhaps the most exciting application of AI in immunology is found in the Human Vaccines Project. Researchers are comprehensively sequencing the human immune system, a system billions of times larger than the human genome. The goal is to encode the genes responsible for circulating B cell receptors.
This can provide potential new antibody targets for vaccines and therapeutics that work across populations. The Project seeks to define the genetic underpinnings of people’s ability to respond and


adapt to an immense range of diseases [46]. The SARS-CoV-2 COVID-19 pandemic will certainly expedite further progress in this critical area of clinical research.

The study specifically looks at 1 part of the adaptive immune system, the circulating B cell receptors that are responsible for the production of antibodies, considered the primary determinant of immunity in people. The receptors form unique sequences of nucleotides known as receptor "clonotypes." A small number of genes can thereby lead to an incredible diversity of receptors (B idiotype cells and the antibody idiotype-specific regulatory circuit), allowing the immune system to recognize almost any new pathogen. This Project marks a crucial step toward understanding how the human immune system works, setting the stage for developing next-generation health products, drugs, and vaccines through the convergence of genomics and immune monitoring technologies with machine learning and artificial intelligence [47].
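For readers who want a computational picture of what a "clonotype" is in practice, the short Python sketch below groups receptor records into clonotypes by identical V gene, J gene, and CDR3 nucleotide sequence, one common convention in repertoire analysis. It is a minimal illustration, not the Human Vaccines Project's pipeline, and the sample records and field names are invented.

```python
# Illustrative sketch only: group B cell receptor records into "clonotypes," defined here
# as an identical (V gene, J gene, CDR3 sequence) triple. Real repertoire pipelines also
# cluster near-identical CDR3s and work from millions of sequencing reads.
from collections import defaultdict

def group_clonotypes(receptors):
    """receptors: iterable of dicts with hypothetical 'v_gene', 'j_gene', and 'cdr3' keys."""
    clonotypes = defaultdict(list)
    for rec in receptors:
        key = (rec["v_gene"], rec["j_gene"], rec["cdr3"].upper())
        clonotypes[key].append(rec)
    return clonotypes

if __name__ == "__main__":
    sample = [  # invented example records
        {"v_gene": "IGHV3-23", "j_gene": "IGHJ4", "cdr3": "TGTGCGAAAGATCGGTGG"},
        {"v_gene": "IGHV3-23", "j_gene": "IGHJ4", "cdr3": "tgtgcgaaagatcggtgg"},
        {"v_gene": "IGHV1-69", "j_gene": "IGHJ6", "cdr3": "TGTGCGAGAGGTTACTAC"},
    ]
    for key, members in group_clonotypes(sample).items():
        print(key, "->", len(members), "receptor(s)")
```

Counting how many cells fall into each clonotype, and how those counts shift after infection or vaccination, is the kind of signal the machine learning models described above are trained on.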

7.2 Genetic and genomic disorders

As was stated previously, immunology, genetics, and genomics permeate every aspect of human health and disease. They are at the forefront of current and future health care; they are unquestionably the most "disruptive" force in disease diagnosis (discussed in Chapters 5 and 6), treatment, and current and future research efforts; and finally, they are intimately tied to the present and future technologies of AI.

The introduction to this section on therapies for genetic and genomic disorders is an ideal place to revisit the concept of "precision" or "personalized" medicine (or health). The business and administrative aspects of Precision Health were covered in depth in Chapter 4; its diagnostic considerations were addressed in Chapter 5, and its applications in medical therapies were discussed in Chapter 6. Now it's time to make Precision Medicine (Personalized Health)...personal. What does it mean to you, your health, wellness, and your life?

Precision health (or precision medicine) and personalized health (or personalized medicine) are somewhat interchangeable terms, but also somewhat confusing. "Personalized" could be misinterpreted to imply that treatments and preventions are being developed uniquely for each individual. Rather, precision medicine identifies which approaches will be effective for which patients based on genetic, environmental, and lifestyle factors. Thus, the National Research Council preferred the term "precision medicine" to "personalized medicine" [48].

The Precision Medicine Initiative is a long-term research project involving the National Institutes of Health (NIH) and multiple other research centers. They aim to understand how a person's genetics, environment, and lifestyle can help determine the best approach to prevent or treat disease. The long-term goals of the Precision Medicine Initiative focus on bringing precision medicine to all areas of healthcare on a large scale. To this end, the NIH has launched a study, known as the "All of Us" Research Program, which involves a group (cohort) of at least 1 million volunteers from around the United States. Participants are providing genetic data, biological samples, and other information about their health [49].


7.2.1 Description and etiology of genetic and genomic disorders

Genetic disorders are diseases caused in whole or in part by disturbances ("mutations") in the DNA gene sequences (genetic code) of base compound pairs (adenine paired with thymine and guanine paired with cytosine). These disturbances can occur in 1 gene (a monogenic disorder); in multiple genes (a multifactorial inheritance disorder); through a combination of gene mutations and environmental factors; or through damage to chromosomes (changes in the number or structure of entire chromosomes, the structures that carry genes; see Chapter 5, Figure 5–3). The human genome holds the key to nearly all diseases.

Congenital disorders (also known as birth defects) may be hereditary or may occur during fetal development and are present at birth (or identified during infancy). They can be caused by mutations inherited from the parents and present at birth (e.g., cystic fibrosis, sickle cell anemia, Marfan's syndrome). Inherited mutations are also called germline mutations because they originate in the parent's egg or sperm cells, which are called germ cells. When an egg and a sperm cell unite, the resulting fertilized egg cell receives DNA from both parents. If this DNA has a mutation, the child that grows from that fertilized egg will have the mutation in each of their cells [50]. Most inherited genetic diseases are recessive, which means that a person must inherit 2 copies of the mutated gene to inherit a disorder. This is 1 reason that marriage between close relatives is discouraged; 2 genetically similar adults are more likely to give a child 2 copies of a defective gene.

Other genetic disorders are caused by "acquired mutations" in a gene or group of genes that occur during a person's lifetime. Such mutations are not inherited from a parent but instead occur either randomly or due to some environmental exposure (e.g., smoking, lead poisoning, etc.). Mutations range in size, affecting anywhere from a single DNA building block (base pair) to a large segment of a chromosome that includes multiple genes. Acquired mutation rates in individuals vary by their genome, by heredity, by type, by generation, and by the environment. The human genome germline mutation rate is low, at approximately 0.5 × 10⁻⁹ per base pair per year [51]. But given constant lifelong cell division and DNA replication, the number of "acquired mutations" in the human genome (with about 37 trillion somatic [body] cells) is in the trillions. Fortunately, only an infinitesimal number of them (fewer than 60) override "apoptosis" (a normal cell's ability to self-destruct when something goes wrong) to produce genetic disorders and disease. Nonetheless, the risk exists, as borne out by the rate of cancers, which are directly associated with accumulated cell mutations [52]. Cancer usually results from a series of mutations within a single cell. Often, a faulty, damaged, or missing p53 gene is to blame. The p53 gene makes a protein that stops mutated cells from dividing. Without this protein, cells divide unchecked and become tumors [53].

Somatic mutations that happen in a single cell early in embryonic development can lead to a situation called mosaicism. These genetic changes are not present in a parent's egg or sperm cells, or in the fertilized egg, but happen a bit later, when the embryo includes several cells. As all the cells divide during growth and development, cells that arise from the cell with the altered


Depending on the mutation and how many cells are affected, mosaicism may or may not cause health problems. Genetic alterations that occur in more than 1% of the population are called polymorphisms. They are common enough to be considered a normal variation in the DNA. Polymorphisms are responsible for many of the typical differences between people, such as eye color, hair color, and blood type. Although many polymorphisms have no adverse effects on a person’s health, some of these variations may influence the risk of developing certain disorders [54]. (See AI and genetic and genomic treatment considerations regarding SARS-CoV-2 (COVID-19) in Chapter 8.)
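To make the mutation arithmetic above concrete, here is a minimal back-of-the-envelope sketch in Python. The constants are illustrative round numbers (an assumed one new mutation per cell division, an approximate diploid genome size), not measured values; only the germline rate is the figure quoted in the text.

```python
# Back-of-the-envelope estimate of somatic mutation burden.
# All constants are illustrative round numbers, not measured values.

GENOME_SIZE_BP = 6.4e9          # approximate diploid human genome, in base pairs
SOMATIC_CELLS = 3.7e13          # ~37 trillion somatic cells
MUTATIONS_PER_DIVISION = 1.0    # assumed ~1 new mutation per cell division (illustrative)

# Even ~1 mutation per division across trillions of divisions adds up to trillions:
total_mutations = SOMATIC_CELLS * MUTATIONS_PER_DIVISION
print(f"Order-of-magnitude somatic mutations over a lifetime: {total_mutations:.1e}")

# Germline rate quoted in the text: ~0.5e-9 mutations per base pair per year.
GERMLINE_RATE = 0.5e-9
per_genome_per_year = GERMLINE_RATE * GENOME_SIZE_BP
print(f"Expected germline mutations per genome per year: {per_genome_per_year:.1f}")
```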

7.2.2 Clinical presentations in genetic and genomic disorders

In the spirit of our genetics discussion, we can substitute the term “phenotype” for clinical presentations of diseases associated with genetic abnormalities. There is an Internet resource through the NIH National Center for Biotechnology Information (NCBI), MedGen, that will provide comprehensive information on any genetic disorder by merely entering the name of the disorder (or its related gene symbol). The site uses AI to search an extensive list of genetic and medically related databases and provides information in 16 different categories, some of which include clinical features, recent clinical studies, mode of inheritance, outreach and support, and more [55]. As we have mentioned numerous times, all diseases have a genetic component. The mutations may be inherited or developed in response to environmental stresses. The ultimate clinical goal is to gather necessary information through a detailed patient and family history, physical examination, laboratory testing, and imaging to diagnose, treat, and, if possible, cure or prevent the development of the disease. The vast number of both inherited and acquired genetic disorders is exceeded only by the number of clinical manifestations (phenotypes) they produce. These signs and symptoms of disease can occur anytime during life, from before birth to old age, depending on the type of disorder. From prenatal testing to adulthood, the diagnosis of a genetic disorder can guide treatment and management decisions. The diagnosis can also suggest whether other family members may be affected by or at risk of a specific disease. Even when no treatment is available for a particular condition, having a diagnosis can help people know what to expect (prognosis) and may help them identify useful support and advocacy resources [56]. Genetic tests are tests on blood and other tissue to find genetic disorders. Over 2000 tests are available. The analysis is performed for several reasons [57]. They include:

• Finding genetic diseases in unborn babies;
• Finding out if people carry a gene for a disease and might pass it on to their children;
• Screening embryos in utero for disease;
• Testing for genetic diseases in adults before they cause symptoms;
• Making a diagnosis in a person who has disease symptoms;
• Figuring out the type or dose of a medicine that is best for a specific person.
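The MedGen lookup described above can also be scripted. Below is a minimal sketch using NCBI’s public E-utilities interface; the endpoint follows NCBI’s documented esearch pattern, but the database name (`db=medgen`) and the response fields should be verified against current NCBI documentation before relying on them.

```python
# Query NCBI MedGen for a genetic disorder via the public E-utilities API.
# Parameter names follow NCBI's documented esearch pattern; verify db name and
# response fields against current NCBI documentation (they are assumptions here).
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def search_medgen(term: str, max_results: int = 5) -> list[str]:
    """Return MedGen record IDs matching a disorder name or gene symbol."""
    resp = requests.get(
        f"{EUTILS}/esearch.fcgi",
        params={"db": "medgen", "term": term, "retmode": "json", "retmax": max_results},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

if __name__ == "__main__":
    # Example: look up records for Marfan syndrome, mentioned earlier in this chapter.
    print(search_medgen("Marfan syndrome"))
```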


There are several methods used in genetic testing: [57]

• Molecular genetic tests (or gene tests) study single genes or short lengths of DNA to identify variations or mutations that lead to a genetic disorder (more on molecular genetic tests in Chapter 8 on diagnostic tests for SARS-CoV-2);
• Chromosomal genetic tests analyze whole chromosomes or long lengths of DNA to see if there are significant genetic changes, such as an extra copy of a chromosome, that cause a genetic condition; and
• Biochemical genetic tests study the amount or activity level of proteins; abnormalities in either can indicate changes to the DNA that result in a genetic disorder.

From the numbers and the math discussed previously in identifying mutation(s) among the 20,000 to 25,000 genes in the human genome with 37 trillion somatic cells, it becomes evident that the application of AI, machine learning, big data analytics, and bioinformatics in the field of genetics is profound. It has changed the nature, costs, timing (whole-genome sequencing within weeks and rapid DNA sequencing within virtually the same day [58]), and accuracy of genetic testing beyond anything imagined 15 years ago, and certainly before the 2003 completion of the mapping of the human genome [59]. The capability of AI to accomplish these enormous advances in genetic testing and diagnosis is captured in the technology known as “next-generation sequencing” (NGS), which builds on “first-generation sequencing” developed in 1977 by Dr. Fred Sanger (the “Sanger sequencing method” or “chain termination method”). DNA sequencing is the process of determining the sequence of nucleotide bases (As, Ts, Cs, and Gs) in a piece of DNA (see Chapter 5, “Genetics and Genomics Diagnosis,” page 177 for review). Sequencing an individual’s “personal” genome involves establishing the identity and order of approximately 6 billion bases of DNA [60]. In the Sanger method, target DNA is copied many times, making fragments of different lengths. Fluorescent “chain terminator” nucleotides mark the ends of the fragments and allow the sequence to be determined. NGS is simply a larger-scale approach to DNA sequencing that increases the speed and reduces the cost [61]. It can be compared to running very large numbers of tiny Sanger sequencing reactions in parallel. This allows large quantities of DNA to be sequenced much faster and more cost-effectively. As a stunning example, in 2001, the cost of sequencing a human genome was between $500 million and $1 billion. For many years, it remained prohibitively expensive for routine use in clinical practice, with an estimated cost of $20–25 million in 2006 [62]. In 2019, it was less than $1300 [60] (talk about Moore's Law!). Whole-exome sequencing (WES) sequences only the exome, the sum of the genome’s exons (the sequences of nucleic acids that are represented in the mRNA molecule, representing about 1.5% of the genome). Thus, WES has made sequencing faster and more cost-effective than whole-genome sequencing (WGS) while resulting in greater depth and increased sensitivity in the analysis. These advances have made the clinical application of WES and WGS more feasible [63].
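A small sketch of the scale and cost arithmetic behind these claims, using only the figures quoted above (the per-base cost is a simple division, not a quoted price):

```python
# Rough scale/cost arithmetic for whole-genome vs. whole-exome sequencing,
# using the figures quoted in the text above.

GENOME_BASES = 6.0e9          # ~6 billion bases in a personal genome
EXOME_FRACTION = 0.015        # exons are roughly 1.5% of the genome

exome_bases = GENOME_BASES * EXOME_FRACTION
print(f"Approximate exome size: {exome_bases / 1e6:.0f} million bases")

# Cost per genome quoted in the text for 2001 (lower bound) and 2019.
cost_2001 = 500_000_000
cost_2019 = 1_300
print(f"Cost reduction 2001 -> 2019: roughly {cost_2001 / cost_2019:,.0f}-fold")
print(f"2019 cost per million bases: ${cost_2019 / (GENOME_BASES / 1e6):.2f}")
```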


7.2.3 Current treatment approaches and AI applications in genetic and genomic disorders

Inherited genetic disorders result in gene alterations in virtually every cell in the body. As a result, these disorders tend to affect many tissues, organs, and body systems. Often, supportive and palliative treatment approaches are available to manage some of the associated signs and symptoms. For example, disorders associated with heart defects might be treated with surgery to repair the defect or with a heart transplant. Inborn errors of metabolism, which disrupt the production of specific enzymes, may benefit from dietary changes or replacement therapies that help prevent immediate and future complications. And when genetic screening identifies an inherited risk in the genome (e.g., a known cancer gene such as BRCA1, BRCA2, or PALB2 for breast cancer), management may include counseling, more frequent cancer screening, or even preventive (prophylactic) surgery to remove the tissues at highest risk of becoming cancerous [64]. The potential for cures of genetic disorders lies in the genetic therapies discussed above, under the immunology section in this chapter. They include the 3 principal cellular therapies (stem cell transplantation, CRISPR-Cas9, and CAR-T cell replacement therapy). Sometimes referred to as “genetic engineering” or “genetic modification,” this can be defined as the direct manipulation of the genome using molecular engineering techniques. Recently developed methods for modifying genes are often called “gene editing.” It can be applied in 2 very different ways: somatic genetic modification and germline genetic modification [65]. Somatic genetic modification adds, cuts, or changes the genes in some of the cells of an existing person, typically to alleviate a medical condition. A number of these gene therapy techniques are now FDA approved for specific conditions. Germline genetic modification is used to change the genes in eggs, sperm, or early embryos. A number of these therapies are also FDA approved [66]. (Remember way back in the Introduction to this Section 2, the “Top 5” parlor game? Germline genetic modification [I called it “DNA intrauterine (prenatal) diagnoses”] was on my mind.) I limited my choice to “diagnosis,” but you can see “modification” takes it to the next level. Serious issues in genetic engineering go well beyond the science and safety of the field. Bioethical questions abound regarding potential uses and misuses of this bioscience, as well as AI applications expanding its potential beyond therapeutic purposes. Some controversial uses of genetic engineering include (but are not limited to):

• Human genetic enhancement: The intentional modification of the human genome to “improve” individuals [67];
• Human germline genome editing: Introducing heritable changes to sperm, eggs, or embryos [68];
• Eugenics: (Eugenics is from a Greek word meaning “good genes” or “well-born.”) Its modern definition describes it as the attempt to direct human heredity and evolution to ensure procreative advantage to more “desirable” human beings and to discourage or limit reproduction by the less desirable [69].


• Genetic cloning: Cloning describes the processes used to create an exact genetic replica of another cell, tissue, or organism. The copied material, which has the same genetic makeup as the original, is referred to as a clone [70]. Of course, the use of genetic cloning for monoclonal antibodies is an enormously valuable procedure.

The ethics and the pros and cons of all of these techniques are under excruciating analysis and review by international groups. Undoubtedly, laws and regulations will be instituted in the coming years to mitigate the dangers of these technologies while maximizing their value in health care.
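Gene editing with CRISPR-Cas9, mentioned above among the principal cellular therapies, depends on locating a protospacer adjacent motif (PAM, “NGG” for the commonly used SpCas9 enzyme) immediately downstream of a roughly 20-nucleotide target. The toy Python sketch below illustrates that idea only; it is not a guide-design tool, and the example sequence is invented.

```python
# Toy CRISPR-Cas9 target finder: list 20-nt protospacers that sit immediately
# 5' of an "NGG" PAM on the given strand. Real guide design also scores
# off-target risk, GC content, and secondary structure; this sketch does not.
import re

def find_cas9_sites(dna: str) -> list[tuple[int, str, str]]:
    """Return (position, protospacer, PAM) tuples for SpCas9 'NGG' PAMs."""
    dna = dna.upper()
    sites = []
    # Zero-width lookahead so overlapping PAMs are all reported.
    for match in re.finditer(r"(?=([ACGT]GG))", dna):
        pam_start = match.start()
        if pam_start >= 20:                      # need 20 nt upstream for the protospacer
            protospacer = dna[pam_start - 20:pam_start]
            sites.append((pam_start - 20, protospacer, match.group(1)))
    return sites

if __name__ == "__main__":
    example = "ATGCGTACCGGATTACCGATCGATTACGGCTAGCTAGGCCTAGGATCCGGTACGATCGTAGG"
    for pos, spacer, pam in find_cas9_sites(example):
        print(f"protospacer at {pos}: {spacer} | PAM: {pam}")
```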

7.2.4 Research and future AI considerations in genetic and genomic disorders

If one were to predict the most significant benefits that AI will contribute to health and wellness over the coming decades, genetic research and genetic therapies would have to be at the top of the list. Consider the combination of immunogenetic therapies, immunogenomics, and gene editing technologies with AI and its deep learning, CNNs, big data analytics, natural language processing, and its abundance of other digital and quantum capabilities, not to mention its continued future growth and development. It’s hard to imagine anything less than total technological and clinical progress between AI and health care. Innovation and AI strategies in genome-sequencing technologies are profound and do not appear to be slowing. As a result, one can readily expect continued reductions in the cost of human genome sequencing. The key factors to consider when assessing the value and estimated cost of generating a human genome sequence, in particular the amount of the genome sequenced (whole versus exome), data quality, and any related data analysis, will likely remain largely the same. With new DNA-sequencing platforms anticipated in the coming years, the nature of the generated sequence data and the associated costs will likely continue to be dynamic. As such, continued attention will need to be paid to how the costs associated with genome sequencing are calculated. The “All of Us” Research Program of the National Institutes of Health (NIH) is utilizing AI and big data analytics to produce benefits through long-term research in precision medicine, benefits that may not be realized for years. The immediate and long-term benefits of the Precision Medicine Initiative include: [71]

• New approaches for protecting research participants, particularly patients’ privacy and the confidentiality of their data;
• Design of new AI tools for building, analyzing, and sharing large sets of medical data;
• Improvement of FDA oversight of tests, drugs, and other technologies to support innovation while ensuring that these products are safe and effective;
• New partnerships of scientists in a wide range of specialties, as well as people from the patient advocacy community, universities, pharmaceutical companies, and others;


• Opportunity for a million people to contribute to the advancement of scientific research using blockchain technologies;
• The broader ability of doctors to use patients’ genetic and other molecular information as part of routine medical care;
• Improved ability to predict which treatments will work best for specific patients;
• A better understanding of the underlying mechanisms by which various diseases occur;
• Improved approaches to preventing, diagnosing, and treating a wide range of conditions;
• Better integration of AI and electronic health records (EHRs) in patient care, allowing doctors and researchers to access medical data more efficiently.

7.3 Cancers

There was a strong temptation on my part to title this segment “Cancering.” To understand cancer, one must not think of it as a noun, but rather as a verb. Ironically, as I started doing my in-depth research for this section, I came across an article by a world-renowned cancer expert, W. Daniel Hillis, using the word “cancering” [72]. He even begins his lengthy, 2-part article with the statement, “We make a mistake when we think of cancer as a noun.” I mention this for 3 reasons. First, I want to “congratulate myself” for independently generating a unique idea, albeit one probably thought of previously by a recognized cancer expert. Second, I want to make sure no reader of this book thinks I plagiarized the word without due recognition to the originator (if only I had beaten him to press!). And finally, I want to urge you to consider reviewing or reading (also available on audio) the excellent article by Dr. Hillis. Part 1 is pretty substantial and in-depth on molecular biology, but Part 2 is clinical, with some interesting discussions at a very understandable level. Now a quick explanation of “cancering.” As you will recall from the immunology discussion (Chapter 5, page 176), “... the rate of ‘acquired mutations’ in the human genome (with about 37 trillion somatic [body] cells) is in the trillions.” As we stated regarding that phenomenal calculation, “...only an infinitesimal number of them (fewer than 60) override ‘apoptosis’ to produce genetic disorders and disease.” Nonetheless, those mutations are continually occurring, and virtually all of them have the potential to become irregular, accumulate, or mutate into cancer. By the mathematical law of large numbers and probabilities, it follows that “If you live long enough, you will get cancer” [73]. Thus, throughout our lives, we are all “cancering.”
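The “law of large numbers” intuition behind “cancering” can be made concrete with a toy calculation: even if the chance that any single cell division produces a transformed clone is vanishingly small, the cumulative probability over a lifetime of divisions is not. The numbers in this sketch are purely illustrative assumptions, not measured risks.

```python
# Toy illustration of cumulative risk from many independent cell divisions.
# Both constants below are invented for illustration, not measured values.
import math

PER_DIVISION_RISK = 1e-16       # assumed chance one division yields a transformed clone
DIVISIONS_PER_YEAR = 1e14       # assumed somatic cell divisions per year

def cumulative_risk(years: int) -> float:
    """P(at least one transforming event) = 1 - (1 - p)^n, for n independent trials."""
    n = DIVISIONS_PER_YEAR * years
    # For tiny p and huge n, (1 - p)^n is well approximated by exp(-p * n).
    return 1 - math.exp(-PER_DIVISION_RISK * n)

for age in (20, 50, 80):
    print(f"Illustrative cumulative risk by age {age}: {cumulative_risk(age):.1%}")
```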

7.3.1 Description and etiology of cancers

In our immunology and genetics discussions up to this point, we have discussed environmental factors (carcinogens) and radiation-induced mutations that may contribute to the development of cancer. But up to now, we haven’t considered the possibility of a random mistake (oncogenesis) in one of those trillions of normal DNA replications that can result in cancer-causing mutations [74]. A series of these mutations (carcinogenesis) in a specific gene class (oncogenes) can “de novo” transform a normal cell into a neoplastic or cancer cell [75].


Beyond “a random genetic mistake” producing cancer, there are several other genetic irregularities that can create carcinogenesis. Epigenetics is the study of changes in organisms caused by modification of gene expression rather than alteration of the genetic code itself. As with all genetic activity, epigenetic mechanisms can turn gene expression (protein production) on or off, driven by the DNA genetic code or by environmental factors. Such abnormalities can produce unpredictable cancers [76]. Proto-oncogenes are genes that promote cell growth and cellular division, whereas tumor suppressor genes discourage cell growth or briefly halt cell division to allow DNA repair. A series of many mutations to these proto-oncogenes is needed before a normal cell transforms into a neoplastic cell. This phenomenon is referred to as “oncoevolution” [75]. Cellular stress or injury that produces free-floating genetic material can trigger enzymes and pathways that result in the activation of a tumor suppressor gene, p53. This tumor suppressor protein arrests the progression of the abnormal cell cycle and triggers apoptosis, preventing mutations from being passed on to subsequent cells. The p53 protein has been named the “guardian of the genome” [77]. Among infectious agents, viruses tend to carry a higher risk as carcinogens, although bacteria and parasites may also be implicated. Some viruses can disrupt signaling that normally keeps cell growth and proliferation in check. Also, infections can weaken the immune system or cause chronic inflammation, which may lead to cancer. The most significant infectious risks for cancers include the following [78]:

• Epstein-Barr virus (EBV): Risk of lymphoma and cancers of the nose and throat;
• Hepatitis B virus and hepatitis C virus (HBV and HCV): Risk of liver cancer;
• Human immunodeficiency virus (HIV): Risk of Kaposi sarcoma, lymphomas (including both non-Hodgkin lymphoma and Hodgkin disease), and cancers of the cervix, anus, lung, liver, and throat;
• Human papillomaviruses (HPVs): Risk of nearly all cervical cancers and of penile cancers;
• Human T-cell leukemia/lymphoma virus type 1 (HTLV-1): Risk of adult T-cell leukemia/lymphoma (ATLL);
• Merkel cell polyomavirus (MCPyV): Risk of Merkel cell carcinoma;
• Helicobacter pylori (H. pylori): Risk of stomach cancer.

7.3.2 Clinical presentations in cancers

Cancer embraces a vast number and diversity of diseases that can occur in any organ system of the body. The pathological path for cancers is the abnormal proliferation of cells that differ in type, number, and behavior from the otherwise normal cells of the tissue or organ system in question. The growth of cells can be rapid or slow. Cell accumulation can be minuscule or massive. The ultimate clinical criterion for diagnosis is that the cells in question are distinctly different (microscopically and macroscopically) from the ordinary evolution, appearance, and accumulation of the cell type.


Thus, cancer differs from cellular hypertrophy and hyperplasia, where the cells involved are normal in appearance. Besides determining the nature and type of cancer in a diagnosis, one of the most critical considerations in the clinical presentation is “staging.” This is a determination of how advanced the cancer is relative to its spreading (metastasis) beyond its original location. To determine this, a number is assigned (I through IV) to characterize the degree of spread (from local to disseminated, i.e., other tissues and/or organ systems beyond the original site). The higher the number, the more the cancer has spread locally or throughout the body. This information is critical in determining a plan of treatment [79]. Besides a full examination, including comprehensive family and medical history, the first diagnostic tests in a cancer diagnosis include biopsy and imaging, ranging from photography through nuclear scanning and MRI. It is in this area that AI has contributed significantly to the management of cancers. The comprehensive discussion in Chapter 5 (AI Applications in Diagnostic Technologies and Services) covered all the major imaging tests and the AI applications in cancer diagnosis. AI is utilizing transcriptomics (the study of gene expression) and proteomics (the molecular biology of the expressed proteins) to establish diagnostic markers for a more accurate diagnosis of cancers. Using global gene expression data derived from epigenetic experiments, gene control experts are using ML algorithms to predict which genomic sequences are involved in cell type-specific regulation of gene expression. This enables the design of “synthetic promoters” to regulate gene activity and precisely control protein production [80]. The tremendous variation in cancer cells, even within the same disease, is one of the greatest challenges in cancer diagnosis. This is being overcome through AI and deep learning CNNs that can identify different types of cancer cells simply by scanning microscopic images. As was discussed back in Chapter 3 (The Science and Technologies of Artificial Intelligence), one of the greatest assets in AI is image recognition through GPU technologies. Scanning microscopy combining image recognition, GPUs, and CNNs is achieving higher accuracy than human judgment [81]. In a dermatological study using 13,000 photographs to identify malignant lesions (see Integumentary System, page 355), a trained CNN program yielded higher sensitivity and specificity than a panel of 21 board-certified dermatologists [82]. Cancers are identified by the type of cells involved and the area of the body where they originated. Metastasis relates to the spreading of the cells via the blood or lymphatic system from their point of origin to new sites of tumor development. The following terms define the general types of cancers: [83]

• Carcinoma: cancer that starts in the skin or the tissues that line other organs;
• Sarcoma: a cancer of connective tissues such as bones, muscles, cartilage, and blood vessels;
• Leukemia: cancer of bone marrow, which creates blood cells;
• Lymphoma and myeloma: cancers of the immune system.

There are over 185 types of cancers, according to the National Cancer Institute, which also lists the most common cancers (Table 7–4) [84].

Table 7–4 Common cancer types.

• Bladder Cancer
• Breast Cancer
• Colon and Rectal Cancer
• Endometrial Cancer
• Kidney Cancer
• Leukemia
• Liver Cancer
• Lung Cancer
• Melanoma
• Non-Hodgkin Lymphoma
• Pancreatic Cancer
• Prostate Cancer
• Thyroid Cancer

Source: Cancer types. National Cancer Institute of NIH; 2019.

Table 7–5 Top 10 cancers in America.

1. Skin cancer
2. Lung cancer
3. Prostate cancer
4. Breast cancer
5. Colorectal cancer
6. Kidney (renal) cancer
7. Bladder cancer
8. Non-Hodgkin’s lymphoma
9. Thyroid cancer
10. Endometrial cancer

Source: American Cancer Society Facts & Figures annual report for 2018.

Leading the list of the top 10 cancers in America [85] (Table 7–5) is skin cancer (see Integumentary System below), followed by lung cancer (see Respiratory System below, page 373), which is the leading cause of cancer deaths in America [86]. According to the Centers for Disease Control and Prevention, cancer is the second leading cause of death in America, second only to cardiovascular deaths (Table 7–6) [87]. Indeed, we are dealing with a devastating disease, which is being better understood through the applications of AI in research, treatments, and, hopefully, in its ultimate cure. On a positive note, a recent study (January 2020) showed a decline in the cancer death rate of 29% from 1991 to 2017. This included a 2.2% drop from 2016 to 2017, the most significant single-year reduction in cancer mortality ever reported, according to the American Cancer Society’s annual report on cancer rates and trends. The 26-year decline is driven primarily by a long-term decrease in death rates for the 4 major cancers: lung, colorectal, breast, and prostate [88]. Recent mortality declines were also dramatic for melanoma of the skin, and long-term rapid increases in liver cancer mortality have attenuated in women and stabilized in men.

Table 7–6 Top 10 causes of death in the U.S.

Cause of death | Number of deaths (2017) | Percent of total U.S. deaths
1. Heart disease | 647,457 | 23.5%
2. Cancer | 599,108 | 21.3%
3. Unintentional injuries | 169,936 | 6%
4. Chronic lower respiratory disease | 160,201 | 5.7%
5. Stroke and cerebrovascular diseases | 146,383 | 5.2%
6. Alzheimer’s disease | 121,404 | 4.3%
7. Diabetes | 83,564 | 3%
8. Influenza & pneumonia | 55,672 | 2%
9. Kidney disease | 50,633 | 1.8%
10. Suicide | 47,173 | 1.7%

Source: Health E-Stats. National Center for Health Statistics. Centers for Disease Control and Prevention (CDC); 2019.

“The accelerated drops ... we’re seeing are likely due, at least in part, to advances in cancer treatment over the past decade, such as immunotherapy,” said William G. Cance, M.D., chief medical and scientific officer for the American Cancer Society.

7.3.3 Current treatment approaches and AI applications in cancers

The current era of cancer treatments shares therapies common to immunology, genetics, and genomics. Used separately and in combination with chemotherapies, radiation therapies, and surgery, the immunotherapies and cellular genetic therapies (discussed previously) are being viewed as the hope of future successes in cancer treatment. Chemotherapy and radiation therapy differ from the immunotherapies and cellular genetic therapies in that the latter use the body’s own cells (and biologic agents) to treat itself. Conversely, chemotherapies utilize toxic chemical agents to target and destroy tumor and cancer cells throughout the body. Alternatively, radiation therapy targets and attempts to destroy tumors and cancer cells in specific areas of the body by using beams of intense energy. Radiation therapy most often uses X-rays, but protons or other types of energy are now also being used with considerable success [89]. Various forms of chemotherapeutic agents effectively disrupt the stages of irregular and rapid cancer cell development. Unfortunately, they are not specific to the cancer cells alone and tend to disrupt normal cell cycles as well, particularly the more susceptible cells of the gastrointestinal tract and hair follicles, thus causing nausea and hair loss. Radiation therapy is delivered by an external beam or by an internal source (usually solid) placed near the tumor. At high doses, radiation kills cancer cells or damages their DNA, which causes the cancer cells to stop dividing or die. This process can take days or weeks before DNA is damaged sufficiently to destroy the cancer cells. Subsequently, the cancer cells keep dying for weeks or months after radiation therapy ends.


Of course, the risk of damage from an external beam or internal radiation to non-cancer cells and tissue is a drawback of radiation therapies [90]. Targeted therapies for cancer are similar to the immunotherapies that use biologic agents for autoimmune diseases. The agents used differ from chemotherapeutic drugs in that they interfere with specific “molecular targets” that are involved in the growth and spread of cancer. Targeted cancer therapies are sometimes called “molecularly targeted therapies” and, because of their target specificity (similar to genetic therapies), “precision medicines.” As such, they are considered cornerstones of the “precision medicine” concept. Some of the FDA-approved targeted therapies include: [91]

• hormone therapies;
• signal transduction inhibitors;
• gene expression modulators;
• apoptosis inducers;
• angiogenesis inhibitors;
• immunotherapies;
• checkpoint inhibitors;
• toxin delivery molecules.

Regarding AI’s role in cancer treatment, deep learning algorithms are showing significant value in predicting cancer treatment toxicity. Recently, a CNN approach was used to predict side effects of polypharmacy combinations based on databases of protein-protein and drug-protein interactions [92]. This study led to the discovery of at least 5 novel drug-drug interaction predictions. The use of AI to predict radiotherapy toxicity has also generated significant interest over the past few years [93]. Basic neural networks, CNNs, and other ML methods have been explored, using clinical data to predict urinary and rectal toxicity resulting from prostate radiotherapy [94], hepatobiliary toxicity after liver radiotherapy [95], and rectal toxicity for patients receiving radiotherapy for cervical cancer [96].
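As a hedged illustration of the kind of model these toxicity-prediction studies describe, the sketch below trains a simple classifier on synthetic clinical features to predict a binary toxicity outcome. The feature names, data, and model choice are invented for illustration and do not come from the cited studies.

```python
# Illustrative radiotherapy-toxicity classifier on synthetic data.
# Feature names and data are invented; the cited studies used real clinical datasets.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Synthetic features: prescription dose (Gy), mean organ-at-risk dose (Gy), age, diabetes flag.
X = np.column_stack([
    rng.normal(74, 4, n),
    rng.normal(35, 8, n),
    rng.normal(68, 9, n),
    rng.integers(0, 2, n),
])
# Synthetic outcome loosely tied to organ-at-risk dose and diabetes, plus noise.
logits = 0.12 * (X[:, 1] - 35) + 0.5 * X[:, 3] + rng.normal(0, 1, n)
y = (logits > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("Toy AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))
```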

7.3.4 Research and future AI considerations in cancers As research progresses in many areas of cancer, AI continues to be one of the leading technologies being used to advance new information and understandings about the disease. Radiological imaging (see Chapter 5, page 129) is one of the more exciting areas of AI’s applications. CNNbased models have demonstrated accuracies in the 80% 95% range for lung nodule detection and showing significant promise for lung cancer screening [97]. Improvement in breast cancer screening with AI has also been an active area of investigation, resulting in a CNN algorithm able to detect breast malignancy with a sensitivity of 90% [98]. In the field of radiogenomics, where radiographic image analysis is used to predict underlying genotypic traits, CNNs are being used with MRI scans to diagnose low-grade gliomas.
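For readers unfamiliar with the architecture style these imaging studies describe, here is a minimal, untrained CNN sketch in Python/PyTorch operating on random tensors standing in for image patches. The layer sizes and two-class output are arbitrary choices for the sketch, not the architectures of the cited models.

```python
# Minimal CNN sketch for binary classification of image patches (e.g., nodule vs. no nodule).
# Sizes are illustrative; the cited studies used far larger models and real scans.
import torch
import torch.nn as nn

class TinyPatchCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),                     # two classes: benign vs. suspicious
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = TinyPatchCNN()
dummy_batch = torch.randn(8, 1, 64, 64)           # 8 grayscale 64x64 patches
print(model(dummy_batch).shape)                   # torch.Size([8, 2])
```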


Additionally, a radiomics signature using extracted features from CT data and an ML algorithm was able to predict underlying CD8 cell tumor infiltration and response to immunotherapy for a variety of advanced cancers [99]. In the area of translational oncology and complex proteomics data, deep learning neural networks are being used to predict protein structure [100], classify cells into distinct stages of mitosis [101], and even predict the future lineage of progenitor cells based on microscopy images [102]. In a clinical trial, deep learning artificial neural networks (ANNs) trained on transcriptomic response signatures to drugs accurately predicted the potential failure of over 200 sample drugs [103]. And another ANN predicted cancer cell sensitivity to therapeutics using a combination of genomic and chemical properties [104]. Despite the impressive accuracy deep learning algorithms provide in cancer research, an unanswered question remains: how did the algorithm make its prediction? Currently, the ability to determine the precise logic behind DL-based predictions is lacking. This is often referred to as the “black box” problem [105], mentioned earlier in Chapter 3 (The Science and Technologies of Artificial Intelligence, page 29). Further work is needed to better elucidate the decision-making logic of deep neural networks. AI is positioned to make “disruptive” changes in cancer care. It has shown promise in imaging diagnostics, treatment response evaluation, predicting clinical outcomes, drug development, and translational oncology. Overcoming issues of validation and of how algorithms arrive at their conclusions will be necessary to harness the full potential of AI in cancer diagnosis, treatment, and cures.
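One widely used, model-agnostic way to probe the “black box” problem described above is permutation importance: shuffle one input feature at a time and measure how much the model’s performance drops. The brief sketch below uses synthetic data and is not the method used in the cited studies; it only illustrates the general idea.

```python
# Permutation importance: a simple, model-agnostic probe of a "black box" model.
# Data and model below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
# Outcome depends mostly on features 0 and 2; the rest are noise.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=400) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
model = GradientBoostingClassifier(random_state=1).fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=1)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {score:.3f}")
```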

7.4 Vascular (cardiovascular and cerebrovascular) disorders

Whereas in our previous cancer discussion we spoke about “...dealing with a devastating disease,” we also identified it as “...the second leading cause of all deaths in America.” Cardiovascular disease (CVD) is by far number 1, with more than double the cancer mortality (43.2% versus 21.3% for cancer) [106,87]. One of the greatest contributions AI will make in health care will be its applications in the research and advancement of early diagnosis, treatment modalities, and ultimate prevention of life-threatening cardiac and vascular abnormalities. As you will read in this section, we’re well on our way. Some of these sobering statistics about cardiovascular disease will sensitize you to the relationship this disease category has to health care in general and, personally, to you and your loved ones: [106,107]

• CVD remains the leading cause of death in the United States, responsible for 840,768 deaths (635,260 cardiac) in 2016. An estimated 17.9 million people worldwide died from CVDs in 2016;
• Of the 17 million premature deaths (under the age of 70) due to noncommunicable diseases in 2015, 37% were caused by CVDs;


• Approximately every 40 seconds, an American will have a myocardial infarction (MI). The average age of first myocardial infarction is 65.6 years for men and 72.0 years for women;
• In the United States in 2019, coronary events are expected to occur in about 1,055,000 individuals, including 720,000 new and 335,000 recurrent coronary events;
• In 2017, emergency medical services-assessed out-of-hospital cardiac arrest occurred in an estimated 356,461 Americans; emergency medical services treatment was initiated in 52%;
• Stroke (“cerebrovascular accident”) accounts for 16.9% of vascular deaths. Every 40 seconds, on average, an American will have a stroke. About 795,000 Americans have a new or recurrent stroke annually. About 90% of stroke risk is due to modifiable risk factors; 74% is due to behavioral risk factors (Table 7–7).

7.4.1 Description and etiology of cardio- and cerebrovascular disorders

Of course, there have been thousands of books written on the structures and function of the cardiovascular (CV) and cerebrovascular systems and the disorders and diseases (CVDs) that affect them. The human CV system and its disorders are as complex as any biophysical and physiological system in our body. Structurally, the simplest analogy is to a basic plumbing system, with the fluid dynamics (the blood and lymph fluids), the pipes (blood vessels), and the central pump (the heart). The disorders of the system (CVDs) are any adverse effects on any one, or any combination, of the system’s structures and/or functions. As we briefly thumbnail the structures, functions, and abnormalities of the CV system in outline form, use the plumbing metaphor to keep the otherwise complex CV system simple. But keep good old Albert’s maxim in mind: “We must keep things as simple as possible, but not simpler.”

Table 7–7 Percent for CVD behavior/risk factors.

Behavior/risk factor | Prevalence
Smoking, adults | 15.5%
Obesity, adults | 39.6%
Obesity, youth | 18.5%
Low-density lipoprotein cholesterol ≥130 mg/dL, adults | 28.5%
Hypertension, adults* | 45.6%
Diabetes mellitus, diagnosed | 9.8%
Diabetes mellitus, undiagnosed | 3.7%
Chronic kidney disease | 14.8%
Recommended exercise (2008 guidelines) | 22.5%

Source: Heart Disease and Stroke Statistics. American College of Cardiology; 2019.
*Hypertension defined by the 2017 ACC/AHA/AAPA/ABC/ACPM/AGS/APhA/ASH/ASPC/NMA/PCNA Guideline for the Prevention, Detection, Evaluation, and Management of High Blood Pressure in Adults.


7.4.1.1 Structures of the cardiovascular system

1. Blood: Made up of about 45% solids (cells) and 55% fluids (plasma). Types of blood cells include:
   A. Erythrocytes
   B. Leukocytes:
      1. Three types of granular leukocytes (eosinophils, neutrophils, and basophils)
      2. Three types of non-granular leukocytes (monocytes, T-cell lymphocytes, and B-cell lymphocytes)
   C. Thrombocytes
2. Blood vessels:
   A. Arteries
   B. Capillaries
   C. Venules
3. Heart:
   A. The muscular organ that beats over 100,000 times a day to pump blood throughout the body’s 60,000 miles of blood vessels;
   B. Three layers plus a protective membrane:
      1. Endocardium (inner layer);
      2. Myocardium (middle layer);
      3. Epicardium (outer layer);
      4. Pericardium (the protective membrane surrounding the heart).
   C. Four chambers in the heart, divided as “right side” and “left side”:
      1. The upper half of your heart has 2 atria
      2. The lower half of your heart has 2 ventricles

7.4.1.2 Structures of the cerebrovascular system

1. Internal carotid arteries (ICA)
2. Vertebral arteries
3. Circle of Willis

7.4.1.3 Diseases and disorders of the cardiovascular system

1. Diseases and disorders of the blood [108]:
   A. Anemias:
      1. Iron-deficiency anemia;
      2. Vitamin-deficiency anemia;
      3. Anemia and pregnancy;
      4. Aplastic anemia;
      5. Hemolytic anemia;
      6. Sickle cell anemia;
      7. Anemia caused by other diseases
   B. Bleeding disorders:


      1. Hemophilia: Hereditary disease (found on the X chromosome)
      2. von Willebrand disease (lack of a blood clotting factor)
   C. Blood cancers:
      1. Leukemia (production of abnormal white blood cells);
      2. Lymphoma (abnormal lymphocytes become lymphoma cells);
      3. Myeloma (cancer of the plasma cells)
   D. Blood clots:
      1. Embolus (migrates through the vessel until it lodges at a bifurcation);
      2. Thrombus (forms in a vein and obstructs flow)
2. Diseases and disorders of blood vessels [109]:
   A. Arteries:
      1. Abdominal aortic aneurysm: Wall of the abdominal aorta weakens, producing a balloon-like dilation;
      2. Aneurysm: Wall of any artery may weaken, producing a balloon-like dilation;
      3. Aortic dissection: Tear between the innermost and middle layers of the aorta;
      4. Arteriosclerosis: Thickening and hardening of the walls of arteries from aging;
      5. Atherosclerosis: Buildup of fat, cholesterol, calcium, and other substances;
      6. Arteriovenous malformation: An abnormal tangle of blood vessels connecting arteries and veins;
      7. Coronary artery disease: Disease of the arteries supplying the heart muscle;
      8. Giant cell arteritis: Severe inflammation in affected arteries:
         a. Temporal arteritis and
         b. Takayasu’s arteritis (inflammation of the aortic arch and branches);
      9. Hyperlipidemia: Acquired or genetic disorders that result in a high level of lipids;
      10. Lymphedema: Accumulation of lymph fluid in the soft tissues;
      11. Peripheral aneurysm: Aneurysm in the abdomen or legs;
      12. Peripheral arterial disease;
      13. Raynaud’s disease: A disorder that causes the blood vessels to narrow when you are cold or feeling stressed;
      14. Renovascular conditions;
      15. Vascular trauma;
      16. Visceral artery aneurysm
   B. Venous (vein) disorders and diseases:
      1. Chronic venous insufficiency (post-phlebitic syndrome);
      2. Deep vein thrombosis (DVT);
      3. Varicose veins;
      4. Vasculitis
3. Heart disorders and diseases:
   A. Right-side heart conditions [110]:
      1. “Cor pulmonale,” meaning caused primarily by lung disorders;
      2. Pulmonary hypertension;
      3. Valvular heart disease;


      4. Right ventricular myocardial infarction
   B. Left-sided heart conditions [111]:
      1. The most common form of heart failure;
      2. Stages of heart failure;
      3. Types of left-sided heart failure:
         a. Systolic heart failure;
         b. Diastolic heart failure
   C. Congenital heart disease

7.4.1.4 Diseases and disorders of the cerebrovascular system

1. Aneurysm: Wall of any artery may weaken, producing a balloon-like dilation;
2. Carotid artery disease: Atherosclerotic plaque and clots causing blockage;
3. Cerebrovascular disease: Disease of the arterial vessels supplying the brain;
4. Stroke: Blood supply to a part of the brain is suddenly interrupted:
   A. Hemorrhagic strokes (20%) occur with bleeding in the brain;
   B. Ischemic strokes (80%), often preceded by a transient ischemic attack (TIA) or “mini-stroke”
5. Arteriovenous malformation

7.4.2 Current treatment approaches and AI applications in vascular disorders

(General current treatment approaches for each of the major cardiovascular and cerebrovascular disorders, plus 3 current AI applications for each specific category.)

1. Common diseases and disorders of the blood:
   A. Current treatment approaches for anemias [112]:
      1. Treatments are based on the diagnosed cause of the anemia;
      2. They may include dietary changes or supplements, medicines, procedures, or surgery to treat blood loss.
   B. Current AI applications in anemia treatment:
      1. AI used for the prediction of hemoglobin to guide the selection of erythropoiesis-stimulating agent dose results in improved anemia management. The benefits of using these modeling techniques appear to be a decrease in the number of transfusions needed [113].
      2. Dosis, a San Francisco, CA-based developer of the Strategic Anemia Advisor (SAA) decision support tool, announced that it had expanded its AI-powered personalized dosing platform to support 50 dialysis clinics and 5000 patients across the country [114].
      3. A smartphone-based AI algorithm, developed by Atlanta researchers, can accurately pick up signs of anemia just from the coloration of people’s nailbeds, the team reports in Nature Communications [115].
   C. Current treatments for bleeding disorders [116]:


      1. Hemophilia and von Willebrand disease are hereditary diseases requiring patient education and genetic counseling.
      2. Current AI applications in bleeding disorder treatment:
         a. Several approaches have been developed using well-known machine learning algorithms, demonstrating accuracy and robustness on a thrombogram (predictor of bleeding episodes) database using numerical simulations. The results obtained (95.57% accuracy using a cascade of an SVM and MLPs to classify all categories, and 98.10% accuracy) demonstrate the ability to diagnose hemophilia efficiently [117].
         b. Case-Based Reasoning is a method used to solve a new case by adapting the findings of previous cases that are similar to the new case. This expert system can help families identify clinical and hereditary hemophilia. The system provides solutions and early diagnosis. These expert system applications are web-based, designed using the PHP programming language [118].
         c. The identification of the sources of von Willebrand Factor (VWF) heritability has been the focus of trait-mapping studies, including AI genome-wide association studies and linkage analyses, as well as hypothesis-driven research studies. The identification of genetic modifiers of plasma VWF levels may allow for better molecular diagnosis of type 1 VWD and enable the identification of individuals at increased risk for thrombosis. Validation has led to novel insights into the life cycle of VWF and the pathogenesis of quantitative VWF abnormalities [119].
   D. Current treatments for blood cancers:
      1. Treatment modalities for leukemia, lymphoma, and myeloma [108]:
         a. Chemotherapy (primary treatment modality);
         b. Radiation therapy;
         c. Biological therapies;
         d. Targeted therapies;
         e. Stem cell therapies (CRISPR-Cas9 and CAR-T)
      2. Current AI applications in blood cancers:
         a. A study was conducted to compare hematological tumor variant interpretation using an artificial intelligence decision-support system, Watson for Genomics (WfG), with expertly guided manual curation. Comparison of manual and WfG interpretation of 10 randomly selected cases (including 23 acute myeloid leukemia and 5 multiple myeloma cases) yielded 90% (9/10) concordance and identification of 9 clinically actionable variants (33%) not found in manual interpretation [120].
         b. Paralleling the progress achieved in providing better care for myeloma patients, machine learning via AI algorithms and big data analysis has advanced remarkably. AI has rapidly diffused into various health sciences and cancer care [121].


         c. Deep learning with a convolutional neural network (CNN) algorithm was used to build a lymphoma diagnostic model for 4 diagnostic categories: (1) benign lymph node, (2) diffuse large B-cell lymphoma, (3) Burkitt lymphoma, and (4) small lymphocytic lymphoma. The test results showed excellent diagnostic accuracy at 95% for image-by-image prediction and 100% for set-by-set prediction [122].
   E. Current treatments for blood clots [123]:
      1. Medication: Anticoagulants, thrombolytics to dissolve clots;
      2. Compression stockings;
      3. Surgery and procedures:
         a. Catheter-directed thrombolysis procedure;
         b. Thrombectomy surgery;
         c. Stents;
         d. Vena cava filters
      4. Current AI applications in blood clot treatment:
         a. A machine learning algorithm has been developed that can help physicians in deciding how to treat a patient’s stroke. The AI-driven technology assists doctors in determining whether an ischemic stroke patient would benefit from undergoing an endovascular procedure that removes the blood clot [124].
         b. Rapid detection of a large vessel occlusion (LVO) in acute ischemic stroke is beneficial to accelerate therapeutic management. A convolutional neural network (CNN) was developed that automatically detects and segments thrombi on non-contrast CT (NCCT) using a patch-based approach. AI-based LVO detection on NCCT showed comparable results to expert observers. This supportive tool could lead to earlier detection of LVO in acute stroke situations [125].
         c. Attempts have been made to utilize machine learning in the identification of vulnerable plaque using invasive imaging. Hundreds of different plaque features were measured and evaluated in each image. Different models using various machine learning techniques were derived and validated using a support vector machine (SVM) and an artificial neural network (ANN). The overall prediction accuracy for the OCT-TFCA feature was over 80% [126].
2. Treatments for blood vessel or vascular diseases and disorders [109]:
   A. Current treatments for arterial disorders and disease:
      1. Treatment goals:
         a. Relieve the pain of intermittent claudication;
         b. Improve exercise tolerance by increasing the walking distance before the onset of claudication;
         c. Prevent critical artery occlusion that can lead to foot ulcers, gangrene, and amputation;
         d. Prevent heart attacks and strokes.
      2. Medications:
         a. Antiplatelet medications (e.g., aspirin, Plavix);


         b. Anticoagulant medications (e.g., heparin, warfarin);
         c. Cholesterol-lowering drugs (statins);
         d. Drugs to control hypertension
      3. Current AI applications for arterial disorders and diseases:
         a. An automated methodology has been developed that is capable of identifying the presence of carotid disease from Heart Rate Variability analysis of electrocardiographic signals. A Correlation-based Feature Selector for data reduction and Artificial Neural Networks (ANNs) are used to distinguish between pathological and healthy subjects. The resulting accuracy of the ANN methodology is more effective than any of the main algorithms existing in the literature [127].
         b. The ability to detect culprit coronary arteries was compared between experienced nuclear cardiologists and an ANN. The ANN was 79.1%, 89.8%, and 89.3% accurate for each coronary artery. Diagnostic accuracy was comparable between the ANN and experienced nuclear medicine physicians [128].
         c. The unique nature of interventional cardiology makes it an ideal target for the development of AI-based technologies designed to improve real-time clinical decision making, streamline workflow in the catheterization laboratory, and standardize catheter-based procedures through advanced robotics [129].
   B. Current treatments for venous disorders and disease [130]:
      1. Improving blood flow in your leg veins (elevation);
      2. Wearing compression stockings may also help;
      3. Regular exercise can also improve blood flow;
      4. Medicines that increase blood flow through the vessels;
      5. Endovenous laser ablation or radiofrequency ablation (RFA);
      6. Sclerotherapy;
      7. Surgery (ligation or vein stripping)
      8. Current AI applications for venous disorders and diseases:
         a. A study led by Stanford University School of Medicine researchers that examined the genes of more than 400,000 people found that “genes that predict a person’s height may be at the root of this link between height and varicose veins and may provide clues for treating the condition.” The researchers included 2716 predictors of varicose veins in their machine-learning algorithm: “Then, we let the algorithms find the strongest predictors of varicose veins” [131].
         b. An algorithm was created to classify vessels from chest CT images into arteries and veins. The proposed method outperforms state-of-the-art methods, paving the way for future use of 3-D CNNs for artery/vein classification in CT images [132].
         c. To assist doctors with less experience in diagnosing rare disorders like cerebral venous sinus thrombosis (CVST), a study was conducted using Computer-Aided Diagnosis (CAD) with a rete algorithm (a pattern matching algorithm) to speed up the rule matching process. The CAD system has been tested successfully on the dataset of a large medical university. It can also be used on the Web [133].


3. Treatments for heart disorders and diseases:
   A. Treatments for right-side heart conditions [110]:
      1. Identify and treat the underlying cause:
         a. Pulmonary hypertension:
            i. Beta-blockers;
            ii. Diuretics;
            iii. Angiotensin-converting enzyme (ACE) inhibitors;
            iv. Angiotensin II receptor blockers (ARBs)
         b. Valvular heart disease (repair or replace the valve);
         c. Right ventricular myocardial infarction (“heart attack”):
            i. Administering drugs to stabilize the heart muscle (including morphine, beta-blockers, and statin medications);
            ii. Immediate efforts to reopen the blocked artery.
      2. Current AI applications for right-side heart conditions:
         a. According to a study presented at the 2019 International Conference on Nuclear Cardiology and Cardiac CT (ICNC), an algorithm has learned how to identify imaging patterns correlating to heart attack and death in cardiac patients and can predict the occurrence of these events with superior accuracy to human doctors [134].
         b. A deep neural network (DNN) has been developed to classify 12 rhythm classes using 91,232 single-lead ECGs from 53,549 patients who used a single-lead ambulatory ECG monitoring device. When validated against an independent test dataset, the DNN achieved an average area under the receiver operating characteristic curve (ROC) of 0.97. The average F1 score, which is the harmonic mean of the positive predictive value and sensitivity (a small worked example of this calculation appears after this outline), was 0.837 for the DNN, exceeding that of average cardiologists (0.780) [135].
         c. The diagnosis of congestive heart failure (CHF) is mostly based on clinical assessment of symptoms, signs, imaging findings, and invasive intracardiac pressure measurement. The electrocardiogram (ECG) is neither sensitive nor specific for the diagnosis of CHF. Recognizing the multiparametric patterns of ECG signal aberrations that might occur in CHF could be expedited and optimized using machine learning. The proposed approach was tested on 4 different sets of normal and CHF ECG signals obtained from established public databases. The system can reduce the time requirement and error rate associated with the manual reading of significant ECG signals [136].
   B. Treatment for left-side heart conditions:
      1. Treatment similar to right ventricular myocardial infarction
      2. Current AI applications for left-side heart conditions:
         a. When used with a routine heart scan, machine learning does better than conventional risk models at predicting heart attacks and other cardiac events, according to a study published in the journal Radiology.


            The author stated, “Once you use a tool like this to help see that someone’s at risk, then you can get the person on statins or get their glucose under control, get them off smoking, get their hypertension controlled, because those are the big, modifiable risk factors” [137].
         b. One of the reasons for high mortality rates with heart abnormalities is delayed diagnosis based on ECG records. An algorithm, the Intelligent Heart Diseases Diagnosis Algorithm (IHDDA), has been developed, which reads heart signals (e.g., ECG graphs) and reaches a diagnosis in a fraction of a second. The proposed algorithm was executed on a supercomputing cluster and tested with 300 patients with different heart problems. The results show that the IHDDA identifies heart problems in 1.5 seconds with an accuracy of 97% [138].
         c. Using machine learning models trained on targeted proteomics, researchers defined 2 complementary protein signatures: one for identification of patients with high-risk plaques and one for identification of patients with an absence of CAD. Both biomarker subsets were superior to generally available clinical characteristics and conventional biomarkers in predicting the presence of high-risk plaque or the absence of coronary atherosclerosis [139].
   C. Treatment for congenital heart defects or disease:
      1. Depending on symptoms (ranging from observation to heart transplant)
      2. Current AI applications for congenital heart conditions:
         a. Cardiac MRI (CMR) allows non-invasive, non-ionizing assessment of cardiac function and anatomy in patients with congenital heart disease (CHD). The utility of CMR as a non-invasive imaging tool for CHD evaluation has been growing exponentially over the past decade. Algorithms based on artificial intelligence, and in particular deep learning, have rapidly become a methodology of choice for analyzing CMR [140].
         b. Machine learning algorithms trained on large datasets to estimate prognosis have the potential to guide therapy in adult congenital heart disease (ACHD). Due to the mostly automated process involved, these DL algorithms can easily be scaled to multi-institutional datasets to improve accuracy further and ultimately serve as online decision-making tools. A nationwide risk stratification and therapy-guiding model could be provided to all health care professionals, likely improving care for ACHD patients and helping to avoid potentially catastrophic treatment errors in this vulnerable patient population [141].
         c. The precision delivery of health services uses electronic health data to establish personalized care pathways. Applied to adult congenital heart disease (ACHD) patients, this promotes flexible, dynamic care recommendations with a more judicious allocation of resources, as risk clusters vary over the lifespan [142].


4. Cerebrovascular diseases:
   A. Transient ischemic attack (TIA):
      1. TIAs are brief episodes, usually involving numbness, weakness, or blindness, which are frequent warnings of an imminent thrombotic stroke;
      2. Symptoms and signs typically last less than an hour;
      3. In 90% of cases, symptoms last less than 10 minutes;
      4. Multiple TIAs usually indicate a critical narrowing of the lumen of the involved artery by an atherosclerotic plaque with superimposed thrombus;
      5. TIAs precede permanent occlusion (stroke) in about 50% of cases.
   B. Stroke:
      1. Infarction of the brain in an area supplied by a cerebral artery;
      2. Tends to produce a recognizable clinical syndrome;
      3. The onset of symptoms is sudden, within seconds to minutes;
      4. Gradual improvement may occur beginning after a few days;
      5. Specific symptoms depend on the site of infarction;
      6. Cardiac mural thrombi are among the most common sources of stroke;
      7. Myocardial infarct, valvular disease, and atrial fibrillation are important predisposing factors;
      8. Most patients survive, and marked clinical improvement often occurs.
   C. Current AI applications for cerebrovascular diseases:
      1. An automated algorithm based on a 3D convolutional neural network model to detect cerebral aneurysms (CAs) was trained to evaluate final ischemic accident detection and aneurysm measurements. The algorithm was able to identify and measure ruptured and unruptured CAs at an AUC of 0.93 for unruptured and 0.94 for ruptured, demonstrating that a deep learning AI algorithm can achieve clinically useful levels of accuracy for clinical decision support [143].
      2. Time-of-Flight (TOF) magnetic resonance angiography (MRA) is commonly used for grading cerebrovascular diseases. Analysis of cerebral arteries in MRA TOF is a challenging and time-consuming task that would benefit from automation. A 3D convolutional neural network (CNN) model was trained and tested on a dataset consisting of 82 subjects. The deep learning model produced excellent accuracy in segmenting intracranial arteries in MRA TOF. The model can potentially increase the efficiency and speed of grading cerebrovascular diseases [144].
      3. Machine learning (ML) algorithms have significantly greater sensitivities than human readers (75.8% for logistic regression, P = 0.020; 72.7% for support vector machine, P = 0.033; 75.8% for random forest, P = 0.013) in detecting patients within 4.5 hours of CVA symptom onset, but their specificities are comparable (82.6% for logistic regression, P = 0.157; 82.6% for support vector machine, P = 0.157; 82.6% for random forest, P = 0.157). ML algorithms using multiple magnetic resonance imaging features are even more sensitive than human readings in identifying patients with stroke within the time window for acute thrombolysis [145].
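As referenced in the rhythm-classification item above, the F1 score is the harmonic mean of positive predictive value (PPV) and sensitivity; sensitivity and specificity, quoted throughout item C.3, come from the same confusion-matrix counts. A short worked example in Python, using an invented confusion matrix rather than data from the cited studies:

```python
# Worked example: sensitivity, specificity, PPV, and F1 from a confusion matrix,
# matching the definition used above (F1 = harmonic mean of PPV and sensitivity).
# The counts below are invented for illustration.

tp, fp, fn, tn = 80, 15, 20, 185

sensitivity = tp / (tp + fn)          # a.k.a. recall
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                  # positive predictive value (precision)
f1 = 2 * ppv * sensitivity / (ppv + sensitivity)

print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} "
      f"PPV={ppv:.3f} F1={f1:.3f}")
```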


7.4.3 Research and future AI considerations in vascular care In the previous vascular section on “Current treatment approaches to vascular disorders,” I described treatments for each of the major CVDs and provided literature citations regarding current AI applications for each condition. But those citations hardly scratch the surface of the current AI research and future considerations in the field. To suggest tens of thousands of articles in the CVD databases on the web would be a conservative estimate. So how best to summarize the enormous volume of information being generated in such a relevant category? To stay within the spirit of our guidebook, let’s use a structured AI directed discussion on current research and future AI influences relating to (1) CVD diagnostic and screening considerations; and (2) emerging AI treatment and prevention applications.

7.4.3.1 Diagnostic and screening considerations in vascular care AI projects that will enhance CVD diagnosis and screening will most likely be conducted by large technology corporations such as Apple, Google, and Microsoft. Apple and Stanford launched a project called “Apple Heart Study” with the help of machine learning. The latest Apple watch series 4 has a new FDA approved transducer that measures ECG. In early 2018, scientists from Verily (Google Life Sciences-Alphabet Inc.‘s research organization) used machine learning to assess the risk of a patient suffering from cardiovascular disease. They developed an algorithm that analyzes scanned images of patients’ eyes from which can be accurately inferred the patient’s age, blood pressure, and smoking status. This allows researchers to predict the patient’s risk of cardiovascular disease. To train the algorithm, they used machine learning to analyze the medical data of nearly 300,000 patients. The accuracy of the algorithm in identifying patients with cardiovascular disease was as high as 70%, close to the accuracy of traditional cardiovascular disease risk prediction methods like the measurement of blood cholesterol levels [146]. Microsoft has begun writing algorithms to help clinicians in predicting the risk factors for CVD. In 2017, the FDA gave clearance for the use of a cardiac MRI analysis software called Cardio DL (from Arterys), which uses deep learning for medical image analysis and provides automated ventricular segmentation for traditional cardiac MRI scans. By using cloud computing, Cardio DL can automatically complete image processing in less than 10 seconds. Further, it can outline the ventricular epicardium and subcardium, to accurately evaluate the function of the ventricle [147]. Siemens has built a database of more than 250 million CVD related images, reports, surgical data, and other materials for training its AI calculation programs. A team of cardiologists at the University Hospital of Heidelberg conducted a 6-year trial using data from patients with heart failure to generate 100 digitally simulated hearts. They used AI to predict the prognosis of these patients and then compared the anticipated results with the actual situation of the patients [148].
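To make the retinal-image idea above more tangible, here is a minimal sketch, assuming TensorFlow/Keras, of the kind of convolutional network such work relies on: a small CNN mapping fundus images to a single cardiovascular-risk output. The architecture, image size, and output are placeholders, not the published Verily model.

# Hedged sketch of a fundus-image-to-risk CNN (assumed TensorFlow/Keras stack);
# layer sizes, image size, and the single risk output are illustrative placeholders.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_fundus_risk_model(image_size=(224, 224, 3)):
    model = models.Sequential([
        layers.Input(shape=image_size),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # predicted probability of a cardiovascular event
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])
    return model

model = build_fundus_risk_model()
model.summary()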


7.4.3.2 Emerging AI applications in vascular treatment and prevention The concept of computational modeling in medicine uses computers to simulate and study the behavior of the human body. This concept enables the simulation of a personalized heart by integrating multiple diagnostic data obtained from clinical modalities. It provides a platform for practical evaluation and optimization of treatments [149]. Computer modeling is based on theories rather than data-driven patterns and is usually deterministic. The concept that predicting unknown results using data at hand is common to both computer modeling and machine learning. Synthetic patients and data are artificially manufactured, allowing the ability to track a disease course. These are realistic, but not real, data created by analyzing existing data using machine learning techniques [150]. These areas will be continually developed by AI in the future and will contribute to the realization of personalized precision medicine. An unsupervised machine learning clustering algorithm of clinical parameters and imaging data has shown that responders on cardiac resynchronization therapy may be predicted by clustering them into subgroups [151]. A support vector machine approach has been used to distinguish between different phenotypes of coronary plaque, which could help identify high-risk plaques for which therapy should be intensified in order to prevent major adverse cardiac events [152]. Clustering analyses have been used on demographic, clinical, laboratory, imaging, and medication data from heart failure patients to identify different subgroups thereof [153]. These subgroups were shown to develop different clinical outcomes and response to treatment. In conjunction with support vector machines and random forests, clustering may facilitate further development of precision medicine, focusing on an ontology-based classification approach [154].
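The clustering approach described above can be illustrated with a short sketch: unsupervised grouping of heart-failure patients into phenogroups with scikit-learn's KMeans. The feature names and data are invented for illustration and do not come from the cited studies.

# Hedged sketch of patient clustering into phenogroups; features and data are invented.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# hypothetical columns: age, ejection fraction, BNP, creatinine, QRS duration
X = rng.normal(size=(500, 5))

X_scaled = StandardScaler().fit_transform(X)            # put features on a common scale
clusters = KMeans(n_clusters=4, n_init=10, random_state=2).fit_predict(X_scaled)
for k in range(4):
    print(f"phenogroup {k}: {np.sum(clusters == k)} patients")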

7.5 Diabetes (type 1 and 2) The disease category, diabetes, refers to an abnormality of the pancreas (see Endocrine system below), causing improper storage and use of glucose, essential for energy in the body. The glucose (sugar) collects in the blood (hyperglycemia) and does not reach the cells that need it, creating the potential for serious complications. Both types of diabetes (Types 1 and 2) can lead to complications, including but not limited to cardiovascular and cerebrovascular disease, kidney disease, vision loss, neurological conditions, and damage to blood vessels and organs. The estimated number of people over 18 years of age in the United States with diagnosed and undiagnosed diabetes is 30.2 million [155]. The figure represents between 27.9% and 32.7% of the population. Of equal, if not greater concern, is the amount of prediabetes (abnormally high blood sugar levels [156]) at 84.1 million adults aged 18 years or older (33.9% of the adult US population) and 23.1 million adults aged 65 years or older [155]. Type 1 diabetes can occur at any age; however, it usually develops by early adulthood, most often starting in adolescence [157]. Among the leading causes of death in the U.S., it ranks seventh [87].


Blood glucose is the body's main source of energy and comes from the food you eat. Insulin, a hormone made by the pancreas, helps glucose get into your cells to be used for energy. Sometimes the body doesn't make enough insulin or doesn't use insulin well, causing glucose to stay in the blood and not reach the cells. Another hormone, glucagon, works with insulin to control blood glucose levels. The third form of diabetes, known as gestational diabetes, is a type of diabetes that can develop during pregnancy in women who don't already have diabetes. Every year, 2%–10% of pregnancies in the United States are affected by gestational diabetes [158].

7.5.1 Description and etiology of diabetes (type 1 and 2) 7.5.1.1 Type 1 diabetes The cause of type 1 diabetes is autoimmune destruction of the pancreatic beta cells, which is caused by unknown factors. Scientists are looking for cures for type 1 diabetes, such as replacing the pancreas or some of its cells. Risk factors for type 1 diabetes are family history, introducing certain foods too soon (fruit) or too late (oats/rice) to babies and exposure to toxins. Diabetic ketoacidosis also referred to as simply ketoacidosis or DKA is a serious and even life-threatening complication of type 1 diabetes. Blood tests diagnose type 1 diabetes. The level of blood sugar is measured, and then levels of insulin and antibodies can be measured to confirm type 1 vs. type 2 diabetes. Complications of type 1 diabetes are kidney disease, eye problems, heart disease, and nerve problems (diabetic neuropathy) such as loss of feeling in the feet. Poor wound healing can also be a complication of type 1 diabetes. There is no means of preventing Type 1 diabetes. The prognosis or life expectancy for a person with type 1 diabetes is good if blood sugar levels are kept within a healthy range. The life expectancy for someone with type 1 diabetes traditionally has been about 11 years less than average, but that is changing as the prevention of complications improves and technology such as insulin pumps makes it easier for people to keep their blood sugar in a healthy range [165].

7.5.1.2 Type 2 diabetes (mellitus) In Type 2 diabetes mellitus (sometimes referred to as adult-onset, but increasingly found in children due to increased obesity), cells don’t normally respond to insulin (insulin resistance). The pancreas makes more insulin to try to get cells to respond, but eventually the pancreas can’t keep up, and blood sugar rises. This creates the condition of “prediabetes” and, subsequently, type 2 diabetes. High blood sugar is damaging to the body and can cause other serious health problems, such as heart disease, vision loss, and kidney disease [159]. Lack of insulin production may have to do with cell dysfunction in the pancreas or with cell signaling and regulation. In some cases, the liver produces too much glucose. There may also be a genetic predisposition to developing type 2 diabetes. A genetic predisposition to obesity can increase the risk of insulin resistance and type 2 diabetes. There could also be an environmental trigger(s). Most likely, it’s a combination of factors that increases the risk of type 2 diabetes [160].


7.5.2 Clinical presentations in diabetes (type 1 and 2)
7.5.2.1 Type 1 diabetes
Type 1 diabetes is usually diagnosed through a series of tests. Some can be conducted quickly, while others require hours of preparation or monitoring. Symptoms often develop quickly. People are diagnosed if they meet 1 of the following criteria [161]:
• fasting blood sugar >126 mg/dL on 2 separate tests;
• random blood sugar >200 mg/dL, along with symptoms of diabetes;
• hemoglobin A1c >6.5% on 2 separate tests.
These criteria are also used to diagnose type 2 diabetes. People with type 1 diabetes are sometimes misdiagnosed as having type 2. A doctor may not realize you've been misdiagnosed until you begin developing complications or worsening symptoms despite treatment. When blood sugar gets so high that diabetic ketoacidosis occurs, you become very ill. This is often the reason people end up in the hospital or their doctor's office, and type 1 diabetes is then diagnosed.
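As a simple illustration of how the thresholds above combine, the following sketch encodes the listed criteria as a function. The threshold values come from the criteria cited in the text; the function itself is illustrative, not a clinical tool.

# Illustration of the diagnostic thresholds listed above (not a clinical tool).
def meets_diabetes_criteria(fasting_mg_dl=None, random_mg_dl=None,
                            symptomatic=False, hba1c_pct=None,
                            confirmed_on_second_test=False):
    """Return True if any single criterion is met.

    Fasting glucose >126 mg/dL and HbA1c >6.5% require confirmation on a
    second test; random glucose >200 mg/dL counts only with diabetes symptoms.
    """
    if fasting_mg_dl is not None and fasting_mg_dl > 126 and confirmed_on_second_test:
        return True
    if random_mg_dl is not None and random_mg_dl > 200 and symptomatic:
        return True
    if hba1c_pct is not None and hba1c_pct > 6.5 and confirmed_on_second_test:
        return True
    return False

print(meets_diabetes_criteria(fasting_mg_dl=140, confirmed_on_second_test=True))  # True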

7.5.2.2 Type 2 diabetes mellitus
Type 2 diabetes can develop slowly, and the symptoms may be mild and easy to dismiss at first. The early symptoms may include [162]:
• constant hunger;
• a lack of energy;
• fatigue;
• weight loss;
• excessive thirst;
• frequent urination;
• dry mouth;
• itchy skin;
• blurry vision.
As the disease progresses, the symptoms become more severe and potentially dangerous. If your blood glucose levels have been high for a long time, the symptoms can include:
• yeast infections;
• slow-healing cuts or sores;
• dark patches on your skin, a condition known as acanthosis nigricans;
• foot pain;
• feelings of numbness in your extremities, or neuropathy.

7.5.3 Current treatment approaches to diabetes (type 1 and 2) 7.5.3.1 Type 1 diabetes [163] Since the body can’t produce insulin in Type 1 diabetes, regular insulin injections are needed to keep the glucose levels normal. Insulin injections come in several different forms,


each working slightly differently. Some last up to a whole day (long-acting), some up to 8 hours (short-acting), and some work quickly but don’t last very long (rapid-acting). There are alternatives to insulin injections suitable for only a small number of patients. They include: • Insulin pump therapy where a small device continuously pumps insulin (at a rate you control) into your bloodstream through a needle that’s inserted under the skin; • Islet cell transplantation where healthy insulin-producing cells from the pancreas of a deceased donor are implanted into the pancreas of someone with type 1 diabetes; • A complete pancreas transplant.

7.5.3.2 Type 2 diabetes [164]
Type 2 diabetes usually gets worse over time. Making lifestyle changes, such as adjusting your diet and exercising more, may help you control your blood glucose levels at first, but may not be enough in the long term. You may eventually need to take medication by mouth or injection (insulin) to help control your blood glucose levels. Medical options for type 2 diabetes include:
• Metformin: First-line medicine to reduce the amount of glucose your liver releases into your bloodstream;
• Sulphonylureas: Increase the amount of insulin produced by the pancreas;
• Pioglitazone: Makes the body's cells more sensitive to insulin, so more glucose is taken from your blood;
• Gliptins (DPP-4 inhibitors): Work by preventing the breakdown of a naturally occurring hormone called GLP-1;
• SGLT2 inhibitors: Work by increasing the amount of glucose excreted in the urine;
• GLP-1 agonists: Act similarly to the natural hormone GLP-1 (see gliptins, above);
• Acarbose: Helps prevent blood glucose levels from increasing too much after eating by slowing down the rate at which the digestive system breaks carbohydrates down into glucose;
• Nateglinide and repaglinide: Stimulate the release of insulin by the pancreas (not commonly used);
• Insulin injection treatment:
a. Insulin must be injected because it would be broken down in the stomach like food and unable to enter the bloodstream if it were taken as a tablet;
b. Used if glucose-lowering tablets aren't effective in controlling your blood glucose levels;
c. Taken instead of or alongside your tablets, depending on the dose;
d. Treatment may include a combination of different insulin preparations.

7.5.4 Research and future AI applications in diabetes (type 1 and 2) 7.5.4.1 Type 1 diabetes Researchers are conducting a study using artificial intelligence and big data analytics to evaluate information from thousands of continuous glucose monitors and insulin pumps. The


information will be used to improve the algorithms that control these devices, enhancing the lives of people with Type 1 diabetes [165]. The research will also uncover how often the sensor, insulin pump, and infusion set faults occur. Researchers have developed algorithms that can detect whether or not the signals from the glucose monitoring device can be trusted. Knowing how often anomalies are happening could help researchers further improve these processes and help break down that data by age group. IBM researchers are using AI to build models that identify patterns of specific antibodies in the blood, indicating progression timelines of how quickly Type 1 Diabetes (T1D) would develop in a particular person. This can vary significantly from individual to individual. The machine learning algorithms analyze massive volumes of anonymized data, collected from people at risk of developing T1D, to identify specific progression patterns correlated among biomarkers, genetic risk factors, symptoms, and other clinical data [166].
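The "can this sensor signal be trusted?" idea above lends itself to a toy sketch: flag continuous glucose monitor (CGM) readings whose 5-minute change is physiologically implausible. The threshold and data below are invented; real device fault-detection algorithms are proprietary and far more sophisticated.

# Toy sketch of sensor-fault flagging for CGM traces; threshold and data are invented.
import numpy as np

def flag_suspect_readings(glucose_mg_dl, max_delta_per_5min=30.0):
    """Return indices where the 5-minute change exceeds a physiologic bound."""
    deltas = np.abs(np.diff(glucose_mg_dl))
    return np.where(deltas > max_delta_per_5min)[0] + 1

trace = np.array([110, 115, 118, 240, 121, 125, 128])  # one spurious spike at index 3
print(flag_suspect_readings(trace))  # -> [3 4]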

7.5.4.2 Type 2 diabetes
Published studies suggest that a broad spectrum of market-ready AI approaches are being developed, tested, and deployed today in the prevention, detection, and treatment of diabetes. The total number of published technical articles reporting advances in the field of diabetes and AI increased exponentially in the past decade, from 2600 in 2008 to 5500 in 2013, to more than 10,000 in 2017. Because of AI's ability to rapidly interpret and process enormous amounts of data into simple, actionable guidance, these published studies suggest that AI has significant potential to improve screening, diagnosis, and management of patients with diabetes [167]. A predictive model was developed for diabetic kidney disease (DKD) using AI, natural language processing, and longitudinal data with big data machine learning, based on the electronic medical records (EMR) of 64,059 diabetes patients. AI could predict DKD aggravation with 71% accuracy. Furthermore, the group with DKD aggravation had a significantly higher incidence of hemodialysis than the non-aggravation group over 10 years (N = 2900). The new predictive model by AI could detect the progression of DKD and may contribute to more effective and accurate intervention to reduce hemodialysis [168]. Perhaps the most devastating complication associated with diabetes (Type 1 and 2) is diabetic retinopathy (DR), a retinal condition affecting the blood vessels in the retina. It is the leading cause of blindness in the estimated 30.2 million U.S. adults with diabetes [155], and the risk increases continually with age. Control of the condition is directly related to keeping blood sugar levels as close to normal as possible. Direct treatment to the eye in late stages of the disease includes medicines called anti-VEGF drugs, photocoagulation (laser therapy) of the retinal vessels, and vitrectomy (removal of blood in the vitreous) in the most advanced stages [169]. Such a devastating and prevalent condition as DR demands early detection through screening. Over 50% of the population (including people with diabetes) see an eye doctor less often than every 2 years. As such, researchers at multiple medical institutions [170–172] have been testing AI's performance in diabetic retinopathy and have found that AI deep


learning matches or exceeds the performance of experts in identifying and grading the severity of the conditions. Even more profound, in multiple studies, the AI software was not explicitly programmed to recognize features from images that might indicate the disease. Rather, it simply looked at (through GPUs) thousands of healthy and diseased eyes and figured out for itself how to spot the condition. Algorithms based on deep machine learning had high sensitivity (from 87.0% to 97.5%) and specificity (from 93.4% to 98.5%) for detecting diabetic retinopathy [173]. As a result of these clinical trials and to address the public health risk of DR, in August 2018, the FDA approved an AI technology to screen for DR in a primary care doctor’s office. The device, called IDx-DR, is a software program that uses an AI algorithm to analyze images of the eye taken with a retinal camera. The digital images are uploaded to a cloud server on which IDx-DR software is installed. The software provides the doctor with 1 of 2 results: (1) “more than mild diabetic retinopathy detected: refer to an eye care professional”; or (2) “negative for more than mild diabetic retinopathy; rescreen in 12 months.” If a positive result is detected, patients are advised to see an eye care professional for further diagnostic evaluation and possible treatment as soon as possible [174].
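The two-outcome output described for IDx-DR can be sketched as a simple decision step; the 0.5 cutoff and function name below are invented for illustration, since the product's actual model and thresholds are proprietary.

# Hedged sketch of the two-outcome referral logic described above (cutoff is illustrative).
def screening_result(prob_more_than_mild_dr, threshold=0.5):
    """Map a model's estimated probability of more-than-mild DR to one of the two outputs."""
    if prob_more_than_mild_dr >= threshold:
        return "more than mild diabetic retinopathy detected: refer to an eye care professional"
    return "negative for more than mild diabetic retinopathy; rescreen in 12 months"

print(screening_result(0.81))
print(screening_result(0.07))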

7.6 Neurological and sensory disorders and diseases
Whereas the heart nourishes the body through red blood cells and circulation, and the immune system protects the body from "non-self" (antigens) through white blood cells and cytokines, the nervous system has an equal, if not more critical, role. It controls all the functions of the body and mind through the central nervous system (brain and spinal cord); the peripheral nervous system, a network of branching nerve fibers to all muscles and organs; and the sensory nervous system, the 5 senses that allow us to interact with our environment. Neuroscience is the study of the structures (brain, spinal cord, and nerves) and the functions of the nervous system. The rather comprehensive list of those functions controlled by the central and peripheral nervous systems includes [175]:

• Brain growth and development;
• Sensations (such as touch or hearing);
• Perception (the mental process of interpreting sensory information);
• Thought and emotions;
• Learning and memory;
• Movement, balance, and coordination;
• Sleep;
• Healing and rehabilitation;
• Stress and the body's responses to stress;
• Aging;
• Breathing and heartbeat;
• Body temperature;
• Hunger, thirst, and digestion;


• Puberty, reproductive health, and fertility.
A subcategory of the peripheral nervous system is the sensory system, the part of the nervous system responsible for processing sensory information. The system consists of sensory receptors, neural pathways, and parts of the brain involved in sensory perception. The commonly recognized sensory systems are those for sight (vision), hearing (auditory), touch (somatic sensation), taste (gustation), and smell (olfactory). Disorders and diseases of the brain, spinal cord, the network of nerves, and the sensory systems constitute the category of neurological disorders and diseases, of which there are over 600 [176]. This section will address some of these major disorders and diseases associated with the central nervous system (brain and spinal cord), the peripheral nervous system (nerve networks), and the sensory systems. Our emphasis will be primarily on those disorders within which AI is playing a significant role. In the following section, I will present the most recent research and AI applications for each of the diseases and disorders presented. Please note that numerous disorders affect multiple entities in the nervous system. This makes neurological diagnosis and treatment that much more challenging. Enter artificial intelligence!

7.6.1 Neuroanatomy, etiologies, and clinical considerations associated with neurological and sensory disorders
7.6.1.1 The central nervous system (CNS) neuroanatomy [177]
• Forebrain: Largest and most highly developed part of the human brain; it consists primarily of the cerebrum and the inner brain:
a. The cerebrum is the topmost part of the brain and is the source of intellectual activities. It is split into 2 halves (hemispheres) by a deep fissure. Despite the split, the 2 cerebral hemispheres communicate with each other through a thick tract of nerve fibers that lies at the base of this fissure. The left hemisphere controls the right side of the body and such skills as logic, science, and math; the right hemisphere controls the left side of the body and abstract reasoning skills.
b. The inner brain is the gatekeeper between the cerebrum and spinal cord. It initiates involuntary movements, as well as determines emotional state through the limbic system (see Chapter 3, page 32 for a comparative analysis of the limbic system to AI), consisting of the hypothalamus, thalamus, hippocampus, amygdala, and basal ganglia.
• Midbrain: See the brain stem below. Controls some reflex actions and is part of the circuit involved in the control of eye movements and other voluntary movements.
• Hindbrain: Includes the upper part of the spinal cord, the brain stem, and a small spherical mass of tissue called the cerebellum.
a. The midbrain controls some reflex actions;
b. Brain stem:
- Pons (coordination center between hemispheres);
- Controls the flow of messages between the brain and the rest of the body;
- Medulla oblongata controls essential body functions such as breathing, swallowing, heart rate, blood pressure, and consciousness;
- Determines whether one is awake or sleepy;
c. The cerebellum coordinates voluntary movements such as posture, balance, coordination, and speech.
• Cerebral cortex ("gray matter"): Outer neural layer of the cerebrum and the cerebellum, responsible for:
a. Determining intelligence
b. Determining personality
c. Motor function
d. Planning and organization
e. Touch sensation
f. Processing sensory information
g. Language processing
Folded bulges called gyri create deep furrows or fissures called sulci. The folds add to the surface area and increase the amount of gray matter and the quantity of information that can be processed. Most information processing occurs in the 4 sensory areas and 2 motor areas of the cerebral cortex:
h. Sensory areas (input from thalamus):
- The visual cortex of the occipital lobe;
- The auditory cortex of the temporal lobe;
- The gustatory cortex of the inner brain;
- The somatosensory cortex of the parietal lobe.
i. Motor areas (regulate voluntary movement):
- Primary motor cortex;
- The premotor cortex regulates voluntary movement.
• Neuronal connections: All sensations, movements, thoughts, memories, and feelings are the result of signals that pass through neurons (see Chapter 3, page 31).
• Cerebral thought process: Each of the 2 cerebral hemispheres can be divided into 4 sections or lobes, each of which specializes in different functions:
a. Frontal lobes:
- Act as short-term storage sites, allowing 1 idea to be kept in mind while other ideas are considered;
- The motor area helps control voluntary movement;
- Broca's area enables thoughts to be transformed into words.
b. Parietal lobes:
- Primary sensory areas receive information about temperature, taste, touch, and movement from the rest of the body;
- Reading and arithmetic functions.
c. Occipital lobes:
- Process images from the eyes;
- Link image information with images stored in memory.
d. Temporal lobes:
- Responsible for receiving information from the ears;
- Role in forming and retrieving memories, including those associated with music;
- Integrate memories and sensations of taste, sound, sight, and touch.

7.6.1.2 Central nervous system (CNS) clinical considerations (by etiology) [178]
(The etiological relationship between neurological disorders and the immune system, specifically autoimmunity and chronic inflammation [see Immunological and Autoimmune disease, page 294], is exhaustive. Almost all of the neurological disorders discussed below have either definite confirmation of an immune etiology, or one is strongly suspected.)
• Cancers:
1. Glioma;
2. Neuroblastoma (70% in children under age 4);
3. Metastatic neoplasms;
a. Other CNS cancers (listed but not discussed):
- Primary neoplasms in children;
- Pilocytic astrocytoma;
- Medulloblastoma;
- Ependymoma;
- Oligodendroglioma (arises from oligodendrocytes);
- Meningioma (arises mainly from arachnoid cells; histologically benign);
- Primary brain lymphoma;
- Tumors of the sellar (pituitary) region;
- Pineal parenchymal tumors;
- Germ cell tumors;
- Metastatic neoplasms;
- Neurofibromatosis type 1 (NF1);
- Neurofibromatosis type 2 (NF2);
- Tuberous sclerosis;
- Ependymomas;
- Toxic effects of radiation treatment.
• Cerebrovascular diseases (also see "Vascular (Cardiovascular and Cerebrovascular) Disorders," page 331):
1. Transient ischemic attack (TIA);
2. Stroke;
3. Migraine (vascular etiology questionable);
a. Other CNS cerebrovascular diseases (listed but not discussed):
- Vascular malformations (tangles of abnormal vessels or channels);
- Spontaneous intracerebral hemorrhage;
- Subarachnoid hemorrhage and intracranial aneurysms;
- Hypertensive cerebrovascular disease;
- Cerebral edema.
• Degenerative diseases:
1. Alzheimer's disease (AD);
2. Dementia with Lewy bodies (DLB);
3. Parkinson's disease;
a. Other CNS degenerative diseases (listed but not discussed):
- Frontotemporal dementia (e.g., Pick disease);
- Huntington disease (expanded polyglutamine);
- Wilson disease (hepatolenticular degeneration);
- Prion diseases:
a. Creutzfeldt-Jakob disease (CJD)
b. Variant Creutzfeldt-Jakob disease (vCJD)
• Demyelinating disorders: multiple sclerosis (CNS and PNS involvement).
• Developmental disorders:
1. Forebrain anomalies:
a. Polymicrogyria;
b. Megalencephaly (abnormally large brain volume);
c. Microcephaly (brain smaller than normal without degenerative lesions);
d. Heterotopias;
e. Holoprosencephaly;
f. Agenesis of the corpus callosum.
2. Posterior fossa anomalies (Arnold-Chiari malformation: small posterior fossa);
3. Encephalocele: meninges and brain tissue protrude through a skull defect.
• Genetic or hereditary disorders:
1. Tay-Sachs disease;
2. Niemann-Pick disease type A;
a. Other CNS genetic or inherited disorders (listed but not discussed):
- CSF flow and hydrocephalus (CNS and PNS involvement);
- Metachromatic leukodystrophy;
- Krabbe leukodystrophy.
• Infections, toxins:
1. Other CNS infections and toxins (listed but not discussed):
a. Lead poisoning;
b. Carbon monoxide toxicity;
c. Methanol toxicity;
d. Subacute combined degeneration (B12 deficiency);
e. Kernicterus (bilirubin toxicity);
f. Hepatic encephalopathy;
g. Toxic effects of metabolic disturbances;
h. Empyema;
i. Intracerebral abscess;
j. Progressive multifocal leukoencephalopathy (PML);
k. Subacute sclerosing panencephalitis (SSPE);
l. Brain abscesses;
m. Central nervous system syphilis;
n. Wernicke's encephalopathy and Korsakoff's syndrome (thiamine/vitamin B1 deficiency).
• Injuries to the brain:
1. Brain swelling;
2. Concussion;
3. Subarachnoid hemorrhage;
a. Other CNS injuries (listed but not discussed):
- Delayed sequelae/complications of CNS trauma;
- Epidural hemorrhage;
- Subdural hemorrhage;
- Contusions (areas of hemorrhagic necrosis);
- Lacerations (rupture or tearing of brain tissue).
• Seizure disorders:
1. Epilepsy;
2. Tonic-clonic or convulsive seizures (formerly known as grand mal);
3. Myoclonic seizures: cause spontaneous, quick twitching of the arms and legs.
a. Other CNS seizure disorders (listed but not discussed):
- Benign Rolandic epilepsy in children causes tongue twitching and may interfere with speech and cause drooling;
- Catamenial epilepsy refers to seizures that occur in relation to the menstrual cycle;
- Atonic seizures cause symptoms such as sudden falls not associated with loss of consciousness;
- Febrile seizures typically occur in children between 6 months and 5 years of age. They are common in toddlers.

7.6.1.3 Peripheral nervous system (PNS) neuroanatomy [179] • Spinal cord: • Somatic nervous system: Part of the peripheral nervous system responsible for carrying sensory and motor information to and from the central nervous system; • Autonomic nervous system: Regulates involuntary body functions, such as blood flow, heartbeat, digestion, and breathing:

7.6.1.4 Peripheral nervous system (PNS) clinical considerations (by etiology) [180] • Cancers: • Degenerative diseases - Amyotrophic lateral sclerosis (ALS): • Demyelinating Disorders:


• Developmental disorders: 1. Spina bifida - midline skeletal defect in the spine 2. Myelomeningocele (meningomyelocele) - meninges and spinal cord protrude through an overlying defect in the vertebral column. 3. Syringomyelia: • Genetic and hereditary disorders: 1. Porphyria: 2. Fabry disease: a. Other PNS genetic disorders (listed but not discussed): ’ Charcot-Marie-Tooth disease ’ Refsum disease ’ Hereditary neuropathy with liability to pressure palsies. • Infections, toxins: Most CNS infections cause secondary inflammation to peripheral nerves. 1. Myasthenia gravis (etiology?) 2. Guillain-Barré syndrome: • Injuries to the spinal cord: 1. Diffuse Axonal Injury 2. Spinal Cord Lesions 3. Specific cord syndromes include the following: a. Transverse myelopathy b. Brown-Séquard syndrome c. Central cord syndrome d. Anterior cord syndrome e. Conus medullaris syndrome

7.6.1.5 Sensory systems [181] • The specialized portion of the peripheral nervous system; • Five traditional senses are known as taste, smell, touch, hearing, and sight; • Stimuli from each sensing organ in the body are relayed to different parts of the brain through various pathways; • Sensory information is transmitted from the peripheral nervous system to the central nervous system; • The thalamus receives most sensory signals and passes them along to the appropriate area of the cerebral cortex to be processed; • The limbic system is composed of a group of brain structures that play a vital role in sensory perception, sensory interpretation, and motor function: 1. Amygdala: Receives sensory signals from the thalamus and uses the information in the processing of emotions such as fear, anger, and pleasure. It also determines what memories are stored and where the memories are stored in the brain; 2. Hippocampus: Forms new memories and connects emotions and senses, such as smell and sound, to memories.


3. Hypothalamus: Helps regulate emotional responses elicited by sensory information through the release of hormones that act on the pituitary gland in response to stress. • The 5 senses: 1. Taste (gestation): Ability to detect chemicals in food, minerals and dangerous substances such as poisons: a. Cranial Nerves (CN) V Trigeminal, VII Facial, IX Glossopharyngeal, X Vagus, XII Hypoglossal; b. Five basic tastes that these organs relay to the brain: 1. Sweet; 2. Bitter; 3. Salty; 4. Sour; 5. Umami (glutamates or “meaty taste”); 2. Smell (olfactory): Receptors in the nose send signals directly to the olfactory bulb in the olfactory cortex of the brain: a. Cranial Nerve (CN) I Olfactory; b. Unlike most other receptors, olfactory nerves die and regenerate regularly. 3. Touch (somatosensory perception): Perceived by activation in neural receptors in the skin: a. The main sensation comes from pressure applied to these receptors, called mechanoreceptors; b. The skin has multiple receptors that sense levels of pressure from gentle brushing to firm as well as the time of application from a brief touch to sustained; 4. Hearing (auditory): Sound is comprised of vibrations that are perceived by organs inside the ear through mechanoreceptors: a. Cranial Nerve (CN) VIII Vestibulocochlear, IX glossopharyngeal; b. Sound first travels into the ear canal and vibrates the eardrum; c. Vibrations are transferred to bones in the middle ear called the hammer, anvil, and stirrup; 5. Sight (vision): Ability of the eyes to perceive images of visible light: a. Cranial Nerve CN II Optic, III Oculomotor, IV Trochlear, VI Abducens; b. Light enters the eye through the pupil and is focused through the lens onto the retina on the back of the eye; c. Two types of photoreceptors, called cones and rods, detect the light and generate nerve impulses which are sent to the brain via the optic nerve; d. Rods are sensitive to the brightness of light, while cones detect colors;

7.6.2 Research and AI considerations in neurological and sensory disorders The primary reason I have provided, up to this point in the text, a rather comprehensive outline of neurological disorders is that the research and AI applications in this area of health care are so extensive. It covers almost all of the neurological conditions I have presented.


Thus, I hope that the listings provided for these conditions will allow you to better understand the nature of the research being conducted as well as the current and future value of the AI influences in neurological research. Our neurological disorders outline has covered greater than 120 of the more than 600 neurological disorders reported [176] in the literature. The following section will provide a brief literature review of the most current research, and the current and future AI applications in 40 (alphabetical order) of the most prevalent and familiar disorders. As best I could, I have tried to identify the more practical (less technical) research and study reports in the literature. But once again, as stated previously in this Chapter, we are presenting only a fraction of the body of research and AI activity being conducted in this area of health care. I would urge the reader to investigate additional sources on any particular condition of interest to you personally. 1. Alzheimer’s disease: The application of deep learning to early detection and automated classification of Alzheimer’s disease (AD) has recently gained considerable attention, as rapid progress in neuroimaging techniques has generated large-scale multimodal neuroimaging data. Deep learning approaches, such as a convolutional neural network (CNN) or recurrent neural network (RNN), that use neuroimaging data without preprocessing for feature selection have yielded accuracies of up to 96.0% for AD classification and 84.2% for mild cognitive impairment (MCI) conversion prediction [182]. 2. Amyotrophic lateral sclerosis (ALS): Heterogeneity, late-stage recruitment into pharmaceutical trials, and inclusion of phenotypically admixed patient cohorts are some of the key barriers to successful clinical trials in ALS. Machine Learning (ML) models and large international data sets offer unprecedented opportunities to appraise candidate diagnostic, monitoring, and prognostic markers. Accurate patient stratification into well-defined prognostic categories is another aspiration of emerging classification and staging systems [183]. 3. Carbon monoxide poisoning: To prevent CO (colorless and odorless) poisoning, an AI with IoT (Internet of Things) technology has been developed with the following functionals. (1) A CO sensor, installed in a room in the home that immediately activates to warn the household when the CO concentration is excessively high; (2) an electric window opener and fan, which when activated creates ventilation and thereby reduced CO concentration; (3) a device that cuts off the gas supply; (4) a Line application that notified family members and signaled an emergency center; and (5) a mechanism for unlocking the door to allow people to enter for emergency rescue [184]. 4. Concussion: A study explored how machine learning analytics could be of use in modeling how high school sports concussion symptom(s) resolve. The purpose of this study was to implement a machine learning-based approach to model the estimated time for resolution of symptoms in high school athletes who suffered a concussion while playing sports. The main finding and importance are that machine learning and advanced analytics do have the potential for clinical utility in managing concussion. Even with the limited data set available, computer modeling was able to

accurately predict symptoms associated with prolonged recovery after a concussion [185].
5. Dementia with Lewy bodies: Deep learning-based imaging classification was useful for an objective and accurate differentiation of dementia with Lewy bodies (DLB) from Alzheimer's disease (AD) and for predicting clinical features of DLB. The cingulate island sign (CIS) was identified as a specific feature during DLB classification. The visualization of specific features and learning processes could be critical in deep learning to discover new imaging features [186].
6. Encephalitis: Among the many studies documenting superior results of AI compared to human diagnosis, encephalitis proved an exception to that rule. A comparative study using a trained AI algorithm revealed an 83.7% accuracy in the diagnosis of encephalitis, while the human doctors' accuracy was all well above 95%. This suggests the possibility (though not confirmed in other studies) that human diagnosis may be superior to AI in more serious illnesses [187].
7. Epidural hemorrhage: AI interventions have been shown to have up to 94% sensitivity and 99% negative predictive value in detecting life-threatening pathology on head CT, such as subarachnoid hemorrhage, epidural hematoma, midline shift, hydrocephalus, and acute ischemic stroke [188].
8. Epilepsy: High-definition medicine will require the development of analytical techniques, including AI, that use and combine behavioral, environmental, molecular genomic, chronotype, and microbiomic data to offer the best individualized therapeutic options for disease management in each person with epilepsy. To provide an improvement in precision medicine for epilepsy, a high-definition approach may be required that encompasses a panoramic view of factors that can be manipulated either directly or indirectly, now and in the future [189].
9. Germ cell tumors: Deep learning-based image analysis can be used for detecting tumor-infiltrating lymphocytes (TILs) in testicular germ cell cancer more objectively, and it has potential for use as a prognostic marker for disease relapse [190].
10. Glioma: The prediction accuracy for gliomas after radiotherapy can be improved with the combination of clinical and dose-volume histogram (DVH) features. Study results show the potential to optimize the treatment strategy for individual patients based on a machine learning model [191].
11. Hearing disorders: Multiple studies have shown that miRNAs play a significant role in sudden sensorineural hearing loss (SNHL) and inner ear development [192]. ML can build disease-specific algorithms to predict various degrees of SNHL in different inner ear pathologies based on the perilymph-derived miRNA expression profile alone. Permutation feature importance in ML makes it possible to understand which component, and at what weighted value, was used to create the model [193].
12. HIV-associated disorders: A new machine learning algorithm automatically selected important risk-related variables of HIV from millions of medical records. The algorithm can detect those most vulnerable to HIV infection and could play an

important role in increasing the prescription of preexposure prophylaxis medications to prevent infection [194].
13. Huntington's disease: A study was conducted to evaluate the potential to objectively quantify chorea (jerky involuntary movements) in Huntington's disease patients using wearable sensor data. The sensor-based model can quantify the chorea level with a high correlation to the chorea severity reported by both clinicians and patients. This work suggests that digital wearable sensors have the potential to support clinical development of medications in patients with movement disorders, such as chorea [195].
14. Intracranial hemorrhage: A team of investigators from the Massachusetts General Hospital (MGH) Department of Radiology has developed a system using artificial intelligence to quickly diagnose and classify brain hemorrhages and to provide the basis of its decisions from relatively small image datasets. To train the system, the MGH team began with 904 head CT scans, each consisting of around 40 individual images, that were labeled by a team of 5 MGH neuroradiologists as to whether they depicted 1 of 5 hemorrhage subtypes, based on the location within the brain, or no hemorrhage. This system can also be deployed directly onto scanners, alerting the care team to the presence of hemorrhage and triggering appropriate further testing before the patient is even off the scanner [196].
15. Lead poisoning: With some funding from Google, a group of volunteer computer scientists designed a machine-learning model to help predict which homes in Flint, Michigan were likely to have lead pipes. The artificial intelligence was supposed to help the city dig only where pipes were likely to need replacement. Through 2017, the plan was working. Workers inspected 8833 homes, and of those, 6228 homes had their pipes replaced, a 70% rate of accuracy. Sadly, city officials and their engineering contractor abandoned the project [197].
16. Meniere's syndrome: A study using machine learning algorithms demonstrated that miRNA profiling within the inner ear is a feasible methodology and can potentially offer insight into what is occurring on a cellular and molecular level in various inner ear pathologies. In the search for specific hearing loss-related biomarkers, the researchers were able to demonstrate that various inner ear diseases, from Meniere's disease to otosclerosis, express different and distinct miRNA profiles [198].
17. Meningiomas: The DL model yielded accurate automated detection and segmentation of meningioma tissue despite diverse scanner data and thereby may improve and facilitate therapy planning as well as monitoring of this highly frequent tumor entity [199].
18. Meningitis: Several different machine learning algorithms were evaluated to reduce the use of invasive procedures, such as cerebrospinal fluid (CSF) collection, as much as possible. The Decision Tree algorithm performed best, with 94.56% and 96.18% accuracy for training and testing data, respectively. These results represent a significant aid to doctors in diagnosing meningitis as early as possible and in preventing expensive and painful procedures on some children [200].
19. Metastatic neoplasms: Advances in AI applied to radiomics and radiogenomics in neuro-oncologic imaging will improve our diagnostic, prognostic, and therapeutic

methods, helping propel the field into an era of precision medicine (see also Cancers, page 314) [201].
20. Migraine: In a study, feature selection and machine learning classification methods were tested to automate the diagnosis of migraines. The method improved the performance of migraine diagnosis classifiers compared to individual feature selection methods, producing a robust system that achieved over 90% accuracy in all classifiers. The results suggest that the proposed methods can be used to support specialists in the classification of migraines [202].
21. Multiple sclerosis: A 5-minute, self-administered, language-independent AI computerized test named the Integrated Cognitive Assessment (ICA) was developed to measure cognitive dysfunction in patients with MS. The ICA demonstrated excellent test-retest reliability (r = 0.94), with no learning bias (i.e., no significant practice effect), and had a high level of convergent validity with the Brief International Cognitive Assessment for MS (BICAMS). The ICA demonstrated a high accuracy (AUC = 95%) in discriminating cognitively normal from cognitively impaired participants. The ICA can be used as a digital biomarker for assessment and monitoring of cognitive performance in MS patients [203].
22. Myasthenia gravis: Machine learning algorithms were developed at Stanford University to predict, with high confidence, the presence of myasthenia gravis, helpful in eliminating the need for a painful and expensive single-fiber electromyography (EMG) test. The algorithms were trained on 22 co-factors/features commonly found with myasthenia gravis and could also predict the probability of being afflicted with myasthenia gravis given patient history and a questionnaire. The CNN model performed well, since it had the highest F1 score amongst all the other models [204].
23. Neuroblastoma: The classification and location of neuroblastoma in nuclear MRI images are realized by using a deep neural network (CNN) algorithm as the core technology. The module helps make up for the gap in the field of intelligent identification and accurate positioning of neuroblastoma in current nuclear magnetic resonance detection technology, effectively reduces the work intensity of doctors reading films, and further promotes the clinical application and technical development of nuclear magnetic resonance detection technology in the diagnosis of neuroblastoma [205].
24. Neurofibromatosis: A study showed that it is possible to accurately predict measures of neuropsychological performance based on brain metabolism using a machine learning Gaussian Processes-based algorithm in neurofibromatosis patients. This result suggests an underlying metabolic pattern that relates to more global/verbal aspects of cognitive functioning in this condition [206].
25. Paresthesia: Integration of artificial intelligence and the use of apps for the development of smart neuromodulation is the new horizon in pain management. Previously, spinal cord stimulation consisted of only conventional tonic stimulation at low frequencies. This option wasn't always the most effective for reducing neuropathic pain, and some patients disliked the paresthesia. Now, patients have the option of other electrical

waveforms, such as high-frequency, burst, or other waveforms, that can reduce pain without accompanying paresthesia [207].
26. Parkinson's disease: An artificial neural network system with a backpropagation algorithm has been developed to help doctors identify Parkinson's disease (PD). Previous research on predicting the presence of PD has shown accuracy rates up to 93% [208]; however, the accuracy of prediction for small classes is reduced. The newer design of the neural network system produces a significant increase in robustness. It has also been shown that network recognition rates reached 100% [209].
27. Primary brain lymphoma: Of the currently published studies on differentiating glioblastoma (GBM) from primary central nervous system lymphoma (PCNSL), ML algorithms have demonstrated promising results and certainly have the potential to aid radiologists with difficult cases, which could expedite the neurosurgical decision-making process. ML algorithms will likely help to optimize neurosurgical patient outcomes as well as the cost-effectiveness of neurosurgical care if the problem of overfitting can be overcome [210].
28. Primary neoplasms in children: The 2 main bottlenecks for successful AI applications in pediatric oncology imaging to date are the needs for large data sets and appropriate computer and memory power. With appropriate data entry and processing power, deep convolutional neural networks (CNNs) can process large amounts of imaging data, clinical data, and medical literature in very short periods and thereby accelerate literature reviews, correct diagnoses, and personalized treatments [211].
29. Retinitis pigmentosa: A study was conducted to assess the discrimination ability of a deep convolutional neural network for ultrawide-field pseudocolor imaging and ultrawide-field autofluorescence of retinitis pigmentosa. The sensitivity and specificity of the ultrawide-field pseudocolor group were 99.3% (95% CI [96.3%–100.0%]) and 99.1% (95% CI [96.1%–99.7%]), and those of the ultrawide-field autofluorescence group were 100% (95% CI [97.6%–100%]) and 99.5% (95% CI [96.8%–99.9%]), respectively. Using the proposed deep neural network model, retinitis pigmentosa can be distinguished from healthy eyes with high sensitivity and specificity [212].
30. Retinoblastoma: The CRADLE app (ComputeR Assisted Detector of LEukocoria) searches for traces of abnormal reflections from the retina called leukocoria or "white pupil," a primary symptom of retinoblastoma, as well as other common eye disorders. The study found the app is an effective tool to augment clinical leukocoria screenings, allowing parents to efficiently and effectively screen their children more often throughout their development [213].
31. Smell (olfactory) abnormalities: The use of machine learning in human olfactory research includes major approaches comprising (1) the study of the physiology of pattern-based odor detection and recognition processes; (2) pattern recognition in olfactory phenotypes; (3) the development of complex disease biomarkers including olfactory features; (4) odor prediction from physicochemical properties of volatile

molecules; and (5) knowledge discovery in publicly available big databases. The increasing use of contemporary methods of computational science is reflected in a growing number of reports employing machine learning for human olfactory research and clinical diagnosis [214].
32. Somatosensory disorders: Researchers have been focused on inventing a wearable device that assists the wearer in learning patterns by producing contingent feedback such as sounds, vibrations, or pressures. In other words, it helps the body understand and map out physical movements through multiple modalities. Work is also being done on an intrinsic sensory stimulus method to promote an intuitive understanding of the target movement pattern to the learner through somatosensory sensation. This is a technique to modify motion senses and intrinsic awareness by applying vibration to a muscle spindle using a vibration actuator. This technique is said to be a sensory substitution of somatosensory information for visual information [215].
33. Stroke: Artificial intelligence can support health specialists and provide them with actionable insights to accelerate diagnosis and ensure accurate medication and intervention decisions in the shortest possible time after stroke onset. It can even help reduce the risk of developing the condition in some patients, eliciting subtle warning patterns and alerting the clinicians about the upcoming crisis [216].
34. Syringomyelia: In canines, symptomatic SM is diagnosed based on signs of myelopathy and the presence of a large syrinx that is consistent with the neuro-localization. If the cause of CSF channel disruption and syringomyelia is not revealed by anatomical MRI, then other imaging modalities may be appropriate. The development of a machine learning approach and computer analysis is recommended [217].
35. Taste abnormalities: McCormick & Company, a pioneer in flavor and food innovation, and a team at IBM Research have created a novel AI system to help product developers more efficiently and effectively create new flavor experiences. Designing new flavor experiences at McCormick is a good fit for AI technology because of the nature of the problem and the wealth of available data accumulated over decades. Based on the promising results to date, McCormick plans to roll out the AI system globally to operations in more than 20 labs in 14 countries, encompassing over 500 product and flavor developers and their support staff [218].
36. Touch disorders: The Computer Science and Artificial Intelligence Lab (CSAIL) at MIT has created a predictive AI that allows robots to link multiple senses (touch and sight) in much the same way humans do. Using a robot arm with a tactile sensor called GelSight (another MIT creation), they trained the robot with thousands of touches on 200 objects, creating more than 3 million visual/tactile paired images in its dataset. From this, the robot is capable of predicting the environment purely from tactile feelings [219].
37. Transient ischemic attacks: The risk of recurrent stroke following a transient ischemic attack (TIA) or minor stroke is high. A study was done to investigate the feasibility of using an artificial neural network (ANN) for risk stratification of TIA or minor stroke patients. Results indicated that ANN may yield a novel and effective method for risk stratification of TIA and minor stroke [220].


38. Tuberous sclerosis: A convolutional neural network (CNN) was trained to classify H&E-stained microscopic images of focal cortical dysplasia type IIb (FCD IIb) and cortical tubers of tuberous sclerosis complex (TSC). A digital processing pipeline was developed for a series of 56 FCD IIb and TSC cases to obtain 4000 regions of interest and 200,000 sub-samples with different zoom and rotation angles to train a CNN. A web application was then developed, which combined the visualization of whole slide images (WSI) with the possibility of classification between FCD IIb and TSC on demand by the pretrained, built-in CNN classifier. This approach might help to introduce deep learning applications for the histopathologic diagnosis of rare and difficult-to-classify brain lesions [221]. (A schematic of such a patch classifier follows after this list.)
39. Vascular malformations: ML has been studied extensively, demonstrating excellent performance in outcome prediction for a wide range of neurosurgical conditions, particularly arteriovenous malformations [222].
40. Vision disorders: A study categorized eighteen different diseases of the visual system into brain damage, damage to the eyes, and damage to the sensory neurons carrying data, and evaluated possible engineering solutions for each one. Using deep neural networks to perform almost the same process as the brain does, the developed devices can replace the vision of the eyes and also make artificial new connections to compensate for the attenuation of the sensory neurons carrying data from the eyes to the brain [223].
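As flagged in item 38, the patch-classification step of such a histopathology pipeline can be sketched as follows, assuming TensorFlow/Keras; the layer sizes and the 128 x 128 patch size are placeholders rather than the authors' architecture.

# Hedged sketch of a small CNN that labels stained image patches (FCD IIb vs. TSC tuber);
# architecture and patch size are illustrative, not the published model.
import tensorflow as tf
from tensorflow.keras import layers, models

patch_classifier = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(2, activation="softmax"),   # class 0 = FCD IIb, class 1 = TSC tuber
])
patch_classifier.compile(optimizer="adam",
                         loss="sparse_categorical_crossentropy",
                         metrics=["accuracy"])
patch_classifier.summary()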

7.7 Musculoskeletal disorders (MSDs)
The musculoskeletal system is made up of the bones of the skeleton, muscles, cartilage, tendons, ligaments, joints, and other connective tissue that supports and binds tissues and organs together. Its function is to provide form, support, stability, and motion to the body. Somewhat surprising (to this author, at least) is the prevalence of osteoarthritis and musculoskeletal injuries, which rank 3rd among chronic diseases with a death rate of 6% (see Chronic diseases and Table 7–8 below).

Table 7–8 Ten most common chronic conditions (ranked by death rate).
1. Heart disease (death rate 23.0%);
2. Cancer (death rate 21.3%);
3. Unintentional injury (death rate 6%);
4. Respiratory diseases including asthma and COPD (death rate 5.7%);
5. Stroke and cerebrovascular disease (death rate 5.2%);
6. Alzheimer's disease (death rate 4.3%);
7. Type 2 diabetes (death rate 3%);
8. Influenza and pneumonia (death rate 2%);
9. Kidney disease (death rate 1.8%);
10. Chronic liver disease and cirrhosis (death rate 1.8%)

National vital statistics reports. June 24, 2019;68(6).

7.7.1 Musculoskeletal disorders (MSD) and diseases and associated AI applications
Among the more than 675 MSDs [224], 15 of the more prevalent conditions are listed below with brief descriptions, treatments, and the most recent related AI applications.
1. Arthritis: This term describes joint inflammation and covers over 200 conditions (osteo and rheumatoid types of arthritis among them) that affect joints, tissues surrounding joints, and other connective tissue. The more common forms include osteoarthritis, gout, fibromyalgia, SLE, and rheumatoid and psoriatic arthritis. Symptoms include pain, aching, stiffness, and swelling, and treatment is aimed primarily at pain reduction [225]. A study reported in JAMA shows that artificial intelligence models can use electronic health record data to predict future patient outcomes in rheumatoid arthritis (RA). The deep learning model had a strong performance in a test cohort of 116 patients, whereas baselines that used each patient's most recent disease activity score had a statistically random performance [226].
2. Bone fractures: See "Physical injuries, wounds and disabilities" below.
3. Bursitis: Bursae are fluid-filled sacs that act as a cushion between bones, tendons, joints, and muscles. When these sacs become inflamed, it is called bursitis. Symptoms include pain (greater with movement) and swelling. Treatment is rest, raising the affected area, ice packs, and pain medications [227]. In one study, a wearable wireless sensor network (WSN, an IoT)-based activity recognition model incorporating a backpropagation neural network (BPNN) algorithm attempted to recognize 6 types of rehabilitation exercises applied in frozen shoulder therapy. Sensors delivered acceleration and angular velocity signals to train the BPNN algorithm in activity recognition. When the parameters were applied to practical motions of laboratory participants, the results revealed favorable recognition rates of 85%–95% and above for the proposed swinging and stretching exercises [228].
4. Carpal tunnel syndrome: This is a common condition that causes pain, numbness and tingling in the hand and arm. It occurs when 1 of the major nerves to the hand (the median nerve) is squeezed or compressed as it travels through the wrist. Early on, symptoms can be relieved with simple measures like wearing a wrist splint or avoiding certain activities, but symptoms often worsen with time and surgery is indicated [229]. The performance and effectiveness of 4 classification methods, including support vector machine, naive Bayes, classification tree, and artificial neural network, were investigated in the detection of carpal tunnel syndrome. Naive Bayes yielded better performance than all the other methods in the diagnosis of carpal tunnel syndrome [230] (a minimal sketch comparing these 4 classifier families follows this list).
5. Fibromyalgia: This is a condition that causes pain all over the body, sleep problems, fatigue, and often emotional and mental distress. People with fibromyalgia may be more sensitive to

pain than people without fibromyalgia. The cause of fibromyalgia is not known, but it can be effectively treated and managed. Treatment includes exercise, pain medications, and stress management [231]. A range of techniques, including AI, was used to confirm that the changes seen in the microbiomes of fibromyalgia patients are not diet-related. The severity of a patient's symptoms was directly correlated with an increased presence or a more pronounced absence of certain bacteria. Using machine-learning algorithms, the microbiome composition alone allowed for the classification of patients and controls. This is the first demonstration of gut microbiome alteration in non-visceral pain [232].
6. Hip and knee replacement surgery: Arthroplasty represents one of the highest volume procedural subspecialties, with hip and knee replacement representing the most common procedures reimbursed by Medicare. Joint replacement is usually elective surgery. The patient with end-stage arthritis on x-ray may be referred to a specialist for joint replacement. The surgical decision involves careful consideration tailored to the patient's functional demands, medical status, quality of life, and expectations. A sophisticated AI algorithm can predict the risk of joint replacement, the cost and length of stay during admission, and even the post-operative recovery trajectory. Radiology and robot-assisted surgery surrounding hip or knee replacement generate large quantities of data capable of being studied and characterized by AI-based algorithms [233].
7. Marfan's syndrome: Marfan's syndrome is a genetic disorder that affects the body's connective tissue, caused by a defect (or mutation) in the gene that tells the body how to make fibrillin-1. The syndrome can affect many different parts of the body including the heart (most often), blood vessels, bones, joints, and eyes [234]. Treatment will evolve with advances in genetic therapies (see "Genetic and Genomics", page 308). Next-generation sequencing (AI-supported genetic testing) such as whole-exome and whole-genome sequencing (see page 311) is being rapidly integrated into clinical practice. The American College of Medical Genetics and Genomics recently published recommendations for the reporting of these variants in clinical practice for 56 "actionable" genes. Among these, 7 are involved in Marfan Syndrome and Related Disorders (MSARD) [235].
8. Osteoarthritis (OA): Osteoarthritis (OA) is the most common form of arthritis. It occurs most frequently in the hands, hips, and knees. The cartilage within a joint begins to break down and the underlying bone begins to change. The condition develops slowly and worsens with age. Symptoms include pain, stiffness, swelling and often reduced function and disability (see "Physical injuries, wounds and disabilities" below). Treatment ranges from exercise to physical therapy, weight loss, pain medications, supportive devices (crutches or canes) and surgery [236]. Stem cell therapies (see page 302) have shown significant promise in OA treatment.

A fully developed computer-aided diagnosis (CAD) system for early knee osteoarthritis (OA) detection using knee X-ray imaging and machine learning algorithms was tested on 1024 knee X-ray images from the public database Osteoarthritis Initiative (OAI). The results show that the proposed system has a good predictive classification rate for OA detection (82.98% for accuracy, 87.15% for sensitivity and up to 80.65% for specificity) [237].
9. Osteogenesis imperfecta (OI): This is a group of genetic disorders that mainly affect the bones ("imperfect bone formation"). People with this condition have bones that break easily, often from mild trauma or with no apparent cause. Multiple fractures are common, and in severe cases, can occur even before birth. Additional features include blue sclerae, short stature, hearing loss, respiratory problems, and a disorder in tooth development. The most severe forms include an abnormally small, fragile rib cage, underdeveloped lungs, and other life-threatening problems [238]. A research team from the School of Chinese Medicine at Hong Kong Baptist University (HKBU) has successfully developed a novel aptamer (a single-stranded piece of DNA) for the treatment of osteogenesis imperfecta (OI) with the aid of AI technology. It has been granted orphan drug designation by the US Food and Drug Administration (FDA). Clinical trials will be conducted over the next 3 years [239].
10. Osteoporosis: This is a disease that thins and weakens the bones. Bones become fragile and fracture (break) easily, especially the bones in the hip, spine, and wrist. Millions of people in the U.S. either already have osteoporosis or are at high risk due to low bone mass. It is more common in older women with lower bone density. Treatment includes vitamin D and calcium supplementation [240]. Machine learning and deep learning models have found a role in osteoporosis, both to model the risk of fragility fracture and to help with the identification and segmentation of images. New methods for automatic image segmentation and the prediction of fracture risk show promising clinical value. Though these recent developments have had a successful initial application to osteoporosis research, they are still being refined [241].
11. Paget's disease of bone (Osteitis deformans): This disease causes bones to grow too large and weak. It can lead to other health problems such as arthritis and hearing loss. The most common sites are the spine, pelvis, skull, and legs. It is most common in older men. Treatment includes pain medication and bisphosphonates, a class of medication that helps the body regulate the bone-building process and stimulate more normal bone growth [242]. A technique has been developed that can identify bone metastases and/or fractures with high sensitivity from dual-energy x-ray absorptiometry exams using AI techniques. Inventors aim to expand the application to detect other bone diseases and conditions such as Paget's disease in the spine [243].

12. Rheumatoid arthritis (RA): Early stages of the autoimmune disease RA may not include symptoms. Subsequently, after 6 weeks or longer, redness or swelling in the joints and pain develop. Morning stiffness is common, and more than 1 joint (wrists, certain joints of the hands and feet) is generally affected, often the same joint on both sides. Fatigue, low-grade fever, and dryness of the skin, eyes, and mouth can occur, as well as inflammation of blood vessels and organ damage in late stages [244]. Machine learning and AI, fueled by breakthroughs in high-performance computing, data availability and algorithmic innovations, are paving the way to effective analyses of large, multi-dimensional collections of patient histories, laboratory results, treatments, and outcomes. In the new era of machine learning and predictive analytics, this impact on clinical decision-making is being leveraged within the field of rheumatology [245].
13. Scoliosis: This disorder is caused by a sideways, S- or C-shaped curve of the backbone, or spine. It is most common in late childhood and the early teens, when children grow fast. It is more common in females and can be hereditary. Symptoms include leaning to 1 side and having uneven shoulders and hips. Treatment may include a brace or surgery [246]. An algorithm (ResUNet) was developed to automatically measure the sagittal vertical axis on radiographs across various clinical conditions, such as degenerative changes or deformities. The algorithm was trained and evaluated on 990 standing lateral radiographs. The correlation between the algorithm and experienced physicians ranged from 0.946 to 0.993, indicating excellent consistency. The superior performance of the proposed method and its high consistency with physicians proved its usefulness for automatic measurement of the sagittal vertical axis in clinical settings [247].
14. Spinal stenosis: This disorder involves a narrowing of the spine that puts pressure on the nerves and spinal cord. The symptoms occur gradually and include pain in the neck or back, numbness and weakness in the arms or legs, and radiating pain down the leg (radiculopathy). The condition occurs in older patients, younger people with a spine injury, and arthritis and scoliosis patients. Treatment includes pain medications, physical therapy, braces, and surgery [248]. By using machine learning and big data techniques, a linear model encoding variation in lumbar neural foraminal areas in asymptomatic individuals has been established. This model can be used to make quantitative assessments of neural foraminal areas in patients by comparing them to the age-, sex-, and height-adjusted population averages [249].
15. Tendinitis (or tendonitis): Severe swelling of a tendon (flexible bands of tissue that connect muscles to bones). It usually happens after repeated injury to an area such as the wrist or ankle. It produces pain and soreness around a joint (e.g., tennis elbow, golfer's elbow, pitcher's shoulder,

swimmer's shoulder, and jumper's knee). Treatment is aimed at reducing pain and swelling and includes rest, wrapping or elevating the affected area, and pain medications. For recent and severe injuries, ice packs are indicated. Other treatments include ultrasound, physical therapy, steroid injections, and surgery [250]. The accurate diagnosis of rotator cuff disorders is important to determine treatment strategy, especially differentiating tears from other types of tendinopathies. The introduction of quantitative and automated diagnostic procedures could potentially reduce the impact of variability. Computer-aided diagnosis systems provide an objective, quantitative assessment of lesion type and grade. After defining the lesion area with manual or semi-automatic segmentation, quantitative features can be extracted and combined in an artificial intelligence classifier [251].
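
As a companion to the carpal tunnel study in item 4 above, the following is a minimal, illustrative comparison of the same 4 classifier families using scikit-learn. The synthetic features generated here merely stand in for nerve-conduction or clinical measurements; the cited study's actual data and preprocessing are not reproduced.

# Illustrative comparison of SVM, naive Bayes, decision tree, and a small neural network
# on synthetic binary-classification data (stand-ins for nerve-conduction features).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=8, n_informative=5, random_state=0)

models = {
    "SVM": SVC(kernel="rbf"),
    "Naive Bayes": GaussianNB(),
    "Classification tree": DecisionTreeClassifier(max_depth=4),
    "Neural network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")  # 5-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.3f}")

Cross-validated accuracy is only one possible yardstick; a study like the one cited would also compare sensitivity and specificity before declaring one method superior.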

7.8 Integumentary system and exocrine glands
The integumentary system is the province of dermatology in medicine, although estheticians, cosmetologists, and plastic and cosmetic surgeons lay claim to some of this eclectic domain as well. It includes the skin (trivia alert: the largest organ in the human body, at approximately 16–22 sq. ft. of surface area and about 20 pounds for an average-size adult); hair; nails; and exocrine (sebaceous and sweat, or eccrine and apocrine) glands.

7.8.1 Dermatology
Skin is made up of 2 principal layers. The epidermis is the most superficial layer, covered by epithelial cells and overlaying almost the entire body surface. The second layer (underneath the epidermis) is the thicker dermis layer. The epidermis is only about a tenth of a millimeter thick and contains no blood or blood vessels. About 90% of its cells are keratinocytes, which produce and store the protein keratin. About 8% are melanocytes, which produce the pigment melanin to protect the skin from ultraviolet radiation and sunburn. The remaining 2% consists of Langerhans cells, which detect and fight pathogens, and Merkel cells, which connect to nerve endings to create a sense of touch. The deeper layer, the dermis, contains blood vessels and is made up of connective tissue, which provides strength and elasticity to the skin. Deeper subcutaneous layers (hypodermis) store adipose (fatty) tissue, which provides energy (triglycerides) and helps insulate the body. The glands in the dermis are called exocrine glands, of which there are 2 types: sudoriferous (sweat) glands, which secrete substances onto an epithelial surface by way of a duct; and sebaceous glands, which produce an oily secretion known as sebum. There are 2 types of sudoriferous glands, eccrine and apocrine. Eccrine sweat glands are found throughout the body and produce a secretion of water and sodium chloride. The secretion (sweat) is delivered via a duct to the surface of the skin and lowers the body's temperature through evaporative cooling. Apocrine sweat glands produce a thick, oily liquid that is consumed by bacteria living on the skin. This process is what produces body odor. There is also a specialized exocrine gland, ceruminous, found only in the

dermis of the ear canal. It produces a waxy secretion known as cerumen, which lubricates and protects the ear canals and the eardrum. Finally, hair is considered an accessory organ of the skin that helps protect the body from UV radiation and also insulates the skin by trapping warm air. Its structure consists of follicles (depressions in the epidermal cells deep in the dermis layer); a root in the follicle that generates keratin and melanocytes; and a hair shaft. The hair shaft includes the cuticle, cortex, and medulla [252].

7.8.2 Integumentary system disorders and diseases and associated AI applications
The number of direct and indirect (secondary to other conditions) disorders of the integumentary system is enormous. There are over 2700 dermatological entities affecting skin, hair, nails and exocrine glands [253]. Among them, AI has been applied to many (but far from all) of the more prevalent disorders. In this section, we will address 12 common dermatological disorders in adults, including 10 of the skin, 1 of the nails and 1 of the hair, many of which you will likely have heard of, seen or experienced. I will include a brief description of each disorder and its standard treatment as well as an example of current AI activity relating to the condition.
1. Acne (Acne vulgaris): Acne vulgaris, a very common skin condition, is a chronic inflammation of the sebaceous gland with lesions (comedones, papules, erythema, pustules) chiefly on the face, but it can also occur on the upper arms, trunk, and back. It is a chronic condition but self-limits over time, with a risk of scarring. Causes range from medications to UV, endocrine disorders, pregnancy, and genetic predisposition. Treatments include topical retinoids and antibiotics, and oral doxycycline and isotretinoin [254]. An AI algorithm was trained on 479 images of acne patients to see if it could perform acne grading equal to or superior to expert dermatologists. The AI recognized the condition with an accuracy of 0.854 and a highly significant (P < 0.001) correlation between manual and automated evaluation [255].
2. Alopecia areata: This condition is a common autoimmune disorder that can result in unpredictable hair loss, usually in small, quarter-size patches on the scalp. In some cases, there can be a complete loss of hair on the scalp and body (alopecia universalis). The condition can affect anyone regardless of age and gender, though most cases occur before the age of 30. People with limited patches of hair loss often experience a spontaneous, full recovery. There is no cure, but in some cases, anti-inflammatories to suppress the immune system have some effect [256]. A series of scientific publications have explored advances that involve both stem-cell research and 3-D printing, intending to clone a person's actual hair and then insert it into his or her scalp in tremendous, unlimited quantities. It involves growing hair from stem cells derived from a person's skin or blood, and implanting hair follicles rich with

dermal papillae into the space around a person's old, shrunken, dormant follicles. The ability to regenerate an entire hair from cultured human cells "will have a transformative impact on the medical management of different types of alopecia" [257].
3. Cold sore (Herpes labialis or fever blisters): There are 2 types of Herpes Simplex Virus (HSV). Type 1 causes oral herpes or cold sores. Type 2 causes genital herpes. Type 1 infects more than half of the U.S. population by the time they reach their 20s. Cold sores usually occur outside the mouth, on or around the lips. When they are inside the mouth, they are usually on the gums or the roof of the mouth. There is no cure for cold sores. They normally go away on their own in a few weeks, but antiviral medicines (e.g., acyclovir), taken at the first signs of blistering, can help them heal faster [258]. Bioinformaticians have developed algorithms that can predict the probability of infection progressing in individual HSV-infected cells. The herpes virus is a good research model because it is relatively easy to work with in the laboratory. These new insights can lead to the prevention of HSV-1 infections [259].
4. Contact dermatitis: As with so many skin disorders, contact dermatitis is an inflammatory eczematous skin condition. The 2 types are irritant and allergic contact dermatitis, both caused by exogenous antigens, with the latter (allergic) releasing mediators that induce a delayed (type 4) hypersensitivity (see page 296). All individuals are at risk for contact dermatitis, and the symptoms are erythema, edema, oozing, and tenderness. Treatment is the avoidance of offending antigens, topical (oral in severe cases) steroids, and antihistamines [260]. Generally, dermatologists use visual assessment to make decisions about severity scoring for dermatitis. An AI approach has been developed to classify severity scoring based on image recognition, using machine learning algorithms on color, texture, and redness features of the skin. The redness features showed a significant indication compared to other features. The overall accuracy achieved by the multi-class SVM classifier was around 86%, and the multi-class SVM successfully classified the severity score into 4 classes [261].
5. Hives (urticaria): Urticaria appears as raised, well-circumscribed areas of erythema (redness) and edema (swelling) involving the dermis and epidermis that are very pruritic (itchy). Acute urticaria (less than 6 weeks, versus chronic, greater than 6 weeks) can be caused by allergic reactions to foods, drugs, cosmetics, or soaps; infections; insect bites, stings, or exposure; environmental factors; latex; undue skin pressure, cold, or heat; even emotional stress and exercise. Local urticaria can occur following contact with allergens via an IgE-mediated mechanism. Treatment includes avoidance of offending antigen(s) and oral antihistamines (sometimes biologics) [262]. An AI computer-assisted medical decision-making (CMD) system can be used as an aid for decision support in the diagnosis of urticaria. A Bayes classification approach achieves a classification accuracy of 96.92% over the test samples. The junior clinicians

were able to classify the test samples with an average accuracy of 75.68%. A probabilistic classification approach was used for identifying the presence or absence of urticaria based on intradermal skin test results. In the absence of an allergy specialist, the CDM system can assist junior clinicians in clinical decision making [263].
6. Nevus (moles): When pigment cells in the skin (melanocytes) grow in clusters, they are termed a nevus or a mole. They are very common, with most people having between 10 and 40 on their bodies. A person may develop new moles from time to time, usually until about age 40. They are usually pink, tan or brown and can be flat or raised. They are usually round or oval and usually less than 10 mm (larger moles should be biopsied to rule out melanoma) [264]. An app has been developed that uses AI to digitally map out the skin to make it easier for its users to detect new moles, freckles and other marks. This, in turn, helps doctors to spot abnormalities. The app allows patients to capture and track wide-area images of their entire body, including the back, an area not commonly checked thoroughly. Images are then transmitted to specialists who identify suspicious lesions [265].
7. Nail fungus (and athlete's foot, tinea pedis): Also called onychomycosis, nail fungus is a common condition that begins as a white or yellow spot under the tip of your fingernail or toenail. As the fungal infection goes deeper, nail fungus may cause your nail to discolor, thicken and crumble at the edge. It can affect several nails. When fungus infects the areas between your toes and the skin of your feet, it's called athlete's foot (tinea pedis). The fungus can affect fingernails, but it's more common in toenails [266]. A deep neural network approach beat 42 dermatology experts in diagnosing a common nail fungus. The AI program relied heavily upon a huge dataset of almost 50,000 images of toenails and fingernails. The data used to train the deep neural networks on recognizing cases of onychomycosis provided the crucial edge that enabled deep learning to outperform medical experts [267].
8. Psoriasis: Psoriasis is an immune-mediated disease that causes raised, red, scaly patches to appear on the skin. It typically affects the outside of the elbows, knees or scalp, though it can appear in any location. People report symptoms such as itching, burning and stinging. It is also associated with other serious health conditions, such as diabetes, heart disease, and depression. There are multiple forms of the skin lesions, and diagnosis is by skin biopsy. Treatment involves a combination of strategies including topical treatments, phototherapy (also known as light therapy) and biologic drugs. A study was conducted to establish an accurate and objective psoriasis assessment method based on segmenting images by machine learning technology. An algorithm trained on 203 images achieved an accuracy of more than 90% in 77% of the images and differed on average by 5.9% from manually marked areas. The difference between algorithm-predicted and photo-based areas estimated by physicians represented an 8.1% improvement [268].

9. Rosacea: This is a chronic skin disease that affects the face, primarily the forehead, nose, cheeks, and chin of fair-complexioned people, particularly of western European ancestry. There are 3 main types of rosacea, categorized by their primary signs and symptoms. Erythematotelangiectatic rosacea causes skin redness and warmth (flushing) and visible clusters of blood vessels (telangiectasia). Papulopustular rosacea causes skin redness, swelling, and pus-filled bumps called pustules. Phymatous rosacea is characterized by thickened skin on the face and an enlarged, bulbous nose (rhinophyma). The eyes and eyelids are often involved. Numerous triggering factors vary among individuals [269]. Treatment (with variable success) includes topical therapies and oral doxycycline [270]. AI and machine learning could prove to be a good tool for assisting in the diagnosis of dermatological conditions. A study was conducted to evaluate the use of data augmentation in machine learning image recognition of 5 dermatological disease manifestations: acne, atopic dermatitis, impetigo, psoriasis, and rosacea. The average of each of the statistical metrics (sensitivity, specificity, PPV, NPV, MCC, and F1 score) increased when data augmentation was added to the model. With a deep learning-based approach, it is possible to differentiate dermatological images with appreciable MCC, F1 score, and AUC [271].
10. Seborrheic keratosis: This involves superficial, often pigmented, epithelial lesions that are usually warty but may occur as smooth papules. The cause is unknown, but genetic mutations have been identified in certain types. The lesions commonly occur in middle age and later and most often appear on the trunk or temples. Seborrheic keratoses vary in size and grow slowly. They may be round or oval and flesh-colored, brown, or black. They usually appear "stuck on" and may have a verrucous, velvety, waxy, scaling, or crusted surface. Lesions are not premalignant and need no treatment unless they are irritated, itchy, or cosmetically bothersome [272]. Automatic skin lesion analysis involves 2 critical steps: lesion segmentation and lesion classification. A multi-target deep convolutional neural network (DCNN) was developed to simultaneously tackle the problems of segmentation and classification, using melanoma detection and seborrheic keratosis identification. Results proved the multi-target DCNN model demonstrates superiority over a single model, indicating its learning efficiency and potential for application in automatic skin lesion diagnosis [273].
11. Shingles (herpes zoster): Shingles (herpes zoster, caused by the varicella-zoster "chickenpox" virus) produces a painful rash or blisters on the skin on 1 side of the body. After several days, fluid-filled blisters develop. The most common location for shingles is a band, called a dermatome, spanning 1 side of the trunk around the waistline. Anyone who has had chickenpox is at risk for shingles. Immediate treatment with antiviral drugs helps stave off the painful after-effects of shingles known as postherpetic neuralgia. Other treatments for postherpetic neuralgia include steroids. The shingles vaccine is a

preventive therapy but not a treatment for those who already have shingles or postherpetic neuralgia [274]. It is challenging for clinicians to tailor treatment to patients, due to the lack of prognostic information on the neurological pathogenesis that underlies herpes zoster (HZ). A study aimed at characterizing the brain structural pattern of HZ before treatment with medication that could help predict medication responses. An algorithm, a support vector machine (SVM), was applied with MRI to identify the spatial pattern of gray matter (GM) volume, with high predictive accuracy. The predictive regions, with an accuracy higher than 79%, were located and displayed significant increases of gray matter volumes in patients with poor response, compared to those with a good response. The combination of sMRI and AI might be a useful tool to explore the neuroanatomical imaging biomarkers of HZ-related pain associated with medication responses [275].
12. Skin cancer (basal cell carcinoma and melanoma): Skin cancer is the most common type of cancer (see Table 7–5). It is caused by out-of-control growth of abnormal cells (mutations) in the epidermis. There are multiple types of skin cancers, including: [276]
• Basal cell carcinoma (>4.3 million cases per year);
• Squamous cell carcinoma (>1 million cases per year);
• Melanoma (192,310 cases in 2019);
• Merkel cell carcinoma (about 2000 cases per year);
• Kaposi sarcoma (6 cases per 1 million people per year);
• Cutaneous (skin) lymphoma (6.4 cases per 1 million people per year);
• Various types of sarcomas (skin adnexal tumors that start in hair follicles or skin glands);
• Precancerous and non-cancerous tumors:
a. Moles (described previously);
b. Actinic keratosis (more than 58 million cases per year);
c. Hemangiomas (benign blood vessel growths, often called strawberry spots);
d. Lipomas (soft tumors made up of fat cells);
e. Warts (rough-surfaced growths caused by some types of human papillomavirus).
Given that this chapter is dedicated to more prevalent conditions, we will limit our discussion of skin cancers to the 3 most common, i.e., basal cell carcinoma, squamous cell carcinoma, and melanoma.
A. Basal cell carcinoma (BCC): This skin cancer develops most often on skin areas typically exposed to the sun (UV exposure is the most common cause), especially the face, ears, neck, scalp, shoulders, and back. It is characterized by an open sore that does not heal and may bleed, ooze or crust. The sore might persist for weeks, or appear to heal and then come back. Lesions are localized and destructive if not treated early. BCC rarely metastasizes and very rarely is fatal [276]. An interesting AI study was conducted at Yale University to develop and validate a multi-parameterized artificial neural network based on available personal health

information for early detection of non-melanoma skin cancers (NMSC) with high sensitivity and specificity, even in the absence of known UVR exposure and family history. A neural network (NN) was trained on 2056 NMSC and 460,574 non-cancer cases measuring 13 parameters: gender, age, BMI, diabetic status, smoking status, emphysema, asthma, race, Hispanic ethnicity, hypertension, heart disease, vigorous exercise habits, and history of stroke. The results (training sensitivity 88.5% and specificity 62.2%, validation sensitivity 86.2% and specificity 62.7%) were comparable to a previous study of basal and squamous cell carcinoma prediction that also included UVR exposure and family history information. These results indicate that an AI NN is robust enough to make predictions, suggesting novel associations and potential predictive parameters of NMSC [277].
B. Squamous cell carcinoma: Squamous cell carcinoma (SCC), also known as cutaneous squamous cell carcinoma (cSCC), is the second most common form of skin cancer. It is characterized by abnormal, accelerated growth of squamous cells and appears as scaly red patches, open sores, rough, thickened or wart-like skin, or raised growths with a central depression. When caught early, most SCCs are curable [278]. Head-and-neck squamous cell carcinoma (HNSCC) is one of the most common malignancies worldwide. The development of prediction models would augment clinicians' ability to provide absolute risk estimates for individual patients. A retrospective cohort study was conducted of 33,065 patients with oral squamous cell carcinoma from the National Cancer Data Base between 1 January 2004 and 31 December 2011. Using machine learning algorithms, a prediction model was created based on patient social, demographic, clinical, and pathologic features. The developed prediction model proved to be better than a prediction model that exclusively used TNM (Tumor, Node, Metastasis) pathologic and clinical stage according to all performance metrics. This study highlights the role that machine learning may play in individual patient risk estimation in the era of big data [279].
C. Melanoma: Melanoma is much less common than the other types but much more likely to invade nearby tissue and spread to other parts of the body. Most deaths from skin cancer are caused by melanoma (greater than 7200 per year). The first 5 letters of the alphabet are a guide to help you recognize the warning signs of melanoma [280]:
A is for Asymmetry. Most melanomas are asymmetrical.
B is for Border. Melanoma borders tend to be uneven and may have scalloped or notched edges.
C is for Color. Multiple colors are a warning sign. As a melanoma grows, the colors red, white or blue may also appear.
D is for Diameter or Dark. A lesion the size of a pencil eraser (about 6 mm, or 1/4 inch in diameter) or larger is a warning sign.
E is for Evolving. Any change in size, shape, color or elevation of a spot on the skin, or any new symptom in it, such as bleeding, itching or crusting, is a warning sign.

The literature on AI applications in malignant melanoma diagnosis is extensive. Some of the most recent articles document studies of the practical and effective use of AI with smartphones in the diagnosis of BCC. A published article (among many) in the dermatological literature, with 88 collaborators, presents the classification of skin lesions using a single convolutional neural network (CNN), trained directly on a dataset of 11,444 clinical images, using only pixels and disease labels as inputs. The algorithm's performance was tested against 112 board-certified dermatologists on biopsy-proven clinical images of malignant melanomas versus benign nevi. The CNN outperformed all tested experts (P < 0.001), demonstrating an AI capable of classifying skin cancer with a level of competence comparable to dermatologists [281] (a generic transfer-learning sketch of this kind of lesion classifier follows).
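
The following is a generic, hedged sketch of how such a binary lesion classifier might be assembled with transfer learning in PyTorch/torchvision. It is not the architecture, dataset, or training regimen of the cited study; the ResNet-18 backbone, image size, and random tensors are illustrative assumptions only.

# Generic transfer-learning sketch for a melanoma-vs.-benign-nevus classifier.
# Random tensors stand in for clinical photographs; settings are illustrative.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=None)                 # weights=None keeps the example offline
backbone.fc = nn.Linear(backbone.fc.in_features, 2)      # 2 outputs: melanoma / benign nevus

images = torch.rand(8, 3, 224, 224)                      # stand-ins for clinical lesion photos
labels = torch.randint(0, 2, (8,))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

backbone.train()
optimizer.zero_grad()
loss = criterion(backbone(images), labels)               # one illustrative training step
loss.backward()
optimizer.step()
print(f"one training step, loss = {loss.item():.3f}")

In a realistic setting, the backbone would start from weights pretrained on a large natural-image corpus and be fine-tuned on many thousands of labeled, biopsy-verified lesion images.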

7.9 Endocrine glands
The endocrine system is made up of a network of glands that secrete hormones (chemical substances transported through the blood) to regulate bodily functions. The system conducts these tasks through its network of 10 glands that produce, store, and secrete the hormones. These glands include: [282]
• Adrenal: Two glands on top of the kidneys, each with 2 distinct parts: a. The cortex (outer portion) produces hormones such as cortisol, which helps regulate metabolism, sex drive, and the body's response to stress, and aldosterone, which helps control blood pressure; b. The medulla (inner portion) produces hormones such as adrenaline, which helps the body react to stress.
• Hypothalamus: This is a region of the brain that establishes a link between the endocrine and nervous systems. It produces releasing and inhibiting hormones, which modulate the production of other hormones throughout the body.
• Ovaries: Two glands that maintain the health of the female reproductive system and secrete the 2 main female sex hormones, estrogen and progesterone.
• Pancreas: This gland secretes the insulin and glucagon hormones through endocrine cells (islets of Langerhans) to regulate blood glucose. It also functions as an exocrine gland by secreting enzymes through ducts to break down the proteins, lipids, carbohydrates, and nucleic acids in food.
• Parathyroid: Four small glands in the throat region with the sole purpose of secreting parathyroid hormone to regulate the calcium level in our bodies.
• Pineal ("Artificial brain"): Located deep in the center of the brain, it produces melatonin, which is believed to help maintain circadian rhythm, affecting sleep, and regulates reproductive hormones.
• Testes: A pair of sperm-producing organs (also known as gonads) that maintain the health of the male reproductive system. They produce sperm and secrete the testosterone hormone, which maintains libido, muscle strength, and bone density.
• Thymus: This gland, located behind the sternum, between the lungs, is only active until puberty. Thymosin is the hormone of the thymus, and it stimulates the development of disease-fighting T cells. After puberty, all T cells are produced and the gland starts to slowly shrink and become replaced by fat.

• Thyroid: This gland (in the throat) regulates metabolism through the 2 main thyroid hormones, T3 and T4. These hormones extract iodine from the blood and incorporate it into hormones that are associated with calorie burning and heart rate.
Through their interactions with other bodily systems, this complex network of glands helps control multiple life functions, including:
• Growth and development;
• Homeostasis (the internal balance of body systems);
• Metabolism (body energy levels);
• Reproduction;
• Response to stimuli (stress and/or injury).

Given the magnitude of the endocrine system and its direct and indirect associations with virtually all the systems of the body, the range of disorders relating to the endocrine system is correspondingly extensive. The system must release the proper amounts of hormones (too much or too little creating abnormal results); blood must be flowing properly to transmit the hormones effectively to target organ systems; and hormone receptors in target organ systems must be functioning properly to receive the chemical signals and achieve their functional goals. Thus, endocrine-related disorders are relatively common and demand a medical specialty, endocrinology, to effectively diagnose a vast array of related dysfunctions. Such a complex of actions and interactions makes an ideal framework for AI's machine learning and deep learning capabilities. This is exemplified in the plethora of AI applications in endocrinology. Almost all of the disorders and diseases related directly and indirectly to the endocrine system have been influenced by AI applications.

7.9.1 Endocrine disorders and diseases and associated AI applications
The following listings include the direct disorders associated with each endocrine gland (from least to most common) and provide a brief review of a recent and relevant AI program for the last (most common) entry on each list. Once again, as mentioned for all of the previous clinical categories discussed in this chapter, this sampling of AI programs represents only a fraction of the total body of AI applications in endocrinology.
• Adrenal gland disorders and diseases:
a. Pheochromocytomas (oversecretion of epinephrine and norepinephrine);
b. Hyperaldosteronism (oversecretion of aldosterone);
c. Addison's Disease (adrenal insufficiency);
d. Cushing Syndrome (oversecretion of cortisol): Variants of support vector machine, neural networks, and other ML techniques, with immunohistochemical methods, were utilized in categorizing Cushing syndrome with

adrenocortical lesions. Based on gene expression profiling, 2 specific DNA molecules were found in adrenocortical carcinoma versus 1 related to adrenocortical hyperplasia, with adrenal adenomas expressing all 3. These techniques diagnosed the adrenocortical disease type with 92.6% accuracy. This enabled the differentiation of a cancer from an adenoma with a sensitivity and specificity of 90%, a diagnostic value that was higher than CT, MRI, or PET scans [283].
• Hypothalamus disorders and diseases (indirect disorders due to the gland's functions):
a. Hypopituitarism;
b. Neurogenic diabetes insipidus;
c. Tertiary hypothyroidism;
d. Prader-Willi Syndrome (genetic link): In a study, researchers relied on machine learning to teach a computer how to predict brain age based on magnetic resonance imaging (MRI) scans of 2001 normal subjects, from 18 to 90 years old. The team then used the system to compare the predicted brain age of 20 adults with Prader-Willi Syndrome (PWS). Most of the PWS patients in the study (19 out of 20) had a paternal chromosome 15q11–13 deletion, the most common cause of PWS. Results showed that, on average, the brain-predicted age of the PWS patients was 7.24 years older than in the matched controls [284].
• Ovaries disorders and diseases:
a. Endometriosis (endometrium grows outside the uterus);
b. Ovarian cysts and Polycystic Ovary Syndrome (PCOS);
c. Ovarian Germ Cell Tumors;
d. Ovarian Low Malignant Potential Tumors;
e. Ovarian Epithelial Cancer (EOC): A study was conducted to develop an ovarian cancer-specific predictive framework for clinical stage, histotype, residual tumor burden, and prognosis using machine learning methods based on multiple biomarkers. Machine learning techniques were superior to conventional regression-based analyses in predicting multiple clinical parameters pertaining to EOC. Machine learning systems can provide a critical diagnostic and prognostic prediction for patients with EOC before the initial intervention, and the use of predictive algorithms may facilitate personalized treatment options through pretreatment stratification of patients [285].
• Pancreas disorders and diseases:
a. Acute pancreatitis;
b. Chronic pancreatitis;
c. Hereditary pancreatitis;
d. Pancreatic cancer (4th most common cause of cancer death in men and 5th in women): A study was conducted to evaluate a deep learning protocol to identify neoplasia in intraductal papillary mucinous neoplasia (IPMN) in comparison to current radiographic criteria. A computer-aided framework was designed using convolutional neural networks to classify IPMN. The protocol was applied to magnetic

resonance images of the pancreas. The deep learning protocol's sensitivity and specificity to detect dysplasia were 92% and 52%, respectively. Sensitivity and specificity to identify high-grade dysplasia or cancer were 75% and 78%, respectively. Diagnostic performance was similar to radiologic criteria. The deep learning protocol showed accuracy comparable to current radiographic criteria [286].
• Parathyroid disorders and diseases:
a. Hypoparathyroidism;
b. Parathyroid cancer (extremely rare);
c. Hyperparathyroidism: Up to 20–25% of patients with primary hyperparathyroidism will have multigland disease (MGD). Preoperative imaging can be inaccurate or unnecessary in MGD. The ML platform Waikato Environment for Knowledge Analysis was used, and models were selected for (1) overall accuracy and (2) preferential identification of MGD. A review of imaging studies was performed on a cohort predicted to have MGD. The best overall accuracy was achieved using a boosted tree classifier, RandomTree. To maximize the positive predictive value of MGD prediction, a rule-based classifier, JRip, with cost-sensitive learning was used and achieved 100% positive predictive value for MGD. Only 8 (29%) of the injected dye (sestamibi) scans and 4 (36%) of the ultrasounds were correct. ML methods can help distinguish MGD early in the clinical evaluation of primary hyperparathyroidism [287].
• Pineal disorders and diseases:
a. Tumors (very rare);
b. Sleep (circadian rhythm) dysfunction: One cause of poor sleep is the environmental condition of the sleeping location. The lighting, humidity, and temperature of the room affect sleep. Using the Internet of Things and AI, a proposed system aims at tracking sleeping patterns, finding the causes, relating them to environmental parameters, i.e., temperature, humidity, and light intensity, and then controlling those parameters to optimize the environment for a better quality of sleep. The system adapts to each user, tracking sleep pattern and quality via wristbands. The system demonstrates both the contributing factors and the solutions, improving the quality and quantity of sleep by utilizing the Internet of Things and artificial intelligence [288].
• Testes disorders and diseases:
a. Epididymitis;
b. Hydrocele;
c. Testicular torsion;
d. Varicocele;
e. Hypogonadism;
f. Orchitis (inflammation of the testicles);
g. Spermatocele (cyst in the epididymis, the coiled tube behind each testicle);

h. Testicular pain;
i. Testicular swelling;
j. Other scrotal conditions;
k. Testicular cancer: A study was conducted to evaluate whether a deep learning algorithm can be trained to identify tumor-infiltrating lymphocytes (TILs) in tissue samples of testicular germ cell tumors and to assess whether the TIL counts correlate with the relapse status of the patient. TILs were manually annotated in 259 tumor regions from 28 slide images. A deep learning algorithm was trained on half of the regions and tested on the other half. The algorithm was further applied to larger areas of tumor slides from 89 patients and correlated with clinicopathological data. Results indicated deep learning-based image analysis can be used for detecting TILs in testicular germ cell cancer more objectively, and it has potential for use as a prognostic marker for disease relapse [190].
• Thymus disorders and diseases:
a. Congenital disorders;
b. Autoimmune disease (see Myasthenia gravis, page 294);
c. Thymus cancers (thymomas, lymphomas): Thymoma-associated myasthenia gravis (TAMG) is the most common paraneoplastic syndrome of thymoma and is closely related to thymus abnormalities. A large cohort of 230 thymoma patients was enrolled, of which 182 (81 with TAMG, 101 without TAMG) were used for training and model building; 48 cases from another hospital were used for external validation. A comprehensive analysis integrating machine learning and general CT image features, named the 3DDenseNet-DL-based multi-model, was also performed to establish a more effective prediction model. The model can effectively detect TAMG in patients with thymoma based on preoperative CT imaging. This model may serve as a routine non-invasive screening method or as a supplement to the conventional diagnostic criteria for identifying TAMG [289].
• Thyroid disorders and diseases:
a. Anaplastic Thyroid Cancer (the most aggressive thyroid cancer);
b. De Quervain's Thyroiditis (giant cell thyroiditis);
c. Follicular Thyroid Cancer (asymptomatic thyroid mass or nodule);
d. Goiters (abnormal enlargement of the thyroid gland);
e. Graves' Disease (an autoimmune disorder that causes hyperthyroidism);
f. Hashimoto's Thyroiditis (an autoimmune disorder that can cause hypothyroidism);
g. Hurthle Cell Thyroid Cancer (associated with Hashimoto's thyroiditis);
h. Hyperthyroidism;
i. Hypothyroidism;
j. Medullary Thyroid Cancer (develops from C cells in the thyroid gland);
k. Papillary Thyroid Cancer (the most common form of thyroid cancer);
l. Silent Thyroiditis (immune reaction causing hyperthyroidism, followed by hypothyroidism);

m. Thyroid Nodules: A diagnostic study of 134 lesions among 121 patients was conducted to develop a model through automated machine learning. The AI algorithm was able to identify genetically high-risk thyroid nodules by ultrasonography alone, with a specificity of 97% and a positive predictive value of 90%. The findings suggest that applying machine learning to the genetic risk stratification of thyroid nodules is feasible, affording an additional diagnostic adjunct to cytogenetics for nodules with an indeterminate cytological result [290] (a minimal sketch of this kind of tabular risk classifier follows).
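
As a rough illustration of the kind of tabular risk model an automated machine learning platform might produce for thyroid nodules, the sketch below trains a gradient-boosted classifier on synthetic ultrasound-style features and reports specificity and positive predictive value. The feature names and data are invented for illustration; the cited study's actual pipeline and variables are not reproduced.

# Hedged sketch: gradient-boosted classifier on synthetic nodule features,
# reporting the two metrics highlighted in the study (specificity and PPV).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
# Hypothetical columns: nodule size, echogenicity score, margin irregularity, microcalcification
X = rng.normal(size=(400, 4))
y = (X[:, 2] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=400) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
specificity = tn / (tn + fp)
ppv = tp / (tp + fp) if (tp + fp) else float("nan")
print(f"specificity {specificity:.2f}, positive predictive value {ppv:.2f}")

High specificity and PPV matter here because the model's role is to flag genetically high-risk nodules without sending large numbers of benign nodules to unnecessary molecular testing or surgery.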

7.10 Digestive and excretory systems
Among the remaining 6 systems of the human body to be discussed, you will note that I have paired the digestive system with the excretory system (here in #10) and the renal system with the urinary system (in #11 following). The reason is simply that these respective pairings represent systems whose processes are a continuum; that is, their sequential actions continue from 1 system to the next to completion. The excretory system is a composite of the organs and functions of these 2 systems (digestive and excretory) associated with their elimination process. These organs include the kidneys, ureters and urinary bladder of the renal system; the skin of the integumentary system (for sweat); and the lower gastrointestinal tract (colon, rectum, and anus) of the digestive tract. While the respective systems associated with the excretory system are themselves anatomically and functionally distinct, collectively their sequential actions define the excretory system's function of eliminating bodily waste. Also worthy of note here, in this description of the commingling of bodily systems' functions and anatomy, is the endocrine system (as previously discussed). Portions of each bodily system include endocrine glands as part of their functional processes. A quick review of the digestive system anatomy includes the gastrointestinal tract (the GI or digestive tract) and the liver, pancreas, and gallbladder. The GI tract includes the mouth, esophagus, stomach, small intestine, large intestine, and anus. Functionally, the system uses enzymes and hormones from the pancreas, and bile from the liver and gallbladder, to break down nutrients into amino acids, fatty acids, glycerol and sugars for the body to absorb and use for energy, growth, and cell repair. These nutrients are then absorbed through the small intestines into the bloodstream and distributed to the organs and tissues throughout the body. And finally, the large intestines absorb water and waste products for elimination [291].

7.10.1 Digestive and excretory disorders and diseases and associated AI applications
It is estimated that 60–70 million people in the U.S. are affected by digestive diseases each year [292]. Some diseases and conditions are acute, lasting only a short time, while others

are chronic, or long-lasting. Gastroenterology is the principal medical specialty responsible for digestive disease. There are more than 40 major disease categories directly associated with the digestive system. I will list the most prevalent disease categories with a brief description and a recent and relevant AI application. Once again, as mentioned for all of the previous clinical categories discussed in this chapter, this sampling of AI programs represents only a fraction of the total body of AI applications in gastroenterology.
• Acid reflux (GER & GERD): Gastroesophageal reflux (GER) is a physical condition in which acid from the stomach flows backward up into the esophagus. More than 15 million Americans experience acid reflux (heartburn or acid indigestion) symptoms each day. Symptoms are more common among the elderly and pregnant women. Frequent heartburn (two or more times a week), food sticking, blood or weight loss may be associated with a more severe problem known as gastroesophageal reflux disease (GERD) [293]. Endoscopic therapy using AI decision trees is playing a key role in the treatment of pre-cancerous and early cancerous conditions of the gastrointestinal (GI) tract, for which it has been increasingly replacing conventional surgical approaches, with improved tolerability and comparable or better outcomes. Endoscopic approaches for the treatment of obesity, gastro-esophageal reflux disease, and diabetes have emerged in recent years. These are at different stages of introduction into routine practice but are likely to have a significant impact on patient management soon [294].
• Liver and pancreatic disease: A series of twenty-two studies tested the ability of AI to aid in the identification of patients with pancreatobiliary or liver diseases. Six studies tested AI in the detection of pancreatic adenocarcinoma, with an AUROC of approximately 90%. Of sixteen AI hepatology studies, 7 attempted to detect fibrosis associated with viral hepatitis. AI strategies were also developed to detect nonalcoholic fatty liver disease and patients with esophageal varices, and 1 to assess patients with chronic liver disease of any cause. Thirteen studies used data from electronic medical records and/or biologic features to build the algorithms, and 3 studies used data from elastography. All models identified their target factor with approximately 80% accuracy [295]. An ANN based on clinical and laboratory data identified patients with cirrhosis who would die within 1 year with 90% accuracy [296]. Among numerous other studies, an ML model based on clinical, laboratory and histologic data identified patients with chronic hepatitis C virus infection at highest risk for disease progression and liver-related outcomes (e.g., liver-related death, hepatic decompensation, hepatocellular carcinoma, liver transplant) with an AUROC of 0.78 in a validation cohort of 1007 patients [297]. In hepatology, AI techniques could be used to determine patients' risk of liver fibrosis and allow some patients to avoid liver biopsy. Limitations of AI techniques that require caution include the lack of high-quality datasets for ML development; DL algorithms being considered black-box models; legal liabilities in endoscopic misdiagnosis; and inherent

biases, such as racial discrimination, in AI algorithms to determine risk of fibrosis related to viral hepatitis [298].
• Colorectal cancer: Colorectal cancer is the second most common cancer killer overall and the third most common cause of cancer-related death in both males and females in the United States (52,000 per year). Due to advances in screening techniques and improvements in treatments, the death rate from colorectal cancer has been falling. The cancer starts in the colon or the rectum. These cancers can also be named bowel cancer, colon cancer or rectal cancer. They occur when tumors form in the lining of the large intestine. The risk of developing colorectal cancer rises after age 50. The risk increases if you have colorectal polyps, a family history of colorectal cancer, ulcerative colitis or Crohn's disease, eat a diet high in fat, or smoke. Everyone over 50 should get screened. Tests include colonoscopy and tests for blood in the stool. Treatments for colorectal cancer include surgery, chemotherapy, radiation, or a combination. Surgery can usually cure it when it is found early [299]. A study was conducted to investigate the effect of an automatic polyp detection system based on deep learning on the polyp detection rate and adenoma detection rate (ADR). Of 1058 patients included, 536 were randomized to standard colonoscopy, and 522 were randomized to colonoscopy with computer-aided diagnosis. The AI system significantly increased the ADR (29.1% vs 20.3%, P < 0.001) and the mean number of adenomas per patient (0.5 vs 0.31, P < 0.001). In addition, the number of hyperplastic polyps detected was also significantly increased (114 vs 52, P < 0.001). This automatic polyp detection system during colonoscopy resulted in a significant increase in the number of diminutive adenomas detected, as well as an increase in the rate of hyperplastic polyps [300].
• Crohn's disease: This disease causes inflammation of the digestive system. It is 1 of a group of diseases called granulomatous inflammatory bowel diseases. Crohn's can affect any area from the mouth to the anus. The cause of the disease is unknown. The most common symptoms are pain in the abdomen and diarrhea. There is no cure for Crohn's Disease. Treatment can help control symptoms and may include medicines, biologics, nutrition supplements, and/or colon and ileal surgery. Some people have long periods of remission when they are free of symptoms [301]. Scientists have developed a computer method that may help improve the understanding and treatment of Crohn's disease. The study used AI to examine genetic signatures of Crohn's in 111 people. The method revealed previously undiscovered genes linked to the disease and accurately predicted whether thousands of other people had the disease [302].
• Hepatitis C: This is a liver infection caused by the hepatitis C virus (HCV). Hepatitis C is a bloodborne virus. Today, most people become infected with the hepatitis C virus by sharing needles or other equipment to inject drugs. For some people, hepatitis C is a short-term
illness, but for 70%–85% of people who become infected with the hepatitis C virus, it becomes a long-term, chronic infection. There is no vaccine for hepatitis C [303]. A Kohonen (self-organizing) artificial neural network (ANN) was used to analyze socio-medical data of 1.8 million insurants for predictors of undiagnosed HCV infections. The network was trained with variables obtained from a subgroup of 2544 patients with confirmed hepatitis C virus (HCV) infections, excluding variables directly linked to the diagnosis of HCV. Training results were visualized 3-dimensionally, and the distributions and characteristics of the clusters were explored within the map. This ANN approach may allow for a more efficient risk-adapted HCV screening [304].
• Irritable bowel syndrome (IBS): IBS is a common problem (the most common for gastroenterologists, affecting 10%–15% of the adult population [305]) that affects the large intestine. It can cause abdominal cramping, bloating, and a change in bowel habits. Some people have constipation and others diarrhea. Although IBS can cause a great deal of discomfort, it does not harm the intestines. It affects about twice as many women as men and is most often found in people younger than 45 years. The cause is unknown and there is no specific test for it. Symptoms are controlled through diet, stress management, probiotics, and medicine [306]. Data were obtained through an internet survey of people with irritable bowel syndrome, chronic fatigue syndrome, and fibromyalgia syndrome, all of whom completed the same 61-symptom questionnaire which was specially devised for this study. There was a total of 1751 respondents. A machine learning process was used to analyze the data. The study tested the prediction from adaptive network theory that the pattern of symptoms of people with different diagnoses becomes more similar as severity increases. This was found to be the case. The results from this study can be explained only by network theory, and not by competing psychological or disease models. The results show that as functional disorders become more severe, not only is there greater pathology in all the different mechanisms in the network, but also the strength of the causal connection between the mechanisms increases. This has implications for treatment [307].
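The Kohonen screening study above [304] illustrates a broadly applicable pattern: train a self-organizing map (SOM) on routinely collected features, then inspect which map cells are enriched for confirmed cases. The sketch below shows that general idea using the open-source minisom package; the feature count, the synthetic data, and the 2% case prevalence are placeholder assumptions for illustration only, not the study's actual variables or results.

```python
# Minimal sketch: Kohonen self-organizing map (SOM) for risk-adapted HCV screening.
# Feature count, data, and prevalence are synthetic placeholders, not the study's variables.
import numpy as np
from minisom import MiniSom  # third-party "minisom" package (assumed installed)

rng = np.random.default_rng(0)
n_insurants, n_features = 5000, 8                 # e.g., age, prior diagnoses, prescriptions
X = rng.normal(size=(n_insurants, n_features))
confirmed_hcv = rng.random(n_insurants) < 0.02    # placeholder labels for confirmed cases

som = MiniSom(10, 10, n_features, sigma=1.5, learning_rate=0.5, random_seed=0)
som.random_weights_init(X)
som.train_random(X, num_iteration=10_000)

# Count confirmed cases per map cell; cells with high counts describe the
# socio-medical profiles that could be prioritized in targeted screening.
hcv_per_cell = np.zeros((10, 10))
for features, label in zip(X, confirmed_hcv):
    i, j = som.winner(features)
    hcv_per_cell[i, j] += label
print(hcv_per_cell)
```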

7.11 Renal system and urinary system
The renal system and urinary system (or tract) are distinguished by their anatomical structures as well as their functions. The renal system includes the kidneys (2 bean-shaped organs) through which blood flows and is filtered of waste materials from food, medications, and toxic substances, which are then excreted through the urinary tract. The kidneys also maintain overall fluid balance in the body; regulate minerals in the blood; create hormones that help produce red blood cells; promote bone health, and regulate blood pressure. The nephron is the functional unit of the kidney (approximately 1 million per kidney), extending from the outer renal cortex, into which blood is carried, down into the renal pyramids of the inner medulla. Blood is filtered through clusters of capillaries (glomeruli), and the resulting fluid (capsular urine) passes through Bowman’s capsule into renal tubules. These
tubules return water, sodium, glucose, and potassium to the bloodstream while absorbing urea (a byproduct of protein metabolism), finally collecting in tubules in the inner renal medulla as diluted fluid. This fluid passes through a funnel-shaped, inner portion of the kidney (renal pelvis); the renal vein leaving this region carries filtered blood from the kidneys back toward the heart, while the pelvis serves as a pathway for the remaining fluid (urine) to the ureters, or tubes to the bladder [308]. You can see the continuum mentioned above, where the urinary tract is effectively the endpoint or drainage system for the renal system. The bladder fills with urine until nerve signals are sent to the brain, at which point the bladder wall muscles contract and the sphincter muscle at the neck of the bladder relaxes, sending urine through the urethra.

7.11.1 Renal and urinary disorders and diseases and associated AI applications
Diseases and disorders of the renal and urinary system exceed 400 [309], including acute and chronic conditions ranging from benign to fatal. As with previous system reviews, I will identify 5 of the most common diseases and disorders and provide the most recent and relevant research and AI applications for each. Once again, this represents a mere fraction of the full scope of research and AI advances in kidney disease.
• Chronic kidney disease (CKD): CKD is a common form of kidney disease; it is a long-term condition that does not improve over time. It is commonly caused by high blood pressure and diabetes. These conditions are dangerous for the kidneys because they can increase the pressure on the glomeruli. Over time, the increased pressure damages the vessels and kidney function begins to decline. In advanced stages, the kidneys can no longer perform their job properly and dialysis would be needed [310]. A novel deep learning framework has been developed for chronic kidney disease classification using a stacked autoencoder model utilizing multimedia data with a softmax classifier. The stacked autoencoder helps to extract the useful features from the dataset, and then a softmax classifier is used to predict the final class (a minimal illustrative sketch of this two-step approach appears at the end of this section). Precision, recall, specificity, and F1-score were used as evaluation metrics for the assessment of the proposed network. It was observed that this multimodal model outperformed the other conventional classifiers used for chronic kidney disease with a classification accuracy of 100% [311].
• Kidney stones: Kidney stones are another common kidney problem. They occur when minerals and other substances in the blood crystallize in the kidneys, forming solid masses (stones). About 80% of kidney stones are calcium and the remaining amounts are from an array of crystallized substances. The stones usually come out of the body during urination and can be extremely painful; otherwise, they rarely cause significant problems [312].
Human kidney stones of different compositions were obtained from a stone analysis laboratory. Two images were captured for the majority of stones, and images were cropped to remove the background. A deep convolutional neural network was trained with the images to predict the stone composition. The study results showed that deep learning computer vision methods can be used to detect kidney stone composition with high accuracy and have the potential to replace laboratory analysis of stone composition [313].
• Glomerulonephritis: Glomerulonephritis is an inflammation of the glomeruli caused by infections, drugs, or congenital abnormalities. The acute disease may be caused by infections such as strep throat. It may also be caused by other illnesses, including lupus, Goodpasture’s syndrome, Wegener’s disease, and polyarteritis nodosa. Early diagnosis and prompt treatment are important to prevent kidney failure. The chronic forms are caused by changes in the immune system. However, in many cases, the cause is not known. Sometimes, you will have 1 acute attack of the disease and develop the chronic form years later. It could also get better on its own [314]. A machine learning approach using an eXtreme Gradient Boosting (XGBoost) model, which generates a series of iteratively constructed decision trees with each tree built on the previous one, performs well in predicting the kidney outcomes of immunoglobulin A nephropathy (IgAN). The algorithm had high statistical validity for the prediction of end-stage kidney disease or a 50% reduction in estimated glomerular filtration rate within 5 years after the biopsy diagnosis. This still represents meaningful progress in the application of machine learning to primary glomerular diseases in nephrology to further the precision medicine movement [315].
• Pyelonephritis: Acute pyelonephritis is a sudden and severe kidney infection. It causes the kidneys to swell and may permanently damage them. The condition can be life-threatening. When repeated or persistent attacks occur, the condition is called chronic pyelonephritis. This chronic form is rare, but it happens more often in children or people with urinary obstructions. Symptoms include high fever, pain in the abdomen, back, side, or groin, painful or burning urination, and cloudy urine. Treatment includes antibiotics based on the infecting agent, possible hospitalization and possible nephrectomy (partial removal of the kidney) [316]. The exact prediction of kidney transplantation outcome is still not accurate even with the enhancements in acute rejection results. Machine learning methods offer more ways to address the kidney transplantation prediction problem than other methods do. One AI approach introduced a new feature selection method that combines statistical methods with classification procedures of data mining techniques to predict the probability of graft survival after kidney transplantation. Experimental results showed that the newly proposed feature selection method performs better than other techniques [317].
• Urinary tract infections (UTI): This is one of the most common types of infection and accounts for around 8.1 million visits to a doctor every year. Women have a lifetime risk of over 50% of
developing a urinary tract infection (UTI). Common symptoms include a strong, frequent urge to urinate and a painful and burning sensation when urinating. UTIs are usually diagnosed based on symptoms and testing of a urine sample, and can usually be cured with 2–3 days of treatment [318]. Urinary tract infections (UTIs) are bacterial infections of any part of the urinary system. Infections in the bladder and urethra are the most common. They are easily treatable and rarely lead to more health problems. However, if left untreated, these infections can spread to the kidneys and cause kidney failure. A model was developed to identify children at highest risk for a recurrent urinary tract infection plus vesicoureteral reflux, to allow for a targeted voiding cystourethrogram while children at low risk could be observed. The main outcome was defined as recurrent urinary tract infection associated with vesicoureteral reflux. The model predicted recurrent urinary tract infection associated with vesicoureteral reflux with an AUC of 0.761 (95% CI 0.714–0.808) in the testing set. The predictive model using a novel machine-learning algorithm provided promising performance to facilitate individualized treatment of children with an initial urinary tract infection and identify those most likely to benefit from voiding cystourethrogram after the initial urinary tract infection [319].
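The CKD study described above [311] follows a two-step pattern that recurs throughout this chapter: learn a compressed representation of the clinical features with an autoencoder, then attach a softmax classifier to the learned encoding. The sketch below illustrates that pattern in Keras under stated assumptions; the 24-feature input, the layer sizes, and the synthetic data are placeholders rather than the published model or dataset.

```python
# Minimal sketch: autoencoder feature extraction + softmax classifier for CKD,
# in the spirit of the stacked-autoencoder study above [311].
# The 24-feature input and synthetic data are placeholders, not the study's dataset.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 24)).astype("float32")   # e.g., blood pressure, creatinine, ...
y = rng.integers(0, 2, size=400)                   # 1 = CKD, 0 = not CKD (placeholder labels)

# 1) Train an autoencoder to learn a compressed representation of the inputs.
inputs = keras.Input(shape=(24,))
encoded = layers.Dense(12, activation="relu")(inputs)
encoded = layers.Dense(6, activation="relu")(encoded)
decoded = layers.Dense(12, activation="relu")(encoded)
decoded = layers.Dense(24, activation="linear")(decoded)
autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=20, batch_size=32, verbose=0)

# 2) Reuse the trained encoder and attach a softmax head for classification.
encoder = keras.Model(inputs, encoded)
clf_head = layers.Dense(2, activation="softmax")(encoder.output)
classifier = keras.Model(inputs, clf_head)
classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
classifier.fit(X, y, epochs=20, batch_size=32, verbose=0)
```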

7.12 Respiratory (pulmonary) system
Somewhat similar to the previous pairings of the digestive and excretory system and the renal and urinary system, the respiratory system functions intimately in tandem with the circulatory (vascular) system. In fact, not unlike the paired systems, the respiratory system has an excretory function relative to the vascular system, which is the elimination of carbon dioxide from the circulating blood. Notwithstanding these similarities, the respiratory system is generally considered a separate body system. Of course, throughout this respiration process, additional body systems including the nervous system, lymphatic system, and immune system are actively involved in the process as well. The inclusive organs of the respiratory system and their functions (breathing) follow this sequence. First, oxygenated air is taken in (inhaled) through the nose and mouth. The sinuses regulate the temperature and humidity of the air, which passes into the nasal cavity (from the nose) and the oral cavity (from the mouth). The pharynx (throat) collects incoming air and passes it down to the trachea (windpipe), which divides into the 2 main bronchi (tubes), 1 for each lung. The bronchi, in turn, subdivide further into bronchioles. The right lung is divided into 3 lobes, or sections, and the left lung is divided into 2 lobes. Inside the lungs, the bronchial tubes are lined with cilia (like very small hairs) that have a wave-like motion. This motion carries mucus that catches and holds the dust, germs, and other foreign matter that invades the lungs. Your lungs get rid of the mucus through coughing. The diaphragm is the strong wall of muscle between the chest and abdominal cavities. Its downward contraction movement creates suction to draw in air and expand the lungs.
Within the lung tissue, alveoli (air sacs) collect air from the bronchioles. It is at this point, at the walls of the alveoli, that the respiratory and circulatory system meet. The pulmonary arteries carrying deoxygenated blood (from the venous circulatory system) emanating from the right ventricle of the heart travel to the lungs (one artery to each lung). In the lung, the arteries bifurcate to create embedded capillary networks in the walls of the more than 600 million alveolar sacs. Within these capillary beds, carbon dioxide gas from the blood is transferred into the alveoli sacs and fresh oxygen from within the sacs is transferred to the capillaries. In the respiration system’s final interchange with the circulatory system, the oxygenated blood is transferred from the capillary network to the pulmonary veins (trivia: the only veins in the body that carry oxygenated blood) and back to the left atrium of the heart (and through the mitral valve into the left ventricle and through the aortic valve to the body). At this point, the diaphragm relaxes and the lungs rise upward (sliding on their pleural lining) creating an upward pressure causing the air in the alveoli sacs to be expelled (exhaling) [320].

7.12.1 Respiratory system diseases and disorders and associated AI applications
There are 47 major pulmonary disease categories [321], which can be summarized in 4 forms [322]:
1. Airway diseases: Narrowing or blockage of the tubes (airways) within the lungs. Symptoms are reported as wheezing, coughing (especially early in the morning or at night), chest tightness and shortness of breath. Airway diseases include (but are not limited to) asthma, bronchitis, emphysema, and bronchiectasis.
2. Lung (interstitial) tissue diseases: Abnormalities of the lung (parenchymal) structure. Causes include scarring or inflammation of the tissue, making it impossible for the lungs to expand fully (restrictive lung disease) to take in oxygen and release carbon dioxide. Etiologies include (but are not limited to) infection, toxins, and autoimmune diseases. Examples of interstitial lung diseases include pneumonia, tuberculosis, influenza (flu) and SARS-CoV-2. Symptoms include tightness in the chest and the inability to breathe deeply.
3. Lung circulation diseases: Disorders of the pulmonary blood vessels. Causes include clotting, scarring, or inflammation of the blood vessels. This affects the ability of the lungs to take up oxygen and release carbon dioxide. These diseases may also affect heart function, as in pulmonary veno-occlusive disease (PVOD). Other lung circulation disorders include (but are not limited to) pulmonary emboli and pulmonary hypertension. Symptoms of lung circulation disease include shortness of breath on exertion, cough, chest pain, and sweating.
4. Lung cancers: Usually grouped into 2 main types, small cell and non-small cell (the more common form). These types of lung cancer grow differently and are treated differently. Non-small cell lung cancer is more common (80% to 85% of all lung cancers) than small
cell lung cancer. The main subtypes of NSCLC are adenocarcinoma, squamous cell carcinoma, and large cell carcinoma. Small cell lung cancer, though less common, is more aggressive because it grows more quickly than NSCLC and is often diagnosed in advanced (metastatic) stages. Smoking is the leading cause of all lung cancers, although lung cancer also develops in non-smokers. These cancers are frequently associated with metastasis (from the lung to body organs and body to lungs).
Lung disorders are fairly common and often benign (e.g., upper respiratory infections [URIs], etc.), yet some, though relatively common, are life-threatening. I reference lung cancers, which are the leading cause of cancer deaths in the U.S. (see Table 7-5) at an estimated 23.5%, or 142,670 deaths, in 2019. That is 2.7 times greater than the second leading cause of cancer deaths (colon cancer) at 8.4% [323]. Sadly, the estimated number of new cases in 2019 is 228,150 which, when taken with 142,670 projected deaths, represents a 62.5% death rate for this cancer. The 4 lung disease categories identified above (airway, interstitial, circulatory and cancer) include all major lung disorders. Those specific disorders and diseases mentioned for these respective categories are among the most common, and each will be described further below. (Discussion of SARS-CoV-2 will be reserved for Chapter 8). After those 4 descriptions, I will identify and review 3 current AI-related programs and developing research for each disease. And one more time, the AI applications included herein represent only a fraction of the total body of AI programs and research in lung disorders.
1. Airway diseases:
A. Asthma: A chronic, long-term condition that intermittently inflames and swells the airways. It affects people of all ages and often starts during childhood. There is no cure. Symptoms range from mild to severe and may happen rarely or every day. A severe episode, called an asthma attack, is a life-threatening emergency. The goal of management is to achieve and maintain control with an action plan that may include monitoring, avoiding triggers, and using medicines [324]. AI applications and research:
1. Routine asthma medication is delivered through inhalers, which involve several steps in their use. Up to 50% of patients are unsuccessful in taking the medication properly, if at all. New “smart inhalers” now have built-in AI that provides patients with guidance, reminders, and feedback on how to improve the quality of their inhalation technique. Studies have shown a 58% improvement in medication adherence [325].
2. The use of deep learning to model combinations of symptoms, physical signs, and objective tests, such as lung function tests and the bronchial challenge test, was studied to assess whether it could improve model performance in predicting the initial diagnosis of adult asthma when compared to the conventional machine learning diagnostic method. The accuracy of the DNN model increased to 0.98 and was significantly higher than the 0.82 accuracy of the support vector machine
(SVM) and the 0.94 accuracy of the logistic analysis. The deep learning models based on symptoms, physical signs, and objective tests appear to improve the performance for diagnosing adult asthma [326].
3. “Big healthcare data” was used to link discovered “phenotypes” to specific mechanisms and asthma clinical presentations. The key question of which component of these exposures can be translated into interventions requires confirmation. Increasing mechanistic evidence is demonstrating that shaping the microbiome in early life may modulate immune function to confer protection. The hidden structures within “big data” assets, together with the medical professionals, epidemiologists, basic scientists, and geneticists who provide critical clinical and mechanistic insights into the mechanisms underpinning the architecture of the heterogeneity, are keys to delivering mechanism-based stratified treatments and prevention [327].
B. Bronchitis: Similar to the pathology of asthma, bronchitis (sometimes referred to as “a chest cold”) is an inflammation and swelling of the bronchial tubes. The difference is that bronchitis is caused by an inhaled virus or bacteria, often after an upper respiratory infection (URI). It can be acute or chronic (chronic obstructive pulmonary disease, or COPD, described below), with acute bronchitis usually clearing up within days to weeks while chronic bronchitis is persistent and never completely goes away. Symptoms for both include a cough, chest discomfort, fever, and aching. Treatment includes fluids, rest, and cough and cold medications. Antibiotics are not recommended. AI applications and research:
1. A study using data-driven approaches to mine medical databases for novel insight demonstrated the use of artificial intelligence-based methods such as Bayesian networks to open up opportunities for the creation of new knowledge in the management of chronic conditions. Validation will lead to advancement in the clinical treatment of asthma and bronchitis, thereby improving patient outcomes and leading to long-term cost savings. In summary, this study demonstrates that the application of advanced AI methods in healthcare has the potential to enhance the quality of care by discovering non-obvious, clinically relevant relationships and enabling timely care intervention [328].
2. Interpretability is an elusive but highly sought-after characteristic of modern machine learning methods. Using the differential analysis of fever and chills in bronchitis versus pneumonia as an example of where human explanations differ from current machine explanations, researchers distilled these differences into a list of desiderata and formalized them into a framework via the notion of weight of evidence from information theory; the resulting diagnostic framework can show intuitive and comprehensible explanations [329].
3. Radiology images are often used to diagnose pneumonia and distinguish the condition from other lung conditions, such as bronchitis. There are often
difficulties in identifying pneumonia if the patient has pre-existing lung conditions, such as malignancies or cystic fibrosis. Also, subtle types of pneumonia can easily be overlooked and lead to unnecessary CT scans. AI algorithms could read x-rays and other images for evidence of pneumonia or pneumothorax, then alert providers to allow for speedier treatment. AI can also help to identify high-risk patients when pneumothorax is suspected, especially when radiologists are not present [330].
C. Emphysema: This is a type of COPD (chronic obstructive pulmonary disease, described below) involving damage to the air sacs (alveoli) in the lungs. As a result, your body does not get the oxygen it needs. Emphysema makes it hard to catch your breath and may produce a chronic cough and trouble breathing during exercise. The most common cause is cigarette smoking, and treatment, besides cessation of smoking, includes inhalers, oxygen, medications, and sometimes surgery [331].
1. New machine learning methodologies have been developed for deployment onto mobile devices to help the early diagnosis of some life-threatening conditions, such as emphysema, using X-ray images. Using the latest AI technologies, a smartphone app built around an artificial neural network has been developed to assist physicians in their diagnosis [332].
2. The severity of emphysema was evaluated quantitatively by using the percentage of lung volume occupied by low-attenuation areas. The median duration of follow-up was 7.4 years. An AI-supported regression analysis of the relationship between imaging patterns and survival was used based on the Cox proportional hazards model, with adjustment for age, race, sex, height, weight, pack-years of cigarette smoking, current smoking status, and educational level. Compared with subjects who did not have visible emphysema, mortality was greater in those with any grade of emphysema beyond trace. The visual presence and severity of emphysema are associated with significantly increased mortality risk, independent of the quantitative severity of emphysema [333].
3. A multicenter study found that 4D flow MRI provided a promising way of measuring blood flow in the superior and inferior cava veins and right heart, which may provide further insight into physiologic and pathologic blood flow patterns in individuals with COPD and emphysema. The study showed that blood flow was not greatly altered in COPD patients, but emphysema was associated with higher right heart regurgitant flow in the superior and inferior cava veins and tricuspid valve. This suggests that pulmonary vascular damage may not be the sole cause of reduced cardiac output in emphysema [334].
2. Lung (interstitial) tissue diseases:
A. Tuberculosis: (See also “15. Infectious Diseases” below.) This infectious disease is among the top 10 causes of death worldwide and the leading cause of death from a single infectious agent (above HIV/AIDS). It is caused by bacteria (Mycobacterium tuberculosis) that most often affect the lungs. Symptoms include a
cough with sputum and blood at times, chest pains, weakness, weight loss, fever and night sweats. When a person with TB coughs, sneezes or spits, they propel the TB germs into the air. A person needs to inhale only a few of these germs to become infected. About one-quarter of the world’s population has latent TB, which means people have been infected by TB bacteria but are not (yet) ill with the disease and cannot transmit the disease. TB is curable and preventable. Treatment includes a standard 6-month course of 4 antimicrobial drugs [335].
1. Based on TB patient data, an ANN accurately predicted Mycobacterium tuberculosis (MTB) positive or negative status with an overall accuracy of over 94%. Further, the accuracy on the test and validation sets was found to exceed 93%. This high correlation (>94% accuracy) with the experimental result of MTB detection may help to choose optimal therapeutic regimens, especially in TB high burden countries [336].
2. A systematic review was conducted for the diagnostic accuracy of artificial intelligence-based software for the identification of radiologic abnormalities (computer-aided detection, or CAD) compatible with pulmonary tuberculosis on chest x-rays (CXRs). Four databases were searched for articles published between January 2005 and February 2019. The study concluded that CAD programs are promising, but the majority of work thus far has been on development rather than clinical evaluation [337].
3. Most deep neural networks applied to the task of tuberculosis diagnosis have been adapted from natural image classification. These models have a large number of parameters as well as high hardware requirements, which makes them prone to overfitting and harder to deploy in mobile settings. A simple convolutional neural network was optimized for the problem; it is faster and more efficient than previous models while preserving their accuracy. This neural network architecture optimized for tuberculosis diagnosis achieved good accuracy while significantly reducing the computational, memory and power requirements [338].
B. Pneumonia: (See also “15. Infectious Diseases” below.) Pneumonia is a bacterial, viral or fungal infection in one or both lungs. The infection causes inflammation in the air sacs (alveoli), which fill with fluid or pus, making it difficult to breathe. Mild to life-threatening symptoms include coughing, sweating, fever, chills, nausea, and vomiting. The disease is contagious and contracted through inhalation of airborne droplets and contact with contaminated surfaces. Treatment is antibiotics for bacterial forms and antifungals for fungal disease. Viral pneumonias usually clear on their own. In severe cases, in-hospital IV medications, respiratory therapy, and oxygen therapy may be required.
1. CheXpert, developed by researchers at Stanford University, is an automated chest X-ray interpretation model that uses AI to analyze X-ray images taken in emergency departments. A study found that the AI system could accurately detect pneumonia in chest X-rays in about 10 seconds. The quick diagnosis allowed physicians to accurately confirm and start a treatment plan for pneumonia more quickly than with current clinical practice [339].
2. A “road map” has been developed describing how to convert research-based AI into improved medical imaging on patients. Among other things, the authors urged more collaboration across disciplines in building and testing AI algorithms, and intensive validation of algorithms before they reach patients. The FDA has already issued approvals for several algorithms. An important goal of the resulting road map is to grow an ecosystem, facilitated by professional societies, industry, and government agencies, that will allow robust collaborations between practicing clinicians and AI researchers to advance foundational and translational research relevant to medical imaging [340].
3. A study was conducted to investigate the epidemiology, causative agents and outcomes of severe community-acquired pneumonia (CAP) in Hong Kong, and to identify the risk factors for mortality. Multivariate regression analysis was used to assess the impact of independent variables on hospital mortality. A total of 390 patients were admitted to the ICU for severe CAP. Causative pathogens were identified in about 60% of patients. Streptococcus pneumoniae was the most common bacterial agent, and most viral infections were caused by influenza A. Atypical bacteria accounted for only a minority of CAP [341].
C. Influenza (flu): (See also “15. Infectious Diseases” below.) Flu is a contagious respiratory illness caused by influenza viruses (versus the coronavirus responsible for COVID-19) that infect the nose, throat, and sometimes the lungs. It can cause mild to severe illness, and at times can lead to death. The best way to prevent flu is by getting a flu vaccine each year. As opposed to the common cold, flu comes on suddenly and produces mild to severe symptoms (life-threatening in the elderly and immune-compromised), including fever, cough, sore throat, runny nose, and aches and pains. It is contagious through airborne droplets and contaminated surfaces. Antiviral drugs taken early can reduce symptoms and shorten the course of the disease, but the best treatment is prevention through vaccination [342]. AI may soon be used to identify biomarkers of protective influenza immunity that will not only help us determine how protective immunity occurs but also allow us to develop new ways to boost this immunity. The Delphi-stat model uses nonmechanistic statistical machine learning and data from previous influenza seasons to make predictions about future influenza seasons. Results highlight the value we can gain from AI in predicting influenza accurately in the future, but also show us that we should never underestimate our human intelligence [343]. Researchers have been taking data (like mRNA variations and concentrations of metabolites and proteins) from hundreds of patient samples exposed to the flu vaccine, then running that information through a sophisticated AI algorithm. These research teams hope to better understand the biological processes that trigger an effective immune response to the flu. This will pave the way for personalized flu prevention, with people receiving customized vaccines designed to induce a better response in their particular immune system [344]. Certainly, what is being learned from the coronavirus pandemic (COVID-19, see Chapter 8)
will add to our ability to predict and manage influenza more effectively. Three flu prediction models (models 1–3), based on Twitter and US Centers for Disease Control and Prevention (CDC) Influenza-Like Illness (ILI) data, have been proposed to verify the factors that affect the spread of the flu. In this work, an Improved Particle Swarm Optimization algorithm to optimize the parameters of Support Vector Regression (IPSO-SVR) was proposed. The results show that the IPSO-SVR method (model 3) demonstrates excellent performance in real-time prediction of ILIs, and further highlights the benefits of using real-time Twitter data, thus providing an effective means for the prevention and control of flu [345]. These AI programs used to model the spread of influenza are also being used in the analysis and modeling of COVID-19.
3. Lung circulation diseases:
A. Pulmonary emboli: A sudden blockage of a major artery in a lung, usually due to a blood clot from another part of the body that breaks off and travels in the bloodstream into the lung, where it blocks blood flow and prevents the blood from taking up oxygen. This is the third most common cardiovascular disease after a heart attack and stroke. Symptoms include shortness of breath, chest pain, cough, leg pain or swelling or both (usually in the calf), clammy or discolored skin (cyanosis), and fever [108].
1. In a multi-institutional diagnostic study of 3214 patients, a machine learning model was designed to achieve an accurate patient-specific risk score for pulmonary embolism diagnosis. The model was successfully evaluated in both multi-institutional inpatient and outpatient settings. The machine learning model, PERFORM, may consider multitudes of applicable patient-specific risk factors and dependencies to arrive at a PE risk prediction that generalizes to new population distributions. This approach might be used as an automated clinical decision-support tool for patients referred for CT PE imaging to improve CT use [346].
2. An AI software program has been developed to identify and prioritize pulmonary embolism (PE) in computed tomography pulmonary angiograms (CTPAs). By flagging obstructions in blood flow to the lungs, the PE AI tool is designed to prioritize diagnostic workstation worklists and help radiologists detect critical conditions more rapidly. The tool works automatically when images arrive for interpretation at a diagnostic workstation, independently prioritizing cases that require immediate attention [347].
3. Due to inherent variabilities in how pulmonary emboli (PE) manifest and the cumbersome nature of the manual diagnosis, there is growing interest in leveraging AI tools for detecting PE. A 2-stage detection pipeline was built that is accurate, computationally efficient, robust to variations in PE types and kernels used for CT reconstruction, and, most importantly, does not require dense annotations. Using a large, real-world dataset characterized by complex PE types and patients from multiple hospitals, this study provides guidelines for designing highly generalizable pipelines [348].
B. Pulmonary hypertension (PH): Pulmonary hypertension (PH) means high blood pressure in the lungs. The hypertension is specifically in the arteries to the lungs, which become stiff, damaged or narrowed, and the right side of the heart must work harder to pump blood through the lungs. Symptoms are similar to other airway disorders, with shortness of breath and fatigue most prominent. There are 2 main kinds of PH, one hereditary and the other related to a heart or lung disease. There is no cure, and treatment to control symptoms includes treating the heart or lung disease, medicines, oxygen, and sometimes lung transplantation [349].
1. In December 2018, the FDA granted a breakthrough device designation for AI pattern recognition software being developed that can identify signs of chronic thromboembolic pulmonary hypertension (CTEPH), a rare form of hypertension, in CT pulmonary angiography (CTPA) scans. The AI software uses deep learning technology and processes image findings of cardiovascular, lung perfusion and pulmonary vessel analyses in combination with a patient’s history of pulmonary embolism, according to the release [350].
2. A study was conducted to explore the predictive capabilities of diagnostic discrimination in a PH patient population by leveraging statistical and AI techniques. CNN modeling was able to discriminate between pre-capillary PH and post-capillary PH with 83% accuracy and an AUC of 0.86. This study demonstrates that machine learning algorithms are feasible for distinguishing pre- and post-capillary PH non-invasively [351].
3. Patients with idiopathic pulmonary arterial hypertension (iPAH) exhibit patterns of health-seeking behavior before diagnosis that will allow the development of earlier identification tools. The Sheffield Pulmonary Hypertension IndeX (SPHInX) project aims to develop a predictive algorithm based on routinely collected healthcare resource utilization (HCRU) data. Patients with probable iPAH have high levels of health care utilization for several years before diagnosis. AI models can be used to develop the SPHInX algorithm to screen for undiagnosed iPAH in the general population [352].
C. Pulmonary veno-occlusive disease (PVOD): This circulatory disease is caused by blockage (occlusion) of the blood vessels (the pulmonary venules) that carry oxygen-rich (oxygenated) blood from the lungs to the left atrium of the heart. The blockage is caused by fibrous tissue buildup in the smaller vessels and produces a rise in pulmonary pressure (pulmonary hypertension) and less oxygenated blood reaching the heart and other bodily systems. This produces symptoms including shortness of breath, fatigue on exertion, dizziness, a bluish tint to the skin (cyanosis) and difficulty breathing when lying down. The lungs may also accumulate fluid (pulmonary edema). Causes of the fibrosis include genetic factors and possible viral and toxic etiologies [353].
1. Automatic separation of arteries and veins in CT images is becoming of great interest, as it may help physicians with the difficult pulmonary differential diagnosis between arterial and venous pathological conditions. An algorithm was developed to follow 3 main steps: first, a scale-space particle segmentation to isolate vessels; then a 3-D convolutional neural network (CNN) to obtain the first classification of vessels; finally, graph-cuts optimization to refine the results. The proposed algorithm achieves an overall accuracy of 94%, which is higher than the accuracy obtained using other CNN architectures and RF. The proposed method outperforms state-of-the-art methods, paving the way for future use of 3-D CNN for artery/vein classification in CT images [132].
2. A study was conducted to assess if Generative Visual Rationales (predicting optimal endpoints) can identify imaging features learned by a model trained to predict congestive heart failure from chest radiographs, allowing radiologists to better identify faults and biases. The model was used to visualize how a radiograph with a high estimated B-type natriuretic peptide (BNP) would look without the disease (a “healthy” radiograph). Results indicated that features of congestive heart failure on chest radiographs learned by neural networks can be identified using Generative Visual Rationales, enabling detection of bias and overfitted models [354].
3. Vasculature networks are responsible for providing reliable blood perfusion to tissues in health or disease conditions. The burden of image processing tasks such as vessel segmentation and centerline extraction impedes research progress and has prevented the systematic comparison of 3D vascular architecture across large experimental populations. A proposed pipeline is presented in the contexts of different biomedical and biological research questions, including the stalling capillary phenomenon in disease states such as PVOD [355].
4. Lung cancers: As mentioned in the earlier section on cancer in this chapter (page 314), along with a brief explanation of the disease, lung cancers were identified as the leading cause of cancer deaths and the second leading cause of all deaths in America [86]. This has led to a high degree of interest and activity in the worldwide medical and scientific community in cancer research. Considering National Institutes of Health (NIH) funding as a measure of research activity in a disease category, cancer research (at 5.6 billion dollars in 2019) leads all other diseases [356]. So too has cancer been a high priority in program and algorithm development in AI. And because of the high prevalence and mortality rates in lung cancers, that area of cancer has received more attention than most other forms. And once again, and perhaps more so as regards AI research articles in lung cancer, the 3 literature reviews presented below for each of the 3 major categories of lung cancers are a very, very small sample of the full body of related literature. The 9 reviews present the most recent, relevant articles in the literature.
A. Non-small cell lung cancer (NSCLC):
1. A study was conducted to explore the prognostic and predictive values of a novel quantitative feature set describing intra-tumor heterogeneity in patients with non-small cell lung cancer treated with concurrent and sequential chemoradiotherapy. To validate the prognostic value of the proposed method, radiomics analysis was performed and a combination of the proposed novel feature set and the classic radiomic features was evaluated. A feature selection algorithm was utilized to identify the optimal features, and a linear support vector machine was trained for the task of overall survival prediction in terms of area under the receiver operating characteristic curve (AUROC). The results suggest that the prognostic power of the proposed features has promising potential for early survival prediction [357].
2. Despite the success of immunotherapy in non-small cell lung cancer (NSCLC) treatment, only a subset of patients respond, indicating the need for predictive biomarkers. It has been hypothesized that AI algorithms can automatically quantify radiographic characteristics that are related to and may, therefore, act as non-invasive radiomic biomarkers for immunotherapy response. Among 203 patients with NSCLC, an AI-based characterization of each lesion on the pretreatment contrast-enhanced CT imaging data was used to develop and validate a noninvasive machine learning biomarker capable of distinguishing between immunotherapy-responding and non-responding lesions. Highly significant associations were found with pathways involved in mitosis, indicating a relationship between increased proliferative potential and preferential response to immunotherapy. These results indicate that radiographic characteristics of lesions on standard-of-care imaging may function as non-invasive biomarkers for response to immunotherapy, and may show utility for improved patient stratification in both neoadjuvant and palliative settings [358].
3. A study was conducted to evaluate deep learning networks for predicting clinical outcomes by analyzing time-series CT images of patients with locally advanced non-small cell lung cancer (NSCLC). Deep learning models using time-series scans were significantly predictive of survival and cancer-specific outcomes. This demonstrates that deep learning can integrate imaging scans at multiple timepoints to improve clinical outcome predictions. AI-based noninvasive radiomics biomarkers can have a significant impact on the clinic given their low cost and minimal requirements for human input [359].
B. Small cell lung cancer (SCLC):
1. A retrospective study was conducted to assess the consistency between the lung cancer treatment recommendations made for the same patient by IBM’s Watson for Oncology (WFO) and by a multidisciplinary team of experts. WFO did not support 18.1% (33/182) of recommendations among all cases. Of the 149 supported cases, 65.8% (98/149) received WFO recommendations that were consistent with the recommendations of the multidisciplinary team. Logistic regression analysis showed that pathological type and staging had
significant effects on consistency. WFO cannot currently replace oncologists. It can improve the efficiency of clinical work by assisting doctors, but it needs to learn the regional characteristics of patients to improve its assistive ability [360].
2. In SCLC, the brain is a common site of distant metastasis. The standard of care for extensive-stage (ES) SCLC is platinum-based chemotherapy. Meta-analyses revealed that prophylactic cranial irradiation (PCI) decreased the incidence of brain metastases, which translated into an improvement in patient survival after achieving a complete response to initial chemotherapy or chemoradiotherapy. This analysis included 15% of ES SCLC cases. However, the benefit of PCI for ES SCLC was borderline in the subset analysis of a previous study. Therefore, the indication of PCI for ES SCLC has been controversial [361].
3. Biomedical image processing is the latest emerging tool in medical research used for the early detection of cancers. AI can be used in the medical field to diagnose diseases at an early stage. Computed tomography (CT) scans of the lungs of patients from the Lung Image Database Consortium (LIDC) are used as input data for image processing. After image processing, the input images become more efficient and refined. These are input to a convolutional neural network (CNN) to predict whether a lung image is cancerous (malignant) or non-cancerous (benign); a minimal illustrative sketch of such a CNN classifier appears at the end of this lung cancer discussion. Deep learning is a newer branch of AI research that will help achieve better performance in CNN-based systems. The proposed system will also take into account the processing power and time delay of the cancer detection process for efficiency [362].
C. Metastatic carcinoma:
1. Applications of deep learning technology to cancer imaging can assist pathologists in the detection and classification of cancer in the early stages of its development to allow patients to have appropriate treatments that can increase their survival. Statistical analyses and other analytical approaches, based on data from ScienceDirect (a source for scientific research), suggest that the sharp increase in studies of deep learning technology in cancer imaging is driven by the high mortality rates of some types of cancer (e.g., lung and breast) and the need to solve consequential problems of more accurate detection and characterization of cancer types in order to apply efficient anti-cancer therapies. This new technology can generate a shift of technological paradigm for the diagnostic assessment of any cancer type and disease and can also generate socioeconomic benefits for poor regions [363].
2. Pulmonary metastases of head and neck squamous cell carcinoma (HNSC) are currently difficult to distinguish from primary lung squamous cell carcinomas (LUSCs). Researchers have developed a machine-learning algorithm that exploits the differential DNA methylation observed in primary LUSC and metastasized HNSC tumors in the lung. Their method was able to discriminate between these 2 tumor types with high accuracy across multiple cohorts, suggesting its potential as a clinical diagnostic tool [364].
3. The Oncology Expert Advisor (OEA) was designed to simulate peer-to-peer consultation with 3 core functions: patient history summarization, treatment
options recommendation, and management advisory. Machine-learning algorithms were trained to construct a dynamic summary of patients’ cancer history and to suggest approved therapy or investigative trial options. Results demonstrated the technical feasibility of an AI-powered application to construct longitudinal patient profiles in context and to suggest evidence-based treatment and trial options. The study experience highlighted the necessity of collaboration across clinical and AI domains, and the requirement of clinical expertise throughout the process, from design to training to testing [365].
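As a concrete illustration of the image-classification approach used in several of the lung cancer studies above (for example, the LIDC-based CNN in [362]), the sketch below shows a small Keras CNN that labels preprocessed CT patches as benign or malignant. The 64 × 64 patch size, the network depth, and the random arrays are assumptions for illustration only, not the published architecture or data.

```python
# Minimal sketch: a small CNN that classifies lung CT patches as benign vs malignant,
# in the spirit of the LIDC-based CNN study above [362].
# The 64x64 patch size and random arrays are placeholders, not the LIDC data.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(2)
patches = rng.random((200, 64, 64, 1)).astype("float32")  # preprocessed CT patches
labels = rng.integers(0, 2, size=200)                     # 1 = malignant (placeholder labels)

model = keras.Sequential([
    keras.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),   # predicted probability of malignancy
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[keras.metrics.AUC()])
model.fit(patches, labels, epochs=5, batch_size=32, verbose=0)
```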

7.13 Reproductive systems
To state the obvious, there are 2 human reproductive systems, male and female. Both systems are equally complex, as is the process of human reproduction itself. I guess the best way to address these multiple topics of the reproductive system is to briefly review the normal structures, functions, and selected (most common) disorders and diseases of the respective female and male systems along with examples of recent AI applications related to each system. Then, I will briefly describe the reproductive process itself and its abnormalities. I will also present 2 recent AI applications related to each system.

7.13.1 Female reproductive system [366]
Normal structures (anatomy) and function: The female reproductive system includes 6 main organs, each with a distinct function in reproduction:
• Ovaries are a pair of small glands that produce the female sex hormones estrogen and progesterone, as well as ova (commonly called “eggs”). Each month during ovulation, a mature ovum is released. The ovum travels from the ovary to the fallopian tube, where it may be fertilized before reaching the uterus.
• Fallopian tubes are a pair of muscular tubes that end in a funnel-shaped structure called the infundibulum, which is covered with small finger-like projections called fimbriae. The fimbriae pick up and carry ova from the infundibulum to the uterus.
• The uterus (also known as the womb) is a hollow, muscular organ connected to the 2 fallopian tubes on its superior end and the vagina (via the cervix) on its inferior end. The endometrium (inner lining of the uterus) provides support to the embryo during early development. The visceral muscles of the uterus contract during childbirth to push the fetus through the birth canal.
• The vagina, located below the uterus, is an elastic, muscular tube that connects the cervix of the uterus to the exterior of the body. It functions as the receptacle for the penis during sexual intercourse and carries sperm to the uterus and fallopian tubes. It also serves as the birth canal by stretching to allow the delivery of the fetus during childbirth. During menstruation, the menstrual flow exits the body via the vagina.
• The vulva is the collective name for the external female genitalia. It surrounds the external ends of the urethral opening and the vagina and includes the mons pubis, labia majora, labia minora, and clitoris. On the superior end of the labia minora is a small mass of erectile tissue known as the clitoris that contains many nerve endings for sensing sexual pleasure.
• Breasts are specialized organs of the female body that contain mammary glands, milk ducts, and adipose tissue. In the center of each breast is a highly pigmented nipple that releases milk when stimulated. The areola, a thickened, highly pigmented band of skin that surrounds the nipple, protects the underlying tissues during breastfeeding. The mammary glands (15–20 clusters) are a special type of sudoriferous glands that become active during pregnancy and remain active until milk is no longer needed. The milk passes through milk ducts on its way to the nipple, where it exits the body.

7.13.2 Female reproductive cycle
The female reproductive cycle is the process of preparing the uterus to receive a fertilized ovum to begin pregnancy. The entire reproductive cycle takes about 28 days on average.
• Oogenesis and ovulation: Follicle-stimulating hormone (FSH) and luteinizing hormone (LH) stimulate the ovaries to produce an oocyte, which matures and is released as an ovum (ovulation) about 14 days into the reproductive cycle. Usually, only 1 ovum per cycle is released.
• Fertilization: It takes about a week for the mature ovum to travel to the uterus. If sperm can reach and penetrate the ovum, it becomes a fertilized zygote containing a full complement of DNA. After 2 weeks of rapid cell division known as the germinal period, the zygote forms an embryo that implants itself into the uterine wall and develops there during pregnancy.
• Menstruation (period): In the uterus, the endometrium grows and develops in preparation for the embryo. If the ovum is not fertilized in time or if it fails to implant into the endometrium, the arteries of the uterus constrict to cut off blood flow to the endometrium, causing cell death in the endometrium and shedding of tissue in a process known as menstruation. In a normal menstrual cycle, this shedding begins around day 28 and continues into the first few days of the new reproductive cycle.
• Menopause: This is the time in a woman’s life when her periods stop. Menopause happens because the ovaries stop producing the hormones estrogen and progesterone. It usually occurs naturally, most often after age 45. A woman has reached menopause when she has not had a period for 1 year.
• Pregnancy: The fertilized embryo will implant itself into the endometrium and begin to form an amniotic cavity, umbilical cord, and placenta. For the first 8 weeks, the embryo will develop almost all of the tissues and organs present in the adult before entering the
fetal period of development during weeks 9 through 38, where it grows larger and more complex until it is ready to be born.
• Lactation: The production of milk begins before birth under the control of the hormone prolactin. Prolactin is produced in response to the suckling of an infant on the nipple, so milk is produced as long as active breastfeeding occurs. The release of milk by the nipples is known as the “milk-letdown reflex,” controlled by the hormone oxytocin. Oxytocin is also produced in response to infant suckling so that milk is only released when an infant is actively feeding.
The human reproductive system experiences functional (non-disease) disorders and pathological disease conditions. The female reproductive system has a combined number of over 170 gynecological disorders and diseases [367]. Functional disorders of the female reproductive system with recent, related AI programs:
• Ectopic pregnancy - When a fertilized ovum is implanted in any tissue other than the uterine wall: A 3-stage classifier was developed to predict the treatment for ectopic pregnancies. Testing was conducted with 4 different algorithms: MLP, SVM, deep learning, and Naïve Bayes. According to the results, it is feasible to develop a clinical decision support system using the algorithms that present a higher precision. This system would help gynecologists to make the most accurate decision about the initial treatment, thus avoiding future complications [368].
• Menstrual abnormalities: a. Amenorrhea: Absence of menstrual bleeding; b. Dysmenorrhea: Menstrual cramping; c. Premature menopause: Primary ovarian insufficiency; d. Premenstrual syndrome (PMS): Premenstrual dysphoric disorder; e. Fibroids: Non-cancerous growths in the uterus; f. Metrorrhagia: Excessive, prolonged or irregular uterine bleeding. A study was performed to investigate the use of classification methods by a machine-learning approach for discriminating uterine activity during the 4 phases of the menstrual cycle. The method was applied to a database (24 measurements) collected in different phases of the menstrual cycle, comprising uterine active and quiescent phases. The support vector machine (SVM) classifier showed the best performance for discrimination between the different menstrual phases. The classification accuracy, sensitivity, and specificity were 90%, 79%, and 93%, respectively [369].
• Infertility: A team, consisting of embryologists, reproductive medicine clinicians, computer scientists, and precision medicine experts, trained an AI algorithm to discriminate between poor and good embryo quality. Investigators used 12,000 photos of human embryos taken precisely 110 hours after fertilization to train an AI algorithm. After training and validation, the algorithm, dubbed Stork, was able to classify the quality of a new set of images with 97% accuracy [370].
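The uterine-activity study above [369] reports accuracy, sensitivity, and specificity for an SVM classifier; the sketch below shows how such a classifier and its metrics can be computed with scikit-learn. For simplicity it treats the problem as a binary active-versus-quiescent discrimination on synthetic features, which is an assumption made for illustration rather than the study's 4-phase design or its 24-measurement database.

```python
# Minimal sketch: an SVM classifier with accuracy / sensitivity / specificity
# reporting, in the spirit of the uterine-activity SVM study above [369].
# Features and labels are synthetic placeholders, not the study's database.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(3)
X = rng.normal(size=(240, 10))       # e.g., features extracted from uterine recordings
y = rng.integers(0, 2, size=240)     # 1 = active phase, 0 = quiescent (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)

# Derive the three reported metrics from the confusion matrix on the test set.
tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
print("accuracy:   ", (tp + tn) / (tp + tn + fp + fn))
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
```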
7.13.2.1 Disease conditions of the female reproductive system with recent, related AI programs
• Gynecological cancers: a. Cervical cancer; b. Ovarian cancer; c. Uterine cancer; d. Vaginal cancer; e. Vulvar cancer; f. Breast cancer. A study was conducted to evaluate satisfaction following robotically-assisted surgery and its impact on short-term and long-term patient-rated pain. Robotic surgery for the treatment of gynecologic cancers resulted in a minimal impact on short- and long-term patient-rated pain. The majority of patients (~90%) did not require the use of opioids and were very satisfied with their surgery [371].
• Reproductive tract infections (RTIs) are infections that affect the reproductive tract: a. Bacterial vaginosis (BV); b. Candida (fungal) vaginitis; c. Cervicitis (usually a sexually transmitted disease, STD); d. Inflammatory vaginitis; e. Pelvic inflammatory disease (PID). Clostridium (Clostridioides) difficile infection (CDI) is a healthcare-associated infection that can lead to serious complications. A study was conducted to explore the utility of a machine learning (ML) approach for patient risk stratification for complications using electronic health record (EHR) data. Using EHR data, accurate stratification of CDI cases according to their risk of developing complications was achieved. Such an approach could be used to guide future clinical studies investigating interventions that could prevent or mitigate complicated CDI [372].
• Fibrocystic disease: the condition in which the breasts feel lumpy. A deep learning AI system was developed to identify suspicious soft-tissue and calcified lesions in digital breast tomosynthesis (DBT) images. A reader study compared the performance of 24 radiologists (13 of whom were breast subspecialists) reading 260 DBT examinations (including 65 cancer cases) both with and without AI. The concurrent use of an accurate DBT AI system was found to improve cancer detection efficacy in a reader study that demonstrated increases in AUC, sensitivity, and specificity and a reduction in recall rate and reading time [373].
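The CDI study above [372] is an example of risk stratification from EHR-derived features. The sketch below shows one plausible way to set up such a model with a gradient-boosted classifier and to convert its predicted probabilities into low, medium, and high risk tiers; the features, the tier cut-offs, and the data are placeholder assumptions, not the study's actual model.

```python
# Minimal sketch: gradient-boosted risk stratification from EHR-style features,
# in the spirit of the CDI complication study above [372].
# Features, tier cut-offs, and data are placeholders, not the study's model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 15))            # e.g., labs, vitals, medications, age
y = (rng.random(1000) < 0.1).astype(int)   # 1 = complicated course (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Convert predicted probabilities into low / medium / high risk tiers that could
# guide monitoring intensity or early-intervention decisions.
prob = model.predict_proba(X_test)[:, 1]
risk_tier = np.digitize(prob, bins=[0.05, 0.20])   # 0 = low, 1 = medium, 2 = high
print(np.bincount(risk_tier, minlength=3))
```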

7.13.3 Male reproductive system [366] Normal structures (anatomy) and function: The male reproductive system includes 10 relevant parts, each with a distinct function in reproduction:


• Scrotum: A sac-like organ made of skin and muscles that houses the testes. It is located inferior to the penis in the pubic region containing 2 testes. When the testes become too warm to support spermatogenesis, the scrotum relaxes to move the testes away from the body’s heat. Conversely, the scrotum contracts to move the testes closer to the body’s core heat when temperatures drop below the ideal range for spermatogenesis. • Testes: Also known as testicles are the male gonads responsible for the production of sperm and testosterone. Each testes is found inside its pouch on 1 side of the scrotum and is connected to the abdomen by a spermatic cord and cremaster muscle. The cremaster muscles contract and relax along with the scrotum to regulate the temperature of the testes. The inside of the testes is divided into small compartments known as lobules, each containing seminiferous tubules lined with epithelial cells. These epithelial cells contain many stem cells that divide and form sperm cells through the process of spermatogenesis. • The epididymis: A sperm storage area that wraps around the superior and posterior edge of the testes made up of several feet of long, thin tubules that are tightly coiled into a small mass. Sperm produced in the testes moves into the epididymis to mature before being passed on through the male reproductive organs. • Spermatic cords and ductus deferens: A pair of spermatic cords connects the testes to the abdominal cavity. They contain the ductus deferens along with nerves, veins, arteries, and lymphatic vessels that support the function of the testes. The ductus deferens, also known as the vas deferens, is a muscular tube that carries sperm superiorly from the epididymis into the abdominal cavity to the ejaculatory duct. The ductus deferens is wider in diameter than the epididymis and uses its internal space to store mature sperm. The smooth muscles of the walls of the ductus deferens move sperm towards the ejaculatory duct through peristalsis. • Seminal vesicles: A pair of lumpy exocrine glands that store and produce some of the liquid portions of semen. The liquid produced contains proteins and mucus and has an alkaline pH to help sperm survive in the acidic environment of the vagina. The liquid also contains fructose to feed sperm cells so that they survive long enough to fertilize the oocyte. • Ejaculatory duct: The ductus deferens pass through the prostate and joins with the urethra at a structure known as the ejaculatory duct which contains the ducts from the seminal vesicles as well. During ejaculation, the ejaculatory duct opens and expels sperm and the secretions from the seminal vesicles into the urethra. • Urethra: Semen passes from the ejaculatory duct to the exterior of the body via the urethra which passes through the prostate and ends at the external urethral orifice located at the tip of the penis. Urine exiting the body from the urinary bladder also passes through the urethra. • The prostate: Walnut-sized exocrine gland that borders the inferior end of the urinary bladder and surrounds the urethra. It produces a large portion of the fluid that makes up semen, milky white and contains enzymes, proteins, and other chemicals to support and protect sperm during ejaculation. The prostate also contains smooth muscle tissue that can constrict to prevent the flow of urine or semen. This organ is particularly susceptible to cancer.


• Cowper’s glands: Also known as the bulbourethral glands, a pair of pea-sized exocrine glands located inferior to the prostate and secrete a thin alkaline fluid into the urethra that lubricates the urethra and neutralizes acid from urine remaining in the urethra after urination. This fluid enters the urethra during sexual arousal before ejaculation to prepare the urethra for the flow of semen. • Penis: The male external sexual organ containing the urethra and the external opening of the urethra. Large pockets of erectile tissue in the penis allow it to fill with blood and become erect causing it to increase in size and become turgid. The function of the penis is to deliver semen into the vagina during sexual intercourse and allow for the excretion of urine through the urethra.

7.13.3.1 Male reproductive process • Spermatogenesis: At puberty, spermatogenesis begins when luteinizing hormone (LH) and follicle-stimulating hormone (FSH) are produced. LH triggers the production of testosterone by the testes while FSH triggers the maturation of germ cells. Testosterone stimulates stem cells in the testes known as spermatogonium to undergo the process of developing into spermatocytes. Each diploid spermatocyte goes through the process of meiosis I and splits into 2 haploid secondary spermatocytes. The secondary spermatocytes go through meiosis II to form 4 haploid spermatid cells. The spermatid cells then go through a process known as spermiogenesis where they grow a flagellum and develop the structures of the sperm head. After spermiogenesis, the cell is finally a sperm cell, or spermatozoa and released into the epididymis. • Fertilization: The process by which a sperm combines with an oocyte, or egg cell, to produce a fertilized zygote. The sperm released during ejaculation must first swim through the vagina and uterus and into the fallopian tubes where they may find an oocyte. Next, the sperm uses enzymes that allow it to penetrate the outer layers of the oocyte. After penetration, the nuclei of these haploid cells fuse to form a diploid cell known as a zygote which begins cell division to form an embryo.

7.13.3.2 Functional disorders of the male reproductive system with recent, related AI programs
• Premature ejaculation: A lack of voluntary control over ejaculation: Machine learning was used as a classification method to assess resting-state brain function in lifelong premature ejaculation (LPE) patients. The study screened 9 features (averaged across every training step during 100 repetitions of 10-fold CV) out of 4005 functional connectivity (FC) features to construct the optimal classifier, which separated LPE patients from healthy controls with an accuracy of 0.85. These FC features are mainly distributed in some areas in the frontal, temporal, and parietal cortex, and limbic system. The results provide novel FC-derived indicators through a strategy of classification research to understand the potential abnormalities of brain function in LPE patients [374] (a sketch of this nested feature-selection-plus-classification pattern follows after this list).


• Impotence: The inability of a male to produce or maintain an erection: A study was conducted to investigate the cerebral structural changes related to venous erectile dysfunction (VED), the relationship of these changes to clinical symptoms and disorder duration, and to distinguish patients with VED from healthy controls using a machine learning classification. Compared to healthy control subjects, VED patients showed significantly decreased cortical volumes in the left postcentral gyrus and precentral gyrus, while only the right middle temporal gyrus showed a significant increase in cortical volume. Machine learning analyses discriminated VED patients from controls with excellent performance [375].
• Enlarged prostate (BPH): A study was performed to determine if supervised machine learning (ML) could be used to classify samples using the immunophenotyping flow cytometry data of myeloid-derived suppressor cells (MDSCs) and lymphoid cells from healthy donor (HD), benign prostatic hyperplasia (BPH)/lower risk prostate cancer (LR-PCa), and higher risk prostate cancer (HR-PCa) subjects. An analytical technique was developed for processing flow cytometry data to serve as inputs for feedforward neural networks (NNs), classifying samples as either HD, BPH, or PCa based upon their levels of MDSCs and lymphocytes. This technique could be used to dramatically reduce the number of unnecessary prostate biopsies performed each year on high-risk patients (high PSA, abnormal DRE, age) as current screening methods result in a large number of false positives [376].
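The premature ejaculation study above selects a small subset of FC features within repeated cross-validation before scoring the classifier. The sketch below is only an illustration of that nested feature-selection-plus-classification pattern, with made-up dimensions and synthetic data rather than anything from the cited work.

```python
# Illustrative only: select a handful of features inside each CV fold (so no test
# information leaks into feature selection), then score an SVM classifier.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 4005))        # 60 subjects x 4005 FC features (synthetic)
y = np.array([0] * 30 + [1] * 30)      # 0 = healthy control, 1 = patient

pipe = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=9),       # keep the 9 most discriminative features per fold
    SVC(kernel="linear"),
)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
print("mean CV accuracy:", cross_val_score(pipe, X, y, cv=cv).mean())
```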

7.13.4 Disease conditions of the male reproductive system with recent AI programs
• Cancers: a. Prostate: A study was conducted to evaluate the applicability of machine learning methods that combine data on age and prostate-specific antigen (PSA) levels for predicting prostate cancer. Records of 943 patients who underwent transrectal ultrasonography (TRUS)-guided prostate biopsy were evaluated. A retrospective review of the patients' medical records analyzed the prediction rate of prostate cancer; 20 important features were identified and compared with biopsy results using 5 different algorithms, viz., logistic regression (LR), support vector machine, random forest (RF), extreme gradient boosting, and light gradient boosting machine (a minimal sketch of this kind of multi-model comparison follows after this list). Results suggest that the prediction rate of prostate cancer using machine learning methods may increase the detection rate for prostate cancer and reduce unnecessary prostate biopsies [377]. b. Testicular: A deep learning algorithm can be trained to identify tumor-infiltrating lymphocytes (TILs) in tissue samples of testicular germ cell tumors and to assess whether the TIL counts correlate with relapse status of the patient. A correlation coefficient of 0.89 was achieved when comparing the algorithm with the manual TIL count in the test set of images in which TILs were present (n = 47). In the WSI regions from the 89 patient samples, the median TIL density was 1009/mm². Deep learning-based image analysis can be used for detecting TILs in testicular germ cell cancer more objectively, and it has potential for use as a prognostic marker for disease relapse [190].
• Prostatitis: An extensive literature search was conducted to investigate the applications of AI in diagnosis, treatment and outcome prediction in urologic diseases and evaluate its advantages over traditional models and methods. Articles between 1994 and 2018 using the search terms "urology", "artificial intelligence", "machine learning" were included. Compared to conventional statistical analysis, 71.8% of studies concluded that AI is superior in diagnosis and outcome prediction [378].
• Male infertility: A study using modern and classical machine learning techniques, together with a dataset consisting of 85 videos of human semen samples and related participant data, was conducted to automatically predict sperm motility. Techniques used include simple linear regression and more sophisticated methods using convolutional neural networks. Results indicated that sperm motility prediction based on deep learning using sperm motility videos can be rapidly performed and consistent. Adding participant data did not improve the performance of the algorithms. Results indicate that machine learning-based automatic analysis may become a valuable tool in male infertility investigation and research [379].
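The prostate cancer study above compares logistic regression, an SVM, a random forest, and two gradient-boosting variants on tabular clinical features. The following is only a hedged sketch of such a comparison; the synthetic features (age, PSA, prostate volume) and scikit-learn's GradientBoostingClassifier standing in for XGBoost/LightGBM are assumptions of this illustration, not details of the cited study.

```python
# Rough sketch: compare several classifiers on tabular features for
# biopsy-outcome prediction, using cross-validated ROC AUC. Data are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 943
X = np.column_stack([
    rng.normal(65, 8, n),        # age (years)
    rng.lognormal(1.5, 0.8, n),  # PSA (ng/mL)
    rng.normal(40, 15, n),       # prostate volume (mL)
])
y = rng.integers(0, 2, n)        # 1 = cancer on biopsy (synthetic labels)

models = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient boosting": GradientBoostingClassifier(),  # stands in for XGBoost/LightGBM
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean ROC AUC = {auc:.3f}")
```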

7.14 Physical injuries, wounds and disabilities
After so many serious disease categories up to this point in this chapter, some might view this next category as less significant and even perhaps not worthy of AI efforts and applications. That is, until you review the epidemiology of physical injuries, wounds, and disabilities in the U.S. Accidents from unintentional injuries rank third among the leading causes of death, just behind heart disease and cancer (see Table 7-6). In the age range from 1 to 44, they rank number 1 [380]. A good source for summarizing and reviewing injuries and wounds is the Centers for Disease Control and Prevention (CDC), National Center for Injury Prevention and Control, their National Vital Statistics System (NVSS) and Web-based Injury Statistics Query and Report System. This system's statistical breakdown includes the general, comprehensive categories of fatal and non-fatal injuries, under which they further subcategorize intentional and unintentional data. I will outline each category below and its subdivisions, and will include 2 recent, relevant AI applications (one briefly explained and the second, a literature citation for further review by interested readers).

7.14.1 Fatal injury data a. Intentional: x Homicide: 1. MIT researchers programmed an AI algorithm called “Norman” with exclusively gruesome and violent content from Reddit (called “Norman” after Norman Bates from


the movie “Psycho”). Norman only sees death in everything. Microsoft also has an algorithm called “Tay” programmed for death (removed from the Internet due to its usage of hate speech and racial slurs). When these programs were tested on inkblot to help determine personality characteristics or emotional functioning, they consistently revealed morbid interpretations akin to a murderer’s psychology. The team believes Norman can also be retrained to have a less “psychopathic” point of view [381]. 2. Chatbots (e.g., Amazon’s Alexa) can transcribe human speech and then respond to that input with an educated guess based on what they have heard and learned before (e.g., “Kill your foster parents.”) [382]. x Suicide: 1. Search results are a useful, but very broad, area regarding suicide prevention strategies. Google has developed targeted algorithmic suicide prevention for people who are already asking for help. The Trevor Project offers crisis counseling to LGBTQ teenagers via a phone line (TrevorLifeline), a texting service (TrevorText), and an instant-messaging platform (TrevorChat). The project is using machine learning to automatically assess suicide risk. It’s all centered on the initial question that begins every session with a Trevor counselor: “What’s going on?” [383]. 2. The Speech-based algorithm can objectively differentiate PTSD cases (at greater risk for suicide) from controls [384]. b. Unintentional: x Adverse effects (medical, fall, drowning): 1. AI and machine learning technology collect data that help employees and teams predict impending accidents. A Boston-based construction company has developed AI software that observes job sites and alert to dangerous behaviors it picks up, like a worker standing too close to a machine. After prolonged periods of observation, the artificially intelligent program identifies unique or seemingly minute details that point to potential accidents, like a tilting machine or a too-heavy load [385]. 2. AI techniques and machine learning to systematize, streamline, and strengthen conventional approaches used for safety analysis and certification [386]. x Fire: 1. By linking equipment including optical and thermal cameras, as well as spectrometric systems that identify the chemical makeup of substances to AI, a company working with IBM believes it can help tame the often, unpredictable effects of climate change. Others are using AI to predict dangerous hail storms and studying how it can help find victims in bad weather. Along with analyzing the data from on-site, the device’s artificial intelligence will weigh similar events captured by the system over time. It will also use IBM’s Watson supercomputer to visually evaluate what it sees and forecasts from its Weather Company to predict how the fire might spread [387]. 2. Two new A.I. Tech plugins, AI-FIRE-DEEP and AI-SMOKE-DEEP, can be used to guarantee the safety of environments, through the early detection and localization of flames (AI-FIRE-DEEP) and smoke (AI-SMOKE-DEEP), respectively [388].

x Firearm: 1. A company, ShotSpotter, uses a series of sensors that differentiates everyday noise from the bang of a bullet. After a shot, building-mounted monitors record the exact moment of the shooting. Smart software locates the gunshot by triangulating the acoustic signals among nearby monitors that absorb the sound (a simplified sketch of this triangulation step follows at the end of this list). An AI software program instantaneously determines the type of shot and within minutes, information is provided to local police officers with the location, time, and analytics of the kind of shot [389]. 2. AI could be used to determine which ammunition, and ultimately which firearm, was responsible for a particular gunshot from the residue it left behind [390].
x Poisoning (drug reaction, toxin): 1. Because biometric authentication is complex for pill bottles, a new method of user identification using touch capacitance during bottle-opening attempts has been developed. A smart pill bottle could generate an immediate warning to deter a child from opening the bottle and send an alert to parents/guardians. From 232 bottle-opening events, the optimized neural network generated no false detections of children as adults and 4 false detections of adults as children. Preliminary results indicate that smart pill bottles can be used to reliably detect children trying to open pill bottles and reduce the risk of child-poisoning events. 2. An advanced predictive algorithm enabled scientists to predict the toxicity of any chemical without setting foot in the animal lab [391].
x Transportation-related (motor vehicle, cyclist, pedestrian): 1. Machine learning-based algorithms were employed to predict and classify motorcycle crash severity. The main aim of this research is to evaluate and compare different approaches to modeling motorcycle crash severity as well as investigating the effect of risk factors on the injury outcomes of motorcycle crashes. The dataset was classified into 4 injury severity categories: fatal, hospitalized, injured, and damage-only. Three machine learning-based models were developed. The results of the study reveal that the predictions of machine learning algorithms are superior to the multinomial logit model (MNLM) in accuracy and effectiveness, and the RF-based algorithms show the overall best agreement with the experimental data out of the 3 machine learning algorithms, for their global optimization and extrapolation ability [392]. 2. A deep learning model to explore the complex interactions among roadways, traffic, environmental elements, and traffic crashes proved a superior alternative for traffic crash predictions [393].
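The ShotSpotter description above hinges on triangulating one impulse's arrival times across several fixed monitors. Below is a deliberately simplified, assumption-laden sketch of that time-difference-of-arrival step (idealized 2-D geometry, a known speed of sound, noise-free timestamps); it is not the vendor's actual algorithm.

```python
# Toy sketch of time-difference-of-arrival (TDOA) gunshot localization:
# minimize the mismatch between measured and predicted arrival-time differences.
import numpy as np
from scipy.optimize import least_squares

C = 343.0                                   # speed of sound, m/s
sensors = np.array([[0, 0], [500, 0], [0, 500], [500, 500]], float)  # monitor positions (m)
true_source = np.array([320.0, 140.0])      # unknown in practice

# Simulated arrival times at each sensor (absolute emission time is unknown).
t = np.linalg.norm(sensors - true_source, axis=1) / C

def residuals(p):
    # Compare arrival-time differences relative to sensor 0.
    d = np.linalg.norm(sensors - p, axis=1) / C
    return (d - d[0]) - (t - t[0])

estimate = least_squares(residuals, x0=np.array([250.0, 250.0])).x
print("estimated source position (m):", estimate)
```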

7.14.2 Nonfatal injury data a. Intentional: x Non-specific (cuts, falls, dog bites, inhalation): 1. Microsoft, in collaboration with 2 companies (Armed and Napier) has developed a wearable device that can detect early indications of frailty, such as low grip


strength, hydration levels, muscle mass, low heart rate, and heart rate variability. Pioneering AI predictive analytics modeling using Microsoft Machine Learning tools are combined with this device. The data it produces can predict the escalating risk of a potential fall [394]. 2. An IoT-based approach is used to design an AQSS (Air Quality Sensing System) to evaluate the environmental parameters and monitor their effects in the process of open skin wound healing [395]. x Firearm: 1. AI programs are now being used as gun detection systems able to automatically respond to an active shooter situation by sending a real-time video feed to security staff or police as soon as a gun or some type of threatening action is detected. The system can alert a criminal that he or she has been identified and the authorities have been called. Systems are being utilized in multiple locations across the United States [396]. 2. Functional autonomous weapon systems in the military can already be created with existing AI technology [397]. x Poisoning 1. It remains difficult to predict the side effects of experimental drugs within the central nervous system. A machine learning-based in vitro system has been designed to detect seizure-inducing side effects before the clinical trial. This artificial intelligence-based classification may provide a new platform to preclinically detect seizure-inducing side effects of drugs [398]. 2. Harmful algae blooms have been increasing in frequency in recent years, and attention has shifted from describing to modeling and using AI to predict these phenomena since in many cases they pose a risk to human health and coastal activities [399]. b. Unintentional (includes undetermined); x Adverse effects (cut, fall, burn, overexertion, food poisoning): 1. A deep learning approach was applied to classifying mushrooms as edible or poisonous. All experiments showed that the composed model is an efficient one. The procedure used for classification yielded an approximate efficiency of 85%. The efficiency of the model can be further increased by decreasing the False positive rate using the neural network ensemble. The model created can be used by the food and packaging industry to automatically classify the mushrooms as edible or poisonous. A mobile application can be developed and customized to obtain all the details including nutritional values, edibility of a mushroom [400]. 2. An AI algorithm was developed to measure salivary amylase, a marker of sympathetic activity during exercise. The model proved highly accurate for maximum heartbeat measurement [401]. x Firearms: 1. Google is pursuing a path through the ethical minefields involving physical and autonomous cyber agents. It published a set of company principles that ruled out


developing AI for weapons and technologies intended to cause or facilitate direct harm to people, but it retained the option to work with governments and the military on developing AI applications for cybersecurity. All leaders must demonstrate extra caution in deploying autonomous weapons, knowing such weapons could easily fall under enemy control [402]. 2. With the help of artificial intelligence, surveillance cameras mounted inside and outside buildings can recognize lethal threats within seconds [403].
x Machinery: 1. AI risks may be misuse risks or accident risks. This is the prevailing approach in the field. Misuse risks entail the possibility that people use AI in an unethical manner with malicious motivation. Accident risks, in contrast, involve harm arising from unintended behavior by the AI systems. The prime example of this would be a self-driving car collision arising from the AI misunderstanding its environment. The idea that many of the risks from AI have structural causes is important to realize. It suggests that solving these problems will require collective action both domestically and internationally [404]. 2. When should we trust fully-automated computer systems with autonomous decision making, and how do we allow humans to gain control if things go wrong? [405]
x Nature/environment: 1. A study was conducted that sets forth an automatic evaluation network of the risk-perception ability of motorists driving on the freeway in snow and ice environments, using a deep learning approach and the rough sets technique. Rough sets technique was added as a judgment in the output layer of the deep belief network (DBN). The results indicate that the DBN improves the accuracy of perceiving risky conditions. This approach can provide a reference for the design of hazard detection systems of partially automated vehicles [406]. 2. Environmental Control Units (ECU) including chatbots and natural language processing (NLP) enable environmental independence for physically and functionally disabled clients and reduce the burden and frequency of demands on carers [407].
x Transportation-related (motor vehicle, cyclist, pedestrian): 1. Based on the weights of traffic accident features, the feature matrix to gray image (FM2GI) algorithm is proposed to convert a single feature relationship of traffic accident data into gray images containing combination relationships in parallel as the input variables for the model. Moreover, experiments demonstrated that the proposed model for traffic accident severity prediction has a better performance [408]. 2. A comprehensive overview of the AI techniques applied worldwide to address transportation problems mainly in traffic management, traffic safety, public transportation, and urban mobility [409].
c. Violence related:
x Assault (attempted homicide, sexual, rape, abuse): 1. An AI tool has been developed using a dataset of 732 popular film scripts annotated for violent content by Common Sense Media. Then a team built a neural network machine learning model where algorithms intersect, work together and learn from the text of the scripts to create an output (i.e., violence ratings for the movie). The AI tool analyzed language in the dialogue of screenplays and found that the semantics and sentiment of the language used was a strong indicator of the rated violent content in the completed films [410]. 2. Researchers use AI models to run simulations of real-life social problems to better understand how religious violence can break out [411].
x Legal intervention: 1. A familiarity with how machine learning works will allow attorneys to begin to formulate questions and strategies when that technology produces substantive evidence at trial. Included in such evidence may be potential issues under the Fifth and Sixth Amendments as well as the Federal Rules of Evidence, none of which would categorically bar machine learning evidence. Once machine learning evidence is deemed admissible, counsel for both sides must consider the significant issues with machine learning that nonetheless could affect the weight such evidence could be assigned by the trier of fact [412]. 2. The analysis of judicial data through artificial intelligence methods has become an urgent demand [413].
x Self-harm: 1. Cyberbullying and possible resultant self-harm caused by depression have prompted some artificial intelligence developers to try to seek solutions to this widespread problem. Some British schools have started using an AI tool called AS Tracking, developed by a company called STEER, which came into use at 150 schools in Britain. The tool involves students taking an online psychological test. The results are sent to STEER, "which compares the data with its psychological model, then flags students who need attention in its teacher dashboard" [414]. 2. One of the most common intentional forms of non-suicidal self-injury (NSSI) is relatively often shared publicly via new digital media technologies. An image-recognition algorithm was developed to protect vulnerable populations from contact with NSSI-related pictures posted on social media [415].

7.14.3 Disabilities Each of the subcategories under non-fatal injuries can result in disability. The Federal Government defines disability as “A limit in a range of major life activities. This includes activities like seeing, hearing, walking and tasks like thinking and working” [416]. The medical and health-related aspects of each of these disabling activities are covered elsewhere in this text. Here, I will provide an example of an AI application relating to each activity and an additional reference for each for those interested. • Seeing: 1. Aira is a service that allows a blind person to establish voice communication with a trained human agent, who can also see through a camera that the blind person has


mounted in a headset or on their phone. Aira transactions suggest that significant parts could be automated (e.g., requests to read the text). Also, a version of the Aira service might be devised for partially sighted clients and could provide a good interaction model for passive cognitive assistants which allows the user to direct the perceived need of support and interaction, thus promoting self-determination [417]. 2. There are great opportunities to maximize AI to make the digital world more accessible to people with vision disabilities [418]. • Hearing: 1. “The power of deep learning comes from its hierarchical structure that is capable of transforming noisy or mixed voice signals into clean or separated voices through layer-bylayer processing. “When it comes to hearing aids, the challenge is always to make the technology work on a small computer behind the ear. Currently, the algorithm requires too much space in a hearing aid for this. Even if an algorithm can separate several unknown voices from each other, it isn’t able to choose which voice to present to the hearing aid user. These are some practical issues that we need to solve before an AI-assisted hearing aid solution. However, the most important thing is that these issues now seem solvable” [419]. 2. Adoption of AI technologies in otolaryngology practice may be hindered by misconceptions of what AI is and a fear that machine errors may compromise patient care [420]. • Walking: 1. Shuffling, festination and akinetic episodes, which could diminish the life quality of Parkinson's Disease (PD) patients. Therefore, it is very useful to develop a computerized tool to provide an objective evaluation of PD patients’ gait. A proposed system is a dual-modal deep-learning-based model, where the left and right gait is modeled separately by a convolutional neural network (CNN) followed by an attention-enhanced long short-term memory (LSTM) network. Experimental results indicate that our model can provide state-of-the-art performance in terms of classification accuracy [421]. 2. Development of an intelligent walking assist robot and gait rehabilitation robot for people with walking disabilities, where distinguishing of abnormal gait is crucial during safe human-robot interaction [422]. • Thinking: 1. The Cognitive and Learning Disabilities Accessibility Task Force of the World Wide Web Consortium (https://w3c.github.io/coga/user-research/) lists 10 categories of difference, including memory, executive function, reasoning, attention, language, and literacy. Differences can be associated with diverse circumstances, including chromosomal variation, brain injury, effects of medications, aging, and many others. advances in machine learning are pushing back the boundaries of what technology can do. Can we expect these advances to enable people with cognitive disabilities to live more self-determined lives? [423] 2. The neurobiological capacity for language, a cardinal aspect of being human, can be lost by disease and, in some cases, partially restored by technology. Research is advancing toward more sophisticated devices to detect and compute methods to decode brain signals corresponding to the language [424].
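The gait-analysis model described under "Walking" above feeds convolutional features into a recurrent layer. The minimal Keras sketch below shows only that generic CNN-to-LSTM pattern for fixed-length gait windows; the window length, channel count, and layer sizes are placeholders, and the published system is dual-modal and attention-enhanced rather than this single stream.

```python
# Minimal sketch: 1-D CNN feature extractor followed by an LSTM for classifying
# fixed-length gait windows (e.g., ground-reaction-force sensor channels).
import tensorflow as tf
from tensorflow.keras import layers, models

n_timesteps, n_channels = 500, 8         # placeholder window length and sensor channels

model = models.Sequential([
    layers.Input(shape=(n_timesteps, n_channels)),
    layers.Conv1D(32, kernel_size=7, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.LSTM(64),                      # summarizes the convolutional feature sequence
    layers.Dense(1, activation="sigmoid") # PD gait vs. control
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```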


7.15 Infectious disease If you are old enough to be reading this book, it’s guaranteed you have had an infectious disease in your lifetime. In fact, it’s fair to say, you have had multiple infectious diseases, hopefully, most of them minor and of short duration. Infectious agents are ubiquitous and easily transmitted through kissing, touching, sneezing, coughing, sexual contact, insect or animal bites, contaminated food, water, soil, or plants. Germs are also present on your skin and inside your body, with most being harmless and some even helpful (e.g., in the intestinal tract). There are 5 main kinds of germs: [425] • Bacteria: One-celled organisms that multiply quickly. They give off toxins, harmful chemicals that cause a range of disease states from sore throats to life-threatening infections (e.g., sepsis, etc.). • Viruses: Protein coated capsules of genetic material (nucleic acid molecules) capable of replicating within the living cells of a host producing an infectious disease state (e.g., coronavirus, etc.); • Fungi: A single-celled or multicellular primitive, plant-like organism found in soil, air, and water, mostly harmless to humans, but may be pathogenic (e.g., histoplasmosis, etc.); • Parasites: Animals or plants that survive by living on or in other living things and takes its nourishment from that other organism (e.g., amoebiasis, pin worm, etc.); • Protozoa: Unicellular organisms (e.g., algae, euglena, amoeba, slime molds, trypanosomes, and sporozoans) transmitted to humans through contaminated soil, food, water, pets, and animals, as well as by insect vectors. They can cause disease in humans by parasitically feeding off of and multiplying at the expense of their host. While there are tens of millions of microbial species on earth, only about 1400 of them are considered human pathogens [426]. Medical therapies are available for most of the infectious agents except for viruses. However, the human immune system (see Immune System, page 294) is the principal control for suppressing an active infection and eliminating it from the body (except for viruses see below). Antibiotics are powerful medications that kill bacterial pathogens as well as suppress their reproduction allowing the body’s natural defenses to eliminate them from the body. Unfortunately, because of the excessive use of antibiotics, an increasing number of bacteria are becoming resistant (mutant strains developing) to currently available antibiotics. This ominous process is leading to untreatable bacterial infections which escalate into life-threatening conditions with no means of control. The most prominent example of this is reflected in the omnipresent, common gram-positive staphylococcal bacteria which continues to mutate into resistant strains to standard and newly developed antibiotics (e.g., Methicillin-resistant Staphylococcus aureus or MRSA) [427]. Fungal infections are often recognized late and result in difficulty in treating effectively without cellular and tissue damage. Minor fungal infections like athlete’s foot (tinea pedis) are often treatable with OTC medications, although the response may be slow and


the condition may recur. The more advanced forms of fungal infections (e.g., lung, bloodstream, meningitis) can be difficult to treat and life-threatening. Anti-fungal medications are toxic and can cause serious complications. A virus enters a cell (a host) and uses the host's cellular processes to make thousands of copies of itself (see SARS-CoV-2 life cycle, Chapter 8). Then, the cell wall ruptures and the new viruses infect neighboring cells. Antiviral medications don't kill viral agent(s) as much as mitigate their symptoms, intensity, and duration by reducing their ability to replicate. The virus itself is rendered dormant, but not eliminated from the host (the body). Varying factors ranging from physical to physiological to stress (emotional, etc.) can exacerbate and reactivate a virus at any time. Notwithstanding viral resistance to treatment, this very feature creates the opportunity to use the process of a replicating virus as a biological vector (an organism that carries a disease or a remedial or therapeutic agent to other host cells) to treat disease. Using gene therapy (see Genetics and Genome, page 308), viral genes (e.g., retroviruses or adenoviruses) are replaced with DNA code that will enter disease cells and replace or eliminate the abnormal or mutant DNA code (e.g., cancer cells) [428]. Another use of viruses and other infectious agents as treatment vectors is their use in vaccines (see vaccine discussions in Chapter 8). In this case, the infectious agent is a dead or recombinant (DNA engineered to express an antigen) microbe which is introduced into the recipient whose immune system responds by producing antibodies and a resultant adaptive immunity ("active immunization") to the microorganism [429]. Because of the large number of infectious agents and the variety of diseases they produce, there is a vast number of AI applications related to infectious diseases. To provide a reasonable representation of related literature articles, I will list the 5 major groups of infectious agents (bacteria, virus, fungus, parasites, protozoa) and identify 3 major infectious diseases produced by each. Then for each disease, I will present 1 or 2 (depending on available literature) of the most recent and relevant AI-related literature articles and cite 1 additional reference for interested readers. I'm fairly certain that you will be familiar with all of the diseases selected because of their prevalent nature. There is extended discussion (beyond the brief comments above) on the novel coronavirus, SARS-CoV-2, and its resultant epidemic, COVID-19, in Chapter 8.
1. Bacterial diseases: A. Tuberculosis (TB): Mycobacterium tuberculosis, contagious lung infection. 1. An artificial neural network (ANN) was applied to predict TB infection. Based on TB patient data, the ANN accurately predicted Mycobacterium tuberculosis (MTB) positive or negative status with an overall accuracy of >94%. Further, the accuracy of the test and validation were found to be >93%. This increased accuracy of ANN in the detection of TB-suspected patients might be useful for the early management of disease, to adopt control measures against further transmission and reduce the drug resistance burden [336].

2. A Harvard undergraduate, working with Harvard Medical School scientists, has designed an artificial intelligence model that predicts tuberculosis resistance to the 10 most commonly used drugs. If incorporated into clinical tests, the model could make resistance detection both faster and more accurate [430]. 3. "Efficient Deep Network Architectures for Fast Chest X-Ray Tuberculosis Screening and Visualization" [338].
B. Lyme disease: Borrelia burgdorferi transmitted to humans through the bite of infected black-legged ticks. 1. A study demonstrated the accuracy of deep machine learning identifying erythema migrans rashes in early Lyme disease. A comparison with non-medically trained human performance indicated that the machine almost always exceeded acceptable specificity and could operate with higher sensitivity [431]. 2. Clinical research of Lyme Disease has been somewhat stagnant, controversial, and challenging. That makes the use of big data and mathematical analysis even more critical to make progress toward better understanding, diagnosis, and treatment of Lyme disease and on other important scientific fronts. Sophisticated mathematical methods for large-scale Lyme disease data are a perfect example of how complicated data can significantly enhance scientific and medical understanding [432]. 3. "Risk assessment strategies for early detection and prediction of infectious disease outbreaks associated with climate change" [433]
C. Pertussis (Whooping cough): Contagious respiratory disease caused by the bacterium Bordetella pertussis. 1. Without technology to model and visualize the risk of large datasets, vaccinators and policymakers are unable to identify target groups and individuals at high risk of dropping out. Default rates remain high, preventing universal immunization coverage. Predictive analytics algorithms leverage AI and use statistical modeling, machine learning, and multidimensional data mining to accurately identify children who are most likely to delay or miss their follow-up immunization visits [434]. 2. A low-cost, quick and easily accessible solution is needed to provide pertussis diagnosis in areas at risk for outbreaks. An algorithm was developed for automated diagnosis of pertussis using audio signals by analyzing cough and whoop sounds. It automatically detects individual cough sounds with 92% accuracy and a PPV of 97%. The low complexity of the proposed algorithm coupled with its high accuracy demonstrates that it can be readily deployed using smartphones and can be extremely useful for quick identification or early screening of pertussis and infection outbreak control [435]. 3. "Respiratory sound analysis in the era of evidence-based medicine and the world of medicine 2.0" [436]
2. Viral disease: A. The common cold, mainly caused by rhinovirus, adenovirus, and coronavirus (COVID-19 [437]; see Chapter 8).

1. Event-based surveillance (EBS) systems and risk modeling will increasingly be used to inform public health actions to prevent, detect and mitigate climate change increases in infectious diseases [433]. 2. Reliable data management AI platform methods will enable effective analysis of massive infectious disease and surveillance data to support risk and resource analysis for government agencies, healthcare service providers, and medical professionals in the future [438]. 3. "Machine learning for clinical decision support in infectious diseases: a narrative review of current applications" [439]
B. Human immunodeficiency virus (HIV): 1. A new machine-learning algorithm automatically selected important risk-related variables of HIV from millions of medical records. The algorithm can detect those most vulnerable to HIV infection and could play an important role in increasing the prescription of preexposure prophylaxis medications to prevent infection [194]. 2. Currently, resistant isolates are commonly identified by time-consuming and expensive in vitro neutralization assays. Machine learning classifiers are being reported that accurately predict the resistance of HIV-1 isolates to 33 bNAbs. This classifier predictor should facilitate informed decisions of antibody usage and sequence-based monitoring of viral escape in clinical settings [440]. 3. "Incorporating causal factors into reinforcement learning for dynamic treatment regimes in HIV" [441].
C. Gastroenteritis (and irritable bowel syndrome, IBS), caused by norovirus (notorious on cruise ships): 1. A study investigated the effects of food contamination on gastrointestinal-disease morbidities using 8 different machine-learning models. Experiments on the datasets from 10 cities/counties in central China demonstrate that deep neural networks achieve significantly higher accuracy than classical linear-regression and shallow neural-network models, and the deep denoising autoencoder model with evolutionary learning exhibits the best prediction performance. The results also indicate that the prediction accuracies on acute gastrointestinal diseases are generally higher than those on other diseases [442]. 2. An exploratory study aims to design a system that allows physicians to take advantage of the available data and data sources to manage inflammatory bowel disease (IBD). This study will develop the algorithms necessary for data cleansing and for applying descriptive and predictive analytics to provide physicians with relevant data to predict future IBD flare-ups [443]. 3. "Application of artificial intelligence in gastroenterology" [444]
3. Fungal disease: A. Nail fungal disease: Onychomycosis (Tinea unguium). 1. A study suggests a deep convolutional network to classify onychomycosis disease from images. The framework uses the Visual Geometry Group (VGG)-19 for feature extraction. Due to the unavailability of a diligent dataset, a new dataset was built for testing the accuracy of the contended framework.

This work has been tested on the authors' dataset and has also been compared with other state-of-the-art algorithms (SVM, ANN, KNN, Tree, RF, AdaBoost), showing strong performance in feature extraction [445]. 2. By training with a dataset comprising 49,567 images, diagnostic accuracy was achieved for onychomycosis using deep learning that was superior to that of most of the dermatologists who participated in this study [446].
B. Candidiasis: Yeast (a type of fungus) called Candida. 1. A paper proposes an algorithm combining CNN with a decision-making tree (CNN-DMT) based on medical expert consensus of clinical vaginal candidiasis. By incorporating features automatically extracted by the machine together with expert knowledge, the automatic diagnosis of vaginitis disease is realized. Experimental results show that the CNN-DMT approach improves test accuracy by 8.46% over the leading CNN method while enhancing the accuracy of normal bacterial flora by more than 15% [447]. 2. Research was conducted using data mining methods to classify sexually transmitted diseases. The experiment result shows that K-NN (k-nearest neighbors) is the best method to solve this problem with 90% accuracy [448].
4. Parasitic disease: A. Ectoparasites: Mites, ticks, lice, and fleas. 1. There are 23 neglected tropical diseases (NTDs) that have been prioritized by the World Health Organization, which are endemic in 149 countries. Several large datasets are also now in the public domain and this enables machine learning models to be constructed that then facilitate the discovery of new molecules for these pathogens [449]. 2. Parasitic infections constitute a major global public health issue. Using a holographic speckle analysis algorithm combined with deep learning-based classification, a study demonstrated sensitive and label-free detection of trypanosomes within spiked whole blood and artificial cerebrospinal fluid (CSF) samples. This unique platform has the potential to be applied for sensitive and timely diagnosis of neglected tropical diseases caused by motile parasites and other parasitic infections in resource-limited regions [450].
B. Helminths: Flatworms and roundworms. 1. Schistosomiasis is a debilitating parasitic disease infecting over 250 million people with nearly 800 million people at risk worldwide. Deep convolutional neural networks (CNNs) have proven to be highly efficient for image recognition tasks across many object categories. Applying a state-of-the-art algorithm, it classified images of 4 snail categories with 99.64% accuracy and images of 11 parasite categories with 88.14% accuracy, which rivals highly-trained human parasitologists [451]. 2. Wireless capsule endoscopy (WCE) has been applied to automatic hookworm detection. Unfortunately, it remains a challenging task. In recent years, a deep convolutional neural network (CNN) has demonstrated impressive performance in various image and video analysis tasks.

In a research paper, a novel deep hookworm detection framework was proposed for WCE images, which simultaneously modeled visual appearances and tubular patterns of hookworms [452].
C. Toxoplasmosis: Toxoplasma gondii. 1. Toxoplasma gondii, one of the world's most common parasites, can infect all types of warm-blooded animals, including one-third of the world's human population. Most current routine diagnostic methods are costly, time-consuming, and labor-intensive. A novel transfer learning-based microscopic image recognition method was developed for T. gondii identification. This approach employs the Fuzzy Cycle Generative Adversarial Network (FCGAN) with transfer learning utilizing knowledge gained by the parasitologists. The method showed high accuracy and effectiveness of the approach in the newly collected unlabeled Toxoplasma microscopic images, compared to other currently available deep learning methods [453]. 2. Emerging methods using transcript quantification, public databases (chem/bioactivity profiles, ontologies, image-based screening results), combined with machine-learning tools, are providing ground-breaking new and alternative screening strategies for Toxoplasmosis by augmenting phenotypic screening results [454]. 3. "mSphere of Influence: The Rise of Artificial Intelligence in Infection Biology" [455]
5. Protozoan disease: A. Amebic dysentery: Amebiasis, caused by Entamoeba histolytica. 1. A study used Illumina next-generation sequencing to conduct a comparative genomic analysis of 2 clinical isolates obtained from diarrheal and asymptomatic patients. The EHI_176590 gene was detected by PCR in 56% of stool samples from symptomatic patients infected with E. histolytica, but only in 15% of stool samples from asymptomatic individuals. This suggests that the presence of the EHI_176590 gene is correlated with the outcomes of infection. These data strongly indicate that the AIG1 family protein plays a pivotal role in E. histolytica virulence via the regulation of host cell adhesion [456]. 2. A study presented an artificial neural network (ANN)-based method for calculating the water quality index (WQI) to estimate water pollution. The ANN approach is found to be useful in this study for calculating the weight values and the WQI efficiently. The accuracy of the calculated WQI also increased to 98.3% [457].
B. Chagas disease (sleeping sickness): American trypanosomiasis caused by Trypanosoma cruzi. 1. The application of recent scientific innovations to the field of Chagas disease has led to the discovery of new promising drug candidates. Phenotypic screening brought new hits and opportunities for drug discovery. AI also has the potential to accelerate drug discovery in Chagas disease and further research into this is warranted [458]. 2. In a study, artificial neural networks (ANNs) and kernel-based partial least squares regression (KPLS) were developed using anti-T. cruzi activity data for broadly diverse chemotypes. The models exhibited a good predictive ability for the test set compounds. The results highlighted privileged molecular scaffolds and the optimum physicochemical space associated with high anti-T. cruzi activity, which provides important guidelines for the design of novel trypanocidal agents having drug-like properties [459].

3. "Use of Artificial Intelligence on the Control of Vector-Borne Diseases" [460]
C. Malaria: Mosquito-borne disease caused by a protozoan parasite. 1. AI methods have been developed to enable reliable and data-oriented disease monitoring and projection against malaria. It is foreseeable that together with reliable data management platforms AI methods will enable effective analysis of massive infectious disease and surveillance data to support risk and resource analysis for government agencies, healthcare service providers, and medical professionals in the future [438]. 2. In recent years, detection of malaria using computerized image analysis trained by dynamic learning mechanisms has gained increasing importance. A study proposed an image processing-based malaria detection system which is trained by deep learning. Big data was used for increasing the accuracy of the system, and the achieved accuracy showed that the proposed system has an outstanding classification rate that can be used in real-world detection [461] (a bare-bones sketch of this kind of image classifier follows below). 3. "Hijacking Malaria Simulators with Probabilistic Programming" [462]
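The malaria item above describes image-based detection trained with deep learning. A bare-bones Keras sketch of a blood-smear cell classifier in that spirit is shown below; the folder path, image size, and hyperparameters are placeholders, not those of the cited system.

```python
# Bare-bones sketch: CNN classifying single-cell blood-smear crops as
# parasitized vs. uninfected, loaded from an assumed folder-per-class layout.
import tensorflow as tf
from tensorflow.keras import layers, models

train_ds = tf.keras.utils.image_dataset_from_directory(
    "cell_images/",                 # hypothetical dataset path with two subfolders
    image_size=(64, 64), batch_size=32)

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```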

7.16 Human development, aging, degeneration and death
Biological development (ontogenesis) can be grouped into 3 main stages with subgroups for each. The first group is gestation (pregnancy) which includes fertilization, embryologic and fetal development. The second group is growth (physical, physiological and mental) which includes infancy, childhood, and adulthood. And finally, the third group is aging which includes wisdom (hopefully), regression (physical, physiological and mental), degeneration and death. AI is represented in each group (and subgroup) with applications that are helping us better understand the existential nature of life and death. From among the thousands of AI applications presented in the literature, I will present 3 significant programs in each subgroup. I urge the reader to review the literature ("Google it") as well to learn more about how AI will continue to "disrupt" our lives in the years to come.
1. Fertilization: A. A deep convolutional neural network was trained on a collection of ~1000 sperm cells of known DNA quality, to predict DNA quality from brightfield images alone. Results demonstrated moderate correlation (bivariate correlation ~0.43) between a sperm cell image and DNA quality and the ability to identify higher DNA integrity cells relative to the median. This deep learning selection process is directly compatible with current, manual microscopy-based sperm selection [463]. B. A smartphone-based point-of-care device for automated ovulation testing has been developed using artificial intelligence by detecting fern patterns in a small volume (<100 µL) of saliva that is air-dried on a microfluidic device. Performance of the


device was evaluated using artificial saliva and human saliva samples and observed that the device showed >99% accuracy in effectively predicting ovulation [464]. C. Several Machine Learning (ML) methods have been tested to develop a model that helps the selection of a single potential embryo with a high success rate. Fertilized eggs are observed over a period extending up to 5 days. Algorithmic methods gave a yield of 78.4%, which is acceptable in many cases to reduce the workload. Most of these models are practically useful to predict the implantation rate and outcome of IVF [465].
2. Embryologic development: A. A team, consisting of embryologists, reproductive medicine clinicians, computer scientists, and precision medicine experts, trained an artificial intelligence algorithm to discriminate between poor and good embryo quality. The technique, which analyzes time-lapse images of the early-stage embryos, could improve the success rate of in vitro fertilization (IVF) and minimize the risk of multiple pregnancies [370]. B. Sixteen artificial intelligence (AI) and machine learning (ML) approaches were reported at the 2018 annual congresses of the American Society for Reproductive Biology and European Society for Human Reproduction and Embryology. Nearly every aspect of patient care was investigated, including sperm morphology, sperm identification, identification of empty or oocyte containing follicles, predicting embryo cell stages, predicting blastocyst formation from oocytes, assessing human blastocyst quality, predicting live birth from blastocysts, improving embryo selection, and for developing optimal IVF stimulation protocols. AI and ML are burgeoning methodologies in human reproduction and embryology and would benefit from the early application of reporting standards [466]. C. Accurate classification of embryo early development stages can provide embryologists valuable information for assessing the embryo quality, and hence is critical to the success of IVF. A paper has proposed a multi-task deep learning with dynamic programming (MTDL-DP) approach for this purpose. It first uses MTDL-DP to preclassify each frame in the time-lapse video to an embryo development stage. This is the first study that applies MTDL-DP to embryo early development stage classification from time-lapse videos [467].
3. Fetal development: A. Cardiotocography (CTG) monitoring and uterine activity (UA) provide useful information about the condition of the fetus and the cesarean or natural delivery. The visual assessment by the pathologists takes a lot of time and may be inconsistent. A study was performed on many diverse approaches that are suggested for predicting fetal state classes based on AI. Experimental results pointed to deep-stacked sparse auto-encoders (DSSAE) as more accurate than other suggested techniques to predict fetal state. The proposed method achieved a sensitivity of 99.716, a specificity of 97.500 and a geometric mean of 98.602 with an accuracy of 99.503 [468].
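For reference, the geometric mean quoted for the fetal-state classifier above is simply the square root of the product of sensitivity and specificity, which the reported figures reproduce:

```python
# The reported geometric mean follows directly from sensitivity and specificity.
sensitivity, specificity = 99.716, 97.500
g_mean = (sensitivity * specificity) ** 0.5
print(round(g_mean, 3))   # 98.602, matching the value quoted above
```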


B. Retinopathy of prematurity (ROP) is the leading cause of childhood blindness worldwide. An automated ROP detection system called DeepROP was developed by using Deep Neural Networks (DNNs). ROP detection was divided into ROP identification and grading tasks. The developed DeepROP is the potential to be an efficient and effective system for automated ROP screening [469]. C. A research group of scientists has developed a novel system that can automatically detect abnormalities in fetal hearts in real-time using artificial intelligence (AI). This technology could help examiners to avoid missing severe and complex congenital heart abnormalities that require prompt treatments, leading to early diagnosis and well-planned treatment plans, and could contribute to the development of perinatal or neonatal medicine [470]. 4. Infancy: A. Typically, a cerebral palsy (CP) diagnosis is delayed until around age 2 years; this delay decreases the likelihood of a long-term positive patient outcome. Current early detection is by visual examination of newborns 10 20 weeks post gestation. A screening program based on filming babies and processing the video via AI will allow increased early detection and intervention [471]. B. An approach focusing on the process of acquisition of intelligence, called evolutionary and developmental intelligence (EDI) utilizes AI, brain-inspired computing, and developmental robotics to define a conceptual framework. The process integrates advances in neuroscience, machine learning, and robotics to construct EDI systems that can help to better understand animal and human intelligence [472]. C. Numerous methods have been applied to infant MRI data due to numerous inherent challenges such as inhomogeneous tissue appearance across the image, considerable image intensity variability across the first year of life, and a low signal to noise setting. Deep learning algorithms and in particular, convolutional networks have shown tremendous success in medical image analysis applications [473]. 5. Childhood and adolescence: A. An automated machine learning (ML) method for overcoming barriers to Autism spectrum disorder (ASD) screening was developed, specifically using the feedforward neural network (fNN). For the total sample, the best results yielded 99.72% correct classification using 18 items. Best results yielded 99.92% correct classification using 14 items for white toddlers and 99.79% correct classification using 18 items for black toddlers. ML may be a beneficial tool in implementing automatic, efficient scoring that negates the need for labor-intensive follow-up and circumvents human error, providing an advantage over previous screening methods [474]. B. Machine learning algorithms and sensors are being used to speed up the diagnosis of developmental delay and ensure more timely intervention. Scientists obtained information from USCs Infant Neuromotor Control Laboratory. This information included data about the motor movements of infants obtained from sensors strapped to the infants’ ankles. An algorithm was able to classify typical delay (TD) and delayed development (AR) and was then used to further analyze the observable differences in


spontaneous movements of infants with TD and AR. Movement data was able to predict developmental delays in the first 6 months with 83.9% accuracy and in the second 6 months with 77% accuracy [475].
C. A study is being conducted to share initial learnings and key questions around the intersection between AI and youth (ages 12–18), in the context of domains such as education, health and well-being, and the future of work. It aims to engage various stakeholders, including policymakers, educators, parents, and caregivers. The goal is to consider how we can empower young people to meaningfully interact with AI-based technologies to promote and bolster learning, creative expression, and wellbeing, while also addressing key challenges and concerns [476].
6. Adulthood:
A. AI appears to be extremely valuable for integrating genetic and cellular data from humans and other species and for modeling biological processes associated with aging. Such analyses could potentially resolve several unanswered questions currently pending in aging research. The National Institute on Aging (NIA) convened an interdisciplinary workshop titled "Contributions of Artificial Intelligence to Research on Determinants and Modulation of Health Span and Life Span" in August 2018. Aging and longevity are influenced by many interacting components, and AI is particularly well-suited for modeling complex patterns driven by non-additive interactions and genetic or phenotypic heterogeneity [477].
B. It is of enormous importance to estimate general exposure to the risk of adverse health events, commonly referred to as frailty. A study was conducted to compare the performance of shallow and deep multilayer perceptrons (sMLP and dMLP) and long short-term memory networks (LSTM) on the prediction of a subject's decline in activities of daily living. Deep networks performed better than shallow ones, while dMLP and LSTM performance were similar. Domain adaptation improved predictive ability in all comparisons. The results may help to improve the state of the art in predictive models for clinical practice and population screening [478].
C. A comprehensive report presents a broad look at the American public's attitudes toward artificial intelligence (AI) and AI governance, based on findings from a nationally representative survey of 2000 American adults. Results include:
1. Mixed support for the development of AI (59% favor);
2. An overwhelming majority of Americans (82%) believe that robots and/or AI should be carefully managed;
3. Americans consider it more likely that AI will impact people around the world within the next 10 years (54.7% to 69% range of concern);
4. The public places the most trust for AI development in university researchers (50%) and the U.S. military (49%), reporting "a fair amount of confidence";
5. Respondents estimate a 54% chance that high-level machine intelligence will be developed by 2028;
6. Weak support for developing high-level machine intelligence (31% of Americans support while 27% oppose its development) [479];


7. Demographic characteristics account for substantial variation in support for developing high-level AI:
a. College graduates = 57%; high school or less = 29%;
b. Household incomes (over $100,000 annually = 47%; less than $30,000 = 24%);
c. Computer science or programming experience = 45%; without = 23%;
d. Men = 39%; women = 25%.
8. More Americans feel AI will be harmful than think it will be beneficial to humanity:
a. 34% think the technology will have a harmful impact;
b. 12% feel it could be extremely bad, leading to possible human extinction;
c. 31% think that high-level machine intelligence will be good for humanity;
d. 5% say it will be extremely good;
e. 18% of respondents selected "I don't know."
7. Aging:
A. Sleep characteristics related to duration, timing, continuity, and sleepiness are associated with mortality in older adults. Machine learning was used to establish the predictive ability of a multidimensional self-reported sleep domain and to identify which sleep characteristics are most predictive. The analytic sample included N = 8668 older adults (54% female) aged 65–99 years. Multidimensional sleep was determined to be an important predictor of mortality that should be considered among other more routinely used predictors [480].
B. Predictors of chronological and biological age developed using deep learning (DL) are rapidly gaining popularity in the aging research community. These deep aging clocks can be used in a broad range of applications in the pharmaceutical industry, spanning target identification, drug discovery, data economics, and synthetic patient data generation. A brief overview was presented of recent advances in this important subset, or perhaps superset, of aging clocks that have been developed using AI [481].
C. Smart home technologies (SmHT) introduce passive monitoring features into the residential infrastructure to promote older adults' ability to manage day-to-day living and age in place. Opportunities continue to develop to create smart homes that enhance physical and cognitive capacity for older adults, but there are also ethical and practical challenges that will inform the design of future smart home systems [482].
8. Degeneration:
A. A major challenge in dementia is achieving an accurate and timely diagnosis. In recent years, neuroimaging with computer-aided algorithms has made remarkable advances in addressing this challenge. Based on a rigorous review of existing works in the field, it has been found that, while most of the studies focused on Alzheimer's disease, where recent research has demonstrated reasonable performance, the


identification of other types of dementia remains a major challenge. Deep learning approaches based on multimodal imaging analysis have shown promising results in the diagnosis of these other types of dementia [483].
B. A study used a dataset of 35,900 labeled optical coherence tomography (OCT) images obtained from age-related macular degeneration (AMD) patients to train 3 types of CNNs to perform AMD diagnosis. The AI platform's detection accuracy was generally higher than 90% and was significantly superior (P < 0.001) to that of medical students (69.4% and 68.9%) and equal (P = 0.99) to that of retinal specialists (92.73% and 91.90%). Furthermore, it provided appropriate treatment recommendations comparable to those of retinal specialists [484].
C. The advent of AI, big data, and the next-generation telecommunication network (5G) has generated enormous interest in digital health. This phenomenon is highlighted by new developments in ophthalmology, focusing on AI and other digital innovations, particularly those that are clinically available and could be implemented in the foreseeable future [485].
9. Death:
A. A team of healthcare data scientists and doctors has developed and tested a system of computer-based "machine learning" algorithms to predict the risk of early death due to chronic disease (see the Chronic Disease discussion immediately below) in a largely middle-aged population. They found this AI system was very accurate in its predictions and performed better than the current standard approach to prediction developed by human experts [486].
B. In a study, researchers evaluated 1.1 million ECGs that did not indicate atrial fibrillation (AF) from more than 237,000 patients. They used specialized computational hardware to train a deep neural network to assess 30,000 data points for each ECG. The results showed that approximately 1 in 3 people received an AF diagnosis within a year of death. According to the results, the machine-learning model exhibited efficacy at predicting the 1-year risk of death. Importantly, the researchers observed that the neural network was able to accurately predict the risk of death in patients who were deemed by physicians to have a normal ECG [487].
C. An investigation was conducted to determine if early intensive care unit (ICU) scoring with the Simplified Acute Physiology Score (SAPS 3) could be improved using artificial neural networks (ANNs). The ANNs were constructed using the same parameters as in the SAPS 3 model, and a total of 217,289 admissions were included. The ANN model was found to perform just as well as SAPS 3, but with better calibration (AUC 0.85 and Brier score 0.106). Furthermore, the ANN model was superior in correcting mortality for age [488].
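The ICU study above compares models on discrimination (AUC) and calibration (Brier score). The following is a minimal sketch of how those two measures might be computed for any mortality-risk model, here with a small multilayer perceptron on synthetic data; the data and network are stand-ins, not the published ANN or the SAPS 3 model.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score, brier_score_loss

# Synthetic stand-in for ICU admission features and hospital mortality labels
# (imbalanced classes, as mortality outcomes usually are).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)

risk = model.predict_proba(X_test)[:, 1]                      # predicted probability of death
print("AUC:  ", round(roc_auc_score(y_test, risk), 3))        # discrimination
print("Brier:", round(brier_score_loss(y_test, risk), 3))     # calibration (lower is better)
```

AUC measures how well the model ranks patients by risk, while the Brier score penalizes predicted probabilities that are poorly calibrated, which is why the cited study reports both.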


7.17 Chronic disease

The Centers for Disease Control and Prevention (CDC) defines chronic diseases as "conditions that last 1 year or more and require ongoing medical attention or limit activities of daily living or both" [489]. Another valuable definition from among the multiple definitions in the literature defines chronic conditions as "a physical or mental health condition that lasts more than 1 year and causes functional restrictions or requires ongoing monitoring or treatment" [490]. Chronic diseases such as heart disease, cancer, and diabetes are also the leading causes of death and disability in the United States and are the leading drivers of the nation's $3.5 trillion in annual health care costs. Collectively, they represent the greatest cost in health care and, by far, the leading cause of death in the U.S. and worldwide [491]. This is no surprise when considering the prevalence of chronic diseases: six in 10 adults in the U.S. have a chronic disease, and four in 10 have 2 or more [489]. While there are significant numbers of disease states that can be classified as chronic, the 10 most common represent the greatest contributions to the costs and death rates of chronic conditions in the U.S. Table 7–8 lists 10 of the most common chronic conditions ranked by death rate. However, such prioritized lists vary based on demographic factors (i.e., age, gender, race, geographic location, and socioeconomics) [380]. All of the chronic diseases in the list, as well as comorbidities (see Chapter 6, page 242) directly associated with chronic conditions, have been addressed in numerous sections throughout this text. Specific conditions and statistics, beyond the scope of this discussion, vary widely between the U.S. and the global community, but all are of catastrophic proportions [492,493]. What needs to be considered here regarding chronic diseases is the contribution AI applications are making to the risk factors and public health issues that are producing the chronic disease epidemic, and to their prevention. The "top 10" of these factors include:

1. Tobacco use (and secondhand smoke);
2. Harmful use of alcohol and drugs;
3. Raised blood pressure (or hypertension);
4. Physical inactivity;
5. Raised cholesterol;
6. Overweight/obesity;
7. Unhealthy diet;
8. Raised blood glucose;
9. Socioeconomic, cultural, political and environmental factors;
10. Population, precision health, and prevention.

The following is a very small sample of the most recent and relevant examples in the literature of AI applications relating to the individual chronic condition risk factors listed above. Also, in Chapter 8, you will see that comorbid chronic diseases significantly increase the mortality rate among COVID-19 patients.


1. Tobacco use (and secondhand smoke): A smart, proactive AI system has been developed using a wrist band housing a single Inertial Measurement Unit (IMU) sensor and a smartphone app housing AI based on a Recurrent Neural Network (RNN). To detect smoking puffs, the proposed system uses a 2-step classification scheme: first, a general model categorizes measured activities into Activities of Daily Living (ADL) and Hand Gesture Activities (HGA); then an expert model further categorizes HGAs into smoking, eating, and drinking. The system recognizes smoking activity with an accuracy of 91.38% and provides active vibration feedback to smokers [494].
2. Harmful use of alcohol and drugs: Addicaid is an app that explores the way AI can help individuals struggling with addiction. It combines AI with clinical research to predict when a person may be at risk for a relapse. It also offers treatment suggestions, including phone numbers for treatment lines and centers, and cognitive behavioral therapy options to help curb urges to relapse [495].
3. Raised blood pressure (or hypertension): Since 1999, AI neural network methods and machine learning (ML) have been used to gauge the relationship between BP and multiple cardiometabolic risk factors. Elevated BP is a very important early metric for the risk of development of cardiovascular and renal injury. Therefore, advances in AI and ML will aid in early disease prediction and intervention [496].
4. Physical inactivity: See the section on Nutrition and Exercise below.
5. Raised cholesterol: An automatic diagnostic system was developed based on a deep learning algorithm to diagnose hyperlipidemia using human physiological parameters. It is a neural network that uses technologies of data extension and data correction. The deep learning model can automatically extract all the available information instead of artificially reducing the raw data. It achieved 91.49% accuracy, 87.50% sensitivity, 93.33% specificity, and 87.50% precision with data from the test dataset, making it a highly robust and accurate model that can be used for tentative diagnosis [497].
6. Overweight/obesity: Machine learning provides sophisticated and elegant tools to describe, classify, and predict obesity-related risks and outcomes. A review was conducted of machine learning methods that predict and/or classify, such as linear and logistic regression, artificial neural networks, deep learning, and decision tree analysis. The algorithms were then applied to the National Health and Nutrition Examination Survey to demonstrate the methodology, utility, and outcomes (a minimal illustrative sketch of this kind of workflow appears after this list). This summary of machine learning algorithms provides a unique overview of the state of data analysis applied specifically to obesity [498].
7. Unhealthy diet: See the section on Nutrition and Exercise below.
8. Raised blood glucose: People with Type 1 diabetes must test their blood sugar often to decide how much insulin they should inject using a needle or insulin pump. Until recently, blood sugar could only be tested by performing a finger stick to obtain a blood sample that would be analyzed by a glucose meter, a process most people only do 5–6 times a day. Now, enhanced through AI algorithms, connected glucose monitors and insulin pumps can automatically regulate blood glucose to healthy levels every 5 minutes, without frequent finger sticks. The data analysis in this research will enable engineers to improve models that predict the effect of insulin and meals on glucose levels, yielding better control of blood sugar levels [499].

9. Socioeconomic, cultural and political issues in chronic illnesses: Algorithms like data mining are supposed to eliminate human biases from the decision-making process. But data may simply reflect widespread biases that persist in society at large, or discover surprisingly useful regularities that are really just preexisting patterns of exclusion and inequality. Unthinking reliance on data mining can deny historically disadvantaged and vulnerable groups full participation in society. These concerns must be viewed through the lens of American antidiscrimination law, such as Title VII's prohibitions of discrimination in employment [500].
10. Environmental factors in chronic illnesses: To understand the effects of the environment as a whole (the envirome), it is important to delineate specific domains of the environment and to assess how, individually and collectively, these domains affect cardiovascular health and chronic conditions. A hierarchical model of the envirome defined by 3 consecutively nested domains, consisting of natural, social, and personal environments, was studied. Extensive evidence suggests that features of the natural environment such as sunlight, altitude, diurnal rhythms, vegetation, and biodiversity affect cardiovascular health. These findings could lead to the development of new prevention strategies and deeper insights into etiological processes that contribute to CVD risk and susceptibility [501].
11. Population health and prevention: A study of a total of 450 published diabetes and AI articles found that they represent a diverse and complex set of innovative approaches that aim to transform diabetes care in 4 main areas: automated retinal screening, clinical decision support, predictive population risk stratification, and patient self-management tools. Many of these new AI-powered retinal imaging systems, predictive modeling programs, glucose sensors, insulin pumps, smartphone applications, and other decision-support aids are on the market today, with more coming. AI applications have the potential to transform diabetes care and help millions of diabetics achieve better blood glucose control, reduce hypoglycemic episodes, and reduce diabetes comorbidities and complications [502].
12. Precision health and prevention: In healthcare, the most common application of traditional machine learning is precision medicine (see Chapter 4, page 101). IBM's Watson has received considerable attention in the media for its focus on precision medicine, particularly cancer diagnosis and treatment. Watson employs a combination of machine learning and NLP capabilities. AI has an important role to play in healthcare and chronic diseases in the form of machine learning and its development of precision medicine [503].
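As flagged in item 6 above, methods such as logistic regression and decision trees can be fit to survey-style features to classify obesity-related risk. The sketch below shows that kind of workflow on synthetic data; the features and labels are invented stand-ins, not NHANES variables, and the comparison is purely illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for survey features (diet, activity, demographics)
# with a binary obesity-risk label.
X, y = make_classification(n_samples=2000, n_features=12, n_informative=6, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)

for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("decision tree", DecisionTreeClassifier(max_depth=4))]:
    clf.fit(X_train, y_train)                       # fit on the training split
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: held-out accuracy = {acc:.3f}")
```

In practice, the choice between an interpretable model (logistic regression, shallow tree) and a more flexible one is driven by whether clinicians need to read the risk factors directly off the model.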

7.18 Mental and behavioral disorders

Although sometimes overlooked in the spectrum of prevalent health care issues, mental health and its associated behavioral disorders are among the most significant clinical and


public health problems in the U.S. and worldwide. Mental health can be defined as emotional, psychological, and social well-being, affecting how we think, feel, and act. It determines how we handle stress, relate to others, and make choices, and it affects every stage of life from childhood through adolescence and adulthood [504]. Almost twenty percent of U.S. patients deal with a mental disorder. This ends up costing the United States a total of $193.2 billion per year [505]. Depression alone afflicts roughly 300 million people around the globe. Bipolar disorders are present in roughly 60 million people and schizophrenia in 23 million. Schizophrenia and other psychoses, dementia, developmental disorders, and autism make mental health and behavioral disorders one of the greatest causes of disability in the world [506]. Multiple factors contribute to a person's mental health and behavior. They may be physical (biological "organic" brain dysfunction), physiological (chemical imbalances?), genetic, psychological (depression, emotional trauma, anxiety, and stress), environmental (life experiences, abuse), or any combination thereof. Care and prevention are always available, including psychiatric and psychological care in the form of medications and analysis, and ongoing counseling to address one's distress and prevent recurrences. Among the areas in health and wellness addressed by AI applications and influences, mental health and behavioral disorders constitute a major portion of its programs. From the areas and factors associated with mental illness identified above, I will review 2 recent and relevant literature articles and highlight 1 additional citation for each category. And again, I do so with the understanding that such coverage is but an infinitesimal representation of the full body of literature in the field, and your additional reading and research will undoubtedly yield additional information of interest that you may not find in my selections.
• Biological "organic" brain dysfunction:
1. AI and big-data technologies provide great potential in mental health for personalizing treatment selection, prognosticating, monitoring for relapse, and detecting and helping to prevent mental health conditions before they reach clinical-level symptomatology. There are, however, very few widely used or validated biomarkers in mental health, leading to reliance on patient- and clinician-derived questionnaire data as well as interpretation of new signals such as digital phenotyping. There exist major opportunities, but also limitations, in techniques used for improving mental healthcare through AI and big data [507].
2. A review was conducted of 28 studies of AI and mental health that used electronic health records (EHRs), mood rating scales, brain imaging data, novel monitoring systems (e.g., smartphone, video), and social media platforms to predict, classify, or subgroup mental health illnesses including depression, schizophrenia or other psychiatric illnesses, and suicide ideation and attempts. As AI techniques continue to be refined and improved, they will help mental health practitioners re-define mental illnesses more objectively, identify these illnesses at an earlier or prodromal stage when interventions may be more effective, and personalize treatments based on an individual's unique characteristics [508].


a. Durstewitz D, Koppe G, Meyer-Lindenberg A. Deep neural networks in psychiatry. Mol Psychiatry 2019;24:1583–98.
• Genetic factors:
1. Work conducted using genome-wide approaches during the past several years has invigorated the field and represents the dawn of the molecular genetics of schizophrenia. The aggregate data increasingly support a combination of rare and common genetic variation in schizophrenia, a major role for polygenic inheritance, and genetic overlap of schizophrenia with other psychiatric disorders, such as bipolar disorder and autism. A main challenge for the field is the translation of established genetic associations into a better pathophysiological understanding of schizophrenia [509].
2. Autism spectrum disorder (ASD) is a neuropsychiatric disorder with strong evidence of genetic contribution, and increased research efforts have resulted in an ever-growing list of ASD candidate genes. A machine learning model was constructed by leveraging a brain-specific functional relationship network (FRN) of genes to produce a genome-wide ranking of ASD risk genes. Through functional enrichment analysis of the highly prioritized candidate gene network, a small number of pathways were identified that are key in early neural development, providing further support for their potential role in ASD [510].
a. Lake JH. The future of mental health care: trends and forecast first online. In: An integrative paradigm for mental health care. 2019.
• Depression:
1. Despite its increasingly significant burden and a pressing need for effective treatment, depression has been persistently difficult to treat. A significant barrier has been the symptom heterogeneity present in the diagnosis of major depressive disorder. Machine learning offers the ability to recognize this heterogeneity and model that information in psychiatric disorders. While ethical concerns arise in employing these methods, the benefits are wide-reaching, from personalizing treatment for depression, to the development of AI chatbots that employ psychotherapy, to predicting social outcomes for patients with mental illness. The implications extend far beyond depression treatment, as the epidemiology and service demand for mental healthcare systems continue to grow. Indeed, psychiatry is primed for innovation in AI and machine learning [511].
2. With powerful computational methods, scientists have recently zeroed in on some key features of depressed brains. Those hallmarks include certain types of brain waves in specific locations, like the one just behind and slightly above the eyes. Other researchers are focused on how to correct the faulty brain activity that underlies depression. A small, implantable device capable of both learning the brain's language and then tweaking the script when the story gets dark would be an immensely important clinical tool. Of the 16.2 million U.S. adults with severe depression, about a third don't respond to conventional treatments [512].
a. What causes depression? Harvard Health Publishing; 2019.


• Emotional trauma, anxiety, and stress:
1. A company has developed a wearable device (BioBeats), an app, and a machine learning system that collects data and monitors users' level of stress before predicting when it could be the cause of a more serious or physical health condition. It measures several vital signs, including blood pressure, stroke volume, pulse rate, pulse pressure, heart rate variability, respiratory rate, saturation, cardiac output, cardiac index, and more. The data is transmitted to the application and is available on the individual's cellular phone, tablet, or as a full monitoring system in a hospital department. When combined with in-app mood-tracking, deep breathing exercises, and executive function tests, it can provide a comprehensive overview of one's mental and physical wellbeing [513].
2. A dataset of suicide-related tweets was collected from Twitter streaming data with a multiple-step pipeline including keyword-based retrieving, filtering, and further refining using an automated binary classifier. Specifically, a convolutional neural network (CNN) based algorithm was used to build the binary classifier, and psychiatric stressors were then annotated in the suicide-related tweets. The CNN led the performance at identifying suicide-related tweets, with a precision of 78% and an F-1 measure of 83% (a simplified illustrative classifier sketch follows this section). The results indicate the advantages of deep learning-based methods for automated stressor recognition from social media [514].
a. Triantafyllidis AK, Tsanas A. Applications of machine learning in real-life digital health interventions: review of the literature. JMIR 2019;21(4).
• Abusive life experiences:
1. An experimental protocol was developed to carefully extract as much relevant information as possible about a crime of child abuse. AI was used to help young victims tell their stories. The question was, "Can AI support interviewers with tools to help appropriately gather information, and can such computer-aided tools accurately assess the productivity of forensic interviews?" The hope is that the availability of large datasets of question-and-answer interactions, and rigorous mathematical models of how children respond and are influenced by interviewer inputs, will provide better tools for those who work as child advocates [515].
2. Domestic violence is a serious issue plaguing families across the United States. Assessing its frequency is difficult due to underreporting. AI can provide a way for health providers to identify those at risk of domestic violence. It can help identify worrisome injury patterns or discrepancies between patient-reported histories and the abnormalities that are present on x-rays. By connecting these dots, AI could notify providers when further investigation is warranted. Brigham & Women's Hospital in Boston uses AI in this way, enabling radiologists to provide even more value to patients instead of simply evaluating their injuries. It has proven to be a powerful tool for clinicians and social workers, allowing them to approach patients with confidence and with less worry about offending the patient or the spouse [516].
a. Livingston S, Risse M. The future impact of artificial intelligence on humans and human rights. Ethics Int Aff 2019;33(2):141–58. Cambridge University Press.
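The suicide-related tweet study referenced above used a CNN-based classifier; reproducing that architecture is beyond a short example, so the sketch below substitutes a much simpler TF-IDF plus logistic-regression baseline for the same binary "related vs. not related" filtering step. The example texts and labels are invented purely for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set; a real pipeline would use a large set of
# annotated tweets retrieved by keyword filtering, as in the cited study.
texts = ["I can't go on anymore", "great run this morning",
         "nobody would miss me", "lunch with friends today"]
labels = [1, 0, 1, 0]   # 1 = suicide-related, 0 = not related

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["feeling like ending it all", "new recipe turned out well"]))
```

A baseline of this kind is typically reported alongside the deep model so that the precision and F-1 gains attributed to the CNN can be judged against a simpler alternative.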


7.19 Nutrition and exercise (preventive care)

Ending this chapter and the book on nutrition, exercise, and preventive care is not by chance. As you may have noticed, prevention has become a recurring topic throughout most categories of discussion in the book. The reason and objective for that relate to the simple fact that, after all is said and done in health care, it's prevention that dictates our ultimate state of health and wellbeing. If I may offer my personal opinion here (which I have avoided throughout the text), I submit to you my belief that nutrition and exercise are indeed the 2 most important considerations in health and wellness and, especially, in prevention. Almost everything we have discussed regarding health, wellness, and prevention, including everything presented in the AI health and wellness literature and reviews, assumes proper nutrition and reasonable exercise of mind and body as the foundational core. The proven facts are that physically, mentally, physiologically, and even genetically, good health, wellness, and prevention spring from good nutrition and exercise.

7.19.1 Physical exercise

For the past 50+ years, every morning at the gym or after a run, I ask myself, "How can something so good feel so bad?" My learned answer is always, "Don't ask!" And now, healthy and well at age 77, I ask myself, "Is it my genome or the exercise?" And again, my learned answer is always, "Don't ask." In July 2017, the journal Preventive Medicine presented a seminal article on an aging study titled "A National Health and Nutrition Examination Survey (NHANES 1999–2002) investigation" [517]. The article concluded that ". . . the more exercise people get, the less their cells appear to age. People who exercised the most had biological aging markers that appeared 9 years younger than those who were sedentary." This study validated Elizabeth Blackburn's 2009 Nobel Prize-winning discovery of how chromosomes play a key role in determining the lifespan of cells and the processes of cell aging and cancers through the protection of their telomeres and the enzyme telomerase [518–520]. So, you ask, "what are telomeres and telomerase?" At the beginning of this chapter, I stated that genetics ". . . permeate every aspect of human health and disease." I did not mention, however, that they also govern the aging process through DNA degradation from telomere shrinkage. At the ends of chromosomes are stretches of DNA called telomeres (Fig. 7–5). These DNA "tips" are tightly packed chains of chemical code made up of the 4 nucleic acid bases (guanine, adenine, thymine, and cytosine), described earlier (page 308), that make up the DNA strands. In white blood cells, telomeres range from 8000 base pairs in newborns, decreasing to 3000 base pairs in adults and as low as 1500 in elderly people. This diminution of telomeres is a product of normal cell division, which occurs about 50–70 times during a normal life span and is considered part of the aging process. With these cell divisions, the telomeres continually shorten and eventually reach a point where the cells can no longer divide; they become inactive or "senescent" and die. This process is believed to be the etiology of aging.


FIGURE 7–5 Telomeres. At the ends of chromosomes (“tips”) are stretches of DNA called telomeres which are chains of chemical code made up of the four nucleic acid bases (guanine, adenine, thymine, and cytosine). Source: iStock.

Telomere shortening also has been associated with cancer and a higher risk of death. Telomerase is the enzyme that adds nucleic bases to the ends of telomeres in young cells to keep them from wearing down too much. But as cells divide repeatedly with age, there is not enough telomerase to maintain cell integrity, and thus the telomeres grow shorter and the cells age [521]. The results of Blackburn's research produced 4 critical findings regarding aging:
1. Regular physical activity accounted for significantly longer telomeres in U.S. adults;
2. Regular physical activity reduces disease risk through the preservation of telomeres;
3. The longer telomeres found in active adults accounted for 9 years of reduced cellular aging; and
4. Contrary to commercial promotions of telomerase supplements in aging, there appears to be no proof that such supplementation contributes to telomere longevity [517].
The research concludes that the only factor directly shown to preserve telomere integrity during aging, decreasing shortening and promoting increased length, is continued physical activity and exercise. The ubiquitous recommendations about exercise are indeed based on hard scientific research and facts, not anecdotal information as some think. Preventive medicine experts extracted 5 recommendations from the research:
1. Exercise 3 times per week for at least 45 minutes;
2. Reduce belly fat and avoid unstable weight gain/loss;


3. Proper nutrition (discussed below);
4. Stress/anxiety reduction and body/mind meditation;
5. Avoid any telomerase supplements.
The research cited above utilized AI machine learning and big data analytics in its statistical analyses and conclusions, as is standard now in almost all research. And beyond Nobel Prize-winning research in physical activity and exercise, AI is playing a substantial role in improving health, wellness, and preventive care in the population, as the 3 recent literature reviews below exemplify.
• Data consisting of 1,382,284 geotagged physical activity tweets from 481,146 users on Twitter in more than 2900 counties were captured. Machine learning and statistical modeling were applied to demonstrate sex and regional variations in preferred exercises and to assess the association between reports of physical activity on Twitter and population-level inactivity prevalence from the US Centers for Disease Control and Prevention. The regional-specific and sex-specific activity patterns captured may allow public health officials to identify changes in health behaviors at small geographical scales and to design interventions best suited for specific populations [522].
• Accelerometry data were processed by 5 methods to estimate physical activity in older adults: a 1041 vertical axis cut-point; a 15-second vector magnitude (VM) cut-point; a 1-second VM algorithm (Activity Index (AI)); a machine-learned walking algorithm; and an individualized cut-point derived from a 400-meter walk. Each method tested showed a significant relationship between change in physical activity and improved physical function and depressive symptoms (a minimal illustrative sketch of the cut-point approach follows these examples) [523].
• Machine learning (ML) can enhance the prediction of cardiopulmonary outcomes through classification techniques that sort the data into predetermined categories. A comparative analysis was conducted to assess the prediction of 10-year all-cause mortality (ACM) using statistical logistic regression (LR) and ML approaches in a cohort of patients who underwent exercise stress testing. The analysis demonstrates that ML provides better accuracy and discrimination in the prediction of ACM among patients undergoing stress testing [524].
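The accelerometry example above mentions simple cut-point methods, which label each epoch of accelerometer output as active or sedentary by thresholding. Below is a minimal sketch of a vertical-axis cut-point applied to simulated per-minute counts; the simulated data are invented, only the 1041 threshold is taken from the method list above, and treating it as counts per minute is an assumption for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical one day (1440 minutes) of per-minute vertical-axis accelerometer counts.
counts_per_minute = rng.gamma(shape=1.5, scale=400, size=1440)

CUT_POINT = 1041  # threshold from the method list above; units assumed to be counts/min
active = counts_per_minute >= CUT_POINT

print("active minutes:", int(active.sum()))
print("percent of day active:", round(100 * active.mean(), 1))
```

The more sophisticated methods in the cited comparison (the Activity Index, machine-learned walking detection, and individualized cut-points) replace this single fixed threshold with models or person-specific calibration, which is why they were evaluated side by side.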

7.19.2 Nutrition

The second half of the formula for good health, wellness, and prevention is nutrition. Combined with physical activity, your diet can help you reach and maintain a healthy weight, reduce your risk of chronic diseases (like heart disease and cancer), and promote your overall health. Conversely, poor diet and nutrition contribute to the obesity epidemic in the United States. In the U.S., 33.8% of adults are obese (defined by the National Institutes of Health (NIH) as a BMI [Body Mass Index] of 30 and above [525]). Even more serious as a public health danger is obesity in children, at 17% (or 12.5 million) of children and adolescents aged 2–19. This risk factor (obesity) is the single greatest cause of increased adult


chronic diseases and type 2 diabetes (no longer referred to as "adult diabetes") in children and adolescents [526]. Given the importance of good nutrition and the dangers of poor dietary habits, direct and indirect AI programs addressing this personal and public health issue have grown exponentially. I would not dare to analyze or opine on any 1 specific diet from among the literal thousands that exist in the literature. Rather, 1 particular "generic" effort has produced a new and dynamic approach to the analysis and maintenance of healthy eating habits. The research was reported in a 2015 paper in the journal Cell entitled "Personalized Nutrition by Prediction of Glycemic Responses" [527]. Subsequently, the study was highlighted in a 2019 in-depth New York Times article ("The AI Diet") by Eric Topol, M.D., a world-renowned AI health care expert, cardiologist, and Director of the Scripps Research Translational Institute [528]. The following is a summary of the program. Notwithstanding fad diets and government-issued food pyramids, little is understood about the science of nutrition. Multiple studies over the years have rarely produced validated conclusions of cause and effect. Some studies with hundreds of thousands of participants have yielded results and claims of lowering the risks of major diseases (e.g., heart attacks and strokes [529]), only to be followed by retractions, confusion, contradictions, and fabrications by a food industry that attempts to bias the studies it funds. The only accurate conclusion arrived at seems to be that there is no optimal diet for all people. Thanks to big data analytics, AI has begun to reveal that a universal diet is a biological and physiological myth. Those parameters that make each of us unique, like human metabolism, microbiome, and individual environments, are the reason a diet itself must also be unique to an individual. Perhaps the answer lies in the evolving concept of "nutrigenomics," where a DNA test can provide individualized dietary guidelines ("precision nutrition") for what foods you should eat. But this theory has yet to be proven as well. Assuring accuracy in such an "omics" method would require billions of pieces of data about each person, in addition to analyzing the 40 trillion bacteria from about 1000 species that reside in our guts. And beyond that biological assessment, additional millions of demographic, physiologic, and environmental factors would be required. Unfortunately, such massive computations even exceed current AI capabilities. The 2015 paper cited above ("Personalized Nutrition by Prediction of Glycemic Responses") measured spikes in blood glucose levels in response to eating, which are thought to be an indicator of diabetes risk [530]. These metrics provided the first objective proof that individuals do indeed respond quite differently to eating the same foods in the same amounts. Individual data on 800 subjects (without diabetes) included the time of each meal, food and beverage amount and content, physical activity, height, weight, and sleep. For 1 week, the subjects had their blood and gut microbiome inhabitants assessed and their blood glucose monitored. They ate more than 5000 standardized meals provided by the researchers containing chocolate and ice cream, as well as nearly 47,000 meals that consisted of their usual food intake. In total, there were more than 1.5 million glucose measurements made. This testing resulted in billions of data points. Such a sample can be classified as "a big data set."
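Data of this kind can then be fed to a machine-learning regressor to predict each person's post-meal glucose response, which is the analysis described in the following paragraph. The sketch below shows that type of regression on synthetic data; the feature names, the data, and the choice of gradient boosting are assumptions made for illustration, not a reproduction of the published model or its inputs.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
n = 3000
# Hypothetical per-meal features: carbohydrates, fiber, prior sleep, prior
# activity, and one summary microbiome feature (all standardized).
X = rng.normal(size=(n, 5))
# Synthetic "glucose spike" target with a nonlinear carb-by-microbiome interaction,
# mimicking the idea that the same meal can affect individuals differently.
y = 30 + 12 * X[:, 0] - 4 * X[:, 1] + 6 * X[:, 0] * X[:, 4] + rng.normal(scale=5, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out meals:", round(r2_score(y_te, model.predict(X_te)), 3))
```

The interaction term in the synthetic target is the point of the example: a model flexible enough to capture person-specific interactions is what allows "the same food, different response" findings of the kind the study reports.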


Using machine learning, the data points were analyzed (big data analytics) to identify the factors that drove the glucose response to specific foods for each individual. From this, an algorithm was built, free of human biases. More than a hundred factors were found to be involved in the glycemic response, but of significant interest, food wasn't the key determinant. It was the gut bacteria. This landmark discovery proved 2 important facts: (1) that our gut microbiome plays a big role in our unique response to food intake; and (2) that the discovery was made possible by AI. An interesting follow-up to the 2015 study was conducted by (actually "on") Dr. Topol himself. After going through the study protocol, the algorithm revealed that his gut microbiome had 1 significantly elevated bacterium (Bacteroides stercoris), at 27% versus a normal 2% level. The result was specific food recommendations to avoid glucose spikes; unfortunately, the recommendations contraindicated foods he enjoyed and, ironically, suggested food sources he disliked. "If I wanted to avoid glucose spikes, I'd have to make some pretty big sacrifices in my diet." Tough decisions. The algorithm developed from this study is now commercially available (www.daytwo.com), as are other AI diet-related programs, including a smartphone app that allows a person to take a photo of a plate of food, which is then analyzed with deep learning algorithms to accurately determine what you are eating. But again, these programs are limited without multiple types of data, like activity, sleep, level of stress, medications, genome, microbiome, and glucose. However, with multiple IoT devices like skin patches and smartwatches, additional data can be acquired for advanced algorithms to assess. Indeed, in the next few years, a virtual health coach will be available with deep learning capabilities to analyze an individual's relevant health metrics and provide them with customized dietary recommendations. Certainly, such recommendations, if proven worthwhile, will need updating from time to time with life changes and aging. And so, the bottom line on health and wellness, from what we eat, to how we live, to how we think and relate to the world we live in, is assisted and arguably enhanced by artificial intelligence. Yes, it is a disruptive technology, and it may be just what the doctor ordered.

References [1] National Institute of Health. The human genome project. National Human Genome Research Institute; 2019. [2] Giasuddin ASM. Role of immunologists in the development of health care system. J Immunol Immunother 2017;1(1):2. [3] The innate immune system. Immunopaedia; 2019. [4] Han S. World Health Organization (WHO) ranks chronic diseases as the greatest threat to human health. HealthLine; 2018. [5] Orbai AM. Autoimmune disease: why is my immune system attacking itself? Johns Hopkins Health; 2019. [6] Kuchroo VK, Ohashi PS, Sartor RB, et al. Dysregulation of immune homeostasis in autoimmune diseases. Nat Med 2012;18:42 7. [7] Serhan CN, Ward PA, Gilroy DW. Fundamentals of inflammation. Yale J Biol Med 2011;84(1):64 5.


[8] Allergy and the Immune System. Johns Hopkins Health. 2019. [9] Chen L, Deng H, Cui H, et al. Inflammatory responses and inflammation-associated diseases in organs. Oncotarget 2018;9(6):7204 18. Available from: http://doi.org/10.18632/oncotarget.23208. [10] Pahwa R, Jialal I. Chronic inflammation. StatPearls; 2019. [11] Primary immunodeficiency. Mayo Clinic; 2019. [12] Eustice C. Autoimmune disease types and treatment. VeryWellHealth; 2019. [13] Autoimmune diseases: types, symptoms, causes, and more. HealthLine; 2019. [14] Cohen S, Cannella A. Patient education: disease-modifying antirheumatic drugs (DMARDs) (Beyond the Basics). UpToDate; 2019. [15] Ogbru O. Biologics (Biologic drug class). MedicineNet; 2019. [16] Mandal A. Autoimmune disease development of therapies. News Medical Life Science; 2019. [17] Immunotherapies for autoimmune diseases. Nat Biomed Eng 2019;3(247). [18] National Cancer Institute. Dictionary. National Institute of Health; 2019. [19] NIH. Stem cell information. Autoimmune diseases and the promise of stem cell-based. [20] Eguizabal C, Aran B, Geens M, et al. Two decades of embryonic stem cells: a historical overview. Human Reprod 2019;1 17. Available from: http://doi.org/10.1093/hropen/hoy024. [21] FDA warns about stem cell therapies. U.S. Food and Drug Administration; 2019. [22] Zhang Z, Zhang Y, Gao F, et al. CRISPR/Cas9 genome-editing system in human stem cells: current status and prospects. Mol Ther Nucleic Acids 2017;9:230 41. [23] Avior Y, Lezmi E, Yanuka D, et al. Modeling developmental and tumorigenic aspects of trilateral retinoblastoma via human embryonic stem cells. Stem Cell Rep 2017;8(5):1354 65. [24] Benhenda M. How to better predict cancer immunotherapy results. Medium AI Lab; 2019. [25] Allen F, Parts L, et al. Predicting the mutations generated by repair of Cas9-induced double-strand breaks. Nat Biotechnol 2019;37:64 72. [26] Genetics Home Reference. What are genome editing and CRISPR-Cas9? NIH. National U.S. Library of Medicine. USA.gov; 2019. [27] van Overbeek M, Capurso D, Carter MM, et al. DNA repair profiling reveals nonrandom outcomes at Cas9-mediated vreaks. Mol Cell 2016;63(4):P633 46. [28] Shen MW, Sherwood R, et al. Predictable and precise template-free CRISPR editing of pathogenic variants. Nature 2018;563:646 51. [29] June CH, O’Connor RS, Kawalekar OU, et al. CAR-T cell immunotherapy for human cancer. Science 2018;359:1361 5. Available from: http://doi.org/10.1126/science.aar6711. [30] Gill S, June CH. Going viral: chimeric antigen receptor T-cell therapy for hematological malignancies. Immunol Rev 2015;263:68 89. Available from: http://doi.org/10.1111/imr.12243. [31] Minutolo NG, Hollander EE, Powell Jr DJ. The emergence of universal immune receptor T cell therapy for cancer. Front Oncol 2019. [32] Wikipedia. Plasmids. 2019. [33] Shank BR, Do B, Sevin A, et al. Chimeric antigen receptor T cells in hematologic malignancies. Pharmacotherapy 2017;37(3):334 45. [34] Porter DL, Hwang W-T, Frey NV, et al. Chimeric antigen receptor T cells persist and induce sustained remissions in relapsed refractory chronic lymphocytic leukemia. Sci Transl Med 2015;7:303ra139. Available from: http://doi.org/10.1126/scitranslmed.aac5415.


[35] Fry TJ, Shah NN, Orentas RJ, et al. CD22-targeted CAR-T cells induce remission in B-ALL that is naive or resistant to CD19-targeted CAR immunotherapy. Nat Med 2017;24:20. Available from: http://doi.org/ 10.1038/nm.4441. [36] Rupp LJ, Schumann K, Roybal KT, et al. CRISPR/Cas9-mediated PD-1 disruption enhances anti-tumor efficacy of human chimeric antigen receptor T cells. Sci Rep 2017;7(1):737. [37] March RJ. Why this new gene therapy drug costs $2.1 million 2019. Foundation for Economic Education; 2019. [38] Hildreth C. Cost of stem cell therapy and why it’s so expensive. BioInformant; 2018. [39] Ramina G. Regulation and oversight of gene therapy in the US. Regulatory Focus.org; 2017. [40] Disease Development. How do autoimmune diseases unfold? Johns Hopkins Medicine Pathology; 2019. [41] Campbell M. Genotype vs. phenotype: examples and definitions. Genomic Research; 2019. [42] National Institute of Environmental Health Science. Autoimmune disease. NIH. USA.gov; 2019. [43] Ramos PS, Shedlock AM, Langefeld CD. Genetics of autoimmune diseases: insights from population genetics. J Hum Genet 2015;60(11):657 64. [44] Noorbakhsh-Sabet N, Zand R, Zhang Y, et al. Artificial intelligence transforms the future of health care. Am J Med 2019;132(7):795 801. [45] Garrett L. Pioneering a new era in human health. Human Vaccines Project; 2019. [46] Soto C, Bombardi RG, Branchizio A, et al. High frequency of shared clonotypes in human B cell receptor repertoires. Nature 2019;566:398 402. [47] Press Release. Decoding the human immune system. Human Vaccines Project; 2019. [48] Genetics Home Reference. What is the difference between precision medicine and personalized medicine? NIH. National U.S. Library of Medicine. USA.gov; 2019. [49] Genetics Home Reference. What is the precision medicine initiative? NIH. National U.S. Library of Medicine. USA.gov; 2019. [50] Genetic disorders. National Human Genome Research Institute. NIH. USA.gov; 2018. [51] Scally A. The mutation rate in human evolution and demographic inference. Curr Opin Genet Dev 2016;41:36 43. [52] Zhang S. Your body acquires trillions of new mutations every day, and it’s somehow fine? The Atlantic; 2018. [53] Stanford at the Tech Museum understanding genetics. Mutations and disease. Stanford at the Tech; 2019. [54] Genetics Home Reference. What is a gene mutation, and how do mutations occur? NIH. National U.S. Library of Medicine. USA.gov; 2019. [55] MedGen. National Center for Biotechnology Information, U.S. National Library of Medicine; 2017. [56] Genetics Home Reference. How are genetic conditions diagnosed? NIH. National U.S. Library of Medicine. USA.gov; 2019. [57] Medline Plus. Genetic testing. NIH. National Library of Medicine; 2019. [58] Collins F. Whole-genome sequencing plus AI yield same-day genetic diagnoses. NIH Director’s Blog. NIH.gov; 2019. [59] Clark MM, Hildreth A, Batalov S, et al. Diagnosis of genetic diseases in seriously ill children by rapid whole-genome sequencing and automated phenotyping and interpretation. Sci Transl Med 2019;11 (489):eaat6177. [60] Wetterstrand KA. The cost of sequencing a human genome. National Human Genome Research Institute. NIH. USA.gov; 2019. [61] Khan Academy. DNA sequencing. Creative Commons, Rice University; 2019.


[62] National Human Genome Research Institute. The cost of sequencing a human genome. https://www. genome.gov/sequencingcosts/; 2017. [63] Davies SC. Annual report of the chief medical officer 2016, generation genome. London: Department of Health; 2017. [64] What to do if your genetic test results are positive. BreastCancer.org; 2019. [65] Center for Genetics and Society. Human genetic modification. 2019. [66] Approved cellular and gene therapy products. U.S. Food and Drug Administration; 2019. [67] Friedmann T. Genetic therapies, human genetic enhancement, and . . . eugenics? Gene Ther 2019;26:351 3. [68] Editorial. Germline gene-editing research needs rules. Nature 2019;567:145. [69] Solomon MZ. Gene editing humans: it’s not just about safety. Sci Am 2019. [70] Medline Plus. Cloning. U.S. Department of Health and Human Services. National Institutes of Health. U. S. National Library of Medicine; 2019. [71] Genetics Home Reference. What are some potential benefits of precision medicine and the precision medicine initiative? NIH. National U.S. Library of Medicine. USA.gov; 2019. [72] Hillis WD. Edge master class 2010: W. Daniel Hillis on “Cancering.” Edge.org; 2019. [73] Weinberg RA. Cancer, mutations, and the facts of life. [email protected]. The Guardian; 2019. [74] Tomasetti C, Li L, Vogelstein B. Stem cell divisions, somatic mutations, cancer etiology, and cancer prevention. Science 2017;355(6331):1330 4. [75] Belikov A, Aleksey V. The number of critical carcinogenic events can be predicted from cancer incidence. Sci Rep 2017;7(1):12170. [76] Epigenetics simplified. A super brief and basic explanation of epigenetics for total beginners. What Is Epigenetics; 2018. [77] Genetics Home Reference. Tumor protein p53. NIH. National U.S. Library of Medicine. USA.gov; 2019. [78] Infectious agents. National Cancer Institute at the National Institutes of Health; 2019. [79] Cancer overview. What is cancer staging? Cleveland Clinic; 2019. [80] Allen D. AI could revolutionize gene therapy. Artificial intelligence, innovation focus. MedicalExpo 2 e-magazine; 2018. [81] Toratani M, Konno M, Asai A, et al. A convolutional neural network uses microscopic images to differentiate between mouse and human cell lines and their radioresistant clones. Cancer Res 2018;78 (23):6703. Available from: http://doi.org/10.1158/0008-5472.CAN-18-0653. [82] Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017;542:115 18. [83] What do you want to know about cancer? Healthline; 2019. [84] Cancer types. National Cancer Institute of NIH; 2019. [85] American Cancer Society Facts & Figures annual report for 2018. [86] Cancer facts & figures, 2019. American Cancer Society; 2019 [87] NCHS Health E-Stats. National Center for Health Statistics. Center for Disease Control (CDC); 2019. [88] Siegel RL, Miller KD, Jemal A. Cancer statistics. ACS J 2020. Available from: http://doi.org/10.3322/caac.21590. [89] Cancer chemotherapy. MedLinePlus. NIH. USA.gov; 2019. [90] Radiation therapy. MedLinePlus. NIH. USA.gov; 2019. [91] Targeted cancer therapies. National Cancer Institute. NIH; 2019.


[92] Zitnik M, Agrawal M, Leskovec J. Modeling polypharmacy side effects with graph convolutional networks. Bioinformatics 2018;34:i457 66. [93] Kang J, Schwartz R, Flickinger J, Beriwal S. Machine learning approaches for predicting radiation therapy outcomes: a clinician’s perspective. Int J Radiat Oncol 2015;93:1127 35. [94] Lee S, Kerns S, Ostrer H, et al. Machine learning on a genome-wide association study to predict late genitourinary toxicity after prostate radiation therapy. Int J Radiat Oncol Biol Phys 2018;101:128 35. [95] Ibragimov B, Toesca D, Chang D, et al. Development of deep neural network for individualized hepatobiliary toxicity prediction after liver SBRT. Med Phys 2018;45:4763 74. [96] Zhen X, Chen J, Zhong Z, et al. Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study. Phys Med Biol 2017;62:8246 63. [97] Kuan K, Ravaut M, Manek G, et al. Deep learning for lung cancer detection: tackling the Kaggle Data Science Bowl 2017 challenge, ,http://arxiv.org/abs/1705.09435.; 2017 [accessed 14.02.19]. [98] Ribli D, Horváth A, Unger Z, et al. Detecting and classifying lesions in mammograms with deep learning. Sci Rep 2018;8:4165. [99] Sun R, Limkin EJ, Vakalopoulou M, et al. A radiomics approach to assess tumor-infiltrating CD8 cells and response to anti-PD-1 or anti-PD-L1 immunotherapy: an imaging biomarker, retrospective multicohort study. Lancet Oncol 2018;19:1180 91. [100] Wang J, Cao H, Zhang JZH, Qi Y. Computational protein design with deep learning neural networks. Sci Rep 2018;8:6349. [101] Eulenberg P, Köhler N, Blasi T, et al. Reconstructing cell cycle and disease progression using deep learning. Nat Commun 2017;8:463. [102] Buggenthin F, Buettner F, Hoppe PS, et al. Prospective identification of hematopoietic lineage choice by deep learning. Nat Methods 2017;14:403 6. [103] Artemov AV, Putin E, Vanhaelen Q, et al. Integrated deep learned transcriptomic and structure-based predictor of clinical trial outcomes, ,https://www.biorxiv.org/content/10.1101/095653v2.; 2016. [104] Menden MP, Iorio F, Garnett M, et al. Machine learning prediction of cancer cell sensitivity to drugs based on genomic and chemical properties. PLoS One 2013;8:e61318. [105] Castelvecchi D. Can we open the black box of AI? Nat News 2016;538:20. [106] Key facts. Cardiovascular diseases (CVDs). World Health Organization (WHO); 2019. [107] Heart disease and stroke statistics-2019 update: a report from the American Heart Association. Circulation; 2019. [108] Blood basics. American Society of Hematology; 2019. [109] Vascular conditions. Society of Vascular Surgery; 2019. [110] Fogoros RN. Why is right-sided heart failure different? VeryWell, Health; 2019. [111] Villines Z, Kohli P. What to know about congestive heart failure. Medical News Today; 2019. [112] Anemias. Mayo Foundation for Medical Education and Research; 2019. [113] Brier ME, Gaweda AE. Artificial intelligence for optimal anemia management in end-stage renal disease. Kidney Int 2016;90(2):259 61. Available from: http://doi.org/10.1016/j.kint.2016.05.018. [114] Pennic J. Dosis expands AI-powered strategic anemia advisor to 50 clinics. HIT; 2019. [115] Carroll L. Smartphone app could screen for anemia. Reuters Health News; 2018. [116] Luo EK, Kahn A. Bleeding disorders. Healthline; 2018. [117] Lasson F, Delamarre A, Redou P, et al. 
A clinical decision support system to help the interpretation of laboratory results and to elaborate a clinical diagnosis in blood coagulation domain. Adv Comput Intell 2019;109 22.


[118] Chandra S, Sumijan S, Mandala EPW. Expert system for diagnosing hemophilia in children using casebased reasoning. Indonesian J Artif Intell Data Mining 2019;2(1). [119] Swystun LL, Lillicrap D. Genetic regulation of plasma von Willebrand factor levels in health and disease. J Thromb Haemost 2018;16(12):2375 90. [120] Kim M, Snowdon J, Weeraratne SD, et al. Clinical insights for hematological malignancies from an artificial intelligence decision-support tool. J Clin Oncol 2019;e13023. [121] Sevindik OG. Artificial intelligence to assist better myeloma care, is it the time? Clin Lymphoma, Myeloma Leuk 2019;19(10), e356. [122] El H, Belousova T, Chen L, et al. Automated diagnosis of lymphoma with digital pathology images using deep learning. Ann Clin Lab Sci 2019. [123] Blood clots: management and treatment. Cleveland Clinic; 2019. [124] Carfagno J. Machine learning algorithm helps doctors make decisions in stroke management. DocWire; 2019. [125] Tolhuisen ML, Ponomareva E, Koopman MS, et al. Artificial intelligence based detection of large vessel occlusion on non-contrast computed tomography in stroke. Stroke; 2019. [126] Bae Y, Kang SJ, Kim G, et al. Prediction of coronary thin-cap fibroatheroma by intravascular ultrasound-based machine learning. Atherosclerosis 2019;288:168 74. [127] Verde L, De Pietro G. A neural network approach to classify carotid disorders from heart rate variability analysis. Comput Biol Med 2019;109:226 34. [128] Yoneyama H, Nakajima K, Taki J, et al. Ability of artificial intelligence to diagnose coronary artery stenosis using hybrid images of coronary computed tomography angiography and myocardial perfusion SPECT. Eur J Hybrid Imaging 2019;3:4. [129] Sardar P, Abbott JD, Kundu A, et al. Impact of artificial intelligence on interventional cardiology: from decision-making aid to advanced interventional procedure assistance. JACC: Cardiovasc Interventions 2019;12(14). [130] Chronic venous insufficiency. Johns Hopkins Medicine; 2019. [131] Fukaya E, Flores AM, Gustafsson S, et al. Clinical and genetic determinants of varicose veins prospective, community-based study of  500000 individuals. Circulation; 2018. Available from: https://www. ahajournals.org/doi/abs/10.1161/CIRCULATIONAHA.118.035584. [132] Nardelli P, Jimenez-Carretero D, Bermejo-Pelaez D, et al. Pulmonary artery-vein classification in CT images using deep learning. IEEE Trans Med Imaging 2018;37(11):2428 40. Available from: https:// doi.org/10.1109/TMI.2018.2833385. [133] Fan L, Han J, DuanChen J, et al. Computer-aided diagnosis for cerebral venous sinus thrombosis. In: International conference on mechatronics and intelligent robotics, ICMIR. 2018. pp 1112 9. [134] Kent C. AI surpasses humans in predicting heart attack and death. Verdict; 2019. [135] Hannun AY, Rajpurkar P, Haghpanahi M, et al. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Nat Med 2019;25:65 9. [136] Ankit A, Manish B, RuSan-Tan S, et al. An efficient detection of congestive heart failure using frequency localized filter banks for the diagnosis with ECG signals. Cogn Syst Res 2019;55:82 94. [137] Johnson KM, Johnson HE, Zhao Y, et al. Scoring of coronary artery disease characteristics on coronary CT angiograms by using machine learning. Radiology 2019. Available from: https://doi.org/10.1148/ radiol.2019182061. [138] Khan RU, Hussain T, Quddus H, et al. An intelligent real-time heart diseases diagnosis algorithm. IEEE Xplore; 2019. 
[139] Bom MJ, Levin E, Driessen RS, et al. Predictive value of targeted proteomics for coronary plaque morphology in patients with suspected coronary artery disease. Ebiomedicine 2019;40:23 4.

428

Foundations of Artificial Intelligence in Healthcare and Bioscience

[140] Arafati A, Hu P, Finn J, et al. Artificial intelligence in pediatric and adult congenital cardiac MRI: an unmet clinical need. Cardiovasc Diagn Ther 2019. [141] Diller GP, Kempny A, Babu-Narayan SV, et al. Machine learning algorithms estimating prognosis and guiding therapy in adult congenital heart disease: data from a single tertiary center including 10 019 patients. European Heart J 2019;40(13):1069 77. [142] Marelli A. The future of adult congenital heart disease research: precision health services delivery for the next decade. Can J Cardiol 2019. [143] Pereira VM, Donner Y, Levi G, et al. Artificial intelligence to improve the detection and triage of cerebral aneurysms. Stroke 2020;51:A141. [144] Yeo LL, Engin M, Lange R, et al. Automatic segmentation of cerebral arteries in MRA TOF using deep learning. Stroke 2020;51(Suppl. 1). Abstract WP269. [145] Lee H, Lee E, Ham S, et al. Machine learning approach to identify stroke within 4.5 hours. Stroke 2020. Available from: http://doi.org/10.1161/STROKEAHA.119.027611. [146] Poplin R, Varadarajan AV, Blumer K, Liu Y, et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat Biomed Eng 2018;2(3):158 64. [147] Arteries cardio DL cloud MRI analytics software receives FDA clearance. Diagnostic and Interventional Cardiology; 2017. [148] Yan Y, Zhang JW, Zang GY, et al. The primary use of artificial intelligence in cardiovascular diseases: what kind of potential role does artificial intelligence play in future medicine? J Geriatr Cardiol 2019;16 (8):585 91. [149] Niederer SA, Lumens J, Trayanova NA. Computational models in cardiology. Nat Rev Cardiol 2019;16:100 11. [150] Choi E, Biswal S, Malin B, Duke J, Stewart WF, Sun J. Generating multi-label discrete patient records using generative adversarial networks. arXiv preprint arXiv:1703.06490. Available at: ,https://arxiv.org/ abs/1703.06490.; 2017 [accessed 11.08.19]. [151] Cikes M, Sanchez-Martinez S, Claggett B, et al. Machine learning-based phenogrouping in heart failure to identify responders to cardiac resynchronization therapy. Eur J Heart Fail 2019;21:74 85. [152] Juarez-Orozco LE, Knol RJJ, Sanchez-Catasus CA, et al. Machine learning in the integration of simple variables for identifying patients with myocardial ischemia. J Nucl Cardiol 2018. [Epub ahead of print]. [153] Ahmad T, Lund LH, Rao P, et al. Machine learning methods improve prognostication, identify clinically distinct phenotypes, and detect heterogeneity in response to therapy in a large cohort of heart failure patients. J Am Heart Assoc 2018;7(8):e8081. [154] Haendel MA, Chute CG, Robinson PN. Classification, ontology, and precision medicine. N Engl J Med 2018;379:1452 62. [155] National diabetes statistics report. National Center for Health Statistics. Center for Disease Control (CDC); 2019. [156] Diabetes. prediabetes: your chance to prevent type 2 diabetes. National Center for Health Statistics. Center for Disease Control (CDC); 2019. [157] Type 1 diabetes. Genetics Home Reference. U.S. National Library of Medicine; 2019. [158] Diabetes. Gestational diabetes. National Center for Health Statistics. Center for Disease Control (CDC); 2019. [159] Type 2 diabetes. Centers for Disease Control and Prevention; 2019. [160] Weatherspoon D, Pietrangelo A. Understanding type 2 diabetes. Healthline; 2019. [161] Basina M, Kivi R, Boskey E. What is type 1 diabetes? Healthline; 2019. [162] Weatherspoon D, Pietrangelo A. Understanding type 2 diabetes. Healthline; 2019. 
[163] Type 1 diabetes. NHS Inform; 2019.

Chapter 7 • AI applications in prevalent diseases and disorders

429

[164] Type 2 diabetes. NHS Inform; 2019. [165] RPI News. Diabetes data analysis will lead to improved glucose monitoring and insulin delivery. Rensselaer; 2019. [166] Hu J. AI offers hope for earlier screening for type 1 diabetes. IBM Research Blog; 2019. [167] López B, Martin C, Viñas PH. Special section on artificial intelligence for diabetes. Artif Intell Med 2018;85:27 8. [168] Makino M, Yoshimoto R, Ono M, et al. Artificial intelligence predicts the progression of diabetic kidney disease using big data machine learning. Nature 2019. [169] Diabetic retinopathy. NIH National Eye Institute; 2019. [170] Janakiram MSV. Google’s research in artificial intelligence helps in preventing blindness caused by diabetes. Forbes Tech; 2017. [171] Condliffe J. DeepMind’s first medical research gig will use AI to diagnose eye disease. MIT Technology Review; 2016. [172] Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016;316:2402 10. [173] Long E, Lin H, Liu Z. An artificial intelligence platform for the multihospital collaborative management of congenital cataracts Nat Biomed Eng 2017; Article number: 0024. Available from: http://doi.org/ 10.1038/s41551-016-0024. [174] FDA News Release. FDA permits marketing of artificial intelligence-based device to detect specific diabetes-related eye problems. U.S. Food and Drug Admin; 2018. [175] Society for Neuroscience, n.d. About neuroscience. 2018. Available from: https://neuronline.sfn.org/ Home/SfN/About/About-Neuroscience. [176] Medline Plus. Neurological diseases. U.S. Natl Library Med 2019. [177] Brain basics: know your brain. NIH. National Institute of Neurological Disorders and Stroke; 2019. [178] Lovell L, Plantegenest G. Neuropathology navigaot. Neurobiology of Disease. MSU; 2008. [179] Cherry K, Gans S. The peripheral nervous system. VeryWellMind; 2019. [180] Merck Manual Professional Version. Overview of spinal cord disorders. Merck Sharp & Dohme Corp; 2019. [181] Bailey R. Overview of the five senses. ThoughtCo; 2019. [182] Jo T, Nho K, Saykin AJ. Deep learning in Alzheimer’s disease: diagnostic classification and prognostic prediction using neuroimaging data. Front Aging Neurosci 2019;11:220. [183] Grollemund V, Pradat PF, Querin G, et al. Machine learning in amyotrophic lateral sclerosis: achievements, pitfalls, and future directions. Front Neurosci 2019;13:135. [184] Hsu WL, Ho CY, Liang CK, et al. Application of IoT in the prevention of carbon monoxide poisoning. Sens Mater 2019;3465 82. [185] Williams V. Artificial intelligence shows promise in concussion management. USNews Health 2019. [186] Iizuka T, Fukasawa M, Kameyama M. Deep-learning-based imaging-classification identified cingulate island sign in dementia with Lewy bodies. Nat Sci Rep 2019;9 Article number: 8944. [187] Topol E. Deep medicine: how artificial intelligence can make healthcare human again. 1st ed. Hackette Book Group; 2019. [188] Farzaneh N, Reza Soroushmehr SM, Williamson CA, et al. Automated subdural hematoma segmentation for traumatic brain injured (TBI) patients. In: Conf Proc Annu Int Conf IEEE Eng Med Biol Soc IEEE Eng Med Biol Soc Annu Conf. 2017. pp. 3069 72. [189] Kearney H, Byrne S, Cavalleri GL, et al. Tackling epilepsy with high-definition precision medicine. a review. JAMA Neurol 2019;76(9):1109 16.

430

Foundations of Artificial Intelligence in Healthcare and Bioscience

[190] Linder N, Taylor JC, Colling R, et al. Deep learning for detecting tumor-infiltrating lymphocytes in testicular germ cell tumors. J Clin Pathol 2019;72:157 64. [191] Mizutani T, Magome T, Igaki H, et al. Optimization of treatment strategy by using a machine learning model to predict survival time of patients with malignant glioma after radiotherapy. J Radiat Res 2019. [192] Li Q, Peng X, Huang H, et al. RNA sequencing uncovers the key microRNAs potentially contributing to sudden sensorineural hearing loss. Medicine 2017;96:47. [193] Shew M, New J, Wichova H, et al. Using machine learning to predict sensorineural hearing loss based on perilymph micro RNA expression profile. Sci Rep 2019;9(1):3393. [194] Harvard Pilgrim Health Care Institute. Two new algorithms can identify patients at risk of HIV. Neuroscience News; 2019. [195] Gordon MF, Grachev ID, Mazeh I, et al. Function in Huntington disease patients using wearable sensor devices. Digit Biomark 2019;3:103 15. [196] Yune S, Lee H, Kim M, Tajmir SH, Gee MS, Do S. Beyond human perception: sexual dimorphism in hand and wrist radiographs is discernible by a deep learning model. J Digit Imaging 2019;32 (4):665 71. [197] Madrigal AC. How a feel-good AI story went wrong in flint. A machine-learning model showed promising results, but city officials and their engineering contractor abandoned it. Atlantic 2019. [198] Shew M, et al. Feasibility of microRNA profiling in human inner ear perilymph. Neuroreport 2018;29:894 901. [199] Roman K, Frank L, Shakirin TG, et al. Fully automated detection and segmentation of meningiomas using deep learning on routine multiparametric MRI. Eur Radiol 2019;29(1):124 32. [200] Zaccari K, Marujo EC. Machine learning for aiding meningitis diagnosis in pediatric patients. World Acad Sci, Eng Technol Int J Med Health Sci 2019;13(9). [201] Rudie JD, Rauschecker AM, Bryan RN, et al. Emerging applications of artificial intelligence in neurooncology. Pediatric Radiol 2019;49(11):1384 90. [202] Garcia-Chimeno Y, Garcia-Zapirain B, Gomez-Beldarrain, et al. Automatic migraine classification via feature selection committee and machine learning techniques over imaging and questionnaire data. BMC Med Inform Decis Mak 2019;17(1):38. [203] Khaligh-Razavi SM, Sadeghi M, Khanbagi M, et al. A self-administered, artificial intelligence (AI) platform for cognitive assessment in multiple sclerosis (MS). BioRxiv 2019. [204] Tapadar A, George A, George A. Painless prognosis of myasthenia gravis using machine learning. Stanford University; 2018. [205] Chen L, Zheng K, Shen Y, et al. Development of a deep learning algorithm for classification of neuroblastoma. Zhongguo Yi Liao Qi Xie Za Zhi 2019;43(4):255 8. [206] Schütze M, de Souza Costa D, de Paula JJ, et al. Use of machine learning to predict cognitive performance based on brain metabolism in Neurofibromatosis type 1. PLoS One 2018;13(9). [207] Rosenow J, Dyrda L. Key trends in neuromodulation and where artificial intelligence fits. Spine Rev 2019. [208] Parkinson’s disease information page. NINDS; 2017. [209] Sadek RM, Mohammed SA, Rahman A, et al. Parkinson’s disease prediction using artificial neural network. Int J Acad Health Med Res 2019;3(1):1 8. [210] Nguyen AV, Blears EE, Ross E. Machine learning applications for the differentiation of primary central nervous system lymphoma from glioblastoma on imaging: a systematic review and meta-analysis. Neurosurg Focus 2018;45(5). [211] Daldrup H. Artificial intelligence applications for pediatric oncology imaging. 
Pediatr Radiol 2019; (11):1384 90.

Chapter 7 • AI applications in prevalent diseases and disorders

431

[212] Masumoto H, Tabuchi H, Nakakura S, et al. Accuracy of a deep convolutional neural network in the detection of retinitis pigmentosa on ultrawide-field images. Peerj 2019;7:e6900. [213] Munson MC, Plewman DL, Baumer KM, et al. Autonomous early detection of eye disease in childhood photographs. Sci Adv 2019;5(10). [214] Lötsch J, Kringel D, Hummel T. Machine learning in human olfactory research. Chem Sens 2019;44 (1):11 22. [215] Masakazu H. Developing intelligent technologies to empower the human capabilities of learning. Impact 2019;4(3):64 6. [216] Shugalo I. How artificial intelligence can predict and detect stroke. Stroke 2019. [217] Rusbridge C, Stringer F, Knowle SP. Clinical application of diagnostic imaging of chiari-like malformation and syringomyelia. Front Vet Sci 2018. [218] Lougee R. Using AI to develop new flavor experiences. PhysOrg; 2019. [219] Gordon R. Teaching artificial intelligence to connect senses like vision and touch. MIT CSAIL 2019. [220] Chan KL, Leng X, Zhang W, et al. Early identification of high-risk TIA or minor stroke using artificial neural network. Front Neurol 2019. [221] Kubach J, Muehlebner-Farngruber A, Soylemezoglu F, et al. Same but different: a web-based deep learning application for the histopathologic distinction of cortical malformations. BioRxIX 2019. [222] Senders JT, Staples PC, Karhade AV, et al. Machine learning and neurosurgical outcome prediction: a systematic review. World Neurosurgery 2018;109:476 86. [223] Shoeibi N, Karimi F, Corchado JM. Artificial intelligence as a way of overcoming visual disorders: damages related to visual cortex, optic nerves, and eyes. In: International symposium on distributed computing and artificial intelligence,16th international conference, special sessions. 2019. pp. 183 7. [224] Muscle and bone diseases. NIH, National Institute of Arthritis and Musculoskeletal and Skin Diseases; 2019. [225] About arthritis. Arthritis Foundation; 2019. [226] Norgeot B, Glicksberg BS, Trupin L, et al. Assessment of a deep learning model based on electronic health record data to forecast clinical outcomes in patients with rheumatoid arthritis. JAMA Net Open 2019;2(3):e190606. [227] Bursitis. NIH, National Institute of Arthritis and Musculoskeletal and Skin Diseases; 2017. [228] Lin HC, Chiang SY, Lee K, et al. An activity recognition model using inertial sensor nodes in a wireless sensor network for frozen shoulder rehabilitation exercises. Sensors (Basel) 2015;15(1):2181 204. [229] Faust K, Jennings CD. Carpal tunnel syndrome. OtrthoInfo 2016. [230] Sayin R, Keskin S, Hamamci M. Evaluation of several classification methods in carpal tunnel syndrome. J Pak Med Assoc 2017;67(11):1654 7. [231] Fibromyalgia. Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health; 2017. [232] Minerbi A, Gonzalez E, Brereton NJB, et al. Altered microbiome composition in individuals with fibromyalgia. Pain 2019;160(11):2589 602. [233] Ramkumar P. Artificial intelligence should start with artificial joints. Forbes 2019. [234] Genetics Home Reference. Marfan syndrome. National Institutes of Health. National Library of Medicine; 2019. [235] Pinard A, Salgado D, Desvignes JP, et al. WES/WGS reporting of mutations from cardiovascular “actionable” genes in clinical practice: a key role for UMD knowledgebases in the era of big databases. Hum Mutat 2016;37(12):1308 17.

432

Foundations of Artificial Intelligence in Healthcare and Bioscience

[236] Osteoarthritis (OA). Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health; 2019. [237] Brahim A, Jennane R, Riad R, et al. A decision support tool for early detection of knee OsteoArthritis using X-ray imaging and machine learning: data from the OsteoArthritis initiative. Comput Med Imaging Graph 2019;73:11 18. [238] Osteogenesis imperfecta. Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health; 2019. [239] Lee JJ, Liu F, Majumdar S, et al. Can AI predict pain progression in knee osteoarthritis subjects from structural MRI. Osteoarthritis Cartil 2019;27(1):S24. [240] Osteoporosis. U.S. National Library of Medicine. Department of Health and Human Services. National Institutes of Health; 2019. [241] Ferizi U, Honig S, Chang G, et al. Artificial intelligence, osteoporosis, and fragility fractures. Curr Opin Rheumatol 2019;4:368 75. [242] Paget’s disease of bone. MedlinePlus. U.S. Department of Health and Human Services NIH; 2019. [243] Mehta, S.D. & Sebro, R. Random forest classifiers aid in the detection of incidental osteoblastic osseous metastases in DEXA studies. Int J CARS 2019. [244] Rheumatoid arthritis. Arthritis Foundation; 2019. [245] Kim KJ, Tagkopoulos I. Application of machine learning in rheumatic disease research. Korean J Intern Med 2019;34(4):708 22. [246] Scoliosis. MedlinePlus. U.S. Department of Health and Human Services NIH; 2019. [247] Weng CH, Wang CL, Huang YJ, et al. Artificial intelligence for automatic measurement of sagittal vertical axis using ResUNet framework. J Clin Med 2019;8(11):1826. [248] Spinal stenosis. MedlinePlus. U.S. Department of Health and Human Services NIH; 2019. [249] Gaonkar B, Beckett J, Villaroman D, et al. Quantitative analysis of neural foramina in the lumbar spine: an imaging informatics and machine learning study. Radiol: Artif Intell 2019;1(2). [250] Tendinitis. MedlinePlus. U.S. Department of Health and Human Services NIH; 2019. [251] Chang RF, Lee CC, Lo CM. Quantitative diagnosis of rotator cuff tears based on sonographic pattern recognition. PLoS One 2019. [252] Barclay T, Curreli S. Integumentary system. Inner Body; 2019. [253] List of Skin Conditions. Wikipedia. Updated November 1, 2019. [254] Sutaria AH, Schlessinger J. Acne vulgaris. StatPearls; 2019. [255] Melina A, Dinh NN, Tafuri B, et al. Artificial intelligence for the objective evaluation of acne investigator global assessment. J Drugs Dermatol 2018;17(9):1006 9. [256] McIntosh J. What’s to know about alopecia areata? Med News Today 2017. [257] Abaci HE, Coffman A, Doucet Y, et al. Tissue engineering of human hair follicles using a biomimetic developmental approach. Nat Commun 2018;9. Article number: 5301. [258] Cold sores. Medline Plus. U.S. Department of Health and Human Services National Institutes of Health; 2018. [259] Wyler E, Franke V, et al. Single-cell RNA-sequencing of Herpes simplex virus 1-infected cells identifies NRF2 activation as an antiviral program. Nat Commun 2019. Available from: http://doi.org/10.1038/ s41467-019-12894-z. [260] Kimyon RS, Warshaw EM. Airborne allergic contact dermatitis: management and responsible allergens on the American contact dermatitis society core series. Dermatitis 2019;30(2):106 15. [261] Suhendra R, Arnia F, Idroes R, et al. A novel approach to multi-class atopic dermatitis disease severity scoring using multi-class SVM. 2019. Available from: http://doi.org/10.1109/CYBERNETICSCOM.2019.8875693.

Chapter 7 • AI applications in prevalent diseases and disorders

433

[262] Danielsen RD, Ortiz EG, Symington S. Chronic urticaria it’s more than just antihistamines!. Clin Rev 2018;(1):36 43. [263] Christopher JJ, Nehemiah HK, Arputharaj K, et al. Computer-assisted medical decision-making system for diagnosis of urticaria. Med Decision Making Policy Pract 2016. Available from: https://doi.org/ 10.1177/2381468316677752. [264] Moles (Nevus). Medline Plus. U.S. Department of Health and Human Services National Institutes of Health; 2019. [265] Bell L. Mole-mapping app miiskin uses AI to help adults detect warning signs of melanoma. Forbes; 2019. [266] Nail fungus. Mayo Clinic; 2019. [267] Overman D. AI beats dermatologists in diagnosing nail fungus. PSP; 2018. [268] Meienberger N, Anzengruber F, Amruthalingam L, et al. Observer-independent assessment of psoriasis affected area using machine learning. JEADV 2019. Available from: https://doi.org/10.1111/jdv.16002. [269] Genetics Home Reference. Rosacea. U.S. Department of Health & Human Services. National Institutes of Health. National Library of Medicine; 2019. [270] Rosacea. Mayo Clinic; 2019. [271] Aggarwal SLP. Data augmentation in dermatology image recognition using machine learning. Skin Res Technol 2019;25(6):815 20. [272] Aaron DM. Seborrheic keratoses. Merck Manual Professional Version; 2019. [273] Yang X, Li H, Wang L, Yeo SY, et al. Skin lesion analysis by multi-target deep neural networks. Conf Proc IEEE Eng Med Biol Soc 2018;2018:1263 6. Available from: http://doi.org/10.1109/ EMBC.2018.8512488. [274] Shingles information page. National Institute of Neurological Disorders and Stroke, NIH; 2019. [275] Zeng P, Huang J, Wu S, et al. Characterizing the structural pattern predicting medication response in herpes zoster patients using multivoxel pattern analysis. Front Neurosci 2019;13:534. [276] Skin cancer facts & statistics. Skin Cancer Foundation; 2019. [277] Roffman D, Hart G, Girardi M, et al. Predicting non-melanoma skin cancer via a multi-parameterized artificial neural network. Sci Rep 2018;8(1):1701. Available from: http://doi.org/10.1038/s41598-018-19907-9. [278] Hale EK, Hanke CW. Squamous cell carcinoma overview. Skin Cancer Foundation 2019. [279] Karadaghy OM, Shew M, New J, et al. Development and assessment of a machine learning model to help predict survival among patients with oral squamous cell carcinoma. JAMA Otolaryngol Head Neck Surg. Published online May 2, 2019. [280] Halpern AC, Marghoob AA, Reiter O. Melanoma warning signs. Skin Cancer Foundation 2019. [281] Maron RC, Weichenthal M, Utikal JS, et al. Systematic outperformance of 112 dermatologists in multiclass skin cancer image classification by convolutional neural networks. Eur J Cancer 2019;119:57 65. Available from: http://doi.org/10.1016/j.ejca.2019.06.013. [282] Sargis RM. About the endocrine system: endocrine glands and hormones. Endocrine Web; 2016. [283] Gubbi S, Hamet P, Tremblay J, et al. Artificial intelligence and machine learning in endocrinology and metabolism: the dawn of a new era. Front Endocrinol 2019;10:185. Available from: https://doi.org/ 10.3389/fendo.2019.00185. [284] Molano A. Adults with Prader-Willi syndrome show prematurely aged brains. Prada-Willi Syndrome News; 2019. [285] Kawakami E, Tabata J, Yanaihara N, et al. Application of artificial intelligence for preoperative diagnostic and prognostic prediction in epithelial ovarian cancer based on blood biomarkers. Clin Cancer Res 2019.

434

Foundations of Artificial Intelligence in Healthcare and Bioscience

[286] Corral JE, Hussein S, Kandel P, et al. Deep learning to classify intraductal papillary mucinous neoplasms using magnetic resonance imaging. Pancreas 2019;48(6):805 10. Available from: http://doi.org/ 10.1097/MPA.0000000000001327. [287] Imbus JR, Randle RW, Pitt SC, et al. Machine learning to identify multigland disease in primary hyperparathyroidism. J Surg Res 2017;219:173 9. Available from: http://doi.org/10.1016/j.jss.2017.05.117. [288] Ahmed IS, Sam SM, Azizan A, et al. System for improving sleep quality by using the internet of things. Open Int J Inf ((OIJI)) 2018;6(3). [289] Zhu Y, Liu Z, et al. 3D densenet deep learning-based preoperative computed tomography for detecting myasthenia gravis in patients with thymoma. Lancet 2019. [290] Daniels K, Gummadi S, Zhu Z, et al. Machine learning by ultrasonography for genetic risk stratification of thyroid nodules. JAMA Otolaryngol Head Neck Surg 2019. Available from: https://doi.org/10.1001/ jamaoto.2019.3073. [291] Your digestive system & how it works. National Institute of Diabetes and Digestive and Kidney Diseases; 2019. [292] National Institutes of Health. U.S. Department of Health and Human Services. Opportunities and challenges in digestive diseases research: recommendations of the national commission on digestive diseases. Bethesda, MD: National Institutes of Health, NIH; 2009. Publication 08 6514. [293] Acid reflux. American College of Gastroenterology; 2019. [294] Januszewicz W, Pietro M. Novel gastrointestinal procedures. Medicine 2019;47(7):448 53. [295] Gatos I, Tsantis S, Spiliopoulos S, et al. A machine-learning algorithm toward color analysis for chronic liver disease classification, employing ultrasound shear wave elastography. Ultrasound Med Biol 2017;43:1797 810. [296] Banerjee R, Das A, Ghoshal UC, et al. Predicting mor-tality in patients with cirrhosis of liver with application ofneural network technology. J Gastroenterol Hepatol 2003;18:1054 60. [297] Konerman MA, Lu D, Zhang Y, et al. Assessing risk of fibrosis progression and liver-related clinical outcomes among patients with both early stage and advanced chronic hepatitis C. PLoS One 2017;12: e0187344. [298] Le Berre C, Sandborn WJ, Aridhi S, et al. Application of artificial intelligence to gastroenterology and hepatology. Gastroenterology 2020;158:76 94. [299] Colorectal cancer. MedlinePlus. U.S. National Library of Medicine. U.S. Department of Health and Human Services National Institutes of Health; 2019. [300] Wang P, Berzin TM, Romek J, et al. Real-time automatic detection system increases colonoscopic polyp and adenoma detection rates: a prospective randomized controlled study. BMJ 2019;68(10). [301] Crohn’s disease. MedlinePlus. U.S. National Library of Medicine. U.S. Department of Health and Human Services National Institutes of Health; 2019. [302] Wang Y, Miller M, Astrakhan Y, et al. Identifying Crohn’s disease signal from various analyses. Genome Med 2019;11(1). Available from: http://doi.org/10.1186/s13073-019-0670-6. [303] Hepatitis C. Division of viral hepatitis. National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention; 2019. [304] Reiser M, Wiebner B, Hirsch J. Neural-network analysis of socio-medical data to identify predictors of undiagnosed hepatitis C virus infections in Germany (DETECT). J Transl Med 2019;17; Article number: 94. [305] Irritable bowel syndrome. American College of Gastroenterology; 2019. [306] Irritable bowel syndrome. MedlinePlus. U.S. National Library of Medicine. U.S. 
Department of Health and Human Services National Institutes of Health; 2019.

Chapter 7 • AI applications in prevalent diseases and disorders

435

[307] Hyland M. Using artificial intelligence to understand irritable bowel syndrome, chronic fatigue syndrome and fibromyalgia syndrome. Atlas Sci 2019. [308] Your kidneys & how they work. U.S. Department of Health and Human Services National Institutes of Health; 2019. [309] List of kidney diseases and conditions. Disabled World; 2018. [310] Chronic kidney disease initiative. Centers for Disease Control and Prevention. U.S. Department of Health and Human Services; 2019. [311] Khamparia A, Saini G, Pandey B, et al. Chronic kidney disease classification with multimedia data learning using deep-stacked autoencoder network. Multimedia Tools Appl 2019;1 16. [312] What are kidney stones? Urology Care Foundation; 2019. [313] Aldoukhi AH, Law H, Black KM, et al. Deep learning computer vision algorithm for detecting kidney stone composition: towards an automated future. J Urol Surg Technol Simul: Instrum Technol 2019 PD04. [314] What is glomerulonephritis? National Kidney Foundation; 2019. [315] Nadkarni GN, Chaudhary K, Coca SG. Machine learning in glomerular diseases: promise for precision medicine. AJKD 2019;74(3). [316] Falck S, DiMaria C, Solan M, et al. Pyelonephritis. HealthLine; 2018. [317] Atallah DM, Badawy M, El-Sayed A, et al. A new proposed feature selection method to predict kidney transplantation outcome. Health Technol 2019;9(5):847 56. [318] McIntosh J. What to know about urinary tract infections. Med News Today 2018. [319] Hsiao H, Wang S, Li M, et al. Targeted workup after initial febrile urinary tract infection: using a novel machine learning model to identify children most likely to benefit from voiding cystourethrogram. J Urol 2019;202(1):144 52. [320] Zimmermann KA. Respiratory system: our avenue for gas exchange. Live Sci 2019. [321] Lung diseases. American Lung Association; 2018. [322] Lung disease. MedlinePlus. U.S. National Library of Medicine. U.S. Department of Health and Human Services National Institutes of Health; 2019. [323] Siegel RL, Miller KD, Jemal A. Cancer statistics, 2019. CA Cancer J.com 2019;61(1). [324] Asthma. National Heart, Lung and Blood Institute. NIH; 2014. [325] Sanyal S. 4 ways in which AI is revolutionizing respiratory care. Forbes; 2018. [326] Tomita K, Nagaoa R, Touge H, et al. Deep learning facilitates the diagnosis of adult asthma. Allergol Int 2019;68(4):456 61. [327] Saglani S, Custovic A. Childhood asthma: advances using machine learning and mechanistic studies. Am J Respir Crit Care Med 2019;199(4). [328] Vemulapallia V, Jennifer JQ, Garren M, et al. Non-obvious correlations to disease management unraveled by Bayesian artificial intelligence analyses of CMS data. Artif Intell Med 2016;74:1 8. [329] Alvarez-Melis D, Daumé H, Vaughan JW, et al. Weight of evidence as a basis for human-oriented explanations. Cornell University; 2019. arXiv:1910.13503. [330] Bresnick J. Top 5 use cases for artificial intelligence in medical imaging. Health IT Analytics; 2018. [331] MedlinePlus. Emphysema. U.S. National Library of Medicine. U.S. Department of Health and Human Services National Institutes of Health; 2019. [332] Elkins A, Freitas FF, Sanz V. Developing an app to interpret chest X-rays to support the diagnosis of respiratory pathology with artificial intelligence. Cornell University; 2019. [333] Lynch DA, Moore CM, Wilson C, et al. CT-based visual classification of emphysema: association with mortality in the COPD gene study. Radiology 2018;288(3).

436

Foundations of Artificial Intelligence in Healthcare and Bioscience

[334] Rahman O, Mark M, Balte P, et al. Reproducibility and changes in vena caval blood flow by using 4D flow MRI in pulmonary emphysema and chronic obstructive pulmonary disease (COPD). Radiology 2019;292(3). [335] Tuberculosis. World Health Organization; 2019. [336] Tahir Khan M, Chandra Kaushik A, Ji L, et al. Artificial neural networks for prediction of tuberculosis disease. Front Microbiol 2019;10:395. [337] Harris M, Qi A, Jeagal L, et al. A systematic review of the diagnostic accuracy of artificial intelligencebased computer programs to analyze chest x-rays for pulmonary tuberculosis. PLoS One 2019;14(9): e0221339. [338] Pasa F, Golkov V, Pfeiffer F, et al. Efficient deep network architectures for fast chest X-ray tuberculosis screening and visualization. Sci Rep 2019;9(1):6268. [339] O’Connor M. AI takes 10 seconds to diagnose pneumonia on chest x-rays. Health Imaging 2019. [340] Allen Jr B, Seltzer SE, Langlotz CP, et al. A road map for translational research on artificial intelligence in medical imaging: from the 2018 National Institutes of Health/RSNA/ACR/The Academy Workshop. J Am Coll Radiol, Part A 2019;16(9):1179 89. [341] Wah Lam K. Surveillance of community-acquired pneumonia in critically ill patients. JECCM 2019;3. [342] Influenza (Flu). Centers for Disease Control and Prevention, National Center for Immunization and Respiratory Diseases; 2019. [343] PreScouter. How AI is changing the future of how we handle the flu. Medium; 2019. [344] de la Garza A. These researchers are using artificial intelligence to make a better flu vaccine. Time Mag 2019. [345] Xue H, Bai Y, Hu H, et al. Regional level influenza study based on Twitter and machine learning method. PLoS One 2019. [346] Banerjee I, Sofela M, Yang J, et al. Development and performance of the pulmonary embolism result forecast model (PERFORM) for computed tomography clinical decision support. JAMA Netw Open 2019;2(8):e198719. [347] Muoio D. FDA clears AI that detects ‘ticking time bomb’ in chest images. Med Health News 2019. [348] Rajan D, Beymer D, Abedin S, et al. A pipeline for pulmonary embolism detection using sparsely annotated 3D CT images. Cornell University. V1. V3; 2019. [349] MedlinePlus. Pulmonary hypertension. U.S. National Library of Medicine. U.S. Department of Health and Human Services National Institutes of Health; 2019. [350] Rohman M. FDA grants special recognition to AI software used to diagnose a rare form of hypertension. Health Imaging 2018. [351] Adedinsewo DA, Lesser E, Yamani MH, et al. An innovative application of artificial intelligence techniques and machine learning in diagnostic evaluation of pulmonary hypertension. Circulation 2019;140: A15999. [352] Bergemann R, Allsopp J, Jenner H, et al. High levels of healthcare utilization before diagnosis in idiopathic pulmonary arterial hypertension support the feasibility of an early diagnosis algorithm: the SPHInX project. Sage J 2018. Available from: https://doi.org/10.1177/2045894018798613. [353] Genetic Home Reference. Pulmonary veno-occlusive disease. U.S. Department of Health & Human Services. NIH. National Library of Medicine; 2019. [354] Seah JC, Tang JSN, Kitchen A, et al. Chest radiographs in congestive heart failure: visualizing neural network learning. Radiology 2018. [355] Javaherian MH. Quantitative assessment of cerebral microvasculature using machine learning and network analysis [Ph.D. thesis]. Cornell University; 2019.

Chapter 7 • AI applications in prevalent diseases and disorders

437

[356] Estimates of funding for various research, condition, and disease categories (RCDC). National Institutes of Health|U.S. Department of Health and Human Services; 2019. [357] Astaraki M, Wang C, Buizza G, et al. Early survival prediction in non-small cell lung cancer from PET/ CT images using an intra-tumor partitioning method. Phys Med 2019;60:58 65. Available from: http:// doi.org/10.1016/j.ejmp.2019.03.024. [358] Trebeschi S, Drago SG, Birkbak NJ, et al. Predicting response to cancer immunotherapy using noninvasive radiomic biomarkers. Ann Oncol 2019. [359] Xu Y, Hosny A, Zeleznik R, et al. Deep learning predicts lung cancer treatment response from serial medical imaging. Clin Cancer Res 2019. Available from: http://doi.org/10.1158/1078-0432.CCR-18-2495. [360] Liu C, Liu X, Wu F, et al. Using artificial intelligence (Watson for oncology) for treatment recommendations amongst Chinese patients with lung cancer: feasibility study. J Med Internet Res 2018;20(9): e11087. p. 1. [361] Bang A, Kendal WS, Laurie SA, et al. Prophylactic cranial irradiation in extensive stage small cell lung cancer: outcomes at a comprehensive cancer centre. Int J Radiat Oncol Biol Phys 2018;101:1133 40. [362] Bhalerao RY, Jani HP, Gaitonde RK, et al. A novel approach for detection of lung cancer using digital image processing and convolution neural networks. IEEE Xplore 2019. Available from: http://doi.org/ 10.1109/ICACCS.2019.8728348. [363] Coccia M. Deep learning technology for improving cancer care in society: new directions in cancer imaging driven by artificial intelligence. Technol Soc 2020;60:101198. [364] Jurmeister P, Bockmayr M, Seegerer P, et al. Machine learning analysis of DNA methylation profiles distinguishes primary lung squamous cell carcinomas from head and neck metastases. Sci Transl Med 2019;11(509):eaaw8513. [365] Simona G, DiNardo CD, Takahashib K, et al. Applying artificial intelligence to address the knowledge gaps in cancer care. Oncologist 2019;24(6):772 82. [366] Barclay T, Curreli S. Female reproductive system. Inner body; 2019. [367] Female reproductive system. Medline Plus. U.S. National Library of Medicine. Department of Health and Human Services National Institutes of Health. NIH; 2018. [368] De Ramón A, Ruiz FD, Prieto Sánche FM. A decision support system for predicting the treatment of ectopic pregnancies. Int J Med Inf 2019;129:198 204. [369] Bakkes TH, Sammali F, Kuijsters NP, et al. Machine learning for classification of uterine activity outside pregnancy. IEEE Xplore; 2019. Accession Number: 19109530. [370] Khosravi P, Kazemi E, Zhan Q, et al. Deep learning enables robust assessment and selection of human blastocysts after in vitro fertilization. Digital Med 2019;2(1):21. [371] Abitbol J, Lau SL, Ramanakumar AV, et al. Evaluating postoperative pain and satisfaction among women treated by robotic surgery for gynecologic cancer. Gynecol Pelvic Med 2019;2. [372] Li BY, Oh J, Young VB, et al. Using machine learning and the electronic health record to predict complicated clostridium difficile infection. Open Forum Infect Dis 2019;6(5). [373] Conant EF, Toledano AY, Periaswamy S, et al. Improving accuracy and efficiency with concurrent use of artificial intelligence for digital breast tomosynthesis. Radiology 2019;1(4). [374] Xu Z, Yang X, Gao M, et al. Abnormal resting-state functional connectivity in the whole brain in lifelong premature ejaculation patients based on machine learning approach. Front Neurosci 2019;13:448. [375] Li L1, Fan W, Li J, et al. 
Abnormal brain structure as a potential biomarker for venous erectile dysfunction: evidence from multimodal MRI and machine learning. Eur Radiol 2018;9:3789 800. [376] Dominguez GA, Roop J, Polo A, et al. Using artificial intelligence to distinguish subjects with prostate cancer (PCa) from benign prostatic hyperplasia (BPH) through immunophenotyping of MDSCs and

438

Foundations of Artificial Intelligence in Healthcare and Bioscience

lymphocyte cell populations. J Immunol Cancer 2017;5:44. Condamine et al. Science Immunol. 2016;1:2. [377] Lee J, Yang SW, Lee S, et al. Machine learning approaches for the prediction of prostate cancer according to age and the prostate-specific antigen level. Korean J Urol Oncol 2019;17(2). [378] Hung A. Artificial intelligence, decision support techniques, diagnosis, machine learning, prediction, systematic review, urology. BJUI 2019. [379] Hicks SA, Andersen JM, Witczak O, et al. Machine learning-based analysis of sperm videos and participant data for male fertility prediction. Nat Sci Rep 2019;9. Article number 14. [380] Heron M. Division of vital statistics. deaths: leading causes for 2017. National vital statistics reports. Center Dis Control Prevent 2019;68(6). [381] Dodgson L. Scientists have created a murder-obsessed ‘psychopath’ AI called Norman — and it learned everything it knows from Reddit. Business Insider 2018. [382] Dastin J. ‘Kill your foster parents’: Amazon’s Alexa talks murder, sex in AI experiment. Reuters Technol News 2018. [383] Fussell S. For many reasons, parents and teachers may fail to intervene when they spot LGBTQ teens in trouble. Can Google help? Atlantic 2019. [384] Marmar CR, Brown AD, Qian M, et al. Speech-based markers for posttraumatic stress disorder in US veterans. Depression Anxiety 2019. Available from: https://doi.org/10.1002/da.22890. [385] Baravik M. What do AI and automation mean for accident prevention? Telematics 2019. [386] Hadj-Mabrouk H. Contribution of artificial intelligence to risk assessment of railway accidents. Urban Rail Transit 2019;5(2):104 22. [387] Sullivan BK. Artificial intelligence helps to contain wildfires, predict wild weather. Insurance J 2019. [388] Capone B. Early fire detection based on deep learning: AI-FIRE-DEEP and AI-SMOKE-DEEP. AI Tech 2019. [389] Durborow M, Mueller J, Fleiss A. Artificial intelligence meets gun violence. Rebellion Res 2019. [390] Northumbria University. Artificial intelligence could help crack previously unsolvable murder cases. Phys.org; 2018. [391] Hartung T. AI beats animal testing at finding toxic chemicals. The Scientist 2019. [392] Wahab L, Jiang H. A comparative study on machine learning-based algorithms for prediction of motorcycle crash severity. PLoS One 2019;14(4):e0214966. [393] Dong C, Shao C, Li J, et al. An improved deep learning model for traffic crash prediction. J Adv Transport 2018:13. Article ID 3869106. [394] Limonte K. Machine learning aiding elderly falls. Microsoft Industry Blog UK; 2018. [395] Sattar H, Bajwa IS, Shafi UF, et al. An intelligent air quality sensing system for open-skin wound monitoring. MDPI Electronics 2019. [396] Ciabarra C. How AI is being used to help with active shooter incidents. Forbes Technology Council; 2019. [397] Cooke G. Magic bullets: the future of artificial intelligence in weapons systems. United States Military Academy at West Point; 2019. [398] Gao M, Sato M, Ikegaya Y, et al. Machine learning-based prediction of seizure-inducing action as an adverse drug effect. Yakugaku Zasshi 2018;138(6):809 13. [399] Asnaghi V, Pecorino D, Ottaviani E, et al. A novel application of an adaptable modeling approach to the management of toxic microalgal bloom events in coastal areas. Harmful Algae 2017;63:184 92. [400] Kavitha S, Lourdhu Suganthi R, Jose J. Ensemble deep learning for prediction of palatable mushrooms. Int J Eng Sci Res 2018;6(1).

Chapter 7 • AI applications in prevalent diseases and disorders

439

[401] Anthes K, Shiva K, Falco A. Measuring the sympathetic response to intense exercise in a practical setting. Proc Machine Learn Res 2019;106:1 23. [402] Hsu J. Forget killer robots: autonomous weapons are already online. UnDark 2018. [403] Warner B. A.I. security cameras are the latest high-tech attempt to combat mass shooters. Fortune 2019. [404] Zwetsloot R, Dafoe A. Thinking about risks from AI: accidents, misuse and structure. LawFare 2019. [405] Dhar V. Machines make mistakes—how can we trust artificial intelligence to fly and drive? Opinion. Newsweek; 2019. [406] Zhao W, Xu L, Bai J, et al. Sensor-based risk perception ability network design for drivers in snow and ice environmental freeway: a deep learning and rough sets approach. Soft Comput 2018;22 (5):1457 66. [407] Noda K. Google home: smart speaker as an environmental control unit. J Disab Rehab: Assist Technol 2017;13(7):674 5. [408] Zheng M, Li T, Zhu R, et al. Traffic accident’s severity prediction: a deep-learning approach-based CNN network. IEEE Access. Digital Object Identifier. Available from: http://doi.org/10.1109/ ACCESS.2019.2903319; 2019. [409] Abduljabbar R, Dia H, Liyanage S, et al. Applications of artificial intelligence in transport: an overview. Sustainability 2019;11:189. Available from: http://doi.org/10.3390/su11010189. [410] Harrison G. Using artificial intelligence tools to identify levels of violence in movies. Techexplore 2019. [411] DeBrule S. Using artificial intelligence to study religious violence. Medium 2018. [412] Nutter PW. Machine learning evidence: admissibility and weight. J Constitutional Law 2019;21:3. [413] Bresnick J. Berkeley Lab, VA use deep learning to address veteran suicide. HealthITAnalytics; 2019. [414] Zivkovic L. Artificial intelligence is now being used to detect cyberbullying in school children. United AI; 2019. [415] Scherr S, Arendt F, Frissen T, et al. Detecting intentional self-harm on Instagram: development, testing, and validation of an automatic image-recognition algorithm to discover cutting-related posts. Soc Sci Comput Rev 2019. [416] Disability. HealthCare.gov; 2019. [417] Aira Tech Corp. https://aira.io/; 2019. [418] Vision AI. Google Cloud; 2019. [419] Aalborg University. Artificial neural networks make life easier for hearing aid users. Phys.org; 2019. [420] Bur AM, Shew M, New J. Artificial intelligence for the otolaryngologist: a state-of-the-art review. Otolaryngol Head Neck Surgery 2019. [421] Xia Y, Yao ZM, Ye Q, et al. A dual-modal attention-enhanced deep learning network for quantification of Parkinson’s disease characteristics. IEEEXplore 2019. [422] Zhao D, Yang J, Onyeka Okoye M, et al. Walking assist robot: a novel non-contact abnormal gait recognition approach based on extended set membership filter. IEEEXplore 2019;76741 53. [423] Seeman L, Cooper M. Cognitive accessibility user research. W3C. https://w3c.github.io/coga/userresearch/; 2019. [424] Cheshire WP. Machine intelligence as interpreter: ethical implications of neural speech decoding. Ethics Med 2019;34:2. [425] Infectious diseases. U.S. National Library of Medicine. U.S. Department of Health and Human Services National Institutes of Health; 2019.

440

Foundations of Artificial Intelligence in Healthcare and Bioscience

[426] Microbiology by numbers. Nat Rev Microbiol 2011;9:628. Available from: http://doi.org/10.1038/ nrmicro2644. [427] Harris A, Lowy FD, Sullivan M. Patient education: methicillin-resistant Staphylococcus aureus (MRSA) (beyond the basics). UpToDate 2019. [428] Viral vectors. Gene Therapy Net; 2019. [429] Shiel WC. Medical definition of vaccination. MedicineNet 2018. [430] Chen ML, Doddi A, Royer J, et al. Beyond multidrug resistance: leveraging rare variants with machine and statistical learning models in Mycobacterium tuberculosis resistance prediction. EBioMedicine 2019;43:356 69. [431] Burlina PM, Joshia NJ, Ng E, et al. Automated detection of erythema migrans and other confounding skin lesions via deep learning. Comput Biol Med 2019;105:151 6. [432] Needell D. Large data analysis and lyme disease. Notices Am Math Soc 2019;66(1). [433] Rees EE, Ng V, Gachon P, et al. Risk assessment strategies for early detection and prediction of infectious disease outbreaks associated with climate change. Can Commun Dis Rep 2019;45(5):119 26. [434] Chandir S, Siddiqi DA, Hussain OA, et al. Using predictive analytics to identify children at high risk of defaulting from a routine immunization. JMIR Public Health Surveill 2018;4(3). [435] Xaviero R, Pramono A, Imtiaz SA, et al. A cough-based algorithm for automatic diagnosis of pertussis. PLoS One 2016. Available from: https://doi.org/10.1371/journal.pone.0162128. [436] Andrès E, Gass R, Charloux A, et al. Respiratory sound analysis in the era of evidence-based medicine and the world of medicine 2.0. J Med Life 2018;11(2):89 106. [437] Coronavirus disease 2019 (COVID-19) situation summary. Center for Disease Control and Prevention, National Center for Immunization and Respiratory Diseases (NCIRD), Division of Viral Diseases; 2020. [438] Wonga ZSY, Zhoub J, Zhang Q. Artificial intelligence for infectious disease big data analytics. Infect, Dis & Health 2019;24(1):44 8. [439] Peiffer-Smadja N, Rawson TM, Ahmad R, et al. Machine learning for clinical decision support in infectious diseases: a narrative review of current. Clin Microbiol Infect 2019. [440] Rawi R, Mall R, Shen CH, et al. Accurate prediction for antibody resistance of clinical HIV-1 isolates. Sci Rep 2019;9:14696. [441] Yu C, Dong Y, Liu J, et al. Incorporating causal factors into reinforcement learning for dynamic treatment regimes in HIV. BMC Med Inform Decis Mak 2019;19:60. [442] Song Q, Zheng YJ, Yang J, et al. Effects of food contamination on gastrointestinal morbidity: comparison of different machine-learning methods. Int J Environ Res Public Health 2019;16(5):838. [443] Abouzahra M. Designing a system to predict inflammatory bowel disease flares using machine learning. Emergent research forum (ERF) paper. Twenty-fifth Americas Conference on Information Systems. Cancun; 2019. [444] Yang YJ, Bang CS. Application of artificial intelligence in gastroenterology. World J Gastroenterol 2019;25(14):1666 83. [445] Aishwarya E, Goel A, Nijhawan R. A deep learning approach for classification of onychomycosis nail disease. Proc ICETIT 2019;1112 18. [446] Han SS, Park GH, Lim W, et al. Deep neural networks show an equivalent and often superior performance to dermatologists in onychomycosis diagnosis: automatic construction of onychomycosis datasets by a region-based convolutional deep neural network. PLoS One 2018;13(1): e0191493. [447] Zhao K, He H, Gao P, et al. 
A new approach for vaginal microbial micrograph classification using convolutional neural network combined with decision-making tree (CNN-DMT). SPIE Proc 2019;11069.

Chapter 7 • AI applications in prevalent diseases and disorders

441

[448] Yuliastuti GE, Alfiyatin AN, Rizki AM, et al. Performance analysis of data mining methods for sexually transmitted disease classification. Int J Electrical Comput Eng ((IJECE)) 2018;8(5):3933 9. [449] Hernandez HW, Soeung M, Zorn KM, et al. High throughput and computational repurposing for neglected diseases. Pharm Res 2018;36:27. [450] Zhang Y, Ceylan Koydemir H, Shimogawa MM, et al. Motility-based label-free detection of parasites in bodily fluids using holographic speckle analysis and deep learning. Light Sci Appl 2018;7:108. [451] Yung-Chun Liu Z, Chamberlin AJ, Shome P, et al. Identification of snails and parasites of medical importance via convolutional neural network: an application for human schistosomiasis. bioRxiv 2019. 713727. [452] He JY, Wu X, Jiang YG, et al. Hookworm detection in wireless capsule endoscopy images with deep learning. IEEE Trans Image Process 2018;27(5). [453] Li S, Li A, Molina Lara DA, et al. A novel transfer learning approach for toxoplasma gondii microscopic image recognition by fuzzy cycle generative adversarial network. bioRxiv 2019. 567891. [454] Aulner N, Danckaert A, EunIhm J, et al. Next-generation phenotypic screening in early drug discovery for infectious diseases. Trends Parasitol 2019;35(7):559 70. [455] Yakimovicha A. mSphere of influence: the rise of artificial intelligence in infection biology. mSphere 2019;4(3). [456] Nakada-Tsukui K, Sekizuka T, Sato-Ebine E, et al. AIG1 effects in vitro and in vivo virulence in clinical isolates of Entamoeba histolytica. PLoS Pathog 2018;14(3):e1006882. [457] Bansal S, Ganesan G. Advanced evaluation methodology for water quality assessment using artificial neural network approach. Water Resour Manag 2019;33:3127 41. [458] Villalta F, Rachakonda G. Advances in preclinical approaches to Chagas disease drug discovery. Expert Opin Drug Discov 2019;14(11):1161 74. [459] de Souza AS, Ferreira LLG, de Oliveira AS. Quantitative structure-activity relationships for structurally diverse chemotypes having anti-Trypanosoma cruzi activity. Int J Mol Sci 2019;20(11):2801. [460] da Silva Motta D, Badaró R, Santos A, et al. Use of artificial intelligence on the control of vector-borne diseases. Intechopen. 2018. Available from: http://doi.org/10.5772/intechopen.81671. [461] Kalkan SC, Sahingoz OK. Deep learning-based classification of malaria from slide images. IEEE Xplore; 2019. Accession Number: 18760090. [462] Gram-Hansen B, Schröder de Witt C, Rainforth T, et al. Hijacking malaria simulators with probabilistic programming. arXiv 2019. 1905.12432. [463] McCallum C, Riordon J, Wang Y, et al. Deep learning-based selection of human sperm with high DNA integrity. Commun Biol 2019;2:250. [464] Potluri V, Sangeetha Kathiresan P, Kandula H, et al. An inexpensive smartphone-based device for point-of-care ovulation testing. R Soc Chem 2019;19:59 67. [465] Patil SN, Wali UV, Swamy MK. Selection of single potential embryo to improve the success rate of implantation in IVF procedure using machine learning techniques. IEEE Xplore; 2019. Accession Number: 1861930. [466] Curchoe CL, Bormann CL. Artificial intelligence and machine learning for human reproduction and embryology presented at ASRM and ESHRE 2018. J Assist Reprod Genet 2019;36:591 600. [467] Liu Z, Huang B, Cui Y, et al. Multi-task deep learning with dynamic programming for embryo early development stage classification from time-lapse videos. IEEE Access 2019;7:122153 63. [468] SaberIraji M. 
Prediction of fetal state from the cardiotocogram recordings using neural network models. Artif Intell Med 2019;96:33 44. [469] Wang J, Ju R, Chen Y, et al. Automated retinopathy of prematurity screening using deep neural networks. EBioMedicine 2018;35:361 8. Available from: http://doi.org/10.1016/j.ebiom.2018.08.033.

442

Foundations of Artificial Intelligence in Healthcare and Bioscience

[470] Wilkinson J. AI used to detect fetal heart problems. Am Assoc Adv Sci (AAAS) 2018. [471] Schmidt W, Regan M, Fahey M, et al. General movement assessment by machine learning: why is it so difficult? JMAI 2019;2. [472] Doya K, Taniguchi T. Curr Opin Behav Sci 2019;29:91 6. [473] Mostaph M, Styner M. Role of deep learning in infant brain MRI analysis. Magn Resonance Imaging 2019;64:171 89. [474] Achenie LEK, Scarpa A, Factor R, et al. A machine learning strategy for autism screening in toddlers. J Dev Behav Pediatr 2019;40(5):369 76. [475] Goodfellow D, Zhi R, Funke R, et al. Predicting infant motor development status using day long movement data from wearable sensors. arXiv; 2018. 1807.02617v2 [cs.LG]. [476] Hasse A, Cortesi SC, Lombana A, et al. Youth and artificial intelligence: where we stand. Berkman Klein Center Research Publication No. 2019-3; 2019. Available at SSRN: https://doi.org/10.2139/ ssrn.3385718. [477] Moore JH, Raghavachari N. Artificial intelligence based approaches to identify molecular determinants of exceptional health and life span-an interdisciplinary workshop at the national institute on aging. Front Artif Intell 2019. Available from: https://doi.org/10.3389/frai.2019.00012. [478] Donati L, Fongo D, Cattelani L, et al. Prediction of decline in activities of daily living through deep artificial neural networks and domain adaptation. In: International conference of the Italian association for artificial intelligence. AI IA 2019: AI IA Advances in Artificial Intelligence. 2019. pp. 376 91. [479] Zhang B, Dafoe A. Artificial intelligence: American attitudes and trends. 2019. Available at SSRN: https://ssrn.com/abstract 5 3312874 or https://doi.org/10.2139/ssrn.3312874. [480] Wallace ML, Buysse CJ, Redline S, et al. Multidimensional sleep and mortality in older adults: a machine-learning comparison with other risk factors. J Gerontol: Ser A 2019;74(12):1903 9. [481] Zhavoronkov A, Mamoshina P, VanHalen Q, et al. Deep aging clocks: the emergence of AI-based biomarkers of aging and longevity. Sci Soc Special Issue: Rise Machines Med 2019;40(8):P546 9. [482] Choi YK, Lazar A, Demiris G, et al. Emerging smart home technologies to facilitate engaging with aging. J Gerontol Nurs 2019;45(12):41 8. [483] Ahmed MR, Zhang Y, Feng Z, et al. Neuroimaging and machine learning for dementia diagnosis: recent advancements and future prospects. IEEE Rev Biomed Eng 2018;12:19 33. [484] De-Kuang H, Hsu CC, Chang KJ, et al. Artificial intelligence-based decision-making for age-related macular degeneration. Theranostics 2019;9(1):232 45. [485] Ting DSW, Lin H, Ruamviboonsuk P, et al. Artificial intelligence, the internet of things, and virtual clinics: ophthalmology at the digital translation forefront. Lancet Digital 2019. [486] Weng SF, Vaz L, Qureshi N, et al. Prediction of premature all-cause mortality: a prospective general population cohort study comparing machine-learning and standard epidemiological approaches. PLoS One 2019. [487] Raghunath S. Artificial intelligence examining ECGs predicts irregular heartbeat, death risk. Philadelphia, PA: American Heart Association’s Scientific Sessions 2019; 2019. [488] Holmgren G, Andersson P, Jakobsson A, et al. Artificial neural networks improve and simplify intensive care mortality prognostication: a national cohort study of 217,289 first-time intensive care unit admissions. J Intensive Care 2019;7. Article number: 44. [489] About chronic diseases. National Center for Chronic Disease Prevention and Health Promotion. CDC; 2019. 

8 SARS-CoV-2 and the COVID-19 pandemic

The manuscript for this book was completed in March 2020 and submitted for publication. It consisted of seven chapters, a preface and an epilogue. At the same time, the United States and the world were struck by a virulent and highly contagious coronavirus, referred to as the "novel" coronavirus or SARS-CoV-2, which produced the global pandemic known as COVID-19. Because of the unique and devastating nature of this health crisis, the book's publisher, Elsevier, and I felt the need for this additional, late-inserted chapter to offer a discussion of the virus, its epidemiology, its clinical and bioscience aspects, its public health implications, and the immediate role and response AI has provided in the early stages of this pandemic. I also went back through the relevant topics in Chapters 4 through 7 and, as you will have noted, added brief related comments on COVID-19 with references to the material in this new chapter. Hopefully, by the time you read this chapter, the virus will have been eliminated or dramatically reduced through an effective vaccine or antiviral therapeutic agents. I apologize for any information in this chapter that will have been modified, updated or proven inaccurate by the time of your reading. The information provided here is either traditional, documented public health information from previous epidemics and pandemics or information and hypotheses being considered by experts as of September 2020.

8.1 Background

8.1.1 Definitions

An endemic level of disease can be defined as the level of observable disease found in a community and considered a baseline or expected level. Occasionally, the expected level of disease may rise, often suddenly, in a defined geographic area; this is termed an "outbreak." If the rise in cases is grouped in a specific place, it is considered a "cluster," but if the cases are broadly distributed, it is considered an "epidemic." A pandemic refers to an epidemic that has spread over several countries or continents, usually affecting a large number of people [1]. Epidemics and pandemics occur when an infectious agent (e.g., a virus) is sufficiently virulent and contagious to be conveyed to a large number of susceptible hosts (e.g., humans). These conditions may result from:
• A recent increase in the amount or virulence of the agent;
• The recent introduction of the agent into a setting where it has not been before;
• An enhanced mode of transmission so that more susceptible persons are exposed;
• A change in the susceptibility of the host response to the agent; and/or
• Factors that increase host exposure or involve introduction through new portals of entry [2].

8.1.2 History of pandemics

8.1.2.1 Historical overview

Outbreaks of infectious disease have shaped the economic, political, and social aspects of human civilization, their effects often lasting for centuries. These outbreaks have defined some of the basic tenets of modern medicine, with the development of the principles of epidemiology, prevention, immunization, and the field of public health. Throughout history, pandemic outbreaks have decimated societies, determined the outcomes of wars, and wiped out entire populations; yet, paradoxically, they have ushered in new innovations and created and advanced sciences including medicine, immunology, genetics, and public health, as well as the fields of economics and political science [3]. The best-known examples of recorded plagues are those referred to in religious writings starting with the Old Testament. The Athenian plague is a historically documented event that occurred in 430–426 BCE during the Peloponnesian War. This plague affected a majority of the inhabitants of the overcrowded city-state and claimed the lives of more than 25% of the population [4]. Subsequent plagues over the centuries affected the Roman Empire (the Antonine plague [5]), followed by the Justinian plague [6], and forward to the 14th century and the Black Plague, a global outbreak of bubonic plague that originated in China in 1334, arrived in Europe in 1347, and over the following 50 years reduced the global population from 450 million to possibly below 300 million. Some estimates claim that the Black Death claimed up to 60% of lives in Europe at that time [7].

8.1.2.2 Recent history

Three influenza pandemics occurred at intervals of several decades during the 20th century, the most severe of which was the so-called "Spanish Flu" (caused by an A[H1N1] virus), estimated to have caused 20–50 million deaths in 1918–19. Milder pandemics occurred subsequently in 1957–58 (the "Asian Flu," caused by an A[H2N2] virus) and in 1968 (the "Hong Kong Flu," caused by an A[H3N2] virus), each estimated to have caused one to four million deaths. Polio (classified as an epidemic) occurred in the United States from 1916 to its peak in 1952. Of the 57,628 reported cases, there were 3145 deaths. Dr. Jonas Salk developed a vaccine, and in 1962 the average number of cases dropped to 910. The CDC reports that the United States has been polio-free since 1979 [8]. Unfortunately, there have been recent reports of new cases of polio developing in industrialized and developing countries [9]. The first documented cases of the human immunodeficiency virus (HIV) occurred in 1981. The pandemic first appeared to be a rare lung infection originating in Africa. It is now known that the virus damages the body's immune system and compromises its ability to fight off infections.


Acquired immune deficiency syndrome (AIDS) is the final stage of HIV infection and the 6th leading cause of death in the United States among people 25–44 years old. While no cure currently exists, treatment drugs have been developed, and the number of deaths has fallen 19% since 2005 [10]. The first influenza pandemic of the 21st century occurred in 2009–10 and was caused by an influenza A(H1N1) virus. This H1N1 pandemic was a reprise of the "Spanish flu" pandemic of 1918, but with far less devastating consequences. Suspected to be a re-assortment of bird, swine, and human flu viruses, it was coined the "swine flu" [11]. For the first time, a pandemic vaccine was developed, produced and deployed in multiple countries during the first year of the pandemic. While most cases of pandemic H1N1 were mild, it is estimated that this 2009 pandemic caused between 100,000 and 400,000 deaths globally in the first year alone [12]. Other prominent epidemics and pandemics that occurred in the early 21st century included Ebola, Lassa fever, Middle East respiratory syndrome coronavirus (MERS-CoV), Nipah and henipavirus diseases, Zika, and others [13]. The first outbreak of Severe Acute Respiratory Syndrome (SARS), at the start of the 21st century, was caused by the SARS coronavirus (SARS-CoV-1) and started in China. It affected fewer than 10,000 individuals, mainly in China and Hong Kong, but also in other countries, including 251 cases in Canada (Toronto). The severity of respiratory symptoms and a mortality rate of about 10% caused global public health concern. Through the vigilance of public health systems worldwide, the outbreak was contained by mid-2003 [14]. This is a sad statement when considering the virtually uncontrolled evolution and spread of the SARS-CoV-2 pandemic being experienced during the second decade of the 21st century. How could it have happened? The novel coronavirus (SARS-CoV-2), albeit more contagious than SARS-CoV-1, was allowed to spread uncontrolled because of inadequate attention (personal responsibility and political accountability) to the simplest cardinal public health measures for controlling infectious disease: testing, quarantine, social distancing, copious hygiene (hand-washing), mask wearing, and contact tracing. The result has been otherwise avoidable human suffering [15].

8.1.3 Incidence and prevalence of COVID-19

Originating in the city of Wuhan, China in December 2019, the novel coronavirus spread rapidly throughout China (an epidemic) and, within two months, throughout the entire world, becoming the pandemic labeled COVID-19. At the time of this writing (August 31, 2020) the pandemic had spread to 213 countries and territories and had escalated to a total of 25,620,939 reported cases and 854,222 deaths worldwide; the United States accounted for 6,210,979 cases (24% of the worldwide total) and 187,713 deaths (22% of the world's total) [16,17]. In the United States, COVID-19 has already become the 3rd leading cause of death in 2020, behind heart disease and cancer. There is little doubt that when you read this book, these case numbers and mortality rates will have grown substantially, hopefully less than currently predicted.


8.2 Pathogenesis and bioscience considerations for SARS-CoV-2

8.2.1 Mechanisms

Viruses are not living cells or organisms. They are obligate parasites: non-living entities that lack metabolic machinery of their own to generate energy or to synthesize proteins. Rather, they require a living host (the "obligate" relationship) to exploit or infect (enter) so they can replicate and complete their life cycle (see Fig. 8–1 and Life cycle below). The invading virus uses either its genomic DNA or RNA to replicate in the host cell. Coronaviruses (CoV) are a family of RNA viruses that typically cause mild respiratory disease in humans. Exceptions include MERS-CoV and SARS-CoV-1, thought to be driven by the spillover of bat-adapted CoVs into an intermediate host (see below). The novel coronavirus (SARS-CoV-2) is a single positive-strand RNA virus with one of the largest genomes known among RNA viruses. Because such viruses are poorly adapted to the human host, when they are transmitted to humans (e.g., SARS-CoV-2) they are generally associated with more severe clinical presentations. Also, once infection occurs, the virus can be highly transmissible from person to person, as SARS-CoV-2 has demonstrated [18].

8.2.2 Theories

Several studies suggest that antibodies against non-SARS CoVs are highly prevalent in the general population, including children, suggesting that most individuals have been infected by CoVs and have potentially developed a certain degree of (protective) immune response [19]. The severity and the clinical picture could even be related to the activation of an exaggerated immune mechanism (a "cytokine storm") causing uncontrolled inflammation (i.e., the immune system as "our worst enemy"). The hypothesis that SARS-CoV-1 (or other, antigenically similar CoVs) has silently infected a significant proportion of the population, inducing herd immunity (see "Treatment and management strategies" below), needs to be confirmed. Indeed, immunity against the infection, or patterns of semi-immunity (the capacity of the immune system to avoid severe infection), may be due to cellular rather than humoral immune responses. Within 19 days after symptom onset, 100% of 285 patients with COVID-19 tested positive for antiviral immunoglobulin-G (IgG). Seroconversion for IgG and IgM (the transition of test results for IgG or IgM against SARS-CoV-2 from negative to positive in sequential samples) occurred simultaneously or sequentially. Both IgG and IgM titers plateaued within six days after seroconversion [20]. Serological testing may be helpful for the diagnosis of suspected patients with negative RT-PCR results and for the identification of asymptomatic infections. Animal models suggest that the efficiency of T lymphocyte-mediated immune responses (see Chapter 6, page 4) is pivotal for controlling SARS-CoV infections [21]. There are currently no data on the specific role of either humoral or cellular immunity or innate immunity in patients recovering from COVID-19. T lymphocytes responsible for clinically relevant antiviral immune responses have a significant chance of being locally present in, or close to, respiratory epithelia [22]. It is very possible that exclusive detection of humoral immunity against SARS-CoV-2 leads to an underestimation of anti-SARS-CoV-2 immune responses. It becomes plausible that, after infection by SARS-CoV-2, a sort of race


decides the course of events. Either a cellular innate immune response rapidly clears SARS-CoV-2 without any (or with only mild) clinical signs of infection, or the virus causes a state of immunosuppression that debilitates and sometimes overwhelms the host's defense [23]. San Francisco-based Vir Biotechnology has identified several human neutralizing monoclonal antibody (mAb) candidates against SARS-CoV-2. The lead antibody's ability to neutralize the live SARS-CoV-2 virus has been confirmed in two different laboratories. It binds to an epitope (the specific site on the viral antigen molecule recognized by the antibody) that is also present on the SARS-CoV-1 virus that causes SARS. This means the antigen is highly conserved and less likely to disappear should the virus mutate or develop resistance to the antibody [24]. Researchers have analyzed genomic data related to the overall molecular structure of the new coronavirus. Their testing (AI machine learning karyotyping analysis) has traced this novel coronavirus to a strain found in the Malayan pangolin (anteater) containing genomic regions that are very closely related to the human virus. Their analysis showed that the genome resembles that of a bat coronavirus discovered after the COVID-19 pandemic began. However, in SARS-CoV-2, the binding region of the spike protein resembles that of the virus found in pangolins (anteaters). This provides additional evidence that the coronavirus that causes COVID-19 almost certainly originated in nature, most likely in bats, with an intermediate animal (anteater or monkey?) host, and was ultimately transmitted to humans ("zoonotic spillover") [25]. Most important among these findings is the receptor binding domain (spike protein) that dictates how the virus is able to attach to and infect human cells (see Life cycle below). This comparative analysis of genomic data dispelled the postulate that the virus was laboratory-constructed or "manipulated." Rather, it promotes a lesson learned: to reduce human exposure to wildlife and to ban the trade and consumption of wildlife (e.g., "wet markets" in China). This genetic information concludes that "coronaviruses clearly have the capacity to jump species boundaries and adapt to new hosts (a virus was recently reported in Malayan tigers at the Bronx Zoo [26]), making it straightforward to predict that more will emerge in the future." However, as not all of the early COVID-19 cases were wet-market associated, it is possible that the emerging story is more complicated than first suspected. The genomic data of the new coronavirus responsible for COVID-19 show that its spike protein contains some unique adaptations. One of these adaptations gives this coronavirus a special ability to bind to a specific protein on human cells called angiotensin-converting enzyme 2 (ACE-2). Human ACE-2 is expressed in epithelial cells of the lung and serves as an entry receptor site for the SARS-CoV-2 spike glycoprotein [27]. ACE-2 genetic polymorphism (the occurrence of different forms in the life cycle of an individual organism), represented by diverse genetic variants in the human genome, has been shown to affect virus-binding activity [28], suggesting a possible genetic predisposition to COVID-19 infection. Thus, machine learning analysis of genetic variants from asymptomatic, mild or severe COVID-19 patients can be performed to classify and predict people based on their vulnerability or resistance to potential COVID-19 infection.
Furthermore, the machine learning model can also return those prioritized genetic variants, such as ACE-2 polymorphism. The entire genome of the 2019-novel coronavirus is more than 80% similar to the previous human SARS-like bat CoV. Thus, previously used animal models for SARS-CoV can be


utilized to study the infectious pathogenicity of SARS-CoV-2. CRISPR-mediated, genetically modified hamsters or other small animals (see CRISPR, Chapter 7, page 303, and below) can be utilized for the study of the pathogenicity of novel coronaviruses. In such studies, AI predictions can be used to investigate the inhibitory role of candidate drugs against SARS-CoV-2 [29].
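The variant-based risk classification just described can be illustrated with a minimal, hypothetical sketch; the cohort size, variant encoding, and severity labels below are made-up placeholders, not data from the studies cited above:

```python
# Hypothetical sketch: classify COVID-19 severity from host genetic variants
# (e.g., ACE-2 polymorphisms) and report which variants the model prioritizes.
# All data here are synthetic placeholders, not results from any cited study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients, n_variants = 500, 40                        # illustrative sizes
X = rng.integers(0, 3, size=(n_patients, n_variants))   # 0/1/2 copies of the alternate allele
# Synthetic label: pretend variant 0 (an "ACE-2-like" site) raises risk of severe disease.
risk = 0.15 + 0.25 * (X[:, 0] > 0)
y = rng.random(n_patients) < risk                       # True = severe, False = mild/asymptomatic

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))

# "Prioritized" variants, analogous to surfacing an ACE-2 polymorphism:
top = np.argsort(model.feature_importances_)[::-1][:5]
print("top-ranked variant indices:", top)
```

The feature-importance ranking at the end mirrors the idea of returning prioritized genetic variants, as described above.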

8.2.3 Life cycle of SARS-CoV-2

The pathogenesis and life cycle of SARS-CoV-2 include a complex of RNA genomic transfers and regenerations to produce the proliferation of the virus. The extracellular and intracellular (host cytoplasm) processes involved are illustrated in Fig. 8–1 and traced through the following steps:
1. When the spike protein of SARS-CoV-2 binds to the ACE-2 receptor of the host cell, the virus enters the cell;
2. The fatty envelope of the virus is then peeled off, which releases the viral genomic RNA into the cytoplasm;
3. The ORF1a and ORF1b (gene) RNAs are produced from the genomic RNA and then translated into the pp1a and pp1ab proteins, respectively;
4. Proteins pp1a and pp1ab are cleaved by protease (proteolysis) to make a total of 16 nonstructural proteins;
5. Some of the nonstructural proteins form a replication/transcription complex (RNA-dependent RNA polymerase, RdRp), which uses the (+) strand genomic RNA as a template;
6. The (+) strand genomic RNA produced through the replication process becomes the genome of a new virus particle;
7. Subgenomic RNAs produced through transcription are translated into structural proteins (S: spike protein, E: envelope protein, M: membrane protein, and N: nucleocapsid protein) which form a viral particle;
8. Spike, envelope and membrane proteins enter the endoplasmic reticulum, and the nucleocapsid protein is combined with the (+) strand genomic RNA to become a nucleoprotein complex;
9. This complex merges into the complete virus particle in the endoplasmic reticulum-Golgi apparatus compartment; and
10. The new viral particles are released (exocytosis) to the extracellular region through the Golgi apparatus and the vesicle.

FIGURE 8–1 SARS-CoV-2 Life Cycle. The life cycle of the novel coronavirus (SARS-CoV-2) begins when its spike protein attaches to an ACE-2 receptor on a cell membrane (1) and penetrates the cell, where it replicates genomic RNA (2–4), then produces subgenomic RNAs (5–6), synthesizes various spike proteins through translation (7), and new genomic RNA becomes the genome of a new virus particle (8). This combines with the (+) strand genomic RNA, merges in the endoplasmic reticulum-Golgi apparatus into a complete virus particle within a vesicle (9), and the new viral particles are released (exocytosis) to the extracellular region (10). Source: Shereen MA, Khan S, Kazmi A, et al. COVID-19 infection: origin, transmission, and characteristics of human coronaviruses. J Adv Res 2020;24:91–98.

8.2.4 Review of AI regarding the pathogenesis of SARS-CoV-2

1. Forbes Magazine reported on a global artificial intelligence database company, BlueDot, using an AI-powered algorithm, machine learning, and natural-language processing to analyze information from a multitude of sources that can track over a hundred infectious diseases [30].
2. AI is playing an important role in evaluating the pathogenesis, diagnosis and treatment of the SARS-CoV-2 virus. There is an urgent need to develop a system with AI-based machine learning capacity to analyze and integrate imaging-based, patient-based, clinician-based, and molecular measurements-based data, to fight the outbreak of COVID-19 and enable more efficient responses to unknown infections in the future [31].
3. Vaxign is a reverse vaccinology tool being used with the Vaxign-ML machine learning tool to predict COVID-19 vaccine candidates. A study applied the state-of-the-art Vaxign reverse vaccinology (RV) and Vaxign-ML machine learning strategies to the entire SARS-CoV-2 proteome, including both structural and non-structural proteins, for vaccine candidate prediction. The results indicate for the first time that many non-structural proteins could be used as potential vaccine candidates [32].
4. AI technologies are powerful tools against COVID-19 and are widely used in combating this pandemic. A survey investigated the main scope and contributions of AI in combating COVID-19 from the aspects of disease detection and diagnosis, virology and pathogenesis, drug and vaccine development, and epidemic and transmission prediction. AI mainly focuses on medical image inspection, genomics, drug development, and transmission prediction, and thus still has great potential in this field [33].
5. On March 16, 2020, the White House issued a call to action for global AI researchers to develop new algorithms and data mining techniques to assist in COVID-19 related research. Within a short period of time, advanced machine learning techniques were developed and implemented to better understand the pattern of viral spread, further improve diagnostic speed and accuracy, develop novel effective therapeutic approaches, and potentially identify the most susceptible people based on personalized genetic and physiological characteristics. This is only the beginning of a permanent role AI will play in global health care [34].

8.3 Clinical considerations regarding SARS-CoV-2 infection

8.3.1 Clinical manifestations (signs and symptoms)

Reported illnesses with the novel coronavirus have ranged from mild symptoms to severe illness and death for confirmed COVID-19 cases. Symptoms may appear two to 14 days after exposure (based on the incubation period of SARS-CoV viruses) and include fever, cough and shortness of breath. Elderly and immune-compromised patients are at greater risk for contracting the virus and for poor outcomes. However, significant numbers of infections in young and healthy people are also being reported, though generally with better outcomes. Spread occurs through respiratory droplets produced when an infected person coughs or sneezes. These droplets persist in the air for extended periods and can land in the mouths or noses of people who are nearby, who may also inhale the virus into their lungs. Older age, obesity, and comorbidities have consistently been reported as risk factors for an unfavorable prognosis or protracted disease ("long haulers"). Less clear so far has been how the number and types of comorbidities influence the outcome. An epidemiologic clarification was provided through a nationwide Chinese retrospective cohort study involving 1590 PCR-confirmed (see Antigen testing below) COVID-19 cases (mean age, 49 years; 43% female) diagnosed between December 11, 2019, and January 31, 2020. The most common symptoms were fever, dry cough, and fatigue (88%, 70%, and 43%, respectively). The mean incubation period was four days. According to the 2007 American Thoracic Society/Infectious Diseases Society of America guideline criteria for community-acquired pneumonia, 16% of the cases were considered severe. Reported proportions with comorbidities included 17% hypertension, 8% diabetes, 4% cardiovascular disease, 2% cerebrovascular disease, 2% chronic obstructive pulmonary disease (COPD), 1% chronic kidney disease, and 1% malignancy. At least one comorbidity was significantly more common in severe than in non-severe cases (33% vs. 10%) [35]. Obesity puts those with coronavirus disease 2019 (COVID-19) at particularly high risk of death, more so than related risk factors such as diabetes or hypertension, according to a study of patient records by researchers from Kaiser Permanente [36]. Coronavirus disease leads to fast activation of innate immune cells, especially in patients developing severe disease. Innate immune activation, elevated levels of many pro-inflammatory


effector cytokines (e.g., TNF, IL-1β, IL-6, IL-8, G-CSF and GM-CSF), as well as higher levels of chemokines (e.g., MCP1, IP10 and MIP1α), are also found in those who are critically ill. In addition, the levels of some T cell-derived cytokines (e.g., IL-17) are increased [37]. A cytokine storm can develop that triggers a hyperinflammatory state. This inflammatory clinical response leaves virtually all organ systems vulnerable to adverse effects from the novel coronavirus. Of increasing concern are the cardiovascular effects resulting from perivasculitis (inflammation of the adventitia and endothelial lining of blood vessel walls; see Chapter 7, page 297) [38]. Anti-inflammatories (steroids) and cytokine inhibitor drugs (e.g., checkpoint inhibitors, IgG, interleukin-6 blockers) are being studied and beginning to show some benefits in advanced cases and late-stage disease [39]. AI is playing an important role in evaluating the pathogenesis, diagnosis and treatment of the SARS-CoV-2 virus. There is an urgent need to develop a system with AI-based machine learning capacity to analyze and integrate imaging-based, patient-based, clinician-based, and molecular measurements-based data, to fight the outbreak of COVID-19 and enable more efficient responses to unknown infections in the future [31].

8.3.2 Diagnostic testing

The clarion call during the early stages of the COVID-19 pandemic was "Testing, Testing, Testing." Tracking ("contact tracing") an invisible virus is the only way to control it, and the most effective strategy to accomplish that goal starts with building a comprehensive system to test anyone who may be infected. Once that is accomplished, positive cases can be isolated and "contact traced" (identifying persons who may have come into contact with the infected person), with those contacts tested as well and all positive cases isolated. This critical "diagnostic" process is conducted through two types of tests: one testing for the antigen (people who are currently infected) and a second testing for antibodies to the antigen (people previously infected who have developed antibodies to the virus). Continuing efforts are being made to develop novel diagnostic approaches to COVID-19 using machine learning algorithms. Machine learning based screening of SARS-CoV-2 assay designs using a CRISPR-based virus detection system is demonstrating high sensitivity and speed. Neural network classifiers have been developed for large-scale screening of COVID-19 patients based on their distinct respiratory patterns. Also, a deep-learning based analysis system of thoracic CT images was constructed for automated detection and monitoring of COVID-19 patients over time. Rapid development of automated diagnostic systems based on AI and machine learning can not only contribute to increased diagnostic accuracy and speed, but will also protect healthcare workers by decreasing their contact with COVID-19 patients.
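As a concrete illustration of the deep-learning CT analysis mentioned above, here is a minimal sketch of a small convolutional network that labels a chest-CT slice; the architecture, input size and class labels are illustrative assumptions, not the published systems:

```python
# Minimal sketch (illustrative only): a small CNN that labels a chest-CT slice
# as COVID-suspicious vs. other. Real systems use far larger, validated models.
import torch
import torch.nn as nn

class CTSliceNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)      # 2 classes: COVID-suspicious / other

    def forward(self, x):                        # x: (batch, 1, H, W) grayscale slices
        x = self.features(x).flatten(1)
        return self.classifier(x)

model = CTSliceNet()
dummy_batch = torch.randn(4, 1, 256, 256)        # 4 fake 256x256 CT slices
logits = model(dummy_batch)
print(logits.shape)                              # torch.Size([4, 2])
```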

8.3.2.1 Antigen testing

An antigen test reveals if a person is actively infected with the SARS-CoV-2 virus. The test detects certain proteins that are part of the virus. Using a nasal or throat swab to get a fluid sample, antigen tests can produce results in minutes. Because these tests are faster and less


expensive than molecular tests, some experts consider antigen tests more practical to use for large numbers of people. Once the infection has gone, the antigen disappears. A positive antigen test result is considered very accurate, but there is an increased chance of false negative results, meaning it is possible to be infected with the virus but have a negative antigen test result. So, antigen tests are not as sensitive as molecular tests. This type of test already exists for strep throat, influenza, tuberculosis, HIV, and other infectious diseases [40].
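To see why a less sensitive test still carries a meaningful false-negative risk, the short calculation below combines assumed (purely illustrative) sensitivity, specificity and prevalence figures into predictive values using Bayes' rule:

```python
# Illustrative only: how sensitivity, specificity, and prevalence combine into
# predictive values (Bayes' rule). The numbers below are assumptions, not
# published performance figures for any specific antigen test.
sensitivity = 0.85    # P(test positive | infected)
specificity = 0.98    # P(test negative | not infected)
prevalence  = 0.05    # fraction of the tested population actually infected

p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_pos                      # P(infected | test positive)
npv = specificity * (1 - prevalence) / (1 - p_pos)          # P(not infected | test negative)
false_negative_rate = 1 - sensitivity                       # infections the test misses

print(f"PPV {ppv:.2%}, NPV {npv:.2%}, false-negative rate {false_negative_rate:.0%}")
```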

8.3.2.2 Molecular genetic test (PCR test)

This test detects genetic material of the virus using a lab technique called polymerase chain reaction (PCR). Also called a PCR test, it requires a health care worker to collect fluid from a nasal or throat swab or from saliva. Results may be available in minutes if analyzed onsite, or in one to two days if sent to an outside lab. Molecular tests are considered very accurate when properly performed by a health care professional, but the rapid test appears to miss some cases. The FDA has also approved certain COVID-19 at-home test kits, available only with doctor approval. These can be done with a nasal swab kit or a saliva kit, with the sample mailed to a lab for testing. The FDA warns consumers against buying unapproved home tests, because they may be inaccurate and unsafe [41].

8.3.2.3 Antibody testing

Antibody tests check a person's blood for antibodies, which may (or may not) tell if the person had a past infection with the coronavirus. Antibodies are proteins that help fight off infections and thus can provide immunity and protection against getting the infection again (this is uncertain). Neutralizing antibodies are specific to an antigen (the virus) and thus provide protection only against the specific disease associated with that antigen (in the case of coronavirus as the antigen, the disease being COVID-19). If the person is exposed to the antigen (coronavirus) again, the antibodies provide "memory" (anamnestic protection) against the disease [42]. However, there are increasing reports of reinfection with the novel coronavirus, suggesting that some coronavirus antibodies may not be neutralizing antibodies [43]. Except in instances in which viral testing is delayed, antibody tests should not be used to diagnose a current COVID-19 infection. An antibody test may not show if you have a current COVID-19 infection because it can take one to three weeks after infection for your body to make antibodies. To see if you are currently infected, you need a viral test. Viral (antigen) tests identify the virus in samples from your respiratory system, such as a swab from the inside of your nose. It is possible to isolate the coronavirus from respiratory secretions, blood, urine, and fecal samples for diagnostic testing. Clinically, infections can be diagnosed with respiratory viral panels that are widely commercially available [44].

8.4 Treatment and management strategies

Care for coronavirus patients is supportive in nature and may include supplemental oxygen, fluid administration, and, for critically ill patients, management in intensive care units and rescue therapies such as extracorporeal membrane oxygenation (ECMO). Stringent infection control is critical to preventing transmission to healthcare workers


and other patients. Droplet precautions (e.g., personal protective equipment [PPE] including a surgical or procedure mask, gown, and gloves) are indicated during the treatment of all coronavirus patients, and such protocols for droplet-spread respiratory viruses are part of hospital infection control practices. Additional respiratory precautions may also be appropriate during aerosol-generating procedures [45]. At the time of the writing of this chapter on COVID-19, treatment and management strategies continue to grow, some proving effective and some ineffective. Because this is being written at the height of the pandemic (late 2020), it must be considered a prospective view of appropriate treatment and management as recommended by the medical experts guiding us through this difficult period. It will be of interest to readers in the months and years ahead to evaluate retrospectively which of these treatment and management approaches proved most valuable. Hopefully, it will be a prescient lesson to future generations in their preparedness and response to the epidemics and pandemics they may face. Future readers of this book will be able to retrospectively assess the strengths and weaknesses of each.

8.4.1 General measures

8.4.1.1 Basic preventive steps

1) Shelter-in-place or "self-isolation" (remain in your home with only absolutely necessary outdoor activities);
2) Social distancing (separation of >6 feet between people);
3) Wash your hands copiously and frequently;
4) Face masks (the CDC and surgeon general first suggested use only if infected; now strongly recommended for full-time use);
5) If symptoms occur (fever, cough, chills, aches and pains), get tested and, if positive, self-quarantine for a minimum of 14 days and retest before resuming normal activities;
6) If symptoms advance over two to three days, seek medical attention.

8.4.1.2 Mitigation

This term describes the procedures and policies intended to reduce the risks of infectious spread. Results of mitigation are measured by "flattening the curve," i.e., the bell-shaped curve produced in a bar graph measuring increases (and eventual decreases) in the number of positive cases on a daily basis. The curve can also track the percentage of positive cases resulting from testing, a significant measure of how effectively viral spread is being controlled.
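As a simple illustration of the positivity-rate measure described above, the following sketch computes a rolling percent-positive series from made-up daily test counts:

```python
# Illustrative only: rolling test-positivity rate from made-up daily counts.
import pandas as pd

daily = pd.DataFrame({
    "tests":     [1000, 1200, 900, 1500, 1600, 1400, 1300],
    "positives": [ 120,  150, 110,  160,  150,  120,  100],
})
daily["pct_positive"] = 100 * daily["positives"] / daily["tests"]
daily["pct_positive_7d"] = (
    100 * daily["positives"].rolling(7, min_periods=1).sum()
        / daily["tests"].rolling(7, min_periods=1).sum()
)
print(daily.round(1))
```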

8.4.1.3 Contact tracing

Epidemiologists ("disease detectives") attempt to identify potential "spreaders" by identifying the index patient, sometimes called "patient zero." Depending on what they already know about that patient's condition (how the disease is spread, its natural history, what symptoms it causes), they interview the patient to learn about their movements and identify all


close contacts (persons, places and things). Based on the answers, public health workers contact each associated person (anyone having had contact with the index patient) to explain their risk, offer screening for the infection, and conduct regular monitoring for symptoms of the infection. This important public health measure is not progressing well (late 2020) due to limited "tracer" personnel and public resistance to sharing information.

8.4.1.4 Modeling

Epidemiologic modeling is used to:
1) Study the mechanisms by which a disease is spreading;
2) Monitor (graphically) through testing positive case volumes, death rates and other vital statistics [45];
3) Predict the future course of an outbreak;
4) Guide mitigation; and
5) Evaluate strategies to control an epidemic.
Modeling data produce a bell-shaped curve with the x-axis representing time and the y-axis representing the number of cases (a minimal SIR sketch follows below).
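The SIR ("susceptible-infectious-recovered") formulation referenced in the next subsection is the classic starting point for such models; below is a minimal sketch with arbitrary, illustrative parameters rather than a calibrated COVID-19 model:

```python
# Minimal SIR model sketch (illustrative parameters, not a calibrated COVID-19 model).
import numpy as np

N = 1_000_000                 # population
beta, gamma = 0.30, 0.10      # transmission and recovery rates; R0 = beta/gamma = 3
S, I, R = N - 1.0, 1.0, 0.0
history = []

for day in range(365):
    new_infections = beta * S * I / N
    new_recoveries = gamma * I
    S -= new_infections
    I += new_infections - new_recoveries
    R += new_recoveries
    history.append(I)

peak_day = int(np.argmax(history))
print(f"Peak active infections ~{max(history):,.0f} around day {peak_day}")
```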

8.4.1.5 Herd immunity and R naught (R0)

The concept of herd immunity is an epidemiological formula in which a sufficient proportion of people are immunized or vaccinated against a pathogen, thus reducing the rate of infection throughout the population. The required vaccination level is determined by a threshold based on the "R naught" or R0 of the SIR ("susceptible-infectious-recovered") formulation, a factor that determines the transmissibility of the pathogen. R0 denotes the average number of secondary cases of an infectious disease that one case would generate in a completely susceptible population. When one infected person infects more than one other person, a potential exponential increase in infections results, leading to an epidemic or pandemic. If, however, transmission on average remains below one new infection per case, the spread of infection decreases. Producing that situation across a population requires that a majority (an estimated 60–70%) become immune, yielding "herd immunity" [46]. In the absence of a vaccine, developing herd immunity to an infectious agent requires large numbers of people actually being infected, developing antibodies to the infectious agent and thus becoming immunized against future infection. Scientists are not always certain whether this immunity is permanent or how long it might last. But even assuming that immunity is long-lasting, a very large number of people must be infected to reach the 60–70% herd immunity threshold required. During this process, mortality from certain infections like SARS-CoV-2 could reach unacceptable levels, as occurred in Sweden [47]. Nor does a pathogen magically disappear when the herd immunity threshold is reached. Rather, it only means that transmission begins to slow down and that a new epidemic is unlikely to start up again. An uncontrolled pandemic could continue for months after herd immunity is reached, potentially infecting many more millions in the process. These additional infections are what epidemiologists refer to as "overshoot" [47].
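The 60–70% figure quoted above follows from the standard herd-immunity threshold formula, 1 - 1/R0; a short illustration with assumed R0 values:

```python
# Herd-immunity threshold = 1 - 1/R0 (standard formula; R0 values are illustrative).
for r0 in (2.5, 3.0, 3.5):
    threshold = 1 - 1 / r0
    print(f"R0 = {r0}: ~{threshold:.0%} of the population must be immune")
# R0 of roughly 2.5-3.5 gives thresholds near the 60-70% range cited in the text.
```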


8.4.2 Therapeutics

8.4.2.1 Monoclonal antibodies

Monoclonal antibodies (any drug with the name suffix "-mab") are laboratory-engineered antibodies (from Regeneron, Eli Lilly, and others) used to mimic the immune system's own antibodies for a specific antigen. These antibodies are made by identical immune cells that are all clones of a unique parent cell. Monoclonal antibodies can have monovalent affinity, in that they bind to the same epitope (the part of an antigen that is recognized by the antibody). Rather than wait for vaccines to coax the body to make its own antibodies, scientists are studying versions of these molecules to directly disable the SARS-CoV-2 coronavirus [48]. Monoclonal antibodies are nowadays often generated by isolating or transforming antibody-producing cells taken directly from immunized animals or patients, and transplanting the antibody-encoding genes of these cells into suitable producer cell lines, rather than by using hybridoma technology [49].

8.4.2.2 Convalescent plasma (serum)

Plasma is collected from patients who have recovered from COVID-19. Each donates a pint of blood. The red and white blood cells are separated and put back into the donor's bloodstream while the blood plasma, rich with virus-fighting antibodies, is kept aside [50]. Four hundred and three monoclonal antibodies were isolated from three convalescent COVID-19 patients. They showed that the patients had strong immune responses against the viral spike protein, a complex that binds to receptors on the host cell. A subset of antibodies was able to neutralize the virus [51]. Early results (late 2020), however, are proving questionable.

8.4.2.3 Hydroxychloroquine (Plaquenil) combined with azithromycin (Zithromax)

A small sample survey showed that hydroxychloroquine treatment (a drug used for malaria and lupus) is associated with viral load reduction in COVID-19 patients and that its effect is reinforced by azithromycin (an antibiotic). A study reported in the New England Journal of Medicine concluded that the results do not support the use of hydroxychloroquine at present, outside randomized clinical trials testing its efficacy [52]. Further work is warranted to determine whether these compounds could be useful as chemoprophylaxis to prevent the transmission of the virus without significant adverse effects [53]. Continued testing, however, is indicating increased risks of adverse cardiac effects with this form of therapy.

8.4.2.4 Remdesivir

This drug is thought to interfere with the mechanism the coronavirus uses to make copies of itself (see Fig. 8–1 and the discussion of the life cycle above). Scientists are still working out exactly how that occurs. A preliminary report published in The New England Journal of Medicine showed that the drug shortened recovery time for people with COVID-19 from an average of 15 days to about 11 days [54]. Issues of storage, supply and cost of this drug are presenting serious limitations on its long-term value.


8.4.2.5 Dexamethasone (and corticosteroids)

As discussed above and in detail in Chapter 6, chronic inflammatory organ injury (e.g., heart, lungs, kidneys) may occur in severe COVID-19, with a subgroup of patients having markedly elevated levels of inflammatory markers. Several therapeutic interventions have been proposed to mitigate inflammatory organ injury in viral pneumonia, including glucocorticoids (i.e., dexamethasone). Glucocorticoids have been widely used in syndromes closely related to COVID-19, including SARS, Middle East respiratory syndrome (MERS), severe influenza, and community-acquired pneumonia. However, the evidence to support or discourage the use of glucocorticoids in these conditions has been weak [55]. In patients hospitalized with COVID-19, the use of dexamethasone resulted in lower 28-day mortality among those who were receiving either invasive mechanical ventilation or oxygen alone at randomization, but not among those receiving no respiratory support [56]. Other steroids and biologic agents are also beginning to show some promising results [57].

8.4.2.6 RNA screening

SARS-CoV-2 is an RNA virus, which means its genetic material is encoded in RNA. Once the virus is inside our cells, it releases its RNA and makes long viral proteins that compromise the immune system (see Life cycle above). The virus assembles new copies of itself and spreads to more parts of the body. One of the weapons in our cells' arsenal is an RNA surveillance mechanism called nonsense-mediated mRNA decay (NMD) that protects us from many genetic mutations that could cause disease. The genome of SARS-CoV-2 is a positive-sense, single-stranded RNA that can evade NMD and prevent it from degrading RNA by producing proteins that interact with certain proteins that modify the chemical structure of RNA. With the progression of new viral strains, this research on the fundamentals of RNA allows for the development of therapeutics and vaccines that directly target processes critical to a virus's life cycle (see Fig. 8–1, page 451) [58].

8.4.3 Vaccine (immunization)

By definition, a vaccine is a biological preparation that provides active, adaptive immunity to a particular infectious disease (e.g., that caused by SARS-CoV-2) by stimulating neutralizing antibodies to the source of the infection. It typically contains an agent that resembles the disease-causing microorganism, made from weakened or killed forms of the microbe (an attenuated virus), its toxins, or one of its surface proteins. The spike protein is the target for most of the COVID-19 vaccine human clinical trials, and so research centers on how the immune system, particularly B and T cells, responds to the spike protein. B cells are responsible for producing the antibodies that recognize SARS-CoV-2, while T cells play an important role in supporting the development of the B cell response (see Chapter 7). Vaccination is the act of giving a vaccine, usually as an injection, to immunize a person (immunization) and protect against a disease. Testing for an effective vaccine begins with giving the vaccine to animals such as mice or monkeys to see if it produces an immune response. Then Phase One vaccinates a small number of people to test safety and dosage as


well as to confirm that it stimulates the immune system. Phase Two includes hundreds of people split into groups (vaccine-injected and placebo), such as children and the elderly, to see if the vaccine acts differently in them, as well as to assess safety and the ability to stimulate the immune system. Phase Three gives the vaccine to thousands of people (again, in two groups) to see how many become infected compared with volunteers who received a placebo. These trials can detect rare side effects that might be missed in earlier studies. Finally, if the vaccine protects against the coronavirus in at least 50% of vaccinated people, it is considered effective, and regulators decide whether to approve the vaccine or not. During a pandemic, a vaccine may receive emergency use authorization before getting formal approval [59]. At least seven teams are developing vaccines using the virus itself, in a weakened or inactivated form. Around 25 groups say they are working on viral-vector vaccines, in which a virus such as measles or a recombinant adenovirus is genetically engineered so that it can produce coronavirus proteins in the body. At least 20 teams are aiming to use genetic instructions (in the form of DNA or RNA) for a coronavirus protein that prompts an immune response. Pfizer has recently reported 90% success (currently unconfirmed) with a genetic RNA vaccine. Such vaccine types require deep-freeze storage (-94°F) and a second dose (after 28 days) [60]. Finally, many researchers are experimenting with injecting coronavirus proteins directly into the body to mimic the coronavirus's outer coat [61].

8.4.4 CRISPR-Cas13 and RNA screening

A new Cas13 RNA screen has been developed to establish guide RNAs for the COVID-19 coronavirus and for human RNA segments, which could be used in vaccines, therapeutics and diagnostics. A novel CRISPR-based editing tool has been developed that enables researchers to target mRNA and knock out genes without altering the genome. Using the CRISPR-Cas13 enzyme, researchers have created a genetic screen for RNA, currently designed for use on human cells, which they say could also be used on RNA-containing viruses and bacteria. The developers have used their parallel-screening technique to create optimal guide RNAs for the SARS-CoV-2 coronavirus, which could be used for future detection and therapeutic applications. The platform is optimized to run massively parallel genetic screens at the RNA level in human cells because it is based on the CRISPR-Cas13 enzyme, which targets RNA instead of DNA. The data are collected by targeting thousands of different sites in human RNA transcripts to create a machine learning-based predictive model to expedite identification of the most effective Cas13 guide RNAs [62].
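As a rough illustration of the machine-learning step described above (ranking candidate Cas13 guide RNAs by predicted effectiveness), here is a minimal, hypothetical sketch built on simple sequence features; the features, data and model are illustrative assumptions, not the published screen:

```python
# Hypothetical sketch: score candidate Cas13 guide RNAs from simple sequence
# features. Data and features are synthetic; real screens use measured knockdown.
import random
from sklearn.ensemble import GradientBoostingRegressor

random.seed(0)
BASES = "ACGU"

def featurize(guide: str):
    gc = (guide.count("G") + guide.count("C")) / len(guide)
    # Crude positional features: one-hot of the first and last base.
    first = [int(guide[0] == b) for b in BASES]
    last = [int(guide[-1] == b) for b in BASES]
    return [gc, *first, *last]

# Synthetic training set: random 23-nt guides with a made-up "efficacy" signal.
guides = ["".join(random.choice(BASES) for _ in range(23)) for _ in range(300)]
X = [featurize(g) for g in guides]
y = [0.5 + 0.4 * f[0] + random.gauss(0, 0.05) for f in X]   # pretend GC content helps

model = GradientBoostingRegressor().fit(X, y)

candidates = ["".join(random.choice(BASES) for _ in range(23)) for _ in range(5)]
ranked = sorted(candidates, key=lambda g: model.predict([featurize(g)])[0], reverse=True)
print("top-ranked candidate guide:", ranked[0])
```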

8.4.5 Immunoinformatics

AI and immunoinformatics (computational immunology) play a central role in vaccine development by suggesting vaccine components, helping to understand viral protein structures, and helping medical researchers scour tens of thousands of relevant research papers at an unprecedented pace [63]. AI-supported preclinical studies in mice of a candidate vaccine based on this spike protein are already underway at NIH's Vaccine Research Center (VRC). But there will be many more steps after that to test safety and efficacy, and then to scale up to produce millions of doses. The National Institute of Allergy and Infectious


Diseases (NIAID) is now working with numerous biotechnology companies (AstraZeneca, Pfizer, J&J, Moderna, et al.) to use the latest findings to develop vaccine candidates using messenger RNA (mRNA), molecules that serve as templates for making proteins (see the Pfizer news release above, page 459). The goal is to direct the body to produce a spike protein in such a way as to elicit an immune response and the production of antibodies. Other forms of vaccine candidates are also in preclinical development [64]. AI and immunoinformatics are being used to better understand the structure of proteins involved in SARS-CoV-2 infection in the search for potential treatments and vaccines. Proteins have a three-dimensional structure, which is determined by their genetically encoded amino acid sequence (next-gen sequencing of the genetic code), and this structure influences the role and function of the protein. A Google DeepMind AI system called AlphaFold [65] uses amino acid sequences and known protein structures to make predictions and construct a "potential of mean force," which can be used to characterize a protein's shape. This system has been applied to predict the structures of six proteins related to SARS-CoV-2 [66].

8.4.6 Review of AI for clinical considerations for coronavirus infections

1. Continuing efforts are being made to develop novel diagnostic approaches to COVID-19 using machine learning algorithms. Machine learning based screening of SARS-CoV-2 assay designs using a CRISPR-based virus detection system (see Cas13 above) is demonstrating high sensitivity and speed. Neural network classifiers have been developed for large-scale screening of COVID-19 patients based on their distinct respiratory patterns. Also, a deep-learning based analysis system of thoracic CT images was constructed for automated detection and monitoring of COVID-19 patients over time. Rapid development of automated diagnostic systems based on AI and machine learning can not only contribute to increased diagnostic accuracy and speed, but will also protect healthcare workers by decreasing their contact with COVID-19 patients [67].
2. Five companies were highlighted for developing deep learning models to predict old and new drugs that might successfully treat COVID-19 (Scudellari M. Five companies using AI to fight coronavirus. IEEE Spectrum, March 19, 2020) [68].
3. Advanced deep learning-based CNN algorithms play a major role in extracting highly essential features, mostly from medical images. This technique, using CT and X-ray image scans, has been adopted in most of the recently published articles on the coronavirus, with remarkable results. Furthermore, according to this paper, deep learning technology has potential clinical applications [69].
4. A new framework has been proposed to detect COVID-19 using built-in smartphone sensors (IoT). The proposal provides a low-cost solution that ordinary people can use on their smartphones for virus-detection purposes. The AI-enabled framework reads the smartphone sensors' signal measurements to predict the grade of severity of the pneumonia as well as predicting the outcome of the disease [70].


5. AI and deep learning algorithms are being developed to enhance the detection and diagnosis of COVID-19. The need to provide access to accurate and low-cost tests for the diagnosis of COVID-19 is critical. Such AI algorithms can be used as an initial screening tool for suspected cases so that patients at higher risk could have confirmatory laboratory-based tests and be isolated if necessary. These algorithms could help health care providers triage patients with COVID-19 into potentially three groups: the 80% who have mild disease; the 15% who have moderate disease; and the 5% who have severe disease, including those at high risk of mortality. Finally, AI can facilitate the discovery of novel drugs with which to treat COVID-19 [71].

8.5 Epidemiology and public health considerations in COVID-19

8.5.1 Current epidemiologic considerations

The COVID-19 impact already indicates more disastrous effects than those of the 2003 severe acute respiratory syndrome (SARS-CoV-1). Many countries (e.g., China, Singapore, Hong Kong, South Korea, Italy, Spain and the USA) have relied on an extrapolation of classic infection-control and public-health metrics to contain the COVID-19 pandemic, similar to those used for previous SARS outbreaks. These range from extreme quarantine measures ("shelter-in-place" and "social distancing") to painstakingly detailed contact tracing with hundreds of contact tracers. However, such measures may not be as effective in 2020 for tackling the scale of COVID-19. Three vertically integrated digital and AI technologies are being introduced for monitoring, surveillance, detection, and prevention of COVID-19, and to mitigate its spread and its direct and indirect impact on worldwide healthcare systems [71]. First, the Internet of Things (IoT) is providing a platform that allows public-health agencies access to data for monitoring the COVID-19 pandemic. For example, the 'Worldometer' [72] provides a real-time update on the actual number of people known to have COVID-19 worldwide, including daily new cases of the disease, disease distribution by country and severity of disease (recovered, critical condition or death). Johns Hopkins University's Center for Systems Science and Engineering has also developed a real-time tracking map for following cases of COVID-19 across the world, using data collected from the US Centers for Disease Control and Prevention (CDC), the World Health Organization (WHO), the European Centre for Disease Prevention and Control, the Chinese Center for Disease Control and Prevention (China CDC) and the Chinese website DXY [73]. Second, big data is providing opportunities for performing modeling studies of viral activity and for guiding individual countries' healthcare policymakers in enhancing preparation for the outbreak. Using three global databases (the WHO International Health Regulations State Parties Self-Assessment Annual Reporting tool, Joint External Evaluation reports, and the Infectious Disease Vulnerability Index), health authorities are performing modeling studies for 'nowcasting' and forecasting COVID-19 disease activity throughout the world for public-health planning and control [74].
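To make this dashboard-style monitoring concrete, the following sketch aggregates country-level case counts from the Johns Hopkins CSSE time-series data; the CSV path reflects the repository layout as commonly published in 2020 and should be treated as an assumption that may have changed:

```python
# Sketch: compute daily new confirmed cases per country from the Johns Hopkins
# CSSE time-series CSV. The URL is an assumption based on the repository layout
# commonly published in 2020 and may have changed since.
import pandas as pd

URL = ("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/"
       "csse_covid_19_data/csse_covid_19_time_series/"
       "time_series_covid19_confirmed_global.csv")

df = pd.read_csv(URL)
# Columns: Province/State, Country/Region, Lat, Long, then one column per date.
cumulative = df.drop(columns=["Province/State", "Lat", "Long"]) \
               .groupby("Country/Region").sum()
daily_new = cumulative.diff(axis=1).clip(lower=0)   # new cases per day
print(daily_new.loc["US"].tail())                    # last few days for the US
```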


Third, digital technology is enhancing public-health education and communication. The government of Singapore has partnered with WhatsApp (owned by Facebook) to allow the public to receive accurate information about COVID-19 and government initiatives. Multiple social-media platforms (e.g., Facebook and Twitter) are currently being used by healthcare agencies to provide 'real-time' updates and clarify uncertainties with the public. Also, some facial-recognition companies (e.g., SenseTime and Sunell) have adopted thermal-imaging-enabled facial recognition to identify people with an elevated temperature [31]. The initial reaction to COVID-19 in many countries has been for healthcare facilities to reduce or even cease many clinical services, including closure of clinics and postponement of medical appointments or elective surgeries. However, such strategies cannot be sustained indefinitely if the COVID-19 pandemic extends beyond six months. Healthcare systems should plan to use digital-technology 'virtual clinics,' with telehealth consultations and imaging data uploaded from peripheral sites and interpreted remotely. This would ensure that patients continue to receive standard clinical care while reducing the physical crowding of patients into hospitals. Chatbots staffed by health professionals can also provide early diagnoses as well as patient education. And blockchain technologies can coordinate hospital, clinic and pharmacy patient information [31]. Undoubtedly, by the time you read this book, the AI literature and, more so, AI programs and research in the epidemiologic, public health, clinical and immunological considerations regarding COVID-19 will have proliferated into a major body of new science and "disruptive technologies" [75]. Indeed, the re-emergence of yet another, more virulent SARS-CoV virus and global pandemic emphasizes the ongoing and permanent challenge that infectious diseases pose and the need for global cooperation, AI and preparedness, even during "interim" periods. Besides classic public-health measures for tackling the COVID-19 pandemic, a wide range of digital technologies is being implemented in 2020 that can augment and enhance these public-health strategies. This COVID-19 health care crisis of 2020 provides a distinct opportunity to enhance the applications of AI technologies for immunology in the public health domain.

8.5.2 Review of AI for epidemiology and public health considerations

There are, however, inherent problems with AI solutions amid a pandemic. Many articles inflate AI's "effectiveness and scale," ignore the levels of human involvement required, and sometimes show careless assessment of the related risks. Taking it even further, the Brookings Institution suggests that "the COVID-19 AI-hype has been diverse enough to cover the greatest hits of exaggerated claims around AI" [76]. They list eight considerations for how to remain critically minded and realistic amid all the AI-coronavirus hype:

1. Look to the subject-matter experts;
2. AI needs lots of data;
3. Don't trust AI's accuracy;
4. Real-world deployment degrades AI performance;
5. Most predictions must enable an intervention to really matter;


6. AI is far better at minute details than big, rare events;
7. There will be unintended consequences;
8. Don't forget: AI will be biased.

While taking a cautious approach to AI's role in an evolving health care crisis, we must also consider the enormous contribution it makes to the scientific, clinical, and public health platform it provides. As the COVID-19 pandemic enters dangerous new phases, the critical question becomes whether and when to take aggressive public health interventions to slow the spread of COVID-19. A study was undertaken to develop AI-inspired methods for real-time forecasting and for evaluating intervention strategies to curb worldwide spread. A modified autoencoder for modeling the transmission dynamics of the epidemic was developed and applied to WHO surveillance data on cumulative and new COVID-19 cases and deaths as of March 16, 2020. The peak number of cumulative cases in the world with later intervention could reach 255,392,154 by January 2021, whereas with intervention one week earlier the peak number of cumulative cases was reduced to 1,530,276. The authors observed that delaying intervention by one month caused the maximum number of cumulative cases to increase 166.89 times and the number of deaths to increase from 53,560 to 8,938,725: disastrous consequences if immediate action to intervene is not taken [77].

MIT Technology Review published an article describing the changes needed in three areas if we want AI to be useful in future pandemics: first, prediction, through database companies using a range of natural-language processing (NLP) algorithms to monitor news outlets and official health-care reports in different languages around the world; second, machine-learning models trained on large datasets of medical images to catch early signs of disease that human doctors miss, from eye disease to heart conditions to cancer; and third, identifying cures through big data analysis of drug trials and design algorithms that highlight biological and molecular structures, matching drugs with candidates [78].

There have been multiple citations regarding AI and COVID-19 in this chapter, but it would not be complete without a direct reference to deep learning and the diagnosis and treatment of the coronavirus. An article dated June 12, 2020 "offers a response to combat the virus through Artificial Intelligence (AI)." It identifies AI platforms that physicians and researchers can use to accelerate the diagnosis and treatment of COVID-19. These include Deep Learning (DL) methods, Generative Adversarial Networks (GANs), Extreme Learning Machines (ELMs), Long Short-Term Memory (LSTM) networks, and integrated bioinformatics approaches drawing on a continuum of structured and unstructured data sources. The main advantage of these AI-based platforms is to accelerate the process of diagnosing and treating COVID-19. The most recent related publications and medical reports were investigated to work toward a reliable artificial neural network-based tool for the challenges associated with COVID-19. Each platform takes specific inputs, including various forms of data such as clinical data and medical imaging, which can improve the performance of the introduced approaches toward the best responses in practical applications [79].
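To illustrate the general flavor of such sequence models, the sketch below trains a small LSTM forecaster on a synthetic epidemic curve, assuming PyTorch is available. It is not the modified autoencoder of [77] or any of the platforms surveyed in [79]; the network size, window length, and synthetic data are illustrative choices only.

```python
# Minimal sketch of a sequence model for epidemic-curve forecasting
# (illustrative only; synthetic data, hypothetical architecture).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic "cumulative cases" curve (logistic-like growth), scaled to [0, 1]
t = torch.linspace(0, 1, 200)
curve = 1.0 / (1.0 + torch.exp(-12 * (t - 0.5)))

# Build (14-day window -> next value) training pairs
window = 14
X = torch.stack([curve[i:i + window] for i in range(len(curve) - window)])
y = curve[window:].unsqueeze(1)
X = X.unsqueeze(-1)  # shape: (samples, window, 1 feature)

class CaseForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)         # (batch, window, hidden)
        return self.head(out[:, -1])  # predict the next point from the last step

model = CaseForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print(f"Final training MSE: {loss.item():.6f}")
```

In practice, models of this kind are trained on real surveillance data and coupled with scenario analysis (e.g., earlier versus later intervention), which is where the large differences reported in [77] come from.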


Conclusion

I began this chapter by reporting the number of worldwide COVID-19 cases and deaths recorded to date. That number reminds me of a sad saying: "One death is a tragedy; 854,222 is a statistic." We can't let ourselves think that way. Maybe if we think of it as 854,222 personal tragedies (and growing), we'll realize what the world, and each of us as caring individuals, is truly enduring with this pandemic.

How bad will it get? Unless a vaccine has been developed, approved, and delivered to the world's population by the time you read this, we will continue to face human tragedies, not statistics, of epic proportions. It is estimated that 1.7 million viruses reside in environmental ecosystems throughout the world. Let us all hope and pray that the applications of science, AI technologies, and, most of all, our personal and societal efforts meet and defeat this public health challenge of infectious disease pandemics and help humanity create a better place for all.

References [1] Principles of epidemiology in public health practice. Center for Disease Control and Prevention; 2012. [2] Kelsey JL, Thompson WD, Evans AS. Methods in observational epidemiology. New York: Oxford University Press; 1986. p. 216. [3] Scheidel W. The great leveler: violence and the history of inequality from the stone age to the twentyfirst century. Chapter 10: The black death. Princeton: Princeton University Press; 2017. p. 291 313. [4] Thucydides. History of the Peloponnesian War, Book 2, Chapter VII [Crawley R, Trans.]. Digireads.com Publishing; 2017. p. 89 100. [5] Sabbatani S, Fiorino S. The Antonine Plague and the decline of the Roman Empire. Infez Med 2009; 17(4):261 75. [6] Horgan J. Justinian’s Plague (541 542 CE). Ancient History Encyclopedia; 2014. [7] DeWitte SN. Mortality risk and survival in the aftermath of the medieval Black Death. PLoS One 2014; 9(5):e96513. [8] Polio elimination in the United States. Center for Disease Control and Prevention; 2019. [9] Keet E. Number of reported polio cases in first months of 2019 up from 2018. ContagionLive; 2019. [10] Weatherspoon D. The most dangerous epidemics in U.S. history. Healthline; 2016. [11] Trifonov V, Khiabanian H, Rabadan R. Geographic dependence, surveillance, and origins of the 2009 influenza A (H1N1) virus. N Engl J Med 2009;361(2):115 19. [12] Past pandemics. World Health Organization; 2020. [13] WHO: R&D Blueprint, list of blueprint priority diseases, ,https://www.who.int/blueprint/priority-diseases/en/. [accessed 10.18]. [14] World Health Organization (WHO). Summary of probable SARS cases with onset of illness from 1 November 2002 to 31 July 2003. [15] Huremovi´c D. Brief history of pandemics (pandemics throughout history). Psychiatry of pandemics. Nat Public Health Emerg Collect 2019;7 35. [16] COVID-19 Coronavirus pandemic. Worldometer; Last updated: March 27, 2020. [17] Coronavirus COVID-19 global cases by the Center for Systems Science and Engineering (CSSE). Johns Hopkins University; 2020.


[18] Cui J, Li F, Shi ZL. Origin and evolution of pathogenic coronaviruses. Nat Rev Microbiol 2019;17(3):181 92. [19] Wang SF, Chen KH, Chen M, et al. Human-leukocyte antigen class I Cw 1502 and class II DR 0301 genotypes are associated with resistance to severe acute respiratory syndrome (SARS) infection. Viral Immunol 2011;24(5):421 6. [20] Long Q, Liu B, Deng H, et al. Antibody responses to SARS-CoV-2 in patients with COVID-19. Nat Med 2020;26:845 8. Available from: https://doi.org/10.1038/s41591-020-0897-1. [21] Zhao J, Zhao J, Mangalam AK, et al. Airway memory CD4(+) T cells mediate protective immunity against emerging respiratory coronaviruses. Immunity 2016;44(6):1379 91. [22] Woodward Davis AS, Roozen HN, Dufort MJ, et al. The human tissue-resident CCR5(+) T cell compartment maintains protective and functional properties during inflammation. Sci Transl Med 2019;11 (521). [23] Fan Wu F, Zhao S, Yu B, et al. A new coronavirus associated with human respiratory disease in China. Nature 2020;579:265 9. [24] Terry M. Vir biotech IDs two antibodies that could be effective in preventing and treating COVID-19. BioSpace; 2020. [25] Collins F. Genomic study points to natural origin of COVID-19. NIH Director’s Blog; 2020. [26] Dolan L. 8 big cats have tested positive for coronavirus at the Bronx Zoo. CNN; 2020. [27] Zhou P, Yang X-L, Wang X-G, et al. A pneumonia outbreak associated with a new coronavirus of probable bat origin. Nature 2020;1 4. [28] Cao Y, Li L, Feng Z, et al. Comparative genetic analysis of the novel coronavirus (2019-nCoV/SARS-137 CoV-2) receptor ACE2 in different populations. Cell Discov 2020;6:1 4. [29] Richardson P, Griffin I, Tucker C, et al. Baricitinib as potential treatment for 2019-nCoV acute respiratory disease. The Lancet; 2020. [30] Wu J. How artificial intelligence can help fight coronavirus. Forbes Cognitive World; 2020. [31] Zhang L, Wang DC, Huang Q. Significance of clinical phenomes of patients with COVID-19 infection: a learning from 3795 patients in 80 reports. Clin Transl Med 2020;. Available from: https://doi.org/10.1002/ctm2.17. [32] Ong E, Wong MU, Huffman A, He Y. COVID-19 coronavirus vaccine design using reverse vaccinology and machine learning. Preprint. bioRxiv. 2020;2020.03.20.000141. 2020. [33] Chen J, Li K, Zhang Z, et al. A survey on applications of artificial intelligence in fighting against COVID-19. arXiv. 2007.02202; 2020. [34] Alimadadi A, Aryal S, Manandhar I, et al. Artificial intelligence and machine learning to fight COVID-19. J Physiol Genomics 2020; 108.252.096.178. [35] Glück T. Association of comorbidities with COVID-19 outcomes. NEJM J Watch 2020;. [36] Kass DA. COVID-19 and severe obesity: a big problem? Ann Intern Med 2020;. Available from: https:// doi.org/10.7326/M20-5677 Published online. [37] Huang C, et al. Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. Lancet 2020;395:497 506. [38] Fox SE, Li G, Akmatbekov A, et al. Unexpected features of cardiac pathology in COVID-19 infection. Circulation; 2020. https://doi.org/10.1161/Circulationaha.120.049465. [39] Liu B, Li M, Zhou Z, et al. Can we use interleukin-6 (IL-6) blockade for coronavirus disease 2019 (COVID-19)-induced cytokine release syndrome (CRS)? J Autoimmun 2020;102452. Available from: https://doi.org/10.1016/j.jaut.2020.102452. [40] Service RF. Coronavirus antigen tests: quick and cheap, but too often wrong? Science; 2020. [41] Marshall WF. How do COVID-19 antibody tests differ from diagnostic tests? Mayo Clinic; 2020.


[42] Coronavirus Disease 2019. Test for past infection. National Center for Immunization and Respiratory Diseases (NCIRD), Division of Viral Diseases; 2020. [43] CDC Media Statement. Updated isolation guidance does not imply immunity to COVID-19. Content source: Centers for Disease Control and Prevention; 2020. [44] BioFire. The BioFire FilmArray respiratory EZ (RP EZ) panel, ,https://www.biofiredx.com/products/ the-filmarray-panels/filmarray-respiratory-panel-ez/. [accessed 17.01.20]. [45] Coronaviruses: SARS, MERS, and 2019-nCoV. Johns Hopkins Bloomberg School of Public Health; 2020. [46] D’Souza G, Dowdy D. What is herd immunity and how can we achieve it with COVID-19? Johns Hopkins Bloomberg School of Public Health; 2020. [47] Bergstrom CT, Dean N. What the proponents of ‘natural’ herd immunity don’t say try to reach it without a vaccine, and millions will die. New York Times; 2020. [48] Ledford H. Antibody therapies could be a bridge to a coronavirus vaccine — but will the world benefit? Nature; 2020. [49] Rajewsky K. The advent and rise of monoclonal antibodies. Nature; 2019. [50] Guarner J, Roback JD, et al. Convalescent plasma to treat COVID-19 possibilities and challenges. JAMA 2020;. Available from: https://doi.org/10.1001/jama.2020.4940. [51] Brouwer PJM, Caniels TG, van der Straten K, et al. Potent neutralizing antibodies from COVID-19 patients define multiple targets of vulnerability. Science 2020;369(6504):643 50. Available from: https:// doi.org/10.1126/science.abc5902. [52] Cavalcanti AB, Zampieri FG, Rosa RG, et al. Hydroxychloroquine with or without azithromycin in mildto-moderate Covid-19. NEJM. 2020;. Available from: https://doi.org/10.1056/NEJMoa2019014. [53] Gautretab P, Lagierac JC, Parola P, et al. Hydroxychloroquine and azithromycin as a treatment of COVID-19: results of an open-label non-randomized clinical trial. Int J Antimicrobial Agents 2020;105949. [54] Beigel JH. Tomashek KM. Dodd LE., et al. Remdesivir for the Treatment of Covid-19 — Preliminary Report. DOI: 10.1056/NEJMoa2007764. May 22, 2020. [55] Arabi YM, Mandourah Y, Al-Hameed F, et al. Corticosteroid therapy for critically ill patients with Middle East respiratory syndrome. Am J Respir Crit Care Med 2018;197:757 67. [56] The Recovery Collaborative Group. Dexamethasone in hospitalized patients with Covid-19 — preliminary report. NEJM 2020. DOI: 10.1056/NEJMoa2021436. [57] Prescott HC, Rice TW. Corticosteroids in COVID-19 ARDS: evidence and hope during the pandemic. JAMA 2020;. Available from: https://doi.org/10.1001/jama.2020.16747 Published online. [58] Anderson D, Fu D, Maquat L. COVID-19: what’s RNA research got to do with it? Univ. of Rochester NewsCenter; 2020. [59] The vaccine testing process. New York Times; 2020. [60] News Release. Pfizer and Biontech announce vaccine candidate against COVID-19 achieved success in first interim analysis from phase 3 study. Pfizer. November 09, 2020. [61] Callaway E. The race for coronavirus vaccines: a graphical guide. Nature 2020;. [62] Gonatopoulos-Pournatzis T, Aregger M, Brown KR, et al. Genetic interaction mapping and exonresolution functional genomics with a hybrid Cas9 Cas12a platform. Nat Biotechnol 2020;. [63] Etzioni O DeCario N. AI can help scientists find a Covid-19 vaccine. Wired; 2020. [64] Collins F. Structural biology points way to coronavirus vaccine. NIH Director’s Blog; 2020. [65] Senior A, Jumper J, Hassabis D, et al. AlphaFold: using AI for scientific discovery, arXiv.org . cs . arXiv:2003.11336v1; 2020.


[66] Jumper J, Tunyasuvunakool K, Kohli P, Hassabis D, AlphaFold Team. Computational predictions of protein structures associated with COVID-19. arXiv.org . cs . arXiv:2003.11336v1; 2020. [67] Mei X, Lee H, Diao K, et al. Artificial intelligence enabled rapid diagnosis of patients with COVID-19. Nat Med 2020;26:1224 8. Available from: https://doi.org/10.1038/s41591-020-0931-3. [68] Scudellari M. Five companies using AI to fight coronavirus. IEEE Spectrum; 2020. [69] Waleed Salehi A, Preety B, Gupta G. Review on machine and deep learning models for the detection and prediction of Coronavirus. Mater Today Proc 2020. https://doi.org/10.1016/j.matpr.2020.06.245. [70] Maghdid HS, Ghafoor KZ, Sadiq AS, et al. A novel AI-enabled framework to diagnose coronavirus COVID 19 using smartphone embedded sensors: design study. arXiv.org . cs . arXiv:2003.07434; 2020. [71] Ting DSW, Carin L, Dzau V, et al. Digital technology and COVID-19. Nat Med 2020;. Available from: https://doi.org/10.1038/s41591-020-0824-5. [72] https://www.worldometers.info/coronavirus/. [73] https://gisanddata.maps.arcgis.com/apps/opsdashboard/index.html#/ bda7594740fd40299423467b48e9ecf6. [74] Joseph T, Wu JT, Leung K, Leung GM. Nowcasting and forecasting the potential domestic and international spread of the 2019-nCoV outbreak originating in Wuhan, China: a modelling study. Lancet 2020;395(10225):P689 97. [75] McIntosh K, Hirsch MS, Bloom A. Coronavirus disease 2019 (COVID-19): epidemiology, virology, and prevention. UpToDate. WoltersKluwer; 2020. Topic 126981 Version 108.0. [76] Engler A. A guide to healthy skepticism of artificial intelligence and coronavirus. Brookings Institution; 2020. [77] Zixin H, Qiyang G, Shudi L, et al. Forecasting and evaluating intervention of Covid-19 in the world. arXiv:2003.09800 [q-bio.PE]; 2020. [78] Heaven WD. AI could help with the next pandemic—but not with this one. MIT Technology Review; 2020. [79] Jamshidi MB, Lalbakhsh A, Talla J, et al. Artificial intelligence and COVID-19: deep learning approaches for diagnosis and treatment. IEEE 2020;8(12):109581 95.

Epilogue

The broadest view of this book "from 35 thousand feet" is more than 450 pages of technical, clinical, and bioscience materials on AI applications in health and wellness. But more than that, it describes a network of information and guidance on the direction in which the science and technology of AI are taking health and wellness care. It is a snapshot in time of a disruptive phenomenon affecting us all, currently and going forward.

Topics not covered in this extensive discussion of AI in health and wellness include two controversial areas concerning potential negative implications of AI being debated among health care analysts and professionals. First, there is the perceived risk of "depersonalization" of care as computers play an increasing role in patient care. However, tasks in health care, as in other industries, that require consistency; accuracy; reduction of redundant, repetitious, time-inefficient work; thoroughness (for a quality outcome); and cost-effectiveness in practice and for patients' best care are now on the table for reassessment. It appears that practitioners (e.g., in radiology, dermatology, and ophthalmology) are enthusiastically accepting the idea that the merger of digital and electronic technologies, AI, and robotics is the answer.

The second, and more acute, concern among health care providers and professionals (shared by workers in many occupations) is the risk of being "replaced" by evolving AI and robotic technologies. Doctors' concerns have understandably grown in the past few years as AI technologies continue to be implemented effectively, with impressive efficiency and accuracy, and show significant potential for continued progress. Cogent and reasoned professionals are beginning to think through human and artificial intelligence from foundational principles rather than the empirics of the past. Indeed, we are entering an age in which the future will not likely resemble the past.

Beyond the formidable ethical and emotional challenges AI will present going forward, one of the biggest challenges for health care professionals will be convincing them that AI's computer information, advice, analyses, and diagnostic conclusions are reliable. IBM's "Watson" has already made a strong case for AI [1], and, to a lesser but no less impressive degree, so have chatbots like Amazon's Alexa, Apple's Siri, and Google Home. As algorithms continue to advance as a correlate to Moore's Law, Elon Musk's prediction about AI may well pose our greatest challenge. He opined, "The rate of change of technology... is outpacing our ability to understand it" [2]. But AI machine learning systems capable of explaining how they reach their conclusions will also be needed to reassure users, especially in health care. As mentioned in Chapter 3, algorithms called "Explanation Facilities" or "Explainable AI" (XAI) are being developed and tested. They aim to give AI systems used in medical diagnoses the ability to explain their reasoning rather than leaving the algorithmic conclusions a mystery (the "black box" issue) [3].
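As a toy illustration of what an "explanation facility" aims to do, the sketch below scores a hypothetical patient with an interpretable linear model and reports each finding's contribution to the prediction alongside the result. The feature names, weights, and patient values are invented for illustration only; real XAI systems for medical diagnosis are far more sophisticated than a linear decomposition.

```python
# Toy illustration of the idea behind an "explanation facility": report each
# feature's contribution to a diagnostic score, not just the final prediction.
# All names and numbers below are hypothetical.
import numpy as np

features = ["fever", "cough", "SpO2_drop", "age_over_65"]
weights = np.array([0.8, 0.5, 1.6, 0.9])   # hypothetical learned coefficients
bias = -2.0

patient = np.array([1.0, 1.0, 0.7, 0.0])   # hypothetical encoded findings

logit = bias + weights @ patient
probability = 1.0 / (1.0 + np.exp(-logit))
contributions = weights * patient          # per-feature additive contribution

print(f"Predicted risk: {probability:.2f}")
for name, c in sorted(zip(features, contributions), key=lambda p: -abs(p[1])):
    print(f"  {name:>12}: {c:+.2f}")
```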


As mentioned in Chapter 1, Daniel Dennett at Tufts University put it best: "Since there are no perfect answers, we should be as cautious of AI explanations as we are of each other's. No matter how clever a machine seems, if it can't do better than us at explaining what it's doing, then don't trust it" [4]. Another finding, identified in Chapter 7 regarding the diagnosis of encephalitis, suggests that "human diagnosis may be superior to AI in more serious illnesses" [5]. Substantive qualifications of this nature must be considered among the potential limitations and weaknesses of AI.

The following considerations summarize some of the additional weaknesses or challenges for AI in health and wellness care:

• AI systems are trained on data from the past;
• They are not prepared to reason in ways humans do about conditions that have not been seen before;
• AI algorithms and outputs are statistical predictions, and we should maintain an index of suspicion that a prediction may be wrong;
• Rapidly proliferating algorithms may prove faulty ("garbage in, garbage out");
• The potential decreases in personal doctor/patient contact and communication are real;
• AI may produce adverse effects on health care providers (doctors, nurses, support personnel); and
• Currently (as documented in Chapter 7 [6]), the American public's attitudes toward AI remain mixed at best, with a measurable level of concern:
  • Only a slight majority (59%) support AI's continued development;
  • There is weak support for developing high-level machine intelligence (31%), while 27% oppose its development entirely;
  • Only 31% think that high-level AI will be good for humanity;
  • 34% believe that AI technology will have a harmful impact, with 12% feeling it could have terrible effects "leading to possible human extinction."

Such modest (to negative) public levels of support for AI technology may have a significant impact on its continued development, at least in the U.S. And continued growth is critical, because AI has a long way to go. Consider that, whereas AI algorithms can be programmed to quantify and qualify the contents of a room down to the number of dust particles, if an elephant were in that room, it would be missed [7]. (Remember the Brookings advice from Chapter 8 [8]: "AI is far better at minute details than big, rare events.")

Indeed, AI has a long way to go. But at the same time, we must consider how far AI has come in so few years (since Minsky and McCarthy at Dartmouth, 1956) and how far it has taken us in the areas of health, wellness, prevention, immunology, genetics, and cancer research, and in their associated and successful clinical applications. And surely the benefits of the "All of Us" Research Program [9] of the National Institutes of Health (NIH), which uses AI and big data analytics for long-term research in precision medicine, may not be realized for years. Indeed, the current benefits of AI, especially in health care and the biosciences, are still in their infancy and are only outweighed by their future potential.


Those positive benefits can be summarized as follows:

• "The digitization of health-related records and data sharing" (AI will manage the EHR more efficiently and more accurately);
• "The adaptability of deep learning to the analysis of heterogeneous data sets" (AI big data analytics, blockchain, and population health surpass human capabilities);
• The value of "precision medicine" in personalized diagnosis and treatment;
• The continued expansion of immunology, genetics, and genomics in the progress of all levels of health care and the ultimate achievement of preventive health;
• "The capacity of deep learning for hypothesis generation in research" (AI exceeds human abilities to assemble, analyze, and deduce conclusions from large datasets);
• "The promise of deep learning to streamline clinical workflows and empower patients" (AI will improve doctors' efficiency and patient communications);
• "The strengths of digital imaging over human interpretation" (AI interprets images far better than humans);
• "The rapid diffusion of open-source (cloud) and proprietary deep learning programs" (AI will become the standard for accuracy and safety of medical care);
• "The adequacy of today's basic deep learning technology to deliver improved performance as data sets get larger" (AI will be the only way to manage the continuing explosion of health care information); and, perhaps most important,
• Increased time for health care providers to interact personally with their patients and show them the empathy and caring no machine will ever be able to provide.

It is important to remember that machines (computers) only calculate. Humans interpret, decide, and care [10] (and can usually identify the elephant in the room). But ultimately, in health care, with computers and humans working together as partners, patients will emerge as the biggest winners as AI machine learning transforms health care. And indeed, notwithstanding Andrew Ng's prophetic metaphor that "AI is the new electricity" [11], as with electricity and all disruptive technologies, questions will continue to grow and be debated regarding what AI, big data, and robotics might do in the future. But these questions are themselves "proof of concept" [10]. Artificial Intelligence is not coming; it's already here [12]. And what you have read in this book is only the beginning.

References [1] ,http://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html.. [2] Ankel S, Dash S, Musk E, Ma J. Clash during intense debate on the future of artificial intelligence and life on Mars. Business Insider; 2019. [3] Kreps GL, Neuhauser L. Artificial intelligence and immediacy: designing health communication to personally engage consumers and providers. Patient Educ Couns 2013;92:205 10. Available from: https:// doi.org/10.1016/j.pec.2013.04.014. [4] Knight W. The dark secret at the heart of AI. MIT Technology Review; 2017.


[5] Topol E. Deep medicine: how artificial intelligence can make healthcare human again. 1st ed. Hackette Book Group; 2019. [6] Zhang B, Dafoe A. Artificial intelligence: American attitudes and trends. SSRN: https://ssrn.com/ abstract=3312874 or https://doi.org/10.2139/ssrn.3312874; 2019. [7] Rosenfeld A, Zemel R, Tsotos JK. The elephant in the room. arXiv; 2018. [8] Engler A. A guide to healthy skepticism of artificial intelligence and coronavirus. Brookings Institution; 2020. [9] The National Institute of Health (NIH). All of Us research program, ,https://allofus.nih.gov/.; [accessed 29.08.18]. [10] Obermeyer Z, Phil M, Emanuel EJ. Predicting the future — big data, machine learning, and clinical medicine. N Engl J Med 2016;375(3):1216 19. Available from: https://doi.org/10.1056/NEJMp1606181. [11] Li O, Ng A. Artificial intelligence is the new electricity — Andrew Ng. Medium Synced; 2017. [12] Lewis-Kraus G. The great AI awakening. The New York Times Magazine; 2016.

Glossary of terminology Abdominal aortic aneurysm Wall of abdominal aorta weakens producing a balloon-like dilation Acanthosis nigricans Dark patches on the skin Acarbose For type 2 diabetes, helps prevent blood glucose level increasing too much after eating Accelerator A microchip designed to enable faster processing of AI tasks Accenture Consulting firm Acquired immune system Active immune system producing antibodies Acquired immunodeficiency syndrome (AIDS) Active HIV infection Acquired mutations Change in gene structure causing abnormality in the human organism Actuators (output unit for robotics) Physical objects using RFID (radio-frequency identification) Acute disseminated encephalomyelitis Inflammation of the brain and spinal cord: see Demyelinating disorders Acute inflammation Clinical reaction in acquired immune response ADA Digital Health Ltd. See Chatbot Addiction medicine Diagnosis, and treatment of persons with the disease of addiction Adenine (As) Amino acid base compound in DNA Adoptive cell transfer therapy (ACT therapy) Treatment of solid tumors and hematological malignancies Adult congenital heart disease (ACHD) Heart disease present at birth and manifested in adulthood Advanced molecular detection clips CDC Office of Public Health Genomics tracking epidemiologic study result Advanced Multimodality Image Guided Operating Suite (AMIGO) MRI inside the operating room as part of a larger imaging setup Adverse Event Reporting System (FAERS) FDA information source Aerospace medicine Preventive health care in aerospace medicine AESOP Surgical robot Affordable Care Act (ACA) Affordable Care Act (aka Obama Care) Alexa Amazon chatbot Algorithm A procedure or formula for solving a mathematical problem, based on conducting a sequence of specified actions or steps that frequently involves repetition of an operation AliveCor Digital (IoT) for monitoring heart arrhythmias All of Us initiative NIH research project Alleles Inherited genes on a chromosome that determine hereditary Allergens Specific form of antigen that triggers immunoglobin E (IgE) Allergy Hypersensitivity response AlphaGo Computer game Alzheimer’s Disease (AD) Neurodegenerative disease Amazon Multinational technology company Amino acid Building blocks of the large, complex protein molecules Amygdala Nucleus in the limbic system of the brain Amyotrophic Lateral Sclerosis (ALS) Neurological degenerative disease Anaphylactic shock Life-threatening hypersensitivity reaction Anemias Not enough or malfunctioning red blood cells for hemoglobin transport Aneurysm Wall of any artery may weaken producing a balloon-like dilation Angina pectoris Chest pain secondary to myocardial infarction


Angiogenesis inhibitors Cancer treatment Angiogram Diagnostic procedure to visualize blood vessels and organs Angiography See angiogram Angiotensin-converting enzyme (ACE) inhibitors Treatments for ride-side heart conditions Angiotensin II receptor blockers (ARBs) Treatments for ride-side heart conditions AngularJS Framework to control how content looks on different devices such Anterior spinal artery Vertebral artery Anti-inflammatory medications Drugs to treat acute inflammation Anti-TNF (tumor necrosis factor) Proinflammatory cytokine Antibiotic Treatment for bacterial infection Antibody Cellular component of the immune response Anticoagulant medications (e.g., heparin, warfarin) Medication to reduce clotting Antigen-presenting complex (APC) Antigen bound to a macrophage and T cell Antigen-nonselective immunomodulators See interferon (IFN)-beta 1b (Betaseron), IFN-beta 1a (Avonex), Cop1 (Copaxone) Antigen Foreign (non-self) substance or stress Antineutrophil cytoplasmic antibodies (ANCA) Laboratory tests to help determine the type of vasculitis Antinuclear antibodies (ANA) Diagnostic test for arterial disease Antiplatelet medications (e.g., aspirin, Plavix) Anticoagulation drugs Aortic dissection Tear between the innermost and middle layers of the aorta Aplastic anemia Autoimmune disease in which the body fails to produce blood cells in sufficient numbers Apolipoprotein B100 Type of LDL, specific protein that plays a key role in metabolism Apoptosis inducers Cancer treatment Apoptosis Cell’s ability to self-destruct when something goes wrong Apple Heart Study Apple watch series 4 has a new FDA approved transducer that measures ECG Application Programming Interface (API) Software intermediary (i.e., “software to software” program) whose function is to specify how software components should interact or “integrate” with databases and other programs Application software (app) User-specific program Application Specific Integrated Circuit (ASIC) Semiconductor microchip is designed to be customized for specialized use The area under the receiver operating characteristic curve (AUC, AUC-ROC or AUROC) Performance measurement for classification problem at various thresholds settings Arithmetic Logic Unit (ALU) Stored programmed instructions to device controllers and device drives to perform specific tasks for mathematical and logical operations on information in RAM Arnold-Chiari malformation See Posterior fossa anomalies Array Hardware circuits that a user can program to carry out one or more logical operations Arteries See Blood vessels Arteriosclerosis Thickening and hardening of walls of arteries from aging Arteriovenous malformation An abnormal tangle of blood vessels connecting arteries and veins, which disrupts normal blood flow and oxygen circulation. Arthroscope An instrument through which the interior of a joint may be inspected or operated on Artificial conversation entity See Chatbot Artificial intelligence (AI) A branch of computer science dealing with the simulation of intelligent behavior in computers; the capability of a machine to imitate intelligent human behavior (a “disruptive technology”) Artificial neural network (ANN) The human neural network model replicated in computer technology Artificially Intelligent Robots Robots that are controlled by AI programs as smartphones and tablets Aspirin Acetylsalicylic acid (ASA)


Assembler software Used in compilation process to assemble target data Asthma Chronic, long-term condition that intermittently inflames and swells the airways Asynchronous videoconferencing (Store and forward) Transmission of recorded health history to a health practitioner Atherosclerosis Buildup of fat, cholesterol, calcium, and other substances creates plaques inside arteries and leading to hardening and narrowing (stenosis) of arteries Auditory cortex of the temporal lobe Sense of hearing with input from the thalamus Auto-antigen Antigen in the autoimmune disease process Autoimmune disease Category of immune diseases of unknown origin (originating through selfantigenicity) Autologous (from “one’s self”) hematopoietic stem cell transplantation Autologous CAR-T-cell therapy Modified and targeted lymphocytes reintroduced to the patient’s body through a single infusion to attack tumor cells AUTOMAP Automated transform by manifold approximation (MRI) Automated breast volume scanning (ABVS) Ultrasound and digital mammography image, optical breast analysis multimodality GPU-based computer-assisted diagnosis of breast imaging Automated Clinical Trial Eligibility Screener (ACTES) EHR screening Automated Speech Recognition (ASR) Understands and process human (unstructured) languages Autonomic nervous system Peripheral nervous system (PNS) Autonomic neuropathy A group of symptoms that occur when there is damage to the nerves that manage everyday body functions Autonomous driving cloud platform e.g., Tesla’s “Autopilot”, Waymo’s Lidar, et al. Autosomal dominant allele Determine hereditary characteristics Autosomal recessive allele Two copies of an abnormal gene must be present for the disease or trait to develop Avonex See Antigen-nonselective immunomodulators Axon Conducts electrical impulses away from the neuron's cell body; primary transmission lines of the nervous system B-cell lymphocytes See non-granular leukocytes Back-end programming Refers to the server side of an application and everything that communicates between the database and the browser Baidu Chinese computer technology company Bandwidth Range of frequencies within a given band Barcode Method of representing data in a visual, machine-readable form Base pair Nucleotide amino acid base compounds on a gene (collectively called the human exome) BaseHealth (Sunnyvale, CA) Predictive analytics system Basophils Type of white blood cell (WBC) Bayesian logic Logic applied to decision making and inferential statistics that deals with probability inference Beta-blockers Beta-adrenergic blocking agents Betaseron See Antigen-nonselective immunomodulators between them Big Data to Knowledge (BD2K) Initiative National Institutes of Health ‘Precision Medicine Initiative’ to develop a genetically guided treatment with personalized, precision medicine for improved preventive health Big data Large volumes of data Biobanks Biological specimen databases Biobots Modeled after sperm cells Bioinformatics The science of collecting and analyzing complex biological data Biologic agents Immune modulating drug classes


Biomarkers Phenotypic features for any measurable quantity or score that can be used as a basis to stratify patients Biomedical informatics Data used in big data analytics Biopsy Histopathological study of tissue Biosignals Reminders or notification messages, which are triggered by events derived from automated analyses Blockchain Distributed database existing on multiple computers at the same time Blood vessels Arteries, capillaries, venules Boolean logic Things determined as completely true or false Bootstrap Form of program code visible to user Brain Abscess A collection of pus, immune cells, and other material in the brain Brain stem Pons and Medulla oblongata BRCA1 Cancer gene BRCA2 Cancer gene Brightree survey Home healthcare and hospice survey Broadband High-speed Internet access that is always on Broca’s area See Frontal lobe C Drive (C:/) The main hard disk partition on a computer that contains the operating system and the related system files. C-reactive protein (CRP) Increases with heart disease Cache Collection of items of the same type stored in a hidden on a computer Calor Referring to temperature (heat) in the inflammatory process Cancer Group of diseases involving abnormal cell growth with the potential to invade or spread to other parts of the body Cancering Mutations are constantly occurring in the body and virtually all of them have the potential to become irregular, accumulate or mutate into a cancer Capillaries See Blood vessels Capsule endoscopy FDA approved microbot procedure CaptureProof Asynchronous telehealth using advanced computer vision CAR-T cell therapy Gene replacement therapy Carbon monoxide (CO) An odorless, colorless gas that can cause fatal toxicity Carcinogenesis Series of mutations in an oncogene Carcinogens Substances capable of causing cancer in living tissue. Carcinoma A cancer that starts in the skin or the tissues that line other organs Cardiac catheterization Diagnostic test for coronary heart disease Cardio DL (from Arterys) Uses deep learning for medical image analysis and Carotid Artery Disease Atherosclerotic plaque and clots causing blockage Carpal tunnel syndrome Condition that causes pain, numbness, and tingling in the hand and arm Carrier testing Used to identify people who carry one copy of a gene mutation CARs Combine both antigen-binding and T-cell activating functions into a single receptor Cas9 enzyme Acts as a pair of ‘molecular scissors’ that can cut the two strands of DNA at a specific location in the genome Case-Based Reasoning Method used to solve a new case by adapting the symptoms found in previous cases that are similar to the new case. CAT scan Computer tomography Catheter-directed thrombolysis procedure Treatment for blot clots CD8 Cell surface glycoprotein Central nervous system (CNS) See Forebrain, Midbrain, Hindbrain, Cerebellum, Cerebral cortex Central processing unit (CPU) Transforms the raw data into binary code information


Centromere Region of a chromosome to which the microtubules of the spindle attach Cerebellum See Brain stem Cerebral cortex Gray matter (outer neural layer of cerebrum and cerebellum) Cerebral Edema Fluid accumulation around the brain Cerebral Perfusion SPECT Data Alzheimer’s disease information Cerebrovascular accident (Stroke) Ischemic infarct in the cerebral region Cerebrovascular disease Disease of the arterial vessels supplying the brain Charlson comorbidity index (CCI) Standard created of comorbidities through a manual review of admission notes Chatbot (short for “chatterbot”) An interactive agent, virtual or conversational assistant (e.g., Talkbot, Bot, IM, Assistant, Apple Siri, Microsoft Cortana, Google Home, Amazon Alexa) Checkpoint inhibitors Immune system modulator Chemotherapy Cancer therapy using chemical agents CheXNeXt algorithm Stanford University algorithm to screen chest X-rays for more than a dozen types of disease in less time than it takes to read this sentence. Chromallocyte Nanobot to extract irregular chromosomes and insert a correction Chrome Google browser Chromoendoscopy Optimal diagnostic accuracy that is comparable with standard histopathologic examination Chromosome replacement therapy See Chromallocyte Chromosome Threadlike structure of nucleic acids and protein found in the nucleus of most living cells, carrying genetic information in the form of genes Chronic disease Conditions that last 1 year or more and require ongoing medical attention or limit activities of daily living or both; e.g., heart disease, cancer, diabetes Chronic inflammation Extended or permanent clinical manifestation of acquired immune disorders; considered the underlying cause of all human disease Chronic Venous Insufficiency (post phlebitis syndrome) Valves in veins (usually leg or sometimes arms) don’t work, causing blood to pool and putting increased pressure on the walls of the veins; Circle of Willis Joining area of several arteries at the bottom (inferior) side of the brain Classification and Regression Tree (CART) algorithm Telehealth monitors early identification of high-risk conditions Clear Genetics Chatbot “GIA” (Genetic Information Assistant) for genetic counseling Client Types of servers, e.g., file servers, printer, email, database servers and web servers supporting web browsers Clinical Decision Support (CDSi) Publicly available algorithms that can be embedded directly into patient EHRs for forecasting Clinical Decision Support Services (CDSS) Predictive analytics for decision making for healthier care experiences for health providers Clinical Documentation Improvement (CDI) Expert system Clinically Appropriate and Cost-Effective Placement Project Evaluates medicare services Cloud computing Using a network of remote servers hosted on the Internet to store, manage, and process data, rather than a local server or a personal computer CloudMedX Health care company Clustering analyses Used on demographic, clinical, laboratory, imaging and medication data from heart failure patients to identify different subgroups Cobots Pharmacy robots Colonoscopy Long, flexible, tubular instrument about 1/2-in. in diameter that transmits an image of the lining of the colon


Common carotid arteries Arteries that supply the head and neck and divide in the neck to form the external and internal carotid arteries Community health Geographically based population health and public health addressing public policy influences, creating shared community resources, and conducting a holistic approach to health and wellness Comorbidity See Concurrent Medical Conditions Compilation Target code or binary (machine) code information populates the inner (hidden) layer of a computer to be used by its software Compiler (translator) Converts human language and commands (source code) from keyboards, spoken word (audio) or visual imagery into machine code (or computational language) for the input layer Complete blood count (CBC) Complete blood count test measures several components and features of blood Complete pancreas transplant Treatment for Type 1 diabetes Computational immunogenetics Bioinformatic methods Computational Pathology Applies computational models, machine learning and visualizations to produce lab output Computer processor Programmed microchip, the main functional unit of computer software Computer vision algorithms Builds an automated radiograph prioritization system Concurrent Medical Conditions Comorbidity or multimorbidity Concussion Traumatic brain injury Congenital heart disease (CHD) Malformations of heart structure existing at birth Congenital hypothyroidism Disorder of the thyroid gland Congestive heart failure (CHF) Right-side heart condition CONSULT Collaborative mObile decisioN Support for managing mULtiple morbidiTies: Chatbot Contact tracing Identifying persons exposed to infectious risk Contrast-enhanced ultrasound (CEUS) Procedure following injection of a microbubble contrast agent to demonstrate blood flow to the perfusion level Control unit (CU) CPU unit that decodes and executes instructions Convalescent plasma (serum) Antibody rich plasma from previously infected victim Conversational agent (Virtual assistant) See Chatbots Conversational user interface (CUI) See Chatbot Copaxone See Antigen-nonselective immunomodulators Cor pulmonale Right-side heart disease caused primarily by lung disorders Core processor CPU unit that executes multiple instructions simultaneously Coronary artery disease Disease of the arterial vessels supplying the heart muscle Cortana Microsoft digital assistant CRISPR-Cas9 therapy (Clustered regularly interspaced short palindromic repeats - associated protein 9); gene editing Crohn’s disease Autoimmune disease of the gastrointestinal tract Cryptographic (coded) key Encryption tool for blockchain Cryptography Writing or solving codes CSAIL “The Computer Science and Artificial Intelligence Lab”, MIT wearable device to monitor vital signs and emergencies CT scan Computer tomography Cubital tunnel syndrome Entrapment neuropathy Cyber-physical system (CPS) Integrates with IoTs Cystic fibrosis (CF) Inherited lung disease Cytogenetics Examination of chromosomes to identify structural abnormalities Cytokines Chemical components of the immune system


Cytosine (Cs) Base compound Cytotoxic Cell killing da Vinci Surgical robotic system Data generator Combines data generated by an expert knowledge base with data coming from real-world medical records Data integration A process that consists of retrieving, cleaning, and organizing data, usually obtained from several different sources Data mining Searching for information in a database Data node Network of data points Data processing Utilizing data to produce a result Database Structured set of data held in a computer and accessible in various ways Decision support system (DSS) Tools for genetic information in EHR systems Decision tree Algorithm is used for solving regression and classification problems Decode Programs to be executed translated into Assembly instructions Deep convolutional neural network (DCNN) Deep learning process Deep learning (DL) Subcategory of machine learning algorithms that uses multiple layers to progressively extract higher-level features from the raw input Deep neural network (DNN) Same as DCNN Deep Q-Networks (DQN) Reinforcement learning process Deep vein thrombosis (DVT) Blood thickens in a clump that becomes solid, forming a clot DeepMind (Google) A division of Alphabet, Inc. responsible for developing general-purpose artificial intelligence (AGI) technology Dementia with Lewy Bodies (DLB) Degenerative disease of the central nervous system Demographics Statistical data relating to the population and particular groups within it Demyelinating Disorders Any condition that results in damage to the protective covering (myelin sheath) that surrounds nerve fibers in the brain, optic nerves, and spinal cord Dendrite Short branched extension of the neuron, along which impulses received from other cells at synapses are transmitted to the cell body DenseNet-121 Structure of neural network Dependent variables Outcomes in a defined population group Dermatology Medical specialty for integumentary (skin, hair, nails) disorders Dermoscopy Dermatologic image analysis Descriptive analytics Analyzes real-time incoming and historical data for insights on how to approach the future to determine some characteristic of the offspring Developmental disorders Delays or abnormal patterns of physical or mental development in the areas of communication/language, motor skills, etc. Diabetes insipidus An uncommon disorder that causes an imbalance of fluids in the body Diabetes mellitus type 2 (originally, adult diabetes) Glucose (sugar) collects in the blood (hyperglycemia) and does not reach the cells Diabetes type 1 (originally, Juvenile Diabetes) Autoimmune destruction of the pancreatic beta cells, which is caused by unknown factors Diabetic ketoacidosis (DKA) Life-threatening complication Diabetic retinopathy (DR) Retinal vascular changes associated with diabetes Diagnostic analytics Form of advanced analytics which examines data or content to determine why a health outcome occurs Diastolic BP Diastolic blood pressure Dickey amendment Forbade the CDC from using its funds to promote or advocate for gun control Digital code Machine or binary code


Digital drug control system (DDCS) Pharmaceutical drug development and supply system through the use of AI Diode Semiconductor device with two terminals Discharge Referral Expert System for Care Transitions (DIRECT) University of Pennsylvania Disease-modifying anti-rheumatic drugs (DMARDs) Immune modulating drug class Disruptive technology A process by which a product or service takes root initially in simple applications at the bottom of a market and then relentlessly moves upmarket, eventually displacing established competitors Diuretics Drugs to increase the amount of water and salt expelled from the body as urine DNA editing Direct manipulation of DNA sequences in cells DNA sequencing Determine the order of DNA proteins (nucleotides) in an individual’s genetic code DNA Deoxyribonucleic acid, the carrier of genetic information Dolor Pain in the acute inflammatory process Domain Name System (DNS) System by which Internet domain names and addresses are tracked and regulated Domain Name used to identify one or more IP addresses Doppler ultrasound Increase (or decrease) in the frequency of sound Double helix DNA molecules made of two twisting, paired strands Ductal carcinoma in situ (DCIS) Breast cancer Dynamic random-access memory (DRAM) Form of random-access memory Dysregulate Abnormal acquired immune response Echo State Networks (ESN) Analyzes EEG Echo Amazon chatbot Echocardiography Use of ultrasound waves to investigate the action of the heart Eczema Condition in which patches of skin become rough and inflamed, with blisters that cause itching and bleeding Electrocardiography (ECG) Measurement of electrical activity in the heart and the recording of such activity as a visual trace Electrocochleography Technique of recording electrical potentials generated in the inner ear and auditory nerve Electrodiagnosis (EDX) Methods of medical diagnosis that obtains information about diseases by passively recording the electrical activity of body parts (that is, their natural electrophysiology) or by measuring their response to external electrical stimuli (evoked potentials) Electroencephalography (Intracranial EEG, stereoelectroencephalography) Measurement of electrical activity in different parts of the brain and the recording of such activity as a visual trace Electrogastrogram (EGG) Test in which electrodes on skin measure electrical activity in the stomach Electromyography (EMG) Recording of the electrical activity of muscle tissue Electronic health record EHR originally EMR for “electronic medical record” Electronystagmography (ENG) Diagnostic test to record involuntary movements of the eye caused by a condition known as nystagmus Electrooculography (EOG) Measurement of the electrical potential between electrodes placed at points close to the eye, used to investigate eye movements Electroretinography (ERG) Test to measure the electrical response of the eye’s light-sensitive cells, called rods and cone ELIZA Original chatbot (aka Eliza Doolittle) Embase database Comprehensive biomedical literature database EmberJS Front-end programming where the program code is visible to the user Embolus Migrating blood clot


Embryonic stem cell (ESC) Pluripotent, undifferentiated stem cell capable of generating all of the body’s cell types Emmanuelle Charpentier 2020 Nobel laureate in chemistry for CRISPR-Cas9 Empyema Collection of pus in the pleural usually associated with pneumonia Encephalitis Inflammation of the brain Encephalomyelitis Inflammation of the brain and spinal cord Encryption Process of encoding a message or information in such a way that only authorized parties can access it Endocardium Inner layer of the heart Endoscopy Insertion of a long, thin tube directly into the body to observe an internal organ or tissue Endovenous laser ablation Radiofrequency ablation (RFA) for venous abnormalities Eosinophils White blood cell common in allergic reactions Ependymoma Brain tumor (cancer) Ependymomas Tumor that can form in the brain or spinal cord Epicardium Middle layer of the heart Epidemiology Branch of medicine which deals with the incidence, distribution, and possible control of diseases and other factors relating to health Epidural hemorrhage Bleeding between the dura mater and the skull Epigenetic dysregulation Development of autoimmunity by dysregulating immune cell functions Epigenetics Study of changes in organisms caused by modification of gene expression rather than alteration of the genetic code itself Epigenome Chemical compounds that regulate gene expression Epilepsy Neurological disorder marked by sudden recurrent episodes of sensory disturbance, loss of consciousness, or convulsions, associated with abnormal electrical activity in the brain Epstein-Barr Virus (EBV) Risk of lymphoma and cancers of the nose and throat Erythrocyte sedimentation rate (ESR) Measures how quickly your red blood cells collect at the bottom of a test tube over one hour Erythrocyte Red cells containing hemoglobin which combines with oxygen in the lungs and transports it to the body’s cells, tissues and organs Ethereum An open-source, public, blockchain-based distributed computing platform and operating system (using InterPlanetary File System IPFS) EU Blockchain Observatory and Forum Existing initiatives on blockchain Eugenics Science of improving a human population by controlled breeding to increase the occurrence of desirable heritable characteristics Evidence variable A known factor enabling identification of an unknown Evoked potentials Electrical potential in a specific pattern recorded from a specific part of the nervous system Evoking strength (positive predictive value) How strongly one should consider a disease if an associated finding was observed Expert system Rule-based and “knowledge engineers” Explainable AI (XAI) Explains method and rationale algorithm uses to make a determination(s); Explanation facility Explanation facility Explainable XAI Explorer Microsoft browser Exposome Nongenetic exposures that affect human health and disease Expression Regulated activity of a gene External drive Hard drive, external to computer External storage Storage on an external device Fabry disease Genetic and hereditary disorders Facebook Social networking website


FastMRI Facebook and NYU project to cut down MRI durations by 90% by applying AI-based imaging tools Federal Communications Commission (FCC) Regulates telehealth Fetch CPU function that takes the address number from a program counter to track which instructions should execute next Field-programmable gate array (FPGA) Integrated circuit with Hardware Description Language (HDL) Firefox Mozilla browser Florence Nursing bot (Florence Nightingale) Fluorescent “chain terminator” nucleotides Fluoroscopy continuous X-ray image displayed on a monitor which provides the ability of real-time monitoring of a medical procedure or the course of a contrast agent (“dye”) through the body FluoroShield System combines AI technology with an ultra-fast collimator, and an advanced imageprocessing platform to reduce radiation exposure by up to 84% Forebrain anomalies Polymicrogyria, Megalencephaly, Microencephaly, Heterotopias, Holoprosencephaly, Agenesis of the corpus callosum Forebrain Most highly developed part of the human brain: it consists primarily of the Cerebrum and the Inner Brain Forensic testing Uses DNA sequences to identify an individual for legal purposes Framework A platform for developing software applications FRAX (Fracture Risk Assessment Tool) Predictive algorithms Frequency (sensitivity) Models how likely it is that a patient with a disease manifests a particular finding in an Expert System Front-end programming programming where the program code is visible to the user Frontal lobe See Cerebrum Functio laesa “Loss of function” in the inflammatory process Fundoscopy See Fundus imaging Fundus imaging Imaging of retina using fundoscopy or ophthalmoscopy Fuzzy logic Indiscriminate nature of real things with a series of transitional state GEMINI Integrative healthcare analytics system Gene expression modulators Cancer treatment Gene mutation Change in base compound sequencing Gene sequencing Ordering of base compound pairs (adenine paired with thymine and guanine paired with cytosine) Gene signature Gene expression or number of RNA molecules they produce Gene A unit of heredity which is transferred from a parent to offspring and is held to Genetic and hereditary disorders Inherited disorders Genetic cloning The processes used to create an exact genetic replica of another cell, tissue or organism Genetic code See DNA sequencing Genetic engineering or modification Editing or modifying genetic material Genetic scissor Method used in CRISPR-Cas9 procedure Genetic testing tests to identify changes in chromosomes, genes, or proteins Genome An organism’s complete set of DNA, including all of its genes Genomic medicine Clinical science that attempts to build individual strategies for diagnostic or therapeutic decision-making Genomics Study of the genome Genotype Genetic information carried for a trait Germ cell tumors Growths that form from reproductive cells Germline genetic modification Methods used to change the genes in eggs, sperm, or early embryos Gestational diabetes Develops during pregnancy in women who don’t already have diabetes Giant cell arteritis Severe inflammation in affected arteries

Gigahertz (GHz) A unit of clock frequency, also known as a clock rate or clock speed, equal to one billion cycles per second
GIOSTAR Labs Company dedicated to stem cell-based technologies
Glioma Brain tumor (cancer)
Gliptins (DPP-4 inhibitors) Work by preventing the breakdown of a naturally occurring hormone called GLP-1 in type 2 diabetes
GLP-1 agonists For type 2 diabetes, act similarly to the natural hormone GLP-1 (see gliptins)
GoldSTAR Provides stem cell services to the underserved
Google Search engine
Google's DeepMind Health Precision medicine program
Grand mal seizure Tonic-clonic or convulsive seizures
Granular leukocytes Eosinophils, neutrophils, and basophils
Graves' disease Autoimmune disease that causes hyperthyroidism
Guanine (Gs) Nucleotide base compound of a gene
Guillain-Barre syndrome Genetic and hereditary disorders
Gustatory cortex Sensory cortical area for taste
Gyri Folded bulges in the brain
Hard disk Storage device
Hardware Physical components of a computer
Hardware Description Language (HDL) Verilog and VHSIC HDL
Health analytics Tools and techniques that extract information from complex and voluminous datasets and translate it into information to assist decision-making in healthcare
Health Tap See Chatbot
Hearing (auditory) See The five senses
Heart attack Myocardial infarction: Loss of blood supply to heart ventricle (ischemia)
Heart scan Computerized tomography (CT) of heart
Heart Muscular organ that beats over 100,000 times a day to pump blood throughout the body's 60,000 miles of blood vessels
Helicobacter pylori (H. pylori) Risk of stomach cancer
Hematocrit Test to measure the percentage of blood made up of red blood cells
Hematopoietic stem cell A "blood stem cell" that can develop into all types of blood cells found in the peripheral blood and the bone marrow
Hemoglobin Oxygen-carrying protein in the blood
Hemolytic anemia Disorder in which red blood cells are destroyed faster than they can be made
Hemophilia Hereditary disease (found in X chromosome)
Hepatic encephalopathy A decline in brain function that occurs as a result of severe liver disease
Hepatitis C Virus (HCV) Risk of liver cancer
Heredity The passing on of physical or mental characteristics genetically from one generation to another
Heterozygous 1 recessive and 1 dominant allele
Heuristic Trial and error, intuitive or "rule of thumb"
Hidden layer Inner layer of computer and human cortex; see Inner layer
Hierarchical clustering A technique that groups similar data points such that the points in the same group are more similar to each other than the points in the other groups
High-density protein (HDL) "Good cholesterol"
Hindbrain Includes the upper part of the spinal cord, the brain stem, and a small spherical mass of tissue called the cerebellum
Hippocampus Nucleus in the midbrain limbic system
Hives See Urticaria
Home Google chatbot
Homozygous 2 recessive alleles

Horizontal integration The coordination of activities across operating units that are at the same stage or level in the process of delivering services Hormone therapies Cancer treatment Human genome project Complete mapping of the human DNA sequencing Human immunodeficiency virus (HIV) Risk of Kaposi sarcoma, lymphomas (including both non-Hodgkin lymphoma and Hodgkin disease), and cancers of the cervix, anus, lung, liver, and throat Human papillomaviruses (HPVs) Risk of all cervical cancers and penile cancers Human T-cell leukemia/lymphoma virus type 1 (HTLV-1) Risk of adult T-cell leukemia/lymphoma (ATLL) Human vaccines project Combining systems biology with artificial intelligence to understand one of the greatest human immune systems Humoral Blood-related serum and fluids that carry WBCs, B and T lymphocytes and chemical components like cytokines Hydrocephalus A condition in which an accumulation of cerebrospinal fluid (CSF) occurs within the brain Hyperlipidemia Acquired or genetic disorders that result in a high level of lipids (fats, cholesterol, and triglycerides) circulating in the blood Hyperplasia The enlargement of an organ or tissue caused by an increase in the reproduction rate of its cells, often as an initial stage in the development of cancer. Hypersensitivity response Allergic IgE response Hypertensive cerebrovascular disease Changes in cerebral vasculature secondary to hypertension Hypertext Markup Language (HTML) Universal website language Hypertrophy The enlargement of an organ or tissue from the increase in the size of its cells IBM’s Watson Genomics and Oncology Precision medicine program Ibuprofen Nonsteroidal anti-inflammatory drug (NSAID) IDEAL-X Adaptive learning platform (robot) IDx-DR A software program that uses an AI algorithm to analyze images of the eye taken with a retinal camera. Used to diagnose diabetic retinopathy If/then statements Reveal associations between independent data in a database, relational (inference rules logic) reveal associations between independent data in a database, relational or other information repositories Imalogix Platform for diagnostic imaging Immunocompromised Impaired immune system Immunodeficiency Lack of immune response to antigen Immunogenetics Branch of medical genetics that explores the relationship between the immune system and genetics. Immunogenomics Adding, for each of us, the millions of uniquely randomized T- and B-cell receptor genes that encode our immune repertoires Immunoglobulin E (IgE) Hypersensitivity, allergic immune reaction Immunomodulators Non-specific drug categories that suppress or stimulate the immune system Immunosuppression Decreased (suppressed) immune system inDelphi Algorithm to predict DNA repairs through Cas9 Independent variables Variable (often denoted by x) whose variation does not depend on that of another Industry 4.0v (Health 4.0) Cyber-Physical Systems (CPS) integrated with IoT and cloud-computing plus an array of other technologies Inference engine See if/then statement Infermedica SP See Chatbot Inflammation Acute or chronic clinical response of the acquired immune response characterized by reddened (“rubor”), hot (“calor”), swelling (“tumor”) and pain (“dolor”) Innate immune system Natural (healthy) defense system of the body

Inner (hidden) layer See hidden layer Input layer information or data provided by an external source called an input device Instagram Online video service (purchased by Facebook in 2012) Insulin Hormone produced in the pancreas by the islets of Langerhans, which regulates the amount of glucose in the blood Insulin pump therapy Treatment for type 1 diabetes Insulin, oral Treatment for type 2 diabetes Intel Microprocessor company Intelligent agent (autonomous driving platform) Algorithm that controls acts as the brain of the autonomous vehicle and connected to a database that acts as a memory where past driving experiences are stored Intelligent Heart Diseases Diagnosis Algorithm (IHDDA) Algorithm to dynamically read heart signals Interactive agent See Chatbot Interferon Immune system cytokine Interleukins Immune system cytokine Internal carotid arteries Branches from the common carotid arteries Internet (Net) Worldwide network of computers Internet Explorer Microsoft browser Interoperability Practical implementation and application of vertical integration within the health care industry Interpreter Responsible for interpreting the results of the inference engine output, including explaining the correctness and reason for the conclusion Interventional cardiology Subspecialty of cardiology that deals specifically with the catheter-based treatment of heart diseases Interventional endoscopy Bronchoscopy, colonoscopy, laparoscopy, cystoscopy Intracerebral abscess See Brain abscess Intracranial aneurysms See Aneurysm Iron-deficiency anemia Condition in which blood lacks adequate healthy red blood cells Islet cell transplantation Treatment for type 1 diabetes Jennifer Doudna 2020 Nobel laureate in chemistry for CRISPR-Cas9 K-means clustering An iterative algorithm that tries to partition the dataset into K pre-defined distinct nonoverlapping subgroups (clusters) Karyotype Overall number and shape of all your chromosomes KenSci Risk prediction company Kernicterus Bilirubin toxicity Kiwi Robot for autistic children Knowledge-based systems (expert systems) Contain medical knowledge, usually about a very specifically defined task, and they can reason with data from individual patients to come up with reasonable conclusions Labeled data Usually associated with machine learning Lead poisoning Environmental toxicity Lefluomide See DMARD Left atrium Receives oxygenated blood from the pulmonary veins Left ventricle Contracts to send oxygenated blood through the (high pressure) aorta and to the arteries throughout the body Leukemia A cancer of bone marrow; creates abnormal WBCs Leukocytes White cells that help the body fight bacteria and infection Limbic system Set of structures in the brain that deal with emotions and memory Linear algebra Branch of mathematics concerning linear equations such as linear functions and their representations in vector spaces and through matrices

Logic gate They are integrated circuits that are sets of circuits on a chip (an “array”) Long term memory Associated with the limbic hippocampus (similar to long term computer memory) Low-density protein (LDL) “Bad cholesterol” Lumbar puncture Needle is inserted between two lumbar bones (vertebrae) to remove a sample of cerebrospinal fluid Lymphatic system Network of tissues and organs that help rid the body of toxins, waste and other unwanted materials Lymphedema Accumulation of lymph fluid in the soft tissues, most frequently in the arms or legs Lymphocyte (TH,S,C,M and B, BM) Cellular component of the immune system Lymphoma Cancers of the immune system causing abnormal lymphocytes to become lymphoma cells Lymphoscintigraphy Test for lymphedema Machine learning Principal learning process in AI Magnetocardiography Technique to measure the magnetic fields produced by electrical currents in the heart using extremely sensitive devices such as the superconducting quantum interference device (SQUID) Mammography Type of radiography is used to capture images (mammograms) of internal structures of the breasts Mammography Quality Standards Act To ensure the quality of mammography for early breast cancer detection (1992) Marfan’s syndrome Genetic disorder of the connective tissue McCarthy, John One of the founders of the discipline of artificial intelligence; see also Marvin Minsky MedLEE Natural language processing system for comorbidities Medulla oblongata See Brain stem Medulloblastoma Brain tumor (cancer) Medxnote A bot that plugs directly into a hospital’s electronic medical records system Megabyte 106 or 1,000,000 bytes Megahertz (MHz) One million hertz (one cycle per second) Memory networking (ANN) Artificial neural networking (ANN) using RAM (random access memory) (analogous to the hippocampus) Meningioma (Arises mainly from arachnoid cells; histologically benign); Brain tumor (cancer) Meningitis Inflammation (swelling) of the protective membranes covering the brain and spinal cord Merkel cell polyomavirus (MCPyV) Risk of Merkel cell carcinoma Metabolomics Large-scale study of small molecules, commonly known as metabolites Metadata When, how, and by whom data is created Metastasis Development of secondary malignant growths at a distance from a primary site of cancer Metformin First line medicine for type 2 diabetes to reduce the amount of glucose your liver releases into your bloodstream; Methotrexate See DMARD Microbiome Genetic material of all the microbes - bacteria, fungi, protozoa, and viruses - that live on and inside the human body Microprocessor An integrated circuit that contains all the functions of a central processing unit of a computer. Microsoft’s Project Hanover Precision medicine program Midbrain See Brain stem Migraine Recurring type of headache Minsky, Marvin One of the founders of the discipline of artificial intelligence; see also John McCarthy Mobile communication networks (5G) Fifth-generation wireless technology for digital cellular networks Mobile health (mHealth) Form of videoconferencing

Molecular biology Branch of biology that deals with the structure and function of the macromolecules (e.g. proteins and nucleic acids) essential to life
Molecular genetic tests Gene tests
Molecularly targeted therapies Cancer treatment
Molly Nursing bot
Monoclonal antibodies Immune system modulators
Monocytes See Non-granular leukocytes
Moore's Law The number of transistors on a microchip doubles about every two years, though the cost of computers is halved
Mosaicism Genetic changes that are not present in a parent's egg or sperm cells, or in the fertilized egg, but arise later in the embryo
Moxi Nursing robot
MRI (Magnetic Resonance Imaging) scan
mRNA Messenger ribonucleic acid
Multi-core processor A single computing component composed of two or more CPUs that read and execute the actual program instructions
Multi-omics Genotype-phenotype data through genome-wide association studies (GWAS)
Multimorbidity See Concurrent Medical Conditions
Multiple sclerosis Demyelinating disease
Multivariable calculus The extension of calculus in one variable to calculus with functions of several variables
Mutation Changing of the structure of a gene, resulting in a variant form that may be transmitted to subsequent generations, caused by the alteration of single base units in DNA, or the deletion, insertion, or rearrangement of larger sections of genes or chromosomes; see gene mutation
Myasthenia gravis Genetic and hereditary disorder
MyCode See Clear Genetics Community Health Initiative (Geisinger)
Myeloma Cancers of the plasma cells
Myelomeningocele (Meningomyelocele) See Developmental disorders
Myocardium Middle, muscular layer of the heart
Myoclonic seizure Brief, shock-like jerks of a muscle or a group of muscles
Myopathy A disease of the muscle in which the muscle fibers do not function properly
Name Entity Recognition (Bio-NER) Evidence extraction for precision medicine
Nanobots Robots that carry out a very specific function and are ~50–100 nm wide
Nanorobotics The technology of creating machines or robots at or close to the scale of a nanometer; see nanobots
Naproxen A non-steroidal anti-inflammatory drug (NSAID)
Nateglinide and repaglinide Stimulate the release of insulin by the pancreas (not commonly used)
Natural language generation (NLG) Software process that transforms structured data into natural language
Natural language processing (NLP) A subfield of linguistics, computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data
Neoplasm A new and abnormal growth of tissue in some part of the body, especially as a characteristic of cancer
Neoplastic Relating to a neoplasm or neoplasia
Neural network (NN) A computer system modeled on the human brain and nervous system; see deep neural network (DNN), artificial neural network (ANN)
Neurilemoma Peripheral nerve sheath tumors
Neuroblastoma Brain tumor (cancer)

Neurological disorders Disorders of the central or peripheral nervous systems Neuromorphic chips Self-learning microchips Neuromyelitis optica (Devic’s disease) See demyelinating disorders Neuron Specialized cell transmitting nerve impulses; a nerve cell Neuroradiological imaging Focuses on diagnosing conditions of the spine, neck, head, and central nervous system using computed tomography (CT) or magnetic resonance Neuroscience The scientific study of the nervous system Neurotransmitters A chemical substance that is released at the end of a nerve fiber by the arrival of a nerve impulse and, by diffusing across the synapse or junction, causes the transfer of the impulse to another nerve fiber, a muscle fiber, or some other structure Neutrophils Type of white blood cell that helps heal damaged tissues and resolve infections Next-generation sequencing or next-gen sequencing (NGS) A high-throughput method used to determine a portion of the nucleotide sequence of an individual’s genome; see Sanger Method Non-granular leukocytes Monocytes, T-cell lymphocytes, B-cell lymphocytes Non-self Antigen NSAIDs Nonsteroidal anti-inflammatory drugs; see aspirin, ibuprofen, naproxen Nuclear medicine scan Radionuclide scan Nuclear stress test Study aimed at measuring whether the blood flow to your heart muscle is normal or abnormal utilizing a radioactive tracer to create an image of how well blood is reaching your heart muscle, both during exercise and while at rest Nucleotide bases Adenine (As), thymine (Ts), guanine (Gs) and cytosine (Cs) Nurse practitioners (NPs) An advanced practice Registered Nurse (RN) with additional education and responsibilities for administering patient; also referred to as Nurse Clinicians Object file Compilation of assembled target data Occipital lobe See Cerebrum Occupational medicine Branch of medicine which is concerned with the maintenance of health in the workplace, including prevention and treatment of diseases and injuries Oligodendroglioma (arises from oligodendrocytes) Brain tumor (cancer) Omics “Field of study” suffix itis Suffix for inflammation Oncoevolution Proto-oncogenes causing a standard cell transforms into a neoplastic cell Oncogene Cancer gene Oncogenesis Random genetic error Oncology Study and treatment of cancer and tumors Operating system (OS) Software that supports a computer’s basic functions, such as scheduling tasks, executing applications, and controlling peripherals Optical coherence tomography (OCT) A non-invasive imaging test that uses light waves to take crosssection pictures of your retina Optical drive A computer system that allows users to use DVDs, CDs, and Blu-ray Osteoporosis (osteometabolic disease) Disease in which the density and quality of bone are reduced P4 Medicine (Systems medicine) A plan to radically improve the quality of human life via biotechnology p53 Gene Makes a protein that stops mutated cells from dividing; “guardian of the genome.” PALB2 Gene for breast cancer Palpitations the feeling that your heart is pounding hard and rapidly and is fluttering or skipping beats Parietal lobe See Cerebrum Parkinson’s Disease A progressive nervous system disorder that affects movement Participatory medicine The concept that patients should play a decisive role in their healthcare by actively controlling their health status and by participating in the decision-making process regarding their treatments

Pathogenesis Natural history of a disease
Pathology The bioscience of the causes and effects of diseases, especially the branch of medicine that deals with the laboratory examination of samples of body tissue for diagnostic or forensic purposes
Pathophysiology The disordered physiological processes associated with disease or injury
Pentium Family of 32- and 64-bit x86-based CPU chips from Intel
Perception-action cycle A data loop that repeats continuously in autonomous vehicles; with more cycles the intelligent agent becomes more intelligent, resulting in higher accuracy of decision-making, especially in complex driving situations
Pericardium Protective membrane surrounding the heart
Peripheral aneurysm Aneurysm in the abdomen or legs
Peripheral arterial disease Chronic disease caused by plaque buildup in the arteries to the legs
Peripheral nerve sheath tumors Tumors occurring on any peripheral nerve (e.g., Schwannoma/Neurilemoma)
Peripherals Any external device that provides input and output for the computer
Peroneal nerve palsy Entrapment neuropathy
Personalized health care Precision health care
Personalized medicine Precision medicine
PharmaBot Helps clinicians securely file prescription errors into the hospital's system
Pharmacoeconomics Branch of health economics which deals with identifying, measuring, and comparing the costs and consequences of pharmaceutical products and services
Pharmacogenetics Study of how people respond differently to drug therapy based upon their genetic makeup or genes
Pharmacogenomics Study of how an individual's genome can impact their responses to medication
Pharmacovigilance The practice of monitoring the effects of medical drugs after they have been licensed for use, especially to identify and evaluate previously unreported adverse reactions
Pharmeum A system heavily reliant on blockchain and AI technology that allows for coexistence between doctors, pharmacies, regulators, and patients
Phenotype How a trait (genotype) shows on your physical body
Phenylketonuria A genetic disorder that causes intellectual disability if left untreated
Pheochromocytoma Adrenal gland tumor
Photoacoustic imaging Imaging in which non-ionizing laser pulses are delivered into biological tissues
Physical distancing See Social distancing
Pilocytic Astrocytoma Brain tumor (cancer)
Pioglitazone For type 2 diabetes, makes the body's cells more sensitive to insulin so more glucose is taken from your blood
Neuroplasticity Brain plasticity that refers to the brain's ability to change throughout life
Platelets See Thrombocytes
Platforms See Frameworks
Pluripotent stem cells Cells that can undergo self-renewal and give rise to all cells of the tissues of the body
Polymorphism Genetic alterations that occur in more than 1% of the population
Polyneuropathy Damage or disease affecting peripheral nerves (peripheral neuropathy) in roughly the same areas on both sides of the body, featuring weakness, numbness, and burning pain
Pons See Brain stem
Population health System that allows clinicians and health managers to form customized, proactive care plans for their plan members, designed to intervene on identified risk factors
Porphyria Genetic and hereditary disorders
Posterior fossa anomalies Arnold-Chiari malformation

Precision Medicine Initiative (PMI) A long-term research project, involving the National Institutes of Health (NIH) and multiple other research centers, which aims to understand how a person’s genetics, environment, and lifestyle can help determine the best approach to prevent or treat disease Predictive analytics Analysis of patient data to determine possible patient outcomes, such as the likelihood of a worsening or improving health condition, or chances of inheriting an illness in an individual’s family Preimplantation testing Also called preimplantation genetic diagnosis (PGD) Prenatal testing Used to detect changes in a fetus’s genes or chromosomes before birth Prescriptive analytics Machine learning algorithms to perform comprehensive analyses of patient data to improve the quality of patient management Preventive Care Flows from predictive care as well as precision care Preventive medicine Preventive care Primary brain lymphoma Brain cancer Primary care Gatekeeper, the first point of consultation Pro-inflammatory cytokine Type of signaling molecule that is secreted from immune cells like helper T cells (Th) and macrophages, and certain other cell types that promote inflammation PROBOT Surgical robot Progenitor Originating cause Programmable Universal Machine for Assembly (PUMA) Robotics Proteome Gene protein Proteomics Biochemistry, functions, and interactions or proteomes within the body Proto-oncogenes Genes that promote cell growth and cellular division Protocols Procedure or system of rules governing actions Public health The art and science of preventing disease, prolonging life, and promoting health through the organized efforts and informed choices of society, organizations, public and private communities, and individuals Public Health Genomics Knowledge Base Epidemiologic study database PubMed database Free resource supporting the search and retrieval of peer-reviewed biomedical and life sciences literature Pulmonary atresia A form of heart disease in which the pulmonary valve does not form properly Pulmonary hypertension Increased pressure in the pulmonary artery Pulmonary thrombosis Occurs when a clump of material, most often a blood clot, gets wedged into an artery in your lungs Python An interpreted, high-level, general-purpose programming language Quantum processors The use of quantum-mechanical phenomena such as superposition and entanglement to perform computation Quaternary care More complex care including experimental treatment and highly complicated surgeries Qubits Unit of quantum information that can store both 0 s and 1 s simultaneously, or an infinite number of values in between, in multiple states (i.e., store multiple values) at the same time; see Superposition Radiation oncology Medical specialty that involves the controlled use of radiation to treat cancer; see radiation therapy Radiation therapy See Radiation oncology Radiogenomics Radiographic image analysis used to predict underlying genotypic traits Radiography The process or occupation of taking radiographs to assist in medical examinations Radiomics Information from large datasets of qualitative and quantitative imaging RAM Random Access Memory RAM drive RAM drive (also called a RAM disk) is a block of random-access memory (primary storage or volatile memory) that a computer’s software is treating as if the memory were a disk drive Raynaud’s disease A disorder that causes the blood vessels to narrow when you are cold or feeling stressed Recombinant Genetic material formed by recombination (e.g., 
recombinant DNA)

Regenerative medicine The process of replacing or “regenerating” human cells, tissues or organs to restore or establish normal function; see Stem cells Regression Measure of the relation between the mean value of one variable (e.g. output) and corresponding values of other variables Regression analysis (Support vector machine) A set of statistical processes for estimating the relationships between a dependent variable (often called the ‘outcome variable’) and one or more independent variables (often called ‘predictors’, ‘covariates’, or ‘features’) Rehabilitation The action of restoring someone to health or normal life through training and therapy after imprisonment, addiction, or illness Reinforcement learning (RL) Area of machine learning concerned with how software agents ought to take actions in an environment to maximize some notion of cumulative reward. Relays Output unit for robotics Remidio ‘Fundus on phone’ (FOP), a smartphone-based device Remote patient monitoring (RPM) Form of remote video monitoring Renovascular condition Renal arteries become blocked Resistors A passive two-terminal electrical component that implements electrical resistance as a circuit element Retinoblastoma Genetic and hereditary retinal tumor Rheumatic heart disease A condition in which permanent damage to heart valves is caused by rheumatic fever Rheumatoid arthritis The most common type of autoimmune arthritis Ribosome A minute particle consisting of RNA and associated proteins found in large numbers in the cytoplasm of living cells Right atrium Receives deoxygenated blood from superior and inferior vena cava Right ventricle Contracts to send deoxygenated blood through the pulmonary valve to the lungs Robodoc Robot Robot A programmable machine that physically interacts with the world around it and is capable of carrying out a complex series of actions autonomously (self-governing) or semi-autonomously “Robotic process automation” (RPA) The use of software to mimic human actions to perform a sequence of steps, leading to meaningful activity, without any human intervention Robotics An interdisciplinary branch of engineering and computer science (AI) ROM Read-only memory Rubor Redness associated with acute inflammation Safari Apple browser Sanger Method of sequencing Original sequencing technology (the Sanger method); “chain termination method”; see Next-generation sequencing (NGS) Sarcoma A cancer of connective tissues such as bones, muscles, cartilage, and blood vessels Schrödinger’s cat A thought experiment, sometimes described as a paradox (in quantum physics), devised by Austrian physicist Erwin Schrödinger in 1935 Schwannoma Peripheral Nerve Sheath Tumors Sclerotherapy A medical procedure used to eliminate varicose veins and spider veins Secondary care Specialty care and acute care hospitals (e.g., ER) Secondary storage device Backup device to hard drive storage Self-driving vehicle Autonomous, robotic vehicle Semantic rule Analyzes the meaning conveyed by a text by interpretation of words and how sentences are structured Semiconductor A solid substance that has a conductivity between that of an insulator and that of most metals Semiconductor integrated circuit A microchip

Microchip See Semiconductor Sensely Inc. See Chatbot Sensorium Totality of neurological centers that receive, process and interpret sensory stimuli Sensors Robotic input sources Server A machine (computer) that “serves” other machines SGLT2 inhibitors For type 2 diabetes, works by increasing the amount of glucose excreted in urine Sickle cell anemia See Anemia Signal transduction inhibitors Cancer treatment Siri Apple Assistant Smart Health an Emerging concept that refers to the provision of healthcare services for prevention, diagnosis, treatment, and follow-up management at any time or any place by connecting information technologies and healthcare Smart hospitals Data, insight, and access Smart sensors See CarePredict Smartphone A class of mobile phones and multi-purpose mobile computing devices. Smartplaness software 3D ultrasound scanning Social distancing Maintaining minimum distance between people during epidemic Sociomarkers Measurable indicators of social conditions, at the group-level to improve disease surveillance, disease prediction, and implementation and evaluation of population health interventions Software The programs and other operating information used by a computer Solid-state device Electronic device in which electricity flows through solid semiconductor crystals (silicon, gallium arsenide, germanium) Solid-state relay (SSR) An electronic switching device that switches on or off when a small external voltage is applied across its control terminals Somatic mutations Genetic alteration acquired by a cell that can be passed to the progeny of the mutated cell in the course of cell division Somatic nervous system Peripheral nervous system (PNS) Sound cards A device that can be slotted into a computer to allow the use of audio components for multimedia applications. Source code Text listing of commands to be compiled or assembled into an executable computer program Speech synthesis The process of generating spoken language by a machine based on written input Sphygmomanometer Blood pressure measuring device Spina bifida Developmental disorder of the spine and central nervous system Spinal cord See Peripheral nervous system (PNS) Statins Cholesterol-lowering drugs Staging Determination of how advanced the cancer is relative to its spreading (metastasized) beyond its original location Stakeholders Providers, insurers/payers, pharma companies, etc. “Stem Cell Core Lab” of Columbia University Precision medicine genetic analyses and molecular interventions Stem cell transplantation Cell replacement therapy Stents Tubular support placed temporarily inside a blood vessel, canal, or duct to aid healing or relieve an obstruction Storage devices A piece of computer hardware on which information can be stored Stroke Cerebrovascular accident; Blood supply to a part of the brain is suddenly interrupted Subacute combined degeneration (B12 deficiency) Refers to degeneration of the posterior and lateral columns of the spinal cord (Lichtheim’s disease) Subarachnoid hemorrhage (SAH) Life-threatening type of stroke caused by bleeding into the space surrounding the brain

Subdural hemorrhage type of bleeding that often occurs outside the brain as a result of a severe head injury Sulci Fissures in brain Sulfasalazine DMARD Sulphonylureas For type 2 diabetes, increase the amount of insulin produced by the pancreas Supercomputer A particularly powerful mainframe computer Superposition The ability of a quantum system to be in multiple states at the same time until it is measured; see qubits and quantum processing Supervised learning Labeled data Swarms Collective behavior of decentralized, self-organized systems, natural or artificial employed in work on artificial intelligence Synapse A junction between two nerve cells, consisting of a minute gap across which impulses pass by diffusion of a neurotransmitter Synchronous videoconferencing Live videoconferencing Syntactic Relating to the rules of language Syringomyelia See developmental disorders System software Files and programs that make up your computer’s operating system Systemic lupus erythematosus (SLE) An autoimmune disease that can affect any tissue or organ system of the body T-cell lymphocytes See non-granular leukocytes Takayasu’s Arteritis Inflammation of aortic arch and branches Target code Fully compiled or assembled program ready to be loaded into the computer Tarsal tunnel syndrome a compression, or squeezing, on the posterior tibial nerve that produces symptoms anywhere along the path of the nerve running from the inside of the ankle into the foot Tay-Sachs disease an inherited metabolic disorder in which certain lipids accumulate in the brain, causing spasticity and death in childhood TCP/IP Transmission control protocol/Internet protocol Telemedicine (aka Telehealth) The remote diagnosis and treatment of patients using telecommunications technology. Telepsychiatry Application of telemedicine to the specialty field of psychiatry Telestroke services Network of audiovisual communication and computer systems for delivery of stroke clinical services Temporal arteritis See Giant cell arteritis Temporal lobe See Cerebrum Tempus Focuses on developing personalized cancer care using a machine learning platform (Mayo) Tertiary care More highly specialized care including certain surgeries, cancer treatments, and other complicated procedures. Tetralogy of Fallot A combination of four congenital abnormalities including a ventricular septal defect (VSD), pulmonary valve stenosis, a misplaced aorta and a thickened right ventricular wall (right ventricular hypertrophy) Thalamus Limbic nucleus in the midbrain Third-party insurers A form of liability insurance purchased by an insured (first-party) from an insurer (second party) for protection against the claims of another (third party) Three-D imaging Technique that combines many scans (from computed tomography, MRI or ultrasonography) computationally Three-D laser lithography Fabricates microrobot that can deliver therapeutic stem cells precisely to very specific parts of the brain Thrombocytes Platelets aid the formation of blood clots Thrombogram Predictor of bleeding episodes

Thrombosis Focal coagulation or clotting of the blood in a part of the circulatory system
Thrombus Obstructive blood clot
Thumb drive Generic term referring to a USB drive or a pen drive
Thymine (Ts) Nucleic acid base compound of gene
Tonic-clonic or convulsive seizures Formerly known as grand mal seizures
Touchpad A computer input device in the form of a small panel containing different touch-sensitive areas
Toxin delivery molecules Cancer treatment
Toxoplasmosis Disease caused by toxoplasmas, transmitted chiefly through undercooked meat, or in soil or cat feces
Training Procedure used to perform the learning process in a neural network
Transcriptomics Study of the transcriptome, the full set of RNA transcripts expressed from the genes
Transient ischemic attack TIA or mini-stroke
Transistor A semiconductor device with three connections, capable of amplification in addition to rectification
Transverse myelitis Inflammation of the spinal cord; see Demyelinating disorders
TRAuma Care in a Rucksack: TRACIR AI enabling medical interventions that extend the "golden hour" for treating combat casualties and ensure an injured person's survival for long medical evacuations
TREAT Expert system based on a causal probabilistic network to improve antibiotic therapy in hospitalized patients
Triage Assignment of degrees of urgency to wounds or illnesses to decide the order of treatment of a large number of patients or casualties
TRICARE Military health system
Tricuspid valve Valve that is situated at the opening of the right atrium of the heart into the right ventricle
Triglycerides Stored in fat cells
Tuberous sclerosis Rare disease that causes tumors, or growths, in the brain and other organs
Tumor Swelling in acute inflammation; also, a mass of tissue that's formed by an accumulation of abnormal cells
Turing test Developed by Alan Turing in 1950, a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human
Twenty-Three (23andMe) Offers a genetic blueprint for ancestry and health markers
Twitter Social media application allowing messages of up to 140 characters
Ultrasound (Ultrasonography) A technique using echoes of ultrasound pulses to delineate objects or areas of different density in the body
Undersea and hyperbaric medicine Deals with decompression illness and diving accident cases
Unlabeled data No explanation, label, tag, class or name for the features in data in machine learning
Unstructured medical data Physician notes, medical correspondence, individual EMRs, lab and imaging systems, CRM systems, finance, and claims
Unsupervised learning See Unlabeled data
Urticaria An outbreak of swollen, pale red bumps or plaques (wheals) on the skin that appear suddenly either as a result of the body's reaction to certain allergens or for unknown reasons; see Hives
V Framework Vs of big data
Vaccination Treatment with a vaccine to produce immunity against disease; inoculation
Varicose veins Large bulging veins in the legs causing multiple, different types of symptoms
Vascular Pertaining to blood vessels and the heart (cardiovascular)
Vascular trauma Refers to injury to a blood vessel
Vasculitis Group of disorders that involve inflammation of blood vessels produced by the immune system
Vectorcardiography Method of recording the magnitude and direction of the electrical forces that are generated by the heart through a continuous series of vectors that form curving lines around a central point
Venules See Blood vessels

Verily (Google Life Sciences-Alphabet Inc.'s research organization) Used machine learning to assess the risk of a patient suffering from cardiovascular disease
Vertebral arteries Major arteries of the neck. Typically, the vertebral arteries originate from the subclavian arteries
Vertical integration The coordination of (health) services among operating units, i.e., providers, facilities (hospitals, etc.), insurers that are at different stages or levels of the process of delivery of patient services
Video cards A printed circuit board controlling output to a display screen
Videoconferencing Live (synchronous); store and forward (asynchronous); remote patient monitoring (RPM); mobile health (mHealth)
Virtual assistant (conversational agent) See Chatbots
Virtual reality The computer-generated simulation of a three-dimensional image or environment that can be interacted with in a seemingly real or physical way by a person using special electronic equipment, such as a helmet with a screen inside or gloves fitted with sensors
Visceral artery aneurysm Aneurysm associated with the arteries supplying the liver, spleen, kidneys or intestines
Visual cortex Cortical center in the occipital lobe for interpreting vision
Vital signs Body temperature (BT); blood pressure (BP); pulse rate (PR); respiration rate (RR)
Vitamin-deficiency anemia See Anemia
Von Willebrand disease Lack of blood clotting factor
VSELs (very small embryonic-like stem cells) Stem cells with plasticity
Wearable device IoT (Internet of Things) devices
Web (WWW or World Wide Web) A way of accessing, viewing and sharing information over the Internet
Web browsers A program with a graphical user interface for displaying HTML files, used to navigate the World Wide Web
Webcam A video camera that inputs to a computer connected to the Internet, so that its images can be viewed by Internet users
Wellderly Blockchain platform for elderly wellness and care
Wellness Includes the presence of positive emotions and moods (e.g., contentment, happiness), the absence of negative emotions (e.g., depression, anxiety), satisfaction with life, fulfillment and positive functioning
Woebot Labs Inc. See Chatbot
XGBoost Gradient boosting
Yahoo Online directory and search engine on the World Wide Web
Your.MD Limited See Chatbot
YouTube Online video sharing website

Glossary of abbreviations

5G Mobile communication network
AARDA American Autoimmune Disease Related Association
ABVS Automated breast volume scanning
ACA Affordable Care Act
ACE Angiotensin converting enzyme
ACHD Adult congenital heart disease
ACMG American College of Medical Genetics and Genomics
ACR DSI American College of Radiology Data Science Institute
ACS American Cancer Society
ACT Adoptive cell transfer therapy
ACTES Automated Clinical Trial Eligibility Screener
AD Alzheimer's disease
ADC Analog-to-digital conversion
ADR Adenoma detection rate
ADR Adverse drug reaction
AI Artificial Intelligence
AIDS Acquired Immunodeficiency Syndrome
ALS Amyotrophic Lateral Sclerosis
ALU Arithmetic Logic Unit
AMD Age-related macular degeneration
AMIGO Advanced Multimodality Image Guided Operating Suite
AMP Association for Molecular Pathology
ANA Antinuclear antibodies
ANCA Antineutrophil cytoplasmic antibodies
ANN Artificial Neural Network or Memory Network
APC Antigen presenting complex
APHA American Public Health Association
API Application Programming Interface
App Application software
APU Accelerated Processing Unit
ARB Angiotensin II receptor blockers
As Adenine
ASCA Administrative Simplification Compliance Act
ASCO Adaptive Stem Cell Optimization
ASIC Application Specific Integrated Circuit
ASR Automated Speech Recognition
AUC Area under the receiver operating characteristic curve
AUROC See AUC
AUTOMAP Automated transform by manifold approximation (MRI)
B1 B lymphocyte
BCC Basal cell carcinoma (skin cancer)
Bio-NER Name Entity Recognition

BM BP BP BPH BPNN BRCA1 BRCA2 BT BV C:/C Drive CAD CAD CAOS CAP CARs CART CAT scan CBC CCD CD8 CDC CDI CDI CDO CDSi CDSS CEUS CF CHD CHF CKD CMR CMS CNS CONSULT (Collaborative mObile decisioN Support for managing mULtiple morbidiTies) Cop 1 COPD COVID-19 CP CPR CPS CPU Cs CSAIL

B memory lymphocyte Blood pressure Diastolic blood pressure Enlarged prostate Back propagation neural network Cancer gene Cancer gene Body temperature Bacterial vaginosis Computer hard drive Computer Aided Diagnosis Computer-assisted-diagnosis (computer-aided design) software Computer-assisted orthopedic surgery Severe community-acquired pneumonia Chimeric antigen receptors Classification and Regression Tree algorithm Computer tomography scan Complete blood count Critical care department Cell surface glycoprotein Center for Disease Control and Prevention Clinical Documentation Improvement (Expert System) Clostridium (Clostridioides) difficile infection Care delivery organization Clinical Decision Support Clinical Decision Support Services Contrast-enhanced ultrasound Cystic Fibrosis Congenital heart disease Congestive heart failure Chronic kidney disease Cardiac MRI Centers for Medicare & Medicaid Services Central nervous system Chatbot

Copaxone Chronic Obstructive Pulmonary Disease SARS-CoV-2 Pandemic Cerebral Palsy C-reactive protein Cyber-physical systems Central Processing Unit Cytosine MIT wearable device to monitor vital signs and emergencies

cSCC CT scan CTA CTEPH CTG CTPA CU CUI CV CVD CVST DCIS DCNN DDCS DDS DIRECT DKA DL DLB DMARDs DNA DNN DNS DPP-4 DQN DR DRAM DTMF DVT EBV ECG EDI EDX EEG EGG EHR ELIZA EMG ENG EOC EOG EP ER ERG ESC ESN FAERS FCC FDA

Cutaneous squamous cell carcinoma Computer tomography scan; see CAT scan CT angiography Chronic thromboembolic pulmonary hypertension Cardiotocography Computed tomography pulmonary angiogram Control unit Conversational user interface Cardiovascular Cardiovascular disease Cerebral Venous Sinus Thrombosis Ductal carcinoma in situ in breast cancer Deep convolutional neural network Digital drug control system Decision support system Discharge Referral Expert System for Care Transitions Diabetic ketoacidosis Deep learning Dementia with Lewy Bodies Disease modifying anti-rheumatic drugs Deoxyribonucleic acid Deep neural network Domain Name System Gliptins (DPP-4 inhibitors) Deep Q-Networks Diabetic retinopathy Dynamic random-access memory Dual Tone Multi-Frequency Deep vein thrombosis Epstein-Barr Virus Electrocardiography Evolutionary and developmental intelligence Electrodiagnosis Electroencephalography Electrogastrogram Electronic health record (originally EMR for “Electronic medical record”) “Eliza Doolittle” (Original chatbot) Electromyography Electronystagmography Ovarian Epithelial Cancer Electrooculography Emergency physician Emergency Room Electroretinography Embryonic stem cell Echo State Networks Adverse Event Reporting System Federal Communications Commission Federal Drug Administration

FPGA geoAI GER (GERD) GHz GI GIA GINA GIS GLP-1 GOV GPS GPU Gs GWAS HAT HBV HCP HCV HD HDL HDL HIPAA HITECH Act HIV HLA HMM HNSC HR-PCa HRSA HSC HSV HTA HTML HTTP HWSM HZ I/O IBS ICD-10 ICDR ICH ICNC CT ICT ICU IDEAL-X IDx-DR IFN IgAN IgE

Field-programmable Gate Array Geospatial AI Gastroesophageal reflux Gigahertz Gastrointestinal tract Chatbot “GIA” (Genetic Information Assistant) Genetic Information Non-Discrimination Act geographic information system Gliptin agonist Government agency Global positioning system Graphic processing unit Guanine Genome-wide association studies Health technology assessment Hepatitis B Virus Health care provider Hepatitis C Hard drive Hardware Description Language (Verilog and VHSIC HDL) High density protein (“Good cholesterol”) Health Insurance Portability and Accountability Act Health Information Technology for Economic and Clinical Health Act Human immunodeficiency virus Human leukocytic antigen Hidden Markov models Pulmonary metastases of head and neck squamous cell carcinoma Higher risk prostate cancer Health Resources and Services Administration (U.S.) Hybrid Stem Cell Herpes Simplex Virus Health technology assessment Hypertext markup language Hypertext transfer protocol Health Workforce Simulation Model of HRSA Herpes Zoster (Shingles) Input/Output Irritable bowel syndrome International Statistical Classification of Diseases and Related Health Problems) International Clinical Diabetic Retinopathy classification scale Intracranial hemorrhage International Conference on Nuclear Cardiology and Cardiac CT Information and Communication Technology Intensive Care Unit Adaptive learning platform (robot) Diabetic retinopathy imaging software Interferon Immunoglobulin A nephropathy Immunoglobin E

IHDDA INO IoE IOM IoT IP iPAH IPFS iPSC ISP IT itis IVF KB Kiwi LDL LIDC LPE LR-PCa LSTM LTPAC LVO MACRA MCPyV MDP MDSS MEG MERS-CoV MGH MHC mHealth MHz ML MLm Molly Moxi MRA MRI mRNA MSARD MTB NAHC NC NCBI NCHS NCI NCS Net NF 1

Intelligent Heart Diseases Diagnosis Algorithm Internuclear ophthalmoplegia Internet of Everything Institute of Medicine Internet of Things Internet protocol Idiopathic pulmonary arterial hypertension InterPlanetary File System Induced pluripotent stem cell Internet service provider Information technology Suffix for inflammation In vitro fertilization Knowledge base Robot for autistic children Low density protein Lung Image Database Consortium Lifelong premature ejaculation Lower risk prostate cancer Long short-term memories Long-term post-acute care Large vessel occlusion Medicare Access and CHIP Reauthorization Act of 2015 Merkel Cell Polyomavirus Markov Decision Processes Medical decision support systems Magnetoencephalography Middle East Respiratory Syndrome) Massachusetts General Hospital Major histocompatibility complex Mobile health Megahertz Machine learning Machine learning in medicine Nursing bot Nursing robot Magnetic resonance angiography Magnetic Resonance Imaging scan Messenger ribonucleic acid Marfan Syndrome and Related Disorders Mycobacterium tuberculosis National Association for Home Care & Hospice Nurse Clinician; see Nurse Practitioner National Center for Biotechnology Information (NCBI): Link ,medgen . National Center for Health Statistics National Cancer Institute Nerve conduction study Internet Neurofibromatosis Type 1

NF 2 NGO NGS NHCOA NIA NIEHS NIH NINR NLG NLP NLU NMSC NN NP NRC NSCLC NVSS OA OAI OCT OCTA OEA OI oma ome omics ONC OS OSHA OUD p53 PACT Care BV PALB2 PCa PCI PCNSL PCOS PDGM PE PET scan PGD PHCI PHD PHM PHN PHR PICU

Neurofibromatosis Type 2 Non-governmental organization Next-generation sequencing (or next-gen sequencing) National Hispanic Council on Aging (NHCOA): “All of US” protection National Institute on Aging National Institute of Environmental Health Sciences of the National Institute of Health National Institute of Health National Institute of Nursing Research; Symptom Science Model Natural language generation Natural language processing Natural language understanding Non-malignant skin cancer Neural network Nurse Practitioner; see Nurse Clinician National Research Council Non-small cell lung cancer National Vital Statistics System Osteoarthritis OsteoArthritis Initiative Optical coherence tomography Optical coherence tomography angiography Oncology Expert Advisor Osteogenesis Imperfecta Suffix referring to cancer or tumor Suffix regarding objects of study in biological fields Suffix referring to a field of study in biology Office of the National Coordinator for Health Information Technology Operating system US Dept. of Labor Occupational Standards and Safety Administration Opioid-use disorder Gene makes a protein that stops mutated cells from dividing (“Guardian of the genome”) Chatbot Gene for breast cancer Prostate cancer Prophylactic cranial radiation Primary central nervous system lymphoma Ovarian cysts and Polycystic Ovary Syndrome Patient-Driven Groupings Model Pulmonary emboli Positron emission tomography scan Preimplantation testing, also called preimplantation genetic diagnosis Patient health condition identification Patient health data Population Health Management Public health nursing Personal health record Pediatric intensive care units

PID PMI PML PNS POCT PPRHA PR PROBOT PROMIS PUMA PVOD QOL RA RAM RAP RAS RCIGM RECIST RF RFA RFID RL RNA ROM ROP ROS RP-VITA RPA RPM RR RTI RTP SAA SAH SARSA SARS-CoV SCC SCLC scRNA-seq SDRAM SGLT2 SICU SLE SmHT sMLP and dMLP SNF SPECT SPHinX SQUID

Pelvic Inflammatory Disease Precision Medicine Initiative Progressive multifocal leukoencephalopathy Peripheral nervous system Point-of-care-testing labs (e.g., Maverick Detection System from Genalyte) Patient preventive/remedial health advocacy Pulse rate Surgical robot Patient-Reported Outcomes Measurement Information System Programmable Universal Machine for Assembly Pulmonary veno-occlusive disease Quality of life Rheumatoid arthritis Random access memory Request for Anticipated Payment Robotically-assisted surgical devices Rady Children's Institute for Genomic Medicine Response Evaluation Criteria in Solid Tumors Radio frequency Radiofrequency ablation Radio frequency identification Reinforcement learning Ribonucleic acid Read-only memory Retinopathy of prematurity Robotic operating system Telepresence robot Robotic process automation Remote patient monitoring Respiration rate Reproductive tract infection "Return to Provider" Strategic Anemia Advisor Subarachnoid hemorrhage State-Action-Reward-State-Action Severe Acute Respiratory Syndrome (Coronavirus) Squamous cell carcinoma (skin cancer) Small cell lung cancer Single-cell RNA sequencing Synchronous dynamic random-access memory Inhibitors in type 2 diabetes Surgical intensive care units Systemic lupus erythematosus Smart home technologies Perceptrons Skilled nursing facility Single-photon emission computed tomography Sheffield Pulmonary Hypertension IndeX Superconducting quantum interference device

SSPE SSR STAR STD STDR SVM TAMG TC TCP/IP TEE THS TIA TIDE TM TNM TPU TRACIR TREAT tRNA TS Ts UA UID URL USB UTI VARIMED VED ViSi (Sotera Wireless) VRAM VSELs VWF WBC WCE Web WES WfG WFO WGS WHO WWW X-Ray XAI

Subacute sclerosing panencephalitis Solid-state relay Smart Tissue Autonomous Robot (Johns Hopkins University) Sexually transmitted disease Sight-threatening Diabetic Retinopathy Support vector machine Thymoma-associated myasthenia gravis T cytotoxic lymphocyte Transmission Control Protocol/Internet Protocol Transesophageal echocardiography Tolosa-Hunt syndrome Transient ischemic attack Tumor Immune Dysfunction and Exclusion T memory lymphocyte Tumor, Node, Metastasis Tensor Processing Unit TRAuma Care in a Rucksack Expert system based on a causal probabilistic network to improve antibiotic Transfer RNA T suppressor lymphocyte Thymine Uterine activity Unique identifier Uniform resource locator Universal serial bus Urinary tract infection Large Stanford University database of published disease-associated genetic variants Venous erectile dysfunction Mobile app to monitor vital signs Video random access memory Very small embryonic-like stem cells von Willebrand Factor White blood cell Wireless capsule endoscopy World Wide Web Whole-Exome sequencing Watson for Genomics IBM’s Watson for Oncology Whole-Genome sequencing World Health Organization World Wide Web Conventional radiography Explainable AI

Index

Note: Page numbers followed by "f" and "t" refer to figures and tables, respectively.

A
Abdominal Aortic Aneurysm, 323
Acanthosis nigricans, 334
Acarbose, 335
Accelerator, 46t, 48–49, 63–64. See also Graphic processing unit (GPU)
Accenture, 79–80, 84, 212, 227
Acquired immune response, 296–297. See also Acute inflammation
  chronic inflammation, 298–299
Acquired immune system, 296–297
Acquired Immunodeficiency Syndrome (AIDS), 81t, 378–379, 446–447
Acquired mutations, 172, 309, 314
Actuators, 61, 98, 350
Acute inflammation. See also Acquired immune response
  calor (heat/temperature), 299
  dolor (pain), 299
  functio laesa (functional loss), 299
  rubor (redness), 299–300
  tumor (swelling), 299
ADA Digital Health, Ltd., 187
Addiction medicine, 107–108
Adenine (As), 169, 169f, 171, 309, 418, 419f. See also Base compound
Adenine paired with thymine, 169f, 171, 309
Adoptive cell transfer therapy (ACT therapy), 305
Adult congenital heart disease (ACHD), 329
Advanced Molecular Detection Clips, 275
Advanced Multimodality Image Guided Operating Suite (AMIGO), 144
Adverse Event Reporting System (FAERS), 212
Aerospace medicine, 107
AESOP, 225
Affordable Care Act (ACA), 110, 230, 232, 250–251

Alexa (Amazon), 52, 62, 186, 222. See also Bot chatbot, 394, 469 Algorithm. See also CheXNeXt algorithm Classification and Regression Tree (CART) algorithm, 185 computer vision algorithms, 132, 214 Intelligent Heart Diseases Diagnosis Algorithm (IHDDA), 329 AliveCor, 215 All of Us initiative, 102 Alleles, 169, 171, 307. See also Autosomal dominant allele autosomal recessive allele, 171 Allergens. See also Antigen hypersensitivity, 296 immunoglobin IgE, 296, 358 359 Allergy, 298 299, 358 359 AlphaGo, 49 Alzheimer’s Disease (AD), 142 143, 148 149, 177 178, 318t, 341, 345 346, 351t, 410 411 Amazon, 23, 46, 205, 220. See also Search engine Amino acid, 171 172, 368, 460 Amygdala, 32 33, 35f, 40, 43t, 45t, 65, 67, 338, 343. See also Limbic system Amyotrophic Lateral Sclerosis (ALS), 177 178, 342, 345 Anaphylactic shock, 299. See also Hypersensitivity response Anaphylaxis, 296 Angiogenesis inhibitors, 319 Angiogram, 136 Angiography, 144, 155 Angiotensin-converting enzyme (ACE-2). See also Coronavirus COVID-19, 449 spiked protein, 449 450 Angiotensin II receptor blockers (ARBs), 328

AngularJS, 20
Anemias, 322, 324
Aneurysm. See also Abdominal Aortic Aneurysm
  Intracranial Aneurysms, 341
  Peripheral Aneurysm, 323
  Visceral Artery Aneurysm, 323
Anti-inflammatory medications. See also Corticosteroids
  NSAIDs (Non-steroidal anti-inflammatory drug), 300
  steroids, 300
Anti-TNF (tumor necrosis factor), 301
Antibiotic, 211, 217, 298, 357, 373, 377–380, 400–401, 457
Antibody, 296, 298t, 307–308, 403, 448, 454, 457. See also Immunology
Anticoagulant medications (e.g., heparin, warfarin), 327
Antigen. See also Antigen presenting complex (APC)
  auto-antigen, 298t
  immunology, 295
  non-self, 295, 337
Antigen presenting complex (APC), 295
Antineutrophil cytoplasmic antibodies (ANCA), 160t
Antinuclear antibodies (ANA), 160t
Antiplatelet medications (e.g., aspirin, Plavix), 326
Anvil, 344
Aortic Dissection, 323
Aplastic anemia, 322
Apoptosis, 309, 314–315, 319
Apoptosis inducers, 319
Apple Heart Study, 331
Application Programming Interface (API), 14–17, 16f, 17f, 19, 22–23, 36–38, 37t, 57, 67, 252
Application software (app), 21, 23, 46
Application Specific Integrated Circuit (ASIC), 46t, 50
Area under the receiver operating characteristic curve (AUC, AUC-ROC or AUROC), 155, 328, 384
Arithmetic Logic Unit (ALU), 16–17, 21–22, 47–48. See also Central processing unit (CPU)
Arnold-Chiari malformation, 341
Array, 16, 23, 29–30, 50, 55–56, 86, 107, 125, 128–129, 203, 214, 223, 296, 299–300, 364, 372–373
Arteriosclerosis, 323
Arteriovenous malformation, 323–324, 351
Artery, 152, 179, 263, 323–324, 327–328, 330–331, 371–372, 375, 381, 383
Arthroscope, 153
Artificial conversation entity, 62
Artificial intelligence (AI), 7, 13, 26, 29–72, 79, 95–96, 132–133, 137, 145, 152, 173, 210, 220, 256–257, 307–308, 346–347, 370, 385, 415
  ANN model of, 31–36
  applications, 127–178
  big data analytics, 56–59
  blockchain, 59–60
  cyber-physical system, 55–56
  definition of, 8–10
  expert systems, 53–55
  in genetic cancer screening, 176
  in government agencies, 79–83
  great steak experience, 67–68
  hardware, 46–50
    accelerators, 48–49
    application specific integrated circuit, 50
    central processing unit, 47–48
    computer servers, 47
    field-programmable gate array, 50
    graphic processing unit, 48
    hardware description language, 50
    neuromorphic chips, 49–50
    quantum processors using “qubits”, 49
    random access memory, 46–47
  human intelligence, 7–8
  in immunogenetics, 176–177
  internet of things, 55
  Mona Lisa smiling, 64–66
  natural language generation, 52–53
  natural language processing, 51–52
  nongovernmental organizations, 79–82
  robotics, 60–64
  software (algorithms), 36–45
    machine learning, 38–40
    neural networking and deep learning, 40–45
  theory and science of, 29–30
  third-party health insurers, 82–83
Artificial neural network (ANN), 30–36, 31f, 37t, 42, 62, 64, 89, 132, 145, 320, 327, 361–362
Artificially Intelligent Robots, 60
Aspirin, 300, 326
Assembler software, 16–17
Asthma, 211, 296, 351t, 361–362, 375–378
Asynchronous videoconferencing, 183. See also Telehealth
Atherosclerosis, 152, 323, 329
Athlete’s foot, 359
Auditory cortex of the temporal lobe, 339
Auto-antigen, 298t. See also Autoimmune disease
Autoimmune disease, 293–308
  AI applications, 300–306
  clinical presentations in, 298–300
  current treatment approaches, 300–306
  pathogenesis and etiologies of, 295–298
  research and future AI considerations, 306–308
Autologous (from “one’s self”) hematopoietic stem cell transplantation, 302, 303f
Autologous CAR-T-cell therapy, 305. See also CAR-T cell therapy
AUTOMAP, 145
Automated breast volume scanning (ABVS), 152
Automated Clinical Trial Eligibility Screener (ACTES), 221
Automated Speech Recognition (ASR), 230–231
Autonomic nervous system, 342
Autonomic neuropathy, 182
Autonomous driving cloud platform, 63. See also Intelligent agent (autonomous driving platform)
  self-driving vehicle, 62–63
Autosomal dominant allele, 171
Autosomal recessive allele, 171
Axon, 31–32, 31f. See also Neuron
  synapse, 31–32
Azithromycin, 457

B
B-cell lymphocytes, 322. See also Non-granular leukocytes
Backward chaining, 53–54, 54f, 67. See also Inference engine
Back-end programming, 16, 20
Baidu, 46, 187
Bandwidth, 46–47
Barcode, 15t, 19, 21
Base compounds. See also Adenine
  cytosine, 169, 169f, 171, 309, 418, 419f
  guanine, 169, 169f, 171, 309, 418, 419f
  nucleotide bases, 171, 311
  thymine, 169, 169f, 171, 309, 418, 419f
Base pairs, 169f, 171, 173, 304f, 309, 418–419. See also Adenine paired with thymine
  guanine paired with cytosine, 169f, 171, 309
BaseHealth (Sunnyvale, CA), 207
Basic computer
  hardware, 21–22
  internet, 25
  language, 19–20
  layers of, 14–19
  programming, 19–20
  servers, 23–25
  software, 22–23
  world wide web, 26
Basophils, 258, 322. See also Granular leukocytes
Bayesian logic, 13, 36, 46, 67, 168
Beta blockers, 328
Big data, 56–59, 83–85, 175–176, 201, 213, 218, 251, 377, 461, 471. See also Data mining
Big data analytics. See also Descriptive analytics
  diagnostic analytics, 83, 90, 99
  health analytics, 97–101
  in health care, 83–85
  predictive analytics, 83, 220
  prescriptive analytics, 83
Binary code, 20
Biobanks, 110, 212, 270–271
Biobots, 261–262. See also Bot
  chatbot, 261–262
Bioinformatics, 176, 213, 260, 266, 273–274, 311
Biologic agents, 318–319. See also Immunomodulating drugs
Biomarkers, 102–105, 139–140, 178, 336, 415
Biomedical informatics, 201–202, 211–212
Biopsy, 128t, 134–135, 137–138, 150, 160t, 206, 224–225, 254, 316, 359–360, 373, 392
Bioscience, 13, 199–200, 258–259, 265–266, 269, 301–302, 445, 448–452, 469–471
Biosignals, 216
Birth defect, 309
Black box, 55, 136, 320, 369–370, 469–470. See also Explainable AI
Blockchain, 10, 53, 59–60, 88, 126, 203, 214–215, 221–222, 228–229, 236–237, 252–253, 260–261, 269, 462
  in health care, 85–88
Blood pressure, 215–216, 382, 413
Blood vessels, 136, 141, 144, 150t, 155, 254–255, 296–297, 300, 316, 321–323, 326–328, 332, 336, 353, 355–356, 360–361, 375, 382–383. See also Artery
  vein, 323
Boolean logic, 54–55
Bootstrap, 20
Bot, 62, 185–186, 216, 254. See also Chatbots
Brain Abscess, 342
Brain stem, 338–339
BRCA1, 312. See also Cancer gene
BRCA2, 312. See also Cancer gene
Brightree survey, 235–236
Broadband, 25, 270
Broca’s area, 45t, 339
Browser, 20, 24–25. See also Search engine
  web browser, 23, 24t, 26, 39

C
C-reactive protein (CRP), 160t
Cache, 21–22
Calor, 299. See also Inflammation
  vital signs, 417
Cancer. See also BRCA1
  AI applications in, 318–319
  BRCA2, 312
  carcinogenesis, 314–315
  carcinogens, 314–315
  carcinoma, 316
  clinical presentations in, 315–318
  current treatment approaches, 318–319
  description and etiology of, 314–315
  leukemia, 316, 317t, 323, 325
  lymphoma, 315, 317t, 323, 325
  oncoevolution, 315
  oncogene, 314–315
  oncogenesis, 314–315
  radiation oncology, 140
  research and future AI considerations in, 319–320
Cancer gene, 312
Cancering, 314
Candida, 404
Capillaries, 153, 322, 371–372, 375
Capsule endoscopy, 206, 254
CaptureProof, 184
Carbon monoxide (CO), 107–108, 341, 345
Carcinogenesis, 314–315. See also Cancer
  cancering, 314
Carcinogens, 314–315. See also Cancer
  cancering, 314
Carcinoma, 316
Cardio DL (from Arterys), 331
Cardiovascular disease (CVD), 110, 114–115, 152, 160t, 250, 320–321, 321t, 331, 381, 414
Care delivery organization (CDO), 202
Cardiotocography (CTG), 407
Carotid Artery Disease, 324. See also Stroke
  transient ischemic attack (TIA), 324
Carpal tunnel syndrome, 182, 352
Carrier testing, 174
CAR-T cell therapy, 301–302, 305–306, 306f, 312, 325. See also Autologous CAR-T-cell therapy; Gene replacement therapy
  DNA editing, 263
Cas9 enzyme, 304, 304f. See also CRISPR-Cas9
  DNA editing, 263
Case Based Reasoning, 83–84, 325
CAT scan, 141–144
Catheter-directed thrombolysis procedure, 326
CD8, 319–320
Central dogma of molecular biology, 172
Central nervous system (CNS), 31–32, 129–130, 171, 181t, 337–343, 396
Central processing unit (CPU), 14–17, 16f, 17f, 18t, 20–22, 46–48, 61. See also Arithmetic logic unit (ALU)
  control unit (CU), 14–15, 21–22, 47
Centromere, 169
Cerebellum, 338–339
Cerebellum Gray matter (outer neural layer of cerebrum and cerebellum), 339
Cerebral cortex, 32, 339
Cerebral Edema, 341
Cerebral Perfusion SPECT Data, 148
Cerebrovascular accident (Stroke), 321
Cerebrovascular disease, 109–110, 318t, 324, 330, 332, 340–341, 351t
Charlson comorbidity index (CCI), 247
Chatbot (short for “chatterbot”). See also Alexa (Amazon)
  biobots, 261–262
  bot, 62, 185–186, 216, 254
  cobots, 217
  conversational agents, 184–186
  Cortana (Microsoft), 52, 186, 223
  diagnostic testing, 128t
  Echo (Amazon), 52
  ELIZA, 186
  Florence, 230
  HealthTap, 187
  Home (Google), 62
  Infermedica SP, 187
  interactive agent, 62, 185–186
  Medxnote, 216
  Molly, 230
  Moxi, 231
  Sensely Inc., 187
  Siri (Apple), 62, 469
  virtual agent, 185–186
Check-point inhibitors, 301
Chemotherapy, 141, 318, 325, 370, 385
CheXNeXt algorithm, 132–133
Chimeric antigen receptors (CARs), 305
Cholesterol-lowering drugs (statins), 327
Chromallocyte, 272
Chrome (Google). See also Browser
  search engine, 178–179
  web browser, 24t
Chromoendoscopy, 153
Chromosome, 168–169, 169f, 170f, 171–175, 272, 309, 311, 365, 418, 419f
Chromosome replacement therapy, 272
Chronic disease, 95, 101, 109–110, 185, 202, 210, 232, 239, 246, 293, 411–414. See also Comorbidity
  concurrent medical conditions, 246–248
Chronic inflammation, 296–299, 297f, 298t, 301, 307, 315, 340–342, 357. See also Acquired immune response
  immunology, 296–297
Chronic Venous Insufficiency (post phlebitis syndrome), 323
Circle of Willis, 322
Classification and Regression Tree (CART) algorithm, 185
Clear Genetics, 270–271
Client, 23, 24t, 25, 226, 397–399
Clinical Decision Support (CDSi), 83–85, 87–88, 96, 101, 142, 201–202, 231, 271–272, 330, 388, 403, 414
Clinical Decision Support Services (CDSS), 83–84
Clinical Documentation Improvement (CDI), 224
Clinically Appropriate and Cost-Effective Placement Project, 236
Clitoris, 387
Cloud computing, 55–56, 100–101, 189, 222, 251, 253, 331
CloudMedX, 220
Clustering analyses, 332
Cobots, 217. See also Bot
  chatbot, 216
Colonoscopy, 153–154, 370
Common carotid arteries, 179
Community health, 94, 112t, 234. See also Public health
Comorbidity. See also Chronic disease
  concurrent medical conditions, 128t, 199, 242–251
  multimorbidity, 199, 242–251
Compilation, 16–17, 34–36, 49–51
Compiler (translator), 14–16, 19–23, 49
Complete blood count (CBC), 160t
Complete pancreas transplant, 335
Computational immunogenetics, 176
Computational pathology, 159–167
Computer hardware
  input devices, 21
  output devices, 22
  processing devices/microprocessors, 21–22
  storage devices, 22
Computer processor, 13–14, 19
Computer Science and Artificial Intelligence Lab (CSAIL), 180, 350
Computer tomography, 201. See also Diagnostic imaging
Computer vision algorithms, 132, 214
Computed tomography (CT or CAT) scan, 141–144
Concurrent Medical Conditions. See also Chronic disease
  comorbidities, 242–245, 247–248
  multi-morbidities, 242–251
Concussion, 342, 345–346
Congenital heart disease (CHD), 144, 324, 329
Congenital hypothyroidism, 174
Congestive heart failure (CHF), 150, 328, 383
CONSULT, 246
Contact tracing, 453, 455–456, 461. See also Pandemic
Control unit (CU), 14–15, 21–22, 47. See also Central processing unit (CPU)
Conversational agent (virtual assistant), 184. See also Bot
  chatbot, 186
Convolutional neural network (CNN), 32, 35f, 37t, 40, 129, 130f, 132, 142–143, 155, 157, 307, 326, 330, 345, 349, 351, 360, 365–366, 372–373, 379, 393, 399, 404–406, 417
Cor pulmonale, 323
Core processor, 47
Coronary artery disease, 152, 323
Coronavirus (COVID-19), 301, 400–403, 445, 448–449, 452–453, 461. See also Pandemic
  AI for clinical considerations for coronavirus infections, 460–461
  AI for epidemiology and public health considerations, 462–463
  bioscience, 448–452
  diagnostic testing, 453–454
  epidemiologic considerations, 461–462
  incidence and prevalence, 447
  pathogenesis, 448–452
  treatment and management strategies, 454–461
  vaccine, 458–459
Cortana (Microsoft), 52, 186, 223. See also Bot
  chatbot, 186
Corticosteroid, 300. See also Anti-inflammatory medication
  steroids, 300
COVID-19. See Coronavirus (COVID-19)
Cowper’s gland, 391
Cranial arteritis, 330. See also Temporal arteritis
CRISPR-Cas9 therapy, 301–302. See also DNA editing
  genetic engineering or modification, 312
CRISPR-Cas13, 459
Crohn’s disease, 144, 298t, 300t, 301, 370
Cryptographic (coded) key, 59
Cryptography, 59, 269
CT scan, 128t, 129–130, 141, 149, 256, 377–378, 385
Cubital tunnel syndrome, 182
Cyber-physical system (CPS), 55–56, 215, 256–257
Cystic fibrosis (CF), 22t, 160t, 309, 377–378
Cytogenetics, 168, 173, 368
Cytokines. See also Check-point inhibitors
  interferon, 301
  interleukin, 301
Cytosine (Cs), 169, 169f, 171, 309, 418, 419f. See also Base compound
Cytotoxic, 302

D
da Vinci, 206, 225. See also Robotics
Data generator, 188
Data integration, 22–23, 95, 213
Data mining, 82, 89–90, 98, 98f, 114–115, 168, 238, 262, 373, 402, 404, 414, 452. See also Big data
Data node, 14, 15f
Data processing, 14–16, 16f, 17f, 19, 38, 46, 55, 88–89, 106, 201–202, 267, 272
Database, 20, 23–25, 34–36, 39, 41, 59, 63, 99–101, 155, 212, 255, 331, 388
Decision support system (DSS), 104, 158, 271–272
Decision tree, 39, 99, 347, 413
Decode, 47, 399
Deep convolutional neural network (DCNN), 132, 142–143, 349, 360, 372–373, 404–406
Deep learning (DL), 7, 9, 23, 30, 38, 40, 68, 129, 132, 139–140, 155, 185–186, 214, 326, 345, 384–385, 408, 460, 463, 471
Deep neural network (DNN), 32–33, 34f, 35f, 37t, 40, 131f, 320, 328, 348–349, 351, 359, 379, 403, 408, 411, 416. See also Deep convolutional neural network (DCNN)
Deep Q-Networks (DQN), 44
Deep Vein Thrombosis (DVT), 323
DeepMind (Google), 9–10, 43, 49, 256, 460
Dementia with Lewy Bodies (DLB), 341, 346
Demographics, 88, 91–93, 92f, 93f, 97–98, 101, 132, 186, 200, 202, 207, 217–218, 221, 225, 232, 234, 239, 242–243, 245–246, 248, 272–273, 332, 362, 410, 412, 421
Demyelinating disorders, 341–342
Dendrite, 31–32, 31f. See also Neuron
  synapse, 31–32
Deoxyribonucleic acid (DNA). See also Double helix
  genetics, 315
  genomics, 170, 274
Deoxyribonucleic acid (DNA) editing, 263. See also CAR-T cell therapy
  CRISPR-Cas9, 302–305
Deoxyribonucleic acid (DNA) sequencing, 175, 267, 274, 311, 313
Dependent variables, 37t, 39, 91–93, 93f. See also Independent variables
  population health, 92f
Dermatology, 156–157, 201, 356–357, 359, 469. See also Integumentary system
Dermoscopy, 157
Descriptive analytics, 83, 98–99, 218. See also Health analytics
Developmental disorders, 341, 343, 415
Dexamethasone, 458
Diabetes insipidus, 365
Diabetes Mellitus Type 2 (originally, Adult Diabetes), 333–334, 420–421
Diabetes Type 1 (originally, Juvenile Diabetes), 301, 332–336
Diabetic ketoacidosis (DKA), 333–334
Diabetic retinopathy (DR), 155–156, 336–337
Diagnostic analytics, 83, 90, 99. See also Health analytics
Diagnostic imaging. See also Computed tomography (CT or CAT) scan
  diagnostic testing, 128t, 147f, 174, 453–454
  endoscopy, 128t, 152–153
  fluoroscopy, 128t, 136–137
  fundus imaging (fundoscopy or ophthalmoscopy), 128t, 154–155
  mammography, 128t, 133
  medical (clinical) photography, 128t, 157
  MRI (Magnetic Resonance Imaging) scan, 128t, 143–144
  neuroradiological imaging, 129–130
  nuclear medicine scan, 128t, 146–147
  radiomics, 128t, 139
  X-Rays (conventional radiography), 128t, 131
  ultrasound (sonography), 128t, 149–150
Diagnostic testing. See also Chatbots
  diagnostic imaging, 128t, 129–158
  electrodiagnosis, 128t, 180–182
  genetic and genomic screening and diagnosis, 168–178
  laboratory (clinical diagnostic) testing, 128t, 158–168
  telemedicine (aka telehealth), 128t, 182–185
  vital signs, 128t, 179–180
Diastolic blood pressure (BP), 179. See also Vital signs
Dickey Amendment, 115–116
Digital code, 14–15, 20–22, 47
Digital drug control system (DDCS), 215
Discharge Referral Expert System for Care Transitions (DIRECT), 231
Disease modifying anti-rheumatic drugs (DMARDs), 301. See also Leflunomide
  methotrexate, 301
Disruptive change, 257–258, 320
Disruptive technology, 29, 96, 422
Diuretics, 328
DNA editing, 263
Dolor, 299. See also Inflammation
Domain, 43, 53, 94, 409, 414
Domain Name System (DNS), 24
Doppler ultrasound, 149–150
Double helix, 171. See also DNA
Dynamic random-access memory (DRAM), 46
Dysregulate, 296, 307. See also Acquired immune response
  inflammation, 296, 307

E
Echo (Amazon), 52. See also Bot
  chatbot, 52
Echo State Networks (ESN), 182
Echocardiography, 202
Eczema, 296
Ejaculatory duct, 390
Electrocardiography (ECG), 180, 181t, 328–329, 331, 411
Electrocochleography, 180, 181t
Electrodiagnosis (EDX), 127, 128t, 180–182. See also Diagnostic testing
Electroencephalography (Intracranial EEG, stereoelectroencephalography), 180, 181t
Electrogastrogram (EGG), 180, 181t
Electromyography (EMG), 180, 181t
Electronic health record (EHR), 58, 88–91, 94, 175, 184, 202, 213, 220, 247, 389, 415–416
Electronystagmography (ENG), 180, 181t
Electrooculography (EOG), 180, 181t
Electroretinography (ERG), 180, 181t
ELIZA, 186. See also Chatbots
EmberJS, 20. See also Front-end programming
Embolus, 323. See also Thrombus
Embryonic stem cell (ESC). See also Regenerative medicine
  stem cell, 258, 264
  VSELs (very small embryonic-like stem cells), 260
Emmanuelle Charpentier, 303–304
Empyema, 342
Encephalitis, 346, 470
Encryption, 221
Endocardium, 322
Endoscopy, 128t, 138, 152–155, 254. See also Diagnostic imaging
Endovenous laser ablation, 327
Eosinophils, 258, 322. See also Granular leukocytes
Ependymoma, 340
Epicardium, 322, 331
Epidemiology, 95, 107t, 111, 116, 200, 207, 210, 217–218, 225, 232, 239, 248, 257, 267, 272–273, 393, 416. See also Public health
Epididymis, 390
Epidural hemorrhage, 342, 346
Epigenetic dysregulation, 178
Epigenetics, 176–178, 232–233, 315. See also Genetics
Epigenome, 102–103, 172, 178. See also Genomics
Epilepsy, 342, 346
Epstein-Barr Virus (EBV), 160t, 315
Erythrocyte, 160t, 258, 322
Erythrocyte sedimentation rate (ESR), 160t
Ethereum, 221, 253. See also Blockchain
EU Blockchain Observatory and Forum, 245
Eugenics, 312. See also Genetics
Evidence variable, 42
Evoked potentials, 180–181, 181t. See also Electrodiagnosis (EDX)
Evoking strength (positive predictive value), 132, 188, 328, 366, 368
Exocrine gland, 356
Expert system, 53–55, 53f, 159, 187–190, 205–206, 217, 224, 231, 247, 255, 262–263, 271–272, 325. See also Knowledge-based system
  TREAT, 217
Explainable AI (XAI), 10, 53f, 54, 66f, 136, 469–470. See also Black box
Explanation facility, 54–55
Explorer (Microsoft). See also Browser
  search engine, 178–179, 185
  web browser, 23, 24t, 26
Exposome, 209
Expression, 139, 160t, 171–172, 176–177, 248–249, 262, 272, 274, 303–304, 315–316, 319, 364–365, 409. See also Genetics
F
Fabry Disease, 343
Facebook, 62, 100–101, 144, 230, 462
FastMRI, 144. See also MRI (Magnetic resonance imaging)
Federal Communications Commission (FCC), 183
Fetch, 47
Field-programmable Gate Array (FPGA), 46t, 50
Florence, 230. See also Chatbot
Fluorescent “chain terminator” nucleotides, 311
Fluoroscopy, 128t, 136–139. See also Diagnostic imaging
Forebrain, 338
Forebrain anomalies, 341
Forensic testing, 175
Framework, 20, 23, 38, 84, 106, 377, 403–404, 408. See also Platform
FRAX (Fracture Risk Assessment Tool), 147
Frequency (sensitivity), 188
Front-end programming, 20
Frontal lobe, 45t, 339
Functio laesa, 299. See also Inflammation
Fundoscopy, 154–155. See also Diagnostic imaging
Fundus imaging, 128t, 154–157. See also Diagnostic imaging
Fuzzy logic, 54–55. See also Expert systems

G
Gastroenteritis, 403
Gastroesophageal reflux (GER), 369
Gastroesophageal reflux disease (GERD), 369
GEMINI, 101. See also Health analytics
Gene, 101, 168, 170, 170f, 172, 174–175, 239, 260, 303–304, 308
Gene editing, 266, 302, 312–313
Gene expression modulators, 319
Gene mutation, 160t, 173–175, 309
Gene replacement therapy, 302. See also CAR-T cell therapy
Gene sequencing, 56–57, 172
Gene signature, 303–304
Genetic and hereditary disorders, 343
Genetic cloning, 313
Genetic code, 169f, 171–172, 175, 266, 309, 315, 458
Genetic engineering or modification, 312–313. See also CRISPR-Cas9
Genetic testing, 128t, 147f, 173–175, 270–271, 274, 311, 353
Genetics. See also Clear Genetics
  AI applications in, 312–313
  clinical presentations in, 310–311
  computational immunogenetics, 176
  current treatment approaches, 312–313
  cytogenetics, 168, 173, 368
  description and etiology of, 309–310
  diagnostic testing, 128t, 174
  epigenetic dysregulation, 178
  epigenetics, 176–178, 232–233, 315
  epigenome, 102–103, 172, 178
  eugenics, 312
  expression, 139, 176–177, 248–249, 262, 272, 274, 303–304, 315–316, 364–365
  genetic and hereditary disorders, 343
  genetic cloning, 313
  genetic code, 169f, 171–172, 266, 309, 315
  genetic engineering or modification, 312–313
  genetic scissor, 303–304, 304f
  genetic testing, 128t, 147f, 173–175, 270–271, 274, 311, 353
  genotype, 145, 169, 171, 176, 268, 272, 307
  germline genetic modification, 312
  heredity, 169–170, 309
  heterozygous, 171
  homozygous, 171
  immunogenetics, 172, 176–178
  karyotype, 169, 171
  molecular genetic tests, 311
  pharmacogenetics, 174
  phenotype, 169–171, 243–244, 268, 307, 310, 332, 377
  research and future AI considerations in, 313–314
Genome. See also Diagnostic testing
  genomic medicine, 175, 270
  genomic testing, 57, 127, 271
  genomics, 102–103, 126, 170, 199, 258, 266–276, 294–295, 301–302, 308, 318, 471
  Human Genome Project, 168, 267, 272–273, 294–295
  IBM’s Watson Genomics and Oncology, 384–385
  immunogenomics, 170, 313
  pharmacogenomics, 213, 269, 272
  radiogenomics, 146, 319–320, 347–348
Genomic medicine, 175, 270
Genomic testing, 57, 127, 271
Genomics, 102–103, 126, 170, 199, 258, 266–276, 294–295, 301–302, 308, 318, 471
  big data analytics in, 175–176
Genotype, 145, 169, 171, 176, 268, 272, 307
Germ cell tumors, 340, 346
Germline genetic modification, 312
Germline mutation, 309
Gestational diabetes, 333
Giant cell arteritis, 298t, 301, 323
Gigahertz (GHz), 47
GIOSTAR Labs, 260
Glioma, 146, 340, 346
Gliptins (DPP-4 inhibitors), 335. See also Type 2 diabetes
GLP-1 agonists, 335. See also Type 2 diabetes
GoldSTAR, 260–261. See also Stem cells
Gonad, 363
Google, 48–49, 62, 108–109, 220, 265, 347, 394, 396–397
Google’s DeepMind Health, 256
Governmental agencies (GOV), 79, 116
Grand mal seizure, 342
Granular leukocytes. See also Basophils
  eosinophils, 322
  neutrophils, 322
Granulomatous inflammatory bowel disease, 370
Graphic processing unit (GPU), 21–22, 36–38, 46–48, 46t, 50, 129, 145, 147f, 157, 182, 316, 336–337. See also Accelerator
Graves’ disease, 298t, 300t, 301, 367
Guanine (Gs), 169, 169f, 171, 309, 418, 419f. See also Base compound
Guillain-Barre syndrome, 298t, 300t, 343
Gustatory cortex, 339
Gyri, 339
H
Hardware, 13–14, 16f, 21, 25, 46, 50, 128–129, 379, 411
Hardware description language (HDL), 46t, 50, 160t
Health analytics. See also Descriptive analytics
  diagnostic analytics, 83, 90, 99
  GEMINI, 101
  descriptive analytics, 98–99
  predictive analytics, 58, 83, 90, 99–101, 168, 174–175, 220, 233, 402–403
  prescriptive analytics, 83, 90, 100–101, 208
Health care. See also Community health
  analytics, 97–101
  big data analytics, 83–85
  blockchain, 85–88
  electronic health record, 88–91
  Google’s DeepMind Health, 256
  health analytics, 97–98, 100–101
  HealthTap, 187
  “Industry 4.0” (Health 4.0), 56
  mobile health (mHealth), 183, 185
  personalized health care, 273, 275
  population health, 91–97
  precision health, 10, 101–106
  preventive health, 106–110, 209, 257–258
  public health, 91, 94–95, 109–116, 113f, 182–183, 209–211, 219, 250, 257, 275, 337, 412, 462–463
  smart health, 125, 189
HealthTap, 187. See also Chatbot
Hearing (auditory), 338, 344
Heart, 24–25, 47, 143, 146, 150t, 258, 321, 371–372, 375, 381–383
Heart attack, 147, 150, 326, 328–329, 381, 421
Heart scan, 147, 328–329
Helicobacter pylori (H. pylori), 160t, 315
Hematocrit, 160t
Hematopoietic stem cell. See also Regenerative medicine
  stem cells, 302, 303f
Hemoglobin, 160t, 324, 334
Hemolytic anemia, 298t, 322
Hemophilia, 323, 325
Hepatic encephalopathy, 341
Hepatitis C virus (HCV), 315, 369–371
Herd immunity, 456. See also Immunology
  pandemic, 456
  vaccine, 456
Heredity, 169–170, 309, 312. See also Genetics
Heterozygous, 171. See also Genetics
Heuristic, 54, 56, 65, 67, 168
Hidden layer, 13–14, 16f
Hierarchical clustering, 37t, 41
Hindbrain, 338–339
Hippocampus, 32–33, 35f, 40, 42, 43t, 65, 67, 338, 343. See also Limbic system
Histamine, 296
Hives, 296, 358–359. See also Hypersensitivity response
Home (Google), 469. See also Bot
  chatbot, 62
Homozygous, 171. See also Genetics
Horizontal integration, 215. See also Interoperability
  vertical integration, 215
Hormone therapies, 319
Hospital, 55, 87–88, 101, 144, 179, 215–216, 220–227, 334, 458, 462
Hot spots, 146–147
Human Genome Project, 168, 267, 272–273, 294–295. See also Genomics
Human Immunodeficiency Virus (HIV), 315, 403, 446–447
Human intelligence, 7–8, 30, 32–33, 36f, 44, 130, 179, 211, 227, 380–381, 408
Human Papillomaviruses (HPVs), 160t, 315, 361
Human T-Cell Leukemia/Lymphoma Virus Type 1 (HTLV-1), 315
Human Vaccines Project, 307–308
Humoral, 295–296, 448–449. See also Cytokines
Hydrocephalus, 341, 346
Hydroxychloroquine, 457
Hyperlipidemia, 323, 413
Hyperplasia, 315–316, 364–365
Hypersensitivity response. See also Acquired immune response
  allergen, 296
  immunoglobin (IgE), 296
Hypertensive Cerebrovascular Disease, 341
Hypertext Markup Language (HTML), 19, 25
Hypertrophy, 315–316

I
IBM’s Watson Genomics and Oncology, 384–385. See also Cancer
  genomics, 384–385
Ibuprofen, 300. See also Non-steroidal anti-inflammatory drugs (NSAIDs)
IDx-DR, 337. See also Type 2 diabetes
If/then statements, 37t, 41. See also Inference engine
Imalogix, 138–139
Immunocompromised, 295, 299–300
Immunodeficiency, 307
Immunogenetics, 172, 176–178
Immunogenomics, 170, 313
Immunoglobulin E (IgE), 296. See also Allergen
  hypersensitivity reaction, 296
Immunoglobulin G (IgG), 160t, 448. See also COVID-19
Immunology. See also Acquired immune system
  acquired immunodeficiency syndrome (AIDS), 81t, 378–379, 446–447
  AI applications, 300–306
  antibody, 307–308
  autoimmune disease, 294–308
  clinical presentations in, 298–300
  computational immunogenetics, 176
  current treatment approaches, 300–306
  cytokine, 307
  herd immunity, 456
  Human Immunodeficiency Virus (HIV), 315, 403, 446–447
  immunocompromised, 295, 299–300
  immunodeficiency, 307
  immunogenetics, 172, 176–178
  immunogenomics, 170, 313
  immunoglobulin E (IgE), 296
  immunoglobulin G (IgG), 296
  immunomodulating drugs, 301
  immunomodulators, 301
  immunosuppression, 295, 299–300, 448–449
  innate immune system, 295
  interferon, 301
  interleukin, 301
  monoclonal antibodies, 294–295, 301, 313, 457
  pathogenesis and etiologies of, 295–298
  research and future AI considerations, 306–308
Immunoinformatics, 459–460
Immunomodulating drugs, 301. See also Biologic agents
InDelphi, 304–305. See also Cas9 enzyme
Independent variables, 37t, 39, 56, 92–93, 380. See also Dependent variables
  population health, 56, 92f, 93f
Induced pluripotent stem cells (iPSCs), 258
“Industry 4.0” (Health 4.0), 55–56, 215
Infectious disease, 111, 265, 293, 378–381, 400–406, 446. See also COVID-19
  pandemic, 447
Inference engine, 44, 54, 255
  if/then statements, 41
  interpreter, 54
Infermedica SP, 187. See also Chatbot
Inflammation. See also Acute inflammation
  acquired immune response, 295–297
  functio laesa (functional loss), 299
  immunology, 296–297
Innate immune system, 295. See also Immunology
Inner (hidden) layer, 13–14, 16–18, 16f, 18f, 23, 29, 32, 34–36, 34f, 35f, 49–50, 53f
Input layer, 13–17, 19–22, 19f, 29, 32–33, 64f, 128–129, 131f
Insulin, 160t, 333, 335, 414
Insulin pump therapy, 335
Insulin, oral, 335
Integration. See also Horizontal integration
  interoperability, 83, 112–114
  vertical integration, 83, 91, 112–114, 116, 200, 215
Integumentary system, 293, 316–317, 356–363, 368. See also Dermatology
Intel, 21t, 46, 49–50
Intelligent agent (autonomous driving platform), 63–64. See also Self-driving vehicle
Intelligent Heart Diseases Diagnosis Algorithm (IHDDA), 329
Interactive agent, 62, 185–186. See also Bot
  chatbot, 62, 185–186
Interferon, 301. See also Cytokine
  immunology, 301
Interleukins, 301. See also Cytokine
  immunology, 301
Internal carotid arteries (ICA), 322
Internet (Net), 23–26, 98, 125, 178–179, 211, 262, 310, 393–394
Internet Explorer. See also Browser
  explorer, 24
  search engine, 178–179
  web browser, 23
Internet of Things (IoT), 55–57, 85, 98, 100–101, 148–149, 167, 183, 189, 199, 203–204, 215, 222, 225, 229, 237, 253, 256–257, 261, 269–270, 345, 366, 461. See also Wearable devices
Interoperability. See also Horizontal integration
  integration, 83, 90–91, 95, 253
  vertical integration, 112–114, 200
Interpreter, 54. See also Inference engine
Interventional cardiology, 327
Interventional endoscopy, 138. See also Endoscopy
Intracerebral abscess, 342
Intracranial Aneurysms, 341
Iron-deficiency anemia, 322
Islet cell transplantation, 335

J
Jennifer Doudna, 303–304

K
K-means clustering, 41
Karyotype, 169, 170f, 171. See also Genetics
KenSci, 220
Kernicterus, 341
Kiwi, 256. See also Robotics
Knowledge-based systems, 159, 205. See also Expert systems

L
Labeled data, 38–40, 67, 188. See also Supervised learning
Laboratory testing, 127, 128t, 150, 159–168, 306–307, 310. See also Diagnostic testing
Layers. See also Inner (hidden) layer
  input layer, 13–17, 19–22, 19f, 29, 32–33, 64f, 128–129, 131f
  output layer, 13–14, 18–19, 29, 61, 131f, 397
Lead poisoning, 309, 341, 347
Leflunomide, 301. See also Disease modifying anti-rheumatic drugs (DMARDs)
Left atrium, 375
Left ventricle, 375
Leukemia, 316, 317t, 323. See also Cancer
Leukocoria, 349
Leukocytes. See also Basophils
  granular leukocytes, 322
  eosinophils, 322
  lymphocytes (T-cell, B-cell), 258
  macrophage, 258
  monocytes, 258
  non-granular leukocytes, 322
  neutrophils, 258, 322
Limbic system. See also Amygdala
  hippocampus, 32–33, 35f, 40, 343
  hypothalamus, 344
  thalamus, 32–33, 35f, 338
Linear algebra, 13, 36, 46
Long term memory, 40
Long-term post-acute care (LTPAC), 243
Low density lipoprotein (LDL), 321t
Lung cancer, 375–376, 383–386
Lyme disease, 402
Lymphatic system, 316, 374
Lymphedema, 323
Lymphocyte (TH,S,C,M and B, BM), 258, 392
Lymphoma, 315–316, 317t, 323, 325. See also Cancer

M
Machine code, 14–16, 18f, 19–20
Machine learning (ML), 9–10, 23, 31f, 34–36, 38–40, 61, 85, 97, 104, 140, 202–203, 214, 220, 227, 264, 274, 330, 345, 355, 373, 385–386, 410, 420, 449, 452–453, 460. See also Training
  supervised (labeled) data, 38–40
Magnetocardiography, 180, 181t
Male reproductive system, 389–392
  Cowper’s glands, 391
  ejaculatory duct, 390
  epididymis, 390
  functional disorders of, 391–392
  penis, 391
  prostate, 390
  scrotum, 390
  seminal vesicles, 390
  spermatic cords and ductus deferens, 390
  testes, 390
  urethra, 390
Mammography, 128t, 133–137. See also Diagnostic imaging
Mammography Quality Standards Act, 136
Marfan’s syndrome, 309, 353
Massively parallel sequencing, 267
McCarthy, John, 9
Mechanoreceptor, 344
Medicine. See also Addiction medicine
  aerospace medicine, 107
  genomic medicine, 175, 270
  nuclear medicine scan, 128t, 140–141, 146–150
  occupational medicine, 107
  P4 Medicine (Systems medicine), 112–114
  participatory medicine, 112–114
  personalized medicine, 101–106, 126, 139–140, 146, 177, 203, 219, 273, 308
  Precision Medicine Initiative (PMI), 101, 105, 273, 308, 313–314
  preventive medicine, 59, 106–110, 200, 209, 418–420
  regenerative medicine, 199, 257–266, 272
  Undersea and Hyperbaric Medicine, 107–108
MedLEE, 247. See also Comorbidity
  natural language processing, 247
Medical (clinical) photography, 128t, 158. See also Diagnostic imaging
Medulla oblongata, 339
Medulloblastoma, 340
Medxnote, 216. See also Chatbots
Memory networking (ANN), 42, 43t, 65. See also Neural network
Meningioma, 340, 347
Meningitis, 347, 400–401
Menstruation, 387
Merkel Cell Polyomavirus (MCPyV), 315
Metabolomics, 112–114, 139
Metadata, 58, 255
Metastasis, 316, 362, 375–376, 385. See also Cancer
  cancering, 314
Metformin, 335. See also Type 2 diabetes
Methotrexate, 160t, 301. See also Disease modifying anti-rheumatic drugs (DMARDs)
Microbiome, 102–103, 232–233, 267, 352–353, 377, 421–422
Microchip, 20, 48
Microprocessor, 16–17, 20–23, 48–49, 61
Midbrain, 32–33, 338
Migraine, 340–341, 348
Minsky, Marvin, 9
Mobile communication networks (5G), 56
Mobile health (mHealth), 183, 185
Molecular biology, 139, 168–170, 172, 263, 265–266, 314, 316. See also Genetics
Molecular genetic tests, 311, 454
Molecularly targeted therapies, 319
Molly, 230. See also Chatbots
Monoclonal antibodies, 294–295, 301, 313, 457. See also Immunology
Monocytes, 258, 322. See also Non-granular leukocyte
Moore’s Law, 9–10, 56–57, 148–149, 469
Mosaicism, 309–310. See also Genetics
Moxi, 231–232. See also Bot
  chatbot, 230
MRI (Magnetic Resonance Imaging) scan, 129–130, 143–144, 201–202, 330, 365. See also Diagnostic imaging
mRNA (messenger ribonucleic acid), 172, 311, 380–381, 458–460
Multi-core processor, 47
Multi-omics, 102–103
Multimorbidity, 243, 245–246. See also Comorbidity
  concurrent medical conditions, 199, 242–251
Multiple sclerosis, 177–178, 298t, 300t, 348
Multivariable calculus, 13, 36, 46
Mutual Information, 263
Mutation, 309–310. See also Cancer
  genetics, 173–175, 307, 309, 360
Myasthenia gravis, 177–178, 182, 298t, 300t, 343, 348, 367
MyCode, 270–271
Myeloma, 316, 323, 325
Myelomeningocele (Meningomyelocele), 343
Myocardium, 322
Myoclonic seizure, 342
Myopathy, 182

N
Name Entity Recognition (Bio-NER), 102–103
Nanobots, 254–255, 254f, 272
Nanorobotics, 272
Naproxen, 300
Nateglinide and repaglinide, 335
National Research Council (NRC), 115–116, 308
Natural language generation (NLG), 19, 37t, 51–53, 51t, 52f, 55, 62–63
Natural language processing (NLP), 9–10, 19, 29–30, 36–38, 37t, 51–52, 52f, 57, 85–87, 89, 102–103, 223–224, 230–231, 238, 246–247, 255, 262, 271, 313, 450
Natural language understanding (NLU), 51–53, 51t, 224
Neoplasm, 340, 347–349. See also Cancer
  cancering, 340
Neoplastic, 314–315. See also Cancer
  cancering, 314–315
Neural network (NN). See also Artificial neural network (ANN)
  convolutional neural network (CNN), 32, 35f, 37t, 40, 129, 130f, 132, 142–143, 155, 157, 307, 326, 330, 345, 349, 351, 360, 365–366, 372–373, 379, 393, 399, 404–406, 417
  deep convolutional neural network (DCNN), 132, 142–143, 349, 360, 372–373, 404–406
  deep neural network (DNN), 32–33, 34f, 35f, 37t, 40, 131f, 320, 328, 348–349, 351, 359, 379, 403, 408, 411, 416
  memory networking (ANN), 42, 43t, 65
  neuroscience, 40
Neuroblastoma, 340, 348
Neurological disorders, 338, 340–342, 344–351
Neuromorphic chips, 46t, 49–50
Neuromyelitis optica (Devic’s disease), 177–178
Neuron. See also Axon
  dendrite, 31–32, 31f
  mathematical model of, 33f
  synapse, 31–32, 32f
Neuroplasticity, 67. See also Neuroscience
Neuroradiological imaging, 129–130. See also Diagnostic imaging
Neuroscience, 7, 30, 32, 36, 40, 44, 47–48, 256, 337–338, 408. See also Neural network
  neuroplasticity, 67
Neurotransmitters, 31–32, 33f
Neutrophils, 258, 322. See also Granular Leukocyte
Next generation sequencing or next-gen sequencing (NGS), 102–103, 173, 175, 177, 267, 271, 311, 353, 405, 460
Non-governmental organizations (NGOs), 79–83, 81t, 116
Non-granular leukocytes. See also Monocytes
  B-cell lymphocytes, 322
  T-cell lymphocytes, 322
Non-self, 295, 297, 337. See also Antigen
Non-small cell lung cancer (NSCLC), 384
Non-steroidal anti-inflammatory drugs (NSAIDs), 300
NSAIDs. See also Anti-inflammatory medications
  NSAIDs (Non-steroidal anti-inflammatory drug), 300
Nuclear medicine scan, 128t, 140–141, 146–150. See also Diagnostic imaging
Nucleotide bases. See also Adenine
  base compounds, 169f
  cytosine, 169, 169f, 171
  guanine, 169, 169f, 171
  thymine, 169, 169f, 171
Nurse practitioners (NPs), 233
Nursing, 91–92, 199, 202, 204, 227–242
Nutrition, 420–422
Nvidia, 46, 48, 223

O
Object file, 16–17
Occipital lobe, 45t, 339–340
Occupational medicine, 107
Oligodendroglioma (arises from oligodendrocytes), 340
Omics, 105–106, 139, 208–209, 232–233, 260, 421
Oncoevolution, 315. See also Cancer
Oncogene, 314–315. See also Cancer
  genetics, 314–315
Oncogenesis, 314–315. See also Cancer
  cancering, 314–315
Oncology, 106, 141, 145, 147–149, 201, 349. See also Cancer
  cancering, 140
Onychomycosis, 359
Operating system (OS), 14–15, 18, 20, 22–23, 24t, 221
Ophthalmoscopy, 154–155
Optical coherence tomography (OCT), 155–156, 411
Optical devices, 19
Organoid, 259
Osteoarthritis (OA), 353–354
Osteogenesis imperfecta (OI), 354
Osteoporosis (osteometabolic disease), 133, 250, 354
Output layer, 13–14, 18–19, 29, 61, 131f, 397

P
Paget’s disease of bone, 354
P4 Medicine (Systems medicine), 112–114
p53 Gene, 309
PALB2, 312
Pandemic. See also Contact tracing
  coronavirus, 380–381
  COVID-19, 94, 127, 183, 235, 445–468
  herd immunity, 456
  history of, 446–447
Parietal lobe, 45t, 339
Parkinson’s disease, 97, 177–178, 182, 341, 349
Participatory medicine, 112–114
Pathogenesis, 116, 295–298, 298t, 307, 325, 360–361, 448–452
Pathology, 129, 168, 205, 271, 377–378
Pathophysiology, 112–114
Penis, 391
Pentium, 21–22
Perception-Action Cycle, 63–64
Performance-based contracting (PBC), 80–82
Pericardium, 322
Peripheral Aneurysm, 323
Peripheral arterial disease, 323
Peripherals, 21–22, 181t, 182, 302, 462
Peroneal nerve palsy, 182
Personalized healthcare, 95, 273, 275. See also Precision health
Personalized medicine, 101–106, 126, 139–140, 146, 177, 203, 219, 273, 308. See also Precision health
PharmaBot, 216. See also Robotics
Pharmaceuticals, 55, 199, 202–203, 208–209, 212–220, 264, 275, 313, 345, 410
Pharmacoeconomics, 217
Pharmacogenetics, 105, 174
Pharmacogenomics, 213, 269, 272
Pharmacovigilance, 212–213, 218
Pharmeum, 87. See also Blockchain
Phenotype, 169–171, 243–244, 268, 307, 310, 377. See also Genetics
Phenylketonuria, 174
Pheochromocytoma, 364
Photoacoustic imaging, 202
Pilocytic Astrocytoma, 340
Pioglitazone, 335
Platelets, 258
Platforms, 13, 23, 38, 98, 138–139, 184, 187, 236–237, 240–241, 248, 253, 332, 462–463. See also Framework
Pluripotent stem cells. See also Embryonic stem cell (ESC)
  regenerative medicine, 258
  stem cells, 258–259
Polymerase chain reaction (PCR), 454
Polymorphism, 310, 449. See also Genetics
Polyneuropathy, 182
Pons, 338
Population health, 10, 59, 91–97, 116, 125, 207, 217–218, 225, 232, 239, 248, 272–273, 414, 471
Porphyria, 343
Posterior fossa anomalies, 341
Postherpetic neuralgia, 360–361
Precision health, 10, 101–106, 173, 209, 232–233, 256, 308, 412, 414. See also Personalized healthcare
Precision medicine, 59, 88–89, 101–106, 126, 140, 168, 175, 177, 199, 218, 225–226, 232–233, 239–240, 248–249, 256, 263–264, 273, 301–302, 308, 319, 414, 470–471
Precision Medicine Initiative (PMI), 101, 105, 273, 308, 313–314
Predictive analytics, 58, 83, 90, 99–101, 168, 174–175, 220, 233, 402–403. See also Health analytics
Preimplantation genetic diagnosis (PGD), 174
Preimplantation testing, 174
Prenatal testing, 128t, 174, 310
Prescriptive analytics, 83, 90, 100–101, 208. See also Health analytics
Preventive Care, 108, 112–114, 189, 232, 256–257, 293, 418–422
Preventive health, 107–111, 125, 168, 209, 219, 233–234, 240–241, 249–250, 256–257, 264, 274, 471
Preventive medicine, 59, 106–110, 200, 209, 418–420
Primary brain lymphoma, 340, 349
Primary care, 110, 132–133, 155, 185, 200–203, 217, 244–245, 256–257, 337
Prior probability, 40
Pro-inflammatory cytokine, 296, 298t, 301, 307
PROBOT, 224–225. See also Robotics
Processing, 10, 21–22, 50, 61, 152, 338, 385, 408
Progenitor, 258, 296–297, 297f, 299, 320
Programmable Universal Machine for Assembly (PUMA), 224–225
Programming, 13–14, 16, 19–20, 62, 410
Progressive multifocal leukoencephalopathy (PML), 342
PROMIS (Patient-Report Outcomes Measurement Information System), 261
Proteome, 102–103, 172
Proteomics, 112–114, 172, 260, 316
Proto-oncogenes, 315
Protocols, 25, 176, 213, 230, 251, 365–366, 454–455
Public health, 91, 94–95, 109–116, 113f, 182–183, 209–211, 219, 250, 257, 275, 337, 412, 462–463. See also Community health
PubMed database, 177
Pulmonary hypertension, 323, 328, 375, 382–383
Pulmonary venous-occlusive disease (PVOD), 375, 382–383
Python, 20t

Q
Quantum computers, 49
Quantum processors, 46t, 49
Qubits, 46t, 49. See also Quantum computers

R
Radiation oncology, 106, 140. See also Cancer
Radiation therapy, 106, 140–141, 302, 318–319, 325
Radiogenomics, 146, 319–320, 347–348
Radiography, 131, 152, 201–202
Radiomics, 106, 128t, 129, 139–141, 319–320, 347–348, 384. See also Diagnostic imaging
Random access memory (RAM), 14–17, 21–22, 42, 46–47, 67
Random forest algorithm, 110
Raynaud’s disease, 323
Read-only memory (ROM), 21
Recombinant, 305–306, 401, 459
Regenerative medicine. See also Embryonic stem cell (ESC)
  stem cells, 199, 257–266
  VSELs (very small embryonic-like stem cells), 260
Regression, 39–40, 91–92, 406
Regression analysis (Support vector machine), 37t, 39, 56, 91–92, 92f, 378
Rehabilitation, 107, 206, 247–248, 256, 337, 352, 399
Reinforcement learning (RL), 38, 42–45, 43t, 45t, 62, 67, 85, 403
Relays, 32–33, 35f, 61, 65, 344
Remdesivir, 457
Remidio, 155–156
Remote patient monitoring (RPM), 97, 183, 185, 189, 237
Renovascular Condition, 323
Respiration rate, 179. See also Vital signs
Retinoblastoma, 349
Retinopathy of prematurity (ROP), 155, 408
Rheumatoid arthritis (RA), 258–259, 298t, 300t, 301, 352, 355
Ribosome, 172
Right atrium, 382–383
Right ventricle, 328, 375
RNA screening, 458–459
R-Naught, 456
Robodoc, 225. See also Robot
Robot, 60–61, 225, 231–232, 255, 257, 350
Robotic process automation (RPA), 60–61, 251–252, 255
Robotics, 10, 60–64, 126, 168, 185–186, 206–207, 217, 224–225, 231–232, 239, 247–248, 254, 256, 263, 272, 408, 469, 471. See also Kiwi
Rubor, 299–300. See also Inflammation

S
Safari (Apple), 24. See also Browser; Search engine; Web browser
Sanger Method of sequencing, 175, 311
Sarcoma, 316, 361
SARS-CoV-2. See also Angiotensin-converting enzyme (ACE-2); Coronavirus (COVID-19)
  clinical manifestations, 452–453
  life cycle, 450
  pandemic, 307–308
  pathogenesis and bioscience considerations for, 448–452
  spiked protein, 450
Schrödinger's cat, 49
The science, 169–173
Scissor, 304, 304f
Sclerotherapy, 327
Scoliosis, 355
Scrotum, 390
Search engine, 178–179, 185. See also Browser
  web browser, 23, 26
Secondary care, 200–201
Self-driving vehicle, 62–63. See also Autonomous driving cloud platform
  intelligent agent (autonomous driving platform), 63
Semantic rule, 51
Semiconductor, 20, 21t, 50
Semiconductor integrated circuit, 20
Sensely Inc, 187. See also Chatbot
Sensorium, 63
Sensors, 55, 102, 184–185, 201, 252–253, 336, 347, 350, 352, 408–409, 414
Server, 23–25, 34–36
SGLT2 inhibitors, 335
Sickle cell anemia, 309, 322
Signal transduction inhibitors, 319
Single-cell RNA sequencing (scRNA-seq), 303–304
Siri (Apple), 469. See also Bot
  chatbot, 62
Smart Health, 125, 189
Smart hospitals, 222, 224
Smart sensors, 203, 229, 237
Smartphone, 20, 25, 55, 58, 155–156, 158, 182–183, 215–216, 378, 414, 422. See also Internet of Things (IoT)
Smart Tissue Autonomous Robot (STAR), 252
Social bot, 186
Social distancing. See also Coronavirus
  COVID-19, 455, 461
  pandemic, 455, 461
Sociomarkers, 210–211
Software, 9–10, 13, 16, 19, 22–23, 26, 37t, 46, 51t, 128–129, 168, 337
Somatic mutations, 309–310
Somatic nervous system, 342
Sound cards, 18t, 22
Source code, 16, 19–20
Spanish Flu, 446–447
Speech synthesis, 51t
Spermatic cord, 390
Spermatogenesis, 391
Spermatogonium, 391
Spiked protein. See also Angiotensin-converting enzyme (ACE-2)
  Coronavirus (COVID-19), 449–450, 458–460
Sphygmomanometer, 179. See also Blood pressure
  vital signs, 179
Spina bifida, 343
Spinal cord, 337–339, 342, 355
Squamous cell carcinoma (SCC), 362
Staging, 143, 316, 345, 384–385
Stakeholders, 80–82, 109–110, 202, 221–222, 232, 236, 243, 409
Stem Cell Core Lab of Columbia University, 263–264
Stem cell transplantation, 258–259, 261, 301–303, 312. See also Regenerative medicine
Stem cells. See also Embryonic stem cell
  hematopoietic stem cell, 302, 303f
  regenerative medicine, 199, 257–266
Stents, 136–138, 326
Steroids, 358, 360–361. See also Anti-inflammatory medication
  corticosteroids, 300
Stirrup, 344
Storage devices, 22, 46
Stroke, 129–130, 130f, 243, 258–259, 318t, 321, 324, 326, 330, 350, 351t. See also Carotid Artery Disease
  transient ischemic attack (TIA), 324, 340, 350
Subacute combined degeneration (B12 deficiency), 341
Subarachnoid hemorrhage (SAH), 341–342, 346
Subdural Hemorrhage, 342
Sulci, 339
Sulfasalazine, 301
Sulphonylureas, 335
Supercomputer, 49, 394
Superposition, 49
Supervised learning, 38–40, 42, 43t, 67. See also Labeled data
Swarms, 62
Synapse, 31–32, 33f. See also Axon
  dendrite, 31–32, 31f
  neuron, 31–32, 31f
Synchronous videoconferencing, 183. See also Telehealth
Syntactic, 51
Syringomyelia, 343, 350
Systemic lupus erythematosus (SLE), 300t, 301, 352

T
T-cell lymphocytes, 322. See also Non-granular leukocyte
Takayasu’s Arteritis, 323
Target code, 16, 17f, 20
Tarsal tunnel syndrome, 182
Tay, 393–394
Tay-Sachs disease, 341
TCP/IP, 25
Telehealth, 97, 182–183, 185, 199, 204, 215–216, 222–223, 229–230, 237, 245–246, 253–254, 270, 462. See also Videoconferencing
Telemedicine, 125, 128t, 155, 183, 199, 237
Telemedicine (aka Telehealth), 127, 182–185. See also Diagnostic testing
Telepsychiatry, 223
Telestroke services, 222
Telomeres, 418–419, 419f
Temporal arteritis, 298t, 323. See also Cranial arteritis
Temporal lobe, 45t, 339–340
Tempus, 223
Tertiary care, 155–156, 200–201
Thalamus, 32–33, 35f, 43t, 45t, 338–339, 343. See also Limbic system
Third party insurers, 178–179
Three-D Imaging, 149
Three-D laser lithography, 263
Three-D ultrasound scanning, 149–150
Thrombocytes, 322
Thrombogram, 325
Thrombosis, 325
Thrombus, 323. See also Embolus
Thumb drive, 22
Thymine (Ts), 169, 169f, 171, 309, 418, 419f. See also Base compound
Tonic-clonic or convulsive seizures, 342
Touchpad, 15t, 21–22
Toxin delivery molecules, 319
Toxoplasma gondii, 405
Toxoplasmosis, 160t, 405
Training, 38–39, 106, 130f, 151–152, 206–207, 270, 331, 388, 404. See also Machine learning
Transducer, 61, 149–150, 185
Transcriptomics, 112–114, 139, 172, 260, 316
Transfer RNA (tRNA), 172
Transient ischemic attack (TIA), 324, 330, 340, 350. See also Carotid Artery Disease
  stroke, 350
Transistor, 20, 47–49
Transverse myelitis: Inflammation of the spinal cord, 343
TRAuma Care in a Rucksack: TRACIR, 257
TREAT, 217. See also Expert system
Triage, 129–130, 209, 461
TRICARE, 241
Tricuspid valve, 378
Triglycerides, 160t, 356
Trypanosoma cruzi, 405–406
Tuberculosis (TB), 378–379, 401–402
Tuberous sclerosis, 340, 351
Tumor, 102–103, 141, 145, 176, 299, 362, 367. See also Cancer
  inflammation, 147
Turing test, 8–9, 9f
Type 2 diabetes, 333–337
Twenty-Three (23andMe), 173–174, 273

U
Ultrasound (Ultrasonography), 128t, 149–150, 152, 202, 355–356, 368. See also Diagnostic imaging
Undersea and Hyperbaric Medicine, 107–108
Unlabeled data, 38, 40–42, 65, 65f, 67. See also Unsupervised learning
Unstructured medical data, 83–85
Unsupervised learning, 40–44, 43t, 45t, 67, 94. See also Unlabeled data
Urethra, 390
Urticaria, 358–359

V
V Framework, 267
Vaccination, 94, 107, 202, 380–381, 447, 456, 458–459
Vaccine, 360–361, 380–381, 445, 451. See also Coronavirus
  COVID-19, 458–459
Varicose veins, 323, 327
Vascular, 155, 293, 320–332, 340–341, 351
Vascular Trauma, 323
Vasculitis, 144, 298t, 323
Vas deferens, 390
Vectorcardiography, 180, 181t
Vein, 323, 327
Venules, 322
Verily, 331
Vertebral arteries, 322
Vertical integration, 91, 112–114, 215. See also Horizontal integration
  interoperability, 83, 112–114, 116, 200
Video cards, 18t, 22
Videoconferencing: Live (synchronous), 183. See also Telehealth
Virtual assistant (Conversational agent), 184. See also Bot
  chatbot, 186
Virtual reality, 126, 149
Visceral Artery Aneurysm, 323
Visual cortex, 65, 66f, 339
Vital signs. See also Calor (body temperature) (#1)
  diagnostic testing, 128t, 174
  diastolic blood pressure (#2), 179
  pulse (#3), 179
  respiration rate (#4), 179
  systolic blood pressure (#2), 155, 189
Vitamin-deficiency anemia, 322
Von Willebrand Disease, 323, 325
VSELs (very small embryonic-like stem cells), 260. See also Stem cells

W
Wearable device, 179–180, 207–208, 240, 350, 395–396, 417. See also Internet of Things (IoT)
Web (WWW or World Wide Web), 23–26, 399
Web browsers. See also Chrome (Google)
  Explorer (Microsoft), 24
  Safari (Apple), 24
  search engine, 178–179, 185
Webcam, 15t, 21
Wellderly, 236–237. See also Blockchain
Wellness, 56, 68, 91–92, 94, 108, 127, 190, 200, 219, 265–266, 293–295, 422, 469–471
Woebot Labs Inc., 187
Womb, 386
Wuhan, China. See also COVID-19
  pandemic, 447

X
X-Ray (conventional radiography), 128t, 131. See also Diagnostic imaging
XGBoost, 373

Y
Your.MD Limited, 187
YouTube, 23