
WORLDCOMP’19

PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON HEALTH INFORMATICS & MEDICAL SYSTEMS

Health Informatics and Medical Systems

HIMS'19 Editors: Hamid R. Arabnia, Leonidas Deligiannidis, Fernando G. Tinetti, Quoc-Nam Tran

U.S. $129.95 ISBN 9781601325006





Publication of the 2019 World Congress in Computer Science, Computer Engineering, & Applied Computing (CSCE’19) July 29 - August 01, 2019 | Las Vegas, Nevada, USA https://americancse.org/events/csce2019

Copyright © 2019 CSREA Press


This volume contains papers presented at the 2019 International Conference on Health Informatics & Medical Systems. Their inclusion in this publication does not necessarily constitute endorsements by editors or by the publisher.

Copyright and Reprint Permission: Copying without a fee is permitted provided that the copies are not made or distributed for direct commercial advantage, and credit to the source is given. Abstracting is permitted with credit to the source. Please contact the publisher for other copying, reprint, or republication permission.

American Council on Science and Education (ACSE)

Copyright © 2019 CSREA Press
ISBN: 1-60132-500-2
Printed in the United States of America
https://americancse.org/events/csce2019/proceedings

Foreword

It gives us great pleasure to introduce this collection of papers to be presented at the 2019 International Conference on Health Informatics and Medical Systems (HIMS'19), July 29 - August 1, 2019, at the Luxor Hotel (a property of MGM Resorts International), Las Vegas, USA. The preliminary edition of this book (available in July 2019 for distribution on site at the conference) includes only a small subset of the accepted research articles. The final edition (available in August 2019) will include all accepted research articles. This is due to deadline extension requests received from most authors who wished to continue enhancing the write-up of their papers (by incorporating the referees' suggestions). The final edition of the proceedings will be made available at https://americancse.org/events/csce2019/proceedings .

An important mission of the World Congress in Computer Science, Computer Engineering, and Applied Computing, CSCE (a federated congress with which this conference is affiliated) includes "Providing a unique platform for a diverse community of constituents composed of scholars, researchers, developers, educators, and practitioners. The Congress makes a concerted effort to reach out to participants affiliated with diverse entities (such as: universities, institutions, corporations, government agencies, and research centers/labs) from all over the world. The congress also attempts to connect participants from institutions that have teaching as their main mission with those who are affiliated with institutions that have research as their main mission. The congress uses a quota system to achieve its institution and geography diversity objectives." By any definition of diversity, this congress is among the most diverse scientific meetings in the USA. We are proud to report that this federated congress has authors and participants from 57 different nations, representing a variety of personal and scientific experiences that arise from differences in culture and values. As can be seen below, the program committee of this conference, as well as the program committees of all other tracks of the federated congress, are as diverse as its authors and participants.

The program committee would like to thank all those who submitted papers for consideration. About 55% of the submissions were from outside the United States. Each submitted paper was peer-reviewed by two experts in the field for originality, significance, clarity, impact, and soundness. In cases of contradictory recommendations, a member of the conference program committee was charged to make the final decision; often, this involved seeking help from additional referees. In addition, papers whose authors included a member of the conference program committee were evaluated using the double-blind review process. One exception to the above evaluation process was for papers that were submitted directly to chairs/organizers of pre-approved sessions/workshops; in these cases, the chairs/organizers were responsible for the evaluation of such submissions. The overall paper acceptance rate for regular papers was 21%; 23% of the remaining papers were accepted as poster papers (at the time of this writing, we had not yet received the acceptance rate for a couple of individual tracks).

We are very grateful to the many colleagues who offered their services in organizing the conference.
In particular, we would like to thank the members of the Program Committee of HIMS'19, members of the congress Steering Committee, and members of the committees of federated congress tracks that have topics within the scope of HIMS. Many individuals listed below will be requested after the conference to provide their expertise and services for selecting papers for publication (extended versions) in journal special issues as well as for publication in a set of research books (to be prepared for publishers including Springer, Elsevier, BMC journals, and others).

Prof. Abbas M. Al-Bakry (Congress Steering Committee); University President, University of IT and Communications, Baghdad, Iraq
Prof. Emeritus Nizar Al-Holou (Congress Steering Committee); Professor and Chair, Electrical and Computer Engineering Department; Vice Chair, IEEE/SEM-Computer Chapter; University of Detroit Mercy, Detroit, Michigan, USA
Prof. Hamid R. Arabnia (Congress Steering Committee); Graduate Program Director (PhD, MS, MAMS); The University of Georgia, USA; Editor-in-Chief, Journal of Supercomputing (Springer); Fellow, Center of Excellence in Terrorism, Resilience, Intelligence & Organized Crime Research (CENTRIC)
Prof. Dr. Juan-Vicente Capella-Hernandez; Universitat Politecnica de Valencia (UPV), Department of Computer Engineering (DISCA), Valencia, Spain

        

           



 

Prof. Juan Jose Martinez Castillo; Director, The Acantelys Alan Turing Nikola Tesla Research Group and GIPEB, Universidad Nacional Abierta, Venezuela
Prof. Emeritus Kevin Daimi (Congress Steering Committee); Director, Computer Science and Software Engineering Programs, Department of Mathematics, Computer Science and Software Engineering, University of Detroit Mercy, Detroit, Michigan, USA
Prof. Zhangisina Gulnur Davletzhanovna; Vice-rector of the Science, Central-Asian University, Kazakhstan, Almaty, Republic of Kazakhstan; Vice President of International Academy of Informatization, Kazakhstan, Almaty, Republic of Kazakhstan
Prof. Leonidas Deligiannidis (Congress Steering Committee); Department of Computer Information Systems, Wentworth Institute of Technology, Boston, Massachusetts, USA; Visiting Professor, MIT, USA
Prof. Mary Mehrnoosh Eshaghian-Wilner (Congress Steering Committee); Professor of Engineering Practice, University of Southern California, California, USA; Adjunct Professor, Electrical Engineering, University of California Los Angeles (UCLA), California, USA
Hindenburgo Elvas Goncalves de Sa; Robertshaw Controls (Multi-National Company), System Analyst, Brazil; Information Technology Coordinator and Manager, Brazil
Prof. Byung-Gyu Kim (Congress Steering Committee); Multimedia Processing Communications Lab. (MPCL), Department of Computer Science and Engineering, College of Engineering, SunMoon University, South Korea
Prof. Tai-hoon Kim; School of Information and Computing Science, University of Tasmania, Australia
Prof. Louie Lolong Lacatan; Chairperson, Computer Engineering Department, College of Engineering, Adamson University, Manila, Philippines; Senior Member, International Association of Computer Science and Information Technology (IACSIT), Singapore; Member, International Association of Online Engineering (IAOE), Austria
Prof. Dr. Guoming Lai; Computer Science and Technology, Sun Yat-Sen University, Guangzhou, P. R. China
Prof. Hyo Jong Lee; Director, Center for Advanced Image and Information Technology, Division of Computer Science and Engineering, Chonbuk National University, South Korea
Dr. Muhammad Naufal Bin Mansor; Faculty of Engineering Technology, Department of Electrical, Universiti Malaysia Perlis (UniMAP), Perlis, Malaysia
Dr. Andrew Marsh (Congress Steering Committee); CEO, HoIP Telecom Ltd (Healthcare over Internet Protocol), UK; Secretary General of World Academy of BioMedical Sciences and Technologies (WABT), a UNESCO NGO, The United Nations
Michael B. O'Hara (Vice Chair and Editor, HIMS); CEO, KB Computing, LLC, USA; Certified Information System Security Professional (CISSP); Certified Cybersecurity Architect (CCSA); Certified HIPAA Professional (CHP); Certified Security Compliance Specialist (CSCS)
Prof. Dr., Eng. Robert Ehimen Okonigene (Congress Steering Committee); Department of Electrical & Electronics Engineering, Faculty of Engineering and Technology, Ambrose Alli University, Nigeria
Prof. James J. (Jong Hyuk) Park (Congress Steering Committee); Department of Computer Science and Engineering (DCSE), SeoulTech, Korea; President, FTRA; EiC, HCIS Springer, JoC, IJITCC; Head of DCSE, SeoulTech, Korea
Dr. Akash Singh (Congress Steering Committee); IBM Corporation, Sacramento, California, USA; Chartered Scientist, Science Council, UK; Fellow, British Computer Society; Senior Member, IEEE; Member, AACR, AAAS, and AAAI
Ashu M. G. Solo (Publicity); Fellow of British Computer Society; Principal/R&D Engineer, Maverick Technologies America Inc.
Prof. Dr. Ir. Sim Kok Swee; Fellow, IEM; Senior Member, IEEE; Faculty of Engineering and Technology, Multimedia University, Melaka, Malaysia
Prof. Fernando G. Tinetti (Congress Steering Committee); School of Computer Science, Universidad Nacional de La Plata, La Plata, Argentina; also at Comision Investigaciones Cientificas de la Prov. de Bs. As., Argentina
Prof. Hahanov Vladimir (Congress Steering Committee); Vice Rector and Dean of the Computer Engineering Faculty, Kharkov National University of Radio Electronics, Ukraine; Professor, Design Automation Department, Computer Engineering Faculty, Kharkov; IEEE Computer Society Golden Core Member
Prof. Shiuh-Jeng Wang (Congress Steering Committee); Director of Information Cryptology and Construction Laboratory (ICCL) and Director of Chinese Cryptology and Information Security Association (CCISA); Department of Information Management, Central Police University, Taoyuan, Taiwan; Guest Editor, IEEE Journal on Selected Areas in Communications
Dr. Yunlong Wang; Advanced Analytics at QuintilesIMS, Pennsylvania, USA
Prof. Layne T. Watson (Congress Steering Committee); Fellow of IEEE; Fellow of The National Institute of Aerospace; Professor of Computer Science, Mathematics, and Aerospace and Ocean Engineering, Virginia Polytechnic Institute & State University, Blacksburg, Virginia, USA

 

Prof. Jane You (Congress Steering Committee); Associate Head, Department of Computing, The Hong Kong Polytechnic University, Kowloon, Hong Kong
Dr. Farhana H. Zulkernine; Coordinator of the Cognitive Science Program, School of Computing, Queen's University, Kingston, ON, Canada

We would like to extend our appreciation to the referees and the members of the program committees of individual sessions, tracks, and workshops; their names do not appear in this document; they are listed on the web sites of individual tracks. As sponsors-at-large, partners, and/or organizers, each of the following (separated by semicolons) provided help for at least one track of the Congress: Computer Science Research, Education, and Applications Press (CSREA); US Chapter of World Academy of Science; American Council on Science & Education & Federated Research Council (http://www.americancse.org/). In addition, a number of university faculty members and their staff (names appear on the cover of the set of proceedings), several publishers of computer science and computer engineering books and journals, chapters and/or task forces of computer science associations/organizations from 3 regions, and developers of high-performance machines and systems provided significant help in organizing the conference as well as providing some resources. We are grateful to them all.

We express our gratitude to keynote, invited, and individual conference/track and tutorial speakers; the list of speakers appears on the conference web site. We would also like to thank the following: UCMSS (Universal Conference Management Systems & Support, California, USA) for managing all aspects of the conference; Dr. Tim Field of APC for coordinating and managing the printing of the proceedings; and the staff of the Luxor Hotel (Convention department) at Las Vegas for the professional service they provided. Last but not least, we would like to thank the Co-Editors of HIMS'19: Prof. Hamid R. Arabnia, Prof. Leonidas Deligiannidis, Prof. Fernando G. Tinetti, and Prof. Quoc-Nam Tran. We present the proceedings of HIMS'19.

Steering Committee, 2019 http://americancse.org/

Contents

SESSION: MEDICAL DEVICES AND SERVICES + MONITORING SYSTEMS, TOOLS, SUPPORT SYSTEMS AND DIAGNOSTICS

AlgoDerm: An End-to-End Mobile Application for Skin Lesion Analysis and Tracking
  Rashika Mishra, Ovidiu Daescu

A Decision Support System for Skin Cancer Recognition with Deep Feature Extraction and Multi Response Linear Regression (MLR)-Based Meta Learning
  Md Mahmudur Rahman

Modelling of Sickle Cell Anemia Patients Response to Hydroxyurea using Artificial Neural Networks
  Brendan E. Odigwe, Jesuloluwa S. Eyitayo, Celestine I. Odigwe, Homayoun Valafar

A Cloud-based Intelligent Remote Patient Monitoring Architecture
  Khalid Alghatani, Abdelmounaam Rezgui

Seizure Detection using Machine Learning Algorithms
  Sarah Hadipour, Ala Tokhmpash, Bahram Shafai

Fog Enabled Health Informatics System for Critically Controlled Cardiovascular Disease Applications
  Ragaa A. Shehab, Mohamed Taher, Hoda K. Mohamed

SESSION: HEALTH INFORMATICS, HEALTH CARE AND REHABILITATION + PUBLIC HEALTH RELATED SYSTEMS

Using Isochrones to Examine NICU Availability in Rural Alabama
  Ben Hallihan, Myles McLeroy, Blake Wright, Travis Atkison

Ontology-based Model for Interoperability between openEHR and HL7 Health Applications
  Cristiano Andre da Costa, Matheus Henrique Wichman, Rodrigo da Rosa Righi, Adenauer Correa Yamin

Character Stroke Path Inference in a Rehabilitation Application
  Sarah Potter, Benjamin Bishop, Renee Hakim

Understanding Elderly User Experience on the Use of New Technologies for Independent Living
  Luiza Spiru, Cosmina Paul, Magdalena Velciu, Andrei Voicu, Mircea Marzan

SESSION: HEALTH INFORMATICS: DATA SCIENCE, PREDICTIVE ANALYTICS, SECURITY ISSUES AND RELATED SYSTEMS

Software Architecture Integrating Blockchain and Artificial Intelligence for Medical Data Aggregation
  Jingpeng Tang, Qianwen Bi, Bradley Van Fleet, Jason Nelson, Carter Davis, Joe Jacobson

Towards a Unified Blockchain-Based Dental Record Ecosystem for Disaster Victims Identification
  Sakher AlQahtani, Shada AlSalamah, A. Alimam, Ahad Alabdullatif, Maram Alamri, Miral Althaqeb, Sara Alqahtani, Kenneth Aschheim

Predicting 30-Day Hospital Readmissions for Patients with Diabetes
  Yijun Zhao, Weiyan Wu, Yao Jin, Suwen Gu, Han Wu, Jiwei Wang, Xinyi Jiang, Hong Xiao

Redactable Blockchain in Mobile Healthcare System
  Arij Alfaidi, Edward Chow

SESSION: LATE BREAKING PAPERS - HEALTH INFORMATICS AND MEDICAL SYSTEMS

Predictive Analytics for Left Without Treatment in the Emergency Departments
  Harish Kumar, Shahram Rahimi, Sean Bozorgzad, Alexander Sommers, Alexander Stephens

Autism Therapeutic Device
  Chuck Lopez, Josh Parker, Roger Ian Konlog, Shoeb Saiyed, Daren Wilcox


SESSION: MEDICAL DEVICES AND SERVICES + MONITORING SYSTEMS, TOOLS, SUPPORT SYSTEMS AND DIAGNOSTICS
Chair(s): TBA








AlgoDerm: An End-to-End Mobile Application for Skin Lesion Analysis and Tracking

Rashika Mishra and Ovidiu Daescu
Department of Computer Science, University of Texas at Dallas, Dallas, Texas, USA
Emails: [email protected], [email protected]

Abstract— Skin related problems affect a large percentage of the human population at any given time and many of them require qualified diagnosis. To facilitate accurate diagnosis, various computer aided diagnosis systems have been proposed in the past few years. These systems are mostly limited to cancer diagnosis from dermoscopic images, with only a few considering cell phone images. In this paper, we extend such systems by proposing a machine learning based mobile application for analysis of skin lesions that accepts both dermoscopic and cell phone images, automatically computes the size of the lesion, and adds a temporal dimension for keeping track of the changes of a specific lesion. We perform the machine learning in two parts: 1) lesion detection and 2) lesion classification. In the lesion detection part, we present a novel superpixel segmentation algorithm to improve upon the initial segmentation. For classification, we use the trained Resnet backbone from the previous step's segmentation model for transfer learning to the classification task. The proposed models were analyzed on the ISIC 2018 dataset and are top contenders even when using only the ISIC 2018 dataset for training.

Keywords: Melanoma Detection, Lesion Segmentation, Lesion Size Estimate, Lesion Monitoring

1. Introduction

The number of people diagnosed with skin cancer in the U.S. is increasing every year. For 2019, the number of new melanoma cases is estimated to increase by 7.7%. If diagnosed early, skin cancer is highly curable. The estimated survival rate for patients is 98% if detected early. This rate falls to 64% when the cancer reaches the lymph nodes and to 23% when the cancer metastasizes [11]. The increasing number of skin cancer incidences has stimulated research that led to significant advances in dermoscopy. Dermoscopy uses optical magnification to produce high-resolution images with fewer artifacts and increased visibility of subsurface structures of the lesion. Previous research has shown that dermoscopy improves diagnostic accuracy in comparison to standard photography [6]. However, dermoscopy might not be available in poor or remote regions and, even with high-resolution images, the diagnosis of melanoma (the most deadly form of skin cancer) by human dermatologists is subjective and can be inaccurate [2], [10]. The factors making diagnosis difficult are (1) variable features of skin lesions, such as size, shape, fuzzy boundaries, and the presence of hair and other artifacts, and (2) properties of the dermoscopy or camera images, such as resolution, size, contrast, and brightness.

The above motivations have led to the development of computer-aided diagnosis (CAD) systems that can assist dermatologists with clinical diagnoses. Such systems, however, rely heavily on dermoscopic images [16], with only a few being based on cell phone images [25]. With the emergence of telemedicine, it is not hard to envision a protocol where the patient uses a cell phone at home to take a picture of a skin lesion, uploads it to a dermatologist web site, has the image interpreted by a specialized CAD system and the answer validated by a specialist, and then receives a recommendation back through the web site. Such a system would save time and money in most cases, with only patients that need further investigation through biopsy or cross examination having to visit a medical office. A key drawback of current CAD systems is that they do not track the evolution of the skin lesion over time and do not account for the size of the lesion. On both dermoscopic and camera images, automatic detection of the size of the region seems impossible since there is no reference that can allow to infer the size.

1.1 Our Contributions

We present a mobile application for skin lesion image analysis and classification that (1) uses a novel dual segmentation method to segment out lesions in images, (2) classifies the images into one of seven categories, (3) automatically detects the size of the segmented lesion using a fixed size marker placed next to the lesion, and (4) stores lesion images and analysis results in a database for history lookup and tracking the evolution of the lesion. Specifically: 1) We added color space normalization of images as a preprocessing step to make them illumination invariant. 2) We developed a dual segmentation that combines deep learning with unsupervised segmentation. We first trained a Mask-RCNN model [3], [12] to get an initial segmentation. We then developed a superpixel algorithm for segmentation refinement. The initial segmentation is used as input to the superpixel segmentation to get a closer-to-real-boundary segmentation. 3) We used the weights from the Resnet backbone from




the segmentation model in step 2 to train a Resnet-152 [13] with transfer learning. To handle the heavy class imbalance in the available data we use online augmentation, oversampling of minority classes, and weighted loss functions. 4) We designed a method to automatically measure the lesion size in images taken through the mobile application, with the use of a fixed size marker and thresholding methods. 5) We implemented database features that allow us to keep track of the time evolution of the lesion. This feature is a huge help for dermatologists, and matches the process that takes place in clinical offices for recurring patients. 6) Our CAD system is fast and gives very good results with both dermoscopic and digital images.

1.2 ISIC 2018 Challenge

The International Skin Imaging Collaboration (ISIC) [14] maintains a large-scale, publicly accessible dataset with more than 23,000 dermoscopy images and has hosted an annual benchmark challenge on dermoscopy image analysis since 2016 [1]. The challenge includes three sub-tasks for lesion analysis: (1) lesion segmentation, (2) lesion attribute/feature extraction, and (3) lesion classification. In this paper, we present models for the lesion segmentation and classification tasks. All the methods presented in this paper are trained and evaluated on the ISIC dataset only.

Lesion Segmentation: The training input data consists of dermoscopic lesion images in JPEG format. The ground truth data are binary mask images in PNG format, indicating the location of the primary skin lesion (a single continuous region) within each input image [9].

Lesion Classification: The lesion classification dataset has 7 classes: (1) Melanoma (MEL), (2) Melanocytic nevus (NV), (3) Basal cell carcinoma (BCC), (4) Actinic keratosis (AKIEC), (5) Benign keratosis (BKL), (6) Dermatofibroma (DF), and (7) Vascular lesion (VASC). The dataset is heavily imbalanced towards benign lesions [24].

2. Lesion Segmentation

Skin lesion segmentation is a fundamental requirement for any computer-aided diagnosis system. Segmentation can either be supervised or unsupervised. Unsupervised methods use thresholding, energy functions, or region clustering. However, these methods rely heavily on image preprocessing such as hair removal and amplifying lesion homogeneity. Traditional supervised methods extract pixel-level color/texture features and use various classifiers to segment out the lesion from the surrounding skin. These supervised methods use low-level features based on color and texture but do not capture the high-level semantic features in the image. In [7] the authors present a survey on such segmentation methods. Recently, deep learning methods have produced highly accurate segmentations.

The current state-of-the-art methods use deep convolutional neural networks (CNN) for skin lesion segmentation [18], [26]. The winner of the ISIC 2018 challenge combined Mask-RCNN detection with an encoder-decoder segmentation network [21]. Inspired by the success of deep learning methods in skin lesion segmentation, we present a dual-segmentation method combining a deep learning model with unsupervised segmentation. The rest of this section defines each step in our proposed segmentation.

2.1 Illumination Invariant Color Space

Dermoscopy images are commonly acquired using a digital camera. Illumination variation occurs due to different light conditions while capturing the images, which can cause normal skin to be misclassified as a lesion. The appearance of skin in an image is illumination dependent, and lesion segmentation can use different color models that are invariant to illumination conditions. Many models are trained either in the HSV or Lab color space to alleviate the effects of the illumination conditions at image capture [7]. We use the perception-based color space of [8] to generate the illumination-invariant image (IIVI) and maintain color constancy. This ensures that the perceived color of the region of interest (ROI) in an image is constant even when the illumination changes. To convert an image to the IIVI space we first convert it to the XYZ (tristimulus values) color space, which is device invariant.
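For illustration, the sketch below shows only the first, device-invariant step of this preprocessing: converting linear sRGB to XYZ with the standard D65 matrix. The function name and the omission of the final mapping into the perception-based space of [8] are simplifications of ours, not details from the paper.

```python
import numpy as np

# Standard sRGB (D65) to XYZ tristimulus conversion matrix.
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])

def srgb_to_xyz(img):
    """img: H x W x 3 sRGB array with values in [0, 1]; returns XYZ values."""
    # Undo the sRGB gamma to obtain linear RGB before applying the matrix.
    rgb = np.where(img <= 0.04045, img / 12.92, ((img + 0.055) / 1.055) ** 2.4)
    return rgb @ SRGB_TO_XYZ.T
```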

2.2 Mask-RCNN Segmentation

Mask-RCNN [3] has been recognized as the state-of-the-art neural network architecture for instance segmentation. It utilizes a region-based convolutional neural network and generates a pixel-level segmentation within each proposed bounding box. We fine-tuned Mask-RCNN with a ResNet-152 backbone for two-class segmentation: skin lesion (foreground) and background. The adapted Mask-RCNN architecture produces a single mask per image.

Preprocessing and Augmentation: The mean aspect ratio of the images in the training dataset is 2:3. Hence, we resized all images to 512 × 768 with bicubic interpolation. We further used online augmentation for training. For all the images, we flip the image vertically and horizontally with a probability of 0.5 and randomly rotate images by 90, 180, or 270 degrees. We made the training data independent of the magnification/distance from the camera by randomly scaling the images to 75-150% of their original size. We also added random smoothing using Gaussian blur with sigma in the range 0.0 to 2.0 to handle camera resolution and soft edges in real-world data.

Training: The ResNet-152 backbone weights are initialized with weights pretrained on the ImageNet dataset [13]. We used Adam [15] for gradient optimization with a base learning rate of 0.001 and momentum of 0.9. We generated 64 anchors per image with scales in (16, 32, 64, 128, 256) for the region proposal network. We trained the model for 800 iterations per epoch and it converged after 50 epochs.
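The paper does not name an augmentation library, so the following is a minimal, library-agnostic sketch of the online augmentation described above (flips, right-angle rotations, 75-150% rescaling, and Gaussian smoothing), with the mask transformed geometrically alongside the image.

```python
import numpy as np
from scipy import ndimage

def augment(image, mask, rng=np.random):
    """image: H x W x 3 float array; mask: H x W binary array."""
    # Horizontal / vertical flips, each with probability 0.5.
    if rng.rand() < 0.5:
        image, mask = np.fliplr(image), np.fliplr(mask)
    if rng.rand() < 0.5:
        image, mask = np.flipud(image), np.flipud(mask)
    # Random rotation by 90, 180 or 270 degrees.
    k = rng.choice([1, 2, 3])
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    # Random rescaling to 75-150% of the original size (image and mask together).
    s = rng.uniform(0.75, 1.50)
    image = ndimage.zoom(image, (s, s, 1), order=3)   # cubic interpolation for the image
    mask = ndimage.zoom(mask, (s, s), order=0)        # nearest-neighbour for the mask
    # Random Gaussian smoothing of the image only.
    sigma = rng.uniform(0.0, 2.0)
    image = ndimage.gaussian_filter(image, sigma=(sigma, sigma, 0))
    return image, mask
```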



Loss Function: In general, segmentation methods employ cross-entropy loss functions. Cross-entropy measures the performance of a segmentation model based on the per-pixel class. Since the skin lesion is mostly a small section of the image, cross-entropy loss is biased towards the detection of background skin instead of the foreground lesion. To ensure that the model learns the lesion attributes for segmentation as well as the background, we used the Intersection over Union (IoU) loss proposed in [22].
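A minimal sketch of a differentiable (soft) IoU loss in the spirit of [22]; the exact formulation used in the paper may differ.

```python
import tensorflow as tf

def soft_iou_loss(y_true, y_pred, eps=1e-6):
    """y_true, y_pred: batches of masks with values in [0, 1], shape (N, H, W, 1)."""
    inter = tf.reduce_sum(y_true * y_pred, axis=[1, 2, 3])
    union = tf.reduce_sum(y_true + y_pred - y_true * y_pred, axis=[1, 2, 3])
    # 1 - mean soft IoU over the batch; minimizing this maximizes mask overlap.
    return 1.0 - tf.reduce_mean((inter + eps) / (union + eps))
```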


Fig. 1: Superpixel algorithm to refine segmentation

Algorithm 1 Fine-tune Segmentation

  INPUT: Segmented image I
  OUTPUT: Segmentation mask M'
  # Randomly initialize k = 5 cluster centers
  C_k <- [I_k^1, I_k^2, I_k^3, x_k, y_k]^T
  for i <- 1 to 5 do
      # Define a 3x3 neighborhood around each cluster center C_i
      cNeighbors <- neighborhood(C_i, 3)
      # Move the center to the lowest-gradient position in cNeighbors
      C_i <- minGradient(cNeighbors)
  end for
  for each pixel p in I with p != black do
      l(p) <- -1
      d(p) <- infinity
  end for
  iteration <- 0
  # K-means using D in a 2S neighborhood
  repeat
      for i <- 1 to k do
          cNeighbors <- neighborhood(C_i, 2S)
          for each pixel p in cNeighbors do
              D <- weightedDistance(p, C_i)
              if D < d(p) then
                  l(p) <- i
                  d(p) <- D
              end if
          end for
      end for
      # Update cluster centers
      for i <- 1 to k do
          C_i <- minGradient(cluster_i)
      end for
      # Merge the clusters based on size and color intensity
      C_k <- merge(cluster_i)
      # Get the new number of clusters
      k <- count(C_k)
      iteration <- iteration + 1
  until iteration >= 15
  k' <- count(C_k)
  if k' <= 2 then
      # No change in segmentation
      return -1
  else
      for i <- 1 to k' do
          # Get the mean color intensity for each cluster
          mIntensity_i <- meanColor(cluster_i)
      end for
  end if
  sort(mIntensity)
  M' <- pixels in the top 2 clusters of mIntensity
  return M'

2.3 Superpixel Algorithm

The segmentation obtained from Mask-RCNN is an overestimate of the lesion area. The estimated mask is not an exact segmentation because of multiple resizings of the image and mask during the network lifecycle. We also use a mask size of 56x56, which results in smooth boundaries for the mask. To improve upon the segmentation, we use superpixel segmentation on the segmented mask from Mask-RCNN. We group pixels into regions differentiated by minute boundaries. We develop the superpixel segmentation algorithm on top of the simple linear iterative clustering (SLIC) algorithm [4]. We cluster the output of the previous segmentation into up to k = 5 clusters using the k-means algorithm with a weighted distance measure D. D balances the color proximity with the space proximity of pixels: it is the distance between a pixel and a cluster center C_k in terms of color and spatial location. Let a pixel's color in the IIVI space be represented by [I^1, I^2, I^3]^T and let the pixel's position in the image be [x, y]^T. Let the number of pixels in an image be N. We define the maximum spatial distance for a given cluster as S = sqrt(N/k) and the maximum color distance as a fixed value M. We define alpha = M^2/S^2 as the weight factor to balance the color and space distance. When M is large, spatial distance is more important than color distance. This results in smooth clusters. When M is small, color distance has higher priority, giving clusters with more irregular boundaries. For our task, we fix the value of M = 3. We can define the color and space distance between




two pixels i and j as in equations (1) and (2):

d_c = sqrt( (I_i^1 - I_j^1)^2 + (I_i^2 - I_j^2)^2 + (I_i^3 - I_j^3)^2 )   (1)

d_s = sqrt( (x_i - x_j)^2 + (y_i - y_j)^2 )   (2)

This results in the weighted distance D defined in equation (3):

D = sqrt( d_c^2 + alpha * d_s^2 )   (3)

We initialize the cluster centers C_k randomly by sampling points at a distance of S from the image center. For each update of the clusters, we assign the cluster centers to the mean of the [I_k^1, I_k^2, I_k^3, x_k, y_k]^T vectors and merge clusters where count(cluster_i) < 25 (cluster_i denotes the pixels in a cluster). Let k' be the number of clusters after the superpixel algorithm. Our next step is to decide which clusters will form the segmentation mask. For this purpose we order the clusters with respect to decreasing cluster mean colors. If k' > 2, we select the top 2 clusters from the ordered set as our segmentation mask; otherwise we return the original segmentation, as no new clusters were found. This is because the lesion has a higher color intensity than the surrounding skin clusters. The clustered output is post-processed to remove holes and islands in the image. We used morphological operations (dilation and erosion) to fill in the holes and remove unnecessary islands. Algorithm 1 provides the pseudocode for the superpixel segmentation. Figure 1 shows some outputs of Algorithm 1.
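As a small illustration, the weighted distance of equations (1)-(3) can be computed as below. The function name mirrors the weightedDistance call in Algorithm 1, but the default N and k values are only the ones stated above; this is a sketch, not code from the paper.

```python
import numpy as np

def weighted_distance(pixel, center, M=3.0, N=512 * 768, k=5):
    """pixel, center: 5-vectors [I1, I2, I3, x, y] in the IIVI space."""
    p, c = np.asarray(pixel, float), np.asarray(center, float)
    d_c = np.sqrt(np.sum((p[:3] - c[:3]) ** 2))   # color distance, Eq. (1)
    d_s = np.sqrt(np.sum((p[3:] - c[3:]) ** 2))   # spatial distance, Eq. (2)
    alpha = (M ** 2) / (N / k)                    # alpha = M^2 / S^2 with S = sqrt(N/k)
    return np.sqrt(d_c ** 2 + alpha * d_s ** 2)   # weighted distance, Eq. (3)
```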

2.4 Segmentation Results

We used standard skin lesion metrics for evaluation: (1) dice similarity coefficient (DSC), (2) Jaccard index (JI), (3) thresholded Jaccard index at 0.65 (TJI), (4) sensitivity (Sen), (5) specificity (Spec), and (6) accuracy (Acc). Quantitative results on the ISIC 2018 test dataset for the different approaches mentioned in Section 2 are shown in Table 1. From Table 1, we can see that the color space conversion improves the output segmentation. Both additions to the baseline Mask-RCNN architecture have a positive impact on all evaluation metrics without impacting the efficiency of the segmentation. The performance of our algorithm is on par with the previous year's winner for the segmentation task [21]. In [21], the authors trained two deep learning models, Mask-RCNN and an encoder-decoder segmentation network, to get the final segmentation output. This requires saving two models in memory and sending each image through the two-model pipeline. Our proposed network requires saving weights for only one model and uses unsupervised learning in target areas to produce the final segmentation mask. Our experiments show that illumination artifacts and fuzzy boundaries have a high impact on the performance of a segmentation algorithm. Our final goal is to use the trained model for images taken by users with their cell phones. This would create an

image dataset with high illumination variation. Color space conversion to the IIVI space ensures that all images fall in the same illumination range without affecting the lesion attributes. The superpixel algorithm described in the previous section takes an initial segmentation close to the lesion boundary as input and is able to handle fuzzy boundaries through color-based and space-based distance measures.
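The mask-overlap metrics listed in Section 2.4 can be computed directly from binary masks. The sketch below assumes the ISIC 2018 convention for the thresholded Jaccard index (a score below 0.65 counts as 0), which we believe is the definition intended here.

```python
import numpy as np

def segmentation_metrics(pred, truth, t=0.65):
    """pred, truth: boolean H x W masks for one image."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dsc = 2.0 * inter / (pred.sum() + truth.sum())   # dice similarity coefficient
    ji = inter / union                                # Jaccard index
    tji = ji if ji >= t else 0.0                      # thresholded Jaccard index
    return {"DSC": dsc, "JI": ji, "TJI": tji}
```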

3. Lesion Classification

Lesion classification by dermatologists follows a list of medical criteria such as the ABCD rule [19] and the 7-point checklist [5]. These medical criteria are evaluated mostly through visual inferences from the skin lesion and surrounding area and rarely through histology. However, visual diagnosis is subject to observer bias and relies heavily on the dermatologist's experience [10]. Therefore, a computer aided analysis system can help dermatologists make a more confident diagnosis. In the past few years several CAD systems have been published to aid in diagnosing skin cancer by analyzing digital dermoscopy images [2], [7], [10]. Traditional machine learning methods involve visual feature extraction for classification and diagnosis [25]. Deep learning methods involve training CNN architectures for end-to-end classification [17], [25]. The above approaches require large amounts of training data and also employ heavy data augmentation in image space to handle class imbalance in the training data. In this paper, we take advantage of the model trained for skin lesion segmentation. We present a transfer learning approach using the Resnet backbone of the Mask-RCNN model trained for segmentation and utilize minority class oversampling to handle class imbalance.

3.1 Transfer Learning from the Resnet Backbone

The limited dataset (around 10,000 images) makes training a model from scratch difficult, since it only represents a small subsample of the real-world data. Thus, it is common practice to use pretrained weights from a previously trained architecture. For the classification task, we trained a Resnet-152 model with the weights initialized from the backbone network of the Mask-RCNN trained for segmentation (see Section 2). The backbone model of the Mask-RCNN network is a feature extraction model. This feature extractor detects low-level features (edges and corners) and successively identifies higher-level features that define a skin lesion. The model hyper-parameters and preprocessing steps are adapted from the Mask-RCNN training for segmentation. We used weighted cross entropy as the loss function to handle class imbalance. We trained the model with 95% of the training data for 25 epochs to learn the 7 classes. From this 95%, we used 10% as a cross-validation set for early stopping with the patience value set to 15. The remaining 5% of the data was used as a blind test set to evaluate the model.
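A hedged sketch of this classification setup: a ResNet-152 trunk initialized from the segmentation backbone, a 7-way softmax head, and frequency-based class weights for the weighted cross-entropy. The weight file name, the 224x224 input size, and the use of Adam for this stage are illustrative assumptions rather than details given in the paper; the class counts come from Table 2.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Trunk without the ImageNet head; pooling="avg" yields a 2048-d feature per image.
backbone = tf.keras.applications.ResNet152(include_top=False, weights=None,
                                            input_shape=(224, 224, 3), pooling="avg")
# Hypothetical export of the Mask-RCNN backbone weights (file name is illustrative):
# backbone.load_weights("segmentation_backbone.h5", by_name=True, skip_mismatch=True)

model = models.Sequential([backbone, layers.Dense(7, activation="softmax")])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Class weights inversely proportional to class frequency (counts from Table 2).
counts = np.array([1109, 6734, 500, 326, 1085, 116, 144], dtype=float)
class_weight = {i: counts.sum() / (len(counts) * c) for i, c in enumerate(counts)}
# model.fit(x_train, y_train, epochs=25, validation_split=0.10,
#           class_weight=class_weight,
#           callbacks=[tf.keras.callbacks.EarlyStopping(patience=15)])
```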




Table 1: Evaluation metrics for the experiments in Section 2 and the ISIC 2018 winner on the ISIC 2018 test dataset

Method                                                        DSC     JI      TJI     Sen     Spec    Acc     Avg. segmentation time per image
Mask-RCNN (baseline)                                          0.885   0.810   0.761   0.926   0.941   0.931   0.042s
Mask-RCNN (color space)                                       0.887   0.821   0.774   0.940   0.925   0.936   0.045s
Dual segmentation (proposed)                                  0.893   0.825   0.802   0.904   0.963   0.940   0.051s
Two-step deep learning segmentation (ISIC 2018 winner) [21]   0.898   0.838   0.802   0.906   0.963   0.942   -

Table 2: Class distribution for ISIC 2018 Challenge Task 3 training data

Class    No. of images   Percentage
MEL      1109            11.07
NV       6734            67.24
BCC      500             4.99
AKIEC    326             3.26
BKL      1085            10.83
DF       116             1.16
VASC     144             1.45

3.2 Class Imbalance

The training data for ISIC 2018 [24] is highly imbalanced, as shown in Table 2. Dataset augmentation is a standard regularization technique used to reduce overfitting while training models and to handle class imbalance. Data augmentation for skin lesion tasks can be implemented by applying image manipulations such as shifting, scaling, rotation, and other affine transformations. From Table 2, we can see that more than 67% of the data belongs to the Melanocytic nevus (NV) class. We use a weighted loss function to handle class imbalance during training. We also employ oversampling of the minority classes to augment the dataset. From Table 2, four classes (BCC, AKIEC, DF, VASC) each have less than 5% of the images in the training dataset. We increase these classes in the training set by adding augmented images with a piecewise affine transformation, as sketched below. For each image, we locally distort the image by a value d relative to the size of the image on a grid of size 2 × 2. Here, d is randomly picked from a normal distribution N(0, σ). For our dataset, σ is set as the mean of 5% of the height and 5% of the width of the image. For the BCC and AKIEC classes, we generate two transformations per image. For the DF and VASC classes, we generate three transformations per image.
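The oversampling plan can be written compactly. imgaug's PiecewiseAffine is used here only as a stand-in for the piecewise affine transformation described in the text (the paper does not name a library), and the per-class copy counts come from Section 3.2.

```python
import imgaug.augmenters as iaa

# Local distortions d ~ N(0, sigma) on a 2x2 control-point grid; scale is
# relative to the image size, roughly matching the 5% value stated above.
piecewise = iaa.PiecewiseAffine(scale=(0.0, 0.05), nb_rows=2, nb_cols=2)

extra_copies = {"BCC": 2, "AKIEC": 2, "DF": 3, "VASC": 3}   # minority classes

def oversample(images_by_class):
    """images_by_class: dict mapping class name -> list of uint8 HxWx3 images."""
    augmented = []
    for cls, n_copies in extra_copies.items():
        for img in images_by_class.get(cls, []):
            for _ in range(n_copies):
                augmented.append((cls, piecewise.augment_image(img)))
    return augmented
```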

3.3 Classification Results

We evaluate the performance of the classifier using the following metrics: balanced accuracy (BA), sensitivity (SEN), specificity (SPEC), accuracy (ACC), area under the receiver operating characteristic curve (AUC), and F1 score (F1). The balanced accuracy metric is defined as the average of the recall obtained on each class. We test the performance of our model on three datasets: (1) the 5% holdout dataset from the training data, (2) the ISIC 2018 Challenge test dataset, and (3) curated data from the sd198 dataset [23]. The sd198 dataset is used to check the generalization of the trained model on a dataset from a different source. To create this dataset, we parse the images in sd198 and look for the 7 class labels. The sd198 dataset does not have labels for the "Melanocytic nevus (NV)" and "Vascular lesion (VASC)" classes. To form these classes we include images from [Blue Nevus, Congenital Nevus, Dysplastic Nevus] in the NV class and images from [Angioma, Pyogenic Angioma] in the VASC class. Table 3 shows the evaluation metrics for the 3 datasets used in evaluation and the results of the ISIC 2018 winner on the ISIC challenge dataset [20]. From this table we can make the following observations:

Fig. 2: ROC curve for the sd198 test dataset

Table 3: Evaluation metrics for the classification

Dataset                            BA      Acc     AUC     Sen     Spec
Hold-out Set                       0.842   0.940   0.978   0.841   0.989
Sd198                              0.739   0.750   0.907   0.739   0.958
ISIC 2018 Challenge                0.812   0.932   0.971   0.811   0.988
ISIC 2018 Challenge Winner [20]    0.885   0.985   0.982   0.958   0.833

Model Generalization: The images in sd198 are a mix of both dermoscopy and digital images. Some of the images in sd198 are not localized and hence are very different from the images in the ISIC dataset. The model accuracy on a test set from this source (sd198) is around 73.9%. This shows that the model has good generalization ability.

Model Performance: Figure 2 shows the Receiver Operating Characteristic (ROC) curves for the model on the sd198 sample dataset. The model performance on the ISIC challenge test dataset is on par with the ISIC 2018 winner. The ISIC 2018 winner model incorporates large ensemble networks to make final predictions. However, implementing such an




ensemble of models for real-time classification is resource- and time-intensive. Our proposed model makes predictions based on a single network output with comparable performance and can be used for real-time classification in a phone application.

4. Phone Application

We developed a mobile application called AlgoDerm for real-time inference on images. The application allows users to take a picture of their skin lesion on their phone, send it to a server hosting the machine learning algorithms, and receive information about the lesion, such as its estimated size and predictions for potential skin diseases. Additional features of the app include the ability to create and organize images by body regions, viewing and editing of historic data, and extensive help messages. NativeScript was used for mobile development to make the app available on both Android and iOS. OpenCV and Tensorflow were used on the server side for skin lesion size estimation and the earlier algorithms. Once the image is uploaded to the server, we perform 3 separate tasks: (1) Lesion Segmentation, (2) Marker Detection and Lesion Size Calculation, and (3) Lesion Classification. We have described tasks 1 and 3 in Sections 2 and 3, respectively. We provide a custom marker-based lesion size measurement tool for computing the absolute lesion size. For this purpose we ask the user to place a predefined red marker (such as tape), with an approximate 5:1 aspect ratio and 2 cm length, beside the lesion before taking a picture. This marker is used to obtain the absolute measure of the lesion size. We use Algorithm 2 to get the size of a segmented lesion in an image. It segments the marker based on red hue from the rest of the image and calculates the length of the bounding box for the segmented marker. This length is then used to compute the size of the lesion. This allows for a real-world measure of the lesion and over-time tracking of the change in size of the lesion for diagnosis purposes. Figure 3 shows the output of the marker detection algorithm. Figure 4 shows screenshots of the mobile application. The screenshots display the application's process at different stages and also show the available features: tracking and size calculation. The application's homepage provides options to add a new image or access multiple other images. The app has the option to add lesions from different sites (anatomical regions) and group them together. All the added lesions are stored in the database with site name, classification, lesion size, and image date. When a user adds a new image in the application, the image is sent to the server for analysis; the server runs the segmentation algorithm and sends the image with the segmentation mask to the marker detection step. In this step, the marker is segmented and the absolute size of the lesion is calculated. The image is then sent to the classification algorithm, which returns the class probabilities.

Algorithm 2 Marker Detection and Lesion Size Calculation

  INPUT: Image I and bounding box M for the lesion mask
  OUTPUT: Lesion size
  resizedIm <- resize(I, 512, 768)
  lowRed <- [(0, 100, 100), (10, 255, 255)]
  highRed <- [(160, 100, 100), (179, 255, 255)]
  blurIm <- medianBlur(resizedIm, 5)
  binaryIm <- binary(blurIm, lowRed, highRed)
  otsuIm <- otsuThreshold(blurIm)
  binaryMask <- and(binaryIm, otsuIm)
  contourList <- findContours(binaryMask)
  bBoxList <- getBoundingBox(contourList)
  # Find the bounding box for the marker
  for Box b : bBoxList do
      if area(b) > 100 then
          aspectRatio <- length(b) / width(b)
          if aspectRatio > 5 then
              add b to markers
          end if
      end if
  end for
  markerBox <- findLargestBox(markers)
  # Count the number of pixels for the longest side; the marker is 2 cm long
  markerLength <- 2 / length(markerBox)
  moleSize <- length(M) x markerLength
  return moleSize
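A condensed OpenCV sketch of the marker-detection idea in Algorithm 2. The HSV thresholds, the area and aspect-ratio cut-offs, and the 2 cm marker length follow the pseudocode; the particular Otsu step and helper names are simplifications of ours.

```python
import cv2
import numpy as np

def marker_length_px(image_bgr):
    """Return the longest side, in pixels, of the detected red marker (or None)."""
    img = cv2.resize(image_bgr, (768, 512))
    blur = cv2.medianBlur(img, 5)
    hsv = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
    # Red hue wraps around 0/180 in OpenCV's HSV, so combine two ranges.
    red = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255)) | \
          cv2.inRange(hsv, (160, 100, 100), (179, 255, 255))
    gray = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY)
    _, otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.bitwise_and(red, otsu)
    # OpenCV 4.x return signature (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 100 and max(w, h) / max(1, min(w, h)) > 5:   # elongated red strip
            candidates.append((w * h, max(w, h)))
    return max(candidates)[1] if candidates else None

# Example use: convert a lesion extent from pixels to centimetres.
# lesion_size_cm = lesion_extent_px * (2.0 / marker_length_px(image))
```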

Fig. 3: Marker detection algorithm in progress

Fig. 4: Screenshots of different windows in the phone application showing the ability to: (1) add a new image, (2) get the history for an old lesion, (3) view lesion analysis results from the server, and (4) view lesion size results

5. Conclusion

In this work, we propose an end-to-end phone application to detect and classify skin lesions into seven classes, trained on the ISIC 2018 dataset. We combine superpixel segmentation with a Mask-RCNN model to produce an accurate segmentation of the lesion. We also show that preprocessing techniques such as color constancy can improve results significantly. For the lesion classification task, we propose data augmentation and oversampling to handle class imbalance. We evaluated the performance of these models on the ISIC challenge test dataset and a subset of the sd198 dataset with high accuracy. As a final step, we present a phone application module to measure the size of the lesion. This phone application allows for database storage of analysis results for history lookup. To the best of our knowledge, this is the first application developed with the proposed functionalities.

References

[1] ISIC 2018. ISIC 2018: Skin lesion analysis towards melanoma detection. https://challenge2018.isic-archive.com/.




[2] Qaisar Abbas, M Emre Celebi, Carmen Serrano, Irene Fondón García, and Guangzhi Ma. Pattern classification of dermoscopy images: A perceptually uniform model. Pattern Recognition, 46(1):86-97, 2013.
[3] Waleed Abdulla. Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow. https://github.com/matterport/Mask_RCNN, 2017.
[4] Radhakrishna Achanta, Appu Shaji, Kevin Smith, Aurelien Lucchi, Pascal Fua, and Sabine Süsstrunk. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(11):2274-2282, 2012.
[5] Giuseppe Argenziano, Gabriella Fabbrocini, Paolo Carli, Vincenzo De Giorgi, Elena Sammarco, and Mario Delfino. Epiluminescence microscopy for the diagnosis of doubtful melanocytic skin lesions: comparison of the ABCD rule of dermatoscopy and a new 7-point checklist based on pattern analysis. Archives of Dermatology, 134(12):1563-1570, 1998.
[6] M Emre Celebi, Hassan A Kingravi, Bakhtiyar Uddin, Hitoshi Iyatomi, Y Alp Aslandogan, William V Stoecker, and Randy H Moss. A methodological approach to the classification of dermoscopy images. Computerized Medical Imaging and Graphics, 31(6):362-373, 2007.
[7] M Emre Celebi, Quan Wen, Hitoshi Iyatomi, Kouhei Shimizu, Huiyu Zhou, and Gerald Schaefer. A state-of-the-art survey on lesion border detection in dermoscopy images. Dermoscopy Image Analysis, pages 97-129, 2015.
[8] Hamilton Y. Chong, Steven J. Gortler, and Todd Zickler. A perception-based color space for illumination-invariant image processing. In ACM SIGGRAPH 2008 Papers, SIGGRAPH '08, pages 61:1-61:7, New York, NY, USA, 2008. ACM.
[9] Noel CF Codella, David Gutman, M Emre Celebi, Brian Helba, Michael A Marchetti, Stephen W Dusza, Aadi Kalloo, Konstantinos Liopyris, Nabin Mishra, Harald Kittler, et al. Skin lesion analysis toward melanoma detection: A challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), hosted by the International Skin Imaging Collaboration (ISIC). In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), pages 168-172. IEEE, 2018.
[10] Andre Esteva, Brett Kuprel, Roberto A Novoa, Justin Ko, Susan M Swetter, Helen M Blau, and Sebastian Thrun. Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639):115, 2017.
[11] Skin Cancer Foundation. Skin cancer facts and statistics. https://www.skincancer.org/skin-cancer-information/skin-cancer-facts.
[12] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, pages 2961-2969, 2017.

[13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
[14] ISIC. The International Skin Imaging Collaboration. https://www.isic-archive.com/#!/topWithHeader/wideContentTop/main.
[15] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[16] Konstantin Korotkov and Rafael Garcia. Computerized analysis of pigmented skin lesions: a review. Artificial Intelligence in Medicine, 56(2):69-90, 2012.
[17] Yuexiang Li and Linlin Shen. Skin lesion analysis towards melanoma detection using deep learning network. Sensors, 18(2):556, 2018.
[18] Rashika Mishra and Ovidiu Daescu. Deep learning for skin lesion segmentation. In 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pages 1189-1194. IEEE, 2017.
[19] Franz Nachbar, Wilhelm Stolz, Tanja Merkle, Armand B Cognetta, Thomas Vogt, Michael Landthaler, Peter Bilek, Otto Braun-Falco, and Gerd Plewig. The ABCD rule of dermatoscopy: high prospective value in the diagnosis of doubtful melanocytic skin lesions. Journal of the American Academy of Dermatology, 30(4):551-559, 1994.
[20] Aleksey Nozdryn-Plotnicki, Jordan Yap, and William Yolland. Ensembling convolutional neural networks for skin cancer classification, 2018.
[21] Chengyao Qian, Ting Liu, Hao Jiang, Zhe Wang, Pengfei Wang, Mingxin Guan, and Biao Sun. A two-stage method for skin lesion analysis. arXiv preprint arXiv:1809.03917, 2018.
[22] Md Atiqur Rahman and Yang Wang. Optimizing intersection-over-union in deep neural networks for image segmentation. In International Symposium on Visual Computing, pages 234-244. Springer, 2016.
[23] Xiaoxiao Sun, Jufeng Yang, Ming Sun, and Kai Wang. A benchmark for automatic visual classification of clinical skin disease images. In European Conference on Computer Vision, pages 206-222. Springer, 2016.
[24] Philipp Tschandl, Cliff Rosendahl, and Harald Kittler. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Scientific Data, 5:180161, 2018.
[25] Jufeng Yang, Xiaoxiao Sun, Jie Liang, and Paul L Rosin. Clinical skin lesion diagnosis using representations inspired by dermatologist criteria. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1258-1266, 2018.
[26] Xiaoqing Zhang. Melanoma segmentation based on deep learning. Computer Assisted Surgery, 22(sup1):267-277, 2017.




A Decision Support System for Skin Cancer Recognition with Deep Feature Extraction and Multi Response Linear Regression (MLR)-Based Meta Learning

Md Mahmudur Rahman
Computer Science Department, Morgan State University, Baltimore, MD, USA
[email protected]

Abstract

This work presents an integrated classification and retrieval based diagnostic aid for skin cancer detection by extracting and combining several deep features using transfer learning and a multi-response linear regression (MLR)-based meta-learning approach. Generally, diagnosing an unknown skin lesion in a dermoscopic image is the first step to determine appropriate treatment. The descriptiveness and discriminative power of features extracted from the images are critical to achieving good classification and retrieval performance towards computer aided diagnosis. Recently, characterization of suspicious lesion images using a purely feature and transfer learning based data-driven approach has been gaining popularity in this domain. However, it is still challenging to find a unique feature representation to classify and compare images accurately for retrieval for all types of queries in large heterogeneous collections. To address the above issues, this work focuses on the extraction and fusion of several deep features based on transfer learning from pre-trained Convolutional Neural Networks (CNN) and meta learning based on a multi-response linear regression (MLR). The classification and retrieval results were evaluated on a collection of 10,015 images of seven different skin disease categories with improved accuracies compared to using only a single feature or not using the proposed fusion approach.

Keywords: Skin Cancer, Melanoma, CAD, Classification, Retrieval, Deep Learning, Regression, Meta Learner.

1 Introduction Skin cancer is one of the most frequent cancers among human beings, affecting more than one million Americans every year and one in five Americans will get it in their lifetime. With more than 3,000 possible disorders, diagnosing

an unknown skin lesion is the first step to determine appropriate treatment [1]. Early diagnosis through periodic screening with dermoscopic images can significantly improve the survival rate as well as reduce the treatment cost and consequent suffering of patients [2]. There is currently great interest in the development of computer aided diagnosis (CAD) systems for automated melanoma recognition, as melanoma causes 75% of all skin cancer-related deaths. However, these systems are mainly non-interactive in nature and their prediction represents just a cue for the dermatologist, as the final decision regarding the likelihood of the presence of a melanoma is left exclusively to him/her. A real-world clinical scenario, however, demands a more difficult classification task with multiple image categories associated with different kinds of skin cancers or disorders. It would also be more effective if the dermatologist were assisted in the decision making process by means of an interactive approach, where the system retrieves a number of lesions from a database of already diagnosed cases similar to the one under analysis. Our hypothesis is that providing a set of pathologically-confirmed images of past cases as computer output could be utilized to guide clinicians to a precise diagnosis, rather than to suggest a second diagnosis. Locating, retrieving, and displaying relevant past cases provides intuitive and effective support to both inexperienced and experienced clinicians, which can improve their diagnostic accuracy. Such a concept can be implemented with a content-based image retrieval (CBIR) tool for searching and retrieving dermoscopic images most similar to an unknown case [3]. Despite the large proliferation of CBIR in the radiological domain, the existing literature on CBIR in the context of dermoscopic images is not rich [4, 5, 6]. The classification and retrieval performance of these systems is usually highly dependent on the effectiveness of the image feature vectors. Many feature descriptors, such as intensity histograms, filter-based features such as Gabor filters and wavelets, and many scale-invariant features, have



been proposed over the past years for these systems [3]. Automatic feature learning from image data has emerged as a different trend recently, to capture the intrinsic image features without manual or hand-crafted feature design approaches [7, 8]. Traditional hand-crafted features often require expensive human labor and often rely on expert knowledge. Also, they normally do not generalize well for large datasets. In contrast, feature learning or representation learning is a set of techniques that learn a transformation of the raw data input to a representation that can be effectively exploited in machine learning tasks. Motivated by this, this work focuses on extracting several deep features from pre-trained Convolutional Neural Networks (CNN). It also presents a classification-driven retrieval approach that combines multiple deep features with an MLR-based meta-learner built on the class probability outputs of the base-level Logistic Regression classifiers. MLR attempts to model the relationship between two or more explanatory variables and a response variable by fitting a linear equation to observed data [9]. The advantage of using MLR here over other generalizers is its interpretability, as the weights generated by it indicate the different contributions that each feature makes to class prediction. Hence, based on the on-line prediction of the query image modalities, the individual feature weights generated by MLR are used in a linear combination of similarity matching functions for image retrieval. To evaluate the effectiveness of our retrieval and classification approaches, experiments were performed and results were validated on a benchmark dataset of more than 10,000 images of the International Skin Imaging Collaboration (ISIC) by participating in the competition for Skin Lesion Analysis Towards Melanoma Detection [10].
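A hedged sketch of the MLR meta-learning idea: per-feature logistic regression base learners whose class probabilities are stacked and fed to a multi-response linear regression. For brevity the sketch fits the meta-learner on in-sample probabilities; a faithful stacking implementation would use out-of-fold predictions. All function and variable names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

def fit_stack(feature_sets, y, n_classes=7):
    """feature_sets: list of (n_samples, d_i) arrays, one per deep feature type."""
    bases = [LogisticRegression(max_iter=1000).fit(X, y) for X in feature_sets]
    # Stack each base learner's class probabilities side by side.
    P = np.hstack([clf.predict_proba(X) for clf, X in zip(bases, feature_sets)])
    Y = np.eye(n_classes)[y]               # one-hot response matrix
    meta = LinearRegression().fit(P, Y)    # multi-response linear regression
    return bases, meta

def predict(bases, meta, feature_sets):
    P = np.hstack([clf.predict_proba(X) for clf, X in zip(bases, feature_sets)])
    return np.argmax(meta.predict(P), axis=1)
```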

2 Deep Feature Extraction

The proposed system consists of three main stages: deep feature extraction based on transfer learning, image classification with an MLR-based meta-learner, and dynamic similarity fusion for retrieval. Convolutional neural networks (CNNs) trained on large-scale datasets such as ImageNet [11] have been demonstrated to be excellent at the task of transfer learning. These networks learn a set of rich, discriminating features to recognize 1,000 separate object classes. CNNs not only give state-of-the-art results when trained for a specific task, but experiments have shown that the filters learned over the ImageNet dataset are generic and useful for other image tasks that the CNN was not originally trained for. Using a pre-trained CNN as a feature extractor, rather than training a network from scratch, is attractive as it transfers learning (i.e., filters) from other domains where more training data is available and avoids a time-consuming training process. Hence, in this work, three such deep features are extracted from images using pre-trained CNN models,

such as VGG16 [12], AlexNet [13] and ResNet50 [14], and stored in HDF5 format; they are later loaded and used as input for training a general Logistic Regression classifier. For example, we first pre-processed the images (e.g., to 3-channel 224×224 pixel images) for the VGG model (without the output layer) trained on the ImageNet dataset and used the extracted features predicted by this model as input. When treating the VGG16 network as a feature extractor, we essentially “chop off” the network prior to the fully-connected layers. The last layer of the network is then a max pooling layer (Figure 2(a)), which has an output shape of 7 × 7 × 512, implying there are 512 filters each of size 7 × 7. If we were to forward propagate an image through this network with its FC head removed, we would be left with 512 activation maps of size 7 × 7 that have either activated or not based on the image contents. Therefore, we can take these 7 × 7 × 512 = 25,088 values and treat them as a feature vector that quantifies the contents of an image, with the VGG16 network used as an intermediary feature extractor. We perform a similar feature extraction process separately with the AlexNet and ResNet50 networks. In the AlexNet architecture [13], input images are assumed to be 227 × 227 × 3 pixels; the first block of AlexNet applies 96 11×11 kernels with a stride of 4×4, followed by a RELU activation and max pooling with a pool size of 3×3 and strides of 2×2, resulting in an output volume of size 55 × 55. We then apply a second CONV => RELU => POOL block, this time using 256 5×5 filters with 1×1 strides. After applying max pooling again with a pool size of 3×3 and strides of 2×2, we are left with a 13×13 volume. Next, we apply (CONV => RELU) * 3 => POOL. The first two CONV layers learn 384 3 × 3 filters while the final CONV learns 256 3 × 3 filters. After another max pooling operation, we reach our two FC layers, each with 4096 nodes and RELU activations in between. The final layer in the network is the softmax classifier, and the final feature vector size is 4096. ResNet is an architecture that uses a new and innovative type of block (known as the residual block) and the concept of residual learning, which has allowed researchers to reach depths unthinkable with the classic feed-forward model due to the problem of gradient degradation [14]. In the ResNet50 architecture [15], input images are resized to 224 × 224 pixels. The final average pooling layer of ResNet50 is 2048-d, which is the dimensionality of the feature vector used as input to the classifiers.
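To make the transfer-learning step above concrete, the following is a minimal sketch of extracting the 7 × 7 × 512 VGG16 features as 25,088-dimensional vectors. It assumes a Keras/TensorFlow environment, and the `image_paths` list is a hypothetical placeholder rather than part of the original pipeline.

```python
# Minimal sketch: VGG16 without its FC head as an intermediary feature extractor.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image

# Pre-trained on ImageNet; include_top=False "chops off" the fully-connected layers.
model = VGG16(weights="imagenet", include_top=False)

def extract_feature(img_path):
    # Resize to the 224x224x3 input expected by VGG16 and apply its preprocessing.
    img = image.load_img(img_path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    # Last max-pooling output has shape (1, 7, 7, 512); flatten to a 25,088-d vector.
    return model.predict(x).flatten()

# features = np.vstack([extract_feature(p) for p in image_paths])  # image_paths is hypothetical
```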

3 MLR as Meta Learner Combiner of Deep Features

In recent years, MLR has been recommended as a combiner for merging heterogeneous base-level classifiers


[9, 18, 19]. Any classification problem with real-valued attributes can be transformed into a multi-response regression problem. The experimental results in [9] showed that for small sample sizes, MLR can generally outperform other combiners, such as FLD (Fisher Linear Discriminant), DT (Decision Template) and MEAN, when the validation or stacking strategy is adopted. The validation strategy splits the training set $\mathcal{L}$ into two disjoint subsets, one of which is used to derive the base-level classifiers $C_1, C_2, \ldots, C_K$, while the other is employed to construct the meta-level data. The stacking strategy, in contrast, utilizes cross-validation to form the meta-level data. In [18], it is found that to handle multi-class problems, the best results are obtained when the higher-level model combines the confidences (and not just the predictions) of the lower-level ones based on the MLR algorithm, with the stacking method employed to construct its meta-level data (namely, the data for training the combiner). On the other hand, considering both accuracy and cost, it is shown in [19] that the combiner with validation is a clearly better choice than stacked generalization, whose cost may prevent it from being applied to realistic databases. Based on the above findings, we propose to use MLR as a learnable combiner with the validation strategy for combining the probabilistic outputs, instead of the class predictions, of several base-level SVM classifiers, each trained with an individual feature as input. In addition, we evaluated the classification effectiveness of MLR by analyzing its performance from small to large sample sizes to justify its usability for category detection. Given a data set $\mathcal{L} = \{(y_n, \mathbf{x}_n), n = 1, \ldots, N\}$, where $y_n$ is a class label taking values from one of the $m$ classes $\{\omega_1, \omega_2, \ldots, \omega_m\}$ and $\mathbf{x}_n$ is a vector representing the attribute values of a feature (instance) of the $n$-th example. For $K$ different features, $K$ base-level classifiers $C_1, C_2, \ldots, C_K$ are generated by applying a multi-class SVM learning algorithm [?], so each base-level classifier is trained with a particular input feature $\mathbf{x}_n$ from the training set. The prediction of classifier $C_k$ ($k = 1, 2, \ldots, K$) when applied to a feature vector $\mathbf{x}_n$ is a probability distribution vector

$$P_k(\mathbf{x}_n) = [P_k(\omega_1|\mathbf{x}_n), P_k(\omega_2|\mathbf{x}_n), \ldots, P_k(\omega_m|\mathbf{x}_n)]^T = [P_k^1(\mathbf{x}_n), P_k^2(\mathbf{x}_n), \ldots, P_k^m(\mathbf{x}_n)]^T, \quad k = 1, 2, \ldots, K \qquad (1)$$

where $P_k^m(\mathbf{x}_n)$ denotes the probability or class confidence score that the feature vector $\mathbf{x}_n$ of the $n$-th example belongs to class $\omega_m$ as estimated by the classifier $C_k$. Furthermore, $P(\mathbf{x}_n)$ is defined as the $mK$-dimensional column vector

$$P(\mathbf{x}_n) = [P_1(\mathbf{x}_n), \ldots, P_k(\mathbf{x}_n), \ldots, P_K(\mathbf{x}_n)]^T = [P_1^1(\mathbf{x}_n), \ldots, P_1^m(\mathbf{x}_n), P_2^1(\mathbf{x}_n), \ldots, P_2^m(\mathbf{x}_n), \ldots, P_K^1(\mathbf{x}_n), \ldots, P_K^m(\mathbf{x}_n)]^T \qquad (2)$$

Based on the intermediate feature space constituted by the outputs of each base-level SVM classifier, the MLR method [9] firstly transforms the original classification task with $m$ classes into $m$ regression problems: the problem for class $\omega_j$ has examples with responses equal to one when they indeed have class label $\omega_j$ and zero otherwise. For each class $\omega_j$, MLR selects only $P_1^j(\mathbf{x}_n), P_2^j(\mathbf{x}_n), \ldots, P_K^j(\mathbf{x}_n)$, the probabilities that $\mathbf{x}_n$ belongs to $\omega_j$ predicted by the base-level classifiers $C_1, C_2, \ldots, C_K$, as the input attributes to establish a linear equation

$$LR_j(\mathbf{x}_n) = \sum_{k=1}^{K} \alpha_k^j P_k^j(\mathbf{x}_n), \quad j = 1, 2, \ldots, m \qquad (3)$$

where the coefficients $\{\alpha_k^j\}$ are constrained to be nonnegative and the nonnegative-coefficient least-squares algorithm described in [?] is employed to estimate them. To classify a new instance $\mathbf{x}$, we need to compute $LR_j(\mathbf{x})$ for all the $m$ classes and assign it to the class $\omega_j$ which has the greatest value:

$$LR_j(\mathbf{x}) > LR_{j'}(\mathbf{x}) \quad \text{for all } j' \neq j. \qquad (4)$$

The advantage of using MLR over other generalizers is its interpretability as it provides a method of combining the confidence (class probabilities) generated by the base level models into a final decision. The weights generated by MLR indicate the different contributions that each base level model (e.g., features) makes to the prediction classes.
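As an illustration of how such a nonnegative-coefficient combiner could be implemented, the following is a minimal sketch using per-class non-negative least squares; it assumes SciPy/NumPy are available, and the names `proba_list`, `fit_mlr_weights` and `predict_mlr` are hypothetical, not part of the original system.

```python
# Minimal sketch of an MLR-style meta-learner: per-class non-negative least squares
# over the base-level classifiers' class-probability outputs (Equations 3 and 4).
import numpy as np
from scipy.optimize import nnls

def fit_mlr_weights(proba_list, y, n_classes):
    """proba_list: list of (N, n_classes) probability matrices, one per base classifier."""
    K = len(proba_list)
    weights = np.zeros((n_classes, K))
    for j in range(n_classes):
        A = np.column_stack([P[:, j] for P in proba_list])  # P_k^j(x_n) for each classifier k
        b = (y == j).astype(float)                          # response is 1 for class j, else 0
        weights[j], _ = nnls(A, b)                          # enforces alpha_k^j >= 0
    return weights

def predict_mlr(proba_list, weights):
    # Compute LR_j(x) for every class and assign the class with the greatest value.
    scores = np.stack(
        [np.column_stack([P[:, j] for P in proba_list]) @ weights[j]
         for j in range(weights.shape[0])], axis=1)
    return scores.argmax(axis=1), scores
```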

4 Dynamic Similarity Fusion

Similarity measure is an essential final processing step in image retrieval, where the most common types of queries based on image similarity are k-nearest neighbor and range queries. Current CAD schemes using CBIR approaches typically use the k-nearest neighbor type searching method. Here, a query-specific adaptive similarity fusion approach is proposed by exploiting the real-time classification information and the feature weights generated by the MLR meta learner. In a linear combination, the similarity between a query image $I_q$ and a target image $I_j$ is described as

$$Sim(I_q, I_j) = \sum_{F} \alpha_F S_F(I_q, I_j) = \sum_{F} \alpha_F S(f_q^F, f_j^F) \qquad (5)$$

where $S(f_q^F, f_j^F)$ is the similarity matching function (generally Euclidean) in the individual feature spaces and $\alpha_F$ are the weights (generally decided by users or hard coded in the systems) within the different feature representation schemes $F$.


Table 1. Classification Accuracy

Feature                     Accuracy
Deep Feature (VGG16)        67.43%
Deep Feature (ResNet50)     70.33%
Deep Feature (AlexNet)      70.95%
Combined (Equal Weight)     71.60%
Combined (MLR)              73%

Figure 1. Seven different Skin Image Categories [10].

In the proposed approach, the category of a query image is first determined by employing the classification approach with the meta-learner combiner. Based on the online category prediction of the query image, the pre-computed category-specific feature weights generated by MLR (the $\alpha_k^j$ of Equation 3, used as the $\alpha_F$ of Equation 5) are utilized in the linear combination of the similarity matching function. In this scheme, a particular deep feature might have more weight for a particular image category based on the outcomes of the learnable MLR combiner, and those weights are adjusted dynamically during similarity matching to provide more accurate retrieval.
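A minimal sketch of this query-adaptive retrieval step follows; it assumes features have been pre-computed per image, that the per-category weights come from the MLR combiner, and that all names (`retrieve`, `query_feats`, `db_feats`, `category_weights`) are hypothetical. Converting the Euclidean distance to a similarity via 1/(1+d) is one simple choice, not necessarily the one used by the authors.

```python
# Minimal sketch of category-adaptive similarity fusion for retrieval (Equation 5).
import numpy as np

def retrieve(query_feats, db_feats, category_weights, predicted_cat, top_k=10):
    """query_feats / db_feats: dicts mapping feature name -> query vector / (N, d) matrix.
    category_weights: dict mapping category -> {feature name: alpha_F}."""
    alphas = category_weights[predicted_cat]
    n_images = next(iter(db_feats.values())).shape[0]
    sim = np.zeros(n_images)
    for name, alpha in alphas.items():
        # Euclidean distance in each deep feature space, turned into a similarity score.
        dist = np.linalg.norm(db_feats[name] - query_feats[name], axis=1)
        sim += alpha * (1.0 / (1.0 + dist))
    return np.argsort(-sim)[:top_k]   # indices of the most similar database images
```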

5 EXPERIMENTS AND RESULTS

Distinguishing among melanoma, non-melanoma and other types of benign skin lesions is an important component of a practical skin diagnosis tool and is the main focus of this work, based on our participation in the ISIC 2018: Skin Lesion Analysis Towards Melanoma Detection challenge [10]. Fig. 1 shows the seven disease categories, encoded as MEL: Melanoma, NV: Melanocytic nevus, BCC: Basal cell carcinoma, AKIEC: Actinic keratosis / Bowen's disease (intraepithelial carcinoma), BKL: Benign keratosis, DF: Dermatofibroma and VASC: Vascular lesion, which are provided as ground truth for a training data set of 10,015 images. The images come from the HAM10000 dataset and were acquired with a variety of dermatoscope types from all anatomic sites (excluding mucosa and nails). For this experiment, we split the entire training set of images into separate training, test, and validation sets with 60%, 25%

Table 2. Weights generated by MLR Meta Learner for individual categories/deep feature spaces

Category   VGG16   ResNet50   AlexNet
AKIEC      0.50    0.50       0.50
BCC        0.45    0.70       0.55
BKL        0.60    0.52       0.55
DF         0.67    0.08       0.14
MEL        0.58    0.52       0.55
NV         0.83    0.87       0.85
VASC       0.81    0.48       0.60

and 15% splits. Table 1 shows the final classification results on the test data set using the individual deep features as well as their combination with either equal or MLR-based weights (Table 2). It is observed that the MLR combiner improves accuracy by around 2% compared to equal feature weighting and performs better than any deep feature used individually. As expected, combining classifiers with category-specific individual feature weights over complementary features benefits the performance. For a quantitative evaluation of the retrieval results, we

Table 3. Classification Report Using MLR Meta Learner

Category      Precision   Recall   F1-score   Support
AKIEC         0.50        0.50     0.50       44
BCC           0.45        0.70     0.55       101
BKL           0.60        0.52     0.55       279
DF            0.67        0.08     0.14       25
MEL           0.58        0.52     0.55       483
NV            0.83        0.87     0.85       1528
VASC          0.81        0.48     0.60       44
avg / total   0.73        0.74     0.73       2504


Figure 2. PR graphs in different feature spaces (precision vs. recall for DeepFeature-VGG16, DeepFeature-AlexNet, DeepFeature-ResNet50, Fusion-Equal and Fusion-MLR)

performed an experiment on the same test set used for the evaluation of the classification results. Here, all the images in the test set are selected as query images, and query-by-example (QBE) is used as the search method. A retrieved image is considered a match if it belongs to the same category as the query image out of the seven disjoint categories of Table 3. The retrieval effectiveness is measured with the precision-recall (PR) graphs that are commonly used in the information retrieval domain. Precision (the percentage of retrieved images that are also relevant) and recall (the percentage of relevant images that are retrieved) are used as the basic evaluation measures of retrieval performance. The average precision and recall are calculated over all the queries to generate the precision-recall (PR) curves in various settings. Fig. 2 presents the PR curves for image searches in the individual deep feature spaces and for combining all features based on similarity fusion with equal and MLR-based weighting. By analyzing Fig. 2, we can observe that the best performance in terms of precision at each recall level is obtained when the search is performed in the MLR-weighted combined feature space based on the different image categories.
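For reference, the per-query precision and recall values underlying such PR curves could be computed as in the following minimal sketch; the function and argument names are hypothetical, and the ranking is assumed to come from the similarity fusion step.

```python
# Minimal sketch: precision/recall at each rank for one query-by-example search.
import numpy as np

def pr_curve_for_query(ranked_labels, query_label):
    """ranked_labels: categories of database images, sorted by similarity to the query."""
    relevant = (np.asarray(ranked_labels) == query_label).astype(float)
    hits = np.cumsum(relevant)
    ranks = np.arange(1, len(relevant) + 1)
    precision = hits / ranks                     # retrieved images that are relevant
    recall = hits / max(relevant.sum(), 1.0)     # relevant images that are retrieved
    return precision, recall

# Averaging these curves over all query images gives PR graphs like those in Figure 2.
```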

6 CONCLUSIONS

The objective of this research is to provide dermatologists with a diagnostic aid by automatically classifying and retrieving similar images of different skin cancer types. Early diagnosis through periodic screening with dermoscopic images can significantly improve the survival rate as well as reduce the treatment cost and consequent suffering of patients. Although there has been immense interest in developing CAD systems for automated melanoma recognition

during the last decade, this technology is still in an early development stage, with a lack of effective feature representations and benchmark evaluations on ground-truth data sets. The descriptiveness and discriminative power of features to effectively represent the structure and characteristics of lesions, and to handle the within-class variation and between-class similarity, are critical to achieving good classification and retrieval performance. The novelty of our approach is in exploiting several deep features based on transfer learning and performing classifier combination by applying an MLR-based meta-learner. The other major contribution of this work is our participation in a challenge to support research and development of algorithms for automated diagnosis of melanoma, as well as participation in the largest standardized and comparative study for melanoma diagnosis in dermoscopic images to date. We believe that this work can make a significant contribution to the research and development of smart CAD systems for the next generation of computer-based medical applications. Besides serving as a smart computer-aided decision support tool for clinical diagnosis, teaching and research related to skin cancer are two other vast domains that will be able to take instant advantage of this research.

Acknowledgment This research is supported by a pilot research grant from ASCEND, Morgan State University. We would like to thank the ISIC [10] organizer for making the database available for the experiments.

References [1] Siegel RL. Miller, KD and Jemal, A.: Cancer statistics, 2016. CA: A Cancer Journal for Clinicians, vol. 66, no. 1, pp. 7-30, 2016. [2] Braun RP, Rabinovitz HS, Olivier M, Kopf AW and Saurat JH, Dermoscopy of pigmented skin lesions, J. American Academy of Dermatology, 2005, 52 (1), 109121. [3] Muller H, Michoux N, Bandon D, and Geissbuhler A, A Review of Content-Based Image Retrieval Systems in Medical Applications Clinical Benefits and Future Directions,2004, International Journal of Medical Informatics, 2004, 73(1), 123. [4] Rahman M, Bhattacharya P, and Desai BC, An Integrated Content-Based Retrieval and Expert FusionBased Decision Support System for Automated Melanoma Recognition of Dermoscopic Images, Computerized Medical Imaging and Graphics, 2010, 34(6), 479-486.


[5] Alfonso B, Raffaele M, Emanuele D, Mario M, Oscar G, Stefano B, and Luca G, Definition of an automated Content-Based Image Retrieval (CBIR) system for the comparison of dermoscopic images of pigmented skin lesions, BioMedical Engineering OnLine, 2009, 8(18), doi:10.1186/1475- 925X-8-18 [6] Lucia B, Xiang L, Robert BF, Ben A, Jonathan R, Content-Based Image Retrieval of Skin Lesions by Evolutionary Feature Synthesis, Applications of Evolutionary Computation, Lecture Notes in Computer Science, 2010, 6024, 312-319. [7] Bengio, Y., Courville, A., Vincent, P.: Representation Learning: A Review and New Perspectives. Technical report, Universite de Montreal (2012)


Analysis and Machine Intelligence 20 (3): 226–239, 1998. [17] Xu, L., Krzyzak, A. and Suen C. Y. “Methods of combining multiple classifiers and their applications to handwriting recognition”, IEEE Transactions on System, Man, and Cybernatics 23 (3): 418–435, 1992. [18] K. M. Ting, I. H. Witten, “Issues in Stacked Generalization”, Journal Of Artificial Intelligence Research, Volume 10, pages 271-289, 1999 [19] David D. Fan, Philip K. Chan, and Salvatore J. Stolfo, A comparative evaluation of Combiner and Stacked Generalization, In Proceedings of AAAI-96 workshop on Integrating Multiple Learned Models, 1996,40–46

[8] A. Krizhevsky and G. Hinton, Learning multiple layers of features from tiny images, Masters thesis, Department of Computer Science, University of Toronto, 2009. [9] Chun-Xia Zhang and Robert P.W. Duin, “An Empirical Study of a Linear Regression Combiner on Multiclass Data Sets”, J.A. Benediktsson, J. Kittler, and F. Roli (Eds.): MCS 2009, LNCS 5519, pp. 478487, 2009. Springer-Verlag Berlin Heidelberg 2009 [10] https://challenge2018.isic-archive.com/participate/ [11] O. Russakovsky et al., ImageNet large scale visual recognition challenge, IJCV, 2015. [12] Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large- Scale Image Recognition. In: CoRR abs/1409.1556 (2014). URL: http://arxiv.org/abs/1409.1556 (cited on pages 32, 86, 171). [13] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In: Advances in Neural Information Processing Systems 25. Edited by F. Pereira et al. Curran Associates, Inc., 2012, pages 10971105 [14] Kaiming He et al. Deep Residual Learning for Image Recognition. In: CoRR abs/1512.03385 (2015). URL: http://arxiv.org/abs/1512.03385 (cited on pages 81, 131, 171, 192, 196, 202). [15] Rosebrock, A. (2017). Deep Learning for Computer Vision Practitioner Bundle, https://www.pyimagesearch.com/deep-learningcomputer-visionpython- book/ [16] Kittler, J., Hatef, M., Duin, R.P.W. and Matas, J. On combining classifiers, IEEE Transactions on Pattern


Modelling of Sickle Cell Anemia Patients Response to Hydroxyurea using Artificial Neural Networks

Brendan E. Odigwe1, Jesuloluwa S. Eyitayo2, Celestine I. Odigwe3, Homayoun Valafar1

1 Department of Computer Science & Engineering, University of South Carolina, Columbia, SC, USA.
2 Department of Computer Science, Texas State University, San Marcos, Texas, USA.
3 Department of Internal Medicine, University of Calabar Medical School, Calabar, C.R.S, Nigeria.

Abstract Hydroxyurea (HU) has been shown to be effective in alleviating the symptoms of Sickle Cell Anemia disease. While Hydroxyurea reduces the complications associated with Sickle Cell Anemia in some patients, others do not benefit from this drug and experience deleterious effects, since it is also a chemotherapeutic agent. Therefore, for whom the administration of HU should be considered a viable option is the main question facing the responsible physician. We address this question by developing modeling techniques that can predict a patient's response to HU and therefore spare the non-responsive patients from the unnecessary effects of HU. We analyzed the effects of HU on the values of 22 parameters that can be obtained from blood samples in 122 patients. Using this data, we developed Deep Artificial Neural Network models that can predict, with 92.6% accuracy, the final HbF value of a subject after undergoing HU therapy. Our current studies are focusing on forecasting a patient's HbF response 30 days ahead of time. Key words: Artificial, Neural, Network, Deep, Sickle, Cell, Anemia, predictive.

1. Introduction

Sickle cell anemia is an inherited form of anemia, a condition in which there are not enough healthy red blood cells to carry adequate oxygen throughout the body. Red blood cells contain the oxygen-carrying molecule called hemoglobin (Hb), which comes in different types depending on its composing sub-units. Sickle cell disease patients have the sickle hemoglobin (HbSS) sub-unit, which does not carry the amount of oxygen needed. Fetal hemoglobin (HbF) is another sub-unit that is normally lost as we age but is not subject to the mutation that causes Sickle Cell Anemia (SCA). Hydroxyurea has been observed to increase the amount of HbF in circulation. Most patients respond to Hydroxyurea (HU) therapy with an increase in the HbF concentration of blood [1], thereby reducing the number of incidents related to Sickle Cell Anemia. Although Hydroxyurea, in general, reduces the number of complications of the SCA disease in most patients, it has numerous deleterious effects since it is an agent used during chemotherapy. Therefore, the primary challenge facing a physician who is providing care to an SCA patient is whether

the benefits of HU outweigh its detriments. In the absence of any prior knowledge, a trial process of more than six months is initiated in order to determine a patient's response to a potentially harmful treatment. Pattern recognition and value prediction techniques based on artificial neural networks have had great success, especially in establishing complex relationships and identifying patterns between disparate parameters such as age, gender or genetic information. Artificial Intelligence tools, including Deep Neural Networks (DNN), have historically been used in the development of Clinical Decision Support [2], medical image processing [3][4], and prediction of patient outcomes [5][6]. The availability of predictive tools can be of paramount importance in the development of personalized medicine. More specific to SCA, if the magnitude of the HU-elicited increase in the percentage of fetal hemoglobin in a patient with sickle cell anemia can be predicted, it will aid in the identification of responders and non-responders to the therapy, of non-compliant patients, and in an estimation of the extent of response for the identified responders [7]. This approach could also give physicians the incredibly important ability to predict whether a patient's sickle cell symptom(s) will be significantly reduced by the therapy. Having the ability to look at a patient on an individual basis and determine whether the projected benefits of treatment are worth the deleterious side effects that could ensue will allow for a more specific, patient-guided therapy. Previous work [5] reported the utility of ANNs in predicting the effectiveness of Hydroxyurea therapy for sickle cell anemia patients. In that report, a shallow ANN was used to identify HU treatment responders using a limited definition of responders. Here we continue our previous work by establishing the use of ANNs with a more pragmatic definition of responders as patients who more than double their initial %HbF. We also perform additional data analytics in order to expose the challenging nature of this dataset. Finally, we provide a preliminary report on the utilization of LSTM [8] to predict a patient's response to HU one month in advance.

2. Background

Sickle Cell Disease/Anemia is an autosomal recessive genetic disease where Red Blood Cells (RBCs) take the


shape of a crescent/sickle. The sickle cell mutation is a non-conservative missense mutation where the Glutamate (hydrophilic) in the 6th amino acid of the beta globin is substituted with a Valine (hydrophobic). A hemoglobin tetramer of 2 alpha and 2 mutated beta sub-units is called Sickle Hemoglobin (HbS). HbS is able to carry oxygen, but when deoxygenated, it changes its shape, making it capable of clustering with other HbS complexes that form polymers and distort the shape of red blood cells into a crescent shape (sickling) [9]. This change allows the easier destruction of erythrocytes, causing anemia, among other things [10]. Hydroxyurea (Hydrea or HU) is a preventive medication that reduces the complications associated with Sickle Cell Anemia (SCA). HU operates by increasing the amount of gamma-globin in the RBCs, which results in the expression of more fetal hemoglobin (HbF) that is not subject to the sickle cell mutation. HbF is the primary hemoglobin during the fetal development stages and at birth, which explains why sickle cell symptoms do not appear until a few months of life (when adult Hb, containing the mutated beta globin, starts to dominate) [11]. As with all medications, a positive response is not guaranteed; some patients respond positively, depicted by an increase in the concentration of their fetal hemoglobin, while other patients do not respond positively (see fig 1). While Hydrea reduces the complications associated with SCA, it is a chemotherapeutic agent and may elicit the following side effects: nausea, vomiting, diarrhea or constipation, acute pulmonary reactions, genetic mutation, secondary leukemia, and hair loss, to name a few [12]. It is therefore beneficial to spare the non-responding patients from unnecessary exposure to this drug's side effects. In previous work [5], ANNs have been utilized to predict SCA patients' responsiveness by way of a classification model, which categorizes patients into responders and non-responders. The criteria employed for classification of responders consisted of patients whose HbF concentration exceeded the 15% threshold. Based on this definition of a responder, a person whose %HbF increased from an initial value of 14.9 to a final value of 15.1 is considered a responder. In contrast, a person whose %HbF increased from an initial value of 1.0 to a final value of 14.8 is considered a non-responder. Based on similar observations, the community has scrutinized the definition of a responder, and new definitions have been proposed. In this work, we explore the usability of ANNs as a predictive tool for the early identification of responders based on different criteria. In addition, we present a new approach to utilizing the predictive power of Machine Learning techniques that allows us to remain agnostic to the definition of a responder.

3. Materials and/or Methods

3.1. Sickle Cell Anemia Patient Data

In this experiment, data from 122 sickle cell anemia patients were collected over a varying period, depending on the patient’s response to the treatment and adherence to the


project protocol. The cohort consisted of patients at least 16 years of age who were treated with a daily, oral dose (based on body weight) of Hydroxyurea, with a subsequent increase in dosage if the HbF levels stabilized in successive monthly visits. Only patients who had received HU for 8 months or longer were included in the ANN experiments, although certain patients began to display a positive response well after 8 months of continuous treatment. Blood samples were obtained and analyzed monthly. The values of 22 parameters [5] for each of the 122 patients were recorded for a minimum of 8 months and a maximum of 98 months. The parameters analyzed for each patient are those normally included in a Complete Blood Count (CBC) and Chemistry Profile (SMA18), serial HbF levels (absolute and percent), DNA analyses for globin gene haplotype and the number of genes, treatment duration, weight, age, and gender. The values of these parameters were used to train ANNs and for statistical analyses [5].

3.2. Statistical Analyses

Our initial step in understanding the complexity of this study consisted of performing several standard analytics of the data. In addition to first- and second-order statistical analyses, density estimation, principal component analysis, and correlation coefficient analysis were performed on the entire dataset. Density estimation [13] aims to visualize the effects of HU on patients who undergo the therapy by observing the distribution of HbF before and after the therapy across the entire cohort of participants. Principal component analysis [14] aims to explore meaningful strategies for dimensionality reduction in the principal component space to remove unnecessary complexities of the problem. Correlation coefficient analysis [14], on the other hand, is useful for reducing the dimensionality of the problem in the original parameter space by identifying strongly correlated parameters.
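A minimal sketch of these exploratory analyses is shown below; it assumes the 22 parameters live in a hypothetical pandas DataFrame `df` with `hbf_before` and `hbf_after` columns (the names are placeholders), and it uses a one-tailed paired t-test as described in the results section (SciPy ≥ 1.6 for the `alternative` argument).

```python
# Minimal sketch of the density estimation, PCA, and correlation analyses described above.
import pandas as pd
from scipy.stats import gaussian_kde, ttest_rel
from sklearn.decomposition import PCA

def exploratory_analysis(df: pd.DataFrame):
    corr = df.corr()                              # correlation-coefficient heat map input
    pca = PCA().fit(df.fillna(0).values)          # spectral analysis of the parameter space
    explained = pca.explained_variance_ratio_
    kde_before = gaussian_kde(df["hbf_before"])   # %HbF density before HU therapy
    kde_after = gaussian_kde(df["hbf_after"])     # %HbF density after HU therapy
    t_stat, p_value = ttest_rel(df["hbf_after"], df["hbf_before"], alternative="greater")
    return corr, explained, kde_before, kde_after, p_value
```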

3.3. Artificial neural networks approach

This project utilized Artificial Neural Networks (ANNs) [15] as the primary predictive Machine Learning approach. Our utilization of ANNs as the core predictive tool is based on the minimalist approach of employing the simplest model that satisfies the problem requirements. Therefore, our investigations utilized a spectrum of ANN architectures, including legacy shallow ANNs. Our developments included neural network models using MATLAB [16], or TensorFlow [17] and Keras [18]. Within the dataset, some patients did not have complete values for all 22 parameters. In this work, the missing data were substituted with a zero, since a zero makes no contribution to the updated weights during the learning process of the backpropagation algorithm. A shift of one was added to all data with a legitimate value of 0 (e.g., number of BAN). In addition, the dataset contained data with substantially different ranges. For instance, the age of the patient is measured in days (range in the thousands), while other


parameters such as haplotype units have single-digit values. This disparity in the range of the data is automatically resolved by normalization of the data in MATLAB during the training/testing processes. Due to the limited amount of data, we employed an n-fold (leave-one-out) cross-validation approach: train with n−1 patients, test with the remaining one, and repeat n times, where n is the total number of patients in the dataset. This approach was applied to all experiments conducted in this study. The details related to the optimal network architecture and the training-testing processes are discussed in the individual results sections.
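The following is a minimal sketch of this leave-one-patient-out protocol; the `build_model`, `train_fn` and `predict_fn` callables and the `X`, `y` arrays are hypothetical placeholders for whichever model (MATLAB or Keras) is being evaluated.

```python
# Minimal sketch of the leave-one-out cross-validation described above.
import numpy as np
from sklearn.model_selection import LeaveOneOut

def leave_one_out_eval(X, y, build_model, train_fn, predict_fn):
    errors = []
    for train_idx, test_idx in LeaveOneOut().split(X):
        model = build_model()                              # fresh model for each fold
        train_fn(model, X[train_idx], y[train_idx])        # train with n-1 patients
        pred = float(np.ravel(predict_fn(model, X[test_idx]))[0])
        errors.append(abs(pred - float(y[test_idx][0])))   # error on the held-out patient
    return float(np.mean(errors))
```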

3.4. Specific aims of the project

After data preprocessing, we explored the development of neural network models to carry out the following three aims.

3.4.1. Aim 1: Classify responders and non-responders. We developed a shallow, two-stage, fully connected, feedforward artificial neural network topology in MATLAB with a backpropagation learning algorithm for training and testing. Patients whose post-therapy HbF experienced a 15% increase or more were classified as responders, and anyone whose values did not meet that threshold was classified as a non-responder. This was performed as a repetition of earlier work [19], [20] to establish comparability in results and further ensure the integrity of the results. The 72 of the total 122 patients with the highest number of observations were used for this experiment. Due to the lack of general agreement on the specific percentage regarded as responding, we also established a threshold for a positive response as patients who experience an HbF increase of 100% (double the initial HbF) or more, because of its popularity among the community of Sickle Cell Anemia (SCA) researchers.

3.4.2. Aim 2: Predict final fetal hemoglobin level. After the initial results obtained from implementing a shallow neural network in MATLAB for classifying responders and non-responders, the next step is to attempt to predict a patient's final fetal hemoglobin value after undergoing Hydroxyurea therapy, that is, the highest %HbF observed over the course of medication administration.

Figure 1. Diagram of a feedforward Artificial Neural Network

3.4.3. Aim 3: Prediction of %HbF one month ahead. Although it may be of theoretical benefit to gain prior knowledge of a patient's maximum response to HU therapy (from Aim 2), such information is of little pragmatic use without knowing when the effect will be observed (in 6 months or 6 years of therapy). To resolve this limitation, we have explored a new approach where we predict a patient's %HbF one month in advance, as a preliminary step toward predicting the time when the full effect of the medication will be observed. To that end, we implemented a recurrent neural network (RNN) with Long Short-Term Memory (LSTM) for the time series prediction. A recurrent neural network is a class of artificial neural network where connections between nodes form a directed graph along a temporal sequence. LSTM networks are well-suited to making predictions based on time series data, since there can be lags of unknown duration between important events in a time series, as is the case with the problem at hand [21], [22].

Figure 2: Diagram of an LSTM Neural Network

Regular feed-forward and recurrent deep neural networks were developed using TensorFlow (an interface for expressing and executing machine learning algorithms) and Keras (a Python machine learning application programming interface that allows you to easily run neural networks). Training and testing were performed using the patient data previously obtained after pre-processing. The neural network architecture was regularly modified to obtain the optimal model. The neural network model used for the final HbF prediction was a deep model with 3 hidden layers: 21 input neurons, 12 neurons in the first hidden layer, 8 neurons in the second hidden layer, 4 neurons in the third hidden layer, and a single neuron at the output layer (21-12-8-4-1), since just a single value prediction is expected. We made use of the Adam optimizer for the iterative update of the network weights and, for each layer within the model, the rectified linear unit (ReLU) activation function, with the exception of the final layer, which has a linear activation function because the output is expected to be a continuous value. This network architecture was used to predict the final fetal hemoglobin value after treatment.
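A minimal Keras sketch of the 21-12-8-4-1 regression model described above follows; the optimizer and activations match the text, while the loss function and the `X_train`/`y_train` names are assumptions for illustration.

```python
# Minimal sketch of the 21-12-8-4-1 deep regression model for the final %HbF value.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(12, activation="relu", input_shape=(21,)),  # first hidden layer
    Dense(8, activation="relu"),                      # second hidden layer
    Dense(4, activation="relu"),                      # third hidden layer
    Dense(1, activation="linear"),                    # continuous final-%HbF output
])
model.compile(optimizer="adam", loss="mse")           # loss choice is an assumption
# model.fit(X_train, y_train, epochs=200, batch_size=8)  # training data hypothetical
```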

The neural network model used for the periodic prediction was a recurrent LSTM model. The data were prepared by normalizing the features and framing the dataset as a supervised learning problem: predicting the HbF response at the current month given the response and patient biological data from the previous month. The first input layer constitutes 50 LSTM units and the output layer a single neuron. The


input shape is 1 time-step with 22 features, because the input layer receives the 21 parameter values of the current time-step and the resultant prediction of the previous time-step.
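A minimal Keras sketch matching this description (50 LSTM units, input shape of 1 time-step × 22 features, single output) is given below; the loss function and data array names are assumptions.

```python
# Minimal sketch of the LSTM used for month-ahead %HbF prediction.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

model = Sequential([
    LSTM(50, input_shape=(1, 22)),   # 21 patient parameters + previous month's %HbF
    Dense(1),                        # next month's %HbF
])
model.compile(optimizer="adam", loss="mae")   # loss choice is an assumption
# X has shape (samples, 1, 22); y holds the following month's %HbF (data hypothetical).
```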

4. Results and discussion

4.1. Statistical analyses

As the first step in conveying the complexity of the problem, Figure 3 illustrates some examples of patient responses to the HU treatment. As illustrated in these plots, not all patients' responses to HU result in monotonically increasing levels of HbF. Furthermore, it is evident that in some instances an initial period of response is followed by a decline in the HbF levels. Visual inspection of these response profiles demonstrates the complexity of predicting a patient's response to HU treatment.

Figure 4. Density distribution of SCA patients' %HbF before and after undergoing hydroxyurea therapy

The correlation coefficient analysis results are shown in the heat map in Figure 5. As shown in this figure, the 22 parameters of this problem show correlation values within the mid-range of -0.5 to 0.5, indicating that there is no substantial correlation between any two parameters in the dataset to assist with dimensionality reduction.

Figure 3. Plots showing the varying response of different patients' fetal hemoglobin to Hydroxyurea

Application of density estimation provides supporting evidence for the effectiveness of HU. Figure 4 shows the distribution of %HbF across all patients before (depicted with a continuous line) and after (depicted with a dashed line) the Hydroxyurea therapy. The vertical lines in Figure 4 correspond to the mean values of each distribution. It is evident from this figure that there is a clear increase in the %HbF for most patients. In fact, the figure shows a notable increase in the average HbF value of the patients in the population. Furthermore, the distortion in the distribution of the %HbF before and after the HU therapy indicates that patients exhibit an unequal response to the drug. A one-tailed Student's t-test was applied to the results of the statistical analyses to confirm that the observed variation in the concentration of HbF, as well as other results, was due to the HU administered and not due to random events. A confidence level of 0.001 was the minimum value accepted as significant in the results of the Student's t-tests.

Figure 5. Correlation coefficient heatmap of all parameters in the patient dataset

Finally, the results of the spectral analysis obtained from principal component analysis are summarized in Figure 6. Based on the gradual decrease of the spectrum, it can be concluded that dimensionality reduction in the principal component space is not a very effective approach.


Figure 6. Spectral analysis from Principal Component Analysis conducted on the patient dataset

In summary, the collective results of the statistical analyses expose the complexity of the problem and suggest that it could benefit from the application of artificial neural networks.

4.2. Results for Aim 1

The neural network utilized during this experiment consisted of a single hidden layer with 4 hidden neurons. This ANN was able to identify responders and non-responders based on the previously reported 15% threshold with 83 percent accuracy. Although redundant, repeating the previous results was necessary to establish equality and continuity between the current and past work. A total of 72 patients were used for this study, and 60 of them were accurately classified by the neural network.

Figure 7: Predicted HbF value and their corresponding Actual Values

The classification experiment was repeated with the same ANN topology using the doubling of %HbF criterion for the identification of responders. During this experiment, 98 of 122 patients were predicted correctly, producing a final accuracy of 80.5%. Specificity (the percentage of responders which were actually predicted as responders) was 88.5% and sensitivity (the number of responders which were predicted to be non-responders) was 60%. In general, the performance of this network indicated a slight bias towards the prediction of responders.

4.3. Results for Aim 2

Predicting the final value of a patient's HbF concentration allows the data analysts to avoid the lack of consensus within the community of physicians and caregivers. In this mode, we can simply predict the final %HbF value that a patient will achieve and allow the physicians to justify the use of HU. This restatement turns the problem definition from a classification problem into a regression problem: the model predicts a real, continuous value. This experiment was performed on the total dataset comprising 122 patients. During this study we utilized a neural network with 3 hidden layers and 24 (12-8-4) total neurons, which was modelled with TensorFlow and Keras. Experimentation was carried out using data from all 122 patients for training and testing. During the evaluation step, we used a percentage error threshold of 30% between the predicted value and the actual value. That is, if the actual value is 10 and the predicted value (P) is greater than or equal to 7 and less than or equal to 13, it is considered correct; otherwise, the prediction is considered false. With this approach to evaluation, 113 of the 122 predictions were correct, giving a 92.6% accuracy level.
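This tolerance-band evaluation can be written down as in the short sketch below; `y_true` and `y_pred` are hypothetical arrays of actual and predicted final %HbF values.

```python
# Minimal sketch of the +/-30% tolerance evaluation described above.
import numpy as np

def tolerance_accuracy(y_true, y_pred, tol=0.30):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    # Correct if the prediction lies within tol * actual of the actual value,
    # e.g. an actual value of 10 accepts predictions between 7 and 13.
    correct = np.abs(y_pred - y_true) <= tol * np.abs(y_true)
    return correct.mean()
```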

Figure 8: Percentage Error Distribution of all ANN results

4.4. Results of Aim 3

Although the reformulation of the problem from classification into regression domain alleviates the dependency on the accepted definition of a responder, it does not remove other existing logistical limitations. For instance, while the regression approach may predict correctly the maximum achievable value of %HbF, it fails to provide the conditions under which such outcomes can be achieved. Some of the critical values that may be required in this prediction are the drug dosage and the time needed for the HU therapy to take the maximum effect. Here we present the results of a preliminary approach that allows prediction of


%HbF for a given patient 30 days ahead of time. In this exercise we employed LSTM networks (modelled in TensorFlow and Keras) on a small subset of patients. This was performed as a series of regression problems (depending on the number of months the patient underwent therapy) with continuous output values at each step. For each patient, we used a series of expected values reflecting the next month's HbF value. The monthly predictions are expected to follow the same sequence of progression or decline to be considered a successful prediction. The Pearson correlation coefficient [14] (PCC) was used as a metric to determine the similarity in the monthly progression of the patient's predicted and the expected HbF


values. The results corresponding to our six preliminary patients are shown in Figure 9. In this figure, trends illustrated in red and green correspond to the actual and the predicted response trajectories of the six test patients, respectively. Four of these patients exhibited a PCC between 0.6 and 1, one patient exhibited a PCC in the intermediate range of 0.3-0.59, and the remaining patient exhibited a poor PCC score (less than 0.29). The training process consisted of utilizing monthly observations across 121 subjects. The testing process consisted of obtaining the next predicted value of HbF for the excluded patient over the course of their therapy. This process was repeated for 6 patients.
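For completeness, the PCC comparison of an actual and a predicted trajectory could be computed as in this short sketch; the two series are hypothetical inputs.

```python
# Minimal sketch of the PCC-based comparison of predicted vs. actual monthly %HbF trajectories.
from scipy.stats import pearsonr

def trajectory_pcc(actual_series, predicted_series):
    pcc, _ = pearsonr(actual_series, predicted_series)
    return pcc  # e.g. >= 0.6 was treated as a good match in the evaluation above
```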

Figure 9: Sample patients' time-series results predicted with the LSTM neural network (HbF on the y-axis and time-step in months on the x-axis)


5. Conclusion

Neural networks are capable of recognizing patterns and complex relationships between parameters which are not easily discernible by man unaided by a machine. The parameter values used for this experiment were obtained before and after the patients underwent Hydroxyurea therapy. We have demonstrated that we can (with 83% accuracy) identify patients that will respond to Hydroxyurea (HU) therapy. We have been successful at predicting (with 92.6% accuracy) a patient's final fetal hemoglobin value after undergoing HU therapy such that the percentage error between the actual value and the predicted value is less than or equal to 30%. To improve the confidence in the model’s prediction, we have gone a step further with an 81.96% accuracy within a 15% error margin of expected and actual HbF values.

Regardless of the way the output classes are categorized, training ANNs to accurately distinguish between different classes of patients is of value to the medical community. This experiment shows that ANNs are capable of exploiting correlations between medical data that physicians are not able to identify unaided. This approach and others like it will positively influence medical decision making, administration of intervention procedures and improve the practice of precision medicine. We have also demonstrated the potential for using a single, well trained general model to predict the response trajectory of a Sickle Cell Anemia (SCA) patient to Hydroxyurea (HU) with 60% accuracy.

6. References [1] M. H. Steinberg, Z. H. Lu, F. B. Barton, M. L. Terrin, S. Charache, and G. J. Dover, “Fetal hemoglobin in sickle cell anemia: determinants of response to hydroxyurea. Multicenter Study of Hydroxyurea,” Blood, vol. 89, no. 3, pp. 1078–1088, Feb. 1997. [2] R. K. Price, E. L. Spitznagel, T. J. Downey, D. J. Meyer, N. K. Risk, and O. G. el-Ghazzawy, “Applying artificial neural network models to clinical decision making,” Psychol. Assess., vol. 12, no. 1, pp. 40–51, Mar. 2000. [3] D. Shen, G. Wu, and H.-I. Suk, “Deep Learning in Medical Image Analysis,” Annu. Rev. Biomed. Eng., vol. 19, pp. 221– 248, Jun. 2017. [4] A. S. Miller, B. H. Blott, and T. K. Hames, “Review of neural network applications in medical imaging and signal processing,” Medical & Biological Engineering & Computing, vol. 30, no. 5. pp. 449–464, 1992. [5] H. Valafar et al., “Predicting the effectiveness of hydroxyurea in individual sickle cell anemia patients,” Artificial Intelligence in Medicine, vol. 18, no. 2. pp. 133– 148, 2000. [6] A. Das et al., “Prediction of outcome in acute lowergastrointestinal haemorrhage based on an artificial neural network: internal and external validation of a predictive model,” Lancet, vol. 362, no. 9392, pp. 1261–1266, Oct. 2003.

[7] J. Y. Ryu, H. U. Kim, and S. Y. Lee, “Deep learning improves prediction of drug-drug and drug-food interactions,” Proc. Natl. Acad. Sci. U. S. A., vol. 115, no. 18, pp. E4304–E4311, May 2018. [8] T. Pham, T. Tran, D. Phung, and S. Venkatesh, “Predicting healthcare trajectories from medical records: A deep learning approach,” J. Biomed. Inform., vol. 69, pp. 218–229, May 2017. [9] M. D. Hoban et al., “Correction of the sickle cell disease mutation in human hematopoietic stem/progenitor cells,” Blood, vol. 125, no. 17, pp. 2597–2604, Apr. 2015. [10] J. W. Childs, “Sickle cell disease: The clinical manifestations,” The Journal of the American Osteopathic Association, vol. 95, no. 10. p. 593, 1995. [11] R. E. Ware, “How I use hydroxyurea to treat young patients with sickle cell anemia,” Blood, vol. 115, no. 26. pp. 5300– 5311, 2010. [12] S. K. B. P. Singh and S. Ballas Priya, “Idiosyncratic Side Effects of Hydroxyurea in Patients with Sickle Cell Anemia,” Journal of Blood Disorders & Transfusion, vol. 04, no. 05. 2013. [13] D. W. Scott, Multivariate Density Estimation: Theory, Practice, and Visualization. John Wiley & Sons, 2009. [14] C. Meng, O. A. Zeleznik, G. G. Thallinger, B. Kuster, A. M. Gholami, and A. C. Culhane, “Dimension reduction techniques for the integrative analysis of multi-omics data,” Brief. Bioinform., vol. 17, no. 4, pp. 628–641, Jul. 2016. [15] D. E. Rumelhart and J. L. McClelland, Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Foundations. Volume 1. MIT Press, 1987. [16] “Matlab Neural Network Toolkit,” Computational Ecology. pp. 139–160, 2010. [17] N. Ketkar, “Introduction to Tensorflow,” Deep Learning with Python. pp. 159–194, 2017. [18] N. Ketkar, “Introduction to Keras,” Deep Learning with Python. pp. 97–111, 2017. [19] H. Valafar et al., “Predicting the effectiveness of hydroxyurea in individual sickle cell anemia patients,” Artif. Intell. Med., vol. 18, no. 2, pp. 133–148, Feb. 2000. [20] S. Roushanzamir, H. Valafar, and F. Valafar, “A comparative study of linear and quadratic discriminant classifier techniques for variable selection: a case study in predicting the effectiveness of hydroxyurea treatment of sickle cell anemia,” IJCNN’99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339). . [21] A. Sarah, K. Lee, and H. Kim, “LSTM Model to Forecast Time Series for EC2 Cloud Price,” 2018 IEEE 16th Intl Conf on Dependable, Autonomic and Secure Computing, 16th Intl Conf on Pervasive Intelligence and Computing, 4th Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress(DASC/PiCom/DataCom/CyberSciTech). 2018. [22] Y.-T. Tsai, Y.-R. Zeng, and Y.-S. Chang, “Air Pollution Forecasting Using RNN with LSTM,” 2018 IEEE 16th Intl Conf on Dependable, Autonomic and Secure Computing, 16th Intl Conf on Pervasive Intelligence and Computing, 4th Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress(DASC/PiCom/DataCom/CyberSciTech). 2018.


A Cloud-based Intelligent Remote Patient Monitoring Architecture

Khalid Alghatani
Department of Computer Science and Engineering
New Mexico Institute of Mining and Technology
Socorro, NM, USA
[email protected]

Abdelmounaam Rezgui
School of Information Technology
Illinois State University
Normal, IL, USA
[email protected]

Abstract—The increase in population and the need for high standard of welfare and healthcare are today's challenges for emerging technologies in healthcare. The monitoring of patient health status and medical variables is essential in physicians’ diagnostic processes. We propose a new approach for intelligent remote patient monitoring (RPM). We recommend a system architecture that involves all major groups in any healthcare services. The proposed solution will be entirely cloud-based, enabling hospitals to achieve a more cost-effective management, higher speed for their medical processes, and increased quality of medical services. The main objective of the system is to monitor the health status of patients and generate alerts when undesired medical conditions are predicted.

Keywords—Intelligent Remote Patient Monitoring (IRPM), Telemedicine, Cloud Computing, Vital Signs Measurements, System Architecture

I. INTRODUCTION

Cloud computing has been dubbed the 'next big thing' in healthcare IT and can bring tremendous benefits to healthcare organizations. Cloud computing has the potential to consolidate patient data through centralized storage and organization, which will help healthcare organizations make better decisions and reduce operational costs. RPM uses devices such as sensors or actuators attached to the human body to collect patient data and send it to a hospital or a health agency for interpretation. In our solution, the data will be sent to the cloud computing service provider. RPM services can be used to supplement the use of visiting nurses, and RPM services in this cloud computing solution will enhance the quality of healthcare services. Cloud computing offers new opportunities to transform healthcare delivery in a more reliable and sustainable manner. Improved monitoring of discharged patients' health-related quality of life will reduce the cost of treatment and detect illness early. Intelligent Remote Patient Monitoring (IRPM) is a smart way to utilize patient data by applying a set of machine learning-based techniques that enable the system to generate highly automated health-related recommendations. We will narrow the scope of IRPM to vital signs and medical variables, which will help detect, predict, and prevent some of the diseases related to general health. Vital signs are measurements of the body's most basic functions that give clues to possible diseases and show progress toward recovery. The main vital signs are body temperature, pulse rate, respiration rate, and blood pressure. Normal vital signs change with age, sex, weight, exercise capability, and overall health.

To add the intelligent layer to our system, we will involve machine learning techniques. Machine learning is a method of data analysis that automates analytical model building; it identifies patterns and makes decisions with minimal human intervention. The process of learning is called training, and the output that this process produces is called a model. A model can be provided with new data and can reason about this new information based on what it has previously learned. Machine learning has three main types of models: classification, clustering and regression.

In brief, the solution is designed to allow patients or potential patients to collect vital signs or other physiological signals through medical devices or sensors and send the data to a centralized cloud-based system. The system will store the data, then perform data analysis and data mining to generate reports to be used by the system's participant groups. The research is significant because it provides an automated way to collect patients' data, such as medical conditions, and store them in a centralized system. Remote patient monitoring on a cloud computing platform reduces the need for hospital admission by discovering a small health issue before it becomes a significant one. It will improve the quality and efficiency of health care delivery. For example, existing processes for collecting patients' vital data require a great deal of labor to collect, input and analyze the information. These processes take time and are subject to errors, and they cause a latency that makes real-time data accessibility impossible. The system will help reduce hospital emergency room (ER) waiting times. Also, it will help patients avoid staying in hospitals for a long time.

IRPM systems can also benefit businesses. IRPM deployment attempts to bring change and benefit to a health organization by using the system to achieve some of its goals. For example, IRPM will allow more access to care by delivering care in rural areas that suffer from a lack of access to health care due to geography or limited resources. Moreover, IRPM will save costs by creating a new healthcare delivery method that


allows the sharing of resources between hospitals by using a cloud-based platform. Furthermore, IRPM will allow healthcare providers to expand their market by servicing more patients. Also, IRPM will help create a chronic care model for people in a specific area who suffer from a common disease, such as heart disease, that carries high costs. The system is significant for healthcare providers because it will increase profit by opening new business opportunities and saving hospital operation cost and time. It will also enhance home care services. For insurance companies, the system can improve quality of life and keep people healthy, which will save the cost of treatment. Healthcare manufacturers must think differently: they must provide IP-enabled products that can send data to a destination address. Products should be easy to use, easy to carry, and have a long battery life. Non-healthcare manufacturers can participate by including some health applications.

The limitations of running RPM services in the cloud are unavoidable. Poor or lost communication, especially in rural areas, will affect the quality of the service. One restriction is compatibility with the system; if the RPM device cannot communicate, then there is no way to exchange data. The system stores specific patient data, so it must follow HIPAA (Health Insurance Portability and Accountability Act) regulations to protect individuals' personal health information. Not all RPM services can be delivered by a cloud: IP-based devices should support wired or wireless networking, while stand-alone and closed products cannot communicate. The risks of running RPM services in the cloud are likewise unavoidable. RPM services might generate false data, which will affect the system functions. One risk is loss of identity, which happens when the devices are used by different people. An RPM system should run on a secure and reliable cloud computing platform. This includes all needed appliances such as firewalls, intrusion detection and prevention, and authentication systems. The system should account for confidentiality policy, traceability and privacy of the data stored in the cloud. Since it is patient data, the system should meet HIPAA requirements or any other required standards. The system also needs a service level agreement (SLA).

The proposed system architecture is a shared platform designed to run on a cloud computing environment. Cloud computing has many features: it reduces the cost of healthcare service implementation and provides rapid service and infrastructure availability. Resource pooling is one cloud computing feature that allows multiple customers to access resources at the same time. Also, cloud computing frees the customer to select services of their choice from the service catalog. To answer what happens next to a patient, we might need experienced physicians to respond; however, clinical prediction models can answer too. Applying machine learning techniques will help predict illness from vital sign variables, and that is the key difference between our proposed intelligent remote patient monitoring system and traditional remote patient monitoring systems.

II. BACKGROUND
We now explore telemedicine, cloud computing, and machine learning in the healthcare industry. We go through some telemedicine barriers and explain current solutions that are widely used in the healthcare industry. We then give an overview of past and current research and development work in RPMs. Afterward, we define cloud computing and list some cloud computing advantages, issues, challenges, and limitations. Finally, we present machine learning and its applications in healthcare.

A. Telemedicine
Telemedicine solutions have successfully enhanced the quality and accessibility of medical care by allowing distant providers to evaluate, diagnose, treat, and provide follow-up care to patients. Telemedicine uses telecommunications technology as a medium for the provision of medical services to overcome geographical barriers and to increase access to healthcare services.

1) Telemedicine Overview
Rapid developments in technology are enabling healthcare organizations to adopt new methods of providing healthcare. Telemedicine is a crucial initiative for healthcare organizations today. It is needed to optimize and support more types of health services for all ages, and it makes healthcare more affordable for the poor and the elderly. Telemedicine can be used to provide preventive care in addition to emergency treatment, and it is a useful way to provide remote rehabilitation monitoring and chronic disease relief. However, telemedicine deployment faces many barriers at different levels [1]. Healthcare organizations are working to implement telemedicine solutions for several reasons, including reducing costs, improving patient services, providing improved access to specialists and to care, educating patients, and expanding the geographic footprint of the organization. Telemedicine's common elements include providing clinical support, overcoming geographical barriers, involving the use of various types of ICT, and improving health services. The massive improvement and high utilization of technology by the general population have been the biggest drivers of telemedicine over the past decade, which opens room for healthcare providers to extend and innovate new ways of delivering healthcare services worldwide by enhancing access, quality, efficiency, and cost-effectiveness [2].

2) Barriers to Telemedicine
There are several barriers to implementing telemedicine. According to the Global Observatory for eHealth, the top four potential barriers facing countries in their implementation of telemedicine services are cost, legislation, culture, and infrastructure [2]. The cost of implementing telemedicine is high, especially the setup cost; however, cost can be lowered by using cloud computing, and the service price will decrease because the system is designed to be customizable and open to general users. Legal issues are a major obstacle. However, after considering and
complying with standard regulations like HIPAA, the issues related to patient privacy and confidentiality will be mitigated. The third barrier is culture, on both the healthcare organization and the patient side: both resist adopting healthcare services that differ from traditional ways. However, we envision a solution that will not require a big change in healthcare service delivery; the system adds a support layer that improves the quality of health services. Poor infrastructure, e.g., insufficient communication networks and low Internet speed, has an impact on the quality of telemedicine services. We propose an architecture based on cloud computing that is more scalable, reliable, and flexible, and we present ideas that reduce the risk of patient data loss due to communication interruption.

3) Current Telemedicine Solutions
There are some services that telemedicine is capable of supporting today. The primary services currently used worldwide include the following:
• Teleradiology: the use of telecommunication to transmit digital radiological images, such as X-rays, across geographical locations for interpretation and consultation [3]. One of the main benefits of teleradiology is financial, because it brings the images to the qualified radiologist rather than vice versa. Teleradiology services can be implemented on top of cloud services, combining the benefits of the cloud with the quality of radiology services, and facilitating the sharing of clinical information, medical imaging studies, and patient diagnostics [4].
• Telepathology: the use of telecommunication to transmit digitized pathological results (e.g., microscopic images of cells) for the purpose of interpretation and consultation. Pathology plays an essential role in identifying the characteristics as well as the cause of a disease. Together with current technological advances in medicine, pathology continues to provide information to medical professionals as well as researchers for further investigation, in the form of telepathology. Telepathology is a way of using images in an electronic format rather than viewing a glass slide [5].
• Teledermatology: the use of telecommunication to transmit medical information concerning skin conditions for the purpose of interpretation and consultation.
• Telepsychiatry: the use of telecommunication for psychiatric evaluations and consultation via video and telephony. Telepsychiatry provides increased access to mental health services.
In our opinion, IRPM, or telemonitoring, deserves to be the top telemedicine solution in the market, and it will improve the quality of delivering the other telemedicine solutions.


4) Remote Patient Monitoring Services
Much research has explored patient monitoring systems, but, to the best of our knowledge, little research has been done on developing and deploying RPMs that can be shared by several entities such as patients, hospitals, insurance companies, and government agencies. In this section, we give an overview of past and current research and development work on RPMs.

Several researchers have examined the potential for RPMs to improve the quality of healthcare services. For example, the authors in [6] present a preliminary performance study of mobile cloud to demonstrate its potential in performing continuous health monitoring in daily life and achieving higher diagnostic accuracy. In [7], the authors found that the use of remote monitoring is a promising approach that has the potential to reduce morbidity and increase patient satisfaction in non-homebound heart failure patients.

There is active research on utilizing cloud computing platforms in RPMs, such as the mobile cloud study in [6] mentioned above. Furthermore, the work in [8] proposed a platform that integrates mobile application technologies and cloud computing to provide a secure, robust, scalable, and distributed backend for hosting health services that cost-effectively improve quality of life. In [9], the authors present a new hybrid framework based on a mobile multimedia cloud that is scalable and efficient and provides a cost-effective monitoring solution for noncommunicable disease patients. They propose a novel evaluation model based on the Analytical Hierarchy Process (AHP) and found that healthcare and cloud computing are a natural fit for monitoring patients with chronic diseases. The authors in [10] propose a probability-based bandwidth model in a telehealth cloud system and design an effective cloud-based telehealth system that helps cloud brokers allocate the most efficient computing nodes and links.

Numerous studies have examined the benefits of wireless sensor networks (WSNs) for collecting patient data. For example, the authors of [11] show how to accurately track indoor positions, recognize physical activities, and monitor vital signs in real time; however, that work is an overview without intelligent components. In [12], the authors deliver an integrated telemedicine service that automates the collection of patients' vital data. They used a wireless sensor network and sent the data to a cloud computing solution. The implementation provides always-on, real-time data collection, which helps eliminate manual collection and the typing errors possible in traditional systems. Moreover, the authors in [13] propose an integrated model based on an IoT architecture and a cloud computing telehealth center to integrate software, hardware, and healthcare systems. They present an analytics module to support the diagnosis of some diseases, and specific features are then compared with recently deployed conventional models in telemedicine.

B. Cloud Computing in Healthcare
Cloud computing is a computing model based on distributed computing, process automation, and virtualization technologies.

Cloud computing is the basic environment and platform of future healthcare. It provides quick, secure, reliable, and inexpensive service and allows users to access the service at any time from different devices [14].

1) Cloud Computing Characteristics and Advantages
Cloud computing can help innovate and deploy applications quickly on a small budget [15]. It provides inexpensive services, and clients do not need to worry about the maintenance and management of IT resources [14]. Cloud end users can access cloud resources directly through smart devices. Moreover, cloud computing can work as a distributed system: IRPMs can be deployed on decentralized servers and allow patient data transmission over the Internet. Cloud computing can also extend IT resources dynamically based on need. According to the National Institute of Standards and Technology (NIST), "cloud computing essential characteristics are on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service" [16]. We propose an architecture that targets broad network access to maximize communication capability and availability. The IRPM service can be measured and utilized efficiently on top of cloud computing environments. The IT resources of IRPM include storage, processing, memory, and network bandwidth; these can be elastically provisioned and released based on system usage.

2) Issues, Challenges and Limitations
Despite the benefits that cloud computing offers, there are numerous issues and challenges for organizations embracing this new paradigm. The major challenges are security, data management, governance, control, reliability, availability, and business continuity [17], [14]. Moreover, there are numerous issues and challenges for adopting telemedicine in cloud computing:
• Considering critical success factors (time savings, accuracy, large scale).
• Building strategic thinking.
• Security and privacy issues.
• Privacy risks involving a lack of control over the collection, use, and sharing of data.
• Applying and complying with HIPAA privacy and security regulations.
• Lack of trust in the use of telemedicine.
• Unauthorized access to data during collection, transmission, or storage.
• Authorization, authentication, and accounting problems in general.
• The need for network security and cryptography.
• Globalized telemedicine service issues.
Each point above deserves to be a research topic on its own, but we focus here on building the system architecture.

C. Machine Learning in the Healthcare Industry
Machine learning is helpful in the healthcare industry. Classification is the most prominent machine learning technique, since it corresponds to a task that frequently occurs in everyday life, such as classifying medical patients as suffering from a certain illness or being at risk of acquiring it. Popular classification models applied to medical problems include neural networks, support vector machines, and decision trees. Classification models can help predict illness with the restriction that the value to be predicted is a discrete class. However, there are situations in which the goal is to provide a numerical prediction, which is known as regression. Another type of machine learning is clustering, which consists of examining the available data to find groups of examples that are similar in some way [18]. Classification is widely used in medicine to detect or diagnose a disease or even to determine its severity. Classification techniques have been used not only to support the diagnosis of different diseases but also to analyze clinical information in the form of text or reports. When the goal is to provide a numerical prediction, regression techniques are used instead of discrete classes [18]. Deep learning is a particular family of machine learning algorithms based on artificial neural networks.

Several researchers have added intelligence to RPM. For example, the authors of [19] introduce DeepCare, an end-to-end deep dynamic neural network that reads medical records, stores previous illness history, infers current illness states, and predicts future medical outcomes. They claim that the results are competitive with current state-of-the-art methods and argue that DeepCare opens up a new principled approach to predictive medicine. In [20], the authors found that Recurrent Neural Networks (RNNs), particularly those using Long Short-Term Memory (LSTM) hidden units, are powerful and increasingly popular models for learning from sequence data; they effectively model sequences of different lengths and capture long-range dependencies. They present the first study to empirically evaluate the ability of LSTMs to recognize patterns in multivariate time series of clinical measurements. Our approach aims at applying machine learning in an architecture that serves global users (sick or not). The authors of [21] present semi-supervised sequence learning for cardiovascular risk prediction using a multi-task LSTM; they claim their work is a first step in showing how health conditions can be detected using techniques first developed for natural language processing and computer vision. Furthermore, the work in [22] proposed a generic framework to predict septic shock based on an LSTM capable of memorizing temporal dependencies over a long period; the experimental results demonstrate the superiority of the proposed framework and the effectiveness of LSTM compared with multiple baselines. Our proposed architecture aims at monitoring general health based on vital signs and medical variables, which will answer some questions about what happened to the patient before he or she was admitted to a hospital and after discharge.
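To make the LSTM idea referenced above concrete, the following is an illustrative sketch only (not code from the cited works or from our system) of an LSTM classifier over fixed-length windows of vital-sign time series; the window length, feature count, and data are hypothetical:

# Illustrative sketch, assuming TensorFlow/Keras; shapes and data are placeholders.
import numpy as np
import tensorflow as tf

T, F = 48, 5                                       # e.g., 48 time steps x 5 vital-sign channels (assumed)
X = np.random.rand(256, T, F).astype("float32")    # placeholder sequences
y = np.random.randint(0, 2, size=(256, 1))         # placeholder labels (ill / not ill)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(T, F)),
    tf.keras.layers.LSTM(32),                      # summarizes the temporal pattern of the window
    tf.keras.layers.Dense(1, activation="sigmoid") # probability of the positive (ill) class
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)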


In addition, some research has studied mortality prediction in the Intensive Care Unit (ICU). For example, the authors of [23] compare sliding-window predictors with recurrent predictors to classify patient state-of-health from ICU multivariate time series; they report slightly improved performance for the Recurrent Neural Network (RNN) for three out of four targets. Moreover, in [24], the authors propose a novel ICU mortality prediction algorithm combining a bidirectional LSTM model with supervised learning. They train and evaluate the LSTM model using a real-world dataset containing 4,000 ICU patients, and experimental results show that their proposed method can significantly outperform many baseline methods. However, our goal is to reduce hospital admissions and waiting time by monitoring people (sick or not).

III. INTELLIGENT REMOTE MONITORING ARCHITECTURE
We now present the intelligent remote monitoring architecture. The architecture shows the four main groups who will use the system. We highlight the most challenging part of the system, which is data collection from RPM devices, and we list some specific healthcare applications and their solutions using machine learning techniques.

A. IRPM System Architecture
The proposed system in Figure 1 interprets information from different RPM devices. RPM devices are used to diagnose, monitor, and treat medical variables. The variables are not treated individually; they are made available to professionals who can interpret the information. For example, age, weight, and height are quantitative variables, and race, gender, and smoking are categorical variables, so a professional can see all variables on a single screen even though the data were collected from different RPM devices. The RPM devices must be compatible with the system and follow a common standard to enable integration. The proposed system will be used by four major groups:

Patients: Patients who suffer from chronic conditions require continuous monitoring and tracking of the history of their health measurements. We can extend this group to include potential patients who would like to get the benefits of the system. Patients can select the type of service, including which vital sign measurements to include. Moreover, patients have view access to the system to monitor themselves and to receive system alerts and recommendations. Furthermore, patients should read and accept a service level agreement (SLA) for security and privacy purposes.

Hospitals: Hospital physicians, nurses, and clinicians can access the system inside or outside the hospital through a system portal page or by integrating the system with the hospital electronic health record (EHR). The system adds features to the hospital EHR system, and integration with the EHR using Health Level Seven (HL7) is recommended. Based on the system's results, a hospital can take action to treat the patient, such as a call for a visit, home care, or a referral to another clinic or hospital.

Insurance Companies: They will handle the medical insurance plans and coverage.

Controller: An organization that sets up, develops, operates, and maintains the system. It can be a for-profit or a non-profit organization. It coordinates the type of service between the other three groups involved in the system. Moreover, it can apply data analysis and generate statistics based on the collected data. Furthermore, it can generate alerts based on unusual data and enable the system to generate highly automated health-related recommendations for a single patient, hospital, or organization.
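For illustration, the following is a minimal sketch, under our own assumptions rather than any standard or the authors' design, of the kind of reading record an RPM device might send to the cloud platform; all field names are hypothetical:

# Hypothetical vital-sign record exchanged between an RPM device and the cloud system.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class VitalSignReading:
    patient_id: str      # pseudonymous identifier (no direct identifiers, per HIPAA)
    device_id: str
    measurement: str     # e.g., "pulse_rate", "body_temperature"
    value: float
    unit: str
    recorded_at: str     # ISO-8601 timestamp

reading = VitalSignReading(
    patient_id="P-0001", device_id="D-42", measurement="pulse_rate",
    value=88.0, unit="bpm",
    recorded_at=datetime.now(timezone.utc).isoformat(),
)
payload = json.dumps(asdict(reading))  # body of the upload request to the cloud API
print(payload)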

Fig 1. Proposed Intelligent Remote Patient Monitoring (IRPM) Architecture.

B. Data Collection Challenges
Cloud computing has the ability to store the system's data in a highly available environment, but the most challenging part of the system is collecting data from RPM devices. There is no guarantee that RPM devices will stay connected to the system without interruption or data loss, but several ideas can reduce the percentage of service downtime and data loss.

Idea One: One solution is to save the data inside the RPM devices and work in an off-line mode. Once connectivity becomes stable, the RPM devices upload the data to the system. Data archiving will help save the data, but storage may be limited. One option is to save only significant data; a second option is to overwrite the oldest data. (A sketch of this store-and-forward approach is given after the list of ideas.)

Idea Two: Allow different types of communication, such as wired, wireless, or satellite (VSAT) technology. This idea challenges medical device manufacturers to include these types of communication in their RPM devices.

Idea Three: The authors of [25] recommend using fog computing between cloud computing and RPM devices, but this adds one more layer and therefore more potential points of failure, and it would require products to be installed across heterogeneous technologies and platforms.

Idea Four: Simplify the RPM devices' functions to make them user friendly. This will encourage the concept of patient
centeredness. The work in [26] found that cloud computing supports patient centeredness, a promising future direction for health IT. Medical staff are the primary users of health IT applications, and these applications are heavily physician-centered. However, the authors in [26] found evidence that implies a high potential of cloud computing to realize patient centeredness, noting that cloud computing innovatively involves patient family members in doing so.

Idea Five: Support manual operations. Manual operations are not recommended, but they can be a solution in critical situations. The system's Web interface is originally designed to be read-only for patients; to achieve the goal of this idea, we need to allow patients to enter data after considering security, legal, and regulatory concerns.
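The store-and-forward approach of Idea One could be sketched as follows; this is our own simplification, not the system's implementation, and upload() is a hypothetical placeholder for the real cloud API call:

# Sketch of device-side offline buffering (Idea One), under our own assumptions.
from collections import deque

class OfflineBuffer:
    def __init__(self, capacity=1000):
        # Bounded buffer: when full, the oldest reading is overwritten
        # (the "data overwriting" option mentioned above).
        self.queue = deque(maxlen=capacity)

    def record(self, reading):
        self.queue.append(reading)

    def flush(self, upload, is_online):
        # Try to push buffered readings once the link is back.
        while self.queue and is_online():
            reading = self.queue[0]
            if upload(reading):        # upload() returns True on acknowledged delivery
                self.queue.popleft()
            else:
                break                  # stop and retry on the next flush cycle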

C. Machine Learning Examples in Healthcare
The main inputs of the system are an authentication ID, vital sign measurements, and medical variables. We focus only on the main vital sign measurements (body temperature, pulse rate, respiration rate, and blood pressure) and on medical variables, both quantitative (e.g., age, weight, and height) and categorical (e.g., race, gender, and smoking). We also add blood glucose since it is important. The main outputs of the system are recommendations for a single patient or an organization, reports (daily, weekly, monthly, or yearly), and alerts to both patients and hospitals. The objectives of data analysis are significant: we can use data analysis to detect, predict, or prevent illness.

Examples of detection applications:
• Pattern finding using a clustering model. In this case, we would like to find whether there are any distinctive patterns, e.g., an unexpected result coming from a specific area, such as an epidemic.
• Positive or negative illness results using classification models. We train the model on a set of labeled data to determine whether a case is one thing or another.
• Significant health change. Here, the objective is to detect a sudden change in one or more vital sign measurements. We will try to find the machine learning technique that best achieves this objective.
• Pain/no pain, especially for children, through vital sign assessment. Vital signs provide a lot of insight into overall health, which can be affected by pain in several ways; for example, a normal response to pain is an increase in heart rate, breathing rate, and blood pressure. We can indicate pain with severe, harmful conditions if the vital sign measurements are abnormal.
• Body mass index (BMI), calculated from height and weight. The model will help keep weight under control by providing a measure of body fat based on height and weight. According to the World Health Organization (WHO), normal BMI is between 18.5 and 24.9 kg/m^2; anything over 25 is considered overweight, and 30 or more is deemed obese.
• Emotion status, for example determining whether the patient is calm or under pressure, active or passive.

Examples of prediction and prevention applications:
• Morbidity prediction: predicting a potential illness or condition (diabetes, hypertension, obesity, fever).
• Recovery prediction: similar to morbidity prediction, predicting whether there is a possibility of health improvement.

We plan to use the LSTM variant of RNNs, which enables accurate modeling of long- and short-term dependencies, along with other machine learning techniques to support the application. We will also focus on data preprocessing, training methods, algorithms, and validation.

IV. CONCLUSION
The doctor-to-patient ratio is very low and the population keeps increasing, which requires a smart and reliable solution. Cloud-based IRPM systems can have significant impacts in healthcare environments. For example, the collected data can be used for data mining, supporting decision makers in taking action, especially if an unexpected result comes from a specific area, such as an epidemic or a high radiation rate in the human body. People can watch over themselves by knowing their health status and taking health recommendations generated by the system. The system is designed to provide inexpensive, customized health services and should integrate into the standard care process. The system will help reduce hospital emergency room waiting time and will help patients avoid staying in hospitals for long periods. Cloud-based IRPM architectures help in performing continuous health monitoring in daily life and in achieving higher diagnostic accuracy and quality of life. Our current focus is on developing the prototype to validate the proposed machine learning part, which will enable the proposed IRPM to generate highly automated health-related recommendations to (healthy or sick) users.

REFERENCES
[1] K. Alghatani, "Telemedicine implementation: barriers and recommendations," Journal of Scientific Research and Studies, vol. 3(7), pp. 140-145, 2016.
[2] World Health Organization, "Telemedicine: opportunities and developments in Member States: report on the second global survey on eHealth," WHO Press, Geneva, Switzerland, 2010.

[3] M. Fatehi, R. Safdari, M. Ghazisaeidi, M. Jebraeily and M. Habibikoolaee, "Data standards in tele-radiology," Acta Informatica Medica, vol. 23, no. 3, p. 165, 2015.
[4] E. J. M. Monteiro, C. Costa and J. L. Oliveira, "A cloud architecture for teleradiology-as-a-service," Methods of Information in Medicine, vol. 55, no. 3, pp. 203-214, 2016.
[5] L. C. Ling and P. Krishnappa, "Telepathology - an update," Int J Collaborative Res Internal Med Public Health, vol. 4, p. 12, 2012.
[6] X. Wang, Q. Gui, B. Liu, Y. Chen and Z. Jin, "Leveraging mobile cloud for telemedicine: A performance study in medical monitoring," pp. 4950, 2013.
[7] A. Kulshreshtha, J. Kvedar, A. Goyal, E. Halpern and A. Watson, "Use of remote monitoring to improve outcomes in patients with heart failure: a pilot trial," International Journal of Telemedicine and Applications, p. 3, 2010.
[8] M. Bitsaki, C. Koutras, G. Koutras, F. Leymann, B. Mitschang, C. Nikolaou and M. Wieland, "An integrated mHealth solution for enhancing patients' health online," in 6th European Conference of the International Federation for Medical and Biological Engineering, Springer, Cham, 2015.
[9] M. Al-Qurishi, M. Al-Rakhami, F. Al-Qershi, M. M. Hassan, A. Alamri, H. U. Khan and Y. Xiang, "A framework for cloud-based healthcare services to monitor noncommunicable diseases patient," International Journal of Distributed Sensor Networks, 11(3), 985629, 2015.

[10] J. Wang, B. Guo, M. Qiu and Z. Ming, "Design and optimization of traffic balance broker for cloud-based telehealth platform," in 2013 IEEE/ACM 6th International Conference on Utility and Cloud Computing, pp. 147-154, IEEE, December 2013.
[11] Z. Wang, Z. Yang and T. Dong, "A review of wearable technologies for elderly care that can accurately track indoor position, recognize physical activities and monitor vital signs in real time," Sensors, 17(2), 341, 2017.
[12] C. O. Rolim, F. L. Koch, C. B. Westphall, J. Werner, A. Fracalossi and G. S. Salvador, "A cloud computing solution for patient's data collection in health care institutions," in 2010 Second International Conference on eHealth, Telemedicine, and Social Medicine, February 2010.
[13] L. Ramirez, E. Guillén and J. Sánchez, "Analytics Model in a Telehealth Center Based on Cloud Computing and Local Storage," World Academy of Science, Engineering and Technology, International Journal of Computer, Electrical, Automation, Control and Information Engineering, 11(2), pp. 147-150, 2017.
[14] A. Al Tayeb, K. Alghatani, S. El-Seoud and H. El-Sofany, "The impact of cloud computing technologies in e-learning," International Journal of Emerging Technologies in Learning (iJET), 8(2013), 2013.
[15] P. Jain, D. Rane and S. Patidar, "A survey and analysis of cloud model-based security for computing secure cloud bursting and aggregation in renal environment," in 2011 World Congress on Information and Communication Technologies, December 2011.
[16] P. Mell and T. Grance, The NIST Definition of Cloud Computing, 2011.
[17] Z. Mahmood and R. Hill, Cloud Computing for Enterprise Architectures, Springer Science & Business Media, 2011.
[18] G. Bhanot, M. Biehl, T. Villmann and D. Zühlke, "Biomedical data analysis in translational research: Integration of expert knowledge and interpretable models," in M. Verleysen (Ed.), 25th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN 2017, Louvain-la-Neuve, Belgium: Ciaco - i6doc.com, 2017.
[19] T. Pham, T. Tran, D. Phung and S. Venkatesh, "Predicting healthcare trajectories from medical records: A deep learning approach," Journal of Biomedical Informatics, vol. 69, pp. 218-229, 2017.
[20] Z. C. Lipton, D. C. Kale, C. Elkan and R. Wetzel, "Learning to diagnose with LSTM recurrent neural networks," arXiv preprint arXiv:1511.03677, 2015.
[21] B. Ballinger, J. Hsieh, A. Singh, N. Sohoni, J. Wang, G. H. Tison and M. J. Pletcher, "DeepHeart: semi-supervised sequence learning for cardiovascular risk prediction," in Thirty-Second AAAI Conference on Artificial Intelligence, April 2018.
[22] Y. Zhang, C. Lin, M. Chi, J. Ivy, M. Capan and J. M. Huddleston, "LSTM for septic shock: Adding unreliable labels to reliable predictions," in 2017 IEEE International Conference on Big Data (Big Data), December 2017.
[23] A. McCarthy and C. K. Williams, "Predicting patient state-of-health using sliding window and recurrent classifiers," arXiv preprint arXiv:1612.00662, 2016.
[24] Y. Zhu, X. Fan, J. Wu, X. Liu, J. Shi and C. Wang, "Predicting ICU Mortality by Supervised Bidirectional LSTM Networks," CEUR-WS.org, vol. 2142, 2018.
[25] G. L. Santos, P. T. Endo, M. F. F. da Silva Lisboa, L. G. F. da Silva, D. Sadok, J. Kelner and T. Lynn, "Analyzing the availability and performance of an e-health system integrated with edge, fog and cloud infrastructures," Journal of Cloud Computing, 7(1), 16, 2018.
[26] F. Gao, S. Thiebes and A. Sunyaev, "Rethinking the Meaning of Cloud Computing for Health Care: A Taxonomic Perspective and Future Research Directions," Journal of Medical Internet Research, 20(7), e10041, 2018.


Seizure Detection Using Machine Learning Algorithms Sarah Hadipour1 , Ala Tokhmpash2 , and Bahram Shafai3 1 Electrical and Computer Engineering, Northeastern University, Boston, MA, USA [email protected] 2 [email protected] 3 [email protected]

Abstract— In this paper we study the use of machine learning algorithms in detecting epileptic seizures. We train multiple classifiers using frequency-domain predictors from intracranial electroencephalogram (iEEG) signals. The classifiers studied are Support Vector Machines (SVM) and K-Nearest Neighbors (KNN). Using the performance criteria discussed in this paper, we arrive at the most suitable classifier for the seizure detection application.
Keywords: Epilepsy, detection, intracranial electroencephalogram, classifiers, interictal, preictal

1. Introduction
About 1% of the world's population suffers from epilepsy, which causes spontaneous seizures. Anticonvulsant medications, which have known side effects, are prescribed at sufficiently high doses to prevent seizures, but 20-40% of patients report no improvement. Common surgical removal of the seizure focal point in young children can cause other brain dysfunction, and for many patients the spontaneous seizures continue after the surgical procedure. Because a seizure can occur at any time without warning, many patients suffer from chronic depression. Seizure prediction systems can be life changing for patients with epileptic seizures. By accurately identifying the periods in which a seizure has a higher chance of occurring, we can help epileptic patients live a more normal life. If a patient with epilepsy is engaged in an activity such as driving or swimming, such a device lets them pause the activity or avoid potential harm upon receiving a seizure alert and administering medication. We use machine learning techniques with the goal of predicting naturally occurring seizures in adults and children with epileptic seizures. Our algorithm can easily be implemented in a wearable seizure warning device in conjunction with an implantable iEEG sensor; a hand-held personal advisory device can then alert the patient of a possible epileptic seizure. A literature review of similar prior work can be found in [1] through [11].

2. Background
A seizure has been defined by the International League Against Epilepsy (ILAE) as "a transient occurrence of signs and/or symptoms due to abnormal excessive or synchronous neuronal activity in the brain." A seizure has a distinct set of signs and symptoms, such as losing consciousness followed by confusion, uncontrollable muscle spasms, and falling. Although in a typical seizure the borders of the ictal (during a seizure), interictal (between seizures), and postictal (after a seizure) periods are often indistinct, examining epileptic intracranial electroencephalogram (EEG) signals can be helpful for detection, classification, and prediction. From a medical point of view, there are common EEG patterns that help the medical professional detect a seizure, but clinicians also rely on researchers to come up with ways of making this detection process easier and more reliable. Some researchers have used motion sensors [11] in seizure detection. These systems can be useful in the detection of motor seizures, such as tonic-clonic seizures (which involve electrical discharges that instantaneously cover the entire brain) or myoclonic seizures (which are brief shock-like jerks of a muscle or group of muscles). Accelerometers that are useful in motion detection can only detect ongoing seizures; nonetheless, such a detector can potentially be connected to a device that contacts caretakers to alert them of ongoing seizures.

3. Methodology
Our iEEG data consist of labeled interictal and preictal signals. Interictal iEEG represents the background periods, whereas preictal iEEG represents the before-seizure periods. By accurately distinguishing the preictal periods from the interictal ones, we will essentially be able to predict the upcoming seizure onset. To identify the preictal periods, we looked at the unique features that exist in the time and frequency domains. As shown in the block diagram below, this process happens in the "transformation" and "feature extraction" blocks right after data collection.
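As an illustration of the transformation and feature extraction steps, the following is a sketch based on our reading of the described pipeline (not the authors' code): band powers of each iEEG channel computed from the FFT of a 10-minute segment sampled at 400 Hz; the band edges are assumed:

# Sketch of FFT band-power feature extraction for iEEG segments (assumptions noted above).
import numpy as np

FS = 400                                                  # sampling rate (Hz), as stated for the dataset
BANDS = [(0.5, 4), (4, 8), (8, 12), (12, 30), (30, 70)]   # assumed frequency band edges (Hz)

def band_power_features(segment):
    """segment: array of shape (n_channels, n_samples)."""
    n = segment.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / FS)
    power = np.abs(np.fft.rfft(segment, axis=1)) ** 2
    feats = []
    for lo, hi in BANDS:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(power[:, mask].mean(axis=1))         # mean power per channel in this band
    return np.concatenate(feats)                          # one feature vector per segment

example = np.random.randn(16, FS * 600)                   # placeholder: 16 channels, 10 minutes
print(band_power_features(example).shape)                 # 16 channels x 5 bands = (80,)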


In the next block we explore the use of multiple classifiers before arriving at the final block. In this step we designed linear and non-linear classifiers such as SVMs and KNNs. The last step in the process is choosing the best classifier using the AUC (Area Under the Curve) of the ROC (Receiver Operating Characteristic) curve and the accuracy metric, as well as complexity, speed, and memory usage. Our methodology is summarized in Figure 1.

Fig. 1: Block Diagram of the Classification Process

3.1 Dataset
The iEEG data used in this study consist of two groups of interictal and preictal signals. Interictal iEEG represents the normal state and preictal iEEG represents the period just before and leading up to the seizure. Both signals were sampled at a 400 Hz rate. The interictal periods were restricted to be at least four hours before or after any seizure. One-hour sequences of interictal ten-minute data segments are also examined. The interictal data were chosen randomly from the full data record, with the restriction that interictal segments be as far from any seizure as can be practically achieved, to avoid contamination with preictal or postictal signals. In the long-duration recordings it was possible to maintain a restriction of one week before or after a seizure.

3.2 Transformation and Feature Selection
A review of the current literature revealed that, given the nature of iEEG signals, the magnitude of different frequencies during the interictal and preictal periods can serve as a prominent feature to identify the healthy and pre-seizure states [3]. This is coupled with other preprocessing filters to extract the desired features. The distribution of energy in different frequency bands in the figure below shows strong evidence of a change in the FFT pattern between interictal and preictal states, and this has been shown across multiple test subjects. Power spectra in the lower frequency range tend to have higher energy in preictal iEEG; this is used as the feature vector for the binary classifiers. Figure 2 shows the frequency-domain representation of the interictal and preictal signals.

Fig. 2: Frequency Domain Representation of the iEEG Signals. Interictal vs. Preictal.

3.3 Classification
Selecting the best classifier requires investigating linear and non-linear Support Vector Machines [3] and fine to coarse K-Nearest Neighbors [2]. As a vital part of the process, our models use the 5-fold cross-validation technique to ensure accurate results [4]. In 5-fold cross-validation, the original sample is randomly partitioned into 5 equal-size subsamples. Of the 5 subsamples, a single subsample is set aside as validation data for testing the model, and the remaining 4 subsamples are used as training data. The cross-validation process is then repeated 5 times, with each of the 5 subsamples used exactly once as the validation data (we repeated this process for every single classifier we trained). The 5 results from the folds can then be combined or averaged to produce a single estimate. The advantage of this method is that all observations are used for both training and validation, and each observation is used for validation exactly once; this way we avoid inflated accuracy rates. For classification problems, one typically uses stratified 5-fold cross-validation, in which the folds are selected so that each fold contains roughly the same proportions of class labels.
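The evaluation protocol just described could be sketched as follows; this is our own illustration using scikit-learn (not the authors' toolchain), with placeholder features and labels standing in for the band-power features above:

# Sketch of stratified 5-fold cross-validation comparing an SVM and a KNN on accuracy and AUC.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

X = np.random.randn(200, 80)          # placeholder feature matrix (e.g., 80 band powers)
y = np.random.randint(0, 2, 200)      # 0 = interictal, 1 = preictal (placeholder labels)

models = {
    "Linear SVM": make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True)),
    "Coarse KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=100)),
}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, clf in models.items():
    acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy").mean()
    auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc").mean()
    print(f"{name}: accuracy={acc:.3f}, AUC={auc:.3f}")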

3.3.1 Support Vector Machine
A support vector machine is designed as a decision hyperplane to separate our two classes. The optimal plane should be in the middle of the two classes, so that the distance from the plane to the closest point on either side is the same.
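For reference, the standard soft-margin SVM formulation is the following textbook form (with regularization constant C and feature map phi; the notation is not taken from the paper):

\begin{aligned}
\min_{w,\,b,\,\xi}\quad & \tfrac{1}{2}\lVert w\rVert^{2} + C\sum_{i=1}^{N}\xi_{i} \\
\text{subject to}\quad & y_{i}\,(w^{\top}\phi(x_{i}) + b) \ge 1 - \xi_{i},\qquad \xi_{i} \ge 0,\; i = 1,\dots,N.
\end{aligned}

In the kernelized dual form, only inner products K(x_i, x_j) = phi(x_i)^T phi(x_j) appear, which is what allows the polynomial and Gaussian kernels discussed next to be used without computing phi explicitly.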

ISBN: 1-60132-500-2, CSREA Press ©

32

Int'l Conf. Health Informatics and Medical Systems | HIMS'19 |

Kernel SVM uses a mapping function that maps our data into a higher-dimensional space; the maximization and decision rule then depend on dot products of the mapping function for different samples. We used polynomial kernels of degree two and three as well as Gaussian kernels, fine to coarse, to compute the best support vector machine.

3.3.2 K Nearest Neighbor
The k-Nearest Neighbor classifier is a non-parametric approach that classifies a given data point according to the majority of its neighbors. The KNN algorithm completes its execution in two steps: first finding the nearest neighbors, and second classifying the data point into a particular class using the first step. To find the neighbors, it uses distance metrics such as the Euclidean distance. It chooses the nearest k samples from the training set and then takes a majority vote of their classes, where k should be an odd number to avoid ambiguity. Just like the SVMs, we analyzed fine to coarse, cosine, and cubic variants.
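The model names used in Tables 1 and 2 follow MATLAB Classification Learner presets; the dictionary below is only a rough scikit-learn approximation of those eleven variants, with illustrative parameter choices of our own rather than the paper's exact settings:

# Hypothetical approximation of the eleven classifier variants (parameters are assumptions).
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

classifiers = {
    "Linear SVM":          SVC(kernel="linear"),
    "Quadratic SVM":       SVC(kernel="poly", degree=2),
    "Cubic SVM":           SVC(kernel="poly", degree=3),
    "Fine Gaussian SVM":   SVC(kernel="rbf", gamma=10.0),   # narrow kernel
    "Medium Gaussian SVM": SVC(kernel="rbf", gamma=1.0),
    "Coarse Gaussian SVM": SVC(kernel="rbf", gamma=0.1),    # wide kernel
    "Fine KNN":            KNeighborsClassifier(n_neighbors=1),
    "Medium KNN":          KNeighborsClassifier(n_neighbors=10),
    "Coarse KNN":          KNeighborsClassifier(n_neighbors=100),
    "Cosine KNN":          KNeighborsClassifier(n_neighbors=10, metric="cosine"),
    "Cubic KNN":           KNeighborsClassifier(n_neighbors=10, metric="minkowski", p=3),
}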

3.4 Performance Measures (Accuracy and Area Under Curve)
The accuracy of a classifier refers to its predictive ability; in particular, the accuracy of the predictor refers to how well a given predictor can guess the value of the predicted attribute for new data. Equation 1 defines the accuracy in terms of TP, TN, FP, and FN:

Accuracy = (TP + TN) / (TP + FN + TN + FP)    (1)

where TP = True Positives, TN = True Negatives, FP = False Positives, and FN = False Negatives.

Figure 3 visualizes the accuracy comparison of the eleven classifiers used in this experiment.

Fig. 3: Accuracy Comparison of SVM and KNN Classifiers

Using the same terminology as above, the True Positive Rate and False Positive Rate are defined in Equations 2 and 3, respectively. These two equations also define the Sensitivity and Specificity.

True Positive Rate (Sensitivity) = TP / (TP + FN)    (2)

False Positive Rate = 1 − Specificity = 1 − TN / (TN + FP)    (3)

The ROC curve is then created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings. The true positive rate is also known as sensitivity, recall, or probability of detection in machine learning. The false positive rate is also known as the fall-out or probability of false alarm and can be calculated as 1 − specificity. Figure 4 illustrates the ROC curves of the SVMs and the KNNs designed for our experiment.

Fig. 4: ROC plot of SVMs vs. KNNs

For the purpose of illustration and to avoid overcrowding the graph, we picked the best SVM and best KNN model to show the ROC curve.

4. Results

From the ROC curves above, the area under the curve is calculated and used as the second performance measure next to the accuracy of the classifier. This comprehensive study of support vector machines and k-nearest neighbors on the iEEG data shows that there is a trade-off to be made when selecting the best classifier. Linear SVM and the coarse KNN have similar performance: linear SVM outperforms the coarse KNN in accuracy, and the coarse KNN slightly outperforms linear SVM in the AUC measure. These two are also computationally faster and more efficient
than the other classifiers. Either one would result in a very satisfying classification. Table 1 brings together both performance measures of accuracy and the area under the ROC curve. This is particularly helpful in picking the most suitable classifier for this application.

Table 1: Accuracy and AUC of the SVM and KNN classifiers (sample model results).

Model                  AUC    Accuracy
Linear SVM             0.77   71.6%
Quadratic SVM          0.53   51.3%
Cubic SVM              0.39   41.4%
Fine Gaussian SVM      0.74   71.5%
Medium Gaussian SVM    0.77   71.5%
Coarse Gaussian SVM    0.77   71.4%
Fine KNN               0.62   62.5%
Medium KNN             0.75   68.9%
Coarse KNN             0.80   71.4%
Cosine KNN             0.73   69.3%
Cubic KNN              0.75   69.3%

Other factors to keep in mind when choosing a good classifier are summarized in Table 2. The table shows the properties of the different SVM and KNN model types. Linear SVM always wins the interpretability contest in binary classification cases. Evaluating all factors, the linear SVM shows good results with fast speed, medium memory usage, and easy interpretability. The coarse KNN could also be considered the best classifier if interpretation of the results is not a concern. On the other hand, the cubic KNN should never be considered for this application.

Table 2: Properties of the SVM and KNN classifiers (sample model properties).

Model                  Speed   Memory usage   Interpretability
Linear SVM             Fast    Medium         Easy
Quadratic SVM          Fast    Medium         Hard
Cubic SVM              Fast    Medium         Hard
Fine Gaussian SVM      Fast    Medium         Hard
Medium Gaussian SVM    Fast    Medium         Hard
Coarse Gaussian SVM    Fast    Medium         Hard
Fine KNN               Fast    Medium         Hard
Medium KNN             Fast    Medium         Hard
Coarse KNN             Fast    Medium         Hard
Cosine KNN             Fast    Medium         Hard
Cubic KNN              Slow    Medium         Hard

5. Implementation and Future Work
Once the best model is identified, it can be implemented on a chip. The chip would receive iEEG data in real time collected from the implanted electrodes in a patient with epilepsy, process the data, and determine what state the patient is in. If a preictal period is detected, the chip would wirelessly communicate with a hand-held device and generate an alert to the patient. If this device is connected to a caregiver's or medical professional's device, they would be notified as well, and the device could potentially contact emergency responders. Figure 5 shows how this algorithm can be implemented in a chip and used in the form of a hand-held device alerting the patient of an incoming seizure; a sketch of such an on-device detection loop is given below.
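The following is an illustrative on-device loop, under our own assumptions rather than the authors' firmware; read_window(), extract_features(), model, and send_alert() are hypothetical placeholders for the real acquisition, feature, and radio code:

# Sketch of the on-chip seizure-warning loop (all dependencies are injected placeholders).
import time

def monitoring_loop(read_window, extract_features, model, send_alert, period_s=600):
    while True:
        window = read_window()                    # latest iEEG samples from the implanted electrodes
        features = extract_features(window)       # e.g., band powers as in Section 3.2
        if model.predict([features])[0] == 1:     # 1 = preictal state detected
            send_alert("Possible seizure risk detected")   # notify the hand-held device
        time.sleep(period_s)                      # wait for the next window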

Fig. 5: An Implementation of A Seizure Prediction Device

6. Conclusion
Table 2 shows the properties of the different SVM and KNN model types. Linear SVM always wins the interpretability contest in binary classification cases. All factors counted, the linear SVM shows good results with fast speed, medium memory usage, and easy interpretability. The coarse KNN could also be considered the best classifier if interpretation of the results is not a concern, whereas the cubic KNN should never be considered for this application.

We presented and analyzed multiple machine learning algorithms that use iEEG signals to detect the onset of epileptic seizures. Using a multi-factor selection process, we arrived at the two most suitable SVM and KNN models for clinical application. This detector simplifies the implementation of the seizure prediction system in a wearable device.


This system can potentially be coupled with a closed-loop control system to suppress seizure intensity and mitigate the consequences of drug-resistant seizures. A block diagram of a future alerting device implementation has also been proposed in this paper.

7. Multiple Case Study
To complete our investigation, we went beyond patient-specific classifiers and looked across multiple subjects. We examined the ROC metric of the best SVM and the best KNN model and compared the results in the table below, which summarizes the two human and two animal iEEG studies for epileptic seizure detection.

Table 3: AUC comparison of SVM and KNN classifiers across multiple subjects (multi-subject results).

            SVM    KNN
Subject 1   0.79   0.81
Subject 2   0.87   0.89
Subject 3   0.84   0.87
Subject 4   0.66   0.71

Our results are consistent with what we observed in single-patient-specific modeling. The KNN still has a slight edge over the SVM in terms of AUC of the ROC curve, and the accuracy follows the same pattern. However, the best sub-type of SVM or KNN may vary from patient to patient. The visualization of the SVM and KNN study across multiple subjects is shown in Figure 6.

Fig. 6: Multi-Subject Study Results

Acknowledgment
The data used in this paper is from a Kaggle competition which was sponsored by MathWorks, the National Institutes of Health (NINDS), the American Epilepsy Society and the University of Melbourne, and organized in partnership with the Alliance for Epilepsy Research, the University of Pennsylvania and the Mayo Clinic.

References
[1] K. Schindler, H. Leung, C.E. Elger, K. Lehnertz (2007) Assessing seizure dynamics by analyzing the correlation structure of multichannel intracranial EEG. Brain 130(Pt 1):65-77
[2] Y. Xu, Q. Zhu, Z. Fan, M. Qiu, Y. Chen, H. Liu (2012) Coarse to fine K nearest neighbor classifier
[3] A. Patle, D. Chouhan (2013) SVM kernel functions for classification
[4] T. Fushiki (2011) Estimation of prediction error by using K-fold cross-validation
[5] A.H. Shoeb, J.V. Guttag (2010) Application of machine learning to epileptic seizure detection, Proceedings of the 27th International Conference on Machine Learning (ICML-10)
[6] K. Schindler, H. Leung, C.E. Elger, K. Lehnertz (2007) Assessing seizure dynamics by analyzing the correlation structure of multichannel intracranial EEG. Brain 130(Pt 1):65-77
[7] A. Bablania, D. Reddy Edlaa, S. Dodia (2018) Classification of EEG Data using k-Nearest Neighbor approach for Concealed Information Test
[8] X. Li, X. Chen, Y. Yan, W. Wei, and Z. Jane Wang (2014) Classification of EEG Signals Using a Multiple Kernel Learning Support Vector Machine
[9] B. Richhariya, M. Tanveer (2017) EEG signal classification using universum support vector machine
[10] RS. Fisher, HE. Scharfman, M. deCurtis (2014) How can we identify ictal and interictal abnormal activity?
[11] S. Ramgopal, S. Thome-Souza, M. Jackson, NE. Kadish, I. Sánchez Fernández, J. Klehm, W. Bosl, C. Reinsberger, S. Schachter, T. Loddenkemper (2014) Seizure detection, seizure prediction, and closed-loop warning systems in epilepsy


Fog Enabled Health Informatics System for Critically Controlled Cardiovascular Disease Applications Ragaa A.Shehab1 , Mohamed Taher1 , and Hoda K.Mohamed1 1 Computer and Systems Engineering Department, Ain Shams University, Cairo, Egypt

Abstract— Towards the departure from reactive treatment to more preventive medicine, fog computing promotes increasing the quality and continuity of healthcare for patients while preserving their privacy. This work presents a fog-enabled system that helps cardiovascular disease (CVD) patients detect their health state early and receive real-time diagnoses and treatments. We present a complete control system for remote patient monitoring and illustrate how fog computing is essential in realizing critically controlled health informatics applications. In addition, we model the CVD application with two modes of operation, a pulse rate mode and an ECG mode, and study the effect of distributing the application's modules across multiple tiers of fog nodes. The system was simulated using iFogSim. Results indicate a great saving in the application's loop delay, network usage, and energy consumption with respect to a cloud-only system. Keywords: Health informatics, IoT, fog computing, cloud computing, patient monitoring, critical control health care applications

1. Introduction
Evolution in wireless sensor technology reduces sensor size and increases the ability to read various biometric signals, which greatly influences healthcare monitoring systems. Other drivers of smart healthcare include big data analytics and machine learning, which aid in producing and visualizing insights from the aggregated data, not only for patient monitoring but also for increasing the effectiveness of diagnoses and treatments. Cloud computing, with its massive amount of resources, can serve as a computational paradigm for biomedical systems. The cloud can efficiently handle the massive amount of patients' data with increased scalability and complexity. However, in a cloud architecture, patients' data have to be stored remotely, which may not be feasible in health informatics applications due to patient safety and hospital regulations. In addition, the centralization and remoteness of the cloud's datacenters violate the latency requirements of real-time patient data analytics applications [1], and thus the cloud may not be applicable for critically controlled health informatics applications. Fog computing [2] is a distributed computing paradigm in which an application's modules can reside hierarchically along the continuum from the IoT sensors to the cloud. In this architecture, computations can take place in the closest gateway, access point, or router within the patient's vicinity.

Fog computing provides latency sensitivity, high resilience, high strength, and closeness to the IoT devices, in addition to a geo-distribution that allows for better user mobility and uninterrupted monitoring service. Therefore, fog is the best candidate for critically analyzed and critically controlled healthcare applications. Fog computing adds flexibility regarding where computations can be placed, which comes at the cost of increased resource management complexity [3]. Resource management challenges arise from the distribution of resources along multiple tiers and the heterogeneity of fog edge nodes, which vary from access points with limited processing capability to powerful edge clouds, in addition to scalability and elasticity issues. Innovative resource management techniques are needed to mitigate this complexity and overcome these challenges. Fog computing complements the cloud architecture by carrying out confidential real-time data analytics tasks within the patient's vicinity, while keeping long-term data analytics running efficiently on the cloud.

Coronary heart disease is ranked as the number one cause of mortality worldwide [4]. Due to its importance, this work presents a fog-enabled cardiovascular disease (CVD) application as a fog computing critical control task. The CVD application has two modes of operation: a Pulse Rate (PR) mode for normal-state patients and an Electrocardiogram (ECG) mode for critical-state patients. Patients' data streams are processed at the edge fog nodes, utilizing machine-learning analysis to accurately classify the patient's state in real time. Accordingly, in critical situations, the smart hospital caregivers' feedback is forwarded to control an analgesia infusion pump connected to the patient. In addition, batch-mode data analytics are carried out over the patient's Electronic Health Record (EHR) in the cloud over the long term to aid in the diagnosis and treatment processes.

The CVD application is simulated with the iFogSim simulator [5], and an edgeward module placement algorithm is utilized for managing the fog network resources. In the edgeward module placement algorithm, the distributed application's modules are placed in a hierarchical manner close to the network edge; if there is a shortage in a fog node's computational resources, the module is transferred to the next alternative fog device with enough available resources in the path hierarchy. Results are compared to cloud-based module placement, where all application modules are centrally placed in the cloud's datacenters. Results show a great saving in the
application's loop delay compared to the cloud-only architecture, as well as savings in network usage and energy consumption of the datacenters.

The paper is organized as follows: the next section introduces the remote patient monitoring control system. Section 3 focuses on fog computing in health informatics. Section 4 presents the state of the art in cardiac health informatics applications. Section 5 provides the proposed system model. Section 6 explains the cardiovascular application model with its modes of operation and the module placement algorithm. Section 7 discusses the obtained results. Finally, Section 8 concludes the paper with suggested future work.

Fig. 1: Complete control system for remote patient monitoring and treatment
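The edgeward module placement strategy described in the introduction can be sketched as follows; this is a simplification of ours, with hypothetical node and module structures, not the iFogSim implementation:

# Simplified edgeward placement: try the lowest fog tier first, then move up toward the cloud.
def edgeward_place(modules, path):
    """modules: list of (name, mips_demand); path: fog nodes ordered edge -> cloud,
    each a dict with 'name' and 'free_mips'."""
    placement = {}
    for name, demand in modules:
        for node in path:                       # start at the edge, walk toward the cloud
            if node["free_mips"] >= demand:
                node["free_mips"] -= demand     # reserve capacity on this node
                placement[name] = node["name"]
                break
        else:
            placement[name] = None              # no node on the path can host the module
    return placement

path = [{"name": "gateway", "free_mips": 1000},
        {"name": "edge-cloud", "free_mips": 4000},
        {"name": "cloud", "free_mips": 10**6}]
print(edgeward_place([("filter", 300), ("classifier", 2000), ("analytics", 5000)], path))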

Fig. 2: Fog computing architecture


2. Remote Health Informatics
Worldwide, healthcare faces multiple challenges, including the increase in the aging population and the rise of chronic diseases. These challenges raise the need for automated systems that are capable of delivering smart healthcare services to a large number of patients while guaranteeing high quality of care and cost efficiency [6]. Towards the solution, and within the scope of the OpenFog Consortium use cases in healthcare [7], we present a complete control system for remote patient monitoring and treatment (see Figure 1). The system consists of:
1) Smart monitoring system: e.g., blood pressure monitor, pulse oximeter, ECG monitor, heart-rate screens, respiratory rate monitor.
2) Smart drug delivery system: e.g., patient-controlled analgesia infusion pump.
3) Smart health informatics applications: e.g., vital signs monitoring, patient speech analysis, patient activity monitoring.
4) Information and communication system: multiple system architectures could be utilized to deploy health applications:
• Device-only architecture: contains all requirements for the health informatics application. The limitations of such an architecture include the need for a specialized communication infrastructure, system dependability and interoperability issues, high cost, and limited processing capabilities.



• Cloud computing architecture: the cloud's server is directly connected to the smart monitoring systems and is responsible for medical data storage, analysis, and medical decision support. However, due to its remoteness and centralization, the cloud is inadequate for the requirements of health informatics applications. Challenges include a limited ability to interact with the massive number of IoT sensors and smart health devices and the large volume of medical information they generate, which results in unacceptable latency in emergency situations. In addition, the cloud offers poor support for patient mobility and location-awareness services. At the same time, processing medical data in remote cloud data centers may violate patient data privacy and conflict with health regulations [1], [6].
• Fog computing architecture: the communication network is architected as a set of clusters, and the medical sensors communicate with the closest fog node (see Figure 2). The computing fog node is responsible for aggregating and filtering the data captured from health devices, performing data analytics to support decision-making systems, and managing insights and the offloading of vital data [2], [6].

3. Fog Computing in Health Informatics

Fog can offer the needed computational resources within the network to meet both the regulatory and the technical requirements of healthcare applications. Other benefits of fog computing in healthcare include: reduced latency, as fog processes tasks closer to the devices; enhanced privacy, by processing sensitive data locally; energy savings for battery-driven devices; bandwidth savings, as raw data are filtered and analyzed locally; and improved application scalability and dependability. In addition, fog provides flexibility in the computing locus and can serve as an integration and compatibility layer between various standards. Furthermore, fog guarantees patient mobility across different environments, which leads to new applications [6]. To fill the gap between healthcare devices and the cloud, the fog architecture may rely on one of the following gateways: a Wireless Personal Area Network (WPAN), e.g., Bluetooth or ZigBee; a Wireless Body Area Network (WBAN), e.g., IEEE 802.15.6; a Local Area Network (LAN), e.g., WiFi; or a Wide Area Network (WAN), e.g., an LTE eNodeB. For positioning applications' tasks, fog can use a single node at the LAN or PAN level to collect and analyze time-critical data and reach critical decisions. In addition, a mobile computing device can be used to support a high degree of mobility, given its flexible network connectivity [6]. Healthcare tasks relevant to fog include data collection, data analysis, critical data analysis, critical control tasks, and context management tasks. In the critical control case, application computations that are critical to the patient's health are required in real time, and a caregiver's feedback to control the patient's medical devices is also needed in real time [6].

4. State of the Art of Cardiac Health Informatics Applications

In the context of patient-state monitoring and ECG monitoring, the literature can be classified according to the computing task. As a fog-based data analysis task, in [8] ECG data compression was conducted: ECG data were encoded to save transmission energy and decoded at the receiver. A local analysis of video and audio data has been performed in [9] to monitor the patient's state and determine whether the patient is in pain, so that only the processed results have to be transferred. In [10], ECG data were analyzed to detect arrhythmic beats, and for speech recognition and ECG monitoring only the relevant data are compressed and forwarded. As a fog-based critical data analysis task, fog is used to analyze critical conditions and raise alarms when critical situations are detected. In [11], ECG monitoring performs feature extraction on ECG data based on wavelet analysis, with real-time notification and enrichment of the data with location. In [12], a local analysis of ECG data is performed with enrichment from GPS and activity data, and the relevant data are forwarded in compressed form. In addition, in [13] ECG feature extraction has been performed with classification and detection of anomalies, local alarms, and notification to staff. As a fog-enabled critical control task, fog is used to control actuators that are critical to patients. In [14], the oxygen level is analyzed to automatically adjust the appropriate oxygen dose for the patient in real time, while taking location and environmental data into account. As a fog-based context management task, in [15] activity monitoring locally analyzes the heart rate, acceleration, and altitude to classify the patient's activity, such as sleeping or driving.

Fig. 3: System model

5. Fog-Enabled Cardiovascular Disease Health Informatics System

In this work, we introduce the CVD application as a fog computing critical control task. The application has two modes of operation: PR mode for normal-state patients and ECG mode for critical-state patients. Machine-learning analysis is conducted to predict and classify the patient's state. According to the patient's state, analyzed feedback is forwarded to control an analgesia infusion pump connected to the patient. The system objectives are to increase the accessibility, quality, efficiency, and continuity of healthcare, to reduce the overall cost of care, and to free up patients' time by reducing bed days. The system aims to build a more precise picture of patients by capturing data continuously and providing insight into an increasing variety of biometric parameters [7]. Providing real-time patient data analytics allows healthcare to be delivered continuously and everywhere, which revolutionizes diagnostics and treatment and helps reduce mortality rates and emergency admissions. Our system aims to enable a complete control loop of patient health-state detection, diagnosis, and emergency treatment. Our target patients include hospitalized patients as well as remotely monitored patients, who may perform a few exercises and experience different conditions during their day-to-day schedule. The system model consists of sensors and actuators, represented by wearable devices held by the patient. Sensor readings are sampled according to the situation and sent to the patient's smart mobile. In addition, for flexibility in where the application's tasks are computed, we present a three-tier computing architecture: a cloud tier, a proxy server and edge fog nodes tier, and an edge smart mobile tier (see Figure 3). Deployment and implementation of the system include:


1) A heart monitoring system, represented by a pulse oximeter that monitors pulse and blood oxygen concentration, and a five-lead electrocardiograph that monitors the P/QRS/T waves of the heart.
2) A smart drug delivery system, represented by a controlled analgesia infusion pump.
3) A CVD health informatics application model, which is explained in the next section.
4) A fog information and communication system, based on the mobile deployment scenario. In this scenario, the patient's smart mobile is the first hub between the sensors and the cloud: it collects data from the heart monitoring system, processes it, and sends it to a back-end server. The patient's smart mobile may act as a Wireless Personal Area Network (WPAN) gateway that connects to the WLAN via WiFi or to the WAN via a cellular connection. Altering the fog nodes' computational capabilities and operating criteria allows us to study other deployment scenarios, such as home treatment, hospital, clinical, and transport deployment scenarios, the latter covering ambulances and vehicles, where continuous connectivity must be guaranteed while the infrastructure itself is mobile.

6. Cardiovascular Disease (CVD) Application

6.1 Application Model

Using iFogSim, the CVD application is modeled as a Directed Acyclic Graph (DAG); see Figure 4. Modules are represented as vertices, and inter-module communications (tuples) are represented by edges. Each module is characterized by its RAM size in MB, and each edge is described by the CPU length, in Million Instructions (MI), and the network bandwidth, in bytes, required to process that edge. Two modes of operation are proposed: Pulse Rate (PR) mode, where the patient holds a pulse oximeter sensor and samples are streamed live every 10 to 15 seconds, and Electrocardiogram (ECG) mode, where sensors are attached to the patient's body and the streamed waveform is sampled for analytics every 50 to 100 ms. In both modes, the readings are sent to the patient's smart mobile. The mobile holds the client patient-health-state module, which is responsible for accepting the sensor readings (PR, ECG), filtering the data and eliminating out-of-range readings, adding any other related information, and finally compressing, encoding, and securing the packets to be sent. Packets are sent to the Monitoring module for data analytics. The data analytics include a linear classifier for predicting the patient's state and statistical analytics for summarizing the readings across a predefined window. The patient's state is classified as normal, abnormal, or critical. After analyzing the readings, the caregiver module is invoked for event handling, and the patient is informed about his state or about the ongoing actions using the display/convenience actuator that controls an analgesia infusion pump. If the patient's state is abnormal, the patient is asked to switch to the ECG mode of the CVD application (see Figure 5).
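The classification step in the Monitoring module can be illustrated with a short Python sketch. The paper does not specify the classifier's features, weights, or thresholds, so the values below are illustrative assumptions only; the sketch simply shows how filtered readings could be summarized over a window and mapped, through a linear score, onto the normal/abnormal/critical states used by the caregiver module.

# Hypothetical sketch of the Monitoring module's linear patient-state classifier.
# Feature names, weights, and thresholds are illustrative assumptions only.

def extract_features(window):
    """Summarize a window of pulse-rate (or ECG-derived heart-rate) readings."""
    readings = [r for r in window if 20 <= r <= 250]   # drop out-of-range samples
    if not readings:
        return None
    mean_hr = sum(readings) / len(readings)
    variability = max(readings) - min(readings)
    return {"mean_hr": mean_hr, "variability": variability}

def linear_score(features, weights, bias):
    """Plain linear model: score = w . x + b."""
    return sum(weights[name] * value for name, value in features.items()) + bias

def classify_state(window):
    """Map the linear score to the three states used by the caregiver module."""
    features = extract_features(window)
    if features is None:
        return "unknown"
    score = linear_score(features,
                         weights={"mean_hr": 0.02, "variability": 0.05},  # assumed
                         bias=-2.0)                                       # assumed
    if score < 0.5:
        return "normal"
    elif score < 1.5:
        return "abnormal"      # patient asked to switch to ECG mode
    return "critical"          # caregiver feedback controls the infusion pump

if __name__ == "__main__":
    print(classify_state([72, 75, 74, 71, 300, 73]))   # the 300 reading is filtered out

In the deployed application, the thresholds would be tuned from the long-term EHR analytics described in Stage 4 of the data flow below.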

Fig. 4: CVD application model

Fig. 5: CVD application modes of operation

Table 1 and Table 2 present the iFogSim simulation parameters. The CVD application data flow consists of four stages (see Figure 6). The first stage is data collection, which transmits the patient's PR or ECG readings to the client module. Stage 2 is a real-time analysis of the patient's state. Stage 3 is a real-time report of the patient's state to the caregivers' center so that an action can be taken, i.e., informing the patient about his state, diagnosing the situation, and giving instructions and proper treatment by controlling the analgesia infusion pump rate. Stage 4 is the storage of long-term data analytics in the patient's Electronic Health Record (EHR) at the cloud. This stage aids in managing the linear classifier threshold, studying the effectiveness of treatments, optimizing the process, and personalizing the care. For managing the fog network resources, edgeward module placement is studied. It is a hierarchical module placement algorithm, in which the application's modules are placed hierarchically, close to the network edge. If a fog node's computational resources are insufficient, the module is transferred to the next fog device in the path hierarchy with enough available resources (see Figure 7).


Table 1: Simulation parameters

Tuple Type      | PR mode CPU Length | PR mode N/W Length | ECG mode CPU Length | ECG mode N/W Length
PR/ECG Sensor   | 2000               | 500                | 3000                | 1000
Data Reporting  | 2000               | 500                | 3000                | 500
Data Offloading | 1000               | 500                | 2000                | 500
Decision        | 1000               | 500                | 1000                | 500
Analytics       | 3000               | 1000               | 3000                | 1000
State/Action    | 3500               | 1000               | 3500                | 1000
P State         | 3000               | 500                | 3000                | 500
P Action        | 1000               | 500                | 1000                | 500

Table 2: Configuration of sensor devices

Sensor Type       | Tuple CPU Length | Average Interarrival
Pulse oximeter    | 2000 MI          | 10-15 sec
Electrocardiogram | 3000 MI          | 50-100 ms

Fig. 7: Edgeward module placement algorithm

Edgeward module placement is compared to the cloud-based module placement algorithm, which follows the traditional application module placement: all application modules run in the cloud's data centers, sensors transmit their data to the cloud for processing, and the results are sent back to the actuators.
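To make the placement idea concrete, the following is a minimal, simulator-independent Python sketch of the edgeward behavior described above; it does not reproduce iFogSim's actual ModulePlacementEdgewards implementation, and the device names and MIPS budgets are assumptions used only for illustration.

# Simplified sketch of edgeward module placement (not iFogSim's actual code).
# Devices are ordered from the network edge toward the cloud; each has a MIPS budget.

def edgeward_placement(path, modules):
    """
    path:    list of (device_name, available_mips) from edge to cloud.
    modules: list of (module_name, required_mips) in data-flow order.
    Returns a mapping module -> device, pushing a module one level up
    whenever the current device lacks enough resources.
    """
    placement = {}
    capacity = {name: mips for name, mips in path}
    level = 0                                    # start at the edge device
    for module, required in modules:
        while level < len(path) - 1 and capacity[path[level][0]] < required:
            level += 1                           # not enough resources: move up
        device = path[level][0]
        capacity[device] -= required
        placement[module] = device
    return placement

if __name__ == "__main__":
    # Assumed three-tier topology matching the system model (mobile, proxy, cloud).
    path = [("mobile", 1000), ("proxy-server", 3000), ("cloud", 10**6)]
    modules = [("client", 500), ("monitoring", 2000), ("caregiver", 2500)]
    print(edgeward_placement(path, modules))
    # -> {'client': 'mobile', 'monitoring': 'proxy-server', 'caregiver': 'cloud'}

Cloud-based placement, by contrast, would simply assign every module to the cloud entry of the path.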

7. Results and Discussion

Using the iFogSim simulator [5], we study the CVD application's PR and ECG modes under the two module placement algorithms: cloud-based and edgeward. The simulator output for the CVD application was obtained under four system configurations. The configurations vary with respect to the number of fog edge nodes (departments) and the number of mobiles within each department: Config1: 1d/2m, Config2: 2d/4m, Config3: 3d/4m, Config4: 4d/5m. We study the reporting Application Loop Delay (ALD), which is the delay of analyzing the patient's PR/ECG, classifying the patient's state, reporting the important statistical information regarding the patient's health condition to the caregivers, and finally instructing the patient about fast, proper treatment. The PR/ECG reporting loop includes the PR, Client, Monitoring, Caregiver, Client, and Display modules. We also study the total network usage and the energy consumption of the physical system nodes. Analyzing the simulator outputs (Figure 8), offloading computations from the centralized cloud to the distributed edge fog nodes within the vicinity of the data sources reduces the ALD of PR mode by up to 86% and of ECG mode by up to 80%.

Fig. 6: Stages of CVD application

Fig. 8: Application Loop Delay ALD for PR and ECG modes under fog and cloud application module placement

ECG ALD is greater than PR ALD under both the edgeward and the cloud architectures. As shown in Figure 9, fog reduces network usage compared with the cloud-based architecture; in addition, ECG mode consumes more network resources than PR mode. Figure 10 presents the energy consumption of the various datacenters under PR mode. Figure 11 shows that the cloud deployment consumes more energy than the fog deployment; in addition, ECG mode consumes more computational energy than PR mode. Figure 12 presents the application loop delay of the fog-based module placement under various data arrival rates for both PR and ECG modes. Figure 13 presents the delay of the cloud deployment architecture under various data arrival rates for both PR and ECG modes. Under Config2, Figure 14 shows that fixing the percentage of PR mode patients at 100% while increasing the number of critical-state ECG patients from 10% to 100% increases the application loop delay for both PR and ECG modes of operation.


Fig. 9: Total network usage for PR and ECG modes under fog versus cloud module placement

Fig. 10: Fog nodes' energy consumption with respect to cloud datacenter energy consumption

Fig. 13: Cloud based ALD for various data interarrival rates

Fig. 14: Fog versus cloud ALD under various percentages of critical-state patients

Finally, Figure 15 shows that, under various ECG interarrival rates, the CVD application modules are shifted upward in the hierarchy to fog nodes with more powerful processing capabilities, which results in different application delays for each situation.

8. Conclusion and Future Work

Fig. 11: Cloud’s datacenter energy consumption under fog versus cloud based modules placement

Fig. 12: Fog based ALD for various data interarrival rates

In this work, we introduce the CVD application as a fog computing critical control task, with two modes of operation: PR mode and ECG mode. Machine-learning analysis is conducted to predict and classify the patient's state. According to the patient's state, analyzed feedback is forwarded to control an analgesia infusion pump connected to the patient.

Fig. 15: Fog based ALD under various ECG inter arrival rates


Edgeward module placement is simulated using iFogSim and compared to cloud-based module placement. The results show that placing the cardiovascular application modules on resource-constrained nodes closer to the data sources greatly reduces the application's loop delay with respect to cloud-based module placement. A loop delay reduction of up to 86% for PR mode and 80% for ECG mode has been reported, in addition to a reduction in total network usage and in the cloud datacenter's energy consumption. Thus, fog computing is able to efficiently deliver critical real-time health informatics applications while preserving patients' privacy and meeting healthcare regulations. Under edgeward module placement, the application loop delay is not directly proportional to the sensors' interarrival rates, because the modules shift among fog nodes with heterogeneous processing capabilities, which greatly affects the loop delay. In addition, increasing the number of critical-state patients increases the overall application loop delay for both PR mode and ECG mode. Toward a unified view of fog computing in healthcare, we recommend standardization of fog computing mechanisms, i.e., protocols for computational task offloading, fog connectivity, and coordination among existing resources, for improved performance and enhanced security and trust. In addition, big data analytics techniques in fog computing deserve attention as an enabler for context-aware management and decision-making systems. Therefore, our future work is a stream analytics platform at the edge, where we aim to deploy a single platform for both short-term streaming analytics and long-term batch analytics to ensure the efficiency, reliability, and fault tolerance of critical healthcare applications.


References
[1] Sharma SK, Wang X, Live data analytics with collaborative edge and cloud processing in wireless IoT networks, IEEE Access, 5, pp. 4621–4635, doi: 10.1109/ACCESS.2017.2682640, (2017).
[2] Bonomi F, Milito R, Natarajan P, Zhu J, Fog Computing: A Platform for Internet of Things and Analytics. In N. Bessis & C. Dobre (Eds.), Big Data and Internet of Things: A Roadmap for Smart Environments, pp. 169–186, Cham: Springer International Publishing, https://doi.org/10.1007/978-3-319-05029-4_7, (2014).
[3] Shehab RA, Taher M, Mohamed HK, Resource management challenges in the next generation cloud based systems: A survey and research directions. The 13th IEEE International Conference on Computer Engineering and Systems (ICCES 2018), (2018).
[4] World health rankings, http://www.worldlifeexpectancy.com/cause-of-death/coronary-heart-disease/by-country, (2018).
[5] Gupta H, Vahid Dastjerdi A, Ghosh SK, Buyya R, iFogSim: A toolkit for modeling and simulation of resource management techniques in the internet of things, edge and fog computing environments. Software: Practice and Experience, 47(9), pp. 1275–1296, doi: 10.1002/spe.2509, (2017).
[6] Kraemer FA, Braten AE, Tamkittikhun N, Palma D, Fog computing in healthcare – a review and discussion, IEEE Access, 5, pp. 9206–9222, doi: 10.1109/ACCESS.2017.2704100, (2017).
[7] Fog computing use cases, https://www.openfogconsortium.org/resources/usecases, (2017).
[8] Xu K, Li Y, Ren F, An energy-efficient compressive sensing framework incorporating online dictionary learning for long-term wireless health monitoring. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 804–808, doi: 10.1109/ICASSP.2016.7471786, (2016).
[9] Hossain MS, Muhammad G, Cloud-assisted speech and face recognition framework for health monitoring, Mob. Netw. Appl., 20(3), pp. 391–399, http://dx.doi.org/10.1007/s11036-015-0586-3, (2015).
[10] Dubey H, Yang J, Constant N, Amiri AM, Yang Q, Makodiya K, Fog data: Enhancing telehealth big data through fog computing. In Proceedings of the ASE BigData & SocialInformatics 2015, pp. 14:1–14:6, New York, NY, USA: ACM, http://doi.acm.org/10.1145/2818869.2818889, (2015).
[11] Gia TN, Jiang M, Rahmani A, Westerlund T, Liljeberg P, Tenhunen H, Fog computing in healthcare internet of things: A case study on ECG feature extraction. In 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing, pp. 356–363, doi: 10.1109/CIT/IUCC/DASC/PICOM.2015.51, (2015).
[12] Wac K, Bargh MS, Beijnum BFV, Bults RGA, Pawar P, Peddemors A, Power- and delay-awareness of health telemonitoring services: the mobihealth system case study, IEEE Journal on Selected Areas in Communications, 27(4), pp. 525–536, doi: 10.1109/JSAC.2009.090514, (2009).
[13] Chen H, Liu H, A remote electrocardiogram monitoring system with good swiftness and high reliability. Computers & Electrical Engineering, 53, pp. 191–202, https://doi.org/10.1016/j.compeleceng.2016.02.004, (2016).
[14] Masip-Bruin X, Marín-Tordera E, Alonso A, Garcia J, Fog-to-cloud computing (F2C): The key technology enabler for dependable e-health services deployment. In 2016 Mediterranean Ad Hoc Networking Workshop (Med-Hoc-Net), pp. 1–5, doi: 10.1109/MedHocNet.2016.7528425, (2016).
[15] Preden JS, Tammemäe K, Jantsch A, Leier M, Riid A, Calis E, The benefits of self-awareness and attention in fog and mist computing. Computer, 48(7), pp. 37–45, doi: 10.1109/MC.2015.207, (2015).


SESSION

HEALTH INFORMATICS, HEALTH CARE AND REHABILITATION + PUBLIC HEALTH RELATED SYSTEMS

Chair(s): TBA


Using Isochrones to Examine NICU Availability in Rural Alabama

Ben Hallihan, Myles McLeroy, Blake Wright, Travis Atkison
Department of Computer Science, University of Alabama, Tuscaloosa, AL, USA

Abstract— There is no period in a human's life in which the fragility of health is clearer than the first few months after birth, so having a healthy baby is the preeminent concern of every parent and future parent alike. For premature babies especially, access to neonatal care can be the difference between life and death. Unfortunately, many rural communities do not have access to Neonatal Intensive Care Units (NICUs). This research seeks to assess the magnitude of this access problem by creating isochrone maps that illustrate which areas of Alabama have access to NICUs. Isochrone generation software was used to create isochrone maps around every NICU available to Alabamians. The results show an extremely geographically uneven distribution of resources.

Keywords: Isochrone, Health care, NICU, Newborn, Neonatal

1. Introduction When discussing infant mortality, the issue is usually thought to be one that mainly affects underdeveloped countries, not a technologically advanced society like the United States. However, while medical technology and infrastructure make the odds of saving an infant in need much higher in the United States, an infant must have access to that technology and infrastructure if it is going to save his or her young life. The subset of infants that are most susceptible to illness and medical complications is newborn babies, especially those born prematurely. When a newborn baby needs intensive medical care, that baby is admitted to a Neonatal Intensive Care Unit (NICU). Whether a neonatal care facility is designated as a NICU is determined by a set of guidelines created by the American Academy of Pediatrics (AAP). These guidelines base designations on the complexity of care provided by a facility. The AAP defines a NICU to be “a hospital [facility] organized with personnel and equipment to provide continuous life support and comprehensive care for extremely high-risk newborn infants and those with complex and critical illness” [1]. The most common reason for a newborn baby to be admitted to a NICU is complications related to premature birth. For this reason, it is extremely important for a woman who goes into premature labor to be near a facility that is designated as a NICU. Unfortunately, this is not always the case. Most hospitals do not have a NICU, and when a woman in premature labor is being rushed to a hospital, there is no time to drive hours away to a hospital that does have a NICU. If a woman who lives in a rural area goes into premature labor, it is not likely that any hospital within a reasonable distance will have a NICU, meaning it is unlikely that the facility she is going to has the medical personnel and equipment necessary to properly care for her soon-to-be-born baby.

Although premature birth is the most common reason for a baby to be admitted to a NICU, it is not the only one. A newborn baby may have to be admitted to a NICU after he or she has already been taken home if complications or other emergency health issues arise. If a newborn baby has a medical emergency at home and 911 is called, the paramedics are going to bring that baby to the nearest hospital. Similar to the situation of a pregnant woman in premature labor, if the baby lives in a home in a rural area, the likelihood of that hospital having a NICU is low. The baby may be transferred to a hospital with a NICU after admittance to the first hospital, but by that time, it could be too late. So, it is clear that access to NICUs is important; however, what is not as clear is the magnitude of the problem of lack of access to these facilities. In this project, isochrone maps are paired with computing technology to shed light on how serious this problem really is. Isochrones are maps that display the distance a traveler can get from a certain point in any direction in a specified amount of time. (The technical definition of an isochrone is “a line on a diagram or map connecting points relating to the same time or equal times” [2], but for the purposes of this paper, the term “isochrone” will refer to the entire shape created by connecting the set of these points). Isochrones are perfect for analyzing access to services because they can display access based on time rather than based on distance. Isochrones were already being used in transportation and urban planning in the late 19th century, but the complex diagrams had to be mapped out and drawn completely by hand [3]. Now, advancements in computing technology, when paired with powerful mapping software like Google Directions, make it possible to create accurate isochrones in a matter of minutes [4]. In our case, this isochrone mapping technique allows us to see what areas of Alabama are within a certain amount of driving time of a NICU rather than simply seeing what areas are within a certain geographic distance of a NICU. It does not matter how close an infant is to a NICU if there is not the transportation infrastructure in place to get there in time. Isochrones allow us to visually represent the lack of access to NICUs for people living in rural Alabama.

2. Background It is easy to see why it is important for a woman in premature labor or with a critically ill infant to be in close proximity to a NICU, but how often do infants actually die as a result of not being close enough to the necessary care? According to a study performed in 2002 by the Department of Pediatrics at Dartmouth Medical School, the number is


not insignificant. The study was designed to analyze regional variation in the availability of neonatal care in the United States, and to determine if there is a relationship between availability of neonatal care and lowered neonatal mortality rates [5]. In the study sample, which included nearly 4,000,000 newborn babies, the average neonatal mortality rate was 3.4 deaths per 1000 births. There was a high rate of variation depending on region, and this variation correlated directly with the availability of neonatal care resources. Once a region had at least minimal access to care, there was no correlation between the quantity of care resources and lowered mortality rates. In regions that had a "very low" number of neonatologists, the neonatal mortality rate was highest. However, there was little difference in neonatal mortality rates between areas with a "low" number of neonatologists and those with a "very high" number of neonatologists. This suggests that there is a low baseline that must be met to provide adequate care, and resources beyond that baseline do not contribute to lower neonatal mortality rates [5]. The study concludes that not only are there too few NICU facilities in rural regions of the United States, but there is also a surplus of resources in urban regions. It was found that increasing the number of neonatologists was not associated with reductions in the risk of death, and also that there was no clear relationship between the number of neonatal intensive care beds and neonatal mortality [5]. This means that there is little benefit to adding more beds or more doctors to existing NICUs. As a result, additional resources should be distributed based on physical geography rather than population, assuming the ultimate goal of the distribution is to reduce infant mortality rates. According to this study, it is not necessary for a larger population to have a significantly larger number of beds or a greater number of neonatologists in order to satisfy the needs of that population; all that really matters is that a NICU is nearby at all. Another notable study, which highlights how geographic location can have a direct, negative effect on the health of rural people, was conducted at the University of Alabama at Birmingham in 2002. This study examined how utilization of sickle cell treatment services differed between rural and urban populations in Alabama. While the findings were specific to sickle cell disease, the conclusions researchers could draw were broad. According to the study, there is statistically significant evidence that geographic location can result in significantly different health care service, and this variation in service availability bears meaningful implications for medical and health care support systems [6]. This is another example of rural people being at a health disadvantage as a result of their geographic location, and it is especially relevant because the disadvantaged population in the study was rural Alabamians [6]. The reason that NICUs are not built in rural areas is not difficult to determine: it simply is not profitable. From the Dartmouth study discussed earlier, it is clear that the best way to decrease infant mortality is to upgrade hospital facilities in rural areas so that they qualify as NICUs under the AAP guidelines; unfortunately, hospitals are not able to

make decisions in this way. All in all, any hospital that builds a NICU needs to have enough patients in that facility in order for the facility to be profitable; otherwise, the hospital cannot afford to pay for the costly equipment and specialists necessary for quality neonatal intensive care. There is little to no recent data on NICU costs, but an AAP study conducted in 1999 found that "neonatal intensive care stays are among the most expensive types of hospitalizations" [7]. The study analyzed treatment costs in NICUs across the country and found that the median treatment cost for a stay in a neonatal care facility was $49,457, with an average length of stay of 49 days. There was also a very high ceiling for treatment cost, with some patients being very expensive to treat. The 90th percentile of treatment costs was $130,377, with a maximum treatment cost of $889,136 [7]. It is important to note that "treatment cost" does not refer to the amount that was charged to the family. Treatment cost is the estimated amount of money that the hospital had to spend in order to provide care to the infant. These statistics clearly show that agreeing to treat a critically ill infant can be a very costly commitment for a hospital, and there is a lot of risk involved due to the possibility of a newborn's stay being extremely expensive. Also, these costs only account for the cost of using and maintaining the equipment that is in an existing NICU, and do not account for the cost that would be associated with upgrading or purchasing all of that equipment in the first place. Ultimately, it is an unwise economic decision for a hospital to upgrade its facilities to meet the standards of a NICU if it is not in an area with a population to fill the beds of that NICU. Hospitals have tried to cut neonatal intensive care treatment costs using multiple methods, with some being more effective than others. One of the methods being considered is the implementation of cost-containment policies, which entail refusing treatment to infants below a certain birth weight. An AAP study analyzing these cost-containment policies found that their implementation nationally would result in the refusal of care to an estimated 3,400 infants each year who would have survived if given treatment, and these policies would only result in a minimal amount of savings for NICU facilities [8]. Additionally, the birth weight-based rationing plans would lead to an increase in the racial disparity of NICU deaths [8]. The study concluded that there would be little cost savings for the facilities, and that these policies would "result in denying care to many infants who would otherwise survive" [8]. A different, more effective cost-saving strategy for NICUs involves the adoption of Big Data analysis techniques in order to optimize care. There is a massive amount of data being produced by medical devices used in neonatal care, and the analysis of this data can lead to earlier detection and more efficient treatment of infants in need. A 2013 study at the University of Ontario concluded that there could be dramatic improvements in the efficiency of healthcare systems, as well as better patient outcomes, if hospitals made use of advancing data analysis technologies [9].


The ability to process physiological data streams from multiple patients across the country in real time can provide healthcare professionals with a wealth of meaningful information, and this information can be used to provide more effective care [9]. So, the evidence makes clear that the correct path forward for the optimization of healthcare systems, specifically neonatal care, requires the employment of advancing computing technologies. It is for this reason that isochrone technology is being utilized in this research to analyze the magnitude of this problem, and hopefully to determine the best path forward for solving it.

3. Methodology The first step in examining access to NICUs is the identification of where they are in the first place. In order to do this, the AAP Neonatal Intensive Care Unit database was used to determine the location of all neonatal care facilities in the state of Alabama that are designated as a NICU based on AAP guidelines. The neonatal care facilities that are included in this study are only those which are designated as level 3 or level 4 by the AAP. While there do exist other neonatal care facilities in the state, only level 3 and 4 neonatal care centers are defined as Neonatal Intensive Care Units. The search found 15 level 3 or 4 facilities in Alabama, as well as 5 facilities that are not in Alabama but are within 30 minutes of the Alabama state line [10]. This set of NICUs is therefore an exhaustive set of all the NICUs that Alabama residents could possibly have access to within a 30 minute time frame. This set is listed in Table 1. Once the set of NICUs was identified, an isochrone was generated around each NICU. The isochrone generation tool is a subset of the Transportation Analysis Tool Suite that was developed by the Digital Forensics and Control Systems Security Lab at the University of Alabama. This program takes in a set of longitude and latitude coordinates, a time interval, and a transportation method and then generates an isochrone around the input coordinates. This program interfaces with Google Directions API, and uses Google’s very accurate [11] directions software to build an isochrone map around the given point. Depending on the detail of the isochrone, upwards of 1000 queries to the Directions API may be made in the generation process, as the program determines how far the traveler could get in any direction in the set amount of time. The resulting isochrone is stored as a KML file which can then be opened and viewed. For this study, the isochrone maps used a 30 minute time interval, and the transportation mode was driving. An isochrone was generated for each of the NICU facilities found in the AAP database search. Each isochrone generated in this examination consisted of a set of 64 points connected to make a closed shape. The closed shape (the isochrone) represented the geographical area that was within 30 minutes of each facility. An example of a single isochrone is displayed in Figure 1. Once an isochrone was created around each of the NICU locations, the isochrones were synthesized into a single diagram which displays the total area of the state of Alabama that is within a 30 minute access of neonatal intensive care. This diagram is displayed in Figure 2.
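A minimal Python sketch of one way such an isochrone could be generated is shown below. It samples bearings around a NICU and, for each bearing, binary-searches for the distance at which the Google Directions driving time reaches 30 minutes. The request parameters follow the public Directions API, but the API key, the Dothan coordinates, the search bounds, and the number of probes are illustrative assumptions, and the lab's Transportation Analysis Tool Suite may differ in detail (for example, in how it emits the resulting KML).

# Illustrative sketch of 30-minute isochrone generation around one NICU.
# Uses the public Google Directions API; API key and search bounds are assumptions.
import math
import requests

API_KEY = "YOUR_GOOGLE_API_KEY"        # placeholder
LIMIT_SECONDS = 30 * 60
EARTH_RADIUS_KM = 6371.0

def destination_point(lat, lon, bearing_deg, distance_km):
    """Point reached from (lat, lon) after distance_km along bearing_deg."""
    br = math.radians(bearing_deg)
    d = distance_km / EARTH_RADIUS_KM
    lat1, lon1 = math.radians(lat), math.radians(lon)
    lat2 = math.asin(math.sin(lat1) * math.cos(d) +
                     math.cos(lat1) * math.sin(d) * math.cos(br))
    lon2 = lon1 + math.atan2(math.sin(br) * math.sin(d) * math.cos(lat1),
                             math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

def driving_seconds(origin, dest):
    """Driving duration between two (lat, lon) pairs via the Directions API."""
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/directions/json",
        params={"origin": f"{origin[0]},{origin[1]}",
                "destination": f"{dest[0]},{dest[1]}",
                "mode": "driving", "key": API_KEY}).json()
    if not resp.get("routes"):
        return None
    return resp["routes"][0]["legs"][0]["duration"]["value"]

def isochrone(nicu, n_points=64, max_km=80.0, iterations=8):
    """Return n_points (lat, lon) vertices roughly 30 driving minutes from nicu."""
    vertices = []
    for i in range(n_points):
        bearing = i * 360.0 / n_points
        lo, hi = 0.0, max_km                    # binary search along this bearing
        for _ in range(iterations):
            mid = (lo + hi) / 2.0
            candidate = destination_point(nicu[0], nicu[1], bearing, mid)
            seconds = driving_seconds(nicu, candidate)
            if seconds is not None and seconds <= LIMIT_SECONDS:
                lo = mid                        # still reachable: push farther out
            else:
                hi = mid
        vertices.append(destination_point(nicu[0], nicu[1], bearing, lo))
    return vertices

if __name__ == "__main__":
    dothan_nicu = (31.2166, -85.3632)           # approximate coordinates, assumed
    print(isochrone(dothan_nicu)[:3])

Connecting the 64 returned vertices in order yields the closed isochrone shape; repeating the process for every NICU and taking the union of the shapes gives the combined coverage map shown in Figure 2.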

Fig. 1: Shown here is a closer look at a single isochrone map around one NICU in Dothan, Alabama. This isochrone map consists of 64 points, each 30 minutes from the NICU, connected to form the outline of the shape seen above. The shaded region represents the geographical area that is within a 30 minute driving distance of the Dothan NICU.

4. Results According to the American Academy of Pediatrics NICU database, there are 20 total NICUs which Alabama residents may have access to within 30 minutes of the state line. There are 15 NICUs in Alabama, and 7 of these NICUs are in Birmingham. There are an additional 5 NICUs in surrounding states that are within 30 minutes of the Alabama state line. This set of 20 total NICUs is listed in Table 1. As shown in the table, there is only one NICU in Alabama that is designated as Level 4 by the AAP, and the rest of the NICUs are level 3. Both level 3 and level 4 designated facilities are considered NICUs, but a level 4 NICU is considered a “regional” facility. Regional NICUs have the capability to provide certain extremely specialized care that is not usually necessary in a basic level 3 NICU. A closer look at one of the generated isochrones is displayed in Figure 1. This figure shows a 30-minute isochrone created around a NICU located in Dothan, Alabama. A diagram displaying all of the isochrones generated in this project is displayed in Figure 2. The shaded regions of the map in Figure 2 represent the geographical areas of Alabama that are within 30 minutes of a NICU.


Fig. 2: The isochrone map shown here displays how far a typical driver can get from any NICU in Alabama in any direction in 30 minutes. This diagram was created by first creating a diagram similar to Figure 1 for each NICU, and then synthesizing them to form a single map. The shaded areas on the map represent the geographical areas of Alabama that are within 30 minutes of a NICU. Clearly, much of the state of Alabama is well over 30 minutes from any neonatal intensive care.


NICU List

#  | Name                                         | State       | City       | Care Level
1  | Baptist Medical Center                       | Alabama     | Montgomery | 3
2  | Brookwood Medical Center                     | Alabama     | Birmingham | 3
3  | Cooper Green Mercy Hospital                  | Alabama     | Birmingham | 3
4  | Crestwood Medical Center                     | Alabama     | Huntsville | 3
5  | Druid City Hospital Regional Medical Center  | Alabama     | Tuscaloosa | 3
6  | Huntsville Hospital                          | Alabama     | Huntsville | 3
7  | Northport Medical Center                     | Alabama     | Northport  | 3
8  | Princeton Baptist Medical Center             | Alabama     | Birmingham | 3
9  | Shelby Baptist Medical Center                | Alabama     | Alabaster  | 3
10 | St. Vincent's East                           | Alabama     | Birmingham | 3
11 | St. Vincent's West                           | Alabama     | Birmingham | 3
12 | Trinity Medical Center                       | Alabama     | Birmingham | 3
13 | UAB Medical West                             | Alabama     | Bessemer   | 3
14 | University of Alabama at Birmingham Hospital | Alabama     | Birmingham | 4
15 | USA Children's & Women's Hospital            | Alabama     | Mobile     | 3
16 | Sacred Heart Hospital                        | Florida     | Pensacola  | 3
17 | The Medical Center of Columbus Georgia       | Georgia     | Columbus   | 3
18 | Floyd Medical Center                         | Georgia     | Rome       | 3
19 | Anderson Regional Medical Center             | Mississippi | Meridian   | 3
20 | North Mississippi Medical Center             | Mississippi | Amory      | 3

Table 1: This table lists all of the NICUs in the state of Alabama, as well as all of the NICUs in states bordering Alabama that are within 30 minutes of the Alabama state line.

5. Conclusion It is clear from Figure 2 that there are many areas of Alabama that are not within 30 minutes of a NICU. The large amount of overlap in isochrones shows that the geographic distribution of resources is extremely uneven. Of the 15 NICUs in Alabama, 7 are located in Birmingham. An obvious argument for this placement is that there is a much higher population density in Birmingham than anywhere else in the state [12], and therefore, this area should have significantly more NICUs; however, analysis of previous studies shows that this argument is not valid. Recall the Dartmouth Medical School study discussed earlier. The most significant conclusion made in that study was that the best way to decrease infant mortality was to increase the geographic availability of neonatal intensive care. Additionally, it was concluded that neither the addition of more neonatologists, nor the addition of more NICU beds, correlated to a decrease in infant mortality rates [5]. Unless a population is so large that the existing number of NICU beds is not sufficient to satisfy the needs of the population, upgrading any hospital facilities in that area to NICU status would not be beneficial in decreasing infant mortality rates. For these reasons, it can be concluded that there is very likely a surplus of neonatal intensive care resources in Birmingham, and a more efficient distribution of resources would be one in which NICUs were more spread out across the state. Obviously, it is not practical to actually move existing neonatal care

facilities to rural areas, but the knowledge of this surplus should enlighten future decisions about where neonatal care upgrades would be most beneficial. If the goal is to reduce infant mortality, then future NICU construction should take place in hospitals that are not in close proximity to any existing NICUs. The issue is in how to get hospitals in rural areas to upgrade their facilities. As mentioned earlier, neonatal intensive care is very expensive. The initial purchasing of NICU medical equipment is expensive on its own, and then the usage of this equipment and the continual payment of neonatologists makes the long-run cost even greater. As long as neonatal care is this expensive, it is unrealistic to expect rural hospitals to upgrade their facilities without help. The only clear way to encourage rural hospitals to upgrade their facilities to NICU status is to alleviate the NICU costs these rural hospitals would be faced with if they did these upgrades. One of these ways, which was examined in background research, is the use of Big Data analysis techniques to increase the efficiency of NICU treatment systems. As technology helps decrease the costs of neonatal treatment, the barriers to providing quality NICU treatment will be greatly lowered. This will make it easier for rural hospitals to handle treatment of infants in need of intensive care. While the use of data analysis techniques is a good longterm solution, there are infants in need of care right now that do not have time to wait for technology to advance.


A more immediate plan to alleviate this issue would be the implementation of government incentives for the upgrading of neonatal care facilities in rural areas. This would help alleviate the costs of care for rural hospitals, and therefore allow these hospitals to treat critically ill infants who they would not be able to treat otherwise. Another solution, which is less complex than government incentives, is simply the education of hospitals and people in general about this problem. The fact that the existence of NICUs in rural areas is so vital to effectively decreasing infant mortality is not obvious, and if people are not aware of this fact, they are not as likely to work towards alleviating this issue. If donors or charities are made aware of how much help their money could do in these situations, they are much more likely to donate in a way that makes upgrading neonatal care facilities possible for rural hospitals. No matter how you go about increasing access to NICUs, our research makes it clear that this is a serious issue, and past literature shows how serious the implications of ignoring this issue would be. A critically ill infant is much more likely to die if the baby is part of a rural family, and until the problem of lack of NICU access is addressed, this will continue to be the case.

6. Future Works There are plenty of other availability issues that could be visually represented with the application of this tool. Some examples include campus parking for colleges, resource availability for the elderly, and transportation availability for veterans. For college parking, this technology can help provide illustrations of how easy or hard it is to reach classrooms or other buildings from various parking lots around universities. On the elderly side, this tool allows the user to see what resources are available in close distances, such as how far they are from a grocery store, or even a hospital in case of medical emergencies. For disabled veterans, this tool can be used to analyze access to transportation services that help them get around. On the side of NICU access, a future application of this tool could be helping place NICUs in areas that have the greatest number of people without access to care. Rather than building NICUs in crowded cities like Birmingham, where there are already many NICUs in close proximity, planning out locations for the new NICU facilities using the isochrone tool would allow for citizens in rural areas throughout the state to have more easily-available neonatal care. As an example, say we wanted to add a NICU to the southern part of rural Alabama. Using the tool, we could give a location option based on how much of the state this NICU would reach within thirty minutes. The same could be done in the northwestern part of Alabama to help provide care to a greater geographical area. Additionally, this technique could be applied to states across the country. This could help alleviate the major issue of current NICU facilities not being close enough to rural parents and families in need.

References
[1] "Levels of neonatal care," Pediatrics, vol. 114, no. 5, pp. 1341–1347, 2004. [Online]. Available: https://pediatrics.aappublications.org/content/114/5/1341
[2] K. Desai, "Isochrones: Analysis of local geographic markets," Antitrust & Competition Review, pp. 26–32, 2010.
[3] J. Riedel, "Anregungen für die Konstruktion und die Verwendung von Isochronenkarten," Wiesban, vol. 19, no. 10, p. 585, 1913.
[4] D. O'Sullivan, A. Morrison, and J. Shearer, "Using desktop GIS for the investigation of accessibility by public transport: an isochrone approach," International Journal of Geographical Information Science, vol. 14, no. 1, pp. 85–104, 2000. [Online]. Available: https://doi.org/10.1080/136588100240976
[5] D. C. Goodman, E. S. Fisher, G. A. Little, T. A. Stukel, C.-h. Chang, and K. S. Schoendorf, "The relation between the availability of neonatal intensive care and neonatal mortality," New England Journal of Medicine, vol. 346, no. 20, pp. 1538–1544, 2002, PMID: 12015393. [Online]. Available: https://doi.org/10.1056/NEJMoa011921
[6] J. Telfair, A. Haque, M. Etienne, S. Tang, and S. Strasser, "Rural/urban differences in access to and utilization of services among people in Alabama with sickle cell disease," Public Health Reports, vol. 118, no. 1, pp. 27–36, 2003, PMID: 12604762. [Online]. Available: https://doi.org/10.1093/phr/118.1.27
[7] J. Rogowski, "Measuring the cost of neonatal and perinatal care," Pediatrics, vol. 103, 1 Suppl E, pp. 329–35, 1999.
[8] J. W. Stolz and M. C. McCormick, "Restricting access to neonatal intensive care: Effect on mortality and economic savings," Pediatrics, vol. 101, no. 3, pp. 344–348, 1998. [Online]. Available: https://pediatrics.aappublications.org/content/101/3/344
[9] C. McGregor, "Big data in neonatal intensive care," Computer, vol. 46, no. 6, pp. 54–59, June 2013.
[10] American Academy of Pediatrics, "NICUSearch." [Online]. Available: https://www.aap.org/en-us/advocacy-and-policy/aap-healthinitiatives/nicuverification/Pages/NICUSearch.aspx
[11] V. Ceikute and C. S. Jensen, "Routing service quality – local driver behavior versus routing services," in 2013 IEEE 14th International Conference on Mobile Data Management, vol. 1, June 2013, pp. 97–106.
[12] US Census Bureau, "City and town population totals: 2010-2017," Feb 2019. [Online]. Available: https://census.gov/data/tables/2017/demo/popest/total-cities-andtowns.html


Ontology-based Model for Interoperability between openEHR and HL7 Health Applications

Cristiano André da Costa, Matheus Henrique Wichman, Rodrigo da Rosa Righi

Adenauer Corrêa Yamin

Software Innovation Laboratory - SOFTWARELAB
Applied Computing Graduate Program
Universidade do Vale do Rio dos Sinos - UNISINOS
São Leopoldo, Brazil 993022–750
Email: [email protected], [email protected], [email protected]

Computer Science Graduate Program
Universidade Federal de Pelotas - UFPEL
Pelotas, Brazil 96075–630
Email: [email protected]

Abstract—Information Technology (IT) applied to health care can bring many benefits; however, its adoption remains low. One of the barriers that prevent IT adoption is the lack of interoperability among health care systems. Since there are many standards for Electronic Health Record (EHR) systems nowadays, methods to exchange information among them are needed. In this context, this article proposes a method to retrieve clinical data created by independent health systems using the same input query. Different from related work, the proposed method is based on mapping health records to OWL ontologies, enabling us to use rules and features of the OWL language to create interoperability equivalences between systems. Our approach was evaluated by querying an ontology fed with mixed openEHR and HL7 records. The results are encouraging, bringing the benefits of using a single entry point to reach, in a transparent way, different models of health applications. Index Terms—Interoperability; Electronic Health Record; Ontology; Health Informatics.

I. INTRODUCTION

Information Technology (IT) applied to health care can bring many benefits, including supporting physicians' decisions, cost savings, greater engagement of patients during their treatment, and a reduction in the human error rate during procedures. However, the adoption of IT tools in health remains low, especially in underdeveloped countries [1], [2]. Several factors contribute to preventing health systems implementation. We can group these factors into human, financial, and technical barriers [2], [3]. Human barriers include lack of awareness of the importance of using Electronic Health Records (EHR), no experience with computer applications, negative impressions about Health Information Systems (HIS), as well as other factors. Financial barriers are due to the high initial cost of EHR solutions and the lack of investment in infrastructure changes, support, training, and maintenance. The technical barriers are related to the system implementation, like complicated interfaces, lack of adaptation to mobile devices, and, mainly, the absence of interoperability with other systems in use (e.g., disease-specific surveillance systems). Some efforts have been made to address the interoperability issue [4], [5]. Initially, a possible solution comprised the use of defined sets of electronic messages, transmitted using EDIFACT1 or HL72 [6].

These messages were used to support service administration, billing, and communication, and to help in public health measures. However, few messages have been developed to treat the health care process itself, and the existing ones proved not to be flexible enough to represent more specific cases. Given the limitations of the EDIFACT message approach, some efforts were conducted to create an EHR in which data entry is standardized and the information offered is complete, comprehensive, unambiguous, and linked to other sources [5]. A dual-model architecture has been adopted to satisfy modern EHR requirements [6]. Following this approach, generic properties of the health record are moved to the Reference Model (RM), while specific information needed by each profession, specialty, or service is described through Archetypes [6]. This architecture model is a requirement to solve the widely acknowledged challenge of semantic interoperability, which is the ability of different clinical systems to share health record data while preserving their meaning faithfully. The dual-model architecture helps to achieve this kind of interoperability by making the underlying structure explicit to other systems, through the Reference Model. Currently, the dual-model architecture is used in the design of the openEHR Information Architecture, the CEN 13606 specification, and HL7 Templates. People change their health care provider from time to time, and this transition can lead to a fragmented EHR, since different providers may use diverse systems, causing the distribution of clinical data among several databases [4]. Allowing one HIS to interoperate with another HIS makes it possible to keep the patient's EHR consistent across health care provider changes. Also, having access to the entire patient history can help physicians investigate disease causes and identify patterns by analyzing old observations. Although clinical data sometimes is shared during health provider changes, it can come in a different format, preventing this data from being merged with the EHR in the new health care provider's system. In this context, this article proposes a method to retrieve
1 http://www.unece.org/cefact/edifact/welcome.html 2 http://www.hl7.org


clinical data created by independent health systems using the same query. To accomplish that, we first mapped health records to ontologies [7]. In particular, we used the Ontology Web Language (OWL)3 to make explicit concept equivalences in each system. From this point, using a reasoner, we can fetch clinical data from different systems through a single SPARQL query. To evaluate the proposed method, we retrieve blood pressure observations from an openEHR health record and an HL7 health record, taking as reference the openEHR specification. Thus, the proposed method can be extended to retrieve any EHR structure. The remainder of this article is organized as follows. In Section II we present our method to map health records to ontologies. Section III explains how equivalences are created in OWL. Section IV presents results from our experiment. Section V discusses similar approaches. Section VI draws conclusions and outlines future work.

II. MAPPING HEALTH RECORDS TO ONTOLOGIES

To store health records in a knowledge base, we have to translate the EHR into an ontology. With ontologies, instances from the health record are represented by individuals whose types correspond to OWL classes. The conversion process has to take into account the architecture of the EHR system used: whether clinical concepts and common structures share the same layer (single-model architecture) or sit at separate levels (dual-model architecture). Despite the differences between architectures, both models can be expressed using the Unified Modeling Language (UML), making clear the concepts and the relations between them [8]. Given the UML representation, the methodology developed by Noy and McGuinness [9] can be applied to generate the ontological representation for each UML diagram. In that case, we created OWL classes following the UML representation. The ontology terms were kept as close as possible to the ones used in UML. Moreover, we defined cardinalities and property ranges by creating property restrictions on the OWL classes.

A. Single-model Health Records

EHR systems based on a single-model architecture (i.e., HL7) require only one set of ontologies to represent the health record semantically. The ontology we need derives from the HL7 Reference Information Model (RIM), which is the foundation of the HL7 standard. The HL7 RIM is composed of five core classes: Act, ActRelationship, Participation, Role, and Entity. Other classes extend these to specialize concepts. The RIM ontology uses these terms as top-level classes. After the conversion of an EHR system to an ontology, individuals are created to store the data. Some efforts have been made to translate the RIM specification to an ontology. Initially, Bhavana Orgun developed an RDFS ontology following the HL7 Version 3.

Afterward, Helen Chan4 extended Orgun's work by covering a broader range of artifacts from Version 3. Also, Zhang et al. [10] have translated the HL7 RIM to an ontology. Zhang's approach differs from the others in that RIM concepts were grouped into three root classes: (1) Model - representing the six backbone classes of RIM -, (2) Data - grouping standard data types -, and (3) Expression - designed to fill the gap between static knowledge and the knowledge generated by inferences. The ontology used here derives from both Chan's and Zhang's approaches. We extracted terms from the HL7 Version 3 specification and created corresponding OWL classes following the original hierarchy, as Figure 1 shows. The OWL classes created were distributed into two ontologies: a RIM ontology grouping the core classes and a Data Types ontology accommodating the data type definitions. The Expression group was not used since it is not necessary to represent rules as individuals.
3 https://www.w3.org/TR/owl-guide

Fig. 1. RIM ontology (on left) and data types ontology (on right).

B. Dual-model Health Records

When the organization of an EHR system follows the dual-model architecture (e.g., openEHR), both the RM and the archetypes must be translated. The translation of a dual-model EHR system generates one or more ontologies representing the RM and another group of ontologies for each archetype employed. In that case, individuals will be distributed across ontologies. The translation of the RM follows an approach similar to the one used for HL7. The openEHR RM is divided into five documents (e.g., data types, data structures, and support). Thus
4 https://www.w3.org/wiki/HCLS/ClinicalObservationsInteroperability/HL7CDA2OWL.html


the translated version is composed of five ontologies, each one comprising a single document. Since Román et al. [11] have already translated the RM, we extended their work by revising the existing ontologies and bringing them up to date with the latest openEHR RM version. Figure 2 shows the hierarchy of the OWL version of the openEHR RM. Although the OWL language does not have the concept of generic types found in many programming languages, we used subclasses as an alternative to properly represent the restriction imposed by a generic type. The openEHR RM has several classes, like VERSIONED_OBJECT, whose behavior depends on the type T to which they are bound. These classes are represented in OWL by creating a class in which the ranges of the properties that depend on the type T are set to owl:Thing. Possible values for that generic type are then represented through subclasses which constrain the ranges to their final types. In our example, the data property of the VERSIONED_OBJECT_T class allows any descendant of owl:Thing as a value, while the VERSIONED_COMPOSITION class overrides that restriction by allowing only COMPOSITION individuals.
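As an illustration only (this is not the authors' translator output; the namespace IRI and the exact class and property names are assumptions), the range-narrowing pattern just described could be asserted with Python's rdflib roughly as follows.

```python
from rdflib import Graph, Namespace, BNode, RDF, RDFS, OWL

# Hypothetical namespace for the translated openEHR RM ontology.
RM = Namespace("http://example.org/openehr/rm#")

g = Graph()
g.bind("rm", RM)

# Generic parent: the 'data' property may point to anything (owl:Thing).
g.add((RM.VERSIONED_OBJECT_T, RDF.type, OWL.Class))
g.add((RM.data, RDF.type, OWL.ObjectProperty))
g.add((RM.data, RDFS.range, OWL.Thing))

# The subclass narrows the generic type: VERSIONED_COMPOSITION restricts
# rm:data to COMPOSITION individuals via an owl:allValuesFrom restriction.
restriction = BNode()
g.add((restriction, RDF.type, OWL.Restriction))
g.add((restriction, OWL.onProperty, RM.data))
g.add((restriction, OWL.allValuesFrom, RM.COMPOSITION))

g.add((RM.VERSIONED_COMPOSITION, RDF.type, OWL.Class))
g.add((RM.VERSIONED_COMPOSITION, RDFS.subClassOf, RM.VERSIONED_OBJECT_T))
g.add((RM.VERSIONED_COMPOSITION, RDFS.subClassOf, restriction))

print(g.serialize(format="turtle"))
```

The anonymous owl:Restriction node is the standard OWL idiom for constraining a property's values in one subclass without affecting the generic parent.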


The translation of openEHR archetypes to OWL is based on the method presented in Leonardo Matías's Ph.D. thesis [12]. Matías's approach targets the definition of archetypes. However, the current implementation is only capable of translating a limited set of RM structures, so we extended it to cover more structures. We also developed another translator to convert instances of archetypes into OWL individuals. The archetype translator begins by taking the definition of the archetype written in the Archetype Definition Language (ADL). For example, to translate the definition of a blood pressure archetype (shown briefly in Figure 4), the translator extends the RM class OBSERVATION and maps ADL restrictions to OWL property restrictions, forcing property ranges to only allow instances of classes extended within the archetype ontology instead of their original ranges.

Fig. 3. Archetype translated to OWL

Fig. 2. OWL version of the openEHR RM

This process is repeated for each level, recreating the archetype hierarchy. The node identifier (e.g., at0001) is represented in the translated ontology through an OWL annotation property NodeID. These identifiers help to create a precise mapping between the ADL and the resulting ontology. Figure 3 shows the ontology resulting from the archetype translation. As mentioned earlier, the method extends the OBSERVATION class to create the Blood_Pressure class, whose name was extracted from the comment on the corresponding ADL node. That class contains restrictions on its data and protocol properties allowing only instances of the extended versions of the HISTORY and ITEM_TREE classes.

OBSERVATION[at0000] matches {    -- Blood Pressure
    data matches {
        HISTORY[at0001] matches { ... }
    }
    protocol matches {
        ITEM_TREE[at0011] matches { ... }
    }
}

Fig. 4. Fragment from an archetype in ADL
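For illustration only, and again with assumed namespaces and IRIs rather than the authors' actual ontology identifiers, the following minimal rdflib sketch shows the kind of class the archetype translator is described as producing for the fragment above: an OBSERVATION subclass carrying the NodeID annotation.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS, OWL

RM = Namespace("http://example.org/openehr/rm#")                   # assumed RM ontology
BP = Namespace("http://example.org/archetypes/blood_pressure#")    # assumed archetype ontology

g = Graph()
g.bind("rm", RM)
g.bind("bp", BP)

# Annotation property that keeps the ADL node identifier on each generated class.
g.add((BP.NodeID, RDF.type, OWL.AnnotationProperty))

# OBSERVATION[at0000] from the ADL fragment becomes a Blood_Pressure subclass
# of the RM OBSERVATION class, annotated with its node identifier.
g.add((BP.Blood_Pressure, RDF.type, OWL.Class))
g.add((BP.Blood_Pressure, RDFS.subClassOf, RM.OBSERVATION))
g.add((BP.Blood_Pressure, BP.NodeID, Literal("at0000")))

print(g.serialize(format="turtle"))
```

The NodeID annotation is what later lets the instance translator look up the right class for a given archetype node.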

The other translator we developed takes as input an instance of an archetype, or any other RM structure, and converts it into OWL individuals. The instance translator sends the instance through a series of serializers, each of which is responsible for translating one type of RM structure. Once the serializer for the type being converted is found, it creates a triple for every property of the instance. If the property value is a primitive data type (e.g., number, date, string), it is mapped to an equivalent data type in OWL. Otherwise, when the property value is an instance of another type, the


serializer searches for the serializer responsible for that type and delegates the instance to it. The conversion process starts at the first level of the instance; when a nested structure is present, the translator descends through all levels until it reaches the deepest one, meaning that there are no more levels ahead. It then returns an individual which is used as the value in the enclosing serializer, and so on. When converting instances of archetypes, the instance translator depends on the archetype translator to produce an ontology for the archetype. Then, instead of creating individuals of the types corresponding to RM classes, it creates individuals whose types are classes from the archetype ontology. To find the needed class in the ontology, the translator searches for a class whose NodeID annotation matches the instance's node identifier.

III. MAKING EXPLICIT EQUIVALENCES

Having health records represented by ontologies, we need to establish a connection between elements from different health systems. Mapping equivalences would allow us to query several sources at the same time using a single SPARQL query. The query to fetch data would employ terms from just one ontology, taken as a reference, and the reasoner would do the work of finding results which are equivalent to the terms of the referenced ontology, based on the equivalence statements embedded in each class or property. OWL provides the properties owl:equivalentClass and owl:equivalentProperty to indicate that a particular class or property is equivalent to another one in a second ontology. The owl:equivalentClass property indicates to the reasoner that the current class and the one referenced in the property have the same class extension, meaning that they have the same set of individuals. Through this, querying individuals belonging to a class also retrieves individuals of equivalent types. Although the owl:equivalentClass property is enough to state which classes are equivalent to each other, it only works for ontologies that allow a one-to-one mapping between classes. Because of the differences between HL7 and openEHR, that mapping is not directly possible; one of these differences is in the way each standard stores clinical data. For example, to record a blood pressure observation, we would use the archetype corresponding to that clinical concept (see Figure 4), which would lead us to create an individual of type Blood_Pressure in OWL, while in HL7 we would use an Observation individual to represent the same concept. However, the Observation class is used for any clinical concept in HL7, and the only means to distinguish the concept represented by an Observation is through its properties. Therefore, the Blood_Pressure and Observation classes are not semantically equivalent, since the latter could represent a broad range of information while the former is specific to blood pressure observations. To narrow down this relation, we can use rules to classify HL7's Observations, so that a one-to-one mapping between these classes becomes possible.

The Semantic Web Rule Language - SWRL [13] was designed to increase the expressivity of an ontology through a set of inference rules. It works by defining an antecedent and a consequent: if the conditions specified in the antecedent hold, then the conditions in the consequent must also hold. We can use SWRL rules to increase the expressivity of the HL7 ontology by classifying individuals based on their properties, allowing a mapping with the openEHR ontology to be defined. An approach to achieve the mapping between blood pressure observations from openEHR and HL7 is to make Observation individuals also belong to a custom OWL class, like Blood_Pressure_Observation, created only for the mapping. A point of departure for developing the needed SWRL rule is to investigate how an Observation is used to store blood pressure records. The Observation class is a specialization of the Act class, so it inherits the attributes moodCode and code. The moodCode attribute defines the stage of the act, specifying whether the act is an activity that has happened, can happen, is happening, is intended to happen, or is requested/demanded to happen, while the code attribute determines the clinical concept represented by the act by linking its value to an external coding system, like the LOINC terminology [14]. Figure 6 illustrates the attributes of an instance of Observation recording blood pressure. As shown, the moodCode attribute receives a CS object containing the code for an event that has already happened, the EVN code, while the code attribute is assigned a CS object with the LOINC code for blood pressure. The SWRL rule in Figure 5 checks the values of these attributes and, when they match, assigns the individual the OWL class Blood_Pressure_Observation. After the classification of individuals through the SWRL rule, we have a one-to-one mapping between openEHR and HL7. Thereafter, we can use the owl:equivalentClass property to connect openEHR's Blood_Pressure class with HL7's Blood_Pressure_Observation class, creating an equivalence between these two EHR standards.

IV. EVALUATION

The method presented here attempts to address the interoperability issue between different EHR standards by using features of the OWL language. To evaluate it, we begin with the problem that physicians sometimes need to investigate the entire EHR of a patient. However, for several reasons, this EHR can be fragmented across several institutions and in different formats, making it necessary to develop specific queries for each EHR system. Given that scenario, the evaluation of our approach aims to verify whether the SWRL rules and the OWL equivalence constructs are enough to allow a single query to fetch data from different systems. We carried out the evaluation by mapping the data to individuals in each EHR system ontology and developing the query to fetch these individuals. If the query fetches data from a specific patient, we also need to establish a uniform way to


rim:Observation(?obs) ∧ rim:actClassCode(?obs, ?cs) ∧ rim:actMoodCode(?obs, ?md) ∧ rim:actCode(?obs, ?cd) ∧
datatypes:_code(?cs, ?cs_code) ∧ swrlb:equal(?cs_code, "OBS") ∧
datatypes:_code(?md, ?md_code) ∧ swrlb:equal(?md_code, "EVN") ∧
datatypes:_code(?cd, ?cd_code) ∧ datatypes:_displayName(?cd, ?cd_display_name) ∧ datatypes:_codeSystemName(?cd, ?cd_system_name) ∧
swrlb:equal(?cd_code, "55284-4") ∧ swrlb:equal(?cd_display_name, "Blood pressure systolic and diastolic") ∧ swrlb:equal(?cd_system_name, "LN")
=⇒ Blood_Pressure_Observation(?obs)

Fig. 5. SWRL rule

Fig. 6. Representation of a blood pressure Observation

identify patients since EHRs have different ways to express to whom the data is related. The data for the evaluation is composed of blood pressure and heart rate readings of three patients extracted from the MIMIC-III [15] database. After extraction, the data was normalized and divided into two tables: one for blood pressure and another for heart rate. Table I shows the blood pressure readings and Table II contains the heart rate measurements extracted. Diastolic and systolic values are both in millimeters of mercury (mmHg). We chose to represent the first five rows of each table in openEHR, while the rest we represented in HL7.

Patient ID   Date               Systolic   Diastolic
188          2161-07-02 17:45   117        60
188          2161-07-03 02:00   130        68
188          2161-07-03 13:00   127        61
711          2185-03-22 14:10   152        94
711          2185-03-22 21:52   130        73
711          2185-03-23 07:00   121        73
1709         2118-01-04 12:02   168        102
1709         2118-01-04 14:15   120        88
1709         2118-01-04 17:00   136        81

TABLE I. BLOOD PRESSURE READINGS

Patient ID   Date               Heart Rate (BPM)
188          2157-01-12 02:30   75
188          2157-01-12 03:30   76
188          2157-01-12 11:00   81
711          2184-05-25 22:00   114
711          2184-05-26 04:45   91
711          2184-05-26 11:00   76
1709         2115-04-28 12:00   79
1709         2115-04-28 18:30   87
1709         2115-04-29 04:00   75

TABLE II. HEART RATE READINGS

After preparing the data, we manually created the individuals for each record using the Protégé tool [16]. We placed the individuals in a repo ontology which acts as a repository for all the health records, as illustrated in Figure 7. That ontology imports the OWL version of HL7 (shown in Figure 1) and the ontologies corresponding to the openEHR RM. To represent blood pressure and heart rate readings, we translated to OWL the archetypes openEHR-EHR-OBSERVATION.blood_pressure.v1 and openEHR-EHR-OBSERVATION.pulse.v1, both available in the openEHR Clinical Knowledge Manager - CKM [17]. The resulting ontologies from the archetype translation were also imported into the repo ontology.

Fig. 7. Individuals in the repo ontology

Before the owl:equivalentClass property can be used, the SWRL rule to classify HL7 individuals must be executed. The rule for blood pressure observations is the same as the one represented in Figure 5; for heart rate readings a very similar rule is used, checking for the appropriate


LOINC code and assigning the custom OWL class Heart_Rate_Observation instead. With individuals classified, the owl:equivalentClass property can be used to connect the blood_pressure:Blood_Pressure class with the Blood_Pressure_Observation class, or the pulse:Heart_Heart_beat class with the Heart_Rate_Observation class. Although it is possible to fetch records from different sources using only the owl:equivalentClass property, physicians frequently need to query a single patient, but HL7 and openEHR use different methods to identify patients. Given that problem, we can again use SWRL rules to create a uniform means of identifying the patient whether openEHR or HL7 is used. openEHR stores the identifier of the patient in the subject property of the ENTRY class, the parent of OBSERVATION. This identifier is an instance of the wrapper class PARTY_IDENTIFIED or PARTY_SELF. The identifier itself is recorded in a way which depends on the wrapper class used; since we used the PARTY_IDENTIFIED class, the identifier is an instance of the DV_IDENTIFIER class added to the identifiers list. Figure 8 gives an example of a rule to extract that identifier from the subject property and copy it to the data property patientID.
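As a small illustrative sketch only (the namespace IRIs here are assumptions, not the ontology's real identifiers), asserting such an equivalence with rdflib amounts to a single triple per mapped concept.

```python
from rdflib import Graph, Namespace, OWL

# Assumed namespaces for the archetype ontology and the custom mapping classes.
BP = Namespace("http://example.org/archetypes/blood_pressure#")
MAP = Namespace("http://example.org/mapping#")

g = Graph()
# Once the SWRL rules have classified the HL7 individuals, one equivalence
# axiom links the openEHR archetype class to the custom HL7 mapping class.
g.add((BP.Blood_Pressure, OWL.equivalentClass, MAP.Blood_Pressure_Observation))
```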

Fig. 10. Fetching observations from openEHR and HL7 at the same time


ehr:ENTRY(?e) ∧ ehr:subject(?e, ?pid) ∧
cprm:identifiers(?pid, ?idf) ∧ dtrm:id(?idf, ?id)
=⇒ patientID(?e, ?id)

Fig. 8. Rule to add the patientID property to openEHR individuals

HL7 follows a similar approach to the one used in openEHR. However, the patient identifier is stored together with other personal identifiers, like the identifier of the doctor, the nurse or any other person involved. This structure makes it hard to develop a rule covering all representations of the patient identifier, since there is a large number of fields that the rule has to check. Figure 9 gives an example of a rule to extract the identifier and put it in the patientID property. The observations used in our evaluation only contain identifiers of the patient; therefore, in real situations this rule would need to be expanded.

rim:Act(?a) ∧ rim:participation(?a, ?p) ∧
rim:participationRole(?p, ?pr) ∧ rim:id(?pr, ?id) ∧
datatypes:_extension(?id, ?e)
=⇒ patientID(?a, ?e)

Fig. 9. Rule to add the patientID property to HL7 individuals

Through the patientID property, we now have a uniform way to identify the patient, regardless of the EHR standard used. This property and the equivalences created earlier allow us to query data distributed in different formats using a single SPARQL query, as Figure 10 demonstrates. In this example, the query fetches all blood pressure observations for all patients and, as expected, records from HL7 as well as from openEHR are retrieved. To retrieve data for a specific patient, we can use the value of the patientID property to filter them. Figure 11 gives an example of a query to fetch observations for just one patient (e.g., patient 711). As explained, this patient has readings represented in HL7 but also others in openEHR. Because of the equivalences we created, both types are returned.

Fig. 11. Fetching observations from a specific patient
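The SPARQL queries themselves appear only as figures (Figures 10 and 11). Purely as an illustration of the idea, with made-up namespace IRIs, property names and file name, and assuming the reasoner's inferences have already been materialized into the graph, such a query could be issued with rdflib as follows.

```python
from rdflib import Graph

# Assumed file containing the repo ontology, the individuals, and the triples
# materialized by the reasoner (so the equivalences are already applied).
g = Graph()
g.parse("repo_inferred.owl", format="xml")

# Hypothetical namespaces and property names; the real ontology's IRIs differ.
query = """
PREFIX bp: <http://example.org/archetypes/blood_pressure#>
PREFIX ex: <http://example.org/mapping#>

SELECT ?obs ?patient
WHERE {
    ?obs a bp:Blood_Pressure .      # with inferences materialized, this also
                                    # matches the equivalent HL7 individuals
    ?obs ex:patientID ?patient .
    FILTER (?patient = "711")
}
"""

for row in g.query(query):
    print(row.obs, row.patient)
```

Dropping the FILTER line corresponds to the Figure 10 query (all patients), while keeping it mirrors the patient-specific query of Figure 11.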

V. RELATED WORK

Some efforts have already been made to address the interoperability between heterogeneous health systems. The work done so far is based on mapping the health records to ontologies and, with the help of custom tools, correlating the information. Bicer et al. [18] attempt to create a mapping between HL7 v2 and HL7 v3 messages. Since version 2 of HL7 uses plain-text messages while version 3 is based on the RIM specification, the mapping to ontologies helps to create a consistent representation of both types. An OWL mapping tool called OWLmt was developed to support the mapping. Martínez-Costa et al. [19] focus on the interoperability between archetypes from openEHR and ISO EN 13606. As both


standards follow the dual-model architecture, the solution presented is based on the transformation of openEHR archetypes into ISO EN 13606 and vice versa. The conversion is made by combining Semantic Web and Model-Driven Engineering technologies. More recently, Roehrs et al. [4] propose a model for interoperability focused on Personal Health Records (PHR). Their proposal is centered on the use of Natural Language Processing (NLP) techniques for converting different formats of data, including openEHR and HL7. As a final result, they also store the data in an ontology.

VI. CONCLUSION

IT applied to health care has the potential to substantially improve the group of processes involved in this area. These improvements range from better resource management and better support for decision making to, most importantly, improvement in the quality of patient treatment. To accomplish these goals, information exchange between health systems is essential to offer physicians, at an acceptable speed, all the data they need. This article presented a method to achieve interoperability between heterogeneous health systems through the use of ontologies and rules. Our approach allows retrieving data from different health care systems at once, using only a single query. This is possible because we use OWL features to map equivalences between standards. Our approach is divided into two steps: (1) the translation of the HIS structure to OWL and (2) the process of creating bindings between similar structures of each system. To create the bindings, we use the equivalence constructs provided by OWL, in addition to SWRL rules when these constructs cannot be used directly. These bindings are used by the reasoner to infer additional knowledge not explicit in the ontology. Although this paper focuses on data extraction from health systems, the proposed method could be applied to many scenarios involving systems not designed to interface with each other. The evaluation showed that it is viable to use ontologies to retrieve information from different sources at the same time. However, due to the semantic meaning of the equivalence constructs, the reasoner deduced some incorrect assertions, which could lead to wrong information being retrieved when the query differs from the initial purpose. In our view, this limitation can be overcome by the creation of rules and further specification of the repo ontology. As future work, we plan to extend the interoperability to additional standards. We also plan to evaluate our proposition with larger data sets, including anonymized data from a partner hospital. Finally, we plan to extend the ontology to cover additional data from electronic health records.

ACKNOWLEDGMENTS

This work was supported in part by the Coordination for the Improvement of Higher Education Personnel - CAPES (Finance Code 001) and the National Council for Scientific and


Technological Development - CNPq (Grant Numbers 303640/2017-0 and 405354/2016-9).

REFERENCES

[1] A. Kutney-Lee, D. M. Sloane, K. H. Bowles, L. R. Burns, and L. H. Aiken, "Electronic health record adoption and nurse reports of usability and quality of care: The role of work environment," Applied Clinical Informatics, vol. 10, no. 1, pp. 129-139, 2019.
[2] A. Serrano, J. Garcia-Guzman, G. Xydopoulos, and A. Tarhini, "Analysis of barriers to the deployment of health information systems: a stakeholder perspective," Information Systems Frontiers, pp. 1-20, 2018.
[3] B. Jawhari, L. Keenan, D. Zakus, D. Ludwick, A. Isaac, A. Saleh, and R. Hayward, "Barriers and facilitators to electronic medical record (EMR) use in an urban slum," International Journal of Medical Informatics, vol. 94, pp. 246-254, 2016.
[4] A. Roehrs, C. A. da Costa, R. da Rosa Righi, S. J. Rigo, and M. H. Wichman, "Toward a model for personal health record interoperability," IEEE Journal of Biomedical and Health Informatics, vol. 23, no. 2, pp. 867-873, 2019.
[5] A. Hoerbst and E. Ammenwerth, "Electronic health records. A systematic review on quality requirements," Methods of Information in Medicine, vol. 49, no. 4, pp. 320-336, 2010. [Online]. Available: http://dx.doi.org/10.3414/ME10-01-0038
[6] D. Kalra, "Electronic health record standards," IMIA Yearbook, no. 1, pp. 136-144, 2006. [Online]. Available: http://www.schattauer.de/t3page/1214.html?manuscript=6382&L=1
[7] M. K. Traoré, "Ontology for healthcare systems modeling and simulation," in Proceedings of the 50th Computer Simulation Conference. Society for Computer Simulation International, 2018, p. 5.
[8] S. Candra and I. K. Putrama, "Applied healthcare knowledge management for hospital in clinical aspect," Telkomnika, vol. 16, no. 4, pp. 1760-1770, 2018.
[9] N. F. Noy, D. L. McGuinness et al., "Ontology development 101: A guide to creating your first ontology," 2001.
[10] Y.-F. Zhang, Y. Tian, T.-S. Zhou, K. Araki, and J.-S. Li, "Integrating HL7 RIM and ontology for unified knowledge and data representation in clinical decision support systems," Computer Methods and Programs in Biomedicine, vol. 123, pp. 94-108, 2016. [Online]. Available: http://dx.doi.org/10.1016/j.cmpb.2015.09.020
[11] I. Román, L. M. Roa, J. Reina-Tosina, and G. Madinabeitia, "Demographic management in a federated healthcare environment," International Journal of Medical Informatics, vol. 75, no. 9, pp. 671-682, 2006. [Online]. Available: http://dx.doi.org/10.1016/j.ijmedinf.2006.04.006
[12] L. L. Matías and M.-Á. Sicilia, "Combining ontologies and rules with clinical archetypes," Ph.D. dissertation, Universidad de Alcalá, 2012.
[13] I. Horrocks, P. F. Patel-Schneider, H. Boley, S. Tabet, B. Grosof, M. Dean et al., "SWRL: A Semantic Web Rule Language combining OWL and RuleML," W3C Member Submission, vol. 21, p. 79, 2004.
[14] G. W. Beeler Jr., "Introduction to: HL7 Reference Information Model (RIM)."
[15] A. E. W. Johnson, T. J. Pollard, L. Shen, L.-w. H. Lehman, M. Feng, M. Ghassemi, B. Moody, P. Szolovits, L. Anthony Celi, and R. G. Mark, "MIMIC-III, a freely accessible critical care database," Scientific Data, vol. 3, p. 160035, May 2016. [Online]. Available: http://dx.doi.org/10.1038/sdata.2016.35
[16] M. A. Musen and the Protégé Team, "The Protégé project: A look back and a look forward," AI Matters, vol. 1, no. 4, pp. 4-12, Jun 2015. [Online]. Available: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4883684/
[17] openEHR Clinical Knowledge Manager. [Online]. Available: http://www.openehr.org/ckm/
[18] V. Bicer, G. B. Laleci, A. Dogac, and Y. Kabak, "Artemis message exchange framework: semantic interoperability of exchanged messages in the healthcare domain," ACM SIGMOD Record, vol. 34, no. 3, pp. 71-76, 2005.
[19] C. Martínez-Costa, M. Menárguez-Tortosa, and J. T. Fernández-Breis, "An approach for the semantic interoperability of ISO EN 13606 and openEHR archetypes," Journal of Biomedical Informatics, vol. 43, no. 5, pp. 736-746, 2010.


Character Stroke Path Inference in a Rehabilitation Application

Sarah Potter, Dr. Benjamin Bishop, and Dr. Renée Hakim

Computing Sciences Department, University of Scranton, Scranton, Pennsylvania, United States
Computing Sciences Department, University of Scranton, Scranton, Pennsylvania, United States
Physical Therapy Department, University of Scranton, Scranton, Pennsylvania, United States

Abstract – Properly inferring the stroke path of a character, represented as a vector font, in a rehabilitation application could result in better approximation of task-specific training (i.e. handwriting) for patients who have functional limitations when using their wrist/hand. This paper discusses efforts to infer stroke paths, applications used for therapeutic purposes, and the intersection of these topics. Additionally, it discusses the tools, technologies, and a subset of the algorithms used to develop the application specific to this paper, and how this application would be used in a rehabilitation setting. Results of using this application to create an inferred stroke path are demonstrated.

Keywords: Stroke Path Inference, Vector Font, Physical Therapy, Fortune's Algorithm, Sutherland-Hodgman Algorithm, Voronoi Diagram

1

Introduction

1.1

Goals

This paper discusses stroke path inference and rehabilitation applications. To that end, it elaborates on the process of inferring the stroke paths contained in a phrase (particularly the stroke path of each character), the algorithms used, their average running times, and other results of implementing these changes. Lastly, with regard to the application specific to this paper, some considerations for future enhancements will be discussed.

1.2

Motivation

Applications that are centered around integral life skills, such as writing in the case of this paper and application, could prove useful to physical and/or occupational therapists in a clinical setting by guiding repetitive, meaningful movements. Use of robot-assisted training with a virtual environment may provide the experience, motivation, and attention necessary to improve motor function. In previous work, comparable results were achieved when patients were performing task specific training virtually and in reality [4]. In other studies, it has been shown that robot-assisted therapies can provide outcomes that are comparable to traditional methods of treatment by augmenting extrinsic feedback (any form of information provided to the patient that originates from an external source) and increasing training intensity [1][4][9]. The use of applications that interface with a commercially available

haptic device (i.e. PHANTOM Geomagic Touch) aimed at improving the task of handwriting may prove to be useful in advancing the clinical management of patients with reduced wrist/hand function.

2

Background

This section will discuss the previous work done with regard to character stroke path inference and, more generally, the use of applications that interface with outside systems, often robotics, in order to provide therapies for a variety of patients.

2.1

Stroke Path Inference

In previous work with regard to applications that aim to improve writing outcomes, it has been shown that moving the hand and wrist in general to form simple lines and shapes is beneficial, at least for children learning to write [2]. However, it is also known, from the rehabilitation literature, that more difficult and refined tasks are necessary in order to improve patient outcomes and, often, easy and difficult tasks are paired together in a treatment plan [6]. As such, inferring a stroke path that tends to be centered in a character could help provide an easy tracing experience when the character is large in size and a more difficult task that requires refined movements when the character is smaller in size; thus, inferring a stroke path for vector fonts may be a logical consideration for an application aimed at improving clinical outcomes. In other previous work that makes use of a stroke path, the spatio-temporal data related to those stroke paths was provided by experts in the form of written samples [8][12].

2.2

Rehabilitation

Within rehabilitation robotics, software applications and the wide array of devices and hardware they interface with have been used for therapeutic purposes; a subset of these systems deal with both arm and wrist/hand movement training for patients of varying diagnoses [3][4][6][9][10][11]. More specifically, applications that make use of haptic devices and/or haptic feedback have improved upper-extremity motor recovery and function for patients who experienced stroke [4][9][11]; it is worth noting that improvements in other physical and mental outcomes were also achieved by patients as described in Byl N, Abrams G, Pitsch E, et al. Furthermore, it has been shown that the addition of haptic


information plays a key role for tracking stroke paths of characters during visuomotor learning, which is useful in teaching a person to perform new movements in a clinical setting [3]. Lastly, similar applications were shown to promote significant improvements across a multitude of areas with regard to hand strength and functionality after thirteen sessions of therapy following carpal tunnel surgery without any conventional therapy [6].

3

Methodology

3.1

Technologies

Figure 2: OpenSCAD script example

This section describes what languages, tools, and technologies were used to create and enhance the application. 3.1.1 Hardware The hardware necessary for the application to run includes a local machine that the application is deployed on, the haptic device that the application interfaces with, and the proper cables and adapters to facilitate the communication necessary between the local machine and the haptic device. 3.1.2 Software The application is written in C++. Both the OpenGL and the OpenHaptics Toolkit APIs [7] were used to display the interface for the patient to practice with. OpenGL was used to render the virtual environment and, thus, make it visible to a patient. OpenHaptics Toolkit was used in order to create, maintain, and interact with the virtual objects that needed to exist with respect to the haptic device. 3.1.3 Other applications/tools The application was developed in Visual Studio 2010. The application also makes use of OpenSCAD [13], which is free software for creating solid 3D CAD objects that can be manipulated via software scripting.

3.2

Using the application

This section describes the events that occur while the application is running, from the points of view of both practitioner and patient.1 3.2.1 Starting the application Once the application has started, it will prompt the practitioner for a user number, which uniquely identifies the patient, the text or phrase the patient is to practice with, and the font of the text or phrase (either print or cursive). After the information required by the application has been entered by the practitioner, the application will render the graphic and haptic environment for the patient to practice on.

Figure 1: PHANTOM Geomagic Touch haptic device

Figure 3: Tracing environment that is rendered.

Following appropriate rendering, the patient may begin tracing the phrase. 3.2.2 Tracing the phrase The patient will use the stylus portion of the haptic device to trace the phrase in the virtual environment. The patient will be instructed to write smoothly and accurately by guiding the virtual pen in the center of each letter. If the patient inadvertently touches the background or the edges of a letter, a buzzer sound and tactile feedback in the form of vibration and resistance will occur, and this will contribute toward an error score that is maintained, shown, and later recorded by the application. As the patient successfully moves the stylus to virtually write each letter, touching the polygons that track the

1 DRB Approval: Application approved under DRB #1703


patient’s progress in the process, the polygons will change color to green and the timer will show how much time the process has taken. Once the patient has touched all of the polygons, the training will end as the phrase has been completed.


3.2.3 Completing the phrase Upon completing the phrase, the timer will stop, a message to confirm the patient’s success will appear, and a report will be generated about the trial that just occurred including number of errors and total elapsed time.


3.2.4 Closing the application Upon closing, the application writes information regarding patient performance during the session to a file. This allows the practitioner to review each practice trial and analyze a patient’s performance.

3.3

Composition of the application

This section describes how the application operates on a technical level. 3.3.1 High-level architecture It is important to know that the application consists of two executables that communicate with each other and that those executables are the focus of the enhancements discussed later. These executables take DXF and STL files as input and one of them interfaces with the PHANTOM Geomagic Touch device.

Figure 4: High-level system view 3.3.2 High-level flow of control The practitioner runs the PT Haptics executable and enters all required information. The PT Haptics executable generates an STL file, which is a “block” with the phrase and the font chosen for the phrase beveled into it, and a DXF file, which contains the phrase in the chosen font, via executing system calls. The PT Haptics executable builds and runs the Letter Partitioning executable by executing another system call. The Letter Partitioning executable parses the DXF file generated by the PT Haptics executable in order to logically represent the phrase and perform Fortune’s algorithm. The results of Fortune’s algorithm, a Voronoi diagram, is divided into its polygons. The set of polygons is internally represented as a collection of collections of line segments. Each polygon is then written to a DXF file format. Then the PT Haptics executable, by executing a system call, generates STL files by linearly extruding each of the DXF files in OpenSCAD. Those

Algorithms used

3.4.1 Brute force algorithms There are two algorithms that could be classified as brute force algorithms, which support the application. 3.4.1.1 Generate sites One of these algorithms is used to generate the sites (in this application’s case, a set of Cartesian points) used by Fortune’s algorithm to generate the Voronoi diagram, which determines the polygons that will be used to determine how much of a phrase has been traced. Samples for the sites are randomly generated and, if a sample is determined to be inside a letter of the phrase, it is stored in a vector and its fitness value is determined. Fitness values for samples are determined by a function of how far the sample is from existing sites and how far the sample is from the edges of a character. The best sample, the one with the highest fitness score that identifies it as being relatively far away from existing sites and the edges of the character, is then stored in a vector containing all the sites. 3.4.1.2 Determine polygons The other of these algorithms determines the set of polygons that are described by the Voronoi diagram, which is internally represented as a collection of line segments that are unordered. It is important to note here that the implementation of Fortune’s algorithm uses a set of sites (Cartesian points) in order to construct a Voronoi diagram. The polygons are determined by evaluating all of the line segments contained within the Voronoi diagram with respect to each site. The line segments forming each Voronoi polygon are stored in a vector. After all the polygons have been determined from the Voronoi diagram, they are each clipped by using the Sutherland-Hodgman algorithm. Once all polygons are clipped, this is the final set of polygons that are described by the Voronoi diagram. 3.4.2 Fortune’s algorithm A popular and fast algorithm to construct a Voronoi diagram is Fortune’s algorithm [5]. The algorithm keeps track of both a sweep line and a beach line. The sweep line is a horizontal line that sweeps the set of sites from top to bottom. The beach line is a misnomer because it is composed of pieces of parabolas. For each site above the sweep line, one may define a parabola equidistant from that site and the sweep line. A new arc/parabola is added to the beach line when the sweep line encounters a site. As the sweep line progresses downward, when two parabolas cross, this traces out the edges of the

ISBN: 1-60132-500-2, CSREA Press ©

Int'l Conf. Health Informatics and Medical Systems | HIMS'19 |

Voronoi diagram. The place where two parabolic arcs meet are called breakpoints and they lie on a Voronoi diagram edge. The arcs flatten out as a sweep line moves down. Voronoi vertices are identified when two breakpoints meet/fuse. 3.4.3 Sutherland-Hodgman algorithm In section 3.4.1.2, polygon clipping is mentioned. The polygons curated from the Voronoi diagram generated by Fortune’s algorithm needed to be clipped as some of them extended quite far in both the negative and positive directions on the x and y planes, with some line segments in the polygons varying greatly in magnitude. The fact that these polygons extended quite far caused problems when the application tried to render them to create the virtual environment that the patient would see and interact with. Sutherland-Hodgman is the basis for polygon clipping in the present application. If a line segment lies within the clipping boundaries, it is stored in a vector representing the clipped polygon. Otherwise, if the line segment intersects with one of the clipping boundaries then it is clipped. The clipped segment is then stored in the vector representing the clipped polygon. Lastly if the polygon is deemed to be “open” after every line segment has been either stored, or disregarded then the polygon is also “closed” by the algorithm.

61

Once the PT Haptics executable begins running and is provided with an acceptable phrase to practice with, it will create a DXF file, shown below, representing the phrase in a specific font using an OpenSCAD file, shown below, via a system call that passes the phrase and font as arguments in the call to OpenSCAD. The system call to create a DXF file takes 8.02 seconds on average; note that this process for phrases of a “print” font take significantly less time than those phrases of a “cursive” font due to differences in the number of line segments.

Figure 6: Part of a DXF file that represents a phrase

Figure 5: Visual representation of Sutherland-Hodgman algorithm2

4

Results

This section describes the detailed process regarding how the application infers more-natural stroke paths for a given phrase from vector fonts. See figure 4 for system context. Also, this section will provide running times for portions of the process. To obtain these running times, phrases of three letters were used. Half of the runs dedicated to gathering this information were performed with cursive phrases and the other half were performed with print phrases.

2

Image courtesy of Wojciech mula

ISBN: 1-60132-500-2, CSREA Press ©

62

Int'l Conf. Health Informatics and Medical Systems | HIMS'19 |

Figure 9: Visual representation of the Voronoi diagram generated for a print phrase Figure 7: OpenSCAD files used to create DXF file representations of phrases After the DXF file that represents the given phrase and font has been created, the application performs another system call in order to build and run the Letter Partitioning application. The Letter Partitioning executable then begins by parsing the DXF file generated by the PT Haptics executable, which represents the phrase as a set of line segments, and then stores those line segments internally as a vector. Once the phrase is represented within the Letter Partitioning executable as a vector of line segments, the executable determines a set of thirty sites, which are Cartesian points, by the brute force algorithm described in section 3.4.1.1. Generating the thirty sites takes 5.62 seconds on average. These sites then serve as the input for Fortune’s algorithm, which constructs a Voronoi diagram. Additionally, these sites serve, once connected, conceptually as the inferred stroke path that the patient would strive to follow as these points are generally located toward the center of each letter in the phrase.

Figure 8: Visual representation of the result of running the application on a single, hardcoded letter to show the sites (points inside the letter) and the Voronoi diagram (line segments partitioning the letter).

Figure 10: Visual representation of the Voronoi diagram generated for a cursive phrase After generating the Voronoi diagram, which takes less than 0.01 seconds on average, the application determines the set of thirty polygons from the vector of line segments that internally represent the Voronoi diagram by the other brute force algorithm, described in section 3.4.1.2. Thirty polygons were determined to be appropriate for the size of the phrases commonly used when executing this application. On average, determining the polygons from the Voronoi diagram takes 0.44 seconds. Each polygon is represented as a vector of line segments. Once the full set of thirty polygons has been determined, the Letter Partitioning executable writes each polygon to a DXF file. At this point, the Letter Partitioning executable has finished executing and the PT Haptics executable resumes execution. The entire execution of the Letter Partitioning executable takes an average time of 28.63 seconds. Now the PT Haptics executable will generate an STL file for every DXF file created by the Letter Partitioning executable via a system call to open a particular OpenSCAD file with OpenSCAD and output the file as the desired file type. This OpenSCAD file will linearly extrude the twodimensional polygon that is described by the DXF file before saving it as an STL file, a format which only supports threedimensional objects. Generating the STL files from the DXF files, which are output from the Letter Partitioning executable, takes an average time of 6.07 seconds.

ISBN: 1-60132-500-2, CSREA Press ©

Int'l Conf. Health Informatics and Medical Systems | HIMS'19 |

Figure 11: Individual polygon from print phrase as an STL file in OpenSCAD

63

Figure 14: Two of the polygons from cursive phrase rendered in an OpenHaptics application without the overlay (foreground that outlines the phrase) seen for print phras\ e in figure 3. The patient could, at this point in execution, start tracing the phrase. See figure 3 for a visual. The average time that the entire process from the start of the application until the patient may start tracing the phrase takes is 44.38 seconds.

5 Figure 12: Individual polygon from cursive phrase as an STL file in OpenSCAD The application then creates all of the polygons, and other necessary objects, programmatically by using the OpenHaptics Toolkit such that they are objects that can be interacted with by the haptic device. Lastly, all of the objects are rendered graphically for the patient using OpenGL.

Conclusion

This paper has discussed an application that allows a patient to use a haptic device to regain function in their hand and wrist for the purpose of improving handwriting. It also provides a means of assessing the effects of using such an application to physical and/or occupational therapists. The results have shown how, through a set of algorithms, a stroke path is inferred by the application and then, subsequently, the patient using the application is encouraged to follow that stroke path.

6

Future Work

Future work on the application discussed will include optimizations to algorithms that are currently in use, notably by making use of early exits. Additionally, later iterations of the application may accept written input from another device for a patient to practice on in order to further promote a patient to trace a phrase using a natural stroke path. Lastly, extending the application to make use of visualization hardware in order to evolve the application in the direction of augmented reality is being considered. Figure 13: Two of the polygons from print phrase rendered in an OpenHaptics application without the overlay (foreground that outlines the phrase) seen in figure 3.

7

Acknowledgements

The authors would like to thank Bret Oplinger, a graduate of the University of Scranton’s Software Engineering program, and Liam O’Hare, of the University of Scranton, for their prior contributions to this project.

8

References

[1] Abdollahi F, Lazarro E, Listenberger M, et al. “Error Augmentation Enhancing Arm Recovery in Individuals with Chronic Stroke: A Randomized Crossover Design.”;

ISBN: 1-60132-500-2, CSREA Press ©

64

Int'l Conf. Health Informatics and Medical Systems | HIMS'19 |

Neurorehabilitation Neural Repair, Vol. No. 28, Issue No. 2, 120-128, 2013. [2] Aminordin, Muhammad Amir Azwan Mohamad, et al. “Real-Time Feedback for Children's Pre-Writing Activities Using Haptic Technology.”; 2017 IEEE 13th International Colloquium on Signal Processing & Its Applications (CSPA), 153-158, March, 2017.

[12] Teranishi, Akiko, et al. “Effects of Full/Partial Haptic Guidance on Handwriting Skills Development.”; IEEE World Haptics Conference (WHC), 113-118, 2017. [13] http://www.openscad.org/

[3] Bluteau J, Coquillart S, Payan Y, Gentaz E. “Haptic Guidance Improves the Visuo-Manual Tracking of Trajectories”; PLoS ONE, Vol. No. 3, Issue No. 3, 1-7, March, 2008. [4] Byl N, Abrams G, Pitsch E, et al. “Chronic stroke survivors achieve comparable outcomes following virtual task specific repetitive training guided by a wearable robotic orthosis (UL-EXO7) and actual task specific repetitive training guided by a physical therapist.”; Journal of Hand Therapy, Vol. No. 26, Issue No. 4, 343-351, OctoberDecember, 2013. [5] Fortune S, “A sweepline algorithm for Voronoi diagrams”; SCG '86 Proceedings of the second annual symposium on Computational geometry (ACM New York), 313-322, August, 1986. [6] Heuser, Andrew, et al. “Telerehabilitation Using the Rutgers Master II Glove Following Carpal Tunnel Release Surgery: Proof-of-Concept.”; IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol. No. 15, Issue No. 1, 43–49, March, 2007. [7] Itkowitz, B., et al. “The OpenHaptics™ Toolkit: A Library for Adding 3D Touch™ Navigation and Haptics to Graphics Applications.” First Joint Eurohaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, March, 2005. [8] Kim, Young-Seok, et al. “Haptics Assisted Training (HAT) System for Children's Handwriting.”; 2013 World Haptics Conference (WHC), 559-564, April, 2013. [9] Lo A, Guarino P, Richards L, et al. “Robot-Assisted Therapy for Long-Term Upper-Limb Impairment after Stroke”; New England Journal of Medicine, Vol. No. 362, Issue No. 19, 1772-1783, May, 2010.

[10] Marini, F., et al. “Adaptive Wrist Robot Training in

Pediatric Rehabilitation.”; 2015 IEEE International Conference on Rehabilitation Robotics (ICORR), 175-180, August, 2015. [11] Takahashi C, Der-Yeghiaian L, Le V, Motiwala R, Cramer S. “Robot-based hand motor therapy after stroke”; Brain: A Journal of Neurology (BRAIN), Vol. No. 131, Issue No. 2, 425-437, February, 2008.

ISBN: 1-60132-500-2, CSREA Press ©

Int'l Conf. Health Informatics and Medical Systems | HIMS'19 |

65

Understanding Elderly User Experience on the Use of New Technologies for Independent Living Spiru L.1, Paul C.2, Velciu M. 3, Voicu A.4 and Marzan M.5 Ana Aslan International Foundation, Bucharest, Romania

1,2,3,4,5

Abstract - The emerging of new technologies opens unforeseen horizons to growing ageing population and offers elderly people various opportunities to stay independent, healthy and live a quality life. The understanding of the elderly user experience is paramount in order to enhance their acceptance and adoption of the new technologies and gerontechnologies. The article presents our conclusions regarding elderly user experience emerged as a result of our empirical approach over four ongoing European projects. We carry out a user research methodology centered on a mixt of qualitative and quantitative research methods for analyzing elderly users’ requirements, perceptions and acceptance about using apps in the case of seniors and soon to-be-seniors from various European countries. The insights reflect the important stage of validation of user experience with respect to the elders’ behavior in real world tests. Keywords: elderly user requirements, gerontographics segmentation, new technologies adoption

1

Introduction

Implementation of new technologies and innovative applications designed to enhance active and healthy living is of focus today when ageing populations around the globe pose a challenge to both societies and governments. The early research and testing efforts were concentrated on the simple replication of the technology acceptance models used for the youth and management employees to the elderly-users [1], [2]. The simple replication of the models with no improvements and case specificities produced a long list of criteria that proved to be of little practical value for the seniors [3], [4]. For the last decade, research has significantly contributed to a better understanding of this process and its outcomes in relation with the seniors [5]. The adoption of the health new technologies for the institutionalized seniors has a high success rate, while the adoption of the new technologies by the independent seniors at home is much slower and costly and it is often met with reluctance and inadequacies. Here, we want to contribute by answering to the challenge of the elderly and how to encourage them to accept, make sense of, and be at peace with ageing. A holistic approach remains a challenge to help someone with cognitive decline [6] to live independently.

Gerontographics segmentation is a tactic which details the needs, attitudes, lifestyles and behaviors of the seniors and it is largely employed in analyzing and targeting adult market. The approach has been developed by Moschis [7], [8], and it is based on the assumption that elderly manifest similar behavior as long as they had encountered similar circumstances, experiences and past events. Conclusively, based on the type of aging experienced, there are four segments of the elderly: healthy indulgers, ailing outgoers, healthy hermits and frail recluses. The first group is independent and active, enjoy life and share similar behavior with those younger. The second group, in spite of a health decline, reflect a high-level of psychological well-being. The third group consists of seniors who have a quite well health condition but they insulate themselves socially and “feel” being old. The last group, frail recluses, are people with chronic health conditions and who encountered negative life events. They show a relatively low physical and psychological well-being alike.

2

Our Mission

Ana Aslan International Foundation (AAIF) focuses on promoting innovative ideas, integrated solutions and methods which are personalized and adapted to face new challenges in the field of ageing and well-being. Therefore, AAIF central involvement in various international and European projects aims to facilitate access and ensure a better experience for elder users while using innovative applications based on its mission to offer elder people the opportunity to benefit from advanced technologies and apps as well as use them to live a longer, safer and healthier life. Placing elderly in the center of our concern is essential for the development of an innovative product for holistic health and ageing well. Nevertheless, our work has a strong component of educating elderly on how to benefit from the new technologies and innovative apps in order to fight cognitive decline and live independently.

2.1

Introductory Remarks on Gerontechnology Acceptance Research Project

Our contributions to the elderly user experience emphasize empirical results from analyzing elders’ requirements and experiences with new technology, in various European countries. We selected four projects which had been carried out under the AAL (Active and Assistive Living)

ISBN: 1-60132-500-2, CSREA Press ©

66

Int'l Conf. Health Informatics and Medical Systems | HIMS'19 |

Programme with funding by the European Union. Research on user requirements aims to reveal insights about “What do elder users need the apps to do, and what they want, taking into account their requirements and satisfaction”. The research methodology is based on qualitative research method for analyzing users’ requirements, perceptions about their needs and technology acceptance in case of elderly. These are designed to collect information about people real needs, attitudes and decisions, according to their own assessment, when faced with new innovative products. The conclusions were directed in order to offer the best product in order to support elders’ health management. Our research findings are in line with Sthienrapapayut and Moschis [9] who show that elderly people are very much convenience-oriented referring here to the high ease of use and that the independent seniors prefer universal technologies and avoid those designed for them, which carry the age-stigma. Each project referred here used successive pilot cycles in order to considerably improve the elderly experience with the new technology. Moreover, because we have mainly tested the products in Eastern and Southern Europe which are risk- adverse and collectivistic cultures and research show that with increasing age seniors become more risk-averse (Steenstra, 2015), we have put much value on security guarantee and on the social influence in technology acceptance. We present the main conclusions on our research as it follows:

2.2 Ella4Life - Your Virtual Personal Assistant for home and on the road

The Ella4Life project (http://ella4life.eu/) aims to develop an integrated solution that helps the elderly stay healthier and live a more independent and pleasant life. The Ella4Life application is friendly and easy to use; it gives the elderly the possibility to communicate with a virtual assistant that keeps them informed and shares information with professionals or informal caregivers when necessary. The user research methodology is qualitative: data were collected through focus groups and in-depth interviews with seniors and soon-to-be seniors from four European countries (Netherlands, Switzerland, Poland and Romania). The main conclusions are presented here. Insights from the focus groups reveal a positive perception of using the Ella4Life solution. Elders agreed with the idea of having a virtual assistant and valued the great opportunities of using it in daily activities. The seniors asked for help with communication, social networking, a sense of security, staying independent, and so on. Findings from the in-depth interviews likewise reveal that elders have positive attitudes towards accepting the Ella4Life solution for improving their lives. Seniors appreciated the benefits of interacting with a virtual assistant and of staying connected, receiving advice and useful information in real time such as agenda and news, weather information, medication schedules, notifications and video calling. They valued the monitoring and self-management features for measuring blood pressure, the reminders for taking medicines, and the online interaction. The required system features are ease of use, ease of learning how to access it, and family agreement for high security; voice control makes accessibility even better for them. Overall, the seniors' perception is favorable to accepting new technologies in their lives.

2.3 IOANNA - Integration of all stores Network & Navigation Assistant

The project (http://www.ioanna-project.eu/) develops the integrated IOANNA solution, a platform for facilitating the mobility and social engagement of the elderly. It helps them feel safe walking around the city, look for the best commercial offers, plan routes and movement, and stay active in their community. To evaluate users' requirements we used quantitative and qualitative research methods, with tools such as questionnaires as well as focus groups and interviews with seniors from Cyprus and Romania who know how to use ICT and are interested in the opportunities offered by IOANNA and its services. The results allowed us to shape the seniors' attitudes and behavior into an elderly user profile. Seniors feel comfortable using the IOANNA solution and would be happy to benefit from its features; they are determined to accept it, to be active and willing to volunteer, and are interested in features such as movement, medical advice and healthcare. One issue concerns online purchases: their behavior is conservative, and only occasionally do they make purchases online and enjoy doing it. Safety is the main practical barrier that people perceive as preventing the use of a solution like IOANNA. An easy-to-use interface with friendly images and even a step-by-step tutorial is necessary, while a bigger screen would be highly appreciated.

2.4 PETAL - Personalizable assisTive Ambient monitoring and Lighting

This project (http://www.aal-petal.eu/) aims to reduce cognitive decline among Mild Cognitive Impairment (MCI) patients through an assisted ambient environment and the use of neurocognitive stimulation applications. PETAL comprises an ambient lighting system and a neurocognitive stimulation mobile application that are linked through an online interface, which acts both as a receptor of information coming from the environment and as a transmitter of previously set commands. For the user requirements evaluation we used qualitative and quantitative methods, in particular questionnaires and focus groups, in all three countries participating in the project (Romania, Italy and Austria). We gathered information from MCI patients as well as from formal and informal caregivers. The feedback received points to the fact that neither MCI patients nor caregivers are aware of how important lighting can be for their health and well-being. Patients nevertheless assess their current lighting system as suitable and appropriate for their daily needs. Another aspect worth mentioning is that familiarity with and openness towards modern digital devices are higher than expected: roughly one half of respondents, for instance, have internet access, and almost one half use text messages and e-mails. Apps used to manage tasks, audio messages or digital games for cognitive training are known to a significant minority. On the other hand, technology is normally used in a very traditional way, such as making phone calls or watching TV. Despite the widespread access to the Internet, the preferred communication tool with caregivers (also for reminders and alarms) is calling. This means that a good part of the target group fulfils the basic requirements for an advanced use of modern communication technology, but they have to be convinced and trained to use it. The seniors themselves are only scarcely conscious of the situations and tasks where support is needed and possible to offer.

2.5 Senior-TV - ICT-based formal and informal care at home

Senior-TV (http://seniortv-aal.eu/ro/) is a product meant to provide ICT-based formal and informal care at home, empowering seniors to live as independently as possible in their own homes, with special attention paid to active prevention and the fostering of a high-quality, long and healthy life. Three pilot studies were conducted in the end-user countries of Slovenia, Cyprus and Romania, comprising over 350 senior respondents who tried the developed test technologies and answered the pre- and post-questionnaires applied over the course of two years, 2017-2019. The main project activities focused on identifying the most suitable technologies for deploying various services for the elderly at home using a Smart TV setting, and on the design of these formal and informal caregiving services. The project proved successful in nursing homes, where the elderly manifest a certain degree of frailty and dependence. The same approach cannot be transferred to independent elderly living at home, because it is perceived as a top-down approach with a health-oriented service perspective. Both impose an image of frailty and of poor cognitive and physical functioning: health indicators to be monitored, tracking for walking, an agenda for reminders, games for better cognitive functioning and a virtual center for better physical functioning. Unlike seniors residing in nursing homes, seniors living at home think of themselves as healthy and independent, have a lower self-perceived age, high self-esteem and a future-oriented engagement. The seniors showed a higher interest in the enjoyment than in the usefulness of the Senior-TV services; it is the enjoyment, not the usefulness, that triggers their interest in TV applications. Ease of use affects the perceived usefulness and enjoyment of the Senior-TV services. Therefore, the design of new technologies for seniors needs to be carried out with the engagement of the seniors from the conceptualization phase onwards.

2.6 Research Results on the Relationship between Elderly and New Technologies

Our research carried out through the above-mentioned European projects confirms the reliability of gerontographics segmentation when analyzing the relation between the elderly and new technologies. The following table clearly supports the statement that the acceptance of new technology by the elderly is highly dependent on the category they belong to, and that this disposition is instilled early in the aging person and is more relevant than socio-demographic factors such as age, income or education. We may therefore notice that, while there are no differences between these categories at the time of first encountering television, which entered households in the 1960s, there are significant differences in the first encounter with mobile phones and the internet, which spread in the 2000s.

Table 1. The first elderly encounter with various technologies based on gerontographics segmentation

The figure below shows that the two categories of frail recluses and healthy hermits, which are both insulated from society and of which the first manifests a certain degree of dependence on informal or formal care, have a higher appetite for adopting technology. The usage of new technologies is more appealing to these two categories than to the healthy indulgers and ailing outgoers, who do not diversify their technology usage by age.

Fig. 1. Elderly usage of diverse TV services based on gerontographics segmentation

Our research shows that the social influence of the secondary beneficiaries, such as relatives, neighbors and friends, and of the nursing professionals influences the adoption of technology by seniors from the frail recluses and healthy hermits categories. The other two categories, however, are not encouraged by their entourage, such as relatives and society, to turn to technology.

3 Conclusions

All the projects presented here aim at fighting cognitive decline and harmonizing the use of technology with the elders' personal environment for an independent and socially integrated life. Because seniors' life habits are deeply embedded, all technological products are judged through the lens of these habits and their perceived “relevance”. Technology in itself is unlikely to change these old life habits or to prompt an active behavior towards new technologies, except where there is a strong perception of enjoyment and some social and family pressure or support for accepting and using it [11]. Prior use of various new technologies encourages openness towards accepting new ones. Our research experience [12] is in line with the idea that new technologies and apps for seniors need to fulfil their requirements and to engage them from the development phases of the project, since interaction is one of the most important aspects for seniors nowadays; a sense of belonging is also cultivated when this kind of approach is used. Moreover, the chances of acceptance increase considerably if seniors accurately understand the health product, especially when health indicators are monitored and self-management is encouraged. In addition, the main concern of the seniors, which has to be addressed from the beginning, is safety; this is probably the most important factor that can negatively alter their perception of new technologies. As for aspects that such solutions need to avoid, failing to consider the actual health status of the elderly prior to the implementation phase, especially their subjective view of their well-being, can significantly lower the acceptance rate. Furthermore, we should always make sure that the benefits of the new technology are both well emphasized and properly perceived by the seniors, as disruptive perceptions can also increase their opposition to technology. In terms of user requirements, it was unanimously accepted that technological products addressed to seniors should be easy to use and access, intuitive and, preferably, offer voice control. Family agreement is not to be neglected, as it greatly increases the feeling of safety and the acceptance rate among senior users.

4 References

[1] Tan, P.J.B., 2013. Applying the UTAUT to understand factors affecting the use of English e-learning websites in Taiwan. Sage Open, 3(4), p.2158244013503837.
[2] Legris, P., Ingham, J. and Collerette, P., 2003. Why do people use information technology? A critical review of the technology acceptance model. Information & Management, 40(3), pp.191-204.
[3] Venkatesh, V., Davis, F. and Morris, M.G., 2007. Dead or alive? The development, trajectory and future of technology adoption research. Journal of the Association for Information Systems, 8(4), p.1.
[4] Mącik, R., 2017. The adoption of the Internet of Things by young consumers: an empirical investigation. Economic and Environmental Studies, 17(42), pp.363-388.
[5] Chen, K. and Chan, A.H.S., 2014. Gerontechnology acceptance by elderly Hong Kong Chinese: a senior technology acceptance model (STAM). Ergonomics, 57(5), pp.635-652.
[6] Spiru, L., 2009. A holistic approach of mild cognitive impairment. Journal of the Neurological Sciences, 283(1), p.245.
[7] Moschis, G.P., 1996. Gerontographics: Life-Stage Segmentation for Marketing Strategy Development. Greenwood Publishing Group.
[8] Moschis, G.P., 2003. Marketing to older adults: an updated overview of present knowledge and practice. Journal of Consumer Marketing, 20(6), pp.516-525.
[9] Sthienrapapayut, T., Moschis, G.P. and Mathur, A., 2018. Using gerontographics to explain consumer behaviour in later life: evidence from a Thai study. Journal of Consumer Marketing, 35(3), pp.317-327.
[10] Steenstra, P., 2015. Elderly classification and involvement in the design process: Framework for specification of the elderly within user centered design (Bachelor's thesis, University of Twente).
[11] Venkatesh, V., Thong, J.Y. and Xu, X., 2012. Consumer acceptance and use of information technology: extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), pp.157-178.
[12] Ana Aslan International Foundation. Research Projects [Online]. https://www.anaaslanacademy.ro/ (Last accessed: 19 May 2019).


SESSION
HEALTH INFORMATICS: DATA SCIENCE, PREDICTIVE ANALYTICS, SECURITY ISSUES AND RELATED SYSTEMS
Chair(s): TBA


Software Architecture Integrating Blockchain and Artificial Intelligence for Medical Data Aggregation

Jingpeng Tang1, Qianwen Bi2, Bradley Van Fleet1, Jason Nelson1, Carter Davis1, and Joe Jacobson1
1 Department of Computer Science, Utah Valley University, Orem, UT, USA
2 School of Business, Utah Valley University, Orem, UT, USA

Abstract - Big data analytics are making revolutionary changes in the medical field, such as providing personalized medicine and prescriptive analytics, clinical risk intervention and predictive analytics, automated external and internal reporting of patient data, standardized medical terms, and patient registries. The key question is how to use these data intelligently and securely. Artificial Intelligence (AI) and blockchain are two cutting-edge technologies that support this data revolution. Our primary concern in this proposed research is how these technologies will impact medical big data analysis and the corresponding software architecture design. Motivated by Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), we propose AI as a Service (AaaS) and Blockchain as a Service (BaaS) in this paper.

Keywords: Artificial Intelligence, Blockchain, Data Aggregation, Medical Data, Software Architecture

1 Introduction

Artificial Intelligence (AI) and blockchain are two cutting-edge technologies of the 21st century. Their combination will bring dramatic change to traditional software architecture design. We are concerned primarily with how these technologies will impact medical data. As healthcare providers continue to expand their technology footprint, an increasing concern is the storage of and access to Protected Health Information (PHI). Health Insurance Portability and Accountability Act (HIPAA) regulations mandate how PHI is stored, transmitted, and accessed. According to the U.S. Department of Health & Human Services, HIPAA “[establishes] important protections for individually identifiable health information… including limitations on uses and disclosures of such information, safeguards against inappropriate uses and disclosures, and individuals’ rights with respect to their health information” [1]. However, these regulations are not all-encompassing. As a result, some providers have made it difficult for patients to access their PHI by charging printing and copying fees that can add up to substantial totals [2].

In this work, we explore the integration of blockchain and artificial intelligence in a medical patient data aggregation platform for optimizing healthcare communication and delivery. By leveraging the immutable, append-only architecture associated with distributed ledgers such as Bitcoin and Ethereum, we hope to reduce data transmission overhead as patients interact with multiple facilities. By also integrating artificial intelligence, specifically machine learning, we seek a way to provide diagnostic assistance to providers. The Medical Assistant platform integrates these technologies into an Angular web application that a patient can interact with to access their data. The platform uses Representational State Transfer (REST) APIs to interact with the Ethereum blockchain and the Inter-Planetary File System (IPFS) for file storage. To support the goal of diagnostic assistance, a neural network is implemented and integrated to analyze MRI brain scan images and determine the presence of tumors. Once analysis is complete, a binary value is assigned indicating the presence of a tumor.

2 Existing Research and Platforms

Warren Sarle's address on AI presents one of the most commonly used artificial neural networks, the multilayer perceptron. It explains the definition of a simple perceptron and its use of activation functions, and outlines the benefits of training models with multiple hidden layers, or deep models [3]. The simple linear and multilayer perceptron models discussed in Sarle's work were used during this research. Bengio and Glorot's paper on understanding the difficulty of training AI provided additional information about deep neural networks [4]. It outlines the reasons why deep multilayer neural networks were unsuccessful in the past and ways to improve them, describes experiments with different activation functions and results showing that some are not suitable for deep models, and discusses the effect of the cost function on the network.
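To make the perceptron idea concrete, the sketch below expresses a single perceptron as code. It is illustrative only, written in TypeScript to match the JavaScript stack used elsewhere in this platform; the sigmoid activation and the toy weights are our own assumptions rather than details taken from Sarle's paper.

// A single perceptron: a weighted sum of inputs passed through an
// activation function. Sigmoid is used here purely as an example.
const sigmoid = (z: number): number => 1 / (1 + Math.exp(-z));

function perceptron(inputs: number[], weights: number[], bias: number): number {
  // Weighted sum plus bias, then the nonlinear activation.
  const z = inputs.reduce((sum, x, i) => sum + x * weights[i], bias);
  return sigmoid(z);
}

// Toy usage with hypothetical weights; the output lies in (0, 1).
console.log(perceptron([0.5, 0.8], [0.4, -0.6], 0.1));

A multilayer perceptron simply stacks layers of such units, which is the structure used later for the MRI classification experiments.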


3 Software Architecture Design

As AI and blockchain are the two main concerns, a software prototype with the following architecture was designed and built:

3.1 SETH

It was identified early on that security is critical in handling medical data and should be designed as a service. Both blockchain and IPFS use decentralization, cryptographic hashing, and Merkle trees to achieve data security [5]. In this research, we propose BaaS as the central security and protection design for medical data. Early iterations of the Medical Assistant platform implemented the Hyperledger Fabric blockchain, whose architecture allows hosting private, permissioned nodes that can be interacted with via the Hyperledger Composer REST API [6]. After further difficulty with configuration and integration, the Ethereum blockchain platform was used for the remainder of the research. By using MetaMask, a blockchain approval service, multi-factor approval could be provided for transactions by utilizing its “secure identity vault” [7]. This allows for greater security and data reliability. Integration was achieved through a JavaScript distributed application (dApp). A major concern was the security of patient information: Ethereum is a public blockchain, meaning that any data stored on it, even if encrypted, is available for public download. To alleviate this concern, the Inter-Planetary File System was used to store the files containing the data. With IPFS, hidden nodes could be created to store the data, while the blockchain stored the file path hashes. An added benefit of integrating IPFS was reduced data storage in Ethereum, which may also reduce the cost per transaction significantly. Figure 1 shows the resultant platform containing both the Ethereum and IPFS integrations packaged into a NodeJS solution, named SETH, and placed on a server. For Medical Assistant to interact with SETH, a REST API has been implemented for front-end development.

Figure 1. Final SETH architecture
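To illustrate the flow described above, the following hedged sketch stores an already encrypted record on IPFS and anchors only the resulting content identifier (CID) on Ethereum. The ipfs-http-client and web3 calls are standard, but the registry contract and its storeRecord() method, the node URLs, and the record format are hypothetical placeholders: the actual SETH interface is not published in this paper, and client APIs vary by version.

import { create } from 'ipfs-http-client';
import Web3 from 'web3';

const ipfs = create({ url: 'http://localhost:5001' });  // assumed local IPFS node
const web3 = new Web3('http://localhost:8545');         // assumed Ethereum endpoint

async function storePatientRecord(
  registryAbi: any,            // ABI of a hypothetical record-registry contract
  registryAddress: string,
  account: string,
  patientId: string,
  encryptedRecord: Uint8Array  // encryption happens before anything leaves the client
): Promise<string> {
  // 1. Put the encrypted record on IPFS; only its content hash (CID) is returned.
  const { cid } = await ipfs.add(encryptedRecord);

  // 2. Anchor the CID on-chain so the reference is immutable and auditable.
  const registry = new web3.eth.Contract(registryAbi, registryAddress);
  await registry.methods.storeRecord(patientId, cid.toString()).send({ from: account });

  return cid.toString();
}

Only the CID ever reaches the public chain, which is the property the design relies on to keep PHI off Ethereum while still making each record reference tamper-evident.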

3.2 Artificial Intelligence

A second major part of the Medical Assistant platform is diagnostic assistance. Initial investigations were performed using Google's TensorFlow. Choosing this platform facilitated the creation of a basic demo to understand AI development and to practice setting up neural networks.

Figure 2. CNN model with the highest percentage of accuracy

Experiments were done with different numbers of layers and perceptrons in order to create the most accurate model. In Figure 2, neurons are represented by nodes, and each column of nodes is called a layer. The first layer is the input layer, with 12288 nodes to match the 12288 pixel values in the test images. The last layer is the output layer, with a single neuron representing the prediction value. The layers in between are the hidden layers. After several tests with varying numbers of hidden layers, this model gave the highest percentage of accuracy. Once the initial design and sample demo were established, work began on creating a training set. The initial use case was to perform image classification with a Convolutional Neural Network to identify brain scans; this later developed into identifying potential masses in an MRI scan. Sample images were taken from publicly available MRI scans. To increase the sample size, images were duplicated and reflected about the vertical axis. Figures 3 to 5 show sample images used as part of the training and test design. Pseudo-tumors were drawn onto arbitrarily selected images.

Figure 3. Profile MRI scan without pseudo-tumor
Figure 4. Top MRI scan with pseudo-tumor
Figure 5. Top MRI scan without pseudo-tumor
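As a rough illustration of the Figure 2 topology, the sketch below builds a fully connected model with 12288 inputs and a single sigmoid output, written with TensorFlow.js to stay within the platform's JavaScript stack. The original experiments used Google's TensorFlow, and the hidden-layer sizes shown here are illustrative assumptions, not the configuration that produced the reported accuracy.

import * as tf from '@tensorflow/tfjs';

const model = tf.sequential();
// The network takes 12288 input values, matching the 12288 pixels per test
// image; the dense layers that follow are the hidden layers (sizes assumed).
model.add(tf.layers.dense({ inputShape: [12288], units: 20, activation: 'relu' }));
model.add(tf.layers.dense({ units: 7, activation: 'relu' }));
model.add(tf.layers.dense({ units: 5, activation: 'relu' }));
// Output layer: a single neuron giving the tumor / no-tumor prediction.
model.add(tf.layers.dense({ units: 1, activation: 'sigmoid' }));

model.compile({ optimizer: 'adam', loss: 'binaryCrossentropy', metrics: ['accuracy'] });
// Training would then be: await model.fit(xs, ys, { epochs: 100 });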


Figure 6. Medical Assistant patient overview wireframe

The application design approach was to reduce the workflow for users to obtain information from all of their medical providers. Creating an overview (see Figure 6) that can be synchronized potentially facilitates the aggregation of medical information, regardless of the provider or facility.

3.3 Medical Assistant Web Application

In order to facilitate the two major parts of this application (AI and blockchain), a web application was developed to provide a user-friendly interface. This user interface (UI) was designed to be patient facing, allowing end-user access to the blockchain data. An initial concern that was raised was patient data access and security. As a result, two key mechanisms were put in place, described in the following subsections.

3.3.1 Multi-Factor Transaction Approval

For a user to create a transaction, such as sending medical history to a new provider, they must: 1. select the information to send; 2. select the provider to grant access to; 3. start the transaction; and 4. approve the transaction in MetaMask. The additional step of approving the transaction in MetaMask alerts users to unauthorized activity and prevents potentially fraudulent transactions. Keeping the private key used to decrypt files separate from the user credentials used to access the application allowed device management. A sketch of the MetaMask approval step is given below.
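The following hedged sketch shows step 4, assuming MetaMask's standard injected EIP-1193 provider in the browser; the destination contract address and the encoded call data are placeholders for whatever transaction the Medical Assistant front end actually builds.

// Assumes a browser context where MetaMask injects an EIP-1193 provider.
const ethereum = (window as any).ethereum;

async function approveInMetaMask(to: string, data: string): Promise<string> {
  // First MetaMask prompt: the user connects an account.
  const [from] = await ethereum.request({ method: 'eth_requestAccounts' });

  // Second MetaMask prompt: the user reviews and approves the transaction
  // before anything is written to the blockchain.
  const txHash: string = await ethereum.request({
    method: 'eth_sendTransaction',
    params: [{ from, to, data }],
  });
  return txHash;
}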

3.3.2 Front-End Design

Once a set of features was designed and the initial wireframes were approved, architectural design and development began.

A focus of the overview was to create a page that is concise and displays important information up front. One key area of focus was patient interaction with medical providers when updating personal information. The possibility of a workflow that would allow a patient to update their information (e.g. residential address) was discussed; these updates would then be immediately available for providers and facilities to access. Providing an interface that is simple to use was a key element of this feature. Part of the design took into consideration the structure of the data being stored in SETH and the REST methods used for communication. The final structure of the data was a JSON object with the following definition:

{
  "PatientId": "",
  "Attention": "",
  "Line1": "",
  "City": "",
  "State": "",
  "PostalCode": "",
  "Country": ""
}

The last feature identified for this research was an interface for a user to manage all of their providers across different facilities. This interface, shown in Figure 7, should expose actions for adding and removing providers, as well as managing provider access.
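For illustration, the JSON definition above can be expressed as a TypeScript interface, with a hypothetical REST call pushing an address update to SETH. The endpoint path and HTTP verb are assumptions; the paper only states that REST methods are used for communication.

interface PatientAddress {
  PatientId: string;
  Attention: string;
  Line1: string;
  City: string;
  State: string;
  PostalCode: string;
  Country: string;
}

async function updateAddress(address: PatientAddress): Promise<void> {
  // Hypothetical SETH endpoint; the paper does not name its REST routes.
  const response = await fetch(`/api/patients/${address.PatientId}/address`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(address),
  });
  if (!response.ok) {
    throw new Error(`Address update failed with status ${response.status}`);
  }
}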

Figure 7. Medical Assistant provider management wireframe


The tools used to implement this application included Angular 6 and the Health Catalyst Cashmere module. The use of Angular allows a full JavaScript web stack as well as object-oriented design. Several design patterns were used with a layered architectural pattern to implement the proof of concept, including reactive programming, unidirectional data flow, and centralized state management. Some of these patterns, while not strictly enforced, are built into the Angular 6 framework [8]. During the implementation, the final design (see Figure 8) added an extra integration with the SETH application that lets users upload files, such as images. This gives patients extra control to add information to their profile without requiring intervention from the provider or facility.
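A minimal sketch of this layered, reactive approach: an Angular service wraps a SETH REST endpoint and exposes the result as an RxJS Observable for components to subscribe to. The endpoint path and response shape are assumed for illustration and are not taken from the SETH specification.

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

// Assumed response shape; the real SETH payload is not specified in the paper.
export interface PatientOverview {
  patientId: string;
  providers: string[];
  documentCids: string[];
}

@Injectable({ providedIn: 'root' })
export class PatientService {
  constructor(private http: HttpClient) {}

  // Unidirectional data flow: components subscribe to this Observable and
  // render the result; they never mutate shared state directly.
  getOverview(patientId: string): Observable<PatientOverview> {
    return this.http.get<PatientOverview>(`/api/patients/${patientId}/overview`);
  }
}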

Figure 8. Final Medical Assistant application design

4 Discussion

Throughout the research, there were major concerns about patient data security. Further concerns about the accuracy of the diagnostic predictions were raised in a meeting with potential investors. Regarding diagnostic accuracy, it was determined that it is better to alert the provider at an arbitrary threshold (e.g. 5% identification) than to risk a false negative. Concerns about data storage efficiency were also discussed, primarily around potential data duplication. Because a record in IPFS is given a hash address based on its content, the concern was that two records could match and only one would be stored. This was addressed by including the patient identifier in the records being uploaded. By adding this unique identifier, two otherwise similar documents are stored separately, while the deduplication capability of the IPFS network, and hence data storage efficiency, is maintained.
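The deduplication point can be illustrated with a small sketch: because IPFS addresses content by hash, embedding the patient identifier in the uploaded record is what keeps otherwise identical records distinct. The client setup and record shape below are assumptions, not the platform's actual upload code.

import { create } from 'ipfs-http-client';

const ipfs = create({ url: 'http://localhost:5001' });  // assumed local IPFS node

async function addLabResult(patientId: string, result: object): Promise<string> {
  // The same lab result uploaded for two different patients produces different
  // bytes (PatientId is embedded), hence different CIDs; identical uploads for
  // the same patient still deduplicate to a single CID.
  const { cid } = await ipfs.add(JSON.stringify({ PatientId: patientId, ...result }));
  return cid.toString();
}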

5 Future Work

5.1 SETH

The next priority for the SETH dApp is improving the development pipeline to package the modules and scripts into a single module. This allows faster deployment by providing the capability to drop the files in place and run without having to download or configure the application. Future development may also focus on providing functionality for storing and retrieving a standard health data format, HL7. This feature, discussed by potential investors, could allow the platform to integrate with existing electronic health record platforms.

5.2 Artificial Intelligence

Further research into other deep learning models and how they can improve the accuracy of this project will be explored. CNN and RNN models are very popular, and this project could benefit greatly from them. Additional research into the OpenCV library could prove useful for detecting bad data before training takes place. The current architecture has several limitations which would prevent further adoption, including a lack of configuration options, limited visualizations, and poor scalability. During testing, it was discovered that multiple training processes required intense system resources. This critical limitation may be addressed by implementing a scheduling and management module, similar to an operating system's management of processes. A design limitation of this architecture is the need to redesign, retrain, and redeploy for each user story. Introducing an AI as a Service platform may help mitigate this issue.

5.2.1 AaaS Frontend

The AaaS platform would work on a request-report architecture and support REST API calls. A configuration manager may also be implemented with the AaaS platform to allow the creation of different AI models dynamically. This would allow a technical user to configure an AI without having to modify the existing codebase, further improving system maintainability.

5.3 Medical Assistant Web Application

Future iterations of the Medical Assistant application will include complete integration with the SETH and AI architectures, including Ethereum interactions for encrypting and decrypting data. After a stable application is developed, a mobile application framework may be implemented to provide the same functionality on Android and iOS platforms.


A provider- or facility-targeted application may be designed and developed to allow access to patient data without having to export it. This application would provide more controls for updating a patient's profile based on the care given. Further integration with the AI architecture would also be implemented to provide diagnostic assistance to providers.

6 References

[1] Office for Civil Rights. Guidance on HIPAA & Cloud Computing. 16 June 2017. Web. 25 March 2019.
[2] Graham, Judith. In Days Of Data Galore, Patients Have Trouble. 25 October 2018. Web. 25 March 2019.
[3] Sarle, Warren S. “Neural Networks and Statistical Models.” Proceedings of the Nineteenth Annual SAS Users Group International Conference. Cary, NC, USA, 1994.
[4] Bengio, Yoshua and Glorot, Xavier. “Understanding the Difficulty of Training Deep Feedforward Neural Networks.” Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. Ed. Yee Whye Teh and Mike Titterington. Sardinia, Italy, 2010. 249-256. Document.
[5] Benet, Juan. “IPFS - Content Addressed, Versioned, P2P File System (DRAFT 3).” Document. 26 March 2019.
[6] Hyperledger Fabric. 1 January 2019. Web. 25 March 2019.
[7] MetaMask. MetaMask.io. 1 January 2019. Web. 25 March 2019.
[8] Sinedied. The Missing Introduction to Angular and Modern Design Patterns. 18 September 2017. Web. 21 February 2019.