Innovation in Medicine and Healthcare: Proceedings of 10th KES-InMed 2022 (Smart Innovation, Systems and Technologies, 308) 9811934398, 9789811934391

This book presents the proceedings of the KES International Conference on Innovation in Medicine and Healthcare (KES-InMed 2022), held in Rhodes, Greece, on 20–22 June 2022.

English | Pages: 330 [309] | Year: 2022


Table of contents:
InMed 2021 Organization
Preface
Contents
About the Editors
Part I COVID-19
1 3D Convolutional Neural Network for Covid Assessment on CT Scans
1.1 Introduction
1.2 Related Works
1.3 Our Approach
1.3.1 Dataset
1.3.2 Methodology
1.4 Results and Experiments
1.4.1 Working Environment
1.4.2 Evaluation and Discussion
1.5 Conclusion
References
2 Prediction Models for COVID-19 in Children
2.1 Introduction
2.2 Literature Review
2.3 Methodology
2.3.1 Dataset Description
2.3.2 Pre-processing Steps
2.3.3 Machine Learning Models
2.4 Results and Discussion
2.5 Conclusion
References
3 Society 5.0 as Digital Strategy for Scalability: Tamba’s COVID-19 Vaccination Management System and Its Expansion
3.1 Background
3.2 Method
3.3 Society 5.0
3.4 gBizSTACK: National-Government-Provided Framework
3.5 COVID-19 Vaccination System and Regional Comprehensive Care
3.5.1 The Existing Immunization Implementation Determination System
3.5.2 Improvements for COVID-19 Vaccination Compliance
3.6 S5RA Compliant Enterprise Architecture Based on AIDAF Framework
3.7 Discussion
3.8 Conclusion
References
Part II Biological Research and Technologies (1): Medical Care
4 Analysis of Ensemble Learning for Chest X-Ray Images
4.1 Introduction
4.2 Dataset and Methods
4.2.1 Dataset
4.2.2 Feature Extraction and Classification
4.3 Results
4.4 Conclusions
References
5 Adaptive Multi-omics Survival Analysis in Cancer
5.1 Introduction
5.2 Materials
5.3 Methods
5.3.1 Multi-task Loss Function
5.3.2 Adaptive Auxiliary Loss
5.4 Experimental Results and Discussions
5.4.1 Comparison with Only Single Task Loss
5.4.2 Comparison with Different Survival Prediction Approaches Over Cross-Validation Test
5.4.3 Survival Stratification Prediction
5.5 Conclusions
References
6 Risk Analysis of Developing Lifestyle-Related Diseases Based on the Content of Social Networking Service Posts
6.1 Introduction
6.2 Related Work
6.3 Methods
6.3.1 Labeled Data Creation
6.3.2 Model Creation and Evaluation
6.3.3 Statistical Validation
6.4 Results and Discussions
6.4.1 Data Collection
6.4.2 Labeled Data
6.4.3 Model Creation and Evaluation
6.4.4 Analysis of Classification Results
6.4.5 Discussions
References
Part III Biological Research and Technologies (2): Healthcare
7 Digital Twin in Healthcare Through the Eyes of the Vitruvian Man
7.1 Introduction
7.2 Related Work
7.2.1 Software Architectures
7.2.2 Digital Twins of Patients
7.3 Proposed Approach
7.3.1 General Architecture of Digital Twin of Patient
7.3.2 Digital Twin of Patient Users
7.3.3 Further Discussion
7.4 Conclusions
References
8 Relationship Between Mental Health, Life Skills and Life Satisfaction Among Japanese Athletes
8.1 Introduction
8.2 Research Method
8.2.1 Survey Period and Participants
8.2.2 Survey Method
8.2.3 Analysis Method
8.3 Results
8.4 Consideration
References
9 IT Assisted Gardening for the Revitalization of the Elderly: The Turntable Solution
9.1 Introduction
9.2 Methods
9.2.1 An Overview of the Turntable Solution
9.2.2 The Trial Scheme
9.2.3 Data Analysis Methods
9.3 Results
9.3.1 Input Data
9.3.2 User Events
9.3.3 Correlations and Significant Changes in Standardized Scores
9.4 Discussion
9.5 Conclusion
References
10 Mental Health of the Elderly: Towards a Proposal for a Social Simulator Based on the Complexity Approach
10.1 Introduction
10.2 General Objective
10.3 Theoretical Framework
10.3.1 Elderly
10.3.2 Emotions
10.3.3 Mental Health
10.3.4 Complexity
10.4 Methodology
10.5 Towards the Proposal: The Description
10.6 Conclusions
10.7 Future Works
References
Part IV Deep/Machine Learning for Medical Care and Healthcare
11 Deep Reinforcement Learning Classification of Brain Tumors on MRI
11.1 Introduction
11.2 Methods
11.2.1 Data Collection
11.2.2 Reinforcement Learning Environment, Definition of States, Actions, and Rewards
11.2.3 Training
11.2.4 Supervised Deep Learning (SDL) Classification for Comparison
11.3 Results
11.4 Discussion/Conclusion
11.5 Conflicts of Interest
11.6 Funding
References
12 Deep Learning on Special Processed Video Colonoscopy Datasets
12.1 Introduction
12.2 Colonoscopy Image Databases
12.2.1 Video Colonoscopy Image Databases
12.3 Deep Learning on Our Annotated Image Databases
12.3.1 Annotation of the Video Colonoscopy Image Database
12.4 Deep Learning for Semantic Segmentation of Colonoscopic Frames
12.5 Conclusions
References
13 Improved Mask R-CNN with Deformable Convolutions for Accurate Liver Lesion Detection in Multiphase CT Images
13.1 Introduction
13.2 Method
13.2.1 Mask R-CNN
13.2.2 Improved Mask R-CNN
13.3 Experiments
13.3.1 Dataset and Experimental Settings
13.3.2 Main Results
13.4 Conclusions
References
14 Unsupervised Domain Adaptation with Adversarial Learning for Liver Tumors Detection in Multi-phase CT Images
14.1 Introduction
14.2 Our Approach
14.3 Experiments
14.3.1 Dataset
14.3.2 Implementation Details
14.3.3 Evaluation
14.3.4 Results
14.4 Conclusion
References
15 Unsupervised MR to CT Image Translation Exploiting Multi-scale and Contextual Cues
15.1 Introduction
15.2 Methods
15.2.1 Overall Framework
15.2.2 Multi-scale Discriminator
15.2.3 Structure Based Selection with Context Loss (SBSCL)
15.3 Experiments and Results
15.3.1 Datasets and Evaluation Metrics
15.3.2 Ablation Study
15.3.3 Comparison with Other Methods
15.4 Conclusion
References
Part V Methods for Medical Care and Healthcare
16 Robust Zero Watermarking Algorithm for Medical Volume Data Based on LBP
16.1 Introduction
16.2 Materials and Methods
16.2.1 Local Binary Patterns (LBP)
16.2.2 Three Dimensional Discrete Cosine Transform (3D-DCT)
16.2.3 Algorithm Process
16.3 Experimental Results and Analysis
16.3.1 Single Watermark Experimental Results
16.3.2 Multi Watermark Experimental Results
16.4 Conclusion
References
17 Robust Zero-Watermarking Algorithm for Medical Images Based on Residual Network and Depthwise Separable Convolutions
17.1 Introduction
17.2 Designed CNN
17.3 Proposed Zero-Watermarking Algorithm
17.3.1 The Embedding of the Watermark
17.3.2 The Extraction of the Watermark
17.4 Simulation and Analysis
17.4.1 The Reliability of the Algorithm
17.4.2 Robustness of the Proposed Algorithm
17.4.3 Comparison with Other Algorithms
17.5 Conclusions
References
18 Object Classification Awareness and Tubular Focal Loss for Hepatic Veins Segmentation
18.1 Introduction
18.2 Method
18.2.1 Overview
18.2.2 Pre-processing
18.2.3 Object Classification Awareness Module
18.2.4 Tubular Focal Loss
18.3 Experimental Results
18.3.1 Dataset
18.3.2 Experimental Details
18.3.3 Comparative Experiment
18.3.4 Parametric Ablation Experiment
18.4 Conclusions
References
19 Automation of Joint Space Distance Measurement for Diagnosis Support System in Rheumatoid Arthritis
19.1 Introduction
19.2 Rheumatoid Arthritis and Joint Space Distance Measurement Applications
19.3 Proposed Image Extraction Algorithm for Training
19.3.1 Correct Image Extraction Algorithm
19.3.2 Non-positive Image Extraction Algorithm
19.3.3 Removal of False Positives
19.4 Joint Edge Detection
19.5 Experimental Results
19.6 Conclusion
References
20 A Visual Attention Guidance Approach for Minimally Invasive VR Surgery Training
20.1 Introduction
20.2 Method
20.3 Evaluation Experiments
20.4 Conclusions
References
Part VI Support System/Software for Medical Care
21 VisNow-Medical—A Visual Analysis Platform for Medical Data Processing
21.1 Introduction
21.2 VisNow Medical Platform Concept and Implementation
21.3 VisNow Medical Research Related Applications
21.4 Discussion
References
22 Deep Liver Lesion AI System: A Liver Lesion Diagnostic System Using Deep Learning in Multiphase CT
22.1 Introduction
22.2 System Overview
22.3 Interactive and Visualization Functions
22.3.1 2D CT Image Visualization
22.3.2 3D Visualization for Segmentation Results
22.4 Data Transmission
22.5 Backend: Lesion Diagnosis Functions
22.5.1 Liver Segmentation
22.5.2 Liver Lesion Detection
22.5.3 Liver Lesion Segmentation
22.5.4 Liver Lesion Classification and Early Recurrence Prediction of HCC
22.6 Conclusion
References
23 A Unified Framework for Preoperative Early Recurrence Prediction of Hepatocellular Carcinoma with Multi-phase CT Images
23.1 Introduction
23.2 Methods
23.2.1 Intra-Phase Feature Extraction
23.2.2 Intra-Phase Feature Enhancement
23.2.3 Multi-phase Features Fusion
23.3 Experimental Results
23.3.1 Dataset and Implementation Detail
23.3.2 Ablation Studies
23.3.3 Comparison with Existing Deep Learning Methods
23.4 Conclusions
References
Part VII Support System/Software for Healthcare
24 Strategic Risk Management Model for Agile Software Development: Case of Global Pharmaceutical Enterprise
24.1 Introduction
24.2 Literature Review
24.2.1 Related Work—Key Digital Technologies for Healthcare and Pharmaceutical Industry
24.2.2 Industry 4.0 and Society 5.0
24.2.3 AIDAF Framework and Strategic Risk Management
24.3 Research Methodology
24.4 Proposal of “AIDAF for Enterprise Agile Software Development with STRMM Model”
24.5 Case of GPE: PoC-Based Agile Software Development Project
24.6 Discussions
24.6.1 Case of AIDAF for Enterprise Agile Software Development with STRMM Model in GPE
24.6.2 Direction of Agile Software Development in GPE Toward Industry 4.0 and Society 5.0
24.6.3 Challenges, Future Issues
24.7 Limitations
24.8 Conclusion
References
25 The Development of a Spatial Sound System in the Navigation of Visually Impaired People
25.1 Introduction
25.2 Related Work
25.2.1 Approaches on ETAs
25.3 The Concept Behind the Developed System
25.3.1 The Principle of the Method
25.3.2 The Detection of Multiple Obstacles
25.3.3 The Spatial Audio Principle
25.3.4 Obstacle Perception During Navigation
25.3.5 The Hardware Combination
25.3.6 Requirements
25.4 Connectivity Approach
25.5 Practical Configuration
25.5.1 Simulation
25.5.2 Practical Implementation
25.6 Results
25.6.1 Distance Estimation on Obstacles of Different Materials
25.6.2 Distance Estimation Under Different Light Conditions
25.6.3 Distance Estimation in Different Sound Level Conditions
25.7 Discussion
25.8 Conclusion
References
26 Information System in Support of Health of Children with Congenital Heart Disease
26.1 Introduction
26.2 Background
26.3 Methodology
26.3.1 General Objective
26.3.2 Specific Objectives
26.3.3 SCRUM Agile Methodology
26.3.4 RUP Development Methodology
26.3.5 System Design
26.4 Technological Development
26.5 Conclusions
References
27 Vision Paper for Enabling HCI-AI Digital Healthcare Platform Using AIDAF in Open Healthcare Platform 2030
27.1 Introduction
27.2 Direction of Digital Healthcare, AI and Enterprise Architecture
27.2.1 Core Technologies for Digital Healthcare
27.2.2 Artificial Intelligence for Digital Healthcare in Enterprise Architecture
27.2.3 Society 5.0
27.2.4 AIDAF Framework
27.3 Proposal of HCI-AI Digital Healthcare Platform
27.4 Discussions
27.4.1 Planned Case of a Hospital in Americas for Osteoporosis
27.4.2 Challenge and Future Issues
27.5 Conclusion and Next Research
References
Author Index



Smart Innovation, Systems and Technologies Volume 308

Series Editors
Robert J. Howlett, Bournemouth University and KES International, Shoreham-by-Sea, UK
Lakhmi C. Jain, KES International, Shoreham-by-Sea, UK

The Smart Innovation, Systems and Technologies book series encompasses the topics of knowledge, intelligence, innovation and sustainability. The aim of the series is to make available a platform for the publication of books on all aspects of single and multi-disciplinary research on these themes in order to make the latest results available in a readily-accessible form. Volumes on interdisciplinary research combining two or more of these areas are particularly sought. The series covers systems and paradigms that employ knowledge and intelligence in a broad sense. Its scope is systems having embedded knowledge and intelligence, which may be applied to the solution of world problems in industry, the environment and the community. It also focusses on the knowledge-transfer methodologies and innovation strategies employed to make this happen effectively. The combination of intelligent systems tools and a broad range of applications introduces a need for a synergy of disciplines from science, technology, business and the humanities. The series will include conference proceedings, edited collections, monographs, handbooks, reference books, and other relevant types of book in areas of science and technology where smart systems and technologies can offer innovative solutions. High quality content is an essential feature for all book proposals accepted for the series. It is expected that editors of all accepted volumes will ensure that contributions are subjected to an appropriate level of reviewing process and adhere to KES quality principles. Indexed by SCOPUS, EI Compendex, INSPEC, WTI Frankfurt eG, zbMATH, Japanese Science and Technology Agency (JST), SCImago, DBLP. All books published in the series are submitted for consideration in Web of Science.

Yen-Wei Chen · Satoshi Tanaka · Robert J. Howlett · Lakhmi C. Jain Editors

Innovation in Medicine and Healthcare Proceedings of 10th KES-InMed 2022

Editors Yen-Wei Chen Ritsumeikan University Kyoto, Japan

Satoshi Tanaka Ritsumeikan University Kyoto, Japan

Robert J. Howlett KES International Research Shoreham-by-Sea, UK

Lakhmi C. Jain KES International Selby, UK

ISSN 2190-3018 | ISSN 2190-3026 (electronic)
Smart Innovation, Systems and Technologies
ISBN 978-981-19-3439-1 | ISBN 978-981-19-3440-7 (eBook)
https://doi.org/10.1007/978-981-19-3440-7

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

InMed 2021 Organization

Honorary Chair
Lakhmi C. Jain, University of Canberra, Australia and Bournemouth University, UK

Executive Chair
Robert J. Howlett, Bournemouth University, UK

General Chair
Yen-Wei Chen, Ritsumeikan University, Japan

Program Chair
Satoshi Tanaka, Ritsumeikan University, Japan

International Program Committee Members
Marco Anisetti, University of Milan, Italy
Ahmad Taher Azar, Prince Sultan University, Saudi Arabia
Luis Enrique, University of Castilla-La Mancha, GSyA Research Group, Spain


Cecilia Dias Flores, Universidade Federal de Ciências da Saúde de Porto Alegre, Brazil
Amir H. Foruzan, Shahed University, Iran
Yoshiaki Fukami, Keio University/Gakushuin University, Japan
Arnulfo Alanis Garza, Instituto Tecnológico de Tijuana, México
Arfan Ghani, University of Bolton, Greater Manchester, United Kingdom
Kyoko Hasegawa, Ritsumeikan University, Japan
Anca Ignat, Faculty of Computer Science, University "Alexandru Ioan Cuza" of Iasi, Romania
Yutaro Iwamoto, Ritsumeikan University, Japan
Titinunt Kitrungrotsakul, Zhejiang Lab, China
Dalia Kriksciuniene, Vilnius University, Lithuania
Jingbing Li, Hainan University, China
Liang Li, Ritsumeikan University, Japan
Jing Liu, Zhejiang Lab, China
Yihao Li, Ritsumeikan University, Japan
Jiaqing Liu, Ritsumeikan University, Japan
Giosuè Lo Bosco, Università degli Studi di Palermo, Italy
Mihaela Luca, Institute of Computer Science, Romanian Academy
Daniela López De Luise, CI2S Labs
Yoshimasa Masuda, Keio University, Japan
Kazuyuki Matsumoto, Tokushima University, Japan
Rashid Mehmood, King Abdul Aziz University, Saudi Arabia
Mayuri Mehta, Sarvajanik College of Engineering and Technology, India
Aniello Minutolo, Institute for High Performance Computing and Networking, ICAR-CNR, Italy
Victor Mitrana, Polytechnic University of Madrid, Spain
Marek R. Ogiela, AGH University of Science and Technology, Krakow, Poland
Vijayalakshmi Ramasamy, University of Wisconsin-Parkside, Wisconsin
Margarita Ramirez Ramirez, Universidad Autonoma de Baja California, Mexico
Donald Shepard, Brandeis University, USA
Yu Song, Ritsumeikan University, Japan
Catalin Stoean, University of Craiova, Romania
Ruxandra Stoean, University of Craiova, Romania
Kazuyoshi Tagawa, Aichi University, Japan
Eiji Uchino, Yamaguchi University, Japan
Junzo Watada, Universiti Teknologi Petronas, Malaysia
Rui Xu, Dalian University of Technology, China
Yingying Xu, Zhejiang Lab, China
Yoshiyuki Yabuuchi, Shimonoseki City University, Japan
Shuichiro Yamamoto, Nagoya University, Japan
Cristina Manresa-Yee, Universitat de les Illes Balears, Spain
Hiroyuki Yoshida, Harvard Medical School/Massachusetts General Hospital, USA


Organization and Management
KES International (www.kesinternational.org) in partnership with the Institute of Knowledge Transfer (www.ikt.org.uk).

Preface

The 10th KES International Conference on Innovation in Medicine and Healthcare (InMed-22) was held in Rhodes, Greece, on 20–22 June 2022. InMed-22 is the 10th edition of the InMed series of conferences. The conference focuses on major trends and innovations in modern intelligent systems applied to medicine, surgery, healthcare, and the issues of an aging population, including recent hot topics on artificial intelligence for medicine and healthcare. The purpose of the conference is to exchange new ideas, new technologies, and current research results in these research fields. We received submissions from many countries around the world. All submissions were carefully reviewed by at least two reviewers of the International Program Committee. Finally, 27 papers were accepted for these proceedings, which cover a number of key areas in smart medicine and healthcare, including: (1) COVID-19; (2) Biological Research and Technologies for Medical Care; (3) Biological Research and Technologies for Healthcare; (4) Deep/Machine Learning for Medical Care and Healthcare; (5) Methods for Medical Care and Healthcare; (6) Support System/Software for Medical Care; and (7) Support System/Software for Healthcare. In addition to the accepted research papers, a number of keynote speeches by leading researchers were presented at the conference.


We would like to thank Ms. Yuka Sato of Ritsumeikan University for her valuable assistance in editing this volume. We are also grateful to the authors and reviewers for their contributions.

June 2022

Yen-Wei Chen, Ritsumeikan University, Kyoto, Japan
Satoshi Tanaka, Ritsumeikan University, Kyoto, Japan
Robert J. Howlett, Bournemouth University, Shoreham-by-Sea, UK
Lakhmi C. Jain, University of Technology Sydney, Australia and Liverpool Hope University, Liverpool, UK

Contents

Part I COVID-19

1 3D Convolutional Neural Network for Covid Assessment on CT Scans
Insaf Setitra, Rym Khettab, Anfel Sara Bouachat, Yuji Iwahori, and Abdelkrim Meziane

2 Prediction Models for COVID-19 in Children
Vincent Peter C. Magboo and Ma. Sheila A. Magboo

3 Society 5.0 as Digital Strategy for Scalability: Tamba's COVID-19 Vaccination Management System and Its Expansion
Yoshiaki Fukami and Yoshimasa Masuda

Part II Biological Research and Technologies (1): Medical Care

4 Analysis of Ensemble Learning for Chest X-Ray Images
Anca Ignat

5 Adaptive Multi-omics Survival Analysis in Cancer
Isabelle Bichindaritz and Guanghui Liu

6 Risk Analysis of Developing Lifestyle-Related Diseases Based on the Content of Social Networking Service Posts
Naomichi Tabuchi, Kazuyuki Matsumoto, Minoru Yoshida, Ryota Nishimura, and Kenji Kita

Part III Biological Research and Technologies (2): Healthcare

7 Digital Twin in Healthcare Through the Eyes of the Vitruvian Man
Spyridon Kleftakis, Argyro Mavrogiorgou, Konstantinos Mavrogiorgos, Athanasios Kiourtis, and Dimosthenis Kyriazis

8 Relationship Between Mental Health, Life Skills and Life Satisfaction Among Japanese Athletes
Xiaoya Yuan, Koichiro Aoki, Chieko Kato, and Yoshiomi Otsuka

9 IT Assisted Gardening for the Revitalization of the Elderly: The Turntable Solution
István Vassányi, Benedek Szakonyi, Daniela Loi, Angelika Mantur-Vierendeel, Guilherme Correia, Antonio Solinas, Bojan Blažica, Riccardo Pazzona, Andrea Manca, Marco Guicciardi, and Balázs Gaál

10 Mental Health of the Elderly: Towards a Proposal for a Social Simulator Based on the Complexity Approach
Consuelo Salgado Soto, Margarita Ramirez Ramirez, Esperanza Manrique Rojas, Rocio Villalon Cañas, Maricela Sevilla Caro, and Hilda Beatriz Ramirez Moreno

Part IV Deep/Machine Learning for Medical Care and Healthcare

11 Deep Reinforcement Learning Classification of Brain Tumors on MRI
Joseph Stember and Hrithwik Shalu

12 Deep Learning on Special Processed Video Colonoscopy Datasets
Adrian Ciobanu, Mihaela Luca, Radu Alexandru Vulpoi, and Vasile Liviu Drug

13 Improved Mask R-CNN with Deformable Convolutions for Accurate Liver Lesion Detection in Multiphase CT Images
Chanyu Lee, Yutaro Iwamoto, Lanfen Lin, Hongjie Hu, and Yen-Wei Chen

14 Unsupervised Domain Adaptation with Adversarial Learning for Liver Tumors Detection in Multi-phase CT Images
Rahul Kumar Jain, Takahiro Sato, Taro Watasue, Tomohiro Nakagawa, Yutaro Iwamoto, Xianhua Han, Lanfen Lin, Hongjie Hu, Xiang Ruan, and Yen-Wei Chen

15 Unsupervised MR to CT Image Translation Exploiting Multi-scale and Contextual Cues
Rui Xu, Yuening Zhang, Xinchen Ye, Fu Jin, Xia Tan, and Huanli Luo

Part V Methods for Medical Care and Healthcare

16 Robust Zero Watermarking Algorithm for Medical Volume Data Based on LBP
Wenyi Liu, Jingbing Li, Jing Liu, and Jixin Ma

17 Robust Zero-Watermarking Algorithm for Medical Images Based on Residual Network and Depthwise Separable Convolutions
Junhua Zheng, Jingbing Li, Jing Liu, and Yen-Wei Chen

18 Object Classification Awareness and Tubular Focal Loss for Hepatic Veins Segmentation
Feiyu Wang, Guoyu Tong, and Huiyan Jiang

19 Automation of Joint Space Distance Measurement for Diagnosis Support System in Rheumatoid Arthritis
Tomio Goto, Kosuke Goto, and Koji Funahashi

20 A Visual Attention Guidance Approach for Minimally Invasive VR Surgery Training
Kazuyoshi Tagawa, Chikato Kuyama, Masaya Yamamoto, and Hiromi T. Tanaka

Part VI Support System/Software for Medical Care

21 VisNow-Medical—A Visual Analysis Platform for Medical Data Processing
Piotr Regulski and Kazimierz Szopinski

22 Deep Liver Lesion AI System: A Liver Lesion Diagnostic System Using Deep Learning in Multiphase CT
Titinunt Kitrungrotsakul, Yingying Xu, Jihong Hu, Jing Liu, Yinghao Li, Lanfen Lin, Ruofeng Tong, Jingsong Li, and Yen-Wei Chen

23 A Unified Framework for Preoperative Early Recurrence Prediction of Hepatocellular Carcinoma with Multi-phase CT Images
Shuyi Ouyang, Yingying Xu, Weibin Wang, Yinhao Li, Fang Wang, Qingqing Chen, Lanfen Lin, Yen-Wei Chen, and Hongjie Hu

Part VII Support System/Software for Healthcare

24 Strategic Risk Management Model for Agile Software Development: Case of Global Pharmaceutical Enterprise
Tetsuro Miyake, Yoshimasa Masuda, Katsura Deguchi, Masashi Iwasaki, Kazuya Obanayama, and Kasei Miura

25 The Development of a Spatial Sound System in the Navigation of Visually Impaired People
Dimitrios Palogiannidis and Hanadi Solieman

26 Information System in Support of Health of Children with Congenital Heart Disease
Margarita Ramírez Ramírez, María del Consuelo Salgado Soto, Esperanza Manrique Rojas, Sergio Octavio Vázquez Núñez, and Magdalena Serrano Ortega

27 Vision Paper for Enabling HCI-AI Digital Healthcare Platform Using AIDAF in Open Healthcare Platform 2030
Yoshimasa Masuda, Ryo Ishii, Donald Shepard, Rashimi Jain, Osamu Nakamura, and Tetsuya Toma

Author Index

About the Editors

Prof. Yen-Wei Chen received his B.E. degree in 1985 from Kobe University, Kobe, Japan, his M.E. degree in 1987 and his D.E. degree in 1990, both from Osaka University, Osaka, Japan. He was a Research Fellow at the Institute of Laser Technology, Osaka, from 1991 to 1994. From October 1994 to March 2004, he was Associate Professor and then Professor in the Department of Electrical and Electronic Engineering, University of the Ryukyus, Okinawa, Japan. He is currently Professor at the College of Information Science and Engineering, Ritsumeikan University, Kyoto, Japan, and a Visiting Professor at the College of Computer Science and Technology, Zhejiang University, Hangzhou, China. He was a Visiting Scholar at Oxford University, Oxford, UK, in 2003 and at Pennsylvania State University, Pennsylvania, USA, in 2010. His research interests include medical image analysis and pattern recognition. He has published more than 300 research papers and has received many distinguished awards, including the Best Scientific Paper Award of ICPR 2013 and the Outstanding Chinese Overseas Scholar Fund of the Chinese Academy of Sciences.

Prof. Satoshi Tanaka received his Ph.D. in theoretical physics at Waseda University, Japan, in 1987. After serving as Assistant Professor, Senior Lecturer, and Associate Professor at Waseda University and Fukui University, he became Professor at Ritsumeikan University in 2002. His current research target is computer visualization of complex 3D shapes such as 3D-scanned cultural heritage objects, inside structures of the human body, and fluid simulation results. He has served as President of JSST (the Japan Society for Simulation Technology), President of ASIASIM (the Federation of Asia Simulation Societies), and Vice President of the VSJ (the Visualization Society of Japan). Currently, he is working as a Cooperation Member of the Science Council of Japan. He has won best paper awards at the Asia Simulation Conference 2012, from the Journal of Advanced Simulation in Science and Engineering in 2014, and many others.


Prof. Robert J. Howlett is the Academic Chair of KES International, a nonprofit organization that facilitates knowledge transfer and the dissemination of research results in areas including intelligent systems, sustainability, and knowledge transfer. He is a Visiting Professor at 'Aurel Vlaicu' University of Arad in Romania. His technical expertise is in the use of intelligent systems to solve industrial problems. He has been successful in applying artificial intelligence, machine learning and related technologies to sustainability and renewable energy systems; condition monitoring, diagnostic tools and systems; and automotive electronics and engine management systems. His current research work is focused on the use of smart microgrids to achieve reduced energy costs and lower carbon emissions in areas such as housing and protected horticulture.

Prof. Lakhmi C. Jain, Ph.D., Dr. H.C., M.E., B.E. (Hons.), Fellow (Engineers Australia), is with Liverpool Hope University and the University of Arad. He was formerly with the University of Technology Sydney, the University of Canberra and Bournemouth University. Professor Jain founded KES International to provide a professional community with opportunities for publication, knowledge exchange, cooperation and teaming. Involving around 5,000 researchers drawn from universities and companies worldwide, KES facilitates international cooperation and generates synergy in teaching and research. KES regularly provides networking opportunities for the professional community through one of the largest conferences of its kind.

Part I COVID-19

Chapter 1
3D Convolutional Neural Network for Covid Assessment on CT Scans

Insaf Setitra, Rym Khettab, Anfel Sara Bouachat, Yuji Iwahori, and Abdelkrim Meziane

Abstract Due to the rapid spread of the COVID-19 respiratory pathology, an effective diagnosis of positive cases is necessary to stop the contamination. CT scans offer a 3D view of the patient's thorax, on which COVID-19 appears as ground-glass opacities. This paper describes a deep-learning-based approach to automatically classify CT scan images as COVID-19 or non-COVID-19. We first build a dataset and preprocess the data; preprocessing includes normalization, resizing and data augmentation. The training step then builds on a 3D convolutional neural network originally used for tuberculosis classification. The resulting model achieves an accuracy of 80% on the test set. A prototype of the approach is implemented as a web application to assist doctors and speed up COVID-19 diagnosis. The code of both the training pipeline and the web application is available online for further research.

I. Setitra (B) · R. Khettab · A. S. Bouachat
University of Science and Technology Houari Boumediene (USTHB), Bab Ezzouar, Algeria
e-mail: [email protected]
R. Khettab
e-mail: [email protected]
A. S. Bouachat
e-mail: [email protected]
Y. Iwahori
Department of Computer Science, Chubu University, Kasugai, Japan
e-mail: [email protected]
A. Meziane
Research Center on Scientific and Technical Information (CERIST), Ben Aknoun, Algeria
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
Y.-W. Chen et al. (eds.), Innovation in Medicine and Healthcare, Smart Innovation, Systems and Technologies 308, https://doi.org/10.1007/978-981-19-3440-7_1


1.1 Introduction

Today, the Coronavirus Disease 19 (COVID-19) pandemic is still keeping the world in suspense. Starting in China and then Italy, it has ended up affecting 223 countries. This pandemic and the confinements it imposed have had a devastating impact on our social livelihood, on the global economy and, more importantly, on public health. COVID-19 is caused by the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) virus, which belongs to the coronavirus family. The disease causes respiratory infections that can lead to death in case of complications. The global population has seen a rapid increase in the number of infections as well as re-infections. By the time of writing this manuscript, more than 239 million cases had been diagnosed COVID-19 positive and more than 4 million deaths around the world had been reported.1

Effective COVID-19 diagnosis is one of the challenges faced by doctors to help quarantine affected people and stop the pandemic. The principal COVID-19 diagnosis tool is the Real-Time Polymerase Chain Reaction (RT-PCR), which detects the presence of viral genetic material in the human body. Although a robust diagnosis tool, RT-PCR detects the virus only if the patient is infected at the time of the test, and is not effective after the first 3 days of infection [9]. As an alternative, chest X-ray and Computed Tomography scans (CT scans) are considered another screening tool for COVID-19 assessment. While X-ray imaging is more cost-effective and faster than RT-PCR, chest X-rays are not specific to COVID-19 and may overlap with other diseases [4]. CT scans seem to be a better diagnosis tool, since they provide a detailed structure of the lung in 3D and allow observing the most significant lung lesions [3].2

The typical appearance of COVID-19 pneumonia on chest CT images is ground glass opacification spread through the lungs, most frequently peripheral in distribution [5]. The ground glass opacities are small air sacs, or alveoli, filled with fluid, turning to a shade of grey on CT scans. In severe infections, more fluid gathers in the lobes of the lungs, which makes the ground glass appearance progress to a solid white consolidation. Additionally, a "crazy paving" pattern can appear on the lungs and make them appear thicker. This is due to swelling of the interstitial space along the walls of the lung lobules. The look is similar to the irregularly shaped stones used to pave a street, hence the name "crazy paving" [5].

The typical appearance of the lungs on CT scans after a COVID-19 infection shows that these images have potential for distinguishing healthy and infected patients. However, due to the large amount of information and the high dimensionality of CT images, automatic and semi-automatic analysis is necessary to help the virus diagnosis. Deep learning, considered a revolution in medical imaging, provides an accurate analysis of this kind of data, which results in better medical decisions. Deep neural networks (DNNs) have recently been employed for the diagnosis of several diseases, leading to promising results [16].

1 https://www.worldometers.info/coronavirus/
2 https://github.com/Insaf-Setitra/3D-CovidClassification


In this paper, we propose a deep neural network to analyze COVID-19 on CT scan images. The proposed approach uses a 3D convolutional neural network initially trained on the task of tuberculosis classification. First, the training phase is performed on open-source CT scans collected from three public datasets. A private dataset collected from local hospitals, annotated by medical specialists, is used for testing. Thereafter, several configurations are tested by changing the network parameters and the training database, which results in a set of models. Finally, the best model among the trained models is used in a web application for further use. The application is meant to help doctors diagnose patients rapidly and with little effort. Both the training code and the web application can be accessed on GitHub for further research. While the models are trained on CT scan images, we believe that the networks can be retrained with minimal modification for any multispectral image classification task.

The remainder of the paper is structured as follows: Sect. 1.2 presents some works related to automatic COVID-19 diagnosis tools. Section 1.3 describes our approach for COVID-19 analysis based on deep learning. Section 1.4 presents the results and a discussion, and finally Sect. 1.5 concludes the work with some perspectives.

1.2 Related Works

COVID-19-related works can be distinguished according to three criteria: first, the chest dataset type used (either CT scans or X-ray images); second, the deep network architecture used (2D or 3D); and third, the classification type (either binary or multiclass).

In [13] a new model labeled DarkCovidNet is proposed to detect COVID-19. The images used are X-rays provided by [6] and [19]. The architecture of the network is based on the original DarkNet-19 [15] and is composed of 17 layers. The model is trained and evaluated for two tasks, namely a binary classification (COVID-19 vs. No-Findings) and a multiclass classification (COVID-19 vs. No-Findings vs. non-COVID-19). The binary classification achieves an accuracy of 98.08%. For the multiclass classification, cross-validation was used, and the average classification performance attains 87.02%. The authors report that the model is useful for detecting COVID-19, with supporting heat maps in normal subjects, but its effectiveness decreases in pneumonia and Acute Respiratory Distress Syndrome (ARDS) cases (non-COVID-19). Moreover, the quality of the prediction decreases on poor-quality X-ray images, because the lung image is diffuse and much lung ventilation is lost [13].

In [1], the binary classification of COVID-19 and non-COVID-19 CT scan images using transfer learning is proposed. Four deep convolutional neural network models, SqueezeNet, ResNet18, ResNet50 and ResNet101 [2], have been used for the task and tested on the images of [21]. In this work, several steps are performed prior to the classification phase. First, input images are transformed to the JPEG format, then resized and finally normalized before being transformed to the stationary


wavelet domain. The pre-processed images are then augmented and fed to the four pre-trained networks for transfer learning. According to the authors, the ResNet18 model showed the best results, with 99.4% accuracy on the testing set. A further step is proposed by the authors to localize abnormalities in COVID-19-positive CT scan images. This is done by comparing the areas of activation of the features obtained from the first convolutional and deep layers with the original scans.

Jin et al. [10] proposed an AI system of five blocks for COVID-19 diagnosis from CT volumes. The first block is a 2D CNN model that takes the CT volume as input slice by slice for the segmentation of the lung. The second block is a 2D classification deep network whose backbone is ResNet152 [8]. The inputs of the network are the CT slices concatenated with the segmented lungs; the outputs represent the likelihoods of being normal and COVID-19 affected. A volume is considered COVID-19 positive when any one of its slices is COVID-19 positive. The third block is a network that locates abnormal slices in COVID-19-positive cases. The fourth block is a network visualization module for interpreting the attentional regions of deep networks. Regions of attention are extracted by binarizing the output of Grad-CAM [17], and give more details about diagnosis suggestions. The last block is a traditional feature extraction block that consists of an image phenotype and radiomic analysis to understand the imaging characteristics of COVID-19. In addition to the five networks, a reader study was conducted in [10] with five board-certified radiologists interpreting 200 CT volumes. The Receiver Operating Characteristic (ROC) curve had an Area Under the Curve (AUC) of 0.9805, a sensitivity of 0.9470, and a specificity of 0.9139. Among the five readers, one performed better than the AI system, one performed worse, and the remaining three had similar performance to the AI system.

Karim et al. [16] proposed a new model labeled DeepCovidExplainer for detecting COVID-19 on chest X-ray (CXR) images. The CXR images are first pre-processed before being augmented and classified. The preprocessing phase principally includes noise elimination, normalization and resizing. Further, the training set images are augmented by rotations of up to 15 degrees before being classified. The multiclass classification (COVID-19 vs. Normal vs. Pneumonia) is performed using a neural ensemble method with the VGG, ResNet, and DenseNet model architectures. The classification results for the ensemble methods showed that the VGG-19+DenseNet-161 combination outperforms the other combinations. Quantitative results of the method are 0.926, 0.917, and 0.925 for precision, recall, and F1 scores respectively. In order to highlight the class-discriminating regions, gradient-guided class activation maps (Grad-CAM++) and layer-wise relevance propagation (LRP) were used in this work.

From the previously analyzed works, we make two major observations: (1) all analyzed works use some form of deep network, either solely for classification, or for both segmentation and classification; (2) no previous work in the present state of the art has used the 3D volume as a single input. In this work, we would like to use the 3D input image to preserve the spatial information of the volume. We explore the benefits cited by [1, 18], mainly (1) reducing the pre-processing of the dataset, (2) speeding up the learning process, and (3) adjusting the complexity by reducing


the number of parameters. Our motivation behind using the 3D volume instead of 2D slices is that, as stated in [7], independent processing of individual slices in 2D CNNs strongly degrades the performance of the model, whereas 3D CNNs simultaneously exploit the geometry of the volume (height, width and depth). Our approach is inspired by the approach presented in [22], a deep network that uses 3D volumes of CT scans for classification, namely classifying tuberculosis vs. no-findings images, whereas our model classifies COVID-19 versus non-COVID-19 images.

1.3 Our Approach

Our approach for COVID-19 diagnosis consists of using a 3D CNN initially designed for the task of tuberculosis classification. The work consists of three major phases. The first phase corresponds to CT scan image collection to build our dataset. The second phase consists of preprocessing and splitting the dataset into training, validation and testing sets. Finally, the last phase is to train the 3D convolutional neural network with the preprocessed training set and validate the model.

Transfer learning is a technique that improves the performance of a model by giving it knowledge learned by another model. In our case, the weights obtained in tuberculosis classification are used as our initial weights. We split the training set into two parts: we first trained the model on one part of the dataset and saved the weights obtained; then, we retrained the same model on the other part of the dataset, starting from the weights of the first training. A sketch of this two-stage procedure is given below.
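As a minimal sketch of this two-stage training in Keras (build_model stands for the 3D CNN of Sect. 1.3.2.2, and x_part1/y_part1, x_part2/y_part2 and x_val/y_val are hypothetical names for the two halves of the training set and the validation set; they are not taken from the paper's released code):

# Two-stage training: train on the first part, save the weights,
# then continue training on the second part from those weights.
from tensorflow import keras

model = build_model()  # the 3D CNN of Sect. 1.3.2.2 (see sketch there)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])  # full configuration in Sect. 1.4.1

# Stage 1: first part of the training set.
model.fit(x_part1, y_part1, validation_data=(x_val, y_val), epochs=100)
model.save_weights("stage1_weights.h5")

# Stage 2: reload the stage-1 weights and retrain on the second part.
model.load_weights("stage1_weights.h5")
model.fit(x_part2, y_part2, validation_data=(x_val, y_val), epochs=100)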

1.3.1 Dataset

The dataset used includes open-source CT scans and locally collected data. CT imaging is able to distinguish the density of different organs and create a 3D image of the body, thanks to the powerful algorithms used to calculate the X-ray absorption by each organ tissue in a slice [3]. The open-source CT scans are issued from several datasets [11, 12] available on the Kaggle platform,3 whereas the data collected for testing are issued from a medical center in Algeria. Each image of the local database was anonymized and annotated by a specialized radiologist. These images were initially in the classical DICOM format4 and then converted to the NIFTI file format using dcm2niigui.5 The motivation behind this conversion is to have a simple and portable file format, so as to manipulate and analyze CT scans without loss of spatial information. Moreover, the NIFTI format allows compacting all slices into one complete file.

3 https://www.kaggle.com/
4 The DICOM format is a file container mainly used by radiologists to store and archive medical images, containing a header with metadata and a set of images (data elements and data tags).
5 https://people.cas.sc.edu/rorden/mricron/dcm2nii.html

The details of the used dataset are as follows:
• First set, 102 CT scans: the first 102 CT scans, of 50 slices each, are issued from Russian hospitals and annotated as no findings.
• Second set, 82 CT scans: the second 82 CT scans, of 50 to 300 slices per scan, are also issued from Russian hospitals; their annotations are as follows:
  – 2 scans with ground glass opacities, consolidations, reticular changes and pulmonary parenchymal involvement >= 75%;
  – 20 files with ground glass opacities with pulmonary parenchymal involvement < 25%;
  – 2 files annotated COVID-19 positive.

The three open-source datasets were used for training and validation, while the privately collected Algerian dataset was used for testing. The three open-source datasets are divided as follows: 70% for training and 30% for validation. A short sketch of how a volume is loaded is given below.
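As a short illustration, a NIFTI volume can be read with Nibabel as follows (the file path is illustrative, not an actual file from the datasets):

import nibabel as nib
import numpy as np

def read_nifti(path):
    # Load a CT scan and return its voxels as a NumPy array of
    # shape (width, height, depth): one stacked volume per study.
    scan = nib.load(path)
    return np.asarray(scan.get_fdata())

volume = read_nifti("data/study_0001.nii")
print(volume.shape)  # e.g. (512, 512, 64)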

1.3.2 Methodology

In this section, we discuss the method used, covering the preprocessing and classification phases.

1.3.2.1 Preprocessing

The preprocessing phase consists of normalization, resizing, data augmentation and dataset splitting. Image normalization is applied in order to unify the image values and obtain a common scale across the dataset. In our case, the CT scan slice intensities are stacked, forming a volume (volumetric image); therefore, we normalize the elements of this volume, also called voxels. A voxel represents the smallest sampled 3D regular element in the lung. Voxels are rescaled to a range between 0 and 1 using the normalization equation defined as follows:

f(x) = \frac{x - x_{min}}{x_{max} - x_{min}}    (1.1)

where
• x refers to the voxel value of the slice in HU units;6
• x_{max} refers to the maximum radiodensity value of the CT scans in the dataset;
• x_{min} refers to the minimum radiodensity value of the CT scans in the dataset.

6 The HU (Hounsfield) scale is a quantitative scale for describing radiodensity in medical imaging according to the type of tissue. Air is represented by a value of −1000 (black on the grey scale) and bone by values between +700 (cancellous bone) and +3000 (dense bone) (white on the grey scale).

The normalized image is then resized into a 128 × 128 × 64 volume. Resizing is performed using the Spline Interpolated Zoom (SIZ) technique, which accelerates the processing time. In SIZ, a zoom is applied to the entire image volume according to a factor chosen so as to keep a fixed number of slices. This technique is used to avoid drastic transformations and loss of information between the original slice volume and the reduced slice volume. The input volume is zoomed or squeezed by replicating the nearest pixel along the depth z-axis. The SIZ procedure is summarized in the following algorithm:

Spline Interpolated Zoom Algorithm
Require: A 3D volumetric image I of size W × H × depth.
Ensure: I is a rank-3 tensor.
1. Set a constant target depth N.
2. Calculate the current depth, denoted D.
3. Compute the depth factor DF = 1/(D/N) = N/D.
4. Zoom I using spline interpolation by the factor DF.
5. Output the processed volume I' of dimension W × H × N.

After resizing, the three public datasets are split into training and validation: the training set represents 70% of our dataset, whereas the validation set represents 30%. Finally, to enlarge the small training set, data augmentation is applied by rotating the training CT volumes: a random angle in the range [−20, 20] degrees is applied to each volume of the training dataset. A sketch of this preprocessing pipeline follows.
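The following sketch implements the pipeline with SciPy's ndimage: Eq. (1.1), SIZ resizing to 128 × 128 × 64, and random rotation in [−20, 20] degrees. The dataset-wide extrema x_min and x_max are assumed to be precomputed, and the spline order of 1 is our assumption:

import numpy as np
from scipy import ndimage

def normalize(volume, x_min, x_max):
    # Min-max normalization of the voxels to [0, 1], Eq. (1.1).
    return (volume - x_min) / (x_max - x_min)

def spline_interpolated_zoom(volume, target=(128, 128, 64)):
    # One zoom factor per axis; along depth, DF = N / D as in the
    # SIZ algorithm above.
    factors = [t / s for t, s in zip(target, volume.shape)]
    return ndimage.zoom(volume, factors, order=1)

def augment(volume):
    # Random rotation by an angle drawn uniformly in [-20, 20] degrees.
    angle = np.random.uniform(-20, 20)
    return ndimage.rotate(volume, angle, reshape=False)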

1.3.2.2 Classification

The 3D CNN model type is considered the most suitable for fully exploiting the anatomical space of the lungs and extracting its characteristics. Ground glass opacities, the main radiological symptom on CT scan images diagnosed positive for COVID-19, can also occur in atypical forms of tuberculosis. However, these two diseases have different clinical signs. In addition, a severe COVID-19 infection usually causes acute respiratory distress syndrome (ARDS), unlike tuberculosis [20].

The model used in this work is a 17-layer 3D CNN implemented originally to classify tuberculosis CT scans versus no-findings CT scans. It consists of four 3D convolutional layers, the first two with 64 filters, followed by layers with 128 and 256 filters, all with 3 × 3 × 3 kernel sizes. Every convolutional layer uses the ReLU activation function and is followed by a MaxPool layer with a stride of 2, ending with a batch normalization layer; in other words, the 3D CNN consists of four conv-maxpool-BN blocks. The final output of these blocks is flattened and given to a fully connected layer with 512 neurons, regularized with two dropout layers. The output is then passed to a dense layer consisting of two neurons with a softmax activation function for the binary classification task. The network architecture is kept as simple as possible to avoid over-parameterization problems [14]. Figure 1.1 illustrates the architecture of the chosen CNN; a sketch of this model follows.

Fig. 1.1 3D convolutional neural network architecture
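A sketch of this architecture in Keras, following the description above and Fig. 1.1 (the filter counts, kernel sizes and two-neuron softmax head are from the text; the dropout rate of 0.3 and the exact placement of the two dropout layers are our assumptions):

from tensorflow import keras
from tensorflow.keras import layers

def build_model(width=128, height=128, depth=64):
    inputs = keras.Input((width, height, depth, 1))
    x = inputs
    # Four conv-maxpool-BN blocks with 64, 64, 128 and 256 filters.
    for filters in (64, 64, 128, 256):
        x = layers.Conv3D(filters, kernel_size=3, activation="relu")(x)
        x = layers.MaxPool3D(pool_size=2)(x)  # stride of 2
        x = layers.BatchNormalization()(x)
    x = layers.Flatten()(x)
    x = layers.Dropout(0.3)(x)                  # assumed rate and placement
    x = layers.Dense(512, activation="relu")(x)
    x = layers.Dropout(0.3)(x)                  # assumed rate and placement
    outputs = layers.Dense(2, activation="softmax")(x)  # COVID / non-COVID
    return keras.Model(inputs, outputs, name="covid_3dcnn")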

1.4 Results and Experiments

1.4.1 Working Environment

We carried out the development and the experiments on a Dell Inspiron with a Core(TM) i3-7100U CPU at 2.40 GHz, 8 GB of RAM, and 64-bit Windows 10, using the Kaggle platform, which provides NVIDIA K80 GPUs and 13 GB of RAM in its kernels. Our CNN model was implemented in Python on the Kaggle platform. The TensorFlow and Keras libraries were used for training, while the Nibabel and Ndimage libraries were used for processing the medical image volumes. A learning rate of 10^-4 with the Adam optimizer and a binary cross-entropy loss function were used. A batch size of 2 volumes and 100 epochs were used, with early stopping. We give further details about the development environment below.

1.4.1.1 Python Programming Language

Python is a high-level and versatile programming language, equipped with several libraries for linear algebra, image processing, machine learning and deep learning.
– Kaggle: a free web platform from Google for writing and testing Python code online. This platform provides its users with hardware resources in the cloud such as storage, RAM, GPUs and TPUs. The advantage of Kaggle lies in the wide variety of databases it hosts and the possibility of sharing them, as well as sharing code scripts. Competitions between data scientists and machine learning specialists also take place on this platform, with the aim of encouraging the open-source culture.
– Nibabel: a Python package, licensed under MIT, to handle the different file formats used in medical imaging, and more specifically in neuro-imaging.
– TensorFlow: an open-source deep learning library from Google for developing dataflow programs.
– Keras: a Python API for building and training different neural network architectures.
– NumPy: a Python library for providing and manipulating large arrays and multidimensional matrices.
– SciPy: a Python library for scientific computing, equipped with mathematical functions and numerical analysis routines operating on multidimensional arrays.
– Ndimage: a SciPy package that includes various analysis functions designed for multidimensional image processing.

The training configuration itself is summarized in the sketch below.
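The training configuration reported above then looks roughly as follows in Keras (the early-stopping patience, the monitored metric and the checkpoint path are assumptions; x_train/y_train and x_val/y_val stand for the preprocessed volumes and labels):

from tensorflow import keras

model = build_model()
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-4),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
callbacks = [
    keras.callbacks.EarlyStopping(monitor="val_accuracy", patience=15),
    keras.callbacks.ModelCheckpoint("best_model.h5", save_best_only=True),
]
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          batch_size=2, epochs=100, callbacks=callbacks)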

1.4.2 Evaluation and Discussion

In order to measure the performance of our model, we use the precision, recall and F1-score measures, defined as follows:

\text{precision} = \frac{TP}{TP + FP}    (1.2)

\text{recall} = \frac{TP}{TP + FN}    (1.3)

\text{F1-score} = \frac{2 \times \text{precision} \times \text{recall}}{\text{precision} + \text{recall}}    (1.4)

where:
– True Positive (TP): COVID-positive volume predicted as positive by our network;
– False Positive (FP): COVID-negative volume predicted as positive by our network;
– False Negative (FN): COVID-positive volume predicted as negative by our network;
– True Negative (TN): COVID-negative volume predicted as negative by our network.

During training, the accuracy of the model starts at 65%, whereas the validation accuracy starts at 54%. After the first epoch, the training accuracy reaches 60% while the validation accuracy remains stable. During the remaining epochs, the training and validation accuracies increase and decrease periodically while the model tries to reach a local minimum. By the end of the training phase, the model reaches an accuracy of 87% on the training set and 72% on the validation set. The model stops training after 92 epochs because of the early stopping criterion. The loss metric used is based on the incorrect predictions made by the model and is calculated using the mean of squared differences between the ground-truth labels and the predicted values. The training loss starts at 0.6 and the validation loss at 1. After the first epoch, the training loss remains stable while the validation loss decreases to 0.8; the validation loss decreases more slowly than the training loss. By the end of training, the loss reaches 0.3 on the training set and 0.6 on the validation set.

Fig. 1.2 Confusion matrix in the testing set

The resulting confusion matrix (Fig. 1.2) shows that the false positive count is 0, the false negative count is 3, the true positive count is 12 and the true negative count is 1. Thus, the accuracy is 80%, the F1-score is 0.88 and the precision is 1: every volume the model predicts as COVID-19 positive is truly positive. We believe that these results are related to class imbalance; indeed, the number of COVID-19-negative volumes is smaller than the number of positive volumes, as the data was collected during the COVID-19 pandemic. These scores can be checked directly from the confusion matrix, as in the sketch below.
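The reported scores can be re-derived (up to rounding) from the confusion matrix with Eqs. (1.2)–(1.4):

tp, fp, fn, tn = 12, 0, 3, 1  # values from Fig. 1.2

accuracy = (tp + tn) / (tp + fp + fn + tn)          # 13/16 ~ 0.81
precision = tp / (tp + fp)                          # 12/12 = 1.0
recall = tp / (tp + fn)                             # 12/15 = 0.8
f1 = 2 * precision * recall / (precision + recall)  # ~ 0.89

print(f"acc={accuracy:.2f} p={precision:.2f} r={recall:.2f} f1={f1:.2f}")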

1.5 Conclusion

In this paper, a deep learning approach to analyze COVID-19 CT scan images and make an automatic diagnosis of this disease was presented. First, open-source data and local data were collected and preprocessed. Secondly, the binary classification


was done using a 3D CNN model. Lastly, the developed model was tested on volumes transformed from our local hospital CT scans. Evaluation results show that our approach can identify COVID-19 with an accuracy of 80%; however, the model is less able to identify CT scans of healthy lungs. A limitation of the study is the use of a limited number of CT scan images, due to the memory challenges that we faced; more data from diverse sources is necessary for a further evaluation. In the same line, we argue that the developed model can act as a smart medical assistant for rapid and accurate detection of COVID-19, and thus help doctors and medical practitioners in their task and allow them to focus on more critical cases.

Acknowledgements We wish to acknowledge the help provided by Dr. N. Kacher from the CIMK imagery center and the medical staff for their collaboration and feedback on our experiments.


Chapter 2

Prediction Models for COVID-19 in Children

Vincent Peter C. Magboo and Ma. Sheila A. Magboo

Abstract COVID-19 can also be acquired by children, who, compared to adults, have only mild cases most of the time. Moreover, clinical manifestations in children are non-specific and can also be seen in other viral infections. Nonetheless, children are also susceptible to acquiring the severe form of the disease requiring urgent hospitalization. In our study, we applied several machine learning models for COVID-19 prediction in children using only clinical laboratory findings. The best performing models were obtained with random forest, decision tree, AdaBoost, and XGBoost, with 95–98% accuracy, 96–100% sensitivity, and 92–96% specificity. Because of data imbalance, random oversampling was applied, resulting in improved performance metrics, particularly for the best performing models. The top three important features for the best performing model (random forest) were leukocytes, platelets, and eosinophils. Our study has generated useful insights into using widely available, simple, and readily measurable laboratory blood counts to develop robust models that predict COVID-19 in children with high reliability. These models can be used as decision-support tools in clinical practice for deciding when to request a COVID-19 diagnostic test, particularly when RT-PCR tests are limited. Tools aided with machine learning can thus be utilized to optimize constrained resources.

2.1 Introduction

COVID-19, a severe respiratory illness caused by the virus SARS-CoV-2, has become a worldwide health catastrophe. As of January 27, 2022, more than 363 million people had been afflicted worldwide, causing more than 5.6 million deaths [1]. Similar to adults, children are also at risk of contracting COVID-19. Based on UNICEF data from 106 countries, around 23 million COVID-19 cases among children and adolescents were reported, of which 34% belong to the age group 0–9 years old [2].


Though most of the time children have only mild cases, they are nonetheless also at risk of getting the severe form of the disease requiring urgent hospitalization [3]. In one study based in Spain involving 1,040 patients less than 16 years old, almost half (47.2%) were asymptomatic, 10.8% had comorbidities, and only 2.6% required hospitalization, with no reported deaths [4]. In another study involving Norwegian children and adolescents, short-term increases in primary care consultations were seen in the first, second, and third months after a positive SARS-CoV-2 test result compared with those who tested negative. Results showed that the increase in primary care visits was due to respiratory and general or unspecified conditions, while there was no increase in specialist care consultations [5]. Moreover, clinical manifestations in children are non-specific and can also be seen in other viral infections. At any rate, early detection of the presence of SARS-CoV-2 is needed to provide the best possible care to patients. Additionally, the prompt detection of severe COVID-19 remains a challenge to health professionals. Hence, it is important to develop decision-making support tools that assist in providing a more accurate diagnosis. Our objective is to develop a machine learning (ML) model that predicts the presence of COVID-19 in children based on laboratory findings. The main contribution of this research is to find robust and reliable prediction models that will assist healthcare professionals in assessing the presence of COVID-19 in children using only clinical laboratory findings.

2.2 Literature Review

Several studies applied ML models solely to clinical laboratory findings to predict COVID-19. In a study by Mamlook et al. [3], the authors studied laboratory findings for the diagnosis of COVID-19 using predictive algorithms. They applied artificial neural networks (ANN), random forest (RF), support vector machines (SVM), decision trees (DT), classification and regression trees (CART), and gradient boosted trees (GBM). The best result was obtained by CART with 92.5% accuracy, and leukocytes, monocytes, potassium, and eosinophils were identified as the most important predictors, indicating that these features play a crucial role in COVID-19. Banerjee et al. used ML models, ANN, and a simple statistical test to classify SARS-CoV-2 positive patients using blood counts only, without the patient's symptoms or history [6]. Their models (RF, shallow learning, and ANN) obtained an area under the receiver operating characteristic curve (AUROC) of 94–95% in predicting SARS-CoV-2. They also found that normalized blood parameters from SARS-CoV-2 positive patients exhibit a decrease in platelets, leukocytes, eosinophils, basophils, and lymphocytes, and an increase in monocytes, suggesting a characteristic immune response profile pattern in the full blood count that can be detected from simple and rapid blood tests. They concluded that ML methodology has the potential to improve initial screening of patients when PCR-based diagnostic tools are limited.


In another study, Yang et al. [7] applied a gradient boosting decision tree (GBDT) incorporating patient demographic features (age, sex, race) with 27 routine laboratory tests to predict SARS-CoV-2 infection. Their model obtained 85.4% AUROC, and the authors concluded that ML models using routine laboratory test results offer opportunities for early and rapid identification of high-risk COVID-19 infected patients even before RT-PCR results become available. Other studies applied ML models using laboratory findings together with clinical symptoms and radiographic results. Martinez-Velazquez et al. applied DT, RF, SVM, ANN, and a voting ensemble to detect COVID-19 infections using self-reported symptoms. Their best model obtained 75.2% sensitivity, 60.9% specificity, and an AUROC of 72.8% [8]. Ma et al. [9] studied the use of symptoms and laboratory results to predict the computed tomography (CT) outcomes of pediatric patients with positive RT-PCR testing results using decision tree-based machine learning models. Their model obtained 84% AUROC, 82.0% accuracy, and 84% sensitivity for predicting CT outcomes. Additionally, the authors concluded that the combination of five indicators (age, C-reactive protein, neutrophils, lymphocytes, and ferritin) can effectively predict the CT findings of COVID-19 patients. In Antoñanzas et al. [10], the authors applied RF, SVM, XGBoost, logistic regression (LR), and a multilayer perceptron to assess the need for a SARS-CoV-2 test in children based on their clinical symptoms. The best model obtained 65% AUROC and indicated that the absence of high-grade fever was a major predictor of COVID-19 in younger children, while loss of taste or smell was the most determinant symptom in older children. Marateb et al. implemented an ensemble classifier to make a COVID-19 diagnosis using demographics, symptoms and signs, blood markers, and family history of diseases applied to three datasets [11]. The classifier obtained 96% sensitivity, 95% specificity, 96% AUROC, and 96% accuracy on dataset 1, while AUROCs of 97% and 92% were obtained for datasets 2 and 3, respectively. The most important features in their study were white blood cell count, shortness of breath, and C-reactive protein for datasets 1, 2, and 3, respectively. Kouanou et al. [12] applied SVM to two publicly available datasets for COVID-19 diagnosis. Their study showed 99.29% accuracy, 92.79% sensitivity, and 100% specificity on dataset 2, while on dataset 1 SVM obtained 92.86% accuracy, 93.55% sensitivity, and 90.91% specificity. On the other hand, some studies reported using other ML techniques to assess for COVID-19. Goreke, Sari, and Kockanat developed a deep learning-based hybrid classifier for COVID-19 detection, which obtained 94.95% accuracy, 94.98% F1-score, 94.98% precision, 94.98% recall, and an AUROC of 100% [13]. Li et al. have shown that COVID-19 patients can be clustered into subtypes based on serum levels of immune cells, gender, and reported symptoms [14]. They used an XGBoost model, which obtained 92.5% sensitivity and 97.9% specificity in differentiating COVID-19 patients from influenza patients. They also concluded that computational methods trained on large clinical datasets could generate accurate COVID-19 diagnostic models to mitigate the impact of a lack of testing. Finally, Dayan et al. [15] applied federated learning, training AI models on data from 20 institutes around the world while maintaining anonymity, to predict the future oxygen requirements of patients with COVID-19 using vital signs, laboratory data, and chest X-rays.


Their model achieved more than 92% AUROC for predicting outcomes at 24 and 72 h from the time of initial presentation to the emergency room.

2.3 Methodology

The study is performed in several steps. The first step is loading the dataset. Pre-processing techniques applied to the dataset include data cleaning, feature selection procedures, and random oversampling to handle data imbalance. The next step is to apply a variety of ML algorithms, followed by performance assessment. Lastly, the feature importance of the attributes of the best performing models is also obtained; a minimal sketch of such a pipeline is given below. The machine learning pipeline for this study is shown in Fig. 2.1.
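As a rough illustration only, the following sketch mirrors these steps with scikit-learn; the file name, column names, label values, and hyper-parameters are assumptions rather than the authors' actual code, and XGBoost is omitted for brevity.

```python
# Minimal sketch of the pipeline in Fig. 2.1 (hypothetical file/column names).
# Assumes the dataframe has already been cleaned (see Sect. 2.3.2).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, recall_score

df = pd.read_csv("covid19_children.csv")          # hypothetical path
X = df.drop(columns=["SARS-Cov-2 exam result"])   # target column from the text
y = df["SARS-Cov-2 exam result"]                  # assumed "positive"/"negative"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

models = {
    "random forest": RandomForestClassifier(random_state=42),
    "decision tree": DecisionTreeClassifier(random_state=42),
    "AdaBoost": AdaBoostClassifier(random_state=42),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name, accuracy_score(y_te, pred),
          recall_score(y_te, pred, pos_label="positive"))

# Feature importance for the best performing model (random forest in the text).
rf = models["random forest"]
print(sorted(zip(rf.feature_importances_, X.columns), reverse=True)[:3])
```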

2.3.1 Dataset Description

In our study, we used a publicly available dataset on COVID-19 in children from Kaggle [16]. This dataset contains 5,664 anonymized, normalized records of COVID-19 status and laboratory tests from patients less than 18 years old on admission, seen at the Hospital Israelita Albert Einstein in São Paulo, Brazil. The dataset consists of 111 columns, including the target variable, SARS-Cov-2 exam result. The data is also imbalanced, with 13% COVID positive and 87% COVID negative.
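To illustrate how random oversampling addresses such a 13%/87% split, here is a self-contained toy example assuming the imbalanced-learn package; the synthetic data merely mimics the class proportions.

```python
# Toy illustration of random oversampling on a 13%/87% class split.
import numpy as np
from imblearn.over_sampling import RandomOverSampler

X = np.random.rand(100, 5)                 # 100 synthetic "patients"
y = np.array([1] * 13 + [0] * 87)          # mimic 13% positive / 87% negative

ros = RandomOverSampler(random_state=42)
X_res, y_res = ros.fit_resample(X, y)      # minority class duplicated
print(np.bincount(y_res))                  # [87 87]: classes now balanced
```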

Fig. 2.1 Machine learning pipeline for COVID-19 prediction in children


2.3.2 Pre-processing Steps

To prepare the dataset for machine learning, we started with data cleaning and applied pre-processing methods. We dropped the rows without full blood count entries as well as the variables deemed unimportant for COVID-19 prediction based on previous studies [6, 13, 17–20]. Likewise, outliers were removed. The resulting dataset of 531 records still had some missing values after the initial data cleaning. In our study, we opted not to impute these missing values, as this situation merely reflects the usual complexity of decision making in medical practice: physicians decide which laboratory tests to request based on the patient's medical history and physical examination, so there is no standard set of laboratory tests ordered for every individual or for a specific condition. We decided to remove the variables with more than 10% missing values, namely: serum glucose, alanine transaminase, aspartate transaminase, sodium, potassium, urea, creatinine, C-reactive protein, and neutrophils. Upon re-inspection, we also dropped 4 more rows with missing values in the mean platelet volume or monocytes, resulting in 527 records. We then applied feature selection methods to reduce the number of attributes, which lowers the computational complexity of the prediction algorithms, increases the accuracy of the models, and avoids potential overfitting [20–22]. In this study, feature elimination is done by a majority vote among feature selection methods including recursive feature elimination (RFE) and correlation with the target variable, as sketched below.
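The fragment below sketches the missing-value rule and a two-method vote (RFE plus correlation with the target) on synthetic data; the study's actual estimators, vote members, and feature counts are not specified in the text, so everything beyond the 10% threshold is an assumption.

```python
# Sketch: drop high-missingness columns, then vote between RFE and correlation.
import numpy as np
import pandas as pd
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 12)),
                  columns=[f"lab_{i}" for i in range(12)])
df["covid"] = rng.integers(0, 2, size=200)

# Remove variables with more than 10% missing values, then rows with NaNs.
df = df.loc[:, df.isna().mean() <= 0.10].dropna()
X, y = df.drop(columns="covid"), df["covid"]

# Vote 1: recursive feature elimination with a linear estimator.
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5).fit(X, y)
rfe_votes = set(X.columns[rfe.support_])

# Vote 2: absolute correlation with the target variable.
corr_votes = set(X.corrwith(y).abs().nlargest(5).index)

selected = rfe_votes & corr_votes   # keep features chosen by both methods
print(selected)
```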

$$\text{ordinal\_Loss} = \sum_{(i,j) \in Comp} \begin{cases} tp_j(\theta, x_j) - tp_i(\theta, x_i), & \text{if } tp_j(\theta, x_j) > tp_i(\theta, x_i) \\ 0, & \text{else} \end{cases} \quad (5.4)$$

where Comp(i, j) is the set of acceptable pairs, that is, pairs in which i is not censored and, at the time of i's event, j is not censored. By combining the Cox negative partial log-likelihood function L_Z with the above ordinal loss ordinal_Loss, the objective loss function can be formulated as a multi-task model:


$$L(\theta) = L_Z(\theta) + \lambda \cdot \text{ordinal\_Loss} \quad (5.5)$$

where λ denotes a multi-task weight coefficient. Numerous existing approaches that learn several tasks simultaneously employ a naive weighted sum of losses, in which the loss weights are uniform or adjusted in a crude, manual manner. However, model performance is extremely sensitive to the choice of weights, and tuning these weight hyper-parameters is costly. Thus, a more convenient approach capable of learning the optimal weights is required. We develop a method that integrates several loss functions and learns their weights adaptively; a sketch of the combined loss follows.
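A hedged PyTorch sketch of this combined objective is given below; the Cox partial likelihood handles ties naively (Breslow-style) and the ordinal term implements our reconstruction of Eq. (5.4) as a hinge over acceptable pairs, so treat it as an interpretation rather than the authors' exact code.

```python
# Sketch of Eq. (5.5): Cox negative partial log-likelihood + ordinal loss.
import torch

def cox_loss(risk, time, event):
    """Breslow-style negative partial log-likelihood (no tie correction)."""
    order = torch.argsort(time, descending=True)     # risk sets grow with index
    risk, event = risk[order], event[order]
    log_risk_set = torch.logcumsumexp(risk, dim=0)   # log-sum over t_j >= t_i
    return -((risk - log_risk_set) * event).sum() / event.sum().clamp(min=1)

def ordinal_loss(risk, time, event):
    """Hinge penalty over acceptable pairs (i uncensored, t_i < t_j)."""
    comp = (event.unsqueeze(1) == 1) & (time.unsqueeze(1) < time.unsqueeze(0))
    diff = risk.unsqueeze(0) - risk.unsqueeze(1)     # tp_j - tp_i per pair
    return torch.relu(diff)[comp].sum() / comp.sum().clamp(min=1)

def multitask_loss(risk, time, event, lam):
    return cox_loss(risk, time, event) + lam * ordinal_loss(risk, time, event)
```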

5.3.2 Adaptive Auxiliary Loss

In this paper, we employ methylation or gene expression features to perform survival analysis for cancer cases. Our primary task is training the survival model, with a corresponding loss L_main, which can be the expected return loss employed to calculate the policy gradient. We employ the Cox negative partial log-likelihood function as the main loss, i.e. L_main = L_Z. To increase data efficiency, besides the primary task, one has access to one or more auxiliary tasks that share some unknown structure with the primary task [22]. Here, the ordinal loss serves as the loss of an auxiliary task, i.e. L_aux = ordinal_Loss. Our goal is to optimize the main loss L_main. However, gradient-based optimization with only the primary task gradient ∇_θ L_main is commonly slow and unstable due to the high variance of the deep network. Thus, the auxiliary task is used to help learn a good feature representation. We combine the main loss with the loss from the auxiliary task as:

$$L(\theta) = L_{main}(\theta) + \lambda L_{aux}(\theta) \quad (5.6)$$

where λ denotes the weight of the auxiliary task. Under the intuition that modifying θ to minimize L_aux will improve L_main if the two tasks are sufficiently related, we propose to modulate the weight λ at each learning iteration t by how useful the auxiliary task is for the primary task given θ_t, the set of all model parameters at training step t. We update the parameters θ_t by taking a gradient step on this combined objective:

$$\theta_{t+1} = \theta_t + \alpha \nabla_{\theta_t} L(\theta_t) \quad (5.7)$$

where α denotes the gradient step size. At each optimization iteration, we want to efficiently approximate the solution to:

$$\underset{\lambda_t}{\operatorname{argmin}} \; L\left(\theta_t + \alpha \nabla_{\theta_t}(L_{main} + \lambda_t L_{aux})\right) \quad (5.8)$$

Tuning the weight λ_t of the auxiliary task directly becomes increasingly computationally intensive as the number of iterations grows. We therefore look for a cheap heuristic to approximate λ_t that is better than keeping λ_t constant and does not require hyper-parameter tuning. Our goal is to find the auxiliary-task weight that makes L_main decrease fastest. Specifically, V_t(λ) is defined as the speed at which the loss of the primary task decreases at time step t:

$$V_t(\lambda) = \frac{d L_{main}(\theta_t)}{dt} \approx L_{main}(\theta_{t+1}) - L_{main}(\theta_t)$$
$$= L_{main}\left(\theta_t + \alpha \nabla_{\theta_t} L(\theta_t)\right) - L_{main}(\theta_t)$$
$$\approx L_{main}(\theta_t) + \alpha \nabla_{\theta_t} L_{main}(\theta_t)^T \nabla_{\theta_t} L(\theta_t) - L_{main}(\theta_t)$$
$$= \alpha \nabla_{\theta_t} L_{main}(\theta_t)^T \nabla_{\theta_t} L(\theta_t) \quad (5.9)$$

We can then simply calculate the gradient to update λ:

$$\nabla_{\lambda_t} V_t(\lambda_t) = \frac{\partial V_t(\lambda_t)}{\partial \lambda_t} = \alpha \nabla_{\theta_t} L_{main}(\theta_t)^T \nabla_{\theta_t} L_{aux}(\theta_t), \qquad \lambda \leftarrow \lambda - \beta \nabla_\lambda V_t(\lambda) \quad (5.10)$$

where β is the gradient step size for the weight update. This update rule is based on the dot product between the gradient of the primary task and the gradient of the auxiliary task. The auxiliary loss adds extra gradient flow during backpropagation, which mitigates the vanishing gradient issue for earlier layers. Intuitively, the approach exploits online experience to determine whether the auxiliary task has been useful in decreasing the primary task loss, and the update rule is derived to maximize the speed at which the primary task loss decreases. A sketch of this update appears below.
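The update in Eq. (5.10) reduces to a dot product between two gradient vectors, as in this hypothetical PyTorch fragment (the function and variable names are ours, not the authors'):

```python
# Sketch of the adaptive weight update from Eq. (5.10), assuming PyTorch.
import torch

def update_lambda(model, loss_main, loss_aux, lam, alpha=1e-3, beta=1e-2):
    params = [p for p in model.parameters() if p.requires_grad]
    g_main = torch.autograd.grad(loss_main, params, retain_graph=True)
    g_aux = torch.autograd.grad(loss_aux, params, retain_graph=True)
    # Dot product of primary-task and auxiliary-task gradients.
    dot = sum((gm * ga).sum() for gm, ga in zip(g_main, g_aux))
    grad_lam = alpha * float(dot)      # approximates grad_lambda V_t(lambda)
    return lam - beta * grad_lam       # lambda <- lambda - beta * grad
```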

5.4 Experimental Results and Discussions

In this section, we assess the performance of the developed approach and carry out experiments on the training set through tenfold cross-validation. Specifically, we first combine mRNA gene expression and DNA methylation features. Subsequently, the Cox proportional hazards model is built on the integrated features in the training set. To demonstrate the effectiveness of the developed method, we compare it with five other machine learning approaches. For survival stratification prediction, the median risk score predicted by the Cox proportional hazards model acts as a threshold for splitting cases into low-risk and high-risk groups.


Lastly, this study tests whether these two groups show significantly different survival outcomes using the Kaplan–Meier estimator and the log-rank test. The survival curves are drawn for the different approaches. We assess the performance of the developed approach and of the comparable methods using the concordance index (C-index). The C-index quantifies the fraction of all comparable pairs of cases whose predicted survival times are ordered correctly:

$$C\text{-index} = \frac{1}{m} \sum_{i=1}^{n} \sum_{j: t_i < t_j} I\left(F(x_i) < F(x_j)\right)$$

where m denotes the number of comparable pairs.
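A plain-Python sketch of this reconstructed formula follows; it ignores censoring for simplicity (our simplification, not part of the original definition).

```python
# C-index sketch: fraction of comparable pairs (t_i < t_j) whose predicted
# survival times F(x) are ordered consistently; censoring is ignored here.
def c_index(pred, time):
    concordant, comparable = 0, 0
    for i in range(len(time)):
        for j in range(len(time)):
            if time[i] < time[j]:          # comparable pair
                comparable += 1
                concordant += pred[i] < pred[j]
    return concordant / comparable

print(c_index([2.0, 5.0, 3.0], [1.0, 4.0, 2.0]))  # 1.0: perfectly concordant
```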