Web Semantics: Cutting Edge and Future Directions in Healthcare [1 ed.] 0128224681, 9780128224687

Web Semantics strengthens the description of web resources to exploit them better and make them more meaningful for both humans and machines.


English Pages 288 [271] Year 2021


Table of contents:
Web Semantics
Copyright
Contents
List of contributors
Preface
Representation
Reasoning
Security
1 Semantic intelligence: An overview
1.1 Overview
1.2 Semantic Intelligence
1.2.1 Publishing and consuming data on the web
1.2.2 Semantic Intelligence technologies applied within enterprises
1.3 About the book
2 Convology: an ontology for conversational agents in digital health
2.1 Introduction
2.2 Background
2.3 The construction of convology
2.3.1 Specification
2.3.2 Knowledge acquisition
2.3.3 Conceptualization
2.3.4 Integration
2.4 Inside convology
2.4.1 Dialog
2.4.2 Actor
2.4.3 ConversationItem
2.4.4 Event
2.4.5 Status
2.5 Availability and reusability
2.6 Convology in action
2.6.1 Other scenarios
2.7 Resource sustainability and maintenance
2.8 Conclusions and future work
References
3 Conversion between semantic data models: the story so far, and the road ahead
3.1 Introduction
3.2 Resource Description Framework as a semantic data model
3.3 Related work
3.4 Conceptual evaluation
3.4.1 Comparison study
3.4.2 Generalized architecture
3.5 Findings
3.6 Concluding remarks
References
4 Semantic interoperability: the future of healthcare
4.1 Introduction
4.1.1 Healthcare interoperability: a brief overview
4.2 Semantic web technologies
4.2.1 Resource description framework
4.2.2 RDF graphs
4.2.3 Vocabularies, RDFS and OWL
4.2.4 SPARQL
4.2.5 Applications of semantic web technology
4.3 Syntactic interoperability
4.3.1 Health level 7 version 2.x
4.3.2 Health level 7 version 3.x
4.3.3 Fast healthcare interoperable resource
4.4 Semantic interoperability
4.4.1 History of clinical coding systems
4.4.2 Difference between clinical terminology systems and clinical classification systems
4.4.3 Semantic interoperability and semantic web technology
4.5 Contribution of semantic web technology to aid healthcare interoperability
4.5.1 Syntactic interoperability and semantic web technology
4.5.2 Semantic interoperability and semantic web technology
4.6 Discussion and future work
4.6.1 Challenges with the adoption of semantic web technology at the semantic interoperability level
4.6.2 Challenges with the adoption of semantic web technology at the syntactic interoperability level
4.7 Conclusion
References
5 A knowledge graph of medical institutions in Korea
5.1 Introduction
5.2 Related work
5.2.1 Formal definition of knowledge base
5.2.2 Public data in Korea
5.3 Medical institutions in Korea
5.4 Knowledge graph of medical institutions
5.4.1 Data collection
5.4.2 Model of administrative district
5.4.3 Model of medical institutions
5.4.4 Graph transformation
5.5 Conclusion
References
6 Resource description framework based semantic knowledge graph for clinical decision support systems
6.1 Introduction
6.2 Knowledge representation using RDF
6.2.1 Knowledge-based systems
6.2.2 Knowledge representation in knowledge-based system
6.2.3 Resource description framework for knowledge representation
6.3 Simple knowledge organization system
6.3.1 Knowledge organization system
6.3.2 Simple knowledge organization system
6.3.3 Simple knowledge organization system core and resource description framework
6.4 Semantic knowledge graph
6.4.1 Knowledge graphs
6.4.2 Semantic knowledge graph
6.4.3 RDF-based semantic knowledge graph
6.5 Semantic knowledge graph for clinical decision support systems
6.5.1 Clinical decision support systems
6.5.2 Semantic knowledge graph for clinical decision support systems
6.5.3 Advantages of RDF-based semantic knowledge graph
6.6 Discussion and future possibilities
6.7 Conclusion
References
7 Probabilistic, syntactic, and semantic reasoning using MEBN, OWL, and PCFG in healthcare
7.1 Introduction
7.2 Multientity Bayesian networks
7.3 Semantic web and uncertainty
7.4 MEBN and ontology web language
7.5 MEBN and probabilistic context-free grammar
7.6 Summary
References
8 The connected electronic health record: a semantic-enabled, flexible, and unified electronic health record
8.1 Introduction
8.2 Motivating scenario: smart health unit
8.3 Literature review
8.3.1 Background
8.3.1.1 Electronic health record-related standards and terminologies
8.3.1.2 Semantic interoperability: internet of things-based ontologies
8.3.2 Related Studies
8.3.2.1 Electronic health records and EHR systems
8.4 Our connected electronic health record system approach
8.4.1 Architecture description
8.4.2 Data processing module
8.4.2.1 Preprocessing data
8.4.2.2 Data transformation
8.4.2.3 Data analysis based on data aggregation process
8.5 Implementation
8.6 Experimental results
8.6.1 Analysis performance of connected electronic health record
8.6.2 Response time of connected electronic health record
8.7 Conclusion and future works
References
9 Ontology-supported rule-based reasoning for emergency management
9.1 Introduction
9.2 Literature review
9.3 System framework
9.3.1 Construction of ontology
9.4 Inference of knowledge
9.4.1 System in action
9.4.1.1 Tools/techniques/languages employed
9.4.2 Sample scenarios
9.5 Conclusion and future work
References
10 Health care cube integrator for health care databases
10.1 Introduction: state-of-the-art health care system
10.2 Research methods and literature findings of research publications
10.2.1 Indian health policies and information technology
10.2.2 Electronic health record availability in India and its privacies challenges
10.2.3 Electronic health records databases/system study
10.2.4 Study of existing health knowledgebases and their infrastructures
10.2.5 Study of existing solution available for health data integration
10.2.6 Health care processes and semantic web technologies
10.2.7 Research objectives
10.3 HCI conceptual framework and designing framework
10.4 Implementation framework and experimental setup
10.5 Result analysis, conclusion, and future enhancement of work
10.5.1 Result analysis
10.5.2 Conclusion
10.5.3 Future enhancement of work
References
11 Smart mental healthcare systems
11.1 Introduction
11.2 Classification of mental healthcare
11.3 Challenges of a healthcare environment
11.3.1 Big data
11.3.2 Heterogeneity
11.3.3 Natural language processing
11.3.4 Knowledge representation
11.3.5 Invasive and continuous monitoring
11.4 Benefits of smart mental healthcare
11.4.1 Personalization
11.4.2 Contextualization
11.4.3 Actionable knowledge
11.4.4 Invasive and continuous monitoring
11.4.5 Early intervention or detection
11.4.6 Privacy and cost of treatment
11.5 Architecture
11.5.1 Semantic annotation
11.5.2 Sentiment analysis
11.5.3 Machine learning
11.6 Conclusion
References
12 A meaning-aware information search and retrieval framework for healthcare
12.1 Introduction
12.2 Related work
12.3 Semantic search and information retrieval in healthcare
12.4 A framework for meaning-aware healthcare information extraction from unstructured text data
12.4.1 Meaning-aware healthcare information discovery from ontologically annotated medical catalog database
12.4.2 Semantic similarity computation
12.4.3 Semantic healthcare information discovery—an illustration
12.5 Future research dimensions
12.6 Conclusion
Key terms and definitions
References
13 Ontology-based intelligent decision support systems: A systematic approach
13.1 Introduction
13.2 Enabling technologies to implement decision support system
13.2.1 IoT-enabled decision support system for data acquisition, transmission, and storage
13.2.1.1 Data acquisition
13.2.1.2 Data transmission and storage
13.2.2 Application of machine learning and deep learning techniques for predictive analysis of patient’s health
13.2.2.1 Identification of diseases
13.2.2.2 Smart electronic health records
13.2.2.3 Behavioral monitoring
13.3 Role of ontology in DSS for knowledge modeling
13.3.1 Issues and challenges
13.3.2 Technology available
13.4 QoS and QoE parameters in decision support systems for healthcare
13.4.1 Why QoS versus QoE is important in such system implementation in healthcare?
13.4.2 Definition of significant quality of service and quality of experience parameters
13.4.2.1 Quality of service metrics parameters
13.4.2.2 QoE metrics
13.5 Conclusion
References
14 Ontology-based decision-making
14.1 Introduction
14.2 Issue-Procedure Ontology
14.3 Issue-Procedure Ontology for Medicine
14.4 Conclusion
References
15 A new method for profile identification using ontology-based semantic similarity
15.1 Introduction
15.2 Proposed method
15.2.1 Weight allocation for keyword
15.2.2 Semantic matching
15.2.2.1 Build paths
15.2.2.2 Semantic similarity
15.2.2.3 Weight computing of the concept
15.2.3 Profile creation
15.3 Conclusion
References
16 Semantic similarity–based descriptive answer evaluation
16.1 Introduction
16.2 Literature survey
16.3 Proposed system
16.3.1 Wu and Palmer: word similarity
16.3.2 Semantic similarity between a pair of sentences
16.3.3 Semantic similarity between words (similarity matrix calculation)
16.4 Algorithm
16.5 Data set
16.6 Results
16.7 Conclusion and discussion
References
17 Classification of genetic mutations using ontologies from clinical documents and deep learning
17.1 Introduction
17.2 Clinical Natural Language Processing
17.3 Clinical Natural Language Processing (Clinical NLP) techniques
17.3.1 Statistical techniques in Clinical Natural Language Processing
17.3.1.1 Bag of words
17.3.1.2 Term frequency-inverse document frequency
17.3.1.3 Rapid automatic keyword extraction
17.3.2 Linguistic techniques in Clinical Natural Language Processing
17.3.2.1 Part of speech tagging
17.3.2.2 Tokenization
17.3.2.3 Dependency graph
17.3.3 Graphical techniques in Clinical Natural Language Processing
17.3.3.1 TextRank
17.3.3.2 Hyperlink-induced topic search
17.3.4 Machine learning techniques in Clinical Natural Language Processing
17.3.4.1 Support vector machine
17.3.4.2 Word2Vec
17.3.5 Deep learning techniques in Clinical Natural Language Processing
17.3.5.1 Convolution neural network
17.3.5.2 Recurrent neural network
17.4 Clinical Natural Language Processing and Semantic Web
17.4.1 Ontology creation from clinical documents
17.4.2 Framework for classification of genetic mutations using ontologies from clinical document
17.5 Case study: Classification of Genetic Mutation using Deep Learning and Clinical Natural Language Processing
17.6 Conclusion
References
18 Security issues for the Semantic Web
18.1 Introduction
18.1.1 Security and cryptography
18.1.1.1 Symmetric key cryptography or secret key cryptography
18.1.1.2 Asymmetric key cryptography or public-key cryptography
18.1.2 Introduction to Semantic Web
18.2 Related work
18.3 Security standards for the Semantic Web
18.3.1 Securing the extensible markup language
18.3.2 Securing the resource description framework
18.3.3 Information interoperability in a secured way
18.3.3.1 Management of trust for the Semantic Web
18.4 Different attacks on the Semantic Web
18.4.1 Importance of transport layer security on the Semantic Web
18.5 Drawbacks of the existing privacy and security protocols in W3C social web standards
18.6 Semantic attackers
18.7 Privacy and Semantic Web
18.8 Directions for future security protocols for the Semantic Web
18.9 Conclusion
References
Index

WEB SEMANTICS

WEB SEMANTICS
Cutting Edge and Future Directions in Healthcare

Edited by

SARIKA JAIN Department of Computer Applications, National Institute of Technology Kurukshetra, Haryana, India

VISHAL JAIN Department of Computer Science and Engineering, School of Engineering and Technology, Sharda University, Greater Noida, Uttar Pradesh, India

VALENTINA EMILIA BALAS Faculty of Engineering, Aurel Vlaicu University of Arad, Romania

Academic Press is an imprint of Elsevier
125 London Wall, London EC2Y 5AS, United Kingdom
525 B Street, Suite 1650, San Diego, CA 92101, United States
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom

Copyright © 2021 Elsevier Inc. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher's permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.

This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices

Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary.

Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress

ISBN: 978-0-12-822468-7

For information on all Academic Press publications visit our website at https://www.elsevier.com/books-and-journals

Publisher: Mara Conner
Acquisitions Editor: Chris Katsaropoulos
Editorial Project Manager: Megan Healy
Production Project Manager: Omer Mukthar
Cover Designer: Mark Rogers
Typeset by MPS Limited, Chennai, India

Contents

List of contributors ix
Preface xi

1. Semantic intelligence: An overview
Sarika Jain
1.1 Overview 1

Section I: Representation

2. Convology: an ontology for conversational agents in digital health
Mauro Dragoni, Giuseppe Rizzo and Matteo A. Senese
2.1 Introduction 7
2.2 Background 9
2.3 The construction of convology 10
2.4 Inside convology 12
2.5 Availability and reusability 16
2.6 Convology in action 17
2.7 Resource sustainability and maintenance 19
2.8 Conclusions and future work 20
References 21

3. Conversion between semantic data models: the story so far, and the road ahead
Shripriya Dubey, Archana Patel and Sarika Jain
3.1 Introduction 23
3.2 Resource Description Framework as a semantic data model 24
3.3 Related work 25
3.4 Conceptual evaluation 27
3.5 Findings 28
3.6 Concluding remarks 29
References 30

4. Semantic interoperability: the future of healthcare
Rashmi Burse, Michela Bertolotto, Dympna O'Sullivan and Gavin McArdle
4.1 Introduction 31
4.2 Semantic web technologies 32
4.3 Syntactic interoperability 37
4.4 Semantic interoperability 40
4.5 Contribution of semantic web technology to aid healthcare interoperability 46
4.6 Discussion and future work 49
4.7 Conclusion 51
References 51

5. A knowledge graph of medical institutions in Korea
Haklae Kim
5.1 Introduction 55
5.2 Related work 56
5.3 Medical institutions in Korea 57
5.4 Knowledge graph of medical institutions 60
5.5 Conclusion 66
References 67

6. Resource description framework based semantic knowledge graph for clinical decision support systems
Ravi Lourdusamy and Xavierlal J. Mattam
6.1 Introduction 69
6.2 Knowledge representation using RDF 71
6.3 Simple knowledge organization system 75
6.4 Semantic knowledge graph 77
6.5 Semantic knowledge graph for clinical decision support systems 81
6.6 Discussion and future possibilities 83
6.7 Conclusion 84
References 84

7. Probabilistic, syntactic, and semantic reasoning using MEBN, OWL, and PCFG in healthcare
Shrinivasan Patnaikuni and Sachin R. Gengaje
7.1 Introduction 87
7.2 Multientity Bayesian networks 89
7.3 Semantic web and uncertainty 90
7.4 MEBN and ontology web language 91
7.5 MEBN and probabilistic context-free grammar 92
7.6 Summary 93
References 93

Section II: Reasoning

8. The connected electronic health record: a semantic-enabled, flexible, and unified electronic health record
Salma Sassi and Richard Chbeir
8.1 Introduction 97
8.2 Motivating scenario: smart health unit 99
8.3 Literature review 100
8.4 Our connected electronic health record system approach 105
8.5 Implementation 110
8.6 Experimental results 111
8.7 Conclusion and future works 113
References 114

9. Ontology-supported rule-based reasoning for emergency management
Sarika Jain, Sonia Mehla and Jan Wagner
9.1 Introduction 117
9.2 Literature review 119
9.3 System framework 120
9.4 Inference of knowledge 122
9.5 Conclusion and future work 127
References 127

10. Health care cube integrator for health care databases
Shivani A Trivedi, Monika Patel and Sikandar Patel
10.1 Introduction: state-of-the-art health care system 129
10.2 Research methods and literature findings of research publications 131
10.3 HCI conceptual framework and designing framework 136
10.4 Implementation framework and experimental setup 140
10.5 Result analysis, conclusion, and future enhancement of work 148
Acknowledgment 149
References 149

11. Smart mental healthcare systems
Sumit Dalal and Sarika Jain
11.1 Introduction 153
11.2 Classification of mental healthcare 154
11.3 Challenges of a healthcare environment 155
11.4 Benefits of smart mental healthcare 158
11.5 Architecture 159
11.6 Conclusion 161
References 162

12. A meaning-aware information search and retrieval framework for healthcare
V.S. Anoop, Nikhil V. Chandran and S. Asharaf
12.1 Introduction 165
12.2 Related work 167
12.3 Semantic search and information retrieval in healthcare 170
12.4 A framework for meaning-aware healthcare information extraction from unstructured text data 170
12.5 Future research dimensions 174
12.6 Conclusion 174
Key terms and definitions 174
References 175

13. Ontology-based intelligent decision support systems: A systematic approach
Ramesh Saha, Sayani Sen, Jayita Saha, Asmita Nandy, Suparna Biswas and Chandreyee Chowdhury
13.1 Introduction 177
13.2 Enabling technologies to implement decision support system 178
13.3 Role of ontology in DSS for knowledge modeling 182
13.4 QoS and QoE parameters in decision support systems for healthcare 187
13.5 Conclusion 190
References 191

14. Ontology-based decision-making
Mark Douglas de Azevedo Jacyntho and Matheus D. Morais
14.1 Introduction 195
14.2 Issue-Procedure Ontology 198
14.3 Issue-Procedure Ontology for Medicine 203
14.4 Conclusion 208
References 208

15. A new method for profile identification using ontology-based semantic similarity
Abdelhadi Daoui, Noreddine Gherabi and Abderrahim Marzouk
15.1 Introduction 211
15.2 Proposed method 212
15.3 Conclusion 218
References 218

16. Semantic similarity-based descriptive answer evaluation
Mohammad Shaharyar Shaukat, Mohammed Tanzeem, Tameem Ahmad and Nesar Ahmad
16.1 Introduction 221
16.2 Literature survey 222
16.3 Proposed system 223
16.4 Algorithm 227
16.5 Data set 227
16.6 Results 228
16.7 Conclusion and discussion 229
Acknowledgments 230
References 230

17. Classification of genetic mutations using ontologies from clinical documents and deep learning
Punam Bedi, Shivani, Neha Gupta, Priti Jagwani and Veenu Bhasin
17.1 Introduction 233
17.2 Clinical Natural Language Processing 234
17.3 Clinical Natural Language Processing (Clinical NLP) techniques 235
17.4 Clinical Natural Language Processing and Semantic Web 242
17.5 Case study: Classification of Genetic Mutation using Deep Learning and Clinical Natural Language Processing 245
17.6 Conclusion 249
References 249

Section III: Security

18. Security issues for the Semantic Web
Prashant Pranav, Sandip Dutta and Soubhik Chakraborty
18.1 Introduction 253
18.2 Related work 258
18.3 Security standards for the Semantic Web 259
18.4 Different attacks on the Semantic Web 262
18.5 Drawbacks of the existing privacy and security protocols in W3C social web standards 263
18.6 Semantic attackers 264
18.7 Privacy and Semantic Web 264
18.8 Directions for future security protocols for the Semantic Web 265
18.9 Conclusion 266
References 266

Index 269

List of contributors

Nesar Ahmad, Department of Computer Engineering, Zakir Husain College of Engineering and Technology, Aligarh Muslim University, Aligarh, India
Tameem Ahmad, Department of Computer Engineering, Zakir Husain College of Engineering and Technology, Aligarh Muslim University, Aligarh, India
V.S. Anoop, Kerala Blockchain Academy, Indian Institute of Information Technology and Management - Kerala (IIITM-K), Thiruvananthapuram, India
S. Asharaf, Indian Institute of Information Technology and Management - Kerala (IIITM-K), Thiruvananthapuram, India
Punam Bedi, Department of Computer Science, University of Delhi, Delhi, India
Michela Bertolotto, School of Computer Science, University College Dublin, Dublin, Ireland
Veenu Bhasin, P.G.D.A.V. College, University of Delhi, Delhi, India
Suparna Biswas, Department of Computer Science & Engineering, Maulana Abul Kalam Azad University of Technology, Kolkata, India
Rashmi Burse, School of Computer Science, University College Dublin, Dublin, Ireland
Soubhik Chakraborty, Department of Mathematics, Birla Institute of Technology, Mesra, Ranchi, India
Nikhil V. Chandran, Data Engineering Lab, Indian Institute of Information Technology and Management - Kerala (IIITM-K), Thiruvananthapuram, India
Richard Chbeir, Univ Pau & Pays Adour, E2S/UPPA, LIUPPA, EA3000, Anglet, France
Chandreyee Chowdhury, Department of Computer Science & Engineering, Jadavpur University, Kolkata, India
Sumit Dalal, National Institute of Technology Kurukshetra, Haryana, India
Abdelhadi Daoui, Department of Mathematics and Computer Science, Hassan 1st University, FST, Settat, Morocco
Mark Douglas de Azevedo Jacyntho, Coordination of Informatics, Fluminense Federal Institute, Campos dos Goytacazes, Rio de Janeiro, Brazil
Mauro Dragoni, Fondazione Bruno Kessler, Trento, Italy
Shripriya Dubey, Department of Computer Applications, National Institute of Technology Kurukshetra, Haryana, India
Sandip Dutta, Department of Computer Science and Engineering, Birla Institute of Technology, Mesra, Ranchi, India
Sachin R. Gengaje, Department of Computer Science and Engineering, Walchand Institute of Technology, Solapur, Maharashtra, India
Noreddine Gherabi, Sultan Moulay Slimane University, ENSAK, LASTI Laboratory, Khouribga, Morocco
Neha Gupta, Department of Computer Science, University of Delhi, Delhi, India
Priti Jagwani, Aryabhatta College, University of Delhi, Delhi, India
Sarika Jain, Department of Computer Applications, National Institute of Technology Kurukshetra, Haryana, India
Haklae Kim, Chung-Ang University, Seoul, South Korea
Ravi Lourdusamy, Sacred Heart College (Autonomous), Tirupattur, India
Abderrahim Marzouk, Department of Mathematics and Computer Science, Hassan 1st University, FST, Settat, Morocco
Xavierlal J. Mattam, Sacred Heart College (Autonomous), Tirupattur, India
Gavin McArdle, School of Computer Science, University College Dublin, Dublin, Ireland
Sonia Mehla, National Institute of Technology Kurukshetra, Haryana, India
Matheus D. Morais, Coordination of Informatics, Fluminense Federal Institute, Campos dos Goytacazes, Rio de Janeiro, Brazil
Asmita Nandy, Department of Computer Science & Engineering, Jadavpur University, Kolkata, India
Dympna O'Sullivan, School of Computer Science, Technological University Dublin, Dublin, Ireland
Archana Patel, Institute of Computer Science, Freie Universität, Berlin, Germany
Monika Patel, S.K. Patel Institute of Management and Computer Studies-MCA, Kadi Sarva Vishwavidyalaya, India
Sikandar Patel, National Forensic Sciences University, Gandhinagar, India
Shrinivasan Patnaikuni, Department of Computer Science and Engineering, Walchand Institute of Technology, Solapur, Maharashtra, India
Prashant Pranav, Department of Computer Science and Engineering, Birla Institute of Technology, Mesra, Ranchi, India
Giuseppe Rizzo, LINKS Foundation, Torino, Italy
Jayita Saha, Department of Artificial Intelligence and Data Science, Koneru Lakshmaiah Education Foundation Deemed to be University, Hyderabad, India
Ramesh Saha, Department of Information Technology, Gauhati University, Guwahati, Assam, India
Salma Sassi, VPNC Lab., FSJEGJ, University of Jendouba, Jendouba, Tunisia
Sayani Sen, Department of Computer Application, Sarojini Naidu College for Women, Kolkata, India
Matteo A. Senese, LINKS Foundation, Torino, Italy
Mohammad Shaharyar Shaukat, Technical University of Munich, Germany
Shivani, Department of Computer Science, University of Delhi, Delhi, India
Mohammed Tanzeem, Adobe, India
Shivani A Trivedi, S.K. Patel Institute of Management and Computer Studies-MCA, Kadi Sarva Vishwavidyalaya, India
Jan Wagner, RheinMain University of Applied Sciences, Germany

Preface

Over the last decade, we have witnessed an increasing use of Web Semantics as a vital and ever-growing field. It incorporates various subject areas contributing to the development of a knowledge-intensive data web. In parallel to the movement from data to knowledge, we are now also experiencing the movement of the web from a document model to a data model, where the main focus is on data as compared to the process. The underlying idea is making the data machine understandable and processable. In light of these trends, the reconciliation of semantics and the Web is of paramount importance for further progress in the area. The 17 chapters in this volume, authored by key scientists in the field, are preceded by an introduction written by one of the volume editors, making a total of 18 chapters. Chapter 1, Introduction, by Sarika Jain provides an overview of technological trends and perspectives in Web Semantics, defines Semantic Intelligence, and discusses the technologies encompassing it in view of their application within enterprises as well as on the web. In all, 76 chapter proposals were submitted for this volume, making a 22% acceptance rate. The chapters have been divided into three sections: Representation, Reasoning, and Security.

• Representation: The semantics have to be encoded with data by virtue of technologies that formally represent metadata. When semantics are embedded in data, it offers significant advantages for reasoning and interoperability.
• Reasoning: When the "Semantic Web" finally happens, machines will be able to talk to machines, materializing the so-called "intelligent agents." The services offered will be useful for the web as well as for the management of knowledge within an organization.
• Security: In this new setting, traditional security measures will not be suitable anymore, and the focus will move to trust and provenance. The semantic security issues need to be addressed by security professionals and semantic technologists.

This book will help instructors and students taking courses on the Semantic Web keep abreast of the cutting edge and future directions of the semantic web, hence providing a synergy between healthcare processes and semantic web technologies. Many books are available in this field with two major problems: either they are very advanced and lack a sufficiently detailed explanation of the approaches, or they are based on a specific theme with limited scope, hence not providing details on crosscutting areas of web semantics. This book covers the research and practical issues and challenges, and Semantic Web applications in specific contexts (in this case, healthcare). The book has a varied audience spanning industry professionals, researchers, and academicians working in the field of Web Semantics. Researchers and academicians will find a comprehensive study of the state of the art and an outlook into research challenges and future perspectives. Industry professionals and software developers will find available tools and technologies to use, algorithms, pseudocodes, and implementation solutions. Administrators will find a comprehensive spectrum of the latest viewpoints in different areas of Web Semantics. Finally, lecturers and students require all of the above, so they will gain an interesting insight into the field; they can benefit in preparing their problem statements and finding ways to tackle them. The book is structured into three sections that group the chapters into three related directions:

Representation

The first section on Representation comprises six chapters that specifically focus on the problem of choosing a data model for representing and storing data for the Web. Chapter 2, Convology: an ontology for conversational agents in digital health, by Dragoni et al. proposes an ontology, namely Convology, aiming to describe conversational scenarios with the scope of providing a tool that, once deployed into a real-world application, eases the management and understanding of the entire dialog workflow between users, physicians, and systems. The authors have integrated Convology into a living lab concerning the adoption of conversational agents for supporting the self-management of patients affected by asthma. Dubey et al. in Chapter 3, Conversion between semantic data models: the story so far, and the road ahead, provide the trends in converting between various semantic data models and review the state of the art of the same. In Chapter 4, Semantic interoperability: the future of healthcare, Burse et al. have beautifully elaborated the syntactic and semantic interoperability issues in healthcare. They review the various healthcare standards that attempt to solve the interoperability problem at a syntactic level and then move on to examine medical ontologies developed to solve the problem at a semantic level. The chapter explains the features of semantic web technology that can be leveraged at each level. A literature survey is carried out to gauge the current contribution of semantic web technologies in this area, along with an analysis of how semantic web technologies can be improved to better suit the health-informatics domain and solve the healthcare interoperability challenge. Haklae Kim in Chapter 5, A knowledge graph of medical institutions in Korea, has proposed a knowledge model for representing medical institutions and their characteristics based on related laws. The author also constructs a knowledge graph that includes all medical institutions in Korea with an aim to enable users to identify appropriate hospitals or other institutions according to their requirements. Chapter 6, Resource description framework based semantic knowledge graph for clinical decision support systems, by Lourdusamy and Mattam advocates the use of semantic knowledge graphs as the representation structure for clinical decision support systems. Patnaikuni and Gengaje in Chapter 7, Probabilistic, syntactic, and semantic reasoning using MEBN, OWL, and PCFG in healthcare, exploit the key concepts and terminologies used for representing and reasoning about uncertainties structurally and semantically, with a case study of the COVID-19 coronavirus. The key technologies are Bayesian networks, Multi-Entity Bayesian Networks, Probabilistic Ontology Web Language, and probabilistic context-free grammars.

Reasoning

At the scale of the WWW, logic-based reasoning is not appropriate and poses numerous challenges. As already stated in different chapters of Section I, RDF provides a machine-processable syntax for the data on the web. Reasoning on the Semantic Web involves deriving facts and relationships that are not explicit in the knowledge base. This section groups 10 contributions based on reasoning within knowledge bases. There is an absence of a reference model for describing health data and their sources and linking these data with their contexts. Chapter 8, The connected electronic health record: a semantic-enabled, flexible, and unified electronic health record, by Sassi and Chbeir addresses this problem and introduces a semantic-enabled, flexible, and unified electronic health record (EHR) for patient monitoring and diagnosis with medical devices. The approach exploits semantic web technologies and the HL7 FHIR standard to provide a semantic connected EHR that facilitates data interoperability, integration, information search and retrieval, and automatic inference and adaptation in real time. Jain et al. in Chapter 9, Ontology-supported rule-based reasoning for emergency management, have proposed an ontology-supported rule-based reasoning approach to automate the process of decision support and recommend actions faster than a human being and at any time. Chapter 10, Health care cube integrator for health care databases, by Trivedi et al. proposes the health care cube integrator as a knowledge base storing health records collected from various healthcare databases; they also propose a processing tool to extract data from assorted databases. Chapter 11, Smart mental healthcare systems, by Dalal and Jain provides an architecture for a smart mental healthcare system along with the challenges and benefits incurred. Chapter 12, A meaning-aware information search and retrieval framework for healthcare, by Anoop et al. discusses a framework for building meaning-aware information extraction from unstructured EHRs. The proposed framework uses medical ontologies, a medical catalog-based terminology extractor, and a semantic reasoner to build the medical knowledge base that is used for enabling a semantic information search and retrieval experience in the healthcare domain. In Chapter 13, Ontology-based intelligent decision support systems: a systematic approach, Saha et al. emphasize several machine learning algorithms and semantic technologies to design and implement an intelligent decision support system for effective healthcare support satisfying quality of service and quality of experience requirements. Jacyntho and Morais in Chapter 14, Ontology-based decision-making, have described the architecture and strengths of knowledge-based decision support systems. They have defined a method for the creation of ontology-based knowledge bases and a corresponding fictitious health care case study with real-world challenges. As data are exploding over the web, Daoui et al. in Chapter 15, A new method for profile identification using ontology-based semantic similarity, aim to treat and cover a new system in the domain of tourism in order to offer users a set of interesting places and tourist sites according to their preferences. The authors focus on the design of a new profile identification method by defining a semantic correspondence between keywords and the concepts of an ontology using the external resource WordNet. Compared to objective-type assessment, descriptive assessment has been found to be more uniform and at a higher level of Bloom's taxonomy. In Chapter 16, Semantic similarity-based descriptive answer evaluation, Shaukat et al. have put in efforts to deal with the problem of automated computer assessment in descriptive examinations. Lastly in this section, Bedi et al. in Chapter 17, Classification of genetic mutations using ontologies from clinical documents and deep learning, have presented a framework for classifying cancerous genetic mutations reported in EHRs. They have utilized clinical NLP, ontologies, and deep learning over the Catalog of Somatic Mutations in Cancer mutation data and Kaggle's cancer-diagnosis dataset.

Security

Though posed as the future of the web, is the semantic web secure? In the semantic web setting, traditional security measures are no longer suitable. This section closes the book with Chapter 18, Security issues for the Semantic Web, by Pranav et al., which presents the security issues in the semantic web. The chapter also suggests ways of potentially aligning the protocols so as to make them more robust for use in semantic web services.

As the above summary shows, this book summarizes the trends and current research advances in web semantics, emphasizing the existing tools and techniques, methodologies, and research solutions.

Sarika Jain (India)
Vishal Jain (India)
Valentina Emilia Balas (Romania)

CHAPTER 1
Semantic intelligence: An overview
Sarika Jain, Department of Computer Applications, National Institute of Technology Kurukshetra, Haryana, India

1.1 Overview

Due to technological trends like IoT, cloud computing, and smart devices, huge volumes of data are generated daily at unprecedented rates. Traditional data techniques and platforms do not prove efficient because of issues concerning responsiveness, flexibility, performance, scalability, accuracy, and more. To manage these huge datasets and to store their archives for longer periods, we need granular access to massively evolving datasets. Addressing this gap has been an important and well-recognized interdisciplinary area of Computer Science. A machine will behave intelligently if the underlying representation scheme exhibits knowledge, which can be achieved by representing semantics. Web Semantics strengthens the description of web resources, exploiting them better and making them more meaningful for both humans and machines. As the semantic web is highly interdisciplinary, it is emerging as a mature field of research that facilitates information integration from variegated sources. The semantic web converts data to meaningful information and is therefore a web of meaningful, linked, and integrated data by virtue of metadata. The current web is composed primarily of unstructured data, such as HTML pages, and search on the current web is based on keywords. Such searches cannot make out the type of information on an HTML page; that is, they cannot extract different pieces of data about a concept from different web pages and then give integrated information about that concept. The semantic web provides such a facility with less human involvement. Just as the web connects documents, the semantic web connects pieces of information. In addition to publishing data on the World Wide Web, the semantic web is being utilized in enterprises for a myriad of use cases. The Artificial Intelligence technologies, the Machine Intelligence technologies, and the semantic web technologies together make up the Semantic Intelligence technologies (SITs). SITs have been found to be the most


important ingredient in building artificially intelligent knowledge-based systems, as they aid machines in integrating and processing resources contextually and intelligently. This book describes the three major compartments of the study of Web Semantics, namely representation, reasoning, and security. It also covers the issues related to the successful deployment of the semantic web. This chapter addresses the key knowledge and information needs of the audience of this book. It provides easily comprehensible information on Web Semantics, including semantics for data and semantics for services. Further, an effort has been made to cover the innovative application areas with which the semantic web goes hand in hand, with a focus on healthcare.

1.2 Semantic Intelligence

Semantic Intelligence refers to filling the semantic gap between the understanding of humans and machines by making a machine look at everything in terms of object-oriented concepts, as a human looks at it. Semantic Intelligence helps us make sense of the most vital resource, that is, data, by making it interpretable and meaningful. The focus is on information as compared to the process. Whatever application the data will be put to, it is to be represented in a manner that is machine-understandable and hence human-usable. All the important relationships (including who, what, when, where, how, and why) in the required data from any heterogeneous data source need to be made explicit.

The primary technology standards of the SITs are RDF (Resource Description Framework) and SPARQL (SPARQL Protocol and RDF Query Language). RDF is the data model/format/serialization used to store data. SPARQL is the query language designed to query, retrieve, and process data stored as RDF across various systems and databases. Both of these technologies are open-ended, making them a natural fit for iterative, flexible, and adaptable software development in a dynamic environment, and hence suitable for a myriad of open-ended problems, majorly involving unstructured information. It is even beneficial to wrap existing relational data stores with SPARQL endpoints to integrate them with any intelligent application. All of this is possible because the semantic web operates under the Open World Assumption: not all facts are anticipated at the outset, and in the absence of some fact, it cannot be assumed false.

Semantics is no more than discovering "relationships between things." These relationships, when discovered and represented explicitly, help manage the data more efficiently by making sense of it. In addition to storing and retrieving information, semantic intelligence provides a flexible model by acting as an enabler for machines to infer new facts and derive new information from existing facts and data. In all such systems with a large amount of unstructured and unpredictable data, SITs prove to be less cost-intensive and maintainable. By virtue of being able to interpret all the data, machines are able to perform sophisticated tasks for mankind. In today's world, SITs serve a very broad range of applications, across multiple domains, within enterprises, and on the web. A full-fledged industry in its own right has emerged in the last 20 years, when these technologies were merely drafts. In addition to publishing and consuming data on the web, SITs are being used in enterprises for various purposes.
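To make these two standards concrete, here is a minimal, self-contained sketch using the open-source rdflib Python library to load a few RDF triples and answer a SPARQL query over them. The ex: vocabulary and the facts are invented for illustration; only the rdflib calls (Graph.parse, Graph.query) are real API.

```python
from rdflib import Graph

# A few illustrative triples in Turtle syntax; the ex: namespace is an
# invented example vocabulary, not a standard one.
TURTLE_DATA = """
@prefix ex: <http://example.org/health#> .

ex:patient42  ex:hasDiagnosis ex:asthma ;
              ex:treatedAt    ex:cityClinic .
ex:cityClinic ex:locatedIn    ex:london .
"""

g = Graph()
g.parse(data=TURTLE_DATA, format="turtle")

# SPARQL matches graph patterns rather than rows in a table, so one
# query can follow relationships across resources.
QUERY = """
PREFIX ex: <http://example.org/health#>
SELECT ?patient ?city WHERE {
    ?patient ex:treatedAt ?clinic .
    ?clinic  ex:locatedIn ?city .
}
"""

for patient, city in g.query(QUERY):
    print(patient, city)  # -> ...patient42 ...london
```

The same query would keep working if the triples were split across several sources, which is part of what makes this pair of standards attractive for the open-ended integration problems described above.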


1.2.1 Publishing and consuming data on the web

Publishing data on the web involves deciding upon the format and the schema to use. Best practices exist to publish, disseminate, use, and perform reasoning on high-quality data over the web. RDF data can be published in different ways, including linked data (DBPedia), SPARQL endpoints, metadata in HTML (SlideShare, LinkedIn, YouTube, Facebook), feeds, GRDDL, and more. Semantically interlinked data is being published on the web in all domains, including e-commerce, social data, and scientific data. People consume this data through search engines and specific applications. By publishing semantic web data about its web pages, an organization ensures that the search results also include related information like reviews, ratings, and pricing for its products. This added information in search results does not increase the ranking of a web page but significantly increases the number of clicks the page can get. Here are some popular domains where data is published and consumed on the semantic web.

• E-commerce: The Schema.org and GoodRelations vocabularies are global schemas for commerce data on the web. They are industry-neutral, syntax-neutral, and valid across different stages of the value chain.
• Health care and life sciences: Healthcare is a novel application domain of the semantic web that is of prime importance to human civilization as a whole. It has been predicted by the government as the next big thing in personal health monitoring. Big pharma companies and various scientific projects have published a significant amount of life sciences and health care data on the web.
• Media and publishing: The BBC, The FT, SpringerNature, and many other media and publishing sector companies are benefitting their customers by providing an ecosystem of connected content with more meaningful navigation paths across the web.
• Social data: A social network is a two-way social structure made up of individuals (persons, products, or anything) and their relationships. Facebook's "social graph" represents connections between people. Social networking data using friend-of-a-friend as vocabulary makes up a significant portion of all data on the web.
• Linked Open Data: A powerful data integration technology, this is the practical side of the semantic web. DBPedia is a very large linked dataset making the content of Wikipedia available to the public as RDF. It incorporates links to various other datasets, such as Geonames, thus allowing applications to exploit the extra and more precise knowledge from other datasets. In this manner, applications can provide a high-quality user experience by integrating data from multiple linked datasets.
• Government data: For the overall development of society, governments around the world have taken initiatives for publishing nonpersonal data on the web, making government services transparent to the public.

1.2.2 Semantic Intelligence technologies applied within enterprises

Enterprise information systems comprise complex, distributed, heterogeneous, and voluminous data sources. Enterprises are leveraging SITs to achieve interoperability and implement solutions and applications. All documents are required to be semantically tagged with the associated metadata, as the sketch below illustrates.
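As a hedged illustration of such tagging, the snippet below builds a tiny Schema.org description with rdflib and serializes it to JSON-LD, the form commonly embedded in a page's script element so that search engines can surface extra details. The product and its IRI are invented; the snippet assumes rdflib 6+, where the JSON-LD serializer is built in.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

SCHEMA = Namespace("https://schema.org/")

g = Graph()
# Hypothetical resource being described; any page or document could be
# tagged the same way with its own metadata.
item = URIRef("http://example.org/products/ehr-suite")
g.add((item, RDF.type, SCHEMA.Product))
g.add((item, SCHEMA.name, Literal("EHR Suite")))
g.add((item, SCHEMA.description, Literal("Semantic electronic health record software")))

# Emit JSON-LD ready to be placed inside a
# <script type="application/ld+json"> element of an HTML page.
print(g.serialize(format="json-ld"))
```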


• Information classification: The knowledge bases used by giants like Facebook, Google, and Amazon today are said to shape and classify data and information in the same manner as the human brain does. Along with data, a knowledge base also contains expert knowledge in the form of rules transforming this data and information into knowledge. Various organizations represent their information by combining the expressivity of ontologies with inference support.
• Content management and situation awareness: Organizations reuse available taxonomic structures, leveraging their expressiveness to enable more scalable approaches to content interoperability.
• Efficient data integration and knowledge discovery: Data is scaling up in size, giving rise to heterogeneous datasets as data silos. Semantic data integration allows the data silos to be represented, stored, and accessed using the same data model, hence all speaking the same universal language, that is, SITs. The value of data explodes when it is linked with other data, providing more flexibility compared to traditional data integration approaches; the sketch after this list gives a minimal example.
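A minimal sketch of that integration idea, with invented silo names and IRIs: two graphs describing the same patient under different identifiers are merged, an owl:sameAs link records their equivalence, and a single SPARQL query spans both. A production system would typically let an OWL reasoner (e.g., the owlrl package) exploit the owl:sameAs semantics instead of following the link by hand, as the query below does.

```python
from rdflib import Graph

# Two hypothetical silos describing the same patient under different IRIs.
CLINICAL = """
@prefix ex: <http://example.org/clinical#> .
ex:pat42 ex:hasDiagnosis "asthma" .
"""
BILLING = """
@prefix ex2: <http://example.org/billing#> .
ex2:p-00042 ex2:insurancePlan "basic" .
"""
LINKS = """
@prefix ex:  <http://example.org/clinical#> .
@prefix ex2: <http://example.org/billing#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
ex:pat42 owl:sameAs ex2:p-00042 .
"""

merged = Graph()
for data in (CLINICAL, BILLING, LINKS):
    merged.parse(data=data, format="turtle")

# The query follows the owl:sameAs link explicitly to join the silos.
results = merged.query("""
PREFIX ex:  <http://example.org/clinical#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
SELECT ?diagnosis ?plan WHERE {
    ?s ex:hasDiagnosis ?diagnosis ;
       owl:sameAs ?t .
    ?t <http://example.org/billing#insurancePlan> ?plan .
}
""")
for diagnosis, plan in results:
    print(diagnosis, plan)  # -> asthma basic
```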

1.3 About the book

This book contains the latest cutting-edge advances and future directions in the field of Web Semantics, addressing both original algorithm development and new applications of the semantic web. It presents comprehensive, up-to-date research employing the semantic web and its health care applications, providing a critical analysis of the relative merits and potential pitfalls of the techniques as well as their future outlook. The book focuses on a core area of growing interest that is not specifically or comprehensively covered by other books. It describes the three major compartments of the study of Web Semantics, namely Representation, Reasoning, and Security, and covers the issues related to the successful deployment of the semantic web. Further, an effort has been made to cover the innovative application areas with which the semantic web goes hand in hand, with a focus on healthcare: every chapter provides a separate section for a health care case study where one is not explicitly covered in the chapter itself. The book will help instructors and students taking semantic web courses keep abreast of the cutting edge and future directions of the semantic web, hence providing a synergy between health care processes and semantic web technologies.


CHAPTER 2
Convology: an ontology for conversational agents in digital health
Mauro Dragoni (Fondazione Bruno Kessler, Trento, Italy), Giuseppe Rizzo and Matteo A. Senese (LINKS Foundation, Torino, Italy)

2.1 Introduction

The conversation paradigm has been overwhelmingly adopted for the realization of conversational agents in the last years. Natural and seamless interactions with automated systems introduce a shift from using well-designed and sometimes complicated interfaces made of buttons and paged procedures to textual or vocal dialogs. Asking questions naturally has many advantages with respect to traditional app interactions. The main one is that the user does not need to know how the specific application works: everyone knows how to communicate, and in this case, the system comes toward the user to make the interaction more natural. This paradigm has been integrated into mobile applications for supporting users from different perspectives and into well-known systems built by big tech players like Google Assistant and Amazon Alexa. These kinds of systems dramatically reduce the users' effort in asking for and communicating information to systems that, by applying natural language understanding (NLU) algorithms, are able to decode the users' actual intentions and to reply properly. However, a deeper analysis of these systems reveals a strong limitation on their usage in complex scenarios. The interactions between users and bots are often limited to a single-turn communication where one of the actors sends an information request (e.g., a question like "How is the weather today in London?" or a command like "Play the We Are The Champions song") and the other actor provides an answer containing the required information or performs the requested action (e.g., "Today the weather in London is cloudy." or the execution of the requested song).



While this is true for most current conversational agents, the one made by Google seems to be more aware of the possibility of multiturn conversation. In fact, in some particular situations, it is capable of carrying a context between one user question and the following ones. An example could be asking "Who is the current US president?" and then "Where does he live?"; in this particular case, the agent resolves the "he" pronoun by carrying the context of the previous step. Anyway, this behavior is not general and is exploited only in some common situations and for a limited number of steps. Evidence of this is the limit of the DialogFlow platform (a rapid prototyping platform for creating conversational agents based on the Google Assistant intelligence) in maintaining context from one step to another (the maximum number of contexts it can carry is 5). While this mechanism may operate among sentences belonging to the same conversation, it does not hold across different conversations: what we noticed is that each conversation is independent from the previous ones. Hence, the agent does not own a story of the entire dialog. Additionally, the assistant does not seem to be conscious of the actual status of the conversation; this makes it impossible for it to be an effective tool for achieving a complex goal (differently from single interactions like "turning on the light").

This situation strongly limits the capability of these systems to be employed in more complex scenarios where it is necessary to address the following challenges: (1) to manage long conversations possibly having a high number of interactions, (2) to keep track of users' status in order to send proper requests or feedback based on the whole context, (3) to exploit background knowledge in order to have at any time all information about the domain in which the conversational agent has been deployed, and (4) to plan dialogs able to dynamically evolve based on the information that has already been acquired and on the long-term goals associated with users. To address these challenges, it is necessary to sustain NLU strategies with knowledge-based solutions able to reason over the information provided by users in order to understand their status at any time and to interact with them properly. Conversational agents integrating this knowledge-based paradigm go one step beyond state-of-the-art systems that limit their interactions with users to a single-turn mode.

In this chapter, we present Convology (CONVersational ontOLOGY), a top-level ontology aiming to model the conversation scenario for supporting the development of conversational knowledge-based systems. Convology defines concepts enabling the description of dialog flows, users' information, dialog and user events, and the real-time statuses of both dialogs and users. Hence, systems integrating Convology are able to manage multiturn conversations. We present the TBox, and we show how it can be instantiated into a real-world scenario.

The chapter is structured as follows. In Section 2.2, we discuss the main types of conversation tools, highlighting how none of them is equipped with facilities for managing multiturn conversations. Then, in Sections 2.3 and 2.4, we present the methodology used for creating Convology and explain the meaning of the concepts defined. Section 2.5 shows how to get and reuse the ontology, whereas Section 2.6 presents an application integrating Convology together with examples of future projects that will integrate it. Section 2.7 discusses the sustainability and maintenance aspects, and, finally, Section 2.8 concludes the chapter.



2.2 Background Conversational agents, in their larger definition, are software agents with which it is possible to carry a conversation. Researchers discussed largely on structuring the terminology around conversational agents. In this chapter, we decide to adhere to Franklin and Graesser (1997) that segments conversational agents according to both learned and indexed content and approaches for understanding and establishing a dialog. The evolution of conversational agents proposed three different software types: generic chit-chat (i.e., tools for maintaining a general conversation with the user), goal-oriented tools that usually rely on a large amount of prebuilt answers (i.e., tools that provide language interfaces for digging into a specific domain), and the recently investigated knowledge-based agents that aim to reason over a semantic representation of a dataset to extend the intent classification capabilities of goal-oriented agents. The first chit-chat tool, named ELIZA (Weizenbaum, 1966), was built in 1966. It was created mainly to demonstrate the superficiality of communications and the illusion to be understood by a system that is simply applying a set of pattern-matching rules and a substitution methodology. ELIZA simulates a psychotherapist and, thanks to the trick of presenting again to the interlocutor some contents that have been previously mentioned, it keeps the conversation without having an understanding of what really is said. At the time when ELIZA came out, some people even attributed human-like feelings to the agent. A lot of other computer programs have been inspired by ELIZA and AIML—markup language for artificial intelligence—has been created to express the rules that drive the conversation. So far, this was an attempt to encode knowledge for handling a full conversation in a set of predefined linguistic rules. Domain-specific tools were designed to allow an individual to search conversationally into a restricted domain, for instance simulating the interaction with a customer service of a given company. A further generalization of this typology was introduced by knowledgebased tools able to index a generic (wider) knowledge base and provides answers pertaining a given topic. These two are the largest utilized types of conversational agents (Ramesh et al., 2017). The understanding of the interactions is usually performed using machine learning, in fact recent approaches have abandoned handcrafted rules utilized in ELIZA toward an automatic learning from a dialog corpus. In other words, the understanding task is related to turning natural language sentences into something that can be understood by a machine: its output is translated into an intent and a set of entities. The response generation can be fully governed by handcrafted rules (e.g., if a set of conditions apply, say that) or decide the template response from a finite set using statistical approaches [using some distance measures like TF-IDF, Word2Vec, Skip-Thoughts (Kiros et al., 2015)]. In this chapter, we focus on the understanding part of the conversation. While machine learning offers statistical support to infer the relationship between sentences and classes, one pillar of these approaches is the knowledge about the classes of these requests. In fact, popular devices such as Amazon Echo and Google Home require, whether configured, to list the intents of the discussion. 
However, those devices hardly cope with a full, multiturn dialog, as the intents are either considered in isolation or contextualized within strict boundaries. Previous research attempts investigated the multiturn aspect with neural networks (Mensio et al., 2018). The conversation was fully understood statistically, that is, through statistical inference of intents in sequence, without proper reasoning about the topics and actors of the conversation. Other research attempts exploited the concept of an ontology for modeling a dialog, stating that a semantic ontology for dialog needs to provide the following: first, a theory of events/situations; second, a theory of abstract entities, including an explication of what propositions and questions are; and third, an account of grounding/clarification (Ginzburg, 2012). An ontology is thus also utilized to order questions so as to maximize coherence (Milward, 2004). Despite the research findings on this theme, and a trajectory that shows a close interaction between statistical inference approaches and ontologies for modeling the entire dialog (Flycht-Eriksson and Jönsson, 2003), a shared ontology is still lacking. In this chapter, we aim to fill this gap by presenting Convology.

2.3 The construction of Convology

The development of Convology arose from the need to provide a metamodel able not only to represent the conversational domain but also to support the development of smart applications enabling access to knowledge bases through a conversational paradigm. Such applications aim to reduce users' effort in obtaining the required information. For this reason, the proposed ontology has been modeled by taking into account how it can be extended for integration into real-world applications. The process for building Convology followed the METHONTOLOGY methodology (Fernández-López et al., 1997). This approach is composed of seven stages: Specification, Knowledge Acquisition, Conceptualization, Integration, Implementation, Evaluation, and Documentation. For brevity, we report only the first five steps, since they are the most relevant ones concerning the design and development of the ontology. The overall process involved four knowledge engineers and two domain experts from the Trentino Healthcare Department. More precisely, three knowledge engineers and one domain expert participated in the ontology modeling stages (hereafter, the modeling team), while the remaining knowledge engineer and domain expert were in charge of evaluating the ontology (hereafter, the evaluators). The role of the domain experts was to supervise the psychological perspective of the ontology concerning the definition of proper concepts and relationships supporting the definition of empathetic dialogs. The choice of METHONTOLOGY was driven by the necessity of adopting a life cycle split into well-defined steps. The development of Convology requires the involvement of the experts in situ; thus a methodology with a clear definition of the tasks to perform was preferred. Other methodologies, like DILIGENT (Pinto et al., 2004) and NeOn (Suárez-Figueroa, 2012), were considered before starting the construction of the Convology ontology. However, the characteristics of such methodologies, like the emphasis on decentralized engineering, did not fit our scenario well.

2.3.1 Specification

The purpose of Convology is twofold. On the one hand, we want to provide a metamodel fully describing the conversation domain from the conversational agent perspective. On the other hand, we want to support the development of smart applications that help users access the content of knowledge bases by means of a conversational paradigm. As mentioned in Section 2.1, Convology supports the modeling of a full dialog between users and systems. From the granularity perspective, Convology is modeled with a low granularity level. As we discuss in Section 2.4, Convology contains only top-level concepts representing the main entities involved in describing a conversation, which can be used for storing information about user-based events to be exploited for reasoning purposes. The rationale behind this choice is to avoid changes to the TBox when Convology is instantiated in a new domain. Thus, when a new application is developed, the experts in charge of defining all the entities involved in the conversation supported by the application will work only on the ABox.

2.3.2 Knowledge acquisition

The acquisition of the knowledge necessary for building Convology was split into two phases: (1) the definition of the TBox and (2) the definition of the ABox. The TBox was modeled by the modeling team, which also has competences in natural language understanding (NLU). The modeling activity started by analyzing the requirements for realizing a classic (i.e., single-turn) conversational agent and by defining which kinds of information are necessary for supporting the multiturn paradigm. At this point, the modeling team defined the set of entities playing an important role during the reasoning process. In particular, three concepts were defined: UserEvent, UserStatus, and DialogStatus. The first defines events of interest associated with users; such events are the basic information used at reasoning time. The second allows modeling the statuses of interest in which a User can be, and it can be activated at reasoning time when a specific set of UserEvent individuals is verified. Finally, the third represents a snapshot of a conversation between a User and an Agent and works as a trigger for the system to perform specific actions. In Section 2.4, we explain each concept and the interactions among them in more detail. In contrast, the knowledge defined within the ABox is acquired through collaborative work with domain experts. Indeed, when Convology is instantiated in a new application, it is necessary to define the relevant information (i.e., questions, answers, intents, etc.) used by the conversational agent for managing dialogs. Such information can be provided only by domain experts. Consider the sample scenario reported in Section 2.6 about the asthma domain. There, pulmonologists were involved to provide all the knowledge necessary for managing a conversation with users in order to collect the information needed to support real-time reasoning about their health status.
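To make the outcome of the TBox phase concrete, the following minimal sketch declares the three reasoning-oriented concepts within the hierarchy described in Section 2.4, using Python and the rdflib library; the namespace URI and prefix are assumptions made for illustration, not the published ones.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

CVG = Namespace("http://w3id.org/convology#")  # assumed namespace URI
g = Graph()
g.bind("cvg", CVG)

# Declare the two relevant top-level concepts and the three
# reasoning-oriented concepts introduced in the knowledge acquisition phase.
for cls in (CVG.Event, CVG.Status, CVG.UserEvent, CVG.UserStatus, CVG.DialogStatus):
    g.add((cls, RDF.type, OWL.Class))

g.add((CVG.UserEvent, RDFS.subClassOf, CVG.Event))      # events of interest for a user
g.add((CVG.UserStatus, RDFS.subClassOf, CVG.Status))    # statuses inferred at reasoning time
g.add((CVG.DialogStatus, RDFS.subClassOf, CVG.Status))  # snapshot of a conversation

print(g.serialize(format="turtle"))
```

Because the TBox stays this small and generic, instantiating Convology in a new domain only means adding individuals, which is exactly the design goal stated in Section 2.3.1.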

2.3.3 Conceptualization

The conceptualization of Convology was split into two steps. The first was covered by the knowledge acquisition stage, where most of the terminology was collected and directly modeled into the ontology. The second step consisted of deciding how to represent, as classes or as individuals, the information we collected from unstructured resources. Then, we modeled the properties needed to support all the requirements. During this stage, we relied on several ontology design patterns (Hitzler et al., 2016). However, in some cases, we renamed some properties at the request of the domain experts. In particular, we exploited the logical patterns Tree and N-Ary Relation, the alignment pattern Class Equivalence, and the content patterns Parameter, Time Interval, Action, and Classification.

2.3.4 Integration

The integration of Convology has two objectives: (1) to align it with a foundational ontology and (2) to link it with the Linked Open Data (LOD) cloud. The first objective was satisfied by aligning the main concepts of Convology with those defined within the DOLCE top-level ontology (Gangemi et al., 2002). The second objective, although not addressed by the TBox of Convology, can be satisfied when Convology is integrated into a specific application and some of the intents are aligned with concepts defined in other ontologies. As an example, if Convology is integrated into a chat-bot supporting people with diet and physical activity, instances of the Intent concept can be aligned with concepts defined within the AGROVOC vocabulary (http://aims.fao.org/vest-registry/vocabularies/agrovoc). Similarly, the integration of Convology, proposed in Section 2.6, into a conversational agent supporting people affected by asthma opens the possibility of aligning instances of the Intent concept with concepts defined in an external medical knowledge base like UMLS (https://www.nlm.nih.gov/research/umls/). This way, individuals defined within the ABox of Convology may work as a bridge between Convology and the LOD cloud.
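A minimal sketch of this kind of LOD linking is shown below, assuming skos:closeMatch as the alignment property (the chapter does not prescribe one); the Intent individual and the AGROVOC identifier are placeholders, not verified URIs.

```python
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF, SKOS

CVG = Namespace("http://w3id.org/convology#")  # assumed namespace URI
g = Graph()
g.bind("cvg", CVG)
g.bind("skos", SKOS)

# A hypothetical Intent individual from a diet-coaching deployment.
food_intent = CVG.ConsumedFoodIntent
g.add((food_intent, RDF.type, CVG.Intent))

# Align it with an external vocabulary concept, bridging to the LOD cloud.
# The AGROVOC URI below is a placeholder identifier for illustration only.
g.add((food_intent, SKOS.closeMatch,
       URIRef("http://aims.fao.org/aos/agrovoc/c_example")))
```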

2.4 Inside Convology

The ontology contains five top-level concepts: Actor, ConversationItem, Dialog, Event, and Status. Among these, the Dialog concept does not subsume any other concept; however, it works as a collector of other concepts for representing a whole dialog instance. Fig. 2.1 shows a general overview of the ontology with the hierarchical organization of the concepts. Below, starting from each top-level concept, we detail each branch of Convology by providing the semantic meaning of the most important entities.

FIGURE 2.1 Overview of Convology.

2.4.1 Dialog

The Dialog concept represents a multiturn interaction between a User and one or more Agents. A new instance of the Dialog concept is created when a user starts a conversation with one of the agents available within a specific application. The hasId datatype property associated with the Dialog instance works as a tracker for all the interactions made during a single conversation between a User and the involved Agent. Furthermore, the value of this property is used at reasoning time for extracting from the knowledge repository only the data related to a single conversation, in order to keep the efficiency of the reasoner suitable for a real-time environment.

2.4.2 Actor

The Actor concept defines the different roles that can take part in a conversation. Within Convology, we foresee two main roles, represented by the concepts Agent and User. Instances of the Agent concept are the conversational agents that interact with users. When Convology is deployed into an application, instances of the Agent concept represent the different agents involved in conversations with the users of that application. Within the same application (e.g., the conversational agent implemented for the asthma scenario described in Section 2.6), Convology will have a different instance of the Agent concept for each User, even if the application is the same. The rationale is that different active conversations may be in different statuses; hence, to favor the efficiency of the reasoning activity, different instances are created in the ontology. Finally, different instances of the Agent concept are associated with different instances of the Dialog concept.

The second concept defined in this branch is User. Instances of the User concept represent the actual users who are dialoguing with the conversational agent. A new instance of the User concept is created when a new user starts a conversation within a specific application (e.g., a new user installs the application for monitoring her asthma conditions). An instance of the User concept can be associated with different instances of the Dialog and Agent concepts. The reason this does not hold for the Agent concept (i.e., an Agent instance can be associated with one and only one instance of User) is that the focus of Convology is to track and support conversations from the user perspective. Thus the ontology maintains a single instance of User for each deployment of Convology, due to the necessity of tracing the whole history of users. For debugging purposes (e.g., to analyze the behavior of the conversational agents in order to evaluate their effectiveness), it is anyway possible to collect all instances of the Agent concept.

2.4.3 ConversationItem

A ConversationItem is an entity that takes part in a conversation and allows representing relevant knowledge for supporting each interaction. Within Convology, we defined four subclasses of the ConversationItem concept: Question, Intent, Feedback, and DialogAction. An individual of type Question represents a possible question that an instance of type Agent can send to a User. Instances of Question are defined by domain experts, together with all the Intent individuals that are associated with each Question through the hasRelevantIntent object property. A Question can also be associated with a specific UserEvent through the hasTriggerQuestion object property. An Intent represents a piece of relevant information, detected within a natural language answer provided by a User, that the NLU module is able to recognize and the reasoner is able to process. Concerning the mention of the NLU module, it is important to clarify that the detection of an Intent within a user's answer requires the integration of an NLU module able to classify the content of users' answers with respect to each Intent associated with the Question sent to the User. Hence, one of the prerequisites for deploying Convology into a real-world system is the availability of a module that maps the content of users' answers to the instances of the Intent concept defined in the ontology. The possible strategies that can be implemented for supporting such a mapping operation are out of the scope of this chapter. An Intent is then associated with a StatusItem through the activates object property: once a specific Intent is recognized, a StatusItem instance is created in the knowledge repository for supporting the inference of the user's status. Unlike a Question, for which the User is expected to perform a new interaction and a set of relevant Intent individuals is associated with it, a Feedback represents a simple sentence that an Agent can send to users and for which it does not expect any reply. Feedback individuals are used for closing a conversation as a result of the reasoning process, or simply for sending single messages to users without requiring any further interaction. Instances of the DialogAction concept describe the next action that an Agent individual has to perform. DialogAction individuals can be defined by domain experts as consequences of the detection of specific intents, or they can be generated as a result of reasoning activities and associated with a DialogStatus instance. Individuals of type DialogAction are associated with a Question or a Feedback individual representing the next message sent to a User. Moreover, a DialogAction might have the waitTime datatype property set, that is, the number of seconds that the system must wait before sending the Question or the Feedback to the User.
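The chain just described can be sketched as ABox assertions, again with rdflib under the assumed namespace: a Question is linked to a relevant Intent, and a DialogAction schedules it as the next message after a delay. The individual names and the property linking the action to its message are hypothetical.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

CVG = Namespace("http://w3id.org/convology#")  # assumed namespace URI
g = Graph()
g.bind("cvg", CVG)

question = CVG.CoughFrequencyQuestion  # hypothetical Question individual
intent = CVG.FrequentCoughIntent       # hypothetical Intent individual
action = CVG.AskCoughFrequency         # hypothetical DialogAction individual

g.add((question, RDF.type, CVG.Question))
g.add((intent, RDF.type, CVG.Intent))
g.add((action, RDF.type, CVG.DialogAction))

# Domain experts associate each Question with the Intents that the NLU
# module may recognize in the user's answer.
g.add((question, CVG.hasRelevantIntent, intent))

# The DialogAction points at the next message to send (the linking property
# name is a placeholder) and delays its dispatch by 30 seconds via waitTime.
g.add((action, CVG.hasNextMessage, question))
g.add((action, CVG.waitTime, Literal(30, datatype=XSD.integer)))
```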


2.4.4 Event

The Event concept describes a single event that can occur during a conversation. Within Convology, we identified three kinds of events: EventQuestion, EventAnswer, and UserEvent. Instances of these concepts enable the storage of information within the knowledge repository, trigger the execution of the reasoning process, and allow the retrieval of information for both analysis and debugging purposes. An EventQuestion represents the fact that a Question has been submitted to an Actor. Here, we do not make distinctions between the actors because, from a general perspective, the model also supports scenarios where questions are sent from a User to an Agent. Instances of this concept are associated with knowledge identifying the timing (the hasTimestamp datatype property), the Actor instance that sent the question (the sentQuestion object property), and the Actor instance that received the question (the receivedQuestion object property). In contrast, the EventAnswer concept represents an Answer provided by an Actor. The timing information associated with individuals of this concept is defined through the hasTimestamp datatype property, whereas the sender and the receiver are defined by the sentAnswer and receivedAnswer object properties. The last concept of this branch is UserEvent. A UserEvent represents an Event associated with a specific user. The purpose of having a specific UserEvent concept, instead of inferring UserEvent objects from the EventQuestion and EventAnswer individuals, is that a UserEvent does not refer only to questions and answers but also to other events that can occur. Examples are the presence of one or more Intent individuals within a user's answer (this kind of knowledge cannot be associated with EventAnswer individuals because Agent instances do not provide an Intent within an answer) or information about users' actions that are not directly connected with the conversation (the storage of this information is important when it is of interest to analyze users' behaviors). The relationship between a UserEvent and an Intent is instantiated through the hasRecognizedIntent object property. Finally, instances of UserEvent can trigger the activation of a specific UserStatus (explained below) as a result of the reasoning process. Triggering events are instantiated through the hasTriggerQuestion and triggers object properties: the former puts a UserEvent in relationship with a Question, while the latter associates a UserEvent with a specific UserStatus. Both relationships are defined as a result of the reasoning process.
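The sketch below instantiates an EventAnswer and the corresponding UserEvent using the properties named in this section; the individual names, the timestamp, and the direction of the sender/receiver properties (event to actor) are assumptions for illustration.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

CVG = Namespace("http://w3id.org/convology#")  # assumed namespace URI
g = Graph()
g.bind("cvg", CVG)

# An answer sent by a user to an agent.
answer_event = CVG.AnswerEvent42  # hypothetical individual
g.add((answer_event, RDF.type, CVG.EventAnswer))
g.add((answer_event, CVG.hasTimestamp,
       Literal("2020-05-04T10:15:00", datatype=XSD.dateTime)))
g.add((answer_event, CVG.sentAnswer, CVG.User1))       # sender (direction assumed)
g.add((answer_event, CVG.receivedAnswer, CVG.Agent1))  # receiver (direction assumed)

# The Intent recognized by the NLU module is attached to a UserEvent,
# not to the EventAnswer itself, as explained above.
user_event = CVG.UserEvent42  # hypothetical individual
g.add((user_event, RDF.type, CVG.UserEvent))
g.add((user_event, CVG.hasRecognizedIntent, CVG.FrequentCoughIntent))
```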

2.4.5 Status

The last branch of Convology has the Status concept as its top-level entity. This branch contains concepts describing the possible statuses of users, through the UserStatus and StatusItem concepts, and of dialogs, through the DialogStatus concept. Instances of the UserStatus concept are defined by the domain experts, and they represent the relevant statuses of a User that the conversational agent should discover during the execution of a Dialog. Consider the asthma scenario described in Section 2.6, where the aim of the conversational agent is to understand the health status of the user. Within this application, the domain experts defined four UserStatus individuals based on the gravity of the symptoms recognized during the conversation. A UserStatus is associated with a set of UserEvent individuals that, in turn, are associated with Intent individuals.


This path describes the list of Intent individuals enabling the classification of a User with respect to a specific UserStatus. This operation is performed by a SPARQL-based reasoner. A UserStatus individual is associated with a set of StatusItem individuals representing the atomic conditions under which a UserStatus can be activated. Generally, not all StatusItem individuals have to be activated for inferring, in turn, a UserStatus. Different strategies can be applied at reasoning time, but they are out of the scope of this chapter. The third subsumed concept is DialogStatus. A DialogStatus individual provides a snapshot of a specific Dialog at a certain time. The entities associated with a DialogStatus individual are the Dialog which the status refers to, the identifiers of the User and of the one or more Agents involved in the conversation, and the DialogAction that has to be performed as the next step. Individuals of type DialogStatus are created at reasoning time after the processing of the Intent individuals recognized by the system.

2.5 Availability and reusability

Convology is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/), and it can be downloaded from the Convology website (http://w3id.org/convology). The rationale behind the CC BY-NC-SA 4.0 license is that the Trentino Healthcare Department, which funds the project in which Convology has been developed, was not in favor of releasing this ontology for business purposes; hence, they required the adoption of this type of license for releasing the ontology. Convology can be downloaded in two different modalities: (1) the conceptual model only, where the user can download a light version of the ontology that does not contain any individuals, or (2) the full package, where the ontology is populated with all the individuals we have already modeled for the asthma domain. Convology is constantly updated due to the project activities using the ontology as a core component. The ontology is also available as a web service; detailed instructions are provided on the ontology website. Briefly, the service exposes a set of informative methods enabling access to a JSON representation of the individuals included in the ontology. The reusability of Convology can be seen from two main perspectives. First, Convology describes a metamodel that can be instantiated by conversational agents in different domains. This opens the possibility of building an ecosystem of knowledge resources describing conversational interactions within many scenarios. Second, Convology enables the construction of innovative smart applications combining both natural language processing and knowledge management capabilities, as presented in Section 2.6. Such applications represent innovative solutions within the conversational agents field.


2.6 Convology in action

As introduced, a real practical scenario based on Convology was the development of PuffBot, a multiturn goal-oriented conversational agent supporting patients affected by asthma. The current version of PuffBot supports interactions in Italian, but we are in the process of extending it to both English and Chinese. In Fig. 2.2, we provide a sample conversation in Italian between PuffBot and a user. PuffBot is equipped with an NLU module able to classify the intents contained within natural language text provided by users. The current list of intents available in PuffBot is relatively short (almost 40); they were defined with the collaboration of the domain experts of the Trentino Healthcare Department. Within this list, there are also 12 intents referring to the OnBoarding part, consisting of a set of Question individuals submitted for building a preliminary user profile (e.g., the name, the city where the user lives, sports practiced, etc.) that is stored in the knowledge repository and used as contextual information at reasoning time. The main aim of PuffBot is to perform a real-time inference of the UserStatus in order to monitor users' health conditions and to suggest the most effective action to take for resolving undesired situations. During the design phase, we decided to create a hierarchy of intents, where each single intent belongs to a set of related intents. For instance, we defined different types of intents related to cough (e.g., cough frequency, last episode) and other intents related to recently performed medical examinations or the breathing situation.

FIGURE 2.2 This figure illustrates an example of an entire conversation with PuffBot. The two screenshots on the left show the OnBoarding phase, where we delineate the user profile. The third one shows the real conversation scenario, where we want to infer the UserStatus through a series of questions. The last message contains the overall summary with the advice made by the reasoner.


To handle the different steps of conversations, we exploited the possibility of creating several instances of Dialog, each one with its own DialogStatus identified by Convology with a unique identifier. The conversation can be triggered both by the user and by the agent. When the agent receives a trigger from the outside (e.g., a humidity change in the air was detected), it can ask specific questions to the user in order to monitor his status. Alternatively, the user can start the conversation by saying something, thereby triggering a UserEvent that has to be related to one of the defined intents. Each time PuffBot recognizes a relevant intent (i.e., an intent modeled within the knowledge base), it triggers the reasoner, which is in charge of inferring the current user's status and generating the next DialogAction to take. For instance, a possible DialogAction can be a further question needed for understanding the UserStatus with higher accuracy. Once the application classifies the UserStatus with a certain accuracy (the strategies implemented for classifying users within different statuses are out of the scope of this chapter), the reasoner triggers the dispatch of an advice message to the user containing a summary of the information that has been acquired and inferred through the use of Convology. Generally, this advice is an instance of the Feedback concept.

Fig. 2.3 presents an exemplification of how the reasoning process works. On the top-left part of the picture, we report a piece of the conversation between the user and PuffBot. Red circles highlight relevant messages provided by the user that are transformed into UserEvent individuals (i.e., the blue blocks in Fig. 2.3). At this point, the NLU module is invoked to analyze the natural language text provided by the user, and it returns the set of detected Intent individuals. For each Intent, the hasRelevantIntent object property is instantiated (i.e., the green arrows in Fig. 2.3) in order to associate each UserEvent individual with the related Intent (i.e., the white block in Fig. 2.3). The right part of Fig. 2.3 shows three instances of the UserStatus concept, namely LowRisk, MediumRisk, and HighRisk. These individuals are defined by the domain experts, and they represent the risk level of a User having a strong asthma event in the short term. Each status is associated with several symptoms that are instances of the StatusItem concept. Within the knowledge base, the relationships between an Intent and a StatusItem are defined through the activates object property (i.e., the red arrows). Hence, the detection of specific Intent individuals triggers the activation of specific StatusItem individuals. At this point, the SPARQL-based reasoner starts and tries to infer the most probable status the user is in; in the case of an undecided classification, it generates the proper individuals for triggering the continuation of the conversation (i.e., DialogAction individuals).
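As a rough illustration of how such a SPARQL-based inference could look, the query below ranks UserStatus candidates by the number of activated StatusItem individuals; the cvg: prefix, the hasStatusItem property, the ABox file name, and the counting strategy are assumptions, since the actual classification strategies are out of the scope of the chapter.

```python
from rdflib import Graph

g = Graph()
g.parse("convology_abox.ttl", format="turtle")  # hypothetical ABox export

# Rank candidate UserStatus individuals by the number of their StatusItems
# activated by intents recognized during this dialog.
query = """
PREFIX cvg: <http://w3id.org/convology#>
SELECT ?status (COUNT(DISTINCT ?item) AS ?activatedItems)
WHERE {
  ?event  a cvg:UserEvent ;
          cvg:hasRecognizedIntent ?intent .
  ?intent cvg:activates ?item .
  ?status a cvg:UserStatus ;
          cvg:hasStatusItem ?item .
}
GROUP BY ?status
ORDER BY DESC(?activatedItems)
"""
for row in g.query(query):
    print(row.status, row.activatedItems)
```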

2.6.1 Other scenarios

Besides the PuffBot application, Convology is going to be deployed in more complex scenarios. Below, we mention two of them, both related to the healthcare domain; indeed, as explained in Section 2.7, the sustainability of Convology is currently strictly connected with activities jointly carried out with the Trentino Healthcare Department. The first concerns the promotion of healthy lifestyles. Here, a conversational agent is used for acquiring information about consumed food and performed physical activities by means of natural language chats with users. With respect to the PuffBot application, the number of possible Intent and UserStatus individuals dramatically increases due to the high number of relevant entities that the system has to recognize (i.e., one Intent for each recipe and physical activity). The second scenario relates to supporting users affected by diabetes in the self-management of their disease. One of the most common issues in self-managing a chronic disease is given by psychological barriers preventing users from performing self-monitoring actions (e.g., measuring the glycemia value). Convology will be deployed into an application used for discovering which barriers affect each user. With respect to the first scenario and to the PuffBot application, the main challenge that will be addressed by the domain experts is the definition of all relevant Intent individuals associated with each barrier that has to be detected. This modeling task will require a strong interaction between psychologists and linguists in order to identify all natural language expressions that can be linked with each barrier.

FIGURE 2.3 Exemplification of the reasoning process.

2.7 Resource sustainability and maintenance

As mentioned in the previous sections, the presented ontology is the result of collaborative work between several experts. While, on the one hand, this collaboration led to the development of an effective and useful ontology, on the other hand, the sustainability and maintenance of the produced artifact represent a criticality.

Concerning sustainability, this ontology has been developed in the context of the PuffBot project. The goal of this research project is to provide the first prototype of a conversational agent relying on the use of a knowledge base in order to support a multistep interaction with users. This project, recently started within FBK, is part of the "Trentino Salute 4.0" framework promoted by Trentino's local government with the aim of providing smart applications (e.g., intelligent chat-bots) to citizens for supporting them from different perspectives (e.g., monitoring of chronic diseases, promoting healthy lifestyles, etc.). One of the goals of this framework is to promote the integration of artificial intelligence solutions into digital health platforms, with the long-term goal of improving the quality of life of citizens. The presented ontology is part of the core technologies used in this framework. The overall sustainability plan for the continuous update and expansion of the Convology ontology is granted by this framework and by the projects mentioned in Section 2.6.

The maintenance aspect is managed by the infrastructure available within FBK, from both the hardware and software perspectives. In particular, we enable remote collaboration between experts thanks to the use of the MoKi tool (Dragoni et al., 2014) (details about the tool are out of the scope of this chapter). Here, it is important only to remark that this tool supports the collaborative editing of ontologies by providing different views based on the kind of expert (domain expert, language expert, ontology engineer, etc.) who has to carry out changes to the ontology. The canonical citation for Convology is "Dragoni M., Rizzo G., Senese M.A., Convology: an Ontology For Conversational Agents (2019). http://w3id.org/convology".

2.8 Conclusions and future work

In this chapter, we presented Convology: a top-level ontology for representing conversational scenarios, with the aim of supporting the building of conversational agents able to provide effective interactions with users. The knowledge modeled within Convology derives from the analysis of knowledge engineers with competences in NLU, and it has been designed to provide a metamodel able to ease the development of smart applications. We described the process we followed to build the ontology and which information we included. Then, we presented how the ontology can be utilized, and we introduced the projects and use cases that currently integrate and use Convology. Future research activities will focus on the integration of our model within the projects mentioned in Section 2.7, with the aim of verifying the correctness and completeness of Convology and further improving the model. Furthermore, we intend to analyze whether the Convology TBox can also be opened to domain experts in order to provide a more flexible tool for describing specific domains. Finally, we aim to integrate Convology within mindfulness applications, which, from the conversational perspective, are very complex to manage and would provide a stressful test-bed for the proposed model.



References

Dragoni, M., Bosca, A., Casu, M., Rexha, A., 2014. Modeling, managing, exposing, and linking ontologies with a wiki-based tool. In: LREC, pp. 1668–1675.
Fernández-López, M., Gómez-Pérez, A., Juristo, N., 1997. METHONTOLOGY: from ontological art towards ontological engineering. In: Proc. Symposium on Ontological Engineering of AAAI.
Flycht-Eriksson, A., Jönsson, A., 2003. Some empirical findings on dialogue management and domain ontologies in dialogue systems—implications from an evaluation of BirdQuest. In: Proceedings of the Fourth SIGdial Workshop on Discourse and Dialogue, pp. 158–167. https://www.aclweb.org/anthology/W03-2113.
Franklin, S., Graesser, A., 1997. Is it an agent, or just a program?: a taxonomy for autonomous agents. In: Müller, J.P., Wooldridge, M.J., Jennings, N.R. (Eds.), Intelligent Agents III: Agent Theories, Architectures, and Languages. Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 21–35.
Gangemi, A., Guarino, N., Masolo, C., Oltramari, A., Schneider, L., 2002. Sweetening ontologies with DOLCE. In: Proceedings of the 13th International Conference on Knowledge Engineering and Knowledge Management: Ontologies and the Semantic Web. Springer-Verlag, pp. 166–181. http://dl.acm.org/citation.cfm?id=645362.650863.
Ginzburg, J., 2012. A semantic ontology for dialogue. The Interactive Stance. Oxford University Press, Oxford.
Hitzler, P., Gangemi, A., Janowicz, K., Krisnadhi, A., Presutti, V. (Eds.), 2016. Ontology Engineering with Ontology Design Patterns—Foundations and Applications. Studies on the Semantic Web, Vol. 35. IOS Press.
Kiros, R., Zhu, Y., Salakhutdinov, R., Zemel, R.S., Torralba, A., Urtasun, R., et al., 2015. Skip-thought vectors. CoRR abs/1506.06726. http://arxiv.org/abs/1506.06726.
Mensio, M., Rizzo, G., Morisio, M., 2018. Multi-turn QA: a RNN contextual approach to intent classification for goal-oriented systems. In: Companion Proceedings of The Web Conference 2018, WWW '18. International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland, pp. 1075–1080. Available from: https://doi.org/10.1145/3184558.3191539.
Milward, D., 2004. Ontologies and the structure of dialogue. In: Proceedings of the 8th Workshop on the Semantics and Pragmatics of Dialogue (Catalog), pp. 69–77.
Pinto, H.S., Staab, S., Tempich, C., 2004. DILIGENT: towards a fine-grained methodology for distributed, loosely-controlled and evolving engineering of ontologies. In: de Mántaras, R.L., Saitta, L. (Eds.), Proceedings of the 16th European Conference on Artificial Intelligence, ECAI'2004, including Prestigious Applicants of Intelligent Systems, PAIS 2004, Valencia, Spain, August 22–27, 2004. IOS Press, pp. 393–397.
Ramesh, K., Ravishankaran, S., Joshi, A., Chandrasekaran, K., 2017. A survey of design techniques for conversational agents. In: Kaushik, S., Gupta, D., Kharb, L., Chahal, D. (Eds.), Information, Communication and Computing Technology. Springer Singapore, Singapore, pp. 336–350.
Suárez-Figueroa, M.C., 2012. NeOn Methodology for Building Ontology Networks: Specification, Scheduling and Reuse (PhD thesis). Technical University of Madrid. http://d-nb.info/1029370028.
Weizenbaum, J., 1966. ELIZA—a computer program for the study of natural language communication between man and machine. Commun. ACM 9, 36–45. Available from: https://doi.org/10.1145/365153.365168.


3 Conversion between semantic data models: the story so far, and the road ahead

Shripriya Dubey(1), Archana Patel(2) and Sarika Jain(1)

(1) Department of Computer Applications, National Institute of Technology Kurukshetra, Haryana, India
(2) Institute of Computer Science, Freie Universität, Berlin, Germany

3.1 Introduction

From senior citizens to young toddlers, as people become more and more tech-savvy, the world has seen a sprouting growth in the usage of interactive multimedia such as videos and photos for the exchange and sharing of information. As a result, the metadata related to this information has also increased in volume. Metadata is defined as the data which describes the data, in short, the data about data (Jain and Patel, 2020). The growing volume of data needs to be organized in such a way that it occupies the least amount of space in data storage. As semantics means "the meaning of something," web semantics is a term used to make the meaning of web data explicit to machines. Thus, in addition to data organization for efficient storage, data access and the interpretation of data by machines are also a dire need for web semantics to prosper. The automation of tasks performed by a user on the web is one of the most challenging milestones to be covered by the World Wide Web. For example, the task of searching for a document on the web cannot rely solely on "word matching," that is, matching the set of keywords entered by the user against the documents and texts already present on the web. It may be the case that the entered keywords are not dominantly present in the document which the user is searching for. Hence a lot of manual effort must be applied after the matching to make the search more efficient and effective (Patel et al., 2018). Thus, to deploy automated algorithms that perform such useful tasks, the underlying technology must provide more precisely described data which would make the web machine understandable rather than merely machine readable. By describing the metadata more precisely, we can enhance the learning capabilities of machines, resulting in knowledgeable machines.

Use of XML (eXtensible Markup Language) is highly popular when it comes to defining metadata and sharing web content. XML and RDF (Resource Description Framework) both address the problem of heterogeneity, but XML is primarily a serialization format, whereas RDF is considered a data model. XML does not provide a universally unique identifier, that is, the identifiers added to an XML document are unique for that document only and not globally. RDF, in contrast, uses an ontology as a schema and has globally unique Uniform Resource Identifiers (URIs). RDF is also more flexible than XML. This greatly enhances the interoperability among ontologies (which are knowledge representations of the concepts within a domain and the relationships between them) (Patel and Jain, 2020; Lassila), and hence across the web. Data in RDF is structured in the form of triples, as opposed to XML, which stores data in a tree format; thus data organization is also much better for the enormous amount of data present on the web. There are many more advantages of using RDF for the modeling of data, and thus the conversion of XML data into the RDF format is gaining popularity. A number of approaches have been introduced for transforming XML data into RDF. This work presents a study of the various approaches proposed so far by researchers and developers.

Section 3.2 describes the importance of RDF for Linked Data and semantic computing and why it is necessary in the first place. Section 3.3 presents the related work through a literature review of journal articles and converter approaches. Section 3.4 presents a conceptual evaluation of the converters, comprising a comparison table based on various parameters and the general architecture followed in a converter workflow. Section 3.5 discusses the findings deduced from a rigorous study of these approaches; certain questions raised while reading this literature are also discussed. Finally, Section 3.6 concludes the study.

3.2 Resource Description Framework as a semantic data model

Why use RDF when XML is doing a fine job? To start with, XML is a markup language that defines particular standards for annotating the data to be carried, making it both human and machine readable, whereas RDF is a framework that provides a data model which organizes and annotates the data elements and provides a standard for their relationships with each other and with similar real-world entities. So, XML is useful when users want to query the document itself, whereas RDF is useful when data is to be queried while considering its meaning. Hence RDF makes the interpretation of the data much better, facilitating machine understandability. The fact that RDF specifies a globally unique identifier for its data elements facilitates interoperability among data present on the web worldwide. When using XML as the underlying technology, metadata is formatted according to an XML schema, which consists of specific tags to annotate the data, along with plain text which describes the tags in the schema and the grammatical meaning of the document. However, this does not make the data understandable, and interoperability suffers: each XML schema may have a different set of rules for its metadata, and different sets of documents may use similar tags with different meanings in their respective XML schemas. When using RDF, an RDF schema such as a Web Ontology Language (OWL) ontology is made for modeling the data, representing not only the data but also the relationships among those data. As XML is used to format the metadata in most web documents, the creation of ontologies has to be done on the basis of the existing set of rules for the existing metadata. To make ontologies from an XML schema, a key requirement is that the data present in the XML has related domain knowledge associated with it, which is what ontologies work upon to link the World Wide Web. Thus it is recommended to transform XML documents into RDF instances in order to establish OWL ontologies from them. In this chapter, various approaches put forward for this transformation are discussed.
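To make the contrast concrete, the following sketch holds the same fact first in an XML tree and then as an RDF statement, using Python with xml.etree and rdflib; the subject URI is invented for illustration.

```python
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF

# In the XML version, the meaning lives only in the tag names and the
# nesting of this particular document.
xml_doc = "<patient><name>Alice</name></patient>"
name = ET.fromstring(xml_doc).find("name").text

# In RDF, the subject and the predicate are globally unique URIs, so the
# statement keeps its meaning outside this document.
g = Graph()
alice = URIRef("http://example.org/patient/alice")  # invented subject URI
g.add((alice, FOAF.name, Literal(name)))            # subject-predicate-object
print(g.serialize(format="turtle"))
```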

3.3 Related work

As RDF is in the format of triples, it describes relationships between entities as well as their meaning in the form of statements consisting of a subject, a predicate, and an object, that is, a resource, a property, and a property value (Hardesty, 2016). Many approaches have been proposed for converting XML metadata into RDF metadata.

One of the earliest known converters for RDF formats on the Semantic Web is Triplr (Bischof et al., 2007), announced in 2007. Among the various RDF syntax libraries is Raptor, a part of the Redland librdf package written in C; Triplr is based on Raptor. Guessing the format of the supplied input data was its key feature at the time it was designed, and this feature proves very useful as a functionality of online converters. The Triplr service is offered as a raw REST service. It does not, however, take HTML-based input, which might help users while composing REST URIs.

Van Deursen et al. (2008) devised a conversion that makes use of an OWL ontology. The XML is first represented in an OWL ontology, and the results to be obtained are the RDF instances of this OWL ontology. They specify a mapper document, which is nothing but a mapping link between the XML document and the ontology; this linking document is itself an XML document. Their approach was exemplified by applying it to the DIG35 metadata specification, which describes metadata about digital images. They state that XML metadata focuses on the structure of the document rather than its semantics. So, what better tool to focus on the semantics than a semantic web technology like an ontology? For this, XML has to be converted into RDF instances.

Hardesty (2016) described the transition from XML to RDF as a consideration for stepping toward linked data and the semantic web. As stated in RDF Primer 1.0, "RDF directly represents only binary relationships" (Manola and Miller, 2004). The metadata in XML is described by encoding values in their respective elements and attributes. On the contrary, RDF forms statements for a value, which comprise direct references to that value: a reference to the thing or value being described, a reference to the descriptor that describes it, and a reference to the value at that descriptor's reference.

Battle (2006) presented an approach, called Gloze, to transform XML to RDF and back again; it works under the Jena framework. The Gloze approach models XML content in RDF by mapping the XML elements and attributes into RDF. It maps XML into RDF in such a way that the RDF can be mapped back to XML, which makes it a nonlossy approach. However, in this process, the sequencing given implicitly in the tree structure of the XML might be lost. To describe the manner in which XML is mapped into RDF and back into XML, the Gloze approach takes the XML schema as its basis. Unlike procedural approaches such as XSLT, the Gloze approach offers the benefit that the XML schema used as the basis is neutral with respect to the direction in which the mapping is to be done.

Stolz et al. (2013) presented RDF Translator, which makes conversion between multiple data formats possible, such as RDF/XML, RDFa, Microdata, N-Triples, and RDF/JSON. Their proposal focuses on the technical facet of assisting the burgeoning Semantic Web applications with the capability of syntax transformation, as well as on the collaborative aspects of the development process. In the days when the Semantic Web had only started blossoming, the prime serialization for RDF was the XML-based one (as picturized in the Semantic Web stack: http://www.w3.org/2000/Talks/1206-xml2k-tbl/slide10-0.html). The eminence that syntax used to have has now changed, and it is complemented by syntaxes like N-Triples, Notation 3 (N3), which embraces Turtle and N-Triples, RDF in attributes (RDFa), and JSON. RDFa has come out to be the most popular choice when semantic content has to be published on a web page (Stolz et al., 2013). The fact that there are so many options among these syntaxes makes it burdensome for developers of Semantic Web applications, as they now have to pay attention to the various variants of these different syntaxes. It also leaves the interoperability of semantic web tools vulnerable and limited. If a Semantic Web developer, say, is not well acquainted with RDFa, then they would not want to spend their time and resources on getting familiarized with RDF embedded in HTML using RDFa; rather, they would prefer to use the syntax they are well acquainted with. For example, it is not easy to support parsing of RDFa belonging to other pages if the web pages of a web site rely on a JavaScript library. Hence the need for a comprehensive online converter arises. What makes the solution effective and multipurpose is an amalgamation of features which work together: the bidirectional conversion of various RDF data formats, syntax highlighting for the supported serialization formats, sharing functionality that can be linked in order to improve collaboration, and a web interface with a clean and straightforward design that complies with the latest web technologies.

Bischof et al. (2012) presented a new language called XSPARQL, which is a combination of XQuery and SPARQL. Although the XQuery and SPARQL languages were designed for two different data models, the authors show that by merging XQuery and SPARQL together, as in XSPARQL, the purpose of bringing XML and RDF closer can be accomplished. Mapping in either direction (i.e., XML to RDF and vice versa) is helped by the precise, concise, and intuitive solutions provided by XSPARQL. The fact that SPARQL does not handle XML data means that transformations of such kinds cannot be accomplished by SPARQL alone, and serializing RDF graphs using RDF/XML seems the only way by which RDF data can be queried using XQuery.


3.4 Conceptual evaluation

This section presents a comparative analysis of the different converters based on parameters such as input/output format, programming language, and last release date. We also draw a generalized architecture of the converter.

3.4.1 Comparison study

Table 3.1 shows a comparison of the various converter tools presented so far, whether available online or not. The year shows when each was proposed, whether through an article, a journal, or a final programmatic product made available online. It is to be noted that, as the years progressed, the functionality of conversion between multiformat input/output was added to converters, making them more effective tools for the semantic web. Thus, year after year, converter tools kept making progress as far as data formats are concerned.

TABLE 3.1 Comparison between converters and their significant publishing year.

| Available converter | Input format | Output format | Year | Programming language | Last release date | Online availability |
|---|---|---|---|---|---|---|
| Battle (2006) | XML/RDF | RDF/XML | 2014 | None | - | No |
| Stolz et al. (2013) | XML/RDF-JSON/N-Triples/Microdata | XML/RDF-JSON/N-Triples/Microdata | 2013 | Python | 2013 | Yes |
| Bischof et al. (2012) | XML | RDF | 2012 | XSPARQL | 2012 | Yes |
| Van Deursen et al. (2008) | XML schema | RDF instances | 2008 | None | 2008 | No |
| Catasta et al. (2019) | RDF/XML, Turtle, Notation3, RDFa, Microformats, HTML5, JSON-LD, CSV | RDF | - | Java | 2019 | Yes |
| Kellogg (2011) | RDFa, JSON-LD, RDF/XML, N3, Microdata, tabular, TriX, Turtle, normalise, RJ, TriG, N-Quads | RDFa, JSON-LD, N3, tabular, TriX, Turtle, RDF/XML, normalise, RJ, TriG, N-Quads, N-Triples, vocabulary | - | Ruby | - | Yes |
| García et al. | RDF/XML, Turtle/N3 | RDFa | - | Not given | - | Yes |

3.4.2 Generalized architecture

The diagram shown in Fig. 3.1 represents the generalized architectural flow of a converter. The converter tool takes as input the XML document, the mapping document, and an OWL ontology (acting as a vocabulary for building the ontology corresponding to the XML). The data is extracted from this XML schema in the form of instances.


FIGURE 3.1 Generalized architectural flow of the converter. RDF, Resource Description Framework; OWL, Web Ontology Language.

The metadata extracted from the XML schema is used to build its ontology; that is, an ontology is developed according to the given XML schema and the data inside it. To build this ontology, a mapping document is used as input. The mapping document is in XML format and provides the link between the XML and the OWL ontology; thus each entity in the XML can be mapped into the OWL ontology by figuring out which category it falls into. The XML schema can be mapped to more than one ontology to figure out the entities and their categories. To do this, a set of rules provided in the mapping document has to be followed. The mapping document consists of elements such as import statements, vocabularies, and identifiers, and it states all the rules for mapping between the XML and the ontology. The OWL ontology which is to be imported into the resulting RDF instances is specified by an import element, through constructs such as owl:imports. The RDF instances are then extracted from the ontology thus formed, from the imported OWL ontology, and from the XML schema, and hence a corresponding RDF file is produced. The process of translating XML to RDF is also called lifting, the reason being that data in RDF is abstracted at a higher level than in XML, where data is semistructured; the opposite conversion is thus called lowering (Bischof et al., 2012).
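The sketch below illustrates this lifting flow in miniature, standing in for no specific tool from Table 3.1: a small Python dictionary plays the role of the mapping document, and rdflib produces RDF instances typed against an invented ontology vocabulary; all URIs are placeholders.

```python
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

ONT = Namespace("http://example.org/ontology#")  # invented ontology vocabulary
DATA = Namespace("http://example.org/data/")     # invented instance namespace

# Mapping document analogue: XML element name -> (ontology class, property).
MAPPING = {"patient": (ONT.Patient, ONT.hasName)}

xml_doc = """<records>
  <patient id="p1"><name>Alice</name></patient>
  <patient id="p2"><name>Bob</name></patient>
</records>"""

g = Graph()
g.bind("ont", ONT)
for elem in ET.fromstring(xml_doc):
    cls, name_prop = MAPPING[elem.tag]
    subject = DATA[elem.get("id")]   # mint a URI for the RDF instance
    g.add((subject, RDF.type, cls))  # type it against the ontology
    g.add((subject, name_prop, Literal(elem.findtext("name"))))

print(g.serialize(format="turtle"))  # the resulting RDF file
```

Running the opposite direction, serializing such triples back into the original XML layout, is the lowering step mentioned above.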

3.5 Findings

Various approaches have been put forward; some of them are discussed in Section 3.3. Several intriguing questions arise from the literature on converters, such as what led to such a number of different approaches for converting between data formats. This chapter discusses the following questions, which could be thought about while trying to understand converters:

Why are the available converters not suitable, and why has there been a rise in the need for new converters? It can be observed that as the number of internet users rises, the variety of users also rises, and thus there is variety in the problems and use cases of internet users. Different types of users, from a software employee or a schoolteacher to defense personnel, use the internet for their own purposes. As the variety of users increases, it becomes necessary to change the features of a converter. For the RDF Translator described in Stolz et al. (2013), bidirectional conversion between data formats, syntax highlighting, link sharing, an interactive web user interface, and compliance with the latest web technologies are some of the features which the authors describe in their paper and which make their product stand out.

What functionalities must, or could, be supported by a converter tool? The basic functionality of a converter is to transform a data format into one which is more machine understandable. After studying the various approaches, it becomes clear that a converter can offer much more than this basic function. It can provide, as a functionality, the testing and checking of annotations which may be encoded in less convenient or hard-to-parse formats, such as RDFa in HTML translated to N3. Developers may work with a more human-friendly format at first; after the data has been modeled, it can be converted into the target format, for example, taking N3 as the modeling structure and then publishing in RDF/XML so that it is easier for applications to interoperate with each other. There are a few popular structured data formats, so a converter can comprise conversions among these formats, including Microdata. Microdata is very popular these days and is being used by search engines such as Google, Microsoft, Yandex, and Yahoo!, as it provides an alternative syntax for embedding structured data in HTML. A converter covering these formats should be able to meet most of the needs of developers. The service should be user-friendly and available online for free, which makes it more accessible to users. The user interface can also include functionality such as keyboard shortcuts and hassle-free copying and pasting. Applying a REST API to the web service makes it easier to integrate different data formats from heterogeneous sources effortlessly.

3.6 Concluding remarks

The structural logic for structured data is provided by XML in its hierarchical definition of data comprising elements and attributes. RDF's main focus, on the other hand, is the logic of the data itself: it declares data resources which are related to each other by properties. All of these resources and properties are given a unique key, a Uniform Resource Identifier (URI); as the term makes clear, they are uniquely identified by single reference points, unlike in XML, where each property is provided with its description encased and encoded within the document. The fact that RDF has an XML syntax has given birth to an honest confusion that RDF itself is XML or can be expressed as XML. As stated in Lassila's work regarding the specification of RDF from the World Wide Web Consortium (W3C), "RDF encourages the view of 'metadata being data' by using XML (eXtensible Markup Language) as its encoding syntax" (Lassila). Hence, it is clear that even though RDF has a way of expressing its resources, which are related to each other by properties, using XML, RDF cannot be called an XML schema itself. RDF uses an XML syntax, which is also, confusingly, sometimes called RDF, hence the name RDF/XML. The literature has provided approaches and tools for the conversion of heterogeneous data formats into RDF, which helps move the semantic web in the direction of being interoperable and being able not just to read but also to understand the data. Since many techniques and approaches have been proposed, a converter must do more than the basic functionality of converting into RDF. By providing a mixture of features, like bidirectional conversions, a converter can be made to really appeal to the user. As such tools are provided online, the libraries they depend upon must be updated regularly to keep up with current standards. Providing error feedback and status information may also prove to be a helpful feature which upcoming approaches could incorporate.

References

Battle, S., 2006. Gloze: XML to RDF and back again. In: Jena User Conference. http://jena.hpl.hp.com/juc2006/proceedings.html.
Beckett, D., Broekstra, J., 2013. SPARQL Query Results XML Format (Second Edition). W3C Recommendation. https://www.w3.org/TR/rdf-sparql-XMLres/.
Bischof, S., et al., 2007. Triplr. http://triplr.org/.
Bischof, S., Decker, S., Krennwallner, T., Lopes, N., Polleres, A., 2012. Mapping between RDF and XML with XSPARQL. J. Data Semant. 1 (3), 147-185.
Catasta, M., et al., 2019. Introduction to Apache Any23. http://any23.apache.org/.
García, R., Hepp, M., Radinger, A. RDF2RDFa converter. http://www.ebusiness-unibw.org/tools/rdf2rdfa/.
Hardesty, J.L., 2016. Transitioning from XML to RDF: considerations for an effective move towards Linked Data and the Semantic Web. Inf. Technol. Librar. 35 (1), 51-64.
Jain, S., Patel, A., 2020. Situation-aware decision-support during man-made emergencies. In: Proc. of ICETIT 2019. Springer, Cham, pp. 532-542.
Kellogg, G., 2011. RDF.rb. http://rdf.greggkellogg.net/distiller?command=serialize.
Lassila, O., 1997. Introduction to RDF Metadata. https://www.w3.org/TR/NOTE-rdf-simple-intro-971113.html.
Manola, F., Miller, E., 2004. RDF Primer. https://www.w3.org/TR/rdf-primer/#structuredproperties.
Patel, A., Jain, S., 2020. A novel approach to discover ontology alignment. Recent Adv. Comput. Sci. Commun. 13 (1).
Patel, A., Jain, S., Shandilya, S.K., 2018. Data of Semantic Web as unit of knowledge. J. Web Eng. 17 (8), 647-674.
Stolz, A., Rodriguez-Castro, B., Hepp, M., 2013. RDF translator: a RESTful multi-format data converter for the semantic web. arXiv preprint arXiv:1312.4704.
Van Deursen, D., Poppe, C., Martens, G., Mannens, E., Van de Walle, R., 2008. XML to RDF conversion: a generic approach. In: Proc. 2008 International Conference on Automated Solutions for Cross Media Content and Multi-channel Distribution. IEEE, pp. 138-144.


4 Semantic interoperability: the future of healthcare

Rashmi Burse1, Michela Bertolotto1, Dympna O'Sullivan2 and Gavin McArdle1

1School of Computer Science, University College Dublin, Dublin, Ireland; 2School of Computer Science, Technological University Dublin, Dublin, Ireland

4.1 Introduction

Easy access, exchange, and integration of healthcare information have proved to be a major challenge since the early 1980s and still remain an unsolved issue. Past attempts have addressed this problem with bespoke solutions, but a generic solution still eludes the healthcare community. This chapter gives an overview of this problem at different levels. The chapter reviews the various healthcare standards developed in an attempt to solve the interoperability problem at a syntactic level and then proceeds to examine medical ontologies developed to solve the problem at a semantic level. The limitations of state-of-the-art technologies are explored along with an analysis of how semantic web technologies could be applied in the health informatics domain to assist the exchange and integration of healthcare information. The chapter is organized as follows: The remainder of Section 4.1 details the levels of healthcare interoperability that need to be preserved for the exchange of healthcare information to take place without interruption. Section 4.2 provides an introduction to semantic web technologies including the SPARQL Protocol and RDF Query Language (SPARQL), the Resource Description Framework (RDF), RDF graphs, and vocabularies like the Resource Description Framework Schema (RDFS) and the Web Ontology Language (OWL). Section 4.3 focuses on syntactic interoperability and discusses previous healthcare standards, their shortcomings, and how the introduction of semantic web technologies in the latest healthcare standard, Fast Healthcare Interoperability Resources (FHIR), has the potential to solve the interoperability challenge. Section 4.4 focuses on semantic interoperability and provides a brief history of its evolution from read code systems to current polyhierarchical medical ontologies, and the difference between clinical coding systems and clinical terminology systems.



Section 4.4 also discusses several clinical terminology systems available in the market and introduces the Systematized NOmenclature of MEDicine-Clinical Terms (SNOMED-CT), the world's most comprehensive clinical terminology system. The section also explains the features of semantic web technology that can be leveraged to improve semantic interoperability. Section 4.5 reviews the literature to gauge the current contribution of semantic web technologies to the improvement of healthcare interoperability at both the syntactic and semantic levels. Based on this analysis and review, Section 4.6 describes some innovative future directions to enhance healthcare interoperability using semantic web technologies, and finally, Section 4.7 concludes the chapter.

4.1.1 Healthcare interoperability: a brief overview

Healthcare interoperability is the capability of health information systems to expose, make accessible, and seamlessly communicate information locked in heterogeneous systems. Better interoperability between systems can facilitate well-informed decisions and improve the delivery of healthcare services (HIMSS, 2019). Interoperability is complex and, to ensure effective communication, it needs to be maintained at several levels:

• Physical interoperability: deals with the physical connections required for the transmission of data from one point (the source of information) to another (the receiver of information). Since the advent of the Internet and the effortless connection of hundreds of computer systems, the challenge of physical interoperability is considerably reduced. This chapter therefore focuses on the interoperability problem at the syntactic and semantic levels.
• Syntactic interoperability: deals with the structure, syntax, and packaging of healthcare data. Various healthcare standards have emerged in the past few decades that define fixed message structures, data types, and formats to ensure uniformity in the storage and retrieval of healthcare data. Health level 7 (HL7) is a leading not-for-profit organization involved in the development of the most widely used healthcare standards to facilitate the smooth exchange of electronic healthcare information between heterogeneous and distributed systems. Section 4.3 of this chapter discusses syntactic interoperability, various HL7 standards, and the advantages of applying semantic web technology in this area.
• Semantic interoperability: deals with the semantics, that is, the meaning of clinical terms used by healthcare professionals. After the aforementioned levels have accomplished the transfer of data from point A (the sender) to point B (the receiver), semantic interoperability ensures that the receiver correctly interprets and understands what the sender wants to convey. This is achieved by adopting clinical terminology systems that provide standard codes to replace heterogeneous synonymous clinical terms for symptoms, diagnoses, and procedures. Section 4.4 discusses the challenges of providing robust semantic interoperability and the contribution of semantic web technology in this area.

4.2 Semantic web technologies

This section gives an introduction to semantic web technologies including SPARQL, RDF, RDF graphs, and vocabularies like OWL and RDFS. The section also reviews a wide range of domains that have leveraged semantic web technologies.


4.2.1 Resource Description Framework

The original web of documents was extended to build the semantic web. The primary goal of this augmentation was to link web content in a machine-comprehensible format. The elementary unit of the semantic web is RDF (RDF, 2014). RDF is a data model developed as an alternative to the relational model to store and retrieve dynamic web resources effectively (Ducharme, 2013). While the basic unit of a relational database management system (RDBMS) model is a tuple, the basic unit of the RDF data model is a triple. A subject, an object, and a predicate together constitute a triple. A record identifier in an RDBMS model transforms into a subject in RDF, a column in RDBMS converts to a predicate in RDF, and the value in an RDBMS cell transforms to an object in RDF. A uniform resource identifier (URI) is used to locate web resources, and RDF links such web resources to each other using triples. The subject and predicate of an RDF triple must be web resources represented using URIs, whereas an object can be a web resource represented using a URI or a literal. Fig. 4.1 illustrates the conversion of a column (attribute) of an RDBMS tuple to an RDF triple. A major advantage of RDF over the traditional RDBMS data model is that it facilitates exposure, integration, merging, and sharing of structured or semistructured data across multiple applications even if the underlying schemas differ (RDF, 2014). For example, while it is a tedious task to merge information stored in tables with different schemas of different sizes, say (m × n) and (p × q), when these tables are converted into sets of triples, the complexity of merging this information is reduced: the structure of the storage building blocks (the triples: subject, predicate, and object) is independent of the amount of information stored, making the data merging process smooth and efficient.

FIGURE 4.1 Conversion of an attribute from RDBMS tuple to RDF triple format. RDBMS, relational database management system; RDF, Resource Description Framework.
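The row-to-triple mapping just described can be sketched in a few lines with the Python rdflib library; the namespace, record, and column names below are hypothetical examples.

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/hospital/")

g = Graph()
patient = EX["patient/Alex"]                               # record identifier -> subject
g.add((patient, EX.diagnosis, Literal("otitis media")))    # column -> predicate, cell -> object
g.add((patient, EX.age, Literal(34)))

for s, p, o in g:
    print(s, p, o)
```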

4.2.2 RDF graphs

RDF graphs are formed when information represented by multiple triples is grouped together. Fig. 4.2 illustrates an elementary example of an RDF graph built by grouping and linking data from other triples to the triple illustrated in Fig. 4.1. Blank nodes are used in RDF graphs to improve the organization of data. A blank node does not have any identity of its own, and its only purpose is to group meaningful data together. For example, Jack's address information, like street, postal code, and city, could have been scattered among his other details like name, contact information, and profession, but a better way to represent this information is to group the nodes, as they collectively represent his complete address. This organization is achieved by introducing a blank node.

FIGURE 4.2 An example of an RDF graph. RDF, Resource Description Framework.

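The blank-node grouping described above can be sketched as follows, again with rdflib and hypothetical data.

```python
from rdflib import BNode, Graph, Literal, Namespace

EX = Namespace("http://example.org/")

g = Graph()
address = BNode()                                      # anonymous grouping node
g.add((EX.Jack, EX.address, address))
g.add((address, EX.street, Literal("10 Main Street")))
g.add((address, EX.city, Literal("Dublin")))
g.add((address, EX.postalCode, Literal("D04 XY12")))

print(g.serialize(format="turtle"))
```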

4.2.3 Vocabularies, RDFS and OWL

RDF represents relationships among resources in the form of triples. However, linking resources using the triple format does not by itself add semantics to the data. For a machine to correctly interpret any information, semantics have to be added to this raw RDF triple data. Semantics are added by representing subjects, objects, and predicates as instances of well-defined structures, called classes, together with guidelines on how these classes can be linked with each other. For example, consider the triple from Fig. 4.1 where Alex is diagnosed with otitis media. For a machine to interpret the meaning of this triple, the classes Patient, Disorder, and Diagnosis have to be defined. Using these semantics, the machine can then interpret that "Alex" is a patient (an instance of the class Patient), "otitis media" is a disorder (an instance of the class Disorder), and Diagnosis represents the fact that the patient has a particular disorder. Vocabularies are created to add semantics to the otherwise raw data represented by RDF. The addition of semantics with the help of vocabularies is what makes the web meaningful. Vocabularies define classes on the basis of common properties and provide rules to relate these classes to each other. RDFS supplies a set of standard vocabularies that can be reused. Additionally, RDFS allows the creation of new customized vocabularies that can be integrated with the existing standard vocabularies to better represent the semantics of individual data. Some standard vocabularies that specialize in particular areas are listed below:


• Friend of a friend (FOAF) (FOAF, 2014) specializes in representing personal relationships. It is useful in social network and social media analysis;
• Dublin Core (DCMI, 2019) is a bibliographic vocabulary that represents relationships like the author/creator or title of a book;
• Geonames (GeoNames, 2006) specializes in adding semantics to geospatial data;
• The data cube vocabulary (W3C, 2009) provides a way to meaningfully link statistical, multidimensional data;
• Simple Knowledge Organization System (W3C, 2009) represents concept hierarchies and mappings;
• RDFS (W3C, 2009) supports the creation of classes for RDF resources and relationships among these classes.

Among the above-mentioned examples, RDFS is the most popular vocabulary, and the following paragraphs discuss it in further detail. In particular, RDFS (W3C, 2009) supports the creation of classes for RDF resources and relationships among these classes. For example, you can define a class "Person" by using rdfs:Class and then define a class "Patient" by using rdfs:subClassOf, which introduces an inheritance (parent-child) relationship among the classes; the machine can then infer that all patients are persons. Relationships among the classes are defined using properties in RDFS. To define an rdf:property, two important elements have to be specified, its range (rdfs:range) and domain (rdfs:domain), which constrain the values of its object and subject, respectively. For example, an rdf:property "Diagnose" should have an rdfs:domain of class "Doctor" and an rdfs:range of class "Disorder," by which a machine understands that an instance of class "Doctor" can "Diagnose" an instance of class "Disorder." Defining such classes and relationships provides meaning to the data. However, the expressive power of RDFS is limited, so OWL (W3C, 2009) was created to define more complex structures. RDFS is often used in conjunction with ontology languages like OWL, which build on top of it to define structures that cannot be expressed using RDFS alone. Using OWL, you can infer additional information and deduce relationships that are not explicitly defined. For example, if there is a relationship that states:

1. John is Maria's spouse, then using owl:SymmetricProperty on the property "spouse," you can automatically infer that Maria is John's spouse too, even if that relationship is not explicitly defined.
2. Alex is Jack's patient, then using owl:inverseOf on the properties "patient" and "doctor," you can deduce that Jack is Alex's doctor, even when the relationship is not explicitly defined.

Apart from this, OWL also allows you to describe data in terms of set operations on already existing classes. Consider the following examples:

1. The class "Father" can be defined as the owl:intersectionOf the classes "Parent" and "Male."
2. "Patients with diabetes and blood pressure" can be defined as an intersection of the classes "patients with diabetes" and "patients with blood pressure."

The ability of OWL to describe complex classes and intelligently infer relationships can be used in a variety of fields where complex interrelated data need to be analyzed.
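The class and property definitions discussed in this section can themselves be written down as triples. The sketch below uses rdflib with hypothetical example classes; note that rdflib only stores these axioms, and a separate OWL reasoner would be needed to actually draw the inferences.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/")

g = Graph()
g.add((EX.Person, RDF.type, RDFS.Class))
g.add((EX.Patient, RDF.type, RDFS.Class))
g.add((EX.Patient, RDFS.subClassOf, EX.Person))    # every patient is a person

g.add((EX.Diagnose, RDF.type, RDF.Property))
g.add((EX.Diagnose, RDFS.domain, EX.Doctor))       # subjects must be doctors
g.add((EX.Diagnose, RDFS.range, EX.Disorder))      # objects must be disorders

# John spouse Maria; a reasoner would infer Maria spouse John.
g.add((EX.spouse, RDF.type, OWL.SymmetricProperty))
g.add((EX.John, EX.spouse, EX.Maria))
```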


4.2.4 SPARQL

The SPARQL Protocol and RDF Query Language (pronounced "sparkle") (W3C, 2009) is the World Wide Web Consortium's (W3C) standard query language to construct, modify, and retrieve data stored in RDF format, that is, RDF triples or RDF graphs. SPARQL is mostly used to query data stored in RDF format, and the "Protocol" part of SPARQL is only employed when writing queries that need to be passed back and forth between different machines. SPARQL queries work by mining the RDF triples for properties mentioned in the query. Fig. 4.3 presents a basic SPARQL query to find all patients diagnosed with "otitis media" and its corresponding structured query language (SQL) query. SQL (pronounced "sequel") is a query language used to manage data stored in a relational database system. SPARQL allows the use of prefixes to make queries more readable; the prefix vcard is used to replace the URI <http://www.w3.org/2006/vcard/ns#>. Both SQL and SPARQL queries perform functions like retrieving, modifying, ordering, aggregating, grouping, and joining data. In spite of these commonalities, a major difference is that SPARQL queries are executed on RDF triples while SQL queries operate on data stored in tables.

FIGURE 4.3 An example of a SPARQL query and its corresponding SQL query. SPARQL, SPARQL Protocol and RDF Query Language.
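A runnable version of the kind of SPARQL query shown in Fig. 4.3 is sketched below using rdflib; the predicate and patient data are illustrative, not taken from the figure.

```python
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:Alex ex:diagnosedWith "otitis media" .
ex:Jack ex:diagnosedWith "fractured femur" .
""", format="turtle")

query = """
PREFIX ex: <http://example.org/>
SELECT ?patient
WHERE { ?patient ex:diagnosedWith "otitis media" . }
"""

for row in g.query(query):
    print(row.patient)    # -> http://example.org/Alex
```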

4.2.5 Applications of semantic web technology

The underlying principle of semantic web technology, consolidating data from heterogeneous sources and meaningfully linking it together, can be generalized and applied in a variety of fields. For example, Dadkhah et al. (2020) explored the potential of semantic web technology in software testing. Lampropoulos et al. (2020) proposed a method to improve augmented reality by leveraging knowledge graphs and semantic web technology. Viktorović et al. (2019) proposed a method to improve automated vehicular navigation with the assistance of semantic web technology. Louge et al. (2019) discussed the applications and suitability of semantic web technology in the domain of astrophysics. Moussallem et al. (2018) explained the potential of semantic web technology to overcome the obstacle of syntactic and lexical ambiguity in machine translation. Drury et al. (2019) examined the potential of semantic web technology to make unstructured agricultural data more meaningful and motivate further research in this domain. The W3C has published a list of 13 semantic web use cases that also includes 35 case studies (Case Studies, 2005). While this section has presented several domains that find semantic web technologies advantageous, this chapter mainly examines applications of semantic web technology in the health informatics domain. The next section assesses the contribution of semantic web technology to the improvement of syntactic interoperability in healthcare. It discusses the shortcomings of previous HL7 standards and how compliance with semantic web technology makes the latest HL7 standard, FHIR, more beneficial.

4.3 Syntactic interoperability

As discussed in Section 4.1.1, syntactic interoperability deals with the structure, packaging, and format of healthcare data. Various healthcare standards have been developed to ensure standardized data structures, data types, and formats for easy storage and exchange of healthcare information. HL7 is a leading organization that creates standards that aid in the hassle-free exchange of electronic health information. Since 1987, HL7 has continually developed standards to resolve the interoperability problem. This section discusses the major HL7 standard versions, the problems each standard was able to successfully address, and its shortcomings. The section also analyzes how compliance with semantic web technology makes the latest HL7 standard, FHIR, a promising solution to the syntactic interoperability problem.

4.3.1 Health level 7 version 2.x

In the early 1980s, people realized that it was difficult to exchange electronic healthcare data because every hospital independently developed its own applications without observing any collaborative guidelines. Building interfaces between every pair of systems to exchange information was a costly affair, which motivated the creation of health level 7 version 2.x (HL7-v2). HL7-v2 is a messaging standard developed to facilitate communication of electronic healthcare information among heterogeneous systems. The first version of the v2.x series was developed in 1987, and since then versions up to 2.8 have been released. The use of HL7-v2 is extensive, with more than 35 countries having adopted the standard and greater than 95% coverage in healthcare organizations in the United States (Health Level 7, 1987). HL7-v2 provides a message format for exchanging healthcare data between heterogeneous interfaces. Information from different systems can be loaded into an HL7-v2 message, which then acts as a communication protocol between the two systems. HL7-v2 messages can be classified into different types based on the information they carry. Fig. 4.4 provides an example of an ADT message used to exchange a patient's "Admission Discharge and Transfer" information. The sample message shown in Fig. 4.4 is an ADT^A01 message, which in particular represents a patient admit or visit.

FIGURE 4.4 Sample HL7-v2 ADT message. HL7-v2, Health level 7 version 2.x.


HL7-v2 messages consist of segments, elements, components, and subcomponents. A segment is identified by three capital letters; for example, MSH contains the message header information, EVN contains details about the event that caused the admission, PID contains patient identification information, and NK1 represents the next-of-kin details. Each segment consists of elements, components, and subcomponents that are separated by | (pipe), ^ (caret), and & (ampersand), respectively (a minimal parsing sketch follows at the end of this subsection). These messages can be used to exchange information between different organizations or different departments within the same organization, even when the underlying structure or schema used to store this information is heterogeneous. Although HL7-v2 solved the problem of communication, it had some limitations. In particular, it lacked a standard application data model, which meant the successful use of HL7-v2 largely depended on the information gathering capacity of the display of an application (Shaver, 2006). HL7-v2 was not a plug-and-play technology and required site customizations. It had inconsistencies within the standard, as the formal techniques applied to model HL7-v2 data elements were insufficient. For example, the number of events in an ADT and an ORM (Orders) message did not match. ORM is an HL7-v2 message used to communicate information about an order. While ADT had many event types, like ADT^A01 for patient admit, ADT^A02 for patient transfer, and ADT^A08 for patient information update, ORM had only one event type, ORM^O01, for all types of transactions. Such inconsistencies led to confusion among its users. It also lacked well-defined user roles, which led to variations in vendor implementations. To summarize, HL7-v2 promoted a message-oriented solution for exchange and provided excessive flexibility, which resulted in semantic inconsistencies across different versions of the same standard. These limitations motivated the development of health level 7 version 3 (HL7-v3).
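The delimiter structure just described can be illustrated with a short parsing sketch; the ADT message below is a hypothetical example, not the one shown in Fig. 4.4.

```python
# A minimal sketch of HL7-v2's delimiter structure, using a hypothetical
# ADT^A01 message.
adt_message = (
    "MSH|^~\\&|HIS|HOSPITAL|LIS|LAB|202101010930||ADT^A01|MSG0001|P|2.8\r"
    "PID|1||12345^^^HOSPITAL||Doe^Alex||19870101|M\r"
)

for segment in adt_message.strip("\r").split("\r"):
    fields = segment.split("|")              # elements, separated by pipes
    seg_id = fields[0]                       # three-letter segment identifier
    if seg_id == "PID":
        name = fields[5].split("^")          # components, separated by carets
        print("Family name:", name[0])       # -> Doe
        print("Given name:", name[1])        # -> Alex
```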

4.3.2 Health level 7 version 3.x

The use of HL7-v2 was widespread by the mid-90s. In the late 90s, the HL7 organization decided to address the limitations of HL7-v2 and thus commenced the development of a new standard, HL7-v3. The first version of HL7-v3 was released in 2005. HL7-v3 was an attempt to eliminate the semantic inconsistencies of HL7-v2 and develop a plug-and-play technology for easy use across sites without the need for customization. To achieve this, HL7-v3 introduced a well-defined data model that helped to eliminate semantic inconsistencies. This data model was called the reference information model (RIM). It formed the backbone of HL7-v3 and was used to represent all clinical data in a standard format. The RIM is based on the Unified Modeling Language and comprises a collection of classes that model all clinical information. The core classes of the RIM are as follows:

• Role: for example, a patient, doctor, or employee
• Entity: a person playing a role
• Act: an act of monitoring or recording vitals
• Participation: a role participates in an act

HL7-v3 also introduced the Clinical Document Architecture standard, which was used to create a patient discharge summary. While it provided a good way to transfer a static document containing information between two systems, the summary-level record lacked granularity and did not expose the individual elements contained in it for examination and querying.


Adoption of the RIM in HL7-v3 introduced uniformity by defining data models and eliminated the drawbacks of HL7-v2, but it overextended in achieving its goal of standardization and turned out to be too inflexible for practical use. In particular, HL7-v3 required application users to understand the RIM to effectively use the tools for data exchange. The regulatory changes required to implement the standard and the training programs needed to educate users made the adoption of HL7-v3 complex and expensive. This proved to be antithetical to the goal of the HL7 organization, which was to develop cheap and simple solutions for healthcare information exchange. These shortcomings motivated the development of the latest HL7 standard, FHIR.

4.3.3 Fast healthcare interoperability resources

After analyzing the shortcomings of HL7-v2 and HL7-v3, the HL7 community realized that, while implementing stringent data models and regulatory policies to ensure a smooth exchange of healthcare information seemed great in theory, it did not work out well in practice. This knowledge led to the development of the latest HL7 standard, FHIR, which tries to maintain a balance between standardization, flexibility, and cost of implementation. Development of FHIR commenced in 2011 and the first version was released in 2014. Since 2014, four new versions have been released. FHIR is the only HL7 standard that is semantic web compliant. Bender and Sartipi (2013) provide a critical comparison of the previous HL7 standards and FHIR. This section explores the features of FHIR that make it a promising solution to the interoperability challenge.

First, unlike the previous standards, which used the Simple Object Access Protocol (SOAP) architecture, FHIR is based on a Representational State Transfer (REST) architecture. The most important advantage of using a REST architecture in FHIR is that it makes FHIR resources easily accessible via the web. The JavaScript Object Notation (JSON) format provides lightweight and readable messages, as opposed to the previous heavyweight XML messages. Table 4.1 lists the differences between the SOAP and REST architectures.

TABLE 4.1 Simple Object Access Protocol (SOAP) versus Representational State Transfer (REST).

| Simple Object Access Protocol (SOAP) | Representational State Transfer (REST) |
| --- | --- |
| A communication protocol | An architectural style |
| Supports XML format only | Supports XML, JSON, RDF formats |
| Heavyweight messages due to complex structure | Lightweight, simple, readable messages |
| SOAP APIs perform operations by invoking remote procedure calls (RPC) | REST APIs access a resource using its uniform resource identifier (URI) and call a service using a simple URL path |
| Uses the Web Services Description Language (WSDL) to set up communication parameters before actual communication begins | No dedicated protocol is used to set up the communication channel; simple XML and JSON formats are used to send and receive messages |
| Cannot be cached | Can be cached |
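As a sketch of what FHIR's REST architecture means in practice, the snippet below retrieves a resource as JSON from a hypothetical FHIR server; a real client would also set appropriate headers and handle HTTP errors.

```python
# A minimal sketch against a hypothetical FHIR server base URL: each resource
# is addressable at its own URI and returned as lightweight JSON.
import json
import urllib.request

url = "https://fhir.example.org/baseR4/Patient/123"   # hypothetical endpoint

with urllib.request.urlopen(url) as response:
    patient = json.load(response)

print(patient["resourceType"])   # every FHIR resource names its type, e.g. "Patient"
```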


Second, the basic building block of FHIR is a resource. A resource provides a much more granular approach to storing information. For example, a single HL7-v2 ADT message can be broken down into three or more FHIR resources depending on its type. Consider the ADT^A01 message example illustrated in Fig. 4.4: the MSH and EVN segments can be converted into a "MessageHeader" resource in FHIR, the PID and NK1 segments can be converted into a "Patient" resource in FHIR, and the PV1 segment can be converted into an "Encounter" resource in FHIR. This granular structure makes it easier to access and modify small chunks of information without modifying an entire message. FHIR also provides a way to pool resources together to build a summary-level document dynamically (FHIR, 2019). Since an FHIR document contains information about the resources building it, these individual resources can be easily accessed for querying and examination. Third, to solve the inflexibility issue caused by extreme adherence to a data model, FHIR refrains from creating highly definitive models that extensively represent every tiny detail of information. Instead, it provides an extensibility mechanism through which the existing resource definitions can be enriched. Semantic consistency is maintained by providing FHIR profiles, which are abstract layers that provide constraints and guidelines for the extension of resource definitions depending on the context (FHIR Profiles, 2015). Semantic web Shape Expressions (ShEx) is an RDF data constraint language, and Solbrig et al. (2017) proposed a ShEx model to validate FHIR profiles. The model discovered 10 types of errors along with their root causes and was officially included in FHIR's Third Draft Standard for Trial Use (DSTU3) release (FHIR RDF, 2019). Fourth, as FHIR is semantic web compliant, FHIR resources can be represented as RDF graphs (RDF FHIR, 2019). The FHIR Linked Data Module (FHIR LDD, 2018) grounds FHIR semantics in RDF, which makes FHIR data compatible with any RDF application. This compatibility allows FHIR to easily access and integrate valuable data from a huge repository of existing semantic web resources. FHIR's capability to express resources semantically is enhanced with the adoption of OWL and RDF. This enhancement allows data from multiple datasets to be linked effortlessly and assists in the deduction of inferential knowledge. Valuable data from various sources like medical ontologies can then be integrated with FHIR and leveraged to make well-informed decisions, thus improving the quality of healthcare services. The healthcare standards and literature discussed in this section mainly work at the syntactic interoperability level. The next section of this chapter explores the applications of semantic web technology at the level of semantic interoperability. It provides a brief history of the evolution of semantic interoperability and the advantages of applying semantic web technology to aid it.

4.4 Semantic interoperability

As discussed in Section 4.1.1, semantic interoperability ensures that the meaning and context of the health information communicated between two systems are conveyed properly and interpreted correctly. This is accomplished using clinical coding systems that convert clinical information like procedures, diagnoses, and symptoms into numeric/alphanumeric codes. For example, doctors and clinicians belonging to different organizations may use heterogeneous clinical terms for the same disorder, or some may abbreviate the terms to keep them short. Integrating information from such heterogeneous systems introduces inconsistencies in the data and retrieves misleading results if, for example, the search word used does not match the other synonymous or abbreviated forms used for the same disorder.


This can be avoided by assigning a standard numeric/alphanumeric code to represent all synonyms and abbreviations of a disorder. A collection of such codes builds a clinical coding system and ensures uniformity across heterogeneous healthcare information systems.
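A toy sketch of this normalization idea follows; the synonym list is hypothetical, and the code value merely imitates the style of a SNOMED-CT identifier for illustration.

```python
# Heterogeneous synonymous terms all map onto one standard code, so that
# records from different systems can be compared.
SYNONYM_TO_CODE = {
    "otitis media": "65363002",
    "om": "65363002",
    "middle ear infection": "65363002",
}

def normalize(term: str) -> str:
    """Return the standard code for a clinical term, or 'UNKNOWN'."""
    return SYNONYM_TO_CODE.get(term.strip().lower(), "UNKNOWN")

# Records from two systems using different terms now compare equal.
assert normalize("Middle ear infection") == normalize("otitis media")
```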

4.4.1 History of clinical coding systems

The earliest attempts at building clinical coding systems were undertaken in the mid-1980s. The motivation for developing clinical codes was driven by two main factors:

1. To avoid repetitive typing of lengthy clinical terms during data entry in hospital information systems.
2. To eliminate confusion and ensure accurate interpretation of medical terms across geographical and linguistic boundaries by converting miscellaneous, synonymous jargon for a single condition, procedure, or symptom into standardized numeric/alphanumeric codes. This was the primary and more important motivation.

In the beginning, clinical terms were represented using read codes. Read codes were organized in chapters starting from 0 (representing occupation) to 9 (representing administration), followed by chapters A-Z (Read Codes, 1982). Chapters A-U were dedicated to recording clinical conditions, diseases, and disorders. Read code representation relied on the lexical features and character sequence of the code. For example, chapter F was dedicated to representing the nervous system and senses. Otitis media is a type of nervous system and sense organ disease, so it was represented using the code F52, based on its position in the chapter hierarchy. Fig. 4.5 illustrates the read code for otitis media (SNOMED, 2005).

FIGURE 4.5 Otitis media represented using a read code system.

This form of representation had two major drawbacks:

1. Multiple inheritance, which is very common in medical ontologies, could not be represented using read codes; that is, a concept could not have two parents. For example, otitis media caused by infection can also be classified as an infectious/parasitic disease, represented by chapter "A." But since its code starts with F, it cannot be listed as an infectious disease.
2. If a concept was misplaced in the hierarchy (NHS, 1994), correcting it meant making changes to the characters in the code. For example, placing otitis media in the infectious disease category, represented by chapter A, would mean changing F52 to Ax. All previously implemented systems representing otitis media using F52 would then be rendered incorrect.

To overcome these drawbacks, relationships had to be represented without relying on the character sequence of codes. This led to the development of Is-A hierarchy relationships.


These hierarchy relationships were conducive to representing the inherent polyhierarchical nature of medical ontologies. While this format improved on the original read codes and contained techniques to describe concepts with multiple parents, not all relationships suited it. In particular, not all relationships between clinical concepts are subsumption relationships. To refine the details of a concept, a new type of relationship, called the attribute relationship, was added. Attribute relationships provide information about a concept that describes it in addition to its type in a concept hierarchy. Fig. 4.6 illustrates the hierarchy and attribute relationships for otitis media (SNOMED, 2005).

FIGURE 4.6 Otitis media represented using Is-A and attribute relationships.

With the use of Is-A relationships, otitis media can now have two parents, "disorder of a sense organ" and "infection," at the same time. The attribute relationships, "causative agent" and "finding site," further refine the details and provide additional information about otitis media.
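Expressed as triples, the polyhierarchy of Fig. 4.6 might look like the following sketch, which uses hypothetical URIs rather than real SNOMED-CT identifiers.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/terminology/")

g = Graph()
g.add((EX.OtitisMedia, RDFS.subClassOf, EX.DisorderOfSenseOrgan))  # first parent
g.add((EX.OtitisMedia, RDFS.subClassOf, EX.Infection))             # second parent
g.add((EX.OtitisMedia, EX.causativeAgent, EX.InfectiousAgent))     # attribute relationship
g.add((EX.OtitisMedia, EX.findingSite, EX.MiddleEar))              # attribute relationship
```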

4.4.2 Difference between clinical terminology systems and clinical classification systems

When considering semantic interoperability in healthcare, there are two types of clinical coding systems that come into play: clinical terminology systems and clinical classification systems. The focus of this chapter is on clinical terminology systems. The two terms are often incorrectly used interchangeably. While both systems aid in providing semantic interoperability, they serve specific purposes, and interchangeable use of the two is considered inaccurate (Alakrawi, 2016). Fig. 4.7 illustrates the levels where each of the two systems operates (redrawn with changes from SNOMED (2005)) and Table 4.2 highlights the key differences between clinical terminology systems and clinical classification systems. To better understand them, these differences are explained by taking an example of each system. A clinical classification system is exemplified by the International Classification of Diseases, Tenth Revision (ICD-10), and a clinical terminology system is exemplified by SNOMED-CT. There are several clinical terminology systems that specialize in particular areas. For example, RxNorm focuses on clinical drugs, RadLex defines codes related to radiology information, Logical Observation Identifiers Names and Codes (LOINC) specializes in laboratory test codes and observational readings, Current Procedural Terminology focuses on outpatient billing procedures, and the Healthcare Common Procedure Coding System covers clinical procedure codes. While these terminology systems cover specific areas of the medical domain, SNOMED-CT is all-inclusive and the most extensive clinical terminology system, with 80% to 90% coverage of the entire medical domain (Nachimuthu and Lau, 2007).


FIGURE 4.7 Types and purposes of clinical coding systems.

TABLE 4.2 Difference between clinical terminology system and classification system.

| Criteria | Terminology system | Classification system |
| --- | --- | --- |
| Usage | Mainly used for capturing clinical information, that is, data entry into EHR systems | Mainly used for analytics, healthcare surveillance, and planning |
| End users | People directly involved in patient care and treatment; people responsible for documenting a clinical situation. For example, clinicians, technical lab assistants | Healthcare professionals responsible for analyzing clinical statements and assigning codes based on rules and conventions. For example, policy makers, healthcare planners, epidemiologists |
| Scope | Broad; for example, SNOMED-CT not only covers codes for diseases but also other relevant information like clinical procedures, laboratory tests, pharmaceutical products, and body structures | Narrow; for example, ICD-10 is primarily designed for classification of diseases |
| Granularity | More granular and very detailed; for example, specialists can find codes for very specific conditions or diseases and can also refine the details according to their needs using postcoordination | Less granular and comparatively less detailed; due to the statistical focus for analytical purposes, the codes are not very detailed and specialists often cannot find specific detailed codes for particular diseases due to generalization |

SNOMED-CT covers all the aforementioned codes to a large extent and eliminates the need to install multiple clinical terminology systems. This vast coverage of SNOMED-CT makes it the most adopted clinical terminology system.


The size of the SNOMED-CT vocabulary allows healthcare providers to customize the terminology according to their needs and install only the parts of the terminology that are most frequently used, thus making it an elegant solution without any performance degradation. Given its widespread use and coverage, this chapter focuses on examining the contribution of semantic web technologies to the enhancement of SNOMED-CT. SNOMED-CT was formed as a result of merging two separate clinical terminology systems: Clinical Terms Version 3, a read codes system designed by the United Kingdom's National Health Service, and the Systematized NOmenclature of MEDicine Reference Terminology, developed by the College of American Pathologists. A 3-year project (Stearns et al., 2001) was undertaken in 1998, and the first version of SNOMED-CT was published in January 2002. Since then, two new versions of SNOMED-CT have been released annually (January and July) to incorporate newly requested concepts and correct erroneous concepts in the previous versions. SNOMED-CT was developed with the predominant goal of capturing clinical data and recording information to be stored in electronic health record (EHR) systems. However, the adoption of semantic web technologies in the health informatics domain has opened a wide range of possibilities to leverage this stored data, such as querying the data for analytical purposes using SPARQL, integrating clinical data from heterogeneous sources using RDF, and using RDF graphs to analyze medical ontologies and improve quality assurance. The next section outlines the features of semantic web technologies that make them a viable candidate to improve semantic interoperability.

4.4.3 Semantic interoperability and semantic web technology

This section examines the aspects of semantic web technology that make it a suitable candidate to improve semantic interoperability in the healthcare domain. It highlights features of semantic web technologies that can be leveraged by applications to improve the storage, querying, analytics, and quality assurance of medical ontologies, to aid semantic interoperability in healthcare. Relational databases have represented the predominant storage model to date, as most real-world data have a fixed, regular structure and there is not much variation in the way in which entities are related to each other. The basic difference between traditional RDBMS approaches and semantic web technologies lies in the model that each one follows to store the data. In traditional RDBMS models, a schema is defined based on the relationships among entities, and the data are then stored in this fixed structure. This means that the relationships among entities are known a priori. The semantic web, on the other hand, stores the relationships among concepts at the individual record level; that is, there is no fixed schema that defines relationships among entities. Every triple itself defines the relationship between its subject and object at the individual record level. This kind of representation is beneficial when the structure of data is dynamic and there is a lot of variation in the way entities are related to each other. As discussed in Section 4.4.1, medical ontologies like SNOMED-CT have a highly polyhierarchical and dynamic structure, where the number of relationships a concept has may differ drastically from concept to concept, thus making semantic web technologies a suitable option for storage.


First, most of the EHR systems in hospitals are implemented using RDBMS models. However, for an EHR system to store clinical codes for various clinical symptoms, conditions, and procedures, it must work in collaboration with a medical ontology system. These medical ontologies are complex graphs formed by a combination of taxonomic/hierarchical (Is-A) relationships and definitive (attribute) relationships among the clinical concepts. Medical ontologies have a highly irregular structure with a lot of variation in their relationships. While a traditional RDBMS model is suitable to store the structured part of the information in an electronic health record, like patient demographics, details of orders, patient bills, and so on, it is not designed to store complex, variable, and irregular medical ontological data. Semantic web technology is a promising candidate to store such complex medical ontologies. Second, medical ontologies like SNOMED-CT are huge in volume and contain hundreds of thousands of concepts. This huge size makes it difficult to keep track of relationships among all concepts. Even after taking the utmost care, erroneous links, like missing or redundant relationships, may be introduced in the ontological data due to human oversight. The inherent quality of semantic web technologies like OWL to infer and deduce relationships among resources can be exploited to examine the graphical structure and relationships among various clinical concepts. As discussed in Section 4.2.3, the ability of OWL properties to deduce relationships among concepts that are not explicitly defined can be used to find missing relationships in medical ontologies. This makes OWL a great tool for the quality assurance of medical ontologies. Third, we compare traditional SQL queries and SPARQL queries for the retrieval of medical ontological data. In SQL, the depth and range of a query have to be defined statically, with the number of joins on the tables to be explored fixed while writing the query. This means only data in the tables mentioned in the query will be examined. SPARQL, however, supports queries of "undefined path" and "undefined depth" on RDF graphs (Vicknair et al., 2010). This means SPARQL queries freely traverse the graphs/triples based on the constraints mentioned in the query, and the degrees of separation among the concepts do not have to be stated explicitly while writing the query. Consider the example in Fig. 4.8: the SQL query limits its retrieval capacity by statically defining the tables and retrieves information by joining the data with foreign and primary key constraints, whereas the SPARQL query can explore all RDF triples having medication properties, since the schema itself is stored at the record level.

FIGURE 4.8 SPARQL versus SQL query to retrieve medication for “Otitis media.” SPARQL, Protocol and RDF query language.
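The "undefined depth" capability can be made concrete with a property-path query; the sketch below uses rdflib and hypothetical data, walking rdfs:subClassOf chains of any length with the * operator, which a static SQL join cannot do without fixing the depth in advance.

```python
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
ex:OtitisMedia rdfs:subClassOf ex:DisorderOfSenseOrgan .
ex:DisorderOfSenseOrgan rdfs:subClassOf ex:Disorder .
""", format="turtle")

query = """
PREFIX ex: <http://example.org/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?ancestor
WHERE { ex:OtitisMedia rdfs:subClassOf* ?ancestor . }
"""

for row in g.query(query):
    print(row.ancestor)   # OtitisMedia itself plus both ancestors
```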


This feature of SPARQL helps in exploring and extracting data that would otherwise not be retrieved by static SQL queries. The unrestricted traversal not only improves retrieval performance but also helps in finding anomalous relationships. This additional advantage of discovering anomalous relationships can be exploited for the quality assurance of medical ontologies. These features make semantic web technologies viable for aiding semantic interoperability in the healthcare domain. The next section reviews relevant literature to further gauge the current state of semantic web technology as an aid to syntactic and semantic interoperability in the health informatics industry.

4.5 Contribution of semantic web technology to aid healthcare interoperability

After understanding the features of semantic web technology that make it a promising candidate to aid syntactic interoperability (Section 4.3) and semantic interoperability (Section 4.4), this section reviews the literature to study the current contribution of semantic web technology to healthcare interoperability at the respective levels introduced in Section 4.1.

4.5.1 Syntactic interoperability and semantic web technology

To improve the quality of healthcare delivery, it is crucial to leverage the richness of healthcare data integrated from heterogeneous sources in decision making. This section reviews the literature to acknowledge the contribution of semantic web technologies to achieving this goal at the syntactic interoperability level. Substitutable medical applications and reusable technologies (SMART) was an initiative undertaken by Harvard Medical School and Boston Children's Hospital to solve the syntactic interoperability problem. This project aimed to build a platform based on web standards that enabled currently available medical applications to be plugged in, without any customization or modification, to interact freely with existing EHR systems. Initially, SMART had developed its own clinical models, but with the adoption of RDF as a primary format for the representation of resources in FHIR Release 3 (FHIR RDF, 2019), SMART quickly gravitated toward the new standard and adopted it officially, renaming the project "SMART on FHIR" (Mandel et al., 2016). There exists a "SMART App Gallery" (SMART, 2014) that currently holds more than 70 SMART on FHIR applications. Semantic web compliance for FHIR encouraged many initiatives. Peng and Goswami (2019) proposed a method employing OWL to integrate electronic health data from EHR systems using FHIR and patient-generated health data from the home environment and wearable devices using the Web of Things and the Internet of Things. Luz et al. (2015) provided a new insight by stating that just representing FHIR using RDF and ontologies was not enough and that a concrete information infrastructure was needed to guarantee full interoperability. To achieve this, they proposed a multilevel model-driven approach for the syntactic validation of FHIR schemas. Abbas et al. (2018) used an FHIR ontology representation to store patient and drug information in an FHIR server. Potential health risks of drug-drug interactions were then analyzed by executing SPARQL queries on this semantic web compliant data.


Healthcare data can be found in a variety of locations, like blogs, social media-based patient communities, and discussion forums. Ae Chun and MacKellar (2012) proposed a unique project to integrate healthcare data from these alternate sources using semantic web technologies. Chun used RDF to extract medical concepts from the Unified Medical Language System to better understand and classify the healthcare data collected from the aforementioned sources. Timm et al. (2011) proposed a three-step method to demonstrate the benefits of semantic web technology in improving data interoperability for cross-domain collaboration. The method was applied to a hypertension disease model, with participants including clinical practitioners, domain experts, and healthcare researchers. The outcome proved that adding semantic web technologies to the toolkit was beneficial to aid healthcare interoperability. Sonsilphong and Arch-int (2013) proposed a framework based on semantic web technologies to integrate data from heterogeneous databases. Given its application in the health informatics domain, the framework was used to integrate patient data from heterogeneous, independently built hospital information systems. Similarly, Shah and Thakkar (2019) presented a method that combined software agents and the semantic web for meaningful retrieval of healthcare information. While semantic web technologies are beneficial for integrating healthcare information from heterogeneous sources, they require semantic annotations for efficient usage. Peng et al. (2018) proposed a method to inject REST web services with semantic annotations and convert them into RDF graphs to improve interoperability between medical ontological data. Legaz-García et al. (2016) proposed a framework, called the Archetype Management System, employing OWL to enable the meaningful secondary use of EHR data using semantic web technology. As discussed in Section 4.3, the majority of EHR systems have been implemented using the previous HL7 standards based on an XML format. Representing data in an XML format makes the integration and sharing of healthcare information cumbersome compared to the web-friendly JSON format of semantic web technologies. Thuy et al. (2012) proposed a mechanism to convert this huge volume of existing healthcare data from XML-based documents to OWL ontologies. The conversion to OWL ontologies makes the healthcare data easily accessible to FHIR and medical ontology systems and promotes interoperability. Kiourtis et al. (2019) built on this work (Thuy et al., 2012) and proposed a system to preserve healthcare interoperability at both the syntactic and semantic levels. The proposed system worked toward the transformation of traditional XML-based healthcare datasets to an FHIR resource ontology and included medical ontology mapping to preserve semantic interoperability. Solbrig et al. (2017) also tried to resolve interoperability problems at both the syntactic and semantic levels by proposing a system that blended semantic web technologies like OWL, Description Logic (DL), and RDF with FHIR. The proposed system used a DL reasoner to combine FHIR RDF data and the SNOMED-CT ontology. The outcome was improved classification and retrieval of clinical information, like diagnoses, recognizing prescriptions, and so on.

4.5.2 Semantic interoperability and semantic web technology

As discussed in Section 4.4, medical ontologies like SNOMED-CT are highly dynamic in nature and exhibit polyhierarchical characteristics. This makes semantic web technologies like OWL and RDF graphs a good option for characterizing and storing medical ontological data.


This section reviews the current contribution of semantic web technology to improving semantic interoperability. The section particularly examines applications of semantic web technologies to SNOMED-CT, since SNOMED-CT is the most exhaustive medical ontology. There have been detailed studies analyzing the structure of SNOMED-CT and how certain graphical structures, like nonlattice subgraphs, indicate the presence of potential discrepancies or erroneous representations such as missing or redundant relationships among clinical concepts (Wang et al., 2007). Semantic web technologies like RDF and SPARQL are useful for analyzing such graphical structures. Zhang and Bodenreider (2010) and Zhang (2010) have contributed greatly in this area to detect nonlattice subgraph structures in SNOMED-CT and the potential errors corresponding to them. Zhang (2010) devised a method called Lattice-based Structural Auditing, based on semantic web technologies like RDF and SPARQL, for assessing and improving the quality of biomedical ontologies like SNOMED-CT. A nonlattice structure is identified by the absence of a unique maximal common descendant or minimal common ancestor. Zhang and Bodenreider (2010) demonstrated the use of SPARQL queries on SNOMED-CT data represented in the form of RDF graphs to detect such patterns. Cui et al. (2018) studied this issue further and emphasized the importance of taking into consideration the lexical features of a clinical concept, along with the graphical structure of SNOMED-CT. They not only identified erroneous patterns but also auto-suggested changes to remediate the errors using semantic web technologies like RDF and a DL reasoner (Cui et al., 2018). Obitko et al. (2004) stated that the traditional approach of grouping concepts that belong to a particular domain and defining relationships among them was not sufficient to effectively represent and exploit the domain knowledge to its maximum potential. Therefore they introduced the use of formal concept analysis (FCA) (Ganter and Wille, 1999) to analyze the conceptual structure patterns present in the data, instead of just grouping concepts belonging to a domain in a taxonomy. FCA quickly found its application in the health informatics domain to structurally analyze complex medical ontologies. Jiang et al. (2003) applied FCA to define better medical ontologies, mainly focusing on a cardiovascular ontology. Zhao et al. (2018) applied FCA in ontology matching to identify corresponding conceptual structures in overlapping medical ontologies, to facilitate semantic interoperability and the reuse of data. Jiang and Chute (2009) also applied FCA to the quality assurance of medical ontologies. Jiang and Chute evaluated SNOMED-CT's semantic comprehensiveness with the help of FCA and added anonymous nodes as placeholders for missing concepts in SNOMED-CT. This method was only applied to two subhierarchies of SNOMED-CT, Procedure and Clinical Finding, which constituted less than 60% of the entire terminology system. Soon it was realized that while FCA proved to be a beneficial tool for the conceptual structure analysis of medical ontologies, it was not scalable (Cole and Eklund, 1999). FCA could not be applied to an entire medical ontology like SNOMED-CT, due to its huge volume. Zhang and Bodenreider (2010) demonstrated the use of semantic web technology to address this challenge.
Zhang and Bodenreider presented a scalable method based on SPARQL to find nonlattice structures in medical ontologies (Zhang and Bodenreider, 2010). The proposed method was successfully applied to the entire SNOMED-CT ontology consisting of 19 subhierarchies and more than 300,000 concepts. This method proved that semantic web technology was suitable to work with huge volumes of medical ontological data and had the potential to solve the scalability limitations of other methods.


Given the advantages of semantic web representation for medical ontologies, SNOMED-CT released an OWL version of the clinical terminology system (SNOMEDGuide, 2009). Golbreich et al. (2003) discussed the suitability of semantic web technologies to effectively represent complex medical ontologies. They explored Protégé (Noy, 2001) and OIL (Fensel, 2001), which are tools for ontology development, and explained the need to bridge the gap between the expressive power required to represent a medical ontology and the current expressive power of semantic web technologies. Pisanelli et al. (2004) demonstrated the role of semantic web ontologies in dealing with medical polysemy, that is, the phenomenon in which a term has multiple meanings. They explained this by taking the example of the term "inflammation," which has multiple interpretations, based on the context, in a medical vocabulary. Another feature of semantic web technology that can be employed to improve semantic interoperability is the study of the underlying structure of different medical ontologies to identify commonalities that support integration and data reuse. Sabou et al. (2006) proposed an innovative approach to use semantic web knowledge for efficient ontology mapping. The proposed approach addressed the limited mapping capacity of previous systems that followed structure- or label-based matching and produced mappings that these systems failed to discover. Paslaru Bontas et al. (2004) combined the benefits of semantic web technology with natural language processing and built an efficient retrieval system for medical ontological queries. The system was tested on the retrieval of pathology reports and digital images. Aimé (2015) proposed a method for the smooth exchange of patient information in existing EHR systems using semantic web technologies. To accomplish this exchange, an innovative semantic interoperability framework called Terminology and Data Elements Repositories for Healthcare Interoperability (TeRSan) was introduced that focused on the efficient mapping of medical ontologies like SNOMED-CT, LOINC, PathLex, and ICD-10. The work and applications highlighted in this section show that much research has been and continues to be carried out in the domain of syntactic and semantic interoperability. The flexibility offered by semantic web technologies has driven much innovation in applications to resolve compatibility issues as well as techniques for efficient error detection and correction. The next section presents areas where further research and development are required to successfully incorporate semantic web ontologies in the health informatics domain.

4.6 Discussion and future work

Health information exchange has been a persistent problem for decades. The origin of the healthcare interoperability problem can be attributed to the early digitization of healthcare data without any standard guidelines or data models. Every hospital developed its own system, which included databases (syntactic structures) and clinical codes (semantic structures) to ease the storage and retrieval of healthcare data. This resulted in hundreds of scattered hospital information systems with no standard data models and heterogeneous clinical coding systems. Integration of healthcare information from heterogeneous sources is important to make meaningful use of existing data and leverage it to improve the quality of healthcare services in the future. The healthcare community has been trying to solve the interoperability challenge for the past four decades. Based on the literature review conducted in this chapter to gauge the involvement of semantic web technology in this area, semantic web technology seems to be a promising candidate for solving the healthcare interoperability challenges at both the syntactic and semantic levels. However, close examination reveals certain shortcomings of semantic web technologies that need to be addressed to ensure a more effective contribution to the area. This section discusses the detected shortcomings and offers potential directions for future work, with the anticipation that the analysis made in this chapter will encourage future research efforts in further developing aspects of semantic web technology that can better solve the interoperability challenges at both levels.

4.6.1 Challenges with the adoption of semantic web technology at the semantic interoperability level

The foremost impediment to the full exploitation of semantic web technology for overcoming the semantic interoperability challenge in health informatics is the lack of standardization in the development of medical ontologies. Multiple competing ontologies represent the same domain knowledge. For example, there are multiple ontologies to represent the cancer domain. This gives rise to two problems: (1) the responsibility of choosing the most accurate ontology falls on the user, and (2) using different ontologies to represent the same domain knowledge introduces heterogeneity in the healthcare data. In order to solve the interoperability challenge, it is important to set individual objectives aside and work unitedly as a community toward adopting a single ontology development strategy for the entire healthcare domain.

Second, the volume and variety of healthcare data are a concern when trying to build a single ontology. This challenge is addressed by SNOMED-CT to a large extent (80%-90% coverage) by providing a single comprehensive ontology with customized installation options. However, SNOMED-CT (Nachimuthu and Lau, 2007) requires complementary support from other ontologies to attain 100% coverage of the healthcare domain. There is much scope for improvement in this regard, to make SNOMED-CT completely independent of other supplementary ontologies.

Third, semantic web technologies were not originally designed to support medical ontologies and only later found application in the medical domain. Keeping this in mind, one cannot disregard the basic differences in the expressive powers of the two standards: while both SNOMED-CT and OWL use description logic (DL) at the backend, their expressive powers differ. Precautions must be taken to ensure that converting SNOMED-CT to an OWL-based ontology does not introduce or infer erroneous relationships that were not present in SNOMED-CT.

4.6.2 Challenges with the adoption of semantic web technology at the syntactic interoperability level

First, the majority of currently available healthcare data exists in traditional RDBMS models and XML-based documents. Converting this data into an ontology is resource-intensive. For example, the conversion of an XML-based HL7-v2 message into corresponding FHIR resources or RDF graphs requires domain knowledge. With the huge size of ontologies and millions of resources and relationships to map to, the process requires effective tools to speed up the mapping, along with constant supervision by domain experts to ensure the end results are accurate. Automation of such mapping procedures would boost the utilization of semantic web technologies in the healthcare domain.

Second, there is a need for better user interfaces to visualize the mapping of scattered healthcare data into semantic web ontologies. This would facilitate visual inspection and validation. The development of such user interfaces would encourage mapping initiatives and reduce the dependency on domain experts. This is a much needed but relatively unexplored field.

Third, RDF triples store schema-level information at the individual record level, which makes query execution costly in terms of response times and usage of computing resources. Given the huge volumes of medical ontologies, scaling beyond a certain limit may cause performance degradation. This area needs to be researched thoroughly before full-fledged adoption of semantic web technologies in the healthcare domain.

Finally, healthcare data contain sensitive, confidential information about patients. Traditional healthcare systems were designed to be private, with limited access to healthcare data. With semantic web technologies accessing these private data and making them easily accessible via the web, it is important to adhere to strict privacy policies to protect all patients' sensitive medical information. This aspect needs to be fully investigated before the deployment of any system relying on possibly vulnerable technologies.

4.7 Conclusion

Overall, the literature review conducted in this chapter suggests that semantic web technologies have enhanced the capability of healthcare standards in tackling the interoperability problem. Addressing the aforementioned shortcomings would further improve the capability and integration of semantic web technologies in solving the healthcare interoperability problem at both the syntactic and semantic levels. We hope our analysis will encourage future research efforts in further developing aspects of semantic web technology to better suit the healthcare domain and provide a generic solution for the interoperability challenge.

References

Abbas, R., et al., 2018. Mapping FHIR resources to ontology for DDI reasoning. In: The 15th Scandinavian Conference on Health Informatics.
Ae Chun, S., MacKellar, B., 2012. Social health data integration using semantic web. In: Proceedings of the 27th Annual ACM Symposium on Applied Computing - SAC '12.
Aimé, X., et al., 2015. Semantic interoperability platform for Healthcare Information Exchange 3.
Alakrawi, Z., 2016. Clinical terminology and clinical classification systems: a critique using AHIMA's data quality management model. Perspect. Health Inf. Manag. <https://perspectives.ahima.org/clinical-terminology-and-cli...>.
Bender, D., Sartipi, K., 2013. HL7 FHIR: an agile and RESTful approach to healthcare information exchange. In: Proceedings of the 26th IEEE International Symposium on Computer-Based Medical Systems, pp. 326-331.


Booth, N., 1994. What are the Read Codes? Health Libraries Rev. 11 (3), 177-182.
Case Studies, 2005. Semantic web case studies and use cases. <https://www.w3.org/2001/sw/sweo/public/UseCases/>. (Accessed March 2020).
Cole, R., Eklund, P., 1999. Scalability in formal concept analysis. Comput. Intell. 15 (1), 11-27.
Cui, L., Bodenreider, O., Shi, J., Zhang, G.-Q., 2018. Auditing SNOMED CT hierarchical relations based on lexical features of concepts in non-lattice subgraphs. J. Biomed. Inform. 78, 177-184.
Dadkhah, M., Araban, S., Paydar, S., 2020. A systematic literature review on semantic web enabled software testing. J. Syst. Softw. 162, 110485.
DCMI, 2019. DCMI Metadata Terms. <https://www.dublincore.org/specifications/dublin-core/dcmi-terms/>. (Accessed April 2020).
Drury, B., Fernandes, R., Moura, M.-F., de Andrade Lopes, A., 2019. A survey of semantic web technology for agriculture. Inf. Process. Agric. 6 (4), 487-501.
Ducharme, B., 2013. Learning SPARQL: Querying and Updating with SPARQL 1.1. O'Reilly, Sebastopol.
Fensel, D., et al., 2001. OIL: an ontology infrastructure for the Semantic Web. IEEE Intelligent Systems 16 (2), 38-45.
FHIR, 2019. Documents - FHIR v4.0.1. <https://www.hl7.org/fhir/documents.html>. (Accessed March 2020).
FHIR LDD, 2018. Linked-data-module - FHIR v3.2.0. <https://hl7.org/fhir/2018Jan/linked-data-module.html>. (Accessed March 2020).
FHIR Profiles, 2015. Profile - FHIR v0.0.82. <https://www.hl7.org/fhir/DSTU1/profile.html>. (Accessed March 2020).
FHIR RDF, 2019. RDF - FHIR v3.0.2. <https://www.hl7.org/fhir/STU3/rdf.html>. (Accessed March 2020).
FOAF, 2014. FOAF vocabulary specification. <http://xmlns.com/foaf/spec/>. (Accessed April 2020).
Ganter, B., Wille, R., 1999. Formal Concept Analysis: Mathematical Foundations. Springer, Berlin, New York.
GeoNames, 2006. GeoNames ontology - geo semantic web. <http://www.geonames.org/ontology/documentation.html>. (Accessed March 2020).
Golbreich, C., Dameron, O., Gibaud, B., Burgun, A., 2003. Web Ontology Language requirements w.r.t expressiveness of taxonomy and axioms in medicine. Lecture Notes in Computer Science, 180-194.
Health Level 7, 1987. HL7 standards product brief - HL7 version 2 product suite | HL7 International. <https://www.hl7.org/implement/standards/product_brief.cfm?product_id=185>. (Accessed March 2020).
HIMSS, 2019. HIMSS Dictionary of Healthcare Information Technology Terms, Acronyms, and Organizations. Productivity Press, New York.
Jiang, G., Chute, C., 2009. Auditing the semantic completeness of SNOMED CT using formal concept analysis. J. Am. Med. Inform. Assoc. 16 (1), 89-102.
Jiang, G., Ogasawara, K., Endoh, A., Sakurai, T., 2003. Context-based ontology building support in clinical domains using formal concept analysis. Int. J. Med. Inform. 71 (1), 71-81.
Kiourtis, A., Nifakos, S., Mavrogiorgou, A., Kyriazis, D., 2019. Aggregating the syntactic and semantic similarity of healthcare data towards their transformation to HL7 FHIR through ontology matching. Int. J. Med. Inform. 132, 104002.
Lampropoulos, G., Keramopoulos, E., Diamantaras, K., 2020. Enhancing the functionality of augmented reality using deep learning, semantic web and knowledge graphs: a review. Vis. Inform.
Legaz-García, M., et al., 2016. A semantic web based framework for the interoperability and exploitation of clinical models and EHR data. Knowl. Syst. 105, 175-189.
Louge, T., et al., 2019. Semantic web services composition in the astrophysics domain: issues and solutions. Future Gener. Comput. Syst. 90, 185-197.
Luz, M., Nogueira, J. d. M., Cavalini, L., Cook, T., 2015. Providing full semantic interoperability for the fast healthcare interoperability resources schemas with resource description framework. In: International Conference on Healthcare Informatics.
Mandel, J., et al., 2016. SMART on FHIR: a standards-based, interoperable apps platform for electronic health records. J. Am. Med. Inform. Assoc. 23 (5), 899-908.
Moussallem, D., Wauer, M., Ngomo, A.-C., 2018. Machine translation using semantic web technologies: a survey. J. Web Semantics 51, 1-19.
Nachimuthu, S., Lau, L., 2007. Practical issues in using SNOMED CT as a reference terminology. Stud. Health Technol. Inform. 129, 640-644.


Noy, N., et al., 2001. Creating Semantic Web contents with Protege-2000. IEEE Intelligent Systems 16 (2), 60-71.
Obitko, M., Snasel, V., Smid, J., 2004. Ontology design with formal concept analysis. In: CLA.
Paslaru Bontas, E., Tietz, S., Tolksdorf, R., Schrader, T., 2004. Generation and management of a medical ontology in a semantic web retrieval system. In: On the Move to Meaningful Internet Systems 2004: CoopIS, DOA, and ODBASE, pp. 637-653.
Peng, C., Goswami, P., 2019. Meaningful integration of data from heterogeneous health services and home environment based on ontology. Sensors 19 (8), 1747.
Peng, C., Goswami, P., Bai, G., 2018. Linking health web services as resource graph by semantic REST resource tagging. Procedia Comput. Sci. 141, 319-326.
Pisanelli, D., Gangemi, A., Battaglia, M., Catenacci, C., 2004. Coping with medical polysemy in the semantic web: the role of ontologies. Studies in Health Technology and Informatics 107, 416-419.
RDF, 2014. RDF - semantic web standards. <http://www.w3.org/RDF/>. (Accessed February 2020).
RDF FHIR, 2019. RDF - FHIR v4.0.1. <https://www.hl7.org/fhir/rdf.html#ontologies>. (Accessed March 2020).
Read Codes, 1982. SCIMP guide to Read Codes | SCIMP. <https://www.scimp.scot.nhs.uk/better-information/clinical-coding/scimp-guide-to-read-codes>. (Accessed March 2020).
Sabou, M., d'Aquin, M., Motta, E., 2006. Using the semantic web as background knowledge for ontology mapping. In: Ontology Matching.
Shah, P., Thakkar, A., 2019. Comparative analysis of semantic frameworks in healthcare. Healthc. Data Analytics Manag., 133-154.
Shaver, D., 2006. The HL7 evolution: comparing HL7 version 2 to version 3, including a history of version 2. <https://corepointhealth.com/wp-content/uploads/hl7-v2-v3-evolution.pdf>. (Accessed March 2020).
SMART, 2014. SMART app gallery. <https://gallery.smarthealthit.org/>. (Accessed March 2020).
SNOMED, 2005. SNOMED CT e-learning platform. <https://elearning.ihtsdotools.org/course/view.php?id=22&section=2#getting-started>. (Accessed March 2020).
SNOMED CT OWL Guide, 2009. SNOMED CT OWL Guide - SNOMED Confluence. <http://snomed.org/owl>. (Accessed 30 March 2020).
Solbrig, H., Prud'hommeaux, E., Jiang, G., 2017. Blending FHIR RDF and OWL. CEUR Workshop Proc. 2042.
Solbrig, H., et al., 2017. Modeling and validating HL7 FHIR profiles using semantic web shape expressions (ShEx). J. Biomed. Inform. 67, 90-100.
Sonsilphong, S., Arch-int, N., 2013. Semantic interoperability for data integration framework using semantic web services and rule-based inference: a case study in healthcare domain. J. Convergence Inf. Technol. 8, 150-159.
Stearns, M., Price, C., Spackman, K., Wang, A., 2001. SNOMED clinical terms: overview of the development process and project status. In: Proceedings, AMIA Symposium, pp. 662-666.
Thuy, P., Lee, Y.-K., Lee, S., 2012. S-Trans: semantic transformation of XML healthcare data into OWL ontology. Knowl. Syst. 35, 349-356.
Timm, J., Renly, S., Farkash, A., 2011. Large scale healthcare data integration and analysis using the semantic web. Stud. Health Technol. Inform. 169, 729-733.
Vicknair, C., et al., 2010. A comparison of a graph database and a relational database. In: Proceedings of the 48th Annual Southeast Regional Conference - ACM SE '10.
Viktorović, M., Yang, D., Vries, B., Baken, N., 2019. Semantic web technologies as enablers for truly connected mobility within smart cities. Procedia Comput. Sci. 151, 31-36.
W3C, 2009. All standards and drafts - W3C. <https://www.w3.org/TR/>. (Accessed March 2020).
Wang, Y., et al., 2007. Structural methodologies for auditing SNOMED. J. Biomed. Inform. 40 (5), 561-581.
Zhang, G.-Q., 2010. Large-scale, exhaustive lattice-based structural auditing of SNOMED CT. Knowl. Sci., Eng. Manag., 615.
Zhang, G.-Q., Bodenreider, O., 2010. Using SPARQL to test for lattices: application to quality assurance in biomedical ontologies. Lecture Notes Comput. Sci., 273-288.
Zhao, M., Zhang, S., Li, W., Chen, G., 2018. Matching biomedical ontologies based on formal concept analysis. J. Biomed. Semant. 9 (1), 11.


5 A knowledge graph of medical institutions in Korea

Haklae Kim, Chung-Ang University, Seoul, South Korea

5.1 Introduction

According to the Medical Service Act (The Ministry of Health and Welfare, 2020), medical institutions refer to a "place or organization where medical personnel provide medical services to the general public or multiple specific people" (The Ministry of Health and Welfare, 2020). Medical personnel in this Act refer to a doctor (physician), a dentist, an Oriental medicine doctor, a midwife, or a nurse authorized by the Minister of Health and Welfare. Medical institution data are required to link information on expertise in the medical domain with socio-cultural information. Medical institutions can be divided into several categories, such as a clinic, a midwifery clinic, or a hospital-level institution (Oh et al., 2011). These types are the basis for classifying medical subjects, the composition of doctors, and the range of medicine practiced. In addition, because medical institutions play a crucial role in social activities, there is a strong connection with social infrastructure such as administrative districts, populations, and transportation.

Medical institution information is provided by various organizations such as the Ministry of Health and Welfare [1], the Health Insurance Review and Assessment Service [2], the Korean Hospital Association [3], and the Healthcare Bigdata Hub [4]. However, the attributes of datasets vary depending on the institution (Kim, 2018). For example, the Public Data Portal [5] provides a dataset of medical institutions in each area by local government, but the attributes of the datasets are not common. Therefore it is difficult to secure one complete dataset from various data sources. In addition, medical institution information needs to be connected with external data such as addresses, administrative districts, and transportation, which necessitates a review of the use of public data (Kim, 2018).

This chapter presents a method of constructing a knowledge graph by connecting external data related to medical institution information. The knowledge model is designed to reflect the characteristics of medical institutions in accordance with the Act, and knowledge graph technology is applied to connect external data such as administrative areas. The chapter is organized as follows. Section 5.2 examines the concept and application issues of knowledge graphs and open data. Section 5.3 examines the concept and status of medical institutions in Korea. Section 5.4 describes a method of collecting heterogeneous data sources and a process of transforming them into a knowledge graph. Section 5.5 summarizes the conclusion.

[1] http://www.mohw.go.kr/react/jb/sjb030301ls.jsp
[2] https://www.hira.or.kr/rd/hosp/getHospList.do
[3] http://www.hospitalmaps.or.kr/hm/frHospital/hospital_list_state.jsp?s_mid=020100
[4] https://opendata.hira.or.kr/home.do
[5] http://data.go.kr

5.2 Related work

5.2.1 Formal definition of knowledge base

A knowledge base plays an important role in interlinking thousands of data sources such as data warehouses, customer information, and enterprise resource management in enterprise environments (Kim, 2017b). It can consist of directly business-relevant data, such as products, supply chains, or customer data, along with common sense and general-purpose knowledge for an enterprise, such as countries, cities, airports, or facts about the world (Kasneci et al., 2008; Kim, 2017a). Several candidates already exist for constructing a fundamental knowledge base to realize general and global knowledge. Formally, the data model for a knowledge base is a directed, labeled graph (Kim, 2017b; Kasneci et al., 2008; Kim et al., 2016),

$$(V, \mathcal{E}, l, N_{\mathcal{L}}, \mathcal{E}_{\mathcal{L}}) \qquad (5.1)$$

where $V$ denotes a set of nodes, $\mathcal{E} \subseteq V \times V$ is a multiset of edges, $N_{\mathcal{L}}$ is a set of node labels, and $\mathcal{E}_{\mathcal{L}}$ is a set of edge labels. Each node $v \in V$ is assigned a label $l(v) \in N_{\mathcal{L}}$, and each edge $e \in \mathcal{E}$ is assigned a label $l(e) \in \mathcal{E}_{\mathcal{L}}$. Each node describes an entity as an instance. For example, a node $v$ with label $l(v) = \text{Barack Obama}$ represents the president of the United States (Kim, 2017b). Likewise, each edge represents a relationship between two entities. If $w$ is a node with label $l(w) = 1961$, the fact that "Barack Obama was born in 1961" is represented by an edge $e = (v, w)$ with the edge label $l(e) = \text{bornInYear}$. Thus the facts in a knowledge base can be represented by the labels of nodes that are connected by edges (i.e., simply stated, Barack Obama (president) bornInYear 1961).
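As a concrete illustration, a minimal rdflib sketch of the labeled-graph model and the Barack Obama example follows; the ex: namespace and the bornInYear edge label are illustrative assumptions, not part of a published vocabulary.

```python
# A minimal sketch of the labeled-graph model above expressed as RDF with
# rdflib; the ex: namespace and the bornInYear property are illustrative.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS, XSD

EX = Namespace("http://example.org/kb/")

g = Graph()
g.bind("ex", EX)

obama = EX.Barack_Obama                                  # node v
g.add((obama, RDFS.label, Literal("Barack Obama")))      # l(v) = "Barack Obama"
# edge e = (v, w) with label bornInYear encodes the fact
# "Barack Obama was born in 1961"
g.add((obama, EX.bornInYear, Literal("1961", datatype=XSD.gYear)))

print(g.serialize(format="turtle"))
```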

5.2.2 Public data in Korea

The Act on the Promotion of the Provision and Use of Public Data (The Ministry of the Interior and Safety, 2013) is designed to prescribe "matters for promoting the provision and use of data held and managed by public institutions to guarantee citizens the right to access public data," contribute to the improvement of their quality of life, and develop the national economy by utilizing such public data in the private sector (The Ministry of the Interior and Safety, 2013). According to this act, "public institution" refers to any state agency, local government, or public institution, as defined in subparagraph 10 of Article 3 of the Framework Act on National Informatization, and "public data" denotes any data or information, including databases and electronic files, processed in optical or electronic form and created or acquired and managed by any public institution. In turn, "machine-readable form" means any form of data whose particulars or internal structure can be ascertained, modified, converted, extracted, or otherwise processed by software (The Ministry of the Interior and Safety, 2013). The term "provision" refers to public institutions allowing users to access public data in a machine-readable format or transmitting such data to users by various means (Kim, 2018).

The Public Data Portal is the open data integration system run by the Ministry of the Interior and Safety. The objective of this ministry is to release a variety of open data related to the government of South Korea to facilitate convenience and ease of use by data users (Kim, 2018). The portal provides the white papers issued by government departments and affiliated agencies, knowledge pertaining to the latest topics of interest selected by subject matter experts, and the status of open data releases from data providers. As of March 2020, the Public Data Portal has released 30,646 file datasets, 3,347 open APIs, and 120 standard datasets from 734 public institutions. The file data were provided either in standard formats based on the public data open standard or in formats that did not conform to the standard.

• File data: data in a file format that is periodically updated.
• Standard data: data in a standard format that is compliant with the public data open standard.
• API data: large volumes of frequently updated data.

According to the international evaluation of public data hosted by the OECD and the Web Foundation (The Web Foundation, 2018), Korea has established itself as a leading country. For example, Korea was ranked fourth, alongside France, in the Web Foundation's Open Data Barometer (2018) (The Web Foundation, 2018).

5.3 Medical institutions in Korea

The Medical Service Act (The Ministry of Health and Welfare, 2020) is devised to provide for the matters necessary for medical services to the people, to ensure that all citizens can receive "the benefits of high-quality medical treatment," thereby protecting and improving public health (The Ministry of Health and Welfare, 2020). According to this Act, "medical institutions refer to a place where medical personnel provide medical services or premature birth services to the general public or multiple specific people." Medical institutions in Korea are divided into several categories as follows.

• Clinic-level medical institution: Doctors, dentists, or oriental medicine doctors provide medical services primarily to outpatients. This category comprises medical clinics, dental clinics, and oriental medicine clinics.


• Midwifery clinic: A midwife conducts health activities and provides education and counseling for premature births, pregnant women, and newborn babies.
• Hospital-level medical institution: Doctors, dentists, or oriental medicine doctors provide their medical services primarily to inpatients. This category comprises hospitals, dental hospitals, oriental medicine hospitals, intermediate care hospitals, and general hospitals.

Note that the Minister of Health and Welfare may determine and publicly notify the standard services to be rendered by each type of medical institution, as set forth in paragraph (2) 1 through 3, when deemed necessary for health and medical policies (amended by Act No. 9386, Jan. 30, 2009; and Act No. 9932, Jan. 18, 2010). Furthermore, the Minister of Health and Welfare may designate a general hospital specialized in providing medical services that require high levels of expertise for treating serious diseases as a superior general hospital, from among the general hospitals that satisfy the requirements (Oh et al., 2011; The Ministry of Health and Welfare, 2020). Specialized hospitals are hospital-level medical institutions that provide highly challenging medical treatments for specific diseases, as designated by the Minister of Health and Welfare (Oh et al., 2011). To specialize medical services in small- and medium-sized hospitals, the first specialized hospital system was implemented in November 2011, and a total of 99 specialty hospitals were announced. The second cycle of specialty hospital designation lasted for three years, from January 1, 2015 to 2017. At the second round of screening, both newly applying and existing hospitals could be dropped. Relative evaluation was performed for hospitals that met the specified criteria (absolute evaluation), such as mandatory medical subjects, medical personnel, beds, clinical quality, and medical service level, as determined by the minister.

The medical delivery system is designed to enable all people who need medical services to receive proper medical care in the right place and at the right time through efficient operation of the medical system and medical resources (Oh et al., 2011; Dronina et al., 2016). It is divided into first (clinics), second (local hospitals), and third (general hospitals) tiers of medical institutions for providing effective and efficient medical services. Doctors are responsible for patients with mild symptoms in local clinics, and specialists in secondary or tertiary care institutions are responsible for patients with severe symptoms or difficult diagnosis and treatment. The first tier of medical institutions typically comprises clinics and public health centers; they have fewer than 30 beds and primarily treat outpatients. The second tier comprises hospitals and general hospitals with more than 30 beds and legally prescribed medical treatment. The third tier consists of advanced general hospitals, which should have specialists in all medical subjects and more than 500 beds. Unlike for primary and secondary medical institutions, a "medical referral letter" is required for treatment in tertiary medical institutions; the referral prepared by the medical staff of a first- or second-tier institution must be presented after issuance. Patients can visit a tertiary medical institution without a referral; however, they then cannot receive health insurance benefits from the National Health Insurance Corporation. Table 5.1 summarizes the status of medical institutions by region.
In terms of administrative districts, metropolitan cities have a more uniform distribution of medical institutions than other regions. Medical institutions can also be divided into public and private institutions, as detailed in Table 5.2, and into commercial and nonprofit institutions based on whether they intend to earn a profit.


TABLE 5.1 Status of medical institutions by region in Korea (Ministry of Health and Welfare [10], 2018).

| Region | Superior general hospital | General hospital | Hospital | Intermediate care hospital | Medical clinic | Dental hospital | Dental clinic | Midwifery clinic | Oriental medicine hospital | Oriental medicine clinic |
|---|---|---|---|---|---|---|---|---|---|---|
| Seoul | 13 | 45 | 223 | 119 | 8372 | 65 | 4807 | 3 | 42 | 3614 |
| Busan | 4 | 25 | 137 | 187 | 2342 | 24 | 1265 | 2 | 9 | 1135 |
| Daegu | 5 | 10 | 108 | 64 | 1760 | 17 | 882 | | 2 | 869 |
| Incheon | 3 | 16 | 62 | 72 | 1529 | 9 | 900 | 2 | 26 | 649 |
| Gwangju | 2 | 20 | 78 | 64 | 933 | 13 | 614 | | 87 | 318 |
| Daejeon | 1 | 9 | 49 | 51 | 1068 | 6 | 523 | 1 | 6 | 512 |
| Ulsan | | 8 | 41 | 42 | 604 | 3 | 381 | | 2 | 283 |
| Sejong | | | | 6 | 171 | 1 | 80 | | 1 | 73 |
| Gyeonggi | 5 | 58 | 265 | 336 | 6817 | 36 | 4082 | 7 | 51 | 3070 |
| Gangwon | 1 | 14 | 45 | 33 | 761 | 4 | 382 | | 2 | 354 |
| Chungbuk | 1 | 12 | 39 | 52 | 865 | 3 | 417 | 1 | 8 | 393 |
| Chungnam | 2 | 11 | 45 | 91 | 1062 | 11 | 548 | | 5 | 521 |
| Jeonbuk | 2 | 11 | 77 | 86 | 1153 | 3 | 573 | | 29 | 509 |
| Jeonnam | 1 | 23 | 74 | 84 | 943 | 7 | 469 | 2 | 24 | 377 |
| Gyeongbuk | | 20 | 76 | 120 | 1282 | 14 | 652 | 1 | 4 | 632 |
| Gyeongnam | 2 | 23 | 139 | 144 | 1620 | 20 | 881 | 1 | 9 | 801 |
| Jeju | | 6 | 7 | 9 | 436 | 1 | 212 | 1 | | 185 |

[10] http://www.mohw.go.kr/react/jb/sjb030301vw.jsp?PAR_MENU_ID=03&MENU_ID=0321&CONT_SEQ=349042&page=1


TABLE 5.2 Classification of medical institutions (Korea Statistical Information Service [11], 2018).

| Category | Contribution type | Corporate type | Profit type | Corporate body |
|---|---|---|---|---|
| Public Medical Institutions | National Government | | | National Corporation |
| | Local Government | | | Public Corporation |
| | Special Corporation | | | Special Corporation |
| Private Medical Institutions | Private | Corporation | Nonprofit | School Corporation, Religious Corporation, Social Welfare Corporation, Private Corporation, Foundation Corporation, Company Corporation, Medical Corporation |
| | Private | Private | Profit | Private |

[11] http://kosis.kr/statHtml/statHtml.do?orgId=354&tblId=DT_HIRA43

Medical institutions with public establishments include national, public, municipal, military, and special corporations. Private establishments include school corporations, religious corporations, social welfare corporations, divisions, foundation corporations, company corporations, and medical corporations; moreover, there are medical corporations and individuals. In addition, the establishing bodies can be further subdivided into national contributions, local government contributions, special corporation contributions, private contributions, and individuals. Nationally funded medical institutions include national and military hospitals, whereas locally funded medical institutions include public hospitals. Besides these, there are special corporations such as school corporations, religious corporations, and social welfare corporations. Finally, private medical institutions include private corporations, foundation corporations, company corporations, and individual medical institutions. Depending on whether an organization is for-profit, it is classified as a commercial or a nonprofit medical institution: for-profit medical institutions are privately opened hospitals, and all other corporate-type hospitals are nonprofit medical institutions. The medical institutions are classified and organized according to these criteria as shown in Table 5.2.

5.4 Knowledge graph of medical institutions

5.4.1 Data collection

Medical institution data were collected from the Public Data Portal [6] and the Healthcare Bigdata Hub [7]. The Public Data Portal provides medical institution data from the local governments, whereas the Healthcare Bigdata Hub provides integrated data. The collected data include similar attributes, but their specific values are different. Therefore, to link different types of data, IDs must be individually assigned to the medical institutions. For example, the hospital information consists of 72,525 clinics and hospitals nationwide, with hospital codes, address codes, phone numbers, and address information. In the medical institution facility data, the hospital name and the type code name are the same as in the hospital information, but they provide various types of information about the specific facilities owned by the hospital. However, the entire dataset cannot be identified by the hospital name, and no unique code is provided to identify the hospital. Therefore the hospital code information service [8] provided by the Public Data Portal is used to obtain a unique identifier, which is used as reference information for data linking. This service provides a set of code information associated with medical institutions, originally developed by the Health Insurance Review and Assessment Service. Data that do not have a unique ID are linked based on the hospital code and hospital name. Table 5.3 summarizes the collected datasets for generating a knowledge graph.

[6] http://data.go.kr
[7] https://opendata.hira.or.kr/home.do
[8] https://www.data.go.kr/dataset/15001697/openapi.do?lang=en

TABLE 5.3 Summary of collected datasets.

| Dataset | Number of rows | Major columns (number of columns) |
|---|---|---|
| Hospital | 72,525 | Subject code, Hospital code, Address code, Postal code, Address, Phone number, Website, Established date, Number of doctors, and Geo-coordinates (11) |
| Pharmacy | 22,547 | Subject code, Address code, Postal code, Address, Phone number, Established date, and Geo-coordinates (13) |
| Facilities of Medical Institute | 95,072 | Subject code, Address code, Postal code, Address, Phone number, and Hospital beds of different types (18) |
| Detail | 19,473 | Parking availability, Operation times of medical services, and Options of holiday service (34) |
| Medical subject | 332,241 | Subjects of medical services and Number of doctors (5) |
| Public transportation | 39,810 | Types of public transportation, Line numbers, and Bus or Subway station (5) |
| Medical equipment | 50,217 | Medical equipment code, Medical equipment, and Number of medical equipment (5) |
| Specialized hospital | 105 | Subject code and Code name (3) |

These datasets are interlinked with one another by applying Linked Data technologies (Jovanovik et al., 2014; Kalampokis et al., 2013a; Röder et al., 2018; Fleiner, 2018).
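The following is a minimal sketch of the record-linking step described above, joining two of the collected datasets on the unique hospital code; the file and column names are illustrative assumptions, not the actual dataset schemas.

```python
# A minimal sketch of linking two collected datasets on the unique hospital
# code obtained from the code information service; file and column names
# are illustrative assumptions.
import csv

def load(path: str, key: str) -> dict:
    """Index a CSV export by the given key column."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[key]: row for row in csv.DictReader(f)}

hospitals = load("hospital.csv", "hospital_code")
facilities = load("facilities.csv", "hospital_code")

# Attach facility attributes to every hospital record sharing the same code;
# records without a code would instead be matched on the hospital name.
linked = {code: {**rec, **facilities.get(code, {})}
          for code, rec in hospitals.items()}

print(len(linked), "linked records")
```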

5.4.2 Model of administrative district

All medical institutions are located in the administrative regions of the Republic of Korea. Based on these administrative regions, all current situations are analyzed, and policy decisions are made on specific issues. Therefore consistent administrative district information is an important factor in representing medical institution data. The administrative district is a divisional system created by the state to ensure effective administration. It contains the standard information that can link spatial information, including the physical location of the national land, with the administrative information on the administrative tasks performed by national agencies.

According to the Local Autonomy Act (The Ministry of the Interior and Safety, 2017), the administrative districts of the Republic of Korea consist of upper-level local governments, lower-level local governments, and subdistrict administrative districts. South Korea is made up of 17 first-tier administrative divisions, including six metropolitan cities (gwangyeoksi), one special city (teukbyeolsi), one special autonomous city (teukbyeol-jachisi), and nine provinces (do), including one special autonomous province (teukbyeol-jachido) (Haklae, 2017). These are further subdivided into various smaller entities, including cities (si), counties (gun), districts (gu), towns (eup), townships (myeon), neighborhoods (dong), and villages (ri) (Haklae, 2017). The top tier of the administrative divisions comprises the provincial-level divisions, which are divided into several types: provinces, metropolitan cities, special cities, and special self-governing cities.

The knowledge model for administrative districts represents the legal relationships between individual administrative districts (Haklae, 2017). The KoreaAdministrativeDivision class, which represents the administrative districts of Korea, is the top class of all administrative districts, comprising regional governments, basic municipalities, and nonautonomous districts. Fig. 5.1 presents a schematic of the relationships between the upper and lower levels of the administrative district system in the Republic of Korea. This model represents the relationships between the administrative units defined in the Local Autonomy Act.

FIGURE 5.1 Hierarchical structure of medical institutions.

Note that there is a limit to fully expressing the Korean administrative districts in English; however, they can generally be written in English. For example, a district is divided into an autonomous district (jachigu) and a nonautonomous district (ilbangu); a neighborhood comprises both a legal-status neighborhood (beopjeongdong) and an administrative neighborhood (haengjeongdong), as well as an urban village and a rural village, referred to as "tong" and "ri," respectively. It is not always possible to convey the appropriate meaning of each unit in English. Therefore this model is based on the Korean vocabulary in Roman alphabet notation, and additional comments and labels are provided using the rdfs:label and rdfs:comment properties.
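A minimal rdflib sketch of this labeling convention follows; the class names, namespace, and language tags are illustrative assumptions, not the chapter's actual model file.

```python
# A minimal sketch, assuming an illustrative namespace; Korean unit names are
# kept as romanized rdfs:label values with English rdfs:comment glosses, as
# the chapter describes.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

KAD = Namespace("http://example.org/def/KoreaAdministrativeDivision/")

g = Graph()
g.bind("kad", KAD)

# Top class for every administrative district of Korea
g.add((KAD.KoreaAdministrativeDivision, RDF.type, OWL.Class))

for cls, label, comment in [
    (KAD.Gwangyeoksi, "gwangyeoksi", "metropolitan city"),
    (KAD.Teukbyeolsi, "teukbyeolsi", "special city"),
    (KAD.Do, "do", "province"),
    (KAD.Gun, "gun", "county"),
]:
    g.add((cls, RDF.type, OWL.Class))
    g.add((cls, RDFS.subClassOf, KAD.KoreaAdministrativeDivision))
    g.add((cls, RDFS.label, Literal(label, lang="ko-Latn")))
    g.add((cls, RDFS.comment, Literal(comment, lang="en")))

print(g.serialize(format="turtle"))
```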

5.4.3 Model of medical institutions

The knowledge model reuses the schema.org model vocabulary [9]. All medical institutions are subclasses of the MedicalOrganization class. This model defines the minimum core vocabularies and reuses the vocabulary defined in other data models (Kalampokis et al., 2013b; Kučera et al., 2013; Milošević et al., 2012). As illustrated in Fig. 5.1, medical institutions defined in the Medical Service Act have specific relationships. In particular, a general hospital can be typed as a specialized or superior hospital. A medical clinic has two subclasses, dental clinic and Oriental medicine clinic. Note that a clinic in the Act is equivalent to the MedicalClinic class.

The uniform resource identifier (URI) is a core element for connecting data and must be consistently defined and extended (Kim, 2017a; Wang et al., 2017). While designing the URIs for the given datasets, the primary focus must be to ensure that each URI remains persistent (Papadakis et al., 2015). According to Berners-Lee (2006), the best resource identifiers are designed for simplicity, stability, and manageability, instead of just providing descriptions for people and machines. Vocabularies are collections of classes and properties that are used to describe resources (Papadakis et al., 2015). Vocabulary identifiers follow the pattern /def/{vocabulary}/, whereas classes and properties are defined as follows:

/:host-name/def/{vocabulary}/{class}
/:host-name/def/{vocabulary}/{property}

Note that the convention for a class name within a URI is UpperCamelCase, whereas that for a property name is lowerCamelCase. According to this guideline, for example, the GeneralHospital class and the property medicalSubject can be described as follows:

/:host-name/def/MedicalInstitution/GeneralHospital
/:host-name/def/MedicalInstitution/medicalSubject

An instance can be described by combining the instance identifier (/id/) and the instance name (#instance) as follows:

/:host-name/id/{vocabulary}/{#instance}

[9] http://schema.org
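The URI conventions above can be captured in a few helper functions; the host name in this sketch is a placeholder, and the helpers are illustrative rather than part of the chapter's implementation.

```python
# A minimal sketch of the URI conventions above; the host name stands in
# for /:host-name and the helpers are illustrative.
HOST = "http://example.org"

def class_uri(vocabulary: str, cls: str) -> str:
    """Classes live under /def/{vocabulary}/ and use UpperCamelCase."""
    return f"{HOST}/def/{vocabulary}/{cls}"

def property_uri(vocabulary: str, prop: str) -> str:
    """Properties live under /def/{vocabulary}/ and use lowerCamelCase."""
    return f"{HOST}/def/{vocabulary}/{prop}"

def instance_uri(vocabulary: str, instance: str) -> str:
    """Instances combine the /id/ segment with the instance name."""
    return f"{HOST}/id/{vocabulary}/{instance}"

print(class_uri("MedicalInstitution", "GeneralHospital"))
# http://example.org/def/MedicalInstitution/GeneralHospital
print(property_uri("MedicalInstitution", "medicalSubject"))
# http://example.org/def/MedicalInstitution/medicalSubject
```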


5.4.4 Graph transformation

The collected data were transformed into a knowledge graph using OpenRefine, a standalone open-source application for data cleansing and conversion to other formats (Maali et al., 2012; Delpeuch, 2019; Verlic, 2012). This solution is similar to a spreadsheet application but provides functionality to operate like a database and is used for data wrangling. The collected data were loaded as individual OpenRefine projects. The medical institution identifier is obtained through the hospital code information service (open API) and converted using an encryption method for consistent URI expression.

Data types vary depending on the column, and it is necessary to specify the correct data formats. For example, the total number of doctors working in a medical institution and the number of beds should be numerical, and the date of establishment should be in a date format. OpenRefine provides a function to convert the column data type. The green column in Fig. 5.2 represents the number of doctors in a medical institution, and the result is converted into a number format; columns with numbers in black are the original data before conversion. Object-type data, such as administrative area and postal code, are linked with internal and external linked data. For example, the administrative area can be connected to the administrative area knowledge graph, and the value of linked data can be linked using the reconciliation service provided by OpenRefine. The text in blue in Fig. 5.2 denotes the value of the external linked data in the completed URI format.

FIGURE 5.2 Interlinking heterogeneous datasets using OpenRefine.

Individual data are transformed into a knowledge graph by applying the medical institution knowledge model. The resource description framework (RDF) extension of OpenRefine is a graphical user interface for exporting the data of OpenRefine projects in a set of graph data formats (Delpeuch, 2019). The RDF extension provides the function to convert the columns of a dataset to graphs. The RDF skeleton allows columns to be edited in the Subject-Predicate-Object format, and the Subject and Object can be declared in the URI format. The Predicate is used to define attributes, and a previously added vocabulary can be used. The "mi" shown in Fig. 5.3 is a namespace for the vocabulary defined in the medical institution knowledge model.

Consequently, the knowledge graph of the given datasets has approximately 40,000 entities, including medical institutions and administrative districts, and 820,000 facts. As shown in Fig. 5.4, all entities in the knowledge graph are interconnected and can be discovered via other entities. Some domain-specific knowledge, such as medical devices and equipment and medical subjects, is already interlinked into the graph. However, more detailed information must be added and interlinked to this graph, as entities in the current version are described with only a simple name and a short description.
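The following sketch reproduces, with rdflib, the kind of column-to-graph mapping that the RDF skeleton performs; the CSV layout, the mi namespace URI, and the property names are illustrative assumptions.

```python
# A minimal sketch, with rdflib, of the Subject-Predicate-Object mapping the
# OpenRefine RDF skeleton performs; CSV layout, namespace URIs, and property
# names are illustrative assumptions.
import csv
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS, XSD

MI = Namespace("http://example.org/def/MedicalInstitution/")
ID = Namespace("http://example.org/id/MedicalInstitution/")

g = Graph()
g.bind("mi", MI)

with open("hospital.csv", newline="", encoding="utf-8") as f:  # hypothetical export
    for row in csv.DictReader(f):
        s = ID[row["hospital_code"]]              # subject URI from the unique code
        g.add((s, RDF.type, MI.GeneralHospital))
        g.add((s, RDFS.label, Literal(row["name"])))
        # numeric and date columns receive typed literals, mirroring the
        # column-type conversion step done in OpenRefine
        g.add((s, MI.numberOfDoctors,
               Literal(int(row["doctors"]), datatype=XSD.integer)))
        g.add((s, MI.establishedDate,
               Literal(row["established"], datatype=XSD.date)))  # assumes yyyy-mm-dd

g.serialize(destination="hospitals.ttl", format="turtle")
```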

FIGURE 5.3 Schema alignment using the RDF skeleton.


FIGURE 5.4 Visualization of the knowledge graph. A medical institution is interlinked to other entities when they have the same information.

5.5 Conclusion

This study introduces a method of transforming the medical institution data provided by the government into a knowledge graph. Information on medical institutions is provided and utilized in various formats; however, the data are not consistently expressed. This study applies knowledge graph technologies to connect the various data sources. The medical institution data include the type of hospital prescribed by law, along with detailed information about each institution. The medical institution knowledge model captures the meaning of and the relationships within the data, and it can expand the semantic connectivity of the data by connecting it with a variety of models, such as administrative areas.


This chapter presents an example of constructing medical institution data as a knowledge graph. However, cases where knowledge graphs are used to solve real problems need to be verified in future research. In addition, it is necessary to study the in-depth modeling of detailed information in domains such as medical fields and medical devices.

References

Berners-Lee, T., 2006. Linked data - design issues. W3C. <https://www.w3.org/DesignIssues/LinkedData.html>.
Delpeuch, A., 2019. A survey of OpenRefine reconciliation services. CoRR abs/1906.08092.
Dronina, Y., Yoon, Y.M., Sakamaki, H., Nam, E.W., 2016. Health system development and performance in Korea and Japan: a comparative study of 2000-2013. J. Lifestyle Med. 6 (1), 16-26.
Fleiner, R., 2018. Linking of open government data. In: 2018 IEEE 12th International Symposium on Applied Computational Intelligence and Informatics (SACI), pp. 1-5.
Haklae, K., 2017. Building knowledge graph of the Korea administrative district for interlinking public open data. J. Korea Contents Assoc. 17 (12), 1-10.
Jovanovik, M., Najdenov, B., Strezoski, G., Trajanov, D., 2014. Linked open data for medical institutions and drug availability lists in Macedonia. In: Bassiliades, N., Ivanovic, M., Kon-Popovska, M., Manolopoulos, Y., Palpanas, T., Trajcevski, G., Vakali, A. (Eds.), ADBIS (2), Vol. 312 of Advances in Intelligent Systems and Computing. Springer, pp. 245-256.
Kalampokis, E., Tambouris, E., Tarabanis, K.A., 2013a. Linked open government data analytics. In: Wimmer, M.A., Janssen, M., Scholl, H.J. (Eds.), EGOV, Vol. 8074 of Lecture Notes in Computer Science. Springer, pp. 99-110.
Kalampokis, E., Tambouris, E., Tarabanis, K.A., 2013b. On publishing linked open government data. In: Ketikidis, P.H., Margaritis, K.G., Vlahavas, I.P., Chatzigeorgiou, A., Eleftherakis, G., Stamelos, I. (Eds.), Panhellenic Conference on Informatics. ACM, pp. 25-32.
Kasneci, G., Suchanek, F.M., Ifrim, G., Elbassuoni, S., Ramanath, M., Weikum, G., 2008. NAGA: harvesting, searching and ranking knowledge. In: Wang, J.T. (Ed.), Proceedings of the ACM SIGMOD International Conference on Management of Data, SIGMOD 2008, Vancouver, BC, Canada, June 10-12, 2008. ACM, pp. 1285-1288.
Kim, H., 2017a. Building a k-pop knowledge graph using an entertainment ontology. Knowl. Manag. Res. Pract. 15 (2), 305-315.
Kim, H., 2017b. Towards a sales assistant using a product knowledge graph. J. Web Semant. 46-47, 14-19.
Kim, H., 2018. Analysis of standard vocabulary use of the open government data: the case of the public data portal of Korea. Qual. Quantity 53.
Kim, H., He, L., Di, Y., 2016. Knowledge extraction framework for building a large-scale knowledge base. EAI Endorsed Trans. Indust. Netw. Intellig. Syst. 3 (7), e2.
Kučera, J., Chlapek, D., Nečaský, M., 2013. Open government data catalogs: current approaches and quality perspective. In: Kő, A., Leitner, C., Leitold, H., Prosser, A. (Eds.), Technology-Enabled Innovation for Democracy, Government and Governance, Vol. 8061 of Lecture Notes in Computer Science. Springer, Berlin, Heidelberg, pp. 152-166.
Maali, F., Cyganiak, R., Peristeras, V., 2012. A publishing pipeline for linked government data. In: Simperl, E., Cimiano, P., Polleres, A., Corcho, O., Presutti, V. (Eds.), ESWC, Vol. 7295 of Lecture Notes in Computer Science. Springer, pp. 778-792.
Milošević, U., Janev, V., Ivković, M., Vraneš, S., 2012. Linked open data: towards a better open government data model for Serbia. In: Proceedings of the Infotech 2012 ICT Conference, JURIT.
Oh, H., Park, J.-S., Park, A.-R., Pyun, S.-W., Kim, Y., 2011. A study on revitalization of primary healthcare organizations through development of standard functions. J. Korean Med. Assoc. 54, 205.
Papadakis, I., Kyprianos, K., Stefanidakis, M., 2015. Linked data URIs and libraries: the story so far. D-Lib Mag. 21 (5/6).


Röder, M., Stoilos, G., Geleta, D., Shamdasani, J., Khodadadi, M., 2018. Medical knowledge graph construction by aligning large biomedical datasets. In: Shvaiko, P., Euzenat, J., Jiménez-Ruiz, E., Cheatham, M., Hassanzadeh, O. (Eds.), OM@ISWC, Vol. 2288 of CEUR Workshop Proceedings. CEUR-WS.org, pp. 218-219.
The Ministry of Health and Welfare, 2020. Medical Service Act. <https://www.law.go.kr/LSW/eng/engLsSc.do?y=0&x=0&menuId=2&query=Medical+service+act&section=lawNm#liBgcolor26>.
The Ministry of the Interior and Safety, 2013. Act on promotion of the provision and use of public data. <http://www.law.go.kr/LSW/eng/engLsSc.do?menuId=2&section=lawNm&query=public+data&x=0&y=0#liBgcolor2>.
The Ministry of the Interior and Safety, 2017. Local Autonomy Act. <http://www.law.go.kr/LSW/eng/engLsSc.do?menuId=2&section=lawNm&query=Local+Autonomy+Act&x=0&y=0#liBgcolor15>.
The Web Foundation, 2018. Open Data Barometer - Leaders Edition, September 2018. <https://webfoundation.org/research/open-data-barometer-leaders-edition/>.
Verlic, M., 2012. LODGrefine - LOD-enabled Google Refine in action. In: Lohmann, S., Pellegrini, T. (Eds.), I-SEMANTICS (Posters & Demos). CEUR-WS.org, pp. 31-37.
Wang, Q., Mao, Z., Wang, B., Guo, L., 2017. Knowledge graph embedding: a survey of approaches and applications. IEEE Trans. Knowl. Data Eng. 29 (12), 2724-2743.


6 Resource description framework based semantic knowledge graph for clinical decision support systems

Ravi Lourdusamy and Xavierlal J. Mattam, Sacred Heart College (Autonomous), Tirupattur, India

6.1 Introduction

Knowledge representation (KR) has been a great challenge in the construction of knowledge-based systems (KBSs). For years, syntactic rules have been used for the machine representation of knowledge. In syntactic representation, knowledge is inferred from the syntactic analysis of information, using rules that relate the parts of a syntactic analysis to the meaning they represent. It is a rule-based formal symbolic logic system that uses deductive grammar to represent natural language, and it involves a lengthy process of parsing the different words in a sentence and composing matching sentences. In syntactic KR, the process of representation is the same for both structured and unstructured data. Semantic representation, on the other hand, maps words directly to their meanings, and the words are linked to fit a particular template with a similar meaning. All the words in a particular domain and their relations are mapped in the semantic graph representation. The linked data form a repository and can be used to make meaningful sentences about a domain. The use of semantic representation can greatly reduce the effort of knowledge extraction and knowledge base creation. Moreover, being closer to natural language forms, the semantic graph representation is more easily understandable in an application like a clinical decision support system (CDSS). On structured data, the creation and use of a semantic web-based knowledge system is easier than on unstructured data.

Formal languages use syntactic structures, while natural languages are semantically based. When representing natural languages in a formal language structure, a combination of both structures makes the representation efficient and robust. The syntactic-semantic technique of KR can be realized in a semantic knowledge graph (SKG) by creating a lexicon from the unstructured data using syntactic techniques and then creating the SKG. Another combination of techniques that can be used in the SKG is that of taxonomy and ontology. While an ontology gathers the various concepts within a domain and describes the relationships between them, a taxonomy places all the concepts of a domain in hierarchical order; the relationships between the various concepts are not captured by the hierarchy. Automatic building of taxonomies from unstructured data is possible, and ontologies can be built from the concepts in a taxonomy by establishing the relationships between the various concepts.

The simple knowledge organization system (SKOS), a construct of the World Wide Web Consortium (W3C), is built using the resource description framework (RDF). SKOS represents concepts in terms of their abstract notions. It is a representation of knowledge organization schemes such as thesauri, terminologies, indices, subject heading lists, taxonomies, glossaries, catalogs, and other types of controlled vocabulary within the semantic web framework. The class and properties of each concept in SKOS are defined using RDF resources. The concept properties include the preferred index terms, which may be more than one, the meaning of the terms in alternate terms, and other details of the concept. In most cases the organization of the concepts in SKOS is hierarchical; where the concepts are not organized in hierarchical order, they are linked by associative relationships. A lexicon is first created in SKOS, and the relationships between the various concepts are then identified and developed using RDF. Using the RDF-based semantic web, SKOS provides a common framework that allows data to be used across applications.

RDF, developed under the W3C specifications, is designed for data exchange using a metadata data model based on the eXtensible Markup Language (XML) syntax. The conventions used in RDF allow the interoperability of classes among individual metadata element sets. The conventions represent semantics using standard representation techniques that are simple and powerful data models. The RDF data model allows the publication of vocabularies that are human-readable while at the same time being machine-processable. Separate information sources can make use of the RDF structure so that information from different sources can be combined to build a common knowledge base. The knowledge base created using the RDF structure can also be the information source for another knowledge base.

A CDSS requires medical information for its knowledge base. The medical information is collected from various information sources like medical literature, hospital records, research findings, local health records, and so on. It has to be collected, validated, and evaluated to form the knowledge base of the CDSS. The semantic web can simplify the whole process and structure a robust knowledge base for any decision support system (Jain, 2020). In the case of a CDSS, the relevance and reliability of the knowledge are lifesaving. The knowledge has to be in the right place at the right time to support decision making. Delay in knowledge acquisition and the display of relevant information can cost lives, and processes once started based on a certain decision cannot be altered or reversed. Therefore knowledge in a CDSS is not only important but also crucial to saving lives.
Such knowledge is acquired by filtering a huge amount of data and machine-processing information. The data for knowledge can come from both structured and unstructured sources. The metadata has to be formed from the sources for the understanding of both users and machines. An RDF-based SKG will, therefore, be the most apt form of KR for an application such as the CDSS.
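As a concrete illustration of the SKOS constructs described above, a minimal rdflib sketch follows; the clinical concept (borrowing the "inflammation" polysemy example from Chapter 4), its labels, and the scheme URI are illustrative assumptions.

```python
# A minimal sketch of a SKOS concept in RDF using rdflib; the concept,
# labels, and scheme URI are illustrative assumptions.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/clinical/")

g = Graph()
g.bind("skos", SKOS)

g.add((EX.Inflammation, RDF.type, SKOS.Concept))
g.add((EX.Inflammation, SKOS.prefLabel,
       Literal("inflammation", lang="en")))            # preferred index term
g.add((EX.Inflammation, SKOS.altLabel,
       Literal("inflammatory response", lang="en")))   # alternate term
g.add((EX.Inflammation, SKOS.broader, EX.PathologicProcess))  # hierarchical link
g.add((EX.Inflammation, SKOS.related, EX.Infection))   # associative relationship
g.add((EX.Inflammation, SKOS.inScheme, EX.ClinicalVocabulary))

print(g.serialize(format="turtle"))
```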


This chapter is an attempt to analyze semantic web-based KR techniques using RDF to make CDSS efficient, reliable, and robust for the decision-making process. Any decision support system can be similarly configured to achieve the goal of decision support. The second section of the chapter is a description of RDF; it elaborates on how RDF can be used for KR, especially as a SKG. The third section describes SKOS; here, the W3C recommendation for the KR of structured vocabularies in the form of classification schemes, thesauri, subject heading systems, taxonomies, and so on is presented. SKOS uses RDF and was initially used in the Semantic Web. The fourth section is about the SKG; a knowledge graph is a graphical KR, and the use of semantics adds clarity and efficiency. The fifth section is on the SKG for CDSS; it elaborates on how KR in a knowledge-based CDSS can be built using an RDF-based SKG. In the sixth section, the theme of the chapter is summarily discussed and future possibilities are mentioned. The last section concludes the chapter.

6.2 Knowledge representation using RDF

KR is the process of representing knowledge in a machine-processable and machine-readable format. It is very significant in any KBS. The use of RDF can enhance KR and increase the efficiency, reliability, and robustness of the KBS.

6.2.1 Knowledge-based systems

KBSs are rule-based systems that interact with multiple rules to infer knowledge. While individual rules are straightforward, inferences are made through multiple complex procedures based on the rules. Rule-based KBSs are of two types. Forward chaining or production systems make use of production rules that are fired by a rule engine when the conditions in the working storage are met. When a production rule is fired, its inferences update the working storage, and therefore the next set of conditions to fire a production rule depends on the rule engine and the current data in working storage. In contrast, backward chaining or logical programming rules begin by testing the truth value of conditions, which in turn depends on the truth value of other conditions that need to be tested. If at any point the truth value of a condition fails, backtracking is used to test alternate paths of conditions (Cummins, 2017; Swain, 2013b).

A KBS represents knowledge explicitly so that the knowledge can be used for reasoning. The reasoning is rule-based, and different logical inferencing techniques are used to produce conclusions based on the represented knowledge. The traditional KBS consisted of the knowledge base, the inference engine, and the working memory. The early inferencing engines used the rules stored in the knowledge base to arrive at logical conclusions. The greatest drawback of such KBSs was that knowledge represented in the knowledge base was dated, and constant updating and maintenance of the knowledge base were difficult. Knowledge, unlike data, is not just a group of elements or items. Data are any quantities, characters, or symbols that can be stored in a computer and on which operations can be performed. Information is processed data that establishes the relationship between the data in a particular domain. Knowledge combines information based on certain rules and can


produce new information (Swain, 2013b; Harmon, 2019; Swain, 2013a; Becerra-Fernandez and Sabherwal, 2015; Graham, 1997; Greenes, 2014a; Bonissone, 1989).

The knowledge base of the KBS is sometimes considered an extension of the original database system. The knowledge base might contain only facts (data) and rules (Graham, 1997). Such a knowledge base architecture can be integrated into an existing database model. Although such integration has many advantages, it can render the system more complex. It will also add to the existing issues concerning database systems. Another reason that discourages such integration is the basic difference between a knowledge base and a database. Since the development of a knowledge base differs from that of a database, the creation and functioning of a knowledge base should not be hindered by the limitations of the database (Mylopoulos, 1986).

KBS and expert system are often used as synonyms. While expert systems are sometimes considered an evaluated KBS, at other times KBS is considered a generic term encompassing expert systems (Graham, 1997). The earliest known example of a KBS is Dendral, which was used to identify organic molecules using a chemical knowledge base and the reasoning process of a chemist. Newer KBSs can use artificial intelligence in their inference mechanisms and knowledge base development. The declarative programming used in a KBS allows users to specify the goals to achieve and the mechanism to be used. Since the knowledge base is independent of the reasoning techniques, different applications can make use of the same knowledge base. The reasoning techniques are part of the expert shell that is used in the KBS. The reasoning technique in a KBS depends on the method used for the representation of knowledge, such as symbolic logic, lambda calculus, semantic representation, and so on. The difficulty in developing a robust KBS lies in the fact that evaluating its competence and consistency is demanding (Jain and Meyer, 2018). Moreover, since a KBS is linked to expert knowledge, it remains explicit and is not useful for general problems of a domain (Swain, 2013b; Adelman, 2017).

Although there are a variety of methods to evaluate a KBS, the evaluation methodology is itself evolving and is far from perfect. Nevertheless, there has been a lot of effort in reducing anomalies in the verification, validation, testing, and evaluation of the knowledge base and the KBS. A KBS is part of a larger system, like the decision support system, and therefore the process of evaluation also depends on the evaluation of the larger system (Adelman, 2017). The anomalies in a KBS have to be taken care of to avoid considerable damage to the larger system. Since most KBSs are rule-based, errors occur in the use of the rules. The anomalies in a KBS are popularly classified as four errors, namely, redundancy, conflict, circularity, and deficiency. Redundancy errors occur when there is more than one rule that can result in the same inference. Conflict errors happen when rules are contradictory or when incompatible inferences can be made from the existing rules. Circularity errors are those in which an inference depends on itself. Deficiency errors result when no inference can be made from the given set of rules (Edelkamp and Schrödl, 2012).
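As a minimal sketch of the forward-chaining mechanism described above (the rules and facts are hypothetical, not from any particular KBS), a production rule fires when its conditions are present in working storage, and its inference is added back to working storage until nothing new can be derived:

```python
def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions hold until no new facts appear."""
    working = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, inference in rules:
            if conditions <= working and inference not in working:
                working.add(inference)  # the fired rule updates working storage
                changed = True
    return working

# Each rule: (set of required facts, inferred fact); all names hypothetical.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "travel_history"}, "isolation_recommended"),
]
print(forward_chain({"fever", "cough", "travel_history"}, rules))
# {'fever', 'cough', 'travel_history', 'flu_suspected', 'isolation_recommended'}
```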

6.2.2 Knowledge representation in knowledge-based system

KR is the crucial component of a KBS. It is the language with which knowledge is written in the knowledge base. The inference engine works with a knowledge compiler to use


the knowledge and to create new knowledge. The knowledge acquired from various structured and unstructured sources is placed in the knowledge base using KR. Therefore KR is what makes the KBS successful. Two important characteristics of an efficient KR are, first, that it should be expressive and should possess the capacity to represent concepts of high-level languages, avoiding knowledge acquisition bottlenecks, and second, that it should be flexible enough for constant modification and updating of the knowledge base. Predicate calculus, production rules, and frames were used for KR for a long time (Bonissone, 1989).

There are certain constraints related to KR in a KBS. (1) KR may not be accurate, since it is a simplified representation of the real world. (2) Reality may be represented differently in different KRs. (3) The inference engine in the KBS is limited by the KR, and therefore the reasoning process depends upon a KR that may not be exactly as in the real world. (4) KR is also constrained by the computing power of the system. (5) Just as there are limitations in natural language, communication using KR is also limited (Dubitzky, 2013; Brachman and Levesque, 2004).

Since there can be more than one representation of reality, there are certain "criteria of adequacy" that have to be maintained for a KR to be acceptable to the KBS. The criteria are: (1) There should be no contradiction between what needs to be represented and what is represented by the KR. (2) The KR should have the capacity to express the facts that are necessary for the KBS. (3) The KR should also help the reasoning process of the inference engine. (4) The KR should be machine-processable. If the KR fails to fulfill any of these criteria, the KR is inadequate for the KBS and the KBS remains inefficient. Six other criteria are important for the KR to be efficient rather than merely adequate. They are: (1) Lack of ambiguity, that is, the semantics or meaning of a representation should be one and not many. (2) Clarity, that is, the semantics or meaning of a representation should be easily understood. (3) Uniformity, that is, how the KR is developed should not be arbitrary. (4) Notational convenience, that is, the KR should have the ability to represent all concepts convenient to the KBS. (5) Relevance, that is, the knowledge represented by the KR should be relevant to the KBS. (6) Declarativeness, that is, the meaning of the representation is independent of how it is made use of, and the representation can be substituted by another without losing its truth value (Bench-Capon, 1990; Olimpo, 2011; Woods, 1987).

Graph-based KR has advantages in comparison to traditional text-based KR, since it facilitates knowledge flow and better understandability of the representation. Patterns, abstractions, and new relationships in knowledge become clear in graphical KR. Therefore graph-based approaches in KR help in the conceptualization and development of the knowledge base, as they support, orient, and enhance those processes. Graph-based KR can include text-based KR, and text can enhance the expressive capacity of the graph-based approaches (Olimpo, 2011).

6.2.3 Resource description framework for knowledge representation

RDF was developed by the W3C in 1999 to allow interoperability of metadata across web resources. The precursors of RDF are the Dublin Core and the Platform for Internet Content Selection (PICS) content rating initiative. RDF is modeled for describing data


in the Semantic Web in a machine-readable and machine-processable format. The Syntax and Schema specifications of RDF provide uniform metadata syntax and schemes that allow the development of applications and services across the different resource description sources. The semantics of objects in the various resources are expressed by Uniform Resource Identifiers (URIs), which allow the resources to be exploited. Once the resources are uniformly identified, applications and services can develop process rules for automated decision making regarding web resources. RDF is modeled formally on the principles of a directed labeled graph, in which the semantics of resource description are expounded. A resource in an RDF description is shown using the property type and value of the resource. RDF is an application of XML (Iannella, 1998; Miller, 1998; Pan, 2009; Bizer et al., 2018b; Witten et al., 2010; Singh and Huhns, 2006).

RDF is used for KR since it fulfills the four requirements of a KR modeling language: it should be able to explicitly represent all the concepts of a domain; it should be able to represent the relationships between the concepts; it should include the constraints on the relationships; and finally, the concepts represented should be usable by the application for which they have been represented. Although RDF is an application of XML, RDF can apply mathematically sound semantics while at the same time being machine-processable in a well-defined manner. Moreover, the graph structure of RDF allows the encoded representation of any discrete knowledge structure. The language used by RDF for KR is simple and is standardized for applications (Singh and Huhns, 2006).

KR in RDF is in the form of triples (s, p, o) containing a subject (s), a predicate (p), and an object (o). The subject and object are concepts and entities of a domain, while the predicate is the attribute or relationship between the entities. The subject is also the resource, the predicate is the property, and the object is the property type or value. All facts in the real world are represented using triples. The triples are also represented using the RDF graph, in which the subject and object of a triple are adjacent nodes while the edge between them is the predicate. All related information becomes subgraphs. Common resources across multiple sources are represented by the use of their corresponding URIs (Bizer et al., 2018b; Curé and Blin, 2015).

RDF datasets are stored and managed internally using structures (native RDF, relational, graph, etc.) similar to the database systems in the RDF store. A query language like SPARQL allows efficient processing of queries over the RDF datasets in the RDF store. Inferencing is not possible in all RDF stores but is allowed as a special extension. There are two types of storage structures for RDF stores, namely, centralized storage and distributed storage. Centralized storage systems are of three types: (1) triple stores, in which the RDF triples are stored in a simple relational table with attributes; (2) vertically partitioned tables, in which the properties are stored in separate tables; and (3) property tables, in which the properties of a common subject are kept in the same table. Strategies for distributed storage include storing on different machines, using a distributed file system, and storing using an identity key (Hose and Schenkel, 2018).
RDF technologies consist of all the technologies, such as data exchange formats, query languages, and the various vocabularies and ontologies, that are associated with the RDF representation. RDF technologies help integrate the syntactic and semantic specifications of resources in terms of defining the properties of the resources and the relationships between them. These technologies process the RDF datasets contained in RDF stores.


The common data query language used in RDF is SPARQL, which can work on the RDF store. There is also a wide variety of controlled vocabularies, like SKOS, Dublin Core, and schema.org, and ontological languages, like RDF Schema (RDFS) and OWL 2, based on RDF (Bizer et al., 2018a).
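As a concrete sketch of the triple model and SPARQL querying described in this section, the following uses the Python rdflib library; the namespace, resources, and the tiny graph are hypothetical rather than drawn from any real terminology:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/clinic/")  # hypothetical namespace

g = Graph()
# Each fact is a (subject, predicate, object) triple.
g.add((EX.Aspirin, RDF.type, EX.Drug))
g.add((EX.Aspirin, RDFS.label, Literal("Aspirin")))
g.add((EX.Aspirin, EX.treats, EX.Fever))

# SPARQL query over the triple store: which drugs treat fever?
results = g.query("""
    PREFIX ex: <http://example.org/clinic/>
    SELECT ?drug WHERE { ?drug a ex:Drug ; ex:treats ex:Fever . }
""")
for row in results:
    print(row.drug)  # http://example.org/clinic/Aspirin
```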

6.3 Simple knowledge organization system

Knowledge organization is required to order knowledge in a way that it can easily be accessed and used. There are many ways in which knowledge is organized in a system. The use of RDF to represent the resources in a domain supports knowledge organization and, in turn, improves the efficiency of the KBS.

6.3.1 Knowledge organization system

Knowledge organization system (KOS) is a general term that covers all types of knowledge management methods, such as authority files, glossaries, dictionaries, gazetteers, subject headings, classification schemes, taxonomies, categorization schemes, thesauri, and so on. KOS can broadly be of three types. Term lists contain terms, such as authority files, glossaries, dictionaries, gazetteers, and so on. Classifications and categories are schemes such as subject headings, classification schemes, taxonomies, categorization schemes, and so on. Relationship lists consist of concepts together with their relationships, such as thesauri, semantic networks, and ontologies (Hodge, 2000; Lei Zeng and Mai Chan, 2004). Authority files, gazetteers, and directories may also be classified as metadata-like models (Zeng, 2008).

KOS initially began as an indexing service but later served many other purposes, like search and browsing, providing semantic domain maps, supporting KBS with concepts, and so on. The searching and browsing functions are the basic complementary functions in most KOS tools. While searching makes use of the term list type of KOS, browsing uses classification schemes. KOS is based on the semantic structure of a domain and uses it in the search and browse functions through concepts, properties of the concepts, and the relationships between concepts. In web services, KOS supports resource discovery and retrieval, helping users to apply indices using the semantic direction given by the KOS. The semantic structure of KOS also helps in building topic maps, which in turn improve the effectiveness of retrieval. Along with these advantages of KOS come the disadvantages of developing one: KOS is expensive in terms of both time and money, but the greater problem lies in the maintenance and updating of the KOS (Kumbhar, 2012).

6.3.2 Simple knowledge organization system

SKOS was first published in August 2009 as a W3C recommendation for systems used for vocabulary and data modeling of KOS. SKOS began with the idea of using RDF to assist with issues related to improving the search interface. The difficulty of transferring existing KOS for use in the semantic web also contributed to the development of SKOS. The semantic web uses formal mathematics to describe domain knowledge, while KOS, on the


other hand, is more intuitive in its knowledge classification. The vocabulary languages of the semantic web, like the resource description framework schema (RDFS) and the web ontology language (OWL), use classes and properties which, in combination with formal reasoning rules, produce new knowledge through inference (Malik et al., 2015). KOS, on the other hand, are term-based or concept-based depending on their KR preference and do not have a formal structure. To use an informal KOS in the formally structured semantic web, it has to be converted, and the conversion process is not easy. SKOS is a simple method of converting a KOS for use in the semantic web (Baker et al., 2013).

Another reason for the development of SKOS is the need to create metadata for the semantic web that contains common vocabularies of a domain held in a shared repository of meaning. SKOS goes beyond the representation of a single KOS, which promotes deeper use of the semantic web. In data management, SKOS allows distributed and decentralized data. SKOS can be combined with other semantic web vocabularies, resulting in enhanced knowledge, and multiple standards can be represented in the underlying graph structure of SKOS. The basic usage of SKOS is the construction of a repository of common types of controlled vocabulary using RDF. The SKOS vocabulary can also be directly extended by constructing RDFS subclasses and subproperties, so that the interoperability and portability of the vocabulary need not be compromised. A combination of OWL and SKOS can enable the development of an expressive foundation and specific evaluation features for the Semantic Web (Miles et al., 2005a; Miles, 2006).

6.3.3 Simple knowledge organization system core and resource description framework

SKOS evolved rapidly due to the need for controlled vocabularies to be represented in a simple form, so that the concepts and concept properties of a domain can be modeled without much expertise and cost. SKOS is therefore seen as an extension to RDF that can represent various concept schemes, such as taxonomies, glossaries, classification schemes, and thesauri, along with the concepts and relationships. The SKOS Core is the fundamental model to represent the framework and content of a concept scheme. The SKOS Core vocabulary consists of RDF properties and RDFS classes to represent the concept scheme through an RDF graph (Miles et al., 2005a; Varlan and Tomozei, 2007; Miles et al., 2005b).

An additional feature that distinguishes SKOS from KOS is the node label in a thesaurus. The node label groups vocabularies by indexing term hierarchy, which makes browsing easier. There are other such groupings in SKOS that allow hierarchical display without disturbing the underlying concepts and properties. Since SKOS is developed using RDF, other RDF vocabularies, like the Dublin Core Metadata Initiative (DCMI) terms, can also be used in combination. Such capabilities make SKOS a standard representation structure for controlled vocabularies, which in turn helps SKOS represent local requirements using extended subclasses or subproperties of RDFS. To help the extension of SKOS, the SKOS Core vocabulary is divided into families such as lexical labeling properties, symbolic labeling properties, documentation properties, semantic relation properties, and so on. Within these families, the properties are hierarchically arranged to allow easy extension at


the corresponding level of semantics. The extendibility of SKOS allows it to go beyond a small group of similar use cases (Miles et al., 2005a; Miles, 2006; Miles et al., 2005b).

SKOS Core evolved from the work done in RDF vocabularies to create thesaurus content. The initial aim of SKOS Core was to make the RDF expression of thesauri conform to the ISO 2788-1986 standard, but it was later felt that lowering the standard could expand the scope of the thesaurus to include other types of controlled vocabulary using the conceptual basis. There is hardly any difference between concepts in RDF using the SKOS Core vocabulary and the classes, properties, or individuals of ontologies, so there has not been a consensus regarding modeling concepts with RDF using SKOS Core vocabularies versus using RDFS or OWL ontologies. In RDF there has been a certain confusion as to whether a statement about a resource concerns a person or a concept; this confusion could lead to mistakes while merging and mapping RDF graphs. So RDFS or OWL ontologies are preferred, although it is much more complicated and time-consuming to program SKOS with RDFS or OWL (Miles et al., 2005b).
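As a small, hedged sketch of the SKOS modeling described above, the following uses rdflib's built-in SKOS namespace; the concept scheme, labels, and relations are hypothetical:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/vocab/")  # hypothetical vocabulary
g = Graph()

# A concept with a preferred label, an alternate label, and a broader
# concept, mirroring how SKOS represents thesaurus-style hierarchies in RDF.
g.add((EX.Disease, RDF.type, SKOS.Concept))
g.add((EX.Influenza, RDF.type, SKOS.Concept))
g.add((EX.Influenza, SKOS.prefLabel, Literal("Influenza", lang="en")))
g.add((EX.Influenza, SKOS.altLabel, Literal("Flu", lang="en")))
g.add((EX.Influenza, SKOS.broader, EX.Disease))  # hierarchical relation
g.add((EX.Influenza, SKOS.related, EX.Fever))    # associative relation

print(g.serialize(format="turtle"))
```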

6.4 Semantic knowledge graph

The graphical representation of datasets helps the KR both from the user's point of view and from that of the KBS developer. Knowledge is represented in the form of a knowledge graph, which can be linked to the semantic conceptualization of the various entities in a domain. The use of RDF in building a knowledge graph that is semantically linked to the resources will improve the reliability of the KBS.

6.4.1 Knowledge graphs

Graph structures to represent knowledge have been studied and used for quite some years, but the idea shot to fame when Google's Knowledge Graph was introduced in 2012. Like all other graph structures, knowledge graphs consist of vertices (or nodes) and edges (or connections). The concepts or entities in a domain are represented by vertices, and they refer to the generic classes of physical objects in the real world. The edges between two adjacent vertices represent the relationship between the concepts or entities; these relationships are also referred to as attributes or properties of the concepts. Knowledge graphs can semantically conceptualize natural language and therefore can be used to represent structured and semistructured knowledge for machine processing. They are also used as knowledge repositories for many knowledge-based applications (Lourdusamy and Mattam, 2020b; Yan et al., 2018; Kejriwal, 2019).

Knowledge graphs are an efficient form of KR from the modeling and computational points of view. From the modeling point of view, the semantic conceptualization and visualizing property of the knowledge graph are useful for developing models by experts and scientists. The reasoning mechanism is also easily understood due to the possibility of visualizing homomorphisms. Moreover, the same language is used both at the interface level and at the computational level. Therefore the ease of creating an inferencing engine for the KBS


with knowledge-graph-based KR, at both the modeling level and the implementation level, significantly reduces the cost in terms of time and effort. From the computational point of view, graph homomorphism unifies the complexity results with the algorithms. Moreover, the visualization of knowledge constructs provided by the graph enables the formation of other algorithmic ideas much more readily than using logical formulas (Michel and Marie-Laure, 2009).

Data held in data structures such as arrays, stacks, linked lists, and so on can be converted into knowledge graphs by linking the data and adding conceptual information to them. The knowledge graph, which is a directed labeled graph, has a concept mapping done to establish the relationships between the concepts. Information linked to the concepts is derived by filtering out unwanted data, so the abstraction of knowledge from the information is done by applying a reasoning process to the already filtered information. The reasoning process takes into consideration the relevance of the information. The ranking of the information according to its relevance is directly related to the probability that the relation assumed when the raw data were initially accepted is contained within the concept. There are no constraints on the formation of the knowledge graph, allowing it to evolve with new concepts and relations. The linking of a knowledge graph to other knowledge graphs allows multiple pieces of information to be combined and strengthens the reliability of the information (Duan et al., 2018).
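As a minimal sketch of a directed labeled graph built from such triples (the entities and relations below are hypothetical):

```python
# Minimal directed labeled graph as a set of (subject, predicate, object)
# triples; concepts are vertices and predicates are edge labels.
triples = {
    ("Aspirin", "isA", "Drug"),
    ("Aspirin", "treats", "Fever"),
    ("Fever", "symptomOf", "Influenza"),
}

def neighbors(node):
    """Return (edge label, target) pairs reachable from a vertex."""
    return [(p, o) for s, p, o in triples if s == node]

print(neighbors("Aspirin"))  # [('isA', 'Drug'), ('treats', 'Fever')] in some order
```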

6.4.2 Semantic knowledge graph

An SKG is a type of knowledge graph in which the semantics of the source is linked to the knowledge graph. Such linking enhances the updating of the knowledge graph with newer resources that come directly from the sources, without any external transference of new knowledge. The primary difference between SKGs and semantic networks is that while semantic networks have binary relations between the nodes, the relations in an SKG can be of any arity. Moreover, in an SKG there is a clear distinction between ontological knowledge and the other types of knowledge, such as implicit and explicit or factual knowledge, since the concepts of the SKG are linked to the source (Michel and Marie-Laure, 2009; Grainger et al., 2016).

The basic form of an SKG consists of a triple: a concept (subject), an entity (object), and the relationship (predicate) between the two. The concepts and entities are resources within a domain. There can be more than one relationship between a concept and an entity. For example, between Jerry and Tom there could be the relationships "relativeOf," "childOf," and "sonOf." These relationships can be placed in a hierarchy from generic to specific. Likewise, the entities themselves could be placed in a hierarchy, as in the example "Male, Person, Father" (see Fig. 6.1A and B). These form the hierarchy of concepts and relationships of the SKG. If all the entities of a domain are placed in a particular hierarchy, newer concepts and relationships can be added to it. So a basic-form SKG will consist of triples each having four tuples: the concept, the entity, the relationship, and a label, as shown in Fig. 6.2. The labels of the concept and entity give their hierarchy, and the degree of the relationship is also mentioned in the labeled graph of the SKG (Michel and Marie-Laure, 2009).


FIGURE 6.1 (A) Hierarchy of relationships: relativeOf (1), childOf (2), sonOf. (B) Hierarchy of concepts: Person (1), Male (2), Tom.

FIGURE 6.2 Concept, entity, relationship, and label of a triple: concept label "Person: Jerry," relation label "sonOf" with degree 2, entity label "Person: Tom."

FIGURE 6.3 Linking SKG to documents in a repository: entities such as Jerry and Tom extracted from documents ("Jerry, son of Tom is playing...," "Tom and Jerry are friends...," "Tom and Jerry are always together") are linked to the SKG nodes Person: Jerry and Person: Tom via the sonOf relation. SKG, Semantic knowledge graph.

In an SKG there could be coreference of concepts, relationships, or entities. If two nodes represent the same entity, the nodes are said to be coreferent. Coreferent nodes can be interpreted lattice-theoretically or order-theoretically. The lattice-theoretic interpretation of coreferent nodes leads to the node that is the intersection of the two nodes, while the order-theoretic interpretation leads to a subtype node that results from the subset of the intersection of the coreferent nodes. The simplest way of representing coreferent nodes that represent a single entity is to merge them into one concept node; the resultant node will be a conjunctive type of node. Since not all coreferent nodes can be merged into a conjunctive subtype, equivalent nodes that should not be merged have to be indicated using an individual marker (Michel and Marie-Laure, 2009).

For automatic creation and modification of an SKG from unstructured text documents, the concepts and entities in the various documents are linked to the nodes of the SKG, as shown in Fig. 6.3. When a new concept is extracted from a document, it forms a new node in the SKG. If the new node has a relationship with any existing node, the new node is linked to the existing node; otherwise, it forms an unconnected SKG. The nodes can be checked for coreference and for the possibility of making conjunctive subtypes, and if possible, the entity from the document is linked to the merged node. There are many ways of ranking the nodes and relationships when linking the entities in a document. One method is to rank by popularity in the documents: here the number of times the entity appears across all the documents in the repository has to be considered, since some documents might mention the entity less and others more, depending on the subject matter. Another method is to rank by the number of relationships an entity has with other entities; when ranking nodes by relatedness, the number of documents checked does not matter. The ranking can also be done according to the depth of the path a concept has in the SKG (Grainger et al., 2016).
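A minimal sketch of the two ranking methods described above, using the hypothetical documents and entities of Fig. 6.3:

```python
from collections import Counter

# Hypothetical document repository (echoing Fig. 6.3).
docs = [
    "Jerry, son of Tom, is playing",
    "Tom and Jerry are friends",
    "Tom and Jerry are always together",
]
entities = ["Jerry", "Tom"]
# SKG edges as (subject, relation, object) triples.
edges = [("Jerry", "sonOf", "Tom")]

# Popularity ranking: total mentions across the whole repository.
popularity = Counter()
for doc in docs:
    for e in entities:
        popularity[e] += doc.count(e)

# Relatedness ranking: number of relationships a node participates in;
# note the documents play no role here.
relatedness = Counter()
for s, _, o in edges:
    relatedness[s] += 1
    relatedness[o] += 1

print(popularity.most_common())   # e.g., [('Tom', 3), ('Jerry', 3)]
print(relatedness.most_common())  # [('Jerry', 1), ('Tom', 1)]
```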


6.4.3 RDF-based semantic knowledge graph

Yago, Freebase, Google's Knowledge Graph, Facebook's Graph Search, Microsoft's Satori, and Yahoo's Knowledge Graph are some well-known examples of knowledge graphs, and most knowledge graphs at present are RDF-based. The concepts and relations in the datasets of an RDF-based SKG are represented as URIs; an entity may be either a URI or a literal. A query of an RDF graph is also done using the triple pattern, and the result of the query, together with a reasoning process, can add new triples to the knowledge graph. The triple structure of the RDF-based SKG also enhances the search and browse functions in a KBS, the two most important functions in any KBS. Moreover, the RDF-based SKG can be adapted to allow faceted search, which is very significant for any search process (Arenas et al., 2016; Yoghourdjian et al., 2017; Arnaout and Elbassuoni, 2018).

Faceted search in an RDF-based SKG has two important challenges. First, the search should produce queries that lead to new knowledge where necessary. The new knowledge has to be added to the knowledge base without disturbing the existing representation; such addition and modification of the knowledge base will guarantee the correctness, robustness, scalability, and extensibility of the system. Second, the faceted search based on the RDF structure of the SKG should be able to produce meaningful queries at the schema level. Such a faceted search will make navigation effective by providing an attractive schema-level architecture. Moreover, it will allow abstraction of facet and value ranking while the core functionality of the present system remains visible (Arenas et al., 2016).

The effective use of an RDF-based SKG depends on its ability to expand its knowledge with knowledge from unstructured texts, in which knowledge is largely available. Also, the ability to search efficiently using a flexible querying mechanism will enhance the SKG. Another factor that improves the efficiency of the SKG is a feature that allows the ranking of query results. The ranking of query results can indicate the significance of a resource and also enable faster faceted searches. But ranking can also lead to ignoring less prominent knowledge available in the knowledge graph; even lesser-known resources in a domain might be useful for knowledge development and have to be displayed in a search query. In other words, there should be diversity in query results to give a broader view of all aspects (Arnaout and Elbassuoni, 2018).

Weight extensions in RDF help it to include ranking. Therefore the RDF in an SKG is a weighted RDF, where the ranking is added as an RDF tuple. Weighted RDF has much more significance than just assisting the query process and building significant knowledge in a KBS; it can also help in other SKG merging operations such as join, union, project, filter, order, and distinction (Lourdusamy and Mattam, 2020b; Liu et al., 2012; Cedeño and Candan, 2011). RDF graph embedding is another way of placing ranks on a resource. Graph embedding converts the nodes and edges into an equivalent vector space without compromising the properties and semantics of the nodes and edges. There are many RDF embedding techniques based on the different applications of the RDF graph. RDF graph embedding is particularly significant for the use of machine learning and other artificial intelligence algorithms on the RDF-based SKG (Ristoski et al., 2019; Saeed and Prasanna, 2018).
The use of these algorithms on SKG datasets has increased the efficiency, reliability, and robustness of the KBS manifold. Weighted RDF or


RDF embeddings support all AI techniques while the semantic properties of the RDF are preserved. Therefore the adaptability of the KBS to numerical inferencing will lead to the gradual evolution of consumer-friendly KBS.
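As a hedged sketch of how a weight or rank can be attached to a triple, the following uses standard RDF reification with rdflib; the ex:weight property, the namespace, and the score are hypothetical, and reification is only one of several possible encodings of weighted RDF:

```python
from rdflib import BNode, Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/skg/")  # hypothetical namespace
g = Graph()

# Standard RDF reification: a node describes the statement
# (Aspirin treats Fever) and carries an extra weight property.
stmt = BNode()
g.add((stmt, RDF.type, RDF.Statement))
g.add((stmt, RDF.subject, EX.Aspirin))
g.add((stmt, RDF.predicate, EX.treats))
g.add((stmt, RDF.object, EX.Fever))
g.add((stmt, EX.weight, Literal(0.87)))  # hypothetical ranking score

print(g.serialize(format="turtle"))
```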

6.5 Semantic knowledge graph for clinical decision support systems

The CDSS is a highly popular and significant use case of a KBS application. Decision support systems in healthcare are of great importance, as efficient decision making by healthcare workers is lifesaving. Although clinical support has to be provided by medical personnel, they can be assisted to a great extent by reliable systems. The KBS in a CDSS has to be robust to support efficient and reliable decisions. An RDF-based SKG can help the development of such a CDSS.

6.5.1 Clinical decision support systems

There has been a rapid evolution of CDSS from the time computers started to be used in the clinical workflow. Decision support systems were initially used in business, but gradually they came to be used for healthcare decisions too. Electronic medical records, which were the precursor to CDSS, were the initial use of computers in healthcare. With the abundance of digital data from medical records and the rapid growth of technology, CDSS has gained importance in both predictive diagnosis and treatment. Newer technologies like the internet-of-everything, machine learning, and big data analytics have supported the development of highly efficient CDSS. These newer technologies have also contributed to instability in the development process of the CDSS: there has been a lack of consistency in the use of technology for CDSS, which has contributed to the inability of CDSS to adopt various relevant technologies. On the whole, the popularity of CDSS has always been low, and the skeptical attitude of medical personnel has also contributed to its lack of use (Lourdusamy and Mattam, 2020a).

CDSS can be described in several ways, but the focus of CDSS is to use computer systems to bring relevant knowledge to all involved in healthcare, which includes patients, medical professionals, and healthcare workers. CDSS was initially used as an information retrieval system; these systems made full use of the medical data entered in the medical health records. Later, CDSS combined the medical data with medical knowledge and used them for alerts, reminders, constraint suggestions, and inferencing. Newer CDSS use diagnostic and therapeutic reasoning with the help of nuances captured from human expertise. With the help of artificial intelligence technology and continuous data inputs, the newer CDSS are highly efficient and robust. The evolution of CDSS was impelled by the growth of technology, the patient-centric approach in healthcare, and the business impetus of medical care (Greenes, 2014a; El-Gayar et al., 2008; Greenes, 2014b).

The type of support or intervention provided by CDSS can be characterized as assisting the diagnostic and therapeutic requirements of patients, recommending tests or treatments, and making use of medical knowledge in decision support. Accordingly, the methods used in CDSS include logical methods, statistical and


probabilistic methods, heuristic methods, mathematical models, and computational intelligence. For the successful use of CDSS, the system should make use of computational intelligence to produce timely interventions. It should also be consistent and cost-effective. An effective CDSS will improve patient safety, reduce cost, and lessen the chances of relapse. The commercialization of CDSS has led to several different methods and models being followed across a variety of CDSS, so there is nonportability of the underlying KBS in the CDSS and unfamiliarity with the use and purpose of CDSS. Additionally, there have been legal and ethical complications with the use of CDSS in many countries (El-Gayar et al., 2008).

The main challenges in developing a highly effective and robust CDSS are knowledge generation and validation, knowledge management and dissemination, and implementation of the CDSS and its evaluation. Each of these challenges has stages, and these stages are cyclic. The knowledge generation and validation process consists of knowledge extraction, knowledge refinement, knowledge validation, KR, and knowledge update. The knowledge management and dissemination process consists of curation and content management, collaborative authoring and editing, versioning and tracking of changes, standards-based dissemination, and localization and update. Similarly, the CDSS implementation and evaluation process consists of the creation of a decision model, checking the application environment and interface, testing the effectiveness of the CDSS, getting feedback, and modification and updates of the CDSS. During these processes, there is a lot of interaction between the processes and the various stages. For the successful adoption of CDSS, finances should be specially allocated for the development and implementation of the CDSS, existing systems should be incorporated into the new system, the CDSS users should find it effective and easy to use, the KBS of the CDSS should be up to date, and the CDSS should reduce the overall cost of healthcare (Greenes, 2014b).

6.5.2 Semantic knowledge graph for clinical decision support systems

CDSS can be broadly classified into nonknowledge-based CDSS and knowledge-based CDSS. The explicit identification of a nonknowledge-based CDSS is the absence of a KR scheme and a knowledge base. The nonknowledge-based CDSS makes use of artificial intelligence techniques for prediction and decision support. Two popular artificial intelligence techniques used here are artificial neural networks and genetic algorithms. These algorithms directly analyze the currently available data in the system and do not form new knowledge as in a KBS. The biggest drawback of nonknowledge-based CDSS is the requirement of high computational power. Also, the time taken by nonknowledge-based systems is much greater than that of a KBS, as they require fresh iterations for each reasoning process. Moreover, the reasoning process is a "black box" and not visible to the user of the system, so there can be a lack of trust in the system. The advantages of nonknowledge-based systems are that they do not require rules for inferencing and can work with incomplete information. Therefore nonknowledge-based systems are used for medical image processing and the analysis of waveforms (Lourdusamy and Mattam, 2020a; Berner and La Lande, 2004).

A KBS-based CDSS has the advantage of knowledge that can be updated and modified with every new piece of information and through the inferencing engine. Since the resources are linked to the knowledge base, any modification in a resource is reflected in


the knowledge base and in the CDSS that operates on the KBS. A process of automatic updating of the knowledge base from unstructured sources, like websites and medical literature, keeps the CDSS relevant and efficient.

6.5.3 Advantages of RDF-based semantic knowledge graph

There are numerous advantages to implementing an RDF-based SKG in the knowledge base of a CDSS. Three important ones are portability, resource updating and modification, and user and machine readability.

The portability of the KBS is an important feature that enables any application based on a KBS. With the use of an RDF-based knowledge graph for KR, the KBS remains independent of the application. In the case of CDSS, whose popularity is diminishing because of the lack of portability of existing systems, a KBS using the RDF knowledge graph becomes easily portable to other applications in the healthcare unit.

Another very important requirement of a successful CDSS is the use of relevant knowledge. Relevant knowledge in the KBS implies that the knowledge base is constantly updated and modified with useful knowledge from reliable sources. The linking of the resources with the KBS using RDF allows all modifications in the resources to be reflected in the KBS, and new resources can be constantly added. An RDF-based SKG allows new knowledge to be acquired from the literature; any new resource can therefore form new knowledge that can be validated and updated in the knowledge base. The ranking possibility of RDF adds to the query processing and search in the KBS (Shi et al., 2017).

The RDF-based SKG is a knowledge graph that allows the visualization of the knowledge. The visualization feature of the graph is very useful for both the user and the developer of the KBS. Moreover, the user of applications based on the KBS, like the CDSS, will also have a visual display of the reasoning process, making the system very user-friendly. The RDF-based SKG is a machine-readable and machine-processable KR and is efficient and reliable for the KBS.

6.6 Discussion and future possibilities

An SKG is an RDF-based knowledge graph that allows linking unstructured data in documents and websites to the resources. It also allows the ranking of concepts, entities, and relationships, which not only helps query, search, and browsing but also allows the use of artificial intelligence techniques to process data without modifying the underlying RDF property types and values. The CDSS is a use case of an application that makes use of a KBS, and the KR in the KBS of the CDSS can use an RDF-based SKG. There are numerous advantages to modeling the CDSS in such a manner: the CDSS will be highly efficient, reliable, and robust. Moreover, it will be user-friendly, and the ability to visualize the knowledge will make it convenient for both the developer and the user of the CDSS.


However, there is a lot that has to be done to actualize such a system. The two main features of the RDF-based SKG are the linking of resources from unstructured sources to the RDF graph and the addition of rankings of concepts, entities, and relationships to the basic structure of the RDF. Although many possibilities have been researched and studied to implement these two features, it is easier said than done, and a reliable and efficient method is yet to be implemented. Without these two features, KR using RDF will be of little use in applications such as the CDSS.

6.7 Conclusion

It is clear that the RDF-based SKG is a good form of representing knowledge when it comes to KBS. In the case of the CDSS, the RDF-based SKG has the added advantage of creating new knowledge from medical literature and modifying the knowledge base to keep it updated. For a system like CDSS, which has failed to impress users because of constant overhauling of the system with newer technologies, the RDF-based KBS, as the underlying structure of the application, is the right choice, since the application remains independent of the KBS and therefore modification of the application does not require changes in the KBS. Moreover, certain features of the RDF-based SKG representation, like the weighted or embedded RDF to hold rankings or weights, the linking of resources at the source, and so on, make an application like CDSS efficient, reliable, and robust for decision making. However, a lot of work needs to be done to make the structure fully functional.

References

Adelman, S.R.L., 2017. Handbook for Evaluating Knowledge-Based Systems: Conceptual Framework and Compendium of Methods. Springer, Boston, MA, pp. 155–182.
Arenas, M., Cuenca Grau, B., Kharlamov, E., Marciuška, Š., Zheleznyakov, D., 2016. Faceted search over RDF-based knowledge graphs. J. Web Semant. 37–38, 55–74.
Arnaout, H., Elbassuoni, S., 2018. Effective searching of RDF knowledge graphs. J. Web Semant. 48, 66–84.
Baker, T., Bechhofer, S., Isaac, A., Miles, A., Schreiber, G., Summers, E., 2013. Key choices in the design of simple knowledge organization system (SKOS). J. Web Semant. 20, 35–49.
Becerra-Fernandez, I., Sabherwal, R., 2015. The nature of knowledge. Knowledge Management Systems and Processes, second ed. Routledge, New York, NY, pp. 17–37.
Bench-Capon, T.J.M., 1990. Introduction to knowledge representation. Knowledge Representation, 32. Elsevier, pp. 11–25.
Berner, E.S., La Lande, T.J., 2004. Overview of clinical decision support systems (an updated version of Chapter 36 in Ball, M.J., Weaver, C., Kiel, J. (Eds.), Healthcare Information Management Systems, third ed.). Decis. Support. Syst. 6, 463–477.
Bizer, C., Vidal, M.-E., Weiss, M., 2018a. RDF technology. Encyclopedia of Database Systems. Springer, New York, NY, pp. 3106–3109.
Bizer, C., Vidal, M.-E., Weiss, M., 2018b. Resource description framework. Encyclopedia of Database Systems. Springer, New York, NY, pp. 3221–3224.
Bonissone, P.P., 1989. Knowledge representation and inference in knowledge based systems (expert systems). In: Jovanović, A.S., Kussmaul, K.F., Lucia, A.C., Bonissone, P.P. (Eds.), 53. Springer, Berlin, Heidelberg, pp. 53–65.
Brachman, R.J., Levesque, H.J., 2004. Introduction. Knowledge Representation and Reasoning. Elsevier, pp. 1–14.
Cedeño, J.P., Candan, K.S., 2011. R2DF framework for ranked path queries over weighted RDF graphs. In: Proceedings of the International Conference on Web Intelligence, Mining and Semantics (WIMS '11), p. 1.


Cummins, F.A., 2017. Rules for actions and constraints. Building the Agile Enterprise. Elsevier, pp. 155–182.
Curé, O., Blin, G., 2015. RDF and the semantic web stack. RDF Database Systems. Elsevier, pp. 41–80.
Duan, Y., Shao, L., Hu, G., 2018. Specifying knowledge graph with data graph, information graph, knowledge graph, and wisdom graph. Int. J. Softw. Innov. 6 (2), 10–25.
Dubitzky, W., 2013. Encyclopedia of Systems Biology. Springer-Verlag, New York.
Edelkamp, S., Schrödl, S., 2012. Automated system verification. Heuristic Search. Elsevier, pp. 701–736.
El-Gayar, O.F., Deokar, A., Wills, M., 2008. Current issues and future trends of clinical decision support systems (CDSS). Encyclopedia of Healthcare Information Systems. IGI Global, pp. 352–358.
Graham, D., 1997. Introduction to knowledge-based systems. Knowledge-Based Image Processing Systems. Springer, London, pp. 3–13.
Grainger, T., Aljadda, K., Korayem, M., Smith, A., 2016. The semantic knowledge graph: a compact, auto-generated model for real-time traversal and ranking of any relationship within a domain. In: 2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA), pp. 420–429.
Greenes, R.A., 2014a. A brief history of clinical decision support. Clinical Decision Support. Elsevier, pp. 49–109.
Greenes, R.A., 2014b. Definition, Scope, and Challenges. Elsevier.
Harmon, P., 2019. AI-driven process change. Business Process Change. Elsevier, pp. 417–439.
Hodge, G., 2000. Systems of Knowledge Organization for Digital Libraries: Beyond Traditional Authority Files. The Digital Library Federation, Council on Library and Information Resources, Washington, DC.
Hose, K., Schenkel, R., 2018. RDF stores. Encyclopedia of Database Systems. Springer, New York, NY, pp. 3100–3106.
Iannella, R., 1998. An idiot's guide to the resource description framework. N. Rev. Inf. Netw. 4 (1), 181–188.
Jain, S., 2020. Understanding Semantics-Based Decision Support. CRC Press, Taylor & Francis Group. ISBN: 9780367443139 (HB).
Jain, S., Meyer, V., 2018. Evaluation and refinement of emergency situation ontology. Int. J. Inf. Educ. 8 (10), 713–719.
Kejriwal, M., 2019. What is a knowledge graph? Domain-Specific Knowledge Graph Construction. Springer International Publishing, pp. 1–7.
Kumbhar, R., 2012. Knowledge organisation and knowledge organisation systems. Library Classification Trends in the 21st Century. Elsevier, pp. 1–6.
Lei Zeng, M., Mai Chan, L., 2004. Trends and issues in establishing interoperability among knowledge organization systems. J. Am. Soc. Inf. Sci. Technol. 55 (5), 377–395.
Liu, S., Cedeno, J.P., Candan, K.S., Sapino, M.L., Huang, S., Li, X., 2012. R2DB: a system for querying and visualizing weighted RDF graphs. In: 2012 IEEE 28th International Conference on Data Engineering, pp. 1313–1316.
Lourdusamy, R., Mattam, X.J., 2020a. Clinical decision support systems and predictive analytics. In: Jain, V., Chatterjee, J.M. (Eds.), Machine Learning With Health Care Perspective. Springer, Cham, pp. 317–355.
Lourdusamy, R., Mattam, X.J., 2020b. Knowledge graph using resource description framework and connectionist theory. J. Phys. Conf. Ser. 1427, 012001.
Malik, S., Mishra, S., Jain, N.K., Jain, S., et al., 2015. Devising a super ontology. Procedia Comput. Sci. 70, 785–792.
Michel, C., Marie-Laure, M., 2009. Introduction. Graph-Based Knowledge Representation. Springer, London, pp. 1–17.
Michel, C., Marie-Laure, M., 2009. Basic conceptual graphs. Graph-Based Knowledge Representation. Springer, London, pp. 21–57.
Michel, C., Marie-Laure, M., 2009. Simple conceptual graphs. Graph-Based Knowledge Representation. Springer, London, pp. 59–81.
Miles, A., 2006. SKOS: requirements for standardization. Proc. Int. Conf. Dublin Core Metadata Appl. 44 (0), 1–9.
Miles, A., Matthews, B.M., Beckett, D., Brickley, D., Wilson, M., Rogers, N., 2005a. SKOS: a language to describe simple knowledge structures for the web. In: XTech 2005.
Miles, A., Brickley, D., Matthews, B., Wilson, M., 2005b. SKOS core: simple knowledge organisation for the web. Proc. Int. Conf. Dublin Core Metadata Appl. 5, 3–10.
Miller, E., 1998. An introduction to the resource description framework. D-Lib Mag. 4 (5).
Mylopoulos, J., 1986. On knowledge base management systems. In: Brodie, M.L., Mylopoulos, J. (Eds.), Topics in Information Systems. Springer, New York, NY, pp. 3–8.
Olimpo, G., 2011. Knowledge flows and graphic knowledge representations. Technology and Knowledge Flow. Elsevier, pp. 91–131.


Pan, J.Z., 2009. Resource description framework. Handbook on Ontologies. Springer, Berlin, Heidelberg, pp. 71–90.
Ristoski, P., Rosati, J., Di Noia, T., De Leone, R., Paulheim, H., 2019. RDF2Vec: RDF graph embeddings and their applications. Semant. Web 10 (4), 721–752.
Saeed, M.R., Prasanna, V.K., 2018. Extracting entity-specific substructures for RDF graph embedding. In: 2018 IEEE International Conference on Information Reuse and Integration (IRI), pp. 378–385.
Shi, L., Li, S., Yang, X., Qi, J., Pan, G., Zhou, B., 2017. Semantic health knowledge graph: semantic integration of heterogeneous medical knowledge and services. Biomed. Res. Int. 2017, 8–10.
Singh, M.P., Huhns, M.N., 2006. Resource description framework. Service-Oriented Computing. John Wiley & Sons, Ltd., Chichester, UK, pp. 119–136.
Swain, M., 2013a. Knowledge base. Encyclopedia of Systems Biology. Springer, New York, NY, pp. 1073–1074.
Swain, M., 2013b. Knowledge-based system. Encyclopedia of Systems Biology. Springer, New York, NY, pp. 1084–1086.
Varlan, S.E., Tomozei, C., 2007. Simple knowledge organisation system. In: Supplement Proceedings of CNMI 2007, vol. 17, pp. 299–308.
Witten, I.H., Bainbridge, D., Nichols, D.M., 2010. Metadata. How to Build a Digital Library, 27. Elsevier, pp. 285–341.
Woods, W.A., 1987. Knowledge representation: what's important about it? The Knowledge Frontier. Springer, New York, NY, pp. 44–79.
Yan, J., Wang, C., Cheng, W., Gao, M., Zhou, A., 2018. A retrospective of knowledge graphs. Front. Comput. Sci. 12 (1), 55–74.
Yoghourdjian, H., Elbassuoni, S., Jaber, M., Arnaout, H., 2017. Top-k keyword search over Wikipedia-based RDF knowledge graphs. In: Proceedings of the 9th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management, pp. 17–26.
Zeng, M.L., 2008. Knowledge organization systems (KOS). Knowl. Organ. 35 (2–3), 160–182.


7 Probabilistic, syntactic, and semantic reasoning using MEBN, OWL, and PCFG in healthcare

Shrinivasan Patnaikuni and Sachin R. Gengaje, Department of Computer Science and Engineering, Walchand Institute of Technology, Solapur, Maharashtra, India

7.1 Introduction

The key to achieving human parity in processing and analyzing healthcare records is syntactic and semantic reasoning under the uncertainty imposed by real-world situations. Semantic reasoning using the Probabilistic Web Ontology Language (PR-OWL), a probabilistic extension to the Web Ontology Language (OWL), to capture the uncertainty of real-world situations in the form of probabilistic ontologies using multientity Bayesian networks (MEBN), integrated with probabilistic syntactic reasoning, is the right step toward achieving human parity in healthcare decision support systems.

The study of the physiology of the human body and the various diseases affecting it is a complex task involving several processes and an understanding of complex concepts and terminologies. Understanding is a crucial aspect of the process of reasoning in the domain of medicine and healthcare. Reasoning in the healthcare domain heavily depends on the observables in medical practice and the random variations involved (Dalal et al., 2019). Several uncertainties in medical practices and observations make the process of reasoning very tedious when building effective clinical decision support systems. Bayesian reasoning is well suited for reasoning with uncertainty and is actively used in healthcare to reason with uncertain knowledge of observations in predicting outcomes of treatment procedures, diagnosing disease, and finding effective and optimal constraint-based treatment alternatives (Lucas et al., 2004). A clinically feasible, Bayesian-modeled probabilistic model for diagnosing preeclampsia, a disorder related to pregnancy (Velikova et al., 2014), was the first of its kind.



In today's precision medicine paradigm, probabilistic knowledge-based reasoning using Bayesian reasoning outperforms regression-based risk estimation and causal analysis of medical practices (Arora et al., 2019). Bayesian networks have recently helped achieve cost-effective treatment procedures in healthcare (Brenner, 2019). Let us go through an example of a Bayesian network modeling a situation where a patient has traveled to a region with an outbreak of the COVID19 epidemic and has flu-like symptoms. The modeled Bayesian network estimates the likelihood that a patient who has traveled through a region with a COVID19 epidemic has contracted the COVID19 virus, given that he/she has flu-like symptoms. Let P() denote likelihood in terms of probability, with the Bayesian network showing causal relationships between having flu symptoms, travel history, and contracting the COVID19 virus. If we know:

• P(Flu) = likelihood of an average person showing flu-like symptoms.
• P(Flu | CoV) = likelihood of flu-like symptoms in patients who have contracted the COVID19 virus.
• P(Travel) = likelihood of the patient having traveled through a COVID19 epidemic region, based on the patient's travel history.
• P(CoV) = likelihood of an average person contracting the COVID19 virus.

We can probabilistically reason about:

• P(CoV | T, Flu) = likelihood that a patient showing flu-like symptoms who has traveled through COVID19 epidemic regions has contracted the COVID19 virus.

By Bayes' theorem for the Bayesian network depicted in Fig. 7.1,

P(CoV | T, Flu) = P(T, Flu | CoV) × P(CoV) / P(T, Flu),

where P(T, Flu) = P(T) × P(Flu), since they are assumed to be independent events in this context.

Bayesian networks need a joint probability distribution to be specified over the Cartesian product of all possible values of the random variables (the nodes in the Bayesian network). Specifying a joint probability distribution for a large network can be very tedious, and its size grows exponentially. Also, Bayesian networks are modeled for one particular situation, making them rigid and lacking the expressivity and ability to define constraints for random variables. Several approaches have been discussed in Laskey (2008) to overcome the limitations of Bayesian networks; one such approach is MEBN, which is discussed in the next section.

FIGURE 7.1 Bayesian network for COVID19 likelihood diagnosis.


7.2 Multientity Bayesian networks

This section briefly introduces the terminology and concepts of MEBN. Before we look at MEBN, it is pertinent to review first-order logic (FOL), the most widely used and researched logic system. Computer science and computational mathematics use FOL for defining theories, both theoretically and practically, and knowledge representation systems have their foundations in FOL (Laskey, 2008). A typical FOL theory consists of axioms, expressed as sentences in FOL, and a set of inference rules that permit new sentences to be derived from the axioms. Practically, FOL in computer systems has axioms implemented as data structures, and the inference rules for sentence derivation are implemented as algorithms. Constants, variables, functions, and predicates are the main pieces of FOL theories, where variables act as placeholders for constants and functions map their input arguments to constants. Predicates are an essential component of FOL: a predicate defines a relationship between variables, constants, and components in general, and represents a property of, or relation between, components that can be true or false. For example, the predicate hasCOVID19(p) describes the property of patient p being positive for the disease COVID19 by evaluating to Boolean true, where p is a variable in FOL syntax and hasCOVID19() is a predicate. Here a semantic meaning is implicit: the predicate hasCOVID19() defines the relationship of the patient being diagnosed positive for COVID19. Consider a real-world situation where a patient has symptoms of COVID19 but did not recently travel to a COVID19 epidemic region: the patient cannot be said to be positive for COVID19 with a 100% truth value, but hypothetically, say, 60% given the contextual statistical observations. Given the additional fact that the patient recently traveled to a region affected by the COVID19 epidemic, say Wuhan, that truth value could jump to 90% or more. Uncertainty of this kind cannot be modeled in FOL; it is semantic in nature. Such uncertainties inherent in a theory can be modeled using Bayesian FOL (Laskey et al., 2000). MEBN models the uncertainty by assigning a probability value to the predicate rather than a Boolean true or false. MEBN is a logical integration of FOL semantics and Bayesian networks, a first-order extension of Bayesian networks. An MEBN theory (MTheory) consists of a collection of Bayesian network fragments called MFrags. Usually, an MFrag represents a knowledge base that can be instantiated into a situation-specific Bayesian network for a reasoning task. The uncertainty associated with the knowledge is represented as probability values, with a consistent joint probability distribution across all the MFrags of an MTheory. MEBN overcomes the rigidity and inflexibility of Bayesian networks when modeling dynamic environments (Costa, 2005). Combined with the expressivity of FOL, MEBN gains the ability to model uncertainty over Bayesian networks: based on the situation and context, MEBN generates a situation-specific Bayesian network for reasoning. Probabilistic knowledge is represented as a collection of MFrags, where each MFrag represents a conditional probability distribution under given constraints, and a consistent joint probability distribution exists over all MFrags of an MTheory. Knowledge bases, facts, and prior probability estimates modeled and represented in an MTheory form the basis for probabilistic reasoning (Carvalho, 2011).
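To make the contrast concrete, here is a minimal sketch of a crisp FOL predicate next to a MEBN-style probabilistic predicate. The function names are hypothetical, and the 0.60/0.90 values simply echo the figures in the text above; they are assumptions, not estimates.

```python
# Sketch: a crisp FOL predicate versus a MEBN-style probabilistic predicate.
# Function names are hypothetical; the 0.60/0.90 values echo the text above.

def has_covid19_fol(patient: str, facts: set) -> bool:
    """Classical FOL: the predicate is strictly true or false."""
    return f"hasCOVID19({patient})" in facts

def has_covid19_mebn(visited_epidemic_region: bool) -> float:
    """MEBN-style: the predicate carries a probability, conditioned on context."""
    return 0.90 if visited_epidemic_region else 0.60

print(has_covid19_fol("p1", {"hasCOVID19(p1)"}))        # True
print(has_covid19_mebn(visited_epidemic_region=True))   # 0.9
```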


For the problem of estimating the likelihood of a COVID19 diagnosis, consider a patient visiting a clinic with fever, cough, and cold. If it happens that the patient recently visited a region affected by the COVID19 epidemic, it is very likely that the diagnosis will indicate a fever due to COVID19; otherwise, the patient may have a fever for some other reason. To model this patient disease diagnosis problem as an MTheory, the first step is to construct an ontology for the problem. The ontology for the patient diagnosis problem described above has the following classes, object properties, and relationships in OWL syntax and semantics. It is important to note that in OWL, object properties relate an object (individual) to an object (individual), and data type properties relate objects (individuals) to data values.

1. Classes:
   • Patient
   • Region
   • Symptom
2. Properties:
   • hasCOVID19(Patient p) | Domain: Patient; Range: Boolean
   • hasCOVID19EpidemicPresent(Region r) | Domain: Region; Range: Boolean
   • hasVisitedRegion(Patient p, Region r) | Domain: Patient × Region; Range: Boolean
   • hasSymptom(Patient p, Symptom s) | Domain: Patient × Symptom; Range: Boolean

The MTheory for the COVID19 diagnosis problem is a collection of MFrags, as depicted in Fig. 7.2. hasCOVID19(), hasCOVID19EpidemicPresent(), hasVisitedRegion(), and hasSymptom() are random variables for which local conditional probability distributions are defined in advance. For the modeled MTheory, an evidence table and conditional probability tables (CPTs) are given for each MFrag. An MEBN query on the random variable hasCOVID19() produces a situation-specific Bayesian network which, when inferred over, estimates the likelihood of the patient having contracted the COVID19 virus. The construction of the ontology, the MTheory, and the knowledge base is facilitated by a tool called UnBBayes (Matsumoto et al., 2011).
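The following sketch imitates, in miniature, what an MEBN query on hasCOVID19() does: it instantiates the random variables for one patient, plugs in toy local distributions, and computes the posterior by enumeration. All numeric values are assumptions for illustration, not the chapter's CPT values.

```python
from itertools import product

# Toy local distributions for the MFrag random variables; all numbers
# are illustrative assumptions, not the chapter's CPT values.
P_EPIDEMIC = 0.50                       # P(hasCOVID19EpidemicPresent(r))

def p_cov(visited: bool, epidemic: bool) -> float:
    """P(hasCOVID19(p) | hasVisitedRegion, hasCOVID19EpidemicPresent)."""
    return 0.30 if (visited and epidemic) else 0.01

def p_symptom(cov: bool) -> float:
    """P(hasSymptom(p, fever) | hasCOVID19(p))."""
    return 0.90 if cov else 0.10

def posterior_cov(symptom: bool = True, visited: bool = True) -> float:
    """Posterior of hasCOVID19 given the evidence, by enumeration."""
    num = den = 0.0
    for cov, epidemic in product([True, False], repeat=2):
        pe = P_EPIDEMIC if epidemic else 1 - P_EPIDEMIC
        pc = p_cov(visited, epidemic) if cov else 1 - p_cov(visited, epidemic)
        ps = p_symptom(cov) if symptom else 1 - p_symptom(cov)
        joint = pe * pc * ps
        den += joint
        if cov:
            num += joint
    return num / den

print(f"P(hasCOVID19 | symptom, visited) = {posterior_cov():.3f}")  # ~0.623
```

In UnBBayes, this instantiation, evidence entry, and inference are carried out by the tool over the full MTheory rather than hand-coded as here.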

7.3 Semantic web and uncertainty

Consider the example of a patient visiting a region where there is a COVID19 outbreak: if the patient has actually visited a small town adjoining the region, a deterministic reasoning algorithm infers either true or false for the patient having visited an epidemic-prone region. What is expected instead is stratified levels of plausibility. The example emphasizes


FIGURE 7.2 MTheory for COVID19 likelihood diagnosis.

the need for well-defined knowledge representation and reasoning with uncertainty within the Semantic Web (Jain et al., 2016). OWL, the W3C standard, has its foundations in classical description logic and currently lacks support for uncertainty. The Semantic Web, relying on OWL, inherits this inability to represent uncertainty and lacks uncertainty-reasoning ability. Uncertainty reasoning is crucial for the clinical decision support systems that power modern healthcare. To incorporate support for uncertainty reasoning into the Semantic Web, a Bayesian network-based approach was proposed (Costa, 2005) to model probabilistic ontologies. Probabilistic ontologies have the required expressivity and yet retain a well-defined logical basis for representing and reasoning under uncertainty. The proposed approach is the probabilistic ontology language PR-OWL (Carvalho et al., 2017), founded on the principles of MEBN. The next section details the PR-OWL language for modeling probabilistic ontologies.

7.4 MEBN and ontology web language

For representing and reasoning under uncertainty, the semantic web community has defined several standard approaches. These approaches differ in how they apply probabilistic methods to represent uncertainty on the Semantic Web and fall broadly into four groups of probabilistic languages. The first group develops vocabulary for representing the elements of a Bayesian network in RDF and linking them to regular RDF triples. The second group


comprises languages that represent random variables and their probabilistic dependencies through conditional probability distributions. The third group extends Description Logics with probabilistic information. Finally, the fourth group integrates OWL with logic programming. PR-OWL, a probabilistic extension of OWL for representing uncertainty in ontologies expressed in OWL (Carvalho et al., 2017), is a language for defining probabilistic ontologies. PR-OWL is based on an approach suggested by Poole et al. (2008), who identified the types of reasoning uncertainty on the Semantic Web: uncertainty about the probability that an individual belongs to a class, that an individual exists, and that an individual has a property with a given value. Probabilistic ontologies are those that model these uncertainty types. Costa (2005) defined probabilistic ontologies as follows:

Definition: A probabilistic ontology is an explicit, formal knowledge representation that expresses knowledge about a domain of application. This includes:

1. Types of entities existing in the domain;
2. Properties of those entities;
3. Relationships among entities;
4. Processes and events that happen with those entities;
5. Statistical regularities that characterize the domain;
6. Inconclusive, ambiguous, incomplete, unreliable, and dissonant knowledge;
7. Uncertainty about all the above forms of knowledge;

where the term entity refers to any concept (real or fictitious, concrete or abstract) that can be described and reasoned about within the domain of application. Fundamentally, PR-OWL is an upper ontology written in OWL that provides modeling constructs for representing probabilistic ontologies using MEBN. Probability distributions for the properties of classes defined in OWL are defined using PR-OWL; specifically, this means mapping random variables in MEBN to properties in OWL. Work by Bucci et al. (2011) stressed the need for probabilistic reasoning over ontologies modeled for medical diagnosis. PR-OWL, being the state of the art for representing and reasoning about uncertainty in ontologies modeled with OWL, is the default choice for semantic web-assisted medical diagnosis. MEBN has been used in various domains (Patnaikuni et al., 2017); specific to the medical domain, Cypko et al. (2014) used MEBN for treatment decisions in laryngeal cancer. PR-OWL is interoperable with nonprobabilistic ontologies since it was built with backward compatibility; this allows ontologies built in OWL to interoperate with probabilistic ontologies built using PR-OWL.
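A rough sketch of what this mapping looks like in data terms follows: an OWL property from the Section 7.2 ontology is paired with a MEBN local distribution keyed by its parent random variables. The structure and all numeric values are illustrative assumptions, expressed as plain Python data rather than actual PR-OWL syntax.

```python
# Sketch of the PR-OWL idea: an OWL property paired with a MEBN local
# distribution. Plain Python data, not actual PR-OWL syntax; all numbers
# are illustrative assumptions.
probabilistic_property = {
    "owl_property": "hasCOVID19",
    "owl_domain": "Patient",
    "owl_range": "xsd:boolean",
    # Local distribution keyed by the states of the parent random variables.
    "mebn_distribution": {
        ("hasVisitedRegion=True",  "hasCOVID19EpidemicPresent=True"):  0.30,
        ("hasVisitedRegion=True",  "hasCOVID19EpidemicPresent=False"): 0.01,
        ("hasVisitedRegion=False", "hasCOVID19EpidemicPresent=True"):  0.02,
        ("hasVisitedRegion=False", "hasCOVID19EpidemicPresent=False"): 0.01,
    },
}
```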

7.5 MEBN and probabilistic context-free grammar

Probabilistic Context-Free Grammars (PCFGs) have been at the core of syntactic reasoning and pattern recognition tasks. Syntactic and pattern recognition methods are used in the medical domain; pattern recognition using syntactic parsers for ECG signals (Trahanias and Skordalakis, 1990) is the pioneering example of syntactic reasoning applied in the


medical domain. Another example of syntactic methods is the analysis of coronary artery images (Ogiela and Tadeusiewicz, 2002). Though PCFGs are probabilistic, they fail to reason probabilistically in cases where the reasoning is primarily semantic in nature; a typical example is the prepositional phrase (PP) attachment problem in NLP parsers. Mapping PCFG to MEBN (Patnaikuni and Gengaje, 2019) paves the way for combined syntactic and semantic reasoning, syntactico-semantic reasoning. Further augmenting MEBN with ontologies using PR-OWL makes syntactico-semantic reasoning more powerful, with probabilistic ontology-driven PCFGs. Key applications of syntactico-semantic reasoning in healthcare could include context- and situation-aware cancer diagnosis systems tailored to specific contexts such as work- and occupation-related cancers. Predictive situation awareness (PSAW) is the ability to estimate the likelihood of future situations that evolve over time. PSAW in the medical domain can be broadly applied to:

• Disease detection and surveillance
• Monitoring and alerting
• Patient tracking

MEBN enables PSAW for all of the above; PSAW systems rely heavily on probabilistic ontologies for situation modeling.
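To illustrate the PP-attachment ambiguity mentioned above, here is a minimal sketch of how a PCFG scores the two competing parses by multiplying rule probabilities. The grammar fragment and its probabilities are assumptions for illustration; the point is that the preference is fixed by rule statistics alone, with no access to the semantics that MEBN would supply.

```python
# Sketch: PCFG scoring of a PP-attachment ambiguity, e.g.
# "the GP examined the patient with a stethoscope".
# Rule probabilities are illustrative assumptions; rules shared by
# both parses cancel out and are omitted.
rules = {
    "VP -> V NP PP": 0.30,   # parse A: PP attaches to the verb phrase
    "VP -> V NP":    0.70,   # parse B: PP attaches inside the noun phrase...
    "NP -> NP PP":   0.20,   # ...via this rule
}

# A parse tree's probability is the product of its rule probabilities.
p_verb_attach = rules["VP -> V NP PP"]                      # 0.30
p_noun_attach = rules["VP -> V NP"] * rules["NP -> NP PP"]  # 0.14

best = "verb" if p_verb_attach > p_noun_attach else "noun"
print(f"PCFG prefers {best} attachment")  # verb attachment, whatever the words mean
```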

7.6 Summary

Modern healthcare systems increasingly rely on state-of-the-art computational methods and algorithms, and there is an ever-increasing demand to make them more personal, more accurate, and closer to human reasoning capabilities. Artificial intelligence powered by probabilistic methods is bringing human-like reasoning abilities to the Semantic Web. One of the key technologies enabling the Semantic Web to address the modalities of uncertainty reasoning is MEBN. Ontologies are the threads stitching various medical and healthcare knowledge bases together, enabling the creation of newer knowledge bases. Traditional ontologies fail to provide adequate support for modeling uncertainty; PR-OWL fills that gap with its flexibility, representational power, and FOL expressivity, by virtue of being an upper ontology built on top of OWL using MEBN. Further, gluing PCFG to MEBN paves the way toward probabilistic syntactic and semantic reasoning for context-aware diagnosis and reasoning in healthcare and clinical decision support systems.

References

Arora, P., Boyne, D., Slater, J.J., Gupta, A., Brenner, D.R., Druzdzel, M.J., 2019. Bayesian networks for risk prediction using real-world data: a tool for precision medicine. Value Health 22 (4), 439–445.
Brenner, D.R., 2019. Using machine learning to bend the cost curve—addressing high-cost targeted therapeutics. JAMA Netw. Open 2 (9), e1911913.
Bucci, G., Sandrucci, V., Vicario, E., 2011. Ontologies and Bayesian networks in medical diagnosis. In: 2011 44th Hawaii International Conference on System Sciences. IEEE, pp. 1–8.


Carvalho, R.N., 2011. Probabilistic Ontology: Representation and Modeling Methodology (Ph.D. dissertation). George Mason University.
Carvalho, R.N., Laskey, K.B., Costa, P.C., 2017. PR-OWL—a language for defining probabilistic ontologies. Int. J. Approximate Reasoning 91, 56–79.
Costa, P.C.G., 2005. Bayesian Semantics for the Semantic Web (Ph.D. dissertation). George Mason University.
Cypko, M., Stoehr, M., Denecke, K., Dietz, A., Lemke, H.U., 2014. User interaction with MEBNs for large patient-specific treatment decision models with an example for laryngeal cancer. Int. J. CARS 9 (Suppl. 1).
Dalal, S., Jain, S., Dave, M., et al., 2019. A systematic review of smart mental healthcare. In: 2019 5th International Conference on Cyber Security and Privacy in Communication Networks (ICCS). Available at SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3511013.
Jain, S., Gupta, C., Bhardwaj, A., 2016. Research directions under the parasol of ontology based semantic web structure. In: Abraham, A., Cherukuri, A., Madureira, A., Muda, A. (Eds.), Soft Computing and Pattern Recognition. Advances in Intelligent Systems and Computing (AISC), 614. Springer, Cham, pp. 644–655.
Laskey, K.B., 2008. MEBN: a language for first-order Bayesian knowledge bases. Artif. Intell. 172 (2–3), 140–178.
Laskey, K.B., D'Ambrosio, B., Levitt, T.S., Mahoney, S., 2000. Limited rationality in action: decision support for military situation assessment. Minds Mach. 10 (1), 53–77.
Lucas, P.J., Van der Gaag, L.C., Abu-Hanna, A., 2004. Bayesian networks in biomedicine and health-care. Artif. Intell. Med. 30 (3), 201–214.
Matsumoto, S., Carvalho, R.N., Ladeira, M., da Costa, P.C.G., Santos, L.L., Silva, D., et al., 2011. UnBBayes: a Java framework for probabilistic models in AI. JAVA Acad. Res. 34.
Ogiela, M.R., Tadeusiewicz, R., 2002. Syntactic reasoning and pattern recognition for analysis of coronary artery images. Artif. Intell. Med. 26 (1–2), 145–159.
Patnaikuni, S., Gengaje, S., 2019. Syntactico-semantic reasoning using PCFG, MEBN & PP attachment ambiguity. In: 2019 International Conference on Issues and Challenges in Intelligent Computing Techniques (ICICT), Vol. 1. IEEE, pp. 1–7.
Patnaikuni, P., Shrinivasan, R., Gengaje, S.R., 2017. Survey of multi entity Bayesian networks (MEBN) and its applications in probabilistic reasoning. Int. J. Adv. Res. Comput. Sci. 8 (5).
Poole, D., Smyth, C., Sharma, R., 2008. Semantic science: ontologies, data and probabilistic theories. In: Uncertainty Reasoning for the Semantic Web I. Springer, Berlin, Heidelberg, pp. 26–40.
Trahanias, P., Skordalakis, E., 1990. Syntactic pattern recognition of the ECG. IEEE Trans. Pattern Anal. Mach. Intell. 12 (7), 648–657.
Velikova, M., van Scheltinga, J.T., Lucas, P.J., Spaanderman, M., 2014. Exploiting causal functional relationships in Bayesian network modelling for personalised healthcare. Int. J. Approximate Reasoning 55 (1), 59–73.


8 The connected electronic health record: a semantic-enabled, flexible, and unified electronic health record

Salma Sassi¹ and Richard Chbeir²
¹VPNC Lab., FSJEGJ, University of Jendouba, Jendouba, Tunisia
²Univ Pau & Pays Adour, E2S/UPPA, LIUPPA, EA3000, Anglet, France

8.1 Introduction

Relying on the Electronic Health Record (EHR) (Rizo et al., 2005) and medical devices (MDs), a general practitioner (GP) can monitor both chronic and acute diseases. However, the GP is still unable to use EHR systems and MDs together to make efficient clinical decisions. MD heterogeneity in the data collection process remains one of the major challenges in implementing an IoT-based EHR system. To assist the GP in accessing data resources efficiently, we need to connect EHR data to MDs in order to provide efficient interoperability and integration. However, existing studies (such as Fortino et al., 2017; Cappon et al., 2017; Kim et al., 2018; Wang and Lee, 2015) are limited to specific MDs and focused on specific health cases. Accordingly, the exchanged information is heterogeneous in format and units of measurement and does not share a common coding format. In this context, ensuring syntactic and semantic interoperability among these heterogeneous MDs themselves and with the EHR becomes difficult. To deal with this challenge, a connected EHR model that defines the EHR and all of the MDs, along with their characteristics, capabilities, reliabilities, contexts, data, and formats, is required to ensure an interoperable EHR connected to interoperable MDs. To construct this type of system, a connected EHR, many challenges need to be addressed: (1) how to define a syntactic interoperability model, given heterogeneous data; (2) how to provide a semantic interoperability model, given the difficulty of retrieving and analyzing semantic information from all types of medical sensors; (3) how to model the context of the patient and of the employed device in order to guarantee the certainty of the detected data; and (4) how to


link the EHR to the relevant medical knowledge in order to support informed decisions and improve patient health care. To address these challenges, we provide in this paper a unified and normalized model of how the obtained data (structured, semistructured, and unstructured health data) can be used efficiently by heterogeneous health care IoT-based systems. This model utilizes knowledge about the IoT and the health care domain to better analyze patient health history. Our approach is able to (1) collect data from databases, health care devices, and sensors, defining and normalizing health data; (2) semantically annotate health care data to provide meaningful communication between the EHR and heterogeneous MDs; (3) model the patient context and the employed device context; and, finally, (4) aggregate data and align them with medical knowledge to provide a connected care and preventive plan mandated by that knowledge, to support decisions, and to improve patient health care. We define the connected EHR system as a system collecting and managing data from applications and MDs. The connected EHR provides accurate and connected data in real time in order to adjust medical practices and the health care plan for a patient. It is a flexible and adaptive record containing clinical data and inference rules generated using health domain knowledge in order to infer or deduce information about a patient. A connected EHR allows real-time monitoring, preventive care, forecasting, predictive modeling, and decision optimization in order to improve the quality of care. Fig. 8.1 displays the advantages of our connected EHR over the traditional EHR system. The remainder of this paper is organized as follows. The motivating scenario is described in Section 8.2. The literature is reviewed and discussed in Section 8.3. Our connected EHR system and its architecture are described in detail in Section 8.4. Section 8.5 presents the

FIGURE 8.1 Connected electronic health record (EHR) compared with traditional EHR system.


implementation of our approach. Section 8.6 reports the experimentation conducted, its results, and discussion. Section 8.7 concludes this study and provides several perspectives.

8.2 Motivating scenario: smart health unit

In order to highlight the requirements of a suitable approach for modeling the connected EHR system, we illustrate a smart health care scenario in a medical unit that we call the Smart Health Unit (SHU). In the SHU, equipment is installed to provide smart health care services, and medical devices (MDs) are used to sense the patient's health information (such as blood oxygen, temperature, and heart rhythms). When the patient needs a health check, the GP gives the patient MDs connecting him to the GP staff devices. For example, in the SHU, cardiomyopathy is continuously monitored in order to follow its evolution and make appropriate decisions based on its behavior. In this context, the patient needs to communicate and share heartbeat, temperature, and blood oxygen with GPs, receive notifications from them, and escalate their intervention in difficult cases. On the other hand, GPs need easy access to the patient's data to interpret them and suggest the appropriate treatment. In the SHU, diverse IoT-based applications to monitor the state of a patient are installed (Cappon et al., 2017; Kim et al., 2018; Wang and Lee, 2015). These MDs contain diverse sensors to detect the patient's temperature, heartbeat, blood pressure, etc. However, the MDs are heterogeneous in terms of deployment contexts, computing capabilities, and communication protocols. They generate a huge amount of heterogeneous and ambiguous data describing various cases. This lack of uniformity is severely limiting because it prevents obtaining structured, semistructured, and unstructured health data that can be used efficiently by heterogeneous health care IoT-based systems to better analyze patients' health history. To be precise, the four main limitations of the SHU are:

1. Syntactic interoperability: IoT applications are disease specific and heterogeneous in terms of deployment contexts, computing capabilities, and communication protocols. This makes it almost impossible to reuse these IoT applications in another SHU with a different infrastructure.
2. Semantic heterogeneity: IoT applications generate a massive volume of heterogeneous and ambiguous data. This makes it nearly impossible to interpret the knowledge transferred between communicating MDs and to provide efficient results.
3. Lack of linking MDs to their various contexts: The device context has to be described in order to identify its capacity and reliability, so as to be sure of the reliability of the gathered data and to be able to easily repair the device in case of damage.
4. Lack of connection between the EHR, MDs, and domain knowledge: This makes it impossible to efficiently synthetize the patient's history, analyze care activities, interpret health data, generate hypotheses, and design and test interventions for improvement.


information from all types of medical sensors. This is partly due to the absence of studies linking EHR, MDs, and relevant medical knowledge to support informed decisions and improve patient health care. For all these reasons, we propose an appropriate approach to (1) collect data from databases, health care devices and sensors, define and normalize structured and unstructured health data; (2) semantically annotate health care data to provide meaningful communication between patients and GPs; (3) model the MD in the patient context and in the employed device context; (4) aggregate data and align them with the medical knowledge in order to provide a connected care plan mandated by the knowledge and to support decisions and improve patient health care.

8.3 Literature review

In this section, we present the background and studies relevant to our research. First, we present and discuss background information on the terminologies and standards used to describe syntactic and semantic health interoperability in medicine. Second, we describe and discuss existing ontologies dedicated to the IoT domain. Finally, we present existing EHR systems and highlight the differences between our proposed system and existing ones.

8.3.1 Background

To the best of our knowledge, there are no studies addressing syntactic and semantic interoperability for both databases and MDs in the same work. Nevertheless, several standards, terminologies, and IoT-based ontologies have been described relating either to syntactic interoperability (prEN, 2007; Bender and Sartipi, 2013; CDA, 2020; Beale et al., 2005f; National Electrical Manufacturer Association, 2011; SDMX-HD, 2010; Integrating Healthcare Enterprise; IHE, 1998; Electronic Business ebXML, 2006), to semantic interoperability (SNOMED; Bodenreider, 2004; Tudorache et al., 2013; Kalra et al., 2005; McDonald et al., 2003), or to IoT-based ontologies (Compton et al., 2009; Xue et al., 2015; Shi et al., 2012; Gyrard et al., 2014; Russomanno et al., 2005; Nachabe et al., 2015; Hirmer et al., 2016; Daniele et al., 2015; Balaji et al., 2016; Chen et al., 2003; Kim and Kim, 2015; Flury et al., 2004; Fikes et al., 2000; Hobbs and Pan, 2004; Bermudez-Edo et al., 2016; Dridi et al., 2020). The following subsections describe these studies.

8.3.1.1 Electronic health record-related standards and terminologies

Standards and terminologies provide syntactic and semantic interoperability to ensure information sharing among systems (Bailer, 2005). Health care data need to be normalized to be used efficiently by actors (humans and machines) in different contexts. An EHR, as defined in Iakovidis (1998), is "digitally stored health care information about an individual's lifetime with the purpose of supporting continuity of care, education and research, and ensuring confidentiality at all times." An EHR is a "set of important health data about a patient's health history defined in a document structure." To address the EHR syntactic interoperability problem, several standards have been proposed: Health Level 7


(HL7) (Bender and Sartipi, 2013); FHIR (https://www.hl7.org/fhir/overview.html); Clinical Document Architecture (CDA, 2004); CEN EN 13606 EHRcom (prEN, 2007) and openEHR (Beale et al., 2005f); Digital Imaging and Communications in Medicine (DICOM) (National Electrical Manufacturer Association, 2011); Statistical Data and Metadata Exchange Health Domain (SDMX-HD) (SDMX-HD, 2010); and Integrating the Healthcare Enterprise (IHE) (IHE, 1998), which specified the Cross-Enterprise Document Sharing (XDS) integration profile (Bender and Sartipi, 2013; IHE, 1998; Electronic Business ebXML, 2006). These standards aim to design and structure health data content to be exchanged efficiently among heterogeneous systems. The most important terminology initiatives are SNOMED-CT (SNOMED), the Unified Medical Language System (UMLS) (Bodenreider, 2004), the International Classification of Diseases (ICD)-11 (Tudorache et al., 2013), openEHR (Kalra et al., 2005), and LOINC (McDonald et al., 2003). Despite efforts from standards developing organizations (SDOs), EHR systems are still unable to conform to interoperability standards and are not natively interoperable. Our analysis revealed that the adoption of health standards is slow paced due to many factors, among them the large number of developed standards. This leads to many problems, the most important of which are overlapping standards, the difficulty of combining existing standards, and the high cost of migrating to new standard-based solutions (European Commission, 2008; CEN/CENELEC, 2009). We also note that each EHR system based on e-health standards grants access to patient data for only some health organizations; thus we note the absence of frameworks and methods that can be easily applied across different standards. The Semantic Health report (ISO 18308, 2011) recommends using Web Ontology Language (OWL) ontologies to support semantic interoperability. Since the FHIR standard is based on the OWL format, we choose to use it to improve interoperability in an EHR system. The UMLS Metathesaurus is chosen because it contains a large biomedical vocabulary and numerous health concepts, including clinical signs and symptoms. The UMLS Metathesaurus also allows semantic mapping among existing terminologies, and it is the best mechanism for mapping existing systems.


ontology (Bermudez-Edo et al., 2016), and SMSD (Dridi et al., 2020), enabling sensor data description with semantic metadata in order to understand their context. We identify several criteria that help in choosing the appropriate IoT-based ontology. First is the sensor data description criterion: describing sensor data to make them machine-understandable and interoperable, and to facilitate data integration; we also have to describe the MD and its capability in order to search specific medical contents and events. Second, the most important aspect of an MD network is its extensibility, to facilitate the integration of new MDs. Finally, it is important to model the semantics of concepts and their relationships in a particular context: the MD context has to be based on the patient's activity, the patient's physical location, temporal concepts, and the environment. In Table 8.1, we compare the main IoT-based ontologies with respect to these criteria. We choose the SMSD (Semantic Medical Sensor Data) ontology because it covers the essential concepts in both the IoT and the health care domains. The majority of the proposed approaches in the health care domain ignore the reuse of already defined IoT models and limit the description of the health data source to the sensor concept alone. In SMSD, by contrast, the reuse of concepts from diverse IoT ontologies such as SSN increases interoperability, and diverse contexts are represented, covering both the functioning of the employed object and the patient's state.

8.3.2 Related studies

8.3.2.1 Electronic health records and EHR systems

The EHR includes health data, the care plan, care actions, and care outcomes. ISO (ISO 18308, 2011) defines the EHR as a "repository of information regarding the health status of a subject of care, in computer processable form." We distinguish two types of EHR: shareable and nonshareable. Shareable EHRs include the Integrated Care EHR (ICEHR), which aims to ensure continuing, efficient, and quality integrated health care. We note two types of ICEHR: Core EHR and Extended EHR (ISO 18308, 2011). The first concerns only health information; the second may contain health information and knowledge. Accordingly, the Core EHR is considered a Passive Repository, whereas the Extended EHR contains an Active Facilitator component. The EHR is used by EHR systems to support a patient's health care, but it should not depend on EHR system technology. We note three types of EHR systems: the Local EHR system, the Shared EHR system, and the EHR Directory Service system (ISO 18308, 2011). The first is mainly used by individual clinical centers. The second is based on the ICEHR type to integrate health care activities within a community of GPs. The last contains a set of links to existing EHRs. In order to overcome challenges relating to EHR systems, and in the absence of common measures and benchmarks, we define here 12 criteria used to compare and discuss traditional EHR systems:

1. Application domain criterion: We distinguish two categories: (a) generic (or domain-independent) applications and (b) domain-specific applications.
2. Input data criterion: Input data are grouped into (a) fully structured, (b) semistructured, and (c) unstructured data formats.


TABLE 8.1 The challenges tackled by existing sensor ontologies. The rows are the ontologies and models discussed above (SSN, Xue et al., Shi et al., M3, OntoSensor, MyOntoSens, Hirmer et al., SAREF, Brick, the Event ontology, Baldauf et al., Chen et al., Kim et al., the WGS84 ontology, Flury et al., DAML-Time, KSL-Time, OWL-Time, and SMSD); the columns are the criteria of sensor data description, sensor discovery, sensor capabilities, data access and sharing, extensibility, and context (location description and time description). SMSD is marked against the broadest range of criteria, consistent with our choice of it.

3. Interoperability criterion: We distinguish three types of systems: (a) fully interoperable systems, which integrate all information from outside; (b) partially interoperable systems, which integrate information from some systems; and (c) noninteroperable systems.
4. IoT integration criterion: We distinguish two categories: (a) generic (or domain-independent) IoT integration and (b) domain-specific IoT integration.


5. Health care standards criterion: We distinguish three types of health care standards: (a) information-based standards used to model clinical information, (b) document-based standards used to model clinical documents, and (c) hybrid standards used to model both.
6. Completeness criterion: This determines whether traditional systems cover the essential concepts concerning medical objects and the health care domain.
7. Types of data criterion: We distinguish four types of data: (a) local data, (b) shared data, (c) metadata index, and (d) knowledge.
8. Context-aware criterion: We distinguish two levels, total and partial. Existing systems contain concepts about the deployment context of the medical objects, such as time, location, and trajectory, and/or about the patient, for instance, disease, symptoms, and historical information.
9. Reasoning criterion: This determines whether existing systems address not only the diagnosis of the patient's state, the anticipation of possible risks for prevention purposes, and the proposed treatment, but also the verification of the connected objects' states.
10. Medical knowledge-based reasoning criterion: This determines whether existing systems use medical knowledge to support decisions and improve patient health care.
11. Type of health plan criterion: This determines whether existing systems deploy a health care plan or a preventive care plan.
12. Users criterion: This identifies to whom the proposed system is addressed.

In Table 8.2, we compare traditional EHR systems with respect to these criteria. First, the table illustrates that all existing studies are domain specific and tied to specific IoT applications; that is, they are heterogeneous in terms of deployment contexts, computing capabilities, and communication protocols, which makes it nearly impossible to reuse these systems or to exchange information between them. Second, existing local EHR systems use health standards only to model clinical data, while shared and EHR directory services use standards to model either clinical data or clinical documents; none of them use a hybrid standard to model both information and clinical documents. From this we deduce that none of the existing systems can model all the concepts concerning medical objects and the health care domain. Third, traditional EHR systems do not link MDs to their various contexts. The device context is, however, required to verify and diagnose the device's state, to be sure of the reliability of the gathered data, and to be able to easily repair the device in case of damage. Existing systems are still unable to intelligently interpret and reason over the knowledge transferred among all communicating MDs in order to synthetize a patient's health history and provide accurate results. Finally, none of the traditional EHR systems align a patient's EHR with domain knowledge, making it nearly impossible to efficiently analyze care activities, interpret health data, generate hypotheses, and design and test interventions for improvement. The main contribution of this study is a semantic-enabled, connected EHR aimed at normalizing and indexing structured and unstructured data gathered from heterogeneous health data sources and connected objects.


TABLE 8.2 Comparison between our system and existing ones.

Criterion | Local EHR system | Shared EHR system | EHR directory service | Connected EHR system
Application domain | Domain-specific | Domain-specific | Domain-specific | Generic
Input data | Structured and unstructured | Structured and unstructured | Structured and unstructured | Structured and unstructured
Interoperability | Noninteroperable | Partially interoperable | Partially interoperable | Fully interoperable
IoT integration | Domain-specific IoT | Domain-specific IoT | Domain-specific IoT | Generic IoT
Standard scope | Information-based standard | Information-based standard | Information-based standard | Hybrid standard
Completeness | No | No | No | Yes
Types of data | Local data | Shared data | Metadata index | Shared data + shared knowledge
Context-aware | No | Partial | Partial | Total
Reasoning | Partial | Partial | Partial | Total
Medical knowledge-based reasoning | No | No | No | Yes
Type of health plan | Care plan | Care plan | Care plan | Care plan + preventive plan
Users | GP | GP and patient | GP | All health actors

EHR, electronic health record. The first three columns are the traditional EHR systems; the last column is our connected EHR system.

8.4 Our connected electronic health record system approach

We propose to develop a connected EHR aimed at connecting data gathered from heterogeneous health data sources and connected objects. In this section, we detail our proposed EHR, as depicted in Fig. 8.2, for patient monitoring through MDs. First, we describe the data collection process via the MDs and the patient medical records. A syntactic and semantic modeling representation is proposed, defining the data sources and their contexts in relation to either the MD or the patient.

8.4.1 Architecture description

The proposed framework aims at (a) collecting data from databases, health care devices, and sensors, and defining and normalizing health data; (b) semantically annotating health care data to provide meaningful communication between the EHR and heterogeneous MDs; (c) modeling the patient context and the employed device context; and (d) aggregating data and aligning them with medical knowledge. For the first


FIGURE 8.2 Proposed framework architecture for connected electronic health record (EHR).

aim, the patient health status context is considered in order to decide the appropriate diagnosis and treatment. In addition, the time context is used to define the time required to monitor the patient and to analyze the patient's health data. Moreover, we focus on the location context to determine the patient's location and make a suitable intervention. Finally, the fourth aim aggregates data and aligns them with medical knowledge (Clinical Practice Guidelines [CPG], Clinical Pathways [CP], best clinical evidence, population health data, health problem-specific discussions, and experts' clinical expertise) to provide a connected care and preventive plan mandated by that knowledge, supporting decisions and improving patient health care. Our framework contains four modules (as shown in Fig. 8.2):

• Data processing module: This module processes and indexes health data in order to describe the current health care situation. It is important for health organizations to identify health risks at an early stage. All incoming data are transformed and analyzed in three steps: data preprocessing, data transformation, and data analysis.
• Health domain manager module: This provides users (mainly experts) with the medical health domain description to enable meaningful domain interoperability. This


description covers organizational and functional interoperability using the FHIR health standard, semantic interoperability using the UMLS health terminology, and MD data annotation using the SMSD ontology. This module also provides the health domain knowledge: Clinical Practice Guidelines (CPG), Clinical Pathways (CP), best clinical evidence, population health data, health problem-specific discussions, and experts' clinical expertise. These data are unstructured, and querying such large unstructured data from distributed data sets is often needed for better reasoning on the input data.

• Data visualization module: This module is responsible for the visual representation of data. It provides visual and interactive communication and includes techniques to present data graphically, so as to summarize and understand their meaning and to communicate health information clearly and efficiently. It also enables users to rapidly find insights in health data.
• Data manipulation module: This module processes and responds to user queries and detects health events. Once our knowledge base is defined through the previous modules, end users need to exploit it; in this regard, our goal is to propose an IoT-based clinical decision support system. To provide a suitable decision, this system takes into account the health data describing the patient's information and the technical data representing the state of the MD. It consists mainly of three submodules: Connected EHR Management, Diagnosis and Treatment, and Notification. The second submodule analyzes and interprets the detected patient vital signs in order to treat diseases and prevent risks earlier. The Notification submodule notifies patients of the GP's decision. The proposed system requires secure actor authentication, protecting the accessed information through a password stored in the database. After that, the end user can simply make inquiries in user-friendly ways.

8.4.2 Data processing module

8.4.2.1 Preprocessing data

The first module preprocesses health data. Data preprocessing involves transforming raw data into an understandable format. The first step is data extraction from multiple heterogeneous sources, which raises various data integration challenges. The second step is data cleaning, the most important task in building any analysis model; it includes (1) quantizing outliers, (2) handling missing values, and (3) converting categorical entries into numeric values.

8.4.2.2 Data transformation

The second module is data transformation, based on data semantization using the SMSD ontology and data normalization using FHIR and UMLS. Given the heterogeneity of health data, we propose an integrated framework to model health data according to the FHIR standard. The data transformation module is composed of two processes, as shown in Fig. 8.3. In the first step, we model unstructured data in the FHIR standard: we use NLP tools to identify health data from diverse applications and to convert them into their appropriate data types within the FHIR standard. In the second step, we combine structured data with the NLP output, map the FHIR metadata, and normalize them. Finally, we integrate the normalized data into a generic framework that supports direct generation of the data in FHIR standard format. A sketch of this normalization step follows.
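As a minimal sketch of what this normalization produces, the snippet below maps a hypothetical raw sensor record to an FHIR Observation resource. The raw record layout and the helper name are assumptions; the FHIR field names follow the published standard, and 8310-5 is the standard LOINC code for body temperature.

```python
# Sketch: normalizing one raw sensor reading into an FHIR Observation.
# The raw record layout and helper name are assumptions.
raw = {"sensor": "temp-01", "patient": "p123", "value": 38.2, "unit": "Cel"}

def to_fhir_observation(rec: dict) -> dict:
    return {
        "resourceType": "Observation",
        "status": "final",
        "subject": {"reference": f"Patient/{rec['patient']}"},
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": "8310-5",   # LOINC: body temperature
                             "display": "Body temperature"}]},
        "valueQuantity": {"value": rec["value"],
                          "unit": rec["unit"],   # UCUM unit, e.g. "Cel"
                          "system": "http://unitsofmeasure.org"},
    }

print(to_fhir_observation(raw))
```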


FIGURE 8.3 Transforming structured and unstructured data using the FHIR standard, UMLS terminology, and the SMSD IoT-based ontology. FHIR, fast healthcare interoperability resources; IoT, internet of things; SMSD, semantic medical sensor data; UMLS, unified medical language system.

8.4.2.3 Data analysis based on the data aggregation process

The third module is data analysis, which is based on two analysis modules. (1) The insight analysis module delivers information for better decisions based on the normalized health data, the aggregated data, and the unified EHR. It allows the EHR to be aggregated and continuously updated from electronic medical record (EMR) systems, claims systems, IoT-based systems, and domain health data. We distinguish three types of insight analysis: (a) descriptive analysis describes the patient's history; (b) diagnostic analysis describes the reasons and factors behind certain health events; and (c) prescriptive analysis describes activities in optimal decision making. (2) Foresight analysis consists of predictive analysis and aims to determine trends and probabilities to predict future outcomes.


It is important for the GP to understand the complications that a patient can develop. To achieve these goals, we propose a health data analysis architecture based on these two modules, as shown in Fig. 8.4:

1. The insight analysis module is composed of three phases: (a) The data cleaning phase quantizes outliers, handles missing values, and converts categorical entries into numeric values. (b) The health data aggregation phase integrates the unified EHR and health domain knowledge to generate a patient health summary, which leads to describing and understanding the health status of the patient. In this phase we describe a new data aggregation model, called the Graph-based Aggregation Model (GAM). The main idea behind GAM is to model the most pertinent health data that must be aggregated; GAM is based on converting the aggregation model into a corresponding visual representation. The aggregation phase and the visualization module are detailed in subsequent discussion. (c) The health report generation phase enables us to collect and view data and trends across the health data history. Calculating descriptive statistics is a vital first step when conducting health care analysis and should always occur before making data predictions. Once the aggregated model is generated, it enables decision makers to assess a specific process in a more manageable form. We define two types of reports:
   - The care plan trajectories report: This contains actions, behavior, and health care observations. This trajectory describes how the care plan evolves from creation to verification of its conformity with the CPG and CP. For example, we can generate

FIGURE 8.4 Health data analysis architecture.


symptom and disease observations. This trajectory describes patients' progress from symptoms to diseases and eventually to death.
   - The activities trajectories report: This contains health activities, health events, and relationships. For example, we can generate symptoms, disease events, admission types, medication events, and surgical procedures. This trajectory allows us to exhaustively quantify risk from the patient's EHR.

2. The foresight analysis module is composed of two phases: (a) The feature extraction model is based on historical health data and health knowledge and consists of organizing the input data into classes. Here, we use a density-based feature selection (DFS) approach (Wan et al., 2016) to evaluate the pertinence of a feature. DFS allows us to remove unnecessary features. The feature selection step selects a group of features in each iteration; thus we create a subset of optimal features, considered the most significant features for the classification process. To be sure that a feature is good, it is important to reduce the overlap among the remaining classes. To explore and assign ranks, DFS uses the distribution of features over all classes along with their correlation. (b) The classification module is based on an Ant Colony Optimization (ACO)-based classifier algorithm (Parpinelli et al., 2002); DFS is also used to increase the accuracy of the ACO algorithm. ACO extracts classification rules, assigning each instance to a predefined class using the values of some features. The ACO algorithm structures the schema, generates rules, prunes rules, updates features, and applies the discovered rules. The classification model's performance is evaluated using a k-fold stratified cross-validation strategy on the training set. Prediction performance is evaluated through quality indices such as F-measure, precision, and recall, which offer important insights into the performance of the classifier; a sketch of this evaluation protocol is given below. The GAM model and the graphical visualization will be detailed in a dedicated study.
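The sketch below illustrates the evaluation protocol just described: stratified k-fold cross-validation scored with the F-measure. Since the ACO rule learner itself is out of scope here, a scikit-learn decision tree stands in for it, and the data are synthetic placeholders.

```python
# Sketch of the evaluation protocol: stratified k-fold cross-validation
# scored with F-measure. A decision tree stands in for the ACO-based rule
# learner; the data are synthetic placeholders.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))        # 200 patients, 8 selected features
y = rng.integers(0, 2, size=200)     # binary outcome labels

scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y,
                         cv=StratifiedKFold(n_splits=5), scoring="f1")
print("F-measure per fold:", scores.round(2))
```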

8.5 Implementation

We propose a connected EHR prototype. Our prototype consists mainly of (1) monitoring, in real time, the temperature, the ECG, and the pulse oximeter; (2) displaying a graphical visualization of both the historical and the current patient health status; and (3) providing decision-making and home-care health services. We used VMware virtualization technology to create a cloud storage system. Sqoop is used to preprocess data stored in a MySQL database. We also used Kettle for extract, transform, and load operations on the FHIR standard. HBase is used for efficient health data query processing. A JSON file is generated containing all the patient's health data; after that, we create a metatable containing row keys and values to preprocess the raw data. Duplicated records are obtained and deleted according to their IDs. Fig. 8.5 depicts the flowchart of this process. The connected EHR prototype analyzes static health data (gender, age, diseases) and dynamic health data (blood oxygen, temperature, and heart rhythms) and aligns them with health domain knowledge in order to identify early risks associated with blood oxygen saturation, elevated temperature, and abnormal heart rhythms. To monitor measures, detect risks, raise alarms, and adapt a patient's care plan in real time, we used Spark MLlib and Spark Streaming techniques (Fig. 8.6).
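As a minimal sketch of the deduplication step described above, the records loaded from the generated JSON can be deduplicated by ID with PySpark as follows; the file paths and column name are assumptions.

```python
# Sketch: loading the generated JSON and dropping duplicated records by ID.
# Paths and column names are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("connected-ehr-preprocess").getOrCreate()

records = spark.read.json("hdfs:///ehr/patient_records.json")  # hypothetical path
deduplicated = records.dropDuplicates(["id"])                  # delete duplicates by ID
deduplicated.write.mode("overwrite").parquet("hdfs:///ehr/clean/")
```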


FIGURE 8.5 Flowchart of Spark Streaming operations.

FIGURE 8.6 Generating HBase.

8.6 Experimental results

To evaluate our approach, we focus on the functional level of our connected EHR. To this end, we evaluated the performance of the analysis process and the response time of our system.

8.6.1 Analysis performance of connected electronic health record

To evaluate the effectiveness of our model's reasoning, three evaluation measures, Recall, Precision, and F-measure, were considered, using the following equations. We denote TP, FP, FN, and TN as True Positives (instances correctly analyzed as required), False Positives (instances incorrectly analyzed as required), False Negatives (instances incorrectly analyzed as not required), and True Negatives (instances correctly analyzed as not required):

Precision = TP / (TP + FP)   (8.1)

Recall = TP / (TP + FN)   (8.2)

F-measure = (2 × Precision × Recall) / (Precision + Recall)   (8.3)

The values of these measures reflect the relevance of the analysis model by identifying the correct and the ambiguous cases. For experimentation purposes, we considered three data sets (temperature, blood pressure, and heart rate) from PhysioBank (https://physionet.org/physiobank/), and we took 10 patients as a sample. For each data set, we took only 1000 data records, corresponding to one month of monitoring of these patients (4 measurements per day × 30 days × 10 patients). Consequently, 5000 data records are stored in the SMSD ontology. These data sets contain other information such as the time and the location of the measures. Consequently, a total of 16,400 instances (measures, time, location, symptoms, diseases, drugs, food) were created to form the whole knowledge base of our connected EHR system. The generated health care plan was validated with health care experts (three GPs), who helped us calculate the reasoning-phase performance of our knowledge base. First, we looked at just a few contexts (age and sex) that help doctors in a primary diagnosis. Second, we focused in depth on diverse contexts for an advanced diagnosis. Tables 8.3 and 8.4 show the results of the patients correctly analyzed by our system compared with those analyzed manually (by domain experts), in the first and second cases, respectively; a numeric reading of these tables is given after Table 8.4. Fig. 8.7 shows the similarity between the cases analyzed by the connected EHR system and those analyzed by domain experts. We observed that taking syntactic and semantic interoperability and context-aware reasoning into account gives a more precise and correct diagnosis and reduces the adverse events that usually occur in cases of incorrect diagnosis.

8.6.2 Response time of connected electronic health record

This section evaluates the effect of the data quantity (Fig. 8.7) and their contexts on the response time of the connected EHR system. The response time is defined as T_rep = T_load + T_inf + T_query.

TABLE 8.3 Comparing system classification with domain experts' classification, in the primary diagnosis.

8.6.2 Response time of connected electronic health record This section is devoted to evaluating the effect of the data quantity (Fig. 8.7) and their contexts in the response time of the connected EHR system. The response time is defined as Trep 5 Tload 1 Tinf 1 Tquery. TABLE 8.3 Comparing system classification with domain experts’ classification, in the primary diagnosis. Correctly analyzed cases by expert domain

Incorrectly analyzed cases by expert domain

Correctly analyzed cases by connected EHR system

(2500) TP

(1210) FP

Incorrectly analyzed cases by connected EHR system

(900) FN

(910) TN





TABLE 8.4 Comparing the system's classification with the domain experts' classification in the advanced diagnosis.

                                                 Correctly analyzed by domain experts   Incorrectly analyzed by domain experts
Correctly analyzed by connected EHR system       3520 (TP)                              510 (FP)
Incorrectly analyzed by connected EHR system     680 (FN)                               1200 (TN)

FIGURE 8.7 Connected electronic health record (EHR) system performance.

where Tload is the time needed to load the SMSD ontology into the Drools engine, Tinf is the execution time of the analysis phase, and Tquery is the processing time of the SPARQL queries that display the results to users. The SMSD ontology is composed of approximately 90 concepts, 110 object properties, and 100 data properties. As Fig. 8.8 shows, the response time increases from 12.2 to 23.43 s when the number of stored records grows from 5000 to 16,400. Thus, the quantity of stored data significantly impacts response performance.

FIGURE 8.8 Response time of the connected electronic health record (EHR).

8.7 Conclusion and future works

This chapter has proposed a connected EHR system. It was designed around four fundamental phases: the data preprocessing phase, the data transformation phase, the insight analysis phase, and the foresight analysis phase. The preprocessing phase considered two sources of data: data collected from MDs and data collected from medical records. The transformation phase focused on unifying health data using the FHIR standard, the UMLS ontology, and the SMSD IoT-based ontology to represent knowledge about the MDs, the monitored patient, and their contexts. The resulting unified EHR was exploited by an aggregated model and an aggregation-based visualization model. These models were suggested for two main objectives: (1) configuration and management of the employed objects and (2) patient state diagnosis and decision making, taking into account the alterable context. We have also proposed an implementation phase focused on the development of an SHU prototype based on IoT for patient monitoring, which integrated health data, MD data, and domain knowledge data to provide diverse services for GPs and patients according to its deployment context. The evaluation of our approach centered on a functional evaluation of the analysis performance and the response time of the connected EHR system. Prospectively, we aim to propose and implement an intelligent solution to optimize the use of the ontology instances so that they can be smartly processed and analyzed. This solution will be able to define a direct and seamless interpretation of the knowledge base and the system without using the inference engine. In this way, the effective use of knowledge ensures a more understandable and reasonable system definition. In addition, we will focus on the scalability of the connected EHR system so that it can accurately manage potentially huge quantities of data.




9 Ontology-supported rule-based reasoning for emergency management

Sarika Jain1, Sonia Mehla2 and Jan Wagner3

1 Department of Computer Applications, National Institute of Technology Kurukshetra, Haryana, India; 2 National Institute of Technology Kurukshetra, Haryana, India; 3 RheinMain University of Applied Sciences, Germany

9.1 Introduction

Emergency situations (disasters) pose an immediate risk to life, health, property, or the environment if not properly handled. A crisis is not inevitably a disaster but an inescapable event that threatens people, the environment, or property. By their nature, emergency situations are risky, unstructured, and complex, having no single accurate solution. Therefore, there is a demand for a decision support system (DSS) or advisory system that provides reasonable answers to complex problems. Natural emergencies (earthquake, volcanic eruption, flood, cyclone, epidemics, tsunami) and unnatural or human-made emergencies (train accident, air crash, building collapse, warfare, bomb blast) are the two main types of crisis situations. Natural disasters are events induced by nature; they are ad hoc, destructive, and unpredictable. Human-made emergencies are events caused by the human mishandling of dangerous equipment deliberately, carelessly, or negligently. The problem domain of emergency situations is wide and not well defined. Disasters occur regularly, and they are a menace to infrastructure and human lives. There is a vast variety of crises, and each of them requires a specific reaction. The reactions to emergency situations can be very different depending on the type of emergency, the location, and the accessibility of resources (Jain et al., 2009). Planning is essential to coordinate the many tasks before action can take place. All the essential facts should be collected first and then represented in the right way. Today, the techniques traditionally used to handle emergency situations have become outdated. Emergency managers gather in a war room where all information is handled, and then they take decisions based on this information.






Decision makers or higher authorities need to deal with very complex and extensive historical data and must take rapid decisions as they encounter complications. However, the amount of information generated and the way the data are expressed may overload the cognitive skills of decision makers, and even of sophisticated professionals, and lead to erroneous or inaccurate actions that put victims' lives in danger. The decision-making process has been optimized by working on a program that takes decisions faster than humans can. Advisory systems do not take decisions on their own but rather help decision makers take their decisions. The recommendations generated by an advisory system are not always the final solution to the problem; instead, they are recommendations that decision makers use to decide on the final solution. Artificial intelligence (AI) techniques can be used in all parts of a DSS or advisory system, such as the database, knowledge base, model base, user interface, and other components. The incorporation of AI techniques magnifies the cognitive efficiency of the decision maker, makes inferred information explicit, searches out new facts and consolidates them, and identifies new relationships and patterns with proper explanations and justifications. A knowledge-driven DSS contains knowledge and solves problems like a human being. It has two parts: a knowledge base and an inference engine. Knowledge representation formalisms can be used to delineate the decision-making knowledge in the form of rules (Jain et al., 2018). Rules are a popular way to encode domain-specific experts' knowledge in order to perform human-like reasoning. The benefits of adopting a knowledge-driven DSS are that it:

• makes it easy to embed new facts and derive information implicitly;
• is able to provide justification and benefits for unstructured and semistructured problems;
• is able to express human intelligence in a form that is understandable by machines, that is, an ontology (highly expressive and mainly used for complex and large schemas);
• is better at communication and interoperability;
• provides deep knowledge in a specific domain, involving semantic associations and rules.

The proposed knowledge-driven approach consists of fetching knowledge from experts and representing it in a form that is readable and understandable by a computer. This knowledge is used in the valuation of solutions to problems within the domain. In a traditional relational database, tables are used to store concepts, and the system does not contain any information defining the meaning of the concepts or how the concepts relate to one another. Unless the meanings of the entities and their relationships are understood, any theoretical model representing domain data and the procedures applied for mapping that data is incomplete. A common language is needed to improve coordination not only among systems but also among users in order to avoid semantic incompatibilities. This is where semantic technology excels. The Resource Description Framework (RDF) is an extensible framework for knowledge representation in the semantic world, which is utilized to encode all data as meaningful triples. In this way, a reasoner can derive new facts automatically without writing explicit code. In this chapter, the knowledge-driven approach is used to develop the advisory system. It includes a knowledge base and an inference engine, both employed to generate recommendations.
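To make the triple idea concrete, the following minimal Jena sketch (ours, not from the chapter; the namespace URI is an assumption) encodes one fact about an emergency as a subject-predicate-object triple:

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

public class TripleDemo {
    public static void main(String[] args) {
        String ns = "http://example.org/es#"; // hypothetical namespace
        Model model = ModelFactory.createDefaultModel();
        Resource quake = model.createResource(ns + "Quake_01");
        Property hasLocation = model.createProperty(ns, "has_location");
        Resource city = model.createResource(ns + "Kurukshetra");
        model.add(quake, hasLocation, city); // the triple: Quake_01 has_location Kurukshetra
        model.write(System.out, "TURTLE");   // serialize the model for inspection
    }
}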




Semantic technologies have been used for the early and reliable detection and classification of emergency situations (Mehla and Jain, 2020). Ontology is one of the important technologies for representing knowledge. The system uses an ontology to determine the resources required to deal with the emergency situation. Formalized semantics is the main advantage of using ontologies (in RDF format). Ontology development languages do not provide the expressiveness we want, so we need to integrate rules in abridged form (Jain et al., 2016).

9.2 Literature review

This section reviews related work on advisory systems for emergency situation management. The idea of emergency or crisis management emerged in the early 1980s. In the modern emergency management field, the crux of crisis management is how to make efficient and effective decisions under hard time constraints and mental pressure. Therefore, an advisory system, or DSS, is an effective tool in this research field for solving crisis problems automatically. Papamichail and French (2005) discussed the benefits of expert system technology for understanding problems and choosing better solutions based on the advice provided by the DSS. Yoon et al. (2008) designed a computer-based prototype of emergency resource management to provide training; it solved the complex problems of transportation resource allocation and transportation safety. De Maio et al. (2011) proposed a cognitive map with fuzzy concepts to support resource discovery and knowledge processing. Yu and Lai (2011) defined a distance-based decision-making method to resolve emergency decision problems. Ngai et al. (2012) presented the development of a prototype emergency management system for handling the logistics of accidents in real time. Vescoukis et al. (2012) described a flexible service-oriented architecture to support decision making for environmental emergency management on site at the geographic locations; the system uses real-time geographic data sets to support logistics operations and environmental modeling in emergency situations. Hadiguna et al. (2014) suggested a web-based logical system model for decision support and evaluated the feasibility of resource allocation in an evacuation. Liu et al. (2014) constructed a model based on a fault tree analysis method to support decision making; this model analyzes the genesis of an emergency and solves the complex decision-making problems in emergencies. The concept of ontology started to appear in the field of advisory systems around 2000. Chen et al. (2013) developed a system to provide recommendations for the selection of antidiabetic drugs based on ontological techniques and a reasoning method, where rules with fuzzy techniques are used to express knowledge and to infer information about antidiabetic drugs. Rahaman and Hossain (2013) discussed a belief rule-based approach to appraise the possibility of heart failure (HF) using risk factors, symptoms, and signs. Xu et al. (2014) designed a conceptual knowledge model for earthquake disaster emergency response; this model stores rules along with meta-, procedural, and factual knowledge. Zhang et al. (2016) developed a meteorological disaster ontology (MDO) to illustrate the constituents of a disaster.




Semantic web rule language (SWRL) was adopted to identify the implicit relationships among the domain knowledge explicitly defined in MDO. Sahebjamnia et al. (2017) designed a DSS consisting of a rule-based inference engine, a knowledge-based system, and a simulator to construct a three-level humanitarian relief chain. Rakib and Uddin (2019) described a context-aware, rule-based system that integrates an ontology-based model with rules to depict context and infer changed contexts; it uses a lightweight rule engine and user preferences to diminish the number of rules in order to increase the execution speed of the inference engine and reduce the total execution cost and time. We have concluded that there is a lack of ontology-supported rule-based reasoning approaches in advisory systems for emergency management. This chapter focuses mainly on ontology integration for both model and data and illustrates how to utilize ontologies to resolve complex problems in advisory systems for crisis or emergency management.

9.3 System framework

Declarative knowledge is delineated as factual information that is stored in memory and static in nature. It is the part of knowledge that interprets concepts. Things/concepts, their properties, and the relationships among these things/concepts and their properties describe the domain of declarative knowledge. Procedural knowledge is defined as knowledge of how to operate or how to perform. To become more skilled in problem solving, it is necessary to rely more on procedural knowledge than on declarative knowledge. A domain ontology was built systematically to store concepts about emergency situations with the help of experts, and rules store the recommendations based on those emergencies (Mehla and Jain, 2019a,b). An advisory system for emergency situations has been developed to assist decision makers or higher authorities in taking decisions. The rules depend on the ontology to infer implicit knowledge from existing knowledge. The framework incorporates the following:

1. Ontology (declarative knowledge): The backbone of this system is an Emergency Resource Ontology (ERO), which stores information about emergency situation attributes, resources, and available instances.
2. Inference of knowledge (procedural knowledge): Jena rules have been used to represent experts' knowledge. Rule-based reasoning with the Pellet reasoner has been employed to generate recommendations. The rules indicate, for example: What action should the decision makers take in case of viral infections in a particular city? What are the resources required to help people recover?
3. User interface.

9.3.1 Construction of ontology

Protégé, an open source ontology editor, has been used to build the Emergency Resource Ontology. The main constituent elements of the ontology are classes, attributes, individuals, and relationships.




A class represents a set of entities within a domain related by generalization and specialization. Attributes depict the relationships among concepts or instances. Two types of properties are used to depict the relationships: data properties and object properties. Individuals represent concrete examples of concepts within the domain.

1. Ontology concepts: All concepts related to the emergency situation domain are represented in a hierarchy using the Protégé tool. Emergency situation is a class that defines various emergency events as subclasses. Two different concepts can be either siblings of each other or related via a superclass/subclass relationship. Fig. 9.1 shows the hierarchical structure of domain concepts as a graphic visualization in Protégé.
2. Ontology properties: Classes alone do not give sufficient information about the domain ontology; the internal structure of concepts also needs to be defined. An ontology needs properties that act in its structure like predicates in a sentence. Data properties and object properties are the two essential kinds. Data properties connect an individual with a value. The value can be one of the standard programming datatypes, like Boolean, double, or string, but custom datatypes are possible as well. Fig. 9.2A shows the structure of the data properties used. Consider the concept "Emergency Situations": the system requires the storage of information like the number of people in danger and the location. If the concept is "Earthquake," then an additional property like intensity is required. If the concept is "Viral Infectious Disease," then a property for the way of transmission is required. Object properties describe the relationships between classes and individuals: for example, GroundShaking measured by Seismograph.

FIGURE 9.1 Hierarchical structure of concepts.




FIGURE 9.2 List of properties.

To represent this information in Protégé, the object property is defined as Measured by Seismograph, its domain is Earthquake, and its range is Seismograph. Both Earthquake and Seismograph should be either concepts or individuals. Fig. 9.2B shows the structure of the object properties used.
3. Ontology individuals: Individuals (instances) are the key components of an ontology. In ERO, the individuals of the Earthquake class are 1897_In_Assam_01, 1905_In_HimachalPradesh_01, 1934_In_Bihar_01, 1950_In_Assam_01, 1967_In_Maharashtra_01, 1991_In_Uttarakhand_01, and 1993_In_Maharashtra_01. In the same way, individuals are stored for the various concepts in the ontology. A list of some such individuals is shown in Fig. 9.3.
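The chapter builds ERO interactively in Protégé; purely as an illustration, the same constituents (a class hierarchy, a data property, an object property, and an individual) could be created programmatically with Jena's ontology API. The namespace URI and the intensity value below are assumptions, and the element names merely echo the examples above:

import org.apache.jena.ontology.DatatypeProperty;
import org.apache.jena.ontology.Individual;
import org.apache.jena.ontology.ObjectProperty;
import org.apache.jena.ontology.OntClass;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.rdf.model.ModelFactory;

public class EroSketch {
    public static void main(String[] args) {
        String ns = "http://example.org/es#"; // hypothetical ERO namespace
        OntModel ont = ModelFactory.createOntologyModel();

        // Concepts: Earthquake as a subclass of Emergency_Situations
        OntClass emergency = ont.createClass(ns + "Emergency_Situations");
        OntClass earthquake = ont.createClass(ns + "Earthquake");
        emergency.addSubClass(earthquake);

        // Data property: has_intensity links an earthquake to a literal value
        DatatypeProperty hasIntensity = ont.createDatatypeProperty(ns + "has_intensity");
        hasIntensity.addDomain(earthquake);

        // Object property: Measured_by_Seismograph links Earthquake to Seismograph
        OntClass seismograph = ont.createClass(ns + "Seismograph");
        ObjectProperty measuredBy = ont.createObjectProperty(ns + "Measured_by_Seismograph");
        measuredBy.addDomain(earthquake);
        measuredBy.addRange(seismograph);

        // Individual: one of the earthquake instances listed in the chapter
        Individual quake = earthquake.createIndividual(ns + "1934_In_Bihar_01");
        quake.addLiteral(hasIntensity, 8.0); // the intensity value is illustrative
    }
}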

9.4 Inference of knowledge

In addition to the body of knowledge, we have an inference engine, which is termed a shell in the recommendation system. There are various inference mechanisms apart from rule-based reasoning, such as narrative-based reasoning and case-based reasoning, but we adopted the rule system. The three main inference processes for a rule system are as follows:

1. The forward inference process is data oriented: if the premises match the situation, the process attempts to affirm the conclusion. It is suitable for solving multiobjective problems.




FIGURE 9.3 List of individuals.

2. Backward inference is suitable for inference with targets: if the current goal is to decide the correct conclusion, the process strives to decide whether the premises (facts) match the situation.
3. Forward and backward inference may also be integrated: forward inference is selected in the initial phases, and backward inference is selected after clear outputs have been obtained.

In the proposed system, rule-based reasoning with forward inference has been used to represent and reason over the domain-specific knowledge that supports decision makers in the decision-making process in an emergency situation. Rules are stored as if-then (antecedent-consequent) statements that illustrate the logical inferences that can be drawn from an assertion in a particular form. The steps for executing the rules are:

1. Load model: Read the ontology model, convert it to statements, and load the statements into a Jena model.
2. Run rules: Run a Jena rules file on the model.
3. Query model: Execute the matching rules based on the input given by the operator.
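The three steps map directly onto Jena's rule API. The sketch below is ours, not the chapter's code: the file names and the property URI are assumptions, and it uses Jena's built-in GenericRuleReasoner rather than the Pellet reasoner that the chapter pairs with its rules.

import java.util.List;

import org.apache.jena.rdf.model.InfModel;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.reasoner.rulesys.GenericRuleReasoner;
import org.apache.jena.reasoner.rulesys.Rule;

public class AdvisoryRunner {
    public static void main(String[] args) {
        // 1. Load model: read the ERO ontology into a Jena model (file name assumed)
        Model base = ModelFactory.createDefaultModel();
        base.read("ero.owl");

        // 2. Run rules: parse a Jena rules file and build an inference model over the base model
        List<Rule> rules = Rule.rulesFromURL("file:emergency.rules"); // file name assumed
        InfModel inf = ModelFactory.createInfModel(new GenericRuleReasoner(rules), base);

        // 3. Query model: list every situation the rules flagged as needing evacuation
        inf.listStatements(null,
                inf.getProperty("http://example.org/es#needs_Evacuation"), "true")
           .forEachRemaining(st -> System.out.println(st.getSubject()));
    }
}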

9.4.1 System in action

This section describes the system implemented for emergency situation management.




9.4.1.1 Tools/techniques/languages employed

Various languages, development environments, and platforms have been used to reach the target. Java is the programming language used to develop all the processes, and rule-based reasoning is applied to generate recommendations for the decision makers. The NetBeans IDE served as the integrated development environment during the development phase. An ontology has been used to design the knowledge base that stores the emergency resources. The various languages used to depict knowledge are Knowledge Interchange Format (KIF), RDF(S), Simple HTML Ontology Extensions (SHOE), DARPA Agent Markup Language + Ontology Interchange Language (DAML+OIL), OWL, and OWL2. The essential attributes of a good knowledge representation language are expressiveness, an inference engine, constraint checking, and implementation. OWL2 is more concise and expressive in representing the knowledge model. OWL2 is an advanced version of the Web Ontology Language (OWL); it was selected, and used to create the ontology of emergency situations, because of the extra features that were missing in OWL. The Protégé 5.2 ontology editor was used to construct the ontology. SPARQL is the query language that has been used to derive implicit knowledge from the ontology with rules. We could build our rule-based reasoning advisory system on several types of inference engines. After a first screening of possibilities, the options included the Jess rule engine, the BaseVISor rule engine, the Jena rule engine, and a reasoner like Pellet that works with SWRL rules (Mehla and Jain, 2019a,b). The Pellet reasoner can be used with the Jena API as well as with the OWL API. Rule engines utilize the Rete algorithm, which is very fast; the Pellet reasoner uses the tableaux method. So, for performance reasons, it might be better to choose one of the engines that utilizes the Rete algorithm. Jess unfortunately does not support OWL, and our ontology would have to be converted into a Jess file; as our ontology will be continuously updated, this would be highly impractical. All the other engines support OWL, with BaseVISor being the most similar to Jess. It is based on a Rete network optimized for triples. It therefore supports all the semantics of Resource Description Framework Schema (RDFS) while supporting the semantics of OWL only in part: it supports OWL RL, a subset of OWL DL, which can be used as the basis for rule-based implementations. SWRL results are treated as assertive statements, which would not be very practical in our system, as every time the rule-based reasoning (RBR) system is consulted, the results would be asserted into our ontology; the results would then need to be removed from the ontology so that future requests are not affected. Jena is widely used in the field of the semantic web, and a large community continuously develops the framework, which can only benefit the project. Pellet is also widely used and has a large community as well. Jena provides an application programming interface for storing, constructing, querying, and managing OWL ontologies in different formats (RDF/XML, N-Triples, and N3). Furthermore, Jena supports an engine to execute SPARQL queries on the ontology. Jena rules store the experts' recommendations in the form of implications. The Pellet reasoner was used to fire the rules against the ontology to provide the results. The rule format is:




Description or name of rule: (condition 1) ... (condition n) => (fact to assert 1) ... (fact to assert n)

9.4.2 Sample scenarios

The parameters can be modified to create multiple scenarios. These parameters select the earthquake location or the viral infectious disease along with the affected parameters, such as the number of people at that location, the intensity, and the distance between the rescue team and the affected location. Actions are generated based on the described rules. The rule set is processed against the data stored in the database. This rule set has the ability to represent the present state of the emergency and to recommend a plan or action to decision makers. We have chosen two major emergencies, earthquake and viral infectious diseases, and present their scenarios in this chapter.

Scenario 1: Earthquake

When the ground shakes violently, buildings can be damaged or destroyed, and their occupants may be killed or injured. Instances of ground shaking or earthquake have been created to store the related parameters and resources in the ontology. The action depends on the level of intensity and the location (city and country) of the earthquake. The effect of an earthquake on the surface of the Earth is measured in magnitude and intensity. The lower numbers on the intensity scales deal with the way in which earthquakes are felt by people; the higher numbers are based on observed structural damage. For any given situation, some of the actions recommended by the system are Prevention, Clarification of Situation, Supply Goods Support, Medical Support, Evacuation, and Clearing Work. The rule base for the earthquake scenario contains four rules:

1. Rule 1: If the earthquake has a location and the number of residents at that location is known, then the number of possible victims at that location may be estimated.
(?x es:has_location ?y), (?y es:NbOfResidents ?z) => (?x es:NbOfPossibleVictims ?z)

2. Rule 2: If the earthquake has an intensity greater than the critical intensity defined for a particular location, then the need for a reaction may be asserted.
(?x rdf:type es:Ground_Shaking), (?x es:has_intensity ?y), (?x es:has_location ?c), (?c es:hasCriticalIntensity ?I), greaterThan(?y, ?I) => (?x es:reactionIsNeeded "true")

3. Rule 3: If the earthquake or ground shaking has a location whose primary landscape type is hilly, then prevention may be advised.
(?x rdf:type es:Ground_Shaking), (?x es:has_location ?c), (?c es:has_primer_Landscape ?I), (?I rdf:type es:Hilly) => (?x es:needs_Prevention "true")

4. Rule 4: If Rule 2 holds, then evacuation, clearing work, medical support, goods supply, and clarification of the situation may be advised.
(?x rdf:type es:Ground_Shaking), (?x es:reactionIsNeeded "true") => (?x es:needs_Evacuation "true"), (?x es:needs_ClearingWork "true"), (?x es:needs_MedicalSupport "true"), (?x es:needs_SupplyGoodsSupport "true"), (?x es:needs_ClarificationOfSituation "true")




FIGURE 9.4 Scenarios: (A) Ground_Shaking (intensity ≤ 4); (B) Ground_Shaking (intensity > 4); (C) Viral_infectious_disease_by_air; (D) Viral_infectious_disease_by_person_to_person.

When the intensity level is less than or equal to 4, only the prevention action is advised by the tool, as shown in Fig. 9.4A. We cannot stop earthquakes, but we can try to mitigate their effects by recognizing their possibility, building safe structures, and giving educational training on safety during earthquakes.




We can also reduce the risk of human-induced earthquakes by planning to limit the effects of a natural earthquake. When the intensity level is greater than 4, the tool suggests prevention, clarification of the situation, evacuation, clearing work, medical support, and goods support for injured persons, as shown in Fig. 9.4B. Evacuation is the immediate and urgent movement of people away from the actual occurrence of an emergency situation; transport vehicles are required to remove people from dangerous places.

Scenario 2: Viral Infectious Diseases

Viral diseases are immensely widespread infections caused by viruses, a type of microorganism. Infectious diseases spread in various ways: by indirect contact, directly person-to-person, between animals and people, and by air. Infectious diseases are usually transmitted through direct person-to-person contact; a person can infect another person through droplets produced while speaking. Some infectious diseases can also be spread from an animal to a person when an infected animal scratches or bites you. Some infectious agents may travel long distances and stay suspended in the air for a protracted period. For these situations, the following actions are advised by the system: prevention, enlightenment of the population, medical support, and containment, as shown in Fig. 9.4C and D. The rules for the viral infectious disease scenario are listed here.

1. Rule 1:

If there is a viral infection whose way of transmission is air, then containment may be advised.
(?x rdf:type es:Viral_Infection_Diseases), (?x es:way_of_transmission "Air") => (?x es:needs_Containment "true")

2. Rule 2:

If there is a viral infectious disease, then medical support, prevention, and enlightenment of the population may be advised.
(?x rdf:type es:Viral_Infection_Diseases) => (?x es:needs_MedicalSupport "true"), (?x es:needs_Prevention "true"), (?x es:needs_EnlightenmentOfPopulation "true")
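Once such flags have been inferred, SPARQL can retrieve them for the user interface. The following sketch is ours, not the chapter's code; the es: namespace URI is an assumption:

import org.apache.jena.query.Query;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QueryFactory;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;

public class ActionQuery {
    // Prints the situations for which the rules inferred a need for containment.
    public static void listContainmentCases(Model inferredModel) {
        String q = "PREFIX es: <http://example.org/es#> "   // namespace assumed
                 + "SELECT ?situation WHERE { ?situation es:needs_Containment \"true\" }";
        Query query = QueryFactory.create(q);
        try (QueryExecution exec = QueryExecutionFactory.create(query, inferredModel)) {
            ResultSet results = exec.execSelect();
            while (results.hasNext()) {
                System.out.println(results.next().get("situation"));
            }
        }
    }
}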

9.5 Conclusion and future work

This chapter proposes an ontology-supported rule-based reasoning approach, which has the ability to improve a decision maker's efficiency in emergencies. The system uses an ontology to determine the resources required to deal with emergency situations. It combines RBR and ontology to merge all the knowledge about emergencies and their corresponding actions. The gist of this chapter is an integrated approach of ontology with logic rules in order to recommend appropriate suggestions in emergencies and significantly reduce damage to property and loss of life. Future work is to integrate with early warning systems to reduce the destruction caused by natural disasters.

References

Chen, S.M., Huang, Y.H., Chen, R.C., 2013. A recommendation system for anti-diabetic drugs selection based on fuzzy reasoning and ontology techniques. Int. J. Pattern Recognit. Artif. Intell. 27 (04), 1359001.
De Maio, C., Fenza, G., Gaeta, M., Loia, V., Orciuoli, F., 2011. A knowledge-based framework for emergency DSS. Knowl. Based Syst. 24 (8), 1372–1379.
Hadiguna, R.A., Kamil, I., Delati, A., Reed, R., 2014. Implementing a web-based decision support system for disaster logistics: a case study of an evacuation location assessment for Indonesia. Int. J. Disaster Risk Reduct. 9, 38–47.
Jain, S., Jain, N.K., Goel, C.K., 2009. Reasoning in EHCPRs system. Int. J. Open Probl. Compt. Math. 2 (2).
Jain, S., Mehla, S., Mishra, S., 2016. An ontology of natural disasters with exceptions. In: 2016 International Conference on System Modeling & Advancement in Research Trends (SMART). IEEE, pp. 232–237.
Jain, S., Mehla, S., Agarwal, A.G., 2018. An ontology based earthquake recommendation system. In: International Conference on Advanced Informatics for Computing Research. Springer, Singapore, pp. 331–340.
Liu, Y., Fan, Z.P., Yuan, Y., Li, H., 2014. A FTA-based method for risk decision-making in emergency response. Comput. Oper. Res. 42, 49–57.
Mehla, S., Jain, S., 2019a. Rule languages for the semantic web. In: Emerging Technologies in Data Mining and Information Security. Springer, Singapore, pp. 825–834.
Mehla, S., Jain, S., 2019b. Development and evaluation of knowledge treasure for emergency situation awareness. Int. J. Comput. Appl. 1–11.
Mehla, S., Jain, S., 2020. An ontology supported hybrid approach for recommendation in emergency situations. Ann. Telecommun. 75 (7), 421–435.
Ngai, E.W.T., Leung, T.K.P., Wong, Y.H., Lee, M.C.M., Chai, P.Y.F., Choi, Y.S., 2012. Design and development of a context-aware decision support system for real-time accident handling in logistics. Decis. Support Syst. 52 (4), 816–827.
Papamichail, K.N., French, S., 2005. Design and evaluation of an intelligent decision support system for nuclear emergencies. Decis. Support Syst. 41 (1), 84–111.
Rahaman, S., Hossain, M.S., 2013. A belief rule based clinical decision support system to assess suspicion of heart failure from signs, symptoms and risk factors. In: 2013 International Conference on Informatics, Electronics & Vision (ICIEV). IEEE, pp. 1–6.
Rakib, A., Uddin, I., 2019. An efficient rule-based distributed reasoning framework for resource-bounded systems. Mob. Netw. Appl. 24 (1), 82–99.
Sahebjamnia, N., Torabi, S.A., Mansouri, S.A., 2017. A hybrid decision support system for managing humanitarian relief chains. Decis. Support Syst. 95, 12–26.
Vescoukis, V., Doulamis, N., Karagiorgou, S., 2012. A service oriented architecture for decision support systems in environmental crisis management. Future Gener. Comput. Syst. 28 (3), 593–604.
Xu, J., Nyerges, T.L., Nie, G., 2014. Modeling and representation for earthquake emergency response knowledge: perspective for working with geo-ontology. Int. J. Geogr. Inf. Sci. 28 (1), 185–205.
Yoon, S.W., Velasquez, J.D., Partridge, B.K., Nof, S.Y., 2008. Transportation security decision support system for emergency response: a training prototype. Decis. Support Syst. 46 (1), 139–148.
Yu, L., Lai, K.K., 2011. A distance-based group decision-making methodology for multi-person multi-criteria emergency decision support. Decis. Support Syst. 51 (2), 307–315.
Zhang, F., Zhong, S., Yao, S., Wang, C., Huang, Q., 2016. Ontology-based representation of meteorological disaster system and its application in emergency management: illustration with a simulation case study of comprehensive risk assessment. Kybernetes 45 (5), 798–814.



10 Health care cube integrator for health care databases

Shivani A Trivedi1, Monika Patel1 and Sikandar Patel2

1 S.K. Patel Institute of Management and Computer Studies-MCA, Kadi Sarva Vishwavidyalaya, India; 2 National Forensic Sciences University, Gandhinagar, India

10.1 Introduction: state-of-the-art health care system

The demand for real-time health care information is increasing among the health care sector's stakeholders in the current digital and smartphone era. Health care organizations, institutes, and their staff are working under immense pressure. While attempting to maintain the health care of patients and provide appropriate facilities, doctors, nurses, and other caretakers are dealing with multiple troublesome and time-consuming IT infrastructures and health care software solutions. Health records are stored and maintained in different forms by different health organizations: hospitals, laboratories, pharmacies, specialists, rehabilitation centers, physicians, and public health resources. Globally, government agencies are encouraging the integrated health care of patients. The Indian Council of Medical Research has provided the conceptual framework for integrated health care in India. It has also indicated that the Ministry of Health and Family Welfare and the Ministry of Ayurveda, Yoga & Naturopathy, Unani, Siddha and Homoeopathy (AYUSH) are the authorities for health care in India. A huge share of primary health care falls under the Ministry of Health and Family Welfare, while the responsibilities of AYUSH are unclearly defined. The current Indian government is prima facie in favor of integrating these two bodies into the Indian integrated health care system. In the health care sector, health data can be categorized as structured or unstructured. Structured data include medical prescriptions, doses, itineraries, visit types and times, CPT, ECG, pathology, histology, radiology, home treatments, monitors, medical tests and examinations, personal health records, administrative records for the maintenance of staff, and the resources of health care institutions.






Unstructured data cover medication instructions, allergies, differential diagnoses, digital clinical notes, physical examinations, paper clinical notes, medical image tracings, and health policies. Further, these health records are not maintained centrally. In 2007, the WHO Director-General noted that a comprehensive and integrated approach is required in the health sector for service delivery, since the biggest challenge is the fragmentation of health records. An International Data Corporation report indicates that the volume of health care data in 2013 was 153 exabytes and that the projected volume is 2,314 exabytes by 2020. So integrating data from multiple health care systems is an underlying challenge for any health care facility if it is to improve patient care and the performance indicators of health care providers. Understanding the numerous sources of health records and the complete health track of a patient becomes a daunting challenge for health care organizations. For instance, a patient goes to four dispensaries in the region to get opinions on a health problem. Each of these dispensaries has a different electronic medical record (EMR) system that stores the data collected at each visit. After the last visit, the patient picks up a prescription at the medical store, presenting a fifth data point. Any further health records, such as those collected from a wearable device or online doctor visits, will create other data points housed in separate databases. Collecting the data from these multiple databases becomes arduous and inconvenient. However, such integrated records would be vital in health analysis for predictive health care. In a review of research publications on extract, transform, and load (ETL) tools, it was found that the tools used in a health care data warehouse system store and organize the data for individual organizational needs. Extracting medical health records from multifarious data sources is the foremost challenge for ETL tools. These autonomous health records are disseminated across different semantic websites. Thus, the integration of health records from disseminated semantic websites for tracking the health record history of patients becomes a thorny, even inoperable, task for doctors, nurses, pathologists, and other medical practitioners. This motivated a further review of research publications in the areas of web semantics, health databases, health knowledgebase uses, and the challenges in creating a cutting-edge environment. In this direction, the research gap has been identified as having three aspects. The first is health data growth and health data as big data. Every year, health data increase by 48%, creating a big data problem: the data comprise voluminous structured health records stored in organized databases. Health records generated through wearable devices, telecounseling, online consultation, online medical stores, and medical examination machines enabled by the Internet of Things have been identified as data in motion. Also, health records are available in various forms concerning the nature of the data. Veracity is observed in the health records of various semantic webs and health care databases. The second aspect is the challenge of integrating the distributed health records, which are of a big data nature; this leads to the challenges of managing fragmented and heterogeneous health records. The third aspect is the affinity of health record sources and targeted health record information, the scalability of integration tools, and the quality of available firsthand information. The challenge is to provide a solution for conflicting and duplicate records in order to generate accurate health analytics. This research gap has motivated us to develop a solution that collects health records from various databases and generates health analytics. These analytics are stored in the knowledgebase in a document-oriented database, the Health Care Cube.




The Health Care Cube holds integrated, summarized data for a particular patient and is further utilized by health care users for decision making in patient treatment. In this chapter, the work is presented as the conceptual framework, design framework, and implementation framework of the Healthcare Cube Integrator (HCI). The Health Care Cube is a knowledgebase that stores health records collected from various health care databases, using the HCI to perform ETL. Furthermore, those health records are used for health analytics and the discovery of health knowledge from individual and organizational viewpoints. This chapter is organized in five sections. The first section presents the state of the art in health care systems, particularly the databases and knowledgebases in the health care information system. The second section elaborates on research methods and a review of research publications. The third section presents the conceptual framework and design framework. The fourth section contains the implementation framework and experimental setup, along with health data collection. The fifth section discusses result analysis, the conclusion, and future enhancements of the work.

10.2 Research methods and literature findings of research publications

Experiments based on case studies were established to research the health care sector for the development of a health care integrator, taking into consideration the conceptual, design, and implementation frameworks for health data collection. For framework design, a discussion and interview technique was used. Based on that, the result analysis, conclusion, and future work are elaborated. The research publication review is carried out in five segments: (1) Indian health policies, (2) electronic health record availability in India and its privacy challenges, (3) electronic health record databases/systems, (4) existing health knowledgebases and their infrastructures, and (5) existing solutions available for health data integration. The research gap and research objectives are elucidated from the established research path.

10.2.1 Indian health policies and information technology

The Indian health care system is not adequate for the country's huge population, especially in rural locations. Singh (2016) highlights that India suffers from a lack of medical resources, including doctors, nurses, and health care workers. The Indian government is also considering transformational change through holistic health care that can be globally accessible. In a huge society, the health of the people is the major dimension of public policy: the understanding of illness and its care, the scope of socioeconomic differences, the availability of health services, and their quality and cost. Epidemiological transition, demographic shift, environmental modifications, and social determinants of health outline public health in India. In 1978, health care was revised to handle these challenges. This work focuses on public health needs in India and their consequences, such as accomplishments, restrictions, and future actions. The health of a population involves living conditions, clean drinking water, nutrition, child development, sanitation, social security measures, and education.




According to the health policies of India (Chaudhuri and Roy, 2017), health information management systems need to be strengthened in order to ensure a district-level electronic database of information on health system components by 2020 and to establish a federated integrated health information architecture, Health Information Exchanges, and a National Health Information Network by 2025. The World Health Organization Regional Office for Europe (2016) has documented a systematic way of defining integrated health care, in which integrated health care is categorized into four types: organizational, functional, service, and clinical. The vertical and horizontal aspects of integration are also discussed there.

10.2.2 Electronic health record availability in India and its privacy challenges

Years of research and argument in health care services have been devoted to how a knowledgebase can breed novel thinking about making health care systems safe and sound, more efficient, more reasonable, and more patient oriented for diverse communities. Information and Communication Technologies (ICT) have transformed the way health care is delivered in India over the last few years. In this chapter, secondary data are collected from private websites, government documents and websites, and various international and national journals to see the effects of ICT on health. e-Health is a broader concept, including telemedicine, health information systems, EHRs (electronic health records), and much more. Growth and Wadhwa (2019) believe that collaboration between the government and private sectors will increase interventions for health services by improving quality and delivery. According to a PricewaterhouseCoopers (PwC) report by Mehta et al. (2016), digital technology appears to have broken into health care by shifting the way care delivery models provide results. The use of digital technology is expected to grow soon, especially in health care. Health data are a mixture of structured and unstructured data covering medication, diagnosis, procedures, genetics, social history, family history, symptoms, lifestyle, and environment. Data in health care are increasing rapidly, by 48% annually, due to the intervention of the Internet of Things (IoT) and computer-based systems, and these data come with variety, volume, veracity, and velocity. Such big data can be utilized for the betterment of the health of the population. da Costa et al. (2018) have worked on the tremendous amount of patient data collected manually in hospitals through different medical devices. They give an approach to interconnect medical devices through the Internet of Things and to integrate data from these distributed sources. They conclude that a patient-centered approach is critical and that the IoT paradigm will continue to provide more optimal solutions for patient management in hospital wards.

10.2.3 Electronic health records databases/system study

The patient-centric health care system and its analytical opportunities from the perspective of various stakeholders have been presented by Palanisamy and Thirunavukarasu (2019). Their paper reviewed distinct big data frameworks with respect to essential data sources, analytical capabilities, and application areas. It is seen that framework-based health care solutions always cater to the requirements of stakeholders. A study of the current status of health care management was done by Jani and Trivedi (2010) for the enhancement of the existing framework to make its execution more effectual. The authors believe that the improved framework will improve management and perform reality mining of health care services. Bolloju et al. (2002) presented an approach for integrating knowledge management and decision support, as both activities are interdependent.




Model marts and model warehouses are used as repositories to build an inclusive framework for enterprise decision support systems through an assortment of renovations. Chi et al. (2018) emphasize that healthy diet data are spread across the Internet in support of healthy eating patterns. The authors believe that searching various platforms and integrating the results for understanding is time-consuming and difficult. In this research, the authors designed a knowledge graph that provides semantic data retrieval for the integration of healthy diet information, which can be used for national sustainable development. Brook and Vaiana (2015) use evidence-based truths to develop a framework to structure a national conversation related to health care. The authors pose a number of questions by which health care systems can be changed for better output. They have designed 12 thoughtfully crafted key facts related to health services, covering the science of medicine and health services research itself. Bellatreche et al. (2018) mention that an advanced database system requires changes and that these changes are based on five factors: (1) the evolution of data sources; (2) changes in the real world represented in an integration system; (3) the evolution of domain ontologies and knowledgebases (such as DBpedia, Freebase, the Linked Open Data Cloud, and Yago) usually involved in the construction of these databases; (4) new user requirements; and (5) the creation of simulation scenarios (what-if analysis). Wulff et al. (2018) worked on the use of openEHR archetypes and AQL as a feasible approach to bridge the interoperability gap between local infrastructures and clinical decision support systems (CDSSs). They mention that theirs is the first openEHR-based CDSS that is technically reliable and capable in a real context and that facilitates clinical decision support for complex tasks.

10.2.4 Study of existing health knowledgebases and their infrastructures

Arkin et al. (2018) present the challenges of integrating heterogeneous and distributed data and of data sharing, integration, and analysis. KBase provides a web-based UI that includes many analytical functions for the various data types of the U.S. Department of Energy. Its sharing capabilities increase the opportunities for collaboration and mutual support among scientists. CDSSs are known to be the technological solution for huge amounts of health data, the complexity of the data, and limited time, but CDSSs provide limited utilities for rapid learning. To achieve effective health care, the acceptance and implementation of all sources of knowledge and learning methods can be addressed by new solutions, says Yu (2015). Yu et al. (2019) believe that the delivery of the right medicine to the right patient at the right dose at the right time, depending on the distinct features of each patient, is a crucial piece of precision medicine, whereby drug efficiency improves and adverse drug reactions decrease. On the other hand, it is exacting to get firsthand information regarding major diseases, drugs, or genes. The authors proposed PreMedKB, a precision medicine knowledgebase for data integration from verified and normalized databases that provides efficient search portals. Müller et al. (2019) have developed an open-access knowledgebase to diagnose diseases by assisting medical decision support. They demonstrate the implementation of diagnosis with the set-covering algorithm and compare it with their solution. Köhler et al. (2019) have provided a solution to expand the standardized vocabulary of the phenotypic abnormalities associated with more than 7000 diseases.


Their solution, the Human Phenotype Ontology (HPO), provides interoperability with other ontologies and has been used to improve diagnostic accuracy by incorporating model organism data. It also plays a key role in the popular Exomiser tool, which identifies potential disease-causing variants from whole-exome or whole-genome sequencing data. Rotmensch et al. (2017) worked on generating a knowledge graph from patient health records; they used an automatic approach to create graphs from EMRs in any number of domains quickly and without prior knowledge, and they mention a future extension of their research toward an integration algorithm for extracting health records from more diverse data sources. Tang et al. (2018) describe Drug Target Commons, an open-data web platform that offers target bioactivity data annotation, standardization, and intraresource integration; the platform invites researchers to discover and repurpose drugs and to reuse and extend its bioactivity data. Venkatesan et al. (2018) highlight that the diversified and dispersed data in the agronomic domain make it difficult for agronomic researchers to integrate information. The authors developed AgroLD, a knowledgebase based on semantic web technologies that integrates large volumes of data on plant species for a specific community; AgroLD focuses on a domain-specific knowledge platform for solving difficult biological and agronomical queries.

10.2.5 Study of existing solutions available for health data integration

Ong et al. (2017) proposed the rule-based Dynamic-ETL engine for integrating source data sets. The engine offers a scalable solution for health data transformation and reduces manual effort by combining automatic query generation with customized code. The D-ETL rule engine detects inconsistent and duplicate data and delivers adequate performance through an adaptable and transparent process. Chen et al. (2019) proposed a decentralized data hub called SinoPedia, which bundles linked data services that can republish resource description framework (RDF) data on a single platform. Among these, the linked data publishing service can rewrite content from knowledgebases through a resource-forwarding mechanism to scattered endpoints, and the platform also provides the LDTS, LDQS, LDPS, and LDKS services; any SPARQL-capable application can use them. An event-based approach for knowledge extraction that works on a hybrid data repository for health care is proposed by Yu et al. (2014). Extraction builds on preprocessing that combines a Hadoop MapReduce algorithm with data mining application programming interfaces, generating large knowledgebases that are integrated into a NoSQL data store; in post-processing, the event-based extraction approach materializes RDF in a semantic repository for data analytics. The proposed approach has been implemented and serves as the backbone of the MyHealthAvatar application. Chakraborty et al. (2017) bring to light that structured and unstructured data are growing rapidly and are available for business advancement and decision making in health care, medical care, transportation, and other areas. Though available, the data are inaccessible to users due to unwieldy formats; the authors recognized the need for a tool that makes such data useful by transforming them so that they can be analyzed and visualized for business decisions. To solve the problem of users' unwillingness to share data, Wang (2019) suggests a systematic approach for multistakeholder participation in data collection, co-establishment of administrative authority, information standards, a data audit mechanism, measures for expanding data integration to multiply effectiveness, and adoption of the Public-Private Partnership model.


Capterra (2017) compared various popular electronic health record systems. Using a unique algorithm, it implemented a system to interact with active customers, such as health organizations, and active users, such as medical professionals; this implementation increased medical practices' productivity, integrated health care delivery to patients, etc. Heyeres et al. (2016) conducted systematic literature reviews on health service integration to determine how studies report the characteristics of included studies, service integration types, methods, and outcomes; health policy makers can draw on this for decision making during the development of health policy. The authors outline a range of strategies—integrated care pathways, governance models, integration of interventions, collaborative/integrated care models, and health care service integration—and conclude that there is no "one size fits all" approach. Prasser et al. (2018) developed the DIFUTURE architecture within German medical informatics; for this purpose, they performed data integration for future medicine based on targeted diagnosis and therapy, using a use-case-driven and diagnosis-oriented approach for medical research. The proliferation of disparate information systems collecting huge amounts of heterogeneous data is a difficult challenge in the health care area; addressing it is essential to bring about a new age of e-health care and patient-centered health care delivery systems. Jayaratne et al. (2019) proposed a platform for clinicians, patients, medical staff, and historical data from various health systems; it integrates and accommodates distinct data sources such as wearable IoT devices. A report by Gartner (2019) compares various data integration tools in the market; Alteryx Designer and Informatica are two widely used tools for data integration.

10.2.6 Health care processes and semantic web technologies

Bellatreche et al. (2018), Dalal et al. (2019), and Tiwari et al. (2018) have noted the importance of constructing adequate ontologies and vocabularies to integrate the semantics of data sources and to make their content machine understandable and interoperable. Patients, doctors, and other health care entities need not be restricted to hospitals; they can club together to share data in order to achieve common objectives, says Yu (2015). Such shared data from all dimensions of health care can be converted into knowledge for the patient's benefit. Köhler et al. (2019) have worked on the HPO, which has numerous and increasingly diverse users. To meet these emerging needs, they expanded the project by adding new content, language translations, mappings, and computational tooling, as well as integrations with external community data. The HPO continues to collaborate with clinical adopters to improve specific areas of the ontology and to extend standardized disease descriptions; the newly redesigned HPO website is www.human-phenotype-ontology.org. Köhler et al. (2019) also present a methodology for running semantic queries without transformation, permitting direct database access via SQL. Ontology-SQL offers semantic expressions, a framework for annotating free-text-containing database tables, and a parser for rewriting the queries and passing them to the database server. The presented framework was evaluated by comparing several semantic queries, in which the quality of annotation improved.


The authors found that the systematic analysis of free-text-containing medical records is feasible with typical tools. Narula et al. (2013) discuss the superfluity of data on the Web, which makes it tricky to access and retrieve the appropriate information; to demonstrate a new technique for extracting appropriate information, the authors performed a case study on an online shopping system using JADE (Java Agent Development Framework). Jain and Singh (2013) focus on diverse methods for mapping databases and ontologies. An ontology is a structure defining the meanings of metadata that are collected and standardized in order to store data in the new era; since online and offline data exist in various forms, it is necessary to map ontologies and databases for knowledge acquisition. Patel and Jain (2018) believe that the development of a knowledgebase requires good knowledge representation, and the authors focus mainly on knowledge representation in their work. Extracting meaning from incomplete, vague, and voluminous data is difficult; unlike a database, a knowledgebase gives control in getting information. The authors provided a solution by developing an effective knowledgebase and studying the problems of knowledge representation techniques. The outcome of the literature survey motivated us to develop a knowledgebase to store health care data and an ETL process to extract health data from various health databases. Based on that, an experimental research method was finalized to identify the benefits and challenges related to the implementation of a health care knowledgebase, and it is presented in the next section.

10.2.7 Research objectives

Based on the review of research publications and the research gap, the research objective is to develop a document-oriented knowledgebase named Health Care Cube and an ETL process named HCI. The Health Care Cube is a knowledgebase that stores health information fed through the HCI, an algorithm developed for integrating health records from assorted databases. This has been implemented by means of a case study: a Health Care Cube knowledgebase was built from the health records of employees of an organization, integrating those records from assorted databases. The Health Care Cube knowledgebase is based on the latest disruptive database technologies and aims to store a variety of data in a schema-free form, i.e., JSON format. Thus the proposed HCI framework is a step forward in integrating health care databases with the health care knowledgebase in order to enable health care analytics, in addition to identifying the benefits and challenges of Health Care Cube and HCI implementation and execution.

10.3 HCI conceptual framework and designing framework

The conceptual framework of the HCI is based on a generic ETL framework. Generic ETL is defined as a data integration process involving the steps of extraction, transformation, and loading, taken to acquire data from diversified sources in order to build a data warehouse, data hub, data lake, or knowledgebase. As mentioned earlier, the HCI populates the Health Care Cube knowledgebase, which stores patient-centric health records. Fig. 10.1 depicts the patient-centric health record consisting of the basic health information, doctor data, pathology data, and a collection of health records from wearable devices. A minimal sketch of this generic ETL flow follows.
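The following sketch illustrates the generic extraction-transformation-loading flow just described; it is not the actual HCI code, and all names (the CSV source, the field names, and the health_cube database) are illustrative assumptions.

import csv
from pymongo import MongoClient

def extract(path):
    # Extraction: read raw health records from one diversified source (here, a CSV export).
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    # Transformation: normalize fields into schema-free, patient-centric JSON documents.
    return [{"patient_name": r["name"], "visit_date": r["date"], "diagnosis": r["dx"]}
            for r in rows]

def load(docs):
    # Loading: store the documents in the knowledgebase (a MongoDB document store).
    MongoClient()["health_cube"]["visits"].insert_many(docs)

load(transform(extract("clinic_export.csv")))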


FIGURE 10.1 Patient-centric healthcare cube integrator conceptual framework and designing framework.

We have used agile methods as the development strategy for HCI. In the current scenario, the identified health data sources are available in physical, structured database, and cloud database forms. Basic information about the patient—name, birth date, age, blood group, diabetic condition, smoking, drinking, etc.—is fed directly into the system by the users/patients themselves. A user/patient can consult more than one doctor—an ENT specialist, a heart specialist, etc.—for medical treatment. For the prototype implementation in this chapter, we have taken data from the clinic of one employee of the organization; the doctor has maintained his/her data for the past 10 years in an MS Access-based Microsoft VB application. It was found that one employee/patient/user can consult more than one specialized doctor. Similarly, for pathological laboratory data, one patient may have such data more than once and may consult more than one pathological laboratory. We also collected some physical data and some data in SMS file format received from a patient's Android mobile phone. During our analysis, it was found that the organization's employees use very different wearable devices; here we have taken the Mi-Fit band as a source of data. Fig. 10.2 depicts the data design in the form of a class diagram for patient-centric data, and Table 10.1 represents the conceptual design requirements for the knowledgebase. Based on the analysis and design, the users identified for HCI are the user/patient/employee, employer, pathological lab technician, and doctors; the modules identified are extraction services, transformation and loading services, application services, and information delivery. We have implemented the three basic ETL processing steps—extraction, transformation, and loading—in eight steps, as represented in Fig. 10.3. The first step is the selection of data sources, and browsing health data from the selected source is the second step. In the third step, data extraction takes place with user intervention. In the fourth step, the user can choose a transformation of the data, such as month-wise or day-wise aggregation of the extracted health records. In the fifth step, the data are transferred, and subsequently the new aggregate multidimensional data structure for the knowledgebase is generated. In the sixth step, the user-approved data are loaded into the patient-centric knowledgebase, and in the seventh step, all data are stored for further processing. The last step is used by the user/patient/employee to generate knowledge from his/her health records, such as how many times he/she has visited a particular doctor, with the option to drill down into the data of the last visit to that doctor or of a particular date. Similarly, pathological lab data or data from wearables can also be rolled up and drilled down for patient-centric analysis; a small aggregation sketch follows.
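As an illustration of the month-wise aggregation chosen in step 4, the following sketch groups extracted visit documents by year and month with a MongoDB aggregation pipeline; the collection name and the date field are assumptions based on the JSON samples shown later, not the actual HCI code.

from pymongo import MongoClient

visits = MongoClient()["health_cube"]["visits"]
monthly = visits.aggregate([
    # Group visit documents by calendar month of the BSON date field.
    {"$group": {"_id": {"year": {"$year": "$date_time"},
                        "month": {"$month": "$date_time"}},
                "visit_count": {"$sum": 1}}},
    {"$sort": {"_id.year": 1, "_id.month": 1}},
])
for row in monthly:
    print(row["_id"], row["visit_count"])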

FIGURE 10.2 Class diagram of healthcare cube integrator patient-centric data.

TABLE 10.1 Summarized input, process, and output of the HCI system.

Input device/user | Input | ETL process | Output | Output device/user
Paper/patient | Physical health records | HCI | Number of visits to a doctor; frequency of medical/pathological tests | Web dashboard for patient/employer/doctor/pathological technician
Patient/doctor | Health management system | HCI | Number of visits to a doctor, last visited date, information about the last visit, a summary of the number of doctors visited, etc. | Web dashboard for patient/employer
Patient/technician/paper | Pathological laboratory | HCI | Frequency of medical/pathological tests | Web dashboard for patient/employer/doctor/pathological technician
Wearable devices/user/AppCloud | AppCloud data: heartbeats, walking, sleeping, etc. | HCI | Total walking footsteps per week/month, etc.; average BP per week/month, etc. | Web dashboard for patient/employer/doctor/pathological technician

ETL, extraction, transformation, and loading; HCI, healthcare cube integrator.


FIGURE 10.3 HCI ETL processing steps. ETL, Extraction, transformation, and loading; HCI, healthcare cube integrator.

Based on system requirement analysis and design, the following design challenges were identified:
• Employees of the organization undergo multiple medicine therapies.
• More than 70% of private practitioners keep physical or semistructured health records of their patients.
• An unwillingness to share health data exists among the different health care communities.
• Web semantic health data are not available for all the employees of the organization.
In this case study, we have considered a solution for the employees of an organization, in which employees have different doctors and different pathological laboratories and use different wearable devices. India has multiple medicine therapy systems—Ayurveda, Siddha, Unani and Yoga, Naturopathy, Homoeopathy, and Allopathy—and it was observed that one employee may consult more than one medicinal system for treatment. It was also observed that few doctors in private practice maintain patients' health records electronically, so it is a big challenge to find electronic health records for all employees of the organization; likewise, web semantic data for employees' electronic health records are not available.


Another challenge is the data privacy issue for doctors who are asked to share the electronic records of their patients with other specialized doctors. It was therefore decided to follow a generic layered architecture for a prototype implementation of HCI, which is described in the next section.

10.4 Implementation framework and experimental setup

The implementation framework of the HCI is based on a generic data warehouse layered architecture for the ETL process. Fig. 10.4 depicts the five layers of the implementation architecture of the HCI, each with its own functionality.

FIGURE 10.4 Implementation layered architecture of HCI. HCI, Healthcare cube integrator.


TABLE 10.2 Hardware and software requirements for the healthcare cube integrator.

Software requirement | Hardware requirement
MongoDB Server 4.2, JSON | Min 4 GB RAM
Django | Core i3 processor
Python 3 |
AWS Cloud Service |

The first layer, data sources, covers the various sources of a patient's electronic health data: mobile applications, wearable devices, health record management systems, IoT-enabled health devices, pathological data, existing knowledgebases for medicine, protein and biological databases available as web semantic data, and physical data available from employees and organizations. Data extraction at this layer, subject to permission, yields electronic health data in formats such as .csv, .json, .rdf, and .dbf. All this information is extracted in the next layer of extraction services, implemented using the Amazon cloud service and Python through the Django framework with MongoDB as the back end. This service also ensures data privacy through digital certification and user authentication for data sharing among users. The next layer provides data staging and transformation services: it takes the input data from the previous layer, through cloud services, as the base electronic health records of the patient and stages these data into the staging area on the MongoDB document storage database. Here users/patients can customize their needs for aggregating health records. Aggregation and transformation take place in the data staging area, and the summarized knowledge is generated and stored in the document store for the next layer of app services. This is implemented using a content delivery network by means of Bootstrap technology, which delivers the information requested by the user/patient through the web dashboard available at the presentation layer. Table 10.2 presents the minimum hardware and software requirements for implementing the layered architecture of HCI. Fig. 10.5 presents the star schema for the equivalent RDBMS data model. This defines the exact facts to be identified for the patient's needs, such as how many times he/she visited the ENT specialist in a month or a year, or how many doctor visits have taken place; the last visit to the doctor and the treatment plan are available in one click. From the figure, the identified dimensions are doctor, time, pathological laboratory, and wearable device. Data from the identified operational sources are extracted, aggregated, and stored in document data storage in JSON format. The implementation of the data model for document storage of patient basic data, doctor data, and pathological data is as follows:

JSON Format for Basic Patient Data

{"_id":{"$oid":"5e6dfe8ba9481733a7cf16de"},"id":1,"MyUser_id":null,"height":"5.7","weight":"77","dob":{"$date":"2020-03-13T00:00:00.000Z"},"age":"22","blood_group":"O+","diabetic":false,"heigh_bp":false,"smoking":false,"drinking":false}
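As an illustration, a document like the one above can be loaded into the knowledgebase with pymongo; this is a sketch, with the database and collection names assumed, while the field names (including "heigh_bp") mirror the stored document above.

from datetime import datetime
from pymongo import MongoClient

patient = {
    "id": 1, "MyUser_id": None, "height": "5.7", "weight": "77",
    "dob": datetime(2020, 3, 13),  # stored as a BSON date
    "age": "22", "blood_group": "O+",
    "diabetic": False, "heigh_bp": False,  # field name as stored in the cube
    "smoking": False, "drinking": False,
}
MongoClient()["health_cube"]["patients"].insert_one(patient)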

FIGURE 10.5 Equivalent relational database model. BP, Blood pressure.

JSON Format for Doctor's Data

{"_id":{"$oid":"5e6e030155ffbf1b3bda25c0"},"id":1,"MyUsers_id":1,"doctor_name":"RK","specialization":"ENT"}

JSON Format for Patient's Treatment Data by Doctors

{"_id":{"$oid":"5e6e040a55ffbf1b3bda25c1"},"id":1,"MyUser_id":1,"doctor_id":1,"days":"55","date_time":{"$date":"2020-03-13T12:04:02.000Z"},"candex":"• COMPLAINT OF R- ear discharge L- deafness B- ear pain 4 days hghg • SWELLIN OF NASAL VESTIBULE B- Is it associated with fevwr? • POST OP.FOLLOW UP EAR SURGERY B- wound healed B- Post aural raw area B- Wound healed B- canal wick not in situ-removed by.pt. • POST OP. FOLLOW UP NASAL SURGERY B- headach B- foul smell B- No improovement in symptomps || AFTER FOOD AVOID DIRECT INJURY ON THE NOSE AVOID DRIVING \u0026 WORKING IN A KITCHEN AVOID DUSTY AND POLUTED ATMOSPHEAR","dandtr":"** R Acute Follicular Tonsilitis ------------------------------------------------------------------------------------------# {1}-ONE SPOON (5 ML.) AT BED TIME [7]# {4} -1/2 TAB AT BED TIME [7]# {1}-TWO DROPS THREE TIMES A DAY IN EACH NOSTRIL [7]","data_time":{"$date":"2020-03-15T10:31:38.434Z"}}


JSON Format for Patient's Pathological Data

{"_id":{"$oid":"5e6e049555ffbf1b3bda25c2"},"id":1,"MyUser_id":1,"pathology_name":"healthcare","report_name":"Haemogram Report"}

JSON Format for Patient's Pathological Report Data

{"_id":{"$oid":"5e6e04d355ffbf1b3bda25c3"},"id":1,"MyUser_id":1,"pathology_id":1,"haemoglobin":"15.2","RBC":"5.19","WBC":"6100","platelet_count":"2.33","Polymorphs":null,"Lymphocytes":"41","eosinophils":"6","monocytes":"3","basophils":"0","pcv":"45","mcv":"86.7","mch":"29.3","mchc":"38.8","rdw_cv":"12.9","RBC_s":"RBCs are normochromic normocytic","WBC_s":null,"Platelet_s":"Adequate and Normal","result":"Malarial Parasites not seen","data_time":{"$date":"2020-03-14T00:00:00.000Z"}}

JSON Format for Patient's Wearable Device Data

{"_id":{"$oid":"5e6e071255ffbf1b3bda25c4"},"id":1,"MyUser_id":1,"heart_bit":"40","bp":"80","runing":"23","sleeping":"50"}

The code section in Table 10.3 shows the settings for the path of HCI in Python. The code section in Table 10.4 shows the administrative settings of HCI in Python. The code section in Table 10.5 shows the initial migration model of HCI in Python, and the code section in Table 10.6 shows the migration model for adding the document data store timestamp fields. This section of the chapter covers the implementation of the knowledgebase framework of HCI. A large amount of health data is available, but it is scattered and stored in different file formats, so it cannot be used directly for analysis or decision making. To use such data, standardization and integration are required so that they can be stored in a database from which various dimensional data can be accessed to analyze a 360-degree view of the patient's record. For a set of patients, the Health Care Cube integrates and stores health records collected from the various health care sources, such as basic health data, doctors' clinical data, pathological data, and health records from wearable IoT devices; a query sketch for such a 360-degree view follows.
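A 360-degree patient view over the document collections above can be assembled with MongoDB's $lookup stage, as in this sketch; the collection names are assumptions, while the id/MyUser_id fields follow the JSON samples.

from pymongo import MongoClient

db = MongoClient()["health_cube"]
view = db["patients"].aggregate([
    {"$match": {"id": 1}},  # pick one patient
    # Join the doctor, pathology, and wearable documents for that patient.
    {"$lookup": {"from": "doctor_visits", "localField": "id",
                 "foreignField": "MyUser_id", "as": "visits"}},
    {"$lookup": {"from": "pathology_reports", "localField": "id",
                 "foreignField": "MyUser_id", "as": "reports"}},
    {"$lookup": {"from": "wearable_data", "localField": "id",
                 "foreignField": "MyUser_id", "as": "wearables"}},
])
print(next(view))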

TABLE 10.3 HCI settings.

{ "python.pythonPath":"C:\\Users\\Sikandar\\.virtualenvs\\upl-F38S7Q9W\\Scripts\\python.exe" } { "python.pythonPath":"C:\\Users\\Sikandar\\.virtualenvs\\upl-F38S7Q9W\\Scripts\\python.exe" }


TABLE 10.4 Administrative settings.

#!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys


def main():
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'KNN.settings')
    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    execute_from_command_line(sys.argv)


if __name__ == '__main__':
    main()  # standard Django entry point, restored so the script is runnable

TABLE 10.5 Migration model settings.

# Generated by Django 3.0.4 on 2020-03-13 09:39
from django.db import migrations, models


class Migration(migrations.Migration):
    initial = True
    dependencies = []
    operations = [
        migrations.CreateModel(
            name='Genral',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True,
                                        serialize=False, verbose_name='ID')),
                ('drname', models.CharField(blank=True, max_length=255, null=True)),
                ('patient_name', models.CharField(blank=True, max_length=255, null=True)),
                ('days', models.CharField(blank=True, max_length=255, null=True)),
                ('date_time', models.DateTimeField(null=True)),
                ('candex', models.TextField(blank=True, null=True)),
                ('dandtr', models.TextField(blank=True, null=True)),
                ('vital_date', models.TextField(blank=True, null=True)),
            ],
        ),
    ]

TABLE 10.6 Migration model for adding document data store settings.

# Generated by Django 3.0.4 on 2020-03-14 05:21
from django.db import migrations, models
import django.utils.timezone


class Migration(migrations.Migration):
    dependencies = [
        ('Ahn', '0004_auto_20200314_0959'),
    ]
    operations = [
        migrations.AddField(
            model_name='doctor_detail',
            name='data_time',
            field=models.DateTimeField(auto_now_add=True, default=django.utils.timezone.now),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='pathology_detail',
            name='data_time',
            field=models.DateTimeField(auto_now_add=True, default=django.utils.timezone.now),
            preserve_default=False,
        ),
    ]

FIGURE 10.6 Patients’ data. BP, Blood pressure.

Fig. 10.6 shows a screenshot of HCI with the Basic Data tab selected, containing basic information on a patient—patient name, height, date of birth, age, and blood group—along with indicators of whether the patient is diabetic, has a blood pressure issue, smokes, or drinks alcohol. These basic data reflect the general health issues the patient suffers from; such information can be recorded at the time of registration or upon the first visit to the clinic.


FIGURE 10.7 Doctors' data.

FIGURE 10.8 Patients' complaints and examinations data.

Fig. 10.7 shows another tab of the HCI containing doctors' data, separated into two categories: the patient's visiting data and historical data. This screenshot includes the doctor's name, the doctor's specialization, the total number of patient visits, and the date of the latest visit. Such data are stored at the clinic by the doctor during routine checkups and can eventually be used to analyze the patient's health-issue patterns over time; a roll-up sketch follows. Fig. 10.8 shows patients' complaints and examinations, as well as the diagnoses and treatments given by the doctor. Detailed data regarding diagnoses and treatments are stored for the given time period and can be used to find the last treatment type for a particular diagnosis and its effect; such data help analyze and diagnose current health problems more efficiently and effectively. Fig. 10.9 shows a patient's pathological data generated during treatment. This tab collects data from pathological labs in a variety of reports; detailed data of the pathological tests can be viewed in the history. It stores all the reports done at a particular time, which in turn is helpful for analyzing the time-wise progress or repetition of treatment, which can speed up health recovery. Fig. 10.10 shows detailed data of pathological tests for a selected patient, including test data on hemoglobin, RBC count, WBC count, platelet count, and other details depending upon the tests recommended by the doctor.
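A roll-up of the doctors' tab—total visits and the latest visit per doctor for one patient—could be computed as below; this is a sketch, with the collection name assumed and the field names taken from the treatment JSON sample.

from pymongo import MongoClient

visits = MongoClient()["health_cube"]["doctor_visits"]
per_doctor = visits.aggregate([
    {"$match": {"MyUser_id": 1}},  # one patient
    {"$group": {"_id": "$doctor_id",
                "total_visits": {"$sum": 1},
                "last_visit": {"$max": "$date_time"}}},
])
for doc in per_doctor:
    print(doc["_id"], doc["total_visits"], doc["last_visit"])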


FIGURE 10.9 Patients’ pathological data.

FIGURE 10.10 Patients' pathological reports. RBC, Red blood cell; WBC, white blood cell.

FIGURE 10.11 Internet of Things—wearable device data.

Fig. 10.11 shows the wearable device data collected from a patient's IoT devices with sensors. These data capture the patient's day-to-day activities and include heartbeat, blood pressure, steps walked throughout the day, sleeping patterns, and other routine data, depending upon the device's features. The day-to-day activity of patients helps the doctor understand their condition and incorporate these data to improve the patient's health; a weekly roll-up sketch follows.
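Weekly roll-ups such as total steps or average BP per week, as listed in Table 10.1, could be computed as in the sketch below; the collection name, a per-reading data_time timestamp, and the numeric casts are assumptions (the wearable JSON sample stores values as strings and omits a timestamp).

from pymongo import MongoClient

wearables = MongoClient()["health_cube"]["wearable_data"]
weekly = wearables.aggregate([
    {"$group": {"_id": {"$isoWeek": "$data_time"},  # assumed timestamp field
                "avg_bp": {"$avg": {"$toDouble": "$bp"}},
                "total_steps": {"$sum": {"$toDouble": "$runing"}}}},  # field name as stored
    {"$sort": {"_id": 1}},
])
for week in weekly:
    print(week)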


The developed application was implemented with the data of one employee of the organization and tested with two doctors' data, two pathological data sources, and one wearable device's data.

10.5 Result analysis, conclusion, and future enhancement of work

10.5.1 Result analysis

During HCI system design, development, and implementation, discussions were arranged among organizers, employees, doctors, patients, pathological laboratory technicians, and researchers. It was found that the majority of the organization's employees avail themselves of medical treatment from private practitioners, and that only 20% of private practitioner doctors maintain electronic health records, particularly in Gandhinagar, Gujarat, India. Furthermore, within the organization, 70% of employees follow and track their health treatment through Ayurveda, homeopathy, and allopathy. It was also observed that two private doctors were concerned about the data privacy of their treatment plans when sharing data with other doctors across the amalgam of different medical treatments. It was further found that 80% of government practitioners and medical hospitals maintain electronic health records through a health management system; at the same time, there is no integration between the different government medical systems for maintaining patients' health records. So it is a big challenge to maintain and integrate patient-centric data from such diverse islands of medical systems. The researchers were advised to make this project patient-centric and to involve the organization's central medical health center. Patients find it convenient to use the medical health dashboard to keep track of health records with multiple views. Organizers recommended adding medical insurance data, especially since the organization provides insurance facilities to its employees, so that employees' health records can be tracked and maintained transparently along with insurance data. Another challenge is that employees do not consistently have their pathological tests done at one laboratory, and all laboratory test data systems are likewise islands of information—yet another big challenge. In the currently implemented system, patients therefore authenticate and grant permission to customize the dashboard so that, with user intervention, the required data can be extracted and transformed and a knowledgebase can be generated.

10.5.2 Conclusion

From the result analysis of the implementation of HCI and the discussion, the following patient needs are met:
• Patients can track the total number of visits to a specialized doctor, pathological test data, and wearable device test data, and they can perform drill-down and roll-up to obtain summarized and detailed data easily.


• Organizers can track employees' data easily. Even when a patient moves from one doctor to another, the treatment data can be shared easily in the case of multiple treatments, even across multiple medical systems.
From the result analysis of the implementation of HCI, successful implementation has the following benefits:
• It is possible to extract information from the islands of electronic health record systems.
• Data privacy can be maintained through user/patient authentication, doctor authentication, and lab technician authentication.
• Aggregate information generated from detailed data is easily available—at one's fingertips—for tracking health records, thanks to implementing the data on document storage.

10.5.3 Future enhancement of work

The implemented work can be enhanced in the future with mobile applications and mobile services to make it more real-time. The system's performance can be evaluated with more data, and more automation can be incorporated for data extraction. HCI can be integrated with an existing medicine knowledgebase to yield more knowledge about the medicines given to the patient, and it can be upgraded to integrate government medical systems. To handle health record privacy and data monetization, the framework of this proposed architecture can be implemented with Hyperledger Fabric blockchain technology.

Acknowledgment

We are very thankful to our management for allowing us to perform this experiment and for encouraging us to take this project to the government level. We are thankful to Dr. Jignesh Rajvir, M.S. (ENT), D.L.O., and Dr. Ashil Manvadaria, M.S. (ENT), for giving us a platform to study their system and for providing valuable information to carry on this project.

References

Arkin, A.P., et al., 2018. KBase: the United States Department of Energy Systems Biology Knowledgebase. Nat. Biotechnol. 36 (7), 566–569. Available from: https://doi.org/10.1038/nbt.4163.
Bellatreche, L., Valduriez, P., Morzy, T., 2018. Advances in databases and information systems. Inf. Syst. Front. Available from: https://doi.org/10.1007/s10796-017-9819-2.
Bolloju, N., Khalifa, M., Turban, E., 2002. Integrating knowledge management into enterprise environments for the next generation decision support. Decis. Support. Syst. 33 (2), 163–176. Available from: https://doi.org/10.1016/S0167-9236(01)00142-7.
Brook, R.H., Vaiana, M.E., 2015. Using the knowledge base of health services research to redefine health care systems. J. Gen. Intern. Med. 30 (10), 1547–1556. Available from: https://doi.org/10.1007/s11606-015-3298-2.
Capterra, 2017. The 20 Most Popular EMR Software Solutions, pp. 1–4. Available at: https://www.capterra.com/infographics/top-emr-software.
Chakraborty, J., Padki, A., Bansal, S.K., 2017. Semantic ETL—state-of-the-art and open research challenges. In: Proceedings—IEEE 11th International Conference on Semantic Computing, ICSC 2017, pp. 413–418. Available from: https://doi.org/10.1109/ICSC.2017.94.
Chaudhuri, B.R., Roy, B.N., 2017. National health policy. J. Indian Med. Assoc. 72 (6), 149–151.
Chen, T., et al., 2019. SinoPedia—a linked data services platform for decentralized knowledge base. PLoS One 14 (8). Available from: https://doi.org/10.1371/journal.pone.0219992.


Chi, Y., et al., 2018. Knowledge management in healthcare sustainability: a smart healthy diet assistant in traditional Chinese medicine culture. Sustainability 10 (11). Available from: https://doi.org/10.3390/su10114197.
Dalal, S., Jain, S., Dave, M., et al., 2019. A systematic review of smart mental healthcare. In: 2019 5th International Conference on Cyber Security and Privacy in Communication Networks (ICCS). Available at SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3511013.
da Costa, C.A., et al., 2018. Internet of Health Things: toward intelligent vital signs monitoring in hospital wards. Artif. Intell. Med. 61–69. Available from: https://doi.org/10.1016/j.artmed.2018.05.005.
Gartner, 2019. Data Integration Tools Market. Available at: https://www.gartner.com/reviews/market/data-integration-tools.
Growth, T., Wadhwa, M., 2019. ICT Interventions for Improved Health Service Quality and Delivery in India: A Literature Review (January).
Heyeres, M., et al., 2016. The complexity of health service integration: a review of reviews. Front. Public Health. Available from: https://doi.org/10.3389/FPUBH.2016.00223.
Jain, V., Singh, M., 2013. A framework to convert relational database to ontology for knowledge database in semantic web. Int. J. Sci. Technol. Res. 2 (10), 9–12.
Jani, N.N., Trivedi, S.A., 2010. ICT drivers in effective health care management system. J. Sci. 1 (1), 131–135.
Jayaratne, M., et al., 2019. A data integration platform for patient-centered e-healthcare and clinical decision support. Future Gener. Comput. Syst. 92, 996–1008. Available from: https://doi.org/10.1016/j.future.2018.07.061.
Köhler, S., et al., 2019. Expansion of the Human Phenotype Ontology (HPO) knowledge base and resources. Nucleic Acids Res. 47 (D1), D1018–D1027. Available from: https://doi.org/10.1093/nar/gky1105.
Mehta, R., Raaghavan, V., Thadani, N., 2016. Indian Healthcare on the Cusp of a Digital Transformation. Available at: https://www.gita.org.in/Attachments/Reports/indian-healthcare-on-the-cusp-of-a-digital-transformation.pdf.
Müller, L., et al., 2019. An open access medical knowledge base for community driven diagnostic decision support system development. BMC Med. Inform. Decis. Mak. 19 (1), 1–7. Available from: https://doi.org/10.1186/s12911-019-0804-1.
Narula, G.S., Jain, V., Singh, M., 2013. An approach for information extraction using JADE: a case study. JGRCS 4 (4), 186–191.
Ong, T.C., et al., 2017. Dynamic-ETL: a hybrid approach for health data extraction, transformation and loading. BMC Med. Inform. Decis. Mak. 17 (1). Available from: https://doi.org/10.1186/s12911-017-0532-3.
Palanisamy, V., Thirunavukarasu, R., 2019. Implications of big data analytics in developing healthcare frameworks—a review. J. King Saud Univ. Comput. Inf. Sci. 31 (4), 415–425. Available from: https://doi.org/10.1016/j.jksuci.2017.12.007.
Patel, A., Jain, S., 2018. Formalisms of representing knowledge. Proc. Comput. Sci. 125, 542–549. Available from: https://doi.org/10.1016/j.procs.2017.12.070.
Prasser, F., et al., 2018. Data Integration for Future Medicine (DIFUTURE). Methods Inf. Med. 57 (S 01), e57–e65. Available from: https://doi.org/10.3414/ME17-02-0022.
Rotmensch, M., et al., 2017. Learning a health knowledge graph from electronic medical records. Sci. Rep. 7 (1), 1–11. Available from: https://doi.org/10.1038/s41598-017-05778-z.
Singh, R., 2016. Integrated healthcare in India—a conceptual framework. Ann. Neurosci. 23 (4), 197–198. Available from: https://doi.org/10.1159/000449479.
Tang, J., et al., 2018. Drug Target Commons: a community effort to build a consensus knowledge base for drug–target interactions. Cell Chem. Biol. 25 (2), 224–229.e2. Available from: https://doi.org/10.1016/j.chembiol.2017.11.009.
Tiwari, S., Jain, S., Abraham, A., Shandilya, S., et al., 2018. Secure Semantic Smart HealthCare (S3HC). J. Web Eng. 17 (8), 617–646.
Venkatesan, A., et al., 2018. Agronomic linked data (AgroLD): a knowledge-based system to enable integrative biology in agronomy. PLoS One 13 (11). Available from: https://doi.org/10.1371/journal.pone.0198270.
Wang, Z., 2019. Data integration of electronic medical record under administrative decentralization of medical insurance and healthcare in China: a case study. Isr. J. Health Policy Res. 8 (1). Available from: https://doi.org/10.1186/s13584-019-0293-9.
World Health Organization Regional Office for Europe, 2016. Integrated Care Models: An Overview. Working Document. Available at: http://www.euro.who.int/pubrequest.


Wulff, A., et al., 2018. An interoperable clinical decision-support system for early detection of SIRS in pediatric intensive care using openEHR. Artif. Intell. Med. 89, 10–23. Available from: https://doi.org/10.1016/j.artmed.2018.04.012.
Yu, H.Q., et al., 2014. Healthcare-event driven semantic knowledge extraction with hybrid data repository. In: 4th International Conference on Innovative Computing Technology, INTECH 2014 and 3rd International Conference on Future Generation Communication Technologies, FGCT 2014. IEEE, Luton, pp. 13–18. Available from: https://doi.org/10.1109/INTECH.2014.6927774.
Yu, P.P., 2015. Knowledge bases, clinical decision support systems, and rapid learning in oncology. J. Oncol. Pract. 11 (2), e206–e211. Available from: https://doi.org/10.1200/jop.2014.000620.
Yu, Y., et al., 2019. PreMedKB: an integrated precision medicine knowledgebase for interpreting relationships between diseases, genes, variants and drugs. Nucleic Acids Res. 47 (D1), D1090–D1101. Available from: https://doi.org/10.1093/nar/gky1042.


CHAPTER 11

Smart mental healthcare systems

Sumit Dalal¹ and Sarika Jain²

¹National Institute of Technology Kurukshetra, Haryana, India; ²Department of Computer Applications, National Institute of Technology Kurukshetra, Haryana, India

11.1 Introduction

Mental illness is very prevalent in modern society. Mental disorder is an umbrella term for various illnesses related to mental health, for example, bipolar disorder, depression, anxiety, schizophrenia, and dementia. According to various mental health surveys and studies, approximately 21 million people worldwide suffer from schizophrenia, 47.5 million from dementia, 21 million from depression, and 60 million from bipolar disorder. In severe cases, a person attempts suicide; approximately 800,000 people are thought to die by suicide each year. These numbers may seem small in a population of 7+ billion, but even in a country like the United States, which has much better medical infrastructure than many countries in the world, the situation is no different. The National Survey on Drug Use and Health (NSDUH) 2016 report states that the estimated 12-month prevalence of mental illness among adults aged 18 or older is 18.3%, and 4.2% for serious mental illness; substance use disorders were excluded from this evaluation. If this does not make the severity of the situation clear, then, according to the World Health Organization (WHO), mental disorders were projected to be the second-largest disease burden by 2020. Mental illness badly affects the life of a person and of his/her family and friends: the patient faces difficulty in accomplishing daily-life activities, cannot enjoy life, and suffers reduced overall working ability. In view of the severity and importance of the situation, the WHO in 2015 announced the Sustainable Development Goals report comprising "Good Health and Well-being" as its Goal 3. In traditional healthcare, a patient has to visit a clinic for the physician's advice; based on lab tests, personal observations, and interviews during the visit, the physician approximates the patient's situation and assesses his/her health. But the 21st century is a century in which machines affect and help humans in daily-life activities, and the advancement of technology has brought great changes to various aspects of life, be it transportation, education, communication, or services.


The healthcare domain is also being transformed with the introduction of sensors and IoT (Internet of Things) devices, replacing traditional biomedical devices. This has enabled 24/7 and remote monitoring of patients when the physician is not available or the patient is not present at the clinic. Noninvasive monitoring helps track a patient's health during his/her normal life routine. Technology helps physicians, patients, and other agencies keep track of health in their domains of interest. Social data are now considered for monitoring and gaining insights into users'/patients' lives, and various studies have demonstrated the importance of social data in determining and monitoring users' health status. In the mental health domain, the use of social data has gained momentum for detecting and monitoring a user's mental health. Sentiment analysis of data shared on social platforms, using various techniques (such as machine learning and dictionary-based methods), supports noninvasive and continuous monitoring.

11.2 Classification of mental healthcare

Traditional healthcare: Mental healthcare is very patient oriented, but in the traditional system a patient is assessed in clinical settings, which leads to intentional or unintentional hiding of facts. There is also a social stigma associated with mental health. Even if a person has gathered enough courage to visit a physician, it is very difficult to assess and monitor his/her mental health status. Physicians depend on the situation as narrated by the patient (interviews), questionnaires (DSM-5, PHQ, etc.), and their own observations. Patients can hide or manipulate, at their convenience, the thoughts and activities needed for the evaluation of mental health. This leads to an approximation of mental health that may not be exact; hence the course of treatment recommended by the physician may be less effective, consume more time, or even worsen the situation. A further drawback is that early prevention and detection of mental disease happen only in rare cases. All these factors culminate in a low success rate.
IoT-supported healthcare: With the availability of IoT-enabled biomedical devices, mental healthcare is being transformed. Various systems use different IoT devices/sensors for collecting and monitoring user/patient data, which are further processed using various machine learning algorithms to find insights and provide recommendations to physicians or patients. IoT-enabled healthcare supports continuous noninvasive monitoring of the patient's health status, and timely intervention becomes possible due to early detection of disease. These systems, named smart mental healthcare systems by the respective researchers, help improve treatment effectiveness and reduce the time course. Some systems are also capable of taking actions themselves that affect the patient's condition, such as adjusting a medicine dose or the temperature. Devices used for mental health monitoring include Fitbit trackers, smart watches, and smartphones; an example of an IoT-enabled mental healthcare system is IBM Watson.
Social data supported healthcare: With the reach of the Internet to a large population, social media platforms have become a popular place to share one's life events with the world, and social data are a gold mine for researchers seeking insights into someone's life. People share their emotions, activities, pictures, and other material very frequently, and many researchers have used this personal social data for sentiment analysis to detect and monitor a user's mental health (Bhatia et al., 2021). This reduces the need for IoT devices to monitor a patient. Moreover, early detection and prevention of disease become possible, as social data can be analyzed even when a person is merely doubtful about his/her mental health.


However, most IoT devices are generally used only on the advice of physicians. Monitoring a person without letting him/her know is also possible by analyzing social data. There are cases when patients intentionally or unintentionally refuse to visit a physician and behave as if they are not suffering from any mental disease; in such cases, the person's acquaintances can seek help from a physician, who can assess the person's mental health through sentiment analysis of his/her social data. Social data can also be used for effective and easy detection and monitoring of the health status of a region. So the use of social data for assessing mental health is another, bigger wave in the reorganization of mental healthcare. These systems automatically monitor and generate recommendations for early detection, prevention, and cure, and they too are called smart mental healthcare systems by researchers.
IoT + social data + Internet surfing data: Internet surfing behavior is another aspect of assessing a person's mental health. These data would provide better insights into a person's ongoing thoughts, but they are not widely considered in current mental healthcare systems. Merging these data with IoT and social data would enable better prediction and more effective monitoring; for example, where a patient is not on social platforms or does not share anything, surfing data will be useful. We think a system that considers every aspect of human data will be the real smart mental healthcare system.

11.3 Challenges of a healthcare environment

A smart healthcare environment comprises heterogeneous data from various sources, such as social media, electronic medical records, IoT devices (manufactured by various vendors in different versions, with different working standards/protocols and data formats, and with varying storage, computation, battery, and other capabilities), and other data such as Internet surfing. These data give rise to various challenges that need to be overcome for an automated, effective, and accurate decision-making process (Fig. 11.1). The challenges faced are discussed in the following subsections.

11.3.1 Big data

Very large sets of data are generated in healthcare, both streaming and batch. Big data is a suitcase challenge comprising various subchallenges such as heterogeneity, scaling, processing, quality, security, and privacy. There are two big data processing modes: streaming and batch. Streaming processing is straight-through processing, since the value of data decreases as time passes (real-time processing); batch processing is store-then-process, that is, data are first stored and can then be processed online or offline. Heterogeneity gives rise to the interoperability problem, which is discussed in the next subsection.

11.3.2 Heterogeneity

Two devices are said to be capable of communicating with each other if they can exchange and understand each other's data.

FIGURE 11.1 Sentiment analysis techniques.

But such communication among the various systems of the IoT is limited due to various aspects: devices come from different vendors and in different versions, and they may use different working standards, communication technologies, data formats, etc. This gives rise to the interoperability issue. Interoperability is the ability of two or more systems or components to exchange information and to use the information that has been exchanged. The various levels of interoperability are as follows:
Level 0 or no interoperability: Completely independent systems without any interoperability facility are placed in this category.
Level 1 or technical interoperability: Systems having a protocol for communication or data transfer with other systems are said to have level 1 interoperability. This level helps in maintaining coherence at the signal or protocol level (plug and play).
Level 2 or syntactic interoperability: Systems having a specific interoperability protocol to exchange data and use services have the syntactic level of interoperability. A high-level architecture system is an example of a level 2 interoperable system.
Level 3 or semantic interoperability: Semantic interoperability refers to the ability of two or more systems to automatically interpret the exchanged information meaningfully and accurately to produce useful results as defined by the end users of the systems. The term is also used in a more general sense for the ability of two or more systems to exchange information with an unambiguous and shared meaning: the precise meaning of the exchanged information is understood by the communicating systems, so the systems can recognize and process semantically equivalent information homogeneously even if their instances are heterogeneously represented, that is, differently structured and/or using different terminology or different natural languages. Semantic interoperability is thus distinct from the other levels because it ensures that the receiving system understands the meaning of the exchanged information, even when the algorithms used by the receiving system are unknown to the sending system.


Pragmatic interoperability: Systems with this level of interoperability have information about the methods and procedures employed by each other; more crisply, the participating systems have knowledge regarding the context or use of the data shared between them.
Dynamic interoperability: Systems capable of understanding the state transitions in assumptions and constraints happening over time, and of taking advantage of these transitions, have dynamic interoperability, as they can adapt to changes happening over a time span.
Conceptual interoperability: Conceptual interoperability is reached if the assumptions and constraints of the meaningful abstraction of reality are aligned.
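To make the semantic level concrete, the sketch below aligns two hypothetical vendor vocabularies with an owl:sameAs mapping in rdflib; the namespaces and terms are illustrative assumptions, and an OWL reasoner (e.g., the owlrl package) would be needed to actually exploit the mapping at query time.

from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import OWL

# Two vendors describe the same measurement with different terminology.
A = Namespace("http://vendor-a.example/terms#")
B = Namespace("http://vendor-b.example/terms#")

g = Graph()
g.add((A.BloodPressure, OWL.sameAs, B.BP))  # assert the terms are equivalent
g.add((URIRef("http://vendor-a.example/patient/1"), A.BloodPressure,
       URIRef("http://vendor-a.example/obs/80")))
print(g.serialize(format="turtle"))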

11.3.3 Natural language processing

Natural language data available on the Internet and on social media platforms (Facebook, Twitter, Reddit, etc.) play an important role in modern healthcare systems for tracking patients' activities and providing various recommendations. But these data are mostly unstructured, and analyzing them is a big challenge. The techniques for analyzing a person's social data are broadly classified into three methods; in machine learning, various classification and regression techniques are available to extract useful information from data on social accounts. Several language phenomena complicate this analysis, for example (a toy scorer follows the list below):
Sarcasm detection: Sarcasm is an important part of human language. When analyzing social data, sarcastic sentences need to be detected and handled carefully; for example, "Oh, great only this was lacking" is a sarcastic sentence and must not be interpreted in a positive sense.
Free word order: Some languages (like Hindi or Hinglish) have free word order, meaning the position of subject, object, or verb does not change the meaning of the sentence. A sentence can be written in many ways, for example, "aaj m ghar par hu," "m ghar par hu aaj," or "ghar par hu m aaj" (all meaning "I am at home today").
Mixed script (code-mixed) sentences: More than one language is used in a sentence. The most widely used mixed-script language is Hinglish, for example, "m market ja rha hu," in which Hindi is written using the English script.
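A toy dictionary-based sentiment scorer of the kind mentioned above is sketched below; the lexicon and the scoring rule are illustrative assumptions, and such a naive scorer handles none of the sarcasm, free-word-order, or code-mixing problems just listed.

POSITIVE = {"happy", "great", "calm", "hopeful"}
NEGATIVE = {"sad", "hopeless", "tired", "alone", "worthless"}

def sentiment_score(post: str) -> int:
    # Count positive minus negative lexicon hits; negative totals flag risk.
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("feeling tired and alone today"))  # -2: a negative signal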

11.3.4 Knowledge representation

The data produced from different sources need to be stored for processing and future reference. Because the data are big, abstract information or knowledge should be stored for future reference. This information should present a common view to everyone, and it should be in a form that computers can understand, to enable an automated healthcare system; handling these problems is important. Various knowledge representation schemes, such as knowledge bases, ontologies, and knowledge graphs, can be used for storing the generated information and the insights achieved after processing it.


Knowledge bases have been widely used for intelligent systems, but because they do not present a common view to different people, the world is shifting to ontologies and knowledge graphs. Knowledge shared on the Internet in the form of ontologies, for the purpose of a common view and reuse, is known as Linked Open Data (LOD). The size of LOD is increasing rapidly, so accessing those data is yet another challenge.
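As a minimal illustration of machine-understandable knowledge of this kind, the following sketch records a few patient facts as RDF triples with rdflib; the namespace and terms are illustrative assumptions rather than any standard vocabulary.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/mentalhealth#")
g = Graph()
g.add((EX.patient1, RDF.type, EX.Patient))           # class assertion
g.add((EX.patient1, EX.hasSymptom, EX.Anxiety))      # relation to a concept
g.add((EX.patient1, EX.sentimentScore, Literal(-2))) # datatype property
print(g.serialize(format="turtle"))  # a common, reusable serialization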

11.3.5 Noninvasive and continuous monitoring

In the mental healthcare domain, evaluating a patient to detect a disease and tracking the effectiveness of the course of action recommended by the physician is tricky, as no lab test can accurately map a patient's mental health status. It is not possible for the patient or the physician to be present 24/7 in the clinic for supervision, and the patient may not permit monitoring during his/her daily-life routine, which is in any case a time-consuming and costly procedure.

11.4 Benefits of smart mental healthcare

The use of smart components in the mental healthcare domain yields benefits that are generally not possible in the traditional healthcare system. These help in transitioning from a clinic-centric to a patient-centric system that benefits patients by providing timely interventions and a personalized, contextualized course of treatment, and they enable noninvasive and continuous monitoring. There are other benefits too, which are discussed in this section.

11.4.1 Personalization Recommendations to a patient are provided considering his/her symptoms and the other factors that affect his/her health. Most systems today recommend a course of action based only on the symptoms entered by the patient, which is a drawback, as the course of action needed to cure a disease may vary with the patient's disease severity, recovery ability, allergic conditions, etc. For an effective and accurate cure, treatment is therefore personalized in smart healthcare systems.

11.4.2 Contextualization Context is an external factor, or a combination of external factors, that affects the situation of a patient, for example, environmental temperature or the patient's residence locality. Context provides extra information (beyond the patient's physical or social data) that gives better insight into the patient's condition. When analyzing social data for mental health assessment, it is important to consider the context of the conversation or post.

11.4.3 Actionable knowledge The healthcare domain is highly expertise-specific: most of the procedures followed to decide on a course of treatment are hard for a layperson to understand. Smart healthcare systems process patients' data in its health context without exhausting patients with decisions, and they provide actionable information that can be acted upon without further data processing. Hence, little or no domain expertise is required to implement the actions recommended by the system; that is, a novice can understand the recommendations.

11.4.4 Noninvasive and continuous monitoring Noninvasive monitoring means monitoring without interfering with the life activities of a patient, and continuous monitoring means real-time monitoring. A smart healthcare system monitors patients in their natural habitat using various sensors (wearable and nonwearable, such as a smart watch, a smartphone, or social data used as a sensor). This monitoring gives much more accurate results for health status assessment, since a patient may otherwise forget, or withhold as seemingly useless, some information that is in fact valuable.

11.4.5 Early intervention or detection Data inputs from sensors, social media, and other Internet platforms can give warning signs well before a person or his/her acquaintances recognize the situation. Identifying problems at an early stage helps in faster recovery, leads to better treatment results, and reduces long-term disability.

11.4.6 Privacy and cost of treatment Smart mental healthcare systems help in keeping the disease and the course of action private. This encourages patients to get treatment without fear of social stigma or other concerns. Moreover, it also reduces the cost of treatment, as fewer clinic visits are required.

11.5 Architecture This section presents a general architecture of a smart mental healthcare system, shown in Fig. 11.3. It depicts the various forms of input, the processing of data, the intermediate processing or reasoning, and the final output based upon the use case employed. The technical view of the architecture is discussed below.

FIGURE 11.2 Word cloud of challenges.

11.5.1 Semantic annotation Annotation simply means adding extra information in reference to the available information. It enriches the information and helps in understanding it, relating it, or comparing it in another context. Machines do not understand raw data by themselves; to make them understand, the data/knowledge representation needs a transformation, and this is provided by ontology engineering. An ontology is a representation of the world in the form of concepts or entities, attributes, and instances. In simple terms, it is an objective or global view of a domain, and so it helps in sharing a common view of the domain information that machines can use without human interference. For example, in the sentence "Dasharatha is the father of Rama," the ontological view treats Dasharatha and Rama as instances of the class person, and "is father of" as a relation from the class person to the same class. After semantic annotation, this sentence is represented in ontology form, which allows more information to be attached to every instance where available. This extra information helps machines better understand the data and its context so that it can be used properly whenever required, without human intervention. A minimal sketch of this annotation step follows.
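The following minimal Python sketch illustrates the annotation idea on the example sentence above: each mention found in the text is linked to an ontology class, together with any extra information attached to that instance. The dictionary-based ontology here is a toy assumption; a real system would use a full ontology store and an entity linker.

```python
# Toy "ontology": each known surface form maps to an ontology class and
# optionally to extra attached information. All entries are illustrative.
ontology = {
    "Dasharatha": {"class": "Person", "attributes": {"title": "King of Ayodhya"}},
    "Rama": {"class": "Person", "attributes": {}},
    "is the father of": {"class": "Relation", "attributes": {"name": "isFatherOf"}},
}

def annotate(sentence: str):
    """Attach an ontology class (and attributes) to every known mention."""
    annotations = []
    for mention, info in ontology.items():
        if mention in sentence:
            annotations.append((mention, info["class"], info["attributes"]))
    return annotations

print(annotate("Dasharatha is the father of Rama."))
# [('Dasharatha', 'Person', {'title': 'King of Ayodhya'}),
#  ('Rama', 'Person', {}),
#  ('is the father of', 'Relation', {'name': 'isFatherOf'})]
```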

11.5.2 Sentiment analysis Sentiment analysis is the process of finding sentiment, that is, whether the sense of a piece of text is positive, negative, or neutral. There are three levels of sentiment analysis. Document level: the document as a whole is the unit of analysis. Sentence level: the document is broken down into sentences, and each sentence is analyzed as a single unit. Aspect level: aspect-based (also called feature-based) sentiment analysis finds the various aspects a sentence discusses and analyzes the sentiment expressed toward each aspect. Various techniques exist for performing these types of sentiment analysis; a toy sentence- and document-level sketch is given below.
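The following Python sketch illustrates sentence- and document-level sentiment analysis. The tiny sentiment lexicon is an illustrative assumption, standing in for a trained model or a full lexicon, and aspect-level analysis is omitted for brevity.

```python
# Toy lexicon; a real system would use a trained model or a fuller lexicon.
POSITIVE = {"great", "good", "happy", "relieved"}
NEGATIVE = {"sad", "tired", "hopeless", "lonely"}

def sentence_sentiment(sentence: str) -> str:
    """Sentence level: judge one sentence on its own."""
    words = set(sentence.lower().replace(".", "").split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def document_sentiment(document: str) -> str:
    """Document level: aggregate the sentence-level judgments."""
    labels = [sentence_sentiment(s) for s in document.split(".") if s.strip()]
    score = labels.count("positive") - labels.count("negative")
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

doc = "I felt hopeless all week. Talking to my friend was great."
print([sentence_sentiment(s) for s in doc.split(".") if s.strip()])
# ['negative', 'positive']
print(document_sentiment(doc))  # 'neutral'
```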

FIGURE 11.3 Architecture of smart mental healthcare.

11.5.3 Machine learning Machine learning helps automate the sentiment analysis of data. Data is generated at a very fast pace in modern times; this is big data, and it poses various challenges (such as volume and velocity) when processed for insights (Fig. 11.3). As machines are more efficient than human beings in terms of time consumption, error-free processing, etc., machine learning algorithms are used for automatic sentiment analysis to reduce human effort. In this approach, the machine finds patterns in the input data in order to approximate the output. Two basic approaches are supervised machine learning and unsupervised machine learning. Supervised learning requires training a machine to approximate the output: the training dataset contains labeled inputs, that is, the output value corresponding to each input is known, and the machine is trained to learn this mapping from input to output so that it can produce the required result for new inputs. Unsupervised learning approximates the output without the prior experience used in supervised learning; that is, the machine is not trained on a labeled dataset and the output variables are unknown, so the machine sorts the unsorted or unlabeled data according to similarities. Both approaches are illustrated in the sketch below.
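The following scikit-learn sketch contrasts the two approaches on a handful of toy posts; the posts and their labels are fabricated for illustration only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

posts = ["I feel hopeless and tired", "Great day with friends",
         "I can't sleep and feel worthless", "Feeling happy and relaxed"]
labels = [1, 0, 1, 0]  # 1 = at-risk, 0 = not at-risk (toy labels)

X = TfidfVectorizer().fit_transform(posts)

# Supervised: learn the input-to-output mapping from labeled examples.
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))

# Unsupervised: group unlabeled posts purely by their similarity.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters)
```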

11.6 Conclusion Technology is being incorporated into the healthcare domain to handle the issues faced in the traditional healthcare system. It has many benefits over the prevalent healthcare, but challenges related to social data, and to the data generated by IoT-enabled devices, remain big hurdles to leveraging the technology to its full potential. Classifying mental healthcare based on social data and IoT helps in better understanding the fundamental building blocks and the challenges pertaining to them. This chapter discussed the challenges from different perspectives of the domain and presented a general architecture to explain the fundamental blocks.


C H A P T E R

12 A meaning-aware information search and retrieval framework for healthcare

V.S. Anoop1, Nikhil V. Chandran2 and S. Asharaf3

1Kerala Blockchain Academy, Indian Institute of Information Technology and Management - Kerala (IIITM-K), Thiruvananthapuram, India
2Data Engineering Lab, Indian Institute of Information Technology and Management - Kerala (IIITM-K), Thiruvananthapuram, India
3Indian Institute of Information Technology and Management - Kerala (IIITM-K), Thiruvananthapuram, India

12.1 Introduction Semantic or meaning-aware search has remained a buzzword in computing for many years. It has huge potential to transform the current keyword-based search paradigm into meaning-aware/semantic information retrieval. Traditional keyword-based search uses an exact match between the user-supplied phrase and the contents of pages. Semantic search, on the other hand, considers the context and meaning of the search word or phrase given by the user in order to satisfy the user's need in a more intuitive way. For example, if we use the keyword "covid19" in a semantic search engine interface, the search results may include not only the pages containing the keyword "covid19" but also the symptoms, treatment, and other related information. Here, the context is "disease spread by coronavirus," and the search results will be fetched and filtered according to this context. A traditional search engine, in contrast, fetches results only from sources where the search keyword or phrase is tagged syntactically. According to researchers and practitioners of the semantic web, the major issue with keyword-based search engines is the unavailability of semantics in a given search context. Studies have estimated that almost 20%–30% of users do not find relevant and accurate results among the top results returned by current keyword-based search engines. Meaning-aware/semantic search technology has huge potential here and can be enabled as an extension of the current web.

The advantage of the semantic web is that information is represented in formats and mechanisms that can be read, understood, and processed by machines independently, without human intervention. The current web uses HyperText Markup Language (HTML) to represent information and the links established between the resources/pages that make up the World Wide Web; the semantic web uses the Resource Description Framework (RDF) and the Web Ontology Language (OWL) for knowledge representation. The semantic web links not only objects such as media (audio, video, pictures, etc.) but also information and entities such as people, places, organizations, and locations. A semantic search engine may hold many such types of interrelated and interlinked information in its backend, which helps it process complex natural language queries asked by the user. The end customers of such a semantic search engine expect it to understand and interpret the intention of the user rather than look for an exact word/phrase match in the result set. The primary consideration while designing such a semantic search engine is therefore to provide intent-based intelligence rather than keyword-based matching. Every real-world object in the semantic web is represented by a Uniform Resource Identifier (URI), and organizations that wish to adopt and implement semantic search should first publish their data in a standard form called an ontology, typically represented using OWL. Data represented this way can easily be indexed and interpreted by machines and used to provide semantic search experiences.

A generic architecture for building a semantic search experience is shown in Fig. 12.1. In addition to the standard layers of a traditional keyword-based search engine, semantic search engines employ a new layer, the semantic layer, which is vested with the duty of ontology generation and metadata management. Users submit search queries to the search engine through the presentation layer, the application layer processes the submitted query, and the semantic layer processes the result using the available ontology information and presents it through the user interface module (a minimal sketch of such a concept-level query is given at the end of this section).

FIGURE 12.1 A generic architecture for building a semantic search experience.

The remainder of this chapter is organized as follows: Section 12.2 details some very recent and prominent approaches reported for information extraction, search, and retrieval in the healthcare domain. Section 12.3 discusses semantic search and information retrieval in healthcare, and in Section 12.4 we propose our framework for meaning-aware information extraction, search, and retrieval from unstructured healthcare documents. Future research dimensions and conclusions are given in Sections 12.5 and 12.6, respectively.
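To make the concept-level retrieval described above concrete, the following sketch builds a tiny RDF backend with rdflib and answers a SPARQL query about "covid19" that returns related entities (symptoms, treatment) rather than keyword matches. All URIs and facts are illustrative assumptions, not a real medical ontology.

```python
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/health#")
g = Graph()

# A tiny backend knowledge graph linking a disease to related entities.
g.add((EX.covid19, RDF.type, EX.Disease))
g.add((EX.covid19, EX.hasSymptom, EX.fever))
g.add((EX.covid19, EX.hasSymptom, EX.cough))
g.add((EX.covid19, EX.hasTreatment, EX.supportiveCare))

# The semantic layer answers a concept-level query rather than a keyword
# match: everything related to covid19, whatever the relation.
results = g.query("""
    PREFIX ex: <http://example.org/health#>
    SELECT ?relation ?entity WHERE { ex:covid19 ?relation ?entity . }
""")
for relation, entity in results:
    print(relation, entity)
```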

12.2 Related work The semantic web, a buzzword for many years, is still a center of attraction for many researchers and practitioners in computer science. Many approaches have therefore been reported, with varying degrees of success, for incorporating and utilizing meaning in the representation and processing of data, rather than relying on mere syntactic pattern matching. Some of these research contributions in healthcare information search and retrieval are discussed in this section.

Rule-based approaches such as regular expressions were used by Savova et al. (2010) to discover peripheral arterial disease from radiology reports. They evaluated Mayo's open-source clinical natural language processing (NLP) system on a manually created dataset consisting of 223 positive, 19 negative, 63 probable, and 150 unknown cases; the accuracy achieved on this dataset was 0.93, against a named entity recognition baseline of 0.46. A group of smoking rules was developed to enhance smoking status classification by Sohn and Savova (2009); changing the negation-detection model for nonsmokers, using time-based resolution to differentiate past and current smokers, and better detection methods for unknown cases improved the performance of this system. A human knowledge engineer often writes many rules for a clinical information extraction system. Due to their efficiency and effectiveness, machine learning-based information extraction approaches have gained interest. Horng et al. (2012) used Latent Dirichlet Allocation along with logistic regression (LR) to identify patients with sepsis; such automatic detection methods help patients by triggering sepsis decision support messages and physician order entry before a case is viewed by an emergency physician. Roberts et al. (2012) used machine learning approaches for information extraction to identify actionable locations in radiology reports; experiments on 1000 manually annotated instances gave an F1-measure of 86.04 on locations, and their future work aims at improving syntactic parsing techniques and transitioning their methods to new types of clinical data. Zheng et al. (2014) used machine learning and NLP methods to identify gout flares. For the training and evaluation datasets, 1264 and 1192 clinical notes from two separate sets of 100 patients were selected; the method identified gout flares with higher sensitivity and specificity than previous studies.


The study published by Barrett et al. (2013) combined feature-based classification with template-based extraction from clinical text. A random selection of 215 anonymous consult letters was used for the study, and the results of the NLP extraction were compared with those obtained by expert clinicians; the average accuracy of automated extraction was 73.6%, comparable to the experts. Sarker and Gonzalez (2015) proposed a system to detect adverse drug reactions using automatic text classification with a Support Vector Machine (SVM). Their work focuses on sources that generate large volumes of user text, such as social media, and adds semantic features like topics, concepts, sentiments, and polarities to the model. Social media has also been utilized for healthcare in Bhatia et al. (2020) and Dalal et al. (2019). To enable faster creation of rule-based applications and to reduce the need for manual knowledge engineering, Kluegl et al. (2016) proposed a tool called UIMA Ruta. It was designed with the goal of fast development: knowledge can be extracted quickly through the rule language and its corresponding model, and the UIMA Ruta Workbench additionally provides extensive editing support and rule induction. Himes et al. (2009) classified chronic obstructive pulmonary disease (COPD) among asthma patients using electronic medical records; their Bayesian network model, with an accuracy of 83.3%, is able to predict COPD in an independent set of patients, although an improved model is required to make the system clinically useful. Rochefort et al. (2015) used multivariate LR to detect adverse events from electronic health records (EHR) and developed a statistical NLP model for identifying narrative radiology reports indicating the presence of pneumonia. Deleger et al. (2013) proposed an NLP-based method to risk-stratify patients with abdominal pain by analyzing the content of EHRs. They analyzed a random sample of 2100 patients with abdominal pain from a pediatric emergency department; the automated system extracted relevant elements from emergency department physician notes and assigned a risk value for acute appendicitis based on the pediatric appendicitis score, achieving an average F-measure of 0.867, comparable to physician experts.

Liu et al. (2017a) explored a Long Short-Term Memory (LSTM) based recurrent model for clinical entity recognition and protected information recognition. The model consists of an input layer, an LSTM layer, and an inference layer, and achieves F1-scores of 85.81% on the 2010 i2b2 medical concept extraction task, 92.29% on the 2012 i2b2 clinical event detection task, and 94.37% on the 2014 i2b2 de-identification task, outperforming traditional machine learning-based models that require hand-engineered features. Li and Huang (2016) introduced a clinical information extraction system that uses deep neural networks to identify event spans and their attributes from raw clinical notes. This method attaches to each word its part-of-speech tag and shape information as extra features; the results show that it outperforms existing models on standard evaluation datasets without the help of a domain-specific feature extraction toolkit. Liu et al. (2017b) proposed a hybrid approach based on a recurrent neural network (RNN) and a conditional random field (CRF) for de-identification. They extended the LSTM-based model by adding context features to the neural network; the system achieves an F1-score of 91.43% on the N-GRID corpus. Si et al. (2019) used contextual embeddings for clinical concept extraction tasks and showed that semantic information is encoded in contextual embeddings; pretrained contextual embeddings achieve state-of-the-art results, with respective F1-measures of 90.25, 93.18 (partial), 80.74, and 81.65 on four concept extraction corpora: i2b2 2010, i2b2 2012, SemEval 2014, and SemEval 2015. Agosti et al. (2019) extracted semantic relations from medical documents that can be employed to improve the retrieval of medical literature, defining both a rule-based method and a learning method for extracting relations from queries and documents. Fraser et al. (2019) worked on the new MedMentions dataset and explored it along various dimensions, contrasting contextual versus noncontextual embeddings, general versus domain-specific models, and LSTM-based versus attention-based models; they also experimented with linear as well as bidirectional LSTM classification layers and proposed a modification to named entity recognition architectures based on bidirectional encoder representations from transformers. Pergola et al. (2018) proposed a new approach that combines the local and global context of words and phrases and combines the word semantics encoded by context-based and corpus-based embeddings; they used Latent Semantic Analysis (LSA) and FastText embeddings, which improved the coherence of the final topics and decreased computational cost. Sun and Loparo (2019) used a distant supervision method to transform text into computer-interpretable representations; a total of 100 clinical trial records were randomly picked and 386 free-text records were retrieved from ClinicalTrials.gov, and the framework achieved an accuracy of 0.83 and a recall of 0.79. Kormilitzin et al. (2020) developed a clinical named entity model using MIMIC-III free-text data and demonstrated that transfer learning plays an essential role in developing robust models applicable across different clinical domains.

Wang et al. (2017) developed a two-stage query expansion method that integrates latent semantic representations: queries were first expanded iteratively using a heuristic approach to increase the similarity between queries and documents, and a tensor factorization-based latent semantic model was then used to identify meaningful patterns under sparse settings. The TREC CDS 2014 dataset was used for the experiments, and the results showed improved performance over baseline systems. The impact of domain knowledge on user click prediction was explored by Karanam et al. (2017). They found that users who were domain experts performed better in a semantic space with more medical information, whereas users who were not domain experts performed better in a semantic space with a low level of medical information; this study is significant because it shows that domain knowledge and semantic space are mutually dependent. Goeuriot et al. (2018) presented a detailed study of the 2013 and 2014 CLEF eHealth medical information retrieval tasks, providing information on methods for helping nondomain experts get health advice online, discussing the methods used for creating the 2013 and 2014 CLEF datasets, and reporting the results and findings obtained from them, including the importance of query expansion, the usefulness of the datasets, and the significance of an evaluation campaign on medical IR datasets. A state-of-the-art learning-to-rank system was proposed by Soldaini and Goharian (2017) on the 2016 CLEF eHealth dataset; it achieved a 26.6% improvement in NDCG@10 over the best previous model, using a combination of statistical and semantic features such as LSA, and the results show that semantic features can be used for online health search.


A deep learning-based Hierarchical Attention Network (HAN) method for information extraction from cancer pathology reports was explored by Gao et al. (2018). A total of 942 de-identified pathology reports were obtained from the National Cancer Institute Surveillance, Epidemiology, and End Results program, and HAN was used for primary site matching and histological grade classification. Comparisons with traditional machine learning methods and other deep learning methods show that HAN performs better, with micro and macro mean F-scores of 0.852 and 0.708, respectively. This study showed the usefulness of HANs on unstructured cancer pathology reports. Banerjee et al. (2019) proposed an attention-based hierarchical RNN model that classifies unstructured medical text data. The study shows the usefulness of both convolutional neural networks and RNNs in medical text classification; these approaches may be used to further enhance the automatic classification of radiology reports.

12.3 Semantic search and information retrieval in healthcare One of the significant steps toward improving healthcare practice is to collect, process, link, and share medical data. Processing such data is very challenging, primarily due to its heterogeneous nature, lack of proper formatting, and the unavailability of standard tools and techniques to enable flexible search and analytics. Recent technological advancements in the healthcare domain have caused a heavy amount of data to be generated from EHRs, medical devices, and other healthcare documents. These data contain highly valuable information and patterns, and unearthing those patterns helps healthcare practitioners make prompt decisions. This task remains a challenge, and better algorithms, tools, and methodologies are needed to derive actionable insights from the vast amount of healthcare data. One major challenge is the interoperability of healthcare data: the accumulated data will mostly come from different sources and may be in different formats. Another challenge is to process and represent the data efficiently so that users, including medical researchers and practitioners, can access it easily.

12.4 A framework for meaning-aware healthcare information extraction from unstructured text data Ontologies are considered the backbone of semantic computing, and creating them is a very challenging and complex task. Once ontologies are generated using manual or semi-automated mechanisms, the available data or unstructured textual corpus may be indexed and mapped against the appropriate ontology set to enable a meaning-aware search experience on that corpus. The best practice is to perform the ontological annotation incrementally and unambiguously. Ontology annotation is the process of associating or connecting every word, phrase, or concept in the corpus under consideration to an appropriate class in the ontology. For example, the phrase "swine flu" may be related to the ontological classes "disease" and/or "respiratory disease." In healthcare, any relevant term is typically explained by a set of features and associated information, including symptoms, medication, etc. These can be in structured, semi-structured, or unstructured form. Normally, all these details are maintained in medical databases that are often published for consumption by users and medical practitioners. Such a database is generally called a medical catalog, and it should be annotated ontologically against the medical ontologies to enable semantic knowledge discovery in a healthcare platform. The process of annotating such an ontology is shown in Fig. 12.2. The various components of the annotation process are described below:

1. Medical ontology: The medical ontology module is used for establishing and describing semantic relationships between various healthcare categories and the properties associated with them. These collections of medical terminology classes are interconnected, and their instances define the semantic concepts described by the medical ontology. A reasonable number of approaches for creating ontologies have been reported in the recent past, with varying degrees of success, but in most cases subject-matter experts have to handcraft the ontology, especially for sensitive and critical areas such as healthcare.

2. Medical catalog terminology extractor: This module deals with the extraction of medical terminologies such as disease names, symptoms, causes, and medications from semi-structured or unstructured medical catalogs. For each instance of a medical term, this module creates a feature list, where a feature is a semi-structured representation of a medical concept drawn from the medical ontology. This feature construction is easy if the medical catalog and other associated documents are available in a well-structured form; if not, a keyphrase or topic extractor is needed for extracting the medical terminologies. For generic keyphrase extraction there are many off-the-shelf tools, but for medical domains the availability of such extractors is very limited.

FIGURE 12.2 Ontological annotation process for medical catalogs.


3. Semantic reasoner: The semantic reasoner is a component that logically infers or derives consequences from a collection of information or facts, extracted primarily from unstructured documents and represented in the ontology. It considers a keyphrase or topic and expands it into a semantically rich representation: it takes healthcare/medical categories from the medical category database along with the medical features of the healthcare instances in each category, identifies semantically similar healthcare concepts in the medical ontology for each feature, and then augments the representation with additional information drawn from the feature descriptions of those semantically similar concepts. A minimal sketch of this expansion step follows.
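The following sketch illustrates the expansion step under the assumption of a toy weighted ontology; the concepts and weights are illustrative only and do not come from any real medical resource.

```python
# Hypothetical mini-ontology: each concept lists semantically related
# concepts with an association weight in [0, 1], in the spirit of the
# weighted concept sets used later in Section 12.4.2.
MEDICAL_ONTOLOGY = {
    "swine flu": {"respiratory disease": 0.9, "pneumonia": 0.7, "fever": 0.6},
    "pneumonia": {"respiratory disease": 0.95, "cough": 0.8},
}

def expand_concept(term: str, threshold: float = 0.5) -> dict:
    """Expand a keyphrase into a semantically richer representation by
    pulling in related ontology concepts above a weight threshold."""
    related = MEDICAL_ONTOLOGY.get(term, {})
    return {concept: w for concept, w in related.items() if w >= threshold}

print(expand_concept("swine flu"))
# {'respiratory disease': 0.9, 'pneumonia': 0.7, 'fever': 0.6}
```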

12.4.1 Meaning-aware healthcare information discovery from an ontologically annotated medical catalog database A user searching for medical records or healthcare information is unlikely to supply a purely syntactic, or even a meaning-aided, query. The user will most likely use search queries with concepts or terms that may or may not be available in, or related to, the terms in the medical catalog. For instance, consider a query like "I have symptom X and symptom Y for the period P. Is this the disease D?" To handle such examples, the following classes of meaning-aware discovery experiences may be developed over the semantically annotated medical catalog database:

1. Semantic query experience: When a search query is supplied by the user, it should be semantically mapped to a healthcare concept using the ontology reasoner, which consumes the healthcare ontology discussed earlier. This semantic medical/healthcare concept can then be used to query the annotated medical/healthcare catalog database to build a meaning-aware search experience.

2. Meaning-aware search result recommendations: For search result recommendations, traditional search engines use a syntactic description of the terminology available in the healthcare catalog database: the exact match of the features is computed, and the results are shown to the users. In the semantic search paradigm, the catalog database allows more meaningful comparisons. The user-supplied topic or phrase is expanded using techniques such as word/phrase embeddings to generate semantically similar terms, and the search is then carried out using the expanded set, as in the sketch below.
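The following sketch illustrates embedding-based query expansion with gensim's pretrained vectors. The general-purpose GloVe model here is an assumption standing in for the healthcare-trained embeddings the framework envisions, and loading it requires a one-time network download.

```python
import gensim.downloader as api

# A small general-purpose embedding model; a production system would use
# embeddings custom-trained on healthcare text, as the chapter suggests.
model = api.load("glove-wiki-gigaword-50")

def expand_query(terms, topn=3):
    """Expand each query term with its nearest neighbors in embedding space."""
    expanded = set(terms)
    for term in terms:
        if term in model:
            expanded.update(w for w, _ in model.most_similar(term, topn=topn))
    return expanded

print(expand_query(["fever", "cough"]))
```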

12.4.2 Semantic similarity computation The semantic search framework discussed earlier relies on semantic comparisons between healthcare terminologies such as disease names, symptoms, and medications to satisfy the user's information need. This section explains how semantics-enabled comparisons are made possible using healthcare ontologies. In this framework, all comparisons are made between two or more healthcare concept vectors generated through the annotation process discussed previously. Take the example of a symptom-to-disease semantic comparison, and let a disease d be represented by a concept vector obtained from the annotation process: M(d) = {(M_1, W_1), (M_2, W_2), ..., (M_K, W_K)}, where M_i represents a medical concept and W_i the associated weight (a similarity value between 0 and 1) that captures the strength of the association between the healthcare concept M_i and the symptom or other disease. With this, the semantic similarity of any pair of healthcare or medical concepts and phrases can be computed even without a syntactic/exact match. Another way to do this is to use pretrained embedding models: embedding models represent words and phrases in high-dimensional vector spaces, and the similarity of two words or phrases is calculated as the cosine of the angle between their vectors. The average weight of matching concepts that are syntactically similar can be taken as the measure of similarity along a concept dimension; a short sketch of both computations is given below. To measure the similarity between a super-concept and a sub-concept, the average weight of the two concepts divided by their distance can be used, where distance is the number of edges on the shortest path connecting the two concepts in the healthcare ontology. The hierarchy of concepts is deduced from the healthcare ontology available a priori to the healthcare system. Queries on medical terms or keyphrases are identified, and semantically similar concepts or topics that may interest the user are recommended.
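Both similarity computations can be sketched in a few lines of Python; the concept sets, weights, and vectors below are illustrative assumptions.

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def concept_similarity(md1: dict, md2: dict) -> float:
    """Average the weights of syntactically matching concepts in two
    weighted concept sets M(d), per the formulation above."""
    shared = set(md1) & set(md2)
    if not shared:
        return 0.0
    return sum((md1[c] + md2[c]) / 2 for c in shared) / len(shared)

# Toy concept sets for two diseases (concept -> association weight)
flu = {"fever": 0.9, "cough": 0.8, "fatigue": 0.5}
covid = {"fever": 0.8, "cough": 0.9, "anosmia": 0.7}
print(concept_similarity(flu, covid))  # 0.85

# Toy embedding comparison
print(cosine_similarity(np.array([1.0, 0.2]), np.array([0.9, 0.3])))
```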

12.4.3 Semantic healthcare information discovery—an illustration To enable semantic information search and retrieval, the healthcare portal maintains ontological structures that represent meaningful concepts, keyphrases, and topics for each relevant category of medical or healthcare data. As mentioned earlier, the meaning-aware search experience is enabled in a healthcare portal by querying the annotated clinical catalog database with a semantified, meaning-infused query supplied by the user. Meaning can be incorporated into this user query by using pretrained word or phrase embeddings custom-trained on healthcare data. Querying and searching with medical/healthcare concepts surfaces all those clinical catalogs containing the required meaningful healthcare/medical concepts, and thus provides a strong semantic search experience for the user (Table 12.1).

TABLE 12.1 Search query and expanded product concepts using the proposed method.

Search query | Semantically expanded product concepts
Swine flu | Pneumonia, respiratory failure, conjunctivitis, parotitis, hemophagocytic syndrome
Coronavirus | Severe acute respiratory syndrome, betacoronavirus, lower respiratory tract, pneumonia, global containment

To provide a content-rich, meaning-aware healthcare information search experience, the knowledge represented in the ontology should be maintained at different levels of granularity, as the portal may have to provide multiple dimensions of information on a specific healthcare topic. From the examples given earlier, it can be observed that the semantic reasoner-based ontological annotation process, which generates a semantically rich conceptual space for medical catalogs and the associated healthcare information system, offers a promising direction for building healthcare portals with better information search and retrieval algorithms. Building a meaning-aware healthcare information retrieval experience over the semantically annotated healthcare data is the most critical step in exploiting this highly rewarding opportunity.

12.5 Future research dimensions Information extraction in the healthcare sector is a major focus of attention in the present digital era, and identifying medical concepts in raw data is a challenging task. We explored the various methods used for clinical concept extraction, including rule-based, machine learning, hybrid, and deep learning-based models, and this chapter proposed a novel framework for information extraction in the healthcare domain using ontological annotation. Current models focus on contextual embeddings and deep architectures for extracting information from medical records; these models tend to be optimized for one particular dataset and need to be retrained for a different dataset, so the development of data-agnostic information extraction models is a possible future research dimension. The critical challenge is to build the ontological knowledge base for clinical data, and the development of an open standard for building a unified healthcare domain ontology database is a novel research area.

12.6 Conclusion This chapter discussed a framework for meaning-aware knowledge discovery in the healthcare domain. It showed how ontological annotation of healthcare documents such as EHRs and medical catalogs can improve the user search experience. The ontological annotation process and an illustration were discussed in sufficient detail to show how the proposed framework can better suggest results to users. The chapter thus points to a novel research dimension in building the next-generation meaning-aware healthcare platform that will transform how information search and retrieval works today.

Key terms and definitions Semantic search: The process of providing a meaning-aware search experience over a textual corpus or the web. It is an improvement over conventional syntactic/keyword-based search using word indexes. Semantic knowledge discovery in healthcare: The process of discovering healthcare knowledge from a healthcare/medical web portal using semantically tagged medical catalog information.


Ontology: Ontologies are reusable specifications used for describing the relationships and properties of resources. OWL is a commonly used ontology representation language. Meaning-aware search result recommendation in healthcare: The process of suggesting similar diseases, symptoms, and medications using semantically annotated medical catalog information. Semantic query experience in healthcare: The semantic search experience over semantically annotated medical/healthcare catalog databases.

References

Agosti, M., Di Nunzio, G.M., Marchesin, S., Silvello, G., 2019. A relation extraction approach for clinical decision support. arXiv preprint arXiv:1905.01257.
Banerjee, I., Ling, Y., Chen, M.C., Hasan, S.A., Langlotz, C.P., Moradzadeh, N., et al., 2019. Comparative effectiveness of convolutional neural network (CNN) and recurrent neural network (RNN) architectures for radiology text report classification. Artif. Intell. Med. 97, 79–88.
Barrett, N., Weber-Jahnke, J.H., Thai, V., 2013. Engineering natural language processing solutions for structured information from clinical text: extracting sentinel events from palliative care consult letters. In: MedInfo, pp. 594–598.
Bhatia, S., Kesarwani, Y., Basantani, A., Jain, S., 2020. Engaging smartphones and social data for curing depressive disorders: an overview and survey. In: Dave, M., et al. (Eds.), Paradigms of Computing, Communication and Data Sciences. PCCDS 2020. doi:10.1007/978-981-15-7533-4. In press.
Dalal, S., Jain, S., Dave, M., 2019. A systematic review of smart mental healthcare. In: 2019 5th International Conference on Cyber Security and Privacy in Communication Networks (ICCS). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3511013.
Deleger, L., Brodzinski, H., Zhai, H., Li, Q., Lingren, T., Kirkendall, E.S., et al., 2013. Developing and evaluating an automated appendicitis risk stratification algorithm for pediatric patients in the emergency department. J. Am. Med. Inform. Assoc. 20 (e2), e212–e220.
Fraser, K.C., Nejadgholi, I., De Bruijn, B., Li, M., LaPlante, A., Abidine, K.Z.E., 2019. Extracting UMLS concepts from medical text using general and domain-specific deep learning models. arXiv preprint arXiv:1910.01274.
Gao, S., Young, M.T., Qiu, J.X., Yoon, H.J., Christian, J.B., Fearn, P.A., et al., 2018. Hierarchical attention networks for information extraction from cancer pathology reports. J. Am. Med. Inform. Assoc. 25 (3), 321–330.
Goeuriot, L., Jones, G.J., Kelly, L., Leveling, J., Lupu, M., Palotti, J., et al., 2018. An analysis of evaluation campaigns in ad-hoc medical information retrieval: CLEF eHealth 2013 and 2014. Inf. Retr. J. 21 (6), 507–540.
Himes, B.E., Dai, Y., Kohane, I.S., Weiss, S.T., Ramoni, M.F., 2009. Prediction of chronic obstructive pulmonary disease (COPD) in asthma patients using electronic medical records. J. Am. Med. Inform. Assoc. 16 (3), 371–379.
Horng, S., Sontag, D.A., Shapiro, N.I., Nathanson, L.A., 2012. 340 machine learning algorithms can identify patients who will benefit from targeted sepsis decision support. Ann. Emerg. Med. 60 (4), S121.
Karanam, S., Jorge-Botana, G., Olmos, R., van Oostendorp, H., 2017. The role of domain knowledge in cognitive modeling of information search. Inf. Retr. J. 20 (5), 456–479.
Kluegl, P., Toepfer, M., Beck, P.D., Fette, G., Puppe, F., 2016. UIMA Ruta: rapid development of rule-based information extraction applications. Nat. Lang. Eng. 22 (1), 1–40.
Kormilitzin, A., Vaci, N., Liu, Q., Nevado-Holgado, A., 2020. Med7: a transferable clinical natural language processing model for electronic health records. arXiv preprint arXiv:2003.01271.
Li, P., Huang, H., 2016. UTA DLNLP at SemEval-2016 Task 12: deep learning-based natural language processing system for clinical information identification from clinical notes and pathology reports. In: Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pp. 1268–1273.
Liu, Z., Yang, M., Wang, X., Chen, Q., Tang, B., Wang, Z., et al., 2017a. Entity recognition from clinical texts via recurrent neural network. BMC Med. Inform. Decis. Mak. 17 (2), 67.
Liu, Z., Tang, B., Wang, X., Chen, Q., 2017b. De-identification of clinical notes via recurrent neural network and conditional random field. J. Biomed. Inform. 75, S34–S42.


Pergola, G., He, Y., Lowe, D., 2018. Topical phrase extraction from clinical reports by incorporating both local and global context. In: Workshops at the Thirty-Second AAAI Conference on Artificial Intelligence.
Roberts, K., Rink, B., Harabagiu, S.M., Scheuermann, R.H., Toomay, S., Browning, T., et al., 2012. A machine learning approach for identifying anatomical locations of actionable findings in radiology reports. In: AMIA Annual Symposium Proceedings, vol. 2012, p. 779. American Medical Informatics Association.
Rochefort, C.M., Buckeridge, D.L., Forster, A.J., 2015. Accuracy of using automated methods for detecting adverse events from electronic health record data: a research protocol. Implement. Sci. 10 (1), 5.
Sarker, A., Gonzalez, G., 2015. Portable automatic text classification for adverse drug reaction detection via multicorpus training. J. Biomed. Inform. 53, 196–207.
Savova, G.K., Fan, J., Ye, Z., Murphy, S.P., Zheng, J., Chute, C.G., et al., 2010. Discovering peripheral arterial disease cases from radiology notes using natural language processing. In: AMIA Annual Symposium Proceedings, vol. 2010, p. 722. American Medical Informatics Association.
Si, Y., Wang, J., Xu, H., Roberts, K., 2019. Enhancing clinical concept extraction with contextual embeddings. J. Am. Med. Inform. Assoc. 26 (11), 1297–1304.
Sohn, S., Savova, G.K., 2009. Mayo clinic smoking status classification system: extensions and improvements. In: AMIA Annual Symposium Proceedings, vol. 2009, p. 619. American Medical Informatics Association.
Soldaini, L., Goharian, N., 2017. Learning to rank for consumer health search: a semantic approach. In: European Conference on Information Retrieval. Springer, Cham, pp. 640–646.
Sun, Y., Loparo, K., 2019. Information extraction from free text in clinical trials with knowledge-based distant supervision. In: Proceedings of the 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC), vol. 1, pp. 954–955. IEEE.
Wang, H., Zhang, Q., Yuan, J., 2017. Semantically enhanced medical information retrieval system: a tensor factorization based approach. IEEE Access 5, 7584–7593.
Zheng, C., Rashid, N., Wu, Y.L., Koblick, R., Lin, A.T., Levy, G.D., et al., 2014. Using natural language processing and machine learning to identify gout flares from electronic clinical notes. Arthritis Care Res. 66 (11), 1740–1748.


C H A P T E R

13 Ontology-based intelligent decision support systems: A systematic approach

Ramesh Saha1, Sayani Sen2, Jayita Saha3, Asmita Nandy4, Suparna Biswas5 and Chandreyee Chowdhury4

1Department of Information Technology, Gauhati University, Guwahati, Assam, India
2Department of Computer Application, Sarojini Naidu College for Women, Kolkata, India
3Department of Artificial Intelligence and Data Science, Koneru Lakshmaiah Education Foundation Deemed to be University, Hyderabad, India
4Department of Computer Science & Engineering, Jadavpur University, Kolkata, India
5Department of Computer Science & Engineering, Maulana Abul Kalam Azad University of Technology, Kolkata, India

13.1 Introduction The functions of a decision support system (DSS) may be twofold. First, providing a deeper analysis of a patient's health condition, as a tool to aid the practitioner, based on the patient's own historical data, in order to find any notable deterioration (Lysaght et al., 2019). Second, predicting the future health condition of the patient based on large training data, applying big data, machine learning, etc. (Chatterjee et al., 2017; Middleton et al., 2016). The first would help to extend timely support to elderly people suffering from chronic diseases needing prolonged treatment by analyzing patient vitals (Zenuni et al., 2015). The second would be beneficial for symptom-based diagnosis of both diseased and apparently fit people. The DSS may fill the gap left by blind sensor-based monitoring by applying intelligent methodologies for meaningful analysis and findings. Web semantics plays a key role in storing electronic health records (EHR) based on ontologies, for enhanced usability of acquired knowledge (Sutton et al., 2020). DSSs are also prevalent in providing patient health monitoring and support while the patient is in transit (home to hospital, hospital to home, or hospital to pathology lab, etc.) through the application of the Internet of Vehicles (IoV), considering mobility. For the successful application of such aid in healthcare, quality of service (QoS) and quality of experience (QoE) must be ensured (Baig et al., 2019); for instance, user-friendliness plays an important role in the wide-scale adoption of such technology. Fig. 13.1 details the workflow of DSS for smart and remote healthcare, supported by enabling technologies such as the Internet of Things (IoT), sensors, machine learning (ML), artificial intelligence (AI), and ontology support.

FIGURE 13.1 Workflow of DSS for remote healthcare that combines different databases to a knowledge base for better services. DSS, Decision support system.

13.2 Enabling technologies to implement decision support systems This section summarizes the core techniques behind state-of-the-art DSSs for IoT-based smart healthcare applications. Data collected from heterogeneous sources are transmitted and stored either on a server or in cloud storage, where analysis based on ML and deep learning techniques is applied to extract meaningful patterns and classifications from different types of data. This is detailed in the subsequent subsections.

13.2.1 IoT-enabled decision support system for data acquisition, transmission, and storage Data-driven clinical DSSs are being developed today for smart healthcare. For a detailed analysis of a patient's condition, the patient's data have to be collected from different sources and kept in data repositories. These data include the patient's medical history, sensing data, doctors' observations, ECG/EEG reports, etc. The following subsections detail some of the data sources.


13.2.1.1 Data acquisition Sensors are an important part of a DSS. Wearable sensors have been used widely for disease diagnosis and behavior monitoring; to monitor daily activities, most works use sensors attached to, or carried on, a person's body. Spänig et al. (2019) used a smart garment in which small electronic devices, along with multipurpose sensors, are integrated into textile fabrics. These garments were designed for people who risk their lives on duty, such as firefighters and civil protection rescuers. In that work, mainly the integrated accelerometer and an ECG lead were used; the collected signals were processed to extract features for Human Activity Recognition (HAR) purposes (see the sketch after Section 13.2.1.2). In the modern era there is demand for cheap, portable devices for identifying activities, and the smartphone is one such option: it is affordable as well as reliable, supports multitasking, and can collect data from its embedded sensors. The sensors mentioned above, which can be worn or carried, are also found integrated on the motherboard of a smartphone, so data can be collected simply by carrying the phone, without any burdensome weight on the body.

13.2.1.2 Data transmission and storage Data from different sources are stored either on a local server or in cloud storage for durability. Knowledge repositories and data warehouses enable extracting insights from data pertaining to multiple sources. Since user privacy cannot be guaranteed for cloud storage, it is better to store the insights rather than the raw data. For frequent interaction with the cloud, Internet traffic can become a bottleneck, so another paradigm, edge computing, is emerging, in which data may be stored in the cloud while pretrained prediction models are placed at the edges. Edges are one hop away from the data sources and do not need an Internet connection, so for real-time response these pretrained models kept at the edges are useful for detecting emergency conditions of patients.
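To make the feature extraction step of Section 13.2.1.1 concrete, the following minimal sketch computes windowed statistical features from an accelerometer stream, of the kind commonly fed to a HAR classifier. The simulated signal, sampling rate, and feature set are assumptions for illustration only.

```python
import numpy as np

def extract_features(signal: np.ndarray, window: int = 50) -> np.ndarray:
    """Split a 1-D accelerometer stream into fixed-size windows and compute
    simple statistical features commonly used for HAR."""
    features = []
    for start in range(0, len(signal) - window + 1, window):
        w = signal[start:start + window]
        features.append([w.mean(), w.std(), w.min(), w.max()])
    return np.array(features)

# Simulated accelerometer magnitude stream (e.g., a smartphone at 50 Hz)
rng = np.random.default_rng(0)
stream = rng.normal(9.8, 0.5, 500)  # gravity-dominated signal with noise
print(extract_features(stream).shape)  # (10, 4): 10 windows, 4 features each
```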

13.2.2 Application of machine learning and deep learning techniques for predictive analysis of patient’s health During the last few years, there was a significant growth in the number of applications in analysis of patient’s health using ML. Many researchers have explored application domains to identify specific activity types or behaviors, disease, gene expression, drug discovery to reach specific goals in these domains. Mostly used machine learning techniques are Support Vector Machine, Decision Tree (DT), Multi-Layer Perceptron, etc. in clinical DSS. The idea of Artificial Neural Network (ANN) is based on the belief that working of the human brain can be imitated using silicon and wires as living neurons and dendrites by making the right connections. ANNs are composed of multiple nodes, which imitate biological neurons of the human brain. The neurons are connected by links and they interact with each other. The nodes take input data and perform simple operations on the data. The results of these operations are passed to other neurons. The output at each node is called its activation or node value. Each link is associated with weight. ANNs are capable of learning, which takes place by altering weight values. The advancement of learning

FIGURE 13.2 Overall working procedure of a clinical DSS. DSS, Decision support system.

The advancement of learning techniques makes feature engineering automatic. Deep learning techniques help find hidden patterns in medical data (Thakur and Biswas, 2020) and can mitigate the issue of missing data or attributes. Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Deep Belief Networks, etc. are mostly used to build such DSSs. The clinical DSS has several applications that are beneficial for medical science; a few of them, namely EHRs, disease identification, and behavior monitoring, are considered for further discussion. The overall working procedure is described in Fig. 13.2.

13.2.2.1 Identification of diseases

Systems proposed for this purpose help doctors predict the diseases a patient may suffer from, which is useful for patients as well as practitioners. Rathke et al. (2011) attempted to diagnose diabetes, since it is a rising area of concern and may pose a threat to human life. They proposed an application through which a user (a doctor, patient, physician, or medical student) can input information; those attributes are sent to a server over the Internet. The data were first used as a training dataset and processed at the server side by the admin, and features were extracted from them so that a combination of data mining and machine learning could be applied. The classification algorithms included DT and k-Nearest Neighbor (k-NN). Their expert system achieved considerable accuracy in both the training and test phases. Such applications can be used by anybody in the future, especially by doctors and medical practitioners, for disease diagnosis. With the advancement of AI, deep learning (DL) techniques have been found to contribute significantly to designing complex systems for patient identification. Gray et al. (2016) reported a system that works as a virtual doctor and diagnoses type 2 diabetes mellitus. The system is developed with noninvasive sensors, and speech recognition and analysis are used to interact with the patients automatically. The DL techniques, here a Deep Neural Network (DNN), make the system more efficient and reduce the involvement of the practitioner. Cloud-based IoT DSSs use data from wearable and smartphone-embedded sensors, pathological reports, and medical images to identify different diseases and their severity. The clinical DSS reported in Wang and Zeng (2013) is built to predict kidney disease and its severity; a DNN along with particle swarm optimization is used to identify the disease, achieving considerable accuracy with proper diagnosis.

13.2.2.2 Smart electronic health records

An EHR contains a patient's information in digital form, which caregivers can track and analyze across patients. Due to its portability and efficiency, an EHR can store a large volume of data; the basic contents are patient information, clinical information, continuous health parameter readings, the diagnosis process, etc. Initially, EHRs were used just to store the huge data of patients and healthcare administration; by analyzing these data, however, abnormal situations can be predicted and prevented. Miotto et al. (2016) proposed an application for maintaining health records using a digital card that can be used solely by doctors, the respective hospital departments, and the receptionist. Using machine learning, they found an optimal solution for patients' treatment and record maintenance, employing DT and then Random Forest for performance improvement. A few works in the literature use deep learning techniques to build EHR applications. Medical records are stored periodically, on each visit of a patient, and contain pathological reports, diagnosis details, event details, etc. The data arrive irregularly because patients visit a clinic irregularly, but an EHR can maintain them in a regular form. In Benrimoh et al. (2018), the data are converted into sentences for individual patient visits and fed into a neural network layer, where a CNN is applied for the analysis. An RNN-based technique is used to extract clinical events and their corresponding attributes from EHR records in Lawanot et al. (2019); sequence labeling is an important feature there for disease prediction and drug prescription.

13.2.2.3 Behavioral monitoring

Nowadays, it has become very important to keep an eye on people's activities for surveillance and health monitoring.

TABLE 13.1 Comparison of state-of-the-art works.

| Existing work and year | Domain of CDS | Application | Data source | Learning techniques |
| (Saha et al., 2018a) 2016 | Electronic health record | Disease prediction | Multidomain patients' data | Stacked Denoising Autoencoders |
| (Wang and Zeng, 2013) 2019 | Disease identification | Kidney disease and its severity | Wearable and smartphone-embedded sensors, pathological reports, etc. | DNN for classification, PSO feature extraction |
| (Kumar, 2015) 2019 | Behavior monitoring | Mood, stress of office employees | Web camera images, sensors | Faster R-CNN, Fuzzy Clustering |

CDS, Clinical Decision Support; DNN, Deep Neural Network; R-CNN, Region-based Convolutional Neural Network; PSO, Particle Swarm Optimization.

The clinical DSS for mental health and behavior monitoring reported in Saha et al. (2018b) provides effective solutions and treatments for depression and Alzheimer's disease. These types of decision systems are helpful for practitioners and caregivers, and different machine learning and deep learning techniques make them autonomous. Kumar (2015) proposed a system to monitor the behavior of employees, their mood, and their stress using images of daily activity. The daily hand movements are captured by a web camera, and a Faster Region-based CNN (Faster R-CNN) is applied to identify the daily activity, with the Region of Interest labeled in random images of body and hands. Fuzzy clustering is applied to analyze stress and mood. Activity recognition can work effectively on any smartphone by applying an ensemble of supervised classifiers, as shown in Roche (2003) and Choi et al. (2006); the system is reported to perform well even when the training and test smartphone configurations differ. A hedged sketch of such an ensemble follows this paragraph. Currently, machine learning techniques play an important role in creating and maintaining clinical DSSs. CNNs and their variants are the learning techniques used most for such DSSs, and they are generally found to perform well when the datasets are properly labeled. A few representative works are summarized in Table 13.1. Neural network-based techniques are the dominant analysis technique for many smart healthcare applications.
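Below is a hedged Python sketch of such an ensemble, combining the DT and k-NN classifiers of Section 13.2.2.1 with an SVM via soft voting; the features and labels are synthetic placeholders, not the cited papers' actual pipelines.

import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 6))   # e.g., mean/std of 3-axis accelerometer windows
y = rng.integers(0, 3, 400)     # three activity classes, e.g., walk/sit/stairs

# Soft voting averages class probabilities of heterogeneous base models,
# which helps when no single classifier suits every phone's sensor profile.
ensemble = VotingClassifier(
    estimators=[("dt", DecisionTreeClassifier(max_depth=6)),
                ("knn", KNeighborsClassifier(5)),
                ("svm", SVC(probability=True))],
    voting="soft")
ensemble.fit(X, y)
print("predicted activity class:", ensemble.predict(X[:1]))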

13.3 Role of ontology in DSS for knowledge modeling

In a healthcare system, information from patients, nurses, and doctors is stored as EHRs, which can be accessed through the Web Ontology Language in OWL format (Abanda et al., 2011). Metadata modeling is used to classify the records stored in the database; this metadata model helps map input data to output data. Modeling metadata has several advantages, the most important being that it can reduce noise and handle incomplete and erroneous data present in the ontology database. In the contemporary era, researchers have undertaken wide research on e-healthcare-based ontology systems (Noy, 2004).

FIGURE 13.3 Ontology-based decision support system.

FIGURE 13.4 Medical activity ontologies. (A) Medical activity classes; (B) medical activity tree; (C) medical objective properties.

They have utilized different aspects of the research environment in knowledge management, web management, business-to-business applications, and other important applications. A DSS relying on semantic technology has been devised to help doctors make the correct decisions to diagnose diseases accurately and prevent them from spreading. Fig. 13.3 (Abas et al., 2011) shows the principal components of such a system: a phytopathology ontology, a rule-based engine, and a semantic index module. For example, the Agriculture University of Ecuador (Ana and Francisco) has contributed to designing an ontology for plant diseases. Such an ontological system has been designed using the OWL 2 Web Ontology Language and Protégé (Knublauch et al., 2004; Protocol, S.P.A.R.Q.L., 2016). An ontology-based medical service system for IoT healthcare is illustrated in Fig. 13.4 (Noy, 2004).


The phytopathology ontology for the IoT-based healthcare system defines eight main classes, namely Disorder, Hospital, ObsoleteClass, Subset, Symptom, Synonym, SymptomType, and Treatment. These classes are further divided into subclasses as given in Fig. 13.4. The ontological information has been collected from phytopathology documents as a resource and validated by the domain experts. The ontology not only defines the taxonomy of the diseases but also establishes a set of properties that allows the system to identify a disease based on the symptoms of the patient. The ontology relates classes through object properties, so the classes play an important role in this approach, appearing in the domain and range of most properties. Properties such as isCausedBy, isSymptomOf, hasSymptom, isTreatedWith, etc. are defined in this way. The ontology is populated with its object properties and is built through iterative processes. The knowledge-based ontology system (Erling and Mikhailov, 2010) has two components: a terminology component, which describes a domain in terms of its classes and properties (Tbox), and an assertion component, which describes the attributes of individuals (Abox) and connects instances to the Tbox concepts of which they are members. The individuals of the present work are stored in the RDF (RDFWorkingGroup, 2016) triple format (Knublauch et al., 2004; Lagos-Ortiz et al., 2018). A triple has three components: a subject, a predicate, and an object. Here, the subject and object instances are defined by the ontology, while the predicate corresponds to a property. The RDF triples are generated and stored in a web-based application server that provides SQL, XML, and RDF data management (Paredes-Valverde et al., 2015). A minimal sketch of this triple representation appears after the two modules below.

• Rule-based engine: The benefit of the ontology-based system can be described in two principal ways. First, it offers a standard vocabulary for the phytopathology domain and helps to integrate different data sources. Second, the ontology represents a foremost source of computable domain knowledge, from which diseases can be diagnosed by applying the DSS. In that scenario, a group of domain experts on different aspects of the various diseases is interviewed. After the interviews, the experts are asked to formulate a set of conditions, based on the knowledge contained in the ontology, about the relationships between symptoms and diseases. They furnish the symptoms of the various diseases and assign a rank to each symptom. In this way, possible old and new diseases are identified, some of which share symptoms, and a disease can be diagnosed using the rule-based engine. The rule-based engine also uses the object properties isTreatmentOf and isRecommendationOf, which refer to any treatment or recommendation suggested by the caregiver.

• Semantic indexing module: The main purpose of the ontology system is disease diagnosis and recommendation to the patient. The semantic indexing approach is built on the semantic annotation technique recommended by Cunningham et al. (2009). The documents handled by the semantic index module are disease-related documents; they are processed by NLP (natural language processing), and semantic annotations are created as per the ontology model. The GATE framework (Zhao et al., 2017) is used in this processing, and an NLP tool is developed as the software component that processes the human language. Once the semantically annotated document is ready, the module assigns a weight to each annotation, reflecting how strongly the document expresses the ontology concepts. The assigned weights permit the system to provide the caregiver a more refined document with the diagnosed disease. The weighting procedure follows the classic vector space model (Iroju et al., 2013), grounded on the frequency of occurrence of each ontology concept in a document. The semantic elements considered are not only the ontology elements that explicitly appear in the documents but also elements related to them through the ontology's taxonomic relations.
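The following minimal Python sketch illustrates the subject-predicate-object triple representation with the rdflib library; the ipo-like property names (hasSymptom, isTreatedWith) and the example namespace are illustrative, not the chapter's actual vocabulary.

from rdflib import Graph, Namespace

EX = Namespace("http://example.org/health#")
g = Graph()
g.bind("ex", EX)

# Each statement is one triple: subject, predicate, object.
g.add((EX.Flu, EX.hasSymptom, EX.Fever))
g.add((EX.Flu, EX.isTreatedWith, EX.Antivirals))
g.add((EX.Fever, EX.isSymptomOf, EX.Flu))

# Rule-engine-style lookup: which disorders does Fever indicate, and how
# are they treated?
q = """
SELECT ?disorder ?treatment
WHERE {
    ?disorder ex:hasSymptom ex:Fever ;
              ex:isTreatedWith ?treatment .
}"""
for row in g.query(q, initNs={"ex": EX}):
    print(row.disorder, row.treatment)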

13.3.1 Issues and challenges

Ontology-based systems have abundant benefits; however, there are some challenges related to knowledge-based decision systems that should be communicated. Information must be retrieved from heterogeneous data, converting between low-level and high-level representations in both directions. In this chapter, we present some paradigms used to ease these challenges using various types of technology. Some researchers have introduced means to tackle the interoperability challenges from a technological point of view (Iroju et al., 2013; Lasierra et al., 2013; Miori and Russo, 2012). Ontology has been used to improve semantic interoperability in such systems and to verify the homogeneity of data formats. Coupling the semantic and ontology approaches has proved to be a big challenge, and a feasible solution still needs to be found (Jain, 2020). In ontology-based systems, the main challenges and issues are as follows (a sketch of the first item follows this list):

1. In an IoT environment, different providers send sensor data that is collected by different consumers, and this huge amount of data is stored in a heterogeneous manner. Information sharing and querying of sensors is an obstacle in this setting. In context reasoning, IoT-sensed data is low-level context, so it should be converted to high-level context according to the context model and inference rules. For this requirement, some researchers focus on ontology-based inference and domain-specific rules (Miori and Russo, 2012).
2. In the real world, driving safety decisions are a primary concern. Analyzing sensor data to improve autonomous vehicle driving has become one of the pressing issues in this research field, yet collecting and representing sensor data in a machine-understandable format for further knowledge extraction remains a difficult and immensely complicated task.
3. A healthcare solution (Gyrard et al., 2014) can be effective and efficient if heterogeneous health data are stored in the database in a uniform format with a common vocabulary. The data should be understandable by both humans and machines, which requires a proper standard, accessible through ontology tools and accurate schemes. Only then does ontology play its full role in the healthcare system, increasing the awareness and engagement of patients as well as doctors while applications are developed for better and more affordable schemes in the public health sector.
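As a minimal sketch of challenge 1, the following Python snippet lifts a low-level sensor reading to high-level context with a SPARQL CONSTRUCT rule run by rdflib; the vocabulary and the 38 C fever threshold are illustrative assumptions.

from rdflib import Graph, Namespace, Literal
from rdflib.namespace import XSD

EX = Namespace("http://example.org/ctx#")
g = Graph()
g.bind("ex", EX)
# Low-level context: a raw temperature reading from one provider's sensor.
g.add((EX.reading42, EX.hasCelsius, Literal(39.4, datatype=XSD.double)))
g.add((EX.reading42, EX.ofPatient, EX.patient7))

# Domain-specific inference rule: a reading above 38 C means the patient
# has fever (high-level context).
rule = """
CONSTRUCT { ?p ex:hasCondition ex:Fever . }
WHERE {
    ?r ex:hasCelsius ?t ; ex:ofPatient ?p .
    FILTER (?t > 38.0)
}"""
g += g.query(rule, initNs={"ex": EX}).graph  # add the inferred triples
print(g.serialize(format="turtle"))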

In the context of ontology-based healthcare (Iroju et al., 2013; Lasierra et al., 2013), medical systems need to be improved as follows:

• To improve the accuracy of patient diagnosis in real time, identifying the symptoms of the disease, the test results, and a thorough study of the medical history is crucial.
• Building a more powerful system that incorporates informal data acts as a strong tool for the diagnosis process, as such data is reused, stored, and shared with the patient database.
• Using an ontology-based system can support and integrate knowledge and data.

The issues above are focal points for this study and prerequisites for the development of an ontology-driven semantic knowledge base for public healthcare. Bearing this in mind, these challenges need to be addressed to develop a powerful healthcare system in which, if an emergency occurs, the system can easily relate health and environmental data. The current research work uses a robust, structure-based methodology that builds a centralized knowledge base for healthcare systems following a well-structured scientific approach, and initiatives have been taken to standardize the system further to meet these requirements.

13.3.2 Technology available

Ontology-based IoT technology is helping to solve many problems, as discussed below:

• Collecting and inferring the information: Doctors collect real-time, vital information from the patient and store it in the database after registration by the administrator, who assigns unique codes to patient and doctor for mapping. If a doctor finds a new disease that has not been diagnosed previously, they must identify its possible symptoms, and this information should be documented and shared with other doctors for future reference.
• Finding possible reasons and solutions: The recently collected vital data is checked against the interrelated components of the patient's record to identify the possible reasons for the disease, and the expert group provides the needed suggestions. The doctor then describes the disease and draws a conclusion about the case at hand. This is followed by a breakdown analysis in which the cause of the disease is stated by the doctor and a possible treatment type is prescribed. The treatment may vary depending upon the severity of the disease, and this information is updated in the database.
• Emergency decision support system: In the emergency DSS, all task groups share responsibility for coordination and information sharing in the decision-making process, and any change in these actions requires adjustment according to the decision made. All important information is communicated through ontology-based data access. After the doctor analyzes the data, treatment arrangements are made; the doctor also prescribes the proper medicines and their dosages. As usual in medical systems, a doctor cannot construct an ontology for a brand-new disease, which makes such patients difficult to cure, and predicting which diseases may emerge in other countries is an inconceivable task; consequently, the doctor cannot make important decisions when such emergency conditions occur.

There needs to be an increase in domain-specific IoT healthcare applications, and domain knowledge should be made more widely available. That is why more than 200 sensor-based ontology projects are already defined on a worldwide scale (Noy, 2004; Tudorache et al., 2008; Protocol, S.P.A.R.Q.L., 2016; Gyrard et al., 2014). These ontology-based systems provide expert knowledge, explanations, and illustrations of the different disease concepts. Ontologies and semantic descriptions are used particularly in domain-based applications, as a means to define the types, properties, and interrelationships of the entities that exist in a specific domain.

13.4 QoS and QoE parameters in decision support systems for healthcare

The decision-making process in the QoS and QoE domain has, in general, three phases: (1) formulation of the decision-making problem; (2) collection, storage, and fusion of relevant data in connection with the given problem; and (3) data processing for decision-making (Laskey, 2006). In a DSS, one module corresponds to each phase: (1) an interaction module to facilitate interaction between user and system for problem formulation; (2) a data module to collect, analyze, and process problem-specific data; and (3) a design module to realize the DSS strategy (Marakas, 1999). Nowadays, technology-enabled smart healthcare is becoming popular due to its QoS assurance in terms of consistency, accuracy, timeliness, and mobility support, which make it a convenient solution. The service is supported by sensors, IoT, and the cloud, so it often depends on network performance and availability (Marakas, 1999; Varela et al., 2014). This is crucial when an emergency occurs: the medical staff can stay aware of the patient's health during the emergency, thereby minimizing the delay before treatment (Ullah et al., 2012). QoS and QoE are thus very important parameters for healthcare (Lee et al., 2008). While transmitting a patient's health-related data, we must keep in mind the various QoS and QoE parameters needed to obtain appropriate information about the patient's health (Wootton et al., 2015; Algaet et al., 2014; Rego et al., 2018). Security, reliability, performance, timeliness, cost, and QoE parameters should all be taken into account when designing an ontology-based DSS. Old and new paradigms for the design of such healthcare systems are shown in Fig. 13.5 (Negra et al., 2016). Identifying the user-perception requirements that set acceptable threshold values for the QoS parameters depends on network performance; the main factors are the data packet loss rate, end-to-end delay, and efficiency (Ullah et al., 2012). QoE, by contrast, is a totally user-centric assessment (Varela et al., 2014).

FIGURE 13.5 Old and new paradigms for the design of healthcare systems in DSS based on ontology. DSS, Decision support system.

13.4.1 Why are QoS and QoE important in healthcare system implementation?

Constant monitoring of the patient's health is the main reason patient data is transferred to the service provider or healthcare centers (Algaet et al., 2014). Nowadays, QoE is very important for resource allocation in wireless healthcare monitoring. The quality of the medical image data and non-image data at the receiver end is the main factor affecting the quality of the health monitoring service (Lee et al., 2008). The quality of non-image data is assessed by QoS parameters, whereas the quality of medical images is assessed by QoE parameters, considering the experience of the people who view them directly. Proper resources must be allotted for transmitting the patient data to guarantee the quality of medical data at the receiver side. Since wireless resources are limited while the demand for wireless bandwidth keeps increasing, the goal is to maximize network capacity while minimizing system cost. The metrics for assessing the quality of a medical image are especially delicate, since the loss of even one pixel of information may hamper a doctor's diagnosis (Negra et al., 2016); the practitioner's aim, after all, is to reach a conclusion by examining the patient's medical image.

13.4.2 Definition of significant quality of service and quality of experience parameters

13.4.2.1 Quality of service metrics

QoS is a quality evaluation concept covering bandwidth, throughput, end-to-end delay, packet loss rate, and jitter, and it indicates whether a network performs well. These parameters have very high implications in healthcare (Algaet et al., 2014; Rego et al., 2018). The QoS of a network is traditionally determined by four parameters attributed to a flow of data packets.


1. Reliability: Reliability (Lee et al., 2008) is a very important property of a flow; it basically means trustworthiness. If reliability is lacking and a packet is lost, it needs to be resent.
2. Delay: Delay is another property of a flow, measured from source to destination (Lee et al., 2008). Propagation delay is the time taken by a signal to travel from sender to receiver; it is calculated as link length divided by the propagation speed over the medium (see the numeric sketch after these lists).
3. Jitter: Jitter is the delay variation between packets of the same flow. If jitter is high, the gap between successive delays becomes large; if it is low, the gap is small (Wootton et al., 2015).
4. Bandwidth: Different applications require different bandwidths (Wootton et al., 2015).

To achieve good QoS, the following techniques can be applied:

1. Overprovisioning: Excess router capacity, buffer space, and bandwidth are provided so that packets flow through easily.
2. Buffering: Buffering smooths out jitter without affecting reliability or bandwidth, at the cost of increased delay.
3. Traffic shaping: Bursty traffic is forced to be transmitted at a uniform rate.
4. Resource reservation: The bandwidth, buffer space, and CPU cycles needed for successful transmission are reserved beforehand (Lee et al., 2008).
5. Admission control: Depending on the current load, a router decides whether to accept or reject a new job.
6. Proportional routing: Traffic is divided equally among the routers so that no single router gets overburdened (Lee et al., 2008).

The QoS parameters used in a QoS mechanism, that is, the resources considered for assuring QoS to an ontology-based DSS flow, are bandwidth, delay, data packet loss, and the control overhead of data packets (Lee et al., 2008). Besides these traditional parameters, some other QoS parameters required in a DSS are as follows:

1. Energy efficiency: Reducing energy consumption by using less energy to achieve the same output (Saha et al., 2019).
2. End-to-end reliability: Trustworthiness from source to destination, that is, from sender to receiver.
3. Security: A very important mechanism for preserving protected data, ensuring that the data received is not damaged.

A QoS mechanism needs to be implemented carefully. Some key principles that must be followed when implementing a QoS mechanism are:

• Transparency principle: Applications and services provided by the network should be shielded from the QoS management mechanism implemented for resource reservation.
• Specification of flow performance: The user's requirements for different flow performances must be organized into categories (Algaet et al., 2014).
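The delay and jitter definitions above can be checked numerically; in this Python sketch, the link length, propagation speed, and per-packet delays are illustrative values.

link_length_m = 2_000_000            # 2000 km path
propagation_speed = 2.0e8            # roughly 2/3 of c in fiber, in m/s
propagation_delay = link_length_m / propagation_speed  # link length / speed
print(f"propagation delay: {propagation_delay * 1e3:.1f} ms")

# Jitter: variation between delays of packets of the same flow.
delays_ms = [40.1, 40.3, 52.0, 40.2]           # per-packet end-to-end delays
jitter_ms = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
print("inter-packet jitter (ms):", jitter_ms)  # the 52.0 spike shows high jitter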

TABLE 13.2 Comparison of various QoS-based algorithms.

| Authors | Year of publication | Environment considered | QoS metrics |
| Jae-Min Lee | 2008 | Hospital | Reliability, energy, delay |
| Richard Wootton et al. | 2012 | Hospital | Reliability, energy, delay |
| Rim Negra et al. | 2015 | Remote health monitoring | Energy, delay |
| Mustafa Almahdi Algaet et al. | 2014 | Healthcare applications | Reliability, delay |
| Di Lin et al. | 2015 | Chronic disease patient monitoring | Minimum delay, high data rate |

QoS, Quality of service.

QoS and QoE are the parameters that assess the quality of healthcare images and the efficiency of performance (Varela et al., 2014; Lin et al., 2015). Table 13.2 describes various QoS algorithms.

13.4.2.2 QoE metrics

Patient vitals may take the form of data or images, and QoE varies accordingly. To guarantee the QoE parameters, the network must be assessed; a Bayesian statistical model is used for this (Rego et al., 2018), and a toy sketch follows this paragraph. Network resources need to be evaluated to support critical healthcare scenarios: for example, the deterioration of an emergency health condition or the need for seamless health data transmission for a patient in transit may drive data traffic priority (Gupta and Biswas, 2019), routing strategy, bandwidth variation, backup buffers, the number of sleeping or active-backup nodes, etc. Patient-diagnosis-oriented quality metrics (Algaet et al., 2014) can be divided into objective, subjective, and quasi-subjective metrics. QoE considers every factor that contributes to the perception of system quality, including system, human, and contextual factors. Human factors consist of low-level and high-level processing; system factors concern content, media, network, and device; context factors cover physical, temporal, social, economic, task, and technical information. QoS and QoE parameters are thus mandatory for healthcare in Web Semantics if an ontology-based DSS is to meet the real-time requirements discussed earlier. In particular, ontology supports emergency medical decision-making by the DSS using patients' vital health data carried through the IoV (Saha and Biswas, 2018) by trusted and authorized vehicles. As a result, end-to-end delay decreases and energy efficiency increases, thereby decreasing cost and increasing reliability.
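As a toy illustration of such a Bayes-style network assessment (the prior and the loss likelihoods below are invented for the sketch, not Rego et al.'s actual model), one can update the belief that a link is healthy from observed packet outcomes:

p_good = 0.5                      # prior belief that the link is good
p_loss_given_good = 0.01          # packet-loss probability on a good link
p_loss_given_bad = 0.20           # packet-loss probability on a bad link

for lost in [False, False, True, True, True]:   # observed packets
    like_good = p_loss_given_good if lost else 1 - p_loss_given_good
    like_bad = p_loss_given_bad if lost else 1 - p_loss_given_bad
    evidence = like_good * p_good + like_bad * (1 - p_good)
    p_good = like_good * p_good / evidence      # Bayes rule
    print(f"lost={lost} -> P(link good) = {p_good:.3f}")
# A falling P(link good) can trigger rerouting or extra bandwidth for vitals.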

13.5 Conclusion

Seamless health monitoring and support is a present-day need and is realizable thanks to advanced technical support such as IoT, sensors, the cloud, artificial intelligence, and machine learning. The solution gains further efficiency when supported by an ontology-based computerized DSS. This chapter offers the reader a systematic, comprehensive, and informative journey: from an overview of ontology-based DSSs for IoT-based smart healthcare, through intelligent techniques for handling big health data, to the role of ontology in the design and implementation of a DSS. Such a system remains incomplete unless it incorporates the usability concerns of its users in terms of QoS and QoE, which have been elaborated in detail. Ontology ensures semantic interoperability among the heterogeneous devices and stakeholders involved in the healthcare domain, and an ontology-based context model combines health vitals with ambient data, increasing the accuracy and precision of the DSS's diagnoses and thus leading doctors toward proper and correct findings.

References

Abanda, H., Ng'ombe, A., Tah, J.H., Keivani, R., 2011. An ontology-driven decision support system for land delivery in Zambia. Expert Syst. Appl. 38 (9), 10896–10905.
Abas, H.I., Yusof, M.M., Noah, S.A.M., 2011. The application of ontology in a clinical decision support system for acute postoperative pain management. In: Proc. 2011 International Conference on Semantic Technology and Information Retrieval (pp. 106–112). IEEE.
Algaet, M.A., Noh, Z.A.B.M., Shibghatullah, A.S., Milad, A.A., Mustapha, A., 2014. Provisioning quality of service of wireless telemedicine for e-health services: a review. Wirel. Personal Commun. 78 (1), 375–406.
Ana, M.G., Francisco, D.C.M. Ontology model for the knowledge management in the agricultural teaching at the UAE.
Baig, M.M., et al., 2019. Clinical decision support systems in hospital care using ubiquitous devices: current issues and challenges. Health Inform. J. 25 (3), 1091–1104.
Benrimoh, D., Fratila, R., Israel, S., Perlman, K., Mirchi, N., Desai, S., et al., 2018. Aifred health, a deep learning powered clinical decision support system for mental health. In: The NIPS'17 Competition: Building Intelligent Systems (pp. 251–287). Springer, Cham.
Chatterjee, P., Cymberknop, L.J., Armentano, R.L., 2017. IoT-based decision support system for intelligent healthcare—applied to cardiovascular diseases. In: Proc. 2017 7th International Conference on Communication Systems and Network Technologies (CSNT). IEEE.
Choi, N., Song, I.Y., Han, H., 2006. A survey on ontology mapping. ACM Sigmod Record 35 (3), 34–41.
Cunningham, H., Maynard, D., Bontcheva, K., Tablan, V., Ursu, C., Dimitrov, M., et al., 2009. Developing Language Processing Components with GATE Version 5 (a User Guide). University of Sheffield.
Erling, O., Mikhailov, I., 2010. Virtuoso: RDF support in a native RDBMS. In: Semantic Web Information Management (pp. 501–519). Springer, Berlin, Heidelberg.
Gray, J., Corley, J., Eddy, B.P., 2016. An experience report assessing a professional development MOOC for CS Principles. In: Proc. 47th ACM Technical Symposium on Computing Science Education (pp. 455–460).
Gupta, R., Biswas, S., 2019. Priority based IEEE 802.15.4 MAC by varying GTS to satisfy heterogeneous traffic in healthcare application. Wireless Networks. Springer. ISSN 1022-0038.
Gyrard, A., Datta, S.K., Bonnet, C., Boudaoud, K., 2014. Standardizing generic cross-domain applications in Internet of Things. In: Proc. 2014 IEEE Globecom Workshops (GC Wkshps) (pp. 589–594). IEEE.
Iroju, O., Soriyan, A., Gambo, I., Olaleke, J., 2013. Interoperability in healthcare: benefits, challenges and resolutions. Int. J. Innov. Appl. Stud. 3 (1), 262–270.
Jain, S., 2020. Understanding Semantics-Based Decision Support. Chapman and Hall/CRC. 162 pages.
Knublauch, H., Fergerson, R.W., Noy, N.F., Musen, M.A., 2004. The Protégé OWL plugin: an open development environment for semantic web applications. In: Proc. International Semantic Web Conference (pp. 229–243). Springer, Berlin, Heidelberg.
Kumar, V., 2015. Ontology based public healthcare system in Internet of Things (IoT). Procedia Comput. Sci. 50, 99–102.
Lagos-Ortiz, K., Medina-Moreira, J., Morán-Castro, C., Campuzano, C., Valencia-García, R., 2018. An ontology-based decision support system for insect pest control in crops. In: Proc. International Conference on Technologies and Innovation (pp. 3–14). Springer, Cham.


Lasierra, N., Alesanco, A., Guillén, S., García, J., 2013. A three stage ontology-driven solution to provide personalized care to chronic patients at home. J. Biomed. Inform. 46 (3), 516–529.
Laskey, K.B., 2006. Decision making and decision support. http://ite.gmu.edu/klaskey/SYST542/DSS_Unit1.pdf.
Lawanot, W., Inoue, M., Yokemura, T., Mongkolnam, P., Nukoolkit, C., 2019. Daily stress and mood recognition system using deep learning and fuzzy clustering for promoting better well-being. In: Proc. 2019 IEEE International Conference on Consumer Electronics (ICCE) (pp. 1–6). IEEE.
Lee, J.M., Lee, J.H., Chung, T.M., 2008. Experimental QoS test of medical data over wired and wireless networks. In: Proc. 2008 10th International Conference on Advanced Communication Technology (vol. 1, pp. 142–146). IEEE.
Lin, D., Labeau, F., Vasilakos, A.V., 2015. QoE-based optimal resource allocation in wireless healthcare networks: opportunities and challenges. Wirel. Netw. 21 (8), 2483–2500.
Lysaght, T., et al., 2019. AI-assisted decision-making in healthcare. Asian Bioeth. Rev. 11 (3), 299–314.
Marakas, G.M., 1999. Decision Support Systems in the Twenty-first Century. Prentice-Hall, Inc, Upper Saddle River, NJ, USA.
Middleton, B., Sittig, D.F., Wright, A., 2016. Clinical decision support: a 25 year retrospective and a 25 year vision. Yearb. Med. Inform. 25.S (01), S103–S116.
Miori, V., Russo, D., 2012. Anticipating health hazards through an ontology-based, IoT domotic environment. In: Proc. 2012 Sixth International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing (pp. 745–750). IEEE.
Miotto, R., Li, L., Kidd, B.A., Dudley, J.T., 2016. Deep patient: an unsupervised representation to predict the future of patients from the electronic health records. Sci. Rep.
Negra, R., Jemili, I., Belghith, A., 2016. Wireless body area networks: applications and technologies. Procedia Comput. Sci. 83, 1274–1281.
Noy, N.F., 2004. Semantic integration: a survey of ontology-based approaches. ACM Sigmod Record 33 (4), 65–70.
Paredes-Valverde, M.A., Rodríguez-García, M.Á., Ruiz-Martínez, A., Valencia-García, R., Alor-Hernández, G., 2015. ONLI: an ontology-based system for querying DBpedia using natural language paradigm. Expert Syst. Appl. 42 (12), 5163–5176.
Protocol, S.P.A.R.Q.L., 2016. RDF Query Language. RDF Data Access Working Group (DAWG) Std.
Rathke, F., Hansen, K., Brefeld, U., Müller, K.R., 2011. StructRank: a new approach for ligand-based virtual screening. J. Chem. Inf. Model. 51 (1), 83–92.
Rego, A., Canovas, A., Jimenez, J.M., Lloret, J., 2018. An intelligent system for video surveillance in IoT environments. IEEE Access 6, 31580–31598. https://doi.org/10.1109/access.2018.2842034.
Roche, C., 2003. Ontology: a survey. IFAC Proc. Vol. 36 (22), 187–192.
Saha, R., Biswas, S., 2018. Analytical study on data transmission in WBAN with user mobility support. In: 2018 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET) (pp. 1–5). IEEE.
Saha, J., Chowdhury, C., Biswas, S., 2018a. Two phase ensemble classifier for smartphone based human activity recognition independent of hardware configuration and usage behaviour. Microsyst. Technol. 24 (6), 2737–2752.
Saha, J., Chowdhury, C., Roy Chowdhury, I., Biswas, S., Aslam, N., 2018b. An ensemble of condition based classifiers for device independent detailed human activity recognition using smartphones. Information 9 (4), 94.
Saha, R., Naskar, S., Biswas, S., Saif, S., 2019. Performance evaluation of energy efficient routing with or without relay in medical body sensor network. Springer. ISSN 2190-7188. https://doi.org/10.1007/s12553-019-00346-z.
Spänig, S., Emberger-Klein, A., Sowa, J.P., Canbay, A., Menrad, K., Heider, D., 2019. The virtual doctor: an interactive clinical-decision-support system based on deep learning for non-invasive prediction of diabetes. Artif. Intell. Med. 100, 101706.
Sutton, R.T., Pincock, D., Baumgart, D.C., Sadowski, D.C., Fedorak, R.N., Kroeker, K.I., 2020. An overview of clinical decision support systems: benefits, risks, and strategies for success. npj Digital Med. 3 (1), 1–10.
Thakur, D., Biswas, S., 2020. Smartphone based human activity monitoring and recognition using ML and DL: a comprehensive survey. J. Ambient Intell. Hum. Comput. Springer. https://doi.org/10.1007/s12652-020-01899-y.
Tudorache, T., Noy, N.F., Tu, S., Musen, M.A., 2008. Supporting collaborative ontology development in Protégé. In: Proc. International Semantic Web Conference (pp. 17–32). Springer, Berlin, Heidelberg.


Ullah, M., Fiedler, M., Wac, K., 2012. On the ambiguity of Quality of Service and Quality of Experience requirements for eHealth services. In: 2012 6th International Symposium on Medical Information and Communication Technology (ISMICT) (pp. 1–4). IEEE.
Varela, M., Skorin-Kapov, L., Ebrahimi, T., 2014. Quality of service versus quality of experience. In: Quality of Experience (pp. 85–96). Springer, Cham.
Wang, Y., Zeng, J., 2013. Predicting drug-target interactions using restricted Boltzmann machines. Bioinformatics 29 (13), i126–i134.
Wootton, R., Liu, J., Bonnardot, L., 2015. Relationship between the quality of service provided through store-and-forward telemedicine consultations and the difficulty of the cases: implications for long-term quality assurance. Front. Public Health 3, 217.
Zenuni, X., Raufi, B., Ismaili, F., Ajdari, J., 2015. State of the art of semantic web for healthcare. Procedia Soc. Behav. Sci. 195, 1990–1998.
Zhao, L., Ichise, R., Liu, Z., Mita, S., Sasaki, Y., 2017. Ontology-based driving decision making: a feasibility study at uncontrolled intersections. IEICE Trans. Inf. Syst. 100 (7), 1425–1439.



14 Ontology-based decision-making

Mark Douglas de Azevedo Jacyntho and Matheus D. Morais
Coordination of Informatics, Fluminense Federal Institute, Campos dos Goytacazes, Rio de Janeiro, Brazil

14.1 Introduction

A decision support system (DSS) is a special kind of information system that helps us make better decisions in our business tasks. Since first introduced in the late 1960s, DSSs have been applied in a variety of business areas such as healthcare, legal, emergency management, power management, and e-commerce recommender systems. Following Sprague and Carlson (1982), the early DSS architectures contained three main components:

• dialog or user module: the human-computer interface used to interact with the system;
• data module: the database management system that stores the data collected and processed;
• model module: the model that encodes the decision support strategy.

Since the 1990s, knowledge bases have been introduced into this architecture, providing reasoning capability to the system and leading to knowledge-based DSSs (KBDSS), also called expert systems, in which the knowledge domain of the problem to be solved is formally modeled in advance and then exercised through inference modules. Thus, according to Marakas (2003), a KBDSS architecture contains the three previous components plus:

• inference module: a reasoner (software that uses formal axioms to infer new data) and a knowledge base that together carry out problem recognition and solution suggestion;
• user: the system user as a fundamental part of the problem-solving process.

In addition to these five components, we argue that a service module, employing a service architectural style like Representational State Transfer (Fielding, 2000), is very convenient to naturally accommodate different kinds of front-ends (Web, mobile, desktop, etc.). A KBDSS has many advantages over a conventional DSS, due to its intelligence and knowledge. Some of these benefits are highlighted in Mendes (1997):



• The expertise of specialists can be distributed so that it can be used by a large number of people.
• Users' productivity and performance improve, since the system provides vast knowledge that, under normal conditions, would require much more time to assimilate and apply in decision-making.
• The organization's dependency in critical, unavoidable situations, such as the absence of a specialist, is reduced. When employees' knowledge is registered in the KBDSS, the company becomes significantly less dependent on any particular employee's presence.
• KBDSSs are suitable tools for training groups of people quickly and pleasantly, and after the training they can serve as instruments for collecting information on trainee performance, yielding input for reformulating lessons, in addition to providing immediate support to trainees as they apply the knowledge in their daily tasks.

In the last two decades, research in the KBDSS field has been boosted by Semantic Web standards and technologies. In particular, ontologies have been adopted in various application domains. According to Jacyntho (2012), an ontology is a formal representation of a given domain of knowledge, where the concepts (classes) of this domain are explicitly defined, as well as their properties and relationships. Ontologies bring two differential benefits:

• they give explicit semantics to data, and thus the power of inference to machines, making it possible to discover new knowledge through reasoners;
• they promote the sharing and reuse of a vocabulary, ensuring a common understanding of a domain of knowledge.

To illustrate the power of inference that ontologies can provide, the standard Web Ontology Language (OWL) (W3C OWL Working Group, 2012), a meta-ontology used to create ontologies, allows one to express several kinds of axioms, such as:

• relationships between classes: classes and subclasses; equivalent classes; disjoint classes; complex classes formed by restrictions and logic operators;
• relationships between properties: object and datatype properties; subproperties; domain and range restrictions; universal, existential, value, and qualified cardinality restrictions; functional, inverse-functional, and nonfunctional properties; inverse properties; transitive properties; reflexive and nonreflexive properties; symmetric and asymmetric properties; property chains; disjoint properties; keys.

Based on our experience, to construct an ontology-based knowledge base using the Semantic Web standards, the knowledge engineers/developer team should follow this iterative method (Fig. 14.1):

1. Inspired by Gruninger and Fox (1995), together with domain experts, formulate the decision-making problem through a set of competence questions for which the knowledge base should provide answers;
2. Based on the competence questions, determine the scope and build a conceptual model of the domain concepts (classes) and their relationships;


FIGURE 14.1 Ontology-based knowledge base method.

3. Validate the conceptual model with the domain experts, resolving misunderstandings over terms and establishing a true domain model that becomes a ubiquitous vocabulary among knowledge engineers, the developer team, and stakeholders;
4. Using OWL, map the conceptual domain classes/properties and their invariants into an ontology, that is, the T-Box statements in Description Logics (DL) terminology. For class invariants and business rules that cannot be expressed in OWL-DL, the SPARQL Inferencing Notation (SPIN) language (Knublauch, 2011) can be used to link a class to SPARQL (Harris, 2013) queries that formalize constraints (SPARQL ASK queries) and inference rules (SPARQL CONSTRUCT queries), respectively. SPIN combines concepts from object-oriented languages, query languages, and rule-based systems to describe object behavior (see footnote 1);

Footnote 1: As of this writing, SPIN has not yet become a W3C recommendation. However, SPIN has become the de facto industry standard for representing rules and constraints on Semantic Web models, because it is based on the SPARQL standard and therefore can be natively executed by the SPARQL engines of the databases. This is a big advantage of SPIN over independent proposals like the Semantic Web Rule Language (SWRL) (Horrocks et al., 2004).


5. Import the OWL ontology file into a Resource Description Framework (RDF) database (triple store) that offers an inference feature;
6. For each competence question, create a corresponding SPARQL query using the ontology classes/properties;
7. Populate the RDF database with data (individuals, instances of the classes), that is, the A-Box statements in DL terminology;
8. Execute the corresponding SPARQL query over the data, with the database's inference feature enabled, to obtain the answer to a competence question.

For the construction of the ontology itself (tasks 1-4), the guidelines of an ontology engineering methodology, like Ontology Development 101 (Noy and McGuinness, 2001) or the NeOn methodology (Suárez-Figueroa et al., 2012), are welcome. The core feature of a KBDSS is its specialized problem-solving expertise. In other words, based on its knowledge about a specific domain, the system receives a set of symptoms or signals as input, recognizes the problem, and recommends a solution as output, usually in the form of procedures (workflows). The KBDSS's knowledge needs to be described in some kind of model; for ontology-based KBDSSs, this model is an ontology. The ontology has an endless life cycle and should be developed iteratively and incrementally. It needs to be constantly refined, either because a specification error or possible improvement has been detected, or because a new business problem has been discovered and therefore needs to be analyzed and properly modeled in the ontology. In the following sections, an actual problem-solving OWL ontology, namely the Issue-Procedure Ontology (IPO) (Morais and Jacyntho, 2016; Morais, 2015), and subsequently a corresponding fictitious case study, with real-world challenges, using a specialization of IPO for healthcare, the Issue-Procedure Ontology for Medicine (IPOM) (Morais and Jacyntho, 2016; Morais, 2015), will be introduced.
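As a minimal sketch of the constraint idea in step 4, the following Python snippet expresses a class invariant as a SPARQL ASK query, in the spirit of SPIN, and checks it with rdflib; the kb vocabulary and the "every subject must carry a title" rule are illustrative assumptions, not part of IPO.

from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/kb#")
g = Graph()
g.bind("ex", EX)
g.add((EX.issue1, EX.title, Literal("Pump overheating")))
g.add((EX.issue2, EX.severity, Literal("high")))  # violates: no ex:title

# SPIN-style constraint as a SPARQL ASK query: it answers true if some
# subject lacks the mandatory ex:title property.
constraint = """
ASK {
    ?s ?p ?o .
    FILTER NOT EXISTS { ?s ex:title ?t }
}"""
violated = g.query(constraint, initNs={"ex": EX}).askAnswer
print("constraint violated:", violated)  # True, because of ex:issue2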

14.2 Issue-Procedure Ontology

IPO is a core ontology that intends to provide machines the necessary semantics so that they not only render information but also identify problems (issues) from a set of symptoms and then autonomously return possible solutions to those problems. The solutions are represented by procedures detailed via workflows. Core ontology means that it can be directly instantiated, but it can also be extended or specialized for more specific types of issues, refining the power of expressiveness and inference. The IPO website is <http://purl.org/ipo/core>. The IPO was developed from the following set of competence questions:

• CQ1: What are the categories of a problem/symptom/solution?
• CQ2: What are the symptoms of a problem?
• CQ3: What are the solutions to a problem?
• CQ4: What caused the problem?
• CQ5: What is the host of the problem?
• CQ6: Who created/registered this problem/symptom/solution?
• CQ7: What actions (workflow) are to be taken to solve the problem?

• CQ8: Does one problem cause another problem?
• CQ9: Does one problem depend on another problem?
• CQ10: What are the possible problems, based on a set of symptoms?
• CQ11: In which problem classes is a given occurrence classified, based on its symptoms?

Fig. 14.2 presents the IPO conceptual domain model through a Unified Modelling Language (UML) class diagram.

FIGURE 14.2 IPO conceptual domain model (Morais and Jacyntho, 2016). IPO, Issue-Procedure Ontology.

The main classes are Symptom, Issue, and Procedure, which represent the core concepts of the domain. The Symptom class represents signals that indicate the occurrence of a problem. For example, in healthcare, a patient's symptoms indicate the presence of a disease; likewise, for an IT professional, software error messages can indicate a problem with the computer or the software itself. The Issue class corresponds to the concept of the problem itself.
Properties for various relationships between problems are also present: an issue can be the cause of and/or be caused, directly or indirectly, by another issue, and an issue A can depend on an issue B, specifying that issue B must be solved before solving A. In addition to being indicated by symptoms, an Issue is also associated with actions (the Procedure class) that solve it. The Procedure class represents the actions that must be taken to solve the problem. An issue can be solved in several ways, and a Procedure can contain several actions and even other Procedures. The ontology thus provides a means of representing various actions, together with conditionals that establish an order of execution of the actions and criteria for whether to execute an action, making it possible to assemble both simple and complex workflows. The IPO also reuses the Simple Knowledge Organization System (SKOS) (Issac and Summers, 2009) ontology to define classification schemes, that is, sets of hierarchically related categories (or concepts), instances of the skos:Concept class, forming a thesaurus under which instances of the IssueEntity class can be grouped or classified. Note that there is another form of classification: creating subclasses of the IssueEntity class. The use of subclasses leads to a more refined inference capability and should be used for intrinsic type/subtype classification, where new classes with new restrictions need to be created to describe more specific types of issues in a particular context. For each competence question, a corresponding SPARQL query was created to obtain the answer. In the queries below, ex:Issue1 is the URI of the issue in question, and the ontology namespace prefixes used are:

prefix ipo: <https://purl.org/ipo/core#>
prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
prefix ex: <http://www.example.org/>

CQ1: What are the categories of a problem/symptom/solution?

SELECT ?category
WHERE { ex:Issue1 ipo:hasCategory ?category. }

CQ2: What are the symptoms of a problem?

SELECT ?symptom
WHERE { ex:Issue1 ipo:indicatedBy ?symptom. }

CQ3: What are the solutions to a problem?

SELECT ?solution
WHERE { ex:Issue1 ipo:solvedBy ?solution. }


CQ4: What caused the problem?

SELECT ?causativeAsset
WHERE { ex:Issue1 ipo:hasCausativeAsset ?causativeAsset. }

CQ5: What is the host of the problem?

SELECT ?hostAsset
WHERE { ex:Issue1 ipo:hasHostAsset ?hostAsset. }

CQ6: Who created/registered this problem/symptom/solution?

SELECT ?creator
WHERE { ex:Issue1 ipo:hasMaker ?creator. }

CQ7: What actions (workflow) are to be taken to solve the problem?

Once the procedures that solve the problem have been obtained (CQ3), for each procedure we must execute the following queries to obtain the steps of the procedure's workflow. Since ex:Issue1 is solved by ex:Procedure1, we have:

1. Query to obtain the first step of the procedure's workflow:

SELECT ?firstStep
WHERE { ex:Procedure1 ipo:hasFirstStep ?firstStep. }

2. Parameterized (parameter <currentStep>) recursive query to obtain the current step's action and the current step's next steps. This query must be executed as many times as needed until all the procedure's steps are returned:

SELECT ?action ?transition ?guard ?nextStep
WHERE {
  <currentStep> ipo:activates ?action.
  OPTIONAL {
    <currentStep> ipo:hasOutcoming ?transition.
    ?transition ipo:hasGuardCondition ?guard.
    ?transition ipo:hasTarget ?nextStep
  }
}


CQ8: Does one problem cause another problem?

SELECT ?anotherIssue
WHERE { ex:Issue1 ipo:causes ?anotherIssue. }

CQ9: Does one problem depend on another problem?

SELECT ?anotherIssue
WHERE { ex:Issue1 ipo:dependsOn ?anotherIssue. }

CQ10: What are the possible problems, based on a set of symptoms?

The query shown below is for the set of symptoms {ex:Symptom1, ex:Symptom2}. As a further improvement, the issues returned are ordered decreasingly by the number of symptoms present in that set, thus yielding a list of issues from most likely to least likely. For this purpose, the GROUP BY clause groups the results by issue (?issue) and the aggregate function COUNT(?issue) counts the number of symptoms per issue (?symptomQuantity), ordering the results by this quantity in decreasing order:

SELECT ?issue (COUNT(?issue) AS ?symptomQuantity)
WHERE {
  { ex:Symptom1 ipo:indicates ?issue. }
  UNION
  { ex:Symptom2 ipo:indicates ?issue. }
}
GROUP BY ?issue
ORDER BY DESC(?symptomQuantity)

CQ11: In which problem classes is a given occurrence classified, based on its symptoms?

SELECT ?class
WHERE { ex:occurrence1 rdf:type ?class. }

Regarding this last competence question (CQ11), strictly speaking the Issue class denotes types of problems, not occurrences of problems in space/time. To record occurrences of a certain type of problem, it is necessary to extend the IPO by creating a subclass of the Issue class that represents that specific type of problem, with at least one set of necessary and sufficient conditions (an equivalentClass axiom) related to its symptoms. The occurrences will then be automatically classified as instances of this subclass. To illustrate this kind of usage of the IPO ontology, we present the IPOM.
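A hedged Python sketch of this automatic classification step uses the owlready2 library, which bundles a reasoner (Java is required to run it); the local file path and the occurrence IRI below are assumptions for illustration.

from owlready2 import get_ontology, sync_reasoner

# Load a (hypothetical) local copy of the IPOM ontology file.
onto = get_ontology("file:///path/to/ipom.owl").load()

with onto:
    sync_reasoner()  # runs the bundled HermiT reasoner

# After reasoning, an occurrence indicated by the right symptoms carries
# the inferred disease class in its is_a list.
occurrence = onto.search_one(iri="*occurrence1")  # assumed individual
print(occurrence, "classified as:", occurrence.is_a)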


14.3 Issue-Procedure Ontology for Medicine

IPOM is a domain ontology, an extension of the IPO, aimed at diagnosing diseases to assist doctors in their decision-making. It is important to make clear that this is a simplified, fictitious example with purely educational purposes; the ontology does not have the rigor necessary to be applied in real cases. The central objective of the IPOM ontology is basically to address the last IPO competence question (CQ11), that is, to make it possible for the machine to automatically classify occurrences of diseases based on the symptoms presented by patients, and then propose possible treatments. For timestamp purposes, one more competence question was added, namely:

• CQ12: What is the date and time of an occurrence?

For this case study, the diseases selected were Flu (Mayo Clinic Staff, 2019) and Pneumonia (Mayo Clinic Staff, 2018). This choice was made because they are related, well-known diseases, thus facilitating the understanding of the examples. It is worth mentioning that these diseases have other, less common symptoms, but for this study only the best-known symptoms were considered. Fig. 14.3 presents the IPOM conceptual domain model through a UML class diagram. The IPOM extends the IPO for disease descriptions and extends the Event ontology (Raimon and Abdallah, 2007) for occurrence descriptions. The Event ontology, in turn, reuses the OWL-Time ontology (Cox et al., 2017). The Event ontology is centered around the notion of an event (any relevant occurrence in time and space), while OWL-Time is an ontology of temporal concepts (time positions and durations). The Disease class is a base class for all types of diseases, so restrictions common to all diseases should be defined in this class; all classes of specific diseases, like Flu and Pneumonia, are its subclasses. Disease is a subclass of both ipo:Issue and event:Event, so every instance of Disease is, at the same time, treated as an issue by the IPO and as an event by the Event ontology.

FIGURE 14.3 IPOM conceptual domain model (Morais, 2015). IPOM, Issue-Procedure Ontology for Medicine.


Flu and Pneumonia represent the diseases of interest. Each of these classes has at least one set of necessary and sufficient conditions (equivalentClass axiom) related to its symptoms, which permits the machine to automatically classify occurrences as instances of them. The two classes are detailed below. For the sake of space, consider that the symptoms and treatments have already been properly defined as direct instances of ipo:Symptom and ipo:Procedure, respectively.

In this example, it was defined that to be a member of the Flu class, it is necessary and sufficient (owl:equivalentClass) that the occurrence is related, through the ipo:indicatedBy property, to the symptom Fever and to at least one other Flu symptom (owl:hasValue). In addition, the following property-value pairs (owl:hasValue) were defined as necessary conditions (rdfs:subClassOf): title (ipo:title), description (ipo:description), treatment (ipo:solvedBy), and cause (ipo:hasCausativeAsset). Therefore, when an occurrence is classified as an instance of Flu, it will inherit all these property-value pairs. OWL code of the Flu class, in Turtle syntax:

@prefix ipom: <http://purl.org/ipo/medicine#> .
@prefix ipo:  <http://purl.org/ipo/core#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

ipom:Flu a owl:Class;
  rdfs:subClassOf
    [a owl:Restriction; owl:hasValue ipom:influenza; owl:onProperty ipo:hasCausativeAsset],
    [a owl:Restriction; owl:hasValue "Flu"^^xsd:string; owl:onProperty ipo:title],
    [a owl:Restriction; owl:hasValue ipom:FluTreatment; owl:onProperty ipo:solvedBy],
    [a owl:Restriction; owl:hasValue "The Flu is a contagious infection of the nose, throat and lungs caused by the influenza virus. It causes coughing, difficulty breathing, fever, headache, muscle pain and weakness. The virus is spread from person to person by inhaling infected droplets in the air."^^xsd:string; owl:onProperty ipo:description],
    ipom:Disease;
  owl:equivalentClass
    [a owl:Class; owl:intersectionOf (
      [a owl:Class; owl:unionOf (
        [a owl:Restriction; owl:hasValue ipom:Chills; owl:onProperty ipo:indicatedBy]
        [a owl:Restriction; owl:hasValue ipom:Headache; owl:onProperty ipo:indicatedBy]
        [a owl:Restriction; owl:hasValue ipom:MusclePain; owl:onProperty ipo:indicatedBy]
        [a owl:Restriction; owl:hasValue ipom:NasalCongestion; owl:onProperty ipo:indicatedBy]
        [a owl:Restriction; owl:hasValue ipom:Weakness; owl:onProperty ipo:indicatedBy]
        [a owl:Restriction; owl:hasValue ipom:Coughing; owl:onProperty ipo:indicatedBy]
        [a owl:Restriction; owl:hasValue ipom:SoreThroat; owl:onProperty ipo:indicatedBy]
      )]
      [a owl:Restriction; owl:hasValue ipom:Fever; owl:onProperty ipo:indicatedBy]
    )].

Similarly, it was defined that to be a member of the Pneumonia class, it is necessary and sufficient (owl:equivalentClass) that the occurrence is related, through the ipo:indicatedBy property, to the symptoms Fever and Coughing and to at least one other respiratory symptom (owl:hasValue). In addition, the following property-value pairs (owl:hasValue) were defined as necessary conditions (rdfs:subClassOf): title (ipo:title), description (ipo:description), treatment (ipo:solvedBy), and cause (ipo:hasCausativeAsset). Therefore, when an occurrence is classified as an instance of Pneumonia, it will inherit all these property-value pairs. OWL code of the Pneumonia class, in Turtle syntax:

@prefix ipom: <http://purl.org/ipo/medicine#> .
@prefix ipo:  <http://purl.org/ipo/core#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

ipom:Pneumonia a owl:Class;
  rdfs:subClassOf
    [a owl:Restriction; owl:hasValue "Pneumonia"^^xsd:string; owl:onProperty ipo:title],
    [a owl:Restriction; owl:hasValue "Pneumonia is an infection that inflames the air sacs in one or both lungs. The air sacs may fill with fluid or pus (purulent material), causing cough with phlegm or pus, fever, chills, and difficulty breathing. A variety of organisms, including bacteria, viruses and fungi, can cause pneumonia."^^xsd:string; owl:onProperty ipo:description],
    [a owl:Restriction; owl:hasValue ipom:PneumoniaTreatment; owl:onProperty ipo:solvedBy],
    [a owl:Restriction; owl:hasValue ipom:Streptococcus_pneumoniae; owl:onProperty ipo:hasCausativeAsset],
    ipom:Disease;
  owl:equivalentClass
    [a owl:Class; owl:intersectionOf (
      [a owl:Class; owl:intersectionOf (
        [a owl:Restriction; owl:hasValue ipom:Fever; owl:onProperty ipo:indicatedBy]
        [a owl:Restriction; owl:hasValue ipom:Coughing; owl:onProperty ipo:indicatedBy]
      )]
      [a owl:Class; owl:unionOf (
        [a owl:Restriction; owl:hasValue ipom:ShortnessOfBreath; owl:onProperty ipo:indicatedBy]
        [a owl:Restriction; owl:hasValue ipom:ChestPain; owl:onProperty ipo:indicatedBy]
        [a owl:Restriction; owl:hasValue ipom:Fatigue; owl:onProperty ipo:indicatedBy]
      )]
    )].

Fig. 14.4 shows the RDF graph for an occurrence (ex:occurrence1) that presents the Fever, Coughing, and Chest Pain symptoms. The classes and properties of the IPO are shown with the ipo prefix, those of the IPOM with the ipom prefix, those of the Event ontology with the event prefix, and those of the OWL-Time ontology with the time prefix. All dotted triples are relationships inferred by the machine, based only on the symptoms of the occurrence (represented by the ipo:indicatedBy property). The inferred triples are therefore automatically generated when the machine classifies the occurrence as an instance of the Pneumonia and Flu classes. Note that the user only needs to register the symptoms; the machine decides, based on the ontology's axioms, which possible diseases match that occurrence.

FIGURE 14.4 Automated diagnosis of disease occurrence RDF graph. RDF, Resource Description Framework.

It is interesting to note that the necessary conditions defined by the Fever and Coughing symptoms of the Pneumonia class meet the necessary and sufficient conditions of the Flu class, so the machine infers that Pneumonia is a subclass (rdfs:subClassOf) of Flu. Consequently, every instance of Pneumonia is also an instance of Flu. This kind of inference can be very important in decision-making: when an occurrence is classified into two hierarchically related classes, it is more likely that the disease corresponds to the most specific class, suggesting to the professional to start investigating it. In this case, Pneumonia is more likely than just Flu.
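For instance, the occurrence of Fig. 14.4 could be registered with triples such as the following (a minimal Turtle sketch, assuming the prefixes declared above plus ex: for <http://www.example.org/>; the two timestamps are illustrative values, not taken from the figure):

ex:occurrence1 ipo:indicatedBy ipom:Fever, ipom:Coughing, ipom:ChestPain;
    event:time [a time:ProperInterval;
        time:hasBeginning [a time:Instant; time:inXSDDateTime "2020-03-01T08:30:00"^^xsd:dateTime];
        time:hasEnd [a time:Instant; time:inXSDDateTime "2020-03-05T08:30:00"^^xsd:dateTime]].

From these asserted triples alone, the reasoner classifies ex:occurrence1 as an instance of ipom:Pneumonia and, by the inferred subclass relation, of ipom:Flu.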


Revisiting competence question CQ11 (In which problem classes is a given occurrence classified, based on its symptoms?), once the database's reasoner has classified the occurrence by inference, the corresponding SPARQL query becomes trivial, returning, in this example, Pneumonia and Flu. Finally, the SPARQL query for the additional competence question CQ12 (What is the date and time of an occurrence?) is:

PREFIX ex:    <http://www.example.org/>
PREFIX event: <http://purl.org/NET/c4dm/event.owl#>
PREFIX time:  <http://www.w3.org/2006/time#>

SELECT ?beginning ?end
WHERE {
  ex:occurrence1 event:time ?properInterval.
  ?properInterval time:hasBeginning ?instant1.
  ?properInterval time:hasEnd ?instant2.
  ?instant1 time:inXSDDateTime ?beginning.
  ?instant2 time:inXSDDateTime ?end.
}

14.4 Conclusion

In this chapter, we described the architecture and strengths of KBDSSs. In particular, we looked at ontology-based KBDSSs, defining a method for the creation of ontology-based knowledge bases and introducing an actual problem-solving OWL ontology, called the IPO. A realistic case study was then presented, using the IPOM, a specialization of the IPO for healthcare. Throughout the case study, through the automated diagnosis of diseases, it was possible to see how the inference power of ontologies can help us in decision-making.

References

Cox, S., Little, C., Hobbs, J.R., Pan, F., 2017. Time Ontology in OWL. W3C Recommendation. <https://www.w3.org/TR/owl-time/> (accessed 10.03.20.).
Fielding, R., 2000. Representational state transfer (REST). Chapter 5 in Architectural Styles and the Design of Network-based Software Architectures (Doctoral dissertation, Ph.D. Thesis, University of California, Irvine, CA).
Gruninger, M., Fox, M.S., 1995. Methodology for the design and evaluation of ontologies. In: Proc. Workshop on Basic Ontological Issues in Knowledge Sharing, IJCAI-95, Montreal.
Harris, S., 2013. SPARQL 1.1 Query Language. W3C Recommendation. <https://www.w3.org/TR/sparql11-query/> (accessed 10.03.20.).
Horrocks, I., Patel-Schneider, P.F., Boley, H., Tabet, S., Grosof, B., Dean, M., 2004. SWRL: a semantic web rule language combining OWL and RuleML. W3C Member Submission. <https://www.w3.org/Submission/SWRL/> (accessed 10.03.20.).
Isaac, A., Summers, E., 2009. SKOS Simple Knowledge Organization System Primer. W3C Working Group Note. <https://www.w3.org/TR/skos-primer/> (accessed 10.03.20.).
Jacyntho, M.D., 2012. Multigranularity Locking Model for RDF (Doctoral dissertation, Ph.D. Thesis, Department of Informatics, PUC-Rio, Rio de Janeiro, Brazil, in Portuguese).
Knublauch, H., Hendler, J.A., Idehen, K., 2011. SPIN Overview and Motivation. W3C Member Submission. <https://www.w3.org/Submission/2011/SUBM-spin-overview-20110222/> (accessed 10.03.20.).
Marakas, G.M., 2003. Decision Support Systems in the 21st Century (Vol. 134). Prentice Hall, Upper Saddle River, NJ.
Mayo Clinic Staff, 2018. Pneumonia. Diseases and Conditions - Mayo Clinic. <https://www.mayoclinic.org/diseases-conditions/pneumonia/symptoms-causes/syc-20354204> (accessed 01.03.20.).
Mayo Clinic Staff, 2019. Influenza (flu). Diseases and Conditions - Mayo Clinic. <https://www.mayoclinic.org/diseases-conditions/flu/symptoms-causes/syc-20351719> (accessed 01.03.20.).
Mendes, R.D., 1997. Inteligência artificial: sistemas especialistas no gerenciamento da informação. Ciência da Informação, 26(1), in Portuguese.
Morais, M.D., 2015. Issue Procedure Ontology (IPO): An Extensible Ontology for the Domain of Symptoms, Problems and Solutions (Master dissertation, Candido Mendes University Campos, Campos dos Goytacazes, RJ, Brazil, in Portuguese).
Morais, M.D., Jacyntho, M.D., 2016. Issue Procedure Ontology (IPO): an ontology for symptoms, problems and solutions. Perspect. Ciênc. Inf. 21 (4), 3-28, in Portuguese.
Noy, N.F., McGuinness, D.L., 2001. Ontology Development 101: A Guide to Creating Your First Ontology.
Raimon, Y., Abdallah, S., 2007. The Event Ontology. <http://motools.sourceforge.net/event/event.html> (accessed 10.03.20.).
Sprague, R.H., Carlson, E.D., 1982. Building Effective Decision Support Systems. Prentice-Hall, Englewood Cliffs, NJ.
Suárez-Figueroa, M.C., Gómez-Pérez, A., Fernández-López, M., 2012. The NeOn methodology for ontology engineering. In: Ontology Engineering in a Networked World. Springer, Berlin, Heidelberg, pp. 9-34.
W3C OWL Working Group, 2012. OWL 2 Web Ontology Language. W3C Recommendation. <https://www.w3.org/TR/owl2-overview> (accessed 10.03.20.).


C H A P T E R

15 A new method for profile identification using ontology-based semantic similarity

Abdelhadi Daoui1, Noreddine Gherabi2 and Abderrahim Marzouk1

1Department of Mathematics and Computer Science, Hassan 1st University, FST, Settat, Morocco; 2Sultan Moulay Slimane University, ENSAK, LASTI Laboratory, Khouribga, Morocco

15.1 Introduction

Today, with the explosion of the quantity of data on the web, searching for information has become difficult for users. For this reason, a number of recommender systems based on the semantic web (Lee et al., 2001; Mehla and Jain, 2020) have emerged to filter the information presented to the user, in order to display only what is interesting for the user in question, based on a set of criteria that represent the user's interests and preferences (called a user profile). These recommender systems can ask the user to enter their interests and preferences at the first connection using a form, and store them into profile files or into a database (Frikha et al., 2015), or they can detect the interests and preferences by analyzing the user's behavior on the web (Yinghui, 2010).

In the current chapter, our work focuses on the identification of profiles: our proposed method aims to display to the owner of the current profile the touristic sites and places which may interest him or her, according to his or her preferences. This kind of system is the basis of several research works. The authors of Frikha et al. (2016) propose a semantic social recommender system for Tunisian tourism which relies on an ontology containing all concepts and relations of medical tourism in Tunisia; this information is extracted from the services of Tunisian medical tourism providers. Yinghui (Catherine) Yang in Yinghui (2010) presents a new method able to identify users according to their behavior on the web, in order to recommend products, advertising, personalized content, etc. The paper of Wassermann and Zimmermann (2011)


presents a new technique for the automatic adaptation of user interfaces, based on statistical methods that use a set of properties related to user interaction; these properties can be part of the user profiles. For the same goal (profile identification), Huakang Li, Longbin Lai, and Xiaofeng Xu in Li et al. (2013) have designed a technique for identifying the user's interests using the topics of visited web pages. This method is based on Wikipedia Category Network nodes (Mohanty et al., 2020).

In general, recommender systems are based on similarity calculation for identifying the products to be recommended. This concept (similarity) is used in several areas of research. For example, in Daoui et al. (2017) and Daoui et al. (2018), the authors propose two methods for computing the semantic similarity between concepts defined in the same ontology. Yuhua Li, Zuhair A. Bandar, and David McLean in Li et al. (2003) present a method for computing the semantic similarity between words using external information resources (lexicon, taxonomy, and corpus). In the paper of Hau et al. (2005), the authors propose a new method relying on semantic similarity measurement for defining the compatibility between semantic web services (Burstein et al., 2005; McIlraith and Martin, 2003) which are annotated by OWL ontologies [Web Ontology Language (LNCS Homepage, a)]. Also, the authors of Gherabi and Bahaj (2012) compute the rate of similarity between outlines of 2D shapes using local and global features of these outlines extracted from an XML file.

In the literature, there are four approaches to create and store profiles:

1. Using a set of keywords (Susan et al., 2007).
2. Using a matrix of weighted keywords (Susan et al., 2007).
3. Using taxonomies (Sulieman, 2014).
4. Using ontologies (Frikha et al., 2014).

Our method is based on the last approach: we use ontologies to create and store the profiles (the interests and preferences of users), relying on the Composite Capability/Preference Profiles (CC/PP) technique (LNCS Homepage, b). CC/PP is a W3C standard and recommendation for specifying terminal capabilities as well as user preferences according to the RDF formalism.

"DATAtourisme" (LNCS Homepage, c) is a platform, accessible worldwide, offering tourist datasets. It aims at gathering tourist information data produced by Tourist Offices, Departmental Agencies, and Regional Tourism Committees, in order to disseminate them as open data and thus facilitate the creation of innovative tourist services by start-ups, digital agencies, media, and other public or private actors.

15.2 Proposed method

In the current section, we present our proposed method for the identification of profiles, which identifies the user's interests in the tourism domain according to their preferences. This method is summarized in Fig. 15.1.


FIGURE 15.1 A graphical representation of the proposed method for profile identification.

The first step of our method is to allocate weights to the keywords (Section 15.2.1). Then, Section 15.2.2 presents our process of semantic correspondence between the given keywords and the ontology concepts of "DATAtourisme." Finally, Section 15.2.3 is devoted to the creation of the ontology of CC/PP profiles.

15.2.1 Weight allocation for keywords

The phase of weight allocation to the keywords plays an important role in the process of profile identification, because we rely on the weight values to define the importance degree of each user preference. We can therefore sort the touristic sites and places to recommend in order of their importance for the user in question. To compute the weight of each keyword, we use formula (15.1). The weight should be strictly greater than 0 and less than or equal to 1 (0 < w ≤ 1). The weight $W(K_i)$ of the keyword $K_i$ is defined as:

$$W(K_i) = \frac{V(K_i)}{\sum_{i=0}^{j-1} V(K_i)} \tag{15.1}$$

where $K_i$ represents keyword $i$, which corresponds to one object extracted from the images stored on the user's mobile device, $V(K_i)$ is the number of occurrences of this object in these images, and $j$ is the total number of given keywords. The value of $V(K_i)$ is provided as input to our algorithm.
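As a minimal sketch, formula (15.1) can be implemented as follows (the object counts are illustrative values, not data from the chapter):

def keyword_weights(counts):
    """Formula (15.1): W(K_i) = V(K_i) / sum of all V(K_j)."""
    total = sum(counts.values())
    return {keyword: count / total for keyword, count in counts.items()}

# counts: how many times each detected object appears in the user's images.
counts = {"beach": 3, "museum": 2, "party": 5}
print(keyword_weights(counts))  # {'beach': 0.3, 'museum': 0.2, 'party': 0.5}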

15.2.2 Semantic matching

In this section, we define the method for finding the correspondence between the given keywords and the ontology concepts of "DATAtourisme" (LNCS Homepage, c). This method is carried out in several steps, as given below.

15.2.2.1 Build paths

For computing the semantic matching between a given keyword and a "DATAtourisme" concept, we first need to define all paths of the given keyword and of the "DATAtourisme" concept between which we want to compute the semantic matching. These paths are built using "WordNet" (LNCS Homepage, d), starting from the given word (keyword or "DATAtourisme" concept name) toward the root node of "WordNet." For this purpose, we rely on the technique of hypernymy (find the hypernyms of the current word, then find their hypernyms, and so on until the root node), which allows building all paths of the word in question. Each built path is a sequence of words that starts with the keyword or "DATAtourisme" concept name, where each next word is a hypernym of the previous one, up to the root of "WordNet."

For each "DATAtourisme" concept, before building its paths, we tag the words which constitute the name of this concept and we select only the nouns. For example, we select the nouns "Entertainment" and "Event" for the concept "EntertainmentAndEvent" from the "DATAtourisme" ontology, and we remove the word "And" in order to avoid building useless paths. This is because "WordNet" consists of four separate databases, one for nouns, one for verbs, one for adjectives, and one for adverbs; therefore, the semantic matching between two words which exist in two different databases is equal to zero. For this reason, we build the paths only for the tagged nouns which constitute the name of the concept existing in "DATAtourisme" ("EntertainmentAndEvent"). In our proposed method, the tagging process is ensured by Stanford Natural Language Processing.


15.2.2.2 Semantic similarity

In the literature, there are many methods for computing the semantic similarity between words (Li et al., 2003), between sentences, or even between ontologies (Hau et al., 2005). In this chapter, we use the method proposed in Daoui et al. (2017), which computes the semantic similarity between ontology concepts using dynamic programming. This method takes as input two concepts defined in the same ontology, but we have adapted it to take as input the current keyword (K) and one tagged word (TW) of a "DATAtourisme" concept. In our case, these inputs are defined as concepts ($C_K$, $C_{TW}$) built from "WordNet." The formula used for computing the semantic similarity is defined as:

$$SSim(C_K, C_{TW}) = \frac{1}{deg \times SDis(C_K, C_{TW}) + 1}, \quad 0 < deg \le 1 \tag{15.2}$$

where $SSim(C_K, C_{TW})$ represents the semantic similarity between the given keyword K and the tagged word TW, the parameter deg represents the impact degree of the semantic distance on the semantic similarity, and $SDis(C_K, C_{TW})$ represents the semantic distance between the two concepts $C_K$ and $C_{TW}$. In this chapter, the semantic distance is computed using our technique of dynamic programming (Daoui et al., 2017). The distance is computed over all paths of the keyword and of the tagged word, built as in the previous section.

In the case where the "DATAtourisme" concept name is composed of more than one word, we need to define the total semantic similarity, which is the average of the semantic similarity values computed between the current keyword and each tagged word composing the name of the "DATAtourisme" concept. For example, the total semantic similarity between the keyword "Party" and the concept "EntertainmentAndEvent" of the "DATAtourisme" ontology is the average of the semantic similarity values computed between this keyword and the tagged words "Entertainment" and "Event" successively. The formula proposed for computing the total semantic similarity is defined as:

$$TotalSSim(C_K, C) = \frac{\sum_{i=0}^{NTNC-1} SSim(C_K, C_{TW_i})}{NTNC} \tag{15.3}$$

where $TotalSSim(C_K, C)$ represents the total semantic similarity between the keyword K and the concept C defined in the "DATAtourisme" ontology, NTNC represents the number of tagged nouns in the name of the concept C, and $SSim(C_K, C_{TW_i})$ represents the semantic similarity between the keyword K and the tagged noun i.

15.2.2.3 Weight computing of the concept

After having defined the total semantic similarity between the keyword K and the "DATAtourisme" concept C, we compute the weight of the concept based on the calculated similarity value (formula 15.4). For the calculated weight to be significant among the user's preferences, the similarity must be greater than a value ε. This value is a parameter defined experimentally: it is the minimum value of the total semantic similarity between the keyword K and the "DATAtourisme" concept C that we take into consideration for storing them into the CC/PP profile. In the other case, where the value of the total semantic similarity is strictly less than ε, we continue without storing the keyword and the "DATAtourisme" concept into the CC/PP profile.


The formula proposed to compute this weight is defined as:

$$W(C) = W(K) \times TotalSSim(C_K, C) \tag{15.4}$$

where $W(C)$ represents the calculated weight of the concept C from the "DATAtourisme" ontology, and $W(K)$ and $TotalSSim(C_K, C)$ represent, respectively, the weight of the keyword K and the total semantic similarity between the keyword K and the concept C.
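The chain of formulas (15.2)-(15.4) can be sketched as follows. SDis is assumed to be the dynamic-programming semantic distance of Daoui et al. (2017), which is not reproduced here; the distance values and ε below are illustrative:

def ssim(sdis, deg=1.0):
    """Formula (15.2): similarity from a semantic distance, with 0 < deg <= 1."""
    return 1.0 / (deg * sdis + 1.0)

def total_ssim(sdis_values, deg=1.0):
    """Formula (15.3): average similarity over the tagged nouns of a concept name."""
    sims = [ssim(d, deg) for d in sdis_values]
    return sum(sims) / len(sims)

def concept_weight(w_keyword, tot_sim, epsilon):
    """Formula (15.4): weight of a 'DATAtourisme' concept, or None below the threshold."""
    return w_keyword * tot_sim if tot_sim >= epsilon else None

# Keyword "Party" (weight 0.5) against the concept "EntertainmentAndEvent":
# one hypothetical SDis value per tagged noun ("Entertainment" and "Event").
sim = total_ssim([1.0, 2.0])
print(concept_weight(0.5, sim, epsilon=0.3))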

15.2.3 Profile creation

Our proposed method is based on the algorithm presented in Fig. 15.2, which aims to create a CC/PP profile. We use the external resources "WordNet" and Stanford Natural Language Processing to define the semantic matching between the given keywords and the concepts of the "DATAtourisme" ontology. The resulting CC/PP profile will be used in the process of profile identification.

FIGURE 15.2 Our proposed algorithm for the creation of a CC/PP profile. CC/PP, Composite Capability/Preference Profiles.

Input: set of keywords/values
Output: CC/PP profile ontology

Create an empty CC/PP profile ontology
For K in kl do                          // kl represents the set of keywords
    Compute W(K)                        // using formula (15.1); W(K) is the weight of keyword K
    If K corresponds directly to a "DATAtourisme" concept then
        Insert K into the CC/PP profile
        Insert the corresponding concept of K into the CC/PP profile
        Insert W(K) into the CC/PP profile
    Else
        Build the paths for K           // using WordNet
        For each concept C of the "DATAtourisme" ontology do
            Tag the words composing the name of C      // using Stanford NLP
            For TW in TWL do            // TWL: set of tagged words; TW: current tagged word
                Build the paths for TW  // using WordNet
                Compute the semantic similarity between K and TW   // using formula (15.2)
            End for
            Compute the total semantic similarity between K and C  // using formula (15.3)
            If the total semantic similarity >= ε then
                Compute W(C)            // using formula (15.4); W(C) is the weight of concept C
                Insert K into the CC/PP profile
                Insert the concept C into the CC/PP profile
                Insert W(C) into the CC/PP profile
            End if
        End for
    End if
End for
Return CC/PP profile

At this step, for creating a user profile, we rely on the given keywords. First, we try to find a direct correspondence between a given keyword and the concepts of the "DATAtourisme" ontology and store it into the CC/PP profile. In the case where there is no direct correspondence between them, we use the external resources "WordNet" and Stanford Natural Language Processing to find the semantic matching between the given keyword and the concepts of the "DATAtourisme" ontology; we then use this result to store the most similar "DATAtourisme" concepts into the CC/PP profile. In the remaining case, where there is no semantic matching between them, we deduce that the given keyword is not part of the tourism domain.

By analyzing this algorithm, we can see that each component in the CC/PP profile contains three attributes:

1. The keyword.
2. The type of the keyword or of the most similar "DATAtourisme" concept.
3. The weight of the keyword or of the most similar "DATAtourisme" concept.

An example of a CC/PP profile generated by this algorithm is presented in Fig. 15.3.

FIGURE 15.3 An example of a CC/PP profile generated by our system. CC/PP, Composite Capability/Preference Profiles.
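The figure follows the CC/PP RDF formalism; a profile holding the three attributes above might look roughly like the sketch below. The keyword, type, and element names are hypothetical illustrations; only the weights (0.3, 0.2, and 0.5) and the schema namespace come from the figure itself:

<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ccpp="http://www.w3.org/2002/11/08-ccpp-schema#"
         xmlns:schema="http://topbraid.org/schema/">
  <ccpp:component>
    <schema:keyword>beach</schema:keyword>
    <schema:type>PlaceOfInterest</schema:type>
    <schema:weight>0.3</schema:weight>
  </ccpp:component>
  <ccpp:component>
    <schema:keyword>museum</schema:keyword>
    <schema:type>CulturalSite</schema:type>
    <schema:weight>0.2</schema:weight>
  </ccpp:component>
  <ccpp:component>
    <schema:keyword>party</schema:keyword>
    <schema:type>EntertainmentAndEvent</schema:type>
    <schema:weight>0.5</schema:weight>
  </ccpp:component>
</rdf:RDF>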

15.3 Conclusion

In this chapter, we have presented a new method for profile identification, which identifies the user's interests in the tourism domain according to their preferences, in order to select the places and touristic sites to recommend. This method is based on dynamic programming for computing the semantic similarity between concepts and relies on the CC/PP technique, a W3C standard that allows representing user preferences in an ontology file.

References

Burstein, M., Bussler, C., Zaremba, M., Finin, T., Huhns, M.N., Paolucci, M., et al., 2005. A semantic web services architecture. IEEE Internet Comput. 5 (9), 52-61.
Daoui, A., Gherabi, N., Marzouk, A., 2017. A new approach for measuring semantic similarity of ontology concepts using dynamic programming. J. Theor. Appl. Inf. Technol. 95 (17), 4132-4139.
Daoui, A., Gherabi, N., Marzouk, A., 2018. An enhanced method to compute the similarity between concepts of the ontology. In: Noreddine, G., Kacprzyk, J. (Eds.), International Conference on Information Technology and Communication Systems, Advances in Intelligent Systems and Computing, 640. Springer International Publishing AG.
Frikha, M., Mhiri, M., Gargouri, F., 2014. Toward a user interest ontology to improve social network-based recommender system. In: Sobecki, J., Boonjing, V., Chittayasothorn, S. (Eds.), Advanced Approaches to Intelligent Information and Database Systems, Studies in Computational Intelligence, 551. Springer International Publishing, Switzerland.
Frikha, M., Mhiri, M., Gargouri, F., 2015. A semantic social recommender system using ontologies based approach for Tunisian tourism. Adv. Distrib. Comput. Artif. Intell. J. 4 (1), 90-106.
Frikha, M., Mhiri, M., Gargouri, F., Zarai, M., 2016. Using T.M.T. Ontology in Trust-Based Medical Tourism Recommender System. IEEE.
Gherabi, N., Bahaj, M., 2012. Outline matching of the 2D shapes using extracting XML data. In: Elmoataz, A., et al. (Eds.), ICISP. Springer, Heidelberg, pp. 502-512.
Hau, J., Lee, W., Darlington, J., 2005. A semantic similarity measure for semantic web services. In: The 14th International Conference on World Wide Web, Chiba, Japan, 2005.
Lee, T.B., Hendler, J., Lassila, O., 2001. The semantic web. Sci. Am. 284, 118.
Li, H., Lai, L., Xu, X., Shen, Y., Xia, C., 2013. User interest profile identification using Wikipedia knowledge database. In: International Conference on High-Performance Computing and Communications & 2013 IEEE International Conference on Embedded and Ubiquitous Computing, 2013, pp. 2362-2367. IEEE.
Li, Y., Bandar, Z.A., McLean, D., 2003. An approach for measuring semantic similarity between words using multiple information sources. IEEE Trans. Knowl. Data Eng. 4 (15), 871-882.
LNCS Homepage, (a) https://www.w3.org/TR/owl2-overview (accessed 17 November 2016).
LNCS Homepage, (b) https://www.w3.org/TR/CCPP-struct-vocab/ (accessed 15 January 2004).
LNCS Homepage, (c) http://www.datatourisme.fr/ontologie/2018/04/09.
LNCS Homepage, (d) https://wordnet.princeton.edu/ (accessed 2007).
McIlraith, S.A., Martin, D.L., 2003. Bringing semantics to web services. IEEE Intell. Syst. 1 (18), 90-93.
Mehla, S., Jain, S., 2020. An ontology supported hybrid approach for recommendation in emergency situations. Ann. Telecommun. 75, 421-435. Available from: https://doi.org/10.1007/s12243-020-00786. In this issue.
Mohanty, S.M., Chatterjee, J.M., Jain, S., Elngar, A.A., Gupta, P., 2020. Recommender System with Machine Learning and Artificial Intelligence: Practical Tools and Applications in Medical, Agricultural and Other Industries. Scrivener Publishing LLC.
Sulieman, D., 2014. Towards Semantic-Social Recommender Systems (Doctoral thesis). Cergy-Pontoise University, France.
Susan, G., Mirco, S., Aravind, C., Alessandro, M., 2007. User profiles for personalized information access. In: Brusilovsky, P., Kobsa, A., Nejdl, W. (Eds.), The Adaptive Web, LNCS 4321. Springer-Verlag, Berlin Heidelberg, pp. 54-89.
Wassermann, B., Zimmermann, G., 2011. User profile matching: a statistical approach. In: The Fourth International Conference on Advances in Human-Oriented and Personalized Mechanisms, Technologies, and Services, CENTRIC.
Yinghui, Y., 2010. Web user behavioral profiling for user identification. Decision Support Systems, 49. Elsevier B.V., pp. 261-271.


C H A P T E R

16 Semantic similarity-based descriptive answer evaluation

Mohammad Shaharyar Shaukat1, Mohammed Tanzeem2, Tameem Ahmad3 and Nesar Ahmad3

1Technical University of Munich, Germany; 2Adobe, India; 3Department of Computer Engineering, Zakir Husain College of Engineering and Technology, Aligarh Muslim University, Aligarh, India

16.1 Introduction

The purpose of providing education is to facilitate learners in developing a theoretical or practical understanding of a given subject. By the end of the learning process, learners should have achieved learning outcomes that include the ability to recall, understand, organize, integrate, and apply ideas, to analyze, to create a new whole, and to express themselves in writing (Bloom et al., 1956). It is therefore beneficial to determine the degree of knowledge absorbed by the learner (Bransford et al., 2000). This can be achieved by a uniform assessment examination (Zięba et al., 2014), and uniformity can be ensured with the help of an intelligent computer system.

Assessment, either summative or formative, is often categorized as either objective or subjective. The assessment of responses to objective questions (Zięba et al., 2014), which have a single correct answer (Azevedo et al., 2019) or sometimes more than one but a limited number of correct answers, is simple and easy. Subjective assessment is not as simple, and the responses to subjective questions may have more than one form of expression. Part of the difficulty involves the inherent complexities and ambiguities in language that make subjective assessment more complicated and cumbersome than objective assessment. The depth of knowledge grasped by a learner is not always suitable to measurement by objective assessment. In assessment by means of descriptive examination, students are evaluated at a higher level of Bloom's Taxonomy (Bloom et al., 1956; Anderson and Krathwohl, 2001) than in objective examination. Unfortunately, assessment via descriptive examination is a cumbersome process and is still done manually due to the ambiguities present in natural language, whereas objective assessment is comparatively easy and has been well supported in many systems for years.

This work presents an algorithm for the automatic evaluation of multiple-sentence descriptive answers by considering both syntactic and semantic similarities (Nandini and Uma Maheswari, 2018; Kate et al., 2005). The motivation for building such an algorithm is to reduce human effort and to ensure uniformity in evaluation, irrespective of any influence or inclination of the human evaluator, for any reason or due to his or her perspective.

The purpose of short answer questions is usually to examine the basic knowledge and facts of a particular subject (Varshney et al., 2018; Raghib et al., 2018). These can be evaluated and judged against a model answer, which could be extracted from the referred textbook. Traditional manual evaluation of descriptive answer papers typically uses a prepared master solution: the solutions for all the questions are prepared and kept in the master solution. The fact points and the knowledge to be judged are brought over from the specified textbook or other knowledge base, are kept in the solution set, and serve as a baseline in the process of evaluating the student answers (Paul and Pawar, 2014).

A few attempts have been made at the automatic or semiautomatic evaluation of descriptive answer papers (Pérez et al., 2005; Mohler and Mihalcea, 2009). They are mainly based on analyzing shared words, keyword matching (Smeaton, 1992), and sequence matching. This appears insufficient because, due to the inherent flexibility of natural language, a sentence that differs in structure and word content but has a similar meaning can be constructed. So both lexical and semantic analysis should be taken into consideration.

16.2 Literature survey

For more than two decades, several natural language processing approaches to automating evaluation (Salton, 1989) and the grading of descriptive answers have been attempted. Several methods and algorithms are proposed in the literature based on ontology (Maedche and Staab, 2001), summarization (Akhtar et al., 2017; Akhtar et al., 2019; Bansal et al., 2019), similarity matching (Li et al., 2006), statistical methods, graph-based methods, etc. A few semisupervised and partially automatic approaches are also available. These methods are validated against human judgment.

At the abstract level of classification, the most straightforward approach is the combining-keywords-based method. However, this method has generally been observed to be a poor way to evaluate text. It relies on keywords and looks for coincident or similar keywords or n-grams in the text that has been referenced. It is also difficult to tackle problems in the answers given by the student, such as words with the same meaning or the coexistence of many possible meanings for a word or phrase.

Another technique is the pattern matching technique (Fung, 1995), which is a relatively better evaluation method than the comparison of keywords. In this method, a bilingual lexicon of nouns and proper nouns is compiled from unaligned and noisy parallel text from Asian or European language pairs. For the identification of nouns and proper nouns, part-of-speech (POS) tagging is used for one of the languages in the pair. The frequency of each word, and information about the position of every word written in the answer for high- and low-frequency words, are represented in two different forms for matching the pattern. This method also uses techniques for finding the point of a new anchor and for eliminating noise, and it provides an efficiency of 73.1% (Fung, 1995).

C-rater (Leacock and Chodorow, 2003) is an automatic scoring engine for assigning partial and/or full credit to a short answer, with about 84% correctness. The evaluation is based on the syntactic features of the sentences and avoids the bag-of-words approach, which could change the meaning of the sentence. Another approach has used latent semantic analysis (LSA) (Kanejiya et al., 2003) to compare and evaluate responses based on semantic similarities. The problems here are the need for a vast collection of documents, as well as the difficulty of creating a dense representation when the original representation is sparse.

Other approaches include combining keywords, breaking answers into conceptual and semantic parts, machine learning approaches, and LSA. Statistical methods like combining and matching keywords (Smeaton, 1992), pattern matching techniques, machine learning techniques (Etzioni et al., 2007), information extraction techniques (Litvak and Last, 2008), semantic approaches, LSA, and LSA with syntactic and semantic information have been used in attempts to automatically evaluate descriptive answers. These methods have achieved good results, but there is still a need for a more robust system that can efficiently grade descriptive answers and be adopted in education systems. We present a simple and easy-to-adopt semantics-based approach; its implementation is discussed in this chapter.

16.3 Proposed system

The overall layout of the system is presented in Fig. 16.1. As shown in Fig. 16.1, the complete task is divided into three modules: a preprocessing module, a similarity module, and a score module. The subtasks within each module are also listed in Fig. 16.1; their descriptions follow in the forthcoming sections. The proposed system architecture for the evaluation of descriptive question answers is given in Fig. 16.2.

FIGURE 16.1 Overall system overview. Preprocessing module: sentence detector; tokenization; POS tagging; grouping; synset (SenseID); WSD. Similarity module: semantic word similarity [similarity matrix]; word relatedness path-length-based similarity measurement [fuzzy relation matrix]; bipartite graph; semantic sentence similarity [Dice coefficient]. Score module: knowledge representation [conceptual similarity]; score assignment on the basis of semantic and conceptual similarity.

FIGURE 16.2 Abstract system architecture. The student answer and the model answer are converted into sets of sentences S1 and S2; the semantic similarity between each pair of sentences in S1 × S2 is computed; semantically similar pairs are selected; conceptual similarity is measured; and a final score is produced.

The components of the system are:

Student Answer: This is the answer written by a student in the examination to be evaluated. The student answer is divided into a set of sentences S1. S1 is the bag of sentences, i.e., the set of sentences of the Student Answer derived through a sentence tokenizer.

Model Answer: This is the model answer provided by an experienced teacher or taken from the content of the referred book. The student answer has to be evaluated against this model answer. Similarly, the model answer is converted into a set of sentences S2. S2 is the bag of sentences, i.e., the set of sentences of the Model Answer derived through the sentence tokenizer.

Semantic Similarity Computation: Once each bag (Student Answer Bag and Model Answer Bag) is ready, the next step is the Semantic Similarity Computation. It is computed between each pair of sentences using the method (Li et al., 2006; Pawar and Mago, 2019; Janda et al., 2019) described in Section 16.3.2. This method measures the similarity between a pair of sentences by first converting each sentence to a set of tokens (words) and then measuring the semantic similarity between pairs of words using a similarity matrix. The similarity matrix is calculated using the lexical database WordNet. Finally, the overall similarity between each pair of sentences is calculated. Let S = S1 × S2, where S is the set of pairs of sentences (x, y) such that x ∈ S1 and y ∈ S2. The similarity of each pair of sentences (x, y) ∈ S is calculated using the method described in Section 16.3.2 and is indicated by sim(x, y).


Selecting Semantically Similar Pairs: In this stage, we consider a fixed threshold similarity score value in order to choose the semantically similar pairs (see the sketch below). The chosen pairs of sentences are further analyzed for measuring conceptual similarity. We select (x, y) in S such that sim(x, y) ≥ θ (the threshold), and let S′ = {(x, y) | sim(x, y) ≥ θ}.

Conceptual Similarity Computation: The conceptual similarity between a pair of sentences can be measured by analyzing whether the two sentences convey the same concept. If they are conceptually similar, both sentences are the same to some extent.

Final Score Calculation: The overall score is calculated based on the semantic and conceptual similarity between the pairs of sentences.

The various components of the processing are depicted in Figs. 16.3 and 16.4. Further details of the essential elements are described below.
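A minimal sketch of the selection stage, where sim is assumed to be the sentence-level similarity of Section 16.3.2 and theta the fixed threshold:

def select_similar_pairs(S1, S2, sim, theta):
    """Return S' = {(x, y) in S1 x S2 : sim(x, y) >= theta}."""
    return [(x, y) for x in S1 for y in S2 if sim(x, y) >= theta]

# Toy usage with a stand-in similarity function:
S1 = ["student sentence"]
S2 = ["model sentence"]
print(select_similar_pairs(S1, S2, lambda x, y: 0.8, theta=0.5))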

16.3.1 Wu and Palmer: word similarity

Wu and Palmer's (1994) word similarity uses the WordNet lexical database to calculate the similarity between two words. It computes the similarity score between the two words based on the depth of their synsets in the WordNet taxonomy, taking into account the depth of their least common subsumer (lcs). The formula is:

$$Score(S1, S2) = \frac{2 \times depth(lcs)}{depth(S1) + depth(S2)}$$

FIGURE 16.3 Similarity measure. Each of Sentence1 and Sentence2 is converted into a token set (TokenSet1 and TokenSet2), and the Dice coefficient is computed using the similarity matrix.

FIGURE 16.4 Preprocessing of a pair of sentences. Each sentence undergoes tokenization, POS tagging, stop-word removal, classification of words into sets by tag, and synset generation for each word, producing the similarity matrix.

It returns a relatedness score in the range 0 < score ≤ 1, with 1 for two identical concepts. The score can never be zero or a negative number, even when no path exists. If an error occurs, the error level is set to nonzero and an error string is created.
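The chapter does not name a particular implementation, but Wu-Palmer similarity is available, for example, through NLTK's WordNet interface; a small sketch (it assumes the WordNet corpus has been downloaded):

from nltk.corpus import wordnet as wn

# Compare the first noun senses of two words using Wu-Palmer similarity.
s1 = wn.synsets("fever")[0]
s2 = wn.synsets("headache")[0]
print(s1.wup_similarity(s2))  # a value in (0, 1]; 1 for identical concepts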

16.3.2 Semantic similarity between a pair of sentences

The semantic similarity of a pair of sentences is calculated by measuring the similarity between their token sets of individual words (or tokens) (Li et al., 2006; Pawar and Mago, 2019; Janda et al., 2019; Jain and Patel, 2019). In Fig. 16.3, Sentence1 belongs to set S1, and Sentence2 belongs to set S2. These sentences are first converted into token sets, TokenSet1 and TokenSet2. The semantic similarity between the two token sets is calculated using the Dice Coefficient, which is defined as twice the number of common terms in the compared strings divided by the total number of terms in both strings (Dice, 1945). That is:

$$Sim(X, Y) = \frac{2 \times |X \cap Y|}{|X| + |Y|}$$

where X and Y are the two token sets. When calculating the Dice Coefficient, we consider a given pair of words (i, j) to be matched (or similar) if the entry corresponding to the pair (i, j) in the similarity matrix is 1. That is, if SimX(i, j) = 1, then the pair of words (i, j) contributes one matched word to the Dice Coefficient calculation. The detailed description of the similarity matrix calculation is given in Section 16.3.3.
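A minimal sketch of this Dice computation over a binary similarity matrix (SimX is assumed to be the matrix of Section 16.3.3; here each entry equal to 1 counts as one matched pair):

def dice_coefficient(sim_matrix):
    """2 * (number of matched word pairs) / (|X| + |Y|)."""
    matches = sum(sum(row) for row in sim_matrix)
    len_x = len(sim_matrix)     # words in TokenSet1 (rows)
    len_y = len(sim_matrix[0])  # words in TokenSet2 (columns)
    return (2 * matches) / (len_x + len_y)

# Illustrative 2 x 3 matrix: two matches between the token sets.
print(dice_coefficient([[1, 0, 0],
                        [0, 1, 0]]))  # 2*2 / (2+3) = 0.8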

16.3.3 Semantic similarity between words (similarity matrix calculation)

The similarity matrix SimX(i, j) is a two-dimensional matrix that denotes whether or not word i is semantically similar to word j:

$$SimX(i, j) = \begin{cases} 1, & \text{if } i \text{ is similar to } j \\ 0, & \text{if } i \text{ is not similar to } j \end{cases}$$

The similarity matrix for a pair of sentences, Sentence1 and Sentence2, is calculated in the following steps:

Tokenization: Each sentence is converted into a set of tokens (or words). The tokenization is performed using the Stanford NLP Tokenizer.

POS Tagging: The Stanford POS tagger is used to tag each word as representing a noun, verb, adverb, etc. This tagging helps us find the synonyms of a word using WordNet.

Stop Words Removal: Stopwords like "I," "me," "a," "the," "an," "also," etc. are removed using a list of stopwords, since comparing stopwords gives no useful result; it is therefore unnecessary to compare them. After removing stopwords, a raw set of tokens is formed.

Classification of Tokens into Sets by Tag: Further, the raw set of tokens of a sentence is divided into several small sets of tokens, formed on the basis of the tag that each word carries. These small sets make the calculation easier, because nouns are then compared only with nouns and not with verbs.

Synset Generation: The synonym set is generated for each word in TokenSet1 using WordNet. Finally, a similarity matrix is generated for each token set by comparing the synsets of TokenSet1 with the words from TokenSet2.
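A sketch of this matrix construction, using NLTK's WordNet interface for the synonym sets (the tokenization, tagging, and stop-word steps are simplified away here):

from nltk.corpus import wordnet as wn

def synonyms(word):
    """The word itself plus all WordNet lemma names from its synsets."""
    names = {word.lower()}
    for syn in wn.synsets(word):
        names.update(lemma.name().lower() for lemma in syn.lemmas())
    return names

def similarity_matrix(tokens1, tokens2):
    """SimX(i, j) = 1 when token j of TokenSet2 appears among the synonyms of token i of TokenSet1."""
    return [[1 if t2.lower() in synonyms(t1) else 0 for t2 in tokens2]
            for t1 in tokens1]

print(similarity_matrix(["car"], ["automobile", "bicycle"]))  # [[1, 0]]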

16.4 Algorithm

The complete stepwise working of the evaluation is represented in the following pseudocode:

1. Input a sentence from bag S1.
2. Preprocessing (using Stanford's open NLP library functions):
   a. Normal tokenizing
   b. POS tagging
   c. Set aggregation formation (e.g., UNMARRIED MAN = BACHELOR)
3. Word sense disambiguation, WSD (e.g., "bank"): Lesk & Banerjee algorithm.
4. Input the second sentence from bag S2.
5. Repeat steps 2 and 3 for the second sentence.
6. Similarity matching (a mix of the WUP and path-length algorithms):
   a. Set matching (sets: noun, verb); store the matching result.
   b. How nouns are related using verbs in the two different sentences (which verb relates the matched nouns).
   c. Figure out the adjectives (how an adjective affects a noun, and which noun).
   d. Consider adverbs in the measure (e.g., ran slowly vs. ran quickly).
7. Overall score from 6.a-d:
   a. Reward/penalty
   b. A = w*a + ...

16.5 Data set

To test the proposed system on short descriptive answers, a data collection process was undertaken. An online test using the university Moodle services at Aligarh Muslim University was organized for undergraduate students, and responses from over 60 students were collected on introductory descriptive-type question sets. Students submitted answers to 50 short questions across the domain of computer architecture. The data set thus became 52 × 50 = 2600 answers (as a few students were absent from this test). A sample question and a few sample answers are:

Sample Question 1: Describe digital systems in short.

Model Answer: Digital systems are the interconnection of digital hardware modules interconnected with common data and control paths, which are designed to accept binary information and process the information to perform a specific task. The basic unit of digital circuits is logic gates.

Sample Answer 1: Digital system is a branch of computer that deals with the interconnection of various hardware modules. It consists of logic gates.

FIGURE 16.5 Similarity score matrix with model answer.

Sample Answer 2: Digital system is the interconnection of digital modules. It is designed to accept binary information and process it.

Sample Answer 3: Digital system works only on binary input.

Before feeding this data set to the system, the answers were independently evaluated manually by a human examiner (one of the authors of this paper), and a complete evaluation of all answers for all students present was prepared in matrix form.

16.6 Results

Overall, the proposed work is evaluated for its efficiency, and the metrics shown in Fig. 16.5 are defined for each student answer. For each model answer, the matrix in Fig. 16.5 is prepared; the column attributes are from the model answer, and the row attributes are the tokens from the sample answer. The matrix computes the semantic relatedness of word senses by counting the number of nodes along the shortest path between the senses in the 'is-a' hierarchies of WordNet:

$$Depth(S1, S2) = \frac{1}{path\_length(S1, S2)}$$

Examples for two sample answers are given here, with their computed semantic similarity scores against the model answer:

Q.1. What are microoperations?

Model Ans. The microoperations are a set of elementary operations that are performed on the content of the register, e.g., shift, count, load, clear, etc.

Ans. 1 The operations which are performed on the binary information stored in the register. Examples: Shift, Count, etc.


Results:
Sim[0][0] = 0.56, Sim[0][1] = 0.08
Sim[1][0] = 0.09, Sim[1][1] = 0.70

Semantic Similarity = (0.56 + 0.70)/2 = 0.63
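From these numbers, the overall score appears to be the mean of each row's best match in the score matrix; a minimal sketch of this aggregation:

def answer_score(sim_matrix):
    """Average, over the rows, of the best-matching column score."""
    return sum(max(row) for row in sim_matrix) / len(sim_matrix)

print(answer_score([[0.56, 0.08],
                    [0.09, 0.70]]))  # (0.56 + 0.70) / 2 = 0.63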

Ans. 2 Microoperations are the operations performed on data stored in registers. These are elementary operations that can be completed in parallel during one clock pulse period. For example, shift, count, clear, load, etc.

Sim[0][0] = 0.69, Sim[0][1] = 0.35, Sim[0][2] = 0.07
Sim[1][0] = 0.10, Sim[1][1] = 0.16, Sim[1][2] = 0.76

Semantic Similarity = (0.69 + 0.76)/2 = 0.72

The results of the proposed method were evaluated against the manual evaluation of the sample answers, and the differences were calculated. The model was found to work very closely to manual evaluation and produced acceptable results. A sample of the calculated differences is presented in Table 16.1.

TABLE 16.1 Comparison with manual baseline method.

Results    Baseline (manual)    Proposed    Difference (+/-) D    |D|
Ans1       0.75                 0.81        +0.06                 0.06
Ans2       0.70                 0.62        -0.08                 0.08
Ans3       0.30                 0.40        -0.10                 0.10

16.7 Conclusion and discussion

The examination is an important part of the educational system, and examination and evaluation are iterative exercises within the system. Much of the time of teachers and teaching assistants is required for it. Many universities and institutions are thus shifting toward online examination systems. Multiple-choice question evaluation is simple, and many efficient systems are available for it. In contrast, a robust and reliable system for the evaluation of subjective answers is missing. The proposed system is a step toward building such a system, capable of evaluating descriptive answers. It computes the score for students' descriptive answers based on the semantic similarity measure against the model answers. Thus the proposed system could be of great value and could contribute time and effort savings.

So far, this system is capable of handling only the English language (Ahmad et al., 2020; Joshi et al., 2020), and there is no support for evaluating mathematical expressions. This system may thus be extended to provide multilingual support and to incorporate capabilities for evaluating mathematical expressions and derivations as well.

Acknowledgments

This work was supported by the Visvesvaraya Ph.D. Scheme for Electronics and IT fellowship of the Ministry of Electronics and Information Technology (MeitY), with awardee number MEITY-PHD-2979, Government of India.

References

Ahmad, T., Ahmed, S.U., Ali, S.O., Khan, R., 2020. Beginning with exploring the way for rumor free social networks. J. Stat. Manage. Syst. 23 (2), 231-238. Available from: https://doi.org/10.1080/09720510.2020.1724623.
Akhtar, N., Javed, H., Ahmad, T., 2017. Searching related scientific articles using formal concept analysis. In: 2017 International Conference on Energy, Communication, Data Analytics and Soft Computing (ICECDS), pp. 2158-2163.
Akhtar, N., Javed, H., Ahmad, T., 2019. Hierarchical summarization of text documents using topic modeling and formal concept analysis. In: Advances in Intelligent Systems and Computing.
Anderson, L.W., Krathwohl, D.R., 2001. A Taxonomy for Learning, Teaching, and Assessing.
Azevedo, J.M., Oliveira, E.P., Beites, P.D., 2019. Using learning analytics to evaluate the quality of multiple-choice questions: a perspective with classical test theory and item response theory. Int. J. Inf. Learn. Technol.
Bansal, P., Somya, N.K., Govil, S., Ahmad, T., 2019. Extractive review summarization framework for extracted features. Int. J. Innovative Technol. Explor. Eng. 8 (7C2), 434-439.
Bloom, B.S., Englehard, M.D., Furst, E.J., Hill, W.H., Krathwohl, D.R., and Committee of College and University Examiners, 1956. Taxonomy of Educational Objectives: The Classification of Educational Goals, New York.
Bransford, J.D., Brown, A.L., Cocking, R.R., 2000. How People Learn: Brain, Mind, Experience, and School.
Dice, L.R., 1945. Measures of the amount of ecologic association between species. Ecology.
Etzioni, O., Banko, M., Cafarella, M.J., 2007. Machine reading. In: AAAI Spring Symposium - Technical Report.
Fung, P., 1995. A Pattern Matching Method for Finding Noun and Proper Noun Translations from Noisy Parallel Corpora.
Jain, S., Patel, A., 2019. Smart ontology-based event identification. In: 2019 IEEE 13th International Symposium on Embedded Multicore/Many-core Systems-on-Chip (MCSoC), pp. 135-142. IEEE.
Janda, H.K., Pawar, A., Du, S., Mago, V., 2019. Syntactic, semantic and sentiment analysis: the joint effect on automated essay evaluation. IEEE Access.
Joshi, S., Nagariya, H.G., Dhanotiya, N., Jain, S., 2020. Identifying fake profile in online social network: an overview and survey. In: International Conference on Machine Learning, Image Processing, Network Security and Data Sciences. Springer, Singapore, pp. 17-28.
Kanejiya, D., Kumar, A., Prasad, S., 2003. Automatic Evaluation of Students' Answers Using Syntactically Enhanced LSA.
Kate, B., Borchardt, G., Felshin, S., 2005. Syntactic and semantic decomposition strategies for question answering from multiple resources. In: AAAI Workshop - Technical Report.
Leacock, C., Chodorow, M., 2003. C-rater: automated scoring of short-answer questions. Comput. Hum.
Li, Y., McLean, D., Bandar, Z.A., O'Shea, J.D., Crockett, K., 2006. Sentence similarity based on semantic nets and corpus statistics. IEEE Trans. Knowl. Data Eng.
Litvak, M., Last, M., 2008. Graph-Based Keyword Extraction for Single-Document Summarization.
Maedche, A., Staab, S., 2001. Ontology learning for the semantic web. IEEE Intell. Syst.
Mohler, M., Mihalcea, R., 2009. Text-to-text semantic similarity for automatic short answer grading. In: EACL 2009 - 12th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings.
Nandini, V., Uma Maheswari, P., 2018. Automatic assessment of descriptive answers in online examination system using semantic relational features. J. Supercomput.
Paul, D.V., Pawar, J.D., 2014. Use of syntactic similarity based similarity matrix for evaluating descriptive answer. In: Proceedings - IEEE 6th International Conference on Technology for Education, T4E 2014.
Pawar, A., Mago, V., 2019. Challenging the boundaries of unsupervised learning for semantic similarity. IEEE Access.
Pérez, D., Gliozzo, A., Strapparava, C., Alfonseca, E., Rodríguez, P., Magnini, B., 2005. Automatic assessment of students' free-text answers underpinned by the combination of a BLEU-inspired algorithm and latent semantic analysis. In: Proceedings of the Eighteenth International Florida Artificial Intelligence Research Society Conference, FLAIRS 2005 - Recent Advances in Artificial Intelligence.
Raghib, O., Sharma, E., Ahmad, T., Alam, F., 2018. Emotion analysis and speech signal processing. In: IEEE International Conference on Power, Control, Signals and Instrumentation Engineering, ICPCSI 2017.
Salton, G., 1989. Automatic Text Processing: The Transformation, Analysis, and Retrieval of Information by Computer.
Smeaton, A.F., 1992. Progress in the application of natural language processing to information retrieval tasks. Comput. J.
Varshney, V., Varshney, A., Ahmad, T., Khan, A.M., 2018. Recognising personality traits using social media. In: IEEE International Conference on Power, Control, Signals and Instrumentation Engineering, ICPCSI 2017.
Wu, Z., Palmer, M., 1994. Verbs Semantics and Lexical Selection.
Zięba, M., Tomczak, J., Brzostowski, K., 2014. Asking right questions basing on incomplete data using restricted Boltzmann machines, 2014th ed. Oficyna Wydawnicza Politechniki Wrocławskiej, Wrocław, pp. 23-32.


CHAPTER 17

Classification of genetic mutations using ontologies from clinical documents and deep learning

Punam Bedi1, Shivani1, Neha Gupta1, Priti Jagwani2 and Veenu Bhasin3

1Department of Computer Science, University of Delhi, Delhi, India; 2Aryabhatta College, University of Delhi, Delhi, India; 3P.G.D.A.V. College, University of Delhi, Delhi, India

17.1 Introduction

Health care is one of the basic needs of every human being. It includes nourishing health through a proper diet, exercise, regular health check-ups, disease diagnosis, and proper treatment to maintain a healthy body. To improve health care, researchers have taken up many initiatives in the past, leading to new discoveries in the healthcare domain (Wang et al., 2018). Most of this research utilizes data from medical documents such as pathology reports, doctors' notes, or other documents. These documents provide valuable information about the health of a patient. However, the textual data contained in these clinical documents is unstructured in nature, so extracting important information from clinical documents requires specialized techniques that can process unstructured text. Natural language processing (NLP) techniques process textual data and extract relevant information from it (Falessi et al., 2010). These techniques play a crucial role when the amount of data is very large and manual knowledge extraction becomes a tedious and time-consuming task for humans. Since NLP techniques reduce time and effort during the data processing phase, they are widely used by the research community in different domains. However, the use of NLP in medical research is still at a nascent stage. The processing of medical documents using NLP is referred to as Clinical Natural Language Processing, or Clinical NLP (Wang et al., 2018).






The information extracted using clinical NLP can be used to create an ontology providing a formal representation of the extracted information. Storing information in an ontology provides interoperability of clinical data between different medical facilities. Ontologies were introduced as the building block of the semantic web (SW) (Bedi et al., 2007). The aim of the SW is to make the data available on the internet both human-readable and machine-readable (Shrivastava and Agrawal, 2016). This is accomplished using ontologies, which preserve the syntactic and semantic context of data. Past research works have utilized the power of the SW and ontologies in various applications such as the identification of crop diseases (Marwaha et al., 2009), flight ticket search (Turnip et al., 2019), and disease name extraction (Magumb et al., 2018). In the medical domain, an ontology provides a standard format for storing and exchanging medical data in the form of electronic health records (EHRs).

Previously, medical data mainly comprised radiology reports, pathology reports, prescriptions, observations, and clinical notes. Nowadays, genetic information is also considered crucial medical information, especially in cancer diagnosis. Recent studies have shown that the use of genetic data for cancer diagnosis provides better results (Vazquez et al., 2012). Changes in genetic data can cause cell growth at an uncontrollable level and hence make healthy cells cancerous. So, cancer diagnosis requires classifying human body cells as cancerous or noncancerous. In the medical domain, machine learning (ML) and deep learning (DL) techniques are generally used for classification (Li and Yao, 2018; Luo, 2017). ML creates a mathematical model using a training dataset so that the model can learn to classify test instances. In ML, features are explicitly provided for classification. In contrast, DL, a subset of ML, can automatically extract important features on its own. The hidden layers in DL models help to capture the fine details of the given input, which increases the performance of DL methods as compared to ML.

The rest of this chapter is organized as follows. The second section discusses clinical NLP and its applications in the medical domain. The third section explains different approaches to clinical NLP, including statistical, linguistic, graphical, ML, and DL approaches. The fourth section introduces the concepts of the SW and ontology in the medical domain, and also presents a framework for classifying cancerous genetic mutations. The fifth section describes a case study for the proposed framework using clinical NLP, ontology, and DL techniques. The last section concludes the chapter.

17.2 Clinical Natural Language Processing

Natural language refers to language understood by humans. Though humans can easily comprehend natural language, it is not directly understandable by computers, which only understand binary. Therefore a barrier exists between humans and computers for direct interaction. To bridge this gap, a specialized set of techniques is used to process natural language, known as NLP. When NLP techniques are applied to textual medical documents, they are known as Clinical NLP. Clinical NLP techniques process the data from various medical documents to extract important information from them. Medical text is a combination of natural language and




terms specific to the medical domain (Björne et al., 2013); therefore processing medical text is a challenging task. Clinical NLP techniques use different methods, including the combined knowledge of a general corpus and a medical corpus, to properly understand medical text. Medical practitioners need to keep themselves updated with the developments in their field, but they do not have enough time. Clinical NLP can help in summarizing reports so that practitioners need not spend much time reading them (Balipa et al., 2018). Other applications of clinical NLP include medical entity recognition (Björne et al., 2013), question-answering for medical-related concepts (Kumar et al., 2018), finding gene associations with disease (Vazquez et al., 2012), detecting cases of patients declining medication prescribed by providers (Malmasi et al., 2019), finding symptoms and background, etc. Some of these applications are explained below.
• Medical entity recognition and resolution: Retrieving medical entities (symptoms, treatments, and prescriptions) from large medical documents takes a lot of time and manual work. Another problem with entity recognition is the use of acronyms in doctors' notes; for example, blood pressure may be written as "BP," "Systolic or Diastolic," or "Sys BP." All three terms refer to the same entity and create ambiguity at the time of extraction. Clinical NLP techniques resolve such ambiguities using a predefined dictionary and successfully retrieve the corresponding entity.
• Question-answering: Clinical NLP can create question-answering systems for the medical domain which understand a clinical query and give precise answers. This helps practitioners remain updated with new research advancements in their field and get the latest information in minimum time.
• Auto-completion of medical documents: Doctors have to spend time preparing lengthy reports. Clinical NLP can complete the reports based on small inputs from doctors to provide context, using ML techniques such as skip-gram, which can predict the next word or sentence based on the doctor's input. This helps doctors complete reports faster.
• Intent classification: Intent refers to the context of what the patient or doctor is talking about. For example, when a patient describes symptoms, the intent is to find the disease having those symptoms. Clinical NLP finds the intent based on important keywords extracted from the conversation or document and classifies the intent based on clinical evidence using different ML techniques.
The next section describes the clinical NLP techniques used for the above-mentioned applications.

17.3 Clinical Natural Language Processing (Clinical NLP) techniques

Clinical NLP techniques fall into five major categories: statistical, linguistic, graphical, ML, and DL. Details of these techniques are described below.




17.3.1 Statistical techniques in Clinical Natural Language Processing

The statistical techniques in clinical NLP are based on statistical measures such as frequency and cooccurrence (Dutta, 2016). Statistical techniques are used to convert clinical text data into feature vectors that are fed to ML algorithms. Examples of statistical techniques are Bag of Words (BOW), Term Frequency-Inverse Document Frequency (TF-IDF), and Rapid Automatic Keyword Extraction (RAKE).

17.3.1.1 Bag of words
BOW is popular because of its simplicity and ease of implementation. In BOW, unique words are selected from the corpus to make a dictionary. For every document in the corpus, a word present in the document is represented as 1, and a word not present in the document is represented as 0. This creates a binary array for the document. The limitation of BOW is that it neglects the sequence of words, and hence the context of the sentence gets lost. Document clustering of clinical narratives in Patterson and Hurdle (2011) used BOW for representing features in the form of vectors. These BOW vectors are used by various ML models.

17.3.1.2 Term frequency-inverse document frequency
TF-IDF is an information retrieval technique (Falessi et al., 2010). It consists of two elements: term frequency (the number of occurrences of a word in a document) and inverse document frequency (the rarity of a word across the whole corpus). If the value of IDF is close to 0, it indicates that the word occurs very commonly in the corpus, and vice versa. The product of these two terms is treated as the score for keyword ranking:

TF(w, d) = (frequency of w in d) / (number of terms in d)    (17.1)

IDF(w, c) = log(number of documents in c / number of documents containing w)    (17.2)

where w represents the word, d represents the document to which the word belongs, and c represents the corpus. In document clustering (Patterson and Hurdle, 2011), TF-IDF is used to weight each word in a document based on its frequency in the document. In Man et al. (2017), TF-IDF was used for feature weighting while classifying protein sequences; only the feature vectors with high TF-IDF values were used as input vectors for classification.
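The following minimal Python sketch implements Eqs. (17.1) and (17.2) directly; the toy corpus and the example documents are illustrative, not taken from any of the cited studies.

import math

def tf(word, doc_tokens):
    # Eq. (17.1): frequency of the word in the document / total terms
    return doc_tokens.count(word) / len(doc_tokens)

def idf(word, corpus):
    # Eq. (17.2): log(total documents / documents containing the word)
    containing = sum(1 for doc in corpus if word in doc)
    return math.log(len(corpus) / containing)

corpus = [
    "the patient reports chest pain and shortness of breath".split(),
    "chest x ray shows no abnormality".split(),
    "patient denies chest pain".split(),
]
doc = corpus[0]
scores = {w: tf(w, doc) * idf(w, corpus) for w in set(doc)}
for word, score in sorted(scores.items(), key=lambda x: -x[1]):
    print(f"{word:10s} {score:.3f}")

Because every scored word is drawn from the document itself, the document count in the denominator of Eq. (17.2) is always at least one.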




17.3.1.3 Rapid automatic keyword extraction
RAKE is a keyword extraction technique which extracts candidate keywords based on delimiters ("." and ",") and stopwords (Dutta, 2016). Keywords can be unigrams or bigrams and, in general, are referred to as n-grams, where an n-gram is a contiguous sequence of n words. RAKE separates the keywords at the delimiters and removes the stopwords; the remaining words and phrases are considered candidate keywords. In order to identify the keywords, a cooccurrence matrix of the candidate keywords is created. The matrix value a_ij contains the cooccurrence value between words i and j. The score for keyword i is calculated based on Eq. (17.3):

Score(i) = (Σ_j a_ij) / frequency(i)    (17.3)

This score is used for keyword ranking. A higher score indicates a stronger candidate for becoming a keyword. These important keywords are further used in ML and DL applications.
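A small illustrative sketch of RAKE-style scoring under Eq. (17.3), using a toy stopword list; here the cooccurrence sum Σ_j a_ij is computed as each word's degree within the candidate phrases that contain it, which is the usual RAKE simplification.

import re
from collections import defaultdict

STOPWORDS = {"the", "of", "a", "an", "and", "is", "in", "to", "with"}  # toy list

def candidate_phrases(text):
    # Split on delimiters, then break each fragment at stopwords
    phrases = []
    for fragment in re.split(r"[.,]", text.lower()):
        current = []
        for word in fragment.split():
            if word in STOPWORDS:
                if current:
                    phrases.append(current)
                current = []
            else:
                current.append(word)
        if current:
            phrases.append(current)
    return phrases

def rake_scores(text):
    freq = defaultdict(int)    # frequency(i)
    degree = defaultdict(int)  # degree(i), standing in for sum_j a_ij
    for phrase in candidate_phrases(text):
        for word in phrase:
            freq[word] += 1
            degree[word] += len(phrase)  # a word cooccurs with every word in its phrase
    # Eq. (17.3): Score(i) = (sum_j a_ij) / frequency(i)
    return {w: degree[w] / freq[w] for w in freq}

text = "Deep learning improves clinical text classification, and clinical text is unstructured."
print(sorted(rake_scores(text).items(), key=lambda x: -x[1]))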

17.3.2 Linguistic techniques in Clinical Natural Language Processing

Linguistic techniques are language-based techniques which use the syntax of the language to extract important keywords. Linguistic approaches are used for finding medications, symptoms, and diseases in medical text. Some of the linguistic techniques are:

17.3.2.1 Part of speech tagging
Part of speech (POS) tagging is the process of identifying the subject, verb, punctuation marks, adjectives, adverbs, object, etc., in a sentence and assigning each its rightful class tag (Marafino et al., 2014). It is used for recognizing medical named entities, relation extraction from clinical narratives, speech recognition, etc. POS tagging is categorized into two types: rule-based and stochastic. Rule-based POS tagging uses the syntactical and contextual information of a sentence (Maynard et al., 2016), whereas stochastic POS tagging uses measures like frequency and probability. Rule-based POS tagging was used for the homeopathy clinical realm in Dwivedi and Sukhadeve (2011), in which a lexicon dictionary and manually written rules were used to assign each word its tag. N-gram tagging is a stochastic POS approach that was used for protein sequence classification in Man et al. (2017). In n-gram tagging, the probability of assigning a tag to a word is calculated; one way of calculating this probability is with the help of the n previous tags for which tagging is already done.

17.3.2.2 Tokenization
Any medical document is made up of words, phrases, and sentences. Tokenization splits the document into tokens that represent the words and phrases. Generally, the splitting criterion for words is the space character (" "). Sometimes other criteria, such as delimiters and stopwords, are used for splitting phrases. The most common tool for tokenizing is the Stanford Tokenizer. Tokenization is used in Wrenn et al. (2007) for retrieving distinguished meaningful terms.

17.3.2.3 Dependency graph
A dependency graph is a directed graph in which the relationships between the words of a sentence are depicted through directed edges. The graphs are modeled using a context-free grammar. They give a visual syntactic representation of a sentence, which makes the




interpretation and understanding easier. The basic concept of a dependency graph is that every word in a sentence is related to the others in some manner, except the root. The applications of dependency parsers include entity disambiguation, negation detection, and the extraction of protein-protein interactions (Raphael and Elhadad, 2012). For example, in the sentence "x" activates "b," the two proteins are related by the verb "activates." A combined sketch of tokenization, POS tagging, and dependency parsing is given below.
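The following sketch uses the spaCy library to illustrate these three linguistic techniques on a hypothetical sentence; it assumes the small English model has been installed separately.

import spacy

# Assumes the model has been installed with:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("The protein MDM2 activates p53 degradation in tumor cells.")

for token in doc:
    # token.pos_ is the part-of-speech tag; token.dep_ is the dependency
    # relation to token.head, which together form the dependency graph
    print(f"{token.text:12s} {token.pos_:6s} {token.dep_:10s} -> {token.head.text}")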

17.3.3 Graphical techniques in Clinical Natural Language Processing

In graphical techniques, the words in the medical text are represented as nodes in a graph and the relationships between words are shown with directed edges. This relationship is based on the cooccurrence of words in the medical text. TextRank and Hyper Link Induced Topic Search are two examples of graph-based approaches, described below.

17.3.3.1 TextRank
In the TextRank algorithm, candidate keywords are selected using a keyword extraction method like POS tagging (Liu et al., 2015). A graph is created with candidate keywords as nodes, and edges between the nodes represent the relationship. The relationship is established based on some criterion such as cooccurrence within a window size (the number of words considered at a time in the sentence). The rank is calculated using Eq. (17.4):

S(w_i) = (1 − df) + df × Σ_{j ∈ IN(w_i)} S(w_j) / |OUT(w_j)|    (17.4)

where S(w_i) = rank of word w_i, df = damping factor, IN(w_i) = links incoming to w_i, and OUT(w_j) = outgoing links from w_j. The damping factor is used to balance the scores between the keywords, so that no single keyword gets a very high value. Generally, df is initialized to 0.85. In Balipa et al. (2018), TextRank was used for text summarization of Psoriasis disease. For the summarization, terms, sentences, and phrases were extracted from the unstructured data; the importance of each term was calculated from the rank scores computed with TextRank, and only the terms with high importance values were used for the summarization.

17.3.3.2 Hyper link induced topic search
Hyper link induced topic search (HITS) is used by search engines to calculate the importance of a node. It is also known as the Hub and Authority algorithm. A Hub node in the graph has links to various nodes, while an Authority node is linked to by many Hubs. A good Hub page points to many good Authorities, whereas a good Authority page is pointed to by many good Hub pages. Let "u" be the Hub vector, "v" the Authority vector, and "A" the adjacency matrix obtained from the graph. The relationship between the Hub and Authority vectors is given by Eq. (17.5):

u = A v    (17.5)



HITS was used to extract biomedical documents from online sources or hypertext documents (Shally and Rejimol Robinson, 2013) based on keywords and the importance of the content of a page. The Hub scores represent the important keywords, and the Authority scores describe the importance of the content. A small power-iteration sketch of HITS is given below.
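The following NumPy sketch alternates Eq. (17.5) with its standard dual update v = Aᵀu on a toy adjacency matrix; the graph itself is illustrative, and the normalization step keeps the iteration numerically stable.

import numpy as np

# Toy adjacency matrix A: A[i, j] = 1 if page i links to page j
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

u = np.ones(A.shape[0])  # hub scores
v = np.ones(A.shape[0])  # authority scores

for _ in range(50):
    v = A.T @ u              # authorities are pointed to by good hubs
    u = A @ v                # Eq. (17.5): hubs point to good authorities
    v /= np.linalg.norm(v)   # normalize so scores stay bounded
    u /= np.linalg.norm(u)

print("hub scores:      ", np.round(u, 3))
print("authority scores:", np.round(v, 3))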

17.3.4 Machine learning techniques in Clinical Natural Language Processing

ML techniques use mathematical models by which machines can learn without manual intervention (Li and Yao, 2018). The model parameters are tuned using training samples. In the following subsections, two ML approaches, the support vector machine (SVM) and Word2Vec, are explained.

17.3.4.1 Support vector machine
SVM has been used to perform many functions like feature extraction, keyword extraction, and classification in the medical domain (Björne et al., 2013). SVM finds a hyperplane to separate the inputs into separate groups. There can be many hyperplanes that successfully divide the input vectors. To optimize the solution and find the optimum hyperplane, support vectors are used: the points closest to the hyperplane are known as support vectors, and the hyperplane having maximal distance from the support vectors is chosen as the output hyperplane. The hyperplane and support vectors are shown diagrammatically in Fig. 17.1. The hyperplane is given by Eq. (17.6):

w^T x + b = 0    (17.6)

FIGURE 17.1 SVM hyperplane. SVM, support vector machine.

where w represents the weight vector, x denotes the input vector, and b is the bias. The kernel trick is used to map the input data from a lower dimensional space into a higher dimensional space, converting nonlinearly separable data into linearly separable data. There are three common types of kernels: the linear kernel, the polynomial kernel, and the radial basis function (Zhang et al., 2006). In Wang et al. (2019), an SVM-based automatic clinical text classification approach was proposed. A rule-based clinical NLP approach was used to label the data with expert knowledge. A pretrained word embedding was used to convert the words into vectors and




was fed into the SVM for classification. SVM has also been used for drug named entity recognition and drug-drug interaction extraction (Björne et al., 2013), and for scalable procedure and diagnosis classification (Marafino et al., 2014).

17.3.4.2 Word2Vec
Word2Vec is an ML approach, in particular a neural network, that converts words into vectors (Luo, 2017). The Word2Vec architecture takes a one-hot vector as input and updates the weights in the hidden layer to reduce the error. For a one-hot vector, a vocabulary of unique words is created, where the length of the vector is equal to the size of the vocabulary; every vector has only one entry equal to "1," which carries the information of the word, and all others are "0." One-hot vectors are memory-inefficient because only one index in a very large vector contains information. The output of Word2Vec is a trained model used to convert text input into vectors. In Word2Vec, meanings are captured such that synonymous words like "good" and "great" have similar vectors, whereas opposite terms like "good" and "bad" have very different vectors. Moreover, Word2Vec overcomes the sparsity problems faced in statistical approaches. Word2Vec can be implemented in two ways: Continuous Bag of Words (CBOW) and Skip-gram (Ma and Zhang, 2015). In CBOW, the surrounding context in the sentence is given, based on which the next word is predicted; Skip-gram is the opposite of CBOW: given the word, it tries to find the context in which it can be used. Skip-gram is useful for bigger datasets, whereas CBOW suits smaller ones.
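A short sketch of training Word2Vec with the gensim library on a toy tokenized corpus; sg=1 selects Skip-gram and sg=0 selects CBOW, and the parameter name vector_size assumes gensim 4.x (older releases call it size).

from gensim.models import Word2Vec

# Toy corpus of tokenized clinical sentences
sentences = [
    ["patient", "reports", "chest", "pain"],
    ["patient", "denies", "chest", "pain"],
    ["chest", "xray", "shows", "no", "abnormality"],
    ["patient", "reports", "shortness", "of", "breath"],
]

model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1)

vector = model.wv["pain"]                     # 100-dimensional dense vector
print(vector.shape)
print(model.wv.most_similar("pain", topn=3))  # nearest words in the embedding space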

17.3.5 Deep learning techniques in Clinical Natural Language Processing

DL is based on the working of neurons in the human brain. DL techniques have a deeply interconnected network and exchange information between adjacent layers through weighted connections (Wang et al., 2019). DL automatically extracts important features from the given input. Two DL techniques, the convolutional neural network (CNN) and the recurrent neural network (RNN), are explained below.

17.3.5.1 Convolutional neural network
CNN is a feed forward neural network having deeply interconnected layers. In these layers, nonlinear activation functions are used to capture the nonlinearity in the data (Yao et al., 2019). CNN has three types of layers: convolution, pooling, and fully connected layers. In a convolution layer, a filter matrix slides over the input matrix; the product of the values from the overlapping input matrix and filter matrix gives the feature matrix. The pooling layer performs dimensionality reduction on the feature matrix. The fully connected layer is the last layer, obtained by flattening the feature matrix. The CNN architecture is shown in Fig. 17.2. CNN has been used in various medical applications like disease classification (Wang et al., 2019), extracting important terms (Soldaini et al., 2017), and protein sequence classification (Man et al., 2017). In a medical document, some terms are more important than others, so in clinical NLP term extraction is performed to remove redundant information and reduce the size of the document. CNN was used for extracting important terms from clinical notes in Soldaini et al. (2017). The context in which the terms are referred to is an




FIGURE 17.2 CNN architecture. CNN, convolutional neural network.

FIGURE 17.3 RNN architecture. RNN, recurrent neural network.

important criterion for term importance. To correctly capture the context of a sentence, words occurring before and after a specific term are also considered by the CNN. The input terms are converted into feature vectors using techniques like Word2Vec and fed into the CNN.

17.3.5.2 Recurrent neural network
RNN is a DL technique which consists of feed forward connections along with feedback loops (Luo, 2017). In an RNN, the output of one layer is fed to the next layer so that information from the previous calculation can pass forward to the next layers. Consequently, the output of one layer depends upon the output of the previous layer. RNN is recurrent in nature because it repeats the process of passing the output information to the next hidden layer. The RNN architecture is presented in Fig. 17.3. RNN is useful when information about sequence is considered important, such as in the case of sentences. Therefore RNN is a suitable choice for clinical NLP tasks.




RNN has been used for the de-identification of patient notes in Dernoncourt et al. (2016). In that work, the RNN performed two tasks: first, encoding the tokens into embedding vectors, and second, distinguishing protected health information (PHI) from non-PHI. A minimal sketch of an RNN text classifier is given below.
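The following Keras sketch shows an RNN (LSTM) classifier of the kind described above; the layer sizes and the random stand-in data are illustrative assumptions, not parameters from the cited work.

import numpy as np
from tensorflow.keras import layers, models

vocab_size, max_len, num_classes = 5000, 200, 2  # illustrative sizes

model = models.Sequential([
    layers.Embedding(vocab_size, 100),  # token ids -> dense vectors
    layers.LSTM(64),                    # recurrent layer carries sequence context forward
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random stand-in data; real inputs would be tokenized clinical notes
x = np.random.randint(0, vocab_size, size=(32, max_len))
y = np.random.randint(0, num_classes, size=(32,))
model.fit(x, y, epochs=1, verbose=0)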

17.4 Clinical Natural Language Processing and Semantic Web

Many studies in the literature have described information overload in the clinical domain and how it impairs work effectiveness. The publishing of medical articles has shown a tremendous increase in the last decade (Wang et al., 2018). The medical data is vast; hence, the challenge of searching for relevant data has also increased. Many efforts have been made to structure the data and extract important information. The unstructured format and nonuniformity of available data give rise to challenges like noninteroperability: the data of one medical center is not understandable to another medical facility. SW provides a possible solution to the aforementioned interoperability problems. SW is a knowledge representation technique that enables machines to directly read the structured data present in webpages. Ontology is one of the building blocks of the SW (Shrivastava and Agrawal, 2016). Ontology provides a common platform through which data can be shared among different systems. It also facilitates the sharing of medical knowledge and can serve as a tool for the electronic transfer of data from one medical center to another. Ontology can contribute to the standardization of medical terms, thereby making them understandable to everyone. In the healthcare domain, various ontologies are available, like the International Classification of Diseases-Tenth Revision (ICD-10) and the Systematized Nomenclature of Medicine-Clinical Terms (SNOMED-CT).

17.4.1 Ontology creation from clinical documents

Ontology creation is the process of extracting meaningful information from unstructured textual data and presenting it in the form of an ontology (Bertaud et al., 2012). Ontologies have their unique elements, concepts, and relations. Here, the information to be extracted is available in textual format and its sources are varied. Therefore it is a challenging task to extract the relationships between ontology elements in an error-free manner. The main steps to obtain an ontology from unstructured data are shown in Fig. 17.4.

FIGURE 17.4 Process of ontology creation from clinical documents.




To obtain the ontology from clinical textual data, the following steps are performed (Asim et al., 2018); a minimal code sketch of this pipeline follows the list.
• Retrieval and cleaning of input data: In the medical domain, unstructured data, generally available in textual form, can be obtained from different sources including medical articles, pathology reports, the comment section of the EHR, and radiologists' reports. This data serves as input for the ontology creation process. The cleaning of textual data involves removing stopwords, such as the conjunctive words that are used to build sentences but do not provide significant information. Along with stopwords, punctuation marks and special characters are also removed in the cleaning process.
• Extract terms and concepts: The cleaned textual data is then processed to extract important terms. Term extraction algorithms like the linguistic and statistical approaches are described in the sections above. Further, by grouping terms with similar meanings, a concept is formed. Clustering methods are used to make clusters of similar terms and label them with a concept.
• Extract relations between concepts: A dependency graph is used to establish the relationships between terms and concepts, analyzed using parse trees; the concepts having the shortest path in the parse tree are assumed to be related. ML is also used to extract relations between concepts: an ML model learns from a training dataset and finds the relations in unknown data. For inferring results, axioms are added using domain knowledge.
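The sketch below walks through the three steps on toy documents, using TF-IDF for term scoring and agglomerative clustering as a simple stand-in for concept grouping; the documents, the number of clusters, and the relation heuristic are all illustrative assumptions.

import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

documents = [
    "BRCA1 mutation increases risk of breast cancer.",
    "TP53 mutation is common in many tumors.",
    "Breast cancer screening detects tumors early.",
]

# Step 1: cleaning -- lowercase and strip punctuation/special characters
cleaned = [re.sub(r"[^a-z0-9 ]", " ", d.lower()) for d in documents]

# Step 2: term extraction -- TF-IDF as the statistical term scorer
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(cleaned)
terms = vectorizer.get_feature_names_out()

# Group terms with similar document profiles into concepts by clustering
clustering = AgglomerativeClustering(n_clusters=3).fit(X.T.toarray())
concepts = {}
for term, label in zip(terms, clustering.labels_):
    concepts.setdefault(label, []).append(term)

# Step 3 (relations) would link concepts whose terms cooccur in a document;
# here we only print the concept clusters
print(concepts)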

The next section describes a framework using the above-mentioned techniques.

17.4.2 Framework for classification of genetic mutations using ontologies from clinical documents

This section uses an ontology for the classification of genetic mutations. A gene can be mutated in many ways, and these mutations may or may not turn the gene malignant. The task is to classify these mutations into one of the corresponding classes; this classification is done manually by medical experts. In this section, a framework has been proposed to automate the task of mutation classification. The framework is intended to classify a genetic mutation into one of three designated classes, namely "Neutral" (no effect of the gene mutation), "Loss-of-function" (less production of the gene's protein due to the mutation), and "Gain-of-function" (increased production of the gene's protein due to the mutation). A detailed flow diagram of the proposed framework is shown in Fig. 17.5. In the first step, the framework extracts information about a gene and its mutation from the patient's EHR. This extracted information is then used for retrieving the corresponding medical article from clinical databases, and the retrieved article is used for ontology creation/enrichment. The EHR and the gene ontology have similar structures, that is, a hierarchical structure for storing clinical information, but they contain different kinds of data. The EHR is a medical unit that contains patient-specific data such as demographic information, the doctor in charge, the time of appointment, diagnosis, etc. In contrast, the gene ontology contains general information about different genes, including proteins, gene mutations, and the effects of these mutations.



FIGURE 17.5 Framework for genetic mutation classification using ontologies.

The gene ontology, along with the gene and mutation information extracted from the EHR, is given as input to DL techniques for classification. A detailed description of the proposed framework is as follows:
Step 1: In this step, information about the patient's mutated gene, say "g," is extracted from the patient's record in the EHR. This information is required so that details about the effects of this gene mutation can be extracted from the clinical database.
Step 2: Using the information about gene "g" obtained in the first step, clinical documents describing the specific gene are extracted from clinical databases. The retrieved gene articles are further used in the creation/enrichment of the ontology.
Step 3: The terms and relations are extracted from the retrieved clinical article. For this purpose, clinical NLP techniques like TF-IDF and POS tagging are used. This knowledge extraction is followed by ontology creation based on the extracted information; the detailed procedure is described in Section 17.4.1. This gene ontology is used as an input to the DL technique(s).
Step 4: The gene ontology is flattened so that it can be provided as input to the DL algorithm.




Step 5: The flattened ontology is given as input to the DL technique for classification. The DL algorithm performs multiclass classification to classify the gene mutation into one of the three aforementioned classes.
Step 6: The result of the DL algorithm is updated in the patient's EHR and used for patient diagnosis. Doctors can take further steps according to the obtained results.
Thus this framework provides a systematic procedure for the classification of genetic mutations. The next section describes a case study based on this framework.

17.5 Case study: Classification of Genetic Mutation using Deep Learning and Clinical Natural Language Processing

This section presents a case study based on the framework described previously. In this case study, information from clinical articles was extracted to classify genetic mutations into three classes, namely "Gain-of-function," "Neutral," and "Loss-of-function." Two datasets were used: the Catalogue Of Somatic Mutations In Cancer (COSMIC) mutation dataset (Bamford et al., 2004) and the cancer-diagnosis dataset retrieved from the Kaggle repository. The COSMIC mutation dataset contains genetic information and was used to enrich the patient EHR, while the Kaggle dataset contains clinical articles which provide information about genes and the consequences of mutations in these genes. The detailed explanation of the case study implementation is given below.

Step 1: EHR creation and information extraction
In this case study, information about the patient's gene and its mutation was required. In the real world, the EHR is used for storing the patient's gene and mutation information. Therefore an EHR was created using openEHR, which uses predefined standards such as SNOMED-CT and creates archetype-based EHRs. For EHR creation, the openEHR tools Archetype Editor and Ocean Template Designer were used: Archetype Editor was used to form the archetypes that store information, and Ocean Template Designer was used to design the template that contains the different archetypes. The output of Ocean Template Designer was an operational template, which was fed to the EHR instance generator of the openEHR toolkit. The EHR instance generator returned an instance of a properly structured EHR, able to store the specific information. After this, data to populate the EHR was required, for which the COSMIC mutation dataset was used. COSMIC is an open-access database that stores information about genes and the mutations that cause cancer in humans. The COSMIC mutation data contains different attributes such as gene name, primary site, primary histology, genomic mutation id, mutation coding sequence of the gene (CDS), gene samples, sample name, etc. Archetypes were created in the EHR to store these attributes of the dataset. The EHR then contained the information about the patient's gene and mutation in XML format, where different tags represent different archetype fields and every archetype has its unique archetype_id. An XML API in Python named ElementTree was used to extract the gene name and gene mutation using their unique archetype_ids: ElementTree traversed the EHR and returned the gene and mutation information by their archetype_ids, as sketched below.
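A hedged sketch of this extraction step with ElementTree; the XML structure, tag names, and archetype_id below are hypothetical simplifications, as a real openEHR document is far more deeply nested.

import xml.etree.ElementTree as ET

# Hypothetical EHR extract; tag names and the archetype_id are illustrative
ehr_xml = """
<ehr>
  <archetype archetype_id="openEHR-EHR-OBSERVATION.gene.v1">
    <gene_name>BRCA1</gene_name>
    <mutation_cds>c.68_69delAG</mutation_cds>
  </archetype>
</ehr>
"""

root = ET.fromstring(ehr_xml)
# Locate the archetype by its archetype_id, then read its fields
node = root.find(".//archetype[@archetype_id='openEHR-EHR-OBSERVATION.gene.v1']")
gene = node.findtext("gene_name")
mutation = node.findtext("mutation_cds")
print(gene, mutation)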



FIGURE 17.6 EHR created with openEHR tools. EHR, electronic health record.

This information was further used to extract the relevant gene article from Kaggle's cancer-diagnosis dataset. Fig. 17.6 shows the EHR created with the openEHR tools.

Step 2: Extracting the clinical article from the Kaggle dataset
Clinical articles provide general information about genes. The gene and the mutation extracted from the EHR were used to retrieve clinical articles about that gene; these articles are needed for the creation of the ontology. Kaggle's cancer-diagnosis dataset, containing articles that provide detailed information about the corresponding genes, was used. The dataset is distributed in two files, "variants" and "text." The variants file describes different variants of a particular gene, including ID, Gene, Type of Mutation, and Class. The text file contains two columns, ID and Text; the Text field contains the medical article with the general information about the gene and its mutation. The gene "g" and mutation "m" values retrieved from the EHR were used to link to the gene and mutation values in the cancer-diagnosis dataset. The dataset was read with the pandas library, iterating over the Gene and Mutation columns until "g" and "m" matched the column values; the corresponding article from the same row was then retrieved from the Text column. Fig. 17.7 shows the article retrieval for gene "g" and mutation "m." These articles were further used for knowledge extraction and the creation of the ontology in the SW component. A hedged sketch of this lookup is given below.
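A pandas sketch of the lookup described above; the file names, the "||"-separated layout of the text file, and the Variation column name are assumptions about the Kaggle release and may differ, and the gene/mutation values are placeholders. A vectorized filter is used here in place of explicit row iteration, which gives the same result.

import pandas as pd

variants = pd.read_csv("variants")  # assumed columns: ID, Gene, Variation, Class
texts = pd.read_csv("text", sep=r"\|\|", engine="python",
                    skiprows=1, names=["ID", "Text"])  # "||"-separated layout assumed

g, m = "BRCA1", "c.68_69delAG"  # placeholder values retrieved from the EHR
row = variants[(variants["Gene"] == g) & (variants["Variation"] == m)]
if not row.empty:
    article = texts.loc[texts["ID"] == row.iloc[0]["ID"], "Text"].iloc[0]
    print(article[:200])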



FIGURE 17.7 Retrieved article of gene “g” and mutation “m.”

FIGURE 17.8 Keywords and their entity names.

Step 3: Knowledge extraction using clinical NLP and ontology creation
The articles retrieved in the last step were used to create the ontology, for which knowledge must be extracted from the articles using clinical NLP techniques. Term extraction begins with data cleaning; Python provides a library named nltk which removes the punctuation and the stopwords stored in the nltk dictionary. Term extraction was done through POS tagging using Python's spacy library, which assigns each word its corresponding tag. The keywords having Noun, Noun Phrase, Verb, and Verb Phrase tags were considered important terms in the implementation. The extracted terms were then clustered together to form concepts. Concept extraction was implemented using named entity recognition with Python's scispacy library; the models named en_ner_bionlp13cg_md and en_ner_bc5cdr_md were used in the implementation. Fig. 17.8 shows keywords and their concept names. The words belonging to the same category were clustered together under a single concept. The relationships between the obtained concepts were established using hierarchical agglomerative clustering (HAC): instances were taken from every concept cluster and HAC was applied, grouping the concepts into progressively larger clusters until one big cluster remained as the superset of all the clusters. The relation extraction was implemented like the method in Novelli and Oliveira (2012). These steps established relationships between concepts and created an ontology to be given as input to the DL module after flattening and preprocessing. A sketch of the concept-extraction step is given below.
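A sketch of the concept-extraction step with scispacy; it assumes the en_ner_bionlp13cg_md model has been installed separately, and the example sentence and the printed label names are illustrative.

import spacy

# Assumes scispacy and the model are installed, e.g.:
#   pip install scispacy
#   pip install <URL of the en_ner_bionlp13cg_md wheel>
nlp = spacy.load("en_ner_bionlp13cg_md")

text = "BRCA1 mutations impair DNA repair in breast epithelial cells."
doc = nlp(text)

# Group the extracted entities by their predicted label to form concept clusters
concepts = {}
for ent in doc.ents:
    concepts.setdefault(ent.label_, []).append(ent.text)
print(concepts)  # e.g. {'GENE_OR_GENE_PRODUCT': ['BRCA1'], ...}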



Step 4: Ontology flattening and preprocessing
The ontology created in the last step is to be given as input to the DL algorithm, but it cannot be fed directly; therefore the ontology is flattened, as done in Magumb et al. (2018). The words present in the flattened ontology were converted into tokens using the nltk library, and the input was padded with zeroes to make all the instances the same length. GloVe embeddings were used to convert the tokens into word embeddings. In this case study, GloVe embeddings of size 100 were used as a lookup table in which every token was mapped to a word vector, and this mapping was stored in the embedding matrix. Fig. 17.9 shows the obtained embedding matrix. This embedding matrix of the flattened ontology was fed as input to the DL module. A sketch of this preprocessing is given below.
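A sketch of the tokenization, zero-padding, and GloVe lookup described above; it assumes the glove.6B.100d.txt file has been downloaded and that nltk's punkt tokenizer data is available, and the flattened-ontology strings are placeholders.

import numpy as np
from nltk.tokenize import word_tokenize  # requires nltk.download('punkt')

flattened = ["brca1 mutation impairs dna repair", "tp53 loss drives tumor growth"]
tokenized = [word_tokenize(s) for s in flattened]

# Build a vocabulary and pad every instance with zeros to the same length
vocab = {w: i + 1 for i, w in enumerate(sorted({w for s in tokenized for w in s}))}
max_len = max(len(s) for s in tokenized)
padded = np.zeros((len(tokenized), max_len), dtype=int)
for r, sent in enumerate(tokenized):
    padded[r, :len(sent)] = [vocab[w] for w in sent]

# Load GloVe vectors (assumed downloaded) as a lookup table
embeddings = {}
with open("glove.6B.100d.txt", encoding="utf8") as f:
    for line in f:
        parts = line.split()
        embeddings[parts[0]] = np.asarray(parts[1:], dtype=float)

# Embedding matrix: row i holds the 100-d vector for the token with id i
matrix = np.zeros((len(vocab) + 1, 100))
for word, idx in vocab.items():
    if word in embeddings:
        matrix[idx] = embeddings[word]
print(padded.shape, matrix.shape)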

FIGURE 17.9 Embedding matrix acting as input for the DL algorithm. DL, deep learning.

FIGURE 17.10 Convolutional neural network summary.




Step 5: Classification using the DL algorithm
The embedding matrix of the flattened ontology created in the previous step was fed to the DL algorithm. For classification, a DL technique, namely CNN, was used. Three convolution layers and three max pooling layers were used in the proposed model. Each convolution layer used ReLU activation together with 128 filters. The size of the pooling filter was five in the first two CNN layers, and in the third layer it was 35. Lastly, two dense layers were used, with ReLU and softmax activations, respectively. Fig. 17.10 shows the model summary created by the CNN. An accuracy of 79% was achieved by the CNN using a subset of 1100 samples; 70% of this data was used for training and 30% for testing. New test instances of patient gene and mutation were classified, and the result obtained using the model was used in the next step. A hedged sketch of a comparable CNN is given after Step 6.

Step 6: Updating the EHR
After getting the classification results for the gene and its mutation, the next step is updating the EHR with the appropriate result. This result is further used by the doctors to take further actions; the doctors follow up with treatment according to the updated result.

The feasibility of the framework was tested with this case study, and the framework was found to work using clinical NLP techniques, ontologies, and DL techniques.
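The following Keras sketch builds a CNN comparable to the one described in Step 5 (three convolution layers with 128 filters and ReLU, pooling sizes 5, 5, and 35, then dense ReLU and softmax layers). The kernel size, vocabulary size, and input length are assumptions, since the chapter does not state them.

from tensorflow.keras import layers, models

max_len, vocab_size, embed_dim = 1000, 20000, 100  # assumed sizes
num_classes = 3  # Neutral, Loss-of-function, Gain-of-function

model = models.Sequential([
    layers.Input(shape=(max_len,), dtype="int32"),
    layers.Embedding(vocab_size, embed_dim),
    layers.Conv1D(128, 5, activation="relu"),  # kernel size 5 is an assumption
    layers.MaxPooling1D(5),
    layers.Conv1D(128, 5, activation="relu"),
    layers.MaxPooling1D(5),
    layers.Conv1D(128, 5, activation="relu"),
    layers.MaxPooling1D(35),                   # third pooling filter of size 35
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # corresponds to the kind of summary shown in Fig. 17.10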

17.6 Conclusion

The processing of medical documents containing textual data is known as Clinical NLP. It can play an important role in improving health care by extracting important information from unstructured data in the medical domain. This chapter presented various clinical NLP techniques used to extract important information from medical text, like BOW, TF-IDF, TextRank, HITS, POS tagging, and Word2Vec. The extracted information acts as input to an ontology, creating a structured representation, and the structured data represented through ontologies can be used in different applications in the medical domain. A framework for classifying cancerous genetic mutations has been proposed using ontologies and DL techniques, and a case study has been presented for the proposed framework. The case study took the gene input from the EHR, retrieved the clinical articles, and extracted knowledge using clinical NLP techniques. The extracted knowledge was used for the creation of a gene ontology, which was then fed to the classification algorithm; the classification was performed using a CNN. The proposed framework has been implemented using clinical NLP, ontologies, and a CNN with the COSMIC mutation data and Kaggle's cancer-diagnosis dataset.

References

Asim, M.N., et al., 2018. A survey of ontology learning techniques. Database: J. Biol. Databases Curation 5 (12), 1–24.
Balipa, M., Ramasamy, B., Vaz, H., Jathanna, C.S., 2018. Text summarization for psoriasis of text extracted from online health forums using TextRank algorithm. Int. J. Eng. Technol. 7 (3), 872–873.
Bamford, S., et al., 2004. The COSMIC (catalogue of somatic mutations in cancer) database and website. Br. J. Cancer 91, 355–358.
Bedi, P., Kaur, H., Marwaha, S., 2007. Trust based Recommender System for the Semantic Web. AAAI Press, Hyderabad, pp. 2677–2682.
Bertaud, V., Régis, D., Anita, B., 2012. Ontology and medical diagnosis. Inform. Health Soc. Care 37 (2), 51–61.
Björne, J., Kaewphan, S., Salakoski, T., 2013. UTurku: Drug Named Entity Recognition and Drug-Drug Interaction Extraction Using SVM Classification and Domain Knowledge. Association for Computational Linguistics, Atlanta, Georgia, pp. 651–659.




Dernoncourt, F., Lee, J.Y., Uzuner, O., Szolovits, P., 2016. De-identification of patient notes with recurrent neural networks. J. Am. Med. Inform. Assoc. 24 (3), 596–606.
Dutta, A., 2016. A novel extension for automatic keyword extraction. Int. J. Adv. Res. Comput. Sci. Softw. Eng. 6 (5), 160–163.
Dwivedi, S.K., Sukhadeve, P.P., 2011. Rule based part of speech tagger for homoeopathy clinical realm. Int. J. Comput. Sci. Appl. 8 (4), 350–354.
Falessi, D., Cantone, G., Canfora, G., 2010. A Comprehensive Characterization of NLP Techniques for Identifying Equivalent Requirements. Association for Computing Machinery, Bolzano-Bozen, Italy, pp. 1–10.
Kumar, S., Jain, A., Mahalakshmi, P., 2018. Enhancement of healthcare using naïve bayes algorithm and intelligent data mining of social media. Int. J. Appl. Eng. 13 (6), 4109–4112.
Li, G., Yao, B., 2018. Classification of genetic mutations for cancer treatment with machine learning approaches. Int. J. Design, Anal. Tools Integrated Circuits Syst. 7 (1), 11–15.
Liu, H., et al., 2015. Optimizing graph-based patterns to extract biomedical events from the literature. BMC Bioinform. 16 (S2).
Luo, Y., 2017. Recurrent neural networks for classifying relations in clinical notes. J. Biomed. Inform. 72, 85–95.
Ma, L., Zhang, Y., 2015. Using Word2Vec to Process Big Text Data. IEEE, San Jose, CA, pp. 2895–2897.
Magumb, M.A., Nabende, P., Mwebaze, E., 2018. Ontology boosted deep learning for disease name extraction from Twitter messages. J. Big Data 5 (1), 31.
Malmasi, S., Ge, W., Hosomura, N., Turchin, A., 2019. Comparison of Natural Language Processing Techniques in Analysis of Sparse Clinical Data: Insulin Decline by Patients. AMIA, Washington Hilton, p. 610.
Man, L., Cheng, L., Jingyang, G., 2017. An Efficient CNN-Based Classification on G-Protein Coupled Receptors Using TF-IDF and N-gram. IEEE, Heraklion, Greece, pp. 924–931.
Marafino, B.J., et al., 2014. N-gram support vector machines for scalable procedure and diagnosis classification, with applications to clinical free text data from the intensive care unit. J. Am. Med. Inform. Assoc. 21 (5), 871–875.
Marwaha, S., Bedi, P., Yadav, R., Malik, N., 2009. Diseases and Pests Identification in Crops—A Semantic Web Approach. IICAI, Tumkur, Karnataka, pp. 1057–1106.
Maynard, D., Bontcheva, K., Augenstein, I., 2016. Natural Language Processing for the Semantic Web. Synthesis Lectures on the Semantic Web: Theory and Technology, 6 (2). Morgan and Claypool Publishers, London, pp. 1–194.
Novelli, A.D.P., Oliveira, J.M.P.d., 2012. Simple method for ontology automatic extraction from documents. Int. J. Adv. Comput. Sci. Appl. 3 (12), 44–51.
Patterson, O., Hurdle, J.F., 2011. Document Clustering of Clinical Narratives: A Systematic Study of Clinical Sublanguages. AMIA, Washington DC, pp. 1099–1107.
Raphael, C., Elhadad, M., 2012. Syntactic dependency parsers for biomedical-NLP. AMIA, Chicago, Illinois, pp. 121–128.
Shally, H.R., Rejimol Robinson, R.R., 2013. Survey for mining biomedical data from HTTP documents. Int. J. Eng. Sci. Res. Technol. 2 (2), 165–169.
Shrivastava, S., Agrawal, S., 2016. An overview of building blocks of semantic web. Int. J. Comput. Appl. 152 (10), 17–20.
Soldaini, L., Yates, A., Goharian, N., 2017. Denoising Clinical Notes for Medical Literature Retrieval with Convolutional Neural Model. ACM, Singapore, pp. 2307–2310.
Turnip, T.N., Silalahi, E.K., Vicario, Y.A., 2019. Application of ontology in semantic web searching of flight ticket as a study case. J. Phys.: Conf. Ser. 1175 (1), 012092.
Vazquez, M., Torre, V. de la, Valencia, A., 2012. Chapter 14: Cancer genome analysis. PLoS Comput. Biol. 8 (12), e1002824.
Wang, Y., et al., 2018. Clinical information extraction applications: a literature review. J. Biomed. Inform. 77, 34–49.
Wang, Y., et al., 2019. A clinical text classification paradigm using weak supervision and deep representation. BMC Med. Inform. Decis. Mak. 19 (1), 1–13.
Wrenn, J.O., Stetson, P.D., Johnson, S.B., 2007. An Unsupervised Machine Learning Approach to Segmentation of Clinician-Entered Free Text. AMIA, Sheraton Chicago, pp. 811–815.
Yao, L., Mao, C., Luo, Y., 2019. Clinical text classification with rule-based features and knowledge-guided convolutional neural networks. BMC Med. Inform. Decis. Mak. 19 (3), 31–39.
Zhang, K., Xu, H., Tang, J., Li, J., 2006. Keyword Extraction Using Support Vector Machine. Springer-Verlag, Berlin, Heidelberg, pp. 85–96.


CHAPTER 18

Security issues for the Semantic Web

Prashant Pranav1, Sandip Dutta1 and Soubhik Chakraborty2

1Department of Computer Science and Engineering, Birla Institute of Technology, Mesra, Ranchi, India; 2Department of Mathematics, Birla Institute of Technology, Mesra, Ranchi, India

18.1 Introduction

18.1.1 Security and cryptography

Security is one of the key requirements in the current era of technological advancement. Security needs to be provided to protect the confidential information of individuals as well as organizations. With the help of many existing security protocols, one can be assured that confidential data and messages communicated over any medium are out of the reach of an adversary. But are the existing security protocols enough? This question has many vague answers. Some groups emphasize that the security provided to an organization or individual by a third party, or embedded in the applications or processes they use, is free from eavesdroppers, whereas other groups emphasize that the existing security protocols need continuous and incremental improvement to cope with the computing power of the adversary. With computing moving from the traditional approach to a more quantum-oriented one, developing algorithms that work well on quantum computers is the focus of current research in the field of cryptography and network security.

Cryptography is closely related to security. We first give a brief overview of cryptography and then move on to discuss the security of the Semantic Web. Cryptography is basically the formulation of mathematical models and is one of the key requirements (though not the only requirement) to provide security. It ensures that the messages or data communicated over an insecure medium are unreadable and unbreakable. Cryptographic algorithms are the premise of secure exchanges over the web today. Classified data of a government or private office or department is secured using cryptography. From the top-secret exchange of messages to moving data of national significance, cryptographic algorithms play a very significant role. Cryptography is essentially a mathematical model utilized for concealing secret data. With the advancement of web technologies and the dependence of pretty much everybody on the






use of the web in everyday life, it has become a matter of the utmost importance to protect the private data shared over the web in a form that cannot be read by an intruder.

There are three major categories of cryptographic algorithms: unkeyed, symmetric, and asymmetric algorithms. A pictorial description of these three types is shown in Fig. 18.1. The distinction between these three categories is made on the basis of key distribution. Unkeyed algorithms do not require any secret key for their working. Symmetric algorithms use a single secret key that is shared by all the parties engaged in communication and is required to perform basic cryptographic operations such as encryption and decryption. Asymmetric algorithms are based on the usage of a pair of keys: a public key and a private key. The public key is used by the sender to encrypt messages, which can be decrypted back by the receiver with the employment of the private key.

Cryptography is the art of writing something secretly; the first notable use of cryptography in writing dates back to c.1900 BCE, when an Egyptian scribe used nonstandard hieroglyphs in an inscription. In data and telecommunications, the use of cryptographic algorithms is of the utmost importance when communicating over any untrusted medium, such as the internet. The primary functions or objectives achieved by cryptographic algorithms are:
Privacy/Confidentiality: No one except the intended receiver can read the message. The sole communication must be between the two involved communicating parties and not any eavesdropper.
Authentication: Prevents an attacker from impersonating a communicating party. It is the process of proving one's identity.
Integrity: Assures the receiver that the delivered message has not been altered from the original on its way to the receiver. It prevents any third party from tampering with the message without being noticed.
Non-repudiation: Assures the receiver that the sender has really sent the message and not anybody else.
Key Exchange: The mechanism by which crypto keys are shared between senders and receivers. It is especially important for asymmetric key cryptography.

FIGURE 18.1 Symmetric key cryptography.




In cryptography we start with a plain text, which is in an unencrypted format. The plaintext is converted through some encryption technique into ciphertext, which is then decrypted back into usable plain text. Mathematically, the process is written as:

C = E_k(P)
P = D_k(C)

where P = plain text, C = ciphertext, E = the encryption protocol, D = the decryption protocol, and k = the key. Generally, in cryptographic protocols, the two communicating parties are called Alice and Bob; this nomenclature is used in the crypto field to make it easier to identify the communicating parties. In addition, if a third and a fourth party are present in the communication, they are called Carol and Dave, respectively. A trusted party is called Trent, a malicious party is called Mallory, and an eavesdropper is referred to as Eve. Thus, cryptography is related to the creation and development of the mathematical algorithms employed to encrypt the plaintext and decrypt the ciphertext. Cryptanalysis is the science of breaching cryptographic security systems and taking advantage of the encrypted text. The term cryptology refers to the study of secret writing, which consists of both cryptography and cryptanalysis. Cryptographic algorithms, categorized on the basis of the number of keys employed for the cryptographic operations, are discussed below.

18.1.1.1 Symmetric key cryptography or secret key cryptography
In symmetric key or secret key cryptography, both the sender and the receiver share the same common key for encryption and decryption of a message. There are two types of symmetric key ciphers:
Stream cipher: An encryption algorithm that encrypts the plaintext one bit or byte at a time.
Block cipher: An encryption algorithm that takes a number of bits and encrypts them as a single unit, padding the plaintext so that it is a multiple of the block size. The usual block sizes are 64 bits, 128 bits, and 256 bits.

18.1.1.2 Asymmetric key cryptography or public-key cryptography
In asymmetric key or public-key cryptography, two different keys are used for encryption and decryption of a message. A public key is used for the encryption process and the private key is used for the decryption process (Fig. 18.2). In the last 300–400 years, Public Key Cryptography (PKC) has been the most noteworthy development in cryptography. Modern PKC was defined broadly by Professor Martin Hellman of Stanford University and Whitfield Diffie, a graduate student, in the year 1976. PKC relies upon the existence of one-way functions, or mathematical functions that are easy to compute while their inverses are hard to compute. Conventional PKC utilizes two distinct keys which are mathematically related; however, knowledge of one key does not permit anybody to easily determine the other key. One key is utilized for the encryption of the plain text while the other key is utilized



FIGURE 18.2 Asymmetric key cryptography flow.

for the decryption of the ciphertext. It does not matter which key is used first in the process; however, the two keys are required to work as a pair. This is also called asymmetric cryptography. In PKC, one of the keys is called the public key, and it may be widely advertised as per the owner's needs. The other key is known as the private key, and it is never revealed to another party.
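As a concrete illustration of the two keyed categories, the following sketch uses Python's third-party cryptography package: Fernet for the symmetric case and RSA with OAEP padding for the asymmetric case. The message is illustrative.

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Symmetric: one shared secret key performs both E_k and D_k
key = Fernet.generate_key()
f = Fernet(key)
ciphertext = f.encrypt(b"confidential patient record")  # C = E_k(P)
print(f.decrypt(ciphertext))                            # P = D_k(C)

# Asymmetric: the public key encrypts, only the private key decrypts
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
c = public_key.encrypt(b"confidential patient record", oaep)
print(private_key.decrypt(c, oaep))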

18.1.2 Introduction to Semantic Web

The World Wide Web Consortium (W3C) is developing technology to support a web of data, with the aim of enabling computers to do more efficient work and of developing a common technology that supports trusted interactions over the network of computers. The Web of Data is made of structured information read by agents that relate structured data located on servers through links; this data is also called Linked Data, since it is data connected through links. The data that exists in the Web of Data is called metadata. The term Semantic Web refers to a Web where all data in a database are linked over the network of computers. It enables users to store their confidential and private data over the web, define some meaningful vocabulary for the created data, and formulate some rules to handle the stored data. Some of the technologies used to link the data in the Semantic Web are RDF, SPARQL, OWL, and SKOS; a small RDF example is sketched below. Various layers for the Semantic Web have been developed by Tim Berners-Lee, as shown in Fig. 18.3.

FIGURE 18.3 Layers of the Semantic Web.

The lowest layer has various communication protocols. The next layer is devoted to the extensible markup language (XML) and various XML schemas, followed by the RDF (Resource Description Framework) in the next upper layer. Above that are Ontologies and Interoperability, and the highest layer contains the Trust Management layer. TCP/IP, HTTP, and SSL, present in the lowest layer, are mainly concerned with transmission; they are used to send web pages via a communication channel over the internet. The next upper layer contains the protocols for XML and XML Schema, both of which were developed by Tim Berners-Lee and the W3C. XML concentrates only on the overall syntax of the document to be transmitted over the internet. However, the same document may have different interpretations at different website locations, which ultimately hinders the integration of information over the web. RDF, developed by the W3C in the late 1990s, overcomes this problem. RDF uses XML syntax but has additional support for expressing semantics, and it can be used for the integration of information over the web in a meaningful way. RDF is merely a specification language that can be used to express syntax and semantics.
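A small sketch with the rdflib library showing how RDF triples link data and how SPARQL queries them; the namespace and the facts are hypothetical.

from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/health/")  # hypothetical vocabulary

g = Graph()
g.add((EX.patient42, RDF.type, EX.Patient))           # triple: subject-predicate-object
g.add((EX.patient42, EX.diagnosedWith, EX.Diabetes))
g.add((EX.Diabetes, EX.label, Literal("Type 2 diabetes mellitus")))

# SPARQL query over the linked data
results = g.query("""
    PREFIX ex: <http://example.org/health/>
    SELECT ?disease WHERE { ex:patient42 ex:diagnosedWith ?disease . }
""")
for row in results:
    print(row.disease)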


FIGURE 18.3 Layers of the Semantic Web.

RDF, however, is merely a specification language for expressing syntax and semantics. What entities need to be specified, and how a common definition comes to be accepted by a community, is a more serious concern. Ontologies, which appear in the layer above RDF, are a solution to this problem. Communities in fields such as medicine and finance use ontologies to arrive at common definitions and specifications of the entities transmitted over the web; ontologies can, for instance, describe various diseases or financial entries. Once defined by a specific community, these ontologies can be published. The final layer, the trust management layer, is all about logic, proof, and trust: it specifies how one comes to trust the information communicated over the web.

This chapter focuses on the key security issues of the Semantic Web at all of these layers. To secure Semantic Web data, protocols must be employed at the different layers. The security protocols used must allow timely processing of data and must not interfere with the work of the layers above or below.

The issues confronting the Semantic Web community are significant. In the summer of 2001, many of the big names in the Semantic Web community gathered at Stanford University for a three-day working symposium. The reports from two of the three principal groups showed that there is a significant absence of solid work in this field and, more importantly, a troubling lack of a solid path for further research. In the track on interoperability, while some progress was made toward establishing a framework for building Semantic Web protocols, it was concluded that "crucial problems were either absent or not dealt with, demonstrating that researchers still have not had the opportunity to investigate this area completely, or even to formulate all of the significant research questions."


The ontology track appeared to make even less progress and finally delivered a disappointing report with the following conclusions:

• Researchers need useful outlets for their energy.
• There is no consensus on a process for building a common framework.
• The overall significance of creating basic units of meaning is unclear.
• There is no evidence that Semantic Web tools are useful or practical, which calls into question whether experimentation should continue at all.

After almost two decades, the Semantic Web should step out of its identity crisis into adolescence. In search of a target market for adoption, research in semantic technologies has ridden others’ waves perhaps a little too often.

18.2 Related work

Mishra et al. (2018) focus on the combination of Semantic Web technologies with IoT systems. Several research issues and incidents in the field of security for the Semantic Web of Things (SWoT) are discussed; further work can be done on the interoperability of IoT implementations. Medic and Golubovic (2010) detail a number of ways of applying Semantic Web security through the layers. Their results show that what matters most is generating trusted information: the security of information serves as evidence for a well-founded security model, and this is the starting point for modeling trust. Thuraisingham (2005) presented updated standards for the Semantic Web and detailed a standard for a secure Semantic Web. Standards play a major role in the development of the Semantic Web, and the W3C has been very effective in specifying standards for XML, RDF, and the Semantic Web; security and privacy are areas that must be emphasized while such standards are developed.

Kagal et al. undertook research on marking up web entities with a semantic policy language and on using distributed policy management as an alternative to conventional access control and authentication models. They proposed a security framework based on a policy language that addresses security challenges for agents, web resources, and services in the Semantic Web. The results show that the approach provides a much stronger and more flexible basis for securing web entities than common security models. Bertino and Ferrari (2002) propounded an access control model that supports the selective distribution of XML documents among large communities of users; their approach requires a minimal number of keys to encrypt documents. An approach based on cryptolopes was also proposed that permits the same documents to be sent to all users, although the access control policies were yet to be implemented. Halpin proposed the notion of a semantic attacker, one who attacks inference procedures, and proposed alternatives employing modern cryptography, in particular Transport Layer Security (TLS), to prevent such attacks. Since TLS certificates are now free and the Semantic Web is not yet widely used, moving from HTTP to HTTPS would have few real-world implications, and Semantic Web vocabularies and tooling should therefore soon switch to employing TLS-encrypted HTTPS URIs.


Dwivedi et al. (2011) presented a number of challenges around trust in Semantic Web security, addressed through Web Services Security and XML Signature. The basis of trustworthy Semantic Web security is laid down by the essential specifications, tools, and protocols employed in the trustworthy Semantic Web scenario. Jain (2020) provides an overview of knowledge engineering, focusing on the explanation of decision support systems with the help of Semantic Web technologies. Jain and Patel (2020) proposed a technology for event identification using one of the layers of the Semantic Web, the ontology layer. Patel and Jain (2020) proposed a new approach for aligning the ontology layer of the Semantic Web well with the other layers.

18.3 Security standards for the Semantic Web

Security is closely related to trust, and security protocols must be implemented at all layers of the Semantic Web, not at any single one. At the lowest layer, the protocols in use, namely TCP/IP, HTTP, and sockets, must be secured; algorithms have been developed over time to secure these lower-level protocols. Next, we need to secure XML and XML schemas: controlled access must be granted to various portions of a document for varied purposes. Securing RDF is more challenging still. To secure RDF we first need secure XML, and then protocols that secure interpretation and semantics. Next comes securing ontologies and interoperation: ontologies themselves generally carry a level of security, with some portions kept secret and others left unclassified.

18.3.1 Securing the extensible markup language

XML documents mainly follow a graph structure, and the main question that arises in securing XML documents and their schemas is whether to provide access to the full document or only to parts of it. Authorization models for XML have been developed with a focus on controlled access and dissemination policies, taking both pull and push architectures into consideration. The policies themselves are expressed in XML, and their specification states which users may access which parts of which documents. Algorithms for access control, and for computing views of the results, have also been presented (Jain, 2020). These serve document owners who publish documents, subjects who request access to those documents, and untrusted publishers who may see only the documents they are allowed to.

XML security is specified and standardized by the W3C. A recent project on XML security is mainly concerned with developing and implementing security standards for XML (Jain and Patel, 2020). The focus is on XML-Encryption Syntax and Processing, XML-Signature Syntax and Processing, and XML Key Management. The W3C also runs a number of working groups, including the XML Signature and XML Encryption working groups (Patel and Jain, 2020; Bertino et al., 2002).


The main focus of these standards is to determine how, and which of, these protocols are to be implemented to provide XML security.
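To make element-level access control concrete, here is a minimal sketch, assuming a toy policy that maps element names to the roles allowed to read them; the tags, roles, and policy format are hypothetical illustrations and are not drawn from the W3C standards above.

```python
# Hypothetical element-level access control over an XML document.
import xml.etree.ElementTree as ET

POLICY = {  # assumed policy: tag -> set of roles allowed to view it
    "diagnosis": {"physician"},
    "insurance": {"physician", "billing"},
}

def filter_for_role(xml_text: str, role: str) -> str:
    """Return a view of the document with subtrees the role may not see removed."""
    root = ET.fromstring(xml_text)
    def prune(elem):
        for child in list(elem):
            allowed = POLICY.get(child.tag)
            if allowed is not None and role not in allowed:
                elem.remove(child)   # strip disallowed subtrees
            else:
                prune(child)
    prune(root)
    return ET.tostring(root, encoding="unicode")

record = ("<record><name>Jane</name>"
          "<diagnosis>otitis media</diagnosis>"
          "<insurance>plan A</insurance></record>")
print(filter_for_role(record, "billing"))  # diagnosis removed, insurance kept
```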

18.3.2 Securing the resource description framework

RDF is the foundation of the Semantic Web. RDF provides machine-understandable documents, which is a limitation of XML, and thereby supports search, cataloging, and interoperability. It details the contents of documents and the relationships between the entities in them. While XML provides notation and syntax, RDF concentrates on extending the reach of standardized semantic information. Resources, properties, and statements are the three components of the RDF model. A resource is anything that can be described in detail using RDF expressions. Properties are predefined attributes assigned to a resource that further specify its intended purpose. Combining a resource with an attribute assigned through a property gives an RDF statement: a statement consists of a subject, a predicate, and an object, and can be represented using an RDF diagram. There are numerous aspects specific to RDF syntax. The intended interpretation of the terms used in RDF statements is of immense significance and is supplied by RDF schemas: a schema is a sort of dictionary used to interpret the specific terms in an RDF statement, realized using RDF and XML namespaces.

RDF also offers a container model and statements about statements. Bag, Sequence, and Alternative are the three types of container object. A bag is an unordered list of literals, so when a property has many values their order carries no significance. A sequence, on the other hand, is an ordered list of resources, so ordering is of utmost importance. An alternative is a list of resources that provide alternative values for a property. With suitable support from RDF, statements can also be made about other statements. Both containers and statements about statements can be represented using object-like diagrams, and each RDF diagram has a formal model and grammar associated with it.

To make the Semantic Web secure, RDF documents must be secured. For this, XML must first be secured on the syntactic side, and then security must be provided at the semantic level. Providing security at the level of resources, properties, and statements is the main challenge in securing RDF, along with enforcing access control at a finer grain of granularity and protecting the property and statement components. What could the security features of the container model be? How can one secure bags, sequences, and alternatives? How can security policies be specified in RDF? How can semantic inconsistencies among policies be resolved? What protection exists for statements about statements? How can RDF schemas be protected? These are further questions that must be answered through research. XML security is comparatively easy; the main challenge is securing RDF.
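The following hedged sketch renders the constructs described above (a plain statement, a Bag container, and a statement about a statement) using the rdflib library; the namespace, resource names, and the medical example are assumed purely for illustration.

```python
# Sketch of RDF statements, containers, and reification with rdflib.
from rdflib import Graph, Namespace, Literal, BNode
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")  # hypothetical vocabulary
RDFNS = Namespace("http://www.w3.org/1999/02/22-rdf-syntax-ns#")
g = Graph()

# A plain statement: subject, predicate, object.
g.add((EX.patientRecord, EX.containsDiagnosis, Literal("otitis media")))

# A Bag container: unordered values of one property (rdf:_1, rdf:_2, ...).
symptoms = BNode()
g.add((symptoms, RDF.type, RDF.Bag))
g.add((symptoms, RDFNS["_1"], Literal("fever")))
g.add((symptoms, RDFNS["_2"], Literal("ear pain")))
g.add((EX.patientRecord, EX.hasSymptoms, symptoms))

# A statement about a statement (reification): who asserted the diagnosis.
stmt = BNode()
g.add((stmt, RDF.type, RDF.Statement))
g.add((stmt, RDF.subject, EX.patientRecord))
g.add((stmt, RDF.predicate, EX.containsDiagnosis))
g.add((stmt, RDF.object, Literal("otitis media")))
g.add((stmt, EX.assertedBy, EX.drSmith))

print(g.serialize(format="turtle"))
```

Each of these constructs is a distinct protection target: a policy might allow reading the statement but not the reified assertion of who made it.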


18.3.3 Information interoperability in a secured way

Raw data alone does not make sense; it must be refined at many levels before any relevant meaning emerges. Data that makes sense is called information, and the web world is a cloud of information. For some decades the database community has been working on database integration, and several challenges have arisen, including the interoperability of heterogeneous data sources. Schemas have been employed for integrating multiple databases; schemas are essentially data describing the data in the databases (http://xml.apache.org/security/). The amalgamation of varied and separate data sources with the web is the need of the current scenario. Instead of residing in databases, data may live in file systems, structured or unstructured, and may take any form and quantity: textual data, image data, audio, or video. There is a need to develop technologies that can amalgamate the heterogeneous information sources on the web, and the services of the Semantic Web are needed for this purpose.

The schema integration work of Sheth and Larson was extended to security policies (http://www.w3.org/Signature/; http://www.w3.org/Encryption/2001/). There are many persistent web sites, each with its own unique security policy, and the need is to combine these varied security policies to create an amalgamated database system. On the web, a major role in information integration is played by ontologies. The role of ontologies in integrating information securely, the provision allowing ontologies to carry their own security policies, the types and nature of encryption techniques that can provide security, and the degree of trust that can be placed in information integration are a few of the many questions that must be researched to provide secure information interoperability.

18.3.3.1 Management of trust for the Semantic Web

In the last few years there has been some work on trust and the Semantic Web. The challenge arises when one has to trust both the information on the web and its sources. Negotiation among different parties entering into contracts, ways to assimilate constructs for trust management and negotiation into RDF and XML, and the possible semantics of trust management are further challenges that need attention. Several researchers are working on protocols for trust management, special-purpose languages are being developed for specifying trust management constructs, and there is research on the foundations of trust management. For example: if X trusts Y and Y trusts Z, can X trust Z? How can data and information be shared on the Semantic Web while autonomy is still maintained? How can trust be disseminated? For example: if X trusts Y 50% of the time and Y trusts Z 30% of the time, what value should be assigned to X trusting Z? How can trust be incorporated into semantic interoperability, and what would the service primitives for negotiation and trust look like? Some situations demand 100% trust while others require only 50%. One simple propagation rule is sketched below.
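As a minimal sketch of one possible answer, assuming trust composes multiplicatively along a chain (an assumption this chapter poses as an open question rather than prescribes):

```python
# Multiplicative trust propagation along a chain: 0.5 * 0.3 = 0.15.
from math import prod

trust = {("X", "Y"): 0.5, ("Y", "Z"): 0.3}  # assumed pairwise trust values

def chain_trust(path):
    """Trust along a path as the product of pairwise trust values."""
    return prod(trust[(a, b)] for a, b in zip(path, path[1:]))

print(round(chain_trust(["X", "Y", "Z"]), 2))  # 0.15
```

Other propagation semantics (minimum along the path, weighted averages, or refusing to propagate at all) are equally defensible; choosing one is exactly the research question raised above.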


Trust propagation and the propagation of privileges are further research topics. For example: if a privilege is granted to X, what privilege can be transferred to Y? How are privileges composed? A calculus and algebra for formulating privileges requires much more research. Logic, proof, and trust form the top layer of the Semantic Web; this layer deals with trust management and negotiation among various agents, and helps in evaluating the foundations and building logics for trust management.

18.4 Different attacks on the Semantic Web

What exactly is security? Defined informally, in terms of attackers: if messages are encrypted, an attacker cannot recover the original text without the aid of the secret keys. The original text is referred to as plaintext (P) and the text encrypted under the secret key as ciphertext (C). The property of semantic security, in the words of Goldwasser and Micali, is: "Whatever is efficiently computable about the plain text given the cipher text is also efficiently computable without the cipher text" (Goldwasser and Micali, 1984). Privacy is defined with respect to an anonymity set: an entity should not be identifiable by an adversary within that set of entities.
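One common way to state this formally (a sketch of the usual textbook formulation, omitting auxiliary information; it is not spelled out in this chapter) is that for every efficient adversary A there exists an efficient simulator A′ that, given only the message length, predicts any function f of the plaintext essentially as well:

\[
\bigl|\Pr[A(1^n, E_k(m)) = f(m)] \;-\; \Pr[A'(1^n, 1^{|m|}) = f(m)]\bigr| \;\le\; \mathrm{negl}(n)
\]

for every message m and every function f, where negl denotes a negligible function of the security parameter n. In other words, seeing the ciphertext gives the adversary no usable advantage.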

18.4.1 Importance of transport layer security on the Semantic Web

TLS is a well-known Internet Engineering Task Force (IETF) standard employed to encrypt data transmitted over HTTP. Encrypting data sent over HTTP is required to protect it, and the channel over which the data is sent must be authenticated so that the origin of a message can be proven. TLS can therefore be deployed to preserve the privacy and security of information on the Semantic Web.

It can be asserted that a Semantic Web URI is merely a name, so TLS is not needed: if no access whatsoever to the data behind a URI is available, the URI is no more than an arbitrary string in a knowledge representation language, and neither HTTP nor HTTPS has any effect on the formal semantics that govern the inferences drawn by a Semantic Web reasoning engine. The opposing view, argued by the Linked Data community, is that URIs link to raw data that applications using the Semantic Web actually retrieve (Bizer et al., 2009).

A number of attacks can be performed by a network attacker if TLS is not employed at the origin. The objective of any network attacker is to exploit access to the plaintext. In trivial attacks, attackers can intercept data sent over HTTP; beyond that, an attacker can deliver whatever data they want without being identified, since the origin is not authenticated. Such attacks can be carried out using open-source tools like Wireshark and sslstrip.

The W3C Upgrade Insecure Requests specification can be used for opportunistic encryption: HTTP is automatically upgraded to HTTPS where a server requests, via an HTTP header, that an HTTPS URI be used if possible (Mike West, 2015).


HSTS can be used by the server to prevent downgrade attacks in which the 'S' of HTTPS is stripped from a URI (Hodges et al., 2012). Likewise, a Semantic Web application or a browser can retrieve the data of the HTTPS URI when given a Semantic Web HTTP URI. It is still asserted that on the Semantic Web, HTTPS URIs and HTTP URIs are equivalent: the only difference is the extra 's' in the HTTPS URI's identifier, and as such they could never have been used for different purposes.

Even when TLS is employed correctly, web attackers may still attack the web application itself. Attackers visit websites not only to retrieve resources and knowledge; origins are now actors that operate on behalf of users and interact with other origins to gain information and knowledge about them. In this context we look in detail at WebID+TLS. With the effort of WebID+TLS to identify people using URIs, there has been a continuous increase in awareness of TLS in the Semantic Web community. The motivating goal is to create decentralized social networking applications (Bizer et al., 2009). WebID ensures that every person in the semantic community has their own unique URI. This benefits both the community, which can maintain the identification of individuals, and the individual, who can retrieve their personal data as provided by RDF statements.

The problem with WebID+TLS is that it ignores the privacy and security boundaries of the web. Currently, in WebID+TLS, the client certificate is generated using the <keygen> tag and stored in the TLS key store; merely by asking for or querying the certificate, an attacker can track the user. Browser vendors are deprecating keygen from HTML, and client certificates are now being managed from the application layer to counter this insecurity; as a result, WebID+TLS will stop working. The amalgamation of the Semantic Web with up-to-date cryptographic protocols remains work for the future. Passwords can be replaced by hardware-backed tokens or other authenticators used for authentication. These are being developed under the W3C Web Authentication API, which is designed both to respect the same-origin policy (keys differ per origin) and to use modern cryptographic primitives such as the Elliptic Curve Digital Signature Algorithm (Bharadwaj et al., 2016). There is also a dilemma over the layers of the web: TLS is a network-level protocol rather than an application-level protocol.
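As a small sketch of dereferencing a Semantic Web URI over TLS, so that the fetched triples inherit the channel's integrity and authentication guarantees; the URI is hypothetical and the requests library is an assumed dependency:

```python
# Fetch an RDF document over HTTPS with certificate verification (on by default).
import requests

uri = "https://example.org/ontology#Disease"  # hypothetical HTTPS URI
resp = requests.get(uri, headers={"Accept": "text/turtle"}, timeout=10)
resp.raise_for_status()                        # fail loudly on transport errors
# If the origin sets HSTS, the browser/client should refuse future downgrades:
print(resp.headers.get("Strict-Transport-Security"))
```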

18.5 Drawbacks of the existing privacy and security protocols in W3C social web standards

W3C Semantic Web standards do not take modern cryptography and adversaries into account. After authentication through WebID+TLS, a user's reading and writing of data via HTTP GET and HTTP POST is governed by Solid permissions. Solid employs W3C Linked Data Notifications to share a user's RDF data, using either the Linked Data Platform or a publish-subscribe model. Linked Data Notifications require signed data and a whitelist but do not specify how either is to be implemented. Linked Data Signatures, in turn, neglect the foundation of digital signatures: signatures operate over well-defined byte-strings and cannot be applied to abstract graphs without a normalization step.


WebSub claims to support digital signatures for authentication, to block Sybil attacks, but it does not employ asymmetric digital signatures: it generates an HMAC, a symmetric-cryptography construction that depends on a shared secret. Authentication in scalable publish-subscribe systems is a well-known problem on which current research is focusing, and solutions employing identity-based cryptography have been proposed but are not yet used in WebSub (Tariq et al., 2014).
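A minimal sketch of the HMAC construction in question, using Python's standard library; the secret and payload are assumed for illustration. Both parties hold the same key, so the tag proves only "someone with the key" rather than a specific identity, which is exactly why it cannot resist Sybil attacks the way an asymmetric signature could:

```python
# Symmetric message authentication with HMAC-SHA256.
import hmac, hashlib

shared_secret = b"hub-and-subscriber-shared-key"   # assumed out-of-band exchange
payload = b'{"topic": "https://example.org/feed"}'

tag = hmac.new(shared_secret, payload, hashlib.sha256).hexdigest()

# The receiver recomputes the tag and compares in constant time.
expected = hmac.new(shared_secret, payload, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))  # True
```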

18.6 Semantic attackers

The attacks above stem from the lack of network-level encryption or from bad protocol design; the semantic attacker, by contrast, aims to subvert inference procedures. A semantic attacker mounts attacks at the web level in order to locate the RDF triples on which an inference procedure relies and maliciously modify them, altering the outcome of the inference. Suppose an inference procedure I operates over a set of x triples R = R1 ∧ R2 ∧ … ∧ Rx and generates new triples S, written I(R) → S. If a malicious attacker knows the plaintext, they can alter it at will, replacing a triple R1 with Ra. The tampered set is then R′ = Ra ∧ R2 ∧ … ∧ Rx, and the inference procedure yields a result partially under the attacker's control: I(R′) → S′, where S ≠ S′. The only trustworthy Semantic Web inference procedure is one that does not rely on insecure HTTP infrastructure: triples retrieved via web-level protocols must preserve their security properties, and TLS must be employed accurately to prevent such attacks.
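The following toy sketch (an assumed example, not drawn from the chapter) shows how tampering with one fetched triple changes what a naive inference procedure concludes:

```python
# A naive inference procedure: transitive closure over subClassOf triples.
def infer_subclass_closure(triples):
    closure = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, p, b) in list(closure):
            for (c, q, d) in list(closure):
                if p == q == "subClassOf" and b == c and (a, p, d) not in closure:
                    closure.add((a, p, d))   # derive a new triple
                    changed = True
    return closure

honest = {("Aspirin", "subClassOf", "Analgesic"),
          ("Analgesic", "subClassOf", "Drug")}
# Attacker rewrites one triple in transit over plain HTTP:
tampered = {("Aspirin", "subClassOf", "Placebo"),
            ("Placebo", "subClassOf", "Drug")}

print(("Aspirin", "subClassOf", "Drug") in infer_subclass_closure(honest))        # True
print(("Aspirin", "subClassOf", "Analgesic") in infer_subclass_closure(tampered)) # False
```

The derived set S′ differs from S even though most of the input triples are untouched, which is the essence of the attack described above.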

18.7 Privacy and the Semantic Web

In today's scenario, security is of utmost importance for the vision of the Semantic Web. Although it is a nonfunctional requirement, it must be considered at every layer of the Semantic Web. Protecting the Semantic Web means ensuring unaltered, authorized access to data and information and keeping a check on the unauthorized use of data and resources. Privacy and security go hand in hand: together they maintain the secrecy of any individual's personal information on the Semantic Web. Preserving privacy means not disclosing confidential data.

There are various approaches to access control. An access control model must not reveal information to any entity other than the authorized one, so privacy requirements must be guaranteed by the model itself. Earlier approaches tried to use the Platform for Privacy Preferences (P3P) to unravel the privacy problem of access control models (Cranor et al., 2002), but P3P lacks expressive power, which is a major drawback. Semantic requirements for clearly and concisely defining complex user credentials are a further concern. Semantic-aware privacy and access control models were therefore propounded. Such models provide approaches that permit the contents of a user's credentials to be decomposed into atomic components, letting users clearly separate the items that are to be released.


Privacy-aware access control policies must express, evaluate, and combine protection requirements that take into consideration both the direct and the indirect release of information about an entity desiring access to information and services. The amalgamation of privacy-preserving approaches with access control models averts the disclosure of the personal data of Semantic Web users.

Trust negotiation can grant subjects in different security domains access to protected resources and services (Squicciarini et al., 2006). Disclosing credentials to establish trust, however, can itself lead to a loss of control over information and privacy, so trust negotiation approaches must protect the privacy of sensitive personal data while credentials are exchanged. There is a tradeoff between protection and winning trust: trust in a specific setting is earned by revealing a certain number and kind of credentials, and the privacy of credential data is lost as the credentials are revealed (Tariq et al., 2014; Cranor et al., 2002). Privacy-preserving approaches and trust negotiation strategies can be combined to control the release of an entity's data while the required credentials are disclosed.

Inference is the process of deducing new information by posing queries. The problem arises when users who are unauthorized nevertheless come to know the deduced information. The inference problem and semantic data mining become a greater threat to the security of the Semantic Web than to the current web, because the data models used by the Semantic Web accelerate the information extraction and inference process. Consequently, an individual may reach unauthorized information by sending requests and drawing inferences over the retrieved data. Inference problems as studied in statistical and relational databases resemble the indirect disclosures that result from the inference capabilities of the Semantic Web, but the two settings differ in several respects, such as data completeness, the extent of data control, data models, scalability, and data quality (Squicciarini et al., 2006).

Access control models, together with trust, offer an opportunity to build an authentication mechanism into the authorization process. Access control decisions then depend not only on an individual's identity but also on the user's credentials and attributes. Moreover, the trustworthiness of the information and of the service provider is considered independently of the authorization controlling the individual.

18.8 Directions for future security protocols for the Semantic Web

As the previous discussion shows, although each layer of the Semantic Web incorporates security, there is a need for continuously enforced layer-level security. XML security needs further research so that XML documents and XML schemas can be secured more efficiently; one approach is to carry lower-level security up into the XML layer so that each document and schema is secured both ways. RDF security needs very rigorous research, since it must also cover the semantics.


There must be provision for a common security level for each unique ontology. Semantic interoperability should also be made smooth, with continuous monitoring of the ontologies of different communities. The integration of policies for the Semantic Web and its ontologies remains a big open question. These issues need continuous improvement so that the transmission of documents over the web becomes smooth and secure, with a common policy for each action in the transmission.

18.9 Conclusion

The importance of security in protecting confidential information while communicating with others is paramount in today's fast-moving world. Cryptographic algorithms play a major role in securing data and information, but these algorithms alone cannot ensure security. The Semantic Web is a growing field in which data on the web is made machine readable. Communication is performed across the different layers proposed by Tim Berners-Lee, and security must be provided at all of them. Cryptographic algorithms alone cannot secure every layer; they must be accompanied by various layer-specific protocols so that communication over the web of data is secured. Research is ongoing in various domains to protect all the layers of the Semantic Web, whether by providing security for the XML layer, the RDF layer, or information interoperability, or by establishing trust among different communities.

References

Bertino, E., Ferrari, E., 2002. Secure and selective dissemination of XML documents. ACM Transactions on Information and System Security 5 (3), 290-331.
Bertino, E., et al., 2002. Access control for XML documents. Data and Knowledge Engineering, North Holland, 237-260.
Bharadwaj, V., Gong, H.L.V., Balfanz, D., Czeskis, A., Birgisson, A., Hodges, J., Jones, M., Lindemann, R., Jones, J.C., 2016. Web Authentication: An API for Accessing Scoped Credentials. W3C. https://www.w3.org/TR/webauthn/.
Bizer, C., Heath, T., Berners-Lee, T., 2009. Linked data: the story so far. Semantic Services, Interoperability and Web Applications: Emerging Concepts, pp. 205-227.
Cranor, L., Langheinrich, M., Marchiori, M., Presler-Marshall, M., Reagle, J., 2002. The Platform for Privacy Preferences 1.0 (P3P1.0) Specification. W3C.
Dwivedi, A., Kumar, S., Dwivedi, A., Singh, M., 2011. Current security considerations for issues and challenges of trustworthy semantic web. International Journal of Advanced Networking and Applications 3 (1), 978-983.
Goldwasser, S., Micali, S., 1984. Probabilistic encryption. Journal of Computer and System Sciences 28 (2), 270-299.
Halpin, H. Semantic insecurity: security and the Semantic Web.
Hodges, J., Jackson, C., Barth, A., 2012. HTTP Strict Transport Security (HSTS). https://tools.ietf.org/html/rfc6797.
Jain, S., 2020. Understanding Semantics-Based Decision Support. CRC Press, Taylor & Francis Group. ISBN 9780367443139.
Jain, S., Patel, A., 2020. Smart ontology-based event identification. In: IEEE 13th International Symposium on Embedded Multicore/Many-core Systems-on-Chip (MCSoC-2019), pp. 135-142. IEEE. ISBN 978-1-7281-4882-3.
Kagal, L., Finin, T., Joshi, A. A policy based approach to security for the Semantic Web. Springer.
Medic, A., Golubovic, A., 2010. Making secure Semantic Web. Universal Journal of Computer Science and Engineering Technology 1 (2), 99-104.
Mike West, 2015. Upgrade Insecure Requests. W3C. https://www.w3.org/TR/upgrade-insecure-requests/.
Mishra, S., Jain, S., Rai, C., Gandhi, N., 2018. Security challenges in Semantic Web of Things. In: International Conference on Innovations in Bio-Inspired Computing and Applications. Springer, Cham, pp. 162-169.
Patel, A., Jain, S., 2020. A novel approach to discover ontology alignment. Recent Advances in Computer Science and Communications 13. https://doi.org/10.2174/2666255813666191204143256. In press.
Sheth, A., Larson, J., 1990. Federated database systems. ACM Computing Surveys, 183-236.
Squicciarini, A.C., Bertino, E., et al., 2006. Achieving privacy in trust negotiations with an ontology-based approach. IEEE Transactions on Dependable and Secure Computing 3 (1), 13-30.
Story, H., Harbulot, B., Jacobi, I., Jones, M., 2009. FOAF+SSL: RESTful authentication for the social web. In: Proceedings of the First Workshop on Trust and Privacy on the Social and Semantic Web (SPOT2009).
Tariq, M.A., Koldehofe, B., Rothermel, K., 2014. Securing brokerless publish/subscribe systems using identity-based encryption. IEEE Transactions on Parallel and Distributed Systems 25 (2), 518-528.
Thuraisingham, B., 1994. Security issues for federated database systems. Computers & Security, 200-212.
Thuraisingham, B., 1997. Data Management Systems: Evolution and Interoperation. CRC Press, Boca Raton, FL.
Thuraisingham, B., 2005. Security standards for the semantic web. Computer Standards & Interfaces, 257-268.
