Irena Roterman-Konieczna (Ed.)
Simulations in Medicine | Pre-clinical and Clinical Applications
Editor Prof. Dr. Irena Roterman-Konieczna Jagiellonian University Medical College, Department of Bioinformatics and Telemedicine Ul. Sw. Lazarza 16, 31-530 Krakow, Poland e-mail: [email protected]
This book has 247 figures and 5 tables.
The publisher, together with the authors and editors, has taken great pains to ensure that all information presented in this work (programs, applications, amounts, dosages, etc.) reflects the standard of knowledge at the time of publication. Despite careful manuscript preparation and proof correction, errors can nevertheless occur. Authors, editors and publisher disclaim all responsibility for any errors or omissions, and any liability for the results obtained from use of the information, or parts thereof, contained in this work. The citation of registered names, trade names, trademarks, etc. in this work does not imply, even in the absence of a specific statement, that such names are exempt from laws and regulations protecting trademarks etc. and therefore free for general use.

ISBN 978-3-11-040626-9
e-ISBN (PDF) 978-3-11-040634-4
e-ISBN (EPUB) 978-3-11-040644-3

Library of Congress Cataloging-in-Publication Data
A CIP catalog record for this book has been applied for at the Library of Congress.

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.

© 2015 Walter de Gruyter GmbH, Berlin/Boston
Cover image: Eraxion/istock/thinkstock
Typesetting: PTP-Berlin, Protago-TEX-Production GmbH, Berlin
Printing and binding: CPI books GmbH, Leck
♾ Printed on acid-free paper
Printed in Germany
www.degruyter.com
Preface

“Simulations in medicine” – typing these words into Google produces a long list of institutions that train students in all practical aspects of medicine using phantoms. The student may learn to perform a variety of procedures and surgical interventions by interacting with a simulated patient. Such centers perform a great range of tasks related to medical education; however, medical simulations are not limited to manual procedures. The very word “simulation” is closely tied to computer science. It involves recreating a process which occurs over a period of time. The process may include actions performed manually by a student, but it can also comprise events occurring in virtual space, under specific conditions and in accordance with predetermined rules – including processes occurring on the molecular (Chapter 1) or cellular (Chapter 2) level, at the level of a communication system (Chapter 3) or organs (Chapters 4 and 5), or even at the level of the complete organism – musculoskeletal relations (Chapter 6). “Simulations in medicine” also involve recreating the decision-making process in the context of diagnosis (Chapters 7, 8, and 9), treatment (Chapters 10 and 11) and therapy (Chapter 12), as supported by large-scale telecommunication (Chapter 13) and, finally, in patient support (Chapter 14). This interpretation of the presented concept – focusing on the understanding of phenomena and processes observed in the organism – is the core subject of our book and can, in fact, be referred to as “PHANTOMLESS medical simulations”. The list of problems which can be presented in the form of simulations is vast. Some selection is therefore necessary. While our book adopts a selective approach to simulations, each simulation can be viewed as a specific example of a generic phenomenon: indeed, many biological events and processes can be described using coherent models and assigned to individual categories.
This pattern-based approach broadens the range of interpretations and facilitates predictions based on the observable analogies. As a result, simulation results become applicable to a wide category of models, permitting further analysis. One such universal pattern which we will refer to on numerous occasions is the concept of an “autonomous entity”. The corresponding definition is broad, encompassing all systems capable of independent operation, ensuring their own survival and homeostasis. This includes individual organisms, but also complex social structures such as ant colonies, beehives or even factories operating under market conditions. The structures associated with the autonomous operation of these entities share certain common characteristics – they include e.g. construction structures which fulfill the role of “building blocks” (Fig. 1 (a)), function-related structures which act in accordance with automation principles (Fig. 1 (b)) and, finally, structures responsible for sequestration of materials, making them compact and durable while also ensuring that they can be easily accessed when needed (Fig. 1 (c)).
Fig. 1: Symbolic depiction of the structural and functional characteristics of the organism as an autonomous entity, comprising three basic types of components (a, b, c) corresponding to specific aims: (a) construction; (b) function; (c) storage.
Living organisms conform to the model described above. Each problem, when expressed in the form of a simulation, has its place in a coherent system – much like a newly acquired book in a library collection. The division presented also helps explain common issues and problems relevant to each group of models. Afflictions of the skeletal system, metabolic diseases or storage-related conditions can all be categorized using the schema presented above (although some of them may affect more than one category). Even randomly selected simulations follow this generalized model, contributing to proper categorization of biological phenomena. This fact underscores the importance of simulation-based imaging. The journal “Bio-Algorithms and Med-Systems”, published by De Gruyter, invites all readers to submit papers concerning the widely understood spectrum of PHANTOMLESS simulations in medicine. You are invited to visit: http://www.degruyter.com/view/j/bams
Krakow, March 2015
Irena Roterman-Konieczna
Contents

Preface | V
List of authors | XV
Part I: Molecular level

Monika Piwowar and Wiktor Jurkowski
1 Selected aspects of biological network analysis | 3
1.1 Introduction | 3
1.2 Selected biological databases | 5
1.2.1 Case study: Gene Expression Omnibus | 6
1.2.2 RegulonDB | 9
1.3 Types of biological networks | 10
1.3.1 Relations between molecules and types of networks | 10
1.3.2 Biochemical pathways | 12
1.4 Network development models | 14
1.4.1 Selected tools for assembling networks on the basis of gene expression data | 14
1.4.2 Selected tools for reconstruction of networks via literature mining | 15
1.5 Network analysis | 16
1.5.1 Selected tools | 17
1.5.2 Cytoscape analysis examples | 23
1.6 Summary | 25
Part II: Cellular level

Jakub Wach, Marian Bubak, Piotr Nowakowski, Irena Roterman, Leszek Konieczny, and Katarzyna Chłopaś
2 Negative feedback inhibition – Fundamental biological regulation in cells and organisms | 31
2.1 Negative feedback-based systems simulations | 41
2.1.1 Introduction | 41
2.1.2 Glossary of Terms | 41
2.1.3 Software model | 42
2.1.4 Application manual | 45
2.1.5 OS model example | 50
2.1.6 Simulation algorithm | 52
Irena Roterman-Konieczna
3 Information – A tool to interpret the biological phenomena | 57
Part III: Organ level

Anna Sochocka and Tomasz Kawa
4 The virtual heart | 65

Marc Ebner and Stuart Hameroff
5 Modeling figure/ground separation with spiking neurons | 77
5.1 Introduction | 77
5.2 Figure/ground separation | 79
5.3 Spiking neural networks | 81
5.4 Lateral connections via gap junctions | 82
5.5 Simulation of a sheet of laterally connected neurons | 84
5.6 Basis of our model | 91
5.7 Conclusion | 92
Part IV: Whole body level

Ryszard Tadeusiewicz
6 Simulation-based analysis of musculoskeletal system properties | 99
6.1 Introduction | 99
6.2 Components of a motion simulation model | 101
6.2.1 Simulating the skeleton | 102
6.2.2 Bone model simulations | 105
6.2.3 Muscle models | 110
6.2.4 Velocity-dependent simulations of the muscle model | 116
6.3 Summary | 118
6.4 Simulation software available for download | 118
Part V: Diagnostics procedure

Andrzej A. Kononowicz and Inga Hege
7 The world of virtual patients | 121
7.1 Introduction | 121
7.2 What are virtual patients? | 121
7.3 Types of virtual patient | 122
7.4 The motivation behind virtual patients | 125
7.5 Theoretical underpinnings of virtual patients | 125
7.5.1 Experiential learning theory | 126
7.5.2 Theory of clinical reasoning | 127
7.6 The technology behind virtual patients | 128
7.6.1 Virtual patient systems | 129
7.6.2 Components of virtual patients | 130
7.6.3 Standards | 132
7.7 How to use virtual patients? | 132
7.7.1 Preparation for or follow-up of face-to-face teaching | 133
7.7.2 Integration into a face-to-face session | 133
7.7.3 Assessment | 133
7.7.4 Learning-by-teaching approach | 134
7.8 The future of virtual patients | 134
Dick Davies, Peregrina Arciaga, Parvati Dev, and Wm LeRoy Heinrichs
8 Interactive virtual patients in immersive clinical environments: The potential for learning | 139
8.1 Introduction | 139
8.2 What are virtual worlds? | 140
8.3 Immersive Clinical Environments (Virtual Clinical Worlds) | 141
8.4 Virtual patients | 141
8.5 Interactive virtual patients in immersive clinical environments | 143
8.6 Case study: Using immersive clinical environments for Inter-Professional Education at Charles R. Drew University of Medicine | 144
8.6.1 Introduction to case study | 144
8.6.2 The case study | 145
8.6.3 Assessment | 158
8.6.4 Summary and lessons learned | 160
8.7 The potential for learning | 161
8.7.1 Why choose immersive clinical environments? | 161
8.7.2 Decide | 164
8.7.3 Design | 166
8.7.4 Develop | 167
8.7.5 Deploy | 172
8.8 Conclusion: “Learning by Doing . . . Together” | 173

Joanna Jaworek-Korjakowska and Ryszard Tadeusiewicz
9 Melanoma thickness prediction | 179
9.1 Introduction | 179
9.2 Motivation | 180
9.3 Clinical definition and importance | 181
9.4 Algorithm for the determination of melanoma thickness | 184
9.5 Melanoma thickness simulations | 187
9.6 Conclusions | 192
Part VI: Therapy

Ryszard Tadeusiewicz
10 Simulating cancer chemotherapy | 197
10.1 Simulating untreated cancer | 197
10.2 Enhanced model of untreated cancer | 200
10.3 Simulating chemotherapy | 202
10.4 Simulation software available for the reader | 206

Piotr Dudek and Jacek Cieślik
11 Introduction to Reverse Engineering and Rapid Prototyping in medical applications | 207
11.1 Introduction | 207
11.2 Reverse Engineering | 207
11.2.1 Phase one – Inputs of medical RE | 209
11.2.2 Phase two – Data acquisition | 210
11.2.3 Phase three – Data processing | 212
11.2.4 Phase four – Biomedical applications | 214
11.3 Software for medical RE | 215
11.3.1 Mimics Innovation Suite | 215
11.3.2 Simpleware ScanIP | 216
11.3.3 3D-DOCTOR | 217
11.3.4 Amira | 217
11.3.5 Other software for 3D model reconstruction | 218
11.3.6 RE and dimensional inspection | 219
11.3.7 Freeform modeling | 219
11.3.8 FEA simulation and CAD/CAM systems | 219
11.4 Methods of Rapid Prototyping for medical applications – Additive Manufacturing | 220
11.4.1 Liquid-based RP technology | 222
11.4.2 Stereolithography (SLA) | 222
11.4.3 Polymer printing and jetting | 223
11.4.4 Digital Light Processing (DLP) | 224
11.4.5 Solid sheet materials | 225
11.4.6 Fused Deposition Modeling (FDM) | 226
11.4.7 Selective Laser Sintering (SLS) | 227
11.4.8 Selective Laser Melting (SLM) | 227
11.4.9 Electron Beam Melting (EBM) | 228
11.4.10 Tissue engineering | 229
11.5 Case studies | 230
11.5.1 One-stage pelvic tumor reconstruction | 230
11.5.2 Orbital reconstruction following blowout fracture | 232
11.6 Summary | 233
Zdzisław Wiśniowski, Jakub Dąbroś, and Jacek Dygut
12 Computer simulations in surgical education | 235
12.1 Introduction | 235
12.2 Overview of applications | 235
12.2.1 Gray’s Anatomy Student Edition, Surgical Anatomy – Student Edition, digital editions of anatomy textbooks for iOS (free) and Android (paid) | 236
12.2.2 Essential Skeleton 4, Dental Patient Education Lite, 3D4Medical Images and Animations, free educational software by 3D4Medical.com, available for iOS, Android (Essential Skeleton 3 – earlier version; paid editions of Essential Anatomy 3 and iMuscle 2) | 236
12.2.3 SpineDecide – An example of point-of-care patient education for healthcare professionals, available for iOS | 239
12.2.4 iSurf BrainView – Virtual guide to the human brain, available for iOS | 240
12.2.5 Monster Anatomy Lite – Knee – Orthopedic guide, available for iOS (Monster Minds Media) | 241
12.2.6 AO Surgery Reference – Orthopedic guidebook for diagnosis and trauma treatment, available for iOS and Android | 243
12.2.7 iOrtho+ – Educational aid for rehabilitationists, available for iOS and Android | 245
12.2.8 DrawMD – Based on General Surgery and Thoracic Surgery by Visible Health Inc., available for iOS | 247
12.2.9 MEDtube, available for iOS and Android | 250
12.3 Specialized applications | 254
12.3.1 Application description | 255
12.4 Simulators | 262
12.4.1 Selected examples of surgical simulators | 263
12.5 Summary | 265
Part VII: Support of therapy

Łukasz Czekierda, Andrzej Gackowski, Marek Konieczny, Filip Malawski, Kornel Skałkowski, Tomasz Szydło, and Krzysztof Zieliński
13 From telemedicine to modeling and proactive medicine | 271
13.1 Introduction | 271
13.2 ICT-driven transformation in healthcare | 272
13.2.1 Overview of telemedicine | 272
13.2.2 Traditional model of healthcare supported by telemedicine | 273
13.2.3 Modeling as knowledge representation in medicine | 274
13.2.4 Towards a personalized and proactive approach in medicine | 275
13.2.5 Model of proactive healthcare | 277
13.3 Computational methods for models development | 278
13.3.1 Computational methods for imaging data | 281
13.3.2 Computational methods for parametric data | 282
13.4 TeleCARE – telemonitoring framework | 282
13.4.1 Overview | 282
13.4.2 Contribution to the model-based proactive medicine concept | 284
13.4.3 Case study | 286
13.5 TeleDICOM – system for remote interactive consultations | 287
13.5.1 Overview | 287
13.5.2 Contribution to the model-based proactive medicine concept | 288
13.6 Conclusions | 290

14 Serious games in medicine | 295

Paweł Węgrzyn
14.1 Serious games for health – Video games and health issues | 295
14.1.1 Introduction | 295
14.1.2 Previous surveys | 296
14.1.3 Evidence review | 299
14.1.4 Conclusions | 310

Ewa Grabska
14.2 Serious game graphic design based on understanding of a new model of visual perception – computer graphics | 318
14.2.1 Introduction | 318
14.2.2 A new model of perception for visual communication | 319
14.2.3 Visibility enhancement with the use of animation | 322
14.2.4 Conclusion | 323
Irena Roterman-Konieczna
14.3 Serious gaming in medicine | 324
14.3.1 Therapeutic support for children | 324
14.3.2 Therapeutic support for the elderly | 327

Index | 329
List of authors Dr. Peregrina Arciaga Charles R. Drew/UCLA University, School of Medicine 1731 East 120th Street, CA 90059 Los Angeles, USA e-mail: [email protected] Chapter 8 Dr. Marian Bubak AGH – Cyfronet Nawojki 11, 30-950 Krakow, Poland e-mail: [email protected] Chapter 2 Katarzyna Chłopaś – Student Jagiellonian University – Medical College Sw. Anny 12, 31-008 Krakow, Poland email: [email protected] Chapter 2 Prof. Jacek Cieślik AGH – University of Science and Technology Al. A. Mickiewicza 30, 30-059 Kraków, Poland e-mail: [email protected] Chapter 11 Dr. Łukasz Czekierda AGH – University of Science and Technology Kawiory 21, 30-055 Krakow, Poland e-mail: [email protected] Chapter 13 Jakub Dąbroś AGH – University of Science and Technology Łazarza 16, 30-530 Krakow, Poland e-mail: [email protected] Chapter 12 Dr. Dick Davies Ambient Performance 43 Bedford Street, Suite 336, Covent Garden, London WC2E 9HA, UK e-mail: [email protected] Chapter 8
Dr. Parvati Dev Innovation in Learning Stanford, USA 12600 Roble Ladera Rd, CA 94022 Los Altos Hills, USA e-mail: [email protected] Chapter 8 Dr. Piotr Dudek AGH – University of Science and Technology Al. A. Mickiewicza 30, 30-059 Kraków, Poland e-mail: [email protected] Chapter 11 Jacek Dygut MD Canton Hospital – Wojewodzki Hospital Monte Casino 18, 37-700 Przemyśl, Poland e-mail: [email protected] Chapter 12 Prof. Marc Ebner Ernst-Moritz-Arndt University Greifswald Institute for Mathematics and Informatics Walther-Rathenau-Str. 47, 17487 Greifswald, Germany e-mail: [email protected] Chapter 5 Prof. Andrzej Gackowski Jagiellonian University – Medical College, Cardiology Hospital Prądnicka 80, 31-202 Krakow, Poland e-mail: [email protected] Chapter 13 Prof. Ewa Grabska Jagiellonian University Łojasiewicza 11, 30-348 Krakow, Poland e-mail: [email protected] Chapter 14
Prof. Stuart Hameroff Departments of Anesthesiology and Psychology and Center for Consciousness Studies The University of Arizona Tucson Arizona 85724, USA e-mail: [email protected] Chapter 5 Dr. Inga Hege Ludwig-Maximilians-University München Ziemssenstr. 1, 80336 München, Germany e-mail: [email protected] Chapter 7 Dr. LeRoy Heinrichs Stanford University School of Medicine, USA 8 Campbell Lane, CA 94022 Menlo Park, USA e-mail: [email protected] Chapter 8 Dr. Joanna Jaworek-Korjakowska AGH – University of Science and Technology Al. A. Mickiewicza 30, 30-059 Krakow e-mail: [email protected] Chapter 9 Dr. Wiktor Jurkowski The Genome Analysis Centre, Norwich Research Park Norwich NR4 7UH, UK e-mail: [email protected] Chapter 1 Tomasz Kawa MSc Jagiellonian University Łojasiewicza 11, 30-348 Krakow, Poland Chapter 4 Prof. Leszek Konieczny Jagiellonian University – Medical College Kopernika 7, 31-034 Krakow, Poland e-mail: [email protected] Chapter 2 Marek Konieczny AGH – University of Science and Technology Kawiory 21, 30-055 Krakow, Poland e-mail: [email protected] Chapter 13
Dr. Andrzej Kononowicz Jagiellonian University – Medical College Łazarza 16, 31-530 Kraków, Poland e-mail: [email protected] Chapter 7 Filip Malawski AGH – University of Science and Technology Kawiory 21, 30-055 Krakow, Poland e-mail: [email protected] Chapter 13 Piotr Nowakowski MSc AGH – University of Science and Technology Nawojki 11, 30-950 Krakow, Poland e-mail: [email protected] Chapter 2 Dr. Monika Piwowar Jagiellonian University – Medical College Łazarza 16, 31-530 Krakow, Poland e-mail: [email protected] Chapter 1 Prof. Irena Roterman-Konieczna Jagiellonian University – Medical College Łazarza 16, 31-530 Krakow, Poland e-mail: [email protected] Chapters 2, 3, and 14 Kornel Skałkowski MSc AGH – University of Science and Technology Kawiory 21, 30-055 Krakow, Poland e-mail: [email protected] Chapter 13 Dr. Anna Sochocka Jagiellonian University Łojasiewicza 11, 30-348 Krakow, Poland e-mail: [email protected] Chapter 4 Dr. Tomasz Szydło AGH – University of Science and Technology Kawiory 21, 30-055 Krakow, Poland e-mail: [email protected] Chapter 13
Prof. Ryszard Tadeusiewicz AGH – University of Science and Technology, Chair of Automatics and Bioengineering Al. A. Mickiewicza 30, 30-059 Kraków, Poland e-mail: [email protected] Chapters 6, 9, and 10 Jakub Wach MSc AGH – Cyfronet Nawojki 11, 30-950 Krakow, Poland e-mail: [email protected] Chapter 2 Prof. Paweł Węgrzyn Jagiellonian University Łojasiewicza 11, 30-348 Krakow, Poland e-mail: [email protected] Chapter 14
Zdzisław Wiśniowski MSc Jagiellonian University – Medical College Łazarza 16, 30-530 Krakow, Poland e-mail: [email protected] Chapter 12 Prof. Krzysztof Zieliński AGH – University of Science and Technology, Informatics Institute Kawiory 21, 30-055 Krakow, Poland e-mail: [email protected] Chapter 13
Part I: Molecular level
Monika Piwowar and Wiktor Jurkowski
1 Selected aspects of biological network analysis

1.1 Introduction

Much has been made of the Human Genome Project’s potential to unlock the secrets of life [1, 2]. Mapping the entire human DNA was expected to provide answers to unsolved problems of heredity, evolution, protein structure and function, disease mechanisms and many others. The actual outcome of the project, however, differed from expectations. It turned out that coding fragments – genes – constitute only a minute fraction (approximately 2 %) of human DNA. Furthermore, comparative analysis of human and chimpanzee genomes revealed that despite profound phenotypic differences the DNA of these species differs by only 1.5 %. Despite being an undisputed technological tour de force, the Human Genome Project did not live up to the far-reaching hopes of the scientific community. It seems that genes alone do not convey sufficient information to explain phenotypic uniqueness – indeed, additional sources of information are required in order to maintain a coherent system under which the expression of individual genes is strictly regulated [3].

Cellular biology has historically been dominated by the reductionist (“bottom-up”) approach. Researchers studied specific components of the cell and drew conclusions regarding the operation of the system as a whole [4, 5]. Structural and molecular biology reveals the sequential and structural arrangement of proteins, DNA and RNA chains. In recent years efficient technologies have emerged, enabling analysis of entire genomes (genomics) [6, 7], regulation of transcription processes (transcriptomics) [8], quantitative and qualitative properties of proteins (proteomics) [9] as well as the chemical reactions which form the basis of life (metabolomics) [10, 11]. Specialist literature is replete with breadth-first data analysis studies which are often jointly referred to as “omics” (e.g. lipidomics) [12].
The common factor of all these disciplines is the application of modern experimental methods to study changes which occur in a given cell or tissue [12]. The ongoing evolution of IT methodologies enables efficient processing of vast quantities of data and, as a result, many specialist databases have emerged. Progressive improvements in the computational sciences facilitate increasingly accurate analysis of the structure and function of individual components of living cells. Yet, despite the immense effort invested in this work, it has become evident that biological function cannot – in most cases – be accurately modeled by referring to a single molecule or organelle. In other words, the cell is more than merely the sum of its parts, and it is not possible to analyze each part separately and then assemble them together (like a bicycle). The fundamental phenomena and properties of life fade from focus when such a reductionist approach is applied. While an organism can be said to “operate” as determined by the laws of physics, and while it is composed of a wide variety of chemical elements, it cannot be analyzed using the same tools which are successfully applied in other disciplines (e.g. linearization, extrapolation, etc.) where our knowledge of the target system is complete [13, 14]. Molecules interact with one another, forming a fantastically complex web of relationships. Hundreds of thousands of proteins are encoded by genes which themselves fall under the supervision of additional proteins. Genes and proteins act together to drive innumerable processes on the level of individual cells, tissues, organs and entire organisms. The end result is an enormously complicated, flexible and dynamic system, exhibiting a multitude of emergent phenomena which cannot be adequately explained by focusing on its base components [15]. The knowledge and data derived from efficient experimentation allow us to begin explaining how such components and their interactions affect the processes occurring in cells – whether autonomous or acting within the scope of a given tissue, organ or organism. This approach, usually referred to as “systems biology”, has been gaining popularity in recent years. It is based on a holistic (“top-down”) approach which attributes the properties of biological units to the requirements and features of the systems to which they belong [3]. While a comprehensive description of the mechanism of life – even on the basic cellular level – is still beyond our capabilities, ongoing developments in systems biology and biomedicine supply ample evidence in support of this holistic methodology. Barabási et al. [16] have conducted several studies which indicate that biological networks conform to certain basic, universal laws. Accurately describing individual modules and pathways calls for a marriage between experimental biology and other modern disciplines, including mathematics and computer science, which supply efficient means for the analysis of vast experimental datasets.
This formal (mathematical) approach can be applied to biological processes, yielding suitable methods for modeling the complex interdependencies which play a key role in cells and organisms alike [17]. Such a “network-based” view of cellular mechanisms provides an entirely new framework for studies of both normal and pathological processes observed in living organisms [16, 18]. Network analysis is a promising approach in systems biology and produces good results when the target system has already been accurately described (e.g. metabolic reactions in mitochondria; well-studied signaling pathways, etc.). While such systems are scarce – as evidenced by the interpretation of available results – network methods are also good at supplying hypotheses or singling out candidates for further study (e.g. interesting genes). Existing mathematical models that find application in biology can be roughly divided into two classes based on their descriptive accuracy: continuous models, where the state of a molecule (its concentration, degree of activation, etc.) and its interactions with other molecules (chemical reactions) are formally described using ordinary differential equations (ODEs) [19, 20] under a specific kinetic model, and discrete models, where molecules exist in a limited number of states (typically two) interlinked in an undirected or directed graph. This second class includes Boolean networks, where each vertex assumes a value of 0 or 1 depending on the assumed topology and logic
[21, 22], and Bayesian networks, where the relations between molecules are probabilistic [23, 24]. As networks differ in terms of computational complexity, selecting the appropriate tool depends on the problem we are trying to solve. Boolean networks are well suited to systems which involve “on/off” switches, such as gene transcription factors which can be either present or absent, while continuous models usually provide a more accurate description of reaction kinetics where the quantities of substrates and products vary over time.
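To make the discrete case concrete, a synchronous Boolean network can be simulated in a few lines. The three-gene network below is purely illustrative – the genes and update rules are invented for this sketch, not taken from the chapter – but it shows the typical “on/off” behavior and how such a system settles into an attractor (fixed point):

```python
# Minimal synchronous Boolean network (hypothetical 3-gene example).
# Each gene is ON (1) or OFF (0); all update rules are applied simultaneously.

def step(state):
    """Apply the Boolean update rules to a state and return the next state."""
    return {
        # A: a transcription factor switched on by a constant external signal
        "A": 1,
        # B: activated by A
        "B": state["A"],
        # C: activated by B but repressed by A ("B AND NOT A" logic)
        "C": state["B"] and not state["A"],
    }

def simulate(state, n_steps):
    """Iterate the network, recording states until a fixed point or n_steps."""
    trajectory = [dict(state)]
    for _ in range(n_steps):
        state = {k: int(v) for k, v in step(state).items()}
        trajectory.append(dict(state))
        if trajectory[-1] == trajectory[-2]:   # fixed point (attractor) reached
            break
    return trajectory

traj = simulate({"A": 0, "B": 0, "C": 0}, 10)
for t, s in enumerate(traj):
    print(t, s)
```

Starting from all genes off, the network reaches the fixed point A = 1, B = 1, C = 0 within a few steps; an ODE model of the same circuit would instead track continuous concentrations over time.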
1.2 Selected biological databases

Formulating more and more precise theoretical descriptions of protein/protein or protein/gene interactions would not have been possible without experimental data supplied by molecular biology studies such as sequencing, yeast two-hybrid screening, mass spectrometry and microarray experiments. From among these, particular attention has recently been devoted to the so-called vital stain techniques. Their application in the study of cellular processes is thought to hold great promise since they enable analysis of dynamic changes occurring in a living cell without disrupting its function. As a result, this approach avoids the complications associated with cell death and its biochemical consequences. Vital stains provide a valuable source of information which can be exploited in assembling and annotating relation networks. Such efforts are often complemented by microarray techniques which “capture” the state of the cell at a particular point in its life cycle. Microarray experiments carried out at predetermined intervals, while imperfect, provide much information regarding the relations between individual components of a cell, i.e. proteins. Such detailed data describing specific “members” of interaction networks along with their mutual relations is typically stored in specialized repositories, including:
– genomes
  – Ensembl (http://www.ensembl.org/index.html)
  – UCSC Genome Browser (http://genome.ucsc.edu/)
– protein data
  – Protein (http://www.ncbi.nlm.nih.gov/protein/)
  – UniProt (http://www.uniprot.org/)
  – PDB (http://www.rcsb.org)
– microarray and NGS data
  – GEO (http://www.ncbi.nlm.nih.gov/geo/)
  – ArrayExpress (http://www.ebi.ac.uk/arrayexpress/)
1.2.1 Case study: Gene Expression Omnibus

GEO (Gene Expression Omnibus; http://www.ncbi.nlm.nih.gov/geo/) is a database which aggregates publicly available microarray data as well as data provided by next-generation sequencing and other high-throughput genomics experiments. GEO data is curated and annotated so that users do not need to undertake complex preprocessing steps (such as noise removal or normalization) when they wish, for example, to review gene expression levels in patients with various stages of intestinal cancer. Additionally, the database provides user-friendly query interfaces and supports a wide range of visualization and data retrieval tools to ensure that gene expression profiles can be readily located and accessed. Owing to its structure, GEO permits comparative analysis of results, e.g. for different patients, applying statistical methods such as Student’s t-test (comparison of average values in two groups) or ANOVA (comparison of a larger number of groups). Graphical representation of microarray data with color maps or charts depicting the expression of selected genes in several different experiments facilitates preliminary assessment and enables researchers to pinpoint interesting results. The database also hosts supplementary data: primary datasets obtained directly from scanning microarrays and converting fluorescence intensity into numerical values, as well as raw microarray scans (see the Gene Expression Omnibus info pages; http://www.ncbi.nlm.nih.gov/geo/info/).
The information present in the GEO database may be retrieved using several types of identifiers; specifically:
– GPLxxx: requests a specific platform. Platform description files contain data on matrices or microarray sequencers. Each platform may include multiple samples.
– GSMxxx: requests a specific sample. The description of a sample comprises the experiment’s free variables as well as the conditions under which the experiment was performed. Each sample belongs to one platform and may be included in multiple series.
– GSExxx: requests a specific series. A series is a sequence of linked samples supplemented by a general description of the corresponding experiment. Series may also include information regarding specific data items and analysis steps, along with a summary of research results.
The identifiers of samples, series and platforms are mutually linked – thus, by querying for a specific microarray sample we may also obtain information on the platforms and series to which it belongs.
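The simplest of the group comparisons mentioned above, Student’s t-test, reduces to a short computation. The sketch below implements the Welch (unequal-variance) variant in plain Python; the expression values and group labels are invented for illustration and are not taken from any GEO record:

```python
import math

def welch_t(xs, ys):
    """Welch's t statistic for two independent samples with unequal variances."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    # unbiased sample variances
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

# Hypothetical log2 expression values of one gene in two patient groups
control = [7.1, 6.8, 7.4, 7.0, 6.9]
disease = [8.2, 8.6, 7.9, 8.4, 8.1]

t = welch_t(control, disease)
print(f"t = {t:.2f}")  # a large |t| suggests differential expression
```

In practice one would obtain a p-value from the t distribution (with Welch–Satterthwaite degrees of freedom) and correct it for the thousands of genes tested simultaneously, e.g. with a false discovery rate procedure.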
The GEO homepage offers access to gene expression profiles as well as sets of individual microarray samples obtained using identical platforms and under identical conditions. The repository also publishes its data via the National Center for Biotechnology Information (http://www.ncbi.nlm.nih.gov), with two distinct collections: GEO DataSets and GEO Profiles. This division is due to practical reasons, and a brief summary of the NCBI databases which aggregate GEO data is presented below.
GEO DataSet

The GEO DataSet database comprises data from curated microarray experiments carried out with the use of specific platforms under consistent conditions. It can be queried by supplying dataset identifiers (e.g. GDSxxx), keywords or names of target organisms. ID-based queries produce the most accurate results – keywords and names are ambiguous and may result in redundant data being included in the result set. An example of a microarray dataset (comprising a number of samples) is GDS3027, which measures gene expression levels in patients suffering from early-stage Duchenne muscular dystrophy. The study involved a control group as well as a group of patients of varying age (measured in months) (Fig. 1.1).
Fig. 1.1: Results of a microarray experiment involving a group of patients afflicted with Duchenne muscular dystrophy, along with a control group. GSMxxx identifiers refer to specific samples.
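The group comparison that GEO supports (Student's t-test, mentioned above) can be illustrated from scratch. The sketch below computes Welch's two-sample t statistic, a variant of the t-test that does not assume equal variances; the expression values are invented for illustration, not taken from GDS3027.

```python
from statistics import mean, variance

def welch_t(group_a, group_b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    va, vb = variance(group_a), variance(group_b)  # sample variances
    se = (va / len(group_a) + vb / len(group_b)) ** 0.5
    return (mean(group_a) - mean(group_b)) / se

# Made-up log2 expression values for a single gene:
dmd     = [9.1, 8.7, 9.4, 8.9, 9.2]   # disease group
control = [7.8, 8.0, 7.5, 8.2, 7.9]   # control group

t = welch_t(dmd, control)  # positive: higher expression in the disease group
```

A positive statistic indicates higher mean expression in the first group; in practice the statistic would be converted to a p-value against the t distribution with Welch-corrected degrees of freedom.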
Graphical representation of GDS3027 results reveals the expression levels of individual genes (Fig. 1.2). Purple markers indicate high expression, green markers correspond to low expression and grey areas indicate that no expression could be detected. In addition, the repository groups data into clusters based on the correlation between expression profiles with regard to specific samples (columns) and genes (rows).
8 | Monika Piwowar and Wiktor Jurkowski
Fig. 1.2: Graphical representation of gene expression levels in the GDS3027 microarray dataset. The inset frame shows a magnified fragment of the GDS matrix. Colors correspond to expression levels: purple – high expression; green – low expression; grey – no expression.
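The clustering shown in Fig. 1.2 is driven by the correlation between expression profiles. The underlying measure – the Pearson correlation coefficient between two equal-length profiles – can be sketched in a few lines (a didactic implementation, not GEO's own code):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Values close to +1 place two profiles in the same cluster, values close to −1 indicate opposite behavior, and values near 0 indicate no linear relationship.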
GEO Gene Profiles

Unlike GEO DataSet, this repository deals with the expression of specific genes across a number of microarray experiments. Gene expression levels may be "observed" under a given set of experimental conditions (such as time of study, gender or other concomitant variables) to quickly determine whether there is a connection between expression levels and any of these variables. Additionally, the database supplies links to genes with similar expression profiles. Queries can be forwarded to other databases aggregated by NCBI, e.g. to obtain additional data regarding the target sequence or protein structure. GEO Gene Profile search interfaces are roughly similar to those provided by GEO DataSet.
[Bar chart: GDS3027 / 206717_at / MYH8 – transformed count and percentile rank within each sample, shown for the dystrophy and control groups.]
Fig. 1.3: MYH8 (myosin heavy chain) expression profile. As shown, the expression levels of this gene are higher in the test group than in the control group.
The GDS3027 dataset covers (among other genes) MYH8, the myosin heavy chain gene, whose expression in the test group is higher than in the control group. The corresponding GEO Gene Profile data is presented as a bar graph (Fig. 1.3). Similar techniques can be applied to other genes. The database enables researchers to quickly discover homologues and genes with similar expression profiles (referred to as "profile neighbors"). Links to GEO DataSet profiles are also provided.
1.2.2 RegulonDB

RegulonDB is a database that focuses on the gene regulatory network of E. coli – arguably the most thoroughly studied regulatory network [25]. The database portal provides a range of online (browser-accessible) tools that can be used to query the database, analyze data and export results, including DNA sequences and biological interdependence networks. In conjunction with E. coli microarray experiment results (which can be obtained from GEO), RegulonDB supports validation of regulatory network simulation algorithms.
Using RegulonDB to determine the efficiency of network construction algorithms

The main page of RegulonDB (http://regulondb.ccg.unam.mx/index.jsp) provides links to a set of search engines facilitating access to gene regulation data. The most popular engines are briefly characterized below.
– Gene: this interface returns data on a given gene, its products, Shine-Dalgarno sequences, regulators, operons and all transcription units associated with the gene. It also supplies a graphical depiction of all sequences present in the gene's neighborhood, including promoters, binding sites and terminators (in addition to loci which do not affect regulation of the target gene).
– Operon: the operon is commonly defined as a set of neighboring genes subject to cotranscription. The database introduces a further distinction between operons and transcription units, treating the operon as a set of transcription units that are shared by many genes. In RegulonDB a gene may not belong to more than one operon. A transcription unit (TU) is a set of one or more genes which are transcribed from a common promoter. A TU may also provide binding loci for regulatory proteins, affecting its promoter and terminator. The search engine returns all information related to a given operon, its transcription units and the regulatory elements present in each unit. Graph visualization is provided, showing the placement of all regulatory elements within the target region. A complete set of known TUs (with detailed descriptions) is also listed below each operon.
– Regulon: this search interface provides basic and detailed information concerning regulons, i.e. groups of genes regulated by a single, common transcription factor. In addition to such "simple" regulons, RegulonDB introduces the notion of a complex regulon, where several distinct transcription factors regulate a set of genes, with each factor exerting equal influence upon all genes from its set. The Regulon interface also shows binding sites and promoters grouped by function.
1.3 Types of biological networks

1.3.1 Relations between molecules and types of networks

Biological networks are composed of molecules: proteins, genes, cellular metabolites etc. These building blocks are linked by various types of chemical reactions. Among the simplest biological networks is the gene regulatory network (GRN), showing which genes activate or inhibit other genes. Networks are usually depicted as graphs (see inset); however, this representation should not be confused with the graphical layout of networks stored in the KEGG databases or WikiPathways.

Graphs as a representation of networks

A graph is a collection of elements (called vertices) linked by mutual relationships (called edges). The interpretation of vertices and edges may vary – in gene regulatory networks vertices represent genes while edges correspond to activation/inhibition effects. In a simple graph there are no loops (edges which connect a vertex with itself) and only one edge may appear between each pair of vertices. The maximum number of edges in a simple graph with N vertices is N(N − 1)/2. In a directed graph each edge has a specific direction but there is no limit on the number of edges between each pair of vertices.
1 Selected aspects of biological network analysis
|
11
Protein-protein interaction networks are represented by simple graphs, while signaling networks and gene regulatory networks usually rely on directed graphs. Metabolic networks describing reversible chemical reactions may use graphs with weighted edges – in these types of graphs each edge carries a numerical value which corresponds, e.g., to its reaction rate constant. Graphs have many applications in information technology: for example, they can be used for traffic modeling or Internet routing.
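The N(N − 1)/2 bound from the inset can be checked directly. The snippet below represents a small, undirected PPI-style network as a set of unordered vertex pairs (the vertex names are illustrative):

```python
from itertools import combinations

def max_edges(n: int) -> int:
    """Maximum number of edges in a simple (undirected, loop-free) graph."""
    return n * (n - 1) // 2

# A toy undirected network; frozenset makes ("A","B") and ("B","A") identical,
# which automatically enforces the "one edge per vertex pair" rule.
edges = {frozenset(e) for e in [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")]}
vertices = {v for e in edges for v in e}
```

Enumerating all vertex pairs with `combinations` yields exactly `max_edges(N)` candidates, which is why a complete simple graph has N(N − 1)/2 edges.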
The most common types of vertices are genes, proteins and other molecules which participate in biochemical processes. Some networks also include cellular organelles (e.g. mitochondria, vacuoles etc.) viewed as "targets" of specific processes. The set of potential elements may be further extended with abstract concepts: UV radiation intensity, pH, ROS and any other phenomena which need to be taken into account when performing network analysis.

Relations between elements can be direct – e.g. a simple chemical reaction between two molecules – or indirect, where a number of intervening reactions are necessary. An example of an indirect relationship is mutual regulation of genes. Simply observing that "gene A regulates the expression of gene B" conceals the existence of a complicated chain where the product of gene A acts upon a transcription factor or other mechanisms which, in turn, regulate the expression of gene B. When the character of the relation is unknown, the relation is said to be directionless, i.e. we cannot determine which of the two interacting elements is the effector and which one is the receptor. This phenomenon occurs in many nonspecific protein-protein interactions: we may know that two proteins bind to each other but the purpose of the reaction is not known – unlike, for example, directed activation of adrenergic receptors via hormone binding, leading to release of the G protein which, in turn, binds to its dedicated receptor.

In some cases we possess knowledge not just of the relation's direction but also of its positive or negative effects. A positive effect may involve upregulation of a chemical reaction by an enzyme, activation of gene expression or an increase in the concentration of some substrate. A negative effect indicates inhibition or simply a reduction in the intensity of the above-mentioned processes.
This complex interplay of directionless and directed reactions underscores the fundamental difference between protein-protein interaction (PPI) networks, which focus on nonspecific interactions between proteins, and signaling networks (SN), which provide detailed insight into biochemical processes occurring in the cell. As shown, the types of network elements and their mutual relations are directly related to the scope of our knowledge regarding biological mechanisms and the accuracy of experimental data.
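The signed, directed relations described above map naturally onto a simple data structure: an edge list keyed by (regulator, target) pairs with a sign for activation or inhibition. The gene names below are illustrative, not a real regulatory network:

```python
# A toy gene regulatory network as signed, directed edges:
# +1 = activation, -1 = inhibition.
grn = {
    ("geneA", "geneB"): +1,   # product of A upregulates B
    ("geneB", "geneC"): -1,   # product of B inhibits C
    ("geneA", "geneC"): +1,
}

def regulators_of(target, network):
    """All (regulator, sign) pairs acting on a target gene."""
    return sorted((src, sign)
                  for (src, dst), sign in network.items() if dst == target)
```

A directionless PPI edge would instead be stored as an unordered pair with no sign, which is precisely the representational difference between PPI networks and signaling networks.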
1.3.2 Biochemical pathways

Several online databases store manually validated process relationship data and visualize it by means of interaction diagrams:
– KEGG (http://www.genome.jp/kegg/)
– Reactome Pathways Database (http://www.reactome.org)
– WikiPathways (http://www.wikipathways.org/)

KEGG (Kyoto Encyclopedia of Genes and Genomes – GenomeNet; http://www.kegg.jp/kegg/) is a database dedicated to researchers who study the properties of molecular interaction networks on the level of cells, organisms or even entire ecosystems [26, 27]. Among the most popular features of KEGG is the presentation of molecular interactions as activity pathways (KEGG PATHWAY). The relationships between individual molecules (typically proteins) are represented as block diagrams with directed or directionless links indicating the flow of information. The number of activity pathways has grown so large that attempts are currently being made to assemble a global network consisting of various interlinked pathways (Fig. 1.4).

KEGG also includes a set of relations between individual pathway components (KEGG BRITE). This database is a set of hierarchical classifications representing our knowledge regarding various aspects of biological systems. KEGG DISEASE is an interesting database that stores molecular interaction data associated with various pathological processes in humans (http://www.genome.jp/kegg/) (Fig. 1.5). The ability to visualize individual proteins and other molecules, along with references to detailed information regarding their properties, provides substantial help in creating network models for analysis of disease-related processes. All KEGG databases are interlinked, permitting easy navigation between datasets.

Although KEGG is popular as a source of gene-centric information, applied for instance to overrepresentation and gene set enrichment analysis, it has limited applicability for network analysis.
The main hurdle is the heterogeneity of the styles used to represent particular pathways, arising from the incompleteness of available knowledge and missing annotations. Interactions represented as a graph are often accompanied by disconnected boxes describing phenotypes or states. Some pathways are described by a set of chemical reactions, while others are just lists of genes.
Fig. 1.4: Global activity network consisting of multiple pathways. Each dot indicates (in most cases) a single pathway. A more detailed view of a representative pathway is shown in the central part of the image, indicating stages of fructose and mannose metabolism.
Fig. 1.5: KEGG interaction diagram corresponding to Alzheimer’s disease. The red-framed inset contains detailed information concerning the protein labeled “PSEN”.
Both WikiPathways and Reactome focus on gathering information that can be described in the form of biochemical reactions, thereby avoiding the above-mentioned problems. They are much more straightforward to use when defining simulation models or interaction graphs.
1.4 Network development models

1.4.1 Selected tools for assembling networks on the basis of gene expression data

Assembling gene regulatory networks remains an open problem. Existing methods are not equally efficient in processing diverse datasets and it is often difficult to select the optimal algorithm for a given task. As few regulatory networks have been experimentally validated, assessing the accuracy of hypothetical networks also poses a significant challenge. The DREAM (Dialogue on Reverse Engineering Assessment and Methods) consortium attempts to address these issues by organizing regular events where the efficiency of various network construction algorithms is independently validated (see http://www.the-dream-project.org/). This section discusses the fundamental aspects of the construction of regulatory networks based on gene expression data.
GeneNetWeaver – gene regulatory network processing software

GeneNetWeaver (GNW) provides an efficient way to determine the validity of gene regulatory network construction algorithms. This software package can read input datasets created for the purposes of the DREAM project. The first analysis step involves construction of a realistic regulatory network from known fragments of real-life interaction networks. This is followed by generation of simulated gene expression data. GNW is bundled with a number of preassembled datasets (Escherichia coli and Staphylococcus gene regulation networks, along with several customized DREAM databases). The program enables users to select subnetworks in order to carry out operations on smaller and more convenient sets of data. In addition to providing its own datasets, GNW can import and parse user-generated networks [28].
Cytoscape

While Cytoscape will be presented further on in this chapter, we should note here that it includes the CyniToolbox extension, which can derive gene regulation networks from gene expression data [29]. Data analysis proceeds by detecting simple correlations, by applying information theory concepts such as mutual information, or by means of Bayesian networks. Additionally, CyniToolbox can fill in missing data and perform input discretization (as required by most processing algorithms). Similar tasks are handled by
another Cytoscape plug-in – MONET (http://apps.cytoscape.org/apps/monet). Each new version of Cytoscape comes with a range of plug-ins – up-to-date information can always be obtained on the toolkit’s homepage.
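Mutual information, mentioned above as one of the inference criteria (it is also the basis of the ARACNE algorithm discussed below), can be computed directly for two discretized expression vectors. The sketch below is a minimal pure-Python illustration, not code from any of the packages discussed:

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Mutual information (in bits) between two discretized expression vectors."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)        # marginal counts
    pxy = Counter(zip(xs, ys))               # joint counts
    return sum(
        (c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )
```

Perfectly coupled vectors carry as much information about each other as they contain (here 1 bit for balanced binary data), while independent vectors score zero – which is why high mutual information between two genes suggests a regulatory link.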
GenePattern – gene expression analysis features

GenePattern is a comprehensive toolset for analyzing genomics data. Its features include analysis of genetic sequences, gene expression, proteomics and flow cytometry data. Tools can be chained into workflows to automate complex analyses. GenePattern is an open-source project and can be used free of charge for scientific purposes; user registration is required. Tools can be downloaded from the project's website and users may either set up local copies of the software or connect to one of the available public servers. ARACNE – one of many GenePattern modules – can reconstruct cellular networks by applying the ARACNE algorithm. A thorough description of the data input format is available, and data can also be imported from other modules using appropriate converters. The GEOImporter tool can download data directly from the GEO database (see Section 1.2.1). GenePattern also provides a server which recreates gene regulatory networks on the basis of selected DREAM methods, and implements a meta-algorithm assembled from the three highest-ranked algorithms submitted to the most recent edition of DREAM.
1.4.2 Selected tools for reconstruction of networks via literature mining

Networks can also be reconstructed by analyzing peer-reviewed publications. This process involves specification of target elements (e.g. gene symbols) and relation types (genetic regulation, protein complexation, etc.). The resulting network can be exported to a file which may then serve as input for another software package, or visualized with a GUI to enable further analysis of a specific graph edge or to prepare a presentation.

The methods described in this section can be roughly divided into two groups. The first group comprises event-centric methods, e.g. searching for information on physical interactions between two proteins. This approach offers a great advantage since, by focusing on the description of a biological event, we avoid potentially incorrect interpretation of experimental results – although, on the other hand, the interpretation task is left entirely to the user. The second group covers methods which attempt to determine causative factors in intermolecular relations. This approach offers a shortcut to useful results since – in most cases – correct interpretations may have already been obtained and can aid in the reconstruction of cellular networks.

In both cases we should be mindful of the limitations inherent in combining the results of experiments carried out in various models (animals, tissues, cell lines), under differing conditions and with the use of dissimilar experimental techniques. The final
outcome of the process should be viewed with caution until it can be independently validated by a consistent series of experiments (e.g. differential gene expression analysis).
IntAct and MINT

IntAct (http://www.ebi.ac.uk/intact/) and MINT (http://mint.bio.uniroma2.it/mint/Welcome.do) contain validated interaction data for a broad set of proteins, genes and other biomolecules in various organisms, including humans. All data is traced to peer-reviewed publications presenting experimental results, and the databases only provide information on direct interactions, without attempting to interpret their outcome. The databases can be queried by publication, author credentials and protein set, and additionally by the quality of the applied experimental methods and by target organism. Networks can be displayed or saved in one of the popular network file formats. Each relation can be traced by supplying the corresponding PubMed ID.
Pathway Studio and Ingenuity Pathway Analysis

Pathway Studio (Ariadne Genomics, http://ariadnegenomics.com) and Ingenuity Pathway Analysis (Ingenuity Systems, http://www.ingenuity.com) represent a different approach to literature mining: they subject publications to lexical analysis and submit preliminary results to a panel of experts in order to reduce the likelihood of mistakes. Query results indicate which publications discuss a specific relation and provide information on the organisms, tissues and cells analyzed in the context of these publications.
1.5 Network analysis

The typical systems biology research process is a cycle: preliminary bioinformatics analysis generates new hypotheses concerning the operation of a given system, subsequent experiments verify the initial assumptions, and the results can then be subjected to further analysis. The analysis of biological networks may be approached from various angles, such as pathway analysis, which concerns itself with assembling rich gene ontology datasets and finding genes or biological processes overrepresented in the data under study; analysis of the flow of substrates in chemical reaction chains, which allows precise quantification of perturbations; and graph analysis, which seeks vertices of particular importance for a given process or cellular phenotype. Many software packages support the interpretation of biochemical data with the use of network analysis tools. This section introduces some of the most popular tools.
1.5.1 Selected tools

From among the multitude of open-source and commercial network analysis packages, the following tools are particularly noteworthy: Cytoscape (www.cytoscape.org/), COPASI (www.copasi.org), Cell Illustrator (www.cellillustrator.com), and igraph (http://igraph.org/redirect.html). They permit the user to trace (among others) metabolic pathways, signaling cascades, gene regulatory networks and many other types of interactions between biologically active molecules (DNA, RNA and proteins). They also support statistical analysis and visualization of results as well as of the networks themselves. COPASI and Cell Illustrator base their simulations on a broad knowledge base which describes many important reactions in terms of differential equations. In Cytoscape and igraph biological networks are represented by graphs – in these cases the underlying reactions are not described in detail and simulations are instead based on the existence (or absence) of directed links between various molecules.
COPASI

COPASI is a noncommercial software package capable of analyzing and simulating biochemical reactions as well as any other processes which can be expressed in terms of mutual relations between entities [30]. It supports the SBML model description standard and can perform simulations using ordinary differential equations (ODEs) or Gillespie's stochastic algorithm, acknowledging arbitrary discrete events. COPASI can be used to simulate and study the kinetics of chemical reactions occurring in various zones (e.g. organelles) of the cell (Fig. 1.6). Biochemical processes are expressed as sets of reactions, using a standardized notation, with parameters such as reaction rate, stoichiometry and location taken into account. This functionality enables users to integrate various processes – chemical reactions, molecular aggregation, transport etc.

Fig. 1.6: Defining chemical reactions in COPASI [http://www.copasi.org/tiki-index.php?page=screenshots].

The software comes with a rich set of metadata describing common reactions and, in most cases, the user only needs to select a given reaction from a list. In more complex scenarios users can define custom biochemical functions describing nonstandard reactions, along with a kinetic model expressing the relation between reagent concentrations and reaction rate. The tool also enables the user to determine where a given element can be found, which reactions it participates in and which kinetic models should be applied when simulating these reactions. Finally, COPASI can be used to define entirely new models describing phenomena other than chemical reactions.

Each reaction is assigned a differential equation which is used to simulate its progress. In theory this permits the user to simulate highly complex processes comprising many different reactions. In practice, however, dealing with a large set of differential equations forces the user to provide the values of many distinct parameters (e.g. on the basis of experimental data), and incorrect values may lead to nonsensical results. For this reason it is recommended to limit each simulation to no more than several dozen reactions.

COPASI is a popular tool with an active user community offering assistance in the construction of new metabolic models and reuse of existing ones. The system's appeal is also evidenced by the number of scientific publications which apply COPASI, e.g. in the analysis of lactic acid fermentation [31], studying the TGF-beta 1 signal cascade in the context of 3D simulations of the epidermis [32] or modeling lipids which form actin microfilaments with a view towards validating hypothetical mechanisms proposed on the basis of experimental results [33].
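The correspondence between a reaction and its differential equation can be illustrated outside COPASI with a hand-rolled Euler integration of the simplest possible model, the irreversible conversion A → B with d[A]/dt = −k·[A]. This is a didactic sketch of the principle, not COPASI's actual numerics (which use adaptive solvers):

```python
def simulate_decay(a0, k, dt, steps):
    """Euler integration of the irreversible reaction A -> B (d[A]/dt = -k*[A])."""
    a, b = a0, 0.0
    for _ in range(steps):
        flux = k * a * dt   # amount converted in one time step
        a -= flux
        b += flux
    return a, b

# k = 0.1 per time unit, simulated to t = 10 in 1000 small steps.
a, b = simulate_decay(1.0, 0.1, 0.01, 1000)
```

With dt small, the result approaches the analytical solution [A](t) = [A]₀·e^(−kt) (≈ 0.368 at k·t = 1), and mass is conserved at every step – the kind of sanity check one should also apply to parameter choices in a real COPASI model.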
Cell Illustrator

Cell Illustrator [34] provides the so-called Biopathway Modeling Mode, which uses a modified version of the Petri net paradigm known as Hybrid Functional Petri Net with Extension (HFPNe). Unlike classic Petri nets, which model discrete events, HFPNe models can be used to simulate continuous processes [35]. In the Gene Net mode Cell Illustrator can analyze and explore gene regulatory networks, however without the ability to directly simulate such networks. Once a gene interaction network has been set up, the tool can be switched to the Biopathway Modeling Mode, which provides a starting point for the development of simulation models.
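The classic, discrete Petri net semantics that HFPNe generalizes can be sketched in a few lines: places hold tokens, and a transition fires only when all of its input places hold enough tokens. The implementation and the "binding" example below are our own illustration, not Cell Illustrator code:

```python
def fire(marking, transition):
    """Fire one Petri-net transition if enabled; return the new marking.

    transition = (inputs, outputs), each a dict mapping place -> token count.
    """
    inputs, outputs = transition
    if any(marking.get(p, 0) < n for p, n in inputs.items()):
        return marking  # transition not enabled: marking unchanged
    new = dict(marking)
    for p, n in inputs.items():
        new[p] -= n
    for p, n in outputs.items():
        new[p] = new.get(p, 0) + n
    return new

# Toy discrete model: a binding transition consumes one token from each
# substrate place and produces one token in the complex place.
binding = ({"p53": 1, "mdm2": 1}, {"p53_mdm2": 1})
```

HFPNe additionally allows real-valued "token" amounts and continuous firing rates, which is what makes continuous biochemical processes expressible in the same formalism.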
The tool provides a robust graphical user interface where users may either carry out simulations of custom pathways or reuse models available from specialized databases (Fig. 1.7).
Fig. 1.7: Angiogenesis model (Transpath: endothelial_cell_angiogenesis_30.csml). The inset frame shows a magnified version of the network, revealing individual vertices as well as quantified relations.
Fig. 1.8: Technical and graphical symbols used by Cell Illustrator (Entity: quantity/concentration of a biomolecule/object; Process: rate/conditions/mechanism of interaction/reaction/transfer between connected biomolecules/objects, shown as arrows; Connector: process/interaction (complexation)/inhibition). Entities and processes come in discrete, continuous and generic variants.
Cell Illustrator is known for its user-friendliness. It provides a clear and intuitive menu facilitating easy access to all of its features. Another distinct advantage of Cell Illustrator is its support for graphical representations of proteins, DNA chains and other structures, including entire organelles (Fig. 1.8). In addition to an efficient simulation panel, the program also provides a tool for generating high-quality diagrams and video files presenting simulation results.
Examples of simple chemical reaction models implemented using Cell Illustrator
– Translocation process (p53 protein; nucleus to cytoplasm).

[Model diagram and plot: the p53_nucleus entity is drained through a Translocation process into p53_cytoplasm; the chart tracks both concentrations over simulated time.]
Increase of p53 concentration in the cell cytoplasm and the corresponding decrease of its concentration in the nucleus.

– Degradation process: three separate variants simulating discrete/continuous changes in the quantity/concentration of the protein which undergoes degradation.

[Model diagram and plot: three entities (e1, e2, e3) degraded by discrete, continuous and generic processes; the chart tracks their quantities over simulated time.]
Visualization of the rate of degradation in relation to the process type and the properties of the molecule which undergoes degradation.

– Protein complexation: p53 and mdm2.

[Model diagram and plot: p53 and mdm2 entities feed a complexation process with rate 0.1*m1*m2, producing p53_mdm2; the chart tracks all three quantities over simulated time.]
Complexation process as a function of the quantity/concentration of substrates in the cell.

Cell Illustrator enables:
– drag-and-drop construction (in a manner similar to the examples shown above) of biological pathway models consisting of molecular components;
– selection of mathematical formulae to simulate the biochemical reactions which comprise biological pathways; simulations may be carried out directly in the workspace (interactively) or uploaded to a remote server called the Cell Illustrator Server;
– storing simulation results in graphical files and assembling eye-catching animations using the Cell Animator plug-in;
– analyzing gene interaction networks (static analysis only);
– importing networks created in other programs (support for SBML and CellML) or downloaded from specialized libraries/databases such as Transpath or KEGG.
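As a rough continuous counterpart of the complexation example, mass-action kinetics with the rate expression 0.1·[p53]·[mdm2] (mirroring the 0.1*m1*m2 rate used in the model) can be integrated numerically. This is a didactic sketch under simple Euler integration, not Cell Illustrator output:

```python
def simulate_complexation(p53, mdm2, k=0.1, dt=0.01, steps=200):
    """Euler integration of p53 + mdm2 -> p53_mdm2 with rate k*[p53]*[mdm2]."""
    complex_ = 0.0
    for _ in range(steps):
        flux = k * p53 * mdm2 * dt   # amount of complex formed this step
        p53 -= flux
        mdm2 -= flux
        complex_ += flux
    return p53, mdm2, complex_

p53, mdm2, complex_ = simulate_complexation(100.0, 50.0)
```

As in the plotted Cell Illustrator example, both substrate concentrations fall while the complex accumulates, and the total amount of each protein (free plus bound) is conserved throughout the simulation.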
Cytoscape

Cytoscape enables visualization and analysis of network topologies. Each network, regardless of the complexity of the underlying process, is always depicted as a graph. The graph is either provided by the user or can be generated with the text mining or coexpression analysis approaches provided by the multiple plug-ins available. In addition, integration with WikiPathways allows easy import of biochemical pathways into the graph representation. Cytoscape provides a rich set of visualization options for vertices, edges and labels. The network is stored as a tabularized list of source and target vertices, along with the properties of their relations and any additional user-defined attributes. In addition to basic topological analysis, users may access a set of plug-ins for advanced bioinformatics studies. As many of the available plug-ins have been contributed by community members, the system's authors take care to enforce coherent standards for code development, data sharing and integration with the common graphical user interface.
1.5.2 Cytoscape analysis examples

Upon launch, Cytoscape displays its main window and a toolbar. The description of each tool is provided in the form of mouseover tooltips. In addition to the network display panel, the main window also contains a control panel with several tabs, and a data panel which can be used to select individual vertices and edges, and display their attributes. The default control panel tab provides an overview of the available networks, showing how many vertices and edges they comprise (each imported network is available at all times – even when not currently displayed).
24 | Monika Piwowar and Wiktor Jurkowski
Another popular tab contains a selection of graphical widgets, providing different styles and layouts for network elements. All program features are comprehensively described in the user manual. Much like COPASI, Cytoscape boasts an active user community that provides assistance and helps troubleshoot technical problems.
Identifying communication hubs
Identifying hubs enables us to determine which components of a biochemical process play a key role in the functioning of the cell. Accordingly, this is often the first operation carried out on a newly constructed network. For complex networks with hundreds of vertices and edges visual inspection alone does not reveal obvious hubs (Fig. 1.9 (a)). Instead, a formal algorithm is applied to determine the "centrality" of each vertex. The simplest measure is the degree of the vertex, i.e. its number of incoming and outgoing edges. This value, however, only reflects the local density of the network and does not account for network-wide communication. A better indicator of centrality is the so-called closeness centrality criterion, which takes network topology into account. Closeness centrality is defined in terms of the aggregate length of the shortest paths connecting the given vertex to all other vertices (in common implementations, as the reciprocal of this sum). It can be interpreted as a measure of the effectiveness with which the given vertex can propagate information through the network. A similar measure, known as betweenness centrality, counts, for each pair of vertices, the shortest connecting paths that pass through the target vertex. While closeness centrality expresses the vertex's overall capability for communication, betweenness centrality indicates its involvement in mediating communication between other vertices. Similar measures are applied in social network analysis. In our example the network hubs are proteins which play a key role in regulating the expression of other proteins: ubiquitin (UBC) and the CREB-binding protein (CREBBP). Another important vertex corresponds to the PPARG transcription factor, which is intimately involved in regulating metabolism and whose activity is modulated by dietary fat intake [36].

Fig. 1.9: Network hub analysis example. (a) Lipid metabolism pathway network consisting of over 100 genes and several hundred relations; (b) results of the analysis step, where the centrality of each vertex is indicated by the radius and color of the corresponding circle (small/yellow – low centrality; large/red – high centrality).
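A minimal pure-Python sketch of two of these measures on a toy undirected graph (vertex names and edges are hypothetical): vertex degree, and the aggregate shortest-path length computed by breadth-first search. Under the definition used in the text, a smaller aggregate means a more central vertex; common library implementations report the reciprocal instead.

```python
from collections import deque

# Toy undirected network: a hub ("B") joins two branches.
edges = [("A", "B"), ("B", "C"), ("B", "D"), ("D", "E")]
nodes = sorted({v for e in edges for v in e})
neighbours = {v: set() for v in nodes}
for a, b in edges:
    neighbours[a].add(b)
    neighbours[b].add(a)

def shortest_path_lengths(start):
    """Breadth-first search: hop distance from `start` to every reachable node."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in neighbours[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

for v in nodes:
    degree = len(neighbours[v])                       # local measure
    total = sum(shortest_path_lengths(v).values())    # aggregate shortest-path length
    print(v, degree, total)
```

Vertex B comes out most central on both measures (highest degree, smallest aggregate), matching the intuition that hubs mediate network-wide communication; betweenness centrality would additionally count how many of these shortest paths pass through each vertex.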
1.6 Summary
Experimental methods continue to undergo rapid development, and every decade seems to bring a new revolution in biological information processing. For example, rapid progress in sequencing tools and algorithms – Sanger sequencing, sequencing by synthesis (Illumina), pyrosequencing (454) or real-time sequencing (Pacific Biosciences), to name just a few – has driven a thousand-fold decrease in the cost of sequencing DNA while simultaneously increasing its accuracy and reliability. As a result, recent years have witnessed the emergence of many specialized databases aggregating vast quantities of biological data. On the other hand, it has also become apparent that amassing data cannot, by itself, explain the complexity and function of cells, since it takes their constituent units (proteins, DNA, RNA etc.) out of context. The traditional reductionist (bottom-up) approach is therefore insufficient – instead, systems biology should rely on a holistic (top-down) strategy which takes into account the relations between components and a multilayered model of cell complexity and dynamics. Classification and analysis of specific types of elements under controlled conditions is slowly yielding to qualitative analysis of cells and organisms as a whole. This progress would not have been possible without the aid of the technical sciences, particularly computer science, which facilitates accurate simulations of complex systems and events. New models are being formulated to describe metabolic pathways, signaling cascades, gene regulatory networks and other types of relationships between metabolites, proteins and nucleic acids. This evolution calls for new data storage and exchange standards. One of the projects which address this issue is BioPAX (Biological Pathway Exchange; http://www.biopax.org/) – a language which permits integration, exchange, visualization and analysis of biological pathways.
BioPAX provides a common format in which biological pathways can be expressed and stored. Work on a comprehensive description of specific data types and visualization of biological networks is currently ongoing (Tab. 1.1). While it is, in principle, possible to apply such common standards to the description of arbitrary biological data, practical implementations are lagging behind – possibly due to the fact that interdisciplinary studies aggregating various datasets are still somewhat infrequent and a common vision of data integration in the scope of molecular biology is yet to emerge.
Tab. 1.1: Selected standards for storing biochemical pathway data.

Name     Type                                        Website
BioPAX   Pathway representation standard (RDF/OWL)   http://www.biopax.org/
CSML     Pathway representation standard (XML)       http://www.cellml.org
SBML     Pathway representation standard (XML)       http://sbml.org
SBGN     Network diagram creation                    http://www.sbgn.org
PSI      Proteomics                                  http://www.psidev.info/
The popularity of biological pathway and network analysis is expected to increase in the future. Access to growing datasets will enable researchers to assemble more complex networks describing cellular biochemistry and, as a result, to conduct advanced simulations focusing e.g. on cell-wide metabolism in search of factors which affect phenotypic differences.
References
[1] Venter JC et al. The sequence of the human genome. Science. 2001 Feb 16;291(5507):1304–51. Erratum in: Science. 2001 Jun 5;292(5523):1838.
[2] Lander ES et al. Initial sequencing and analysis of the human genome. Nature. 2001 Feb 15;409(6822):860–921. Erratum in: Nature. 2001 Aug 2;412(6846):565; Nature. 2001 Jun 7;411(6838):720.
[3] Keller EF. The century beyond the gene. J Biosci. 2005 Feb;30(1):3–10.
[4] Mazzocchi F. Complexity and the reductionism-holism debate in systems biology. Wiley Interdiscip Rev Syst Biol Med. 2012 Sep–Oct;4(5):413–27.
[5] Wolfe CT. Chance between holism and reductionism: tensions in the conceptualisation of Life. Prog Biophys Mol Biol. 2012 Sep;110(1):113–20.
[6] Alföldi J and Lindblad-Toh K. Comparative genomics as a tool to understand evolution and disease. Genome Res. 2013 Jul;23(7):1063–8.
[7] Chain P, Kurtz S, Ohlebusch E and Slezak T. An applications-focused review of comparative genomics tools: capabilities, limitations and future challenges. Brief Bioinform. 2003 Jun;4(2):105–23. Review.
[8] Tseng GC, Ghosh D and Feingold E. Comprehensive literature review and statistical considerations for microarray meta-analysis. Nucleic Acids Res. 2012 May;40(9):3785–3799. doi: 10.1093/nar/gkr1265. Epub 2012 Jan 19. Review.
[9] Anderson NL and Anderson NG. Proteome and proteomics: new technologies, new concepts, and new words. Electrophoresis. 1998 Aug;19(11):1853–61. Review.
[10] Astarita G and Langridge J. An emerging role for metabolomics in nutrition science. J Nutrigenet Nutrigenomics. 2013 Aug 31;6(4):179–198.
[11] Bouatra S, Aziat F, Mandal R, Guo AC, Wilson MR, Knox C, Bjorndahl TC, Krishnamurthy R, Saleem F, Liu P, Dame ZT, Poelzer J, Huynh J, Yallou FS, Psychogios N, Dong E, Bogumil R, Roehring C and Wishart DS. The human urine metabolome. PLoS One. 2013 Sep 4;8(9):e73076.
[12] Tomescu OA, Mattanovich D and Thallinger GG. Integrative analysis of -omics data: a method comparison. Biomed Tech (Berl). 2013 Sep 7.
[13] Cramer F. Gene technology in humans: can the responsibilities be borne by scientists, physicians, and patients? Interdisciplinary Science Review. 2001;26:1–4.
[14] Lazebnik Y. Can a biologist fix a radio? – Or, what I learned while studying apoptosis. Cancer Cell. 2002 Sep;2(3):179–82. Reprinted in: Biochemistry (Mosc). 2004 Dec;69(12):1403–6.
[15] Luisi P. Emergence in chemistry: chemistry as the embodiment of emergence. Foundations of Chemistry. 2002;4:183–200.
[16] Barabási AL and Oltvai ZN. Network biology: understanding the cell's functional organization. Nat Rev Genet. 2004 Feb;5(2):101–13. Review.
[17] Sharma A, Gulbahce N, Pevzner S, Menche J, Ladenvall C, Folkersen L, Eriksson P, Orho-Melander M and Barabási AL. Network based analysis of genome wide association data provides novel candidate genes for lipid and lipoprotein traits. Mol Cell Proteomics. 2013 Jul 23;12(11):3398–3408.
[18] Gohlke JM, Thomas R, Zhang Y, Rosenstein MC, Davis AP, Murphy C, Becker KG, Mattingly CJ and Portier CJ. Genetic and environmental pathways to complex diseases. BMC Syst Biol. 2009 May 5;3:46. Basak S, Behar M and Hoffmann A. Lessons from mathematically modeling the NF-κB pathway. Immunol Rev. 2012 Mar;246(1):221–38.
[19] Bogdał MN, Hat B, Kochańczyk M and Lipniacki T. Levels of pro-apoptotic regulator Bad and anti-apoptotic regulator Bcl-xL determine the type of the apoptotic logic gate. BMC Syst Biol. 2013 Jul 24;7:67.
[20] Wang RS, Saadatpour A and Albert R. Boolean modeling in systems biology: an overview of methodology and applications. Phys Biol. 2012 Oct;9(5):055001.
[21] Berestovsky N, Zhou W, Nagrath D and Nakhleh L. Modeling integrated cellular machinery using hybrid petri-boolean networks. PLoS Comput Biol. 2013 Nov;9(11):e1003306.
[22] Kim SY, Imoto S and Miyano S. Inferring gene networks from time series microarray data using dynamic Bayesian networks. Brief Bioinform. 2003 Sep;4(3):228–35.
[23] Logsdon BA, Hoffman GE and Mezey JG. Mouse obesity network reconstruction with a variational Bayes algorithm to employ aggressive false positive control. BMC Bioinformatics. 2012 Apr 2;13:53.
[24] Salgado H, Peralta M et al. RegulonDB (version 8.0): omics data sets, evolutionary conservation, regulatory phrases, cross-validated gold standards and more. Nucleic Acids Res. 2013 Nov; doi: 10.1093/nar/gks1201.
[25] Kanehisa M, Goto S, Furumichi M, Tanabe M and Hirakawa M. KEGG for representation and analysis of molecular networks involving diseases and drugs. Nucleic Acids Res. 2010;38:D355–D360.
[26] Kanehisa M, Goto S, Sato Y, Furumichi M and Tanabe M. KEGG for integration and interpretation of large-scale molecular datasets. Nucleic Acids Res. 2012;40:D109–D114. Schaffter T, Marbach D and Floreano D. GeneNetWeaver: in silico benchmark generation and performance profiling of network inference methods. Bioinformatics. 2011 Aug 15;27(16):2263–70.
[27] Shannon P, Markiel A, Ozier O, Baliga NS, Wang JT, et al. Cytoscape: a software environment for integrated models of biomolecular interaction networks. Genome Res. 2003;13:2498–2504.
[28] Mendes P, Hoops S, Sahle S, Gauges R, Dada J and Kummer U. Computational modelling of biochemical networks using COPASI. Methods Mol Biol. 2009;500:17–59.
[29] Oh E, Lu M, Park C, Oh H Bin, Lee SY, et al. Dynamic modeling of lactic acid fermentation metabolism with Lactococcus lactis. J Microbiol Biotechnol. 2011;21:162–169.
[30] Adra S, Sun T, MacNeil S, Holcombe M, Smallwood R. Development of a three dimensional multiscale computational model of the human epidermis. PLoS One. 2010;5:e8511.
[31] Kühnel M, Mayorga LS, Dandekar T, Thakar J, Schwarz R, et al. Modelling phagosomal lipid networks that regulate actin assembly. BMC Syst Biol. 2008;2:107.
[32] Nagasaki M, Saito A, Jeong E, Li C, Kojima K, Ikeda E and Miyano S. Cell Illustrator 4.0: a computational platform for systems biology. Stud Health Technol Inform. 2011;162:160–81.
[33] Nagasaki M, Doi A, Matsuno H and Miyano S. A versatile Petri net based architecture for modeling and simulation of complex biological processes. Genome Informatics. 2004;15(1):180–197.
[34] Choi JH, Banks AS, Estall JL, Kajimura S, Boström P, et al. Anti-diabetic drugs inhibit obesity-linked phosphorylation of PPARγ by Cdk5. Nature. 2010;466:451–456.
[35] Doi A, Nagasaki M, Fujita S, Matsuno H, Miyano S. Genomic Object Net: II. Modeling biopathways by hybrid functional Petri net with extension. Applied Bioinformatics. 2004;2:185–188.
[36] Jurkowski W, Roomp K, Crespo I, Schneider JG, Del Sol A. PPARγ population shift produces disease-related changes in molecular networks associated with metabolic syndrome. Cell Death Dis. 2011 Aug 11;2:e192.
Part II: Cellular level
Jakub Wach, Marian Bubak, Piotr Nowakowski, Irena Roterman, Leszek Konieczny, and Katarzyna Chłopaś
2 Negative feedback inhibition – Fundamental biological regulation in cells and organisms

The functioning of biological systems is governed by the laws of physics and chemistry. Biological processes occur spontaneously and are subject to thermodynamic regulation. Thermodynamics introduces a distinction between open and closed systems. Reversible processes occurring in closed systems tend to a state of equilibrium. This state is achieved when the reaction proceeds at an identical rate in both directions (from substrate to product and from product to substrate) – a situation which can be denoted v1 = v2. In terms of energy transfer this state is inert, with nil spontaneity (ΔG = 0) (Fig. 2.1 (a)). In thermodynamically open systems (including all biological systems) equilibrium is not achieved spontaneously – it can, however, be maintained by continually replenishing the required substrates. This so-called stationary state is characterized by nonzero spontaneity (ΔG ≠ 0) (Fig. 2.1 (b)).

Maintaining a stationary state requires regulation. In nonsentient systems, such as the interior of a cell, regulation must be automatic and is typically based on negative feedback loops, as shown in Fig. 2.2. The negative feedback loop may be symbolically represented as a closed circuit in which a detector monitors the state of a controlled process while an effector counteracts detected changes. The function of the effector depends on signals received from the detector (see Fig. 2.2). In cells and organisms detectors are usually referred to as "receptors". Most receptors are proteins which react with the products of a controlled process. Their genetically conditioned affinity for a specific product enables them to control its concentration. Such structures can be found inside the cell, as well as in the cellular membrane, with receptors protruding into the environment and capable of registering external signals (Fig. 2.3 (a), (b) and (c)).
Receptors are usually allosteric proteins, i.e. proteins which can adopt two structurally distinct forms, depending on interaction with a ligand. This interaction causes the receptor component of the feedback loop to flip to its alternative conformation, triggering a signal which is recognized by the effector. Intracellular receptors often form complexes with subunits responsible for effector processes (Fig. 2.3). Detector-effector complexation enables the signal to be conveyed directly to the effector by means of allosteric effects. Proteins that perform this function are called regulatory enzymes or proteins. Their receptor components are referred to as regulatory subunits while their effectors are called catalytic subunits (as they typically exhibit enzymatic properties).
32 | Jakub Wach et al.
Fig. 2.1: Two types of stability: equilibrium (closed system, (a), ΔG = 0) and stationary state (open system, (b), ΔG ≠ 0).
Fig. 2.2: Symbolic representation of a negative feedback loop with its principal components.
Fig. 2.3: The function of an intracellular regulatory enzyme: (a) and (b) allosteric detector subunits responsible for binding the product in complex with effector (catalytic) subunits; (c) detectors built into the cellular membrane and capable of registering external signals.
Fig. 2.4: Inverse relationship between the receptor's affinity for a controlled product and the product's target concentration. Low affinity results in high concentration and vice versa.
The degree of affinity of the receptor for the controlled ligand determines that ligand’s target concentration (Fig. 2.4). Lower affinity permits greater concentration and vice versa. Affinity depends on the structure of the receptor protein and therefore on its genetic blueprint. This phenomenon explains the differing concentrations of various chemicals inside cells and organisms. Receptor affinity may also change as a result of additional structure-altering reactions, e.g. phosphorylation. This is often observed in hormonal signaling pathways which force the cell to alter its behavior (Fig. 2.5).
Fig. 2.5: Downregulation of receptor affinity by phosphorylation. Arrows indicate receptor activation and signal initiation. This effect is often triggered by hormones.
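The inverse relationship between receptor affinity and target concentration can be illustrated with a minimal discrete-time sketch (not taken from the book; all rates and thresholds are arbitrary). A high-affinity receptor corresponds to a low regulation threshold, so production is shut off early and the steady concentration stays low; a low-affinity receptor tolerates more product before signalling.

```python
def simulate_loop(threshold, steps=200, production=1.0, decay=0.05):
    """Toy negative feedback loop: the effector produces at a fixed rate while
    the concentration is below the receptor's threshold, then is switched off;
    the product decays by a constant fraction each step."""
    concentration = 0.0
    for _ in range(steps):
        if concentration < threshold:      # receptor: effector stays enabled
            concentration += production
        concentration *= (1.0 - decay)     # first-order removal of product
    return concentration

# High affinity (low threshold) vs low affinity (high threshold).
for threshold in (5.0, 20.0):
    print(threshold, round(simulate_loop(threshold), 2))
```

The concentration settles near the programmed threshold in each case, mirroring Fig. 2.4: the receptor's (genetically conditioned) affinity sets the product's target level.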
The negative feedback loop is an abstraction of a biological process. Its constituent elements – receptors and effectors – provide answers to two key questions: "how much?" and "how?" respectively. The receptor addresses the "how much?" aspect since it determines the level of activity or product concentration. For its part, the effector initiates action when triggered by the detector; it therefore embodies the "how?" aspect – as shown in Fig. 2.6.

Fig. 2.6: Information conveyed by a negative feedback loop consisting of a receptor and an effector.

Since almost all processes occurring in the cell and in the organism are subject to automatic control, we can assume that all cellular and organism-related structures belong to regulatory circuits. This supports the conclusion that the negative feedback loop can be treated as a basic structural and functional unit of biological systems. Evidently, the goal of the cell and the organism is to maintain a steady state of chemical reactions and product concentrations. Signals originating beyond the regulatory circuits for a given process and altering the sensitivity of its receptor are called steering signals. They facilitate coordination of various biological processes, which, in turn, ensures targeted action and stabilizes the cell's environment (see Fig. 2.7).

Fig. 2.7: Negative feedback loop with indication of a steering signal modifying the affinity of the receptor for the controlled product.
Coordination is effected by ensuring that the product of one process acts upon the receptor of another process, modifying its affinity. These types of relationships aggregate cellular processes into higher-order chains (Fig. 2.8). Signals sent by the organism can also exert a coordinating influence, which is strictly hierarchical in nature, overriding cell-level control. In this way the organism coordinates the function of various cell types, ensuring overall homeostasis.
Fig. 2.8: Symbolic representation of coupling between regulatory circuits: (a) cooperation (the product of one circuit is used by the effector of another circuit as a substrate); (b) coordination (the product of one circuit modifies the receptor of another circuit, altering its affinity). Dashed line – signal; continuous line – product/substrate.
Such action can be equated to a command: coordinating signals derived from the organism are usually based on covalent reactions and subject to strong amplification. In contrast, intracellular signals can be compared to suggestions: they manifest themselves as changes in concentrations of various substrates, made possible by the limited volume of the cell (this also explains why cells are universally small). Changes in the quantity of product of one regulated process act as a signal that can be recognized by the receptor of another process. The organism works by coordinating the function of various specialized tissues and cells. The relation between the organism and an individual cell can therefore be likened to the relation between the state and a single individual (Fig. 2.9). The principal task of the organism is to ensure homeostasis by counteracting external stimuli. Signals issued by the organism must be transmitted over long distances (via the bloodstream, other body fluids, nerves etc.). As a result, they require efficient encoding and strong amplification in the target area.
Fig. 2.9: Schematic representation of the relation (coupling) between the organism and its cells. Signals issued by the organism modify the function of specific cells by altering the affinity of receptors (taking effect as a change in the concentration/activity setpoint).
The principal benefit of encoding is that the signal may retain relatively low intensity at its source. This enables rapid transmission since fewer signal carriers need to be synthesized. A fitting analogy is communication by radio: the encoded signal is highly specific and stands out against the background. It can be readily detected and amplified inside the target cell, and is not drowned out by ambient noise. Both the receptor and the intracellular signal pathway are elements of the decoding mechanism, while the product of the process triggered by the signal emerges as decoded (see Fig. 2.10).

Amplification is typically provided by a signaling cascade: a multilayer system in which the signal is amplified at each stage by dedicated enzymes (Fig. 2.11). The positive feedback loop is another very potent amplifier, used in situations where amplification must be unusually strong (e.g. in extracellular processes). Examples of this mechanism include blood coagulation and complement system activation; it can also be used to modulate incoming signals (Fig. 2.12).

Due to the commanding nature of organism-derived signals, cells require a way to switch them off when necessary. This protects the cell from the dangers of prolonged activation by signals issued by the organism. As it turns out, cells are equipped with specific signal breakers – G proteins. Each G protein complex comprises a receptor which undergoes structural rearrangement when triggered by an input signal, activating an intracellular signaling pathway. At the same time the complex acts as an inhibitory enzyme, terminating the signal after a short period (see Fig. 2.13). The signaling cycle may recur as needed, producing the desired action while protecting the cell from uncontrolled activation.

Organism-derived signals which do not require an immediate effect are transmitted in an "economic" manner via body fluids – typically through the bloodstream.
Fig. 2.10: Encoding and decoding signals sent by the organism to the cell (hormonal and nerve transmission).
Fig. 2.11: Cascade amplifier – principle of operation.

Fig. 2.12: The positive feedback loop as an amplifier (a) and the negative feedback loop as a regulator (b). Positive feedback amplifies the signal while retaining its directionality. Negative feedback counteracts the signal's effect and therefore opposes its direction.
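The cascade amplifier and the two feedback polarities can be illustrated numerically with a toy sketch (gains and rate constants are arbitrary illustrations, not measured values; the feedback variable should be read as the deviation from a setpoint):

```python
# Cascade amplifier: each stage of the signaling cascade multiplies the
# number of activated molecules (per-stage gain of 100 is illustrative).
signal = 1.0
for gain in (100, 100, 100):   # three enzymatic stages
    signal *= gain
print(signal)  # one triggering molecule -> 1e6 activated effectors

def feedback(sign, steps=10, k=0.5):
    """Discrete-time feedback on a deviation x: sign=+1 self-amplifies
    (positive feedback), sign=-1 self-limits (negative feedback)."""
    x = 1.0
    history = [x]
    for _ in range(steps):
        x += sign * k * x
        history.append(x)
    return history

print(feedback(+1)[-1])   # positive feedback: the deviation grows explosively
print(feedback(-1)[-1])   # negative feedback: the deviation decays toward zero
```

This matches Fig. 2.12: positive feedback amplifies in the signal's own direction, while negative feedback opposes it and drives the controlled quantity back toward its setpoint.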
Fig. 2.13: The signal breaker as part of a hormonal signaling pathway. The hormonally activated receptor binds the signal inhibitor (Grb2), permitting brief activation of the enzyme which initiates signaling inside the cell.
When rapid reaction is required (such as in the case of muscle contraction), nerves are used instead and the signal is only converted into a humoral form immediately prior to penetrating the target cell, where it can be decoded and amplified.

In summary, negative feedback loops can be described as regulatory systems while their stabilizing influence can be characterized as a manifestation of regulation. Regulatory action may only occur when the loop is a closed one. Signals which affect the initial stabilization program are referred to as steering signals. By applying the concepts of control engineering we may further divide such signals into three groups.
1. Tracking control (Fig. 2.14). In this type of mechanism the effect closely tracks the input signal. Tracking control is typically observed in metabolic processes and can be compared to the action of a mechanical servo.
2. Extremal control (Fig. 2.15) – control mode in which the signal acts as a trigger, unambiguously initiating some complex process such as blood coagulation or cell division.
3. Sequential control (Fig. 2.16) – typical for development processes. Here, signals are produced sequentially according to a predetermined algorithm, each following the completion of the previous phase.

Fig. 2.14: The principle of tracking signalization. Dashed line – steering signal. Black line – effect.

Fig. 2.15: The principle of extremal control. The signal initiates (dashed line) the process, which then proceeds to its expected conclusion (black line).

Fig. 2.16: The principle of sequential control. The effect is achieved in stages. Each stage ends with a signal which triggers the subsequent stage (dashed line). Schematic depiction of sequential control – black line.
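The three steering modes can be caricatured in a few lines of code (shapes only; all values and function names are illustrative, not from the book):

```python
def tracking(signal, gain=1.0):
    """Tracking: the effect follows the steering signal point by point."""
    return [gain * s for s in signal]

def extremal(trigger_at, length, process_value=1.0):
    """Extremal: a single trigger unambiguously starts the full process,
    which then runs to completion regardless of the input."""
    return [process_value if t >= trigger_at else 0.0 for t in range(length)]

def sequential(stage_lengths):
    """Sequential: each completed stage emits the signal starting the next."""
    effect, stage = [], 0
    for duration in stage_lengths:
        stage += 1
        effect.extend([stage] * duration)
    return effect

print(tracking([0, 1, 2, 1]))                 # effect mirrors the signal
print(extremal(trigger_at=2, length=5))       # off, off, then fully on
print(sequential([2, 2, 2]))                  # staircase of successive stages
```

The contrast is the key point: tracking responds continuously, extremal control responds once and irreversibly, and sequential control chains stage-completion signals according to a predetermined algorithm.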
40 | Jakub Wach et al.
The negative feedback loop can operate in an automatic manner and is therefore autonomous (Fig. 2.17 (a)); however, its autonomy may be affected by external stimuli which we refer to as steering signals. When the steering signal originates in another feedback loop a coupling can be said to exist between both loops. Coupling influences regulation by subordinating the controlled circuit to the circuit which controls its receptor (see Fig. 2.17 (b)).
Fig. 2.17: Hierarchical coupling between feedback loops: controlling (dark) and controlled (light) components. (a) independent loop; (b) coupled loops with hierarchical dependence; (c) and (d) coupled loops with non-hierarchical dependence.
New regulation effects appear when mutual coupling between a number of circuits produces a closed loop and the (formerly) autonomous circuit begins acting in accordance with signals sent by the final circuit in the chain. As a result, each component of the control chain is, in itself, subject to external control. An interesting problem emerges at this stage: since none of the coupled circuits retains its autonomy, the system as a whole loses its point of reference and may become unstable. The emergent regulatory "supercircuit" (Fig. 2.17 (c) and (d)) is very flexible and hence requires access to "memory" which enables it to perform its regulatory function. Coupling is not the only way in which efficient regulation can be established. A different approach to control relies on the correct arrangement of uncoupled feedback loops. Biological systems frequently employ autonomous loops which counteract each other in order to reduce undesirable fluctuations in the controlled process (Fig. 2.18).
Fig. 2.18: Regulatory circuits linked by a common program – amplitude control. Black arrows indicate that each receptor sends out signals triggering a process which is able to counteract only decreases or only increases in the observed quantity.
Each feedback loop can only counteract changes occurring in one specific direction. An example is the blood sugar level control mechanism which releases glucagon as its receptor can only react to low glucose levels. Counteracting excessive concentrations of glucose requires a different mechanism which triggers the release of insulin – if this mechanism is not present or if it malfunctions, diabetes occurs. The interplay between both circuits ensures a steady glucose concentration, with limited fluctuations. Such mechanisms are quite common on the level of the organism. Inside cells, where reaction products can be quickly consumed or expelled, there is no need for such systems – instead, efficient regulation may be provided by receptors which detect low concentrations of a given substance and trigger increased production. It is worth noting that not all cellular processes need to be tightly controlled. Metabolic chains often involve many intermediates, with only the final product subject to control. The effector of a regulatory circuit may perform a number of successive actions limited only by the availability of substrates. The products of metabolic pathways are genetically conditioned and regulated. Likewise, genes determine the affinity of receptors to their target substances, thus ensuring proper functioning of the cell.
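The insulin–glucagon interplay described above can be sketched as two uncoupled, opposing threshold loops (all numbers are illustrative, in mg/dL-like units; this is a caricature, not a physiological model):

```python
def regulate(glucose, low=70.0, high=110.0, steps=50, dose=5.0):
    """Two autonomous opposing loops: one receptor reacts only to low
    glucose (glucagon release, raising it), the other only to high
    glucose (insulin release, lowering it)."""
    for _ in range(steps):
        if glucose < low:        # glucagon loop counteracts decreases only
            glucose += dose
        elif glucose > high:     # insulin loop counteracts increases only
            glucose -= dose
    return glucose

print(regulate(40.0))    # hypoglycemic start: pulled up into the band
print(regulate(200.0))   # hyperglycemic start: pushed down into the band
```

Disabling the `elif` branch reproduces the pathology described in the text: with the insulin loop missing, excursions above the band are never corrected, as in diabetes.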
2.1 Negative feedback-based systems simulations

2.1.1 Introduction
In order to simulate biological systems modeled using negative feedback loops, a specialized piece of software had to be created. The application is capable of running a simulation of user-defined organized systems, represented by a model and composed of multiple negative feedback inhibition systems (NFIS) and external receptors, connected via cooperation or coordination. It is available courtesy of Jagiellonian University – Medical College, at the following address: http://crick.cm-uj.krakow.pl:8080/nfs/
2.1.2 Glossary of Terms
– NFIS: negative feedback inhibition system. It is composed of a coupled receptor and effector in a closed loop (see Fig. 2.19).
– Effector: component of an NFIS responsible for delivering a product (see Fig. 2.20).
– Receptor: component of an NFIS responsible for controlling an effect. If the programmed value of the regulation threshold is reached, the receptor issues a control signal, disabling the effector (see Fig. 2.21).
– External receptor: receptor coupled with another NFIS (considered here as the secondary NFIS). Regulation thresholds of NFISs may be affected by signals from such external receptors, coupling them. Interconnection is mostly realized by products of NFISs, which thus play the role of coupling signals (see Fig. 2.22).
– Organized systems (OS): all interconnected negative feedback inhibition systems.
– State: concentration of products delivered by effectors.
– Old/previous state: state of the OS, as defined above, during the previous time step; used to describe the simulation algorithm from a dynamic perspective.
Fig. 2.19: NFIS.

Fig. 2.20: Effector – the effector of the initially considered NFIS.

Fig. 2.21: Receptor – the receptor of the initially considered NFIS.

Fig. 2.22: External receptor.
2.1.3 Software model

The model represents a negative feedback-based biological system. The full model defines the organized system's (OS) structure, with connections between NFISs and values of the control parameters.
2 Negative feedback inhibition
| 43
The model is expressed in a language called JSON. The application provides two editors. One allows for changes in both OS structure and parameters, but requires knowledge of the JSON format. The other editor is more user-friendly but limited to changes in the defined parameters. The general model structure is as follows:
OS (organized systems)

A single organized system (OS) entry is composed of:
– NFISs: list of coupled effectors and receptors.
– Receptors: list of external receptors coupled with other NFISs. Each such receptor can modify threshold values of coupled NFISs.
– Init: initial state of the OS. Defines the concentration of products for the first run of the simulation. This is especially important for NFISs with the substrate parameter defined; in such cases, some substrate is required for any production. The initial concentration of each substrate should be defined in this section.

An OS also has the following properties:
– Name: user-defined identification of an OS. Users of the application can load an OS using the name.
– Description: user-defined description of an OS. The description should elaborate on the purpose of the OS or the biological phenomena it models.
NFIS

One Negative Feedback Inhibition System (NFIS) is composed of the following components:
– effector
– receptor
An NFIS has an id property. The id defines a logical name of an NFIS. It is used by the application in order to uniquely identify an NFIS throughout a simulation.
Effector

A single effector is characterized by the following properties:
– Product: mandatory – name of the product delivered by this particular effector.
– Substrate: optional – defines a product that should be used as a substrate for this particular effector. The product is delivered by another effector and is referenced by that effector's product parameter value. Substrate–product exchange means that the NFIS systems are connected by cooperation.
44 | Jakub Wach et al.
If the parameter is not defined, the substrate is considered "always available". This means that the effector always delivers exactly the number of product molecules defined by the production parameter.
– Production: mandatory – maximum production rate of the effector, defined as the number of molecules per time unit. "Maximum rate" means that if the substrate is available in a quantity greater than or equal to the value of this parameter, the effector delivers exactly that amount of product.
– Outflow: optional – product outflow rate, defined as the number of molecules per time unit. Outflow models diffusion as it occurs in biological systems.
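The effector rules above can be condensed into a few lines of Python. This is an illustrative sketch only, not the application's actual code; the function and key names are hypothetical and simply mirror the parameters described above.

```python
def effector_step(effector, state, production_on):
    """One time step of product delivery for a single effector.

    `effector` mirrors the model parameters: "product" (mandatory),
    "production" (mandatory) and "substrate" (optional)."""
    if not production_on:
        return 0  # coupled receptor signal is active: effector is off
    substrate = effector.get("substrate")
    if substrate is None:
        # substrate "always available": deliver exactly `production` molecules
        delivered = effector["production"]
    else:
        # deliver min(available substrate, production) and consume the substrate
        available = state.get(substrate, 0)
        delivered = min(available, effector["production"])
        state[substrate] = available - delivered
    state[effector["product"]] = state.get(effector["product"], 0) + delivered
    return delivered

# Only 2 molecules of substrate are available, so only 2 are delivered:
state = {"first-product": 2}
effector_step({"product": "second-product", "substrate": "first-product",
               "production": 3}, state, True)
print(state)  # {'first-product': 0, 'second-product': 2}
```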
Receptor

A single receptor is characterized by the following properties:
– Delay: optional – time required for this particular receptor to become active. If a product threshold is exceeded, the signal of this (coupled) receptor affects the effector only after the defined number of time units. This parameter was introduced to model communication in biological systems, which is rarely immediate.
– Thresholds: mandatory – list of threshold levels (values) defined for this particular receptor. One of the defined values will be used to activate the receptor during simulation. Each threshold value configuration is characterized by the following properties:
– Signal: optional – defines which external receptor must be active for this threshold value to apply. If this parameter is not defined, the threshold value will always be considered for the receptor's activation. In other words, this parameter defines the condition under which a value will or will not be considered.
– Product: mandatory – defines which product is considered when this configuration is active. The considered product's concentration is checked against the threshold value. For a proper negative feedback loop there has to be at least one threshold configuration with the value of this parameter equal to the product of the coupled effector. If the value of this parameter refers to any other product, there is coordination between this NFIS and the NFIS delivering that product.
– Value: mandatory – defines the threshold concentration of the product. If it is exceeded for the amount of time defined by the value of the delay parameter, the signal of this receptor becomes active.
External receptor

A single external receptor is a receptor coupled with some other NFIS. It differs from a regular receptor by only one added property: id. This property gives the receptor a logical name, which can be referenced by other receptors.
2.1.4 Application manual

The application allows the user to select a predefined model, modify it and simulate it for a chosen period of time. The user can modify either the structure of the OS or just the control parameters. Modifications of the first kind are facilitated by a JSON-based editor. The second type of modification can be conducted with a more user-friendly parameter editor. After loading the application, the main screen looks as shown in Fig. 2.23.
Fig. 2.23: Main screen on start-up.
Main screen

The majority of the screen is occupied by three sections, all empty at the moment because no OS has been selected. The sections are:
1. OS definition: introduces the user to the software. As previously mentioned, this section allows the user to modify the model using one of two editors: the structure editor (JSON, text-based) or the property editor (easy to use, graphic-based).
2. OS graph: a graphical representation of the model. All NFISs, receptors, effectors and external receptors are presented using icons as depicted in Figs. 2.19 to 2.22. Additionally, connections between parties are captured on the graph. This includes cooperation (product – substrate), coordination (product to receptor) and regulation threshold change (external receptor to NFIS receptor). Both the product (substance) and the signal (receptor) connections have distinct graphical representations, as shown in Figs. 2.24 and 2.25. A product is denoted by a solid line; the color of the line is the same as the color of the product on the simulation's concentration chart (described later), and the line is annotated with the name of the product as defined in the model. A signal is denoted by a dashed line.
3. Simulation chart: represents the concentration of all products present in the model. The chart is only available after a simulation run.

Fig. 2.24: Product connection type.
Fig. 2.25: Signal connection type.
OS picker

The most interesting part of the screen at this point is the OS picker, allowing the user to choose an OS to be simulated. Upon a right mouse click, a list of the available OS models will appear (see Fig. 2.26).
[Menu entries: Pick a system... | Single component | Cooperation example | Cooperation with external example | Coordination example]
Fig. 2.26: List of OS models in the OS picker.
Once the user selects an OS, its model is loaded into the application and the two topmost sections are populated with data.
OS definition

The left part of the OS definition section is occupied by the structure editor. Once an OS model is loaded, the editor is populated with the exact JSON representation. The format is described in the "OS model example" section (2.1.5). The editor appears as in Fig. 2.27. There are two buttons available for this editor. The Submit button should be used to let the application know that the user has made changes to the model. Upon clicking, the current JSON is sent to the application. First, the model is validated. In case of any errors within the model, it is rejected, with an appropriate error message displayed in the main window (see Fig. 2.28). If there are no errors within the model, it is remembered by the application as the current one. From then on, each simulation run will use the submitted model.
Fig. 2.27: Structure editor.
System definition validation error. Following errors were found. - Product ‘first-productdw’ not found for Receptor with ID ‘external’ Fig. 2.28: Validation error example.
Fig. 2.29: Property editor.
The Format code button plays an auxiliary role. It can be used to format the model JSON into a nice, readable form, so a user making changes to the model does not need to take care of formatting on their own. The right part of the OS definition section is occupied by the property editor. The editor is also initialized right after a model is selected or submitted (Submit button). Upon initialization, each editable property of the model is turned into a small integer value editor (Fig. 2.29). Each NFIS and external receptor has its own box with a heading containing its name ("id" property). The NFIS's heading also contains the name of the product delivered by the effector. For example, a heading that reads "NFIS 'first' (produces: 'first-product')" means that the box presents the properties of a negative feedback NFIS with the "id" property equal to "first", delivering a product named "first-product". The first row of the box contains editors for basic effector and receptor parameters, that is the effector's "production" and "outflow" along with the receptor's "delay". The second row, preceded by a heading that reads "Thresholds", contains editors for each receptor threshold defined in the model for this particular NFIS. Each threshold editor has a name which can take one of two forms:
– if the "signal" parameter is defined, the form is "signal/product",
– if the "signal" parameter is not defined (the threshold is always active), the form is "product".
Parameter values set in the editor are applied with the Set parameters button. Once a user has changed the value of any parameter and is ready to use the new settings for a simulation, the button should be clicked. Upon clicking, the application saves the new model for simulation and also updates the structure editor.
OS graph

This section is devoted exclusively to the OS model graph. This graphical representation of the model does not contain any property values, as it aims to help the user understand the model's structure. Let us consider the example in Fig. 2.30. The example depicts an OS consisting of two negative feedback NFISs, called "first" and "second". Effectors and receptors are labeled as "first-effector", "second-effector" and "first-receptor", "second-receptor", respectively. There are also connections depicting the structure. Between the receptor and the effector of the same NFIS there is a pair of connections – product and signal, as the feedback loop is closed. There is also a product connection between the NFIS "first" and the receptor of the NFIS "second". Such a connection means that a threshold of the latter NFIS depends on the product of the first one. In other words, there is coordination between the two NFISs. Two similar connections between empty space and effectors depict substrate that flows directly from the environment.
2 Negative feedback inhibition
| 49
Fig. 2.30: Example OS graph.
Simulation run

Once a model has been loaded into the application and all of the properties have been set to appropriate values, it is time to simulate the OS. In the top right-hand corner of the screen, the user can find a simulation run box (Fig. 2.31).
Fig. 2.31: Simulation run box.
The box allows the user to enter the desired simulation length (in arbitrary time units). The default value is 100 time units. Clicking the Simulate button will cause the simulation algorithm to be run.
Simulation chart

Once the simulation has been run, the product concentration data is returned to the browser and displayed to the user. An example simulation chart is shown in Fig. 2.32. The result of each simulation is presented as a separate panel. The panel is composed of two parts – left and right. The right-hand part is a chart, presenting the concentration of two products called "first-product" and "second-product". The colors of the products are exactly the same as on the OS graph. On the horizontal axis we have time steps of the simulation. On the vertical axis, the concentration in molecules is given. The user can export the contents of the chart as an image by using the small icon in the top right-hand corner of the chart.
Fig. 2.32: Simulation run chart.
The left-hand part summarizes the model that was used to run the simulation. The summary is composed of the model’s name (“cooperation example” in this case) and all of the control parameters with their values. The user can run as many simulations as desired. Each will be presented as a separate panel. A panel can be removed from the screen by using the small “X” button located in the top right-hand corner of each panel. The following section should be considered as optional, only for interested readers. It concerns technical details of the simulation software – in particular the OS model in JSON language and the algorithm used.
2.1.5 OS model example

Definition

The following box contains an example of an OS model, written in the JSON language, as supported by the application.

{
  "name": "Cooperation example",
  "description": "Two NFISs for testing purposes",
  "receptors": [],
  "NFISs": [
    {
      "id": "first",
      "receptor": {
        "delay": 5,
        "thresholds": [{ "product": "first-product", "value": 6 }]
      },
      "effector": {
        "product": "first-product",
        "production": 3,
        "outflow": 1
      }
    },
    {
      "id": "second",
      "receptor": {
        "delay": 1,
        "thresholds": [{ "product": "second-product", "value": 12 }]
      },
      "effector": {
        "product": "second-product",
        "substrate": "first-product",
        "production": 2,
        "outflow": 1
      }
    }
  ],
  "init": { "first-product": 5 }
}
Description

The example presents an OS composed of the following:
1. No external receptors defined (note that the "receptors" array is empty).
2. Negative feedback NFIS named "first" (id parameter). The NFIS effector delivers a product named "first-product" with a production rate of three molecules per time unit and an outflow of one molecule per time unit. There is no substrate defined, therefore the software assumes that the substrate is always available in unlimited quantity. The NFIS receptor has a delay of five time units. The NFIS has only one threshold value defined. The programmed threshold value activates the receptor if the concentration of the product called "first-product" exceeds six molecules. Additionally, since the "signal" parameter is not defined, the value of six will always be used for activating the receptor.
3. NFIS with id "second". The NFIS effector delivers a product called "second-product". The maximum production rate is set to two molecules per time unit, with an outflow of one molecule per time unit. The substrate is set to the product called "first-product". We can observe that it is exactly the same product as delivered by the NFIS called "first". Therefore, it can be said that NFISs "first" and "second" are cooperating. The receptor has its delay property set to one
time unit. There is also a threshold value defined for the product called "second-product". The threshold value is set to 12 molecules. As in the previous case, the property called "signal" is not defined for the receptor, so the threshold value is always considered by the simulation software.
4. Initiation of the state. There is an entry setting the concentration of the product called "first-product" to five molecules.
In summary, there are two cooperating NFISs in the model. The simulation will start with the first NFIS's product concentration set to five molecules. There are a total of eight control parameters that can be changed by the user without changing the structure of the OS.
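The count of eight control parameters can be checked programmatically. The following Python sketch is purely illustrative (it is not part of the application); it transcribes the model as a dictionary and counts the editable values: production, outflow, delay and each threshold value.

```python
model = {
    "NFISs": [
        {"id": "first",
         "receptor": {"delay": 5,
                      "thresholds": [{"product": "first-product", "value": 6}]},
         "effector": {"product": "first-product", "production": 3, "outflow": 1}},
        {"id": "second",
         "receptor": {"delay": 1,
                      "thresholds": [{"product": "second-product", "value": 12}]},
         "effector": {"product": "second-product", "substrate": "first-product",
                      "production": 2, "outflow": 1}},
    ],
}

def count_control_parameters(model):
    """Count user-editable values: production, outflow, delay, threshold values."""
    n = 0
    for nfis in model["NFISs"]:
        eff, rec = nfis["effector"], nfis["receptor"]
        n += ("production" in eff) + ("outflow" in eff) + ("delay" in rec)
        n += len(rec["thresholds"])
    return n

print(count_control_parameters(model))  # 8
```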
2.1.6 Simulation algorithm

The application uses a very simple, discrete-time algorithm to simulate the model. For each point in time (also called a time step), the software computes the new state taking into account the model, which is constant, and the previous state. A single iteration of the algorithm, computing product concentrations and signal activations (the state), is described in the following.
Phase I – Outflow

This phase models diffusion of products (substances), as naturally occurs in a biological system. It starts with the concentrations from the previous state. If this is the first iteration, the initial state is considered; if a product is not set in the initial state, its concentration is assumed to be zero. For each NFIS, the previous concentration of the product delivered by the NFIS is decreased by the value of the outflow parameter, as defined for the NFIS. If the result of this subtraction is negative, the resulting concentration is set to zero. The phase ends with a new state, defined as follows:
1. Product concentrations are decreased by outflow, but cannot be less than zero molecules,
2. Receptor signals are the same as in the previous state.
It is worth mentioning that the old state is also modified: product concentrations are decreased by outflow. This step is necessary because the old state will later be used in the production phase to obtain substrate concentrations.
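Phase I can be summarized in a short Python sketch (illustrative only; the function and key names are hypothetical):

```python
def outflow_phase(prev_state, nfis_list):
    """Phase I: each product's concentration drops by its effector's outflow,
    floored at zero molecules; receptor signals are left untouched."""
    state = dict(prev_state)
    for nfis in nfis_list:
        product = nfis["effector"]["product"]
        outflow = nfis["effector"].get("outflow", 0)
        state[product] = max(0, state.get(product, 0) - outflow)
    return state

print(outflow_phase({"first-product": 5},
                    [{"effector": {"product": "first-product", "outflow": 1}}]))
# {'first-product': 4}
```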
Phase II – Production

This phase encompasses the delivery of all products present in a model. For each effector of the model, the following sequence of events occurs. The activation status of the coupled receptor is taken from the previous state. If the signal is active, no processing is done; we could say that this effector is in fact "turned off". If there is no active signal, the effector is in a working state and can deliver product. First, the algorithm calculates the available substrate. If the parameter called "substrate" is not defined for the effector, it is assumed that the required substrate is delivered from the environment and that there is a sufficient quantity of it. In this case, effector production reaches its maximum, equal to the value of the "production" parameter. If the "substrate" parameter is defined, the available substrate concentration is retrieved from the old state. The quantity of delivered product is then calculated as the minimum of two values – the available substrate and the "production" parameter. If there is less substrate available than the maximum production capability of the effector, the output quantity is equal to the substrate quantity. In other words, the effector is not able to deliver more molecules of the product than there is substrate available. In the last step, the concentration of the substrate is decreased by the quantity of the product. Subtraction is performed for both the new and the old state. The old state has to be updated since there can be more than one effector using the same substrate. The sequence of events ends with the new state updated once again: the concentration of the product is increased by the delivered value. The old state remains the same. It is noteworthy that when a "substrate" parameter is defined, product delivery for an effector is quite complex, because the software has to take into account that multiple effectors can use the same product as the substrate.
Therefore, if one NFIS uses a substrate, the quantity available for other effectors during the same time step (iteration) has to be decreased by the amount of the substrate used. On the other hand, multiple effectors could be cooperating, creating a chain or even closed cycle of product-substrate dependency. If, during one iteration, an effector delivers some product, the product can play the role of substrate for other effectors. In such a situation, the amount of delivered product is not available as a substrate in the same time step. Therefore, both the old and the new state are constantly updated in each time step. Most importantly, in both cases substrate concentration is always decreased by the amount used.
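The two-state bookkeeping described above can be sketched compactly in Python. This is an illustrative reconstruction, not the application's code; names are hypothetical.

```python
def production_phase(old_state, new_state, nfis_list, signals):
    """Phase II: every effector whose coupled receptor is silent delivers
    min(available substrate, production). Substrate is read from the old
    state and decremented in both states, so product made in this very step
    cannot be reused as substrate, and effectors sharing one substrate see
    the amounts already consumed by the others."""
    for nfis in nfis_list:
        eff = nfis["effector"]
        if signals.get(nfis["id"]):
            continue  # effector "turned off" by its receptor
        substrate = eff.get("substrate")
        if substrate is None:
            delivered = eff["production"]  # environment-supplied substrate
        else:
            delivered = min(old_state.get(substrate, 0), eff["production"])
            old_state[substrate] = old_state.get(substrate, 0) - delivered
            new_state[substrate] = new_state.get(substrate, 0) - delivered
        new_state[eff["product"]] = new_state.get(eff["product"], 0) + delivered

# Two effectors competing for the same substrate during one time step:
old, new = {"s": 3}, {"s": 3}
production_phase(old, new,
                 [{"id": "a", "effector": {"product": "p1", "substrate": "s", "production": 2}},
                  {"id": "b", "effector": {"product": "p2", "substrate": "s", "production": 2}}],
                 signals={})
print(new)  # {'s': 0, 'p1': 2, 'p2': 1}
```

Note how the second effector only receives one molecule: the first effector already consumed two of the three available substrate molecules within the same step.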
Phase III – Signal activation

In this phase, all of the receptors coupled with effectors as NFISs are processed in order to calculate signal activation. As the production phase has already been conducted, the state of signals calculated in this phase will be used only in the next time step (iteration). The important thing to note is that every receptor has a delay property defined. Therefore, in spite of reaching the programmed threshold level, the receptor still remains silent. The product concentration has to stay above the threshold for the defined number of time units (the delay parameter value) to start signalization. This process is called "receptor charging". The same applies to a situation where product concentration drops below the threshold value. The receptor is not deactivated immediately, but fades out over time, lasting exactly the defined number of time units. Therefore, the signal state per receptor is more complex than a simple Boolean on/off flag. It consists of the following data for each product bound to any threshold configuration of the receptor:
– an active on/off Boolean flag,
– a charge time, expressed in time units. This property is used for both charging and discharging (fading) of the receptor.
In order to resolve this issue, the algorithm will arbitrarily choose the highest threshold per each product for further processing. For each product per active threshold list, the following algorithm runs: – The receptor checks if the new concentration of the product is above the threshold value, i.e. whether the receptor should signal now and whether its signal was active in the previous iteration. – If the receptor should signal, but was not previously active, and the receptor has been charging for a time shorter than the delay defined, the signal is set as not active and the charge time is increased. The receptor is charging. – If the receptor should not signal but was previously active, and the receptor was charging for a time shorter than the delay defined, the signal is set as active and the charge time is increased. The receptor is discharging, or in other words – the signal is fading.
– In all other cases (the current and the previous signals are the same) the signal is set as computed (active, not active) and the charge time is reset to zero. There is no charging or discharging in progress.
At the end of the sequence process the new state is updated with signal activation and charge times for each product present in this receptor threshold configuration. The old state remains the same.
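The charging/discharging rule can be condensed into a small Python function. This is an illustrative sketch of the three cases above, not the application's code; names are hypothetical.

```python
def receptor_step(above_threshold, was_active, charge, delay):
    """One Phase III update for a single receptor/product pair.
    Returns (new_signal, new_charge_time)."""
    if above_threshold != was_active and charge < delay:
        # charging (towards active) or fading (towards silent):
        # keep the old signal and count one more time unit
        return was_active, charge + 1
    # settled: emit the computed signal and reset the charge timer
    return above_threshold, 0

# A receptor with delay 2 needs two extra steps above threshold to activate:
signal, charge, history = False, 0, []
for _ in range(4):
    signal, charge = receptor_step(True, signal, charge, 2)
    history.append(signal)
print(history)  # [False, False, True, True]
```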
Phase IV – Higher level NFISs (external receptors)

Receptors considered external, that is, coupled with other NFISs, can be used to select an alternative configuration of thresholds defined for those NFISs. This is done via the already described "signal" property of a receptor. The new state of such receptors is calculated in exactly the same way as for receptors coupled with effectors. One important observation here is that since the algorithm always uses the previous state for the new state calculations, the order of receptor execution does not matter.
Irena Roterman-Konieczna
3 Information – A tool to interpret the biological phenomena

Information management is an oft-neglected aspect of biology. Biologists usually study information in the context of DNA-related processes; however, the relevance of information processing and storage goes far beyond this narrow scope. Before we attempt a closer look at the problem we must first introduce some basic concepts derived from information theory. The first is information quantity, defined by Shannon as follows:

I = −log2(p)

Here, p is the likelihood of a specific event. Clearly, the lower the probability of occurrence of an event the more information it carries. By the same token – if an event is certain to occur (p = 1.0), it carries no information whatsoever. We can also easily define the basic unit of information: the bit. By definition, one bit corresponds to the quantity of information carried by an event whose probability of occurrence is 1/2. In the classic coin flip experiment each outcome (heads or tails) is equally likely and therefore the result of the coin flip carries 1 bit of information. In biology, a similar situation occurs at childbirth: excluding some fringe cases, the child is equally likely to be a boy or a girl – thus 1 bit of information is required to describe the child's sex (of course this discussion in no way touches upon the emotional consequences of a given event). The above mentioned formula permits us to compute the quantity of information carried by an elementary event. Similar considerations apply to events with more than two possible outcomes, such as a fair dice roll where each of the six possible results is equally likely. In this case the quantity of information carried e.g. by rolling a 6 is I = −log2(1/6). Why would a biologist require such knowledge? In order to answer this question we must acknowledge the growing importance of information technology for society at large, and – more specifically – for scientific research.
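These quantities are straightforward to compute; as a quick Python illustration (not part of the text's own material):

```python
import math

def info_bits(p):
    """Shannon information I = -log2(p) of an event with probability p."""
    return -math.log2(p)

print(info_bits(1 / 2))            # 1.0   (coin flip)
print(round(info_bits(1 / 6), 2))  # 2.58  (rolling a 6 with a fair die)
```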
We can, for example, inquire about the information quantity carried by a single amino acid. Biology tells us that protein chains are comprised of 20 common amino acids. If we were to assume that each of these amino acids occurs with identical frequency, the information content of a single amino acid would be I = −log2(1/20) = 4.32 bits. Since not all amino acids are equally common their actual information content varies from 3.661 bits [Ala] to 6.091 bits [Trp]. As already noted, the arrangement of amino acids determines the 3D structure of protein chains which, in turn, determines their biological properties. Many areas of biological research, such as drug design, require detailed knowledge of 3D protein structures in order to predict their function. For each protein the tertiary structure is
determined by a list of (Φ, Ψ) angle pairs. These can be plotted on the Ramachandran plot which spans the full conformational space (1–360 degrees) for each angle separately. Assuming a 1-degree step, the likelihood of correctly identifying an angle pair is 1/(360 ⋅ 360), which calls for 16.98 bits of information. Alternatively, if the Ramachandran plot is subdivided into 5 ⋅ 5 degree sections, the amount of information required to identify the correct section for a given angle pair is 12.34 bits. It is, however, important to note that the actual distribution of angle pairs is not uniform and that some areas are more heavily populated than others, in accordance with the intrinsic properties of each amino acid. Taking this diversity into account and based on the definition of information entropy we may predict the average quantity of information required to pinpoint the correct pair of conformational angles for each amino acid:

H = − Σ_{i=1}^{N} p_i log2 p_i
H corresponds to our degree of ignorance with regard to assignment of each (Φ, Ψ ) angle pair to an appropriate 5 ⋅ 5 degree area of the Ramachandran plot. Comparing this value with the information “payload” of selected amino acids (8.33 bits for Pro and 10.6 bits for Gly) indicates an overall information shortage. It seems that the amino acid sequence alone does not carry enough information to enable us to predict the correct set of (Φ, Ψ ) angles for each residue. Additional information may be provided by mutual interactions between amino acids, which, in turn, depend on their specific locations along the chain and on inter-residue distances.
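The entropy formula is easy to evaluate directly. The Python sketch below is illustrative only; as a sanity check it reproduces the earlier figure for a uniform distribution over the 20 common amino acids, where H reduces to −log2(1/20).

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy H = -sum(p_i * log2 p_i), in bits.
    Zero-probability outcomes contribute nothing to the sum."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Uniform distribution over 20 amino acids: H = -log2(1/20), about 4.32 bits
print(round(entropy_bits([1 / 20] * 20), 2))  # 4.32
```

With a non-uniform distribution, H drops below this maximum, which is exactly why the heavily populated regions of the Ramachandran plot reduce the information needed per residue.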
Event probability estimates

Another important question in biology revolves around the likelihood of achieving success (interpreted as the occurrence of a favorable event) given a large enough number of attempts at some task. As an example, let us consider the need to deliver a message to a recipient under the assumption that the message itself may be copied and dispatched many times. Successful receipt is conditioned by two factors: (1) p, the probability that the addressee will receive any particular copy of the message (this depends e.g. on population density); (2) k, the number of copies sent out. Clearly, the more copies are dispatched the greater the likelihood that at least one of them will reach the intended recipient. In mathematical terms this may be expressed as follows:

P = 1 − (1 − p)^k

Fig. 3.1 graphically presents the influence on P of k (number of events) and Fig. 3.2 of p (probability of an elementary event). The overall probability of contacting the recipient may be maximized by creating additional copies of the message or by trying to increase the value of p, e.g. by precisely specifying the recipient's address in order to restrict the search to a narrow area.
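The formula can be explored numerically; a short Python sketch with illustrative values:

```python
def success_probability(p, k):
    """P = 1 - (1 - p)**k: the chance that at least one of k independent
    copies, each reaching the recipient with probability p, succeeds."""
    return 1 - (1 - p) ** k

# Either more copies (k) or better addressing (p) pushes P towards 1:
print(round(success_probability(0.004, 100), 3))  # 0.33
print(round(success_probability(0.004, 500), 3))  # 0.865
print(round(success_probability(0.020, 500), 3))  # 1.0
```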
[Chart: probability P = 1 − (1 − p)^k plotted against p, with curves for k = 100, 300, 500, 700, 900]
Fig. 3.1: One way to approach the goal (P = 1.0) – by more attempts, i.e. increasing k. http://crick.cm-uj.krakow.pl:8080/nfs/.
[Chart: probability P plotted against the number of repetitions k (up to 45·10³), with curves for p = 0.004, 0.0009, 0.0001, 0.00002, 0.000002]
Fig. 3.2: Second way to approach the goal (P = 1.0) – by increasing p (probability of elementary event). http://crick.cm-uj.krakow.pl:8080/nfs/.
Is the above phenomenon relevant to biologists? It seems so, since nature itself makes use of the presented mechanisms. Multiplying the number of attempts leads to a "solution by energy", since copying the message requires an expenditure of resources. This type of situation occurs e.g. in farming where many seeds are sown but only some of them encounter conditions which promote germination. Another way to approach the problem is by increasing p, a process which can be called "solution by information". In this case, instead of expending energy we must first amass a certain quantity of information – for example by inquiring about the
recipient’s address. If sufficient information is available we can reasonably hope that a single copy of the message will suffice. Another way to visualize the differences between both approaches is to compare a machine gun (which fires many “informationless” projectiles) with a guided missile (which requires a large quantity of information in order to find its mark). Analysis of the presented examples suggests one additional interpretation. When dispatching multiple messages to unknown recipients we cannot predict who will actually receive and read them. Our situation can therefore be termed “unpredictable” – it may happen that nobody will act upon our message. The alternative solution, i.e. amassing information, can be useful when the goal of our task is precisely determined.

When do living organisms make use of each of the presented methods? In fact we have already indirectly answered this question. If the process plays a decisive role in the proper functioning of the organism (or cell) and if its outcome is predictable (i.e. we have detailed knowledge of the desired result), the cell constructs something similar to a “guided missile”, investing in an information-rich carrier. In most cases this is done by synthesizing a complex consisting of various molecules, each with a specific function. An example of this approach is protein synthesis initiation – one of the most fundamental processes in life (Fig. 3.3). Each component of the ribosome carries a certain quantity of information, while the overall structure of the complex ensures that all these pieces come together, giving rise to a complex biological phenomenon. Here, maximizing P is done by increasing the value of p, since only a carefully designed structure can successfully initiate protein synthesis.

Fig. 3.3: “Investment in information” – construction of a protein synthesis initiation complex as a multistage process (conjunction of conditions – all conditions must be met in order to ensure the desired outcome).

The opposite approach, i.e. an energy-based solution, can succeed when the favorable course of action is difficult to predict in advance. Such situations often emerge in response to unexpected external stimuli. For example, the organism may be invaded by an unknown bacterial pathogen, necessitating an effective response despite the inherently low value of p (there are many different bacteria, so it is difficult to select the proper antibody) (Fig. 3.4). The organism reacts by producing a great variety of “missiles” (antibodies), assuming that at least one of them will prove effective in combating the invader (antigen).
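Both routes to the goal follow from the complementary-probability formula P = 1 − (1 − p)^k plotted in Figs. 3.1 and 3.2: the probability that at least one of k independent attempts, each with elementary probability p, succeeds. A minimal sketch (the sample values below are illustrative, not taken from the figures’ data):

```python
def at_least_one_success(p: float, k: int) -> float:
    """P = 1 - (1 - p)**k: probability that at least one of k
    independent attempts with elementary probability p succeeds."""
    return 1.0 - (1.0 - p) ** k

# "Solution by energy": keep p fixed and multiply the number of attempts k.
low_k = at_least_one_success(0.004, 100)
high_k = at_least_one_success(0.004, 900)

# "Solution by information": keep k fixed and increase p.
low_p = at_least_one_success(0.0001, 5000)
high_p = at_least_one_success(0.004, 5000)

print(low_k, high_k, low_p, high_p)  # each pair climbs toward P = 1.0
```

Either increasing k at fixed p or increasing p at fixed k drives P toward 1, which is exactly the distinction between the “machine gun” and the “guided missile” discussed above.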
Changes in antibody specificity are introduced through recombination and mutations in a specific fragment of the polypeptide chain comprising
[Fig. 3.4 residue: V and J gene segment labels (V1, V2, V3, …, Vn, Vk; J1–J5, Jm).]
9 Melanoma thickness prediction

Tab. 9.1 presents the most predictive criteria for each of the categories of melanoma thickness. In [6] Argenziano proposed an algorithm for the preoperative evaluation of melanoma thickness, which is presented in Fig. 9.11. In cases other than those listed in the algorithm we are not able to determine whether the melanoma is thin, intermediate or thick. In Fig. 9.10 we present three examples of melanoma thickness. The first example represents a thin melanoma: a palpable lesion without a blue-whitish veil or an atypical vascular pattern, but containing a pigment network. The second example is an intermediate melanoma: also a palpable lesion, but additionally containing a blue-whitish veil and vascular structures. The last example is a thick melanoma: a nodular lesion with a diameter greater than 15 mm and a blue-whitish veil.
[Fig. 9.11 flowchart – thickness evaluation:
– Flat melanoma: Thin = 100 %.
– Palpable melanoma (Thin = 62 %, Intermediate = 34 %, Thick = 4 %): without dermoscopic criteria – Thin = 81 %; pigment network – Thin = 73 %; blue-whitish veil and vascular pattern – Intermediate = 82 %.
– Nodular melanoma (Thin = 6 %, Intermediate = 44 %, Thick = 50 %): without dermoscopic criteria – Thin (rare); blue-whitish veil and/or vascular pattern with diameter > 15 mm – Thick = 70 %.]
Fig. 9.11: Algorithm for the determination of melanoma thickness (based on [6]).
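Read as a lookup, the flowchart can be sketched as a small decision function. This is a hypothetical reconstruction: the branch-to-percentage pairings below are read off the flattened figure and may differ in detail from Argenziano’s original rules [6].

```python
def thickness_estimate(elevation, criteria, diameter_mm=0.0):
    """Hypothetical sketch of the Fig. 9.11 decision rules (pairings assumed)."""
    criteria = set(criteria)
    if elevation == "flat":
        return "thin (100 %)"
    if elevation == "palpable":
        if not criteria:
            return "thin (81 %)"
        if {"blue-whitish veil", "vascular pattern"} <= criteria:
            return "intermediate (82 %)"
        if "pigment network" in criteria:
            return "thin (73 %)"
        return "indeterminate"
    if elevation == "nodular":
        if diameter_mm > 15 and criteria & {"blue-whitish veil", "vascular pattern"}:
            return "thick (70 %)"
        if not criteria:
            return "thin (rare)"
        return "indeterminate"
    raise ValueError("elevation must be flat, palpable or nodular")

print(thickness_estimate("palpable", {"pigment network"}))  # -> thin (73 %)
```

As in the figure, cases not covered by any branch remain indeterminate.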
9.5 Melanoma thickness simulations

The automatic diagnostic system has been designed to classify skin lesions using the melanoma thickness determination algorithm. An overview of the algorithm is presented in Fig. 9.12. The system is divided into four main stages: preprocessing (image enhancement), segmentation, dermoscopic criteria detection and classification. Each stage consists of a few smaller steps, mostly presented as one algorithm.
188 | Joanna Jaworek-Korjakowska and Ryszard Tadeusiewicz
[Fig. 9.12 block diagram: the medical image passes through preprocessing and segmentation; dermoscopic criteria (pigment network detection, blue-whitish veil detection, vascular structure analysis) are extracted and, together with clinical criteria, feed the classification stage.]
Fig. 9.12: System overview.
Now we would like to describe the main goal of each stage presented in Fig. 9.12. The first step in every medical image processing system is image acquisition, which aims at obtaining an image of the best possible quality. In most cases the next stage, called preprocessing, is responsible for reducing the amount of artifacts and noise in the image, which improves border detection, segmentation and classification. After proper preprocessing of the image, the next step, called segmentation, focuses on the precise detection of the region of interest. The outcome of the segmentation stage is of great importance: parameters and features are calculated on the basis of this region, which in turn affects the next step – classification. In many cases it is quite difficult to obtain high accuracy in medical imaging due to the complexity of such images. After the segmentation stage, feature detection and analysis can take place. The acquired parameters determine the last stage, classification.

For dermoscopy images the preprocessing step is obligatory because of extraneous artifacts, such as skin lines, air bubbles and hairs, which appear in almost every image. The preprocessing stage consists of three parts, owing to the specific features of dermoscopy images. The first step is the removal of black frames that are introduced during the digitization process. The second step is a simple smoothing filter. The last step deals with dark and thick hairs, which need to be inpainted before the segmentation process and local feature extraction (Fig. 9.13). The detailed description of the presented algorithms can be found in [21].

The segmentation process is one of the most important and challenging steps in dermoscopy image processing. It has to be fast and accurate, because the subsequent steps crucially depend on it.
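The hair-removal part of the preprocessing stage can be illustrated with a black top-hat transform (morphological closing minus the image), which responds to thin dark structures; thresholding the result yields a hair mask to be inpainted. The code below is a simplified, numpy-only illustration of the idea on a synthetic image, not the authors’ implementation from [21]:

```python
import numpy as np

def grey_dilate(img, k):
    """Grey-scale dilation with a flat k x k structuring element."""
    r = k // 2
    p = np.pad(img, r, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].max()
    return out

def grey_erode(img, k):
    """Grey-scale erosion with a flat k x k structuring element."""
    r = k // 2
    p = np.pad(img, r, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].min()
    return out

def black_tophat(img, k=7):
    """Morphological closing (dilation then erosion) minus the image:
    highlights dark structures thinner than the structuring element."""
    return grey_erode(grey_dilate(img, k), k) - img

# Synthetic example: bright "skin" with one dark, hair-like line.
img = np.full((40, 40), 200.0)
img[20, 5:35] = 50.0            # thin dark structure (a "hair")
hat = black_tophat(img, k=7)
mask = hat > 100                # threshold picks out the hair pixels
```

The resulting `mask` marks exactly the thin dark line; in the real pipeline the marked pixels would then be inpainted from their neighborhood before segmentation.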
Furthermore, the analysis of clinical local features, such as pigment network, streaks, dots and globules, depends on the accuracy of the border detection. The segmentation of dermoscopic images is extremely difficult due to the low contrast between the healthy skin and the mole. The skin lesion mostly has varied coloring, and the surrounding area is covered with traces of the preprocessing step, which makes the process even harder to carry out. Because of the difficulties described above, numerous methods have been implemented and tested. The detailed description of the chosen region-growing segmentation algorithm can be found in [22–24].

Fig. 9.13: Black frame and hair removal process: (a) input image, (b) after frame removal, (c) gray scale, (d) result of top-hat transform and threshold, (e) hair detection, (f) inpainting.

The clinical criteria, including melanoma elevation and maximal diameter, should be entered by the user of the application, as the assessment of these criteria is difficult for an automatic diagnostic system.

The detection of the pigment network consists of two steps. Firstly, we extract the dark local structures – the pigment network as well as dots and globules – from the segmented area with an adaptive threshold algorithm applied to the RGB image. In the second step we have to differentiate globules from the pigment network. In our research we use the shape parameter solidity for this purpose. Solidity refers to the extent to which a shape is convex or concave; the solidity of convex shapes is always 1. We calculate the solidity parameter for each labeled region and, if the parameter is close to 1, we remove the object from the image. In Fig. 9.14 we show the result of each step.

A blue-whitish veil appears as a diffuse gray-blue area. Detection of homogeneous areas is mostly based on the analysis of different color spaces such as RGB, HSI or HSV. Through experimental studies we have obtained individual reference values for each color. For every pixel in the lesion the nearest reference color is found by calculating the Euclidean distance to the pixel color.

The detection of atypical vascular structures is a challenging problem in dermoscopic image analysis. In dermoscopic images, up to seven different vascular structures can be found. Vascular structures that are relatively common in melanomas include irregularly distributed dotted vessels and irregular linear vessels. The evaluation of vascular structures can be achieved in two steps. Firstly, the red areas lying on
the segmented area of the skin mole have to be marked. Secondly, based on different experimental results, the detected areas have to be classified as typical or atypical.

Fig. 9.14: Pigment network detection: (a) medical image, (b) the first step (dark local structure detection), (c) the second step (removal of extant structures).

Fig. 9.15: Example of blue-whitish veil detection: (a) dermoscopy image, (b) result.

The implemented Thickness App is based on the described algorithm for the determination of melanoma thickness. After launching the application the user can select an image from a proposed database (Fig. 9.16). After this step the user is asked to choose the means of evaluating the melanoma thickness: automatic diagnosis or self-assessment. If the user chooses the automatic diagnosis button, the options in Fig. 9.17 are opened automatically; if the user selects the self-assessment option, the form in Fig. 9.17 is empty and ready to be marked by the user. After choosing the clinical and dermoscopic criteria based on the dermoscopic image (Fig. 9.18), the system calculates the final decision, which is presented in Fig. 9.19.
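The solidity criterion used above to separate dots and globules from the pigment network can be illustrated without any imaging library: solidity is the region’s area divided by the area of its convex hull, so compact convex blobs (globules) score 1.0 while concave, net-like shapes score lower. A self-contained sketch on toy polygons (not the authors’ implementation):

```python
import numpy as np

def shoelace_area(pts):
    """Area of a simple polygon given its vertices (shoelace formula)."""
    x, y = np.asarray(pts, float).T
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(map(tuple, pts)))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def solidity(polygon):
    """Region area divided by convex-hull area (1.0 for convex shapes)."""
    return shoelace_area(polygon) / shoelace_area(convex_hull(polygon))

square = [(0, 0), (4, 0), (4, 4), (0, 4)]                    # convex -> 1.0
l_shape = [(0, 0), (4, 0), (4, 1), (1, 1), (1, 4), (0, 4)]   # concave -> < 1.0
print(solidity(square), solidity(l_shape))
```

In the pipeline, regions whose solidity is close to 1 (globule-like) would be removed, leaving the concave network structures.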
Fig. 9.16: Thickness App: the user selects the image to be assessed.
Fig. 9.17: Thickness App: the user selects the way the image is evaluated.
Fig. 9.18: Thickness App: the user selects the clinical and dermoscopic criteria.
Fig. 9.19: Thickness App: result for the evaluation of melanoma thickness.
9.6 Conclusions

In the last few years significant progress has been made in the fields of electronics, computer science and biomedical engineering. A very promising tool combining these achievements is the implementation of automatic medical diagnostic systems, which are becoming crucial in diagnostic processes and in healthcare systems today. Radiology services including X-ray, CT and MRI, which are extremely important for patients and their physicians for accurate diagnosis, would be impossible without innovative diagnostic technology. Undoubtedly, this also applies to dermatology. Screening systems can be used not only by young, inexperienced dermatologists but first and foremost by family physicians, which can contribute to the early detection of melanoma. Regular check-ups play a vital role in allowing for the early detection of melanoma. Screening systems also provide a new opportunity for people living in remote and rural areas, outside regional centers, who thus face difficulties in making an appointment with a dermatologist. Early diagnosis of melanoma and a reduction of the melanoma-related mortality rate can be achieved with precise computer-aided diagnostic systems. The proposed and described melanoma thickness application, based on the publication [4], is another opportunity to help young and inexperienced dermatologists who might have problems with the diagnosis of pigmented skin lesions. The aim of the presented automatic diagnostic system is to increase the accuracy of melanoma thickness evaluation and to reduce the number of unnecessary biopsies. You are more than welcome to familiarize yourself with these issues and experiment with the application on the website: home.agh.edu.pl/jaworek/ThicknessApp.
Acknowledgment Scientific work partly supported by the AGH University of Science and Technology as project number 11.11.120.612. The work of Joanna Jaworek-Korjakowska was funded by the National Science Center, Poland based on the decision number DEC-2011/ 01/N/ST7/06783.
References

[1] World Health Organization, “Cancer – Key Facts,” 2014. www.who.int/mediacentre/factsheets/fs297/en/.
[2] Accessible Design and Consulting, Inc., “Cancer in Australia in brief,” 2014. http://www.aihw.gov.au/cancer-in-australia/in-brief/.
[3] World Health Organization, “Health effects of UV radiation,” 2014. http://www.who.int/uv/health/uv_health2/en/index1.html.
[4] Sun Smart, “Skin cancer facts stats,” 2014. www.sunsmart.com.au/about/skin-cancer-facts-stats.
[5] Celebi ME, Stoecker WV and Moss RH. Advances in skin cancer image analysis. Computerized Medical Imaging and Graphics. 2011;35(2):83–84.
[6] Argenziano G, Soyer H, De Giorgi V et al. Interactive Atlas of Dermoscopy (Book and CD-ROM). Milano: Edra Medical Publishing and New Media, 2000.
[7] Jaworek-Korjakowska J. Automatic detection of melanomas: An application based on the ABCD criteria. In: Pitka E and Kawa J, eds. Information Technologies in Biomedicine. Lecture Notes in Computer Science. Berlin: Springer; 2012, pp. 67–76.
[8] Scharcanski J and Celebi ME, eds. Computer Vision Techniques for the Diagnosis of Skin Cancer. Berlin: Springer Verlag, 2014.
[9] Korotkov K and Garcia R. Computerized analysis of pigmented skin lesions: a review. Artificial Intelligence in Medicine. 2012;56(2):69–90.
[10] Fabbrocini G, De Vita V, Cacciapuoti G, Di Leo G, Liguori C, Paolillo A, Pietrosanto A and Sommella P. Automatic diagnosis of melanoma based on the 7-Point Checklist. In: Scharcanski J and Celebi ME, eds. Computer Vision Techniques for the Diagnosis of Skin Cancer. Berlin: Springer Verlag; 2014, pp. 71–109.
[11] Jaworek-Korjakowska J and Tadeusiewicz R. Assessment of asymmetry in dermoscopic colour images of pigmented skin lesions. In: Boccaccini AR, ed. Proceedings of the 10th IASTED International Conference on Biomedical Engineering (Biomed 2013), Innsbruck, Austria, 2013.
[12] Cancer Research UK, “Skin cancer Key Facts,” 2012. www.cancerresearchuk.org/.
[13] Human skin diagram, 2015. http://en.wikipedia.org/wiki/Skin.
[14] Binder M, Schwarz M, Winkler A, Steiner A, Kaider A, Wolff K and Pehamberger H. Epiluminescence microscopy: a useful tool for the diagnosis of pigmented skin lesions for formally trained dermatologists. Journal of the American Academy of Dermatology. 1995;131:286–291.
[15] Melanoma of the skin. In: Edge SB, Byrd DR, Compton CC, et al., eds. AJCC Cancer Staging Manual. 7th ed. New York, NY: Springer; 2010, pp. 325–344.
[16] Stages of melanoma, 2014. http://www.cancerresearchuk.org/about-cancer/type/melanoma/treatment/stages-of-melanoma#bres.
[17] Melanoma, 2014. http://www.drugs.com/mcd/melanoma.
[18] National Cancer Institute, “Stages of melanoma,” 2014. www.cancer.gov.
[19] Melanoma Foundation of New Zealand, “About melanoma – key information,” 2014. http://www.melanoma.org.nz/About-Melanoma/Key-Information/.
[20] Melanoma stages, 2015. http://en.wikipedia.org/wiki/Melanoma.
[21] Jaworek-Korjakowska J and Tadeusiewicz R. Hair removal from dermoscopic colour images. Bio-Algorithms and Med Systems. 2013;9(2):53–58.
[22] Jaworek-Korjakowska J. Analiza i detekcja struktur lokalnych w czerniaku zlosliwym (Detection and analysis of local structures in malignant melanoma). PhD thesis, AGH University of Science and Technology, Krakow, Poland, 2013.
[23] Jaworek-Korjakowska J and Tadeusiewicz R. Determination of border irregularity in dermoscopic color images of pigmented skin lesions. In: 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2014), 26–30 Aug. 2014, pp. 6459–6462.
[24] Jaworek-Korjakowska J and Tadeusiewicz R. Assessment of dots and globules in dermoscopic color images as one of the 7-point checklist criteria. In: Proceedings of the IEEE International Conference on Image Processing (ICIP 2013), Melbourne, Australia, 2013, pp. 1456–1460.
[25] National Cancer Register, “Onkologia – czerniak zlosliwy” (Oncology – malignant melanoma), 2014. http://www.onkologia.org.pl/.
Part VI: Therapy
Ryszard Tadeusiewicz
10 Simulating cancer chemotherapy

10.1 Simulating untreated cancer

As is commonly known, cancer occurs when healthy tissue mutates, forming cells which proliferate without bound and ignore external signals commanding them to stop multiplying. This spontaneous proliferation causes a progressive increase in the number of anomalous cells until they begin to disrupt the function of their host organ. At later stages of the process (which will not be considered in our simulation) individual “rogue” cells may detach from the primary tumor, penetrate into the bloodstream and invade the lymphatic system. These cells then cause metastasis by lodging in other organs, continuing to multiply and eventually causing the failure of those organs as well. Fig. 10.1 provides a schematic depiction of cancer development.
[Fig. 10.1 stages: low number of cancer cells → noticeable tissue-destroying state → advanced state, metastasis possible.]
Fig. 10.1: Schematic depiction of the development of cancer.
Let us try to model the presented process. In this section, as well as the following two sections, we will consider a specific type of childhood cancer known as neuroblastoma – although it should be noted that any other type of cancer may be modeled in a similar fashion. The state of the disease may be described by referring to the number of anomalous (cancerous) cells. This number changes over time: as cells continue to divide, the overall rate of proliferation increases in proportion to the number of existing cancerous cells. If we designate the number of proliferating cells as P, we obtain:

dP/dt = γP   (10.1)

In order to acknowledge the temporal variability of the number of proliferating cells we will use the notation P(t). The value of this function is given by a differential equation which relates the increase in the number of cells (dP/dt) to the current number of such cells, P(t), multiplied by the proliferation rate constant γ. Equation (10.1) describes a type 1 dynamic object which corresponds to the model presented in Fig. 10.2.
Fig. 10.2: Cancer development model as a type 1 dynamic object.
The model has one input – the proliferation rate constant γ, which is arbitrarily set but remains fixed throughout the simulation phase:

X = γ   (10.2)
We are also dealing with a single time-dependent process: the production of additional cancerous cells as a result of proliferation, P(t), which corresponds to the output function Y(t):

Y(t) = P(t)   (10.3)

It is easy to prove that for any initial value P(0) greater than 0, P(t) exhibits exponential growth. Such growth is initially slow but eventually accelerates. Fig. 10.3 presents sample solutions to equation (10.1) for various values of γ. In practice, the proliferation rate depends on the patient’s age: in infants and young children, whose organisms are developing rapidly, γ tends to be greater than in the elderly. The consequences – from the point of view of disease development – are illustrated in Fig. 10.3, which shows time curves corresponding to patient ages of 15, 35, 55 and 75 years.
Fig. 10.3: Solutions to equation (10.1) for various values of γ.
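Equation (10.1) is straightforward to integrate numerically. A minimal forward-Euler sketch in Python (the authors’ own applications are Matlab-based), using the γ and P(0) values visible in the Fig. 10.6 interface:

```python
import math

def simulate_growth(gamma, p0, t_end, dt=0.01):
    """Forward-Euler integration of equation (10.1): dP/dt = gamma * P."""
    p = p0
    for _ in range(int(round(t_end / dt))):
        p += gamma * p * dt
    return p

# gamma = 0.01 1/h and P(0) = 2.0e8, the values shown in Fig. 10.6
p_120 = simulate_growth(gamma=0.01, p0=2.0e8, t_end=120.0)
analytic = 2.0e8 * math.exp(0.01 * 120.0)
print(p_120, analytic)  # the Euler result tracks the analytic exponential
```

With a small step size the numerical curve is indistinguishable from the analytic solution P(t) = P(0)·e^(γt), and rerunning with larger γ or larger P(0) reproduces the steeper curves of Figs. 10.3 and 10.5.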
[Fig. 10.4 annotations along the P(t) curve: cancer is initially unnoticeable for the patient; as the disease progresses, symptoms begin to appear; eventually the patient requires hospitalization; further progress of the disease results in death.]
Fig. 10.4: The model explains the dynamics of the disease and predicts its conclusion.
It is evident that cancer progresses more quickly in children than in older patients (note that while older people are more likely to develop cancer in the first place, this correlation is beyond the scope of our sample model). We should note that solutions to equation (10.1) are characterized by a rapid increase in the rate of P(t) growth for high values of t, especially when 𝛾 is also large. This creates very unfavorable conditions for patients: early stages of the disease can go entirely unnoticed but the proliferation curve eventually becomes steep and the patient’s state will rapidly deteriorate, leading to inevitable death unless the cancer can be eradicated surgically or pharmacologically (Fig. 10.4). Further analysis of the curves plotted in Fig. 10.3 reveals another important aspect: solutions to equation (10.1) are strongly dependent on initial conditions. For low initial values of P(t), i.e. when the initial number of proliferating cells is small, cancer develops more slowly and is easier to control. On the other hand, starting with a large value of P(t) produces a much steeper curve, reducing the efficacy of treatment (see Fig. 10.5). This fact corresponds to the basic rule in oncology which emphasizes early detection as a prerequisite of successful treatment. The above mentioned properties of our biocybernetic model of a dynamic system (cancer development) follow from analysis of the time curves given by equation (10.1) for various combinations of initial conditions and proliferation rate constants. While the model is quite primitive and does not – by itself – yield much insight, the reader is encouraged to try out the simulation software available at www.tadeusiewicz.pl. Fig. 10.6 presents the main window of the application. The program is user-friendly and may lead to a number of interesting observations.
Fig. 10.5: Time curves given by equation (10.1) for various initial conditions (i.e. various numbers of cancerous cells at detection).
[Fig. 10.6 interface: plot of the number of tumor cells P(t) versus time t [h], with adjustable parameters γ [1/h] = 0.01, P(0) = 2.0e+008 and T [h] = 120, and a SIMULATION button.]
Fig. 10.6: User interface of the cancer proliferation model application.
10.2 Enhanced model of untreated cancer

In order to explain the principles of constructing advanced biocybernetic systems let us return to our sample model and observe that actual tumors often consist of rapidly proliferating cells (usually located near the tumor’s surface) as well as “quiescent” cells which have stopped dividing and are found deep inside the tumor. Assuming that P(t) denotes the number of proliferating cells and Q(t) corresponds to the number of quiescent cells, Fig. 10.7 presents the enhanced model of an affected organ.
Fig. 10.7: Enhanced cancer development model (input γ; outputs P(t) and Q(t) of the cancer-affected organ).
For the sake of convenience we will divide the model into two parts which correspond to P(t) and Q(t) respectively, as shown in Fig. 10.8. The composition of both groups changes with time: some cells remain in their original group while others move to the other group. Let us define two coefficients α and β such that αP(t) proliferating cells become quiescent while βQ(t) quiescent cells resume proliferation. In Fig. 10.8 α and β represent the gain coefficients of blocks which “shift” cells from the proliferating group to the quiescent group and the other way around.
Fig. 10.8: Cancer development as a biocybernetic model describing a complex object.
Comparing Figs. 10.7 and 10.8 reveals how the introduction of an internal structure mirrors the properties of a complex system. The formal (mathematical) definitions of the objects described as “proliferating” and “quiescent” in Fig. 10.8 are given by equations (10.4) and (10.5):

dP/dt = [γ − α]P + βQ   (10.4)
dQ/dt = αP − βQ   (10.5)
Resolving these equations produces time curves similar to the ones depicted in Figs. 10.3, 10.4 and 10.5, but instead of individual curves we are now dealing with pairs of curves. While a competent mathematician might be able to draw a number of conclusions simply by looking at equations (10.4) and (10.5), biocyberneticians should instead refer to the simulation model: merely imagining the dynamic interplay of two separate cell populations in a general case is neither simple nor reliable. To avoid falling into the trap of unsupported correlations and false conclusions, we will now proceed with the analysis of the simulated behavior of our object. To facilitate computer-aided simulations it might be useful to begin by modifying the model once again, striking a compromise between the general case shown in Fig. 10.7 and the rather complex system seen in Fig. 10.8. The resulting enhanced formal model is visualized in Fig. 10.9.

Fig. 10.9: Schematic depiction of the cancer model adapted for simulations.

As shown, the model remains a type 1 dynamic object, with one input (γ) and two output signals: P(t) and Q(t). Additionally, the model has two parameters (α and β) which may affect its behavior. This object can be simulated with the application available at www.tadeusiewicz.pl, as shown in Fig. 10.10.
[Fig. 10.10 interface: plot of the number of tumor cells P(t) and Q(t) versus time t [h], with parameters α [1/h] = 0.02, β [1/h] = 0.004, γ [1/h] = 0.01, P(0) = 2.0e+008, Q(0) = 8.0e+008 and T [h] = 120.]
Fig. 10.10: User interface of the cancer proliferation and quiescence model application.
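The pair (10.4)–(10.5) can be integrated in the same forward-Euler fashion; the sketch below (Python rather than the authors’ Matlab) uses the parameter values shown in the Fig. 10.10 interface. Note that the total population still grows, since d(P + Q)/dt = γP:

```python
def simulate_pq(gamma, alpha, beta, p0, q0, t_end, dt=0.01):
    """Forward-Euler integration of equations (10.4)-(10.5)."""
    p, q = p0, q0
    for _ in range(int(round(t_end / dt))):
        dp = (gamma - alpha) * p + beta * q   # proliferating cells
        dq = alpha * p - beta * q             # quiescent cells
        p, q = p + dp * dt, q + dq * dt
    return p, q

# parameter values from the Fig. 10.10 interface
p, q = simulate_pq(gamma=0.01, alpha=0.02, beta=0.004,
                   p0=2.0e8, q0=8.0e8, t_end=120.0)
print(p, q, p + q)
```

Experimenting with α and β shows how the proliferating and quiescent populations exchange cells while the tumor as a whole keeps growing.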
10.3 Simulating chemotherapy

Dynamic systems which comprise many interlinked processes – such as the ones described above – are usually more interesting than static models. Even so, they provide only a limited set of controls: having specified the initial values for the model’s parameters, the user may only observe the simulation without being able to intervene.
Such models of cancer (or of lethal epidemics) may be frustrating: while we can observe the dynamics of a disease, we can do nothing to affect its spread. As a result, modern biocybernetic models are usually developed in such a way as to provide the user with a measure of control over the simulated process. This step marks the transition from a level 1 (spontaneous) to a level 2 (controllable) dynamic model. Let us observe this transition on the basis of our cancer cell proliferation process, which we aim to control (obviously by arresting its progress) through treatment.

One of the available treatment options is chemotherapy via intravenous administration of Topotecan (TPT). This drug becomes active at a specific stage of cell division: it interferes with DNA replication, leading to cell death. Since cancer cells divide more rapidly than healthy cells, TPT affects cancer cells more strongly and therefore may prove effective in the treatment of cancer. We will now attempt to create a model which accepts the effect of TPT injections as its input signal. Note that we specifically refer to the concentration of TPT in blood plasma rather than to the actual injections, as the drug only becomes effective if it can reach its target cells via the bloodstream. The correlation between the injection volume and the cellular uptake of the drug is a complicated one, and while we will consider it in more detail later on, for now it is enough to assume that we can accurately control the concentration of TPT in blood plasma, treating it as an input signal in our model. This signal will be designated X(t) since intravenous delivery of the drug (by means of an intravenous drip) occurs intermittently, producing time-dependent changes in TPT concentration in blood plasma. The updated model will retain the same output signals as its predecessor: the number of proliferating cells P(t) and the number of quiescent cells Q(t). The proliferation rate constant (γ) – formerly the model’s input signal – is now “relegated” to a configurable parameter, much like the still-present migration coefficients α and β.

Fig. 10.11: Cancer model acknowledging the treatment process.
As it turns out, the enhanced model requires one additional parameter (see Fig. 10.11). We assume that the number of cells killed off by TPT is proportional to the drug’s concentration in blood plasma, X(t), and that the model is linear, i.e. the number of killed cells is proportional to the concentration of the drug and to the number of proliferating cells. A coefficient must therefore be added to the model to quantify this relation – we will designate it δ.
To obtain a mathematical description of the model visualized in Fig. 10.11, let us extend equations (10.4) and (10.5) with components expressing the destructive impact of TPT on proliferating cells. Equations (10.6) and (10.7) formally describe the updated model, while Fig. 10.12 provides a graphical representation:

dP/dt = [γ − α − δX(t)]P(t) + βQ(t)   (10.6)
dQ/dt = αP(t) − βQ(t)   (10.7)
Of particular note in Fig. 10.12 is the change in input signal. Despite some superficial similarities, Figs. 10.8 and 10.12 describe two fundamentally different models: the former is a type 1 dynamic model while the latter is a type 2 (controllable) model.

Fig. 10.12: Controllable cancer model suitable for simulating the treatment process.
Analysis of the model described by equations (10.6) and (10.7) calls for a preliminary specification of the properties of the control signal X(t). Let us assume that, following initial work-up, the patient begins to receive TPT intravenously 48 hours after cancer is detected. Since the treatment process is spread over time, we will try to maintain the IV drip for as long as possible, ensuring constant concentration of the drug in blood plasma at a level of 5 ng/ml. During this time TPT will destroy proliferating cells, hence P(t) will decrease rapidly (as opposed to exponential growth in the absence of treatment). More importantly, decreases in P(t) also cause corresponding decreases in Q(t), thus causing the entire tumor to shrink. Unfortunately, the presence of TPT in the patient’s bloodstream also causes undesirable side effects (which will be more thoroughly described in the next section) – hence the intravenous drip is only maintained until the end of the 7th day (exactly 168 hours after detection). The subsequent pause enables the patient’s organism to begin repairing the damage caused by the toxic drug. 216 hours after detection, i.e. at the beginning of the 10th day, the TPT drip is resumed and continues until the end of the 14th day (336 hours). Our simulation shows that at this point the tumor will have shrunk by more than 50 % compared to its initial volume. The patient is not yet cured but their condition will have improved significantly, with hopes for total remission.
[Fig. 10.13 axes: drug concentration X(t) in ng/ml versus time elapsed (hours).]
Fig. 10.13: Time curve describing the control signal. Refer to chapter text for an in-depth description.
The above described process can be conveniently simulated with the aid of a computer. The corresponding application (available at www.tadeusiewicz.pl) is shown in Fig. 10.14. Our application supports a range of simulations, for various drug administration regimens, tumor growth characteristics and tumor volumes at detection. The treatment process whose control signal is visualized in Fig. 10.13 produces the results shown in Fig. 10.15. We encourage the reader to experiment with the model’s settings in order to become familiar with the ways in which biocybernetic simulations can be fine-tuned and controlled.
[Figure: simulator window “Simulation of tumor treatment”, plotting the number of tumor cells P(t) and Q(t) (×10⁸) and the control signal x(t) against time t [h]; parameter fields: α = 0.02, β = 0.004, γ = 0.01, δ = 0.66 (all in 1/h), P(0) = 2.0·10⁸, Q(0) = 8.0·10⁸, drip switching times ti [h] = 48, 168, 216, 336, and a SIMULATION button.]
Fig. 10.14: GUI of the presented cancer treatment simulator.
[Figure: number of cancer cells P(t) and Q(t) (×10⁸, left axis) and TPT concentration in blood plasma X(t) [ng/ml, right axis] versus time elapsed [h], 0–350.]
206 | Ryszard Tadeusiewicz
Fig. 10.15: Results of simulated treatment administered over a two-week period (with IV drips between days 3–7 and 10–14).
10.4 Simulation software available for the reader
As already mentioned, the authors have developed¹ a series of Matlab-based simulation applications. All these applications are available free of charge at www.tadeusiewicz.pl. We invite all interested readers to download them for their personal use.
1 The authors of the simulation software are Joanna Jaworek-Korjakowska, Eliasz Kańtoch, Janusz Miller, Tomasz Pięciak and Jaromir Przybyło.
Piotr Dudek and Jacek Cieślik
11 Introduction to Reverse Engineering and Rapid Prototyping in medical applications
11.1 Introduction
As the applications of Reverse Engineering and Rapid Prototyping grow more widespread and diverse, it is worth outlining these technologies and reviewing their development. Biomedical engineering is an emerging field in which a great deal of research aims to apply engineering technologies to solving medical problems, and one with great potential for future advances. The field encompasses medical treatment engineering, tissue engineering, genetic technology, and medicine engineering. Building sets of medical information, medical images, and biomedical materials, and applying these datasets, assists the development of all aspects of biomedical engineering. In recent years, Computer-Aided Design (CAD) has been increasingly applied in biomedical design [1]. Knowledge related to medical information, medical images and the materials used in medical science is readily accessible with the help of biomedical engineering. Reverse Engineering has been successfully used in engineering applications to recover virtual models from existing components for which no knowledge base exists. Recent developments in additive manufacturing processes have made it easy to create prototypes of complex objects, including from biocompatible materials. Reverse Engineering is necessary in medical applications for the following reasons:
– the digital model does not exist,
– the shapes of medical objects are complex.
In this chapter, methods of acquiring and processing data in Reverse Engineering, and of subsequently preparing the data for use in medical, dental, orthodontic and other areas, are presented.
11.2 Reverse Engineering
Reverse Engineering (RE) techniques encompass many engineering approaches in which an existing product is investigated either prior to or during the reconstruction process. Reverse Engineering is generally defined as the process of analyzing an object or existing system to identify its components and their interrelationships, and to investigate how it works in order to fix it, redesign it, or produce a copy without access to the design from which it was originally produced [2]. It can be used in three areas, presented in Tab. 11.1.
Tab. 11.1: Types of Reverse Engineering.

Area of RE: Industrial
Objectives: Industrial RE is used to reconstruct 3D models of physical objects for engineering design, CAD/CAM/CNC, product development, quality control and dimensional inspection. For 3D models a high accuracy is required, from ±20 to ±50 μm. In some areas, such as mold and tooling or micro-manufacturing, the accuracy requirement rises to 1–5 μm. In other industries, such as shipbuilding or aerospace, the accuracy requirement is quite flexible, depending on the size of the objects and their functions.

Area of RE: Architecture and Art
Objectives: Artistic and architectural RE is used for 3D modeling of objects for architectural and fine art applications. The size of the objects varies widely, from 10 × 10 × 10 mm (jewelry) to very large ones, including statues, architectural prototypes, houses and buildings. The accuracy requirement is normally low, but must be higher for some applications. In these cases the outside appearance, including the general shape and form of the objects, matters more than the measured accuracy.

Area of RE: Medical
Objectives: Medical RE is normally applied to patients’ data or biomedical objects to reconstruct 3D models of anatomical structures or objects of interest for the development of medical products, applications, and biomedical research. The accuracy requirements depend on the specific application. For example, for personalized cranio-maxillofacial implants, biomodels and training models, the accuracy requirement is about 100 μm, which is not high compared to industrial RE. But for surgical tools and functional implants, such as spine, hip and knee implants, the accuracy requirements are very high.
Reverse Engineering has been widely applied in recent years in medical and dental applications. The process of Reverse Engineering involves turning the physical product back into virtual three-dimensional models, from which the conceptual design can be obtained. The final target of all RE processes is to obtain 3D data representing the geometries of the objects. Two types of end-use data representation are commonly used: polygon or triangle meshes, and Non-Uniform Rational B-Splines (NURBS). A polygon or triangle mesh includes vertices and edges that define the shape of an object, and also a normal, which defines the inside and outside of each triangle. This type of data is the simplest way of representing the geometry of an object; however, it is not an accurate representation of the geometry. NURBS surfaces are the ultimate output of the RE process for applications where accuracy requirements are high: NURBS are an accurate way to define free-form curves and surfaces.
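As a sketch of the triangle-mesh representation described above (function and variable names are my own, not from the chapter), the facet normal that distinguishes the inside from the outside of a triangle can be computed with a cross product:

```python
import math

def facet_normal(v0, v1, v2):
    """Outward unit normal of a triangle (right-hand rule over v0 -> v1 -> v2)."""
    ax, ay, az = (v1[i] - v0[i] for i in range(3))  # edge v0 -> v1
    bx, by, bz = (v2[i] - v0[i] for i in range(3))  # edge v0 -> v2
    nx, ny, nz = ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)

# A unit right triangle in the xy-plane: the normal points along +z.
print(facet_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # -> (0.0, 0.0, 1.0)
```

Flipping the vertex order (v0, v2, v1) flips the normal, which is why mesh formats insist on a consistent winding.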
Additionally, it is possible to produce CAD data from the obtained data, but for medical applications this is normally difficult or impossible, due to very complicated shapes [3, 4]. Reverse Engineering for medical applications can be described as four phases, presented in Fig. 11.1.
[Figure: Phase 1 – medical RE inputs (patients, impression casts, biomedical objects, etc.); Phase 2 – data acquisition (CMM, measurement arm, 3D scanners; CT, MRT, PET, etc.); Phase 3 – data processing (removing artefacts and unwanted regions, registering points, data filtering and smoothing for point data; image segmentation and ROI growing for 2D slice images; mesh generation and optimisation, leading to NURBS/CAD models or 2D profiles); Phase 4 – biomedical applications.]
Fig. 11.1: Scheme of data processing and information flow in Reverse Engineering in medical applications.
11.2.1 Phase one – Inputs of medical RE
This phase is very important because it determines not only the techniques and methods of data acquisition, but also those of data processing and analysis.
Depending on the end-user application, there are different types of inputs for medical RE, which need to meet the technical requirements and clinical regulations. The state of the art of end-use applications includes personalized implants for bone reconstruction, dental implants and simulations, surgical tools, medical training, orthopedics, ergonomics, orthosis, prosthesis, and tissue engineering [5].
11.2.2 Phase two – Data acquisition
The different methods of data acquisition can be divided into two main groups: contact (tactile) and noncontact methods (Fig. 11.2). Contact methods use sensing devices, such as a hard probe, to obtain a cloud of points describing the measured object. In the tactile approach, a touch probe is used in conjunction with a robotic mechanism, such as a coordinate measurement machine (CMM), an articulated arm or a computer numerical control (CNC) device, to determine the position of the object (Cartesian coordinates). Accuracy is considered the main advantage of the tactile approach, but the digitization process is quite slow and it is difficult to digitize complex geometry. A wide range of objects can be probed with this approach regardless of color, shininess and transparency. The approach is not, however, appropriate for deformable materials.
[Figure: digitalization methods divided into tactile and non-tactile; non-tactile methods subdivided into reflective and pass-through.]
Fig. 11.2: Methods of digitalization.
In the noncontact approach, a medium is used to measure the physical object using the principle of reflection or penetration of radiation. Many three-dimensional scanners rely on the principle of reflection, such as laser beam or white/blue light scanners. Laser scanners can also be used in conjunction with an articulated arm for improved accuracy. Laser beams and white light have the advantage of fast digitization and continuous data, but objects that are too shiny or transparent complicate data acquisition. The medium travels from the generator to the object, where it is reflected and transmitted to the receiver unit. The geometry can be determined from at least one two-dimensional image combined with optical parameters such as reflection angle, distance and time of flight. The initial geometry is presented in the form of a cloud of points or a polygon model. Although laser beam and white light systems are applied in many medical applications, the most efficient noncontact devices are based on the principle of penetration. Such a system uses a medium that penetrates through the object to capture both internal and external geometry. The most popular devices are computed tomography scanners, which use X-rays. The digitization process starts with the transmission of X-rays through the object. Data acquisition is performed at constant intervals throughout the entire object, which subsequently gives a series of slice images. Each slice contains information on the object’s position and the value of the Hounsfield unit (HU), which is proportional to tissue density. A higher Hounsfield value indicates a high-density object such as enamel or bone; a lower Hounsfield value indicates a low-density object such as fat or soft tissue.
In order to reconstruct a three-dimensional model of the human body, the optimal Hounsfield values must be selected using thresholds, although manual or semi-automated selection is also needed. The selected regions of each slice are then combined to construct the volumetric model (Figs. 11.3 and 11.4).
Fig. 11.3: Creating a surface model from DICOM data using InVesalius software.
Nowadays, a CT scan is a fairly routine procedure. However, a number of criteria must be borne in mind to ensure the acquisition of useful data, especially for use in biomedical RE applications, for example:
– type of scanner (axial or helical)
– slice thickness (recommended max. 1 mm)
– scan spacing: 0.5 mm, or at least half the smallest dimension of interest
– X-ray strength in the case of CT, pulse sequence in the case of MR
– resolution – for RE the highest resolution is usually the best option
– field of view (FOV) – the object imaged should fill the field of view without extending beyond it
– X–Y dimensions of a single pixel – these dimensions and the scan spacing determine the resolution of the coordinate system for reconstruction
– artifacts – if significant variations in material density exist within the object to be scanned, distortion can occur; in the case of metal artefacts, the distortion can be severe.
Images reconstructed at 512 × 512 pixels with 16 bit/pixel resolution require about 0.5 MB (megabytes) of memory per slice. Average datasets can be expected to be in the range from 25 to 100 MB, but high-resolution datasets can require gigabytes of storage and powerful computers for data processing [6, 7]. The slice or scan spacing is critical for 3D model reconstruction and should not be confused with slice thickness. Anything over 3 mm is generally not acceptable for complex structures. Slice spacing determines spatial accuracy: the accuracy in the z-axis is determined by the spacing and, if possible, should be at least half the size of the smallest feature to be reconstructed. Lastly, medical reconstruction requires a good understanding of anatomy, which only comes with experience, and an understanding of the types of tissue that are preferentially imaged by CT and MR scanners.
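The memory figures quoted above follow from simple arithmetic; a quick sketch (the 200-slice study is an illustrative assumption, not a value from the chapter):

```python
def slice_bytes(width=512, height=512, bits_per_pixel=16):
    """Uncompressed size of one reconstructed slice in bytes."""
    return width * height * bits_per_pixel // 8

per_slice_mb = slice_bytes() / 2**20
print(f"{per_slice_mb:.1f} MB per slice")    # 0.5 MB, as stated above

# A hypothetical 200-slice study at this resolution:
print(f"{200 * per_slice_mb:.0f} MB total")  # 100 MB
```

At 0.5 mm spacing, 200 such slices cover only 10 cm of anatomy, which is why full high-resolution studies quickly reach the gigabyte range.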
11.2.3 Phase three – Data processing
This stage is based on the two types of raw data from the acquisition process: point clouds or 2D slice images. Different data processing approaches and workflows are used to obtain the right 3D models of the anatomical structures or objects of interest for the development of medical applications and for research. With point clouds as the input for medical RE, it is necessary to scan the object from different views in order to capture the entire geometry or the area of interest. The resulting clouds of points must therefore be aligned, registered and merged. Usually, if the clouds contain points from other, non-interest regions, it is necessary to clean these from the clouds before aligning the clouds of points in their proper orientation in a common coordinate system.
In addition, some amount of error is always introduced into the scan data, and points may be placed in undesirable regions or overlap, because points are scanned more than once when scanning complex shapes. Moreover, when point cloud alignment is applied, the registered scan data normally contains overlapping points. Additionally, scanners often produce noise points or artifacts. Therefore, data cleaning and optimization are required. In this step two functions are used: noise and point redundancy reduction, and point sampling, which minimizes the number of points in the point cloud so that it is easier to work with and structures the data well. Sometimes, to produce smoother shapes, noise reduction and smoothing operations are also performed at this stage. Finally, the optimized point cloud data is triangulated to create a 3D triangle mesh or polygon model of the object. Triangulation gives a discrete representation of an object via a set of polygons that defines the whole object without deviating from the collected points by more than a given tolerance. For visual and later computational purposes, this wireframe mesh approximates the shape and size of an object using triangles. Smaller triangles give a better approximation, but inevitably increase the file size and slow the processing speed. The 3D triangle mesh models are then cleaned, optimized, manipulated and controlled. Some imperfections are corrected: holes are filled, unwanted elements removed, etc. After this improvement, the polygon mesh can be converted into 3D NURBS or CAD models to meet the requirements of the end-use applications. For CT/MRI scanners, the images are normally stored in the DICOM (Digital Imaging and Communications in Medicine) format. However, with applications that use micro-CT imaging systems, different data formats such as BMP or PNG can be used.
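Point-redundancy reduction as described above can be sketched, for example, as voxel-grid sampling that keeps one representative (centroid) point per voxel. This is an illustrative implementation with invented names, not the algorithm of any specific RE package:

```python
from collections import defaultdict

def voxel_downsample(points, voxel=1.0):
    """Replace all points falling in each voxel of size `voxel` by their centroid."""
    buckets = defaultdict(list)
    for p in points:
        key = tuple(int(c // voxel) for c in p)   # integer voxel coordinates
        buckets[key].append(p)
    # One centroid per occupied voxel.
    return [tuple(sum(c) / len(pts) for c in zip(*pts)) for pts in buckets.values()]

# Two nearly coincident scan points collapse into one; the distant point survives.
cloud = [(0.1, 0.1, 0.0), (0.2, 0.2, 0.0), (5.0, 5.0, 0.0)]
print(voxel_downsample(cloud, voxel=1.0))
```

The voxel size plays the role of the sampling tolerance: a larger voxel yields a sparser, easier-to-triangulate cloud at the cost of fine detail.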
The image resolutions achievable with these micro-CT systems extend into the range of light microscopy, down to one or a few microns. This type of data requires specialized image processing tools and packages for 3D reconstruction of the hard and soft tissues or objects of interest. Two basic steps are used for 3D reconstruction from 2D slice images: image segmentation, and Region of Interest (ROI) growing. Segmentation by threshold techniques is used to define the region of interest that represents the object for 3D reconstruction; it is based on the gray-scale values of image pixels. The ROI can be defined by a lower and a higher threshold on the Hounsfield scale, or by a lower threshold only. In the former case a pixel must have a value between the two thresholds to be part of the segmentation object; in the latter case, the segmentation object contains all pixels with a value higher than or equal to the threshold. ROI growing provides the capability to split the segmentation into separate objects; it is useful for the separation of anatomical structures, especially bone and soft tissues. For segmentation, 2D image processing techniques can also be used, such as smoothing, noise filtering, cavity filling, island removal and morphological filters.
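The two threshold modes and ROI growing described above can be sketched on a toy image (all names and the sample Hounsfield-like values are illustrative, not from the chapter):

```python
def segment(image, lo, hi=None):
    """Binary mask: pixel in [lo, hi], or >= lo when no upper threshold is given."""
    return [[(lo <= v) and (hi is None or v <= hi) for v in row] for row in image]

def roi_grow(mask, seed):
    """Flood-fill from `seed`, returning only the connected object it touches."""
    h, w = len(mask), len(mask[0])
    region, stack = set(), [seed]
    while stack:
        y, x = stack.pop()
        if 0 <= y < h and 0 <= x < w and mask[y][x] and (y, x) not in region:
            region.add((y, x))
            stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return region

# Toy "slice": two bright objects; ROI growing separates the one at the seed.
img = [[0, 900, 0, 0],
       [0, 900, 0, 700],
       [0, 0, 0, 700]]
mask = segment(img, lo=600)                 # lower threshold only
print(sorted(roi_grow(mask, seed=(0, 1))))  # the left object only
```

A band such as `segment(img, lo=600, hi=800)` would instead isolate the 700-valued object, which is how bone is separated from denser material like enamel.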
The outputs of image segmentation and ROI growing are 3D triangle mesh models or 2D contours of the ROI or anatomical structures. As in the case of point clouds, a triangle mesh derived from DICOM data is processed in the same way. The 2D contours are used to create the appropriate CAD or NURBS models to meet the requirements of the end-use biomedical applications.
11.2.4 Phase four – Biomedical applications
The resulting object in the form of a 3D mesh can be used directly by a Rapid Prototyping application or by 3D graphics applications. However, for biomedical applications that require high accuracy for further complex geometrical modeling, design or analysis, it is necessary to transform the triangle mesh into CAD or NURBS models, which are then used as the reference for medical product development and research in which CAD/CAM/CAE systems are used.
[Figure: flow from CT/MRI images through 2D segmentation and 3D region growing to three interfaces: the MedCAD interface; the Reverse Engineering interface (point data, polyline fit on the model contour, B-spline surfaces fitted on the polylines of each slice, polylines output as IGES curves); and the STL interface (triangulated faceted base model, surface triangle decimation, smoothing and refinement, fitting of a NURBS patch on the surface), all leading to an IGES-format CAD model.]
Fig. 11.4: Scheme of data processing and information flow from CT/MRI images to CAD model.
11.3 Software for medical RE
Surgical operation simulation in a virtual computer model requires specific functions in the software to simulate surgical actions and to calculate certain parameters: volume, distance, bone density, etc. Besides pure simulation of surgical actions, links to other software packages (CAD, FEA, CFD, etc.) can facilitate, or even be necessary to complete, the surgery simulation. Additionally, no single software package can completely satisfy the requirements of data processing and geometrical modeling, so the selection of software depends on the end-use application, especially the complexity of the geometrical modeling processes. The following list includes typical software and tools necessary for the implementation of medical RE applications: medical image processing, Rapid Prototyping, Finite Element Analysis and simulation, Reverse Engineering and dimensional inspection, freeform modeling, CAD/CAM, and specialized dental applications.
11.3.1 Mimics Innovation Suite
One of the best-known software packages for RE biomedical applications is the Mimics Innovation Suite from Materialise NV (Leuven, Belgium), providing high-quality solutions that support clinicians in diagnosis and decision-making. Materialise’s interactive medical image control system (Mimics) is an interactive tool for the visualization and segmentation of CT as well as MRI images and for 3D rendering of objects. A very flexible interface to rapid prototyping systems is included for building distinctive segmentation objects. Different modules link Mimics to various application fields: Rapid Prototyping (RP), Finite Element Analysis (FEA) and Computer-Aided Design (CAD). Selecting the hard tissues (bones) and leaving behind the soft tissue is performed by applying the “thresholding” function and selecting a certain range of high-density threshold values, then using appropriate masking techniques to generate a 3D object. These 3D voxel masks can be used for visualization, consultation, planning of surgical procedures and implant design, and can finally be exported for the RP process. The model can also be exported as an Initial Graphics Exchange Specification (IGES) file, or another file format, that can be manipulated as a CAD model or subjected to FEA, e.g. in ANSYS software. Mimics performs all kinds of engineering operations, starting from medical imaging data up to an error-free STL file for Rapid Prototyping models. Mimics goes beyond visualization of the anatomical data and has three main engineering applications. Using the 3-matic package included in Mimics it is possible to:
– perform 3D measurements and engineering analyses
– design patient-specific implants or surgical guides
– prepare anatomical data and/or implants for Finite Element Method simulations.
11.3.2 Simpleware ScanIP
This is a software system providing functionality for medical RE, consisting of four modules: ScanIP – image processing, measurement and visualization; +FE module – volume/surface generation for FE/CFD; +NURBS – NURBS model generation for CAD; +CAD – integration of CAD models within images. ScanIP provides an image processing software environment for rapidly converting 3D scan data (MRI, CT, micro-CT, FIB-SEM, etc.) into computational models. The software offers image segmentation and quantification tools, enabling easy visualization and analysis of image data. Segmented images can be exported as surface models and meshes to CAD packages and for 3D printing. Additional module options are available for generating volume meshes for FEA and CFD, for integrating image and CAD data, and for exporting NURBS-based models (Fig. 11.5).
Fig. 11.5: ScanIP, courtesy of Simpleware Ltd., Exeter.
11.3.3 3D-DOCTOR
A product of Able Software Corp., this is an advanced 3D modeling, image processing and measurement package for MRI, CT, PET, microscopy, scientific, and industrial imaging applications. 3D-DOCTOR supports both gray-scale and color images stored in DICOM, TIFF, Interfile, GIF, JPEG, PNG, BMP, PGM, MRC, RAW and other image file formats. 3D-DOCTOR creates 3D surface models and volume renderings from 2D cross-section images in real time (Fig. 11.6).
Fig. 11.6: 3D-DOCTOR (source: http://www.ablesw.com/3d-doctor/rapid.html).
11.3.4 Amira
A product of Visage Imaging GmbH, this is a multifaceted 3D software platform for visualizing, manipulating, and understanding data from computed tomography, microscopy, MRI, and many other imaging modalities. Amira enables advanced 3D imaging workflows for specialists in research areas ranging from molecular and cellular biology to neuroscience and bioengineering.
Fig. 11.7: 3D skull model created in InVesalius.
11.3.5 Other software for 3D model reconstruction
If we only need to reconstruct 3D models of anatomical structures from CT/MRI data for further development, free and open source medical image processing (MIP) packages can be useful. Examples include 3D Slicer (Slicer), the Julius framework (CAESAR Research Center), MedINRIA (INRIA Sophia Antipolis) and InVesalius (Figs. 11.7 and 11.8).
Fig. 11.8: 3D Slicer.
In comparison to commercial solutions, the open-source software 3D Slicer provides many modules for medical applications and for RP technologies. 3D Slicer is a free, open source software package for visualization and image analysis, natively designed to be available on multiple platforms, including Windows, Linux and Mac OS X. The Editor module provides selection, thresholding, ROI growing, etc. for creating 3D models of structures of interest. The final objects can be saved as mesh objects using the ModelMaker module. Most Rapid Prototyping packages allow basic operations for manipulating STL files as well as editing, dividing, and repairing 3D models. Typical RP packages include Magics (Materialise NV), VisCAM RP (Marcam Engineering GmbH) and NetFabb.
11.3.6 RE and dimensional inspection
This software provides powerful freeform modeling tools, especially triangle mesh control and manipulation, which are not commonly available in RP and CAD packages. Typical RE packages include Geomagic Verify, Geomagic Control, Geomagic Studio, Polyworks and GOM Inspect, among others (Fig. 11.9).
11.3.7 Freeform modeling
Freeform modeling techniques such as Geomagic Claytools or sculpt modeling systems and ZBrush (Pixologic, Inc.) can be used for modeling implants or anatomical structures for simulation or for the development of medical training models.
11.3.8 FEA simulation and CAD/CAM systems
There are many programs for finite element modeling and simulation tasks. FEA simulation packages are needed for optimizing the design as well as the biomedical engineering aspects of the applications. CAD/CAM packages are very powerful for 3D geometrical modeling tasks and are commonly used to implement the final CAD operations of the design tasks. There are also specialized applications for dental and orthodontic work, such as Trios 3Shape or 3DReshaper Dental (Fig. 11.10). These applications provide comprehensive solutions for creating implants, prostheses, etc., from scanning through to build-up, control and inspection of the manufactured elements.
Fig. 11.9: Geomagic Studio v2013.
Fig. 11.10: 3D Reshaper Dental (source: http://www.3dreshaper-dental.com/en/3dreshaper-dental/ dental-model-creator/).
11.4 Methods of Rapid Prototyping for medical applications – Additive Manufacturing
The terms Rapid Prototyping and Additive Manufacturing are used in a variety of industries to describe a process for rapidly creating a representation of a system or part before final release or commercialization.
Additive Manufacturing (AM) is a layer-based, automated fabrication process for making scaled three-dimensional physical objects directly from 3D CAD data without using part-dependent tools. It was originally called 3D Printing and is still frequently called that. Other terms often used for this technology include Additive Layer Manufacturing, Rapid Prototyping and Direct Manufacturing. The technical realization of AM is based solely on layers, and it is therefore called “layer-based technology”, “layer-oriented technology”, or even “layered technology”. AM involves a number of steps that move from the virtual CAD description to the resulting physical part: creating a CAD model, converting it to STL format, transferring the file to the machine, setting up the 3D printer, building the part, and, if necessary, removing support elements and post-processing [6]. The STL file format was placed in the public domain to allow all CAD vendors to access it easily and, hopefully, integrate it into their systems. STL is now a standard output for nearly all solid modeling CAD systems and has also been adopted by AM system vendors. STL uses triangles to describe the surfaces to be built. Each triangle is described by three points and a facet normal vector indicating the outward side of the triangle, in a manner similar to the following:
facet normal 4.121164e-001 -4.875889e-002 -9.098256e-001
  outer loop
    vertex 8.035579e+002 -8.070238e+002 9.049615e+002
    vertex 8.035070e+002 -8.069570e+002 9.049349e+002
    vertex 8.035079e+002 -8.068416e+002 9.049291e+002
  endloop
endfacet
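A facet record like the one above can be generated programmatically; below is a minimal sketch (a hypothetical helper, not part of any CAD package) that computes the facet normal from the three vertices and emits one ASCII STL facet:

```python
import math

def stl_facet(v0, v1, v2):
    """Return one ASCII STL facet record with a computed outward normal."""
    a = [v1[i] - v0[i] for i in range(3)]
    b = [v2[i] - v0[i] for i in range(3)]
    # Cross product a x b gives the normal for counter-clockwise winding.
    n = [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
    length = math.sqrt(sum(c * c for c in n)) or 1.0  # guard degenerate triangles
    n = [c / length for c in n]
    lines = [f"facet normal {n[0]:e} {n[1]:e} {n[2]:e}", "  outer loop"]
    lines += [f"    vertex {v[0]:e} {v[1]:e} {v[2]:e}" for v in (v0, v1, v2)]
    lines += ["  endloop", "endfacet"]
    return "\n".join(lines)

print(stl_facet((0, 0, 0), (1, 0, 0), (0, 1, 0)))
```

A complete ASCII STL file wraps a sequence of such facets between `solid name` and `endsolid name` lines.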
There are numerous ways to classify AM technologies. One method is to classify by the state of the material before the part is manufactured: the prototype material can be a liquid, a molten material, a solid sheet or a powder. Additive Manufacturing has been used for medical applications almost from the moment the technology was first commercialized. The most significant technologies for medical applications are presented in the following. Several types of 3D printer are available. They may use different materials, but all involve the same basic approach to “printing” an object: spraying or otherwise transferring a substance in multiple layers onto a building surface, beginning with the bottom layer. Before printing can begin, a 3D image of the item to be printed must first be created using a computer-aided design (CAD) software program. That object is then sliced into hundreds or thousands of horizontal layers, which are placed one on top of another until the completed object emerges.
11.4.1 Liquid-based RP technology
Liquid polymers are a popular material. The first commercial system was 3D Systems’ stereolithography process, based on liquid photopolymers. The selective solidification of liquid monomeric resin (of the epoxy, acrylate, or vinyl ether type) by ultraviolet radiation is called (photo-)polymerization. The various processes differ only in the way the UV radiation is generated and in the way the contouring is done.
11.4.2 Stereolithography (SLA)
This is not only the oldest but still the most detailed AM process. A laser stereolithography machine consists of a build chamber filled with the liquid build material and, mounted on top of it, a laser scanner unit which generates the x–y contour. The build chamber is equipped with a build platform which can be moved in the build (z) direction. The laser beam simultaneously performs the contouring and solidification of each layer as well as the bonding to the preceding layer. The motion of the beam is controlled by the slice data of each layer and directed by the laser scanner (galvo); see Figs. 11.11 and 11.12. After solidification of one layer, the build platform, including the partially finished part, is lowered by one layer thickness and, using a recoater, a new layer of resin is applied. The process continues until the part is finished. This technology requires supports. Such printers are manufactured by 3D Systems and DWS.
[Figure: SLA build process; a laser and scanner (galvo) above a container of liquid polymer direct the laser beam in x–y, building the part layer by layer on a building platform that moves in the z direction.]
Fig. 11.11: Part creating process by SLA.
Fig. 11.12: SLA printed parts with support structures.
11.4.3 Polymer printing and jetting If the curable build material is applied by print heads, the process is called polymer printing or polymer jetting. The process is commercialized by Objet, Israel (now Stratasys). It can be regarded as a 3D printing process. However, due to the part building by UV curing of liquid monomers it is a polymerization or stereolithography process (Fig. 11.13). The build material is directly applied to the build platform through a multi-nozzle piezoelectric print head, like an ink-jet printer. Solidification is done simultaneously by a light curtain. The parts need supports during the build process and the supports are applied simultaneously by a second set of nozzles so that each layer consists either X-Y
Jetting head UV light
Z
Build material Support material Fig. 11.13: The Objet PolyJet process.
Building platform
224 | Piotr Dudek and Jacek Cieślik
of build or of support material. Consequently, the supports are solid and consume a large amount of material. The support material can be washed out in a mostly automated post-process without leaving marks, so the part is very smooth. A proprietary technology called PolyJet Matrix, together with a family of fabricators called Connex, offers the unique ability to print parts and assemblies made of multiple model materials, with different mechanical or physical properties, all in a single build. This opens up the future possibility of composing multi-material parts. Typical parts are thin-walled and detailed. They exhibit precise interior hollow structures (Fig. 11.14).
Fig. 11.14: Objet Connex examples (a, b) by Stratasys (Euromold 2014).
11.4.4 Digital Light Processing (DLP)
This variation of the photo-polymerization process works with a commercial Digital Light Processing (DLP) projector as the UV light source. It projects the complete contour of a cross-section of the current layer and initiates its solidification simultaneously (Figs. 11.15 and 11.16). The process was commercialized by Envisiontec and is continued by Formlabs, DWS and Solidator. This method uses a projector, like the kind used for office presentations, to project the image of the cross-section of an object into a container of photopolymer. The light selectively hardens only the area specified in that image. The most recently printed layer is then repositioned to leave room for unhardened photopolymer to fill the newly created space between the print and the projector. Repeating this process builds up the object one layer at a time. DLP is known for its high resolution, typically able to reach a layer thickness of under 30 μm, a fraction of the thickness of a sheet of copy paper. Objects printed using DLP show fewer visible layers than those made with other techniques, but require support structures, like the stereolithography (SLA) method. A digital micromirror device is the core component of DLP printers and projects a light pattern of each cross-sectional slice of the object through an
Fig. 11.15: Digital Light Processing 3D printing.
imaging lens and onto the photopolymer resin. The projected light causes the resin to harden and form the corresponding layer, fusing it to the adjacent layer of the model. Compared with SLA, DLP can achieve faster build speeds, because a whole layer is exposed as a single digital image, whereas SLA's laser must scan the container with a single point.
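The build-speed difference is simple arithmetic: DLP pays a fixed exposure time per layer regardless of geometry, while SLA's layer time grows with the scan path length (contour plus interior hatching). A back-of-envelope comparison, with all numbers purely illustrative rather than taken from any machine's datasheet:

```python
def dlp_layer_time_s(exposure_s: float = 2.0) -> float:
    # one full-frame exposure cures the whole cross-section at once
    return exposure_s

def sla_layer_time_s(scan_path_mm: float, scan_speed_mm_s: float = 1000.0) -> float:
    # the laser must trace the outline and hatch the interior point by point
    return scan_path_mm / scan_speed_mm_s

# A 30 mm tall part at 30 µm layers needs 1000 layers:
n_layers = round(30.0 / 0.030)
dlp_total = n_layers * dlp_layer_time_s()
sla_total = n_layers * sla_layer_time_s(scan_path_mm=6000.0)  # assumed 6 m path/layer
```

With these assumed values the DLP build finishes well ahead of the SLA one, and its total time would stay unchanged even if several parts shared the same build platform.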
Fig. 11.16: Models created from intraoral capture devices, as a replacement for the traditional physical impression (source: http://www.dwssystems.com/printers/dental-biomedical).
11.4.5 Solid sheet materials
Cutting contours out of prefabricated foils or sheets of uniform thickness according to the sliced 3D CAD file, and bonding each contour on top of the preceding layer, is called laminated object manufacturing (LOM). A laser, a knife, or a milling machine can be used as the cutting device. The bonding of adjacent layers is done by glue, ultrasound, soldering, or diffusion welding. MCOR Technologies use a tungsten carbide drag blade instead of a laser in their machines. The process is based on loose sheets of office paper that are glued using standard white polyvinyl acetate (PVA) glue. Unfortunately, the printing process is very slow. Thanks to the possibility of printing in color, these machines can be used for creating teaching and training models (Fig. 11.17).
Fig. 11.17: Object printed on an MCOR machine (source: http://mcortechnologies.com/).
11.4.6 Fused Deposition Modeling (FDM)
FDM rapid prototyping systems, developed by Stratasys Ltd., can fabricate parts in a range of materials including elastomers, ABS (acrylonitrile butadiene styrene), polycarbonate (PC), and investment casting wax (Fig. 11.18). During model fabrication, a filament is fed through a heated element and becomes molten or semi-molten. The liquefied filament is forced through a nozzle, with the solid filament acting as a piston, and deposited onto the partially constructed part. The newly deposited material fuses with adjacent material that has already been deposited. The head moves in the x–y plane and deposits material according to the geometry of the currently printed layer. After finishing a layer, the platform holding the part moves down in the z direction so that a new layer can be deposited on top of the previous one. The process resembles building a model with a very small hot-glue gun. The production system has a second nozzle in the head that extrudes support material, which builds support for any structure whose overhang angle is less than 45° from horizontal (the default). Support materials can be broken away or dissolved.
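The 45° rule can be checked per facet directly from the surface normal stored in the STL file: a downward-facing facet whose angle from the horizontal is below the threshold needs support. A minimal sketch of that test, assuming nothing about any particular slicer's actual logic:

```python
import math

def needs_support(normal, threshold_deg: float = 45.0) -> bool:
    """Return True if a facet with this outward normal needs support.

    A downward-facing facet (nz < 0) whose angle from the horizontal
    plane is below the threshold (default 45 degrees) gets support.
    """
    nx, ny, nz = normal
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    if length == 0.0 or nz >= 0.0:
        return False  # degenerate, upward-facing, or vertical facet
    # For a down-facing facet, the angle between the normal and straight
    # down equals the facet's inclination from the horizontal plane.
    angle_from_horizontal = math.degrees(math.acos(min(1.0, -nz / length)))
    return angle_from_horizontal < threshold_deg
```

A flat downward-facing ceiling (normal pointing straight down) is 0° from horizontal and needs support; a vertical wall is 90° from horizontal and does not.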
Fig. 11.18: Fused Deposition Modeling.
11.4.7 Selective Laser Sintering (SLS)
The method uses a laser to sinter powder-based materials together, layer by layer, to form a solid model. The system consists of a laser, a part chamber, and a control system. The part chamber contains a build platform, a powder cartridge, and a leveling roller. A thin layer of build material is spread across the platform, where the laser traces a two-dimensional cross-section of the part, sintering the material together. The platform then descends by one layer thickness and the leveling roller pushes material from the powder cartridge across the build platform, where the next layer is sintered to the previous one. This continues until the part is completed (Fig. 11.19).
Fig. 11.19: Selective Laser Sintering system.
SLS does not require additional supports to hold the object together while it is being printed, because the part being constructed is surrounded by unsintered powder at all times. This allows for the construction of previously impossible geometries. SLS machines can print objects in a variety of materials, such as plastics, glass, ceramics and even metal (in a related process known as Direct Metal Laser Sintering). The properties and possibilities of the SLS method make it a popular process for creating both prototypes and final products. The physical process can be full melting, partial melting, or liquid-phase sintering. Depending on the material, up to 100 % density can be achieved, with material properties comparable to those from conventional manufacturing methods. In many cases a large number of parts can be packed together within the powder bed, allowing very high productivity.
11.4.8 Selective Laser Melting (SLM)
The method uses a laser to melt powdered metal in a chamber filled with inert gas. When a layer is finished, the powder bed moves down and an automated roller spreads a new layer of material, which is melted to form the next layer of the model. SLM is ideal for applications where high strength or high-temperature resistance is required, as it produces extremely dense and strong parts that match the characteristics of the target material. SLM is
a metal additive manufacturing technique similar to SLS. The main difference between the methods is that SLS sinters the material, while SLM fully melts it, creating a melt pool in which material is consolidated before cooling to form a solid structure. Both SLM and DMLS (Direct Metal Laser Sintering) require support structures.
Fig. 11.20: Removable partial dentures created on EOS M270 using cobalt chrome material (Euromold 2014).
Fig. 11.21: Custom made porous cranial implant using EOSINT M280/EOS titanium 64 (Euromold 2014).
The types of materials available for this process include stainless steel, tool steel, cobalt chrome, titanium and aluminum. The technology is used to manufacture direct parts for a variety of industries – including aerospace, dental and medical – that need small- to medium-sized, highly complex parts, as well as for the tooling industry, where it is used to make direct tooling insert elements (Figs. 11.20 and 11.21).
11.4.9 Electron Beam Melting (EBM)
EBM is an additive manufacturing technique capable of fabricating surgical implants with solid, porous, or hybrid (solid combined with porous) geometries. The EBM process is a layered manufacturing technique capable of producing fully dense metal parts starting from metal powder. In this technique, a thin layer of loose metal powder is spread on a build plate, and selected areas are then melted using a finely focused electron beam. The CAD models are sliced into layers, from which the system determines which areas of the loose titanium powder will be melted by the electron beam. The process is repeated layer by layer to produce three-dimensional titanium parts. The equipment operates in a vacuum at a high temperature, typically 700 °C. The result is stress-free parts without the oxidation issues that can occur with other metal AM approaches. EBM allows for excellent control over part chemistry, mechanical properties and geometry, with the ability to build unique structures for a wide range of applications. One example is a porous-coated surgical implant (Fig. 11.22). Using EBM, parts with an integrated porous structure can be produced in a single step and with greater control over its three-dimensional geometry. In this way, pore size and relative density of the surface can be intelligently tailored to facilitate bone ingrowth. Medical implants with a porous structure present an increased surface area to which new bone can attach, promoting stable osteointegration. Cast or machined implants require a secondary process to etch a porous surface that is bonded in place. By creating an integral porous structure, EBM prevents the part from debonding over time [8].
Fig. 11.22: Acetabular cups using Arcam EBM technology (Euromold 2014).
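To illustrate how pore size and relative density can be tailored by two design parameters, consider an idealized open cubic lattice of square struts: a unit cell of size l contains three orthogonal struts of thickness t, so by inclusion–exclusion the solid fraction is 3(t/l)² − 2(t/l)³, and the pore width is l − t. This is a toy model for illustration, not the design software used by any EBM vendor:

```python
def cubic_lattice(cell_mm: float, strut_mm: float):
    """Relative density and pore width of an idealized open cubic lattice.

    Each unit cell holds three orthogonal square prisms of side `strut_mm`
    and length `cell_mm`; their mutual overlap at the node is counted once
    (union volume = 3*t^2*l - 2*t^3), giving the solid fraction below.
    """
    r = strut_mm / cell_mm
    relative_density = 3 * r**2 - 2 * r**3  # inclusion-exclusion on three prisms
    pore_mm = cell_mm - strut_mm
    return relative_density, pore_mm
```

For example, a 2 mm cell with 0.4 mm struts gives roughly 10 % relative density and 1.6 mm pores; thickening the struts raises density and shrinks the pores, which is exactly the trade-off tuned when tailoring a surface for bone ingrowth.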
11.4.10 Tissue engineering
This technology has become significantly important to the future of medicine for many reasons. The difficulty of obtaining organs for transplantation leaves many patients on lengthy waiting lists for life-saving treatment. Being able to produce organs using a patient's own cells could not only alleviate this shortage, but also address issues related to rejection of donated organs. Tissue engineering will also be important for developing therapies and testing drugs: it provides a practical means for researchers to study cell behavior, such as cancer cell resistance to therapy, and to test new drugs or combinations of drugs against many diseases. Tissue engineering is the use of a combination of cells, engineering and materials methods, and suitable biochemical and physicochemical factors to improve or replace biological functions. While it was once categorized as a subfield of biomaterials, having grown in scope and importance it can be considered a separate field in bioscience. Compared with nonbiological printing, 3D bioprinting involves additional complexities, such as the choice of materials, cell types, growth and differentiation factors, and technical challenges related to the sensitivities of living cells and the construction of tissues. Addressing these complexities requires the integration of technologies from the fields of engineering, biomaterials science, cell biology, physics and medicine. Tissue engineering has already been used for the generation and transplantation of several tissues, including multilayered skin, bone, vascular grafts, tracheal splints, heart tissue and cartilaginous structures [9].
Fig. 11.23: Envisiontec 3D-Bioplotter® (Euromold 2014).
11.5 Case studies
Reverse Engineering and Rapid Prototyping technologies can be used in many medical applications. The most obvious application is as a means to design and develop medical implants, devices and instrumentation. Examples of medical instruments designed using RE and RP technologies include retractors, scalpels, surgical fasteners, display systems and many other devices (Tab. 11.2).
11.5.1 One-stage pelvic tumor reconstruction
Reverse engineering techniques and 3D printing were used to create a custom implant replacing part of the pelvis for a 64-year-old patient with a pelvic chondrosarcoma (Fig. 11.24), according to a clinical case presented by Mobelife. No standard implant would allow a partial acetabular resection of the anterior column and superior pubic ramus. Therefore, a custom implant was requested to fill the bone defect and cover the remaining acetabular bone, in order to restore normal hip joint functionality. Based on the aMace® technology, Mobelife designed patient-specific cutting guides for a one-stage resection and reconstruction surgery. The tumor was resected exactly according to plan and the custom implant, designed for a perfect fit, was positioned easily and fixed stably. Long spongious screws were pre-drilled exactly according to the optimal patient-specific fixation plan, using the custom drill guides provided. Postoperative X-rays show an extremely accurate reconstruction with retention of the posterior column [17].
According to Mobelife, one week after surgery the patient was already up and walking.
Tab. 11.2: Some RE and RP medical applications.

Surgical tools: Drilling guides for spine and knee surgery (arthroscopy), jigs to assist the removal of tumors in bone reconstruction surgery, etc.
Surgical training: Medical training models for surgeons to enhance surgical skills, learn and practice physical examination, general medical procedures and clinical skills [10]. Virtual 3D models for medical simulation, biomedical analysis and study [11].
Personalized implants: Implants for bone reconstruction for patients with skull defects due to traffic accidents [12] or bone tumors; pelvic, acetabular and other revisions. Personalized implants for cosmetic cranio-maxillofacial surgery [13, 14].
Orthopedics: Development of hip and knee implants as well as surgical tools such as orthopedic plates, fixation tools and screws. 3D models for biomedical analysis and study.
Dental applications: Implants for bone reconstruction of the mandible and for tooth reconstruction and replacement, drilling guides, digital impression models, long-term temporary crowns and bridges [15]. Simulation of an implant position on 2D and 3D models, identification of the mandibular canal, calculation of bone density and surgical planning [16].
Prostheses: Design and manufacturing of personalized prostheses, personalized orthoses and ergonomic products such as shoes, sports products, etc.
Fig. 11.24: Creating custom implants for a 64-year-old patient with a pelvic chondrosarcoma (a–e).
11.5.2 Orbital reconstruction following blowout fracture
This example presents the case of a 29-year-old male who sustained trauma to the right orbit. Orthoptic examination revealed limited supra- and infraduction of the right eye (Figs. 11.25 and 11.26).
Fig. 11.25: Process of traditional orbital reconstruction (a–c) [18].
Surgeons first made CT scans of the patient's eye sockets and used them to create a 3D virtual model. This model was mirrored to determine the correct position of the orbital bone, and then 3D printed. An appropriately shaped titanium mesh implant was formed over the printed model. Pre- or intraoperatively shaped titanium meshes shorten operating times and decrease the number of attempts required to position the implant in the orbital cavity and assess its shape and fit. This significantly reduces the risk of inferior rectus damage. As the implant is tailored to the shape of the orbit, the whole area of the bony defect can be covered with the mesh. All of the above factors influence the long-term results, which are better than with the standard method [19].
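The mirroring step is, at its core, a reflection of the model across the mid-sagittal plane, with the triangle winding flipped so that surface normals still point outward afterwards. A minimal stand-in for what the segmentation/CAD software does (the function name and the plain-list mesh format are of course simplifications):

```python
def mirror_mesh(vertices, faces, x0=0.0):
    """Reflect a triangulated surface across the plane x = x0.

    vertices: list of (x, y, z) tuples; faces: list of (a, b, c) vertex
    index triples. The winding of each triangle is reversed so that the
    reflected surface keeps outward-facing normals.
    """
    mirrored_vertices = [(2.0 * x0 - x, y, z) for (x, y, z) in vertices]
    mirrored_faces = [(a, c, b) for (a, b, c) in faces]  # reverse winding
    return mirrored_vertices, mirrored_faces
```

Applied to the segmented intact orbit, the reflected mesh serves as the geometric template over which the titanium implant for the injured side is shaped.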
Fig. 11.26: Orbital reconstruction (a–c) (image courtesy of Dr. M. Elgalal).
11.6 Summary
In broad terms, Reverse Engineering and Rapid Prototyping are used extensively today in a variety of industries and fields, from obvious applications (such as product design) to less intuitive ones (such as art or archaeology). These technologies are now widespread in many fields of medicine and show great potential in medical applications. The various uses of RE and RP in surgical planning, simulation, training, production of models of hard tissue, prostheses and implants, biomechanics, tissue engineering and many other areas open up a new chapter in medicine. Thanks to these technologies, doctors and especially surgeons can do things which previous generations could only have imagined. Imaging modalities such as CT and MRI have provided the foundations for computerized data visualization and tools for preprocedural planning. Rapid prototyping techniques have developed quickly and provide the means to fabricate solid replicas of anatomy as revealed by medical image data; such replicas can be very helpful in bridging the gaps in current digital planning and treatment delivery techniques. RP technology makes it possible to produce biocompatible implants with complicated shapes and structures, and in the future perhaps even organs, using tissue engineering and bioprinters.
References
[1] Cieślik J and Kasperkowicz J. Zastosowanie systemów CAE do planowania operacji chirurgicznych (Application of CAE systems for planning surgical procedures). In: Leniowska L, ed. Mechatronika. Rzeszów: Uniwersytet Rzeszowski, 2011. Nauka dla Gospodarki; 2/2011. pp. 57–70.
[2] Pham DT and Hieu LC. Reverse Engineering hardware and software. In: Raja V and Fernandes KJ, eds. Reverse Engineering: An Industrial Perspective. London: Springer-Verlag London Ltd. 2006. pp. 33–70.
[3] Zwierzyński AJ and Cieślik J. Model kinematyczny redundantnego narzędzia laparoskopowego oraz wybrane zagadnienia planowania trajektorii (Kinematic model of a redundant laparoscopic tool and selected aspects of trajectory planning). Prace Naukowe Politechniki Warszawskiej. Elektronika. 2012;182, Postępy robotyki, T. 1. pp. 129–138.
[4] Zwierzyński AJ and Cieślik J. Opis geometrii ciała dla celów planowania trajektorii redundantnych narzędzi laparoskopowych (Description of the geometry of the body for the purpose of trajectory planning of redundant laparoscopic instruments). Prace Naukowe Politechniki Warszawskiej. Elektronika. 2012;182, Postępy robotyki, T. 1. pp. 139–148.
[5] Rajendra JS, Raut LB and Kakandikar GM. Analysis of integration of Reverse Engineering and generative manufacturing processes for medical science – a review. International Journal of Mechanical Engineering and Robotics Research. 2013;2(4).
[6] Gibson I. Advanced Manufacturing Technology for Medical Applications. Reverse Engineering, Software Conversion and Rapid Prototyping. Chichester: John Wiley & Sons Ltd. 2005.
[7] Hieu LC, Sloten JV, Hung LT, Khanh L, Soe S, Zlatov N, Phuoc LT and Trung PD. Medical Reverse Engineering Applications and Methods. 2nd International Conference on Innovations, Recent Trends and Challenges in Mechatronics, Mechanical Engineering and New High-Tech Products Development MECAHITECH '10, Bucharest, 23–24 September 2010.
[8] Renovis Surgical. Bone Ingrowth into Tesera Trabecular Technology™ Porous Structure: A Weight-Bearing Ovine Study [Internet]. 2015 [cited 7 June 2015]. Available from: http://www.renovis-surgical.com.
[9] Sun W, Starly B, Nam J and Darling A. Bio-CAD modeling and its applications in computer-aided tissue engineering. Computer-Aided Design. 2005;37:1097–1114.
[10] D'Urso PS, Barker TM, Earwaker WJ, Bruce LJ, Atkinson RL, Lanigan MW, Arvier JF and Effeney DJ. Stereolithographic biomodelling in cranio-maxillofacial surgery: a prospective trial. Journal of Cranio-maxillo-facial Surgery. 1999;27:30–37.
[11] McDonald JA, Ryall CJ and Wimpenny DI. Medical models, the ultimate representations of a patient-specific anatomy. In: Wouters K, ed. Rapid Prototyping Casebook. Trowbridge: Professional Engineering Publishing Limited. 2001. pp. 177–182.
[12] Ciocca L, Fantini M, De Crescenzio F, Corinaldesi G and Scotti R. Direct metal laser sintering (DMLS) of a customized titanium mesh for the prosthetically guided bone regeneration of atrophic maxillary arches. Medical & Biological Engineering & Computing. 2011;49:1347–1352.
[13] Lethaus B, Poort M, Laeven P, Beerens M, Koper D, Poukens J and Kessler P. A treatment algorithm for patients with large skull bone defects and first results. Journal of Cranio-maxillo-facial Surgery. 2011;39:435–440.
[14] Maravelakis E, David K, Antoniadis A, Manios A, Bilalis N and Papaharilaou Y. Reverse engineering techniques for cranioplasty: a case study. Journal of Medical Engineering & Technology. 2007;32(2):115–121.
[15] Metzger MC, Hohlweg-Majert B, Schwarz U, Teschner M, Hammer B and Schmelzeisen R. Manufacturing splints for orthognathic surgery using a three-dimensional printer. Oral Surgery, Oral Medicine, Oral Pathology, Oral Radiology and Endodontics. 2008;105:1–7.
[16] Jiang N, Hsu Y, Khadka A, Hu J, Wang D, Wang Q and Li J. Total or partial inferior border ostectomy for mandibular contouring: indications and outcomes. Journal of Cranio-maxillo-facial Surgery. 2012;40:277–284.
[17] Mobelife. Clinical cases [Internet]. 2015 [cited 7 June 2015]. Available from: http://www.mobelife.be/clinical-cases/case/list/.
[18] Elgalal M, Walkowiak B, Stefańczyk L and Kozakiewicz M. Design and fabrication of patient specific implants using Rapid Prototyping techniques. http://www.euris-programme.eu.
[19] Kozakiewicz M, Elgalal M, Piotr L, Broniarczyk-Loba A and Stefanczyk L. Treatment with individual orbital wall implants in humans – 1-year ophthalmologic evaluation. Journal of Cranio-Maxillofacial Surgery. 2011;39(1):30–36.
Zdzisław Wiśniowski, Jakub Dąbroś, and Jacek Dygut
12 Computer simulations in surgical education

12.1 Introduction
"Practice makes perfect" – goes the old saying. Unfortunately, due to the peculiar nature of surgery, prospective surgeons cannot follow the traditional "try until you succeed" approach. To attain professional excellence a surgeon must undertake many exercises, which – obviously – need to take place outside of the operating theater. In the early days of medicine such exercises were carried out on cadavers or live animals. As the centuries went by, the steady pace of scientific progress, coupled with the emergence of modern digital technologies, has led to the replacement of live organisms with inorganic objects, both physical (including phantoms and models of various organs) and virtual. Rapid developments in IT have turned computers into an indispensable attribute of medical science [1]. The concept of virtual reality, coined at the end of the 20th century as an amalgam of 3D graphics and motion detection, presents an interesting alternative to traditional training methods. Virtual reality is based on computer models, simulations and visualization, particularly in the context of 3D objects. The model describes elements of the virtual space in terms of mathematical equations. Such elements are then assembled – with the use of suitable algorithms – to realistically visualize the model's reaction to external stimuli, along with its various possible states. The main goal of this chapter is to present the fundamental aspects of in silico surgical simulations [2, 3]. Many of the presented tools are already being applied in medical training curricula, with more to follow in the near future [4]. The term in silico represents an analogy to the well-known concepts of in vivo and in vitro. It is used to describe models, simulations and visualization of medical phenomena with the use of computers and electronic hardware.
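As a concrete (and deliberately minimal) illustration of such a model, soft tissue is often approximated in surgical simulators by a mass-spring system: nodes with mass connected by springs, advanced one small time step at a time so the mesh visibly reacts to a virtual instrument. The sketch below is one generic explicit-Euler step, not code from any of the systems cited in this chapter:

```python
def euler_step(positions, velocities, springs, masses, dt=0.001,
               stiffness=50.0, damping=0.5):
    """One explicit-Euler step of a mass-spring 'tissue' model.

    positions/velocities: lists of (x, y, z) tuples per node;
    springs: list of (i, j, rest_length) tuples; masses: list of floats.
    Returns updated (positions, velocities).
    """
    forces = [[0.0, 0.0, 0.0] for _ in positions]
    for i, j, rest in springs:
        d = [positions[j][k] - positions[i][k] for k in range(3)]
        length = sum(c * c for c in d) ** 0.5 or 1e-12  # avoid div by zero
        f = stiffness * (length - rest)   # Hooke's law along the spring axis
        for k in range(3):
            forces[i][k] += f * d[k] / length
            forces[j][k] -= f * d[k] / length
    new_positions, new_velocities = [], []
    for p, v, m, fvec in zip(positions, velocities, masses, forces):
        nv = tuple(v[k] + dt * (fvec[k] / m - damping * v[k]) for k in range(3))
        new_velocities.append(nv)
        new_positions.append(tuple(p[k] + dt * nv[k] for k in range(3)))
    return new_positions, new_velocities
```

Production simulators replace this with finite-element or position-based methods for accuracy and stability, but the principle – equations describing the elements, an algorithm advancing their state – is the one described above.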
In silico medicine is the fruit of major advances which have occurred in IT and computer science over the years. It applies to computer software as well as to any technologies based on computerized data processing – including algorithms, systems, data mining tools, specialized numerical hardware, visualization platforms etc.
12.2 Overview of applications
We will now proceed with an overview of publicly available applications which assist medical students in their training. The status presented is valid as of the end of 2014. Most of the presented applications come with a broad set of features and have many possible uses – the common trait is that they all provide support for surgical
education [5]. In its most basic form this education is passive and focuses on familiarizing students with the details of the human anatomy, various pathologies and video recordings of surgical procedures.
12.2.1 Gray's Anatomy Student Edition, Surgical Anatomy – Student Edition: digital editions of anatomy textbooks for iOS (free) and Android (paid)
Archibald Industries has released mobile digital editions of two famous anatomy textbooks: Gray's Anatomy by Henry Gray and Surgical Anatomy by Joseph Maclise. Both are available free of charge to medical students, doctors and nurses. The applications feature photos and descriptions which closely correspond to the original contents of both books. Gray's Anatomy, originally published in 1858, contains illustrations and descriptions of human anatomy, while Surgical Anatomy (1856) is a set of descriptive drawings presenting the most important parts of the human organism. Both publications share a common interface with identical navigation features. Gray's Anatomy comprises 1247 illustrations while Surgical Anatomy includes 180 figures. In addition to manual browsing tools the applications contain textual descriptions, along with introductory and summary sections. Each book is subdivided into chapters. The menu bar at the bottom of the screen provides thumbnail shortcuts, displays the titles of the current chapter and of the illustration currently being displayed, and can help readers locate related content in Google and Wikipedia. The user may create custom bookmarks and access a concise help system. All illustrations are stored in very high definition, with no quality loss when zooming in on an interesting detail (within limits set by the application's designers).
12.2.2 Essential Skeleton 4, Dental Patient Education Lite, 3D4Medical Images and Animations: free educational software by 3D4Medical.com, available for iOS and Android (Essential Skeleton 3 – earlier version; paid editions of Essential Anatomy 3 and iMuscle 2)
Out of the twenty-seven educational applications offered by 3D4Medical, three are available free of charge. They all focus on human anatomy and can be installed in the same way as SpineDecide.
Essential Skeleton 4
A high-quality, three-dimensional, fully controllable and scalable model of the human skeleton. Individual bones and bone clusters may be highlighted, rendered semi-transparent (Fig. 12.1) or hidden entirely (Fig. 12.2).
Fig. 12.1: Essential Skeleton 4 – different modes of bone presentation.
Fig. 12.2: Essential Skeleton 4 – hiding selected bone structures.
Users may add markings and labels to the model (Fig. 12.3), select portions of the skeleton using tabs or hide selected structures. All features are available from a sidebar which can be minimized to free up more space for the main display window. The tool provides an invaluable aid in learning anatomy and comes with a customizable quiz where the user must match labels (in English or Latin) to specific skeletal structures (Fig. 12.4). Upon completing the quiz the application displays the user’s score along with any previous records, enabling students to track their progress.
Dental Patient Education Lite
A feature-limited version of a commercial application which provides an in-depth look at human dentition. Much like Essential Skeleton, it is controlled by a sidebar menu and supports adding markings and tags to the model. The starting screen shows a human head. By using the layer management tool we can remove individual muscle layers, finally uncovering the dental structure (Fig. 12.5), whose elements can be double-tapped to obtain a more detailed view. After selecting a tooth we can split it along two planes, revealing its internal structure. The navigation menu (bottom left-hand corner of the screen) contains links to animations presenting dental conditions, diagnoses, treatment and prophylaxis (Fig. 12.6). The Lite edition of the application is restricted to twelve sample animations while the commercial version contains nearly two hundred.
Fig. 12.3: Essential Skeleton 4 – adding labels to the model.
Fig. 12.4: Essential Skeleton 4 – naming specific skeletal structures.
Fig. 12.5: Dental Patient Education Lite – presenting the dental structure.
Fig. 12.6: Dental Patient Education Lite – animation of the dental treatment.
3D4Medical Images and Animations
A repository of high-quality 3D visualizations (Fig. 12.7) and animations covering various aspects of medicine and related disciplines (e.g. physical fitness), freely available for noncommercial use with the option to purchase commercial licenses. These assets
Fig. 12.7: 3D4Medical Images & Animations – initial screen, choosing image category.
can be used to create presentations, posters, animations etc. The repository contains over 500 images divided into seven categories and 42 videos divided into five categories [6].
Other applications from 3D4Medical
As already noted, 3D4Medical also offers 23 commercial applications in various price ranges, from € 1.79 for Student Muscle System (a counterpart of Essential Skeleton focusing on muscle tissue) to € 44.99 for the full versions of Dental Patient Education and of Orthopedic Patient Education (a set of 141 animations detailing orthopedic mechanisms, common injuries and treatment options). The average cost of an application license is € 10.16; however, six applications are available at € 6.99. Of particular note is Essential Anatomy 4 – an enhanced version of Essential Skeleton, with a broader thematic range and good educational value. This software package is available at € 21.99.
12.2.3 SpineDecide – an example of point-of-care patient education for healthcare professionals, available for iOS
The Decide – Point of Care Patient Education for Healthcare Professionals series is a collection of nine applications developed by Orca Health and targeted at doctors who wish to explain medical concepts to their patients. It can also augment medical training courses, informing students about the various structures of the human body, known pathologies and treatment methods. The software is distributed free of charge and can be downloaded from the mobile device's online store. Once all necessary files have been downloaded, the user only needs to click "Free/Install" to install the application. As already noted, the software is subdivided into three segments. The first of these is devoted to 3D visualizations of the human spine or its individual parts (regions, vertebrae and intervertebral discs). The model can be rotated, zoomed and translated using a touchpad-like interface in the bottom part of the screen. Additionally, some screens present a scrollbar which decouples elements of the model. The model itself is richly detailed and comes with a range of color-coded profiles. All textures are based on high-resolution images. The second visualization module focuses on pathological changes such as thoracic hyperkyphosis and provides a selection of videos or alternating displays presenting the normal and pathological spine, supplemented by 2D photographs and MRI scans, as well as in-depth textual descriptions of symptoms, diagnostic criteria, recommended treatments, post-operative care and prevention methods. The final module visualizes a range of surgical procedures using the previously described 3D models. The user is also provided with a set of drawing tools which can be used to add markings to any static image – for example, to direct the patient's attention to a pathological change. Users may upload their own photos and create video sequences (which, however, cannot be graphically annotated). The Decide series also includes other modules: knee, hand, shoulder, eye, foot, heart, dental and ear-nose-throat.
12.2.4 iSurf BrainView – Virtual guide to the human brain, available for iOS

The application is billed by its authors as "a great tool for teaching brain MRI and for learning neuroanatomy". It is available free of charge from the AppStore, with an installation procedure similar to that of the applications described previously. iSurf Brain View, developed by Netfilter (netfilter.com), features a "model" brain composed of a great number of MRI scans in all three anatomical planes (coronal, sagittal and transverse) [7]. The visualization plane can be selected from the menu placed at the bottom of the viewport. A separate scrollbar is used to translate the model along the currently selected plane. While the scans themselves cover the entire head, bones, muscles and sensory organs are skipped. The application provides a full range of views, from the tip of the head all the way to the base of the jaw. Despite pixelation artifacts, images are crisp and it is easy to discern individual characteristics. In order to enhance the user's experience, structures of the brain are color-coded and can be highlighted as needed (Fig. 12.8). The application also provides an online interface which can download descriptions of selected structures from Wikipedia (Fig. 12.9).
12 Computer simulations in surgical education |
Fig. 12.8: iSurf Brain View – highlighting the brain structures.
Fig. 12.9: iSurf Brain View – presenting descriptions of the brain structures.
iSurf Brain View provides a rotatable 3D model of the left hemisphere, a quiz feature which asks the user to name specific structures of the brain, and a set of scientific articles which provide further in-depth information on the human brain.
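The plane-selection-plus-scrollbar navigation described above corresponds to slicing a stack of scans along one axis of a 3D image volume. The sketch below is purely illustrative – the axis convention, data layout and function names are our assumptions, not part of iSurf Brain View:

```python
# Illustrative sketch: navigating an MRI volume by anatomical plane.
# The mapping of planes to array axes is an assumption for this example.
AXES = {"sagittal": 0, "coronal": 1, "transverse": 2}

def get_slice(volume, plane, index):
    """Return one 2D slice of a 3D volume (nested lists) along a plane."""
    axis = AXES[plane]
    if axis == 0:                      # fix the first index
        return volume[index]
    if axis == 1:                      # fix the second index
        return [row[index] for row in volume]
    return [[col[index] for col in row] for row in volume]  # fix the third

# A tiny 2x2x2 "volume" of voxel labels stands in for a real scan stack.
vol = [[[1, 2], [3, 4]],
       [[5, 6], [7, 8]]]
print(get_slice(vol, "sagittal", 0))    # [[1, 2], [3, 4]]
print(get_slice(vol, "transverse", 1))  # [[2, 4], [6, 8]]
```

The "scrollbar" of the app then simply increments or decrements `index` within the selected plane.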
12.2.5 Monster Anatomy Lite – Knee – Orthopedic guide, available for iOS (Monster Minds Media)

Monster Anatomy Lite – Knee is very similar to iSurf Brain View but instead contains MRI scans (spaced 4–5 mm apart) of the knee joint and its surroundings. Additionally, Monster Minds Media SAS offers two paid (€ 16.99) extended editions: Lower Limb and Upper Limb, both developed under the supervision of Professor Alain Blum from the Centre Hospitalier Universitaire in Nancy, France [8]. In addition to browsing scans in the coronal, sagittal and transverse planes, Monster Anatomy Lite enables the user to highlight and observe the structure of bones, tendons, muscles (Fig. 12.10), blood vessels and nerves (Fig. 12.11). Navigation differs somewhat from iSurf Brain View – the user may click and drag a miniature view of the knee joint, as well as change the current layer by one step (in either direction) to accurately observe areas of interest. Both applications can be supplemented by other similar software available from the AppStore. For example, Monster Anatomy Lite – Knee meshes well with the previously mentioned Decide toolkit, as well as with Essential Anatomy 4 and AO Surgery Reference (which is also available for Android devices). For its part, iSurf Brain View
Fig. 12.10: Monster Anatomy Lite – Knee: cross-section view along the knee.
Fig. 12.11: Monster Anatomy Lite – Knee: cross-section view across the knee.
can be assisted by Brain Anatomy (a free, simple and user-friendly app by Gianluca Musumeci [9] showing frontal and transverse cross-sections of the human brain and enabling fragments to be selected and contrasted with MRI scans – see Fig. 12.12) or by 3D Brain (a 3D model of the brain where individual structures can be highlighted to provide a clear view of parts of the cerebrum not visible from outside – see Fig. 12.13).
Fig. 12.12: Brain Anatomy – scan selection.
Fig. 12.13: Brain Anatomy – coloring brain structures.
12.2.6 AO Surgery Reference – Orthopedic guidebook for diagnosis and trauma treatment, available for iOS and Android

Published by the AO Foundation, this free mobile application for Android and iOS devices can be downloaded from the Android Market or from the AppStore [10]. It is designed to serve as a handy database/catalogue of fractures, helping expedite diagnosis and ensure that the selected treatment is the right one. The application follows a "top-down" approach, taking its user through all stages of the fracture treatment process, starting with proper diagnosis. The initial screen shows a complete skeleton (Fig. 12.14) and asks the user to select an area of interest by tapping the corresponding element. Further intermediate screens may then be displayed to fine-tune the analysis (Fig. 12.15). The user indicates the fracture zone, selects its type (transverse, longitudinal, compound etc.) and describes its complexity (Fig. 12.16). Once the selection process is done, the application displays a list of possible treatment options, with scrollable descriptions accessed by clicking icons on the right-hand side of the screen. The list is often subdivided into surgeries, noninvasive procedures and general recommendations (Fig. 12.17). The next stage involves surgical preparation and surgery itself – the system shows how to position the patient on the operating table (if required), how to expose the surgical site and how to carry out the procedure (Fig. 12.18). All of these steps are shown using realistic drawings or photographs. The application then moves on to postoperative care (Fig. 12.19), including positioning protocols, activities to be avoided by the patient, recommended medication and further check-ups. The application has a
Fig. 12.14: AO Surgery Reference – initial screen, skeleton parts (© AO Foundation, Switzerland).
Fig. 12.15: AO Surgery Reference – skeleton part selection (© AO Foundation, Switzerland).
Fig. 12.16: AO Surgery Reference – diagnosis of the fracture (© AO Foundation, Switzerland).
Fig. 12.17: AO Surgery Reference – list of general recommendations (© AO Foundation, Switzerland).
Fig. 12.18: AO Surgery Reference – surgery procedure description (© AO Foundation, Switzerland).
Fig. 12.19: AO Surgery Reference – post-operative care description (© AO Foundation, Switzerland).
robust, user-friendly interface complete with pictures and links to external sources of information. All descriptions are clear and concise.¹
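The "top-down" selection flow just described can be thought of as a walk down a small decision tree, one level per user choice. The sketch below is a hypothetical illustration – the regions, fracture types and treatment lists are invented placeholders, not AO Surgery Reference content:

```python
# Hypothetical sketch of a "top-down" guided selection, in the spirit of
# AO Surgery Reference: body region -> fracture type -> treatment options.
# All entries are illustrative placeholders, not real AO classification data.
TREE = {
    "femur": {
        "transverse": ["intramedullary nailing", "plate fixation"],
        "longitudinal": ["conservative treatment", "plate fixation"],
    },
    "radius": {
        "transverse": ["cast immobilization", "plate fixation"],
    },
}

def treatment_options(region, fracture_type):
    """Descend the tree one level per user choice; return treatments."""
    try:
        return TREE[region][fracture_type]
    except KeyError:
        return []  # selection not covered by the catalogue

print(treatment_options("femur", "transverse"))
# -> ['intramedullary nailing', 'plate fixation']
```

Each tap in the real application narrows the tree in exactly this way until only the relevant treatment descriptions remain.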
12.2.7 iOrtho+ – Educational aid for rehabilitationists, available for iOS and Android

iOrtho+, by Therapeutic Articulations, is a simple, user-friendly application intended as a therapeutic and educational aid for rehabilitationists. It includes a set of reflex and reaction tests together with information on expected results and literature references [11]. Tests are launched in a way similar to AO Surgery Reference – by pointing to a part of the skeleton (Fig. 12.20). The free version of the application covers the foot and the ankle joint. Tests are divided into categories which correspond to selected parameters or system components (e.g. the function of a specific joint or tendon). Each test (Fig. 12.21) comes with a photograph, a video recording showing the applicable medical procedure, a description of the procedure's course and aims, and a result interpretation sheet.
1 Images (Figs. 12.14–12.19), © Copyright by AO Foundation, Switzerland.
Fig. 12.20: iOrtho+ – initial screen, choosing the part of the skeleton.
Fig. 12.21: iOrtho+ – one of the tests for the ankle and foot, with description.
The menu bar displayed at the bottom of the screen includes a set of rehabilitation exercises (“mobilizations”), showing how and where to apply force (Fig. 12.22). Like most of the applications discussed in this chapter, iOrtho+ has a straightforward interface, although the free version is limited to two content packs out of a total of 16 (specifically – ankle tests and knee joint rehabilitation exercises).²
Fig. 12.22: iOrtho+ – presentation of rehabilitation exercises.
12.2.8 DrawMD – Based on General Surgery and Thoracic Surgery by Visible Health Inc., available for iOS

The next application in this study comprises a set of visualization boards, with sixteen thematic packages available from the publisher's website. In addition to the two sets discussed here, users can download the following packages: urology, anesthesia and critical care, cardiology, orthopedics, OBGYN, vascular, female pelvic surgery, ENT (ear-nose-throat), pediatrics, transplant surgery, ophthalmology, breast health, pulmonology and speech-language pathology. The visualizations focus on selected tissues and organs, enabling doctors to better explain medical problems and treatment options to their patients [12].

2 Images (Figs. 12.20–12.22) from iOrtho+ Mobile App, Therapeutic Articulations, LLC, Spring City, PA USA.
Each board supports freehand markings and textual annotations; additionally the program provides a set of custom “stamps” representing important organs, pathological changes and surgical instrumentation (Fig. 12.23). These can be used e.g. to present the placement of a stent, the location of a tumor or even the function of ion channels. In keeping with the latest AppStore trends, DrawMD can import custom pictures and video recordings (Fig. 12.24), such as X-ray, CT or USG scans. Annotated boards can be saved (Fig. 12.25) and accessed from the application’s starting screen (Fig. 12.26).³
Fig. 12.23: DrawMD – visualization of selected organs (© Visible Health, Inc.).
The user may also create a custom account with which they can later log into the application, post in forums and submit image sharing requests to other users. The application includes an English-language tutorial complete with screenshots, enabling new users to familiarize themselves with its features.
3 Images (Figs. 12.23–12.26) © Visible Health, Inc. Created using drawMD Urology (www.drawmd.com) and reproduced with permission by Visible Health, Inc.
Fig. 12.24: DrawMD – choosing video recording or pictures (© Visible Health, Inc.).
Fig. 12.25: DrawMD – making and saving annotations (© Visible Health, Inc.).
Fig. 12.26: DrawMD – starting screen, accessing saved annotations (© Visible Health, Inc.).
12.2.9 MEDtube, available for iOS and Android

Likely inspired by the popular YouTube video sharing service, MEDtube [13] provides a mobile application with recordings of actual surgeries (Fig. 12.27). According to the distributor, over 10 thousand HD video files are available, each accompanied by a brief description of the corresponding case. The dataset is divided into categories corresponding to areas of medical practice (Fig. 12.28). MEDtube also operates a web-based video education portal (Fig. 12.29) where medical students can browse an online library of high-quality multimedia content of clear practical and educational value, comprising over ten thousand professional medical videos, likewise divided into specific categories (Fig. 12.30). MEDtube is free to use (only registration is required). The resources are provided by physicians, medical societies, healthcare centers and universities from all over the world, and all uploaded materials are reviewed and approved by medical experts before publication (Figs. 12.31 and 12.32).
Fig. 12.27: MEDtube – video recording of the surgery.
Fig. 12.28: MEDtube – choosing medical category.
Fig. 12.29: MEDtube – educational medical video portal.
Fig. 12.30: MEDtube – choosing specialty.
Fig. 12.31: MEDtube – choosing video recording of the surgery.
Fig. 12.32: MEDtube – presentation of the video recording of the surgery.
12.3 Specialized applications

One of the most interesting and advanced applications discussed in this chapter is the Touch Surgery system [14]. It enables the user to conduct virtual surgery, taking them through all stages of the process and describing the procedures and instruments required at each stage. The authors' (Jean Nehme, Andre Chow) goal is to promote best practices in surgery in order to improve the overall quality of medical care [14]. Realistic visualizations created by the developers in collaboration with renowned universities (including Duke University, Stanford University, University of Toronto, Imperial College London and Dartmouth College) prepare the student for participation in actual surgeries. Touch Surgery is an interactive mobile simulator (a smartphone application) which provides a step-by-step walkthrough of the entire surgical procedure, allowing the user to make decisions at each stage. Crucially, it fits in a pocket and can be run at any time and place of the user's choosing [15]. It can serve young trainee surgeons as well as professionals interested in furthering their knowledge of orthopedic surgery, trauma surgery, neurosurgery, general surgery, plastic surgery, cardiac surgery, vascular surgery etc. – all in accordance with the time-tested maxim repetitio est mater studiorum. Advanced 3D visualization techniques recreate the work of actual surgeons, immersing the user in a lifelike virtual operating theater. All presentations are richly detailed and mimic real-world conditions and procedures. A dynamic flow of scenes closely tracks the movements of the surgeon, OR staff, the operating table, support equipment etc. This fluid motion is arguably the most important feature of the application, strengthening the illusion of participating in real surgery. Touch Surgery can assist trainees in becoming accustomed to the procedures required in a real clinical setting.
It is clear that any hand motions and other actions should be well practiced prior to operating on real patients. “Decision before incision” – goes the surgical mantra. Computerized simulations are an invaluable training tool, enabling the student to obtain practical knowledge which will help them make the right decisions in the future. An important aspect of training is selection of the appropriate methodology. Here, we can differentiate between two broad approaches: teacher-assisted learning and individual learning. Touch Surgery unifies both concepts as on the one hand the student may individually perform certain actions (especially in the Test mode), while on the other hand the system plays the role of a tutor in the Learn mode, providing the student with hints and recommendations. Medical simulations can help students develop appropriate “reflexes” and ensure that these are effectively applied in practice. This, in turn, improves the quality of medical care as the trainee does not need to painstakingly recall each sequence of actions prior to making an incision or separating tissue layers. An undeniable advantage of Touch Surgery is that it implements the popular “edutainment” concept and can be freely experimented with at no risk to actual patients – note that a relaxed student can assimilate knowledge much faster than an anxious one, regardless of age.
Individual modules of Touch Surgery are based on cognitive analysis of various surgical procedures. Each of them is divided into a number of stages and decision points. The application can assess the skills of prospective surgeons, tracking and grading their progress [14]. Each procedure can be practiced in the Test mode after being explained to the student in the Learn mode. The application also keeps track of the student’s results and maintains an individualized library of surgeries. According to Jean Nehme (who co-developed the application with several colleagues, including the orthopedic surgeon Andre Chow) Touch Surgery responds to the pressing need to train specialists without interfering with their everyday medical practice.
12.3.1 Application description

Having created an account, the user is presented with a list of available surgeries (Fig. 12.33). While the basic version of Touch Surgery contains only a few representative procedures, additional ones can be downloaded from the Internet free of charge. The full list includes (among others) neurosurgery, orthopedics, plastic surgery and general surgery, with 74 modules (including 32 orthopedic procedures) currently available and new ones being introduced on a regular basis. The following description details a sample orthopedic surgery, presenting first-time users with a walkthrough of its individual stages: I – patient preparation, II – femoral preparation, III – proximal locking and IV – distal locking.
Patient preparation

The femoral nailing simulation was prepared by the MSK Lab at Imperial College London in collaboration with the developers of Touch Surgery. The procedure itself consists of four stages, each of which is presented in detail, enabling users to repeat and memorize the necessary steps so that they can be smoothly performed in a real OR setting. In the authors' opinion, advanced, visually appealing animations provide an invaluable aid in understanding and memorizing surgical procedures. In this case, the first stage, called "patient preparation", shows how to position the patient and otherwise prepare them for insertion of an intramedullary nail into the right femur (Fig. 12.34). The next step enables the user to select one of two content presentation modes. This is done by clicking buttons displayed above the main text field. The Learn button replays the procedure in guided learning mode, while the Test button allows the user to try their own hand at performing the necessary actions. The learning mode is primarily aimed at trainees (students or doctors seeking specialization). Activating it takes the user to a new screen which presents a detailed overview of the required actions (e.g. "drag the green ring onto the pink ring"). Once a given step is complete, the user may proceed to the next step, where the patient is shown sitting on the
Fig. 12.33: Library module – the list of operation types. Selecting an item (arrow) starts installation (via the Internet) of the next module (Patient preparation).
Fig. 12.34: The explanation of correct patient preparation.
operating table. The application will now monitor the patient’s status as the user goes through application of traction, alignment of limbs, insertion of the perineal post etc. (Figs. 12.35 and 12.36). The learning mode concludes with surgical site cleanup. The operating table is assembled by dragging the green ring displayed in the visualization window. The direction of motion conforms to the natural movement performed in an OR setting (as indicated by the red arrow in Fig. 12.37).
Figs. 12.35 and 12.36: The sequence of steps in preparing the operating table, and the outcome of the user's action (in this case placement of the perineal post). Dragging the green ring onto the pink ring in the direction of the patient's crotch (as indicated by the arrow) advances the visualization to the next step, i.e. placement of the perineal post, which provides countertraction while operating on the upper extremity of the femur.
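The ring-drag mechanic amounts to a simple hit test: the walkthrough advances only when the dragged marker is released close enough to its target. A minimal sketch follows; the tolerance radius is an assumption for illustration (Touch Surgery's actual threshold is not published):

```python
import math

# Minimal sketch of the "drag the green ring onto the pink ring" mechanic.
# The 30-pixel tolerance is an assumed value, not taken from the app.
TOLERANCE_PX = 30.0

def ring_dropped_on_target(drop_xy, target_xy, tolerance=TOLERANCE_PX):
    """True if the dragged ring is released within tolerance of the target."""
    dx = drop_xy[0] - target_xy[0]
    dy = drop_xy[1] - target_xy[1]
    return math.hypot(dx, dy) <= tolerance

step = 0
if ring_dropped_on_target((105, 198), (100, 200)):
    step += 1  # advance the walkthrough to the next animation
print(step)  # 1
```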
Subsequently the application presents – in more than a dozen steps – the process of setting up the operating table and positioning the patient’s upper and lower limbs in order to enable the surgery. Each stage is supplemented by multimedia presentations showing the correct course of action (see Fig. 12.37 – patient is prepared for right-side femoral intramedullary nailing, with axial stability facilitating fixation of the fractured femur). The course is based on the simple “cause and effect” principle (Figs. 12.39 and 12.40). Another important feature offered by Touch Surgery as part of the patient preparation phase is a selection of intra-operative radiographs showing how the fractured femur is fixated on the operating table (Fig. 12.38). The presented module encourages surgeons to take scans of not just the fractured bone, but also of the proximate joints – in this case, the hip joint and the knee joint. This step minimizes malpractice risks by revealing additional joint injuries, should
Fig. 12.37: Patient ready for the next step of the operation.
Fig. 12.38: Intra-operative radiograph.
any exist. The Test mode of the application enables the user to apply their newly acquired knowledge by performing unassisted virtual surgery on the femur. At each step the user is presented with a multiple choice question with four possible answers, only one of which is correct (Figs. 12.39 and 12.40). The presented procedure consists of 20 steps and therefore comprises 20 questions.
Figs. 12.39 and 12.40: Test module – the correct decision at the step shown in Fig. 12.39 leads to the next step, shown in Fig. 12.40.
In order to answer each question the user taps the selected field (in this case – "increase femoral traction"), following which the application displays the resulting position of the patient (Fig. 12.40). Users may also tap other answers (some of which may be incorrect) – this does not cause the application to revert to the beginning of the test; however, any incorrect answers are tallied up at the end and affect the user's final score (Figs. 12.41 and 12.42).
Figs. 12.41 and 12.42: Results of the Test mode shown in Fig. 12.41 – the attempt is graded "Fail" due to a low score (40 points, 70 required), with a graphical breakdown of accuracy at each step of the procedure.
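The grading shown in Figs. 12.41 and 12.42 – wrong taps tallied per step, an overall percentage and a pass mark of 70 – can be sketched as follows. The per-step penalty formula below is our assumption based on the screenshots, not Touch Surgery's published algorithm:

```python
# Illustrative scoring sketch for a Touch Surgery-style Test mode:
# each step records how many wrong answers preceded the correct one.
# Both the per-step penalty and the 70% pass mark are assumptions here.
PASS_MARK = 70.0

def step_precision(wrong_attempts):
    """One wrong tap halves the step score; two or more zero it (assumed rule)."""
    return {0: 100.0, 1: 50.0}.get(wrong_attempts, 0.0)

def grade(wrong_attempts_per_step):
    scores = [step_precision(w) for w in wrong_attempts_per_step]
    total = sum(scores) / len(scores)
    return total, total >= PASS_MARK

# A 5-step attempt: clean, clean, one wrong tap, two wrong taps, clean
total, passed = grade([0, 0, 1, 2, 0])
print(f"{total:.0f}% -> {'Pass' if passed else 'Fail'}")  # 70% -> Pass
```

Under the same assumed rule, an attempt with three badly answered steps out of five would score 40% and fail, mirroring the result shown in Fig. 12.41.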
Femoral preparation

The second part of Touch Surgery, i.e. femoral preparation, begins in the Learn mode as a continuation of the previously described preparatory step. The first task is to select the right scalpel. This is followed by a 3–5 cm incision proximal and posterior to the greater trochanter, extending 4 cm distally (Fig. 12.43). We are presented with an image of the operating field immediately prior to the first incision. Two rings are visible. Dragging the green ring onto the pink ring performs the incision and visualizes its results in the form of an animation (Fig. 12.44). The next step is to incise the fat layer. Again, two rings are displayed and the user is asked to drag the green ring onto the pink ring, simulating the incision. The actual procedure is too complex to cover in detail – instead we will briefly outline the remaining phases. The next phase, i.e. proximal locking, consists of 29 modules which together simulate closed surgery of a "freshly" fractured femur. With this surgical technique osteogenic cells are retained in the fracture hematoma, reducing
Fig. 12.43: Second step of the operation module. Dragging the green ring toward the pink one triggers a short animation of a 3–5 cm skin incision.
Fig. 12.44: The result of the action shown in Fig. 12.43 – continuing the cut deepens the incision.
the risk of nonunion and iatrogenic infections (e.g. by S. aureus). The second module of Touch Surgery focuses on precise, controlled setting of the broken bone using intraoperative imaging, with particular attention devoted to rotational, axial and lateral fractures. Initial imaging is used to locate the area between the greater trochanter and the piriformis fossa, which is where the intramedullary nail will be inserted. A surgical awl is used to create an entry portal to the intramedullary canal (Fig. 12.45). A guide wire is then inserted all the way into the canal, and its placement verified with a radiograph to ensure correct implantation of the intramedullary nail (Figs. 12.46 and 12.47). The guide wire is threaded through the entry portal between the greater trochanter and the piriformis fossa. This is followed by application of several cannulated reamers with progressively broader heads, until the diameter of the head exceeds that of the intramedullary nail by approximately 1.0–1.5 mm. This preparation of the intramedullary canal concludes the second stage of treatment and the application again presents the user with a choice between carrying on in the Learn mode or switching to the Test mode.
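The reaming sequence described above – progressively broader heads until the canal exceeds the nail diameter by roughly 1.0–1.5 mm – reduces to a short loop. The 0.5 mm increment and the starting size below are assumptions for illustration only:

```python
# Illustrative reaming sequence: enlarge the intramedullary canal in fixed
# increments until it is about 1.0-1.5 mm wider than the chosen nail.
# The 0.5 mm step and the starting head size are assumed values.
def reamer_sequence(start_mm, nail_mm, overream_mm=1.0, step_mm=0.5):
    sizes, current = [], start_mm
    while current < nail_mm + overream_mm:
        current += step_mm
        sizes.append(round(current, 1))
    return sizes

# A 10 mm nail, starting from a 9 mm reamer head:
print(reamer_sequence(9.0, 10.0))  # [9.5, 10.0, 10.5, 11.0]
```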
Figs. 12.45, 12.46 and 12.47: The insertion point for the awl and guide wire, with radiographic verification of the actions shown in Figs. 12.46 and 12.47 and comments for each action.
Proximal locking

The third phase of the surgery involves proximal locking of the intramedullary nail. The nail is attached to a targeting guide with a locking pin, then introduced into the intramedullary cavity in such a way that its head protrudes from the entry portal. This phase comprises 28 animated steps (Fig. 12.48).
Fig. 12.48: Visualization of the third stage of the operation module – nail fixation.
The illustration shows how to use the targeting guide to pinpoint the placement of the proximal locking canal (here, the user is asked to select a scalpel in order to expose the bone). Installation of two locking screws in the proximal part of the femur concludes the third phase of the surgery.
Distal locking

The final part of the application explains the distal locking procedure, with a total of 47 animated and richly detailed visualizations (Fig. 12.49).
Fig. 12.49: The fourth step of the distal locking procedure.
Here, another locking canal must be drilled to house the distal locking pin. This is done with the use of the “freehand locking” technique. The drill guide must be centered over the locking canal and in order to ensure this, the canal is drilled with help from image-enhanced radiography. The surgeon is reminded that before distal locking begins the length and rigidity of the femur must be verified. Much like the previously described modules, phase 4 of the surgery can be accessed in Learn or Test modes, enabling users to learn the procedure with help from an interactive tutorial or try their own hand at performing the required actions.
12.4 Simulators

Surgical simulators represent a common ground between advanced IT systems and modern surgical practice. The concept has emerged from the world of videogames [16, 17], which – owing to 3D visualization and sensitive motion detection technologies – can accurately track gestures, translating them into actions performed by the player's avatar in virtual space. Clearly, virtual reality carries great promise for medical training [18], particularly in the area of minimally invasive surgery where detailed visualization and accurate eye-hand coordination is of utmost importance. Collections of CAT and NMR scans can be assembled into realistic virtual environments detailing any organ in the human body. Virtual controllers coupled to motion detectors and 3D visualization hardware can then simulate real-life surgeries. Modern simulation tools include feedback loops where the controller reacts differently depending on the force applied by the user and the position of the surgical implement in the patient's body. Arguably the most technologically advanced surgical robot currently on the market is the Da Vinci Surgical System. Designed to advance the state of the art in minimally invasive surgery and introduced into clinical practice at the beginning of the 21st century, by 2012 it had been used in over 200 000 procedures carried out throughout the world. Its operation can be witnessed, among others, in the following videos (all available on YouTube): "Live robotic da Vinci radical prostatectomy during EAU congress" by the European Urological Association, "India's 1st Da Vinci Robotic Live Surgery" by the Muljibhai Patel Urological Hospital, and "Da Vinci Robotic Hysterectomies/Uterine Fibroids".
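A common way to realize the force-feedback loop mentioned above is a penalty ("virtual spring") model: once the instrument tip penetrates a virtual tissue surface, the controller pushes back with a force proportional to the penetration depth. The sketch below is generic; the stiffness constant is an arbitrary illustrative value, not taken from any particular simulator:

```python
# Generic haptic feedback sketch: a penalty (spring) model in which the
# reaction force grows linearly with penetration depth. The stiffness
# constant is an arbitrary illustrative value.
STIFFNESS_N_PER_MM = 0.8

def reaction_force(tool_depth_mm, surface_depth_mm, k=STIFFNESS_N_PER_MM):
    """Force (N) pushing the tool back out of the virtual tissue."""
    penetration = tool_depth_mm - surface_depth_mm
    return k * penetration if penetration > 0 else 0.0

print(reaction_force(12.5, 10.0))  # 2.0 N at 2.5 mm penetration
print(reaction_force(8.0, 10.0))   # 0.0 N - tool is above the surface
```

In a real device this computation runs at a high rate (typically hundreds to thousands of updates per second) so that the resistance felt in the hand tracks the on-screen instrument smoothly.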
12.4.1 Selected examples of surgical simulators

LapSim by Surgical Science

Surgical Science is a Swedish company founded in the 1990s. Its mission is to explore the application of virtual reality in developing minimally invasive surgery simulators [19]. In 2001 the company released LapSim (Figs. 12.50 and 12.51⁴) – a surgical simulator targeted at students and interns specializing in cholecystectomy, hysterectomy, nephrectomy and other MI procedures. LapSim trains users in operating cameras and surgical instruments, developing eye-hand coordination, rapid and accurate grasping, incising and clamping, wound closure etc. A detailed description of the simulator along with a variety of video presentations is available on the company website [20].
dV-trainer by Mimic

Mimic Technologies Inc., an American company founded in the 1990s, specializes in VR medical training with the use of surgical robots [21]. In 2003 the company initiated collaboration with Intuitive Surgical, developer of the Da Vinci Surgery System and a global leader in robot-assisted minimally invasive surgery. This collaboration bore
4 Images (Figs. 12.50 and 12.51) from www.surgical-science.com.
Fig. 12.50: LapSim – surgical simulator.
Fig. 12.51: LapSim – surgical simulator console.
fruit in 2007 with the release of dV-trainer (Figs. 12.52 and 12.53⁵), parts of which were copied over from the most up-to-date version of Da Vinci and subsequently updated in step with further development work on the robot itself. The operator has access to a console which closely corresponds to the controls of the actual robot. All input is processed by the simulator and its results visualized by a customized display, mimicking a real-life operating theater.
5 Images (Figs. 12.52 and 12.53) from www.mimicsimulation.com.
Fig. 12.52: dV-trainer – surgical robot.
Fig. 12.53: dV-trainer – surgical robot, screen shot.
The Skills Simulator by Intuitive Surgical

The Skills Simulator (co-developed with Mimic Technologies) [22] simulates a range of basic and advanced procedures preparing the user for interaction with the Da Vinci robot (Figs. 12.54 and 12.55⁶). The list of exercises covers the following categories:
– operating the EndoWrist device designed to enhance the natural agility of the human hand
– operating surgical cameras and forceps with 3D visualization feedback
– resections carried out with various instruments and varying force
– applying surgical needles in a variety of situations, closing various types of wounds
– solving the so-called fourth hand problem, i.e. coping with situations which require sudden application of an additional instrument
12.5 Summary

Innovative technical solutions – whether in the form of advanced customized hardware or computer applications for mobile devices – create fresh opportunities for surgical trainees [23]. Modern-day students, interns and doctors specializing in particular fields of surgery can obtain convenient, rapid access to vast volumes of information and knowledge. Simulation techniques will therefore continue to play a prominent role in medical education [24–28]. Many of the tools described in this chapter are already part of academic curricula at medical schools, where students learn
6 Images (Figs. 12.54 and 12.55) from Skills Simulator, Copyright © 2015 Intuitive Surgical Inc.
Fig. 12.54: Da Vinci – surgical robot (© 2015 Intuitive Surgical Inc.).
Fig. 12.55: Da Vinci – surgical robot, screen shots (© 2015 Intuitive Surgical Inc.).
how to deal with medical issues and perform surgeries [29–31]. The same tools are also being used by specialists to further hone their surgical skills. Dedicated applications such as Touch Surgery are an example of how modern simulation techniques can help surgeons practice their trade in a nonclinical setting. The ability to repeat a procedure many times and to do so at the time and place of the user’s choosing paves the way to effective assimilation of knowledge. While computer applications offer limited opportunities to practice manual skills, they benefit greatly from their wide availability and ease of use [14].
Although “real” surgical simulators remain very expensive, we can hope that ongoing progress in IT and computer hardware development will eventually make them more accessible to medical practitioners [32].
References
[1] Lenoir T. In: Thurtle P, ed. Semiotic Flesh: Information and the Human Body. Seattle, WA: University of Washington Press, 2002, pp. 28–51.
[2] Sutherland LM, Middleton PF, Anthony A, Hamdorf J, Cregan P, Scott D, et al. Surgical simulation: a systematic review. Ann Surg. 2006 Mar;243(3):291–300.
[3] Cooper JB and Taqueti VR. A brief history of the development of mannequin simulators for clinical education and training. Postgrad Med J. 2008;84(997):563–570. doi: 10.1136/qshc.2004.009886.
[4] Petty MD and Windyga PS. A high level architecture-based medical simulation system. SIMULATION. 1999;73:281–287.
[5] Ziv A, Ben-David S and Ziv M. Simulation based medical education: an opportunity to learn from errors. Medical Teacher. 2005;27:193–199.
[6] 3d4medical.com. Award Winning apps for medical students, educators and professionals – 3d4medical.com [Internet]. 2015 [cited 1 June 2015]. Available from: http://applications.3d4medical.com/apps_home.
[7] netfilter.com.br. Mobile Platforms NetFilter [Internet]. 2015 [cited 1 June 2015]. Available from: http://www.netfilter.com.br/mobile-platforms/?lang=en.
[8] Monster Minds Media [Internet]. 2015 [cited 1 June 2015]. Available from: http://monstermindsmedia.fr/?page_id=12.
[9] Musumeci G. Brain Anatomy. Available from: https://sensortower.com/ios/us/gianluca-musumeci/app/brain-anatomy/548219833.
[10] AO Foundation. Mobile Apps [Internet]. 2015 [cited 1 June 2015]. Available from: https://aotrauma.aofoundation.org/Structure/education/self-directed-learning/mobile-apps/Pages/mobile-apps.aspx#null.
[11] Therapeutic Articulations. iOrtho+ Mobile App for iPhone, iPad, & Android – Therapeutic Articulations [Internet]. 2015 [cited 1 June 2015]. Available from: http://www.therapeuticarticulations.com/iPhone___iPad_App.php.
[12] Roth M, Cox J, LaCava C, et al. drawMD Archives – Visible Health [Internet]. Visible Health. 2015 [cited 1 June 2015]. Available from: http://www.visiblehealth.com/category/drawmd/.
[13] MEDtube.net. Medical Videos, Surgery, Procedures Videos [Internet]. 2015 [cited 1 June 2015]. Available from: https://medtube.net/.
[14] Touch Surgery Surgical Simulator. The Mobile Surgical Simulator App [Internet]. 2015 [cited 1 June 2015]. Available from: https://www.touchsurgery.com/.
[15] Al-Hadithy N, Gikas PD and Al-Nammari SS. Smartphones in orthopaedics. Int Orthop. 2012 Aug;36(8):1543–7.
[16] Bradley H. Can video games be used to predict or improve laparoscopic skills? Journal of Endourology. 2005;19(3):372–376. doi: 10.1089/end.2005.19.372.
[17] Curet MJ. The impact of video games on training surgeons in the 21st century – Invited Critique. Arch Surg. 2007;142(2):181–186. doi: 10.1001/archsurg.142.2.186.
[18] Gurusamy KS, Aggarwal R, Palanivelu L and Davidson BR. Virtual reality training for surgical trainees in laparoscopic surgery. Cochrane Database Syst Rev. 2009 Jan;(1):CD006575.
[19] Grantcharov T, Bardram L, Funch-Jensen P and Rosenberg J. Learning curves and impact of previous operative experience on performance on a virtual reality simulator test of laparoscopic surgical skills. Am J Surg. 2003;185(2):146–149. doi: 10.1016/s0002-9610(02)01213-8.
[20] Surgical Science. LapSim – The Proven Laparoscopic Training System [Internet]. 2015 [cited 1 June 2015]. Available from: http://www.surgical-science.com/lapsim-the-proven-training-system/.
[21] Mimic Simulation.com. Mimic Simulation | dV-Trainer [Internet]. 2015 [cited 1 June 2015]. Available from: http://www.mimicsimulation.com/products/dv-trainer/.
[22] Intuitive Surgical.com. Da Vinci [Internet]. 2015 [cited 1 June 2015]. Available from: http://www.intuitivesurgical.com/company/media/.
[23] Grunwald T, Krummel T and Sherman R. Advanced technologies in plastic surgery: how new innovations can improve our training and practice. Plast Reconstr Surg. 2004 Nov;114(6):1556–67.
[24] McGaghie WC, Issenberg SB, Petrusa ER and Scalese RJ. A critical review of simulation-based medical education research: 2003–2009. Med Educ. 2010 Jan;44(1):50–63. doi: 10.1111/j.1365-2923.2009.03547.x.
[25] Akaike M, Fukutomi M, Nagamune M, Fujimoto A, Tsuji A, Ishida K and Iwata T. Simulation-based medical education in clinical skills laboratory. J Med Invest. 2012;59(1–2):28–35.
[26] Milburn JA, Khera G, Hornby ST, Malone PSC and Fitzgerald JEF. Introduction, availability and role of simulation in surgical education and training: Review of current evidence and recommendations from the Association of Surgeons in Training. International Journal of Surgery. 2012;10(8):393–398. doi: 10.1016/j.ijsu.2012.05.005.
[27] Dygut J and Płonka S. Student medical education with the real orthopedic case presented as interactive computer simulation. Bio-Algorithms and Med-Systems. 2014;10(5).
[28] Dygut J, Płonka S and Roterman-Konieczna I. Involvement of medical experts in legal proceedings: an e-learning approach. Bio-Algorithms and Med-Systems. 2014 Jan;10(3).
[29] Betz R, Ghuysen A and D'Orio V. The current state of simulation in medical education. Rev Med Liege. 2014 Mar;69(3):132–8. French. PubMed PMID: 24830212.
[30] Sesam-web.org. SESAM – Society in Europe for Simulation Applied to Medicine [Internet]. 2015 [cited 1 June 2015]. Available from: http://www.sesam-web.org/.
[31] ssih.org. The Society for Simulation in Healthcare [Internet]. 2015 [cited 1 June 2015]. Available from: http://www.ssih.org/.
[32] Atesok K, Mabrey JD, Jazrawi LM and Egol KA. Surgical simulation in orthopaedic skills training. J Am Acad Orthop Surg. 2012 Jul;20(7):410–22.
Part VII: Support of therapy
Łukasz Czekierda, Andrzej Gackowski, Marek Konieczny, Filip Malawski, Kornel Skałkowski, Tomasz Szydło, and Krzysztof Zieliński
13 From telemedicine to modeling and proactive medicine

13.1 Introduction

Contemporary healthcare generates vast amounts of data. Telemedical systems contribute considerably to this, greatly broadening the spectrum of available information to include:
– general medical and social profiles
– history of previous therapies
– laboratory and imaging examination results
– records of everyday activities of patients and prescribed rehabilitation
– records of food calories and times of meals
– basic body parameters such as blood pressure, heart rate, body weight, temperature, blood glucose level, etc.
– records of drug intake

The sensing and monitoring of our life activity is going to be pervasive [1]. It will be a source of very valuable information which may considerably improve healthcare operation, leading to personalized and proactive medicine, currently perceived as a very promising research area. At the same time, demand for healthcare will become more and more widespread due to commonly observed trends:
– better living conditions (including medical care) and general civilizational progress mean that people live longer than before; an ageing population is, however, at greater risk of cognitive impairment and frailty
– chronic diseases like obesity and diabetes affect a considerable percentage of the population in highly developed countries
– intensive and successful promotion of a healthy lifestyle and of awareness of the importance of self-management of health and disease

Efficient utilization of this information requires advanced ICT technologies such as big data processing, machine learning, cognitive computing, predictive analytics, etc. One of the interesting areas of medical data processing is the modeling of selected aspects of human body behavior, supporting medical diagnoses and proactive treatment. This chapter presents how existing telemedical systems may contribute to this vision of future medicine. In Section 13.2 the traditional model of healthcare supported
by telemedicine is described, and a new model of personalized, proactive medicine to enhance it is specified. Section 13.3 contains an overview of computational methods which can be utilized in the new approach. Sections 13.4 and 13.5 characterize existing TeleCARE and TeleDICOM systems developed by the authors and their role in proactive data-driven medicine.
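The heterogeneous record types listed at the start of this introduction can be pictured as one structured patient profile. A minimal Python sketch follows; the class and field names are our own illustration, not part of any system described in this chapter:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Measurement:
    """A single telemonitoring reading, e.g. blood pressure or weight."""
    taken_at: datetime
    kind: str     # "blood_pressure", "heart_rate", "weight", "glucose", ...
    value: float
    unit: str

@dataclass
class PatientProfile:
    """Aggregates the data sources named in the introduction."""
    patient_id: str
    therapies: list = field(default_factory=list)     # previous therapies
    lab_results: list = field(default_factory=list)   # laboratory and imaging results
    activities: list = field(default_factory=list)    # everyday activities, rehabilitation
    meals: list = field(default_factory=list)         # food calories and meal times
    measurements: list = field(default_factory=list)  # basic body parameters
    drug_intake: list = field(default_factory=list)   # records of drug intake

profile = PatientProfile(patient_id="P-001")
profile.measurements.append(
    Measurement(datetime(2015, 6, 1, 8, 0), "heart_rate", 72.0, "bpm"))
```

Real telemedical systems store such records in electronic health record formats rather than in-memory objects; the sketch only shows how the listed sources could hang together.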
13.2 ICT-driven transformation in healthcare

Several key factors contribute to the transformation of healthcare systems resulting from wide ICT adoption: remote pervasive access to medical data; the collection of large amounts of medical data which can be used for knowledge extraction and modeling; and the development of new methods for real-time analysis of vast streams of data which can be exploited in medical decision support systems. The existing and future possibilities in these areas are elaborated below.
13.2.1 Overview of telemedicine

Continuing technical progress and the increasing adoption of the Internet have positioned it as the basic communication medium and service infrastructure of today's world. Applications of the Internet include telemedicine, which has become a very broad term encompassing many remotely performed processes in medicine, such as [2]:
– Telemonitoring of patients' vital parameters (physical activity, weight, blood pressure, heart rate, etc.). If there is feedback on the basis of the parameters gathered, the term telecare is more suitable.
– Telerehabilitation – observation and supervision of the process of rehabilitation and its progress.
– Telediagnosis and teletreatment – facilitating remote cooperation between the patient and medical staff with the aim of obtaining clinical information that may help to introduce and optimize medical or paramedical aid.
– Tele-education of patients – fostering medical consciousness and helping to promote a healthy lifestyle.
– Tele-education of medical staff (medical students, nurses, postgraduate training of physicians, etc.).
– Teleconsultation – cooperation between medical practitioners and experts in order to discuss particularly difficult cases, exchange ideas and make therapeutic decisions. This process is sometimes called tele-expertise. It may also serve the tele-education of medical staff observing the teleconsultations.
Many different telemedical systems have already been successfully deployed in countries around the world. Rapid progress in medical research and ubiquitous access to the Internet mean that such systems will become pervasive in the near future.
13.2.2 Traditional model of healthcare supported by telemedicine

The processing model of medical data generally remains traditional even when telemedical tools are utilized. In this approach, the data (symptoms, results of various examinations, or information gathered from telemonitoring or telerehabilitation systems) are the input for the reasoning process performed by medical doctors. Their knowledge and experience are employed to diagnose the disease or medical problem, to evaluate the effectiveness of previous therapy and (possibly) to modify it appropriately. In some cases it may be possible for them to predict the future progress of the disease. When the knowledge and experience of the doctor taking care of a patient are not sufficient to diagnose and properly treat a case, a consultation within a local team of experts or a remote consultation using appropriate telemedical systems can be organized. The described process is illustrated in Fig. 13.1. Patient-related data acquired from various sources is gathered in the medical patient profile module (which can be identified with the electronic health record, although it can also be provided with some paramedical information, e.g. everyday activity).

Fig. 13.1: Traditional healthcare model supported with telemedicine. (The figure shows patient data – body parameters, activity, meals, laboratory and imaging data – flowing through telecare, telerehabilitation and teleconsultation systems into the medical patient profile consulted by the medical doctor and medical expert, who provide treatment and medical care.)

The quantity of medical information associated with a single patient, and its complexity, is growing considerably. As stated and justified in the introduction, healthcare will become more and more widespread in the near future. As a consequence, although the described approach employs modern telemedical tools, it cannot be preserved in this form for long [1]:
– It is poorly scalable and not cost-effective – to increase the scale of such a healthcare system, employing more medical personnel is inevitable.
– It is generally reactive, i.e. medical doctors react to reported ailments and observed symptoms. Therapeutic processes are frequently initiated by patients themselves, who visit their doctors when they feel bad.

A solution to these issues is much better utilization of the potential of ICT technologies in the area of medical data processing. For efficient, large-scale operation, this process must be automated at least to some degree. This could relieve physicians of simple repetitive tasks, allowing them to concentrate on more complicated issues. Efficiently utilized techniques of medical data mining and intelligent analysis [3] may yield a more proactive approach to treatment. These issues are discussed in the following subsections.
13.2.3 Modeling as knowledge representation in medicine

Mathematical and computer modeling has been practiced for many years as a very effective method of representing very complex relations and processes in medicine. Although it is impossible at the moment to create a full model of the human body, due to insufficient knowledge, many precise models of particular diseases (cardiovascular, diabetes, etc.) [4, 5], single organs (heart, pancreas, etc.) [6, 7] or systems (cardiovascular, respiratory, etc.) [8] exist. In general, modeling in medicine is a very broad area and cannot be covered within a single book. This chapter focuses on models which can be developed using medical data stored in telemedical systems as part of teletreatment, telemonitoring and teleconsultation activities (see Sections 13.4 and 13.5). Considering these kinds of systems as a source of valuable diagnostic information, various models that can support physicians can be developed, for example:
– Teletreatment and telemonitoring systems may analyze the received data related to a patient's health state and use data mining methods to build personalized models of disease progression or therapeutic efficacy. Based on such models the systems can undertake various actions, e.g. recommend certain activities to a patient, report certain signs to their referring doctor or generate alerts to appropriate medical personnel on duty (nurses, general practitioners or specialists).
– In imaging medicine various Computer-Aided Diagnosis (CAD) systems may be employed, which can build models from available data and suggest a diagnosis, or at least indicate some findings which are suspected to be atypical or potentially dangerous. The final decision is taken in this case by medical personnel.
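The first example above – telemonitoring systems building personalized rules and generating alerts – can be made concrete with an intentionally simple rule. The clinical threshold and window below are invented for illustration only; any real rule would have to be set and validated by medical professionals:

```python
def weight_alert(daily_weights_kg, max_gain_kg=2.0, window_days=3):
    """Flag a rapid weight gain over the last few days, e.g. possible
    fluid retention in heart-failure telemonitoring. The 2 kg / 3 day
    threshold is purely illustrative, not a clinical recommendation."""
    if len(daily_weights_kg) < window_days + 1:
        return False  # not enough history to judge
    recent = daily_weights_kg[-(window_days + 1):]
    return recent[-1] - recent[0] > max_gain_kg

print(weight_alert([80.0, 80.1, 80.0, 80.2]))        # stable weight: False
print(weight_alert([80.0, 80.9, 81.8, 82.7, 83.4]))  # rapid gain: True
```

A deployed system would route such an alert to the medical personnel on duty rather than act on it autonomously, in line with the text's remark that the final decision rests with medical personnel.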
In both cases the obtained models not only result in a better understanding of the disease itself but also contribute to monitoring positive or adverse reactions to therapy. Of course, the knowledge represented in the form of models always has to be applied with care, and any actions suggested by such models should be verified by medical professionals.
13.2.4 Towards a personalized and proactive approach in medicine

An obvious prerequisite for automatic support of treatment is a very reliable, continuously verified and improved computer model describing the operation of the human body. The construction of each model is a long and complex process aiming at creating the best possible representation of a fragment of the real world. The same is true of modeling in medicine. A crucial issue in this area is the selection of data representative of the considered disease and subpopulation, which will be used for model development. This is an iterative process with many feedback loops in the preclinical and clinical phases, during which the model may be modified and improved. The already mentioned trends in healthcare operation may drastically facilitate the acquisition of data essential for the development and verification of the models:
– Recently, access to real medical data and its processing has become considerably simpler due to its digital representation.
– A patient's health state can be described more and more precisely. Patients' medical records may contain information gathered not only during traditional diagnostic processes (laboratory, imaging, etc.) but also thanks to pervasive sensing. Some parameters can already be obtained noninvasively with available and relatively cheap sensors. Implantable devices are already able to provide unique data suitable for frequent or continuous monitoring. In the future, more sensor implants will probably become available, providing not only physical but also biochemical parameters. Medical records may be supplemented with paramedical data (such as social conditions, physical activity, etc.) which can be important in the treatment of some diseases.
It is possible that advanced methods of automatic knowledge extraction and processing will soon be employed in the process of data acquisition and model verification, supporting the medical specialists whose expert domain knowledge has so far been indispensable in developing a model and evaluating the correctness of its operation. These two above-mentioned factors mean that medical models can be developed faster and are more reliable than before. As a consequence, they can be responsibly applied in support of the treatment process: automatically taking simple decisions and referring only the more important and toughest issues to medical doctors. Almost immediate feedback gathered through telemonitoring of patients for which the
model has been run may facilitate the process of the model's improvement or even self-improvement (the same applies to clinical trials). The individuality of human organisms makes it necessary to "personalize" the models. Models obtained using the individual medical data of patients take their individual features into account, which can be significant in the model utilization phase. With a reliable model of some aspect of human body operation (an organ, the course of a disease, etc.) it is possible to predict the body's behavior in that aspect under certain conditions or in the future. This opens the way to new applications (the first two are particularly important for this chapter's discussion and will be elaborated in the following paragraphs):
– designing and verifying a strategy of therapy prior to applying it
– preventive treatment
– planning surgical operations or interventions (dental, beauty treatment, etc.)
– research

It may be possible to choose the best possible therapy prior to applying it. When a therapy has been prescribed, its efficacy can be verified by comparing the compliance of its progress with previous assumptions. Significant deviation can trigger its modification. It may also be possible to predict whether and how a patient's disease will evolve in the near future (note that ethical issues are beyond the scope of this chapter). In consequence, such an approach may allow implementing preventive treatment. For example, by detecting susceptibility to some diseases and triggering earlier interventions it may be possible either to avoid the development of a disease altogether or to minimize its negative consequences. Earlier detection of risks associated with the progress of chronic diseases, ageing, etc. is generally more efficient and cheaper than reactive treatment.
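Verifying a therapy by comparing the compliance of its progress with previous assumptions can be sketched as checking observed readings against the trajectory a personalized model predicted, within a tolerance band. All numbers below, including the tolerance, are invented for illustration:

```python
def therapy_on_track(predicted, observed, tolerance=0.10):
    """Compare observed readings with the trajectory the personalized
    model predicted; a large relative deviation suggests the therapy
    should be reviewed. The 10% tolerance is purely illustrative."""
    deviations = [abs(o - p) / abs(p) for p, o in zip(predicted, observed)]
    return max(deviations) <= tolerance

# Predicted decline of some biomarker under the prescribed therapy
predicted = [9.0, 8.0, 7.2, 6.5]
print(therapy_on_track(predicted, [9.1, 7.9, 7.0, 6.6]))  # compliant: True
print(therapy_on_track(predicted, [9.0, 8.8, 8.9, 9.1]))  # deviating: False
```

In the scheme described in the text, the second outcome would trigger a modification of the therapy, reviewed by a medical doctor rather than applied automatically.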
Even if the prediction algorithms generate false positive hypotheses, additional examinations – which may also be recommended by this automated procedure – can determine the correctness of the hypothesis. Appropriate software tools made available to patients which visualize the progress of the prescribed therapy may improve the efficacy of the treatment by increasing their motivation to participate actively in the treatment of their own disease. It should be noted and emphasized that the goal of treatment automation is not to eliminate the need for medical doctors in the therapeutic process – their approach to the case is, and probably always will be, more holistic. Moreover, even the best state-of-the-art medical equipment cannot make a diagnosis without the knowledge, experience and intuition of medical doctors. Biology is also much more complex than technical science and there may be many different atypical situations making the modeling very difficult. Biological systems' reactions may depend on multifactorial, unpredictable causes. However, the model-based approaches do not only aim at supporting medical staff by taking over the processing of simple and routine issues. Cognitive functions of new-generation systems (according to IBM's definition of cognitive computing [9]) allow them to learn and efficiently penetrate the complexity of huge amounts of data to help human experts make better decisions. In the discussed area this means, among other things, indicating important facts to be considered by medical doctors and suggesting appropriate actions to take.
13.2.5 Model of proactive healthcare

A new model of healthcare, compliant with the above discussion, is presented in Fig. 13.2. The previously presented model (Fig. 13.1) has been complemented with decision system and knowledge representation modules.

Fig. 13.2: Model of proactive, knowledge-based healthcare. (The figure extends Fig. 13.1 with a knowledge representation module and a decision system; the data flows between the patient, the medical patient profile, the new modules and the medical personnel are labeled with arrows A–F, referenced in the text below.)
The knowledge representation module contains the models; since the creation of a holistic model is still impossible, multiple models are used to describe a single patient suffering from multiple diseases (e.g. diabetes and respiratory system problems). A new model is instantiated for a patient when a new disease or disability is diagnosed. The new model usually does not start devoid of knowledge, as in most cases it can be derived from models that refer to the same disease in patients with a similar medical profile. Extraction of such generic models is performed using cross-patient data analysis methods. These generic models are then fed with specific patient data stored in the medical patient profile, depending on the disease or treatment (arrow A). The data passed to the model comprises not only the results of examinations performed during treatment or acquired from telemonitoring systems, but also basic information characterizing the patient (such as age, weight, gender, comorbidities, etc.). Methods of extracting the knowledge in the form of models, as well as their further personalization, are described in the next section.
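The instantiation step just described – start from a generic model extracted by cross-patient analysis, then feed it the individual patient's data (arrow A) – might be sketched as follows. The parameter names and the simple update rule are hypothetical stand-ins for the model-fitting methods of Section 13.3:

```python
class DiseaseModel:
    """Generic model of one disease, extracted by cross-patient analysis,
    then personalized with data from the medical patient profile."""

    def __init__(self, disease, params):
        self.disease = disease
        self.params = dict(params)  # population-level starting point

    def personalize(self, patient_data, learning_rate=0.2):
        """Nudge each known parameter toward the patient's own data.
        A toy stand-in for real model fitting; unknown keys are ignored."""
        for name, observed in patient_data.items():
            if name in self.params:
                current = self.params[name]
                self.params[name] = current + learning_rate * (observed - current)
        return self

# Generic model obtained from patients with a similar medical profile ...
generic = DiseaseModel("diabetes", {"fasting_glucose": 7.0, "hba1c": 6.5})
# ... personalized with this patient's own measurements (arrow A)
personal = generic.personalize({"fasting_glucose": 9.0, "age": 64})
print(personal.params["fasting_glucose"])  # 7.4 – nudged from 7.0 toward 9.0
```

The point of the sketch is only the data flow: a shared, population-level starting point that is progressively adapted to one patient's profile.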
The decision system module is fed with the data stored in the knowledge representation module (arrow B) and is able to take decisions automatically. The decisions fall into various categories:
– diagnosis
– diagnostic hypothesis, i.e. a request for gathering more data
– providing a patient education module selected according to the diagnosis
– initiation of a new therapy
– optimization of the current therapy

The module can operate in a fully automatic mode or in an advisory mode. In the latter it only suggests certain steps, which need to be performed by dedicated medical personnel (nurse, medical doctor, medical experts). Probably a blended mode is optimal – more important decisions need approval. Selection of the best option in a particular case is up to a medical doctor and should depend on the complexity of the case, confidence in automatic reasoning, etc. Sometimes the decision can be made solely by medical personnel (arrow F) using data stored in the medical patient profile – in exactly the same way as in the traditional model. The decision is announced to the patient either using traditional means of patient–healthcare contact or via the feedback channel of a teletreatment system. At the same time, it is stored in the medical patient profile for evidence purposes. The decision can also influence the parameters of the model, e.g. when a new therapy strategy is run (arrows D and E).
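The choice between fully automatic, advisory and blended operation can be expressed as a small routing policy: in blended mode only decisions below some importance level are executed automatically, while the rest are queued for a doctor's approval. The decision categories come from the text; their importance weights and the threshold are invented:

```python
# Decision categories from the text, with illustrative importance weights
IMPORTANCE = {
    "diagnostic_hypothesis": 1,   # request for gathering more data
    "patient_education": 1,
    "therapy_optimization": 2,
    "new_therapy": 3,
    "diagnosis": 3,
}

def route_decision(category, mode, approval_threshold=2):
    """Return 'execute' or 'needs_approval' depending on the operating mode.
    In blended mode, decisions at or above the threshold go to a doctor."""
    if mode == "automatic":
        return "execute"
    if mode == "advisory":
        return "needs_approval"
    # blended mode
    if IMPORTANCE[category] >= approval_threshold:
        return "needs_approval"
    return "execute"

print(route_decision("patient_education", "blended"))  # execute
print(route_decision("new_therapy", "blended"))        # needs_approval
```

In a deployment the threshold itself would be chosen by a medical doctor, matching the text's remark that the best mode depends on the complexity of the case and confidence in automatic reasoning.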
13.3 Computational methods for model development

Various computational methods have been successfully applied in modeling the progression of the following categories of diseases:
– neurological diseases such as Parkinson's disease, Alzheimer's disease, schizophrenia or dementia [10, 11]
– cardiovascular diseases, e.g. arterial hypertension [12, 13], coronary artery disease [14, 15] or pulmonary hypertension [16]
– metabolic diseases, e.g. diabetes [17]
– neoplasms [5]
– AIDS [18]

These models allow for automatic extraction of selected aspects or features that support physicians in making decisions regarding the applied therapy. Such models are particularly useful in the case of chronic diseases (e.g. diabetes, Parkinson's disease, Alzheimer's disease), which advance slowly over time. Modeling the progress of these diseases is challenging due to various factors such as possible incompleteness of data, irregularity in observations and considerable heterogeneity of patients' conditions [19, 20]. Besides pure modeling of a disease's progression, the models obtained from data amassed by telemedical systems can also be applied to simulations of the course of a disease when a certain therapy is applied. In this way they are often used for improving therapies, e.g. warfarin or insulin dosing [21], immunotherapy [22] or chemotherapy [23, 24]. Personalized models of a therapy's effect are used by physicians to support them in the therapeutic process, thus their accuracy and comprehensibility are essential. Among the computational methods used for developing models of disease progression and therapy effects, the following principal groups can be distinguished:
– regression methods [10, 17, 24, 25]
– supervised learning methods [10, 12, 15, 17]
– unsupervised learning methods [11, 13]
– Markov Decision Process (MDP)-based methods [16, 23]
– Monte-Carlo methods [18]

The first group, regression-based prediction, roughly speaking relies on fitting a multidimensional function h(x), called a hypothesis, onto a given dataset, so that its values are as close as possible to the values in the dataset within a specific functional form. The function found can then be used either to predict future values (in this case time is one of the function's dimensions) or to classify patterns. From the medical modeling viewpoint, the first approach applies when a disease's progress or a therapy response is simulated [17, 24, 25], whereas the second mainly refers to CAD software [10]. The supervised learning methods constitute a generalization of the regression methods. The generalization relies on applying more sophisticated hypothesis functions (e.g. based on composed functions – logistic regression [26], SVM [27]) or generative modeling of the function's structure (e.g. neural networks [28], Bayesian networks [26]).
Application of these methods in most cases improves the accuracy of prediction; nonetheless, the fitting algorithms for these functions usually require larger training datasets. In contrast to the regression and supervised learning methods, the unsupervised learning methods do not require a dataset with example prediction values, since they are devised to find structures in datasets. The structures returned by these methods can be identified with the prediction values, which in the case of supervised learning methods are known a priori. With regard to medical modeling, these methods are most commonly used in CAD systems to extract certain information from complex medical data (especially imaging data), which is then used to make predictions about a disease's progress. For example, in [11] the Linear Discriminant Analysis (LDA) method was applied to identify regions in MR images which facilitated discrimination
between patients with Alzheimer's disease and healthy controls [11]. The methods based on MDP, like reinforcement learning [29], are used to model time series, i.e. processes over time. In the context of medical applications they are particularly useful for modeling interactions between diseases [16] or the influence of the drugs taken on a therapy [23]. In general, these methods constitute generalizations of the supervised learning methods for cases which require different classifications depending on the time moment. Finally, the Monte-Carlo methods are algorithms which return probabilistic predictions based on historical data. Their usage in medical simulations is only sensible in cases where limited data are available or when the other methods fail [18]. All the aforementioned methods were originally developed for the purpose of analyzing relatively small datasets, i.e. they are usually sufficient for predictions concerning a single patient. Although personalized models obtained from a single patient's data are sufficient for many kinds of diseases, the rapid development of telemedical software and the ubiquity of medical sensors open new possibilities for cross-patient data analysis and screening. Such analyses are particularly useful for the correct diagnosis and treatment of diseases such as glaucoma [30, 31] or cancer [32], which can be detected early based on massive comparative analysis of results from medical examinations. The demand for massive data processing has yielded a new branch of analysis methods – big data – which has been widely studied in recent years. The big data methods constitute counterparts of traditional data processing methods (including those described above); however, they are designed for processing very large datasets (in the order of tera- or exabytes).
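The Monte-Carlo approach – probabilistic predictions from historical data when little else is available – can be sketched as repeated resampling of historical patient outcomes. The cohort data and event encoding below are invented for illustration:

```python
import random

def event_probability(historical_outcomes, n_trials=10_000, seed=42):
    """Crude Monte-Carlo estimate of the probability of an adverse event,
    obtained by resampling outcomes of similar historical patients
    (1 = the event occurred within the horizon, 0 = it did not)."""
    rng = random.Random(seed)  # fixed seed for a reproducible estimate
    hits = sum(rng.choice(historical_outcomes) for _ in range(n_trials))
    return hits / n_trials

# Invented cohort: 30 similar historical patients, 9 experienced the event
history = [1] * 9 + [0] * 21
p = event_probability(history)  # close to the empirical rate 9/30 = 0.3
```

Real Monte-Carlo simulations in medicine sample from far richer models of patient state than this single-outcome resampling, but the principle – estimating probabilities by repeated random draws grounded in historical data – is the same.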
Employing big data methods in telemedical systems for the anonymized analysis of amassed datasets can be beneficial in several cases, for example, in early risk detection and classification of patients into risk groups [33, 34]. It should be noted, however, that many of these applications are currently at an early research stage, thus their clinical impact is unknown. Another novel area of computational methods which promises to support physicians but requires huge datasets is cognitive computing [9]. This group of methods is used to make decisions in situations characterized by high ambiguity and uncertainty. With regard to proactive healthcare, cognitive computing systems such as IBM’s Watson have been successfully applied in surgery support [35, 36] and CAD systems [37]. Although initial research results seem very promising, more extensive use of cognitive computing in medicine requires further study since the existing approaches are currently immature. A conclusion from the above considerations is that many computational methods may be used for modeling various types of diseases and therapies in medicine. A common trait of these methods is the necessity of feeding them with medical data. As stated at the beginning of this section, telemedical systems constitute a potential source of such data: due to their functionality they inherently collect large volumes of it. Taking into account the selected categories of telemedical
13 From telemedicine to modeling and proactive medicine | 281
systems, i.e. telemonitoring, teletreatment and teleconsultation, the data stored in them can be roughly divided into two types:
– Imaging data, such as X-ray (RTG), MR or tomography results. This kind of data is commonly exchanged in teleconsultation systems.
– Parametric data, i.e. sets of parameters describing a patient’s health state at discrete time moments. This type of data is gathered by all telemonitoring and teletreatment systems.
The following two subsections focus on applications of the described computational methods to these two types of medical data.
13.3.1 Computational methods for imaging data

Medical images constitute a very important and rich source of medical information, and extracting knowledge from them in an automatic or semiautomatic way is technically challenging. Medical imaging data is produced by dedicated acquisition equipment (computed tomography scanners, ultrasound devices, radiography units) and represented in the DICOM [38] format. DICOM documents contain not only the imaging information itself but also many parameters describing the patient and the examination conditions. Such metadata are added automatically during examination (some of them are taken from HIS [Hospital Information Systems], such as the patient’s personal data). The imaging data is typically stored in specialized repositories called PACS (Picture Archiving and Communication System); most medium and large hospitals have their own PACS repositories. In a typical processing flow the data is accessed from a PACS repository and analyzed by a radiologist or another expert, optionally supported by CAD software [39]. As a result of this process a diagnosis is created and stored in the PACS or HIS system. Since medical images are usually complemented with additional data, the supervised learning methods can be applied in order to model a disease’s progress or to support diagnosis, as was done in [10, 17, 25]. Moreover, if the modeling functionality includes a specific patient’s therapy and treatment history, it yields a simulation tailored to that patient’s conditions (e.g. drugs taken, operations completed, etc.). When an applied therapy constitutes a simulation parameter, the MDP-based methods can be more supportive than simple regression algorithms. In many cases model development requires a preliminary step devoted to recognition and extraction of certain structures in the medical data (e.g.
abnormal areas of medical images) or of the dimensions which contribute most to changes (taking too many non-contributing dimensions into account leads to the curse of dimensionality – see [40, 41]). This step is often necessary since making predictions based on analysis of raw imaging data is often impossible [42, 43]. It can be implemented using the unsupervised learning methods (see [11]) or the recently
developed deep learning methods [44]. Finally, the accuracy of existing models can sometimes be improved by applying boosting methods, e.g. AdaBoost [45], or by initial screening with big data analysis methods.
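A minimal sketch of such a preliminary dimensionality-reduction step, here using plain PCA via SVD on synthetic feature vectors; in a real pipeline, deep learning or boosting would replace or follow this stage, and the dimension counts below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pre-extracted intensity features from 200 images: 50
# informative dimensions buried among 450 near-noise dimensions.
informative = rng.normal(size=(200, 50))
noise = 0.05 * rng.normal(size=(200, 450))
X = np.hstack([informative, noise])

# PCA via SVD of the centered data: keep just enough principal
# components to explain 95% of the variance, instead of feeding all
# 500 raw dimensions to a model (the curse of dimensionality).
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = (S ** 2) / (S ** 2).sum()
k = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1
X_reduced = Xc @ Vt[:k].T

print(f"{X.shape[1]} raw dimensions reduced to {k}")
```

Because almost all of the variance lives in the informative subspace, the retained component count stays close to the true informative dimensionality rather than the raw feature count.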
13.3.2 Computational methods for parametric data

In contrast to teleconsultation systems, telemonitoring and teletreatment systems mostly collect numerical data describing a patient’s health state over a longer time horizon (see Section 13.4), such as blood pressure, weight, pulse, etc. The longer time horizon over which the data is collected calls for model development methods based on time series analysis or probabilistic methods, i.e. reinforcement learning and the Monte Carlo methods. Such models are particularly useful, for example, to predict the dosages of drugs which patients have to take in order to maintain a specific health condition. For instance, reinforcement learning algorithms can be applied to diabetics in order to predict personalized insulin dosages allowing an appropriate glucose level to be maintained (see the case study for the TeleCARE system – Section 13.4.3). Another simulation type that can be achieved using models developed from the data collected by telemonitoring and teletreatment systems is simulation of future changes in a patient’s health state under assumed treatment conditions. Such simulations are based on historical parameters describing the patient’s state, as well as on information about their therapy and medical history. As a consequence, the simulation outcome is personalized for a specific patient, which makes it a good basis for instructing medical students and practitioners: by using such simulations the students can observe how an example therapy would influence a particular patient if it were applied. It should be noted, however, that for short-term predictions which include only historical data from a limited time period, the standard regression and supervised learning methods can achieve better prediction accuracy than the reinforcement learning methods.
Moreover, training a model from scratch can be accelerated by initially classifying a patient into a risk group and starting from an average model calculated for that group. In such a case, however, some screening methods would have to be applied to recognize the risk groups [33].
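To make the reinforcement-learning idea concrete, here is a deliberately tiny tabular Q-learning sketch. The three-band "glucose dynamics" below are invented purely for illustration and have no physiological validity; a real dosing model would be trained on actual telemonitoring data and validated clinically.

```python
import random

random.seed(0)

# Toy environment: states are coarse glucose bands, actions are dose
# levels. The transition rule is fabricated, NOT a physiological model.
STATES = ["low", "normal", "high"]
ACTIONS = [0, 1, 2]  # dose units

def step(state, dose):
    # More insulin pushes glucose down; random +0/+1 mimics meals.
    level = STATES.index(state) - dose + random.choice([0, 1])
    level = max(0, min(2, level))
    nxt = STATES[level]
    reward = 1.0 if nxt == "normal" else -1.0
    return nxt, reward

# Tabular Q-learning: Q[s][a] estimates long-run reward of dose a in band s.
Q = {s: {a: 0.0 for a in ACTIONS} for s in STATES}
alpha, gamma, eps = 0.1, 0.9, 0.1

state = "normal"
for _ in range(20000):
    # Epsilon-greedy action selection, then a one-step Q update.
    a = random.choice(ACTIONS) if random.random() < eps else max(Q[state], key=Q[state].get)
    nxt, r = step(state, a)
    Q[state][a] += alpha * (r + gamma * max(Q[nxt].values()) - Q[state][a])
    state = nxt

best = {s: max(Q[s], key=Q[s].get) for s in STATES}
print(best)  # learned dose policy per glucose band
```

Even in this caricature the learned policy is sensible: no insulin when glucose is low, and a nonzero dose when it is high. The hard part of the real problem is precisely the environment model that is faked here.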
13.4 TeleCARE – telemonitoring framework

13.4.1 Overview

The TeleCARE system is a telemonitoring framework that can be used for monitoring patients’ health parameters. In addition, it can provide feedback based on automatic
analysis of the gathered data, which means that it can also be classified as a teletreatment solution. The main goal of the TeleCARE framework is to deliver personalized medical care to a large group of patients. It is especially useful for continuing patient care in the home environment, in particular for:
– patients who require analysis of health parameters after a period of hospitalization (for instance, associated with prior surgery);
– elderly patients who require continuous monitoring of health parameters;
– people with chronic diseases who require continuous monitoring of a therapeutic process.
Not only patients but also medical personnel can benefit from using TeleCARE. The framework delivers functionality that allows physicians to create detailed personalized health plans. The plans include information such as schedules of medical data monitoring and daily dosing of medications, and can be used in short- as well as long-term treatment processes. In addition, physicians can proactively adjust the treatment process based on the current data gathered from the patient. The health plans complement medical patient profiles with the ability to cooperate with different decision systems. As a result, the framework facilitates delivery of personalized healthcare that incorporates the knowledge and experience of many medical experts (see Fig. 13.2). It therefore allows for the monitoring and supervision of individual therapeutic processes with respect to the individual needs of patients. Notifications about changes in the treatment process are sent directly to the patients, without the need for scheduling medical office appointments. This is particularly important for patients with disabilities or poor exercise tolerance who may have mobility problems. The framework also allows for the identification of patients’ health conditions requiring immediate notification of medical personnel.
With the support of the system, patients can stay in continuous contact with the medical specialist responsible for their treatment; as a result, both safety and patient comfort associated with the therapeutic process are improved. The framework also delivers the functionality required for conducting statistical, cross-cutting surveys related to the progress of medical treatment. The system is currently at the stage of pilot studies carried out in collaboration with medical practitioners from the John Paul II Hospital in Kraków. It is mainly used for treating heart failure patients. Heart failure (HF) is a chronic disease that decreases physical capacity, limiting the patient’s ability to visit the cardiologist’s office. If not managed properly, HF may lead to decompensation requiring urgent hospitalization or causing death. Monitoring of the patient’s weight, heart rate and blood pressure, as well as some invasive parameters from implantable sensors (for example, a pulmonary pressure sensor), allows therapy to be adjusted before decompensation occurs. The TeleCARE system was tested as a tool for such remote care. Thanks to such monitoring it was possible to up-titrate the drug doses to the maximum tolerated and, in several cases, to modify the diuretic therapy and avoid decompensation. The changes
in the patient’s weight, modification of diuretic doses and the resulting weight decrease could be analyzed and used for modeling the behavior of the volume status. However, such an automatic medication advisory model should be deployed for patients with caution until the efficacy of this approach has been proven in randomized trials. The system proved to be a good tool for following up the patients and optimizing their treatment.

Fig. 13.3: TeleCARE modules – data acquisition subsystem (patient, personal medical devices, smartphones, computers), data processing subsystem (medical data, patient profile, knowledge and decision system) and web subsystem (physician, treatment support, treatment & medical care).
As shown in Fig. 13.3, the system consists of three main parts (from left): (i) the data acquisition subsystem, (ii) the data processing subsystem, and (iii) the web subsystem. The first is responsible for collecting data from patients’ medical devices and sending them for further processing; it is implemented as a native application executed on patients’ mobile smartphones. The medical data can be collected automatically (directly from devices, which requires Bluetooth-capable medical equipment) or manually (the patient reads single measurements from a device and submits them using an input form in the mobile application). The second part of the framework is responsible for receiving the measured data from patients and then processing, analyzing and visualizing it in accordance with rules previously defined individually by physicians [46]. The third part – the web subsystem – is implemented as a web interface allowing users to interact with the system through a web browser.
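For illustration, the kind of physician-defined processing rule mentioned above might look like the sketch below; the thresholds, parameter names and rule structure are invented for this example, not taken from TeleCARE.

```python
# Hypothetical per-patient alert rules, resembling thresholds a
# physician could define in the data processing subsystem.
RULES = {
    "weight_kg": {"max_daily_gain": 2.0},   # rapid gain may signal fluid retention
    "systolic_mmHg": {"min": 90, "max": 160},
}

def check_measurement(parameter, value, previous=None):
    """Return a list of alert strings for one incoming measurement."""
    alerts = []
    rule = RULES.get(parameter, {})
    if "min" in rule and value < rule["min"]:
        alerts.append(f"{parameter} below {rule['min']}")
    if "max" in rule and value > rule["max"]:
        alerts.append(f"{parameter} above {rule['max']}")
    if previous is not None and "max_daily_gain" in rule:
        if value - previous > rule["max_daily_gain"]:
            alerts.append(f"{parameter} gained {value - previous:.1f} in one day")
    return alerts

print(check_measurement("systolic_mmHg", 172))           # ['systolic_mmHg above 160']
print(check_measurement("weight_kg", 84.5, previous=82.0))
```

A real system would persist these rules per patient and route any resulting alerts to the medical personnel, but the core check is no more than a parameterized comparison like this.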
13.4.2 Contribution to the model-based proactive medicine concept

The TeleCARE system can not only utilize the knowledge provided by practitioners, but can also support the process of development of medical models by medical data acquisition.
Support for medical data acquisition

Similar to existing data acquisition solutions such as Microsoft HealthVault [47] or Apple HealthKit [48], the TeleCARE data acquisition module can be integrated with many different medical devices. However, new devices and new medical data sources can be added to the TeleCARE framework easily, whereas in the above-mentioned commercial solutions adding a new device may require a full certification process. Based on current research [49], it is presently possible to acquire a huge range of medical data from available body sensor networks (for example ECG, EEG, SpO2, heart and respiration rates, blood pressure, body temperature, glucose level or spatial location) as well as from built-in smartphone sensors (for example the camera, GPS or accelerometers). In addition, the framework supports devices compatible with the Bluetooth Health Device Profile (HDP) [50]. During the pilot studies, data from a blood pressure monitor, precision weight scales and a custom-made pillbox, as well as the smartphone’s built-in GPS sensor, were gathered. All medical data describing a patient’s health state were collected over a long time span (up to a few years) and stored in a relational database.
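A minimal sketch of the kind of relational schema such long-term measurement storage might use, here with Python’s built-in SQLite; the table and column names are illustrative, not TeleCARE’s actual schema.

```python
import sqlite3

# In-memory database standing in for the framework's relational store.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE measurement (
        patient_id  INTEGER NOT NULL,
        parameter   TEXT    NOT NULL,   -- e.g. 'weight_kg'
        value       REAL    NOT NULL,
        measured_at TEXT    NOT NULL    -- ISO 8601 timestamp
    )""")

rows = [
    (1, "weight_kg", 82.0, "2015-03-01T08:00:00"),
    (1, "weight_kg", 84.5, "2015-03-02T08:05:00"),
    (1, "blood_pressure_sys", 145.0, "2015-03-02T08:10:00"),
]
conn.executemany("INSERT INTO measurement VALUES (?, ?, ?, ?)", rows)

# The query a model-building step would run: one patient's full
# time series for one parameter, in chronological order.
series = conn.execute(
    "SELECT measured_at, value FROM measurement "
    "WHERE patient_id = ? AND parameter = ? ORDER BY measured_at",
    (1, "weight_kg")).fetchall()
print(series)
```

Keeping every measurement as a (patient, parameter, value, timestamp) row makes such per-parameter time-series extraction a single indexed query, which is exactly what the modeling methods of Section 13.3 consume.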
Supporting disease management

The basic interaction between patients and physicians in the TeleCARE system was presented in the previous subsection. Computational methods and modeling may extend this concept in terms of patient-physician interaction in remote monitoring systems. Model-based modules installed on mobile devices can improve the treatment process by shortening the time needed to receive feedback on the patient’s actual condition. Such a module may, for example, proactively suggest the medicine dosages that patients should take with respect to the type of activity they have started. The modeling modules on the server side of the system might be used in two ways. The first usage is similar to the previous scenario; however, these modules might be more complex because servers offer more computational resources than mobile devices. Unfortunately, such a usage requires constant Internet access to the server side of the system. The second usage assumes that the physician may predict changes in the patient’s condition using models and can thus proactively personalize the treatment process to avoid a deterioration in the patient’s condition.
Supporting education

Apart from the patient-physician interaction, computational modeling might be used for educational purposes. In this case, models may mimic the behavior of a virtual patient reacting to different modifications of treatment plans. Telemedicine solutions with built-in computational modeling methods can be effectively used for predicting disease progress or therapy effects. The accuracy of
prediction achievable in this case is high, due to the large volumes of data amassed by telemedical systems. The results can be applied in miscellaneous scenarios, e.g. therapeutic, diagnostic, research, educational and others.
13.4.3 Case study

Diseases of affluence have vastly increased in prevalence in recent years. They include obesity, cardiovascular diseases, arterial hypertension, type 2 diabetes, osteoporosis and others. As mentioned in the previous section, the TeleCARE system was initially designed for patients with cardiovascular diseases who are being supervised by physicians. It can be extended to treat other diseases by acquiring relevant health data, e.g. by recording the blood glucose concentration. Enriching the TeleCARE system with an intelligent support system might improve disease management. Diabetes management concentrates on lowering blood glucose to as close to normal as possible without causing hypoglycemia. This can usually be accomplished with diet, exercise, and the use of insulin and other medications. The main problem is setting proper insulin doses, because they are influenced by, among other things, the nutritional content of meals, physical activity and the time when these activities took place. The TeleCARE system can be used for diabetes management by achieving the following objectives:
1. The system should record all the elements that might influence the required insulin dose.
2. The system should remind the patient of measurements, physical activities and other tests.
3. The system should allow for bidirectional communication between physician and patient.
4. The system should be able to analyze the data about the glucose level, insulin doses, diet and physical activity and, based on the recorded information, propose modifications to the insulin dose.
The first and second objectives, related to recording and reminding patients of measurements and activities, are covered by the TeleCARE system thanks to its open architecture and the possibility of adding new medical devices and parameters. The third objective is related to the idea of telecare systems presented in the previous sections.
The last objective can be achieved with a model-based support system. Intensive insulin therapy is an approach used mainly in young individuals that mimics the natural production of insulin by the pancreas. The goal of this treatment method is to keep the glucose level normal in spite of relatively flexible physical activity and meals. It requires several insulin injections daily (or insulin pump infusion) with many adjustments of the doses [51]. The parameters of the models predicting the blood sugar level and the proposed insulin doses could be
provided by the physician remotely. Such a method can extend the mobile part of the system [52]: a mobile application suggests an insulin dose that the patient should take based on the nutritional content of planned meals. The data collected during treatment – insulin doses taken and other parameters such as meals, blood sugar level, activities and blood pressure – can be used for modeling patients’ responsiveness to various types of insulin and their reaction to nutrition, e.g. with machine learning methods [4, 53–56]. The results might be used by the physician for personalizing the treatment process. In the case of diabetes management, these methods might be more accurate than the simple bolus calculation performed on the mobile phone, but as the computations are performed on the server side, an Internet connection is necessary to send results back to the mobile device.
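The "simple bolus calculation performed on the mobile phone" mentioned above is typically a meal-coverage plus correction formula. The sketch below shows that shape; the ratios are per-patient parameters a physician would set, the specific numbers are purely exemplary, and nothing here constitutes dosing advice.

```python
# Illustrative bolus calculation of the kind a mobile module might
# perform locally. carb_ratio and correction_factor are hypothetical
# per-patient parameters, set and adjusted by the physician.
def suggest_bolus(carbs_g, glucose_mgdl, target_mgdl=110,
                  carb_ratio=10.0, correction_factor=40.0):
    """Insulin units = meal coverage + correction toward target."""
    meal_dose = carbs_g / carb_ratio                      # units per grams of carbs
    correction = max(0.0, (glucose_mgdl - target_mgdl) / correction_factor)
    return round(meal_dose + correction, 1)

print(suggest_bolus(carbs_g=60, glucose_mgdl=190))  # 60/10 + 80/40 = 8.0 units
```

The server-side machine-learning models discussed above would, in effect, learn and continually refine these per-patient parameters from the collected treatment data instead of relying on fixed values.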
13.5 TeleDICOM – system for remote interactive consultations

13.5.1 Overview

TeleDICOM [57] is one of the teleconsultation systems for imaging medicine currently used worldwide. Simple solutions offer medical file sending capabilities [58, 59]; more sophisticated ones provide various communication channels (audio, video, chat, telepointer) and image processing tools [60–63]. TeleDICOM is a complete system for remote consultation of medical data stored in the DICOM format as well as in other general-purpose graphical formats. It offers interactive teleconsultations with full synchronization of the workspace and a set of advanced measurement and image manipulation tools based on professional DICOM viewers. Its features make it a useful tool not only for remote consultations of routine cases but also for tough, complicated diseases requiring discussion in a multidisciplinary team of experts. TeleDICOM has been in use for eight years as a regular and emergency hospital service in two separate teleconsultation networks (covering ca. 40 hospitals located in southern Poland), currently reaching over 13 000 diagnosed cases. The TeleDICOM system is also regularly used during remote meetings of partners in the European Project of Rare Cardiovascular Diseases led by the John Paul II Hospital in Krakow. This project aims to network leading experts in European countries to discuss and propose solutions to the problems of patients with very rare cardiovascular conditions. A typical usage scenario in TeleDICOM is as follows (see Fig. 13.4). A medical doctor wanting to consult about an imaging examination with one or more experts selects the appropriate data, which is usually stored in the medical institution’s PACS archive. The physician then optionally provides the data with an appropriate description and initial annotations and invites the other experts to participate in an interactive or noninteractive consultation session.
During the session the data are analyzed, and the specialist tools can be used where necessary. This process typically results in a diagnosis or in a request for another iteration with additional data or in another team. The conclusions can be passed on orally or using an appropriate report (including a DICOM SR document [64]) developed by the session participants. Moreover, the session can be recorded, and all participants’ actions (annotations, measurements performed using specialized TeleDICOM tools) can be documented for the session. The record of the consultation session can be used not only as evidence but also as educational material for other doctors or students. Currently, a second version of the system, TeleDICOM II [2], is being developed, focusing on further improvements to doctors’ cooperation and on providing a flexible and extensible architecture.

Fig. 13.4: A collaboration view of a sample medical network using TeleDICOM (South Poland region) – a medical image repository and collaborative digital space connecting a physician, an expert and students across Kraków, Tarnów, Nowy Targ and Nowy Sącz.
13.5.2 Contribution to the model-based proactive medicine concept

Medical imaging data is a very important source of information about the patient’s condition for doctors, but also a very challenging one to interpret for both humans and computers. Advanced algorithms are employed in CAD systems in order to help doctors in their analysis of medical images. TeleDICOM can not only be enhanced with CAD-like functionality, but can also improve the process of developing medical models by validating them and by supporting the extraction of knowledge.
Supporting diagnostics

TeleDICOM has been developed as a tool for user-to-user interaction. It can be utilized by users in various roles: medical doctors seeking expert advice, medical experts offering such advice or discussing very complicated cases with other experts, and medical students learning how to efficiently use the tools of contemporary telemedicine. In general, expert knowledge in TeleDICOM is the knowledge of the user (the medical doctor), and thanks to the real-time collaboration functionality the system provides efficient access to it. Nevertheless, it can also be beneficial to utilize knowledge gathered through machine learning processes – regarding the diagnostics of some diseases or prediction of their progress. Provided that its utilization for medical analysis is reasonable, (semi)automatic diagnostics functionality can be implemented. Such an approach has been tested for mammography [65] and computed tomography analysis [66]. TeleDICOM II employs Service Oriented Architecture (SOA) paradigms (i.e. its functionality is built from many interoperating services), so extending its functionality is a relatively simple task (compared to TeleDICOM I, which has a rather monolithic architecture). In TeleDICOM II terminology, scheduled consultation sessions are directed to so-called consultation service instances, which are currently implemented as a standalone graphical application operated by a user. The consultation service can, however, be implemented in a different way – by realizing the analysis functionality independently or by acting as a proxy to a third-party system which performs such an analysis. Another option is to add an appropriate software module as an additional plug-in for the user application. Whichever way is chosen, computer-aided diagnosis (CAD) functionality able to interactively support detection of certain features in medical images can be easily achieved and does not require any changes to the TeleDICOM II architecture.
Once image analysis systems are mature enough, the diagnosis process could be fully automated (performed entirely by a computer) by providing an appropriate implementation of the consultation service. However, such a scenario still lies in the rather distant future – despite the fact that CAD systems are present in many medical domains [67], they still cannot be safely employed without human supervision. Given access to a large database of imaging data, an appropriate TeleDICOM II service could find cases similar to the one currently being analyzed, providing the doctors with additional information on how to diagnose and treat the patient. Such techniques are called Content-based Image Retrieval (CBIR) [68].
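At its core, a CBIR lookup of the kind described above is a nearest-neighbor search in a feature space. The sketch below uses synthetic feature vectors and case identifiers; in practice the vectors would be extracted from the DICOM images themselves, and the archive would live in a database rather than memory.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy case archive: each stored case is summarized by a 16-dimensional
# feature vector (synthetic here; extracted from images in reality).
archive_features = rng.normal(size=(500, 16))
case_ids = [f"case-{i:03d}" for i in range(500)]

def most_similar(query, k=3):
    """Return the k archived cases closest to the query in feature space."""
    dists = np.linalg.norm(archive_features - query, axis=1)
    order = np.argsort(dists)[:k]
    return [(case_ids[i], float(dists[i])) for i in order]

# Sanity check: querying with a stored case's own features must rank
# that case first, at distance zero.
matches = most_similar(archive_features[42])
print(matches[0][0])  # case-042
```

The retrieval quality of a real CBIR service depends almost entirely on the feature extraction step, which is where the unsupervised and deep learning methods of Section 13.3 come in.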
Supporting model development and validation

In the previous sections the position of medical models in proactive medicine was discussed and techniques of knowledge extraction and processing leading to model construction were presented. An important issue is the reliability of model operation. In the area of imaging medicine, the selection of representative data showing the progress of the analyzed diseases is crucial. Equally important are the ability to properly extract significant features from the data [69] and the subsequent verification of the model’s correctness [70]. The unsupervised learning methods discussed in Section 13.3 can be applied in this area. Model development and verification require specialists in the given domain. Those specialists may be in distant locations, which makes arranging appointments with them problematic. TeleDICOM, as a sophisticated collaboration framework, can be particularly useful in this scenario by providing:
– scheduling of teleconsultations, which facilitates making virtual appointments;
– real-time communication essential for discussion;
– fully synchronized interactive tools for comparison of various imaging data.
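One standard way to carry out the model verification step mentioned above is k-fold cross-validation. The sketch below uses synthetic data and a trivial nearest-mean classifier as a stand-in for a real imaging model; the point is the fold structure, not the model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic two-class dataset standing in for extracted image features.
X = np.vstack([rng.normal(0, 1, (60, 4)), rng.normal(2, 1, (60, 4))])
y = np.array([0] * 60 + [1] * 60)
idx = rng.permutation(len(y))
X, y = X[idx], y[idx]

def nearest_mean_predict(X_train, y_train, X_test):
    # Placeholder "model": assign each test point to the nearest class mean.
    means = [X_train[y_train == c].mean(axis=0) for c in (0, 1)]
    d = np.stack([np.linalg.norm(X_test - m, axis=1) for m in means])
    return d.argmin(axis=0)

# 5-fold cross-validation: each fold serves once as the held-out test set.
k = 5
folds = np.array_split(np.arange(len(y)), k)
scores = []
for i in range(k):
    test = folds[i]
    train = np.concatenate([folds[j] for j in range(k) if j != i])
    pred = nearest_mean_predict(X[train], y[train], X[test])
    scores.append((pred == y[test]).mean())

print(f"mean CV accuracy: {np.mean(scores):.2f}")
```

Reporting the held-out accuracy across folds, rather than the accuracy on the training data, is what makes the verification meaningful; the same fold structure applies unchanged to any real imaging model.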
Supporting knowledge extraction

As has already been mentioned, the results of a teleconsultation session in TeleDICOM can take various forms. If annotations and measurements are precisely performed, such a diagnosis can be extremely useful not only for the diagnosed patient but also as a valuable source of knowledge and good practices in imaging data analysis. The importance of a proper database has been pointed out in papers on building CAD systems [71, 72]. A special consultation session can even be organized which aims at selecting certain features in the imaging data. Such research has been conducted with TeleDICOM with the goal of determining the shape of the heart’s left ventricle in echocardiographic images, using a large set of reference data which was subsequently used to develop an FPGA circuit [73].
13.6 Conclusions

As a new and complementary paradigm of healthcare, proactive medicine opens up exciting system design opportunities. Properly harnessed communication and computing technologies, supported by modeling techniques, pave the way to broad utilization of rich medical datasets. Such an approach could have a great impact on transforming medical services and motivating lifestyle changes that prevent diseases. It also stimulates changes in medical practices to care for those who are already sick. To take full advantage of these new opportunities, researchers must carefully consider their methodological, epistemological and technical requirements. It should also be noted that the described concepts open new discussions on the ethical and legal aspects of semiautomated, and more importantly automated, medical care.
References
[1] Mukhopadhyay S and Postolache OA. Pervasive and Mobile Sensing and Computing for Healthcare: Technological and Social Issues. Berlin: Springer Science & Business Media, 2012.
[2] Czekierda L, Masternak T and Zielinski K. Evolutionary approach to development of collaborative teleconsultation system for imaging medicine. IEEE Transactions on Information Technology in Biomedicine. 2012;16(4):550–560.
[3] Brown DE. Introduction to data mining for medical informatics. Clin Lab Med. 2008 Mar;28(1):9–35. doi: 10.1016/j.cll.2007.10.008.
[4] Lehmann ED and Deutsch T. A physiological model of glucose-insulin interaction in type 1 diabetes mellitus. Journal of Biomedical Engineering. 1992;14(3):235–242.
[5] Edelman EJ, Guinney J, Chi JT, Febbo PG and Mukherjee S. Modeling cancer progression via pathway dependencies. PLoS Comput Biol. 2008;4(2):e28. doi: 10.1371/journal.pcbi.0040028.
[6] Sun H, Avants BB, Frangi AF, Sukno F, Geel JC and Yushkevich PA. Cardiac medial modeling and time-course heart wall thickness analysis. Med Image Comput Comput Assist Interv. 2008;11(Pt 2):766–73.
[7] Tziampazis E and Sambanis A. Tissue engineering of a bioartificial pancreas: modeling the cell environment and device function. Biotechnol Prog. 1995 Mar–Apr;11(2):115–26.
[8] Science Buddies Staff. Modeling the human cardiovascular system: the factors that affect blood flow rate. Science Buddies. 2014, 18 Dec. Accessed 2015, 10 Jan. http://www.sciencebuddies.org/science-fair-projects/project_ideas/HumBio_p032.shtml.
[9] IBM Research. What is cognitive computing? http://www.research.ibm.com/cognitivecomputing/index.shtml#fbid=BrUXYNtK6-r.
[10] Cheng B, Wee CY, Liu M, Zhang D and Shen D. Brain disease classification and progression using machine learning techniques. In: Suzuki K, ed. Computational Intelligence in Biomedical Imaging. New York: Springer, 2014, pp. 3–32. doi: 10.1007/978-1-4614-7245-2_1.
[11] McEvoy LK, Fennema-Notestine C, Roddey JC, Hagler DJ Jr, Holland D, Karow DS, Pung CJ, Brewer JB and Dale AM. Alzheimer disease: quantitative structural neuroimaging for detection and prediction of clinical and structural changes in mild cognitive impairment. Radiology. 2009;251:195–205.
[12] Hudson FG, Amaral LSdB, Duarte SFP, et al. Predicting increased blood pressure using machine learning. Journal of Obesity. 2014, Article ID 637635, 12 pages. doi: 10.1155/2014/637635.
[13] Zhang G. An improved hypertension prediction model based on RS and SVM in the Three Gorges Area. 2008 International Conference on Computer Science and Software Engineering. Vol. 1, 12–14 Dec. 2008, pp. 810–814. doi: 10.1109/CSSE.2008.664.
[14] Lee KL, McNeer JF, Starmer CF, Harris PJ and Rosati RA. Clinical judgment and statistics. Lessons from a simulated randomized trial in coronary artery disease. Circulation. 1980 Mar;61(3):508–15.
[15] Exarchos K, Exarchos T, Bourantas C, Papafaklis M, Naka K, Michalis L, Parodi O and Fotiadis D. Prediction of coronary atherosclerosis progression using dynamic Bayesian networks. Conf Proc IEEE Eng Med Biol Soc. 2013:3889–92.
[16] Van Haaren J, Davis J, Lappenschaar M and Hommersom A. Exploring disease interactions using Markov networks. In: Expanding the Boundaries of Health Informatics Using Artificial Intelligence: Papers from the AAAI 2013 Workshop.
[17] Farran B, Channanath AM, Behbehani K and Thanaraj TA. Predictive models to assess risk of type 2 diabetes, hypertension and comorbidity: machine-learning algorithms and validation using national health data from Kuwait—a cohort study. BMJ Open. 2013;3:e002457. doi: 10.1136/bmjopen-2012-002457.
[18] Paltiel AD, Scharfstein JA, Seage GR 3rd, Losina E, Goldie SJ, Weinstein MC, Craven DE and Freedberg KA. A Monte Carlo simulation of advanced HIV disease: application to prevention of CMV infection. Med Decis Making. 1998 Apr–Jun;18(2 Suppl):93–105.
[19] Wang X, Sontag D and Wang F. Unsupervised learning of disease progression models. In: Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’14). New York: ACM, 2014, pp. 85–94. doi: 10.1145/2623330.2623754.
[20] De Winter W, DeJongh J, Post T, Ploeger B, Urquhart R, Moules I, Eckland D and Danhof M. A mechanism-based disease progression model for comparison of long-term effects of pioglitazone, metformin and gliclazide on disease processes underlying type 2 diabetes mellitus. Journal of Pharmacokinetics and Pharmacodynamics. 2006;33(3):313–343.
[21] Vadher B, Patterson DLH and Leaning M. Prediction of the international normalized ratio and maintenance dose during the initiation of warfarin therapy. Br J Clin Pharmacol. 1999 Jul;48(1):63–70.
[22] Kronik N, Kogan Y, Vainstein V and Agur Z. Improving alloreactive CTL immunotherapy for malignant gliomas using a simulation model of their interactive dynamics. Cancer Immunology, Immunotherapy. 2008 Mar;57(3):425–439.
[23] Hassani A and Naghibi MB. Reinforcement learning based control of tumor growth with chemotherapy. 2010 International Conference on System Science and Engineering (ICSSE), 1–3 July 2010, pp. 185–189. doi: 10.1109/ICSSE.2010.5551776.
[24] Mani S, Chen Y, Li X, Arlinghaus L, Chakravarthy AB, Abramson V, Bhave SR, Levy MA, Xu H and Yankeelov TE. Machine learning for predicting the response of breast cancer to neoadjuvant chemotherapy. J Am Med Inform Assoc.
2013 Jul–Aug;20(4):688–95. doi: 10.1136/amiajnl2012-001332. Epub 2013 Apr 24. Corrigan B (Pfizer Global Research). A Comprehensive Clinical Trial Simulation Tool for Alzheimer’s Disease: Lessons for Model Collaboration, On behalf of the CAMD M&S Workgroup, September 26, 2013, Washington DC. James G, Witten D, Hastie T and Tibshirani R. An Introduction to Statistical Learning. New York: Springer, 2013. Cristianini N and Shawe-Taylor J. An Introduction to Support Vector Machines and other Kernelbased Learning Methods. Cambridge: Cambridge University Press, 2000. ISBN 0-521-78019-5 Werbos PJ. Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. PhD thesis, Harvard University, 1975. Sutton RS and Barto AG. Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press, 1998. Perna G. Research: Mobile App Can Help Detect Early Signs of Glaucoma [Internet]. 2015 [cited 8 Jan 2015]. Available from: http://www.healthcare-informatics.com/news-item/researchmobile-app-can-help-detect-early-signs-glaucoma. Chen TC. Spectral domain optical coherence tomography in glaucoma: qualitative and quantitative analysis of the optic nerve head and retinal nerve fiber layer (An AOS Thesis). Transactions of the American Ophthalmological Society 2009;107:254–281. Ridley EL. Big data in radiology will drive personalized patient care. Online: http://www.auntminnie.com/ (accessed 2015-01-08). Groves P, Kayyali B, Knott D and Van Kuiken S. The ‘big data’ revolution in healthcare. Center for US Health System Reform Business Technology Office. January 2013.
13 From telemedicine to modeling and proactive medicine
| 293
[34] Pentland A, Reid TG and Heibeck T. Revolutionizing Medicine and Public Health, Raport of the Big Data and Health Working Group 2013. [35] Taylor RH, Funda J, Joskowicz L, Kalvin AD, Gomory SH, Gueziec AP and Brown LMG. An overview of computer-integrated surgery at the IBM Thomas J. Watson Research Center, IBM Journal of Research and Development. 1996 Mar;40(2):163–183. doi: 10.1147/rd.402.0163, URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5389453&isnumber= 5389444. [36] Gantenbein RE. Watson, come here! The role of intelligent systems in health care. World Automation Congress (WAC), 2014. 3–7 Aug. 2014, pp. 165–168. doi: 10.1109/ WAC.2014.6935748, URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber= 6935748&isnumber=6935633. [37] Steadman I. IBM’s Watson is better at diagnosing cancer than human doctors, Technology, 11 February 13, Online: http://www.wired.co.uk/news/archive/2013-02/11/ibm-watsonmedical-doctor (accessed: 2015-01-09). [38] Graham RNJ, Perriss RW and Scarsbrook AF. DICOM demystified: A review of digital file formats and their use in radiological practice. Clinical Radiology. 2005 Jun;60:1133–1140. [39] Doi K. Computer-aided diagnosis in medical imaging: historical review, current status and future potential. Computerized Medical Imaging and Graphics. 2007;31(4–5):198–211. [40] Bellman RE. Dynamic Programming. Newburyport: Courier Dover Publications, 2003. [41] Radovanović M, Nanopoulos A and Ivanović M. Hubs in space: Popular nearest neighbors in high-dimensional data. Journal of Machine Learning Research. 2010;11:2487–2531. [42] Hnin WK. Data mining based fragmentation and prediction of medical data, Computer Research and Development (ICCRD), 2011 3rd International Conference, vol. 2, 11–13 March 2011, pp. 480–485. doi: 10.1109/ICCRD.2011.5764179, URL: http://ieeexplore.ieee.org/stamp/ stamp.jsp?tp=&arnumber=5764179&isnumber=5764069. [43] Paul R and Hoque ASML. 
Clustering medical data to predict the likelihood of diseases, Digital Information Management (ICDIM), 2010 Fifth International Conference, 5–8 July 2010, pp. 44–49. [44] Bengio Y. Learning Deep Architectures for AI. Foundations and Trends in Machine Learning 2 (1). 2010 Fifth International Conference on Digital Information Management (ICDIM). doi: 10.1109/ICDIM.2010.5664638 URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp= &arnumber=5664638&isnumber=5662242. [45] Freund Y and Schapire RE. A Short Introduction to Boosting. In: Dean T, ed. Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence. San Francisco: Morgan Kaufmann Publishers; 1999, pp. 1401–1406. [46] Skałkowski K and Zieliński K. Applying formalized rules for treatment procedures to data delivered by personal medical devices. Journal of Biomedical Informatics. 2013;46( 3):530–540. [47] Microsoft HealthVault, https://www.healthvault.com/. [48] Apple HealthKit, https://developer.apple.com/healthkit/. [49] Chiarini G, Ray P, Akter S, Masella C and Ganz A. mHealth technologies for chronic diseases and elders: A systematic review. IEEE Journal on Selected Areas in Communications. 2013; 31(9):6–18. [50] Bluetooth Health Device Profile, https://developer.bluetooth.org/TechnologyOverview/Pages/ HDP.aspx. [51] Pańkowska E and Błazik M. Bolus calculator with nutrition database software, a new concept of prandial insulin programming for pump users. Journal of Diabetes Science and Technology. 2010 May 1;4:571–576. [52] Kardyś B. Mobile phones in the management of affluence diseases, AGH Master Thesis, 2013.
294 | Łukasz Czekierda et al.
[53] Georga EI, Protopappas VC and Polyzos D. Prediction of glucose concentration in type 1 diabetic patients using support vector regression, Information Technology and Applications in Biomedicine (ITAB), 2010 10th IEEE International Conference. Nov. 2010, pp. 1–4. [54] Tarin C, Teufel E, Pico J, Bondia J and Pfleiderer HJ. Comprehensive pharmacokinetic model of insulin glargine and other insulin formulations. Biomedical Engineering, IEEE Transactions. 2005 Dec;52(12):1994–2005. [55] Roy A and Parker RS. Dynamic modeling of exercise effects on plasma glucose and insulin levels. Journal of Diabetes Science and Technology. 2007 May;1(3):338–347. [56] Kowalski P. Machine learning for the management of affluence diseases, AGH Master Thesis, 2013. [57] Gackowski A et al. Development, implementation, and multicenter clinical validation of the TeleDICOM—advanced, interactive teleconsultation system. Journal of Digital Imaging. 2011;24(3):541–551. [58] Gennari JH et al. Asynchronous communication among clinical researchers: A study for systems design. International Journal of Medical Informatics. 2005;74(10):797–807. [59] Lasierra N et al. Lessons learned after a three-year store and forward teledermatology experience using internet: Strengths and limitations. International Journal of Medical Informatics. 2012;81(5):332–343. [60] Lee JS et al. A real time collaboration system for teleradiology consultation. Int J MedInform. 2003;72(1–3):73–9. [61] Chang T, Lee J and Wu S. The telemedicine and teleconsultation system application in clinical medicine. Conf Proc IEEE Eng Med Biol Soc. 2004;5:3392–5. [62] Okuyama F et al. Telemedicine imaging collaboration system with virtual common information space. In: Computer and Information Technology, 2006. CIT’06. The Sixth IEEE International Conference. IEEE. 2006. [63] Hsu YC et al. Design and implementation of teleconsultation system for instant treatment. In: Bioinformatics and Biomedical Engineering, 2008. ICBBE 2008. 
The 2nd International Conference. IEEE. 2008; pp. 1355–1358. [64] Noumeir R. Benefits of the DICOM structured report. Journal of Digital Imaging. 2006;19(4): 295–306. [65] Sovio U, Aitken Z, Humphreys K, Czene K et al. Comparison of fully and semi-automated areabased methods for measuring mammographic density and predicting breast cancer risk. British Journal of Cancer. 2014 Apr;110:1908–1916. [66] Folio LR1, Sandouk A, Huang J, Solomon JM, Apolo AB. Consistency and efficiency of CT analysis of metastatic disease: semiautomated lesion management application within a PACS. AJR Am J Roentgenol. 2013 Sep;201(3):618–25. [67] Suzuki K. Machine Learning in Computer-Aided Diagnosis: Medical Imaging Intelligence. Hershey, PA: Medical Information Science Reference, 2012. [68] Akgül CB et al. Content-based image retrieval in radiology: current status and future directions. Journal of Digital Imaging. 2011;24(2):208–222. [69] Doi K. Current status and future potential of computer-aided diagnosis in medical imaging. The British Journal of Radiology. 2005;78(1):3–19. [70] Faust O, Acharya UR and Tamura T. Formal design methods for reliable computer-aided diagnosis: a review. Biomedical Engineering, IEEE Reviews. 2012;5:15–28. [71] Yuan K et al. Brain CT image database building for computer-aided diagnosis using contentbased image retrieval. Information Processing & Management. 2011;47(2):176–185. [72] Horikoshi H et al. Computer-aided diagnosis system for bone scintigrams from Japanese patients: importance of training database. Annals of Nuclear Medicine. 2012;26(8):622–626. [73] Świerczak ZB, Kasperek J and Rajda P. Skeletonization hardware implementation in FPGA device. PAK. 2007;53(7):75–77.
14 Serious games in medicine

Video games are of growing interest to many health professionals. In this chapter, we review the most interesting areas of research and scientific activity. Serious games are still at an early stage of development: progress is noticeable, but there is still a need for better cooperation between healthcare specialists on the one hand and game designers and developers on the other. In addition, we focus on the role of graphic design tools applied to the development of serious games. Finally, we present several serious games for health designed for children and older adults.
Paweł Węgrzyn
14.1 Serious games for health – Video games and health issues

14.1.1 Introduction

For current generations of children and youth, video games are a popular and ubiquitous form of entertainment. Young people are exposed to video games throughout their entire lives. A game platform may be a computer, a TV console, a portable console, a tablet, a cell phone or any other smart device with a video display and processor. For several years there has been a wide-ranging discussion about the benefits and detrimental effects of playing video games.
The concept of serious games is a popular topic nowadays. A Google search on "serious games" returns about 114 000 000 hits [2015-02-21]. This oxymoron is usually associated with the popular book by Clark Abt [1]. Over forty years ago, the author explored how interactive games, simulations and other game-like activities could facilitate education for various life roles in a highly technological and complex society. Although Abt referred to games in general, today serious games are mainly understood as computer games designed for educational purposes. Serious games are primarily educational games, but the term is usually interpreted much more broadly. Serious games also include applications such as computer games that support medical therapy and rehabilitation, encourage exploration of the world, promote a healthy lifestyle or open possibilities for new artistic creations. The most useful and common definition is that a serious game is a game which is about something more than just fun. Thus, we can define a serious game more precisely as "any form of interactive computer-based game software for one or multiple players to be used on any platform and that has been developed with the intention to be more than entertainment" [2].
In particular, following this definition we should take into account all products developed primarily for interactive entertainment that can also be used for interactive education, interactive training or interactive simulation [3]. In fact, there are so many different definitions of serious games in the scientific literature that there is considerable confusion about what the notion involves.
An important area of serious game applications is the use of video game technologies to improve healthcare [4]. Serious games for health thus lie at the intersection of serious games and healthcare. As a precise definition of health we may adopt the one given by the World Health Organization: "a state of complete physical, mental, and social well-being and not merely the absence of disease or infirmity" [5]. Video games for health increasingly incorporate various input sensors such as 3D cameras, 3D glasses, accelerometers, balance boards, gloves or medical diagnostic devices. The part of the machine that handles human–machine interaction is called the user interface. A Natural User Interface (NUI) is a user interface that is effectively invisible, so that the user operates the device through intuitive actions related to natural, everyday human behavior. The user interface is also responsible for communication between a patient and a healthcare professional through a machine. User interface design may therefore have considerable influence on the efficacy of serious games for health. Recently, serious games for health have been the subject of numerous scientific research projects and conferences. Some studies suggest that serious games may be useful in healthcare and in health professional education and training. However, there are few methodological studies on the evaluation and assessment of this type of application. This situation is expected to improve in the near future. In this review, we survey scientific studies on video games in the context of health issues. Our review therefore includes papers on serious games for health as well as papers on health problems related to the use of video games. First and foremost, we include papers that are review reports and meta-analyses.
14.1.2 Previous surveys

In this section, we discuss previous review articles on serious games for health. We restrict our attention to publications from the past few years. Let us start with the review by Kato [6], where older reviews are also mentioned. The author surveys positive examples of using video games in healthcare. The notion of serious games is used there, but with a narrower definition in which serious games are understood as video games that have been intentionally designed for training and education [7]. Various examples of video games for health education and health training are then reviewed. Nevertheless, examples are also given of commercial entertainment games used for certain goals in healthcare, such as health improvement or surgical training. The main purpose of the review was to demonstrate that playing video games may be a positive and effective intervention in healthcare. Serious games in women's healthcare are surveyed by de Wit-Zuurendonk and Oei [8]. They adopt another definition of serious games, from Stokes [9]: games that are designed to entertain players as they educate, train, or change behavior. The authors
searched the scientific literature and selected 30 relevant papers. The studies reported in the selected papers indicate that serious gaming is a stimulating learning method and that students are enthusiastic about its use. The authors also note that there is a lack of studies proving the clinical effectiveness of serious gaming. Many papers examine the well-recognized potential of games for learning (game-based learning). Learning is usually defined as the acquisition, modification or reinforcement of knowledge, skills or attitudes (KSA components). Connolly et al. [10] review the literature on game-based learning. In particular, the authors identified 129 papers reporting empirical evidence about the impacts and outcomes of computer games and serious games with respect to learning and student engagement. They developed a multidimensional framework for categorizing such games: any game can be placed with respect to five dimensions: digital or nondigital game, primary purpose of the game, game genre, subject discipline and delivery platform. They suggest eight relevant subject disciplines: health, society, mathematics, language, engineering, general knowledge, geography and science. Of the 129 empirical papers included, 21 concern health (12 of them classified by the reviewers as higher-quality papers). According to the conclusions of the review, there is much research on game-based learning, various positive and negative impacts and outcomes associated with playing digital games have been identified, and there is quite a body of empirical evidence concerning the effectiveness of game-based learning. However, this evidence is not rigorous: randomized controlled trial (RCT) studies of the effectiveness of game-based learning are lacking, and the nature of engagement in games remains poorly understood. A survey of 108 serious games for health is given by Wattanasoontorn et al. [11].
Their taxonomy of serious games for health is based on three dimensions: serious game characteristics, player characteristics and health problem characteristics. The serious game dimension is described by game purpose (focus on entertainment, focus on health or focus on acquiring health and medical skills) and by game functionalities (twelve technological characteristics). A player can be categorized as patient/nonpatient or professional/nonprofessional. The health dimension of serious games for patients refers to the stage of the disease that is the subject of the game (susceptibility, presymptomatic, clinical disease, recovery/disability) together with the relevant purpose of the serious game (health monitoring, health detection, treatment or therapy, rehabilitation, education). The health dimension of serious games for nonpatients refers to three categories: health and wellness, training and simulation for professionals and training and simulation for nonprofessionals. Finally, the authors classify and briefly describe the 108 surveyed games using fifteen characteristics (author, disease, purpose, application area, interactive tool, interface, players, genre, adaptability, progress monitoring, feedback, portability, engine, platform, connectivity). Primack et al. [12] present the results of their investigation of the scientific literature on studies about using video games to improve health outcomes. The authors did not use the concept of serious games. They canvass computer applications useful
for health outcomes, and they focus instead on the question of whether a computer application is actually a video game. They follow the American Heritage Dictionary, which defines a video game as an electronic or computerized game played by manipulating images on a video display or television screen. They assume that, in order to be a game, an interactive or competitive application should have a system of reward and should include fun elements. The authors also use inclusion criteria for selecting admissible scientific studies: an eligible study must be an RCT (observational studies are excluded), use a video game as the intervention and test its effect on a health-promoting, clinically relevant health outcome. Of 1452 surveyed relevant articles, only 38 (2.6 %) met these inclusion criteria. The excluded articles usually lacked a health-promoting, clinically relevant health outcome (46 %), did not involve computer applications that are video games (38 %) or were not RCTs (14 %). Among the 38 included studies, the authors identified 195 examined health outcomes. The global conclusion is that purposeful video game-based interventions improved 69 % of psychological therapy outcomes, 59 % of physical therapy outcomes, 50 % of physical activity outcomes, 46 % of clinician skills outcomes, 42 % of health education outcomes, 42 % of pain distraction outcomes, and 37 % of disease self-management outcomes. This can be considered evidence of the positive impact of video game-based interventions, but the studies are generally acknowledged to be of poor quality and to have relatively brief follow-up periods. The total number of study participants assessed in all 38 studies was only 2662. Graafland, Schraagen and Schijven [13] also conducted a review of serious games for health based on an examination of the scientific literature.
They define serious games as digital games for computers, game consoles, smartphones or other electronic devices, directed at or associated with improving the competence of professionals in medicine. This is the narrowest definition, as it excludes games for patients and nonprofessionals. The authors identified 25 articles describing a total of 30 serious games. The games fall into two categories: games developed for specific medical education purposes (17) and commercial games useful for developing skills relevant to medical personnel (13). Only six serious games were identified that had undergone a process of validation (three games for team training in critical care and triage and three commercial games useful for training laparoscopic psychomotor skills). None of the serious games had completed a full validation process for their intended purpose of use. In the recent review by Horne-Moyer et al. [14], the authors collected known electronic games that have been used in interventions for a variety of health goals. They reviewed electronic games for health-related problems designed for therapeutic purposes, electronic games developed primarily for psychotherapy, and commercial entertainment games used for psychotherapy. The general conclusion of the survey is that therapies with electronic games are roughly equivalent, but not superior, in efficacy to traditional treatments. For some patients, therapies with electronic games may be more enjoyable or acceptable. The lack of suitable RCT studies has again been acknowledged.
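The selection figures reported by Primack et al. [12] can be reproduced with simple arithmetic. The short check below is ours, not part of the original study:

```python
# Quick arithmetic check of the selection figures reported by Primack et al. [12]:
# 38 of 1452 screened articles met the inclusion criteria.
surveyed = 1452   # relevant articles screened
included = 38     # articles meeting all inclusion criteria (RCT, video game, health outcome)

inclusion_rate = round(included / surveyed * 100, 1)
print(f"Inclusion rate: {inclusion_rate} %")   # Inclusion rate: 2.6 %, matching the review
```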
Ricciardi and De Paolis [15] review serious games for the education and training of healthcare professionals. The authors confirm that serious gaming is useful for healthcare professionals, but serious games for health are generally not widespread. There are only a few health-related fields where serious gaming can be found (surgery, odontology, nursing, cardiology, first aid, dietetics and diabetes, psychology). The main advantage of serious games is that they are cheaper than traditional training methods. When using serious games for health outcomes, we must properly assess whether serious gaming with a given game is safe and effective. The issue of assessing serious games for health is discussed by Graafland, Dankbaar et al. [16]. They define serious games as digital applications instigating a specific behavioral change in their users, in the form of skills, knowledge, or attitudes useful in reality [4]. This definition excludes, in particular, games for health with purely informational purposes. Their assessment framework provides 62 items in five main themes: game description (metadata, development, sponsoring/advertising, potential conflicts of interest), rationale (purpose, medical device, user group, setting), functionality (purposes/didactic features, content management, potentially undesirable effects), validity (design process, user testing, stability, validity/effectiveness) and data protection (data processing, data security, privacy). This assessment framework may be useful for developers of serious games for health. However, it has some drawbacks. For instance, it does not cover visual perception mechanisms, user experience, immersion and flow. Perhaps more importantly, the framework does not touch on pure game elements (e.g. fun, entertainment, challenge). These factors should not be neglected: their influence on the effectiveness of game-based interventions is well recognized and reasonably well understood. The entertainment factor may improve or impair a player's performance.
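The multidimensional taxonomies proposed in these surveys can be expressed as simple data structures. The sketch below encodes the three dimensions of Wattanasoontorn et al. [11] for a patient-oriented game; the field names are paraphrased from the review and the example entry is hypothetical:

```python
from dataclasses import dataclass

# Illustrative sketch of the three-dimensional taxonomy of Wattanasoontorn et al. [11].
# Field names and allowed values are paraphrased from the review; the example game
# entry below is hypothetical.

@dataclass
class SeriousGameForHealth:
    purpose: str        # "entertainment", "health" or "health/medical skills"
    player: str         # "patient", "nonpatient", "professional", "nonprofessional"
    disease_stage: str  # for patients: "susceptibility", "presymptomatic",
                        # "clinical disease" or "recovery/disability"
    health_purpose: str # "monitoring", "detection", "treatment/therapy",
                        # "rehabilitation" or "education"

example = SeriousGameForHealth(
    purpose="health",
    player="patient",
    disease_stage="clinical disease",
    health_purpose="treatment/therapy",
)
print(example.health_purpose)   # treatment/therapy
```

A catalog of games classified this way can then be filtered by any dimension, much as the review tabulates its 108 surveyed games by fifteen characteristics.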
14.1.3 Evidence review

We now review the literature on video games with respect to health issues. Most of the selected topics can be recognized as serious gaming for health, but our review is not restricted to any specific definition of serious games.
14.1.3.1 Health education for nonprofessionals

Many video games designed to educate nonprofessionals about health topics have been developed. These games are not intended to support any healthcare interventions; their only purpose is to inform. Most of them are simple casual games one can play online free of charge. However, there are also games that can be used in academic education, for instance an educational game for learning human immunology [17]. A review of the scientific literature on using video games for health education (and also for physical education) is given in [18]. This review presents the literature on various projects, empirical evidence on the educational effectiveness of video games and future research perspectives concerning the subject. The main conclusion of the review is that video games as educational tools may effectively improve the KSA components of young students.
14.1.3.2 Physical well-being

Video games can support some actions taken to improve physical well-being. Taking into account the evidence, we select the most relevant areas of physical health issues: therapy, pain distraction, rehabilitation, disease self-management and life support.
14.1.3.2.1 Therapy

Video gaming (or serious video gaming) may support medical treatment and therapy. The video game Re-Mission was developed to actively involve adolescents and young people with cancer in their own treatment [19]. Playing the serious game was a psychoeducational intervention during the therapy process, intended to stimulate patients' interest in understanding their illness. The objective of the game was to change players' illness representations in order to promote adherence to self-care during treatment and to teach self-care skills and related cancer knowledge. A randomized trial conducted at 34 medical centers in the United States, Canada and Australia [20] showed that the video game intervention significantly improved treatment adherence and indicators of cancer-related self-efficacy and knowledge in adolescents and young adults undergoing cancer therapy. A psychotherapeutic game for children with brain tumors is presented in [21]; the results showed a significant improvement in the behavior of young patients. The treatment of amblyopia (visual impairment without apparent organic pathology) can be supported by a complex video game that trains contrast sensitivity [22]. The authors tested the game on 20 amblyopic subjects (10 children and 10 adults), and contrast thresholds improved more in adults than in children. Improvement in patients with multiple sclerosis was observed after training with a video game balance board on the Nintendo Wii [23]. Clinical tests indicated that the microstructure of the superior cerebellar peduncles was modified, suggesting that high-intensity serious game playing could induce favorable microstructural changes in the brains of patients with multiple sclerosis.
14.1.3.2.2 Pain management

Video games can also be used as a technique for pain management. A video game-based system was used to distract adolescent patients from high levels of pain during wound care [24]. Immersion in a virtual world helps to draw attention away from the real world, so patients can better tolerate painful procedures. Serious gaming for pain and discomfort distraction at the dentist's is presented in [25].
14.1.3.2.3 Prevention, rehabilitation and disease self-management

Effective rehabilitation must be early, intensive and repetitive. Effective prevention or proper self-management must be regular. The real challenge is to maintain patient motivation and engagement. Video games can be effective in building motivation, sustaining adherence to training or treatment regimes, guiding exercises for motor recovery, and managing the impact of disability or disease on functioning, emotions and interpersonal relationships. The games often incorporate novel input sensors and natural user interfaces. A survey and classification of serious games for rehabilitation are given in [26]. The authors put forward their own definition of serious games: games that engage a user and contribute to the achievement of a defined purpose other than pure entertainment (whether or not the user is consciously aware of it). Different rehabilitation systems based on serious games are identified and discussed. Serious games in prevention and rehabilitation are examined in [27]. Serious games for the enhancement of upper limb stroke rehabilitation are discussed in [28]: several example games are presented and some general game design principles are postulated. The authors report positive results for such games (playability, usability) in both healthy users and users with varying degrees of impairment caused by stroke. A single-blinded clinical trial of serious games in two parallel groups of stroke patients is presented in [29]; the feasibility, safety and efficacy of serious game-based therapy compared favorably with standard recreational therapy. A virtual reality system for stroke rehabilitation is presented in [30]: a significant improvement in dynamic balance in chronic stroke patients was demonstrated in an RCT. Another virtual reality rehabilitation system (Gesture Therapy) is presented in [31].
The authors postulate some design principles for rehabilitation games: promote repetition, task-oriented training, appropriate feedback and a motivating environment. Four small serious games with input sensors for physical (neuromuscular) rehabilitation are described in [32]. To meet the different requirements of various therapies, the authors put forward a specialized configurable architecture that enables therapists to define game controls depending on individual patient needs. A system based on a serious game for balance rehabilitation of adults with cerebral palsy is presented in [33]: a 24-week physiotherapy intervention program was conducted with nine adults with cerebral palsy, and the results were promising. A review of various Kinect applications in elderly care (fall detection and fall risk reduction) and stroke rehabilitation is given in [34]. The current state of Kinect-based rehabilitation methods and their limitations are discussed. The main conclusion is that Kinect already shows notable potential in making therapy and alert systems financially accessible and medically beneficial to a large population of elderly and stroke patients. The feasibility, safety and outcomes of serious gaming with Kinect for patients with Parkinson's disease are examined in [35].
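A therapist-configurable control architecture of the kind described in [32], in which a therapist maps sensor inputs to game actions per patient, might be sketched as follows. All sensor names, thresholds and game actions here are invented for illustration and do not describe the actual system:

```python
# Hypothetical sketch of a therapist-configurable control mapping for a
# rehabilitation game, in the spirit of the architecture described in [32].
# Sensor names, thresholds and actions are invented for illustration.

def make_control_mapping(config):
    """Build a function that translates raw sensor readings into game actions
    according to a per-patient configuration supplied by the therapist."""
    def map_input(sensor, value):
        rule = config.get(sensor)
        if rule is None:
            return None               # sensor not used for this patient
        if value >= rule["threshold"]:
            return rule["action"]     # patient reached the target movement
        return None
    return map_input

# Example per-patient configuration: a patient with limited arm elevation
# triggers the "jump" action at a lower elevation angle than a healthy player.
patient_config = {
    "arm_elevation_deg": {"threshold": 30, "action": "jump"},
    "grip_force_n":      {"threshold": 5,  "action": "grab"},
}

controls = make_control_mapping(patient_config)
print(controls("arm_elevation_deg", 42))   # jump
print(controls("grip_force_n", 2))         # None (below threshold)
```

Keeping the mapping in data rather than in game code is what lets the same game serve therapies with very different movement goals, as the cited architecture intends.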
302 | Paweł Węgrzyn
A program of Wii-based video games for patients with cystic fibrosis is presented in [36]. Exercise training increases exercise capacity, decreases dyspnea and improves health-related quality of life. The video game-based training is well tolerated by patients, but its effectiveness remains to be examined in future studies. Management of chronic pediatric diseases with interactive health games is discussed in [37]. An interactive web application for education about asthma is described in [38]. Serious games for children with diabetes are presented in [39] and [40]. A serious video game to prevent type 2 diabetes and obesity among youth is presented in [41].
14.1.3.2.4 Life support
Multiplayer video games and augmented and virtual reality systems can be very useful for creating new ways of teaching and training the treatments and techniques performed in emergency situations to support life. A massively multiplayer online video game (MMOG) has been created for repeated team training of cardiopulmonary resuscitation procedures [42]. There are four scenarios for virtual world team training with avatars; self-efficacy, concentration and mental strain are measured. The design, implementation and evaluation of a multiplayer video game for advanced cardiac life support (ACLS) training are described in [43]. Its efficacy and performance outcomes are compared with traditional ACLS training. The main conclusion is that virtual training can provide a learning experience similar to face-to-face training. Serious games directed at children and young adults that serve as a tool to create awareness of cardiac arrest and cardiopulmonary resuscitation are described in [44].
14.1.3.3 Mental well-being
The challenge for modern psychotherapy is to use novel information and communication technologies (such as web-based technologies, mobile technologies, social media, virtual reality, virtual humans and video games) to address behavioral and mental health outcomes [45].
14.1.3.3.1 Perceptual and cognitive training
Humans have the ability to learn, to acquire knowledge, skills and abilities (KSA) and to change behavior as a result of experience. Skill learning can relate to improvement of perceptual, cognitive or motor performance. A review of ideas and papers about the effects of playing video games on perceptual or cognitive skills is given in [46]. A review of various studies on whether video game players outperform nongamers on measures of perception and cognition is given in [47]. The authors of the review conclude that the effects of gaming on perception and cognition seem to be confirmed. However,
14.1 Serious games for health – Video games and health issues | 303
there are many methodological shortcomings in past studies, and future studies following clinical trial best practices and ruling out alternative explanations for gaming effects are necessary to develop validated methods of game interventions. Playing action video games can modify a range of visual skills, such as visual selective attention [48]. Green and Bavelier performed four experiments with visual performance tests on habitual video game players and nonplayers. The results show that video game players perform recognizably better on several different aspects of visual attention. Moreover, the authors show that game players alter the spatial resolution of vision in tasks in which the location and time of arrival of the stimulus are unknown/surprising as well as known/foreseen to players [49]. Video game playing improves the ability to inhibit attention from returning to previously attended locations, and the efficiency of visual search in both easy and more demanding search environments [50]. The enhancement of visual skills has been confirmed and extended to a wider range of cognitive abilities, including attention, memory and executive control, in [51]. Li et al. [52] report an improvement of the contrast sensitivity function through action video game training. The contrast sensitivity function (CSF) measures the ability to detect objects of different sizes at lower contrasts. Video gaming may provide an optimal regimen to increase self-control and choice skills [53]. The speed of processing may be increased and perceptual reaction times may be reduced: playing action video games significantly reduces reaction times without sacrificing accuracy [54]. The effect of video game playing on the efficiency of attention allocation is examined in [55]. An action video game for blind adolescents to train navigation and spatial cognition skills is presented in [56].
Multimedia learning is a form of learning supported by different sources of information. Multimedia learning theory was elaborated by Richard Mayer [57]. Optimal learning occurs when visual and verbal materials are presented together simultaneously [58]. A review of the literature about serious games for multimedia learning is presented in [59]. Design principles for educational serious games are discussed in [7]. The personalization principle, one of the most important design principles of multimedia learning, is discussed in [60]. Serious games are supposed to improve learning because they instigate engagement connected with a positive affective state and a high flow state. A study of the relationship between learning and engagement in serious games is reported in [61]. Designing serious games for elementary science education is discussed in [62]. The article presents the design of the educational game “Crystal Island” and promising results of an evaluation trial for classroom-based science education. Boyle et al. [63] provide a narrative literature review of games, animations and simulations used to teach research methods and statistics. The final conclusion of the review is that the evidence points to the great potential of game-based learning, but there are currently few rigorous evaluations (only 26 of 4040 papers met the inclusion criteria defined by the reviewers).
Transmedia learning is a form of learning based on transmedia storytelling. According to Henry Jenkins, a transmedia story unfolds across multiple media platforms, with each new text making a distinctive and valuable contribution to the whole [64]. Transmedia learning is the sustained experience that results in measurable behavior change; the behavior can be physical and overt, intellectual, attitudinal, or a combination of all [65]. In transmedia learning, students need to actively seek out content across multiple media platforms. An artificial neural network model for simulating student cognition is described by Lamb et al. [66]. The authors simulated a cognitive training intervention using a randomized controlled trial design with 100 000 simulated students. The results suggest that it is possible to increase levels of student success using a targeted cognitive attribute approach, and that computational modeling provides a means to test educational theory for future education research. It is crucial for serious game designers and developers to know how to apply cognitive neuroscientific principles to validate proposed games. Such a cognitive evaluation method is proposed in [67].
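The idea of running a randomized controlled trial entirely on simulated students can be sketched very simply. The sketch below is far cruder than the neural network model of Lamb et al.; the effect size, noise levels and cohort size are invented for illustration.

```python
# Illustrative sketch of a simulated randomized controlled trial on
# artificial "students". All numbers (effect size 0.3, noise levels,
# cohort size) are assumptions for illustration, not values from the
# cited study.
import random
import statistics

random.seed(42)

def simulate_student(trained):
    """Latent ability plus an assumed training effect and measurement noise."""
    ability = random.gauss(0.0, 1.0)
    effect = 0.3 if trained else 0.0  # assumed intervention effect
    return ability + effect + random.gauss(0.0, 0.5)

# Randomize a cohort of simulated students into two parallel arms.
cohort = 10_000
arms = {"control": [], "treatment": []}
for _ in range(cohort):
    arm = random.choice(["control", "treatment"])
    arms[arm].append(simulate_student(arm == "treatment"))

diff = statistics.mean(arms["treatment"]) - statistics.mean(arms["control"])
print(f"estimated training effect: {diff:.2f}")
```

With thousands of simulated participants per arm, the estimated group difference converges on the assumed effect, which is exactly why such computational models can be used to test educational theory before running costly real-world trials.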
14.1.3.3.2 Mental aging
Training on complex video games may be very useful for the mental well-being of older adults. The decline of various cognitive abilities may be traced through the feedback provided by such games. Traditionally, experimental tests of mental health are based on studying repetitive performance of individual perceptual and cognitive tasks. Serious games for use with older adults are usually very different from serious games for young persons (see for instance [68]). The question of whether video game-based training may attenuate the decline of cognitive abilities in older adults is discussed in [69]. A group of older adults was trained in a real-time strategy video game for 23.5 hours; cognitive functions such as task switching, working memory, visual short-term memory and mental rotation improved significantly. A video game-based training program improving five cognitive functions (visuospatial memory, attention, memory for prose, working memory and prospective memory) is discussed in [70]. The training includes several serious games [71].
14.1.3.3.3 Mental disorders
The video game “PlayMancer”, a complementary therapy tool for mental disorders, is described and evaluated in [72]. The main results of the clinical tests are: acceptance of the game by patients with mental disorders; the possibility of working on problems that are difficult to treat (such as impulsiveness, emotional regulation and frustration); and the possibility of applying techniques that tend to be difficult to use with these patients (such as controlled intensive exposure, immediate positive and negative reinforcement, complex biofeedback approaches and real-time monitoring of physiological-emotional reactions).
Serious games may increase the participation of young adults with physical and intellectual disabilities in leisure activities [73]. The games, with their associated physical and cognitive environment, provide an opportunity for physical exercise in an enjoyable and motivating manner for people with intellectual disabilities. A serious game addressing money management skills for people with intellectual disability is described in [74]; the results of a qualitative evaluation have also been satisfactory and very promising. Serious games seem to be a promising technology for autistic children. A review of recent applications and publications is given in [75]. A video game with a multisensory environment called JeStiMulE for autistic children is presented in [76]. A game for teaching emotion identification skills to preschoolers is described in [77]. A serious game for treating ADHD is presented in [79]. Future serious games for ADHD, and perhaps for other mental disorders, will probably make use of brain-computer interfaces [80]. A serious game called ECHOES for fostering social communication in children with autism is presented in [78]. A game-based neuropsychological intervention for patients with alcohol dependence is reported in [81]. Treating cockroach phobia using a serious game on a mobile phone and augmented reality exposure is studied in [82]. An approach to the treatment of schizophrenia with computerized cognitive training and video game playing is summarized in [83]. Serious games may also be very useful for people with Alzheimer’s disease, which is one of the most important challenges for healthcare systems with aging populations [84]. The design of a serious game with haptic technology for the diagnosis and management of Parkinson’s disease is reported in [85]. There is some evidence for striatal dopamine release while playing video games [86].
14.1.3.4 Social well-being
Online video gaming actually appears to play a role in the socialization of game players, particularly for those who play massively multiplayer online role-playing games (MMORPGs). Cole and Griffiths [87] are two of the researchers who have explored the social interactions that occur both within and outside of MMORPGs. Their study demonstrates that players desire and appreciate the social interactions in online gaming. Moreover, virtual gaming allows players to express themselves in ways they may not feel comfortable doing in real life because of their appearance, gender, sexuality or age. Game players are not antisocial loners; gaming does not eliminate social interaction but supplements it [88]. The impact of video gaming on relationship quality with parents and friends is examined in [89]. It is suggested that family therapists should learn about video games, their contextual impacts, addictive aspects and possible uses in the therapeutic setting, while ignoring the myth that all gaming is harmful [90].
Some authors have defined prosocial games, in which game characters help and support each other in nonviolent ways [91]. Video gaming of that type is capable of increasing both short-term and long-term prosocial behaviors. As far as psychosocial adjustment is concerned, playing video games for up to an hour a day can be beneficial for children aged 10 to 15 years [92]. When children play for more than three hours a day, they are less well adjusted. Nevertheless, the impact of video games on children is rather small when compared with other factors. Video games can increase the realism of role plays used to help college women resist sexual attacks [93]. This may be a novel approach to assessment and intervention in the area of protecting women from sexual assault and coercion. Simulations of child sexual abuse interviews with computer-generated child avatars could improve interview quality [94].
14.1.3.5 Healthcare professionals
Serious games and virtual/augmented reality systems are considered useful tools for the education and expert training of healthcare professionals. Surveys of such games and the scientific literature can be found in [95] and [13]. There are serious games for various health professionals, such as physicians, dentists, nurses, surgeons, dietitians, therapists, psychologists, social workers, audiologists, speech pathologists, optometrists and emergency medical technicians.
14.1.3.5.1 Medical education
A serious game in the form of an interactive web-based application for breast imaging in radiology for medical students is described and evaluated in [96]. The benefits of learning with games include interactivity, novelty, flexible scheduling, instructors being relieved of the need to deliver repetitious lectures, and greater consistency in quality. The disadvantages include the lack of human interaction and mentoring, material presented in a format that is less pleasant to read than a textbook, and the possibility of a student being left with unanswered questions. Serious games can be an additional teaching method, not a replacement for other methods. A web-based game to teach pediatric content to medical students is presented in [97]. The authors point out an enjoyable and motivating method of learning, enhanced by group interactions, competition and fun, as the advantages of game-based education.
14.1.3.5.2 Medical modeling and simulation
In computer graphics, 3D modeling is the process of developing a mathematical representation of an object (a 3D model), that is, a collection of numerical data, mathematical rules and algorithms describing the visual appearance of the object (shapes and
colors). The 3D model of the object can then be equipped with key characteristics and algorithms for simulating its functions and behaviors (physics and artificial intelligence algorithms). Medical modeling, defined accordingly as the process of developing 3D models for objects in the medical domain, is a compelling business. The challenge is to incorporate data from the medical imaging tools used for diagnosis into virtual 3D models. Medical simulation can be defined as the imitation of the operation of a real-world system over time for the purpose of medical training, education or assessment. If a simulation deals with patients and real-life clinical scenarios, we often speak of virtual patients. Obviously, simulations and virtual patients need 3D models to be developed in the first place. The literature devoted specifically to medical modeling and simulation is not yet substantial, and some medical simulations cannot be regarded as (serious) games. A review of the literature is given in [95]. There are many papers on individual simulations, for instance a simulation of acute coronary syndrome [98] or a simulation of post-partum hemorrhage [99]. Actually, most of these simulations are not games, but the relevant technologies and models used in them may be applicable to future game development. A game-based learning environment for virtual patients in Second Life is presented in [100]. Recently, a new brand of simulation games called biotic games [101] has been engineered. Biotic games operate on real biological processes; for instance, real paramecia are guided by players using electric fields. The use of living things in games has posed some ethical questions.
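The notion of a 3D model as "numerical data plus algorithms" can be made concrete with a minimal sketch: a mesh stored as vertex coordinates and triangular faces, together with one geometric algorithm operating on it. The toy tetrahedron and function names are illustrative assumptions, not part of any cited system.

```python
# A 3D model in the sense described above: numerical data (vertex
# coordinates, triangular faces as index triples) plus an algorithm
# that operates on that data. The mesh here is a toy tetrahedron.

vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
    (0.0, 0.0, 1.0),
]
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]  # vertex indices

def face_area(mesh_vertices, face):
    """Area of one triangular face via half the cross-product norm."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = (mesh_vertices[i] for i in face)
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    nx = uy * vz - uz * vy
    ny = uz * vx - ux * vz
    nz = ux * vy - uy * vx
    return 0.5 * (nx * nx + ny * ny + nz * nz) ** 0.5

surface = sum(face_area(vertices, f) for f in faces)
print(round(surface, 3))
```

Real medical meshes differ only in scale: the vertex data comes from segmented imaging volumes, and the attached algorithms implement tissue physics or behavior rather than simple geometry.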
14.1.3.5.3 Medical professional training
Surgical training in the clinical setting is constrained by the complexity of procedures and by legal and ethical concerns, and it is also time-consuming and costly. Therefore, many programs have been launched to teach trainees surgical skills outside the operating room using simulators and video games [102–105]. A literature review on using video games to train surgical abilities is given in [106]. Video game-based training has been studied in relation to laparoscopic, gastrointestinal endoscopic, endovascular and robotic surgery. A more recent review is to be found in [107]. The overall conclusion is that video gaming seems useful in helping surgeons acquire or retain some skills, but there is no standardized method to assess video game experience. An educational game that teaches medical students how to carry out a neurological examination is described in [108]. In fact, it is a multimodal training system with a natural user interface (Kinect). The users are tested with respect to both theoretical expertise and motor performance.
14.1.3.5.4 Medical decision-making
Serious games can be used for training in complex medical decision-making. Some randomized controlled trials with games show that students are able to improve their decision-making abilities, professional expertise and cost consciousness of medical care. We can mention serious games for clinical decision-making in surgery [109], surgeons’ response to equipment failure in the laparoscopic environment [110], physician decision-making in the context of trauma triage [111], and geriatric medical decision-making that weighs patient preferences against the appropriateness and costs of medical care [112].
14.1.3.6 Healthy lifestyle
Video games and stories are useful in promoting health-related behavior change. It is still unclear how characteristics of the games are related to outcomes, but the immersive, attention-maintaining properties of stories and fantasy, the engaging properties of interactivity, tailored messages and goal setting are the principal arguments advanced for game-based technologies [113]. A meta-analysis of serious digital games for healthy lifestyle promotion is given in [114]. The authors analyze 54 studies of serious digital games for healthy lifestyle promotion. They conclude that serious games have positive effects on healthy lifestyles and their determinants (especially knowledge), and on clinical outcomes.
14.1.3.6.1 Persuasive games
Persuasive technologies are designed to change users’ attitudes or behavior. Ian Bogost claims that video games open a new domain for persuasion [115]. Persuasive games for health are designed to alter human behavior or attitudes using various persuasive technology strategies. A review of the subject can be found in [116]. Recently, quite a lot of persuasive games for health have been developed, targeted at modifying one or more aspects of users’ behaviors and promoting healthy behavior change. Nutrition is one of the most popular targets of persuasive games for health, since obesity and overweight are serious problems nowadays. A serious game called Squire’s Quest! has been designed to help children increase their fruit, juice and vegetable consumption [113]; its use proved quite successful. A platform of serious games for nutritional education is presented in [118]. Tailored health communication (tailored messages, tailoring) is any combination of information and behavior change strategies intended to reach one specific person, based on information unique to that person, related to the outcome of interest and derived from an individual assessment [119]. A review of computer-tailored health interventions delivered over the web is given in [120]. The authors have
screened 503 studies and selected 30 eligible ones. They report that message tailoring is achieved through a combination of three main mechanisms: feedback (individual recommendations based on an expert assessment of the individual’s needs or characteristics related to the targeted behaviors), personalization (inclusion of specific and personally identifiable information, gathered during the assessment phase, within the content) and adaptation (creating content packages that are pertinent to an individual and selected based on known determinants of the targeted behavior).
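The three tailoring mechanisms can be sketched in a few lines of code: one hypothetical function that composes a message through personalization, feedback and adaptation. The profile fields, thresholds and content packages are invented for illustration and are not drawn from any cited intervention.

```python
# Minimal sketch of the three tailoring mechanisms described above:
# personalization, feedback and adaptation. All names, thresholds
# and content here are illustrative assumptions.

def tailor_message(profile):
    """Compose a tailored health message from an individual assessment."""
    # Personalization: embed personally identifiable information.
    greeting = f"Hi {profile['name']},"

    # Feedback: an individual recommendation based on assessed behavior.
    if profile["fruit_servings_per_day"] < 2:
        feedback = "you ate fewer than 2 fruit servings a day last week."
    else:
        feedback = "you met your fruit goal last week - well done!"

    # Adaptation: select a content package matching a known determinant
    # of the targeted behavior (here, the person's motivation stage).
    packages = {
        "precontemplation": "Did you know fruit lowers long-term health risks?",
        "action": "Try adding one piece of fruit to breakfast tomorrow.",
    }
    advice = packages[profile["stage"]]

    return f"{greeting} {feedback} {advice}"

message = tailor_message(
    {"name": "Anna", "fruit_servings_per_day": 1, "stage": "action"}
)
print(message)
```

The same assessment data drives all three mechanisms, which is why tailored interventions start from an individual assessment rather than from a fixed message library.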
14.1.3.6.2 Exergames
Exergames, also called fitness games, are video games that combine play and exercise [121]. The effectiveness of exergaming is examined in [122–124]; the evidence from different trials is rather mixed. Personalized exergames [124] combine the methods and concepts of exergames with tailored health communication.
14.1.3.7 Video game-related health problems
There is a long scientific debate about the detrimental effects of video gaming, focused mainly on the adolescent population of gamers. Video gaming is suspected of causing addiction and dependency, lower academic achievement, and psychosocial and physical impairments. Recent studies show that most of these suspicions are rather groundless. An instrument for measuring problem video game playing in adolescents is suggested in [125]. The addictive potential of gaming, as well as the relationship between excessive gaming and aggressive attitudes, is examined in [126]. Excessive video game playing and its negative consequences for players’ cognitive characteristics are examined in [127]. Associations between playing video games and substance use problems are discussed in [128]. Spending time playing video games does not in itself involve negative consequences, but adolescents who experience problems related to video games are likely to also experience problems in other facets of life [129]. The effects of playing violent video games are indicated in [130], and meta-analytic procedures have been used to test the effects of violent video games on aggressive behavior, aggressive cognition, aggressive affect, physiological arousal, empathy/desensitization, and prosocial behavior [131]. A meta-analytic review of the positive and negative effects of violent video games is given in [132]. The results do not support the conclusion that violent video game playing leads to aggressive behavior; however, violent video game playing is associated with higher visuospatial cognition. It may be advisable to reframe the violent video game debate in terms of the potential costs and benefits of this medium. There is a correlation between video game usage and academic performance markers, but it is not clear that a causal relation exists [133].
The effects of playing video games on sleep quality are investigated in [134]. No significant effects of computer games are found on slow-wave sleep; playing an exciting computer game affects sleep latency and REM sleep, but a bright display does not affect sleep variables. Relationships between video gaming, obesity, academic performance and psychosocial functioning of adolescents are studied in [135]. The authors investigated 219 college-aged males. Current game players reported a weekly average of 9.73 hours of game play, with almost 10 % of current players reporting an average of 35 hours of play per week. The frequency of play is not significantly related to body mass index or grade point average. The results suggest that gaming may provide a healthy source of socialization, relaxation and coping. The stereotype of the typical online player as a socially withdrawn young male with limited sex role identity is broken by the report [136]. Associations between violent video gaming, empathic responding, and prosocial behavior enacted toward strangers, friends and family members are investigated in [137]. Nakamuro et al. report on a longitudinal experiment [138] examining the causal effect of hours of TV watched or video games played on school-aged children’s problem behavior, orientation to school, and obesity. The results suggest that the magnitude of the effect is small enough to be considered negligible. Principles for the wise use of computers by children are discussed in [139].
14.1.4 Conclusions
Serious games related to health issues are growing rapidly in number and spreading into new areas of application, though still in a rather scattered manner. The number of research studies and scientific articles is growing as well, but most of the studies only weakly fulfil the criteria for high-quality scientific investigations. Research studies suffer mainly from poor knowledge about game design, user experience, human-computer interaction, and the assessment of medical interventions by randomized controlled trials. It is rather well recognized that video game technologies can provide effective, efficient, appealing and low-cost interventions for a variety of health issues. In particular, we can use advanced graphic and audiovisual performance, multiplayer modes, Internet communication and social media, and we can build multisensory systems with novel input devices and virtual and augmented realities. Many studies have demonstrated that serious game interventions had similar or even better results than traditional medical or educational interventions; however, their assessment is usually very weak. Progress will be made by investigating in more detail how the characteristics of games lead to more effective health interventions. Serious games for health are still in their infancy, both in terms of game development and of scientific studies. In order to have more effective games and higher-quality research, we need to link professional knowledge about video game design
and analysis with professional knowledge about health issues. Video games are not merely platforms for well-established healthcare interventions; the internal features of game technologies (e.g. interactivity, engagement, challenge, avatar cooperation) should encourage clinicians and technicians to develop a variety of novel healthcare interventions.
References
[1] Abt C. Serious Games. New York: Viking Press; 1970.
[2] Ritterfeld U, Cody M and Vorderer P (Editors). Serious Games. Mechanisms and Effects. New York: Routledge, Taylor & Francis; 2009.
[3] Zyda M. From visual simulation to virtual reality to games. Computer. 2005;38:25–32.
[4] Michael D and Chen S. Serious Games: Games that Educate, Train, and Inform. Mason, USA: Course Technology, Cengage Learning; 2006.
[5] Grad F. The Preamble of the Constitution of the World Health Organization. Bulletin of the World Health Organization. 2002;80:981–984.
[6] Kato PM. Video games in health care: closing the gap. Review of General Psychology. 2010;14:113–121.
[7] Annetta LA. The “I’s” have it: a framework for serious educational game design. Review of General Psychology. 2010;14:105–112.
[8] de Wit-Zuurendonk LD and Oei SG. Serious gaming in women’s health care. BJOG: An International Journal of Obstetrics and Gynaecology. 2011;118(SI, suppl.):3.
[9] Stokes B. Video games have changed: time to consider “serious games”. The Development Education Journal. 2005;11:108.
[10] Connolly TM, Boyle EA, MacArthur E, Hainey T and Boyle JM. A systematic literature review of empirical evidence on computer games and serious games. Computers & Education. 2012;59:661–686.
[11] Wattanasoontorn V, Boada I, García R and Sbert M. Serious games for health. Entertainment Computing. 2013;4:231–247.
[12] Primack BA, Carroll MV, McNamara M, Klem ML, King B, Rich M et al. Role of video games in improving health-related outcomes. A systematic review. American Journal of Preventive Medicine. 2012;42:630–638.
[13] Graafland M, Schraagen JM and Schijven MP. Systematic review of serious games for medical education and surgical skills training. British Journal of Surgery. 2012;99:1322–1330.
[14] Horne-Moyer HL, Moyer BH, Messer DC and Messer ES. The use of electronic games in therapy: a review with clinical implications. Current Psychiatry Reports. 2014;16:1–9.
[15] Ricciardi F and De Paolis T. A comprehensive review of serious games in health professions. International Journal of Computer Games Technology. 2014, art. no 787968.
[16] Graafland M, Dankbaar M, Mert A, Lagro J, De Wit-Zuurendonk L, Schuit S et al. How to systematically assess serious games applied to health care. Journal of Medical Internet Research. 2014;16:1–8.
[17] Cheng MT, Su T, Huang WY and Chen JH. An educational game for learning human immunology: What do students learn and how do they perceive? British Journal of Educational Technology. 2014;14:820–833.
[18] Papastergiou M. Exploring the potential of computer and video games for health and physical education: A literature review. Computers & Education. 2009;53:603–622.
[19] Beale IL, Kato PM, Marin-Bowling VM, Guthrie N and Cole SW. Improvement in cancer-related knowledge following use of a psychoeducational video game for adolescents and young adults with cancer. Journal of Adolescent Health. 2007;41:263–270.
[20] Kato PM, Cole SW, Bradlyn AS and Pollock BH. A video game improves behavioral outcomes in adolescents and young adults with cancer: A randomized trial. Pediatrics. 2008;122:E305–E317.
[21] Sajjad S, Abdullah AH, Sharif M and Mohsin S. Psychotherapy through video game to target illness related problematic behaviors of children with brain tumor. Current Medical Imaging Reviews. 2014;10:62–72.
[22] Hussain Z, Astle AT, Webb BS and McGraw PV. The challenges of developing a contrast-based video game for treatment of amblyopia. Frontiers in Psychology. 2014;5:art. no 1210.
[23] Prosperini L, Fanelli F, Petsas N, Sbardella E, Tona F, Raz E et al. Multiple sclerosis: changes in microarchitecture of white matter tracts after training with a video game balance board. Radiology. 2014;273:529–538.
[24] Hoffman HG, Doctor JN, Patterson DR, Carraougher GJ and Furness TA. Virtual reality as an adjunctive pain control during burn wound care in adolescent patients. Pain. 2000;85:305–309.
[25] Bidarra R, Gambon D, Kooij R, Nagel D, Schutjes M and Tziouvara I. Gaming at the dentist’s – serious game design for pain and discomfort distraction. Games for Health. 2013:207–215.
[26] Rego P, Moreira PM and Reis LP. Serious games for rehabilitation: A survey and a classification towards a taxonomy. Sistemas y Tecnologias de Informacion. 2010:349–354.
[27] Wiemeyer J. Gaming for health – serious games in prevention and rehabilitation. Deutsche Zeitschrift fur Sportmedizin. 2010;61:252–257.
[28] Burke JW, McNeill MDJ, Charles DK, Morrow PJ, Crosbie JH and McDonough SM. Optimising engagement for stroke rehabilitation using serious games. Visual Computer. 2009;25:1085–1099.
[29] Saposnik G, Teasell R, Mamdani M, Hall J, McIlroy W, Cheung D et al. Effectiveness of virtual reality using Wii gaming technology in stroke rehabilitation: a pilot randomized clinical trial and proof of principle. Stroke. 2010;41:1477–1484.
[30] Cho KH, Lee KJ and Song CH. Virtual-reality balance training with a video-game system improves dynamic balance in chronic stroke patients. Tohoku Journal of Experimental Medicine. 2012;228:69–74.
[31] Sucar LE, Orihuela-Espina F, Velazquez RL, Reinkensmeyer DJ, Leder R and Hernandez-Franco J. Gesture Therapy: an upper limb virtual reality-based motor rehabilitation platform. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2014:634–643.
[32] Omelina L, Jansen B, Bonnechere B, Van Sint Jan S and Cornelis J. Serious games for physical rehabilitation: designing highly configurable and adaptable games. In: Proc. 9th Intl Conf. Disability, Virtual Reality & Associated Technologies: Laval, France. 2012; pp. 195–201.
[33] Jaume-i-Capo A, Martinez-Bueso P, Moya-Alcover B and Varona J. Interactive rehabilitation system for improvement of balance therapies in people with cerebral palsy. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2014:419–427.
[34] Webster D and Celik O. Systematic review of Kinect applications in elderly care and stroke rehabilitation. Journal of Neuroengineering and Rehabilitation. 2014;11:1–24, art. no 108.
[35] Pompeu JE, Arduini LA, Botelho AR, Fonseca MBF, Pompeu SMAA, Torriani-Pasin C et al. Feasibility, safety and outcomes of playing Kinect Adventures!™ for people with Parkinson’s disease: a pilot study. Physiotherapy. 2014;100:162–168.
14.1 Serious games for health – Video games and health issues | 313
[36]
[37] [38]
[39] [40]
[41]
[42]
[43]
[44] [45]
[46] [47] [48] [49] [50]
[51] [52] [53] [54] [55]
del Corral T, Percegona J, Seborga M, Rabinovich RA and Vilaro J. Physiological response during activity programs using Wii-based video games in patients with cystic fibrosis (CF). Journal of Cystic Fibrosis. 2014;13:706–711. Lieberman DA. Management of chronic pediatric diseases with interactive health games: theory and research findings. The Journal of Ambulatory Care Management. 2001;24:26–38. Krishna S, Francisco BD, Balas EA, Konig P, Graff GR and Madsen RW. Internet-enabled interactive multimedia asthma education program: A randomized trial. Pediatrics. 2003;111: 503–510. Brown SJ, Lieberman DA, Gemeny BA, Fan YC, Wilson DM and Pasta DJ. Educational video game for juvenile diabetes: Results of a controlled trial. Medical Informatics. 1997;22:77–89. Fuchslocher A, Niesenhaus J and Kramer N. Serious games for health: An empirical study of the game “Balance” for teenagers with diabetes mellitus. Entertainment Computing. 2011; 2:97–101. Thompson D, Baranowski T, Buday R, Baranowski J, Thompson V, Jago R et al. Serious video games for health: how behavioral science guided the development of a serious video game. Simulation & Gaming. 2010;41:587–606. Creutzfeldt J, Hedman L, Medin C, Heinrichs WL and Fellander-Tsai L. Exploring virtual worlds for scenario-based repeated team training of cardiopulmonary resuscitation in medical students. Journal of Medical Internet Research. 2010;12:art. no e38. Khanal P, Vankipuram A, Ashby A, Vankipuram M, Gupta A, Drumm-Gurnee D et al. Collaborative virtual reality based advanced cardiac life support training simulator using virtual reality principles. Journal of Biomedical Informatics. 2014;51:49–59. Semeraro F, Frisoli A, Ristagno G, Loconsole C, Marchetti L, Scapigliati A et al. Relive: A serious game to learn how to save lives. Resuscitation. 2014;85:E109–E110. Mohr DC, Burns MN, Schueller SM, Clarke G and Klinkman M. 
Behavioral intervention technologies: evidence review and recommendations for future research in mental health. General Hospital Psychiatry. 2013;35:332–338. Green CS, Bavelier D. Exercising your brain: a review of human brain plasticity and traininginduced learning. Psychology & Aging.2008;23:692–701. Boot WR, Blakely DP and Simons DJ. Do action video games improve perception and cognition? Frontiers in Psychology 2011;2:art. no 226. Green CS and Bavelier D. Action video game modifies visual selective attention. Nature. 2003;423:534–537. Green CS and Bavelier D. Action-video-game experience alters the spatial resolution of vision. Psychological Science. 2007;18:88–94. Castel AD, Pratt J and Drummond E. The effects of action video game experience on the time course of inhibition of return and the efficiency of visual search. Acta Psychologica. 2005; 119:217–230. Boot WR, Kramer AF, Simons DJ, Fabiani M and Gratton G. The effects of video game playing on attention, memory, and executive control. Acta Psychologica. 2008;129:387–398. Li RJ, Polat U, Makous W and Bavelier D. Enhancing the contrast sensitivity function through action video game training. Nature Neuroscience. 2009;12:549–551. Millar A. Self-control and choice in humans: Effects of video game playing as a positive reinforcer. Learning and Motivation. 1984;15:203–218. Dye MWG, Green CS and Bavelier D. Increasing speed of processing with action video games. Current Directions in Psychological Science. 2009;18:321–326. Dye MWG, Green CS and Bavelier D. The development of attention skills in action video game players. Neuropsychologia. 2009;47:1780–1789.
314 | Paweł Węgrzyn
[56]
[57] [58] [59] [60]
[61]
[62]
[63]
[64] [65] [66] [67]
[68] [69] [70]
[71]
[72]
[73] [74]
Connors EC, Chrastil ER, Sanchez J and Merabel LB. Action video game play and transfer of navigation and spatial cognition skills in adolescents who are blind. Frontiers in Human Neuroscience. 2014;8:1–8. Mayer RE. Multimedia Learning. New York: Cambridge University Press; 2001. Mayer RE. The Cambridge Handbook of Multimedia Learning. Cambridge, UK: Cambridge University Press; 2005. Mitchell A and Savill-Smith C. The use of computer and video games for learning: A review of the literature. London: Learning and Skills Development Agency; 2004. Brom C, Bromova E, Dechterenko F, Buchtova M and Pergel M. Personalized messages in a brewery educational simulation: Is the personalization principle less robust than previously thought? Computers & Education. 2014;72:339–366. Brom C, Buchtova M, Sisler V, Dechterenko F, Palme R and Glenk LM. Flow, social interaction anxiety and salivary cortisol responses in serious games: A quasi-experimental study. Computers & Education. 2014;79:69–100. Lester JC, Spires HA, Nietfeld JL, Minogue J, Mott BW and Lobene EV. Designing game-based learning environments for elementary science education: A narrative-centered learning perspective. Information Sciences. 2014;264:4–18. Boyle EA, MacArthur EW, Connolly TM, Hainey T, Manea M, Karki A, et al. A narrative literature review of games animations and simulations to teach research methods and statistics. Computers & Education. 2014;74:1–14. Jenkins H. Convergence Culture: Where Old and New Media Collide. New York: New York University Press; 2006. Raybourn EM. A new paradigm for serious games: Transmedia learning for more effective training and education. Journal of Computational Science. 2014;5:471–481. Lamb RL, Vallett DB, Akmal T and Baldwin K. A computational modeling of student cognitive processes in science education. Computers & Education. 2014;79:116–125. Lee S, Baik Y, Nam K, Ahn J, Lee Y, Oh S, et al. Developing a cognitive evaluation method for serious game engineers. 
Cluster Computing–The Journal of Networks Software Tools and Applications. 2014;17:757–766. Studenski S, Perera S, Hile E, Keller V, Spadola-Bogard J and Garcia J. Interactive video dance games for healthy older adults. Journal of Nutrition Health & Aging. 2010;14:850–852. Basak C, Boot WR, Voss MW and Kramer AF. Can training in a real-time strategy video game attenuate cognitive decline in older adults? Psychology and Aging. 2008;23:765–777. Laskowska I, Zając-Lamparska L, Wiłkość M, Malicki M, Szałkowska A, Jurgielewicz A, et al. A serious game – a new training addressing particularly prospective memory in the elderly. Bio-Algorithms and Med-Systems. 2013;9:155–165. Jurgielewicz, A., Lewandowski P. Serious game–diagnosis in elderly patients–house and shop. Jagellonian Univ. MSc Thesis. 2012 (in Polish); Buczkowski, K. Serious game in medicine – psychological disorders diagnostics – trip. Jagellonian Univ. MSc Thesis. 2014 (in Polish). Fernandez-Aranda F, Jimenez-Murcia S, Santamaria JJ, Gunnard K, Soto A, Kalapanidas E, et al. Video games as a complementary therapy tool in mental disorders: PlayMancer, a European multicentre study. Journal of Mental Health. 2012;21:364–374. Yalon-Chamovitz S and Weiss PL. Virtual reality as a leisure activity for young adults with physical and intellectual disabilities. Research in Development Disabilities. 2007;29:273. Lopez-Basterretxea A, Mendez-Zorrilla A and Garcia-Zapirain B. A Telemonitoring tool based on serious games addressing money management skills for people with intellectual disability. International Journal of Environmental Research and Public Health. 2014;11:2361–2380.
14.1 Serious games for health – Video games and health issues |
[75] [76]
[77]
[78] [79] [80]
[81]
[82]
[83]
[84]
[85]
[86] [87] [88] [89]
[90] [91]
[92] [93]
315
Boucenna S, Narzisi A, Tilmont E, Muratori F, Pioggia G and Cohen D, et al. Interactive technologies for autistic children: a review. Cognitive Computation. 2014;6:722–740. Serret S, Hun S, Iakimova G, Lozada J, Anastassova M, Santos A, et al. Facing the challenge of teaching emotions to individuals with low- and high-functioning autism using a new Serious game: a pilot study. Molecular Autism. 2014;5:art. no 37. Christinaki E, Vidakis N, Triantafyllidis G. A novel educational game for teaching emotion identification skills to preschoolers with autism diagnosis. Computer Science and Information Systems. 2014;11:723–743. Bernardini S, Porayska-Pomsta K and Smith TJ. ECHOES: An intelligent serious game for fostering social communication in children with autism. Information Sciences. 2014;264:41–60. Roh CH and Lee WB. A study of the attention measurement variables of a serious game as a treatment for ADHD. Wireless Personal Communications. 2014;79:2485–2498. Liarokapis F, Debattista K, Vourvopoulos A, Petridis P and Ene A. Comparing interaction techniques for serious games through brain–computer interfaces: A user perception evaluation study. Entertainment Computing. 2014;5:391–399. Gamito P, Oliveira J, Lopes P, Brito R, Morais D, Silva D, et al. executive functioning in alcoholics following an mhealth cognitive stimulation program: randomized controlled trial. Journal of Medical Internet Research. 2014;16:269–281. Botella C, Breton-Lopez J, Quero S, Banos RM, Garcia-Palacios A, Zaragoza I, et al. Treating cockroach phobia using a serious game on a mobile phone and augmented reality exposure: a single case study. Computers in Human Behavior. 2011;27:217–227. Subramaniam K, Luks TL, Fisher M, Simpson GV, Nagarajan S and Vinogradov S. Computerized cognitive training restores neural activity within the reality monitoring network in schizophrenia. Neuron. 2012;73:842–853. Robert PH, Konig A, Amieva H, Andrieu S, Bremond F, Bullock R, et al. 
Recommendations for the use of Serious Games in people with Alzheimer’s Disease, related disorders and frailty. Frontiers in Aging Neuroscience. 2014;6:art. no 54. Atkinson S and Narasimhan V. Design of an introductory medical gaming environment for diagnosis and management of Parkinson’s disease. Trends in Information Sciences and Computing. 2010:94–102. Koepp MJ, Gunn RN, Lawrence AD, Cunningham VJ, Dagher A, Jones T, et al. Evidence for striatal dopamine release during a video game. Nature. 1998;393:266–268. Cole H, Griffiths MD. Social interactions in massively multiplayer online role-playing gamers. Cyberpsychology. 2007;10:575–583. Taylor N, Jenson J, de Castell S and Dilouya B. Public displays of play: studying online games in physical settings. Journal of Computer Mediated Communication. 2014;19:763–779. Padilla-Walker LM, Nelson LJ, Carroll JS and Jensen AC. More than a just a game: video game and internet use during emerging adulthood. Journal of Youth and Adolescence. 2010; 39:103–113. Jordan NA. Video games: support for the evolving family therapist. Journal of Family Therapy. 2014;36:359–370. Gentile DA, Anderson CA, Yukawa S, Ihori N, Saleem M, Ming LK, et al. The effects of prosocial video games on prosocial behaviors: international evidence from correlational, longitudinal, and experimental studies. Personality and Social Psychology Bulletin. 2009;35:752–763. Przybylski AK. Electronic gaming and psychosocial adjustment. Pediatrics. 2014;134:1–7. Jouriles EN, McDonald R, Kullowatz A, Rosenfield D, Gomez GS and Cuevas A. Can virtual reality increase the realism of role plays used to teach college women sexual coercion and rape-resistance skills? Behavior Therapy. 2009;40:337–345.
316 | Paweł Węgrzyn
[94]
[95]
[96]
[97] [98]
[99]
[100] [101] [102] [103] [104]
[105]
[106] [107] [108]
[109]
[110]
[111]
Pompedda F, Zappala A and Santtila P. Simulations of child sexual abuse interviews using avatars paired with feedback improves interview quality. Psychology Crime & Law. 2015; 21:28–52. Hansen MM. Versatile, Immersive, creative and dynamic virtual 3-d healthcare learning environments: a review of the literature. Journal of Medical Internet Research. 2008;10: art. no e26. Roubidoux MA, Chapman CM and Piontek ME. Development and evaluation of an interactive web-based breast imaging game for medical students. Academic Radiology. 2002;9: 1169–1178. Sward KA, Richardson S, Kendrick J and Maloney C. Use of a web-based game to teach pediatric content to medical students. Ambulatory Pediatrics. 2008;8:354–359. Amine EM, Pasquier P, Rosencher J, Steg G, Carli P, Varenne O, et al. Simulation modeling and computer simulation (serious game)? The case of acute coronary syndrome. Annales Francaises d Anesthesie et de Reanimation. 2014;33:A202–A202. Galland A, Pasquier P, Kerneis MA, Monneins N, Chassard D, Ducloy-Bouthors AS, et al. Simulation modeling and computer simulation (serious game)? The example of post-partum hemorrhage (Hemosims). Annales Francaises d Anesthesie et de Reanimation. 2014;33: A203–A203. Imperial College London. Game-based Learning for Virtual Patients in Second Life. 2008. http://www.imperial.ac.uk/edudev/flyers/technologies/Game_Based_Learning.pdf. Harvey H, Havard M, Magnus D, Cho MK and Riedel-Kruse IH. Innocent fun or “microslavery”? An ethical analysis of biotic games. Hastings Center Report. 2014;44:38–46. Stefanidis D, Korndorffer JR, Sierra R, Touchard C, Dunne JB and Scott DJ. Skill retention following proficiency-based laparoscopic simulator training. Surgery. 2005;138:165–170. Stefanidis D, Scerbo MW, Sechrist C, Mostafavi A and Heniford BT. Do novices display automaticity during simulator training? The American Journal of Surgery. 2008;195:210–213. Hogle NJ, Widmann WD, Ude AO, Hardy MA and Fowler DL. 
Does training novices to criteria and does rapid acquisition of skills on laparoscopic simulators have predictive validity or are we just playing video games? Journal of Surgical Education. 2008;65:431–435. Verdaasdonk EGG, Dankelman J, Schijven MP, Lange JF, Wentink M and Stassen LPS. Serious gaming and voluntary laparoscopic skills training: A multicenter study. Minimally Invasive Therapy & Allied Technologies. 2009;18:232–238. Lynch J, Aughwane P and Hammond TM. Video games and surgical ability: a literature review. Journal of Surgical Education. 2010;67:184–189. Jalink MB, Goris J, Heineman E, Pierie JP and ten Cate Hoedemaker HO. The effects of video games on laparoscopic simulator skills. The American Journal of Surgery. 2014;208:151–156. Rybarczyk Y, Carrasco G, Cardoso T and Pavao Martins I. A serious game for multimodal training of physician novices. In: 6th International Conference of Education, Research and Innovation–ICERI 2013; 2013; Seville, Spain. p. 18–20. Graafland M, Vollebergh MF, Lagarde SM, van Haperen M, Bemelman WA and Schijven MP. A serious game can be a valid method to train clinical decision-making in surgery. World Journal of Surgery. 2014;38:3056–3062. Graafland M, Bemelman WA and Schijven MP. Prospective cohort study on surgeons’ response to equipment failure in the laparoscopic environment. Surgical Endoscopy and Other Interventional Techniques. 2014;28:2695–2701. Mohan D, Angus DC, Ricketts D, Farris C, Fischhoff B, Rosengart MR, et al. Assessing the validity of using serious game technology to analyze physician decision making. Plos One. 2014;9:art. no e105445.
14.1 Serious games for health – Video games and health issues | 317
[112] Lagro J, van de Pol MHJ, Laan A, Huijbregts-Verheyden FJ, Fluit LCR and Rikkert MGMO. A randomized controlled trial on teaching geriatric medical decision making and cost consciousness with the serious game GeriatriX. Journal of the American Medical Directors Association. 2015;15:art. no 957.e1. 2015. [113] Baranowski T, Buday R, Thompson D and Baranowski J. Playing for real – video games and stories for health-related behavior change. American Journal of Preventive Medicine. 2008;34:74–82. [114] DeSmet A, Van Ryckeghem D, Compemolle S, Baranowski T, Thompson D, Crombez G, et al. A meta-analysis of serious digital games for healthy lifestyle promotion. Preventive Medicine. 2014;69:95–107. [115] Bogost I. Persuasive Games. Cambridge, MA: The MIT Press; 2007. [116] Orji R, Vassileva J and Mandryk RL. Modeling the efficacy of persuasive strategies for different gamer types in serious games for health. User Modeling and User-Adapted Interaction. 2014;24:453–498. [117] Baranowski T, Baranowski J, Cullen KW, Marsh T, Islam N, Zakeri I, et al. Squire’s Quest! Dietary outcome evaluation of a multimedia game. American Journal of Preventive Medicine. 2003;24:52–61. [118] Barreira GJ, Carrascosa R and Segovia P. Nutritional serious-games platform. eChallenges. 2010:1–8. [119] Rimer BK and Kreuter MW. Advancing tailored health communication: a persuasion and message effects perspective. Journal of Communication. 2006;56:S184–201. [120] Lustria MLA, Cortese J, Noar SM and Glueckaluf RL. Computer-tailored health interventions delivered over the web: Review and analysis of key components. Patient Education and Counseling. 2009;74:156–173. [121] Bogost I. The Rhetoric of Exergaming. Paper presented at the Digital Arts and Cultures conference, Copenhagen Denmark, December 2005; [cited 7 June 2015]. Available from: http://bogost.com/writing/the_rhetoric_of_exergaming/ 2005. [122] Daley AJ. 
Can exergaming contribute to improving physical activity levels and health outcomes in children? Pediatrics. 2009;124:763–771. [123] Laikari A. Exergaming–Gaming for health: A bridge between real world and virtual communities. In: IEEE 13th International Symposium on Consumer Electronics; 2009; Kyoto. pp. 665– 668. [124] Göbel S, Hardy S, Wendel V and Steinmetz R. Serious games for health – personalized exergames. Proceedings ACM Multimedia. 2010:1663–1666. [125] Tejeiro Salguero RA and Bersabe Moran RM. Measuring problem video game playing in adolescents. Addiction. 2002;97:1601–1606. [126] Grusser SM, Thalemann R and Griffiths MD. Excessive computer game playing: Evidence for addiction and aggression? Cyberpsychology & Behavior. 2007;10:290–292. [127] Sun DL, Ma N, Bao M, Chen XC and Zhang DR. Computer games: a double-edged sword? Cyberpsychology & Behavior. 2008;11:545–548. [128] Ream GL, Elliott LC and Dunlap E. Playing video games while using or feeling the effects of substances: associations with substance use problems. International Journal of Environmental Research and Public Health. 2011;8:3979–3998. [129] Brunborg GS, Mentzoni RA and Froyland LR. Is video gaming, or video game addiction, associated with depression, academic achievement, heavy episodic drinking, or conduct problems? Journal of Behavioral Addictions. 2014:27–32. [130] Anderson CA. An update on the effects of playing violent video games. Journal of Adolescence. 2004;27:113–122.
318 | Ewa Grabska
[131] Anderson CA, Shibuya A, Ihori N, Swing EL, Bushman BJ, Sakamoto A, et al. Violent video game effects on aggression, empathy, and prosocial behavior in Eastern and Western countries: a meta-analytic review. Psychological Bulletin. 2010;136:151–173. [132] Ferguson CJ. The Good, The Bad and the Ugly: a meta-analytic review of positive and negative effects of violent video games. Psychiatric Quarterly. 2007;78:309–316. [133] Anand V. A study of time management: The correlation between video game usage and academic performance markers. Cyberpsychology & Behavior. 2007;10:552–559. [134] Higuchi S, Motohashi Y, Liu Y and Maeda A. Effects of playing a computer game using a bright display on presleep physiological variables, sleep latency, slow wave sleep and REM sleep. Journal of Sleep Research. 2005:267–273. [135] Wack E and Tantleff-Dunn S. Relationships between electronic game play, obesity, and psychosocial functioning in young men. Cyberpsychology. 2009;12:241–244. [136] Griffiths MD, Davies MNO and Chappell D. Breaking the stereotype: the case of online gaming. Cyberpsychology & Behavior. 2003;6:81–91. [137] Fraser AM, Padilla-Walker LM, Coyne SM, Nelson LJ and Stockdale LA. Associations between violent video gaming, empathic concern, and prosocial behavior toward strangers, friends, and family members. Journal of Youth and Adolescence. 2012;41:636–649. [138] Nakamuro M, Inui T, Senoh W and Hiromatsu T. Are television and video games really harmful for kids? Contemporary Economic Policy. 2015;33:29–43. [139] Straker LM, Pollock C and Maslen B. Principles for the wise use of computers by children. Ergonomics. 2009;52:1386–1401.
Ewa Grabska
14.2 Serious game graphic design based on understanding of a new model of visual perception – computer graphics

14.2.1 Introduction

The purpose of serious games is to improve an individual's knowledge, skills, or attitudes in the real world. This section focuses on the role of graphic design tools applied to serious games in improving the ability to coordinate and share visual attention. A lack of this ability is one of the social communication problems observed, for example, in children with Attention Deficit/Hyperactivity Disorder (ADHD) and in children with Autism Spectrum Condition (ASC). The challenge is to find methods that help children with ADHD or ASC without drug treatment. Studies of various media and techniques have shown that experience with action video games can influence visual selective attention [1]. Games in healthcare have been employed as tools to support a wide range of activities, from therapy to the training of specific skills. One such tool, described by Bernardini, Porayska-Pomsta and Smith [2], was used to help young children with autism. The authors presented ECHOES, a serious game designed to help children with ASC. Its interactive learning activities take place in a two-dimensional magic sensory garden containing various objects and the ECHOES virtual agent, who acts as a partner to
children. The agent has a child-like physical appearance established through combined research studies. ECHOES focuses on supporting children's social communication by inviting the child to practice basic skills, such as responding to and initiating bids for interaction. Frequent playing of serious games that promote the principles and techniques of effective visual communication has profound effects on the visual system and on motor responses, and such games are incorporated into rehabilitation techniques. Many computer tests exist that measure the visual attention of children with ADHD responding to visual stimulation. For instance, the test proposed by Roh and Lee [3] contains three objects: a triangle, a circle, and a square. A triangle in a square is the target object, while a nontarget stimulus is either a circle or a rectangle in a square. A response should be made only when the target object appears on the monitor screen. Research on variables that measure the visual attention of children with ADHD has led to the development of computer games that improve children's attention. Kim and Hong [4] give another reason for the positive effect of games on attention: their association with voluntary participation and motivation. The problems of effectively controlling and allocating visual attention are essential in visual communication. Learning these principles is an intensely personal venture, so it is necessary to create conditions conducive to personal exploration of the underlying rules. Understanding how to create such conditions is a formidable challenge; it is therefore advisable to develop theories and models of graphic design, grounded in visual perception, that are useful for serious games.
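The target/nontarget scheme described above for measuring attention can be sketched in code. The following Python fragment is only an illustrative reconstruction, not the test of Roh and Lee: the stimulus names, the target ratio and the scoring variables are assumptions made for illustration. It generates a go/no-go trial sequence and scores a set of responses.

```python
import random

# Hypothetical sketch of a go/no-go attention test: the subject should
# respond only when the target stimulus (a triangle in a square) appears;
# responding to a nontarget counts as a commission error.

TARGET = "triangle"                    # triangle in a square (target)
NONTARGETS = ["circle", "rectangle"]   # nontarget stimuli in a square

def make_trials(n, target_ratio=0.25, seed=0):
    """Generate a pseudo-random trial sequence (seeded for repeatability)."""
    rng = random.Random(seed)
    return [TARGET if rng.random() < target_ratio else rng.choice(NONTARGETS)
            for _ in range(n)]

def score(trials, responses):
    """Score a session: hits, omissions (missed targets) and
    commissions (responses to nontargets)."""
    hits = omissions = commissions = 0
    for stim, responded in zip(trials, responses):
        if stim == TARGET:
            if responded:
                hits += 1
            else:
                omissions += 1
        elif responded:
            commissions += 1
    return {"hits": hits, "omissions": omissions, "commissions": commissions}

trials = make_trials(20)
perfect = [s == TARGET for s in trials]  # an ideal responder
print(score(trials, perfect))
```

An ideal responder scores only hits; a real session would also log reaction times, which are among the variables such tests typically report.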
14.2.2 A new model of perception for visual communication

Over the last decade a new model of perception as a dynamic process has emerged. Personal attentional capacity is very low, and information unrelated to the current task is quickly replaced with whatever is needed right now. According to Ware [5], we are conscious of the field of information to which we have rapid access rather than being immediately conscious of the world; very limited pre-processing is used to direct attention. A new perspective for serious games is to develop graphic designs based on a scientific understanding of visual attention. The player usually tries to solve some kind of cognitive problem. In the framework of the new model of perception, playing a game consists of a series of acts of attention called visual queries, which drive eye movements and pattern finding. Understanding how visual queries work could allow the game designer to influence those eye movements. One line of research into this mechanism has studied the properties that make simple patterns easy to find: some things seem to pop out from the monitor screen at the player. According to Treisman and Gormican [6], the relationship of a visual search target to the surrounding objects plays an essential role in pop-out effects. The target becomes the center of fixation if it is distinct in some feature channel of the
primary visual cortex. In other words, graphic design can help program an eye movement. Visual properties that can be used in planning the next eye movement are called tunable. According to Ware [5], an object that pops out can be seen in a single eye fixation, and the processing that separates it from its surroundings takes less than a tenth of a second, whereas objects that do not pop out require several eye movements to find, and noticing them takes between one and a few seconds. Consider a configuration of objects on the monitor screen. The strongest pop-out effects occur when a single target object differs in some feature from all other objects, which are identical to one another. Pop-out effects are explained in terms of the basic features processed in the primary visual cortex: color, shape, orientation, shadow and blinking are features that lead to pop-out. Examples of patterns showing the pop-out effect are presented in Fig. 14.1.
Fig. 14.1: Patterns showing the pop-out effect: (a) grey value, (b) shape, (c) cast shadow, (d) orientation.
So far we have only considered target objects that pop out because they differ from the surrounding objects in a single feature. Fig. 14.2 shows a target object that differs from all other objects in a greater number of features. It is easy to see that the pop-out effect for this pattern is stronger than for the similar pattern in Fig. 14.1.
Fig. 14.2: Another pattern showing the pop-out effect.
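The rule illustrated by Figs. 14.1 and 14.2 – a single target pops out against homogeneous distractors, and the effect strengthens with the number of feature channels in which it differs – can be sketched as a small check. The feature names and the counting heuristic below are illustrative assumptions, not an implementation from the perception literature.

```python
# Illustrative sketch: represent each object by its values in a few
# feature channels and list the channels in which the target differs
# from every distractor.  More distinguishing channels corresponds
# roughly to a stronger pop-out effect.

FEATURES = ("grey_value", "shape", "orientation", "shadow")

def distinguishing_channels(target, distractors):
    """Channels in which the target differs from ALL distractors.
    Pop-out also requires the distractors to be homogeneous."""
    homogeneous = all(
        all(d[f] == distractors[0][f] for f in FEATURES) for d in distractors
    )
    if not homogeneous:
        return []
    return [f for f in FEATURES if all(target[f] != d[f] for d in distractors)]

distractor = {"grey_value": "light", "shape": "circle",
              "orientation": 0, "shadow": False}
field = [dict(distractor) for _ in range(11)]

# a target as in Fig. 14.1(a): differs only in grey value
t1 = dict(distractor, grey_value="dark")
# a target as in Fig. 14.2: differs in several channels at once
t2 = dict(distractor, grey_value="dark", shape="square", orientation=45)

print(len(distinguishing_channels(t1, field)))  # -> 1
print(len(distinguishing_channels(t2, field)))  # -> 3
```

Adding `t1` to the field breaks the homogeneity of the distractors, and the function then reports no pop-out channels at all – mirroring the conjunctive-search situation discussed next.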
Let us now consider the more complex problem of focusing attention on several target objects defined by a conjunction of two features. Trying to find such objects is called a visual conjunctive search. Fig. 14.3 presents a number of white and gray flower pots, each with a round or conical shape.
Fig. 14.3: An example of the visual conjunctive search.
There are six round white flower pots. They do not show a pop-out effect, because the primary visual cortex can be tuned either for shape or for color, but not for both attributes at once. The study included an analysis of the corresponding feature space diagrams, whose axes represent the different feature channels; such diagrams allow one to understand what makes objects different. Solutions to more complex design problems, for instance making several objects easily searchable at the same time, are based on this analysis.
Fig. 14.4: The graphic and the corresponding feature space diagram (axes: length, color and orientation; the vertical axis gives the number of elements with the same feature values).
Fig. 14.4 presents a graphic composed of visual elements that are segments, together with the corresponding diagram, in which the three feature channels are represented by axes of length, color and orientation. The diagram characterizes the graphic by the number of congruent visual elements for each combination of values of the three features. The graphic in Fig. 14.5 has equipotent sets of congruent segments: it consists of 48 segments forming 16 sets of congruent elements.
Fig. 14.5: The graphic with 16 sets of congruent elements.
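A feature space diagram of this kind can be computed by grouping elements by their feature values and counting each group. The sketch below uses made-up segment data chosen only to reproduce the counts quoted for Fig. 14.5 (48 segments in 16 equipotent sets); the concrete feature values are assumptions.

```python
from collections import Counter
from itertools import product

# Sketch of a feature space diagram: each segment is described by three
# feature channels, and congruent elements share the same value in all
# of them.  The diagram counts congruent elements per point in feature
# space -- the information plotted in Figs. 14.4 and 14.5.

def feature_space(segments):
    """Count congruent elements for every (length, color, orientation)
    point in feature space."""
    return Counter(segments)

# Hypothetical graphic with 16 equipotent sets of congruent segments,
# 3 segments each, i.e. 48 segments in total (4 lengths x 2 colors x
# 2 orientations).
lengths = [1, 2, 3, 4]
colors = ["black", "grey"]
orientations = [0, 90]
segments = [seg for seg in product(lengths, colors, orientations)
            for _ in range(3)]

diagram = feature_space(segments)
print(len(segments), len(diagram))  # 48 segments, 16 congruent sets
print(set(diagram.values()))        # every set has the same size, 3
```

Grouping by feature tuple is exactly what makes the diagram useful to a designer: any tuple with a count of one is a candidate pop-out target, while equal counts (as here) indicate no privileged element.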
The designer usually creates a complex visual object whose elements can differ in color, size, shape, texture, and orientation. The challenge is to make a visual object with more than three elements rapidly show the pop-out effect. In Ware's opinion, creating a display containing more than eight visual elements that all support pop-out searching is probably impossible. Fig. 14.6 presents seven visual elements, each of which is independently searchable.
Fig. 14.6: Seven visual elements with pop-out effect.
14.2.3 Visibility enhancement with the use of animation

Computers offer new possibilities for generating and manipulating graphics. Just as hypertext differs from text, graphic design on the computer need not be a straight conversion of graphic design on paper. As discussed above, human perception is now described as "active vision": perception is understood as a dynamic process of constructing images. It is thus not restricted to perceiving objects, and it is often aided by computer tools enhancing
and extending a user's mind. A computer screen is an example of such a tool, providing a platform for mental and external manipulation [7]. The viewer's attention is the goal of visual communication, and animation effectively supports an orienting response: animating objects is a method of visibility enhancement. In ECHOES, the serious game considered here, the virtual agent who is a partner to the child with ASC uses the language of signs and symbols called Makaton to support spoken language. For example, "Yes" is accompanied by a head nod and "Good job" by a thumbs-up. More complex signs need sequences of agent gestures; for instance, "Your turn" is indicated by the hand held in a fist and by the base of the hand pointing towards the person being addressed. The agent can also perform a number of positive facial expressions, implemented by changes in the lips and eyebrows and accompanied by body gestures corresponding to emotions. It should be noted that we rapidly become habituated to simple motion [5]. According to Hillstrom and Yantis [8], the objects which most powerfully elicit the orienting response are not objects which move but objects that emerge into the visual field. An example of key frames for an emergent object is shown in Fig. 14.7.
Fig. 14.7: Key frames for an emergent object.
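The key-frame idea behind an emergent object can be illustrated with a few lines of code. The sketch below is not the ECHOES implementation: the key-frame values and the choice of scale and opacity as the animated properties are assumptions made for illustration.

```python
# Minimal key-frame sketch: an "emergent" object is animated by
# interpolating its scale and opacity between key frames, so that it
# enters the visual field rather than merely moving within it.

KEYFRAMES = [            # (time in seconds, scale, opacity)
    (0.0, 0.0, 0.0),     # object absent
    (0.3, 0.6, 0.5),     # emerging
    (0.6, 1.0, 1.0),     # fully present
]

def sample(t, keyframes=KEYFRAMES):
    """Linearly interpolate (scale, opacity) at time t."""
    if t <= keyframes[0][0]:
        return keyframes[0][1:]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1:]
    for (t0, s0, a0), (t1, s1, a1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)
            return (s0 + u * (s1 - s0), a0 + u * (a1 - a0))

print(sample(0.0))   # (0.0, 0.0) -- not yet in the visual field
print(sample(0.45))  # halfway between the last two key frames
print(sample(1.0))   # (1.0, 1.0) -- fully emerged
```

A renderer would call `sample` once per frame; because the object appears rather than moves, the animation exploits the orienting response described by Hillstrom and Yantis rather than simple motion, to which viewers quickly habituate.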
14.2.4 Conclusion

Effective visual communication is an integral part of designing serious games, requiring logical thinking and problem-solving strategies. This section has presented some visual communication issues from the perspective of the new model of perception. The goal of visual communication is the viewer's attention, and a necessary condition for it is a high level of continuous interest. It is also the first step towards opening players up to previously unexplored avenues of problem solving in real life.
References

[1] Castel AD, Pratt J and Drummond E. The effects of action video game experience on the time course of inhibition of return and the efficiency of visual search. Acta Psychologica. 2005;119:217–230.
[2] Bernardini S, Porayska-Pomsta K and Smith TJ. ECHOES: An intelligent serious game for fostering social communication in children with autism. Information Sciences. 2014;264:41–60.
[3] Roh CH and Lee WB. A study of the attention measurement variables of a serious game as a treatment for ADHD. Wireless Personal Communications. 2014;79:2485–2498.
[4] Kim MY and Hong YG. A single case study on a child with ADHD by the application of attention improvement training composed of play and games. The Korean Journal of Elementary Counseling. 2009;8(1):15–32.
[5] Ware C. Visual Thinking for Design. Burlington, MA: Elsevier Inc.; 2008.
[6] Treisman A and Gormican S. Feature analysis in early vision: Evidence from search asymmetries. Psychological Review. 1988;95(1):15–48.
[7] Grabska E, Ślusarczyk G and Szłapak M. Animation in art design. In: Gero JS, ed. Design Computing and Cognition’04. Dordrecht, Netherlands: Springer; 2004, pp. 317–335.
[8] Hillstrom AP and Yantis S. Visual motion and attentional capture. Perception & Psychophysics. 1994;55(4):399–411.
Irena Roterman-Konieczna
14.3 Serious gaming in medicine

This subsection on serious gaming is included in the chapter devoted to simulations supporting therapy. A review of online therapeutic games was presented above. Here, we focus on two separate issues: the first concerns support for burdensome therapies in children (e.g. those requiring dialysis or frequent collection of blood samples for analysis), while the second concerns training memorization skills in elderly patients. To begin with, we should note that therapeutic gaming can be useful when the outcome of therapy depends on close cooperation between the patient and the physician, or when the therapy requires the patient to submit to a strict treatment regimen (e.g. dietary requirements). A classic example involves weight loss strategies, which critically depend on the patient’s attitude and perseverance. The type of gaming discussed in this chapter cannot be applied when the course of the disease does not depend on the patient’s actions – even though any form of emotional support and encouragement may bring medical benefits.
14.3.1 Therapeutic support for children

Treating children – especially in an inpatient scenario – seems quite straightforward. Children like to win and most of them play video games, so the language of therapeutic gaming should be immediately familiar to them. The outcome of therapy – especially when self-restraint is called for – depends to a great extent on the patient’s eagerness to defeat the “opponent” (in this instance, the disease). When dealing with a child, the need for restraint should be supported by emotional arguments, since appealing to reason alone is often insufficient. A game which, by its nature, appeals to emotions – such as the need to defeat one’s opponent – promotes a positive approach to the entire therapeutic process. The goal of
Fig. 14.8: Example of serious gaming aimed at child patients. (a) Simple graph presenting the current status, along with the patient’s history in a coordinate system (results of clinical examinations). (b) The same chart presented in a form which appeals to the child’s imagination. The goal is to reach “home” (which reflects the natural wishes of children being treated on a hospital ward). Original concept by Kinga Juszka.
serious gaming is attained when clinical indicators (concentrations of certain substances in the patient’s blood or urine) approach or reach their respective normal ranges. These ranges define the conditions for a “full victory” (Fig. 14.8). The child patient usually begins in a state which is distant from the desired outcome, and successful therapy requires the patient to overcome behavioral obstacles. Whenever progress is observed, the child is rewarded with access to a game whose duration corresponds to the magnitude of the medical improvement. Upon reaching the final goal, the child is offered a wide range of games which can be played over an arbitrarily long period of time. One typical example is Formula 1 racing, with the duration of the gaming session dependent on the outcome of therapy. Having reached their goal, players may freely select their car model and continue playing for as long as they wish.

The diagram presented in Fig. 14.8 (a) is not particularly interesting to a child. To maximize effectiveness, gamification of therapy must appeal to the child’s imagination – hence the simple diagram (Fig. 14.8 (a)) is transformed into a scene (Fig. 14.8 (b)) with the “home” acting as the desired final state (this concept was suggested by Kinga Juszka, a student of applied computer science at the Department of Physics, Astronomy and Applied Computer Science of the Jagiellonian University). The player’s icon, representing the current state, traces a path across the meadow, keeping the child’s mind off the hardships associated with medical therapy. Positive results of such games are particularly evident in hospital wards, where the competitive attitude is reinforced by group dynamics – each success attained by one patient mobilizes the others to increase their own efforts.

Another interesting concept involving gaming as a form of reward has been proposed by Anna Chyrek (a student at the Department of Biotechnology of the JU).
In this case, the game assumes the form of a jigsaw puzzle (see Fig. 14.9).
Fig. 14.9: Example of a puzzle game where the duration of the session depends on the observed progress of therapy. Left-hand image: initial chaotic state; central image: result of a brief gaming session; right-hand image: result of unhindered access to the game, awarded for reaching the desired medical result (expressed by the appropriate indicators). Original concept by Anna Chyrek.
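The reward mechanic described above – play time proportional to clinical improvement – can be sketched as follows. All names, ranges and the minutes-per-unit factor are invented for illustration; they are not taken from the actual system:

```python
# Illustrative sketch only (indicator names, ranges and the conversion
# factor are assumptions): clinical indicators are mapped to a distance
# from the "healthy area", and the child's reward - minutes of free play -
# grows with the improvement since the previous result.

def distance_from_healthy(value, low, high):
    """How far a single indicator lies outside its normal range (0 = inside)."""
    if value < low:
        return low - value
    if value > high:
        return value - high
    return 0.0

def reward_minutes(former, current, ranges, minutes_per_unit=5.0):
    """Play time earned: proportional to the reduction in total distance.

    former/current: dicts of indicator values, e.g. {"A": 7.2, "B": 3.1}
    ranges: dict of (low, high) normal ranges per indicator.
    """
    def total(values):
        return sum(distance_from_healthy(values[k], *ranges[k]) for k in ranges)
    improvement = total(former) - total(current)
    return max(0.0, improvement * minutes_per_unit)
```

When the current result lies entirely within the healthy ranges, the total distance drops to zero; at that point the game could switch to the unlimited-play “full victory” mode described above.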
14.3.2 Therapeutic support for the elderly

A team of experts at the Institute of Psychology of Kazimierz Wielki University in Bydgoszcz has devised a therapy which aims to improve memorization skills in elderly people as they go about their daily activities. In this case, memory exercises focus on a set of questions associated with a museum trip, a meeting with friends and preparations for a visit by one’s grandchildren. The questions themselves concern cooking recipes, places visited during a walk, or interpersonal relations between the invited friends. “Success” depends on the final score, which is displayed at the end of the session and reflects the patient’s memory skills on a given day. Some concerns raised with regard to this strategy point out that the results may not necessarily reflect improved memory but may simply follow from repetitive execution of similar tasks. Nevertheless, any form of memorization helps train the patient’s mental faculties and may have beneficial effects. Fig. 14.10 presents the selection of foodstuffs required by a given recipe. Clicking on a product displays a pop-up form where the user is asked to provide the required quantity. Correct answers (e.g. the correct number of tomatoes) increase the user’s final score.
Fig. 14.10: Purchasing products needed to prepare a dish (vegetable salad). (a) product selection; (b) quantity input box.
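The scoring rule described above – one point per correctly remembered quantity – can be sketched as follows; the recipe contents below are invented for illustration and are not taken from the Bydgoszcz system:

```python
# A minimal sketch of the scoring mechanic described above: each product
# clicked opens a quantity prompt, and a correct quantity adds one point
# to the final score. The recipe itself is hypothetical.
RECIPE = {"tomato": 3, "cucumber": 2, "onion": 1}  # assumed salad recipe

def score_answers(answers, recipe=RECIPE):
    """Count how many products were entered with the correct quantity."""
    return sum(1 for product, qty in answers.items()
               if recipe.get(product) == qty)
```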
Both presented examples should be seen as a very brief introduction to the immense potential of gaming strategies in the treatment of medical conditions – such as mental disabilities, or any therapy which critically depends on cooperation between the patient and the physician. A classic example involves weight loss strategies, where taming one’s food cravings may be rewarded with games appropriate to the patient’s age, gender and preferences. Any serious therapeutic gaming should include a physician’s interface presenting aggregate results: the physician should be able to review the progress of therapy and generate concise reports, including charts and tables.
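A minimal sketch of such an aggregate view, under assumed data shapes (session records as patient–score pairs; none of this is taken from an actual system), might reduce each patient’s history to a few summary numbers for the physician’s report:

```python
# Sketch of a physician-facing summary: collapse each patient's session
# scores (in chronological order) into latest, best and mean values.
from statistics import mean

def aggregate_report(sessions):
    """sessions: list of (patient_id, score) tuples in chronological order."""
    per_patient = {}
    for patient, score in sessions:
        per_patient.setdefault(patient, []).append(score)
    return {p: {"latest": s[-1], "best": max(s), "mean": round(mean(s), 1)}
            for p, s in per_patient.items()}
```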
Index

3D graphics 235, 238, 240, 254, 262, 263, 265
Additive Manufacturing 221
aggregate results 327
anatomical atlas 67
animation 71, 236–239, 255, 259
applications of biocybernetic models 99
automatic control 31, 34, 40
big data processing 280
biochemical communication hubs 24
biochemical pathways 12
biological network databases 5
biological networks 10
biomedical applications 214
biotic games 307
bones 102, 104–107, 109
cancer 197, 199, 200
cardiac anomalies 75
chemotherapy 202, 203
clinical inter-professional education 144
clinical reasoning 127
cognitive computing 280
cognitive training with games 302
compressive stress 104–108
computer animation 322, 323
computer-assisted design 221
computer model 235
computer simulation 254
conformational space 58
conscious perceptions 77, 78, 91
cooperation 35, 41, 43, 45
coordination 34, 35, 41, 44, 45
crouching 106, 107
curricular integration 132
data acquisition 211
data processing 212
decision systems 278
DICOM 211, 213, 217, 281
Digital Light Processing 224
digitalization 210
disease self-management with games 301
DNA replication 203
drug administration regime 203, 205
E-learning 125
electrical impulse 66
Electron Beam Melting 228
elementary event 57, 58, 59
encoding of signal 36, 37
equilibrium 31, 32
exergames 309
experiential learning theory 126
exponential growth 198, 199, 204, 205
feedback inhibition 31, 41, 43
feedback loop 31–34, 36–38, 40, 41, 44, 48
figure/ground separation 77–79, 85, 87, 92
fitness games 309
full victory 326
game-based learning 297
game-informed learning 131
gamification 326
gamma oscillations 77, 78, 91
gap junctions 77–79, 81–83, 85, 87, 89–93
gene expression sets 7
gene profiles 8
graph 10
graphic design 318–323
heart contractions 66
heart rate 71
heart textures 70
hormonal signalling 33, 37, 38
immersive clinical environments 141
in silico 235
information entropy 58
informationless 60
initial conditions 198, 199, 202, 204
integrate and fire neurons 77, 78, 81, 91, 92
interactive patient scenario 123
interactive virtual patients 142
internal and external geometries 211
investment in energy 60
investment in information 61
jigsaw puzzle as the award 326
knowledge representation 277
lateral coupling 77–79, 81–85, 91–93
liquid-based technology 222
machine learning 279
medical applications 230
medical education 165
medical image processing 181, 182, 188
medical imaging 65
medical professional training with games 307
medical simulations 307
memorization skills 324, 327
message delivery 58
mobile application 236, 239, 243, 250, 254, 265
mobile zone 77, 78, 91, 92
motion 99, 114, 115
motion capture 100
multimedia learning 303
muscles 102, 110, 111, 113–115
network analysis 16
network models 14
networks via literature mining 15
one bit 57
opponent 324
organ simulation 66
organism and cell relation 31, 33–37, 41
pain management with games 300
patient’s medical record 275
pedagogy 163
pelvic tumor reconstruction 230
persuasive games 308
pervasive sensing and monitoring 273
phase of the cycle 66
pop-out effect 319, 320
probability 57
proliferating cells 197–201, 203, 204
prosocial games 306
quiescent cells 200, 201, 203, 204
Ramachandran plot 58
Rapid Prototyping packages 219, 220
reconstruction following blowout fracture 232
rehabilitation with games 301
resistive coupling 78, 82, 83, 92
reverse engineering 207, 208
script concordance test 131
Selective Laser Melting 227
Selective Laser Sintering 227
self-restraint 324
serious games 295
serious games for health 296
Shannon – information quantity 57
shear stress 105
signal enhancement 37
skin mole 181, 184, 185, 190
solution by energy 58, 61
solution by information 58, 60
spiking neurons 77–82, 92
standards 132
static and dynamic models 65
steady state 31, 32
steering signals 34, 38, 39, 40
stereolithography 222
structure-function 31, 33–37, 40, 43, 45
subject 101–104, 106, 107, 111, 117
surgical education 235
surgical simulator 235, 262–264, 267
tailored health communication 308
team assessment 173
teleconsultation 287
telemedicine 272
telemonitoring 282
therapeutic support 325
therapies in children 325
therapy with games 300, 304
thickness evaluation 180–185
tissue engineering 229
TNM system 183
Topotecan 203
transmedia learning 304
treatment regimen 324
tumor 197, 200, 204, 205
vascular structures 186, 187, 190
velocity 114–118
video game-related health problems 309
virtual agent 318, 323
virtual clinical worlds 141
virtual patient system 129
virtual patients 121, 141, 307
virtual reality 235, 263
virtual worlds 140
visual attention 318–320, 323
visual communication 318–321, 323
visual perception 77, 78
visual queries 319
visual search strategy 319–322
visual space 321
visualization 248, 254, 255, 257, 262, 264