Advances in Experimental Medicine and Biology 1260
Paul M. Rea Editor
Biomedical Visualisation Volume 8
Advances in Experimental Medicine and Biology, Volume 1260

Series Editors:
Wim E. Crusio, Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, CNRS and University of Bordeaux, Pessac Cedex, France
Haidong Dong, Departments of Urology and Immunology, Mayo Clinic, Rochester, MN, USA
Heinfried H. Radeke, Institute of Pharmacology & Toxicology, Clinic of the Goethe University Frankfurt Main, Frankfurt am Main, Hessen, Germany
Nima Rezaei, Research Center for Immunodeficiencies, Children's Medical Center, Tehran University of Medical Sciences, Tehran, Iran
Advances in Experimental Medicine and Biology provides a platform for scientific contributions in the main disciplines of biomedicine and the life sciences. This series publishes thematic volumes on contemporary research in the areas of microbiology, immunology, neurosciences, biochemistry, biomedical engineering, genetics, physiology, and cancer research. Covering emerging topics and techniques in basic and clinical science, it brings together clinicians and researchers from various fields. Advances in Experimental Medicine and Biology has been publishing exceptional works in the field for over 40 years, and is indexed in SCOPUS, Medline (PubMed), Journal Citation Reports/Science Edition, Science Citation Index Expanded (SciSearch, Web of Science), EMBASE, BIOSIS, Reaxys, EMBiology, the Chemical Abstracts Service (CAS), and Pathway Studio. 2019 Impact Factor: 2.450. 5 Year Impact Factor: 2.324.
More information about this series at http://www.springer.com/series/5584
Editor Paul M. Rea College of Medical, Veterinary and Life Sciences University of Glasgow Glasgow, UK
ISSN 0065-2598 ISSN 2214-8019 (electronic) Advances in Experimental Medicine and Biology ISBN 978-3-030-47482-9 ISBN 978-3-030-47483-6 (eBook) https://doi.org/10.1007/978-3-030-47483-6 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
The utilisation of technologies in biomedical and life sciences, medicine, dentistry, surgery, veterinary medicine and surgery, and the allied health professions has grown at an exponential rate over recent years. The way we view and examine data now is significantly different from what was done perhaps 10 or 20 years ago. With the growth, development and improvement of imaging and data visualisation techniques, the way we are able to interact with data is much more engaging than it has ever been. These technologies have been used not only to enable improved visualisation in the biomedical fields, but also to change how we engage our future generations of practitioners when they are students within our educational environment.

Never before have we had such a wide range of tools and technologies available to engage our end-stage user. Therefore, it is a perfect time to bring this together to showcase and highlight the great investigative work that is going on globally. This book will truly showcase the amazing work that our global colleagues are investigating and researching, ultimately to improve student and patient education, understanding and engagement. By sharing best practice and innovation we can truly aid our global development in understanding how best to use technology for the benefit of society as a whole.

Glasgow, UK
Paul M. Rea
Acknowledgements
I would like to truly thank every author who has contributed to the eighth volume of Biomedical Visualisation. The lead authors are all now graduates of the MSc Medical Visualisation and Human Anatomy, a postgraduate taught degree run jointly by the School of Life Sciences within the College of Medical, Veterinary and Life Sciences at the University of Glasgow and the School of Simulation and Visualisation, The Glasgow School of Art. Thank you also to our wonderful colleagues locally and nationally who supervised these projects and made this volume possible. By sharing our innovative approaches, we can truly benefit students, faculty, researchers, industry and beyond in our quest for the best uses of technologies and computers in the fields of life sciences, medicine, the allied health professions and beyond. In doing so, we can truly improve our global engagement and understanding about best practice in the use of these technologies for everyone. Thank you!

I would also like to extend a personal note of thanks to the team at Springer Nature who have helped make this possible. The team I have been working with have been so incredibly kind and supportive, and without you, this would not have been possible. Thank you kindly!
About the Book
Following on from the success of the first seven volumes, Biomedical Visualisation, Volume 8 demonstrates the numerous options we have in using technology to enhance, support and challenge education. The chapters presented here highlight the wide range of tools, techniques and methodologies we have at our disposal in the digital age. These can be used to image the human body and to educate patients, the public, faculty and students in the many ways cutting-edge technologies can be used to visualise the human body and its processes, to create and integrate platforms for teaching and education, and to visualise biological structures and pathological processes.

The first six chapters in this volume show the wide variety of ways that digital technologies and visualisation techniques can be utilised and adopted in the educational setting. This ranges from body painting, clinical neuroanatomy, histology and veterinary anatomy through to real-time visualisations and the uses of digital and social media for anatomical education. The last four chapters demonstrate the diversity of technology, from the use of differing realities and 3D capture in medical visualisation to how remote visualisation techniques have developed. Finally, the volume concludes with an analysis of image overlays and augmented reality and what the wider literature says about this rapidly evolving field.
Contents
1 Enhancing Teaching in Biomedical, Health and Exercise Science with Real-Time Physiological Visualisations ... 1
Christian Moro, Zane Stromberga, and Ashleigh Moreland

2 The Evolution of Educational Technology in Veterinary Anatomy Education ... 13
Julien Guevar

3 Body Painting Plus: Art-Based Activities to Improve Visualisation in Clinical Education Settings ... 27
Angelique N. Dueñas and Gabrielle M. Finn

4 TEL Methods Used for the Learning of Clinical Neuroanatomy ... 43
Ahmad Elmansouri, Olivia Murray, Samuel Hall, and Scott Border

5 From Scope to Screen: The Evolution of Histology Education ... 75
Jamie A. Chapman, Lisa M. J. Lee, and Nathan T. Swailes

6 Digital and Social Media in Anatomy Education ... 109
Catherine M. Hennessy and Claire F. Smith

7 Mixed Reality Interaction and Presentation Techniques for Medical Visualisations ... 123
Ross T. Smith, Thomas J. Clarke, Wolfgang Mayer, Andrew Cunningham, Brandon Matthews, and Joanne E. Zucco

8 Pores, Pimples and Pathologies: 3D Capture and Detailing of the Human Skin for 3D Medical Visualisation and Fabrication ... 141
Mark Roughley
9 Extending the Reach and Task-Shifting Ophthalmology Diagnostics Through Remote Visualisation ... 161
Mario E. Giardini and Iain A. T. Livingstone

10 Image Overlay Surgery Based on Augmented Reality: A Systematic Review ... 175
Laura Pérez-Pachón, Matthieu Poyade, Terry Lowe, and Flora Gröning
Editor and Contributors
About the Editor

Paul M. Rea is a Professor of Digital and Anatomical Education at the University of Glasgow. He is qualified with a medical degree (MBChB), an MSc (by research) in craniofacial anatomy/surgery, a PhD in neuroscience, a Diploma in Forensic Medical Science (DipFMS) and an MEd with Merit (Learning and Teaching in Higher Education). He is an elected Fellow of the Royal Society for the Encouragement of Arts, Manufactures and Commerce (FRSA), an elected Fellow of the Royal Society of Biology (FRSB), a Senior Fellow of the Higher Education Academy, a professional member of the Institute of Medical Illustrators (MIMI) and a registered medical illustrator with the Academy for Healthcare Science.

Paul has published widely and presented at many national and international meetings, including invited talks. He sits on the Executive Editorial Committee for the Journal of Visual Communication in Medicine, is Associate Editor for the European Journal of Anatomy and reviews for 25 different journals/publishers. He is the Public Engagement and Outreach lead for anatomy, coordinating collaborative projects with the Glasgow Science Centre, NHS and Royal College of Physicians and Surgeons of Glasgow. Paul is also a STEM ambassador and has visited numerous schools to undertake outreach work.

His research involves a long-standing strategic partnership with the School of Simulation and Visualisation, The Glasgow School of Art. This has led to multi-million pound investment in creating world-leading 3D digital datasets to be used in undergraduate and postgraduate teaching to enhance learning and assessment. This successful collaboration resulted in the creation of the world's first taught MSc Medical Visualisation and Human Anatomy, combining anatomy and digital technologies, which is also accredited by the Institute of Medical Illustrators. It has created college-wide, industry, multi-institutional and NHS research-linked projects for students. Paul is the Programme Director for this degree.
Contributors

Scott Border Centre for Learning Anatomical Sciences, University Hospital Southampton, Southampton, UK
Jamie A. Chapman College of Health and Medicine, Tasmanian School of Medicine, University of Tasmania, Hobart, TAS, Australia
Thomas J. Clarke IVE: Australian Research Centre for Interactive and Virtual Environments, University of South Australia, Adelaide, Australia; Wearable Computer Laboratory, University of South Australia, Adelaide, Australia
Andrew Cunningham IVE: Australian Research Centre for Interactive and Virtual Environments, University of South Australia, Adelaide, Australia; Wearable Computer Laboratory, University of South Australia, Adelaide, Australia
Angelique N. Dueñas Health Professions Education Unit, Hull York Medical School, University of York, York, UK
Ahmad Elmansouri Centre for Learning Anatomical Sciences, University Hospital Southampton, Southampton, UK
Gabrielle M. Finn Health Professions Education Unit, Hull York Medical School, University of York, York, UK
Mario E. Giardini Department of Biomedical Engineering, University of Strathclyde, Glasgow, Scotland, UK
Flora Gröning School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK
Julien Guevar Division of Clinical Neurology, Vetsuisse Faculty, University of Bern, Bern, Switzerland
Samuel Hall Neurosciences Department, Wessex Neurological Centre, University Hospital Southampton, Southampton, UK
Catherine M. Hennessy Department of Anatomy, Brighton and Sussex Medical School, University of Sussex, Brighton, UK
Lisa M. J. Lee Department of Cell and Developmental Biology, University of Colorado School of Medicine, Aurora, CO, USA
Iain A. T. Livingstone NHS Forth Valley, Larbert, Scotland, UK
Terry Lowe School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK; Head and Neck Oncology Unit, Aberdeen Royal Infirmary (NHS Grampian), Aberdeen, UK
Brandon Matthews IVE: Australian Research Centre for Interactive and Virtual Environments, University of South Australia, Adelaide, Australia; Wearable Computer Laboratory, University of South Australia, Adelaide, Australia
Wolfgang Mayer AI and Software Engineering Laboratory, University of South Australia, Adelaide, Australia
Ashleigh Moreland School of Health and Biomedical Sciences, RMIT University, Melbourne, Australia
Christian Moro Faculty of Health Sciences and Medicine, Bond University, Gold Coast, Australia
Olivia Murray Edinburgh Medical School: Biomedical Sciences (Anatomy), University of Edinburgh, Edinburgh, UK
Laura Pérez-Pachón School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK
Matthieu Poyade School of Simulation and Visualisation, Glasgow School of Art, Glasgow, UK
Mark Roughley Liverpool School of Art and Design, Liverpool John Moores University, Liverpool, UK
Claire F. Smith Department of Anatomy, Brighton and Sussex Medical School, University of Sussex, Brighton, UK
Ross T. Smith IVE: Australian Research Centre for Interactive and Virtual Environments, University of South Australia, Adelaide, Australia; Wearable Computer Laboratory, University of South Australia, Adelaide, Australia
Zane Stromberga Faculty of Health Sciences and Medicine, Bond University, Gold Coast, Australia
Nathan T. Swailes Department of Anatomy and Cell Biology, Roy J. and Lucille A. Carver College of Medicine, The University of Iowa, Iowa City, IA, USA
Joanne E. Zucco IVE: Australian Research Centre for Interactive and Virtual Environments, University of South Australia, Adelaide, Australia; Wearable Computer Laboratory, University of South Australia, Adelaide, Australia
Enhancing Teaching in Biomedical, Health and Exercise Science with Real-Time Physiological Visualisations Christian Moro, Zane Stromberga, and Ashleigh Moreland
Abstract
Muscle physiology constitutes a core curriculum for students and researchers within biomedical, health and exercise science disciplines. The variations between skeletal and smooth muscle, the mechanisms underlying excitation–contraction coupling, as well as the relationships between muscle anatomy and physiology are commonly taught from illustrations, static models or textbooks. However, this does not necessarily provide students with the required comprehension surrounding the dynamic nature of muscle contractions or neuromuscular activities. This chapter will explore alternative methods of visualising skeletal and smooth muscle physiology in real time. Various recording hardware, isolated tissue bath experiments, neurophysiological applications and computer-based software will be discussed to provide an overview of the evidence-based successes and case studies for using these techniques when assisting students with their understanding of the complex mechanisms underlying muscle contractions.

Keywords
Contractile activity · Electromyography · Isolated tissue bath · LabChart · Muscle contraction · Muscle physiology · Neuromuscular · Skeletal muscle · Smooth muscle
1.1 Muscle Visualisations in Biomedical, Health and Exercise Science
Understanding the mechanical and electrical properties of muscles is essential in both clinical practice and research across biomedical, health and exercise science disciplines. This information is typically delivered via two-dimensional (texts, diagrams/illustrations) or static three-dimensional models. While this approach helps students to remember and understand the basic anatomy of a muscle, it fails to promote higher-order thinking skills like applying, analysing or evaluating how muscles function. As discussed by Taylor and Hamdy (2013), adult learning should aim to progress from having knowledge about a topic, to then being able to make sense of
that knowledge, and finally being able to apply that knowledge. According to the theory of constructive learning, students learn best when they are able to construct new knowledge by engaging in activities that build on prior knowledge (Brandon and All 2010). Therefore, student academic achievement can be promoted by providing a learning environment that is based on the constructivist theory, where the learner is able to regulate their learning and promote engagement (de Kock et al. 2004). As such, in the context of human anatomy and physiology, it is beneficial to use exploratory, hands-on techniques during laboratory classes to consolidate this knowledge via guided discovery or experiential learning. Furthermore, laboratory practicals directly engage students in the process of scientific discovery (Sweeney et al. 2004), providing an enhanced learning experience that cannot be replicated in a traditional didactic lecture environment. This is especially beneficial for the overall learning experience, as many of the challenging concepts surrounding muscle physiology can be immediately observed by the learner.

Coates (2009) identified several elements that contribute to student engagement, including active learning, a supportive environment, and student/staff interactions. Further, concepts like 'transformative learning experiences' and 'student belonging' are at the forefront for tertiary institutions. As such, pedagogical approaches that can facilitate this at the local course/subject level and broader institution level are a priority. Active learning occurs when the student is a stakeholder in their own learning experiences, and requires purposeful engagement with the learning material to enhance their knowledge.
This shifts the dynamic of the teaching environment from the instructor ‘delivering’ information in their preferred manner and the student passively consuming it, to an environment where instructors facilitate engagement through adaptable and semi-structured learning experiences that encourage collaboration, problem-solving, thought-sharing and deep discussion of topics. However, to motivate students to engage with the content, consideration needs to be given to providing learning opportunities that can appeal to
all students entering a class, regardless of their learning styles and preferences, whether they have a general intelligence (Burkart et al. 2017), or simply exhibit a bundle of skills needed to succeed in higher education (Gardner 2017). Learning activities that provide variety in the curricula, which engage a range of modalities and methods for learning, are usually preferred by students (Lujan and DiCarlo 2006) and even assist students in their transition from school-based content into the more applied and professionally focused university curricula (Moro and McLean 2017). As such, the straight provision of chalk-and-talk instruction appears best replaced by engaging interactive activities.

Supportive learning environments align with student belonging by nurturing students and evoking feelings of legitimation within the institutional cohort (Coates 2009), which is known to enhance overall academic performance (Multon et al. 1991). This is facilitated by active learning modalities because of the necessity for collaborative teamwork, and the intrinsic sense of achievement that is inherent when students have achieved success in a learning task. Similarly, increased student and staff interaction, such as that which occurs during these types of visualisation activities, has been shown to lead to higher levels of student satisfaction and academic achievement (Richardson and Radloff 2014).

It is evident that a significant challenge for the modern-day educator is that they must not only share their knowledge and expertise on content matter, but do so in a way that most effectively promotes learning for the student. Active learning modalities should be implemented to build upon prior knowledge from the current course/subject, with the content scaffolded throughout the entire program/degree.
Real-time visualisation of theoretical concepts in an exploratory style practical lesson can offer a solution to this in biomedical, health and exercise science disciplines. As such, this chapter will detail several real-time visualisation methodologies that can facilitate student understanding of the mechanical and electrical properties of skeletal and smooth muscle.
1.2 Smooth Muscle Versus Skeletal Muscle – Visualising the Difference
Comparing the differences between the mechanisms of contraction within each muscle type can be challenging for both undergraduate and graduate students. Skeletal muscle contains striated muscle cells that contract under voluntary control. Smooth muscles, on the other hand, are found in visceral organs and in blood vessels, where they perform their functions under involuntary control. To better understand the different properties of muscle contraction, the use of animal muscle remains a core component in many biomedical, medical, exercise and health professional courses. Animal tissue is easy to obtain for most universities, and laboratory practical sessions can be specifically structured to investigate differences in the properties and responses of tissues in real time. For example, the presence of myoglobin and the increased store of oxygen creates a much redder appearance in skeletal muscle when compared to the pale look of the smooth muscle. However, the colour of skeletal muscle can vary depending on its location and specific function.

The difference between fast- and slow-twitch skeletal muscle cells and their power and fatigue properties is also an important concept to grasp. The two categories refer to the contractile kinetics, with the fast-twitch muscle contracting and relaxing much faster than the slow-twitch muscle (Head and Arber 2013). They also differ in terms of the metabolic pathway in which energy in the form of ATP is generated for muscle contractions. Slow-twitch cells use aerobic pathways (involving oxygen) to generate ATP, whereas fast-twitch cells predominantly use anaerobic (lacking oxygen) pathways to generate ATP. Through observation, fast-twitch fibres appear paler in colour compared to slow-twitch fibres that are much darker (visualise the pale colour of a chicken breast compared to the darker colour of the leg muscle).
Slow-twitch muscles are predominantly used to maintain posture and for endurance activities, such as running a marathon, whereas fast-twitch muscles are used in
activities that require greater speed and power for a short period of time, such as sprinting or performing power exercises like throwing a javelin. Under the microscope, skeletal muscle looks striated, with long multinucleated cells containing very regular contractile elements. In smooth muscle, the cells often form sheets of uninucleate spindle-shaped cells. This appearance can then link to the contractile functions of the tissue, with real-time measurements and observations of the various forms of muscle useful for comprehension. As an alternative to many static ways to learn, real-time recording has become commonplace in many anatomical and physiological curricula (Anderson et al. 2013). Other approaches for visualising this activity include the use of virtual dissection tables (Periya and Moro 2019) and other modes in the ever-advancing methods of teaching in anatomical education (Papa et al. 2019).
1.3 Visualising the Activity of Isolated Muscles
Key concepts surrounding the differences between skeletal and smooth muscle involve the force and speed of contraction. However, it is challenging for students to understand just how these are different by simply using numbers and measurements, or still images of experimental traces from textbooks or research articles. By allowing the learner to see muscle contractions in a laboratory setting, they can visualise the speed, force and other parameters in real time. This can be highly beneficial when comparing different muscle types to their functions, or for experiencing variations between the contractile velocity of skeletal and smooth muscles. The use of real-time recording can be quite simple and particularly effective using isolated tissue baths.

The isolated tissue bath system is one of the primary research tools in any physiology and pharmacology laboratory to investigate in vitro tissue preparations (Moro et al. 2011; Scheindlin 2001). It has allowed scientists across various disciplines to characterise different receptor types involved in both healthy and diseased states
by measuring isometric contractions (Moro et al. 2013, 2016). That, in turn, has provided targets for pharmaceutical therapy for numerous disorders, including hypertension, diabetes and bladder dysfunction.

There are several advantages to using tissue baths in both scientific experiments and as a tool to visualise muscle contractions to enhance teaching practices. First, the contractions can be observed in real time, which allows for planning of the next steps or troubleshooting problems as they arise while the experiment unfolds (Jespersen et al. 2015). Second, the tissue is isolated from surrounding structures, which provides opportunities to visualise the contractile activity of the specific tissue of interest in response to the pharmaceutical agent. Disadvantages of this approach include the possibility of tissue damage during the preparation or mounting process, which can influence the contractile responses observed. Furthermore, the period of time in which the tissue remains viable and capable of eliciting responses to stimulants varies between tissue types and is usually limited to several hours. Therefore, laboratory practicals that incorporate isolated tissue baths need to be timed accurately to ensure that the tissues are capable of producing contractile responses and exhibiting periods of relaxation.

To perform isolated tissue bath experiments, strips of smooth muscle are carefully dissected from the surrounding tissue and mounted in an isolated bath. The preparation is suspended by hooks in the middle of the bath, which is heated to body temperature (37 °C) and filled with a physiological solution in order to keep the preparation alive (Fig. 1.1). Commonly used solutions include Ringer's or Krebs-Henseleit solution, which provide nutrients for metabolism, a buffer to keep the pH constant and a supply of oxygen. The first hook is used to anchor the tissue in the bath, and the other hook is attached to an isometric force transducer. The transducer measures the changes in muscle tension (in grams [g] or millinewtons [mN]) produced in response to muscle contraction or relaxation while the muscle length remains the same. These changes are then recorded in real time using the PowerLab (ADInstruments, Castle Hill, Australia) hardware and visualised on the computer screen via LabChart v8 software (ADInstruments, Castle Hill, Australia) (Fig. 1.2).

Fig. 1.1 Left: A smooth muscle preparation suspended in an isolated tissue bath, with the base anchored to the bath and the top linked to an isometric force transducer. Right: isolated tissue baths set up in a research laboratory. (Images: C. Moro)

Fig. 1.2 A trace of a smooth muscle from the wall of the urinary bladder. This tissue contracts slowly, although it can increase tension to an amount over ten times its weight in grams. In this trace, the tissue responded to the stimulant histamine (100 μM) (Stromberga et al. 2019)

The ability of the smooth muscle to respond to pharmaceutical treatments, neurotransmitters, hormones and other chemicals makes it fundamentally different in many ways from skeletal muscle (Stromberga et al. 2020a). Although tonic contractions are usually the main focus of tension recordings (Stromberga et al. 2020b; Mitsui et al. 2019), tissue relaxation can also be observed. Unlike skeletal muscle, smooth muscle can actively relax, and this 'active relaxation' can be depicted in response to chemicals such as nitric oxide (Moro et al. 2012) or adrenoreceptor agonists (Moro et al. 2013). This means that students can visualise the unique phenomenon of smooth muscle contracting in two directions and further their comprehension of the overall mechanisms of muscle contraction and relaxation.

One other advantage is using the nerves within the skeletal and smooth muscle to induce contraction. For example, through a technique called electrical field stimulation (EFS), a small current can be passed through tissue, inducing neuronal depolarisation and neurotransmitter release (Moro and Chess-Williams 2012). This allows for a range of interventions and experiments to take place, such as identifying neurotransmitter release to examine various pathologies (McCarthy et al. 2019) or visualising the variations in axon myelination or nerve physiology within muscle tissue (Rattay 1986).

Once muscle contractile traces are developed and recorded, they can be viewed through a range of media. Although this is commonly visualised in real time on a computer monitor, these charts can later be exported for use in course lecture slides or even incorporated as real-time charts alongside models of muscles contracting through modes such as virtual and augmented reality (Moro et al. 2017a, b), other mixed reality displays (Birt et al. 2018) or touchscreen tablet devices (Morris et al. 2012). These uses demonstrate highly interactive methods for linking the more traditional style of learning in laboratories and workshops into the modern online and interactive digital anatomy and physiology curricula (Moro and Stromberga 2020; Kuehn 2018), and also allow portability of learning outside of the classroom. This type of engagement in educationally purposeful activities is also known to positively affect academic outcomes (Kuh et al. 2008) and enhance student engagement (Kala et al. 2010).
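To make the link between a recorded trace and the numbers students discuss concrete, the sketch below computes the amplitude of a contractile response from an exported tension trace. This is only an illustrative sketch: the two-column (time in seconds, tension in grams) sample layout, the 60 s pre-stimulus baseline and the `response_amplitude` helper are assumptions for this example rather than part of the LabChart or PowerLab software; the 1 g ≈ 9.81 mN conversion simply follows from standard gravity.

```python
# Sketch: quantify a smooth muscle response from an exported tension trace.
# The (time_s, tension_g) pair layout is hypothetical; real exports should be
# checked against the recording software's own export format.

G_TO_MN = 9.81  # 1 gram-force is approximately 9.81 millinewtons

def response_amplitude(samples, baseline_end_s=60.0):
    """Peak change in tension (mN) relative to the pre-stimulus baseline.

    samples: list of (time_s, tension_g) pairs.
    baseline_end_s: stimulant assumed added after this time (illustrative).
    """
    baseline = [g for t, g in samples if t < baseline_end_s]
    response = [g for t, g in samples if t >= baseline_end_s]
    if not baseline or not response:
        raise ValueError("trace does not span the baseline/response split")
    baseline_g = sum(baseline) / len(baseline)  # mean resting tension
    peak_g = max(response)                      # peak tension after stimulant
    return (peak_g - baseline_g) * G_TO_MN

# Toy trace: 1 g resting tension, rising to 3 g after the stimulant is added.
trace = [(t, 1.0) for t in range(0, 60)] + [(t, 3.0) for t in range(60, 120)]
print(round(response_amplitude(trace), 2))  # a 2 g change, reported in mN
```

In a teaching laboratory, a calculation like this lets students connect the shape of the live trace to a single comparable number, for example when contrasting the response of bladder smooth muscle with that of a skeletal muscle preparation.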
1.4 Visualising the Activity of Skeletal Muscles in Human Movement Control
A systems approach is often used to teach human anatomy theory, whereby content is packaged into modules such as the skeletal system, muscular system, nervous system and articulations. Once students know the anatomical and functional features of each system, the challenge is to understand how each contributes to producing and controlling human movement in various contexts, how the body behaves as a whole, and how it responds in healthy and pathological conditions. Many factors influence the mechanical output of muscles, including central and peripheral nervous system function and properties within the muscle itself. Understanding the role and contribution of each of these in human movement is important for students in the biomedical, health and exercise science disciplines. For example, students may need to understand concepts such as muscle coordination and motor synergies as underlying factors in human performance, or the effects of central or peripheral fatigue on motor output. They may also be interested in investigating the neural factors influencing skeletal muscle function or force output, such as neural adaptations to strength training (Weier et al. 2012); the effects of popular devices like whole-body vibration machines on strength, power or muscle activation (Cormie et al. 2006; Weier and Kidgell 2012b); or even gaining an understanding of the effects of different rehabilitation strategies on musculoskeletal pathologies, such as the use of isometric muscular contractions to reduce pain in patients with patellar tendinopathy (Rio et al. 2015). Various methodologies allow real-time visualisation for students learning about skeletal muscles or their control during human movement. These may involve recording directly from skeletal muscles, or recording from the brain during movement to understand neuromuscular control. For example, near-infrared spectroscopy (NIRS) may be used to understand local skeletal muscle haemodynamic properties such as oxygen saturation, oxygen consumption and blood flow (Ferrari et al. 2011; Jones et al. 2016), or it may be applied via a cap over the scalp to visualise in real time the brain haemodynamics involved during movement tasks requiring precise control of the timing, order and magnitude of skeletal muscle recruitment, such as human gait (Herold et al. 2017; Perrey 2014). However, the most common technique for understanding the function and innervation properties of human skeletal muscle is real-time visualisation of electrical activity using electromyography (EMG). EMG can be performed using intramuscular fine-wire electrodes or with electrodes adhered to the skin surface over the innervation zone of a target muscle.
However, because surface EMG (sEMG) is non-invasive, inexpensive and technically relatively simple, it is the preferred method for studying real-time skeletal muscle activity in educational settings.
C. Moro et al.
Recording sEMG first involves carefully preparing the skin overlying the target muscle(s) by shaving hair from the area, gently rubbing with sandpaper or an abrasive gel to remove dead skin, and thoroughly cleaning the area with isopropyl alcohol. Either wired or wireless positive and negative electrodes are then adhered to the skin over the innervation zone of the skeletal muscle of interest, and a ground electrode is adhered over a bony landmark such as the patella or clavicle. When muscles are activated, they produce an electrical signal, and the size of this signal is typically proportional to the level of muscle activity (Solomonow et al. 1990). This electrical activity is measured by the electrodes and then digitised via a biological amplifier and PowerLab hardware to be viewed in real time in LabChart software. When considering the use of continuous sEMG in practical classes, the decision to record from one muscle or multiple muscles will depend on the primary learning objective of the class. For example, the relationship between muscle force and electrical activity can easily be demonstrated by recording sEMG from one muscle and pairing this with either a force dynamometer providing input to the PowerLab or some controlled external load. As either the external load (e.g. the weight of a dumbbell) or the force applied to a dynamometer increases, the electrical activity of the muscle (i.e. visualising greater contraction) will also increase to meet the force requirements of the task. Alternatively, continuous sEMG can be recorded from several muscles simultaneously (limited by the number of channels of the specific PowerLab hardware), and since each channel shares a consistent x-axis of time, the onset and offset of each muscle's contraction can easily be determined to gain an understanding of coordination dynamics and muscle synergies. sEMG can also be used in combination with various stimulatory techniques to gain a better understanding of neuromuscular physiology.
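The "level of muscle activity" that students read off an sEMG channel is usually an amplitude envelope of the raw signal. The sketch below shows one standard way this is computed, a moving root-mean-square (RMS) of the mean-centred signal; the function name and window length are illustrative choices, not part of any LabChart workflow.

```python
import numpy as np

def emg_envelope(emg_mV, fs_hz, window_s=0.05):
    """Moving RMS envelope of a raw sEMG signal.

    The signal is mean-centred (removing DC offset), squared,
    smoothed over a short window, then square-rooted, yielding an
    amplitude envelope that tracks overall muscle activity.
    """
    x = emg_mV - np.mean(emg_mV)       # remove baseline offset
    n = max(1, int(window_s * fs_hz))  # window length in samples
    kernel = np.ones(n) / n            # moving-average kernel
    return np.sqrt(np.convolve(x * x, kernel, mode="same"))
```

Plotting the envelope next to a dynamometer channel makes the force-activity relationship described above directly visible: as the load grows, the envelope amplitude grows with it.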
For example, the primary motor cortex (M1) is a somatotopically arranged area in the frontal lobe of the brain that generates descending neural drive, transmitted predominantly via the corticospinal pathways. These pathways synapse with the alpha motor neurons in the ventral horn of the spinal cord, which contract skeletal muscles via the relevant peripheral nerves. It is possible to elicit responses in target muscles by applying transcranial magnetic stimulation (TMS) over the appropriate target muscle representation of the M1 and recording the evoked response, called a motor evoked potential (MEP; Fig. 1.3), from the skeletal muscle in real time using sEMG.

Fig. 1.3 An example visualisation of a motor evoked potential (MEP) showing time of stimulus, latency, amplitude and silent period

Even before using TMS to understand neurophysiological properties of skeletal muscle, it can be valuable to transform the two-dimensional image of a motor homunculus into observable action, giving students an introductory taste of the brain 'controlling' movement in the contralateral limbs. For example, students can easily observe movement in the upper limbs (a 'twitch' in the targeted skeletal muscle) in response to stimulation of the lateral aspect of the contralateral M1, and movement in the lower limbs in response to stimulation of the contralateral M1 closer to the longitudinal fissure. Once students understand the relationship between the central nervous system (CNS) and skeletal muscle, more complex physiological properties can be explored. To an untrained eye, a MEP is merely a squiggly line on a computer monitor, yet this squiggly line provides a wealth of information about the physiological characteristics of the neuromuscular control of movement. Key aspects of a MEP examined in a practical class include the latency period, which is the time between the delivery of the stimulus (to the M1, peripheral nerve or tendon proper) and the onset of the response (Kobayashi and Pascual-Leone 2003); the peak-
to-peak amplitude of the response, which represents the net excitability of the corticospinal pathway (Chen 2000); and the silent period (SP), which represents a period of electrical silence, or refractoriness, during which the corticospinal pathway is inhibited (Säisänen et al. 2008). Being able to identify each of these features in a sEMG trace is the first step towards understanding the neurophysiology of skeletal muscle activation. The 'motor threshold' is the lowest TMS stimulus intensity that elicits a response in a target muscle and is one way to gain information about the excitability of the corticospinal tract (Hallett 2000). Quite simply, the stimulus intensity of the TMS machine is reduced until there is no response in the sEMG recording of the skeletal muscle, and the intensity is then increased in small increments until a MEP can be observed in the sEMG trace. At any intensity below threshold, MEPs are not discernible, and at intensities at or above threshold, MEPs are consistently evoked. This is a simple procedure for teaching students the concept of a stimulation threshold and an application of the 'all-or-none' principle. The latency period represents the time it takes for skeletal muscle to respond to a stimulus, which can be influenced by factors such as the distance of the stimulus from the target muscle, the level of background voluntary muscle activity, and pathologies such as axonal demyelination or neuropathy (Kallioniemi et al. 2015). For example, if using TMS to stimulate the M1, the latency period of a MEP recorded from the biceps brachii muscle (an elbow flexor) would be approximately 13 ms, whereas the latency period of a MEP elicited in the rectus femoris (a knee extensor) would be approximately 40 ms, simply because the distance from the stimulus has increased and the response therefore takes longer to reach the target. Similarly, the effects of voluntary muscle activation can be observed in real time by stimulating the peripheral nerve that innervates a muscle while the subject is relaxed, and then repeating the procedure while the subject performs a maximum voluntary contraction. Even with the same stimulus intensity and a consistent distance from the stimulation site to the electrodes, the latency period will be shorter during a voluntary contraction due to the increased neuronal excitability induced by voluntary activation of the skeletal muscle. Other features that can be assessed in real time are the peak-to-peak amplitude of the MEP (mV) and the SP. Like latency, many factors can influence the amplitude of a MEP and the duration of the SP at a constant stimulus intensity, such as the level of background voluntary muscle activity. However, the most noteworthy observation for students is the increase in amplitude and SP duration that occurs with corresponding increases in stimulus intensity. For example, MEP amplitude increases in a sigmoidal pattern with stimulus intensity until reaching a plateau (Fig. 1.4).

Fig. 1.4 An example TMS recruitment curve demonstrating the influence of increasing stimulus intensity on the MEP amplitude. Key parameters include the threshold amplitude (MIN), plateau value (MAX), steepness of the curve (SLOPE) and the stimulus intensity where MEP amplitude is midway (MID) between MAX and MIN (V50). (Weier and Kidgell 2012a)

If paired with a force transducer, the greater amplitudes in the sEMG trace will also correspond to greater force of the muscle twitch response, representing progressive recruitment of higher-threshold motor units within the relevant motor neuron pool. Alternatively, evoked potentials can be elicited through electrical stimulation of the appropriate peripheral nerve (for example, stimulating the femoral nerve and recording sEMG from the rectus femoris muscle), or through mechanical stimulation of stretch reflex pathways, such as the knee-jerk reflex, by tapping the patellar tendon with an instrumented tendon hammer connected to a PowerLab and using sEMG to record the response from the rectus femoris muscle. All of these techniques offer simple real-time visualisation strategies for students to piece together neuromuscular physiology and the control of skeletal muscles.
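The MEP features described above (latency, peak-to-peak amplitude) and the sigmoidal recruitment curve of Fig. 1.4 can be made concrete for students with a short analysis sketch. This is an illustrative example under stated assumptions, not an established analysis pipeline: the function names, the onset threshold, and the stimulus-locked sweep format are all hypothetical.

```python
import numpy as np

def mep_features(sweep_mV, fs_hz, stim_index, threshold_mV=0.05):
    """Latency (ms) and peak-to-peak amplitude (mV) of a MEP.

    sweep_mV     : stimulus-locked sEMG sweep
    stim_index   : sample at which the TMS pulse was delivered
    threshold_mV : rectified amplitude taken to mark MEP onset
    Returns None when no discernible MEP follows the stimulus
    (i.e. a sub-threshold stimulus, per the all-or-none observation).
    """
    post = sweep_mV[stim_index:]
    # rectify after subtracting the pre-stimulus baseline
    rect = np.abs(post - np.mean(sweep_mV[:stim_index]))
    onset = np.argmax(rect > threshold_mV)  # first supra-threshold sample
    if rect[onset] <= threshold_mV:
        return None
    latency_ms = 1000.0 * onset / fs_hz
    p2p_mV = float(post.max() - post.min())
    return latency_ms, p2p_mV

def boltzmann(intensity, mep_min, mep_max, v50, slope):
    """Sigmoidal recruitment curve of MEP amplitude versus stimulus
    intensity (cf. Fig. 1.4): minimum, plateau, midpoint V50, slope."""
    return mep_min + (mep_max - mep_min) / (1.0 + np.exp((v50 - intensity) / slope))
```

Running `mep_features` over sweeps recorded at increasing stimulator outputs, then fitting `boltzmann` to the resulting amplitudes, reproduces for students the MIN/MAX/SLOPE/V50 parameters shown in the recruitment curve.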
1.5 Conclusion
In order to enhance current pedagogical practices when teaching students about both mechanical and electrical activity of muscles, novel approaches to content delivery that promote active learning are essential to ensure student engagement and better comprehension of the learning material. One such approach is the use
of real-time muscle contraction visualisation techniques. By utilising research equipment from working science laboratories, including data recording and visualisation software, core components of muscle physiology can be delivered in an active, student-centred manner. This educational approach also supports the findings of Michael (2006), in which the incorporation of active learning resulted in significantly higher exam scores compared with traditional lecture-style methods. The practicals not only provide a better understanding of the physiological processes underlying muscle contraction but also develop familiarity with an array of laboratory techniques, analysis methods and statistical training on the data collected during the practicals (Head and Arber 2013). Other advantages of this approach include immediate feedback on the experiment and clear examples of how research is conducted in active physiology laboratories. Structuring lessons around the visualisation of muscle activity in real time provides a range of kinaesthetic activities for learners to perform. First, to measure a muscle's electrical activity from a human participant, students need to set up the demonstration by placing electrodes in the correct positions and preparing the recording software. To locate these correctly, students often use anatomical textbooks or muscle illustrations, along with palpation, to guide their positioning. This enhances comprehension of the anatomical structures under the surface of the skin and can be quite beneficial to later recall. Second, the fact that learners use their own muscles, or those of a classmate, to complete this learning activity is also considered highly engaging and interactive, as the whole workshop involves multiple learning methodologies and the application of a range of different skills (Khalil and Elkhider 2016).
Third, learning how muscles work by physically performing and recording the activity is likely to be more beneficial for learning and later recall than simply viewing the phenomenon in a textbook illustration. Fourth, the focus on skin-level surface physiology,
through the use of live participants, usually links well to the anatomy curriculum, which tends to focus on subsurface muscles and the skeletal system. As underlying structures are often visualised through X-rays, silicone moulds or digital 3D models obtained from CT scans or other devices (Moro and Covino 2018; Teager et al. 2019), the use of skin-surface measurements of the underlying anatomy can be beneficial to students' overall comprehension. Finally, using students as participants in these practical sessions fuses bodily-kinaesthetic activity, to perform the muscle contractions, with logical–mathematical reasoning, to decode the graphs and presented traces. As such, this type of laboratory engages a variety of skills in a way that is not possible through simple diagrams or lecture slides (Gardner 2011). For smooth muscle visualisations in isolated tissue baths, there are a number of additional steps for learners to work through before recording muscle activity, such as preparing the solution in which the smooth muscle is placed and calibrating the recording setup. This not only adds further kinaesthetic activities but also equips students with research skills that they may use later in their scientific careers. In addition to traditional lectures and laboratory practicals utilising technology that provides real-time visualisation of muscle contraction, other opportunities for visualising the contractile mechanisms are also possible. These include gamification opportunities such as PowerPoint media games (e.g. Jeopardy), quizzing tools (Moro and Stromberga 2020), classroom demonstrations (Meeking and Hoehn 2002) or interactive discussions surrounding images, which can facilitate student learning in courses such as undergraduate medicine (Moro and McLean 2017; Moro et al. 2019).
Overall, these approaches can enhance student comprehension of both smooth and skeletal muscle contractile physiology and aid in their understanding of the underlying processes that are typically presented in a lecture-type environment.
References

Anderson P, Chapman P, Ma M, Rea P (2013) Real-time medical visualization of human head and neck anatomy and its applications for dental training and simulation. Curr Med Imaging Rev 9:298–308
Birt J, Stromberga Z, Cowling M, Moro C (2018) Mobile mixed reality for experiential learning and simulation in medical and health sciences education. Information 9:31
Brandon AF, All AC (2010) Constructivism theory analysis and application to curricula. Nurs Educ Perspect 31:89–92
Burkart JM, Schubiger MN, van Schaik CP (2017) The evolution of general intelligence. Behav Brain Sci 40:e195. https://doi.org/10.1017/S0140525X16000959
Chen R (2000) Studies of human motor physiology with transcranial magnetic stimulation. Muscle Nerve Suppl 9:S26–S32
Coates H (2009) Engaging students for success—2008 Australasian survey of student engagement. Australian Council for Educational Research, Camberwell
Cormie P, Deane RS, Triplett NT, McBride JM (2006) Acute effects of whole-body vibration on muscle activity, strength, and power. J Strength Cond Res 20:257–261
de Kock A, Sleegers P, Voeten MJM (2004) New learning and the classification of learning environments in secondary education. Rev Educ Res 74:141–170. https://doi.org/10.3102/00346543074002141
Ferrari M, Muthalib M, Quaresima V (2011) The use of near-infrared spectroscopy in understanding skeletal muscle physiology: recent developments. Philos Transact A Math Phys Eng Sci 369:4577–4590. https://doi.org/10.1098/rsta.2011.0230
Gardner H (2011) Frames of mind: the theory of multiple intelligences. Hachette UK, London
Gardner H (2017) Taking a multiple intelligences (MI) perspective. Behav Brain Sci 40:e203. https://doi.org/10.1017/S0140525X16001631
Hallett M (2000) Transcranial magnetic stimulation and the human brain. Nature 406:147–150. https://doi.org/10.1038/35018000
Head SI, Arber MB (2013) An active learning mammalian skeletal muscle lab demonstrating contractile and kinetic properties of fast- and slow-twitch muscle. Adv Physiol Educ 37:405–414. https://doi.org/10.1152/advan.00155.2012
Herold F, Wiegel P, Scholkmann F, Thiers A, Hamacher D, Schega L (2017) Functional near-infrared spectroscopy in movement science: a systematic review on cortical activity in postural and walking tasks. Neurophotonics 4:1–25
Jespersen B, Tykocki NR, Watts SW, Cobbett PJ (2015) Measurement of smooth muscle function in the isolated tissue bath-applications to pharmacology research. J Vis Exp:52324–52324. https://doi.org/10.3791/52324
Jones S, Chiesa ST, Chaturvedi N, Hughes AD (2016) Recent developments in near-infrared spectroscopy (NIRS) for the assessment of local skeletal muscle microvascular function and capacity to utilise oxygen. Artery Res 16:25–33. https://doi.org/10.1016/j.artres.2016.09.001
Kala S, Isaramalai S-A, Pohthong A (2010) Electronic learning and constructivism: a model for nursing education. Nurse Educ Today 30:61–66. https://doi.org/10.1016/j.nedt.2009.06.002
Kallioniemi E, Pitkänen M, Säisänen L, Julkunen P (2015) Onset latency of motor evoked potentials in motor cortical mapping with neuronavigated transcranial magnetic stimulation. Open Neurol J 9:62–69. https://doi.org/10.2174/1874205X01509010062
Khalil MK, Elkhider IA (2016) Applying learning theories and instructional design models for effective instruction. Adv Physiol Educ 40:147–156. https://doi.org/10.1152/advan.00138.2015
Kobayashi M, Pascual-Leone A (2003) Transcranial magnetic stimulation in neurology. Lancet Neurol 2:145–156. https://doi.org/10.1016/S1474-4422(03)00321-1
Kuehn BM (2018) Virtual and augmented reality put a twist on medical education. JAMA 319:756–758. https://doi.org/10.1001/jama.2017.20800
Kuh GD, Cruce TM, Shoup R, Kinzie J, Gonyea RM (2008) Unmasking the effects of student engagement on first-year college grades and persistence. J High Educ 79:540–563. https://doi.org/10.1080/00221546.2008.11772116
Lujan HL, DiCarlo SE (2006) First-year medical students prefer multiple learning styles. Adv Physiol Educ 30:13–16. https://doi.org/10.1152/advan.00045.2005
McCarthy CJ, Ikeda Y, Skennerton D, Chakrabarty B, Kanai AJ, Jabr RI, Fry CH (2019) Characterisation of nerve-mediated ATP release from detrusor muscle; pathological implications. Br J Pharmacol. https://doi.org/10.1111/bph.14840
Meeking J, Hoehn K (2002) Interactive classroom demonstration of skeletal muscle contraction. Adv Phys Educ 26:344–345. https://doi.org/10.1152/advances.2002.26.4.344
Michael J (2006) Where's the evidence that active learning works? Adv Physiol Educ 30:159–167. https://doi.org/10.1152/advan.00053.2006
Mitsui R et al (2019) Contractile elements and their sympathetic regulations in the pig urinary bladder: a species and regional comparative study. Cell Tissue Res. https://doi.org/10.1007/s00441-019-03088-6
Moro C, Chess-Williams R (2012) Non-adrenergic, non-cholinergic, non-purinergic contractions of the urothelium/lamina propria of the pig bladder. Auton Autacoid Pharmacol 32:53–59. https://doi.org/10.1111/aap.12000
Moro C, Covino J (2018) Nutrition and growth: assessing the impact of regional nutritional intake on childhood development and metacarpal parameters. Anat Cell Biol 51:31–40. https://doi.org/10.5115/acb.2018.51.1.31
Moro C, McLean M (2017) Supporting students' transition to university and problem-based learning. Med Sci Educ 27:353–361. https://doi.org/10.1007/s40670-017-0384-6
Moro C, Uchiyama J, Chess-Williams R (2011) Urothelial/lamina propria spontaneous activity and the role of M3 muscarinic receptors in mediating rate responses to stretch and carbachol. Urology 78:1442.e1449-1415. https://doi.org/10.1016/j.urology.2011.08.039
Moro C, Leeds C, Chess-Williams R (2012) Contractile activity of the bladder urothelium/lamina propria and its regulation by nitric oxide. Eur J Pharmacol 674:445–449. https://doi.org/10.1016/j.ejphar.2011.11.020
Moro C, Tajouri L, Chess-Williams R (2013) Adrenoceptor function and expression in bladder urothelium and lamina propria. Urology 81:211.e211-217. https://doi.org/10.1016/j.urology.2012.09.011
Moro C, Edwards L, Chess-Williams R (2016) 5-HT2A receptor enhancement of contractile activity of the porcine urothelium and lamina propria. Int J Urol 23:946–951. https://doi.org/10.1111/iju.13172
Moro C, Stromberga Z (2020) Enhancing variety through gamified, interactive learning experiences. Med Educ. https://doi.org/10.1111/medu.14251
Moro C, Stromberga Z, Raikos A, Stirling A (2017a) The effectiveness of virtual and augmented reality in health sciences and medical anatomy. Anat Sci Educ 10:549–559. https://doi.org/10.1002/ase.1696
Moro C, Stromberga Z, Stirling A (2017b) Virtualisation devices for student learning: comparison between desktop-based (Oculus Rift) and mobile-based (Gear VR) virtual reality in medical and health science education. Australas J Educ Technol 33(6). https://doi.org/10.14742/ajet.3840
Moro C, Spooner A, McLean M (2019) How prepared are students for the various transitions in their medical studies? An Australian university pilot study. MedEdPublish 8:25. https://doi.org/10.15694/mep.2019.000025.1
Morris NP, Ramsay L, Chauhan V (2012) Can a tablet device alter undergraduate science students' study behavior and use of technology? Adv Physiol Educ 36:97–107. https://doi.org/10.1152/advan.00104.2011
Multon KD, Brown SD, Lent RW (1991) Relation of self-efficacy beliefs to academic outcomes: a meta-analytic investigation. J Couns Psychol 38:30
Papa V, Varotto E, Vaccarezza M, Ballestriero R, Tafuri D, Galassi FM (2019) The teaching of anatomy throughout the centuries: from Herophilus to plastination and beyond. Med Hist 3:69–77
Periya S, Moro C (2019) Applied learning of anatomy and physiology: virtual dissection tables within medical and health sciences education. Bangkok Med J 15:121–127. https://doi.org/10.31524/bkkmedj.2019.02.021
Perrey S (2014) Possibilities for examining the neural control of gait in humans with fNIRS. Front Physiol 5:204. https://doi.org/10.3389/fphys.2014.00204
Rattay F (1986) Analysis of models for external stimulation of axons. IEEE Trans Biomed Eng BME-33:974–977. https://doi.org/10.1109/TBME.1986.325670
Richardson S, Radloff A (2014) Allies in learning: critical insights into the importance of staff–student interactions in university education. Teach High Educ 19:603–615. https://doi.org/10.1080/13562517.2014.901960
Rio E, Kidgell D, Purdam C, Gaida J, Moseley GL, Pearce AJ, Cook J (2015) Isometric exercise induces analgesia and reduces inhibition in patellar tendinopathy. Br J Sports Med 49:1277. https://doi.org/10.1136/bjsports-2014-094386
Säisänen L, Pirinen E, Teitti S, Könönen M, Julkunen P, Määttä S, Karhu J (2008) Factors influencing cortical silent period: optimized stimulus location, intensity and muscle contraction. J Neurosci Methods 169:231–238. https://doi.org/10.1016/j.jneumeth.2007.12.005
Scheindlin S (2001) A brief history of pharmacology. Mod Drug Discovery 4:87–88
Solomonow M, Baratta R, Shoji H, D'ambrosia R (1990) The EMG-force relationships of skeletal muscle; dependence on contraction rate, and motor units control strategy. Electromyogr Clin Neurophysiol 30:141–152
Stromberga Z, Chess-Williams R, Moro C (2019) Histamine modulation of urinary bladder urothelium, lamina propria and detrusor contractile activity via H1 and H2 receptors. Sci Rep 9:3899. https://doi.org/10.1038/s41598-019-40384-1
Stromberga Z, Chess-Williams R, Moro C (2020a) Alterations in histamine responses between juvenile and adult urinary bladder urothelium, lamina propria and detrusor tissues. Sci Rep 10:4116. https://doi.org/10.1038/s41598-020-60967-7
Stromberga Z, Chess-Williams R, Moro C (2020b) The five primary prostaglandins stimulate contractions and phasic activity of the urinary bladder urothelium, lamina propria and detrusor. BMC Urol 20:48. https://doi.org/10.1186/s12894-020-00619-0
Sweeney LJ, Brodfuehrer PD, Raughley BL (2004) An introductory biology lab that uses enzyme histochemistry to teach students about skeletal muscle fiber types. Adv Physiol Educ 28:23–28. https://doi.org/10.1152/advan.00019.2003
Taylor DCM, Hamdy H (2013) Adult learning theories: implications for learning and teaching in medical education: AMEE Guide No. 83. Med Teach 35:e1561–e1572. https://doi.org/10.3109/0142159X.2013.828153
Teager SJ, Constantine S, Lottering N, Anderson PJ (2019) Physiologic closure time of the metopic suture in South Australian infants from 3D CT scans. Childs Nerv Syst 35:329–335. https://doi.org/10.1007/s00381-018-3957-9
Weier A, Kidgell D (2012a) Effect of transcranial magnetic stimulation protocol on recruitment curve parameters. J Sci Med Sport 15:S120. https://doi.org/10.1016/j.jsams.2012.11.292
Weier AT, Kidgell DJ (2012b) Strength training with superimposed whole body vibration does not preferentially modulate cortical plasticity. Sci World J 2012:876328. https://doi.org/10.1100/2012/876328
Weier AT, Pearce AJ, Kidgell DJ (2012) Strength training reduces intracortical inhibition. Acta Physiol (Oxford, England) 206:109–119. https://doi.org/10.1111/j.1748-1716.2012.02454.x
2 The Evolution of Educational Technology in Veterinary Anatomy Education

Julien Guevar
Division of Clinical Neurology, Vetsuisse Faculty, University of Bern, Bern, Switzerland
e-mail: [email protected]

Abstract

"All learning is in the learner, not the teacher." Plato was right. The adage has passed the test of time and remains true in an era where technology accompanies us in both professional and recreational life, every day and everywhere. At the same time, the learner has evolved, and so have the sources used to satisfy curiosity and learning. It therefore appears intuitive to embrace these technological advances to bring knowledge to our pupils, with the aim of facilitating learning and improving performance. It must be clear that these technologies are not intended to replace, but rather to consolidate, knowledge partly acquired during more conventional teaching of anatomy. Veterinary medicine is no outlier. Educating students in the complexity of anatomy across multiple species requires that three-dimensional concepts be taught and understood accurately if appropriate treatment is to be put in place thereafter. Veterinary anatomy education has until recently walked diligently in the footsteps of John Hunter's medical teaching, using specimens, textbooks and drawings. The discipline has yet to fully embrace the benefits of the advances being made in technology for the benefit of its learners. Three-dimensional representation of anatomy is undeniably a logical and correct way to teach, whether through the demonstration of cadaveric specimens or through alternate reality using smartphones, tablets, headsets or other digital media. Here we review some key aspects of the evolution of educational technology in veterinary anatomy.

Keywords

Education · Imaging anatomy · Technology · Veterinary anatomy · Veterinary medicine
2.1 Introduction
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020
P. M. Rea (ed.), Biomedical Visualisation, Advances in Experimental Medicine and Biology 1260, https://doi.org/10.1007/978-3-030-47483-6_2

Whether the student aspires to become a radiologist, a surgeon, a neurologist or a general practitioner, an intricate knowledge of anatomy is indispensable for treating animals appropriately. "Do no harm" is valid for all species and is more likely to be honoured if one correctly knows the anatomy of the animal to be treated. In order to teach the anatomy of multiple species, all of which bear differences at multiple anatomical levels, conventional methods using textbooks and cadaveric demonstration have been the backbone of anatomy education. However, the impact of formaldehyde (used for specimen preservation) on health, the difficulty of obtaining cadavers, and ethical concerns surrounding their origin have generated a push toward the search for new alternatives. Computer-generated three-dimensional models, multimedia books and game-based learning are among the novel technologies now available in the educator's arsenal. The anatomy student has also changed. In a context of ever-evolving technologies, on-demand and on-the-go knowledge has become a novel way to educate oneself. The "digital" student is technologically sophisticated and receives information through smartphones, tablets and computers. It therefore seems appropriate to believe that mixing conventional and novel methods represents an adequate step forward to effective learning. Blended learning is the term used to describe this type of pedagogy (Khalil et al. 2018). In brief, it constitutes the integration of different learning approaches, new technologies and activities that combine traditional face-to-face teaching methods with novel methodologies. Blended learning is grounded in cognitive load theory, which assumes that the human cognitive system has a limited working memory, capable of holding no more than five to nine information elements and of actively processing no more than two to four elements simultaneously. Novel information can be held for only about 20 s, and almost all of it is lost after that time unless refreshed by rehearsal (Van Merriënboer and Sweller 2010). In this context, supplementing the anatomy class with educational material available online at any time represents a logical pedagogical strategy. It is equally important to remember that there has also been a shift in the teaching of anatomy.
The anatomist is no longer the sole teacher of the entire topic; instead, each specialty (surgery, dermatology, neurology, etc.) nowadays teaches the specific anatomy relevant to its field. Teachers have changed and, along with them, pedagogies have diversified. This great variety of teachers has undoubtedly helped to engage, some more than others, in the crusade to identify new ways of teaching pupils the complexity of anatomy and to supplement the conventional methods used to travel along the Papez circuit.
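The working-memory limits cited above (five to nine items, roughly 20 s of retention without rehearsal) can be made concrete with a toy model. This sketch is purely illustrative and is not part of the cited theory's formalism; the capacity cap, item labels, and timings are invented for the example:

```python
# Toy model of working memory under cognitive load theory (illustrative only):
# a capacity-limited store whose items expire ~20 s after their last rehearsal.

CAPACITY = 7       # "five to nine" elements; seven used as a midpoint
RETENTION_S = 20   # seconds an unrehearsed item survives

class WorkingMemory:
    def __init__(self):
        self.items = {}  # label -> time of last rehearsal

    def learn(self, label, now):
        if len(self.items) >= CAPACITY and label not in self.items:
            # Over capacity: the stalest item is displaced.
            stalest = min(self.items, key=self.items.get)
            del self.items[stalest]
        self.items[label] = now

    def rehearse(self, label, now):
        if label in self.items:
            self.items[label] = now

    def recall(self, now):
        # Items not rehearsed within RETENTION_S are lost.
        self.items = {k: t for k, t in self.items.items()
                      if now - t <= RETENTION_S}
        return sorted(self.items)

wm = WorkingMemory()
wm.learn("radius", now=0)
wm.learn("ulna", now=5)
wm.rehearse("radius", now=18)   # rehearsal refreshes the trace
print(wm.recall(now=30))        # → ['radius']; "ulna" (last seen t=5) decayed
```

The model captures why supplementary material available "anytime" helps: rehearsal opportunities outside the classroom keep traces refreshed.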
2.2
The Past
Dissection of animals has taken place throughout history for diverse purposes, whether during antiquity with haruspices predicting omens through examination of sacrificed animals' entrails (Rome, Greece, the Inca empire) or in African divination systems (Abbink 1993). Over the course of the history of anatomy, it is interesting to discover that most of the early human anatomists performed a vast number of dissections on a great variety of animals, large and small, to learn about their anatomy and morphology. Because the use of human cadavers may not always have been permitted, animals offered a more readily available alternative. It is around 400 B.C. that the use of animal dissection for the purpose of anatomy learning appears within the Hippocratic Corpus, a collection of some 70 treatises on medicine and surgery in which Hippocrates describes the anatomy of various animals (Craik 1998; Blits 1999). However, the first specifically anatomical investigation separate from a surgical or medical procedure is believed to be associated with Alcmaeon of Croton (c. 500 B.C.), who dissected animals and discovered, amongst other findings, the optic nerve, the Eustachian tubes, and the main sensory nerve pathways to the brain (Blits 1999). Despite these early texts, Aristotle (c. 300 B.C.) was nevertheless the first to describe a comparative approach, incorporating an immense quantity of anatomical and morphological description and using rigorous and systematic methods to describe animal anatomy. His study thus involved what we know nowadays as morphology, anatomy, physiology, reproduction, development, ethology, and ecology. For that very reason, he is still regarded by most as the founder of comparative anatomy. Another pivotal character is Galen (c. 200 A.D.), with his human–animal analogy of anatomy. His work centered for a large part on the dissection of
animals, and he drew parallelisms based on physical similarities identified between animals and humans in order to treat patients. Influenced by Aristotle's conclusions that animals neither felt pain in the same way as humans nor possessed anything like the same level of consciousness or capacity for independent thought, it was widely accepted that an animal would "suffer nothing from such a wound," referring to dissection (Conner 2017). "Genius lives on, all else is mortal" can be read on an anatomy plate of one of the most influential anatomists, the Belgian Andreas van Wesel, commonly known as Vesalius (Zampieri et al. 2015). He had a tremendous impact on today's medicine and knowledge of anatomy and physiology, including through the use of anatomical demonstrations with dogs. Reports from circa 1540 depict Vesalius vivisecting a dog to illustrate the relationship between the bark and the recurrent laryngeal nerve, as well as the relationship between the heart and the arteries (Klestinec 2004). Later on, John Hunter made the most of animal anatomy (specimens he would obtain from around the world) for the purpose of understanding both anatomy and physiology. A rather important landmark for veterinary medicine came in 1761, when Claude Bourgelat established the first veterinary school, in the French city of Lyon, for the study of anatomy and diseases of domestic animals. In the same manner as in human medicine, the teaching of veterinary anatomy was mostly conducted through lessons, dissections, prosections, and textbooks. What has been particular to the veterinary field is that anatomy incorporates multiple species, all of which have subtle and less subtle anatomical variations. This important body of knowledge therefore consists of a profusion of descriptive, factual information. This mountain of information had to be absorbed through rote learning, and some would say that it would have to be learned and unlearned seven times before it would be committed to memory. 
The common sequence would first include anatomy lessons, where the anatomy would be taught in class using sketches, then dissection classes, and finally textbooks for consolidation and revision. This is not dissimilar to human anatomy teaching overall.
Anatomy education has been declining in popularity in medical institutions and has become an area of much discussion, especially when it comes to finding the most effective way to use cadavers (Gummery et al. 2018). Similar challenges, such as infectious agents, exposure to chemical fixatives, problems with sourcing cadavers, the ethics of their storage, as well as recruitment of staff and use of curriculum time, are encountered in both human and veterinary medicine. Although the human anatomy curriculum has been the subject of much debate, literature on the topic in veterinary medicine is very sparse. Cadaver-based practical classes teaching clinical skills within the anatomy curriculum have been shown to have a positive impact on students' perception of learning anatomy, supporting the belief that classes using cadavers should still be part of the teaching strategy for veterinary students (Gummery et al. 2018). Although the study had limitations, it did bring the matter to the discussion table.
2.3
The Present
2.3.1 Animal Cadavers and Plastination

To teach large classes of veterinary students adequately, a large number of cadavers of dogs, cats, horses, etc. are required. Over the years, partly due to ethical and budget limitations, the availability of cadavers has shrunk, leading to the emergence of new systems for obtaining and "optimizing" cadaver usage. Regulations on sourcing cadavers for education vary from country to country, and pressure from the public has helped make the process more "ethically sourced" over the years. Because good science and good welfare go hand in hand, the 3Rs (reduction, replacement, refinement) have been at the core of animal research. It has recently been suggested to add a fourth R (respect) to emphasize the ethical sourcing of an animal (Tiplady 2012). Ethically sourced cadavers can be obtained from animal body donation programs, where privately owned dogs that have been euthanized or have died of medical causes are
donated by their owners (Kumar et al. 2001). These programs have not only benefited veterinary education and allowed the public to support it, but they have also underlined the respect and ethics existing within teaching and research. From a student perspective, it is interesting to note that a surveyed group of veterinary medicine students had no preference regarding the source of the cadaver, whether pound dogs or donated dogs were used, although the study found that they became more accepting of the euthanasia of unwanted animals for education as they progressed through the curriculum (Tiplady et al. 2011). Another avenue for optimizing the anatomy class has been the use of a cadaver reassignment system. It was shown that, through this program, learning was enhanced, dissection skill building was encouraged, and collaborative interactions were fostered, even in a context where less time was allocated in the curriculum. The authors stipulated that frequent specimen reassignments offered an opportunity to model public accountability for work and some aspects of the relationships between multiple health care teams caring for a patient. The International Network for Humane Education has been an active driver for the implementation of guidelines and the fostering of new initiatives for alternative methods toward a progressive, humane education (Jukes and Chiuia 2006). Once sourced, the next important issue with cadavers resides in their preservation and storage. They can be used fresh and utilized immediately, but storage is otherwise required or autolysis will set in. Cold room storage will only allow a cadaver to be used for a limited amount of time (days to weeks), and freezing is usually recommended. Preservation techniques are steering away from the use of toxic agents (like formalin) and freeze-drying; silicone impregnation and a range of plastination techniques are increasingly being used (Nacher et al. 2007). 
After addressing the problem of cadaver availability, the teaching of anatomy faces a further threat. In a context where the student must learn and memorize the three-dimensional organization of multiple species, all of which have organs with variable conformations, the notion of time is paramount. The body of knowledge is vast and requires the student and teacher to dedicate a substantial number of hours to its learning. However, with the recent reduction of time allocated to teaching veterinary anatomy in the newer curricula, novel pedagogies have had to emerge. First, prosections have gradually supplemented animal dissection classes, with plastination as a common resource. One of its major benefits is that it offers the student exposure to accurate anatomical structures without the need to spend multiple hours dissecting them (Weiglein 1997). Another noteworthy advantage is that these models can be taken out of the anatomy class and used safely for small group teaching (Latorre et al. 2016), as they are clean, dry, odorless, nontoxic, and durable. They can also be handled without gloves and do not require specific storage conditions or care. Injection of latex into the vessels, as well as labeling of the organs, has been suggested to improve these benefits even further (Latorre et al. 2007). Another advantage is that it allows for practical skill enhancement. For example, specimens have been specifically designed for training in endoscopy and surgery to offer a close-to-reality experience for the user (Janick et al. 1997; Latorre et al. 2000, 2004, 2007). This technique also addresses the risk associated with the use of fixed anatomy specimens by preventing excessive exposure and decreasing health hazards (Fig. 2.1).
2.3.2 Imaging Anatomy

Interestingly, if medical anatomy instruction has gradually lost its splendor within the medical curricula over the past decades, diagnostic imaging on the other hand has been gaining importance. It is nowadays recognized as a central modality by which the medical profession visualizes the animal's anatomy. In human medicine at least, imaging anatomy has been reported in numerous studies to enhance the quality and efficiency of anatomy instruction (Grignon et al. 2016). The drive to turn toward imaging anatomy is directly related to the fact that it represents a great part of the "useful" anatomy that students will use in their future clinical
Fig. 2.1 From left to right: a feline, a canine, and a canine brain-plastinated specimen. (https://www.meiwoscience.com/animal-plastinated-specimens/)
practice. What does not seem to be so clear at the moment is how best it should be deployed to optimize its pedagogical purpose (Grignon et al. 2016). In veterinary medicine, a web-based radiology learning software developed at the University of Oregon was found to be an effective tool for helping students to learn normal radiographic anatomy. Interestingly, the skull and spine were found to be associated with the least overall percentage improvement, which was likely inherent to the complexity of these structures (Reiter et al. 2018). When radiology (two-dimensional imaging) was used alongside anatomical specimens, however, students often found it challenging to relate a 2D image to what is a three-dimensional (3D) structure. Cross-sectional imaging such as computed tomography (CT) and magnetic resonance imaging (MRI) has undoubtedly improved the diagnostic power of imaging but has also brought the benefits of three-dimensional anatomy reconstruction to the forefront of imaging anatomy. CT 3D reconstruction has, for instance, been shown to be an effective tool to teach radiographic anatomy to veterinary students (Lee et al. 2010). Finally, ultrasonography is often used as well in anatomy classes to familiarize students with the ultrasound image perception of anatomy (Feilchenfeld et al. 2017; Knudsen et al. 2018), but its use in veterinary medicine is limited.

Visual spatial ability is the ability to mentally interpret and rotate 2D and 3D images. Interpreting both the anatomic orientation of structures and their functional relevance based on an image is a source of difficulty for many students, and poor visual spatial ability is a real hindrance to the benefit of using imaging as a complement to anatomy. Imaging has, however, improved students' engagement in the class, and its impact on clinical radiological skills in later years remains to be evaluated. It is fair to say that imaging anatomy serves a dual purpose for students, not only allowing them to understand the volume of organs during the anatomy class but also introducing them at an early stage to the connection between anatomy and clinical imaging. There seems to be evidence to show that the sooner they are exposed to imaging, the better they will be at reading it in the future, possibly suggesting that visual spatial ability gets better with training.

2.3.3 Models, Mannequins, and Simulators

Plastic models of animals or organs depicting internal structures are available and commercialized worldwide. Whether they show the entire animal (e.g., the skeleton of the cat), a particular structure (e.g., the anatomy of the eye), the function of an organ (e.g., an articulated knee), or the failure of a function (e.g., a model of intervertebral disc extrusion and associated compression of the spinal cord), they have the benefit of always being readily available for learning and memory recall. They also bear the advantage that they can be used in clinical practice for the education of owners on the pathology of their companion animal (Fig. 2.2). A new technology that has emerged in recent years and that is an important addition to the anatomy teacher's arsenal is the 3D printed model. Three-dimensional printing of organs can
Fig. 2.2 From left to right: plastic models of a cat skeleton, canine vertebral column, and canine knee with illustration of various stages of osteoarthritis. (https://www.anatomystuff.co.uk/canine-vertebral-column-model.html)
Fig. 2.3 Three-dimensional printed cervical vertebrae and skull of a dog with an atlantoaxial subluxation (left) and a 3D print of a portion of the skull of a dog with an intracranial brain tumor (right). The models were used for educational purposes for the students and the owners, as well as for presurgical planning by the surgeon
be very accurate and very useful for the teaching of anatomy (Preece et al. 2013; Schoenfeld-Tacher et al. 2017; Hackmann et al. 2019). The accuracy of the printed specimen is very close to that of the original, especially for bones; models can also be sized up or down and printed rapidly on the same day, in single or multiple copies, anywhere in the world (McMenamin et al. 2014). Three-dimensional plastic models have been shown to be more effective than computer-based instruction, especially for nominal knowledge, where students are expected to name a structure indicated by a label on an anatomic specimen (Khot et al. 2013). Fictive lesions can be created to represent pathologies, the models can be drawn onto, and students can keep them. One of the main advantages is that they can be used in teaching the anatomy and pathologies afflicting hospitalized patients (Fig. 2.3). This technology also avoids the cultural and ethical issues associated with cadaver specimens mentioned previously.

Mannequins are lifelike representations of animals and are designed for clinical skills training.
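To make the scan-to-print workflow mentioned above more concrete: whatever segmentation and meshing tools are used upstream, the file handed to the printer is ultimately just a list of triangles. None of the following comes from the cited studies; the vertices and the tetrahedron stand-in are invented, but the binary STL layout written here is the standard one most printers accept:

```python
# Minimal binary STL writer: a printable mesh is triangles plus normals.
# Illustrative sketch only; real scan-to-print pipelines produce the
# triangle list via segmentation and surface extraction.
import struct

def write_binary_stl(path, triangles):
    """triangles: list of ((x,y,z), (x,y,z), (x,y,z)) tuples."""
    with open(path, "wb") as f:
        f.write(b"\x00" * 80)                       # 80-byte header (unused)
        f.write(struct.pack("<I", len(triangles)))  # triangle count
        for a, b, c in triangles:
            # Facet normal from the cross product of two edges.
            u = [b[i] - a[i] for i in range(3)]
            v = [c[i] - a[i] for i in range(3)]
            n = (u[1]*v[2] - u[2]*v[1],
                 u[2]*v[0] - u[0]*v[2],
                 u[0]*v[1] - u[1]*v[0])
            # 12 little-endian floats + 2-byte attribute per facet.
            f.write(struct.pack("<12fH", *n, *a, *b, *c, 0))

# A made-up four-triangle tetrahedron standing in for a scanned bone.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
tris = [tuple(verts[i] for i in face) for face in faces]
write_binary_stl("tetra.stl", tris)
```

Each facet occupies a fixed 50 bytes, which is why the format scales to the very large triangle counts produced from CT data.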
Fig. 2.4 The use of a garment on a horse was useful to teach the anatomy of the skeleton (a) and the musculature (b)
Simulators are tools used for the teaching of clinical skills, including surgery and emergency skills, amongst others. They can be used for eye–hand coordination, instrument handling, suturing, etc. They attempt to realistically simulate situations in which organs or tissue will be manipulated by the student. Interesting concepts have been applied, such as haptic devices in bovine and equine anatomy to simulate rectal palpation of abdominal organs (Crossan et al. 2000; Baillie et al. 2005; Kinnison et al. 2009; Bossaert et al. 2009). The haptic device offers the possibility of varying the sensation of an organ or structure (soft or hard, light or heavy, with the ability to assess size), providing tactile feedback to the user (https://www.youtube.com/watch?v=ephvAcFeGnU). The haptic cow (http://www.live.ac.uk/haptic-cow) is now "active" at several veterinary schools in the United Kingdom, and the haptic horse is based at the Royal Veterinary College in London. Systems other than the haptic one have been designed and studied for similar purposes. The learning benefits of one of these simulators (Bossaert et al. 2009) for pregnancy detection, however, need further evaluation (Annandale et al. 2018). Also, in large animal anatomy, creative avenues have been explored to make the teaching more alive, such as the use of a garment to cover live horses (Fig. 2.4) and depict their anatomy (Sattin et al. 2018) instead of body painting (Senos et al. 2015). Simulators for injection into the horse jugular vein have also been evaluated (Eichel et al. 2013). In small animals (dogs, cats, rabbits), abdominal palpation and transrectal prostate palpation models have been studied (Parkes et al. 2009; Capilé et al. 2015).
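The "soft or hard" sensation that haptic devices vary is commonly rendered with a spring-like force model: the stiffer the virtual tissue, the harder the device pushes back as the probe penetrates it. The sketch below only illustrates that general idea; the tissue names and stiffness values are invented for the example and are not taken from any of the cited simulators:

```python
# Illustrative spring model of haptic feedback: the device renders tissue
# "feel" as a restoring force F = k * depth once the virtual probe
# penetrates a surface. Stiffness values are invented for the example.
TISSUE_STIFFNESS = {        # N per metre of penetration (made-up values)
    "bowel": 200.0,         # soft
    "uterus": 600.0,
    "pelvic bone": 5000.0,  # hard
}

def feedback_force(tissue, penetration_m):
    """Force (N) pushed back at the user's hand; zero above the surface."""
    if penetration_m <= 0:
        return 0.0
    return TISSUE_STIFFNESS[tissue] * penetration_m

# The same 5 mm push feels very different on soft vs hard structures.
print(feedback_force("bowel", 0.005))        # → 1.0
print(feedback_force("pelvic bone", 0.005))  # → 25.0
print(feedback_force("bowel", -0.01))        # → 0.0 (not touching)
```

Varying the stiffness constant per structure is what lets one physical device simulate bowel, uterus, and bone through the same stylus.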
2.3.4 Film Videos

The use of videos to teach life sciences, independently of their format (online, CD-ROM, DVD, etc.), can bring added value to anatomy courses. With video editing, they allow the provision of information on animal anatomy to students that would otherwise have consumed a large amount of time had it been delivered in the laboratory. This is particularly true in veterinary medicine, where multiple species are dissected. They also have the potential to convey much more information than if the work had been performed by the students themselves. For the purpose of anatomy learning, their main limitation is that they are passive. A hands-on dissection course supplemented by videos before and after (for recall purposes) would likely be the best use of this technology. Videos can indeed provide good background to a subject and instruct on the content of a class prior to it. They also have a powerful role to play in the revision of skills such as surgical procedures and provide a good alternative to the dissection of cadavers. Video-based teaching is a strong asset because the material can be made available online for anytime, anywhere usage. The addition of auditory comments, as well as graphics, still
images, and zooming in/out of specific regions makes digital video a highly effective learning aid. When 2D and 3D stereoscopic videos were used before an anatomy class, interactive videos of a dissection, especially 2D videos, proved superior to the standard anatomy course guide as preparation for a laboratory, as assessed by a postdissection quiz. Although performance by the 3D groups did not reach statistically significant separation from that of the guide groups, it was also not different from that of the 2D groups. It is also noteworthy that students' engagement in learning was improved by the videos. This was evaluated by a survey highlighting the students' belief that the videos helped them obtain a better spatial understanding of anatomic relationships; students also preferred the videos to the guide (Al-Khalili and Coppoc 2014).
2.3.5 Multimedia Computer Simulation

There is no doubt that the exponential development of computer-based technology has greatly impacted the diversification of tools available to spice up the anatomy class, with imagination as the sole boundary. Although it started cautiously and slowly, with digital textbooks or course notes available on computers or online, a breadth of creative options exists nowadays. Productive collaborations between the life sciences and other disciplines (art (design, photography), engineering, serious games, etc.) have made it possible to merge the needs of the medical community with the skills and creativity of others. The World Wide Web is an inexhaustible source of material for the student. Through this computer-assisted learning, distance learning can benefit all learners anywhere. The caveat, however, is that students need to be directed toward anatomically accurate sources of information. Here, a few examples of different types of websites are described. The first example is a purely anatomy-based website developed by Professor Aige Gil at the University of Barcelona. The
website (https://www.neuroanatomyofthedog.com) consists of a spectacular collection of fixed specimens where neuroanatomy is explained in intricate detail using a combination of still images of anatomy and histology, video dissections, MRI images, and sketches. This type of website offers anatomical material to the learner, as well as explanation through the videos. Another type of web-based educational tool for anatomy is the anatomy website of the University of Minnesota (http://vanat.cvm.umn.edu), which offers a variety of courseware on carnivore and developmental anatomy, neuroanatomy, planar (CT and MRI) anatomy, and ungulate anatomy. The website is a superb compilation of material including plastinated and fixed specimens, histopathology and imaging sections, as well as quizzes and opportunities for the students to train. A final example is the neuropatho-atlas developed at the University of Bern (https://vetsuisse.com/vet-iml/lernmodule/htmls/npintro.html?neuropatho%7Cnpintro). This atlas of domestic animal neurological pathology and MRI elegantly correlates the two disciplines to enhance the understanding of pathology and its associated MRI findings. Through a productive master's program in medical visualization and anatomy, the school of veterinary medicine, the department of human anatomy, and the school of art of the University of Glasgow have developed interesting veterinary anatomy concepts. Raffan et al. showed that by using the MRI scan of a dog's brain, it was not only possible to develop an anatomically accurate three-dimensional model of the brain vasculature but also to turn this platform into a teaching and learning tool (Raffan et al. 2017). The platform allowed for learning about these blood vessel pathways and for knowledge testing. The powerful advantage of this 3D reconstruction was that it allowed the student to better understand the difficult concept of spatial interconnections between blood vessels within the skull. 
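Building such a model from an MRI scan typically begins with segmentation, i.e., picking out the voxels that belong to the structure of interest. Raffan et al.'s actual workflow is not detailed here; the toy sketch below (with an invented 3D array standing in for scan data) only illustrates the simplest possible approach, intensity thresholding:

```python
# Toy intensity-threshold segmentation of a 3D "scan" (illustrative only;
# real vessel segmentation from MRI uses far more sophisticated methods).
# The volume is a nested list [z][y][x] of voxel intensities, invented here.

def segment(volume, threshold):
    """Return a binary mask and the count of voxels at/above threshold."""
    mask, count = [], 0
    for plane in volume:
        mask_plane = []
        for row in plane:
            mask_row = [1 if v >= threshold else 0 for v in row]
            count += sum(mask_row)
            mask_plane.append(mask_row)
        mask.append(mask_plane)
    return mask, count

# A 3x3x3 volume where a bright "vessel" (intensity 250) runs diagonally
# through darker tissue (intensity 40).
volume = [[[250 if x == y == z else 40 for x in range(3)]
           for y in range(3)] for z in range(3)]

mask, n = segment(volume, threshold=200)
print(n)  # → 3 (the three diagonal voxels)
```

The resulting binary mask is what a surface-extraction step would then convert into the renderable 3D mesh the students interact with.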
The only material otherwise available to teach this part of the vascular anatomy was plastinated models and textbooks, which included only a limited number of drawings. Although there was
Fig. 2.5 Proof of concept from Raffan et al. showing how anatomy can be visualized in 3D and serve as a teaching tool to learn about vascularization of the brain
no evaluation of students' feedback on the technology, the platform represented a useful and engaging tool (Fig. 2.5). Further proof of the benefit of the concept for veterinary anatomy resides in the veterinary website IVALAlearn (https://www.ivalalearn.com). In brief, this web-based anatomy resource uses a similar technology to provide students with detailed anatomy, a clinical perspective, and a great revision aid. Detailed anatomy is rendered possible through the use of 3D anatomy, which is clearly labeled and easy to interact with. A great amount of
detail has gone into the creation of the textures of the organs to give them a real-life appearance. The clinical perspective is achieved through the focus on clinically relevant data, which the student can perceive as useful for future veterinary skills. Question banks and 3D content are also offered for the student to test acquired knowledge. This resource offers a wealth of information in an attractive tool that is intuitive to navigate and extremely accurate, as it also includes scans of real organs. Similar options are the 3D canine anatomy material offered by easy
Fig. 2.6 The reconstructed brain is seen floating in space through the phone or tablet, with the MRI picture in the background serving as a "barcode", from Christ et al. (2018). The display explains to the student what type of disorders can be seen with lesions affecting the forebrain of dogs
anatomy (https://easy-anatomy.com), which is used by multiple universities, or the 3D dog, pig, and bovine anatomy from biosphera.org (https://biosphera.org/international/product/3d-dog-anatomy-software/). A last, but not least, example of the power of teaching complex anatomy and its pathways to students is the 3D anatomy material available from the University of Georgia. Although the site appears to be still under construction, some of the available teaching tools are very engaging. For example, the menace response pathway (http://vmerc.uga.edu/CranialNerves/mrp.html) shows the 3D anatomy of all the structures along which information needs to travel, from the menace gesture to the blink of the eyelids. Another technology that has started to emerge and to appeal to students is learning anatomy through virtual reality. Christ et al. described a methodology for extracting data from CT and MRI scans to obtain a mixed reality anatomy proof of concept, whereby a canine skull or brain appears in front of the user, seen through a phone or a tablet (Android software) (Christ et al. 2018). Didactic anatomy information supplemented the "floating" anatomy specimen, and the user could walk around the skull or brain, go under or over it, seeing all structures from a different perspective (Fig. 2.6). This represented a very useful and engaging tool where the user was active, and it also gave a sense of depth or volumetry of the organ. The same augmented reality technology is in use by easy-anatomy.com, and both have the advantage of not requiring any virtual reality headset. Colorado State University's virtual veterinary education tools include a virtual reality one currently under construction (http://www.cvmbs.colostate.edu/vetneuro/VR.html). Applications for smartphones and tablets have also started to appear (Dog Anatomy: Canine 3D and Horse Anatomy: Equine 3D by Real Bodywork). A cardiology augmented reality textbook has also been available for learning and understanding cardiology concepts. This is a pioneering book on the use of novel technology to teach anatomy and physiology (https://issuu.com/editorialservet/docs/39900_dosier_eng). Through the use of an application, the user is able to add a 3D virtual reality experience to the textbook and "bring to life" the anatomy (Fig. 2.7). Another very useful tool that has recently appeared along with technology development is the "clean dissection table." Anatomage was the first virtual dissection table (https://www.anatomage.com/table/), which was used in human anatomy,
Fig. 2.7 A floating heart appears on the phone screen whenever the barcode in the textbook is read. In this example, the heart can be seen beating and the red blood cell flow is also depicted
and offered an advanced 3D anatomy visualization system for anatomy and physiology education. All layers of the anatomy can be evaluated, as well as fly-through planes, arthroscopy, blood flow, cadaver prosections, and interactive histology, but also pathology cases. In a similar effort, a virtual canine dissection touch table prototype was developed by the University of Glasgow (https://doi.org/10.1080/17453054.2018.1505426), and another, more elaborate one, by a collaboration of the veterinary schools of Ross University, Long Island University, and Kansas State University (Little et al. 2019).
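The "fly-through planes" such tables offer are an instance of multiplanar reformatting: resampling a stored 3D volume along an arbitrary slice. The sketch below is a much-simplified illustration of that idea; the tiny depth-coded volume and the nearest-neighbour sampling are invented for the example, whereas commercial tables work on real scan data with proper interpolation:

```python
# Simplified multiplanar reformatting: sample a 3D volume [z][y][x] along a
# plane defined by an origin and two direction vectors, using nearest-
# neighbour lookup. Illustrative only; real viewers interpolate real scans.

def slice_volume(volume, origin, u, v, size):
    """Return a size x size 2D image sampled from the volume."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    image = []
    for j in range(size):
        row = []
        for i in range(size):
            # Point on the plane: origin + i*u + j*v, rounded to a voxel.
            x, y, z = (round(origin[a] + i * u[a] + j * v[a])
                       for a in range(3))
            inside = 0 <= x < nx and 0 <= y < ny and 0 <= z < nz
            row.append(volume[z][y][x] if inside else 0)
        image.append(row)
    return image

# A 4x4x4 volume whose intensity encodes depth (z), like stacked CT slices.
volume = [[[z * 10 for x in range(4)] for y in range(4)] for z in range(4)]

# An oblique cut through the stack: x advances with u, z advances with v.
img = slice_volume(volume, origin=(0, 0, 0), u=(1, 0, 0), v=(0, 0, 1), size=4)
print(img[0])  # first row lies in the z=0 slice → [0, 0, 0, 0]
print(img[3])  # last row has climbed to z=3 → [30, 30, 30, 30]
```

Sweeping the origin or tilting the direction vectors frame by frame is what produces the fly-through effect on screen.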
2.4
The Future
Is the future of teaching veterinary anatomy one without cadavers (McLachlan et al. 2004)? Not at the moment. Keeping in mind that the evaluation of teaching resources should be a systematic process of gathering, analyzing, and interpreting reliable information, there is currently too little information to support suppressing dissection classes in veterinary medicine. There is, however, a growing interest in evaluating new technologies to better supplement the anatomy class for the benefit of student learning. Is the future anatomy class a virtual one, where you can log in and participate virtually, hover over a 3D specimen, and be taught the anatomy by a virtual anatomist? This does not seem to be a too distant future. Serious games might also have the potential to become a valuable, cost-effective addition to skills laboratories that can help improve the understanding of some procedures (Sabri et al. 2010; IJgosse et al. 2018), but they have not yet been evaluated in veterinary medicine.
2.5
Conclusion
It is reasonable to say that the anatomy class allows the student to have direct contact (visual, tactile, olfactory) with the anatomy and helps bring into perspective the organization and appearance (size, texture) of organs. However, the time allocated to experiencing anatomy is being squeezed in many institutions, and the difficulty in sourcing animal cadavers requires that alternatives be
identified to make up for these losses. A technology that displays accurate anatomy, is engaging, and has the student actively learning would likely represent a strong asset to complement the anatomy class. Further along the curriculum, when more practical skills are required, the combination of video tutorials with the use of mannequins or simulators (through virtual reality, augmented reality, or serious games) would undoubtedly help the learning of practical skills where anatomy knowledge is essential.
References

Abbink J (1993) Reading the entrails: analysis of an African divination discourse. Man 28:705–726. https://doi.org/10.2307/2803993
Al-Khalili SM, Coppoc GL (2014) 2D and 3D stereoscopic videos used as pre-anatomy lab tools improve students' examination performance in a veterinary gross anatomy course. J Vet Med Educ 41:68–76. https://doi.org/10.3138/jvme.0613-082R
Annandale A, Henry Annandale C, Fosgate GT, Holm DE (2018) Training method and other factors affecting student accuracy in bovine pregnancy diagnosis. J Vet Med Educ 45:224–231. https://doi.org/10.3138/jvme.1016-166r1
Baillie S, Mellor DJ, Brewster SA et al (2005) Integrating a bovine rectal palpation simulator into an undergraduate veterinary curriculum. J Vet Med Educ 32(1):79–85. https://doi.org/10.3138/jvme.32.1.79
Blits KC (1999) Aristotle: form, function, and comparative anatomy. Anat Rec 257:58–63. https://doi.org/10.1002/(SICI)1097-0185(19990415)257:23.0.CO;2-I
Bossaert P, Leterme L, Caluwaerts T et al (2009) Teaching transrectal palpation of the internal genital organs in cattle. J Vet Med Educ 36:451–460. https://doi.org/10.3138/jvme.36.4.451
Capilé KV, Campos GMB, Stedile R, Oliveira ST (2015) Canine prostate palpation simulator as a teaching tool in veterinary education. J Vet Med Educ 42:146–150. https://doi.org/10.3138/jvme.1214-120R1
Christ R, Guevar J, Poyade M, Rea PM (2018) Proof of concept of a workflow methodology for the creation of basic canine head anatomy veterinary education tool using augmented reality. PLoS One 13. https://doi.org/10.1371/journal.pone.0195866
Conner A (2017) Galen's analogy: animal experimentation and anatomy in the second century C.E. Anthós 8(1). https://doi.org/10.15760/anthos.2017.118
Craik EM (1998) The Hippocratic treatise On Anatomy. Class Q 48:135–167. https://doi.org/10.1093/cq/48.1.135
Crossan A, Brewster S, Reid S, Mellor D (2000) A horse ovary palpation simulator for veterinary training. In: International workshop on haptic human-computer interaction. Springer, Berlin, Heidelberg, pp 157–164
Eichel J-C, Korb W, Schlenker A et al (2013) Evaluation of a training model to teach veterinary students a technique for injecting the jugular vein in horses. J Vet Med Educ 40:288–295. https://doi.org/10.3138/jvme.1012-09R1
Feilchenfeld Z, Dornan T, Whitehead C, Kuper A (2017) Ultrasound in undergraduate medical education: a systematic and critical review. Med Educ 51:366–378
Grignon B, Oldrini G, Walter F (2016) Teaching medical anatomy: what is the role of imaging today? Surg Radiol Anat 38:253–260. https://doi.org/10.1007/s00276-015-1548-y
Gummery E, Cobb KA, Mossop LH, Cobb MA (2018) Student perceptions of veterinary anatomy practical classes: a longitudinal study. J Vet Med Educ 45:163–176. https://doi.org/10.3138/jvme.0816-132r1
Hackmann CH, dos Reis D de AL, de Assis Neto AC (2019) Digital revolution in veterinary anatomy: confection of anatomical models of canine stomach by scanning and three-dimensional printing (3D). Int J Morphol 37:486–490. https://doi.org/10.4067/S0717-95022019000200486
IJgosse WM, van Goor H, Luursema JM (2018) Saving robots improves laparoscopic performance: transfer of skills from a serious game to a virtual reality simulator. Surg Endosc 32:3192–3199. https://doi.org/10.1007/s00464-018-6036-0
Janick L, DeNovo RC, Henry RW (1997) Plastinated canine gastrointestinal tracts used to facilitate teaching of endoscopic technique and anatomy. Acta Anat 158(1):48–53
Jukes N, Chiuia M (2006) From guinea pig to computer mouse: alternative methods for a progressive, humane education, 2nd edn. InterNICHE, Leicester
Khalil MK, Abdel Meguid EM, Elkhider IA (2018) Teaching of anatomical sciences: a blended learning approach. Clin Anat 31:323–329
Khot Z, Quinlan K, Norman GR, Wainman B (2013) The relative effectiveness of computer-based and traditional resources for education in anatomy. Anat Sci Educ 6:211–215. https://doi.org/10.1002/ase.1355
Kinnison T, Forrest ND, Frean SP et al (2009) Teaching bovine abdominal anatomy: use of a haptic simulator. Anat Sci Educ 2(6):280–285. https://doi.org/10.1002/ase.109
Klestinec C (2004) A history of anatomy theaters in sixteenth-century Padua. J Hist Med Allied Sci 59:375–412
Knudsen L, Nawrotzki R, Schmiedl A et al (2018) Hands-on or no hands-on training in ultrasound imaging: a randomized trial to evaluate learning outcomes and speed of recall of topographic anatomy. Anat Sci Educ 11:575–591. https://doi.org/10.1002/ase.1792
Kumar AM, Murtaugh R, Brown D et al (2001) Client donation program for acquiring dogs and cats to teach veterinary gross anatomy. J Vet Med Educ 28:73. https://doi.org/10.3138/jvme.28.2.73
Latorre R, Uson J, Climent S, Sanchez-Margallo F, Vazquez JM, Gil F, Moreno F (2000) The use of dog complete gastrointestinal tracts to teach the basic external and internal anatomy necessary for flexible endoscopic training. In: 10th international conference on plastination, Saint Etienne, France, 2–7 July
Latorre R, Lopez-Albors O, Sarasa M, Climent S, Uson J, Sanchez FM, Soria F (2004) Plastination and minimally invasive surgery (digestive system) [DVD]. Imaging and Communications Service of the Minimally Invasive Surgery Centre
Latorre RM, García-Sanz MP, Moreno M et al (2007) How useful is plastination in learning anatomy? J Vet Med Educ 34:172–176. https://doi.org/10.3138/jvme.34.2.172
Latorre R, Bainbridge D, Tavernor A, Albors OL (2016) Plastination in anatomy learning: an experience at Cambridge University. J Vet Med Educ 43:226–234. https://doi.org/10.3138/jvme.0715-113R1
Lee H, Kim J, Cho Y et al (2010) Three-dimensional computed tomographic volume rendering imaging as a teaching tool in veterinary radiology instruction. Vet Med 55(12):603–609
Little WB, Artemiou E, Fuentealba C et al (2019) Veterinary students and faculty partner in developing a virtual three-dimensional (3D) interactive touch screen canine anatomy table. Med Sci Educ 29:223–231. https://doi.org/10.1007/s40670-018-00675-0
McLachlan JC, Bligh J, Bradley P, Searle J (2004) Teaching anatomy without cadavers. Med Educ 38:418–424
McMenamin PG, Quayle MR, McHenry CR, Adams JW (2014) The production of anatomical teaching resources using three-dimensional (3D) printing technology. Anat Sci Educ 7:479–486. https://doi.org/10.1002/ase.1475
Nacher V, Llombart C, Carretero A et al (2007) A new system to reduce formaldehyde levels improves safety conditions during gross veterinary anatomy learning. J Vet Med Educ 34:168–171. https://doi.org/10.3138/jvme.34.2.168
Parkes R, Forrest N, Baillie S (2009) A mixed reality simulator for feline abdominal palpation training in veterinary medicine. In: Medicine meets virtual reality. IOS, Amsterdam
Pérez-Cuadrado E, Latorre R, Carballo F, Pérez-Miranda M, Martín AL, Shanabo J, Esteban P, Torrella E, Mas P, Hallal H (2007) Training and new indications for double balloon endoscopy (with videos). Gastrointest Endosc 66(3):S39–S46
Preece D, Williams SB, Lam R, Weller R (2013) "Let's Get Physical": advantages of a physical model over 3D computer models and textbooks in learning imaging anatomy. Anat Sci Educ 6:216–224. https://doi.org/10.1002/ase.1345
Raffan H, Guevar J, Poyade M, Rea PM (2017) Canine neuroanatomy: development of a 3D reconstruction and interactive application for undergraduate veterinary education. PLoS One 12. https://doi.org/10.1371/journal.pone.0168911
Reiter R, Viehdorfer M, Hescock K et al (2018) Effectiveness of a radiographic anatomy software application for enhancing learning of veterinary radiographic anatomy. J Vet Med Educ 45:131–139. https://doi.org/10.3138/jvme.0516-100r
Sabri H, Cowan B, Kapralos B et al (2010) Serious games for knee replacement surgery procedure education and training. Proc Soc Behav Sci 2:3483–3488
Sattin MM, Silva VKA, Leandro RM et al (2018) Use of a garment as an alternative to body painting in equine musculoskeletal anatomy teaching. J Vet Med Educ 45:119–125. https://doi.org/10.3138/jvme.0716-122r1
Schoenfeld-Tacher RM, Horn TJ, Scheviak TA et al (2017) Evaluation of 3D additively manufactured canine brain models for teaching veterinary neuroanatomy. J Vet Med Educ 44:612–619. https://doi.org/10.3138/jvme.0416-080R
Senos R, Ribeiro MS, De Souza MK et al (2015) Acceptance of the bodypainting as supportive method to learn the surface locomotor apparatus anatomy of the horse. Folia Morphol (Warsz) 74:503–507. https://doi.org/10.5603/FM.2015.0023
Tiplady C (2012) Animal use in veterinary education – the need for a fourth R: respect. Altern Lab Anim 40:5–6. https://doi.org/10.1177/026119291204000512
Tiplady C, Lloyd S, Morton J (2011) Veterinary science student preferences for the source of dog cadavers used in anatomy teaching. Altern Lab Anim 39:461–469. https://doi.org/10.1177/026119291103900507
Van Merriënboer JJG, Sweller J (2010) Cognitive load theory in health professional education: design principles and strategies. Med Educ 44:85–93
Weiglein AH (1997) Plastination in the neurosciences. Acta Anat (Basel) 158:6–9
Zampieri F, ElMaghawry M, Zanatta A, Thiene G (2015) Andreas Vesalius: celebrating 500 years of dissecting nature. Glob Cardiol Sci Pract 2015:66. https://doi.org/10.5339/gcsp.2015.66
3 Body Painting Plus: Art-Based Activities to Improve Visualisation in Clinical Education Settings

Angelique N. Dueñas and Gabrielle M. Finn
A. N. Dueñas (*) · G. M. Finn
Health Professions Education Unit, Hull York Medical School, University of York, York, UK
e-mail: [email protected]; [email protected]

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020
P. M. Rea (ed.), Biomedical Visualisation, Advances in Experimental Medicine and Biology 1260, https://doi.org/10.1007/978-3-030-47483-6_3

Abstract

Art-based activities are increasingly being regarded as an accessible and engaging way to understand the human body and its processes. Such activities include body painting (both regular and ultraviolet [UV]), clay and materials-based modelling, and drawing-focused activities. Integrating art-based approaches into curricula can offer many benefits: they are often cost-effective ways to engage students and to improve clinical acumen and visual understanding of the body. In this chapter, we will introduce various art-based visualisation methods, suggested uses for their integration into curricula, and the associated pros and cons of each, in turn.

Keywords

Anatomy teaching · Art-based teaching · Body painting · Modelling · Visualisation

3.1 Introduction

Innovative approaches to improving visualisation in the biomedical sciences need not always rely on the most technologically advanced methods. There are many art-based approaches that can be used in modern-day clinical education classrooms that engage students in deep and meaningful ways. In this chapter, we will describe such art-based methods and explore the potential benefits that educators, in particular, may find in them. We will also provide some practical ways to implement such practices, with specific examples of content and course structure where art-based approaches may be efficacious.

Of note, for all the approaches discussed in this chapter, the key theme is creativity and emphasising the benefits of new ways of learning. Health professions and biomedically oriented students, and even instructors, may be initially apprehensive towards any teaching method with the term 'art' in the title; art approaches can sometimes incorrectly be branded as puerile or 'non-academic'. However, by keeping an open mind and drawing on the literature base that supports the inclusion of such methods in clinical education (Shapiro et al. 2006; Bell and Evans 2014), we hope that this chapter can help guide you on best practice for these forms of visualisation in your engagement with content. Many of these methods involve active learning
techniques, which have been shown to be quite beneficial for a variety of students, particularly in clinical education (Harris and Bacon 2019). We present a summarised literature base for each method in order to argue for evidence-based implementation as opposed to norm-driven adoption (Finn and Matthan 2019).
3.2 Body Painting

While art-based activities are being integrated more frequently into clinical education curricula, body painting is still considered a somewhat innovative technique. Body painting, within biomedical visualisation, can be described as the painting of internal body structures on the external surface of an individual (Op Den Akker et al. 2002). Body painting can be beneficial for students learning the clinical importance of surface anatomy and how this relates to structure and function (Nanjundaiah and Chowdapurkar 2012; Hafferty and Finn 2015; Finn et al. 2018). The educational implications of body painting for health sciences students are obvious: body painting offers the opportunity for students to engage in visualising (human) anatomy without the need for cadavers or other human tissue specimens. Furthermore, the act of seeing structures on a living, mobile person offers the opportunity to correlate the experience with clinical information. And because body painting is an active and visually memorable experience for students, it is regarded as an effective learning technique (Finn and McLachlan 2010; Finn et al. 2011; Hafferty and Finn 2015; Jariyapong et al. 2016). Furthermore, it has been shown to be an enjoyable and productive educational experience for students and instructors alike (Finn and McLachlan 2010; Finn et al. 2011; Cookson et al. 2018).

3.2.1 Suggested Use in Curriculum

Body painting best aligns with anatomical or clinical components of the curriculum. Whilst memorable and unique, it is not often feasible for body painting to be the primary source of anatomy information within your health education curriculum. Rather, this art-based approach is best suited as a supplementary form of teaching. If interested in integrating body painting (or any art-based approach, for that matter) into your curriculum, you would do well to first ask yourself a series of questions, such as the following:

• Which learning outcomes does this approach align to?
• What are the current instructor/staff-to-student ratios in your course?
• Does your department have the resources to allocate funds and space to integrate a body-painting session?
• Does body painting align with your course or university's mission statements?

If you believe that body painting would be a 'good fit' in your anatomy or clinical classroom, the next step would be to consider where and with what content to integrate body painting. The unique visualisation aspect naturally lends itself well to certain subject matter. For traditional body painting, we recommend the following topics: dermatomes, areas of referred pain, abdominal viscera and quadrants, skeletal anatomy, musculature and facial muscles/neurovasculature (see Fig. 3.1 for some examples). There are certainly other topics for which body painting could be utilised, but it is important to consider the purpose. In many of the previous examples, body painting can be a wonderful teaching opportunity because such topics are frequently not well understood by students – known as threshold concepts (Meyer and Land 2006) – difficult to appreciate in non-moving/living models and have important clinical implications, where body painting may contribute to a student's understanding of clinical exams and disease presentation. Clinical correlation may be the key indicator of what topic to address using body painting; for example, in demonstrating the importance of understanding facial muscle anatomy in administering botulinum toxin A, body painting of such muscles on a model was demonstrated to be an innovative teaching method (Boggio 2017).
Fig. 3.1 Images showing an example of body painting the bones of the hands and wrist
With a plan for your body-painting session in mind, the next step is to consider logistics. You will need to order supplies in sufficient quantities so that all students can be engaged (Finn 2010). Educational body painting works best when students are provided with some sort of physical guide, with details of what they will be painting in the session, key structures and how structures correlate with course learning outcomes. Images for reference are also a must: these can be select images printed for the students, access to atlases or encouragement for students to use their own mobile devices to research their own images. Body painting has the benefit of being a very group-oriented activity; however, it has been shown to be manageable even in large class settings (Op Den Akker et al. 2002) and extensive public engagement settings (Finn et al. 2018).

After introducing the session, the materials and their use, it is then best to have students form small groups of two to three and suggest a division of roles. One student can act as a model and the other two as painters. All can participate in researching and checking anatomical or clinical information. This small group format not only promotes communication and teamwork but also ensures that body painting is an accessible art-based activity. Students with concerns or sensitivities about the close contact of body painting, cultural or religious stipulations, or those with disabilities can still be included in body painting via the addition of a researcher role. With this model, one student models, one paints and one researches or guides the painter by checking the painting against provided visuals or atlases, or checking notes for key landmarks or information.

For a thorough review of integrating body painting into a clinical curriculum, Op Den Akker et al. (2002) and McMenamin (2008) offer excellent overviews of integrating body painting for thoracic and abdominal organ content with medical students. Finn (2010) also provides a 12-tips-style article for those interested in running a body-painting session. Below, however, are some key pros and cons one may wish to consider. Also, while many examples illustrate the use of body painting with human bodies, educators need not feel limited if they do not focus on
human anatomy education. Equine models for body painting have been used to demonstrate locomotion for veterinary anatomy students (Senos et al. 2015). If those interested in exploring animal models for painting have concerns about time and potential mess, garment body painting, a technique in which a thin garment is put on prior to painting, may also be an option to explore. Such a method has been demonstrated on equine models as well (Sattin et al. 2018). Garments can also be used for body painting in humans, with swim caps working well for cranial anatomy and gloves for easily removable hand anatomy (Finn et al. 2018). Even the addition of a vest, t-shirt or leggings to paint on, in lieu of painting directly onto skin, can be regarded as useful by students (Skinder-Meredith 2010; Finn 2010).
3.2.2 Pros of Body Painting

Many of the benefits of body painting might seem obvious from the above description. Body painting creates an active learning environment and requires students to engage with materials in new ways, which can be perceived quite positively compared to methods that rely on memorisation or passive learning (McMenamin 2008; Finn and McLachlan 2010). This is of particular benefit in the clinical educational setting. While body painting, students are often palpating, determining anatomical borders and considering anatomy from the perspective of a living person, which can be excellent preparation for the use of anatomy in clinical practice.

Body painting, like many of the art-based activities introduced in this chapter, is also a cost-effective innovation, or even alternative, to traditional and modern anatomy image resources. In programs where access to cadaveric materials is limited, or where funding and technical management of digital image visualisation resources is not possible, body painting can prove to be an affordable alternative. This informal technique can also be adapted in many environments to take the emphasis away from death and
cadavers and make students see anatomy from the perspective of working with the living.
3.2.3 Cons of Body Painting

While a practical and engaging method, body painting does not come without specific disadvantages. As any activity with paint implies, potential mess is to be considered. This applies both to the space and to the subjects body painting is used with. Most standard body paints are water soluble and wash away, but body painting is still not recommended in spaces with carpet or fabric furniture. Additionally, depending on the time of day and content of a session, the student models might be less in favour of such an activity if they consider the mess too much of an inconvenience. Providing students with access to changing rooms, where they can switch in and out of body-painting attire and wash up afterwards, is key. Supplying sufficient paper towels, as well as wet wipes, can also prove important in managing body painting–associated messes.

Another downfall is the discipline association. Many students may not be inclined to take part in such a blatantly art-based activity in contact time that they believe should be dedicated to science. In combination, concerns about mess and hesitation to participate in an overtly art-focused activity have been documented as sources of negative student evaluations of body painting (Green and Dayal 2018). However, student apprehension can often be quelled with clear communication about the goals of body painting and the clinical perspective it can provide.
3.2.4 Sight Unseen: UV Body Painting as an Addition to Body Painting

While standard body painting might prove an excellent starting place for integrating art-based approaches into your curriculum, the addition of ultraviolet (UV) body painting can prove even more beneficial for certain activities.
UV body paint can be used alone but is more visually striking and better conveys structure when traditional body paints are combined with UV accents. When these paints are viewed under UV light, which is itself invisible to the human eye, they have a glowing effect. Depending on the colours, UV paints can be visualised without the presence of UV light, but the glowing nature makes for a much more striking and easily seen effect. The variance in visualisation can make the utilisation of UV paints very distinctive. UV paints can be used to create hidden layers: students can add labels or features to body-painted structures which can only be fully appreciated under UV light, making body painting more of a recall activity. This shifting presence can also be beneficial in creating the appearance of layered structures. UV paints also tend to have a wet, visceral, reflective appearance under UV lights. Such properties make UV paints excellent additions to body painting of muscular structures, making contraction and tension more obvious on living models. Figure 3.2 highlights how striking UV body painting can be.

UV body painting, while sharing many pros and cons of traditional body painting, also presents unique benefits and challenges. One positive of UV body paints is that they tend to be more inclusive of all learners, more so than traditional body paints. For students with darker skin tones, traditional body painting with standard water-based paints can present challenges: some colours need to be applied in multiple layers to be properly visualised. With UV paints, the addition of the light for visualisation means that little paint needs to be applied to be seen on any skin tone. This makes UV paints a more naturally inclusive tool. However, the benefits of layering also mean that the addition of UV paints to traditional body painting can prove more of a time commitment. UV paints work best when applied to a dry surface; this may require allowing initial body-painted structures time to dry. Figure 3.3 highlights how regular and UV body paints can be used in combination. In sessions where time is a major concern, UV body paints may not be easily utilised. Lastly, the use of ultraviolet light does present minor health concerns. Students should be well informed about the use of UV lights and have an opportunity to discreetly express any health concerns. Further, for all participants, UV lights should only be used as necessary.
Fig. 3.2 UV body-painting images, highlighting examples of abdominal viscera, muscles of the back, hand anatomy and musculature painted in vivid, non-traditional colour

Fig. 3.3 Images showing how regular and UV body paints can be used in combination, with muscles of the anterior thigh and muscles of facial expression

3.3 Clay Modelling

Clay modelling is another simple art-based approach that can be utilised in the clinical education setting. This technique involves using modelling clay, whether it be air-dry varieties or children's reusable kinds, to create models or to build upon existing plastic models.
3.3.1 Suggested Use in Curriculum

When thinking about clay modelling, it is often best to consider which concepts your students might find challenging to visualise or recall with confidence. Clay modelling can create a kinaesthetic opportunity for students to create their own visualisations, which works well with topics such as embryology, where processes can be difficult for students to understand via standard images, but models are often limited. Such concepts might include formation of cardiovascular structures, development of the bilaminar and trilaminar germ disc, embryonic folding or any number of developmental processes, as clay modelling allows one to easily construct a series of structures. Figure 3.4 shows how clay modelling can be used to allow students to visualise 3D processes in embryonic development. Similarly, clay modelling can be used with neuroanatomy to help students understand spatial relations of neural structures, such as those in the limbic system or basal ganglia, or even on a more physiological
level of neuronal communication. Using existing plastic models, the use of reusable clay can also allow students to add their own layers to existing structures, such as modelling the muscles of facial expression onto a plastic skull model (also depicted in Fig. 3.4) or creating 3D depictions of periventricular structures (Estevez et al. 2010; Kooloos et al. 2014; Hafferty and Finn 2015; Akle et al. 2018). The benefits of clay modelling can also be applied to even the most basic sciences, utilising clay structures for students to better understand molecular structures.

Fig. 3.4 Images depicting clay models of muscles of facial expression (top left and right), as well as embryonic facial development (bottom left)

Clay modelling activities also have the benefit of being usable in a variety of educational settings, perhaps more so than body painting, which often requires its own session time due to the associated mess. Clay modelling can be integrated into a stand-alone large group session, incorporated as part of a lab, used as a component in a flipped classroom setting or even introduced as an activity to break up more traditional lecture time. It can be utilised by individual students or in pairs or small groups. It simply depends on the selected task, the subject you are interested in and the goal of the activity. However, the active nature can prove very stimulating to students, even when integrated into a larger session with a variety of activities (Naug et al. 2011). Clay modelling has also been shown to be a valuable teaching method when integrated with technological approaches. Oh et al. (2009) found that creating and slicing clay models in conjunction with teaching radiology led to better short-term knowledge scores, and that the clay modelling was a very positively received activity among students.

Clay modelling can also be encouraged as a recommended activity, in lieu of being integrated into the formal curriculum. If you provide students with modelling resources outside of scheduled contact hours, along with guidance or explanation videos, students can choose to use this art-based approach as a supplemental studying technique.
3.3.2 Pros of Clay Modelling

One of the immediately recognisable positives of clay modelling is the versatility of this art-based approach. Dissimilar to body painting, which is very much aligned with gross anatomy, clay modelling works well for a variety of clinical education subjects, from the basic to the anatomical sciences. This is particularly useful with the rise of integrated curricula: having a tool that functions well for many aspects of a different course or module is convenient and can provide a sense of continuity to students.

Clay modelling is also very cost-effective; even with large class sizes, it is affordable and easy to provide all students equal access to modelling tools. While, depending on the brand and type of modelling materials purchased, the clay might not be long-lasting, it is a resource that can be easily replaced at low cost. Given how cost-effective clay modelling can be, it might prove especially effective in settings where full specimen dissection is not an affordable or attainable option. Several studies have found that in teaching settings that typically rely on animal dissection activities, clay modelling of hand muscles proved to be an effective and well-received learning strategy, particularly when the focus of the content is human-based (Waters et al. 2005; Motoike et al. 2009; DeHoff et al. 2011; Haspel et al. 2014). It has been shown to be more effective than lectures alone when used as an adjunct (Myers et al. 2001).
3.3.3 Cons of Clay Modelling
A. N. Dueñas and G. M. Finn
One of the major cons of clay modelling is that it is a low fidelity method. With accessible and cost-effective versions of this activity (such as the use of children's modelling clay), models can be quite crude. With most types of clay used, models are not long-lasting or structurally sound, and therefore cannot easily be reused and must be reproduced. Compared to traditional plastic models or even 3D printed models, the low fidelity of clay models might not prove as useful to instructors or demonstrators. However, as outlined in the suggested curriculum section above, hands-on creation of such models might prove extremely valuable to students, more so than to demonstrators.

3.4 Material Modelling

What we refer to as 'material modelling' is the use of easily obtainable materials to create simple visualisations. Such materials may include art supplies and household items. While often simple in design, use of materials can be novel and assist students in understanding complex processes. Even low fidelity examples have the potential to improve upon spatial understanding and to encourage student participation and eagerness in approaching material (Chan and Cheng 2011).

3.4.1 Suggested Use in Curriculum

Material modelling is a wonderful option for demonstrations within a didactic setting. Bringing material models to your lecture can be a great way to engage students differently in a lecture-type setting. Cloth models can be used to demonstrate a variety of processes, such as layers of the abdominal wall or organ positioning relating to the peritoneum (Chan 2010). Embryology content, with its complex visualisations, is another topic area where material models may be used; development of the gut can be demonstrated by gluing fabric strips or ribbon to a standard kitchen apron, for example (Chan 2010). Another simple example is the use of a tennis ball and Velcro strips, common and easily obtainable materials, which can be used for a simple but effective demonstration of the extraocular muscles (Velcro) acting on the eye (tennis ball). Figure 3.5 shows some stills of this tennis ball model, but it is best depicted in practice, when pulling on the Velcro can demonstrate muscle movement. Many options for this same visualisation exist, using materials on hand, such as a hamster ball 'eye' with tied elastic 'muscles' or even a paper mâché eye with attached strings for muscles, also depicted in Fig. 3.5.

Some material modelling may require more assembly to create a visualisation, but this may result in a more viable, reusable model, such as the 'Anato-Rug', a material model that was created from rugs and fabric to demonstrate equine anatomy on a living model (Braid et al. 2012). Models using simple fabric swatches on a t-shirt have been created to demonstrate human mesenteries and viscera (Noël 2013). Whether using pipe cleaners, cloth, balloons, tape or other basic simple materials, or constructing a more elaborate model, the key point of material modelling is to approach teaching sessions creatively, using everyday materials.

Like many of these other techniques, material modelling is a great way to incorporate simple active learning opportunities into a variety of educational settings at varying levels (Zumwalt et al. 2010), and such models have been advocated as having a unique educational value despite their low-tech construction (Chan 2015). The use of pipe cleaners, for example, is one of the easiest ways to have students engage in active learning, as depicted in Fig. 3.6. Given the topic of the brachial plexus, for example, many instructors introduce students to simple drawn schematics to help them learn the names and distributions of the roots involved. Material modelling can build upon such a session, reinforcing learning via a different technique. Distributing pipe cleaners to students, and having them model the brachial plexus, can move students' basic understanding from a 2D drawn plexus to a 3D constructed plexus, which is still simplified compared to a real plexus, but allows students to now
3 Body Painting Plus: Art-Based Activities to Improve Visualisation in Clinical Education Settings
Fig. 3.5 Examples of low fidelity models including the tennis ball eye with Velcro extraocular muscles (top set) and a paper mâché eye with string extraocular muscles (bottom)
Fig. 3.6 Examples of the use of pipe cleaners to recreate sinuses within model skulls
A. N. Dueñas and G. M. Finn
Fig. 3.7 The images highlight how art-based approaches can be used in combination by showing a simple drawn schematic with pipe-cleaner material modelling of the brachial plexus
consider the distributions much differently. Figure 3.7 highlights a 3D brachial plexus model made of pipe cleaners. In addition to pipe cleaners, string, straws, wooden sticks and beads are all easily distributable and affordable materials that can be used in teaching sessions, as Rios and Bonfim (2013) demonstrated in their material model of sliding filament theory when teaching the musculoskeletal system.
Material modelling can also be taken a step further when applied to real models, just as with clay modelling. In this brachial plexus scenario, students who have drawn a plexus and built a simple pipe cleaner model can then take that model and construct it on a plastic skeleton to create more context for the structure. Another example of material modelling is the use of pipe cleaners or string/thread with skull models to
demonstrate various foramina and the structures that pass through them. The use of materials in this example can also be easily adapted to the appropriate knowledge level of the student population you are working with. For example, pipe cleaners of various colours can be used initially to have students practise identifying various foramina. The next stage of this activity could then be to have students identify the structures that pass through a given foramen. For cranial nerves in particular, students could also then be asked to describe the function of a nerve passing through a foramen, or even asked to use string on skull models to demonstrate the neuron type passing through a given foramen.
3.4.2 Pros of Material Modelling

Material modelling, like many of these approaches, is very cost-effective. Materials such as cloth for lecturer demonstrations may even be repurposed from old sheets you have on hand. Further, many materials, such as pipe cleaners, can be collected after a teaching session and reused. Compared to methods using clay, the reusable nature of material modelling makes it one of the most cost-effective methods to use.

This approach is also extremely time-effective. As mentioned above, a demonstration using material modelling does not need to be its own session and can be easily integrated into even a didactic setting. Further, materials can be provided to students, who can be encouraged to use them outside of formal instruction hours. Pipe cleaners, for example, can be a study tool that is easily carried around by students.

Lastly, material modelling can be quite a memorable experience for students, particularly when integrated into didactic or small group settings. While simple to produce, material modelling demonstrates creativity in teaching, particularly in the digital age. Such engagement may lead to better knowledge retention and recall for students.
3.4.3 Cons of Material Modelling

Material modelling is most obviously a low fidelity activity. Models are often crude in appearance and imperfect, especially at first attempt. Further, many common material models are not designed to be long-lasting; the tennis ball eye, for example, is not a durable model, merely a low fidelity way of demonstrating in 3D the complex coordination of eye movement.

Working with materials also requires some level of manual dexterity and visual acuity, particularly when working with brightly coloured materials such as pipe cleaners and thread. Accommodating activities so that they are accessible for all students is a key consideration. For example, if students self-identify as colour blind, efforts should be made to provide pipe cleaners in colours those students can distinguish easily; pipe cleaners or strings of varied sizes may also be of interest to students. As for manual dexterity, students with qualms about this can be reassured by the low fidelity of material modelling. Given that creating models is, in a sense, low stakes, students can be encouraged by the fact that models are not meant to be perfect.
3.5 Drawing-Focused Activities
The most basic of the art-based activities that can be beneficial for visualisation in the biomedical sciences is drawing. Across many science-related subjects, drawing activities have been shown to enhance student learning by acting as a cognitive tool that supports active reasoning and the consolidation of knowledge (Wu and Rau 2019). Many students may already be integrating their own drawings and schematics into their studying of subject matter, but formally adding evidence-based, structured drawing activities can be highly beneficial (Hu et al. 2018).
3.5.1 Suggested Use in Curriculum

Because of how diverse drawing activities can be, you are likely able to integrate drawing into any
session type with any topic matter. Below are some suggestions and examples.

Drawing is very easily incorporated into a didactic setting, although if incorporating drawing into a didactic session, you may wish to consider bringing extra paper for students, to ensure that those who prefer to use laptops or tablets do not feel excluded by the inclusion of such an activity. One great way to maximise student learning and incorporate drawing into a didactic session is to start a lecture session with a spaced repetition activity that focuses on drawing. Starting a lecture by having students draw a schematic introduced days or weeks before combines the unique visual aspects of drawing with spaced review techniques that can be a powerful tool for long-term memory of material (Kang 2016). Figure 3.7 shows a simple drawn schematic of the brachial plexus, accompanied by a pipe cleaner model using similar colours. These images highlight how art-based approaches can also build upon each other, either within sessions or across consecutive sessions.

Drawing can also be integrated throughout didactic settings, particularly those with integrated content, to help segue or connect portions. For those inclined to use technology-enhanced approaches, drawing screencasts have been demonstrated to be powerful for the learning gains of students, compared to traditional presentation of information via textbooks (Pickering 2017).

Drawing may also find a natural place in lab sessions of different types. While seemingly very basic, drawing histological images has been shown to improve long-term knowledge retention (Balemans et al. 2016), and integrating visual arts into histology education can be viewed quite favourably by students (Cracolici et al. 2019). Bringing white boards into your anatomical labs can also be a simple way to encourage drawing activities in association with prosection- or dissection-focused content. White boards can be easily used by students to review content when not directly involved with specimens, and also offer an easy opportunity for instructors to review a pertinent schematic while running a specimen-focused session. The Observe-Reflect-Draw-Edit-Repeat (ORDER) process of integrating art and drawing into an anatomy practical session has been shown to be very engaging for students and educators (Backhouse et al. 2017). This process requires students to engage in critical observation of anatomy and incorporate drawing into their construction of knowledge. Such a process could also be applied to other visual content in the biomedical sciences. And while many labs move towards more technologically advanced systems and set-ups, having cost-effective basics available, too, can prove quite beneficial.

Time and staff allowing, drawing may also fit into a stand-alone session or workshop, particularly when offered as an optional, supplemental opportunity (Borrelli et al. 2018). When combined with living models or other models/specimens, as demonstrated in Fig. 3.8, these sessions can be regarded as a unique and powerful learning experience for students, which may even encourage student well-being (Reid et al. 2019; Moore et al. 2011; James et al. 2017). You may also find that drawing does not always imply the need for paper. In combination with approaches outlined previously in this chapter, drawing on materials or wearables, such as gloves, has been shown to be an effective method of teaching (Lisk et al. 2015).

Fig. 3.8 Life drawing, either using living or plastic models, is another great way to incorporate art-based approaches into your curriculum, as shown in the above image

3.5.2 Pros of Drawing-Focused Activities

As stated above, drawing is extremely easy and versatile to introduce into a variety of teaching sessions, with many different types of content. It is also extremely cost-effective, requiring minimal to no materials that need to be provided by the instructor.

Beyond the benefits for learning and memory, drawing can be associated with an observational skills benefit, which may be associated with better clinical acumen. Even basic drawings, including schematics, require attention to detail and the ability to focus on patterns and presentations of material. These types of observational skills are also associated with clinical skills of observation, which can be reinforced for overall better clinical education.

3.5.3 Cons of Drawing-Focused Activities

While drawing is flexible for teaching sessions, drawing-focused sessions may require more time, which is generally not readily available in a compact clinical curriculum. However, if there is institutional and faculty interest in coordinated, heavily drawing-focused sessions, there may be an opportunity to hold and assess these via optional teaching sessions for interested students.

Another major con for drawing activities can be initial student apprehension. Students may not initially be interested in what seems most obviously like 'art', particularly students deeply embedded in science/clinical education. Some students may also initially be apprehensive towards any activity with 'art' or 'drawing' in the title, as these can carry the connotation that artistic skill is required or will somehow be judged. With any drawing activity, one major point that should be communicated to students is that the focus is on the act, not the art. The key here is to clearly communicate to, and often remind, students that the benefit comes from the active nature of drawing, not necessarily from the artistic quality of their work.

3.6 Conclusions

Art-based activities are an active and often affordable way to present content to students in a novel and engaging way. Many of these approaches can be simply integrated into existing curricula, after thorough consideration of which equipment, tasks, pros and cons are most beneficial to one's educational goals. Table 3.1 provides a simplified comparison of these considerations for the various art-based approaches introduced in this chapter, with particular focus on the equipment needed and example tasks.
Table 3.1 Simplified comparison of methods presented in this chapter, focusing on equipment and example tasks

Method | Equipment | Example tasks
Body painting | Body paints, paint brushes, makeup sponges, eyeliner pencils, paint cups (with water), paper towels, wet wipes, instruction sheets | Muscles, dermatomes, areas of referred pain, skeletal system, visceral organ anatomy
UV body painting | UV body paints, UV body-painting crayons, UV lamps or torches, specific instructions for UV paints | Muscles, abdominal anatomy, labelling of structures
Clay modelling | Modelling clay, plasticine or children's reusable clay | Embryonic processes, neuroanatomical structures, molecular or macromolecular structures or interactions
Material modelling: pipe cleaners | Pipe cleaners, scissors | Brachial plexus, neuroanatomical structures, vasculature
Material modelling: fabrics and other materials | Any type of felt or fabric, rugs, string/yarn, Velcro, paper mâché materials, scissors, fabric/super glue | Embryonic processes (e.g. gut rotation), layered muscles, function of extraocular muscles
Drawing activities | Additional paper, whiteboards with white board markers | Virtually any subject matter; histology, neuroanatomy, physiology, cross-sectional anatomy

3.6.1 General Considerations

All of these processes have general considerations that should be applied for a successful session, such as accessibility of activities for varying learner populations, photography consent and session preparation. As outlined in many of the pros and cons of each approach, art-based activities might seem initially uncomfortable or inaccessible for some student populations, and there can be learning curves for those who implement such activities. Therefore, consideration of all participants is key when determining if an art-based approach can or should be used in your clinical education setting. To make art-based activities successful, we recommend that the following general best-practice steps be followed:

• Create a comfortable classroom environment, so students who may have concerns feel safe to disclose them. For example, students with certain religious beliefs may not feel comfortable with the clothing or undressing that often accompanies body painting, particularly around peers. As an instructor, you should let all students know that, if they have such concerns, art-based activities are by nature very accommodating, and that they should feel comfortable approaching you to discuss those concerns.
• Clearly communicate the goals of activities. Creating models or art can sometimes lead students to believe they need to focus on the quality of the product, and that if what they are working on is not 'perfect' or even 'good', the activity therefore holds no value. Reminding students during and after sessions that the focus of art-based activities is the process, not the end result, can help students understand the value and learning goals.
• Preparation is, as always, the key to success. You would likely never take students into a cadaver lab or simulation room, or provide a technology-enhanced learning session, with no plan. The same thought processes should be applied to art-based activities, even when they are considered to be generally of much lower fidelity. Plan a session that fits best with your curriculum. Practise the activity ahead of time if possible. Prepare explicit instructions for students. And be sure you order enough materials. Preparation will make your art-based activity the best it can be.
• Consider consent. An overall positive of these activities is that students are generally excited to create something and eager to share. Remind students to check in with others prior to putting anything on social media, particularly in the case of body painting. As an instructor who might also be interested in documenting sessions, it is also key to obtain informed consent, in line with your university's photography protocols, so that students are aware of how and where their images may appear.

3.6.2 Note on Public Engagement

While not specifically touched upon in this chapter, many of these techniques are also applicable in various public engagement and outreach settings. Body painting, in particular, is a great opportunity for attracting attention and using that attention as an opening for science communication about the body and how it functions. There are numerous additional considerations to be made when implementing art-based activities in outreach settings, but the general considerations above should still be applied. In particular, consider the setting and the audience of the outreach, beyond any changes to content. Will the activity be observational or participatory (meaning, will the 'audience' be watching you demonstrate an activity, or actually doing an activity themselves)? What numbers are you expecting? Are there any additional considerations of consent, now that you are working with the general public? Your institution's public and media relations office may be a great group to consult if you are interested in adapting any of these art-based approaches for the general public.
References

Akle V, Peña-Silva RA, Valencia DM, Rincón-Perez CW (2018) Validation of clay modeling as a learning tool for the periventricular structures of the human brain. Anat Sci Educ 11:137–145
Backhouse M, Fitzpatrick M, Hutchinson J, Thandi CS, Keenan ID (2017) Improvements in anatomy knowledge when utilizing a novel cyclical "Observe-Reflect-Draw-Edit-Repeat" learning process. Anat Sci Educ 10:7–22
Balemans MC, Kooloos JG, Donders ART, Van Der Zee CE (2016) Actual drawing of histological images improves knowledge retention. Anat Sci Educ 9:60–70
Bell LTO, Evans DJR (2014) Art, anatomy, and medicine: is there a place for art in medical education? Anat Sci Educ 7:370–378
Boggio RF (2017) Dynamic model of applied facial anatomy with emphasis on teaching of botulinum toxin A. Plast Reconstr Surg Glob Open 5:e1525–e1525
Borrelli MR, Leung B, Morgan M, Saxena S, Hunter A (2018) Should drawing be incorporated into the teaching of anatomy? J Contemp Med Edu 6:34–48
Braid F, Williams SB, Weller R (2012) Design and validation of a novel learning tool, the "Anato-Rug," for teaching equine topographical anatomy. Anat Sci Educ 5:256–263
Chan LK (2010) Pulling my gut out—simple tools for engaging students in gross anatomy lectures. Anat Sci Educ 3:148–150
Chan LK (2015) The use of low-tech models to enhance the learning of anatomy. In: Teaching anatomy. Springer, Cham
Chan LK, Cheng MMW (2011) An analysis of the educational value of low-fidelity anatomy models as external representations. Anat Sci Educ 4:256–263
Cookson NE, Aka JJ, Finn GM (2018) An exploration of anatomists' views toward the use of body painting in anatomical and medical education: an international study. Anat Sci Educ 11:146–154
Cracolici V, Judd R, Golden D, Cipriani NA (2019) Art as a learning tool: medical student perspectives on implementing visual art into histology education. Cureus 11:e5207
Dehoff ME, Clark KL, Meganathan K (2011) Learning outcomes and student-perceived value of clay modeling and cat dissection in undergraduate human anatomy and physiology. Adv Physiol Educ 35:68–75
Estevez ME, Lindgren KA, Bergethon PR (2010) A novel three-dimensional tool for teaching human neuroanatomy. Anat Sci Educ 3:309–317
Finn GM (2010) Twelve tips for running a successful body painting teaching session. Med Teach 32:887–890
Finn GM, Matthan J (2019) Pedagogical perspectives on the use of technology within medical curricula: moving away from norm driven implementation. In: Biomedical visualisation. Springer, Cham
Finn GM, McLachlan JC (2010) A qualitative study of student responses to body painting. Anat Sci Educ 3:33–38
Finn GM, White PM, Abdelbagi I (2011) The impact of color and role on retention of knowledge: a body-painting study within undergraduate medicine. Anat Sci Educ 4:311–317
Finn GM, Bateman J, Bazira P, Sanders K (2018) Ultraviolet body painting: a new tool in the spectrum of anatomy education. Eur J Anat 22:521–527
Green H, Dayal MR (2018) A qualitative assessment of student attitudes to the use of body painting as a learning tool in first year human anatomy: a pilot study. Int J Anat Res 6:5134–5144
Hafferty FW, Finn GM (2015) The hidden curriculum and anatomy education.
In: Chan LK, Pawlina W (eds) Teaching anatomy: a practical guide. Springer, Cham
Harris N, Bacon CEW (2019) Developing cognitive skills through active learning: a systematic review of health care professions. Athl Train Educ J 14:135–148
Haspel C, Motoike HK, Lenchner E (2014) The implementation of clay modeling and rat dissection into the human anatomy and physiology curriculum of a large urban community college. Anat Sci Educ 7:38–46
Hu M, Wattchow D, De Fontgalland D (2018) From ancient to avant-garde: a review of traditional and modern multimodal approaches to surgical anatomy education. ANZ J Surg 88:146–151
James C, O'Connor S, Nagraj S (2017) Life drawing for medical students: artistic, anatomical and wellbeing benefits. MedEdPublish 6
Jariyapong P, Punsawad C, Bunratsami S, Kongthong P (2016) Body painting to promote self-active learning of hand anatomy for preclinical medical students. Med Educ Online 21:30833
Kang SH (2016) Spaced repetition promotes efficient and effective learning: policy implications for instruction. Policy Insights Behav Brain Sci 3:12–19
Kooloos JGM, Schepens-Franke AN, Bergman EM, Donders RART, Vorstenbosch MATM (2014) Anatomical knowledge gain through a clay-modeling exercise compared to live and video observations. Anat Sci Educ 7:420–429
Lisk K, McKee P, Baskwill A, Agur AMR (2015) Student perceptions and effectiveness of an innovative learning tool: Anatomy Glove Learning System. Anat Sci Educ 8:140–148
McMenamin PG (2008) Body painting as a tool in clinical anatomy teaching. Anat Sci Educ 1:139–144
Meyer J, Land R (2006) Overcoming barriers to student understanding: threshold concepts and troublesome knowledge. Routledge, London
Moore CM, Lowe C, Lawrence J, Borchers P (2011) Developing observational skills and knowledge of anatomical relationships in an art and anatomy workshop using plastinated specimens. Anat Sci Educ 4:294–301
Motoike HK, O'Kane RL, Lenchner E, Haspel C (2009) Clay modeling as a method to learn human muscles: a community college study. Anat Sci Educ 2:19–23
Myers DL, Arya LA, Verma A, Polseno DL, Buchanan EM (2001) Pelvic anatomy for obstetrics and gynecology residents: an experimental study using clay models. Obstet Gynecol 97:321–324
Nanjundaiah K, Chowdapurkar S (2012) Body-painting: a tool which can be used to teach surface anatomy. J Clin Diagn Res 6:1405
Naug HL, Colson NJ, Donner DG (2011) Promoting metacognition in first year anatomy laboratories using plasticine modeling and drawing activities: a pilot study of the "Blank Page" technique. Anat Sci Educ 4:231–234
Noël GPJC (2013) A novel patchwork model used in lecture and laboratory to teach the three-dimensional organization of mesenteries. Anat Sci Educ 6:67–71
Oh C-S, Kim J-Y, Choe YH (2009) Learning of cross-sectional anatomy using clay models. Anat Sci Educ 2:156–159
Op Den Akker JW, Bohnen A, Oudegeest WJ, Hillen B (2002) Giving color to a new curriculum: bodypaint as a tool in medical education. Clin Anat 15:356–362
Pickering JD (2017) Measuring learning gain: comparing anatomy drawing screencasts and paper-based resources. Anat Sci Educ 10:307–316
Reid S, Shapiro L, Louw G (2019) How haptics and drawing enhance the learning of anatomy. Anat Sci Educ 12:164–172
Rios VP, Bonfim VMG (2013) An inexpensive 2-D and 3-D model of the sarcomere as a teaching aid. Adv Physiol Educ 37:343–346
Sattin MM, Silva VK, Leandro RM, Foz Filho RP, De Silvio MM (2018) Use of a garment as an alternative to body painting in equine musculoskeletal anatomy teaching. J Vet Med Educ 45:119–125
Senos R, Ribeiro M, De Souza Martins K, Pereira L, Mattos M, Júnior JK, Rodrigues M (2015) Acceptance of the bodypainting as supportive method to learn the surface locomotor apparatus anatomy of the horse. Folia Morphol (Warsz) 74:503–507
Shapiro J, Rucker L, Beck J (2006) Training the clinical eye and mind: using the arts to develop medical students' observational and pattern recognition skills. Med Educ 40:263–268
Skinder-Meredith AE (2010) Innovative activities for teaching anatomy of speech production. Anat Sci Educ 3:234–243
Waters JR, Meter PV, Perrotti W, Drogo S, Cyr RJ (2005) Cat dissection vs. sculpting human structures in clay: an analysis of two approaches to undergraduate human anatomy laboratory education. Adv Physiol Educ 29:27–34
Wu SP, Rau MA (2019) How students learn content in science, technology, engineering, and mathematics (STEM) through drawing activities. Educ Psychol Rev 31:87–120
Zumwalt AC, Lufler RS, Monteiro J, Shaffer K (2010) Building the body: active learning laboratories that emphasize practical aspects of anatomy and integration with radiology. Anat Sci Educ 3:134–140
4 TEL Methods Used for the Learning of Clinical Neuroanatomy

Ahmad Elmansouri, Olivia Murray, Samuel Hall, and Scott Border
Abstract
Ubiquity of information technology is undoubtedly the most substantial change to society in the twentieth and twenty-first centuries and has resulted in a paradigm shift in how business and social interactions are conducted universally. Information dissemination and acquisition is now effortless, and the way we visualise information is constantly evolving. The face of anatomy education has been altered by the advent of such innovation, with Technology-Enhanced Learning (TEL) now commonplace in modern curricula. With the constant development of new computing systems, the temptation is to push the boundaries of what can be achieved rather than addressing what should be achieved. As with clinical practice, education in healthcare should be evidence driven. Learning theory has supplied educators with a wealth of information on how to design teaching tools, and this should form the bedrock of technology-enhanced educational platforms. When analysing resources and assessing if they are fit for purpose, the application of pedagogical theory should be explored and the degree to which it has been applied should be considered.

A. Elmansouri (*) · S. Border
Centre for Learning Anatomical Sciences, University Hospital Southampton, Southampton, UK
e-mail: [email protected]

O. Murray
Edinburgh Medical School: Biomedical Sciences (Anatomy), University of Edinburgh, Edinburgh, UK
e-mail: [email protected]

S. Hall
Neurosciences Department, Wessex Neurological Centre, University Hospital Southampton, Southampton, UK
e-mail: [email protected]

Keywords
Technology-Enhanced Learning · Anatomy learning · Screencast · E-learning · Neuroanatomy education · Theory of multimedia learning
4.1 Definition
Technology-Enhanced Learning (TEL) is an obscure term, the definition of which has a variety of different interpretations (Kirkwood and Price 2014). These can be broadly categorised into the five definitions below for ease of understanding:

1. TEL is a synonym for any sort of educational technology.
2. TEL is a synonym for e-learning.
3. TEL can refer to technology-enhanced classrooms (which can mean either a classroom with information and communication technology (ICT) or distance learning using a virtual classroom).
4. TEL can express an attitude in favour of 'seamless' (or integrated) interactive learning environments as opposed to content-driven e-learning environments.
5. TEL can mean learning with technology (as cognitive tools) as opposed to learning by/through technology (Zufferey and Schneider 2016).

For the purpose of this chapter, the fourth and fifth definitions of TEL are most applicable. A learning resource that is enhanced by technology should be grounded in student interaction and improving the learner experience, rather than simply providing an alternative method for displaying previously published work, for example, a PDF of a textbook. The authors' argument is that, by definition, TEL should enhance the learning experience beyond what is possible by traditional means. Given the ubiquity of computers and mobile devices, the act of merely accessing the same content through electronic means is the norm and not in itself an act of enhancement (Dror et al. 2011; Wong and Looi 2011). Electronic books, PDFs and other resources that were initially designed for print and transferred online may thus qualify as 'educational technologies' or 'e-learning' but not technology-enhanced learning. There have been clear distinctions made in the literature between resources that simply influence learning as opposed to those that enhance them (Lee et al. 2010; Kirkwood and Price 2014).

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020
P. M. Rea (ed.), Biomedical Visualisation, Advances in Experimental Medicine and Biology 1260, https://doi.org/10.1007/978-3-030-47483-6_4
4.2 TEL in Clinical Anatomical Education
The number of published articles evaluating TEL in the health professions has steadily increased since 2006 (Bajpai et al. 2019). The rise in popularity of TEL is multifactorial. It is partially driven by educators seeking to improve the service they provide and by the changing expectations of learners (Kirkwood and Price 2014). The rising costs of higher education, which are commonly imposed on the student, have created a focus on student satisfaction as a proxy marker of good education. Student satisfaction has been shown to be high when TEL is placed within a well-structured teaching programme (Bloom and Hough 2003). Satisfaction alone is not a direct measure of the effectiveness or even necessarily the usefulness of a resource (Clunie et al. 2018). Nonetheless, it is an important measure given its implications for institutions and for student well-being.

The generation of undergraduate students entering medical school is accustomed to modern technology and expects to utilise it for learning (Dror et al. 2011; Wong and Looi 2011). In the last decade, it has been reported that 98% of students use YouTube as a source of information (Jaffar 2012). While didactic approaches continue to play a vital role, failing to offer technology-enhanced learning methods will leave students' expectations unmet. Like the supplier of any product, higher education institutions must continue to adapt to meet the needs and wishes of their customer base (Christensen 2008). Nevertheless, educators have an obligation to ensure the methods they use are effective, not just popular.

Medical curricula in both the United Kingdom and the United States have shown a decline in the proportion of time devoted to anatomy education (Kaufman 1997; Heylings 2002; Ellis 2002; Shaffer 2004; Older 2004; Sritharan 2005). Medical students are now entering clinical practice with a level of anatomy knowledge which they and their seniors feel is insufficient to practise safely, and this is the case in many other countries (Ellis 2002; Bergman et al. 2008; Pabst 2009). Contrary to traditional didactic teaching methods, TEL allows students to access resources at times which are convenient. If the resource is designed well enough and imparts new knowledge, it circumvents the lack of timetabled student–teacher contact time by providing some instruction outside of the classroom. However, for an educator to effectively
succeed in this approach, they must decide exactly what knowledge they want a TEL resource to impart to the learner, find a resource with the same instructional goal and blend them into a well-structured programme of learning (Mayer 2018).

Clinical anatomy education has been at the forefront of developing innovative approaches to delivering its ever-changing curricula for a number of years (Drake 1999; Heylings 2002). Given the constant rejuvenation and revalidation of medical educational policy across the globe, there is a need to ensure that the creative resources being developed at a subject-specific level are more effective in delivering the required learning objectives than what is currently offered in the classroom (Sugand et al. 2010). Such educational strategies must not compromise patient care in the longer term (Drake 2007; Collins 2009). Ideally, TEL resources should not simply be deployed based on an assumption that students enjoy engaging with, or are intuitively able to utilise, such resources for meaningful learning (Pickering 2015).

The universality of innovative technology-based teaching applications within the field of clinical anatomy has brought into question the appropriateness of the available evidence used to demonstrate their educational effectiveness; this is apparent in terms of both the impact on knowledge and the student experience (Clunie et al. 2018). Many studies which scrutinise the use of e-learning technologies do so on a very simplistic and superficial level – evaluating through the modification of routine online course surveys or the student experience alone (Lochner et al. 2016). While it is fair to say that these approaches have value and are well intentioned, they also carry limitations (Terrell 2006).
The most robust studies in the field aim to report on the use of TEL resources in isolation from confounding variables and should aim to demonstrate a quantitative impact on learning gain (Van Nuland and Rogers 2016; Pickering 2017). Furthermore, the context in which instructors use these resources, and the degree to which they align with established educational theories, also require consideration before
generalised conclusions can be made about their utility. Theories of human learning have facilitated a paradigm shift from teacher-centred to learner-centred classrooms in recent times (Terrell 2006), and much of what TEL offers aligns with this transition. The central tenet underpinning educational approaches to most medical curricula relates to the ideology of behaviourism. TEL resources, much like face-to-face teaching, are built around the concept of students demonstrating what they have learned through achieving learning outcomes. Self-contained resources that have clear objectives and boundaries are often well received online because they focus on doing 'one thing well' – testing knowledge (Woolfolk 1998). Self-assessment websites where students can practise common medical assessments are very popular, presumably because they allow users to evidence their knowledge directly via a measurable behavioural outcome. Although these resources do not involve complex programming or interactive design methods, they succeed in promoting the 'mastery of learning' (Svinicki 1999). Essentially, they reinforce the superficial principles of memorisation and recall, achieving higher performance in examinations, but they cannot assess a student's ability to apply their understanding in different scenarios (Morrone and Tarr 2005). A further limitation of this approach is that students divert their attention towards mastering the assessment format rather than towards acquiring the knowledge they need to perform well in a test (Gredler 1992). This observation has led educators to label this a strategic approach to learning. Arguably, resources with this focus are preparing students for assessment and are not designed or intended to be used by those learning the material for the first time. This is often reflected in the marketing of such websites, which allow learners to buy packages by assessment name.
They are intended to be used once the educational journey is (mostly) complete. Sites such as Passmedicine (www.passmedicine.com), PasTest (www.pastest.co.uk) and BMJ OnExamination (www.onexamination.com) are fitting examples, designed to help
A. Elmansouri et al.
students and clinicians pass high-stakes medical examinations.
4.3
TEL and Learning Theory
In order to develop TEL that directly complements face-to-face instruction, it is pertinent first to explore the literature on how students learn. The cognitive model has dominated educational psychology for many years (Shuell 1986). Understanding how information is processed can aid learning developers with instructional design principles (Sweller 2004). Cognitive load theory suggests that a student's intellectual performance is optimised when the limitations of working memory are adequately circumvented through methods of instructional design (Baddeley 1992). This model proposes that TEL packages should be constructed to intentionally manage the extraneous cognitive load when learning new, unfamiliar information. For neuroanatomy more specifically, the intrinsic load (the natural and inherent difficulty of the subject matter) is high in the first instance, so the educator must manipulate the content so that it is cognitively digestible (Sweller 2004). One way this can be achieved is by delivery in small 'bite-sized' chunks – it was this philosophy the authors used when creating the 'SotonBrainHub' collection of videos on YouTube. The neuroscientific subject matter is divided into manageable, self-contained videos and arranged into short playlists to deliberately portion and pace the student's learning journey (Geoghegan et al. 2019). In theory, this can help develop memory 'schemas', which allow the learner to categorise constituent parts of related information into a sophisticated network of interrelated connections (Van Merrienboer and Sweller 2005). This can be deployed successfully by adopting a 'blended learning' approach alongside face-to-face teaching (Pereira et al. 2007; Khalil et al. 2018). The recommended primary focus when designing multimedia resources is to allow working memory to become as devoted as possible to processing new information. There is substantial evidence to suggest that applications and teaching approaches that align with these principles are highly effective in improving learning efficiency (Paas et al. 2004). Therefore, digital tools in anatomy should adhere to the instructional design recommendations set out by educational theorists. This can be achieved by integrating multiple channels of complementary information within a single coherent resource. In the authors' view, this is achieved very well by the application 'TeachMeAnatomy' from the 'TeachMeSeries', which organises text and images hierarchically, with consistent structuring of information that successfully eases the pressure on encoding. The use of animations, video clips or interactive content should avoid the unnecessary processing of overlapping information via different channels simultaneously. Developers should select or create visuals that are largely free of excessive text-based explanatory detail and instead provide descriptions audibly, using high-quality recording equipment to produce clearly understandable audio. This enables students to focus on visualising the structure of interest rather than attempting to view the structure whilst reading textual details; the latter can often contribute to increased cognitive load (Mayer and Moreno 1998; Mayer 2005). A good example of how the minimisation of cognitive load can be achieved in clinical anatomy is the creation of screencast videos, which have yielded positive reports in anatomy education (Sugar et al. 2010; Evans 2011; Pickering 2015). Evidence suggests that when these video resources are developed based on the principles of the Cognitive Theory of Multimedia Learning (CTML), the retention of anatomical knowledge is enhanced to a greater extent than when textbooks are used (Mayer and Moreno 1998; Pickering 2017).
When words are presented in audio form alongside a drawing (which gradually develops in complexity over time), it supports the successful management of the split-attention principle by averting duplication of the same information via multiple channels (Mayer 2002). Careful integration of the audio narration with on-screen animations or sketches additionally fulfils the contiguity principle (Terrell 2006). Evidence suggests that students
better understand words and pictures when presented together as opposed to when they are separated in time (Molitor et al. 1989). In anatomy, careful construction of instructional videos in which the visual display is sensibly aligned to an audio explanation of a structure and its function and/or the related pathologies allows links to be made in the learner's memory between the multiple stimuli experienced in close association. Applying these instructional preferences can therefore contribute to a reduction in cognitive load for the learner (Chandler and Sweller 1991; Mayer and Moreno 1998; Mayer 2005). In the information processing model, it is important to appreciate that long-term memory (LTM) is a repository for both knowledge and skills (Gredler 1992). There is also some evidence to suggest that the ability to transfer and apply knowledge within a novel context is heavily, and possibly even solely, dependent on the wiring of information within LTM (Sweller 2004). Although the capacity and duration of LTM are theoretically unlimited, retrieval becomes increasingly difficult with the passage of time (Houston 2014). There is well-established evidence to suggest that memorable learning experiences make the recall process easier (Kensinger 2009). The molecular cascade of events that ultimately leads to synaptic modifications in the brain has an evolutionary basis. To facilitate survival, fear and memory are biochemically strongly linked in the mammalian brain through the amygdala and the hippocampal formation (Abe 2001; Phelps 2004). However, the combination of positive emotional experiences and learning has the potential to create memorable learning experiences that stand out amongst the background noise. This is probably more likely to occur with 'real-life experiences' such as shadowing or apprenticeship-style training.
This is the basis for simulation learning, which is commonplace in healthcare and supported by the literature (Zendejas et al. 2013). However, the careful creation of online learning that constructs an enjoyable narrative or story can captivate learners and encourage decision-making skills within a safe environment (Avraamidou and Osborne 2009).
In many ways, the constructivist model appears to offer the most natural fit when it comes to the instructional design of online multimedia resources. Since the focus is on learning rather than teaching, it is the context in which learning occurs, and the associated experience, that are of importance rather than the teacher (Morrone and Tarr 2005). The learner has full autonomy in what to engage with, and the role of the instructor is purely to signpost and integrate the material to give a blended experience – essentially, the student takes responsibility for their own learning (Derry 1996). In this sense, the role of TEL is to offer flexible, mobile learning opportunities that present content which is either applied or provides a quasi-authentic experience. As alluded to previously, it is not uncommon to see e-learning packages that role-play clinical scenarios through the use of virtual patients and case studies to encourage decision-making and autonomy (Docherty et al. 2005). This places a stronger emphasis on reasoning, problem solving and predicting outcomes as opposed to factual recall in isolation (Shuell 1986). Web-based applications that allow for animated medical scenes with a complex narrative and multiple 'pick a path' endings are among the most elaborate on offer and often require substantial skill in animation or coding. The design and development of these tools often requires collaboration between subject specialists and digital learning professionals to accomplish the final product. One particular strand of this paradigm is the social constructivist approach, which places significant emphasis on the social element of learning (Vygotsky 1980). The application of social media within the learning of anatomical sciences has risen to prominence over recent years (Jaffar 2012, 2014; Hennessy et al. 2016) and highlights how online communities of learning can impact the learner experience within higher education (Buzzetto-More 2014; Mukhopadhyay et al.
2014). The accessibility of these internet platforms allows for the facilitation of cooperative and reciprocal learning opportunities. Students are able to construct and reconstruct their knowledge through group discussion and dialogue (Derry 1996). Face-to-face conversations can continue online, providing a collaborative and immersive educational experience. Although this provides a supportive approach, it is notable that education is now encroaching on a digital social space in which students prefer to enjoy a work–life balance. Educators are advised to tread carefully since it has become evident that some students would prefer their social online environment not to be invaded in this way (Border et al. 2019). Alignment with any single psychological paradigm is unlikely to fulfil the expansive range of educational outcomes required for the construction of the most effective multimedia learning resources. It is, however, of paramount importance that the development of new online tools is effectively supported and validated through robust empirical studies that can justify their use within anatomy curricula. In particular, the authors recommend that all published work in the field of anatomical sciences education should contain some discussion of how the instructional design principles align with the common educational models of learning.
4.4
TEL Neuroanatomy Resource Review
The authors have conducted a review of a variety of technology-enhanced neuroanatomy learning resources to establish whether a foundation of learning theory exists within their presentation. Furthermore, this review can guide both educators and students through the features and structure of the resources, considering the benefits and weaknesses for their specific needs in a comprehensive and easily understood manner. Students and educators are likely to have preferences based on their own objectives; however, by outlining the features and pedagogy underpinning each resource, this review provides some insight into the current state of technology-enhanced learning as it relates specifically to neuroanatomy learning in medical students.
4.5
Search Strategy
In order to accumulate a list of relevant resources, an extensive online search for neuroanatomy-themed TEL resources aimed at medical students was undertaken using Google (Google Inc., Mountain View, CA). The search terms used were 'neuroanatomy', 'anatomy' and 'revision', followed by the resource modality (e.g. videos, 3D). A primary list of neuroanatomy learning resources was assembled from the top hits of the search. Each resource was then subjected to the following inclusion and exclusion criteria. A subsequent literature search was undertaken to identify all published work on each resource.
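The combination of search terms and modalities described above can be sketched as follows (an illustrative reconstruction only; the exact query strings issued by the authors are not reported, and the modality labels are taken from the review section of this chapter):

```python
# Illustrative sketch: combining the stated search terms with each
# resource modality to produce candidate Google queries.
from itertools import product

terms = ["neuroanatomy", "anatomy", "revision"]
modalities = ["videos", "3D", "medical imaging", "plain text"]

# One query per (term, modality) pair, e.g. "neuroanatomy videos"
queries = [f"{term} {modality}" for term, modality in product(terms, modalities)]

print(len(queries))  # 12 candidate queries
```

Each query's top hits would then be pooled into the primary list before the inclusion and exclusion criteria were applied.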
4.6
Inclusion Criteria
• There is previously published literature on the resource.
4.7
Exclusion Criteria
• Non-human anatomy
• Non-neuroanatomy resources
• E-books, online journal articles and periodicals
• Non-multimedia web resources
• Sites presenting only links to external websites with no original content of their own
• Resources limited to neurohistology, neurophysiology, etc.
• Resources aimed solely at non-clinical students
• TEL that was not prepopulated – that is, where educators are expected to make their own content (e.g. Kahoot, Adobe Spark)
4.8
Modality Division
Resources were categorised by modality, and the most popular results for each modality were assessed based on the lists of resources created by multiple authors. The modalities reviewed were:
• 3D rendering
• Medical imaging resources
• Video series
• Plain text resources
4.9
Review Strategy
A robust assessment was carried out on each resource by compiling and assimilating independent reviews made by both clinical and academic authors. The following criteria were deemed to be important based on the previously stated principles of learning and good resource design:

1. User experience
2. Cognitive taxonomy
3. Clinical relevance
4.9.1 User Experience
Based on factors outlined in previous literature (Javaid et al. 2019) and on the experience of the authors, the following list was devised to make a robust assessment of user experience:

• User interface
• Clarity of explanation
• Accuracy of content
• Method of instruction
• User control of pace
• Logical progression through material
• Ease of navigation through material
• Ability to visualise relationships of structures
• Level of interactivity with resource
• Integration of clinical correlates, clinical imaging and neurophysiology
• Summary of information
• Ability to assess own knowledge
• Tracking of progress

A scoring system was employed which assessed whether an element was absent (0 points), included in a limited capacity (1 point), or present and well developed (2 points). Where disagreement of more than 1 point existed between assessors, the individual criterion was discussed and mutual agreement reached.
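The scoring and disagreement-resolution protocol can be sketched as a small script (a hypothetical illustration only; the function and criterion names are invented and are not part of the authors' published method):

```python
# Hypothetical sketch of the 0/1/2 scoring rubric described above.
# Scores: 0 = absent, 1 = included in a limited capacity, 2 = well developed.

def flag_disagreements(scores_a, scores_b):
    """Return criteria whose two independent scores differ by more than
    1 point; per the protocol, these are discussed until the assessors
    reach mutual agreement."""
    return [c for c in scores_a if abs(scores_a[c] - scores_b[c]) > 1]

def total_score(scores):
    """Sum the per-criterion scores for one assessor's review."""
    return sum(scores.values())

# Invented example: two assessors independently scoring one resource
assessor_1 = {"user_interface": 2, "clarity_of_explanation": 1, "self_assessment": 0}
assessor_2 = {"user_interface": 2, "clarity_of_explanation": 1, "self_assessment": 2}

print(flag_disagreements(assessor_1, assessor_2))  # ['self_assessment']
print(total_score(assessor_1))                     # 3
```

A 1-point difference (e.g. one assessor scoring 1 and the other 2) would not be flagged; only gaps of 2 points trigger discussion.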
4.9.2 Cognitive Taxonomy
Often overlooked is the classification of information being taught via technology-based means. It has been reported that student satisfaction is the most common method of TEL assessment (Clunie et al. 2018; Adams 2015). With that in mind, Bloom's Taxonomy was employed to define the educational goals being delivered to the user (Bloom 1956; Adams 2015). Conceptualised in 1956 and revised in 2001, Bloom and colleagues devised a simple, comprehensive framework to classify educational goals, which are simultaneously hierarchical and interrelated (Bloom 1956). The taxonomy is divided into cognitive, affective and psychomotor domains, but educators have primarily employed the cognitive model, which includes six classification levels as follows (Anderson and Krathwohl 2001):

• Remember – retrieval of information from memory.
• Understand – comprehending the meaning of instructional messages.
• Apply – using the previous two steps to complete a procedure.
• Analyse – appreciating how parts work together in a whole.
• Evaluate – making judgements based on values and standards.
• Create – putting parts together to form a structure or product.

Whilst the taxonomy is a useful tool to ensure a teaching resource is well developed in its learning objectives, critics rightly point out that it is a reductionist way to categorise learning and human cognition, so it was used alongside the other two assessment criteria (Anderson and Krathwohl 2001).
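The hierarchical ordering of the six cognitive levels can be modelled as an ordered enumeration (an illustrative sketch only; the helper for summarising a resource's coverage is invented, not part of the review method):

```python
# Illustrative model of the cognitive domain of Bloom's taxonomy.
# Integer values encode the hierarchy from lower- to higher-order skills.
from enum import IntEnum

class BloomLevel(IntEnum):
    REMEMBER = 1
    UNDERSTAND = 2
    APPLY = 3
    ANALYSE = 4
    EVALUATE = 5
    CREATE = 6

def highest_level(levels):
    """Return the highest-order cognitive level a resource targets."""
    return max(levels)

# Invented example: a resource offering recall quizzes and applied cases
print(highest_level([BloomLevel.REMEMBER, BloomLevel.APPLY]).name)  # APPLY
```

Using an ordered type makes the hierarchical nature of the levels explicit: a resource reaching 'Create' necessarily sits above one limited to 'Remember'.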
4.9.3 Clinical Relevance
This chapter explores TEL resources in the context of medical students learning neuroanatomy. The primary objective of these learners' higher education is to become licensed medical practitioners who need to integrate knowledge of basic sciences such as anatomy into a huge range of potential clinical settings. Early integration is important, especially in the neural sciences (Jozefowicz 1994). The following elements were defined as important when assessing the clinical application of TEL resources:

• Application of previously taught content
• Diverse range of clinical conditions
• Clinical presentation
• Pathophysiology
• Management of condition
The same (0, 1, 2) scoring system was applied to each element within this assessment criterion.
4.10 Available Published Work

This chapter provides a general review of resources with a focus on their features, similarities and differences. It considers the degree to which they follow pedagogical theory, the authors' experience using each resource and how a learner might find them. Furthermore, it considers whether any published work has been undertaken to evaluate the resource; it does not evaluate the resources directly. Methods have been designed to assess TEL resources directly and objectively measure their effectiveness, using a mixed-methods approach to gather quantitative and qualitative data (Pickering et al. 2019). This approach has been recommended to educators by leaders in the educational field and requires a substantial number of participants and preferably a randomised controlled trial methodology (Pickering et al. 2019). Unevaluated TEL should be used with caution. Although new resources have the potential to positively impact
learning, they also have the potential to negatively impact it (Trelease 2016).
4.11 Review

4.11.1 3D Rendering Review
Primal Pictures was established in 1991. Although the user interface is dated in places, there remains a huge wealth of information available. The interface would benefit from modernisation; in particular, the shortcomings of the 3D model's responsiveness need to be addressed to support fluid navigation. Good control of the model is important, as evidence suggests that the primary advantage of a 3D model over multiple views of still images is the user interaction (Garg et al. 2001). One possible reason for the difficulty in navigation is the vast amount of material available, particularly when trying to find information on a single topic, such as neuroanatomy. The abundance of data could be considered advantageous in that students with a very broad syllabus, who are expected to learn structures in detail, have a good encyclopaedic reference that is fit for purpose. However, the information is mostly displayed as text without specific goals, which is repetitive and may contribute to cognitive fatigue. Cognitive theory suggests that multimedia resources are more effective when they are goal oriented and present relevant information in a memorable fashion; without this focus, they can become overwhelming and lead to an overload of working memory (Moreno and Mayer 1999) (Figs. 4.1 and 4.2). The full benefits of 3D graphics in medical education are yet to be thoroughly explored; however, there are positive preliminary results supporting their use (Hackett and Proctor 2016; Battulga et al. 2012; Garg et al. 2001). Primal Pictures presents a 'real-time' 3D model that is cartoonish in style and might not instinctively appeal to those who prefer realistically rendered images. Despite this, each of the many virtual prosections is accompanied by a cadaveric photographic image of the same dissection. The colour on the cadaveric image can
Fig. 4.1 Overview of 3D rendering TEL resources
Fig. 4.2 Summary of review of 3D rendering TEL resource – Primal Pictures
be altered to match that of the 3D model and vice versa, allowing the user to quickly and easily relate the simplistic appearance of the 3D model to its cadaveric correlate. The 3D model can be sectioned through a number of segments in coronal, sagittal and transverse planes, with the image compared to an MRI taken in the same plane, allowing the user to apply what they are learning on the 3D reconstruction to clinical imaging. This is an essential element of translational education as documented by Estevez et al. (2010). Additionally, this software allows users to label and annotate the model and save the result for future study, as well as providing an extensive collection of self-assessment questions. In the Anatomy and Physiology section, there is an abundance of detailed and relevant information on gross and micro-anatomy, as well as the associated physiology. Information is presented in a structured manner allowing the learner to locate everything they need to know about a structure in one place. The user interface is divided in half to
show a 3D model and text side by side. A submenu provides a list of related clinical conditions, where details of causes, clinical presentation, diagnosis and treatment are discussed. The application is not tailored to the average student in terms of affordability. As of 2019, it is priced at £310.22 per year for each of its three anatomy components; one would need to purchase at least two of these (the 3D Atlas and Anatomy and Physiology) to obtain a sufficient complement of neuroanatomy resources. Instead, this product is marketed towards institutional subscriptions, which provide access to all of its features. It is the view of the authors that this application delivers reliable, relevant and detailed content to students, providing them with the tools to take control of their own learning in a flexible way. However, the single mode of delivery, together with the large volume of data, has the potential to contribute to cognitive overload, and its instructional goals are unclear.
Fig. 4.3 Summary of review of 3D rendering TEL resource – BioDigital Human
BioDigital Human is a free online resource with many positive aspects, although it is not without limitations. In the authors' view, the user experience could be improved; however, what it lacks in visual appeal and ease of navigation, it counteracts with well-explained and digestible clinical content (Fig. 4.3). This web-based resource is accessible on any device and has an additional mobile application version (BioDigital 2017). The full 3D human body render and all of its anatomy content are available without charge. BioDigital Human also provides paid premium content, mostly focused on clinical correlates, at an annual subscription of less than £50. Although somewhat difficult to navigate, BioDigital Human's visual representation of pathologies is impressive. There are over 70 clinical conditions relating to the central and peripheral nervous systems listed, each of which contains a 3D gross or micro-anatomy model of the condition, animations of the pathophysiology and supplementary written text. The simplicity of the 3D models could potentially limit conveyance of some of the finer anatomical details and relationships (Lewis et al. 2014). For example, programmes such as Complete Anatomy and Visible Body allow the user to select and read information on the superior colliculus; by comparison, this application only allows selection of the midbrain as a whole. The available evidence suggests that recall is improved with pictorial instructions, due to an increase in associative perception, when compared to written words alone (Kinjo and Snodgrass 2000). Further still,
images effectively depicting detail and anatomical relationships result in enhanced recall (Waddil and McDaniel 1992). 3D rendering resources are in an optimal position to take advantage of this pedagogical finding (Papinczak et al. 2008). Some applications offer directed 'lecture-style' content on neuroanatomy, creating a self-contained educational resource with its own learning outcomes. Instead, this programme opts simply to provide users with an option to explore the normal anatomy of the human brain or, alternatively, to work their way through an extensive list of clinical conditions. In most cases, it is likely that a student is looking to supplement what they have learned and fill gaps in their knowledge. This application, with its vast array of 3D gross and micro-anatomy models and detailed explanations of clinical conditions, will most likely satisfy that specific educational need for most undergraduate medical students, particularly because both the free and paid versions enable students to assess their own knowledge.

Complete Anatomy is an application developed by 3D4Medical and is self-branded as 'the world's most advanced 3D anatomy platform'. It offers very accurate and striking visual elements, and its production value is highly sophisticated in comparison to many rival applications (Lewis et al. 2014). Unfortunately, its neuroanatomy and clinical content appear a little underdeveloped compared to other body systems (Fig. 4.4). This tablet or desktop application is available to students on a 3-day free trial, giving users access to the full atlas of the entire human body.

Fig. 4.4 Summary of review of 3D rendering TEL resource – Complete Anatomy

This application, like many other 3D rendering programmes, is aimed at all-systems anatomy. Post-trial, the user is given two options:

• A single payment for lifetime access, equating to approximately the price of an anatomy textbook (£46.50)
• An annual subscription with reduced first-year pricing (£20.50), totalling £58.40 p/a thereafter

The single one-off payment gives users access to the 'individual user' version, which provides access to the atlas, dissecting tools and some quizzes. The subscription option comes with additional access to hundreds of hours of content in the form of virtual lectures, the option to share content with peers, access to material across multiple platforms and a cross-sectional virtual specimen bank. The application's content is anatomically accurate, and the level of detail provided is excellent. Along with the detailed, realistic visuals, there is also supplementary text to support learning on each selectable structure; for example, each thalamic nucleus has functional detail associated with it (3D4Medical 2019). Within the atlas, it is up to the user to navigate around the 3D model and engage in active learning as they explore and discover each structure. Evidence suggests that actively interacting with reconstructed 3D objects, as opposed to passively looking at them, accomplishes more effective
recall (James et al. 2002). However, this may not suit all students as the semi-structured approach to the organisation of content could leave some users feeling overwhelmed. The systems-based anatomy course included with the subscription option gives students access to 200 neuroanatomy resources, including an extensive collection of 5-minute videos detailing the anatomy and the labelled 3D models. This approach to neuroanatomy provides students with an opportunity to learn at their own pace and to track progress through the course to aid consolidation. Such benefits have been alluded to in the literature, where video resources and screencasts have been deployed to supplement learning within curricula (Border 2019; Evans 2011; Pickering 2015, 2017). The instructional design of the videos fits with principles of the theory of multimedia learning. They consider the limited capacity assumption by including ‘breaks’ where learners are prompted to pause and process the new information they have just received. The dual-channel principle is also utilised, with clear, high-quality audio narrations overlapping animated screen recordings of the 3D software, highlighting the key structures as they are discussed. Considering the evidence, they are well positioned to provide favourable learning outcomes (Mayer 2002). Additionally, university subscriptions can be purchased by institutions wishing to offer a blended learning framework. The curriculum dashboard allows educators to assess student engagement analytics as well as view how their students are performing in quizzes. This feedback
could be used by tutors to focus on areas where they notice their students struggling and to make face-to-face sessions more engaging. The application has been utilised in pedagogical research, with one Colombian study showing that students felt more motivated by the quiz element of the resource but that there was still a need for textbooks. This finding suggests that although students value the enhancement of their current learning, it should not be introduced at the expense of traditional resources (Martínez and Tuesca 2014). In summary, this application is visually impressive, with multiple layers of context, but it is predominantly focused at the preclinical anatomy level, with limited inclusion of pathologies. It does not appear to support the integration of knowledge for clinical problem solving or provide details on how to manage each condition. In the authors' view, this is its most significant limitation, since many of its competitors have successfully managed to incorporate this information.

Visible Body is a provider of 3D anatomical technologies with users in over 200 countries (Visible Body 2019a, b). It contains well laid out and detailed information on many structures throughout the whole body, along with a comprehensive virtual atlas of anatomy. Its visuals are rendered unrealistically, adopting colours and textures which may limit its appeal, particularly for those seeking a realistic learning experience; in the authors' view, this makes the finer anatomical structures more difficult to visualise precisely. Its associated clinical information is also limited (Fig. 4.5). For a £30 one-off payment (Visible Body 2019a, b), the learner gains access to the full 3D human atlas. There is an option to add a physiology video package for a further fee. This tablet or desktop-based application is available by purchase only, giving the user no option to trial the application to see if it meets their needs prior to purchase. It has a clean and intelligible user interface, making the application intuitive to navigate. Neuroanatomy is presented in a number of different predefined 'dissections', such as the brain with its vasculature, in situ, within the cranial cavity. Upon selecting a dissection, the user is presented with a basic 3D image. The structures appear poorly defined and, as previously alluded to, basic 3D visuals reduce a student's ability to translate information learned to cadaveric or surgical scenarios (Martin et al. 2013). Despite the graphical limitations, there is a great deal of functionality in this application. For example, a user can label and annotate the model and save it as a flash card for future study. This gives the user the opportunity to cut away distracting material, reducing extraneous processing and allowing working memory to be used effectively on the learning objective (Mayer 2018). There is a useful quiz feature, and each selectable structure has associated text detailing the important anatomical relations and function, as well as a comprehensive list and description of associated clinical conditions. Additionally, there is a cross-sectional anatomy feature, allowing users to section each model at predefined levels and compare the outcome to MR or cadaveric images. The idea of initiating translational learning is strongly encouraged in the teaching of anatomy since it is one of the most transferable aspects of knowledge in clinical practice (Estevez et al. 2010).

Fig. 4.5 Summary of review of 3D rendering TEL resource – Visible Body

As with many other 3D anatomy atlases, this application does not directly guide users through the anatomy. The lack of a goal-orientated focus may lead to superficial engagement rather than a deeper, more invested approach towards learning (Moreno and Mayer 1999). However, the flashcard feature is a useful method of reducing extraneous processing, allowing the learner to focus.
4.11.2 Medical Imaging Review

Radiopaedia was founded in 2005 as a wiki-based collection of radiology resources. It has since established itself as a hugely popular resource for students, educators and clinicians. As of 2017, it has hosted over 10,000 articles and over 25,000 clinical cases. While not all of these relate to the neural sciences, there is an abundance of neurological and neurosurgical cases on this open-access website (Figs. 4.6 and 4.7). A vast range of professionals and clinicians registered with the website act as contributors. As with many openly edited sites, this can lead to quality control issues; however, there is an editorial board to review content that is presented for publishing, and information can only be provided by registered contributors. In the assessment made for this review, the panel was satisfied with the accuracy of the neurological content.

Fig. 4.6 Overview of Medical Imaging TEL resources

Fig. 4.7 Summary of review of Medical Imaging TEL resource – Radiopaedia.org

A. Elmansouri et al.

There is an impressive level of detail on individual anatomical structures and a vast number of neurological elements listed. Information is presented in a format that is easy to navigate and is supplemented by collections of colour-labelled MRI scans which the reader can scroll through to appreciate the more complex anatomy. There is an individual page dedicated to each structure, with a relevant level of detail. However, it should be acknowledged that there is no obvious curriculum upon which the assimilation of content has been based. Despite the volume of case-related information available, there are no direct links between the anatomical information and each clinical condition. For example, it would be useful to be able to read about the normal anatomy and radiological appearance of white matter tracts in comparison to examples in cases of Multiple Sclerosis, Alzheimer's disease, etc. Radiopaedia depends on advertising, which unfortunately distracts from the learning material and contributes to extraneous processing. A review by Mayer and Fiorella (2014) has shown that students learn better without redundant distractions. The plain text of an article often references imaging that can be seen by clicking on the relevant images in a separate section of the page. Sometimes, these have a caption further explaining the radiological findings. This lacks spatial contiguity, and the user is forced to try to bring the graphics and written descriptions together mentally. There is evidence to suggest that better integration of text and the graphics they describe results in better learning (Mayer 2018). Clinical cases are published with the patient's anonymised demographics, presenting symptoms, diagnosis and clinical imaging. There is scant discussion of the pathophysiology or management of each condition, a feature which may be developed in the future. Some conditions have their own separate featured article that supplies this information. While users can quiz themselves on clinical cases, there is no opportunity to assess neuroanatomy knowledge as a standalone
topic. Although this is possibly a conscious decision by the site's administrators, the potential impact is that it may not appeal to some preclinical students who are studying the material for the first time.

Radiology Assistant is a free online resource built by radiologist Robin Smithuis, who is based in the Netherlands. The aim of this website is to provide up-to-date radiological education and references for everyone from undergraduate students to registrars. There is a focus on common clinical conditions in which imaging plays a major role in the management of the patient (Fig. 4.8). First impressions of this website reveal a well organised but outdated interface. Listed across the top of the page are several anatomical regions of imaging, one of which is neuroradiology. The whole brain is presented in axial MRI. Worthy of note is an impressive presentation of the Circle of Willis; however, overall there is a paucity of detail on normal anatomy. Distributed amongst the MRI slices are duplicate slides with simple labelling, taking advantage of the signalling principle (Mayer 2018). The clean appearance is free of distraction, allowing the user to focus on the information they need to process. In the drop-down menu is an extensive list of clinical conditions. Upon selecting one, there is a brief explanation of the role of imaging in the management of each condition, a useful and uncommon feature. There is a unique combination of labelled diagrams, scrollable stacks of labelled and unlabelled MRI and CT, cadaveric dissections and plain text contributions, all illustrating the detail of each condition. This combined information gives the student a holistic
Fig. 4.8 Summary of review of Medical Imaging TEL resource – Radiology Assistant
Fig. 4.9 Summary of review of Medical Imaging TEL resource – E-Anatomy by IMAIOS
view of a condition and its diagnosis. There is also an impressive level of detail on the causes of each disease or condition, MRI protocols and images explaining what one would expect to find on MRI (with several different patient cases presented). This resource might prove a little overwhelming for some pre-clinical students, but it could become a useful resource for those further on in their education. Notably lacking from this website is the ability to search the repository for specific information; its absence is a significant drawback which may deter users from returning to the site (Marchionini 2006). Additionally, this resource stops short of discussing each condition beyond the diagnosis stage, lacking specifics on management or outcome. This is likely a conscious decision on behalf of the website authors, and to some degree, it is understandable.

IMAIOS was founded in 2008 as a medical imaging and e-learning company. The co-founders are two radiologists, Denis Hoa, MD, and Antoine Micheau, MD. IMAIOS calls its E-Anatomy package an interactive atlas of human anatomy, and its focus on imaging, specifically MRI, is evident. The website features limited free content, such as an axial presentation of the brain; however, the majority of the content is reserved for premium subscribers. At the time of writing, the cost of the complete E-Anatomy package ranges from £12.99 if paid monthly to £74.99 annually (an almost 50% discount) (Fig. 4.9). On the E-Anatomy home page, the user is met with collapsible headings such as 'Head and Neck' and 'Thorax Abdomen and Pelvis'. The headings
are populated with a number indicating how many topics that section contains. Once expanded, the user can see thumbnails with titles such as 'brain MRI' and 'CT head', which are labelled 'premium' or 'free' depending on their accessibility. Once the user selects the anatomical area in the imaging modality they would like to pursue, they are met with an interactive reconstruction of a scan which looks and acts very similarly to those used in a hospital setting. Users are able to scroll through CT scans, for example, and analyse multiple axial slices. The images make use of the signalling principle by precisely labelling key anatomical landmarks that the radiologists have identified (Mayer 2018). The resource does not prioritise knowledge of one anatomical structure over another. For example, an indication that knowledge of cranial nerve VI is more fundamental to clinical practice than the inferior semilunar lobule of the cerebellum could be helpful for clinical students trying to prioritise information for revision. Rather surprisingly, given its clinical orientation, the resource focuses almost entirely on topography and structural relationships. Little is provided in the way of clinical correlates. Although the information is clearly communicated, the text is densely arranged on each page, which may make it a challenge to read and assimilate. Evidence suggests that educators should manipulate content to avoid this and make it more cognitively digestible (Sweller 2004). The method of instruction is delivered through both plain text and radiological correlates. There are a multitude of diagrammatic representations
Fig. 4.10 Summary of review of Medical Imaging TEL resource – The Human Brain Project
available which utilise the signalling principle (Mayer 2018). Progression through the material can be topographical, via imaging, or alphabetical; this does, however, risk fatigue. Users would be better served by independently identifying what their syllabus requires them to know and then actively seeking out that information on this resource. This provides a stark contrast to resources such as UBC Medicine and NeuroLogic, which provide users with a comprehensive syllabus based on their respective medical schools. This resource is less complete than the others in this section for medical students wishing to learn structural, functional and clinical anatomy together. It is a useful tool for those who want to familiarise themselves with neuroimaging and refresh their memory on topography. One could envisage it being a useful supplementary tool for students who are already familiar with their syllabus and want to gain a deeper understanding of how the structures relate to imaging.

The Human Brain Project is an EU-based scientific research project that aims to allow researchers across Europe to advance their knowledge and understanding of the brain. Currently 6 years along a 10-year timeline, the project is the work of many international neuroscientists who are contributing to the open-source environment. One of the most educationally and functionally relevant features of this free online resource is 'The Big Brain', a unique presentation of segmental neuroanatomy, and it is this feature which has been reviewed (Fig. 4.10). 'The Big Brain' is presented as an interactive imaging atlas and distinctively shows
the brain in all three planes in one window. Upon entering the website, the user is presented with a screen divided into four equal sections, three of which contain coronal, sagittal and transverse sections of the brain, respectively. The remaining window contains a 3D render of the brain, with the section corresponding to the area being observed highlighted. As the user navigates around the brain (simply by moving the cursor), all three planes move to show the section at the specific area of the brain selected. This novel feature is particularly relevant to neuroanatomy, where it is often the complex 3D nature of the anatomy that students find difficult to appreciate (Brewer et al. 2012; Allen et al. 2016). This presentation of the brain is available in both histological preparations (shown in black and white) and MRI, both of which are of incredibly high quality. The six-layered cytoarchitecture of the cortex can be appreciated (and highlighted) in both preparations. The user is able to appreciate a vast number of functional areas of the cortex, as these can be highlighted in colour. Additionally, some deep anatomical structures, such as the hippocampus, can be labelled and explored in detail as the user navigates the brain. While this application provides a novel and useful method of interpreting the three-dimensional nature of the brain in very high quality, in the authors' view there is an absence of instructional elements important to students who are learning to appreciate the brain. While functional areas and deep structures are selectable, there is no descriptive information available. There is also a lack of emphasis on clinical conditions; however, this may well be beyond the
scope of the website. Rather, this is a web space that students can use to appreciate sectional anatomy of the brain in high resolution. For those interested in understanding the internal workings and pathways of the brain, the long and short fibre bundles can be visualised easily, but information on them must be found elsewhere. Unlike many educational tech-based resources within this review, this website avoids adopting self-assessment and clinical correlation content. However, it does offer students the opportunity to contextualise what they might be reading in the clinical literature, without being overbearing and complex. It is a beautiful and unique resource without a clearly defined instructional purpose.

NeuroRad is a free application available on either desktop or mobile devices. It provides a fairly simplistic presentation of MRI, CT and angiography with minimal functionality. The user is presented with a tiled screen with a number of different radiological modalities. This application consists of several normal radiological presentations, specifically relating to the brain, which the user can explore and which are easy to navigate (Fig. 4.11). At first glance, this application appears less aesthetically pleasing compared to similar sites. Despite this, it contains a comprehensive collection of different imaging modalities, including cross-sectional anatomy presented in all three planes, giving the user a choice of labelled or unlabelled images. The user can scroll through the image stack to fully appreciate the nature of the brain and its associated vasculature. In addition to the cross-sectional presentation, there are impressive images of both MR and CT
angiography. The brightly coloured MR reconstructions can be rotated or tumbled, giving an appreciation of the complex network of arteries and venous structures associated with the brain. CT images are also featured with coloured vessels and can be scrolled through in any of the three planes. Additionally, there are coloured vascular drainage territory maps that can be presented with or without labels. Although modest in appearance, this allows students to visually appreciate the areas that may be affected by territorial strokes, providing some insight into clinical application. However, a notable omission is further information relating to the labelled structures; there is no descriptive detail. Furthermore, there is an absence of self-assessment functionality or indeed any summaries of core clinical conditions. While this free application has a clinical focus, there is an absence of pathology. It is assumed that this application is aimed at users who are looking to supplement their formal teaching provision. For those starting clinical practice, it also has merit and would be a useful addition to the freely available databases of educational material that students can already access.
4.11.3 Video Series Review

Kenhub is a screencasting video series that contains an extensive collection of video lessons as part of a paid course structure. Signing up for a free account provides access to a limited selection of videos, and without a subscription the collection remains incomplete. The impact of this is that the unpaid provision has very limited value.
Fig. 4.11 Summary of review of Medical Imaging TEL resource – NeuroRad
Fig. 4.12 Overview of Video Series TEL resources
Fig. 4.13 Summary of review of Video Series TEL resource – Kenhub
Without a subscription, users may find themselves disappointed when a video that appears to be freely available is interrupted by advertisements for premium membership before the viewer can finish watching it (Figs. 4.12 and 4.13). Registration for a free account is straightforward and can be achieved through pre-existing Facebook or Google accounts. Full access currently costs £16 per month on a monthly scheme, £13 per month for a 3-month membership or £160 for lifetime membership (Kenhub 2019a, b). The premium content is impressive. The audio is clear, concise, descriptive and professional, if a little impersonal at times, although this may be due to cultural variations. Instruction is provided via still images which transition much like a slideshow. The images are navigated by a pointer and are occasionally animated with labels. In the view of the authors, the content is accurate. The company specifies its primary sources as Gray's Anatomy for Students (by Richard Drake, A. Wayne Vogl and Adam
Mitchell) and Clinically Oriented Anatomy (by Keith L. Moore, Arthur F. Dalley II and Anne M. R. Agur) (Kenhub 2019a, b). Arguably the main attraction is the visual aesthetic created by their simple but elegant illustrations, which have been attributed to several key partners, including Netter award winner Paul Kim. The illustrations are commonly seen in other publications (with permission) (Yoshioka 2016). This combination of clear images and corresponding audio relates particularly well to The Theory of Multimedia Learning (Mayer 2005) and successfully employs the contiguity principle. Spatially separating images and verbal information in audio or written form forces the learner to split their attention between the two. Evidence suggests that by simultaneously presenting information in a dual form, the instructor can overcome some of the barriers to learning (Moreno and Mayer 1999). Each of the videos is displayed alongside a navigation panel labelled 'highlights' which allows the user to jump to the part of the video of interest, a simple yet effective navigation feature. Known as signalling or cuing, this method of directing the learner to the relevant instructional material is a further method of preventing cognitive overload (Jeung et al. 1997). One helpful component of the subscribed package is that each video concludes by clinically contextualising the preceding material. Clinical context is not present in all the videos, but when present, it offers significant detail. The summaries of information are useful, with time allocated at the end of each video to revisit the salient points. As with most videos, there is not a clear way to assess knowledge, but as part of the online package, self-assessment is present in the modules and users can select assessments specifically related to the videos. One omission from Kenhub's video series, when compared to other resources, is the lack of cadaveric specimens, which was a deliberate decision. Although the purpose of this article is not to weigh up the benefits and risks of this, cadaveric specimens are still considered the cornerstone of medical training in anatomy (Dyer and Thorndike 2000; Azer and Eizenberg 2007). Therefore, this omission makes 3D visualisation difficult and may create a misalignment for students whose assessments include cadaveric specimens. Appreciating subtleties such as anatomical variation and relationships between structures, and being able to visualise the relationship between structure and function, becomes challenging without cadaveric specimens.

Dr. Najeeb Lectures are revered in the online student community, with a multitude of threads on Reddit and The Student Room. The lecture
series claims to be the world's most popular medical lectures (Dr. Najeeb Lectures 2019). The YouTube videos were first released in 2009, a time when online open-access video tutorials for medicine were less common, making this one of the forerunners in the field. This has given the series time to gain traction and global reach. The lectures vary from 10 min to over 3 h depending on the complexity of the subject matter and cover all aspects of the typical medical school curriculum, offering over 800 videos. A distinctive feature of these videos is that they are recorded live in the classroom, clearly displaying the rapport between the lecturer and the students. The gestures made by the lecturer are therefore genuine and realistic, making use of the embodiment principle. This theory suggests that a student watching this behaviour is more likely to engage with the lecture and learn (Mayer 2018). Dr. Najeeb's manner utilises the personalisation principle. He often uses quirky anecdotes and personal tips to make things more comprehensible. Research has shown that conversational language improves learning outcomes (Mayer 2018). Dr. Najeeb focuses on understanding rather than informing, which gives the lectures a genuine feel but also addresses comprehension and application in terms of cognitive taxonomy (Adams 2015). Since the series is recorded live, students watching the videos have time to think about the questions that are being put to the student audience. This creates a classroom effect which is familiar to the target audience of medical students (Fig. 4.14). There is limited discussion of clinical correlates for the neuroanatomy videos,
Fig. 4.14 Summary of review of Video Series TEL resource – Dr. Najeeb Lectures
with no use of radiological images or cadaveric specimens. The series maintains a very 'traditional' approach. Using only a whiteboard and oration, Dr. Najeeb communicates a great deal of anatomical information, including discussion of associated physiology. The series costs $99 for a lifetime membership, or, through certain academic institution affiliations, students are granted access through their university (Mercer University 2019). If this cost is analysed in the context of a resource that can be referred to over the 4–7 years of medical school, and for future reference throughout the decades of the average clinical career, it works out as one of the better value options in this review.

UBC Medicine has created high-quality video content within a massive open online course (MOOC), which is defined as a course available online with the aim of unlimited open access for public use (Kaplan and Haenlein 2016). Gaining popularity over the last two decades, MOOCs have been labelled as a form of disruptive innovation within educational technology, capable of giving global access to the best instructors via an organised mixed modality strategy (Barber et al. 2013) (Fig. 4.15).

Fig. 4.15 Summary of review of Video Series TEL resource – UBC Medicine

The creators refer to neurophobia, a phrase coined by Ralph F. Jozefowicz in 1994, which can be defined as a fear or apprehension of the neural sciences (Jozefowicz 1994). Dr. Krebs and the team behind this video series explain their rationale in the literature; the video series is targeted at students at their own institution, intended for use with a flipped classroom model (Krebs et al. 2014). Students are encouraged to engage with the video material before coming to practical sessions (Krebs et al. 2014). The flipped classroom approach, although intuitive, has witnessed mixed pedagogical outcomes when tested in medical education settings (Betihavas et al. 2016; Chen et al. 2017; Driscoll et al. 2012; Riddell et al. 2017). A recent systematic review found a lack of conclusive evidence for its effectiveness over traditional teaching methods in terms of knowledge acquisition, but students did find the approach more engaging (Chen et al. 2017). Theoretically, the approach should allow students to apply knowledge, engage in problem solving and ask questions in face-to-face sessions (Hwang et al. 2015). Until more evidence is available, it is important that such resources can also be utilised as content-rich and intuitive stand-alone resources flexible enough to be adopted with alternative educational approaches. It might be assumed that this video series tackles neurophobia by making the subject more accessible through high-quality filmmaking and approaches to production that aid in reducing the cognitive load (Mayer and Moreno 1998; Mayer 2005). A team of anatomists and medical students have collaborated with digital media professionals to create high-production-quality content that supports students' learning of key elements of neuroanatomy in a polished and attractive way (UBC Medicine 2019). The documentary style of the video series provides a contrast to the lower-budget filming styles most commonly seen on YouTube educational channels. The competent technical team supply the videos with features such as animated labelling alongside narrated video, taking advantage of the signalling principle. However, it might be argued that the strength of many 'home-made style' tutorials is the personalisation principle. Student narrators who have recently studied the material can also pass on useful acronyms, tips and memory tricks to the viewer. This proximity in learning is termed cognitive congruence and is a well-evidenced benefit of near-peer teaching (Hall et al. 2013; Khaw and Raw 2016). UBC Medicine has chosen to keep continuity in their narrators, rarely including student narrators.

Fig. 4.16 Summary of review of Video Series TEL resource – NeuroLogic Exam

As a hosting platform, YouTube does not automatically provide a logical progression of video content through a series unless the creator organises their content within a playlist manually, which UBC Medicine has done. Like some of the other providers on this list, they also tackle this problem using a website with a MOOC structure, which gives a more intuitive way to progress logically and navigate the material in the intended order. The material all corresponds to the syllabi published on the MOOC website (UBC Medicine 2019). The content has the most sophisticated interface of all the video series explored in this chapter, so it is surprising that it is part of a MOOC without purchasable premium content. It is assumed that the video series is not intended to provide clinical context, which may be a limiting factor for some medical students. However, this means that it may have more general appeal to other audiences, including those
studying neuroscience, psychology or cognitive psychology. In summary, the video series within this MOOC structure employs an innovative flipped classroom approach and takes advantage of cognitive theory.

NeuroLogic Exam was established by Paul D. Larsen, MD, a paediatric neurologist from the University of Nebraska, and Suzanne S. Stensaas, PhD, a neuroanatomist from the University of Utah School of Medicine. This MOOC exemplifies some of the many arguments for why anatomists and clinicians should collaborate to effectively blend the basic sciences with clinical context for medical education (Fig. 4.16). The website is organised by an index on the user's left-hand side which splits the neurological exam into six parts:

1. Mental status
2. Cranial nerves
3. Coordination
4. Sensory
5. Motor
6. Gait
Each exam is then broken down into the anatomy, normal and abnormal examination findings, and quizzes. The incorporation of clinical relevance into the anatomy teaching in this video series is superior to that of the other examples reviewed within this modality. It does lack some information on the diagnosis and management of named conditions, but it could be argued that this is not especially relevant to the anatomical content. The aim of clinically orientated undergraduate anatomy training is to enable a student to identify the differences between function and dysfunction, to understand the relationships between structures and to apply them to a multitude of situations (Pathiraja et al. 2014). This video series tackles the issue of neurophobia by teaching students the basic anatomy while simultaneously integrating it into clinical context (Jozefowicz 1994). Many of the videos are several years old and not as visually appealing as some of the more modern recordings, such as those provided by UBC Medicine. However, they are clear and concise and communicate the information well. The website is easy to navigate, and there are supporting materials available, including cadaveric specimen-based tutorials.

Lecturio has a neuroanatomy module targeted at medical students globally, covering 11 h and 47 min of video content. The German company, founded in 2008, states its mission as to 'simplify and optimize online medical education' (Fig. 4.17). The videos use green screen technology to blend a screencast with video footage of a teacher appearing face to camera, giving a virtual lecture effect. Similar to Kenhub, the website has a pleasing aesthetic, giving it a polished and attractive user interface. At the present time, access can be purchased for £49.99 per month on the 1-month plan. Alternatively, learners can commit to £34.99 a month for a 3-month plan or the cheapest option of £24.99 per month for a 12-month plan, making Lecturio the most expensive video provider in our review.
The content is delivered clearly and comprehensively. The Neuroanatomy module is divided into sections discussing Neuroembryology, Neurohistology, Head and Neck Anatomy, and Brain and Nervous System Anatomy, with each section containing a series of virtual lectures. In each lecture, the teachers speak to the camera using a very formal style. Much like Kenhub, the screencasts use simple, easily visualised diagrams to illustrate difficult concepts. In the authors' view, the content is accurate and concise, although some of the material overlaps. Repetition of content is important for memory consolidation, so the Lecturio model may well have a positive impact on knowledge gain (Hintzman 1976). Unlike the other resources, which offer supplementary self-assessment, the videos have an integrated quiz that starts the moment the video finishes playing. On answering correctly, the programme records that the user has 'memorised the information for four days', notifying the user after that time has passed that they need to revisit the assessment. This is a further example of spaced learning, a method of memory consolidation that is well supported in the literature (Kerfoot et al. 2010; Mubarak and Smith 2008). The software uses an algorithm to track responses and modify the learner experience (computer-adaptive instruction). This is a supporting characteristic of TEL. Using computer algorithms to optimise the learner experience cannot be achieved through traditional methods and has been shown to be effective, albeit expensive and technically difficult to produce (Cook 2012; Kohlmeier et al. 2003).

Fig. 4.17 Summary of review of Video Series TEL resource – Lecturio

Another aspect of this resource is the 3D interactive model accompanying each video. This model represents the structures discussed in each video. The 3D models are easy to interact with and manipulate. Similarly to Kenhub, Lecturio chooses to avoid the use of cadaveric specimens and radiological images, instead focusing on pictorial representation of the anatomy, which offers the advantage of simplifying the subject matter to make it more accessible. Furthermore, laws dictating the use and distribution of images of cadavers for educational purposes vary from country to country. Reference is made to clinical context in some videos but it is often minimal. A transcript is available for each video and can be read in the learner's own time to reinforce information or to support those who are hard of hearing. This is an excellent method of allowing wider access to online resources. Although the videos are not summarised directly, the inbuilt quizzes act as a nice summarising feature.
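The 'memorised for four days' reminder described above is a simple instance of spaced-repetition scheduling. The sketch below illustrates the general technique only; the class, its parameter names and the doubling rule are hypothetical and do not represent Lecturio's actual (non-public) algorithm.

```python
from datetime import date, timedelta

# Hypothetical sketch of a spaced-learning scheduler of the kind described
# above: a correct answer lengthens the interval before the next reminder,
# an incorrect answer resets it. Illustrative only, not Lecturio's algorithm.

class SpacedReviewItem:
    def __init__(self, initial_interval_days=4):
        # A four-day starting interval, matching the behaviour described above
        self.initial = timedelta(days=initial_interval_days)
        self.interval = self.initial
        self.next_review = date.today() + self.interval

    def record_answer(self, correct, today=None):
        today = today or date.today()
        if correct:
            # Push the next reminder further into the future
            self.interval *= 2
        else:
            # Reset to the initial short interval
            self.interval = self.initial
        self.next_review = today + self.interval

    def is_due(self, today=None):
        # The user is notified once the scheduled review date has passed
        return (today or date.today()) >= self.next_review
```

A real computer-adaptive system would additionally weight the interval by the learner's response history, which is where the expense and technical difficulty noted above arise.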
4.11.4 Plain Text Review
TeachMeAnatomy is part of the broader TeachMeSeries, a collection of articles covering medical topics from paediatrics to surgery. The concept was conceived by a group of clinicians and medical students including the company director (now doctor) Oliver Jones. The TeachMeAnatomy section was the first element to be built in the series and started receiving visitors in 2012. The content is now available on an app, and the website has received over 20 million visitors (LinkedIn 2019). TeachMeAnatomy is a comprehensive but concise guide to many areas of anatomy, including neuroanatomy, and has even been used in studies as a ‘modern textbook’ (Stepan et al. 2017) (Figs. 4.18 and 4.19). The TeachMeAnatomy website is organised into 11 categories, one of which is neuroanatomy. The neuroanatomy section is then subdivided further into:
Fig. 4.18 Overview of review of Plain Text TEL resources
Fig. 4.19 Summary of review of Plain Text TEL resource – TeachMeAnatomy
A. Elmansouri et al.
• Structures
• Brainstem
• Pathways
• Cranial nerves
This intuitive structure allows for ease of navigation through the material. Once users have navigated to the topic they wish to study, they are met with short blocks of text separated by subheadings, such as ‘Anatomical Position’ and ‘Vasculature’. The text uses simple and concise language to communicate the content clearly to the reader with little embellishment, taking advantage of the coherence principle. Corresponding illustrated, labelled diagrams accompany the written material, making it easier to visualise the anatomy; this makes good use of the signalling principle. Both methods have been shown to reduce extraneous processing and improve learning (Mayer 2018). However, the diagrams sometimes exaggerate structures which are more ambiguous in life, and so cadaveric specimens would likely add to the viewer’s appreciation of the anatomy. Some articles are accompanied by an extra section at the bottom of the page, in its own separate text field titled ‘Clinical Relevance’, where the anatomy from the topic covered is put into context, sometimes with accompanying radiological images. Many topics without this subsection also have clinical relevance, and it is unclear why it exists for some topics and not others. Short quizzes are available for a limited number of topics, though a much larger bank of questions is accessible with a premium subscription. Much like
Lecturio from the video review section, TeachMeAnatomy has incorporated a 3D anatomical model. Through a partnership with BioDigital Human, students can use the resource to supplement the articles, providing they have a paid membership. There are also some hyperlinks to Lecturio video tutorials on the relevant topics. The TeachMeSeries is subtle when it comes to charging for its products. Pricing starts at $6 per month, rising to $139 for lifetime access. However, the free content is very useful as a standalone feature. Premium content is primarily focused on the 3D model element and adds a significant amount of self-assessment material, but all the core material is available to free users, in stark contrast to resources like Kenhub and Lecturio whose content is close to unusable without premium membership. GeekyMedics gained popularity through comprehensive video tutorials designed to aid OSCE revision for clinical medical students (Fig. 4.20). In the authors’ view, this website is organised slightly less intuitively than the TeachMeSeries. The content is split into drop-down menus under topic headings on a bar at the top of the page, which is well organised and easily navigated. However, once a subtopic such as ‘Head and Neck Anatomy’ has been chosen, the user is taken to a page with all the resources under that subtopic arranged in large thumbnails, where only three resources fit on the screen at any one time without further subclassification. For example, each cranial nerve has its own resource, but users cannot select the cranial nerves as a topic and then choose the nerve they want from a menu.
Fig. 4.20 Summary of review of Plain Text TEL resource – GeekyMedics
Fig. 4.21 Summary of review of Plain Text TEL resource – UBC Neuroanatomy
GeekyMedics’ resources are structured with a video or still image at the top of the page and a scrollable block of text below. When a video is present, the text is often close to a transcript of the video material, but it is well structured under relevant subheadings. The entire resource remains completely free for users, with no pricing system or hidden premium content. Every page in the Head and Neck section has a short written piece at the bottom of the page in its own separate text field titled ‘Clinical Relevance’. The inclusion of references at the bottom of the page for both written content and images is reassuring for students, informing them where the information they are committing precious revision time to memorising comes from. It reiterates the need for an evidence base, something which underpins medical and educational practice, is an important first principle, and is sometimes lacking in TEL resources. At the very end of some of the articles, there is a useful ‘Must Know’ section. Like the TeachMeSeries, it is unclear whether GeekyMedics uses specific syllabi for guidance, so the basis for the claim that students ‘must know’ these points is also unclear. However, it serves as a useful summary of the key take-home messages the rest of the article aims to communicate. Self-assessment is not directly available within the articles as it is in TeachMeAnatomy. However, GeekyMedics has launched its own question bank of material. Similarly to Lecturio, GeekyMedics employs a spaced-repetition algorithm in its question bank. The literature supports spaced repetition as a good
method of memory consolidation (Mubarak and Smith 2008). The platform differs from many others in relying on community contributions to write the questions. The advantage of this method is that it employs a near-peer teaching approach, which is credited in the literature for many reasons including social and cognitive congruence (Hall et al. 2013; Khaw and Raw 2016). As a MOOC, UBC Medicine unsurprisingly features in both the video and plain text parts of this review. The breadth of the UBC MOOC resource dwarfs many others, clearly identifying it as a very useful resource (Fig. 4.21). The comprehensively structured site can be navigated via several routes. One way of navigating is via the brief syllabus pages, which give the user clear objectives as to what will be covered by using that page’s resources. The relevant videos corresponding to those objectives are then displayed underneath, followed by the written resources. The written resources come in the form of interactive stepwise modules. These are designed for students to work through either alongside or independently of the video series covered elsewhere in this review. The material is clear and accurate. The student navigates through by clicking the next button or interacting in some way to complete the material, similar to a presentation. At particular points, students are required to fill in a free text box or complete exercises such as matching content in order to advance to the next page. The main benefit is the user’s control over how much material they are exposed to at a time, so as not to be overwhelmed by huge
Fig. 4.22 Summary of review of Plain Text TEL resource – Passmedicine
blocks of text, reducing the cognitive load (Mayer and Moreno 1998; Mayer 2005). The material progresses logically through the learning objectives in a well-designed and well-organised fashion. Clinical correlates are limited, but less so than in the video element, with some cases for students to work through. Passmedicine belongs to a particular subcategory of plain text resources: question banks. Online question banks have gained popularity among medical students, likely because they allow students to evidence their knowledge directly through instant, measurable feedback. Passmedicine has been running since 2009, and it currently boasts a range of question bank categories including medical school finals, GP exams, various physician exams and the applied knowledge test (Fig. 4.22). This is an entirely subscription-based resource. The cost structure for medical students preparing for finals is as follows:
• 4 months – £12
• 6 months – £15
• 9 months – £20
• 12 months – £25
For medical students in years 1–3 (typically the non-clinical/basic science years), a free subscription is available. The resource is split into questions and a virtual textbook. The questions section is further split into a standard question bank of cases testing applied knowledge, and ‘Knowledge Tutor’ questions with shorter stems and no cases that
directly test recall. Both these resources can be browsed by subject. The virtual textbook acts as a syllabus, covering all the information that is testable in the questions. It is arranged by topic and, similarly to the question bank selection menu, subtopics are then arranged alphabetically on an index on the left-hand side of the screen. Although basic in appearance, the interface is very easy to navigate. Again, alphabetical order does not provide a logical means of learning the syllabus, but it does mean that students can be comforted in the knowledge that they are unlikely to miss things and can easily track their progress. Explanations are clear, concise and accurate, often quoting relevant guidelines. This is an intuitive way for users to read around a topic when they realise they cannot recall what they had hoped, saving time digging around for relevant information every time they struggle with a question. Furthermore, the question bank has a useful algorithm that displays relevant third-party resources the user could access to learn more about the subject from reliable sources, such as informative YouTube videos from the likes of Osmosis, Dr. Najeeb and SotonBrainHub, and guidelines from relevant royal colleges or NICE. Another user-orientated algorithm employs spaced repetition to ensure that questions around similar topics are spaced out in the student’s revision for increased memory consolidation (Mubarak and Smith 2008). The interface is not particularly attractive compared to resources like the TeachMeSeries or Lecturio, but content is king and Passmedicine is
content heavy. The final year and years 1–3 resources contain over 3000 questions each, with over 200 on neuroanatomy. The textbooks are extensive and cover a wide but sensible breadth and depth. The neuroanatomy sections of the resources for medical students display the odd cadaveric specimen on a couple of the pages. They also include useful diagrams and some summary tables. Radiological images are used relatively rarely. As a resource, it covers a wider syllabus including neurophysiology and pharmacology, but the anatomical sections do not connect those parts together well. It scores well in the clinical relevance area, as the majority of the questions are case-based and so the student almost always has to apply knowledge and work backwards to identify the anatomical relevance. Students are constantly assessing their knowledge, and progress can be tracked easily by checking the progress bar and a breakdown of statistics on the dashboard page.
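Several of the resources reviewed (Lecturio, GeekyMedics, Passmedicine) are described as using spaced-repetition scheduling. None of these platforms publish their scheduling code, so the following is only an illustrative sketch of a generic interval-based scheduler of the kind described, assuming a simple interval-doubling rule and the four-day initial interval reported for Lecturio; the function name and the exact rule are hypothetical, not any platform’s actual algorithm.

```python
from datetime import date, timedelta

# Illustrative sketch only: a common spaced-repetition heuristic
# (cf. Leitner-box / SM-2-style systems), not the actual algorithm
# of Lecturio, GeekyMedics or Passmedicine.
def next_review(last_interval_days: int, answered_correctly: bool) -> timedelta:
    """Return the delay before a question should reappear."""
    if answered_correctly:
        # Correct answers push the question further into the future,
        # starting from the four-day interval Lecturio reports.
        return timedelta(days=max(4, last_interval_days * 2))
    # Incorrect answers reset the question for prompt revisiting.
    return timedelta(days=1)

today = date(2020, 1, 1)
print(today + next_review(0, True))   # first correct answer: 2020-01-05
print(today + next_review(4, True))   # second correct answer: 2020-01-09
print(today + next_review(8, False))  # lapse: back on 2020-01-02
```

The pedagogical point is simply that successful recalls lengthen the gap before re-testing while failures shorten it, which is the mechanism the spacing-effect literature cited above supports.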
4.12 Concluding Remarks
There is a plethora of learning resources available for clinical students in 2020. Interestingly, many of them include student involvement or contributions as a way to better target the needs of today’s student population. Although there is an argument that the proximity of social and cognitive congruence provides a more accessible and enjoyable learning experience (Lockspeiser et al. 2008), there is probably insufficient evidence to suggest that this approach leads to greater learning. However, it is likely that benefits exist for these educational pioneers’ careers and personal knowledge as well as for the generations of students using the resources, as is often the case with near-peer teaching (Hall et al. 2018; Hill et al. 2010). Many of the resources reviewed exhibited features that related to pedagogical theory. However, very few employed more than one or two approaches, and even fewer offered any justification in the literature for their design. The emphasis on user experience over evidence-informed design is unsurprising from organisations looking to monetise their educational products,
since this may not be something that necessarily draws in their subscribers. However, institutions providing open-access digital learning resources created with the objective of producing learning have more of a moral responsibility to champion an evidence-informed approach towards the instructional design of their learning content. If resources are going to be recommended to students to supplement their knowledge on formal degree programmes, it is important that those resources actually enhance learning. Investigating the degree to which resources actually impact learning in an objective way ensures that they are fit for purpose. If a resource is instructionally equal or superior to its technically superior/more expensive competitors, it can be a viable alternative for students (Cook 2014). Several levels of assessment have been proposed to discern whether a TEL resource meets the needs of students (Pickering et al. 2019):
• Level 0 – the rationale/need for the TEL in a teaching programme
• Level 1 – learner satisfaction (Level 1a) and learning gain (Level 1b)
• Level 2 – broad impact on student outcomes
• Level 3 – institutional judgement of value versus cost
These evaluative levels require planning, ethical approval and resource management by institutions wishing to integrate TEL into their educational programmes (Pickering et al. 2019). Multimedia learning theory should underpin the development of TEL resources in order to deliver their educational message effectively, by facilitating appropriate cognitive processing of new information by the learner whilst reducing the risk of cognitive overload (Mayer 2005). Resources that do this stand a better chance of performing well on objective measurements of effectiveness, as the theory has already been shown to be effective in practice (Mayer 2018). Finally, TEL should have a clear instructional goal (Mayer 2018). A syllabus of material makes this more straightforward.
Previous literature has emphasised the importance of creating syllabi in anatomy, particularly in very specialist subjects
such as neuroanatomy (Moxham et al. 2015). This review has further highlighted the importance of having a core syllabus for the learner to consult, especially if taking a self-directed approach with TEL.
References
3D4Medical (2019) Pricing, 3D4Medical. https://store.3d4medical.com/. Accessed Nov 2019 Abe K (2001) Modulation of hippocampal long-term potentiation by the amygdala: a synaptic mechanism linking emotion and memory. Jpn J Pharmacol 86(1):18–22 Adams NE (2015) Bloom’s taxonomy of cognitive learning objectives. J Med Libr Assoc 103(3):152 Allen LK, Ren HZ, Eagleson R, de Ribaupierre S (2016) Development of a web-based 3D module for enhanced neuroanatomy education. MMVR 220:5–8 Anderson L, Krathwohl DA (2001) Taxonomy for learning, teaching and assessing: a revision of Bloom’s taxonomy of educational objectives. Longman, New York Avraamidou L, Osborne J (2009) The role of narrative in communicating science. Int J Sci Educ 31(12):1683–1707 Azer SA, Eizenberg N (2007) Do we need dissection in an integrated problem-based learning medical course? Perceptions of first- and second-year students. Surg Radiol Anat 29(2):173–180 Baddeley A (1992) Working memory. Science 255(5044):556–559 Bajpai S, Semwal M, Bajpai R, Car J, Ho AHY (2019) Health professions’ digital education: review of learning theories in randomized controlled trials by the Digital Health Education Collaboration. J Med Internet Res 21(3):12912 Barber M, Donnelly K, Rizvi S, Summers L (2013) An avalanche is coming: higher education and the revolution ahead. The Institute of Public Policy Research, London Battulga B, Konishi T, Tamura Y, Moriguchi H (2012) The effectiveness of an interactive 3-dimensional computer graphics model for medical education. Interact J Med Res 1(2):e2 Bergman EM, Prince KJ, Drukker J, van der Vleuten CP, Scherpbier AJ (2008) How much anatomy is enough? Anat Sci Educ 1(4):184–188 Betihavas V, Bridgman H, Kornhaber R, Cross M (2016) The evidence for ‘flipping out’: a systematic review of the flipped classroom in nursing education. Nurse Educ Today 38:15–21 BioDigital (2017) BioDigital Human. https://www.biodigital.com/about.
Accessed Nov 2019 Bloom BS (1956) Taxonomy of educational objectives: the classification of educational goals. Cognitive domain
Bloom KC, Hough MC (2003) Student satisfaction with technology-enhanced learning. Comput Inform Nurs 21(5):241–246 Border S (2019) Assessing the role of screencasting and video use in anatomy education. Biomed Vis 1171:1–13. Springer, Cham Border S, Hennessy C, Pickering J (2019) The rapidly changing landscape of student social media use in anatomy education. Anat Sci Educ 12(5):577–579 Brewer DN, Wilson TD, Eagleson R, De Ribaupierre S (2012) Evaluation of neuroanatomical training using a 3D virtual reality model. MMVR 173:85–91 Buzzetto-More NA (2014) An examination of undergraduate student’s perceptions and predilections of the use of YouTube in the teaching and learning process. Interdiscip J E-Learning Learn Objects 10(1):17–32 Chandler P, Sweller J (1991) Cognitive load theory and the format of instruction. Cogn Instr 8(4):293–332 Chen F, Lui AM, Martinelli SM (2017) A systematic review of the effectiveness of flipped classrooms in medical education. Med Educ 51(6):585–597 Christensen C (2008) Disruptive innovation and catalytic change in higher education. Forum Futur High Educ 3:43–46 Clunie L, Morris NP, Joynes VC, Pickering JD (2018) How comprehensive are research studies investigating the efficacy of technology-enhanced learning resources in anatomy education? A systematic review. Anat Sci Educ 11(3):303–319 Collins JP (2009) Are the changes in anatomy teaching compromising patient care? Clin Teach 6(1):18–21 Cook DA (2012) Revisiting cognitive and learning styles in computer-assisted instruction: not so useful after all. Acad Med 87(6):778–784 Cook DA (2014) The value of online learning and MRI: finding a niche for expensive technologies. Med Teach 36(11):965–972 Derry SJ (1996) Cognitive schema theory in the constructivist debate. Educ Psychol 31(3–4):163–174 Docherty C, Hoy D, Topp H, Trinder K (2005) eLearning techniques supporting problem based learning in clinical simulation.
Int J Med Inform 74(7–8):527–533 Drake RL (1999) Anatomy education in a changing medical curriculum. Kaibogaku Zasshi J Anat 74(4):487–490 Drake RL (2007) A unique, innovative, and clinically oriented approach to anatomy education. Acad Med 82(5):475–478 Driscoll A, Jicha K, Hunt AN, Tichavsky L, Thompson G (2012) Can online courses deliver in-class results? A comparison of student performance and satisfaction in an online versus a face-to-face introductory sociology course. Teach Sociol 40(4):312–331 Dror I, Schmidt P, O’connor L (2011) A cognitive perspective on technology enhanced learning in medical training: great opportunities, pitfalls and challenges. Med Teach 33(4):291–296 Dyer GS, Thorndike ME (2000) Quidne mortui vivos docent? The evolving purpose of human dissection in medical education. Acad Med 75(10):969–979
Ellis H (2002) Medico-legal litigation and its links with surgical anatomy. Surgery (Oxford) 20(8):i–ii Estevez ME, Lindgren KA, Bergethon PR (2010) A novel three-dimensional tool for teaching human neuroanatomy. Anat Sci Educ 3(6):309–317 Evans DJ (2011) Using embryology screencasts: a useful addition to the student learning experience? Anat Sci Educ 4(2):57–63 Garg AX, Norman G, Sperotable L (2001) How medical students learn spatial anatomy. Lancet 357(9253):363–364 Geoghegan K, Payne DR, Myers MA, Hall S, Elmansouri A, Parton WJ, Harrison CH, Stephens J, Parker R, Rae S, Merzougui W (2019) The national undergraduate neuroanatomy competition: lessons learned from partnering with students to innovate undergraduate neuroanatomy education. Neuroscientist 25(3):271–280 Gredler ME (1992) Learning and instruction: theory into practice. Macmillan, New York Hackett M, Proctor M (2016) Three-dimensional display technologies for anatomical education: a literature review. J Sci Educ Technol 25(4):641–654 Hall S, Lewis M, Border S, Powell M (2013) Near-peer teaching in clinical neuroanatomy. Clin Teach 10(4):230–235 Hall S, Harrison CH, Stephens J, Andrade MG, Seaby EG, Parton W, McElligott S, Myers MA, Elmansouri A, Ahn M, Parrott R, Smith CF, Border S (2018) The benefits of being a near-peer teacher. Clin Teach 15:1–5 Hennessy CM, Kirkpatrick E, Smith CF, Border S (2016) Social media and anatomy education: using twitter to enhance the student learning experience in anatomy. Anat Sci Educ 9(6):505–515 Heylings DJA (2002) Anatomy 1999–2000: the curriculum, who teaches it and how? Med Educ 36(8):702–710 Hill E, Liuzzi F, Giles J (2010) Peer-assisted learning from three perspectives: student, tutor and co-ordinator. Clin Teach 7(4):244–246 Hintzman DL (1976) Repetition and memory. In: Psychology of learning and motivation, vol 10.
Academic Press, New York, pp 47–91 Houston JP (2014) Fundamentals of learning and memory. Academic Hwang GJ, Lai CL, Wang SY (2015) Seamless flipped learning: a mobile technology-enhanced flipped classroom with effective learning strategies. J Comput Educ 2(4):449–473 Jaffar AA (2012) YouTube: an emerging tool in anatomy education. Anat Sci Educ 5(3):158–164 Jaffar AA (2014) Exploring the use of a Facebook page in anatomy education. Anat Sci Educ 7(3):199–208 James KH, Humphrey GK, Vilis T, Corrie B, Baddour R, Goodale MA (2002) “Active” and “passive” learning of three-dimensional object structure within an immersive virtual reality environment. Behav Res Methods Instrum Comput 34(3):383–390 Javaid MA, Schellekens H, Cryan JF, Toulouse A (2019) Evaluation of neuroanatomy web-resources for under-
graduate education: educators’ and students’ perspectives. Anat Sci Educ 13(2):237–249 Jeung HJ, Chandler P, Sweller J (1997) The role of visual indicators in dual sensory mode instruction. Educ Psychol 17(3):329–345 Jozefowicz RF (1994) Neurophobia: the fear of neurology among medical students. Arch Neurol 51(4):328–329 Kaplan AM, Haenlein M (2016) Higher education and the digital revolution: about MOOCs, SPOCs, social media, and the cookie monster. Bus Horiz 59(4):441–450 Kaufman MH (1997) Anatomy training for surgeons—a personal viewpoint. J R Coll Surg Edinb 42(4):215 Kenhub (2019a) Pricing, Kenhub, viewed Oct 2019. https://www.kenhub.com/en/pricing Kenhub (2019b) Pricing, Kenhub, viewed Oct 2019. https://www.kenhub.com/en/about Kensinger EA (2009) Remembering the details: effects of emotion. Emot Rev 1(2):99–113 Kerfoot BP, Fu Y, Baker H, Connelly D, Ritchey ML, Genega EM (2010 Sept 1) Online spaced education generates transfer and improves long-term retention of diagnostic skills: a randomized controlled trial. J Am Coll Surg 211(3):331–337 Khalil MK, Abdel Meguid EM, Elkhider IA (2018) Teaching of anatomical sciences: a blended learning approach. Clin Anat 31(3):323–329 Khaw C, Raw L (2016) The outcomes and acceptability of near-peer teaching among medical students in clinical skills. Int J Med Educ 7:188 Kinjo H, Snodgrass JG (2000) Is there a picture superiority effect in perceptual implicit tasks? Eur J Cogn Psychol 12(2):145–164 Kirkwood A, Price L (2014) Technology-enhanced learning and teaching in higher education: what is ‘enhanced’ and how do we know? A critical literature review. Learn Media Technol 39(1):6–36 Kohlmeier M, McConathy WJ, Cooksey Lindell K, Zeisel SH (2003) Adapting the contents of computer- based instruction based on knowledge tests maintains effectiveness of nutrition education. 
Am J Clin Nutr 77(4):1025S–1027S Krebs C, Holman P, Bodnar T, Weinberg J, Vogl W (2014) Flipping the neuroanatomy labs: how the production of high quality video and interactive modules changed our approach to teaching (211.3). FASEB J 28(1):211–213 Lee EAL, Wong KW, Fung CC (2010) How does desktop virtual reality enhance learning outcomes? A structural equation modeling approach. Comput Educ 55(4):1424–1442 Lewis TL, Burnett B, Tunstall RG, Abrahams PH (2014) Complementing anatomy education using three- dimensional anatomy mobile software applications on tablet computers. Clin Anat 27(3):313–320 LinkedIn (2019) TeachMeSeries. About, TeachMeSeries. https://www.linkedin.com/company/teachmeseries/ about/. Accessed Nov 2019 Lochner L, Wieser H, Waldboth S, Mischo-Kelling M (2016) Combining traditional anatomy lectures with
e-learning activities: how do students perceive their learning experience? Int J Med Educ 7:69 Lockspeiser TM, O’Sullivan P, Teherani A, Muller J (2008) Understanding the experience of being taught by peers: the value of social and cognitive congruence. Adv Health Sci Educ 13(3):361–372 Marchionini G (2006) Exploratory search: from finding to understanding. Commun ACM 49(4):41–46 Martin CM, Roach VA, Nguyen N, Rice CL, Wilson TD (2013) Comparison of 3D reconstructive technologies used for morphometric research and the translation of knowledge using a decision matrix. Anat Sci Educ 6(6):393–403 Martínez EG, Tuesca R (2014) Modified team-based learning strategy to improve human anatomy learning: a pilot study at the Universidad del Norte in Barranquilla, Colombia. Anat Sci Educ 7(5):399–405 Mayer RE (2002) Cognitive theory and the design of multimedia instruction: an example of the two-way street between cognition and instruction. New Dir Teach Learn 2002(89):55–71 Mayer RE (2005) Cognitive theory of multimedia learning. Camb Handb Multimedia Learn 41:31–48 Mayer RE (2018) Designing multimedia instruction in anatomy: an evidence-based approach. Clin Anat. doi:https://doi.org/10.1002/ca.23265 Mayer RE, Fiorella L (2014) 12 principles for reducing extraneous processing in multimedia learning: coherence, signaling, redundancy, spatial contiguity, and temporal contiguity principles. The Cambridge handbook of multimedia learning, New York, p 279 Mayer RE, Moreno R (1998) A cognitive theory of multimedia learning: implications for design principles. J Educ Psychol 91(2):358–368 Mercer University School of Medicine, Dr Najeeb Lectures, Skelton Medical Libraries. https://med.mercer.edu/library/board-review-drnajeeblectures.htm. Accessed Nov 2019 Molitor S, Ballstaedt SP, Mandl H (1989) Problems in knowledge acquisition from text and pictures. Adv Psychobiol 58:3–35.
North-Holland Moreno R, Mayer RE (1999, June) Visual presentations in multimedia learning: conditions that overload visual working memory. In: International conference on advances in visual information systems. Springer, Berlin/Heidelberg, pp 798–805 Morrone AS, Tarr TA (2005) Theoretical eclecticism in the college classroom. Innov High Educ 30(1):7–21 Moxham B, McHanwell S, Plaisant O, Pais D (2015) A core syllabus for the teaching of neuroanatomy to medical students. Clin Anat 28(6):706–716 Mubarak R, Smith DC (2008, January) Spacing effect and mnemonic strategies: a theory-based approach to E-learning. In e-learning (pp. 269-272) Mukhopadhyay S, Kruger E, Tennant M (2014) YouTube: a new way of supplementing traditional methods in dental education. J Dent Educ 78(11):1568–1571 Najeeb Lectures (2019) About, DrNajeeb Lectures. https://www.youtube.com/user/DoctorNajeeb/about. Accessed Nov 2019
Older J (2004) Anatomy: a must for teaching the next generation. Surgeon 2(2):79–90 Paas F, Renkl A, Sweller J (2004) Cognitive load theory: instructional implications of the interaction between information structures and cognitive architecture. Instr Sci 32(1):1–8 Pabst R (2009) Anatomy curriculum for medical students: what can be learned for future curricula from evaluations and questionnaires completed by students, anatomists and clinicians in different countries? Ann Anat Anatomischer Anzeiger 191(6):541–546 Papinczak T, Young L, Groves M, Haynes M (2008) Effects of a metacognitive intervention on students’ approaches to learning and self-efficacy in a first year medical course. Adv Health Sci Educ 13(2):213–232 Pathiraja F, Little D, Denison AR (2014) Are radiologists the contemporary anatomists? Clin Radiol 69(5):458 Pereira JA, Pleguezuelos E, Merí A, Molina-Ros A, Molina-Tomás MC, Masdeu C (2007) Effectiveness of using blended learning strategies for teaching and learning human anatomy. Med Educ 41(2):189–195 Phelps EA (2004) Human emotion and memory: interactions of the amygdala and hippocampal complex. Curr Opin Neurobiol 14(2):198–202 Pickering JD (2015) Anatomy drawing screencasts: enabling flexible learning for medical students. Anat Sci Educ 8(3):249–257 Pickering JD (2017) Measuring learning gain: comparing anatomy drawing screencasts and paper-based resources. Anat Sci Educ 10(4):307–316 Pickering JD, Lazarus MD, Hallam JL (2019) A practitioner’s guide to performing a holistic evaluation of technology-enhanced learning in medical education. Med Sci Educ 29(4):1095–1102 Riddell J, Jhun P, Fung CC, Comes J, Sawtelle S, Tabatabai R, Joseph D, Shoenberger J, Chen E, Fee C, Swadron SP (2017) Does the flipped classroom improve learning in graduate medical education? J Grad Med Educ 9(4):491–496 Shaffer K (2004) Teaching anatomy in the digital world. N Engl J Med 351(13):1279–1281 Shuell TJ (1986) Cognitive conceptions of learning.
Rev Educ Res 56(4):411–436 Sritharan K (2005) The rise and fall of anatomy. BMJ 331(Suppl S3):0509332 Stepan K, Zeiger J, Hanchuk S, Del Signore A, Shrivastava R, Govindaraj S, Iloreta A (2017, October) Immersive virtual reality as a teaching tool for neuroanatomy. Int Forum Allergy Rhinol 7(10):1006–1013 Sugand K, Abrahams P, Khurana A (2010) The anatomy of anatomy: a review for its modernization. Anat Sci Educ 3(2):83–93 Sugar W, Brown A, Luterbach K (2010) Examining the anatomy of a screencast: uncovering common elements and instructional strategies. Int Rev Res Open Distrib Learn 11(3):1–20 Svinicki MD (1999) New directions in learning and motivation. New Dir Teach Learn 1999(80):5–27
Sweller J (2004) Instructional design consequences of an analogy between evolution by natural selection and human cognitive architecture. Instr Sci 32(1–2):9–31 Terrell M (2006) Anatomy of learning: instructional design principles for the anatomical sciences. Anat Record Part B New Anat 289(6):252–260 Trelease RB (2016) From chalkboard, slides, and paper to e-learning: how computing technologies have transformed anatomical sciences education. Anat Sci Educ 9(6):583–602 UBC Media for Medical Education Faculty of Medicine, Neuroanatomy Next Level Anatomy Education, UBC Media, viewed November 2019. https://education.med.ubc.ca/project/neuroanatomy-next-levelmedical-education/ Van Merrienboer JJ, Sweller J (2005) Cognitive load theory and complex learning: recent developments and future directions. Educ Psychol Rev 17(2):147–177 Van Nuland SE, Rogers KA (2016) The anatomy of E-learning tools: does software usability influence learning outcomes? Anat Sci Educ 9(4):378–390 Visible Body (2019a) About us, Visible Body, viewed November 2019. https://www.visiblebody.com/ Visible Body (2019b) Human Anatomy Atlas, Visible Body, viewed November 2019. https://www.
73
visiblebody.com/anatomy-and-physiology-apps/ human-anatomy-atlas Vygotsky LS (1980) Mind in society: the development of higher psychological processes. Harvard university press, London Waddill PJ, McDaniel MA (1992) Pictorial enhancement of text memory: limitations imposed by picture type and comprehension skill. Mem Cogn 20(5):472–482 Wong LH, Looi CK (2011) What seams do we remove in mobile-assisted seamless learning? A critical review of the literature. Comput Educ 57(4):2364–2381 Woolfolk AE (1998) Readings in educational psychology. Prentice Hall/Allyn and Bacon, 200 Old Tappan Rd., Old Tappan, NJ 07675; fax: 800-445-6991; toll-free Yoshioka N (2016) Masseter atrophication after masseteric nerve transfer. Is it negligible? Plastic Reconstruct Surgery Global Open 4(4):e669 Zendejas B, Brydges R, Wang AT, Cook DA (2013) Patient outcomes in simulation-based medical education: a systematic review. J Gen Intern Med 28(8):1078–1089 Zufferey JD, Schneider D (2016) Grand challenge problem 6: technology to bridge the gap between learning contexts in vocational training. In: Grand challenge problems in technology-enhanced learning II: MOOCs and beyond. Springer, Cham, pp 29–31
5 From Scope to Screen: The Evolution of Histology Education
Jamie A. Chapman, Lisa M. J. Lee, and Nathan T. Swailes
Abstract
Histology, the branch of anatomy also known as microscopic anatomy, is the study of the structure and function of the body's tissues. To gain an understanding of the tissues of the body is to learn the foundational underpinnings of anatomy and achieve a deeper, more intimate insight into how the body is constructed, functions, and undergoes pathological change. Histology, therefore, is an integral element of basic science education within today's medical curricula. Its development as a discipline is inextricably linked to the evolution of the technology that allows us to visualize it. This chapter takes us on the journey through the past, present, and future of histology and its education; from technologies grounded in ancient understanding and control of the properties of light, to the ingenuity of crafting glass lenses that led to the construction of the first microscopes; traversing the second revolution in histology through the development of modern histological techniques and methods of digital and virtual microscopy, which allows learners to visualize histology anywhere, at any time; to the future of histology that allows flexible self-directed learning through social media, live-streaming, and virtual reality as a result of the powerful smart technologies we all carry around in our pockets. But is our continuous pursuit of technological advancement projecting us towards a dystopian world where machines with artificial intelligence learn how to read histological slides and diagnose the diseases in the very humans that built them?

J. A. Chapman (*) College of Health and Medicine, Tasmanian School of Medicine, University of Tasmania, Hobart, TAS, Australia e-mail: [email protected]
L. M. J. Lee Department of Cell and Developmental Biology, University of Colorado School of Medicine, Aurora, CO, USA e-mail: [email protected]

Keywords
History of histology · Modern medical education · Virtual microscopy · Technology-enhanced instruction · Social media (SoMe)
N. T. Swailes Department of Anatomy and Cell Biology, Roy J. and Lucille A. Carver College of Medicine, The University of Iowa, Iowa City, IA, USA e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020 P. M. Rea (ed.), Biomedical Visualisation, Advances in Experimental Medicine and Biology 1260, https://doi.org/10.1007/978-3-030-47483-6_5
5.1 Introduction
Histology is the microscopic study of the structure and function of the body’s tissues. Tissues are composed of cells surrounded by an extracellular matrix (ECM), a collection of fibrous proteins embedded in a proteoglycan-rich ground substance. Together, the cells and the ECM work in symphony to provide a diverse but specific range of functions in the human body, from the absorption of nutrients in the digestive system to the contraction and relaxation of skeletal muscles during locomotion. To learn the tissues of the body is to learn the cellular and extracellular underpinnings of anatomy and achieve a deeper, more intimate insight into how the body is constructed, functions, and undergoes pathological change. Histology, therefore, serves as a linchpin for the integration of gross anatomy, physiology, and embryology and shares a kinship with the practice of clinical pathology. On its surface, histology is also arguably the most aesthetically beautiful of the basic sciences. Its colourful hues and abstract patterns blur the fine line between art and science making it as attractive visually as it is challenging intellectually to its students. A siren on the turbulent seas of medical school. However, a closer look at histology’s other attributes clearly reveals the major reasons why its place has been firmly cemented within the foundation of medical education and the history of anatomical science.
5.1.1 Structure Begets Function; Function Begets Structure
Histology, by the very nature of focusing on the microscopic elements of the tissues from which the human body is constructed, enables students to establish learning connections between gross anatomy and physiology. In this way, histology is often perceived as the integrating factor that clearly unites structure and function. The former informs the latter and vice versa.
5.1.2 Patterns and Differential Diagnoses
The study of histology requires students to develop skills in pattern recognition. Through investigation of specimens, they must evaluate what they see based upon the presence or absence of particular features. It should not be underestimated how much these skills mimic those practiced by physicians when formulating a differential diagnosis that is arrived at after evaluating a set of present or absent symptoms (patterns) observed in their patients.
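The presence-or-absence reasoning described above can be caricatured in a few lines of code. This is a minimal teaching sketch only: the tissue "signatures" below are simplified, made-up examples chosen for illustration, not a diagnostic reference.

```python
# Toy sketch of feature-based pattern recognition: candidate tissues are
# ranked by how many of their expected features were observed.
# The signatures are simplified, hypothetical teaching examples only.
SIGNATURES = {
    "hyaline cartilage": {"chondrocytes in lacunae", "glassy matrix"},
    "compact bone": {"osteons", "concentric lamellae", "central canals"},
    "skeletal muscle": {"striations", "peripheral nuclei"},
}

def rank_candidates(observed_features):
    """Return tissue names ordered by how many expected features match."""
    observed = set(observed_features)
    return sorted(
        SIGNATURES,
        key=lambda tissue: len(SIGNATURES[tissue] & observed),
        reverse=True,
    )

print(rank_candidates({"striations", "peripheral nuclei"})[0])
# prints "skeletal muscle"
```

The ranking step mirrors, in caricature, how a differential diagnosis weighs the features that are present against those that are absent before settling on the most likely candidate.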
5.1.3 Pathological Beginnings
Histology is important in establishing a foundational level of knowledge on which to build skills in the diagnosis of disease through histopathology. An understanding of normal histological structure and function is an essential step in the training of medical students to recognize and explain pathological changes in tissues. Additionally, through the study of histology, students begin to build their medical vocabulary. Clinicians communicate using the language of medical science, and the role that the basic sciences play in cultivating this in growing doctors is often forgotten.
Despite the valuable learning outcomes we pursue in teaching histology, it has, as you might expect, not always been central to the anatomical teachings of medicine. In fact, it wasn't until the late nineteenth century that the discipline of histology was accepted under the wing of its ancient sibling, gross anatomy, to be recognized as a branch of the anatomical sciences. This is because the growth and development of histology proceeded hand in hand through history with the development of the technology used to visualize it, from the earliest primitive convex lens to the latest in computerized virtual microscopy. This chapter offers an abridged, chronological insight into just some of the highlight events that have shaped the
development, growth, and teaching of histology as an established anatomical science within the field of medicine. It is by no means a comprehensive history.
5.1.4 Terminologies Throughout This Chapter
The literature teems with many different terms and their permutations used to describe traditional benchtop microscopy, computer-based virtual microscopy, and the technology that surrounds their use. Here, for consistency, we define the terms and some abbreviations used throughout this chapter to refer to the various tools and technologies used in histology education:
• Traditional microscope: a physical compound light microscope, with magnifying lenses.
• Traditional microscopy (TM): the act of observing glass-slide-mounted tissues using a traditional microscope.
• Digital histology (DH): the study of histological tissues using static tissue image files (micrographs) that are typically annotated and accessed using a digital device.
• Virtual slide: the digital product of scanning a glass-slide-mounted tissue using slide scanning technology. Virtual slides are large tiled image files (typically .tiff or a proprietary format based upon the scanner used, such as .svs).
• Virtual microscope: the software or application that runs on a digital device to simulate the use of a traditional microscope. A virtual microscope is used to view a virtual slide.
• Virtual microscopy (VM): the practice of observing virtual slides with a virtual microscope on a digital device. In pathology, VM is referred to as whole slide imaging (WSI) when used for routine diagnostic service.
• Technology-enhanced learning (TEL): learning resources provided through a digital device, previously referred to as computer-assisted learning (CAL) or e-learning.

5.2 The Past: A Brief History of Histology

5.2.1 A Technological Evolution

5.2.1.1 Optical Allusions: AD 65–1020
Everything that we now know and teach about the cellular world of histology and histopathology we owe to the physical properties of light as it passes through glass. It is something that plays a big part in our lives, and we take it for granted every day when we put in contact lenses, drive to work, take a selfie to update our Instagram, or watch a movie on TV. It is a phenomenon called refraction. Refraction occurs when a wave (in this case light) changes direction as it passes between one medium (like air) and another (for example, glass). It is a process that we have learned to manipulate by using highly refined convex glass lenses, and it is something that has fascinated humans throughout history.
The law of refraction (the ratio of the sines of the angles of incidence and refraction of a wave is constant when it passes between two given media) was not fully defined until 1621, when Willebrord Snellius gave Snell's law its name. However, from the writings of the Romans, we know that refraction and magnification were keenly observed by the earliest scientists. In AD 65, Roman philosopher Seneca the Younger (Lucius Annaeus Seneca), while pondering the nature of the rainbows he saw in the sky, wrote:

Every object much exceeds its natural size when seen through water. Letters, however small and dim, are comparatively large and distinct when seen through a glass globe filled with water. Apples floating in a glass vessel seem more beautiful than they are in reality.
– Seneca the Younger in Naturales Quaestiones, Book I, Ch VI (Clarke 1910)

A few years later, in AD 77, Pliny the Elder published his encyclopedic (37 books organized into 10 volumes) interpretation of the natural world. In it he described how a volcanic glass
called obsian (obsidian) could converge the sun's rays to create a device that generates enough heat to burn garments, or how crystal could be used by surgeons for cauterization:

And yet, we find that globular glass vessels, filled with water, when brought in contact with the rays of the sun, become heated to such a degree as to cause articles of clothing to ignite.
– Pliny the Elder in The Natural History, Book XXXVI (Bostock 1855a)

I find it stated by medical men that the very best cautery for the human body is a ball of crystal acted upon by the rays of the sun.
– Pliny the Elder in The Natural History, Book XXXVII (Bostock 1855b)

It is perhaps ironic that, according to romanticized historical lore, Pliny died during the infamous eruption of Mount Vesuvius that destroyed Pompeii in AD 79. His alleged cause of death: inhalation of toxic fumes emitted from the same volcanic lava from which obsian (obsidian) glass is born. Although realists claim he more likely succumbed to heart disease (Bigelow 1859).
Despite these astute observations of magnification and refraction, it is clear that their findings were not being attributed to any underlying physics. Thus, they remained as observations until around AD 100, when Greek mathematician and astronomer Claudius Ptolemaeus (Ptolemy) performed a series of experiments with a bronze plaque placed in a container of water. Using this rudimentary device, he was able to measure and formulate a model of refraction. His findings are recorded in his fifth book, Optics (Smith 1996). However, despite his work on refraction, Ptolemy appears never to have stumbled upon the magnifying effect.
With the decline of the Roman empire, it was not until almost one thousand years later that an Arabian scholar by the name of Alhazen (Ḥasan Ibn al-Haytham) began experimenting with glass lenses and refraction. Between AD 1011 and AD 1021, he penned a seven-volume tome entitled the Opticae Thesaurus Alhazeni Arabis (Kitāb al-Manāẓir or The Book of Optics), in which he became the first to appreciate how a convex lens could be used to produce a magnified image:

Accordingly, we maintain that the visual faculty [eye] is always deceived about things that are perceived through a transparent body [lens] different from air, and aside from [misperceptions] of their location, distance, colors, and light, [it is deceived about] their size and about the shapes of some of them, for things that are seen in water and through glass or transparent stones appear magnified.
– Alhazen in The Book of Optics, Book 7 (Smith 2010)
The discovery went largely unexploited for an additional 200 years.
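The law of refraction described earlier in this section (the constant ratio of the sines of the angles of incidence and refraction) is compact enough to compute directly. A minimal sketch follows, using illustrative refractive indices (roughly 1.00 for air and 1.52 for crown glass); the values are textbook approximations, not measurements.

```python
import math

def refraction_angle(theta_incidence_deg, n1, n2):
    """Return the angle of refraction in degrees via Snell's law:
    n1 * sin(theta1) = n2 * sin(theta2).

    n1 and n2 are the refractive indices of the first and second media.
    Raises ValueError for total internal reflection (no refracted ray).
    """
    s = n1 / n2 * math.sin(math.radians(theta_incidence_deg))
    if abs(s) > 1:
        raise ValueError("total internal reflection: no refracted ray")
    return math.degrees(math.asin(s))

# A ray passing from air (n ~ 1.00) into glass (n ~ 1.52) at 30 degrees
# bends towards the normal:
print(round(refraction_angle(30.0, 1.00, 1.52), 1))  # prints 19.2
```

It is this bending, exaggerated by the curved surface of a convex lens, that produces the magnified images the early observers described.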
5.2.1.2 The Spectacle Debacle: 1286
With a Renaissance era now in sight, English philosopher, lecturer, and Franciscan friar Roger Bacon was asked by Pope Clement IV to write a summary of his "major works". Bacon obligingly delivered an 840-page synopsis written entirely in Latin, entitled Opus Majus.1 In his observations on Optics, he frequently quoted Alhazen and also, significantly, went one step further by suggesting a clinical use for magnification:

If a man looks at letters or other small objects through the medium of a crystal or of glass or of some other transparent body placed above the letters, and it is the smaller part of a sphere whose convexity is towards the eye, and the eye is in the air, he will see the letters much better and they will appear larger to him…Therefore this instrument is useful to the aged and to those with weak eyes. For they can see a letter, no matter how small, sufficiently enlarged.
– Roger Bacon in Opus Majus (Burke 1962)
Bacon's opus was widely disseminated, and as a result, while he never actually invented the spectacles, his notions about this use for lenses likely paved the way for their eventual manufacture. The date and location of the invention of the spectacles remain somewhat of a mystery, with experts narrowing the event down to Pisa, Italy, sometime around 1286. However, despite an exhaustive archival study, the original inventor still remains lost to history (Rosen 1956a, b; Letocha 1986). This open-endedness is less than satisfying, and the internet, as a result, continues to fill the gap by perpetuating myths regarding the origins of the first spectacles. For example, despite historical evidence to the contrary, their invention is frequently attributed to one of two Italians: a monk, Alessandro della Spina, or a nobleman, Salvino d'Armati. Monastery archives regarding Spina after his death in 1313 suggest that he was not the inventor but, in fact, a good-hearted tinkerer who merely recreated the eyewear he had heard being talked about (Rosen 1956b):

Brother Alessandro della Spina, a modest and good man. Whatever he saw or heard had been made, he too knew how to make it. Spectacles were first made by someone else who was unwilling to share them. Spina made them, and shared them with a cheerful and willing heart.
– Chronicle of the Monastery of St. Catherine at Pisa (Bonaini 1845)

Armati's claim to the invention is based upon these simple words inscribed on his epitaph:

Here lies Salvino degli Armati son of Armato, of Florence, the inventor of spectacles. May God forgive his sins. He died anno Domini 1317.
– Epitaph of Salvino d'Armati (as recorded by del Migliore 1684)

However, in a dramatic turn of events, this was revealed to be an elaborate hoax perpetrated by Ferdinando Del Migliore much later in 1684. He recorded the inscription, seen only by him, in a burial register that was subsequently destroyed along with the imaginary epitaph during renovations of the Church of Santa Maria Maggiore (Rosen 1956b). The current epitaph bears the same statement but dates from 1841 and is located on the wall of a monument now hidden from view in one of the chapel's cloisters (Goes 2013).
The spectacle debacle can be summed up using the words of renowned physicist and optics expert Vasco Ronchi, who said:

Much has been written, ranging from the valuable to the worthless, about the invention of eyeglasses; but when it is all summed up, the fact remains that this world has found lenses on its nose without knowing whom to thank.
– Vasco Ronchi (1946)

1 For fun, see also Bacon's work in creating brazen (brass) head automatons that were able to predict the future, an invention that led some to think he was perhaps not a friar but rather a practitioner of the dark arts.
What is safe to say, however, is that with the arrival of the spectacles came eyeglass manufacturers from whose skilled hands the refined lenses used in telescopes, cameras, and, our friend, the microscope would be crafted.
5.2.1.3 The First Microscopes: 1590–1610
It could be argued that the first spectacles were also the first simple microscopes since they were devices that consisted of single convex glass lenses built specifically to magnify an image. Given the number of spectacle makers working with convex lenses in the late sixteenth and early seventeenth century, it seemed almost inevitable that someone would eventually combine multiple lenses and create the first compound microscope. Sure enough, in 1590, Dutch spectacle makers Zacharias Janssen and his father Hans Janssen are alleged to have done just that by building a tube that contained two lenses. The device would allow an object, when viewed at one end of the tube, to be magnified beyond the power of any existing simple lens magnifying glass. However, despite the popular belief (even a Google search for "Who invented the microscope?" at the time of this writing brings up the name Zacharias Janssen alongside his picture), the legitimacy of the Janssens' claims to this invention has not been without question. Discrepancies in historical dates (it has been reported that Zacharias Janssen would have been only about 5 years old in 1590) and witness testimonies have led some historians to believe that the Janssens' claims should be discredited (Zuidervaart 2010).
The Janssens were not the only ones experimenting with lenses at this time. Famed Italian scientist Galileo Galilei also created a similar device in 1610 by inverting an earlier invention of his, the tubulum opticum (telescope). The result was an instrument he described as the occhiale, which allowed him to see "flies as large as a sheep" (Kalderon 1983; Purtle 1973).
It was a colleague of Galileo, botanist Giovanni Faber, who is credited with naming the "microscope". He used the word to describe Galileo's discovery in a letter, dated April 13,
1625, written to Federico Cesi, the founder of the Italian science institute Accademia dei Lincei:

I only wish to say this more to your Excellency, that is, that you will glance only at what I have written concerning the new inventions of Signor Galileo; if I have not put everything, or if anything ought to be left unsaid, do as best you think. As I also mentioned his new occhiale to look at small things and call it microscope, let your Excellency see if you would like to add that, as the Lyceum gave to the first the name telescope, so they have wished to give a convenient name to this also, and rightly so, because they are the first in Rome who had one.
– Giovanni Faber, 1625 (Carpenter and Dallinger 1901)
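The advantage of stacking lenses, as the Janssens and Galileo did, is often summarized by the textbook first-order approximation that a compound microscope's overall magnification is the product of its objective and eyepiece powers. The sketch below uses modern example values (a 40x objective and 10x eyepiece), not the powers of any seventeenth-century instrument.

```python
def compound_magnification(objective_power, eyepiece_power):
    """First-order approximation: the total magnification of a compound
    microscope is the product of the objective and eyepiece powers."""
    return objective_power * eyepiece_power

# A modern 40x objective under a 10x eyepiece gives 400x overall,
# far beyond what a single convex lens of the period could offer:
print(compound_magnification(40, 10))  # prints 400
```

Multiplying magnifications in this way is what let a two-lens tube exceed "the power of any existing simple lens magnifying glass".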
5.2.2 Observing Life Through a Lens

5.2.2.1 Fleas: The 1600s
Neither the Janssens nor Galileo took advantage of their newly created technology to conduct scientific observations or publish any scientific discoveries. In fact, it took a number of years before any observational findings of this sort started to emerge in the literature. This early period of the seventeenth century saw interest in microscopes continue to grow, but perhaps this was fuelled more by a delight in marvelling at the magnified images they could reveal rather than the pursuit of scientific observation. Many of the instruments in use at this time were short wooden tubes with a simple convex lens eyepiece at one end and a plane glass viewer at the other. They were nicknamed "flea glasses" because, when the parasite was placed into the device, any willing observer would come eye to eye with a terrifying, hairy monster – thus demonstrating the magnifying capability of the microscope in dramatic fashion (Bell 1966). As a result, flea glasses were popular as a parlour game or party trick among the middle classes. The value of the microscope as a tool for scientific discovery had not yet been fully realized.

5.2.2.2 Dolphins: 1653
French chemist Pierre Borel (Petrus Borellius) may have been the first to use a microscope, similar to that of Galileo's, to make observations in the field of medicine and human anatomy (Singer
1915). His work Historiarium et Observationum Medicophysicarum, published in 1653, describes rather vividly his microscopic observations of what are now thought to be erythrocytes and clots in human blood:

Animals of the shape of whales or dolphins swim in the human blood as in a red ocean…these creatures, it may be supposed (since they themselves lack feet), were formed for the bodily use of more perfect animals within which they are themselves contained…If you would see them, take a sheep or ox liver, cut it in small portions and place in water, teasing and separating it with your hands, and you will see many such animals escaping from them.
– Pierre Borel in Historarium et Observationum Medicophysicarum (Singer 1915)

He was also the first person to hint at the existence of tissue histology:

The heart, kidneys, testicles, liver, lungs and other parenchymatous organs, you will find to be full of little structures (organula) and they are like sieves by means of which nature arranges the various substances according to the shape of the holes.
– Pierre Borel in Historarium et Observationum Medicophysicarum (Singer 1915)
Borel’s wonderfully elaborate descriptions may well have been the first microscopical observations of human cells and tissues, but they have been overshadowed in history by those made by the scientists whose observations followed his.
5.2.2.3 Capillaries: 1661
In 1661, Italian anatomist Marcello Malpighi documented perhaps the first major scientific breakthrough using the microscope. He placed the dried lungs of a frog under his microscope and observed tiny blood vessels: capillaries. His deliberate investigative work using the microscope was done to try and explain the missing connection between arteries and veins – a puzzle that had eluded the English physician William Harvey before his death in 1657. Harvey, famed for describing systemic circulation, was unsure of the mechanism that connected vessels and did not believe a connection existed. He wrote in his famous Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus/On the Motion of the Heart and Blood in Animals that blood could "permeate through pores in the flesh" to be absorbed by the veins (Harvey 1628).
Malpighi's discovery of capillaries demonstrated that blood flows entirely within a closed system:

I had believed that the body of the blood breaks into empty space, and is collected again by a gaping vessel and by the structure of the walls…But the dried lung of the frog made my belief dubious. This lung had, by chance, preserved the redness of the blood in (what afterwards proved to be) the smallest vessels, where by [with the help of] a more perfect lens, no more there met the eye the points forming the skin called Sagrino [dark surface spots], but vessels mingled annularly. And, so great is the divarication of these vessels as they go out, here from a vein, there from an artery, that order is no longer preserved, but a network appears made up of the prolongations of both vessels.
– Malpighi in De Pulmonibus (Young 1929)
Malpighi wrote about his discovery of capillaries in two letters to the eminent physiologist Alphonso Borelli entitled De Pulmonibus/About the Lungs (Young 1929). It was this research that demonstrated how a microscope could be used to observe whole tissues and something that would later earn him the title of “Founder of Modern Microscopical Anatomy and Histology”.
5.2.2.4 Cells: 1665
Englishman Robert Hooke went one step further than the Janssens and created a three-lens compound microscope to achieve even higher magnification than that possible in earlier two-lens designs. Hooke used his microscopes to make a number of, now famous, observations which he sketched and published in his book Micrographia (Hooke 1665). Most notable was his analysis of the structure of a piece of cork, within which he observed pores that he described as "cells", the origin of the term we use in histology today:

I took a good clear piece of cork, and with a pen-knife sharpened as keen as a razor, I cut a piece of it off, and thereby left the surface of it exceedingly smooth, then examining it very diligently with a microscope, me thought I could perceive it to appear a little porous…I with the same sharp pen-knife cut off from the former smooth surface an exceedingly thin piece of it…and casting the light on it with a deep plano-convex glass, I could exceedingly plainly perceive it to be all perforated and porous, much like a honey-comb…these pores, or cells, were not very deep, but consisted of a great many little boxes, separated out of one continued long pore, by certain diaphragms…
– Robert Hooke in Micrographia, Observation XVIII (1665)
5.2.2.5 Animalcules: 1675
Despite the existence of compound microscopes, fabric worker turned biologist Anthony van Leeuwenhoek continued to build and use simple convex lens microscopes with great success. He focused on mastering the art of lens-craft through grinding and polishing to build simple microscopes with lenses that could achieve magnification superior to those of his colleagues (van Zuylen 1981). In the late seventeenth century, he created his most recognizable invention, a small, brass, handheld device that contained a single high-quality lens. It was this device that brought the microscope to the attention of scientists far and wide when he used it to observe bacteria, yeast, blood cells, and the "animalcules" (tiny protozoa he saw in old rainwater) that he famously described in his 1677 letter (Leeuwenhoeck 1677):

In the year 1675 I discovered living creatures in rainwater which had stood but few days in a new earthen pot. This invited me to view this water with great attention…The first sort by me discovered in the said water, I observed at diverse times to consist of 5, 6, 7, or 8 clear globules, without being able to discern any film that held them together, or constrained them. When these animalcula or living atoms did move, they put forth two little horns, continually moving themselves.
– Antony van Leeuwenhoeck (1677)
The idea that life might be made up of tiny components had not really been conceived of until this time. This, in addition to the fact that these tiny components could now be observed with relative ease using microscopy, undoubtedly led to the birth of the microscope as a serious scientific instrument rather than a Sunday parlour game.
5.2.3 The Rise of Histology

5.2.3.1 Müller Mulls It Over: 1838
During the eighteenth century, there were leaps and bounds made in technology, lens-craft, and microscopy. However, despite the intriguing work of the early microscopists who used these inventions in their research and study, it took
almost 150 years before histology began to be taken seriously by the medical profession. Up until this time, some of the finest-quality microscopes being produced in Europe in the nineteenth century were primarily being used, not by clinicians for diagnosing disease, but instead by meat inspectors in the pork industry who were on the hunt for the pesky parasite Trichinella spiralis, which causes trichinosis if ingested by humans.
It wasn't until German physiologist Johannes Peter Müller (1801–1858) began to look at the microscope as a means to enhance medical thinking and solve physiological problems that the practicing physicians of the day began to look up and take note. His book On the Nature and Structural Characteristics of Cancer and of those Morbid Growths which May be Confounded with It (Müller 1838) paved the way for the rise of histology and histopathology in medicine and medical curricula. In his writing, he stressed the importance of using the microscope to investigate and analyse pathological tissues, and he argued that disease is connected to the breakdown in structure, function, and development of its normal cellular and tissue characteristics. Characteristics, he emphasized, that can only be observed accurately within these "morbid growths" through the lens of a microscope.

The minute microscopic elements of morbid growths are, in addition to capillary vessels; fibers, granules, cells both with and without nuclei, caudate or spindle-shaped bodies and vessels…The Cell is by far the most frequent element of morbid growths. Thus, it exists in sarcoma cellulare, in enchondroma, in carcinoma simplex, reticulare, and alveolare. In many growths this cellular texture is so coarse as to be evident by a very low magnifying power, or even to be distinguishable by the naked eye; but, generally, the cells, unless magnified 400 to 500 times…look like granules.
– Johannes Müller (1838)
Medical doctors of the period had little appreciation for the potential of microscopic anatomy as a means of furthering our understanding of the human body and the disease mechanisms that affect it:
I cannot become a friend of these new activities in science. They all stem from the attitude that all we have seen with clear eyes has no further value…This is the chant of those who are only peeping through the microscope. Microscope – Kaleidoscope.
– leading Berlin surgeon Professor J. J. Dieffenbach in a letter to his colleague Professor Stroymeyer of Munich (Kisch 1954)
Müller’s role in medical education saw him teach and mentor a number of other well-known scientists in the field of anatomy and pathology, including Rudolf Virchow (later dubbed “the father of modern pathology” by his peers); Theodor Schwann (who used microscopy to identify striated muscle in the upper one-third of the oesophagus and, of course, the eponymous nerve myelinating cells); and Jacob Henle, whose work with the microscope as a teacher not only led to his own anatomical discoveries but also introduced histology into the medical curriculum.
5.2.3.2 Henle's Curriculum: 1841
Müller had a doctoral student by the name of Jacob Henle who, flying in the face of the hostile viewpoints of eminent surgeons and physicians of the time, rallied around his mentor and embraced the study of anatomy using the microscope. Henle's descriptions of histology are today deeply woven into the very basics of our current histological teaching. For example, in his article About the Distribution of Epithelium in the Human Body (Henle 1838), he describes the location of epithelial cells and classifies them as cylindrical (columnar), ciliated, and cobblestone (squamous):

I have termed cobblestone epithelium the type of flat epithelium that, as the epidermis, consists of more or less flat, round, or polygonal cells that contain a nucleus which, in turn, shows quite consistently a nucleus [nucleolus] in its center.
– Jacob Henle (1838)
Ultimately, it was his book Allgemeine Anatomie (Henle 1841) that emphasized the importance of the cellular and tissue level components of the body in establishing form and function. Thus, in 1841, microscopic anatomy
5 From Scope to Screen: The Evolution of Histology Education
(histology) was born as a branch of “general anatomy”. It was also around this time that he introduced the idea of microscopic anatomy to the medical curriculum at The University of Heidelberg, where he taught anatomy and pathology classes. With a class size of sixty, Henle encouraged his students to investigate glass slides of tissue preparations using microscopes. His belief was that this would help his students learn and think independently. By introducing this technique into his curriculum, he became one of the first to utilize the microscope as a significant pedagogical tool for the teaching of histology in medical education (Merkel 1891; Tuchman 1993). His hands-on methods engaged and mesmerized medical students. Albert Kölliker, professor extraordinary of physiology and comparative anatomy at the University of Würzburg, Germany, reflected enthusiastically on his time as a student of Henle’s in his memoir Erinnerungen aus meinem Leben/Memories from my Life: I still see the narrow, long hallway in the university building next to the auditorium where Henle, for lack of another room for demonstrations, showed us and explained the simplest things so awe inspiring in their novelty, with scarcely five or six microscopes: epithelia, skin, scales, cilia cells, blood corpuscles, pus cells, semen, then teased-out preparations from muscles, ligaments, nerves, sections from cartilage, cuts of bones, etc. – Albert Kölliker (1899)
5.2.4 The Age of Machines and Chemistry 5.2.4.1 The Slice Is Right: 1842 With improvements in the quality of microscopes and their increasing use within the scientific community, focus began to shift away from the observational device itself to centre instead on the quality of the specimens being observed. Tissue preparations at this time were poorly fixed, often with alcohol, and specimens were cut by hand using a razor blade. The results were tissue sections that were often so thick that even the finest-quality microscope of the time would have
trouble resolving an informative image. As a result, scientists began to employ other techniques designed to make specimens that were even thinner. Czech anatomist Jan Purkyně and German naturalist Christian Ehrenberg were both aware that the best images come from the thinnest specimens. So, independently, they set about building a mechanical device that would compress the nervous tissue samples they were studying between glass slides to make them thin enough to be viewed under the microscope (Chvátal 2017). Purkyně published technical drawings of his compressor (Purkyně 1834) and eventually went on to use the invention to describe the structure of the axons of neurons in 1858: During my microscopy work I recognized early the necessity to expand transparent soft matters, especially animal, by gentle compression…Therefore, I invented and manufactured a convenient device, always usable, in which all grades of pressure are given into our power. In fresh nerves, longitudinally mounted and compressed by the compressor under water, expelled from the sheaths themselves there appeared transparent medium lines, which I later discovered were dense and I named them nervous axial cylinders (axons). – Jan Purkyně (Chvátal 2017)
Purkyně was acutely aware that his method of essentially crushing tissues could create artefacts, and so he, along with other scientists (including Benedikt Stilling, Pieter Harting, and Wilhelm His the Elder), continued to work on methods that would prevent this. Their independent contributions between 1842 and 1868 have all been acknowledged in the creation and development of the modern microtome – a machine used for cutting thin, serial tissue sections.
5.2.4.2 A Formalin Solution: 1858 Even with the technology to view slides and the equipment to create consistently thin sections, the tissues themselves were still undergoing putrefaction. Quality specimens are just as important in bringing microscopy to the medical, scientific, and educational arena as the technology being used to image them. For much of Henle’s tenure, fixation techniques for teaching
J. A. Chapman et al.
the histology of tissues were poor or non-existent. However, this was about to change. In 1858, a Russian chemist by the name of Alexander Butlerov was trying to hydrolyze methylene acetate in his laboratory when he began to observe a distinctive smell (Leicester 1940; Seymour and Kauffman 1992). With this smell, he had accidentally discovered the aldehyde HCHO (also known as formaldehyde). By 1868, the methods for the production of an aqueous formaldehyde solution were developed (Hofmann 1867), which eventually allowed formaldehyde to be produced commercially by the end of the nineteenth century. In 1893, physician Ferdinand Blum was testing the antiseptic properties of formaldehyde. He diluted it 1:9 with water and began adding it to various preparations of bacteria to see how they behaved (Blum 1893a). It was during these experiments that he began to notice that the skin on his fingertips that had come into contact with the solution had begun to harden (Blum 1893b): This slow and certain disinfectant appears to rest on a peculiar transformation of the organic material by which the tissues are changed from their soft state into a harder and more resistant modification. I first made this observation on my own fingers which, upon working with formaldehyde, had completely hardened the epidermis – Ferdinand Blum (1893b)
With his lapse in occupational health and safety came the discovery of formaldehyde as a tissue fixative (Blum 1893b). It quickly became the standard fixative for preserving and hardening tissues so that they could be sectioned more easily using the microtome. If you have ever wondered why many laboratories today use formaldehyde in a 4% solution for fixation purposes, it is not necessarily the result of any specific scientific evidence per se but rather that this was the working concentration used by Dr. Blum when he accidentally fixed his own fingers.
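The arithmetic behind that working concentration is straightforward. The sketch below assumes a commercial formaldehyde stock of roughly 40% w/v (the approximate strength of saturated formalin; the exact historical figure is uncertain), and the function name is illustrative rather than taken from any source:

```python
# Hedged illustration: why a 1:9 dilution of commercial formaldehyde
# stock yields roughly the 4% working solution still used today.
# The 40% stock concentration is an assumption (approximate strength
# of saturated "formalin"); the 1:9 ratio is Blum's, per the text.

def diluted_concentration(stock_percent, parts_stock, parts_diluent):
    """Concentration after mixing parts_stock of stock with parts_diluent of water."""
    return stock_percent * parts_stock / (parts_stock + parts_diluent)

working = diluted_concentration(40.0, 1, 9)  # Blum's 1:9 dilution
print(f"{working:.1f}% formaldehyde")  # -> 4.0% formaldehyde
```

One part of ~40% stock in ten total parts gives ~4%, which is why "10% formalin" and "4% formaldehyde" describe essentially the same working solution.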
5.2.4.3 The Future’s Bright, the Future’s Purple: 1863 The other incredibly important factor that revolutionized histology was tissue staining. Fixed
and fresh tissue specimens, when sliced thinly, are transparent. Dyes provide contrast to the tissues, enabling visualization and detailed study of their features. The most commonly used stain in histology today is the purple basic dye, hematoxylin. It is frequently used alongside its acidic and pink counterstain, eosin. Combined, they are known as H&E (hematoxylin and eosin), and their story is an interesting one. Hematoxylin is a pigment extracted from the dark-red logwood tree Haematoxylum campechianum (literally “blood-wood from the Campeche region” in Latin). Its origins as a stain can be traced back to the indigenous Mayans of the region who used it to dye fabric. However, it was not introduced to the rest of the world until Spanish conquistadores observed its use as a fabric dye during their colonization of the Americas while exploring the Yucatan Peninsula, Mexico, in 1502 (Ortiz-Hidalgo and Pina-Oviedo 2019; Kahr et al. 1998). They noted its potential as a garment dye and began harvesting the logwood to transport across the Atlantic Ocean back to Europe. Its subsequent use in the fashion industry meant that the monetary value of the logwood skyrocketed. The rarity of this now high-demand commodity made the Spanish galleons that carried the tree prime targets for plundering by pirates (Ortiz-Hidalgo and Pina-Oviedo 2019). There are many tales of piracy on the high seas of the Atlantic surrounding the logwood, and so it is perhaps not surprising that hematoxylin’s debut in the world of histology did not come about until over 300 years later. During the mid-to-late nineteenth century, at a time when histology was becoming fashionable itself, German anatomist Heinrich Wilhelm Gottfried von Waldeyer began using the aqueous logwood extract to try to visualize the axons of neurons and muscle fibres (von Waldeyer 1863).
His attempts were likely influenced by the botanists of the era, among whom Thomas Knight was perhaps the first to describe the effect of the logwood-derived dye in demonstrating the vessels in his potatoes:
I waited till the tubers were about half grown; and I then commenced my experiment by carefully intersecting, with a sharp knife, the runners which connect the tubers with the parent plant, and immersing each end of the runners, thus intersected, in a decoction of logwood. At the end of twenty-four hours, I examined the state of the experiment; and I found that the decoction had passed along the runners in each direction. – Thomas Knight (1803)
Waldeyer’s attempts to transfer the technique to animal tissues were only moderately successful. This was probably because he had failed to notice that the leaders in the fashion industry at the time were actually using the dye in its oxidized form, hematein, and with a mordant – an inorganic salt that combines with the dye to fix it within the material. It was in 1865 that Franz Böhmer (1865) combined the hematein with alum as the mordant to debut an effective “hematoxylin” tissue stain. Eosin, a member of the aniline family of dyes, didn’t make it onto the histological scene until a few years later. In the 1870s, scientists began experimenting more with counterstaining – the process of adding a second dye to the tissue whose colour contrasts with the principal stain. The discovery and manufacture of eosin is attributed to Adolf von Baeyer; his pupil (and future winner of the 1902 Nobel Prize in Chemistry for his work on sugar and purine synthesis) Emil Fischer; and his friend Heinrich Caro, in 1875 (Baeyer 1875). The dye is named after the pink morning sky, a nod to Homer’s “rosy-fingered dawn” description of the Greek goddess Eos and, more than likely, a pun based upon the colour it stains your fingers while working with the chemical. The following year, experiments using chick embryos showed that eosin stained the hemoglobin-rich cytoplasm of their erythrocytes but not their nuclei, and it was suggested that the dye be used as a counterstain to the nuclear dye hematoxylin (Wissozky 1877). Thus, the famed duo of hematoxylin and eosin (H&E), seen in the microscopes of medical students and histopathologists worldwide, made its histological debut.
5.2.5 A Digital Era 5.2.5.1 Computers: 1821–1951 The history of computing far predates the history of the modern hardware “computers” that are so tightly woven into our modern-day lives. It is also an area of history beyond the scope of this writing, yet without the rise of these machines, virtual microscopy (VM) and the ways in which we teach histology in modern curricula would not exist. The development of the computer is a fascinating subject, and I encourage any readers interested in this to explore the field (the references provided here are simply a starting point). However, if there was a single point in time that could symbolize the birth of the modern autonomous computer, it would be this story… It was 1821, a period at the height of the Industrial Revolution in England, and two Victorian mathematicians, Charles Babbage and John Herschel, were both poring over two sets of handwritten astronomical arithmetic that had been performed by independent experts. The tables were so strewn with mathematical errors and discrepancies that Babbage frustratedly exclaimed, “I wish to God these calculations had been executed by steam!” only for Herschel to reply, “It is quite possible” (Collier 1970). The next year, Babbage created a working model of the difference engine, an automated machine that was capable of tabulating polynomial functions. From here the developmental trajectory of the modern computer was rapid: just 30 years from experimental theory to the first commercial sale of a computer. This development was influenced by many along the way. Of note is Alan Turing who, with his influential paper On Computable Numbers, with an Application to the Entscheidungsproblem (Turing 1937), invented a theoretical machine that operated based on a program of instructions stored in the form of symbols (memory) that could read and write further symbols – essentially inventing the principle of the modern computer.
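The tabulation trick behind Babbage's difference engine can be sketched briefly: because the nth finite difference of a degree-n polynomial is constant, successive values of the polynomial can be produced using additions alone, with no multiplication. The polynomial and function name below are illustrative choices, not from any historical source:

```python
# Sketch of the difference-engine principle: tabulate a polynomial
# using only repeated additions of its finite differences.
# Example polynomial (an assumption for illustration): f(x) = x^2 + x + 1.

def tabulate(initial_diffs, steps):
    """initial_diffs = [f(0), delta_f(0), delta2_f(0), ...]; returns f(0..steps)."""
    diffs = list(initial_diffs)
    values = [diffs[0]]
    for _ in range(steps):
        # fold each higher-order difference into the one below it
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
        values.append(diffs[0])
    return values

# f(0)=1, first difference f(1)-f(0)=2, second difference constant 2
print(tabulate([1, 2, 2], 6))  # -> [1, 3, 7, 13, 21, 31, 43]
```

Each new table entry costs only a handful of additions, which is exactly the kind of repetitive, error-free work Babbage wanted "executed by steam".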
Turing’s work contributed to the development of the first set of programmable computers known as Colossus. Built
between 1943 and 1945, the machines helped the Allies decipher coded military intelligence created by the German Lorenz cipher machines during World War II (Flowers 1983). At around the same time, the first digital computers were also being created, like the Atanasoff-Berry Computer (ABC) developed at Iowa State College between 1937 and 1942 (Mollenhoff 1988) and the more complex Electronic Numerical Integrator and Computer (ENIAC) that was first booted up at the University of Pennsylvania in 1945 to assist the US Army in calculating firing tables for new weapons (Mauchly 1997). It was the inventors of the ENIAC (J. Presper Eckert and John Mauchly) who would, in 1951, go on to produce the UNIversal Automatic Computer I (UNIVAC I), the world’s first commercially produced computer, designed for business and administration (Stern 1979), which paved the way for the personal desktop computers that we use today.
5.2.5.2 Binocular to Binary: 1985
It was inevitable that the worlds of microscopy and computing would eventually collide to allow scientists to digitally archive the images that they glimpsed beneath their magnifying lenses. The first methods for scanning glass-slide-mounted specimens utilized a microscope equipped with a precision motorized stage and a video camera mounted to the microscope. As the stage was moved, the camera captured multiple microscopic fields of view, known as “digital image tiles”, and the computer was subsequently used to call on these tiles to create a digital montage (Silage and Gil 1985).
In a world where it is conservatively estimated that more than 200,000,000 paraffin blocks and glass slides of surgical pathology specimens are stored in laboratories across the United States (Burgess 2004), the physical prospects for slide digitization were clear. However, at this time, the large number of tiles needed to reconstruct even a small area of a slide at 40× magnification meant that file sizes far outweighed the processing capabilities of the regular desktop computers that were available.
It wouldn’t be until the 1990s that the desktop computer would improve enough to make the digitization of entire glass slides functionally viable, and when it did, the arrival of the “virtual slide” looked set to revolutionize the way in which histological specimens were analysed, stored, and retrieved – all via a computer’s hard drive. But this was only the tip of the iceberg; the histopathology community did not realize it yet, but digitized images would have even farther-reaching implications for education and collaboration after March 1989. For this was the date when Englishman Timothy John Berners-Lee proposed to his boss at CERN the creation of an information management system that used something called “hypertext” (Berners-Lee 1989). His proposal was described at the time as “vague but exciting…”; today, it is more commonly known as the World Wide Web.
5.2.5.3 Scanners, Scopes, and Annotations: The 1990s
The first attempt to share virtual histology images took place in the 1960s, when images of blood smears and urine specimens were shared between Massachusetts General Hospital and the Logan Airport Medical Station in Boston, MA, for interpretation. They were transmitted in real time using “television microscopy”, and as such, this became the first attempt at practicing “telepathology” (Weinstein 1986).
Now, with the arrival of the internet, it was clear that the sharing and streaming of virtual slides across the World Wide Web would be much easier and could dramatically change the way histology is taught by anatomists, learnt by medical students, and diagnosed by pathologists. During the 1990s, while desktop computer processor speeds were increasing, RAM was getting bigger, and machines were dialling up the internet, slide scanning technology was also improving. Digital tile scanning technologies like the Bacus Laboratories, Inc. Slide Scanner (BLISS) and the MicroBrightField Virtual Slice System (VSS) had entered the commercial market. By the turn of the century, new methods of digitization had been developed, like the Aperio ScanScope T2 linear scanner, which acquired digital information in stripes across the whole slide rather than tiles, and the DMetrix DX-40, an array microscope for the rapid simultaneous capture of whole slide images using multiple microlenses (Weinstein et al. 2004). By 2006, there were many different slide scanning technologies available commercially (Rojo et al. 2006).
Alongside these products grew a range of software that utilized the scanned virtual slides to provide users with a realistic emulation of the traditional microscope, known as a virtual microscope. Together, a virtual microscope loaded with a virtual slide could be used to perform VM. In 1997, faculty at the University of Iowa recognized the value that virtual slides could have for education in histology and pathology and were the first to digitize their entire teaching collection of over 1000 histology and pathology slides, eventually making them publicly available for anyone in the world to use in their classes (Dee et al. 2003). The collection is still available today (see Part 2).
The final major development in the field of VM was the capacity to allow users to integrate annotations into their virtual slides. This element provided educators with the ability to fully mark up virtual slides with helpful arrows, shapes, and text overlays so that students could work independently when studying histology. Taken together, the technological trifecta of converting glass slides to virtual slides by scanning, adding annotations, and loading them onto a server for access using virtual microscope technology via an internet connection looked set to change the landscape of telepathology and histology education forever. VM would go on to become the primary method by which medical students engage with histological tissues (see Sect. 5.3).
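The tile-and-montage principle described above (after Silage and Gil 1985) can be sketched in a few lines. This is a simplified illustration, not the original method: real scanners must also register overlapping tiles and correct focus and illumination, and the function name, grid shape, and tile size here are assumptions for the sketch:

```python
# Minimal sketch: reassemble stage-captured "digital image tiles"
# into a single montage. Tiles are plain lists of pixel rows keyed
# by their (row, col) position on the motorized stage's capture grid.

def assemble_montage(tiles, tile_h, tile_w):
    """Place equally sized tiles keyed by (row, col) into one 2D montage."""
    rows = 1 + max(r for r, _ in tiles)
    cols = 1 + max(c for _, c in tiles)
    montage = [[0] * (cols * tile_w) for _ in range(rows * tile_h)]
    for (r, c), tile in tiles.items():
        for dy in range(tile_h):
            for dx in range(tile_w):
                montage[r * tile_h + dy][c * tile_w + dx] = tile[dy][dx]
    return montage

# Synthetic 2x3 grid of 4x4 tiles, each filled with its own grey level
tiles = {(r, c): [[40 * (3 * r + c)] * 4 for _ in range(4)]
         for r in range(2) for c in range(3)}
montage = assemble_montage(tiles, 4, 4)
print(len(montage), len(montage[0]))  # -> 8 12
```

The file-size problem mentioned above follows directly from this scheme: the montage grows with the product of grid dimensions and tile area, so a whole slide at 40× implies thousands of tiles and, for a 1980s desktop, an unmanageable image.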
5.2.5.4 2020: Has Video Killed the Radio Star? As Jacob Henle wandered around his classroom at The University of Heidelberg making adjustments to the microscope of the young Albert Kölliker so that he could better see the newly discovered erythrocytes on the slide before
him – he could not possibly have predicted how those same cells would be studied by the twenty-first-century students of anatomy and pathology. For in the modern medical histology classroom, there is not a microscope in sight. The introduction of VM now allows classes composed of hundreds of medical students to simultaneously access and individually manipulate the same slide while searching for features that have been highlighted and labelled by their instructors. They see cells and tissues not through the refined convex lens of a microscope nor even the monitor of a computer, but through the touchscreen of a tablet or smartphone. In fact, today’s students of histology are often not even in a classroom. They are at home, riding the bus, or in an airplane seat 30,000 ft above the once pirate-infested Atlantic Ocean whose trade routes were used to transport the very logwood whose purple hues they admire digitally from the comfort of 34E. Meanwhile, physicians in remote and rural parts of the planet consult with world-expert pathologists in big-city hospitals to present and review rare patient cases using digitized histopathology specimens. As technology continues to surge forwards at a rapid pace, leaving history in its wake, anatomists can only briefly pause and reflect on how it has changed our classrooms and shaped our curricula over the years, hopefully for the better and without regret, because, like all of the history that precedes us, we can’t rewind – we’ve gone too far.
5.3 The Present: Current State of Histology Education
5.3.1 Complexities in Histology Education 5.3.1.1 Becoming a Jedi of Visual Literacy Consider how many different organs and tissues make up a body and how thin each tissue section must be for microscopic observation. Now consider how many tissue slides (glass or virtual) would be required to represent the entire human body. Speculate about the difficulties in observing and interpreting a single tissue section under
a traditional or virtual microscope and attempting to identify its anatomic origin. Recall that these sections are 2D representations of tissues that must be spatially reconstructed in the mind to “see” the 3D object. Now add to that the fact that most of these tissues would also be stained with H&E, rendering them various hues of pink and purple. With all of this in mind, it is easy to see just how daunting a task it is for novice learners of histology to acquire visual literacy (the ability to recognize patterns) in histology. From the pedagogical perspective, one of the main objectives in histology courses is to ensure that students practice, develop, and acquire the skills necessary to become visually literate. As a result, histology courses commonly include a laboratory component in which students have the opportunity to make the appropriate number of visual observations to learn the patterns of various tissues. A typical traditional histology laboratory provides microscopes, a laboratory manual, and a set of boxes containing anywhere between 100 and 300 tissue sections. During designated histology lab hours, students spend time viewing glass tissue slides under traditional microscopes either individually or in a group. Commonly, instructors are available to direct the lab activities, answer questions, or confirm tissue identity. Traditional histology laboratories thus require designated space, materials, and equipment, and they demand adequate time commitment from students to acquire visual literacy. The relatively recent development of technologies described in Sect. 5.2.5, combined with a wave of curricular changes in the professional health sciences programs at a global scale, has significantly influenced changes in histology education.
5.3.1.2 The Force Shaping Histology Education Change in medical education has been in motion since the Flexner report, which drove the standardization of medical schools with a curricular emphasis on scientific discovery (Flexner 1910). A hundred years after the Flexner report, a lapse in the original report was highlighted, stating that
“the physician as scientist has taken precedence over the physician as healer” (Duffy 2011). The American Medical Association (Chicago, IL) began the Flexner initiative to address this shortcoming in medical education and called for reform to emphasize the professional formation of future physicians and the development of specific core competencies (Inui 2003; Cooke et al. 2010; Duffy 2011). Perhaps anticipating the upcoming shift towards competency-focused education, and buoyed by the exponential increase in knowledge gained from scientific discoveries, the early 1990s marked the beginning of a widespread trend towards multidisciplinary integrated curricula (Schmidt 1998; Ling et al. 2008). This global trend led to a shift in medical education from a primarily discipline-focused and predominantly didactic lecture-based pedagogy to one that is now highly integrated and presented through case/problem-based delivery. These changes coincided with a decline in the time allocated to the basic sciences (Cotter 2001; Gartner 2003; Bloodgood and Ogilvie 2006; Sugand et al. 2010; Bergman et al. 2011; Findlater et al. 2012). Gross anatomy teaching in the United States, for example, has seen a dramatic decline in teaching hours from over 500 hours in the 1900s (Blake 1980), to 165 hours in 1997 (Leung et al. 2006), to an average of 147 hours (range 33–249 hours) in the most recent report (McBride and Drake 2018). There have been similarly reported reductions in the United Kingdom (Heylings 2002; Turney 2007; Gillingwater 2008; Findlater et al. 2012; Lewis et al. 2016) and Australia (Parker 2002; Craig et al. 2010), leading some to raise concerns about the ability of new graduates to conduct safe practice (Prince et al. 2005; Waterston and Stewart 2005; Bergman et al. 2008, 2014; Gupta et al. 2008) – concerns raised even by the students themselves (Fitzgerald et al. 2008; Farey et al. 2014). The teaching of histology has likewise not been spared from this reduction.
The earliest documentation of histology curricular hours in medical education reports an average of over 150 contact hours (Berry et al. 1956). By 2002, the contact hours for histology had fallen by nearly half, to 79 hours, with approximately 50% of the
time being utilized for histology laboratory (Drake et al. 2002; Gartner 2003). Histology contact hours between 2002 and 2014 seemed to have plateaued with only a slight reduction being reported during this time frame (Bloodgood and Ogilvie 2006; Drake et al. 2009, 2014). However, the most recent report in 2018 revealed yet another sharp decline in histology curricular hours to 51 total hours on average, with approximately 50% of the time devoted to histology laboratory (McBride and Drake 2018). Between 2009 and 2018, the number of medical schools reporting that histology is taught in a fully or partially integrated curriculum rather than in a stand-alone course increased from 50% to 98% of the responding schools (Drake et al. 2009; McBride and Drake 2018), demonstrating the correlation between the integrated curriculum and the reduction in histology curricular hours (Moxham et al. 2017).
5.3.2 A New Hope 5.3.2.1 Virtual Microscopy (VM) Awakens One factor that has enabled the drastic decline in educational contact hours for histology is the growth of educational technology. This has led to the development of a variety of computer-aided instructional resources, including the most consequential of them all: VM, which debuted in the early 1990s (Heidger Jr et al. 2002). As mentioned in Part 1, VM is the act of using a computer program designed to mimic the use of a traditional microscope (a virtual microscope) to observe a digitized tissue slide (virtual slide). VM works in a way similar to Google Maps (Google Inc., Mountain View, CA), whereby the learner/user can zoom in or out and move to different areas of the virtual slide to make observations and recognize patterns. Since its emergence, VM has transformed the way histology is taught. VM labs have eliminated the requirement for traditional laboratory space, freeing up in-class curriculum time and removing the need for students to access the lab, the bulky traditional microscopes, and the fragile boxes of glass tissue slides stored there.
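The Google-Maps-like access pattern can be illustrated with a little tile arithmetic. A virtual slide is stored as a pyramid of fixed-size tiles, and the viewer fetches only the tiles intersecting the current viewport at the current zoom level, so panning and zooming cost the same however large the whole slide is. The 256-pixel tile size and function name below are illustrative assumptions, not any specific viewer's API:

```python
# Sketch of viewport-to-tile arithmetic in a tiled slide viewer.
# Coordinates are in pixels at the currently displayed zoom level.

TILE = 256  # tile edge length in pixels; a common but assumed choice

def visible_tiles(view_x, view_y, view_w, view_h):
    """Return (col, row) indices of all tiles intersecting the viewport."""
    first_col, last_col = view_x // TILE, (view_x + view_w - 1) // TILE
    first_row, last_row = view_y // TILE, (view_y + view_h - 1) // TILE
    return [(c, r)
            for r in range(first_row, last_row + 1)
            for c in range(first_col, last_col + 1)]

# An 800x600 viewport at the slide origin needs only 12 tiles,
# no matter how many gigapixels the whole slide contains.
tiles = visible_tiles(0, 0, 800, 600)
print(len(tiles))  # -> 12
```

This is why a student's tablet can browse a multi-gigapixel slide over an ordinary connection: the server streams a handful of small tiles per interaction rather than the whole image.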
As a result, the practice of VM was reported to be more economical than traditional microscopy (TM) in the long run (Krippendorf and Lough 2005; Bloodgood and Ogilvie 2006; Coleman 2009; Paulsen et al. 2010; Mione et al. 2013). Instructors and students found VM to be a more efficient way to teach and learn the visual aspects of the tissues because all the students in the class could view the same tissue simultaneously and receive specific instructions on the virtual tissue slide rather than taking turns to look through a single traditional microscope to make observations. The ability to perform VM via the internet at any time and in any location was also identified as a significant advantage of VM over TM laboratories. Perhaps it is a natural progression to shift away from the use of TM in laboratory classes towards the exclusive use of VM (Fig. 5.1). Although the exact number exclusively using TM was not presented, 86% of institutions used TM to teach histology in 2002, although many of those responding did state that “digital images” were used to supplement TM delivery (Drake et al. 2002). These numbers had dropped to 20% by 2018, with only 10% of responding institutions using TM exclusively for histology lab and another 10% using a blended approach of both TM and VM (McBride and Drake 2018). Interestingly, 13% of institutions didn’t use microscopy at all but instead used digital histology (DH; static digital images) to teach histology.
5.3.2.2 The Dark Side of Virtual Microscopy There have been some concerns and even outrage over the increasing usage of VM in histology education. Early arguments for the retention of TM seemed largely based upon a romantic historical attachment held by students or those who previously trained with TM (Farah and Maybury 2009; Pratt 2009). Others argued that learning histology online results in a loss of valuable skills that are developed through “slide scanning”, which are necessary for work in pathology (Cotter 2001; Lowe 2018). This is, of course, refuted since VM simply provides glass slides in a simulated form (Mione et al. 2013, 2016) and
Fig. 5.1 Virtual microscopy with annotations: An example of a virtual slide displayed within virtual microscope software (Biolucida) with interactive annotations created by the instructor (Captured from The University of Iowa Virtual Slidebox)
students are often required to perform the same “slide scanning” to find regions of interest (Cotter 2001; Schutte and Braun 2009), which is actually one of its advantages over DH. Furthermore, in 2017, the U.S. Food and Drug Administration (FDA) approved whole slide imaging (WSI; the term used by pathologists for VM) as a routine diagnostic service (Boyce 2017). With regard to the validity of WSI in diagnostic settings, a recent study compared the diagnostic outcomes of 19 pathologists who used glass slides and TM versus digitized slides and VM for 5845 specimens; no differences were found in primary anatomic pathology diagnoses (Borowsky et al. 2020). It seems VM will, in fact, provide valuable skills that prepare students for the digitized workflow of the future pathologist (Medical Futurist 2018). Other concerns with the introduction of VM included the lost opportunities for students to learn how to use traditional microscopes and to handle the physical tissue sections mounted on glass slides, as well as the limited depth and variety of the digital tissue images available in VM. Additionally, in bypassing contact with any physical slides/tissues, it is easy for students to forget the amount of time and level of skill required to obtain tissue specimens and to section, stain, and mount them for digitization, since the digitized images appear magically in front of the user at the click of a button. Appreciating this process is particularly important for physicians who “send specimens to the lab” on a regular basis. Xu (2013) also raised concerns about the use of VM, questioning whether training students with “perfect slides” dampened their curiosity, thereby affecting motivation and reducing learning. Additionally, digital devices and the accompanying bells and whistles could introduce a risk of distraction (Xu 2013), which, interestingly, is supported somewhat through self-reporting by students (Zureick et al. 2018). There has also been some scepticism over the educational effectiveness of VM; however, numerous studies over the years have shown that the introduction of VM to a histology course is, at a minimum, benign, resulting in equivalent learning outcomes (Blake et al. 2003; Krippendorf and Lough 2005; Goldberg and Dintzis 2007; Scoville
5 From Scope to Screen: The Evolution of Histology Education
and Buskirk 2007; Braun and Kearns 2008; Pinder et al. 2008; Dee 2009; Husmann et al. 2009; Paulsen et al. 2010; Barbeau et al. 2013; Mione et al. 2013; Brown et al. 2016) or is even beneficial to students (McReady and Jham 2013; Mione et al. 2016; Wilson et al. 2016; Felszeghy et al. 2017; Fernandes et al. 2018; Nauhria and Ramdass 2019), when compared with TM. As previously mentioned, even if VM does produce equivalent results at best, there are many added benefits that are truly valued by students, such as increased ease of use, increased flexibility in learning opportunities (spatial and temporal), and the increase in effective communication (peer-peer and student-instructor) that working with VM promotes. Students have also often reported frustration and/or difficulty with TM, with one estimation suggesting that it may take as many as 12 weeks to become proficient in traditional microscope use (Cotter 2001), while many students report struggling physically (Krippendorf and Lough 2005; Farah and Maybury 2009; Szymas and Lundin 2011; Ahmed et al. 2018). The introduction of VM has always been welcomed with increased student satisfaction, with many students indicating a preference for VM over TM (Harris et al. 2001; Pinder et al. 2008; Farah and Maybury 2009; Schutte and Braun 2009; Weaker and Herbert 2009; Khalil et al. 2013; Gatumu et al. 2014; Felszeghy et al. 2017; Fernandes et al. 2018; Nauhria and Ramdass 2019).
5.3.3 Barriers to Virtual Microscopy

5.3.3.1 Economical

It should be noted that VM, despite its numerous advantages, is not without its limitations and disadvantages. Although VM is touted to be more economical than the setup and maintenance of TM laboratories (Krippendorf and Lough 2005; Coleman 2009), this is debatable. For example, creating and maintaining VM:
–– Demands considerable human resource costs, particularly with regard to the amount of time required of experts in collecting, curating, and creating virtual slides
–– Requires access to expensive scanning equipment and software that can produce the high-resolution virtual slides, tile them, and make them accessible over the network
–– Needs enough dedicated server space to host the large virtual slide image files
–– Requires the infrastructure and information technology support to host and provide secure access to virtual slides by a large number of users simultaneously
–– May need a web designer and support staff to maintain all VM lab materials.
Taken together, creating and maintaining a VM could be, at the very least, as costly as operating a TM laboratory and, at worst, exponentially more costly (Paulsen et al. 2010).
5.3.3.2 Sharing

Interestingly, another limitation arises when institutions try to implement one of the benefits of VM – sharing and allowing open access to virtual slides. Once VM is produced and made available to the public, then access to the educational resource is exponentially greater than that of the TM lab (Coleman 2009; Paulsen et al. 2010). This can put large demands on institutional servers and does not come without significant labour and monetary costs. As a result, many institutions put firewall or password restrictions in place so that only students and faculty of that institution can access the VM. This greatly reduces the diversity of virtual slides available to learners and educators.

5.3.3.3 Slides

It has already been noted that the automated machines that can scan and produce virtual slides are expensive, making it uneconomical for most educational institutions to purchase them. Some larger educational institutions may have access to the equipment and personnel via core facilities, whereas smaller institutions, or those with a limited education budget, do not have access at all to the resources required to produce their own virtual slides. Further, considering the significant
amount of cost and time required to generate a VM laboratory, each institutional collection of virtual slides is often minimal, ranging from 100 to 500 digitized tissue images, compared to the many thousands typically held in student glass slide collections. This introduces a significant limitation in ensuring that students achieve visual literacy in histology because, in order to develop pattern recognition skills, one must observe a diverse range of tissues with the same or different morphologies and staining patterns. Additionally, limited virtual slide availability restricts the instructor’s ability to assess student learning accurately simply because there are not enough unique virtual slides to retain for assessment purposes (Pinder et al. 2008).
5.3.3.4 Experts

Lastly, it is essential to emphasize the critical role of an expert in histology education. Although VM has superior accessibility via the internet, students report an inefficient and ineffective learning experience using VM in the absence of expert guidance (Bloodgood and Ogilvie 2006; Yen et al. 2014). This means that either the VM software needs more meaningful and robust educational tools associated with it in order to simulate how experts help students achieve visual literacy, or histology curricula must have sufficient contact hours for student-expert interaction. More and more annotations and other extensions that attempt to simulate student-expert interactions are being incorporated into existing and new VM software, and these technologies are listed, and their features are discussed in detail, in Sect. 5.4.
5.3.4 Overcoming the Barriers to VM

To address some of the limitations of VM outlined here, there have been a number of innovations in recent years, including the introduction of more affordable digital slide scanners, more
J. A. Chapman et al.
affordable ways to build and host VM, and inter-institutional collaborative efforts to share VM assets, such as the Virtual Microscopy Database (Lee et al. 2018). The Virtual Microscopy Database (VMD), launched in April 2017, was created by histology educators with funding from the Innovation Program of the American Association for Anatomy (AAA; Bethesda, MD) and technological support from MBF Bioscience Inc. (Williston, VT) in the United States. The VMD’s mission is to reduce the limitations of VM listed above and to serve as a centralized repository of virtual slides to promote high-quality educational and research resource sharing on a global scale. Currently, the VMD hosts over 3500 virtual slides from 21 academic institutions in the United States, Canada, United Kingdom, Australia, and Taiwan. Access to the VMD is limited to educators and researchers from non-profit institutions. To date, over 1200 individuals from over 70 countries around the world are registered VMD users, many of whom teach histology in developing countries, in undergraduate colleges, or even in high schools where they do not have access to any form of VM or TM. Histology educators from institutions with their own VM also access the VMD for its diverse virtual slide collections to use in their teaching and assessments (Lee et al. 2018). In addition to the VMD, there are many more VM resources and other forms of histology learning tools on the World Wide Web than ever before. A comprehensive (but not exhaustive) list of VM resources currently available on the internet is provided in Table 5.1. One thing is clear: as medical curricula inevitably continue to change and evolve to better serve our students and our society, so too will the innovations and technologies that have an impact on the visualization and education of histology in the future.

You can’t stop change any more than you can stop the suns from setting. Shmi Skywalker (Episode I, The Phantom Menace)
Table 5.1 Virtual microscopy sites with free access. Sites are categorized by whether they offer virtual slides only, virtual slides + annotations, digital histology, an online lab guide, and/or Flash-based content.

VM resource (Institution):
American Association for Anatomy
Anatomy A215 Virtual Microscopy
Brain Maps Virtual Slides
Histology @Yale
Loyola University Medical Education Network Virtual Histology
Medical Histology and Virtual Microscopy Resources, Duke University Medical School
The University of Iowa Virtual Slidebox
University of Alabama at Birmingham (UAB) Virtual Microscopy
University of British Columbia School of Medicine Virtual Histology
University of Colorado Virtual Histology Lab
University of Michigan Histology and Virtual Microscopy Learning Resources
University of Minnesota Histology Guide – Virtual Histology Laboratory
University of New South Wales Electron Microscopy Virtual Slides
Virtual Pathology at the University of Leeds

URLs for these sites can be found here: https://rebrand.ly/m28h0a3; Flash-based resources may not work after December 2020
5.4 The Future: Where Is Histology Heading?
5.4.1 The Shift to Self-Directed Learning

The almost ubiquitous adoption of VM and the reduction in histology teaching contact hours have forced instructors to seek more effective and flexible teaching methods. Worldwide, in most areas of higher education, there has been a drive towards increased use of blended learning, which implements technology-enhanced learning (TEL) resources to supplement traditional delivery. Blended learning is a highly effective method of education (Means et al. 2013) and will become, if it hasn’t already, the norm for course delivery (Ross and Gage 2006; Alammary et al. 2014). TEL resources are becoming the primary methods of instruction in pedagogical methods such as the flipped classroom, and there has been an increase in research into their effective design and evaluation (Mayer 2010; Pickering and
Joynes 2016; Pickering et al. 2019). As such, methodologies have swung from instructor-centred to student-centred delivery, with a large focus on student self-directed learning (McLean et al. 2016; Morton and Colbert-Getz 2017; Chen et al. 2017; Fleagle et al. 2018; Day 2018).
5.4.1.1 Digital Histology (DH)

In anatomy, TEL resources have been successfully used as a self-directed learning tool by students for quite a while (Peplow 1990; Arroyo-Jimenez et al. 2005; Smythe and Hughes 2008). In histology, DH in particular has been used as a self-directed learning resource for traditional histology (TH) instruction, with demonstrated success in improving student outcomes (Cotter 2001; Pakurar and Bigbee 2004). DH utilizes static images of tissues (micrographs) that are typically annotated and accessed on a digital device. One of the benefits of DH is that, by focusing on only one specific area of interest within a tissue, much of the extraneous information that is normally generated with a whole glass or virtual
slide can be removed (through a process called “weeding”; Mayer and Moreno 2003). As a result, DH has been suggested to reduce cognitive overload and improve the knowledge retention of learners (Cotter 2001; Mayer and Moreno 2003; Mayer 2010). Recently, DH has been used to create rapid-fire histology diagnostic training software for students, known as the Novel Diagnostic Educational Resource (NDER; Parker et al. 2017). The training module displayed DH images along with a multiple choice question containing five options. After answering the question, learners received immediate feedback. Exposure time to the DH image was adaptive, so that it lengthened or shortened based upon the user’s performance (ranging from 1.5 to 10 s). Pre- vs. post-test analytics showed that students’ diagnostic accuracy increased from 73% to 96%, with a concurrent boost in students’ confidence in histology (Parker et al. 2017). Not tested, but acknowledged by the authors as a limitation, was the length of retention of this diagnostic ability. Additionally, these sorts of DH training methods fail to provide important opportunities for learners to develop skills in the “self-weeding” of extraneous information from a slide (e.g. recognizing and then ignoring artefacts), part of the skills developed through the aforementioned “slide scanning”. Development of these skills can often promote and improve communication with peers and/or instructors (Khalil et al. 2013).
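Parker et al. (2017) do not publish the exact update rule, but an adaptive exposure scheme of the kind described – shorter viewing times after correct answers, longer after errors, clamped to the reported 1.5–10 s range – might be sketched as follows. The step factor and function name here are illustrative assumptions, not NDER’s actual implementation:

```python
# Hypothetical sketch of an NDER-style adaptive exposure rule: exposure time
# to each micrograph shortens after a correct answer and lengthens after an
# error, clamped to the 1.5-10 s range reported by Parker et al. (2017).
def update_exposure(current_s: float, correct: bool,
                    step: float = 1.25, lo: float = 1.5, hi: float = 10.0) -> float:
    """Return the next exposure time in seconds."""
    nxt = current_s / step if correct else current_s * step
    return max(lo, min(hi, nxt))

# Example: a learner starting at 10 s who answers correctly is next shown
# the image for 8 s; repeated errors push the time back towards 10 s.
print(update_exposure(10.0, correct=True))   # 8.0
```

The clamping is the important design point: without the floor, a strong performer’s exposure time would shrink towards zero, and without the ceiling, a struggling learner’s would grow without bound.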
5.4.1.2 Online Multimedia Learning Modules

Recently, Thompson and Lowrie Jr (2017) reported on the replacement of histology laboratories with online self-study modules at the University of Cincinnati’s College of Medicine. Having shifted to VM-based instruction in 2006, they ultimately moved these VM laboratories completely online, largely in response to the aforementioned global decline in contact hours. This online laboratory was then populated with TEL resources designed following the cognitive theory of multimedia learning to reduce cognitive overload (Mayer 2014). Static, labelled DH images were presented with instructor narration,
then further supported by narrated videos of virtual slides which were, in turn, accompanied by a link to the slides in the virtual microscope for further self-directed learning. Analysis found that the online self-directed students performed statistically significantly better in practical examinations than the students who experienced the previous methods of delivery. This study did not, however, describe how students communicated with faculty or each other in the online environment. This so-called hidden curriculum of laboratory classes (Bloodgood and Ogilvie 2006) is an important part of the learning process that may be lost when instruction is moved online. Where then do online, self-directed learners of histology go when they have questions? To whom do they turn if the resources provided do not meet their needs? Perhaps that place is where many people expect Millennial and Generation Z students to already be – social media.
5.4.2 Supporting Histology Teaching Through Social Media

Like it or not, social media (SoMe) has become embedded within our personal and professional lives. While a recent survey of fourth-year medical students found 100% were regular SoMe users (Facebook > Snapchat > Instagram > Twitter), most had never used these platforms for educational purposes (Guckian et al. 2019). In one sense, SoMe can potentially help to create the virtual spaces for interacting, participating, sharing, and collaboration that the shift to self-directed learning is missing; however, there are also negative aspects, such as SoMe overload (Whelan et al. 2019), which instructors should be mindful of. It is only recently that academics have started to move into areas of SoMe that students use regularly, with institutions previously focusing on types of SoMe (e.g. blogs and wikis) that are less popular among students (Sleeman et al. 2020). Instructors are beginning to see the potential educational value in the more popular informal SoMe platforms (Hennessy 2017; Keenan et al. 2018; Stone and Barry 2019); however, evidence-based cognitive, learning, teaching, and ethical strategies should be enacted when embedding them into everyday teaching (Cole et al. 2017; Guckian et al. 2019; Hennessy et al. 2020). While many of the studies looking at the formal introduction of SoMe into teaching have found a positive student response, perhaps surprisingly, students tended to prefer to use these platforms for formative quizzing or as repositories of supplementary content (Lee and Gould 2014). Getting students to participate in discussions on SoMe, however, was difficult because of their fear of being identified and ridiculed (El Bialy and Jalali 2015; Hennessy 2017; Border et al. 2019; Guckian et al. 2019). VM, because it is already online, makes the creation of resources to support students on SoMe relatively easy. In fact, one of the benefits of VM over TM is that students can take snapshots of virtual slides and create their own TEL resources to support their own learning. The authors, for example, engage in many different SoMe platforms in an open way (that is, not directly created for, or embedded into, their teaching), using resources generated from VM to provide supporting supplementary material as open educational resources (OERs). Most of these platforms are, of course, carried around by students all the time on their smartphones and may serve as anytime, anywhere learning resources. Here, we describe three examples.
5.4.2.1 Twitter

Twitter™ is a free microblogging and social networking site in which you can post 280-character status updates/messages, otherwise known as “tweets”, that can be seen by anyone, but particularly by those who have chosen to follow you. Users can post images or videos to accompany their tweets, and tweets can be tagged using hashtags (#) so that tweets corresponding to a particular theme can be followed (e.g. #histology). Tweets can be sent directly to fellow users by addressing them with their Twitter handle (e.g. @IHeartHisto), which is also known as @’ing [at-ing] someone. Sixty-three percent of the 330 million monthly and 145 million daily
users of Twitter are aged 35–65 years old (Oberlo 2019). All three authors fall into this category and can be found on Twitter – @LLCoolProf (Lisa Lee), @IHeartHisto (Nathan Swailes), and @Chapman_Histo (Jamie Chapman) – where we regularly post annotated micrographs explaining histological concepts or threads encouraging engagement with histology (e.g. The A-to-Z of Histology, Fig. 5.2). There is a community of anatomical science educators thriving on Twitter, many of whom are available to provide expert advice (Marsland and Lazarus 2018). Despite this community of on-call experts at people’s fingertips, Twitter appears to be the SoMe application that is least frequented by the very students we are trying to reach (Guckian et al. 2019). Twitter is actually an efficient means for networking, for sharing information and resources (e.g. Free Open Access Medical Education; #FOAMed; Lewis 2017), and for formative quizzing, and it is an effective mechanism for interprofessional learning. Many of us engage with colleagues and disciplines on Twitter in ways that time (different time zones) and space would previously have prevented. A more concerted effort may be required to demonstrate the educational and professional value of Twitter to students, especially as a means of communication with experts while they are engaged in self-directed learning (Lewis 2017). Students concerned about potential identification and/or ridicule can always try direct messaging (DM) as a means of communication or even creating an anonymous account.
Fig. 5.2 #AtoZHistology on Twitter: A sample of a Twitter thread project aiming to educate and engage social media users in histology. The authors posted a panel of histological images and a brief description (within the 280-character limit). Each tweet related to a consecutive letter of the alphabet, and the authors posted each day for 26 days. K, L, and M are shown

5.4.2.2 Instagram

Instagram™, with its 1 billion monthly and 500 million daily users, is a free SoMe app that allows users to share photos and videos, predominantly from a smart device (e.g. smartphone or tablet computer) but also from PCs through browser extensions/plugins. Users, 67% of whom are aged 18–29, upload their photos/videos and then either share them openly or privately to a selected group of followers (Hootsuite 2020). Users of the app can also view, like, and comment on posts. Instagram, with its emphasis on images, is the perfect SoMe platform for developing histology education and science communication materials. @IHeartHisto (Nathan Swailes) curates two highly successful Instagram sites – one with artistically adapted micrographs of histological samples that appear similar to everyday or familiar objects, which encourages engagement and science communication (scicomm) through fun artwork with histology (@ihearthisto; 33K followers), and the other (@iquizhisto; 7K followers) with over 150 formative histological quizzes that provide instant feedback to followers via a simple swipe to the right (think of it as Tinder for histology; see Fig. 5.3). Due to the sharing nature of SoMe, these posts have the potential to reach thousands of people with an interest in that content. For example, on Instagram alone, posts by @ihearthisto and @iquizhisto regularly reach between 15,000 and 45,000 people on this platform. “Reach” is defined as the total number of people who physically see (view, read) content but don’t necessarily physically interact with it (by liking or commenting). Again, these are OERs that colleagues who have difficulty generating their own resources are encouraged to use (with appropriate attribution). Due to its visual nature, there are numerous other histology-related sites on Instagram that also provide excellent content for review, and it should also be noted that communication via this platform is quick and easy.

Fig. 5.3 @IQuizHisto on Instagram. Image (far left panel) shows the image without answers displayed, and the associated questions appear in a caption (far right panel). Users can then swipe left with their finger to reveal the image labelled with answers (middle panel). Users can comment and leave feedback

@Chapman_Histology
(Jamie Chapman) has also recently joined Instagram and started posting annotated micrographs with explanatory notes as a means of providing OERs. DM is also available via this platform for questions, or for requests to use resources, that people do not wish to make public.
5.4.2.3 YouTube

YouTube™ is a user-generated video-sharing platform where users can watch videos on diverse subjects ranging from cat videos, to entertainment, to education. Currently, over 2 billion users log in to YouTube per month, to watch over 1 billion hours of video per day, in over 100 countries, delivered in 80 different languages (YouTube 2020). There is a Chapman Histology YouTube channel which consists of a series of guided VM videos of normal histology. One major theme on the channel is the delivery of guided histological instruction in 3 minutes or less, called 3 Minute Histology. These short videos are presented as simple interpretations of virtual slides with audio narration (generally no text), presenting a concept in one visual and one auditory channel to reduce the potential for cognitive overload (Mayer and Moreno 2003; Mayer 2010, 2014). The guidance, provided by voice and an enlarged mouse cursor pointing out key areas, sometimes referred to as “feed-forward training” (Koury et al. 2019), also helps to reduce Type 3 cognitive
overload through “signalling”, whereby cues are provided on how to process the material presented (Mayer and Moreno 2003). Again, these resources are presented as OERs, and a number of institutions from around the world have embedded or linked to them in their own teaching. Besides instructor-supplied videos through institutional learning management systems, YouTube is likely to be the next most common source of supplementary content videos for students (Barry et al. 2016; Balogun 2019). Evaluation of those resources by instructors, to ensure their accuracy, quality, and that they meet current ethical standards, is important (Raikos and Waidyasekara 2014; Barry et al. 2016; Jones 2016; Miller and Lewis 2016; Balogun 2019). There is no DM capability on YouTube, only a thumbs up/down voting and comment function on each individual video, or a Community section which allows for posts, polls, and discussion. YouTube, because it is owned by Google, has some excellent in-built analytics that users can potentially use to generate research regarding the use of their resources.

5.4.3 Back to the Future? Live-Streaming

Live-streaming of videos is a global phenomenon. It is estimated that as early as 2022, four-fifths of all global internet traffic will be video, and the majority of this video will be live-streams (Cisco 2019). At present, the largest contributors to live-video content are live-streams of video games, through platforms such as Twitch, YouTube Live, and Facebook Live. Every day, around 200 million people tune in to watch other people play video games. While this may seem odd to some of us, recent research (Payne et al. 2017; Sjoblom and Hamari 2017) suggests that there are two main reasons why viewers engage with live-streams: to seek information and for social interactions. Intriguingly, these reasons mirror the motivations, as identified by self-determination theory, that drive people to engage with learning activities (Ryan and Deci 2000; Reeve 2002). Live-streams help viewers fulfil their intrinsic motivations for a sense of autonomy (afforded through choice in which activities they engage with) and a feeling of competency (afforded through thoughtful and constructive instruction and feedback) (Niemiec and Ryan 2009). But live-streams also meet a viewer’s need for a sense of relatedness, an extrinsic motivation that can be met by engaging in warm and respectful learning environments. Live-streams, and the sites they are broadcast on, contain many of the elements that are known to drive students to engage with learning environments, and they therefore offer intriguing potential to support learning in formal education environments. When combined with VM, live-streaming could, in a sense, be seen as the next jump in the evolution of the technological breakthrough of telepathology (see Part 1: 2.5.3 Scanners, Scopes, and Annotations). Where, in the 1960s, live microscopy images were broadcast between two hospitals in real time using “television microscopy” (Weinstein 1986), in the future those images could be broadcast to anyone in the world who has the correct app on their device and an interest in the content.

The use of live-streaming social-media platforms provides a number of advantages over established online-lecturing systems, such as Blackboard Collaborate. For example, live-streaming platforms are open access and are easily accessed through platforms that most students already use (e.g. YouTube and Facebook), and these platforms exist, in large part, because they foster a relaxed, engaging, and interactive environment. In addition, live-streams offer an unparalleled level of flexibility, for both the presenter and the viewer. Live-streams can be watched live on any smart device, enabling viewers to ask questions and have them answered in real time. Alternatively, streams can be viewed as recorded sessions – creating an “anytime, anywhere” learning resource. Great Scott!

5.4.3.1 Twitch

In 2019, Chapman_Histology (Jamie Chapman) piloted “Histology 101” educational live-streams on the platform Twitch. Twitch, which is predominantly a live video game streaming service, has
the market share of live-streaming, with more than 70% of all live video hours streamed (Perez 2019). Recently, it has started to expand its streamer and audience base through the introduction of non-game streaming, including In Real Life (IRL), music, art and crafts, and even cooking streams. Science and technology live-streams are also a relatively new theme available, and this is where the Histology 101 Twitch live-stream pilot was launched. A live-streaming studio was created with a high-end Windows PC, HD video camera, video capture card, microphone, and green screen. Using free streaming software (Streamlabs Open Broadcast Software (OBS); Streamlabs, Logitech), VM run through a web browser could be streamed to a live audience, who could ask questions and have them answered in real time through a text-chat box (Fig. 5.4). Live-streams could also be set to be saved automatically as video, which could then be exported and uploaded to other sites, including an institution’s LMS, for asynchronous engagement. A number of the live-streams during this pilot were subsequently uploaded to the Chapman Histology YouTube channel. Twitch is an open platform, and users have to register to be able to ask questions. As a result, students could choose their own username so that
they could remain anonymous if they wished – not only from the streamer but also from each other. While this anonymity does interfere with tracking student engagement and direct analysis of the effect of this TEL resource on learning and knowledge retention, it also eliminates some of the anxiety reported in other SoMe studies in getting students to engage with discussions (El Bialy and Jalali 2015; Hennessy 2017; Border et al. 2019; Guckian et al. 2019). With the simple addition of a drawing tablet (Wacom Intuos), a drawing tool extension in the Chrome web browser (Page Marker), and a blank webpage, anatomical and histological drawings could be made in real time for students to follow along with, similar to those reported by Pickering (2014, 2015). As a feed-forward training model, the pilot proved relatively successful from a technical standpoint – we could deliver live-streams to provide supplementary histology learning material, and students actually attended. Due to its open nature, in addition to the students from the live-streamer’s institution, those from other institutions in Australia and overseas also engaged with the live-stream. This demonstrates the potential of live-streaming as a means to increase public engagement with the anatomical sciences. The open-access and anonymous nature of Twitch does
Fig. 5.4 Live-streaming on Twitch using virtual microscopy and a green-screen-filtered webcam. Viewers participate via the text-based “chat” on the right-hand side, which appears in real time, enabling live discussions and answering of questions
increase the potential risk for negative engagements; however, many of these interactions can be managed through robust moderation within the text chat. The Twitch platform itself, for example, has a built-in moderation tool that prevents the use of common offensive words. However, this moderation tool actually became a challenge while teaching reproductive anatomy: the words “testis” and “gonad”, for example, were filtered out of the titles or descriptions of the streams. Future expansion of this project will focus more on evaluation and on pretraining students in the technology to demonstrate its ease of use and perceived usefulness (the two primary factors in the technology acceptance model; Davis et al. 1989) to promote further engagement. Twitch also allows a streamer’s visual scan paths, captured through eye tracking, to be projected to viewers, which may prove useful as a further feed-forward training method. Previous studies have shown that presenting novices with modelled examples of expert eye movements can improve their diagnostic abilities in radiology (Litchfield et al. 2010) or their visual literacy in histology (Koury et al. 2019). This mode of delivery then encompasses many of the best aspects of the cognitive theory of multimedia learning (Mayer and Moreno 2003; Mayer 2009, 2010; Pickering 2014, 2015; Balemans et al. 2016) and blends it with SoMe, which can support student learning in a flexible, synchronous or asynchronous, manner.
5.4.4 The Future: Where We’re Going We Don’t Need Roads

5.4.4.1 Virtual Reality and Augmented Reality

While virtual and augmented reality (VR and AR, respectively) have been embraced as a mode of anatomy education for well over a decade (Nicholson et al. 2006; Levinson et al. 2007; Codd and Choudhury 2011; Falah et al. 2014; Ma et al. 2016), such offerings for histology education appear sparse. Interestingly, the first descriptions of VM actually referred to it as VR at the time (Zito et al. 2004; Hortsch 2013). Nowadays,
VR/AR is typically delivered through a head-mounted display and is deeply immersive, although many AR technologies are still delivered through smart devices (phones, tablets). The AR technology HoloLens (Microsoft, Redmond, WA) has been tested for viewing virtual slides in an evaluation of its use in the workflow of anatomic pathology (Hanna et al. 2018). While this study found that Flash-based VM websites were inaccessible with the HoloLens, virtual slides were able to be viewed on several non-Flash-based viewers such as DigitalScope (Aptia, Houston, TX), Image Zoomer, and OpenSeadragon (Microsoft, Redmond, WA). Simple gestures, such as moving a hand upwards or downwards, allowed users to operate the zoom in and zoom out functions of the virtual microscope.
5.4.4.2 3D Modelling
3D modelling in histology is also a relatively new phenomenon. Sieben et al. (2017) report on the development of 3D virtual models of epithelium and their delivery online (www.epithelium3d.com) as a means of supporting student learning in histology. These resources were well received by students; however, their effectiveness in teaching an understanding of epithelial structure was not evaluated. These sorts of models could, though, theoretically be adapted for use in VR systems to allow learners a more immersive engagement in three dimensions. Fascinatingly, 3D models of histological specimens have also been used to create 3D-printed tactile models to help blind and/or visually impaired students (Kolitsky 2014). Confocal microscopy z-stack reconstructions have also been used to create virtual 3D models of various organs and tissues, which are increasingly being used in VR environments to teach topics such as vascular function in physiology (Daly 2018). One of the often-cited limitations of VM is its inability to move through the z-axis (Dee 2009; Donnelly et al. 2013); these sorts of scanning and reconstruction methods may one day allow routine VM focusing through the different focal planes, thereby providing a more realistic VR simulation of TM.
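To illustrate what z-axis capability involves computationally, a scanned z-stack can be treated as a (z, y, x) image array from which the best-focused plane is chosen by a sharpness score. The sketch below is illustrative only; the crude focus measure and the names are my assumptions, not a method from the studies cited above:

```python
import numpy as np

# Illustrative sketch of "focusing" through a scanned z-stack: the stack
# is a (z, y, x) array of greyscale planes, and a simple sharpness score
# picks the plane that is best in focus.

def sharpness(plane):
    """Mean squared intensity gradient -- a crude focus measure."""
    gy, gx = np.gradient(plane.astype(float))
    return float(np.mean(gy**2 + gx**2))

def best_focus_plane(stack):
    """Index of the z-plane with the highest sharpness score."""
    return int(np.argmax([sharpness(p) for p in stack]))

# Demo: three synthetic planes; the middle one has the sharpest edges.
blurry = np.full((32, 32), 0.5)
sharp = np.zeros((32, 32))
sharp[:, 16:] = 1.0                  # hard vertical edge
stack = np.stack([blurry, sharp, blurry])
print(best_focus_plane(stack))       # -> 1
```

Real focus-stacking software uses more robust measures (e.g. variance of the Laplacian) and can fuse planes into an extended-depth-of-field image, but the per-plane scoring idea is the same.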
5 From Scope to Screen: The Evolution of Histology Education
5.4.4.3 Artificial Intelligence
As mentioned above in Sect. 3.1.4, the daily routine of a modern pathologist is increasingly shifting towards a completely digitized workflow. With the adoption of WSI for diagnostic purposes being the first step, it was perhaps inevitable that, with increasing demand, decreasing funds, and an aging and diminishing pathologist workforce (Robboy et al. 2013; Satta and Edmonstone 2018; Metter et al. 2019), the next step would be to reduce the reliance upon human intervention as much as possible. With the development of machine learning and deep learning, both subsets of artificial intelligence (AI), there is a strong drive to produce technology that can, at this early stage, act as a digital support system for pathologists but eventually equal, or even improve upon (and therefore replace), their diagnostic expertise (Abel et al. 2019; Campanella et al. 2019; Parwani 2019). Computational pathology, whereby image analysis algorithms are used to automate or semi-automate immunohistochemistry quantification through machine learning and other AI techniques, is already in practice and is said to improve the speed and accuracy of diagnosis (West Jr 2017). Of course, for these diagnostic algorithms to function correctly, the AI needs to learn from a large, standardized dataset. Another limiting factor to complete automation in pathology is a human one: at this stage, pathologists are still required to perform the "grossing" (making gross observations and documentations) and "cut-up" (careful dissection) of pathological biopsies and surgical resections. Further, histo-technicians are still required to section tissues, and each of these steps presents an opportunity to introduce variability and artefacts, which can affect the accuracy of the algorithm (Koelzer et al. 2019).
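The idea behind automated positive-pixel immunohistochemistry quantification can be caricatured in a few lines: score a tile by the fraction of pixels whose colour resembles DAB (brown) staining. Production systems use colour deconvolution and trained models; the threshold rule, cut-off values, and function name below are invented purely for illustration:

```python
import numpy as np

# Toy sketch of positive-pixel IHC scoring. Real computational pathology
# pipelines use colour deconvolution and machine learning; this simple
# threshold rule and its cut-offs are invented for illustration only.

def positive_fraction(rgb, min_excess=30, max_brightness=200):
    """Fraction of pixels that look DAB-like (brown-ish): red channel
    clearly exceeds blue, and the pixel is not background-white."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    stained = (r - b > min_excess) & ((r + g + b) / 3 < max_brightness)
    return float(stained.mean())

# Demo on a synthetic 10x10 patch: 25 "stained" brown pixels on white.
img = np.full((10, 10, 3), 255, dtype=np.uint8)  # white background
img[:5, :5] = [150, 100, 60]                     # brown-ish square
print(positive_fraction(img))                    # -> 0.25
```

Even this toy version shows why upstream standardization matters: a change in staining intensity or scanner colour balance shifts pixels across the thresholds and changes the score.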
Automated tissue sectioning appears to be improving in quality, however, and may soon be adopted into the automated digitized pathology workflow (Onozato et al. 2013; Fu et al. 2018). As these advances proceed, the need for a pathology workforce may diminish considerably, which, for those of us who have gotten lost by relying on our car's Sat Nav directions, may warrant a pause. It is not yet known what trickle-down effects increasing AI use in the field of pathology will have on the ways in which histology is taught in medical schools.
5.5 Conclusion
Histology is a history of visualization, a testament to human innovation, and our unrelenting pursuit of knowledge to discover our make-up and what makes us "tick". Histology education, therefore, has its roots deeply embedded in the history of medicine, the history of science, and a tenacious human desire to understand and heal diseases through the understanding of our make-up. At present, histopathological examination remains the primary and most "economic" modality for diagnosing most diseases and for the grading, staging, and profiling of cancer, which inform therapeutic strategies and prognosis. With advancements in diagnostic and imaging technologies that allow visualization of the human make-up at the molecular level, we are now able to look even deeper into the anatomy. It may only be a matter of time before the human body and its pathologies are visualized and diagnosed, and therapeutic strategies formulated, from molecular analytics in binary numbers. How will histology and other arms of anatomical sciences education evolve with this inevitable and ongoing revolution in science and technology? Unfortunately, we don't have a magic crystal ball, but one thing is clear: histology can only exist because of its reliance on scientific breakthroughs in visualization technologies (lenses, microtomes, stains, microscopes, cameras, computers, scanners, software, the internet), and histology education has had to adapt and evolve accordingly. It is rare now, at the beginning of 2020, to see an isolated student hunched over a compound microscope with a box full of glass slides and an open textbook. Instead, students are crowded around a phone or laptop, accessing virtual slides or YouTube videos, uploading photos to social media, and commenting on how "everything looks the same". But accompanying these massive changes in technology, there is, importantly, a whole new
level of connectedness and support that students have never had the opportunity to experience before. Today's students of histology are connected digitally not only with their peers but with experts, and not only within their own institution but with a global community of learners and educators. Modern histology education, designed with evidence-based learning theories, can be open and flexible, improving histology literacy and embedding the importance of histology as a basic science into the consciousness of medical education. What is the next step in the evolution of histology? The future is in our hands.
References
Abel E, Pantanowitz L, Aeffner F et al (2019) Computational pathology definitions, best practices, and recommendations for regulatory guidance: a white paper from The Digital Pathology Association. J Pathol 249:286–294
Ahmed R, Shamim KM, Talukdar HK, Parvin S (2018) Light microscopy for teaching-learning in histology practical in undergraduate medical education of Bangladesh: a teacher's perspective. Southeast Asian J Med Educ 12:26–31
Alammary A, Sheard J, Carbone A (2014) Blended learning in higher education: three different design approaches. Australas J Educ Technol 30:440–454
Arroyo-Jimenez MDM, Marcos P, Martinez-Marcos A et al (2005) Gross anatomy dissections and self-directed learning in medicine. Clin Anat 18:385–391
Baeyer A (1875) Zur geschichte des eosins. Ber Dtsch Chem Ges 8:146–148
Balemans MCM, Kooloos JGM, Donders ART, Van der Zee CEEM (2016) Actual drawing of histological images improves knowledge retention. Anat Sci Educ 9:60–70
Balogun WG (2019) Using electronic tools and resources to meet the challenges of anatomy education in sub-Saharan Africa. Anat Sci Educ 12:97–104
Barbeau ML, Johnson M, Gibson C, Rogers KA (2013) The development and assessment of an online microscopic anatomy laboratory course. Anat Sci Educ 6:246–256
Barry DS, Marzouk F, Chulak-Oglu K et al (2016) Anatomy education for the YouTube generation. Anat Sci Educ 9:90–96
Bell CS (1966) The early history of the microscope. Bios 37(2):51–60
Bergman EM, Prince KJAH, Drukker J, van der Vleuten CPM, Scherpbier AJJA (2008) How much anatomy is enough? Anat Sci Educ 1(4):184–188
Bergman EM, Vleuten CPMD, Scherpbier AJJA (2011) Why don't they know enough about anatomy? A narrative review. Med Teach 33:403–409
Bergman E, Verheijen I, Scherpbier A et al (2014) Influences on anatomical knowledge: the complete arguments. Clin Anat 27:296–303
Berners-Lee T (1989) Information management: a proposal. http://info.cern.ch/Proposal.html. Accessed 11 Feb 2020
Berry GP, Clark SL, Dempsey EW et al (1956) Association of American Medical Colleges: the teaching of anatomy and anthropology in medical education. In: Report of the Third Teaching Institute, Association of American Medical Colleges, Swampscott, MA, 18–22 October 1955. J Med Educ 31:1–146
Bigelow J (1859) The death of Pliny the elder. Mem Am Acad Arts Sci 6(2):223–227
Blake JB (1980) Anatomy. In: Numbers RL (ed) The education of American physicians: historical essays. University of California Press, California
Blake CA, Lavoie HA, Millette CF (2003) Teaching medical histology at the University of South Carolina School of Medicine: transition to virtual slides and virtual microscopes. Anat Rec 275B:196–206
Bloodgood RA, Ogilvie RW (2006) Trends in histology laboratory teaching in United States medical schools. Anat Rec B New Anat 289:169–175
Blum F (1893a) Der formaldehyd als antisepticum. Munch Med Wochenschr 40(32):601–602
Blum F (1893b) Der formaldehyd als härtungsmittel. Z Wiss Mikrosk 10:314–315
Böhmer F (1865) Zur pathologischen anatomie der meningitis cerebromedullaris epidemica. Aerztl Intelligenzb Munchen 12:539–550
Bonaini F (1845) Chronica antiqua conventus sanctae Catharine de Pisis. Arch Stor Ital 6(2):391–593
Border S, Hennessy C, Pickering J (2019) The rapidly changing landscape of student social media use in anatomy education. Anat Sci Educ 12:577–579
Borowsky AD, Glassy EF, Wallace WD et al (2020) Digital whole slide imaging compared with light microscopy for primary diagnosis in surgical pathology. Arch Pathol Lab Med. Advance online publication. https://doi.org/10.5858/arpa.2019-0569-OA
Bostock J (1855a) The natural history. Pliny the Elder (trans Bostock J, Riley HT). Book XXXVI, Chapter 67. Taylor and Francis
Bostock J (1855b) The natural history. Pliny the Elder (trans Bostock J, Riley HT). Book XXXVII, Chapter 10. Taylor and Francis
Boyce BF (2017) An update on the validation of whole slide imaging systems following FDA approval of a system for a routine pathology diagnostic service in the United States. Biotech Histochem 92:381–389
Braun MW, Kearns KD (2008) Improved learning efficiency and increased student collaboration through use of virtual microscopy in the teaching of human pathology. Anat Sci Educ 1:240–246
Brown PJ, Fews D, Bell NJ (2016) Teaching veterinary histopathology: a comparison of microscopy and digital slides. J Vet Med Educ 43:13–20
Burgess DS (2004) Laser microdissection: making inroads in research. Biophoton Int 11:46–49
Burke RB (1962) The opus majus of Roger Bacon, vol 2. Russel and Russel Inc, p 574
Campanella G, Hanna MG, Geneslaw L et al (2019) Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nat Med 25:1301–1309
Carpenter WB, Dallinger WH (1901) The microscope and its revelations, 8th edn. P Blakiston's Son and Co, Philadelphia, p 125
Chen F, Lui AM, Martinelli SM (2017) A systematic review of the effectiveness of flipped classrooms in medical education. Med Educ 51:585–597
Chvátal A (2017) Jan Evangelista Purkyně (1787–1869) and his instruments for microscopic research in the field of neuroscience. J Hist Neurosci 26(3):238–256
CISCO (2019) Cisco Visual Networking Index: forecast and trends, 2017–2022. White Paper. Available: https://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/whitepaper-c11-741490.html
Clarke J (1910) Physical science in the time of Nero: being a translation of the Quaestiones Naturales of Seneca (trans Clarke J). Book I, Chapter VI. Macmillan and Co. Ltd, p 29
Codd AM, Choudhury B (2011) Virtual reality anatomy: is it comparable with traditional methods in the teaching of human forearm musculoskeletal anatomy? Anat Sci Educ 4:119–125
Cole D, Rengasamy E, Batchelor S et al (2017) Using social media to support small group learning. BMC Med Educ 17:201
Coleman R (2009) Can histology and pathology be taught without microscopes? The advantages and disadvantages of virtual histology. Acta Histochem 111:1–4
Collier B (1970) The little engines that could've: the calculating machines of Charles Babbage. Thesis, Harvard University
Cooke M, Irby D, O'Brien B (2010) Educating physicians: a call for reform of medical school and residency. Jossey-Bass, San Francisco
Cotter JR (2001) Laboratory instruction in histology at the University of Buffalo: recent replacement of microscope exercises with computer applications. Anat Rec 265:212–221
Craig S, Tait N, Boers D, McAndrew D (2010) Review of anatomy education in Australian and New Zealand medical schools. ANZ J Surg 80:212–216
Daly CJ (2018) The future of education? Using 3D animation and virtual reality in physiology teaching. Physiol News 111:43
Davis FD, Bagozzi RP, Warshaw PR (1989) User acceptance of computer technology: a comparison of two theoretical models. Manag Sci 35:982–1003
Day LJ (2018) A gross anatomy flipped classroom effects performance, retention, and higher-level thinking in lower performing students. Anat Sci Educ 11:565–574
Dee FR (2009) Virtual microscopy in pathology education. Hum Pathol 40:1112–1121
Dee FR, Lehman JM, Consoer D et al (2003) Implementation of virtual microscope slides in the annual pathobiology of cancer workshop laboratory. Hum Pathol 34(5):430–436
del Migliore FL (1684) Firenze città nobilissima illustrata [The noble city of Florence illustrated]. Nella Stamp della Stella, p 431
Donnelly AD, Mukherjee MS, Lyden ER et al (2013) Optimal z-axis scanning parameters for gynecologic cytology specimens. J Pathol Inform 4:38
Drake RL, Lowrie DJ Jr, Prewitt CM (2002) Survey of gross anatomy, microscopic anatomy, neuroscience, and embryology courses in medical school curricula in the United States. Anat Rec 269:118–122
Drake RL, McBride JM, Lachman N, Pawlina W (2009) Medical education in the anatomical sciences: the winds of change continue to blow. Anat Sci Educ 2:253–259
Drake RL, McBride JM, Pawlina W (2014) An update on the status of anatomical sciences education in United States medical schools. Anat Sci Educ 7:321–325
Duffy TP (2011) The Flexner report—100 years later. Yale J Biol Med 84(3):269–276
El Bialy S, Jalali A (2015) Go where the students are: a comparison of the use of social networking sites between medical students and medical educators. JMIR Med Educ 1:e7
Falah J, Khan S, Alfalah T et al (2014) Virtual reality medical training system for anatomy education. Sci Inform Conf 37:752–758
Farah CS, Maybury S (2009) The e-evolution of microscopy in dental education. J Dent Educ 73:942–949
Farey JE, Sandeford JC, Evans-McKendry GD (2014) Medical students call for national standards in anatomical education. ANZ J Surg 84:813–815
Felszeghy S, Pasonen-Seppänen S, Koskela A, Mahonen A (2017) Student-focused virtual histology education: do new scenarios and digital technology matter? Med Ed Publish. https://doi.org/10.15694/mep.2017.000154
Fernandes CIR, Bonan RF, Bonan PRF et al (2018) Dental student's perceptions and performance in use of conventional and virtual microscopy in oral pathology. J Dent Educ 82:883–890
Findlater GS, Kristmundsdottir F, Parson SH, Gillingwater TH (2012) Development of a supported self-directed learning approach for anatomy education. Anat Sci Educ 5:114–121
Fitzgerald JEF, White MJ, Tang SW et al (2008) Are we teaching sufficient anatomy at medical school? The opinions of newly qualified doctors. Clin Anat 21:718–724
Fleagle TR, Borcherding NC, Harris J, Hoffmann DS (2018) Application of flipped classroom pedagogy to the human gross anatomy laboratory: student preferences and learning outcomes. Anat Sci Educ 11:285–296
Flexner A (1910) Medical education in the United States and Canada: a report to the Carnegie Foundation for the Advancement of Teaching. Carnegie Foundation for the Advancement of Teaching, New York
Flowers TH (1983) The design of colossus. IEEE Ann Hist Comput 5(3):239–252
Fu X, Klepeis V, Yagi Y (2018) Evaluation of an automated tissue sectioning machine for digital pathology. Diagn Pathol 4:267
Gartner LP (2003) Anatomical sciences in the allopathic medical school curriculum in the United States between 1967-2001. Clin Anat 16:434–439
Gatumu MK, MacMillan FM, Langton PD et al (2014) Evaluation of usage of virtual microscopy for the study of histology in the medical, dental, and veterinary undergraduate programs of a UK University. Anat Sci Educ 7:389–398
Gillingwater TH (2008) The importance of exposure to human material in anatomical education: a philosophical perspective. Anat Sci Educ 1:264–266
Goes FJ (2013) The eye in history. Jaypee Brothers Medical Publisher, New Delhi, p 130
Goldberg HR, Dintzis R (2007) The positive impact of team-based virtual microscopy on student learning in physiology and histology. Adv Physiol Educ 31:261–265
Guckian J, Leighton J, Frearson R et al (2019) The next generation: how medical students use new social media to support their learning. Med Ed Publish. https://doi.org/10.15694/mep.2019.000227.1
Gupta Y, Morgan M, Singh A, Ellis H (2008) Junior doctors' knowledge of applied clinical anatomy. Clin Anat 21:334–338
Hanna MG, Ahmed I, Nine J et al (2018) Augmented reality technology using Microsoft HoloLens in anatomic pathology. Arch Pathol Lab Med 142:638–644
Harris T, Leaven T, Heidger P, Kreiter C, Duncan J, Dick F (2001) Comparison of a virtual microscope laboratory to a regular microscope laboratory for teaching histology. Anat Rec 265:10–14
Harvey W (1628) On the motion of the heart and blood in animals (trans: Willis R). PF Collier & Son, New York, 1910
Heidger PM Jr, Dee F, Consoer D et al (2002) Integrated approach to teaching and testing in histology with real and virtual imaging. Anat Rec 269:107–112
Henle J (1838) Ueber die ausbreitung des epithelium im menschlichen körper. In Müller's Archiv 5:103–128
Henle J (1841) Allgemeine anatomie. Lehre von den mischungs und formbestandtheilen des menschlichen körpers. In Vom bau des menschlichen körpers (von Sommerring ST), vol 6. Leipzig: Voss
Hennessy CM (2017) Lifting the negative cloud of social media use within medical education. Anat Sci Educ 10:98–99
Hennessy CM, Royer DF, Meyer AJ, Smith CF (2020) Social media guidelines for anatomists. Anat Sci Educ. Advance online publication. https://doi.org/10.1002/ASE.1948
Heylings DJA (2002) Anatomy 1999–2000: the curriculum, who teaches it and how? Med Educ 36:702–710
Hofmann AW (1867) Contributions to the history of methylic aldehyde. Proc R Soc Lond 16:156–159
Hooke R (1665) Micrographia: or some physiological descriptions of minute bodies made by magnifying glasses. Jo Martyn and Ja Allestry, Printers to the Royal Society, London, pp 112–120
Hootsuite (2020) 37 Instagram stats that matter to marketers in 2020. [ONLINE] Available at: https://blog.hootsuite.com/instagram-statistics/. Accessed 21 Feb 2020
Hortsch M (2013) From microscopes to virtual reality: how our teaching of histology is changing. J Cytol Histol 4:e108
Husmann PR, O'Loughlin VD, Braun MW (2009) Quantitative and qualitative changes in teaching histology by means of virtual microscopy in an introductory course in human anatomy. Anat Sci Educ 2:218–226
Inui T (2003) A flag in the wind: educating for professionalism in medicine. Association of American Medical Colleges, Washington, DC
Jones DG (2016) YouTube anatomy education: sources of ethical perplexity. Anat Sci Educ 9:500–501
Kahr B, Lovell S, Subramony JA (1998) The progress of logwood extract. Chirality 10:66–77
Kalderon AE (1983) The evolution of microscope design from its invention to the present days. Am J Surg Pathol 7:95–102
Keenan ID, Slater JD, Matthan J (2018) Social media: insights for medical education from instructor perceptions and usage. Med Ed Publish. https://doi.org/10.15694/mep.2018.0000027.1
Khalil MK, Kirkley DL, Kibble JD (2013) Development and evaluation of an interactive electronic laboratory manual for cooperative learning of medical histology. Anat Sci Educ 6:342–350
Kisch B (1954) Forgotten leaders in modern medicine, Valentin, Gruby, Remak, Auerbach. Trans Am Philos Soc 44:139–317
Knight TA (1803) Account of some experiments on the descent of the sap in trees. Philos Trans R Soc Lond 93:277–289
Koelzer VH, Sirinukunwattana K, Rittscher J et al (2019) Precision immunoprofiling by image analysis and artificial intelligence. Virchows Arch 474:511–522
Kolitsky MA (2014) 3D printed tactile learning objects: proof of concept. J Blin Innov Res 4(1)
Kolliker A (1899) Erinnerungen aus meinem Leben. W Engelmann, Leipzig, p 8
Koury HF, Leonard CJ, Carry PM, Lee LMJ (2019) An expert derived feedforward histology module improves pattern recognition efficiency in novice students. Anat Sci Educ 12:645–654
Krippendorf BB, Lough J (2005) Complete and rapid switch from light microscopy to virtual microscopy for teaching medical histology. Anat Rec B New Anat 285:19–25
Lee LMJ, Gould DJ (2014) Educational implications of a social networking application, Twitter™, for anatomical sciences. Med Sci Educ 24:273–278
Lee LMJ, Goldman HM, Hortsch M (2018) The virtual microscopy database-sharing digital microscope images for research and education. Anat Sci Educ 11:510–515
Leeuwenhoeck A (1677) Observation, communicated to the publisher by Mr. Antony van Leeuwenhoeck, in a Dutch letter of the 9 October 1676 here English'd: concerning little animals by him observed in rain- well- sea- and snow water; as also in water wherein pepper had lain infused. Philos Trans 12:821–831
Leicester HM (1940) Alexander Mikhailovich Butlerov. J Chem Educ 17(5):203
Letocha CE (1986) The origin of spectacles. Surv Ophthalmol 31(3):185–188
Leung K-K, Lu K-S, Huang T-S, Hsieh B-S (2006) Anatomy instruction in medical schools: connecting the past and the future. Adv Health Sci Educ 11:209–215
Levinson AJ, Weaver B, Garside S et al (2007) Virtual reality and brain anatomy: a randomised trial of e-learning instructional designs. Med Educ 41:495–501
Lewis TL (2017) Social media and student engagement in anatomy education. Anat Sci Educ 10:508
Lewis TL, Sagmeister ML, Miller GW et al (2016) Anatomy, radiology, and practical procedure education for foundational doctors in England: a national observational study. Clin Anat 29:982–990
Ling Y, Swanson DB, Holtzman K, Bucak SD (2008) Retention of basic science information by senior medical students. Acad Med 83:S82–S85
Litchfield D, Ball LJ, Donovan T et al (2010) Viewing another person's eye movements improves identification of pulmonary nodules in chest x-ray inspection. J Exp Psychol Appl 16:251–262
Lowe AJ (2018) E-learning in medical education: one size does not fit all. Anat Sci Educ 11:100–101
Ma M, Fallavollita P, Seelbach I et al (2016) Personalized augmented reality for anatomy education. Clin Anat 29:446–453
Marsland MJ, Lazarus MD (2018) Ask an anatomist: identifying global trends, topics and themes of academic anatomists using Twitter. Anat Sci Educ 11(3):270–281
Mauchly HP (1997) Before the ENIAC. IEEE Ann Hist Comput 19(2):25–30
Mayer RE (2009) Multimedia learning, 2nd edn. Cambridge University Press
Mayer RE (2010) Applying the science of learning to medical education. Med Educ 44:543–549
Mayer RE (2014) Cognitive theory of multimedia learning. In: Mayer RE (ed) The Cambridge handbook of multimedia learning, 2nd edn. Cambridge University Press, New York
Mayer RE, Moreno R (2003) Nine ways to reduce cognitive load in multimedia learning. Educ Psychol 38:43–52
McBride JM, Drake RL (2018) National survey on anatomical sciences in medical education. Anat Sci Educ 11:7–14
McLean S, Attardi SM, Faden L, Goldszmidt M (2016) Flipped classrooms and student learning: not just surface gains. Adv Physiol Educ 40:47–55
McReady ZR, Jham BC (2013) Dental students' perceptions of the use of digital microscopy as part of an oral pathology curriculum. J Dent Educ 77:1624–1628
Means B, Toyama Y, Murphy R, Baki M (2013) The effectiveness of online and blended learning: a meta-analysis of the empirical literature. Teach Coll Rec 115:1–47
Medical Futurist (2018) The digital future of pathology – the medical futurist. [online] Available at: https://medicalfuturist.com/digital-future-pathology/. Accessed 12 Sept 2019
Merkel F (1891) Jacob Henle: a German scholarly life; according to records and memories. Brunswick, pp 154–155
Metter DM, Colgan TJ, Leung ST et al (2019) Trends in the US and Canadian pathologist workforces from 2007 to 2017. JAMA Netw Open 2:e194337
Miller GW, Lewis TL (2016) Anatomy education for the YouTube generation: technical, ethical, and educational considerations. Anat Sci Educ 9:496–497
Mione S, Valcke M, Cornelissen M (2013) Evaluation of virtual microscopy in medical histology teaching. Anat Sci Educ 6:307–315
Mione S, Valcke M, Cornelissen M (2016) Remote histology learning from static versus dynamic microscopic images. Anat Sci Educ 9:222–230
Mollenhoff CR (1988) Atanasoff: forgotten father of the computer. Iowa State University Press, Ames
Morton DA, Colbert-Getz JM (2017) Measuring the impact of the flipped anatomy classroom: the importance of categorizing an assessment by Bloom's taxonomy. Anat Sci Educ 10:170–175
Moxham BJ, Emmanouil-Nikoloussi E, Brenner E et al (2017) The attitudes of medical students in Europe toward the clinical importance of histology. Clin Anat 30:635–643
Müller J (1838) On the nature and structural characteristics of cancer and of those morbid growths which may be confounded with it (trans: West C). In Med Chir Rev (1840) 33(65):119–148
Nauhria S, Ramdass PVAK (2019) Randomized cross-over study and a qualitative analysis comparing virtual microscopy and light microscopy for learning undergraduate histopathology. Indian J Pathol Microbiol 62:84–90
Nicholson DT, Chalk C, Funnell WRJ, Daniel SJ (2006) Can virtual reality improve anatomy education? A randomized controlled study of a computer-generated three-dimensional anatomical ear model. Med Educ 40:1081–1087
Niemiec CP, Ryan RM (2009) Autonomy, competence, and relatedness in the classroom: applying self-determination theory to educational practice. Theory Res Educ 7:133–144
Oberlo (2019) 10 Twitter statistics every marketer should know in 2020 [Infographic]. [ONLINE] Available at: https://au.oberlo.com/blog/twitter-statistics. Accessed 21 Feb 2020
Onozato ML, Hammond S, Merren M, Yagi Y (2013) Evaluation of a completely automated tissue-sectioning machine for paraffin blocks. J Clin Pathol 66:151–154
Ortiz-Hidalgo C, Pina-Oviedo S (2019) Hematoxylin: Mesoamerica's gift to histopathology. Palo de Campeche (logwood tree), pirates' most desired treasure, and irreplaceable tissue stain. Int J Surg Pathol 27(1):4–14
Pakurar AS, Bigbee JW (2004) Digital histology: an interactive CD atlas with review text. Wiley, Hoboken
Parker LM (2002) Anatomical dissection: why are we cutting it out? Dissection in undergraduate teaching. ANZ J Surg 72:910–912
Parker EU, Reder NP, Glasser D et al (2017) NDER: a novel web application for teaching histology to medical students. Acad Pathol 4:1–5
Parwani AV (2019) Next generation diagnostic pathology: use of digital pathology and artificial intelligence tools to augment a pathological diagnosis. Diagn Pathol 14:138
Paulsen FP, Eichhorn M, Brauer L (2010) Virtual microscopy – the future of teaching histology in the medical curriculum? Ann Anat 192:378–382
Payne K, Keith MJ, Schuetzler RM et al (2017) Examining the learning effects of live streaming video game instruction over Twitch. Comput Hum Behav 77:95–109
Peplow PV (1990) Self-directed learning in anatomy: incorporation of case-based studies into a conventional medical curriculum. Med Educ 24:426–432
Perez S (2019) Twitch continues to dominate live streaming with its second-biggest quarter to date. [online] Techcrunch.com. Available at: https://techcrunch.com/2019/07/12/twitch-continues-to-dominate-livestreaming-with-its-second-biggest-quarter-to-date/. Accessed 16 Feb 2020
Pickering JD (2014) Taking human anatomy drawings for teaching outside the classroom. Surg Radiol Anat 36:953–954
Pickering JD (2015) Anatomy drawing screencasts: enabling flexible learning for medical students. Anat Sci Educ 8:249–257
Pickering JD, Joynes VCT (2016) A holistic model for evaluating the impact of individual technology-enhanced learning resources. Med Teach 38:1242–1247
Pickering JD, Lazarus MD, Hallam JL (2019) A practitioner's guide to performing a holistic evaluation of technology-enhanced learning in medical education. Med Sci Educ 29:1095–1102
Pinder KE, Ford JC, Ovalle WK (2008) A new paradigm for teaching histology laboratories in Canada's first distributed medical school. Anat Sci Educ 1:95–101
Pratt RL (2009) Are we throwing histology out with the microscope? A look at histology from the physician's perspective. Anat Sci Educ 2:205–209
Prince KJAH, Scherpbier AJAA, van Mameren H et al (2005) Do students have sufficient knowledge of clinical anatomy? Med Educ 39:326–332
Purkyně JE (1834) Der microtomische quetscher, ein bei mikroscopischen untersuchungen unentbehrliches instrument (The microtomic compressor, an instrument indispensable in microscopical investigations). Müller's Archiv 1:385–390
Purtle HR (1973) History of the microscope. In: Gray P (ed) The encyclopedia of microscopy and microtechnique. Van Nostrand Reinhold, New York, pp 252–260
Raikos A, Waidyasekara P (2014) How useful is YouTube in learning heart anatomy? Anat Sci Educ 7:12–18
Reeve J (2002) Self-determination theory applied to educational settings. In: Deci EL, Ryan RM (eds) Handbook of self-determination research. University of Rochester Press, Rochester
Robboy SJ, Weintraub S, Horvath AE et al (2013) Workforce Project Work Group. Pathologist workforce in the United States: I. Development of a predictive model to examine factors influencing supply. Arch Pathol Lab Med 137(12):1723–1732
Rojo MG, García GB, Mateos CP et al (2006) Critical comparison of 31 available digital slide systems in pathology. Int J Surg Pathol 14(4):285–305
Ronchi V (1946) Perche non si ritrova l'inventore degli occhiale? (Why hasn't the inventor of eyeglasses been found?). Rivista di Oftalmologia 1:140–144
Rosen E (1956a) The invention of eyeglasses, part 1. J Hist Med All Sci 11(1):13–46
Rosen E (1956b) The invention of eyeglasses, part 2. J Hist Med All Sci 11(1):183–218
Ross B, Gage K (2006) Global perspectives on blended learning: insight from WebCT and our customers in higher education. In: Bonk CJ, Graham CR (eds) Handbook of blended learning: global perspectives, local designs. Pfeiffer Publishing, San Francisco
Ryan RM, Deci EL (2000) Self-determination theory and the facilitation of intrinsic motivation, social development and well-being. Am Psychol 55:68–78
Satta G, Edmonstone J (2018) Consolidation of pathology services in England: have savings been achieved? BMC Health Serv Res 18:862
Schmidt H (1998) Integrating the teaching of basic sciences, clinical sciences, and biopsychosocial issues. Acad Med 73:S24–S31
Schutte A, Braun M (2009) Virtual microscopy: experiences of a large undergraduate anatomy course. HAPS Educ 14:39–42
Scoville SA, Buskirk TD (2007) Traditional and virtual microscopy compared experimentally in a classroom setting. Clin Anat 20:565–570
Seymour RB, Kauffman GB (1992) Formaldehyde: a simple compound with many uses. J Chem Educ 69(6):457
Sieben A, Oparka R, Erolin C (2017) Histology in 3D: development of an online interactive student resource on epithelium. J Vis Commun Med 40:58–65
Silage DA, Gil J (1985) Digital image tiles: a method for the processing of large sections. J Microsc 138(2):221–227
Singer C (1915) The dawn of microscopical discovery. J Roy Micr Soc Aug:336–338
Sjoblom M, Hamari J (2017) Why do people watch others play video games? An empirical study on the motivations of Twitch users. Comput Hum Behav 75:985–996
Sleeman J, Lang C, Dakich E (2020) Social media, learning and connections for international students: the disconnect between what students use and the tools learning management systems offer. Australas J Educ Tec 36:44–56
Smith M (1996) Ptolemy's theory of visual perception: an English translation of the optics with introduction and commentary. Trans Am Philos Soc 86(2):231
Smith M (2010) Alhacen's theory of visual perception: a critical edition with English translation and commentary of book 7 of Alhacen's 'De Aspectibus', the medieval Latin version of Ibn al-Haytham's 'Kitāb al-Manāẓir'. Trans Am Philos Soc New Series 100(3):309
Smythe G, Hughes D (2008) Self-directed learning in gross human anatomy: assessment outcomes and student perceptions. Anat Sci Educ 1:145–153
Stern N (1979) The BINAC: a case study in the history of technology. IEEE Ann Hist Comput 1(1):9–20
Stone DM, Barry DS (2019) Improving virtual learning interactions: reducing transactional distance of online anatomy modules. Anat Sci Educ 12:686–687
Sugand K, Abrahams P, Khurana A (2010) The anatomy of anatomy: a review for its modernization. Anat Sci Educ 3:83–93
Szymas J, Lundin M (2011) Five years of experience teaching pathology to dental students using the WebMicroscope. Diagn Pathol 6:S13
Thompson AR, Lowrie DJ Jr (2017) An evaluation of outcomes following the replacement of traditional histology laboratories with self-study modules. Anat Sci Educ 10:276–285
Tuchman AM (1993) Science, medicine, and the state in Germany: the case of Baden, 1815–1871. Oxford University Press, New York
Turing AM (1937) On computable numbers with an application to the Entscheidungsproblem. P Lond Math Soc 42(2):230–265
Turney BW (2007) Anatomy in a modern medical curriculum. Ann R Coll Surg Engl 89:104–107
van Zuylen J (1981) The microscopes of Antoni van Leeuwenhoek. J Microsc 121:309–328
von Waldeyer W (1863) Untersuchungen über den ursprung und den verlauf des axsencylinders bei wirbellosen und wirbeltieren sowie über dessen endverhalten in der quergestreiften muskelfaser [Studies on
107
the origin and course of the axon cylinder in invertebrates and vertebrates as well as on its behavior in the cross-striated muscle fiber]. Z Rat Med 20:193–256 Waterston SW, Stewart IJ (2005) Survey of clinicians’ attitudes to anatomical teaching and knowledge of medical students. Clin Anat 18:380–384 Weaker FJ, Herbert DC (2009) Transition of a dental histology course from light to virtual microscopy. J Dent Educ 73:1213–1221 Weinstein RS (1986) Prospects for telepathology. Hum Pathol 17(5):433–434 Weinstein RS, Descour MR, Liang C et al (2004) An array microscope for ultrarapid virtual slide processing and telepathology: design, fabrication, and validation study. Hum Pathol 35:1303–1314 West Jr D (2017) Digital pathology gives rise to computational pathology. [ONLINE] Available at: http://www. clpmag.com/2017/10/digital-pathology-gives-risecomputational-pathology/. Accessed 1 Feb 2020 Whelan E, Isla N, Brooks S (2019) The effect of social media overload on academic performance. Available at SSRN: https://ssrn.com/abstract=3498265 Wilson AB, Taylor MA, Klein BA et al (2016) Meta- analysis and review of learner performance and preference: virtual versus optical microscopy. Med Educ 50:428–440 Wissozky N (1877) Ueber das eosin als reagens auf hämoglobin und die bildung von blutgefässen und blutkörperchen bei Säugetier und Hühnerembryonen [Eosin as a reagent to hemoglobin and the formation of blood vessels and blood cells in mammalian and chick embryos]. Archiv Mikr Anat 13:479–496 Xu C-J (2013) Is virtual microscopy really better for histology teaching? Anat Sci Educ 6:138 Yen PY, Hollar MR, Griffy H, Lee LMJ (2014) Students’ expectations of an online histology course: a qualitative study. Med Sci Educ 24:75–82 Young J (1929) Malpighi’s “De pulmonibus”. Proc R Soc Med 23(1):1–11 YouTube (2020) YouTube for Press. [ONLINE] Available at: https://www.youtube.com/intl/en-GB/about/press/. 
Accessed 21 Feb 2020 Zito FA, Marzullo F, D’Errico D et al (2004) Quicktime virtual reality technology in light microscopy to support medical education in pathology. Modern Pathol 17:728–731 Zuidervaart HJ (2010) The ‘true inventor’ of the telescope. A survey of 400 years of debate. In: Van Helden A et al (eds) The origins of the telescope. Amsterdam University Press, Amsterdam, p 43 Zureick AH, Burk-Rafel JB, Purkiss JA, Hortsch M (2018) The interrupted learner: how distractions during live and video lectures influence learning outcomes. Anat Sci Educ 11:366–376
6 Digital and Social Media in Anatomy Education

Catherine M. Hennessy and Claire F. Smith
Abstract
The use of images in various forms (drawing, photography, digital applications) has always been intrinsically associated with anatomy; however, the way in which anatomy educators and students create, access, view and interact with images has changed dramatically over the last 20 years. The methods that anatomy educators use to engage with students and the wider public, and how students engage with each other and with faculty, have also changed since the turn of the century, largely due to the emergence of social media. These two facets, the move towards digital images and the use of social media, are now intricately interlinked, because social media enable anatomy educators to share digital learning resources easily and instantly with a global audience. This new trend of using social media to share digital images has created ethical dilemmas that anatomy educators are researching and seeking guidance on, to ensure that they respect the potentially conflicting needs and/or requirements of different stakeholders, including donors, donor families, students, the public, regulators and anatomy educators themselves. Meeting the various needs of stakeholders is complex; however, this chapter suggests an ethical approach through which digital images and social media can continue to be part of anatomy education.

Keywords
Digital images · Social media · Cadaveric images · Anatomy education · Consent

C. M. Hennessy (*) · C. F. Smith
Department of Anatomy, Brighton and Sussex Medical School, University of Sussex, Brighton, UK
e-mail: [email protected]; [email protected]
6.1 Introduction
The use of images has been intrinsically associated with anatomy since the early days of cave paintings. As a visual subject, it is perhaps not surprising that drawings and paintings of the human body served as an essential way to communicate the intricacies of the human form to others. At the turn of the century, with increasing accessibility of the World Wide Web and Web 2.0 technologies, digital media began to evolve, meaning that the creation and dissemination of anatomy images became quicker, cheaper and easier than ever. It has therefore been a natural evolution that anatomy digital media (photographs, drawings, virtual reconstructions, etc.) have become part of the digital world.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020
P. M. Rea (ed.), Biomedical Visualisation, Advances in Experimental Medicine and Biology 1260, https://doi.org/10.1007/978-3-030-47483-6_6

Another category of digital media that have penetrated the field of anatomy are social media, including platforms such as Facebook, Twitter, Instagram and YouTube. Anatomy educators worldwide have been using social media as educational adjuncts to communicate with students and share educational content, and these have largely received extremely positive feedback from students. One of the benefits of social media is that they allow instant communication with a worldwide audience, and many social media platforms, in particular educational Facebook pages and Twitter and Instagram accounts, are publicly available, allowing for increased reach of anatomy educational content. Whilst the use of digital media has brought many positives, it does not come without challenges, for example, the posting of inappropriate content. A wider discussion is needed as to whether photographs and videos of cadaveric material should be publicly shared on social media. It could be argued that if the donor gave consent for images to be taken then there should be no need for concern. However, with the multigenerational and far-reaching global capabilities of the Internet, it is very much possible that family and friends could recognise a donor or body parts, especially if more identifiable areas, for example the face and hands, are displayed. Moreover, even if the cadaveric material being shared is not identifiable, cadaveric material is by nature sensitive and for many social media users could be overly explicit, particularly given that cultures and beliefs towards the dead vary worldwide. This debate extends to other anatomy education digital media, including 3D prints and virtual anatomy reconstructions, since the origin of the donors (and possibly their consent) on which the reconstructions are based is not always clear.
These issues raise the need for greater ethical consideration when sharing content on digital media, to ensure that respect for donors and their confidentiality and anonymity are maintained. This chapter explores the evolution of images in anatomy education, with a particular focus on the role of digital and social media and the ethical considerations surrounding their combined use within the anatomy profession.
6.2 Pre-digital Age
The study of anatomy is believed to have started in the Stone Age with cave paintings of body structures; these perhaps represent the first images of anatomy. Over time, anatomy was influenced by various notable figures such as Aristotle (384–322 B.C.) and later Galen (129–199 A.D.), who both dissected and wrote anatomical texts that became the standard for many years, although we now know that some of their content was incorrect. A key part of the dissemination of knowledge at this time was the use of the written word and drawings, and this continues today, even if the means of delivery has changed. Drawings by Leonardo da Vinci (1452–1519), who himself dissected over 30 cadavers (Bouchet 1996), were used in teaching anatomy, although today they are more likely to be classed as a mixture of accurate drawings and art. Andreas Vesalius (1514–1564) published De humani corporis fabrica (On the Structure of the Human Body) in seven books, in which he carefully integrated text and drawings made from his observations of dissections and helped set anatomy on a new course towards a more scientific method (Anderhuber 1996). Accompanying the textbooks were fine woodcuts depicting human bodies at various stages of dissection (Dyer and Thorndike 2000), illustrating the need to retain what had been discovered during dissection. Anatomy, art and the drawing of the human form remained closely linked during the eighteenth and nineteenth centuries, when anatomy played a significant role in the establishment of art academies (Lee 2019). The trend of drawings being closely associated with text continued, with more modern anatomy textbooks containing collections of images showing the human form at progressively deeper layers of dissection and through different modalities, for example ultrasound (Smith et al. 2018). Arguably, the most famous anatomy textbook is Gray's Anatomy, named after Henry Gray of London's St. George's Hospital Medical School and first published in 1858 (Richardson 2008). However, whilst the book is known by one name, Gray's, it was created by two surgeon-anatomists: Henry Vandyke Carter created the illustrations for the text written by Henry Gray (Richardson 2008). The textbook used the tried and trusted method of integrating text with diagrams, with the images produced by Carter illustrating the anatomy of the human body to a very high standard. A key challenge in the production of early anatomy textbooks was the reproducibility of the words and images; however, as printing technologies improved, so did the quality of the final product, and moreover the price of textbooks fell, making them much more accessible to the mass student market. This historical description serves to illustrate that throughout the history of anatomy education, human anatomy has always been captured in some pictorial form, through drawing, painting and the production of wax models; this later progressed to photography.
6.2.1 Photography of Cadaveric Material

The introduction of photography in anatomy education allowed for the creation and collection of real cadaveric images and, subsequently, the production of anatomy dissection atlases. Photographic atlases of dissected cadavers brought the 'reality' of anatomy much closer to the learner and offered learners the opportunity to study 'real' cadaveric material (Abrahams et al. 2002). Although for many people being able to view, study or dissect human cadavers is not an opportunity they would ever want, within the healthcare professions having this opportunity has always been deemed a great privilege. Historically, medical students were not held in any regard unless they had dissected a human body. This contributed to the era of 'body snatchers', when medical schools were desperate to give their students the opportunity to dissect and to set themselves above other medical schools. The same level of privilege remains today regarding access to human cadavers, evident in the fact that anatomy laboratories are normally highly secured areas accessible only to specific relevant individuals. With the advent of radiological examination, microscopy and computer-generated images through CT and MRI, the ways of viewing the human body and of diagnosing and treating disease have progressed dramatically from plain photography. Anatomy, like other disciplines, embraced these new technologies for their ability to support and enhance healthcare curricula. One key advance was digital photography, which, together with more sophisticated computer software (e.g. Photoshop), meant that anatomists could take, edit and reproduce their own images with relative speed and ease. However, there was limited scope to move such images around, with educational CD-ROMs being the primary method of dissemination. Today, these resources are no longer confined by size or location and can be easily distributed to global audiences through the World Wide Web.

6.3 Digital Age

In the early or 'Web 1.0' era (1990–2000) of the Internet, information and communication technologies consisted of static websites whose content could be altered only by the website owner. However, the arrival of the 'Web 2.0' era of social media allowed Internet users to evolve from passive browsers of the Internet into active content creators (Cheston et al. 2013). For clarity, social media are said to be Internet-based tools, such as websites and applications, which allow users to retrieve, explore and actively participate in content creation and editing (including information, ideas, personal messages, images and other content), through open and often real-time collaboration with other users (McGee and Begg 2008; Ventola 2014). Perhaps because of the history intrinsically linking anatomy, art and the need to display
images, anatomy became an early adopter of the Internet. Universities in the United Kingdom were using the Internet via the JANET network (JANET 2019) in the early 1990s, which was set up on a not-for-profit basis to provide computing support for education. At that time, there was little commercial Internet-based activity apart from in Higher Education institutions, with the other main users of the Internet being government organisations and, perhaps surprisingly, the pornography industry (Brooks 1999). Early computer-based Web 1.0 applications in anatomy emerged, such as 'The Embryonic Disc' (Cook 2008) and 3D Skeleton (Cox 1996), through which a student could access early digitally created images on a library-based PC. The Acland's Anatomy video series is another anatomy education resource known globally. This series of videos contains expertly dissected cadaveric specimens of the entire human body, complete with narrations and demonstrations by Dr. Robert Acland himself, for learners to view and learn from at their own pace. Originally, these videos were only available on library PCs; however, university libraries can now subscribe to the online versions, meaning that the Acland videos are available to students anytime and anywhere via the Acland Anatomy website (Acland 2019). Later, interactive anatomy educational websites such as 'Primal Pictures' (Primal Pictures 2019) and 'Visible Body' (Visible Body 2020) emerged, where users can control the content they are viewing, such as adding and removing anatomical layers and viewing the image from various angles. As technology has developed, the capacity to generate 'digital computer-based' or 'virtual' images has evolved, as has the ability of learners to interact with images through VR and AR (McMenamin et al. 2018). The transition described above reflects six generations of anatomy digital learning: desktop-based, mobile-based, digital dissection tables, augmented reality, virtual reality and multiuser experiences.
These generations are not completely linear, as AR and VR technologies have progressed along similar time frames; nor does this classification include the role social media play in anatomy education.
6.4 Social Media
Social media have become ubiquitous in today's society, and for the majority of current students within higher education, using social media to acquire information and communicate with colleagues comes as second nature. Today's generation of 'Millennial' students (those born close to the turn of the millennium) are said to display aptitudes, attitudes, expectations and learning styles correlating with their digitally enriched upbringings (Roberts 2005; DiLullo et al. 2011). In line with this, Jones and Shao (2011) describe how the obvious changes in behaviour amongst student cohorts today are their use of social networking sites to access multidigital media and their use of handheld devices to access the mobile Internet. Keenan et al. (2018) reported that 94% of medical students use at least one social media platform to support their learning, a trend which seems to be continuing from the earlier findings of Bosslet et al. (2011) and George et al. (2013). Facebook, YouTube and Twitter have been reported as the platforms most commonly used by medical students (Hall et al. 2013; Foley et al. 2014; El Bialy and Jalali 2015; Al Wahab et al. 2016) to source and share information (Barry et al. 2016; Jaffar 2012; Mukhopadhyay et al. 2014). In more recent years, Instagram and Snapchat have become increasingly popular amongst medical students (Knight-McCord et al. 2016). Teachers within higher education have responded to this trend by incorporating social media platforms into their teaching practice and have reported positive outcomes for students, including online discussion opportunities, resource sharing and assessment preparation (Wang 2013; Donlan 2014; Albayrak and Yildirim 2015; Ali 2016). Within medical education, social media have been advocated as a modern means for educators to communicate and engage with their learners (Kind et al. 2014; Choo et al. 2015; Roy et al. 2016), especially since social media were reported to enable members of faculty to provide feedback to learners and to increase learner satisfaction (Cheston et al. 2013). Bergl and
Muntz (2016) have proposed extending the use of social media into the later clinical training years of medicine, since they provide a convenient platform for sharing knowledge, reflective writing, shared problem solving and peer teaching amongst busy contemporary clinician-educators and learners alike. In effect, social media are said to support online communities of practice by facilitating communication and providing a platform for members to share and access learning resources (Guckian et al. 2019), exemplified by the growing phenomenon of #FOAMed, the Free Open Access Medical Education hashtag (Shah and Kotsenas 2017).
6.5 Social Media in Anatomy Education
Social media are widely used in the field of anatomy as educational adjuncts (Chytas 2019). Dedicated educational Facebook pages have been created by anatomy educators to support medical students' anatomy learning. One example is the Human Anatomy Education Page created by Jaffar (2014) for second-year medical students in the United Arab Emirates, the aim being to supplement classroom-based teaching with posts containing information and tasks, including images in which students were asked to identify labelled structures (see Fig. 6.1), multiple-choice questions, explanatory comments, video links or links to other online anatomy resources, short-answer questions and anatomy art. A similar module-specific educational Facebook page was set up for medical students at the University of Leeds; however, no learning resources were posted on this page, since it was created purely as a space for medical students to communicate with each other and with the anatomy lecturer to ask questions relating to anatomy learning and assessments (Pickering and Bickerdike 2017). Twitter has also been used by anatomy educators to help medical students learn anatomy. Hennessy et al. (2016) created a course-specific Twitter hashtag for students to follow and engage
Fig. 6.1 An example of a post from the Human Anatomy Education Facebook page containing an image of an anatomical model. Students are asked questions on the arrowed structures. (Permission to use has been received with thanks from the owner Dr. Akram Jaffar)
with whilst learning neuroanatomy. The advantage of the hashtag was that it streamlined and curated all tweets containing the hashtag to one location on Twitter for easy access. The anatomy faculty encouraged students to tweet using the hashtag for course-related issues, but no specific instructions for how the hashtag should be used were given. The types of tweets posted included faculty and students sharing learning ideas, such as recreated diagrams and acronyms, students asking questions, morale-boosting messages and tweets sharing worries. Hennessy et al. (2016) concluded that the Twitter hashtag created a learning environment for students to communicate with faculty and engage with the anatomy course material. An unexpected finding was that the hashtag also provided a platform for students to offload stress and worries, effectively creating a supportive network amongst the students whilst they were learning the notoriously difficult subject of neuroanatomy. Anatomy educators at the University of Bristol have also been using Twitter, particularly its polling tool, to support medical students' learning (Gunn et al. 2016) using the Twitter handle @Bristoldemos (now updated to @UoBrisAnatomy). These polls were designed to act as multiple-choice quizzes for students to monitor their learning during anatomy courses, in effect acting as regular mini formative assessments (see Fig. 6.2).

Fig. 6.2 An example of a Twitter poll from the Bristol Anatomy Demos Twitter account. Students are asked to vote for whichever answer they think is correct by clicking on the relevant option, similar to a multiple-choice question. Once the poll closes, it can be retweeted with the correct answer to provide feedback to students. (Permission to use has been received with thanks from the owners of the Bristol Anatomy Demos Twitter account at The Centre for Applied Anatomy at the University of Bristol, who have updated their Twitter handle to @UoBrisAnatomy)

In all of the above studies, the Facebook and Twitter education tools were never made compulsory for students to use. However, each study demonstrated a favourable percentage uptake by students [89% by Jaffar (2014), 91% by Hennessy et al. (2016), 48% by Pickering and Bickerdike (2017) and 31% by Gunn et al. (2016)]. These studies suggest that students were interested and willing to engage with academic social media platforms. However, only a minority of students actively made contributions to the platforms; the majority opted merely to observe the contributions made by others (Hennessy et al. 2016; Pickering and Bickerdike 2017). There are several reported barriers or challenges that prevent students, particularly students within the healthcare professions, from contributing to the social media platforms designed to supplement anatomy learning.

6.5.1 Challenges of Using Social Media in Anatomy Education
6.5.1.1 Maintaining a Professional Digital Footprint

For medical students, any inappropriate use of social media, such as sharing offensive or confidential information, is a breach of professionalism guidelines and standards worldwide (Hennessy et al. 2019a). It is common practice for medical program directors to give medical students firm warnings about not posting unprofessional content on social media. An example is the following email Harvard Medical School students received: 'items that represent unprofessional behaviour that are posted by you on such networking sites reflect poorly on you and the profession. Such items may become public and could subject you to unintended exposure and consequence' (Jain 2009). Such warnings can be seen as necessary due to frequent reports of unprofessional behaviour displayed on the social media profiles of medical students, such as excessive drinking and drunkenness, overt sexuality, foul language and patient privacy violations, particularly in the early years of Facebook (Thompson et al. 2008; Chretien et al. 2009). Although it has been suggested that today's 'Z-Generation' of medical students are savvier and more conservative about what they post on social media (Iqbal 2018), there are several reports from more recent years of unprofessional content being posted by medical students (Langenfeld et al. 2014; Koo et al. 2017; Barlow et al. 2015; Kitsis et al. 2016), with consequences ranging from receiving warnings to being removed from medical programs. Unsurprisingly, as a result, some medical educators have concerns about incorporating social media into educational practice due to the potential exposure of unprofessional behaviours on social media by students (Walton et al. 2015; George and Dellasega 2011; Cheston et al. 2013). Similarly, medical students are becoming increasingly aware of their digital footprint and the idea that social media posts may 'come back to bite you' in the future (Guckian et al. 2019), and so refrain from unnecessary contributions to social media.
6.5.1.2 Exposing a Lack of Knowledge

Another common barrier expressed by medical students is the fear of showing a lack of knowledge and this being exposed on social media. In the study by Pickering and Bickerdike (2017), students reported feeling that showing a lack of knowledge might damage their reputation and how they are perceived by peers and educators alike. Guckian et al. (2019) supported this finding, adding that although students report a fear of 'missing out', they also fear getting involved in open discussions due to feeling 'exposed'. This is linked to students being conscious of their digital footprint and of the consequences of what they say today impacting on their future. Students therefore reported a preference for communicating via private group chats such as Facebook Messenger (Guckian et al. 2019).

6.5.1.3 Social Media Fatigue

Border et al. (2019) identified a decline in the uptake of anatomy education social media tools by recent cohorts of students and proposed that social media fatigue (Bright et al. 2015) is a contributing factor. The aforementioned 'fear of missing out' (FOMO) described by students in Guckian's study is well recognised as a contributor to social media fatigue, and there is increasing evidence and awareness (including amongst students) that overuse of social media is linked to an increase in both anxiety and depression (Dhir et al. 2018). Students report finding the vast amount of educational support available on social media 'overwhelming' (Guckian et al. 2019), which raises questions about how much information our brains can cope with to create effective learning. Cognitive load theory (Young et al. 2014) suggests that an overload of information, especially in short-term memory, can hinder processing and hence learning (Smith and Border 2019). Similar concerns have been raised about the educational value of anatomy
apps that provide a full 360-degree view of the body, with research showing that six key views are enough to improve retention of anatomy knowledge (Garg et al. 2001).
6.5.1.4 Invasion of Privacy

In an early study by Szwelnik (2008), students reported unease around using their social media with educators, as they considered it their 'private personal space'; this is a lingering but possibly diminishing barrier. Jaffar (2014) reported that although students expressed concerns about their privacy, the majority engaged with the Human Anatomy Education Facebook Page, possibly due to FOMO or perhaps an element of curiosity about a new learning platform (Guckian et al. 2019). Jaffar (2014) suggested that educators create educational Facebook pages for students to join, rather than communicating with students through a personal Facebook account (which would require students and educators to become Facebook 'friends'), so that students feel there is a level of distance between their personal social media space and their educators. Despite these barriers, the literature suggests that the benefits of using educational social media platforms outweigh any potential negative outcomes (Al Wahab et al. 2016; Ali 2016; Cartledge et al. 2013; Hennessy et al. 2016; Jaffar and Eladl 2016; Pickering and Bickerdike 2017). It can be argued that, as long as anatomy educators acknowledge and respect the potential challenges (Peluchette and Karl 2008; Chretien et al. 2009; Marnocha et al. 2015), introducing social media platforms may provide crucial early professional development for medical students, who face a society where using social media to communicate and exchange knowledge is the reality (Bergl and Muntz 2016; Choo et al. 2015).
6.5.2 Benefits of Using Social Media in Anatomy Education

Using social media as anatomy educational aids appears to be overwhelmingly positive. Studies have shown that large majorities of students perceived that interacting with Facebook pages helped their learning (Jaffar 2014; Pickering and Bickerdike 2017). The majority of interactions on Pickering's Facebook page were students asking questions about anatomy (Pickering and Bickerdike 2017). The quick and easy means of communication with educators that these platforms offered was also highly valued by students (Hennessy et al. 2016), which supports previous findings by Kind et al. (2014). Further common positive outcomes from the use of these Facebook and Twitter educational aids included increased student engagement with course material and increased motivation to learn. The Twitter hashtag described by Hennessy et al. (2016) also appeared to develop into a space where students could share their stresses and worries during the demanding neuroanatomy course: the hashtag acted as a support network for students and reportedly boosted morale. A similar theme was identified by Pickering and Bickerdike (2017), with students reporting that making use of the learning support information on the Facebook page in the lead-up to assessments reduced their anxiety levels. As already explained, Facebook and Twitter have been used as formative assessment methods via polls, multiple-choice question posts and picture-labelling posts (Jaffar 2014; Gunn et al. 2016). Posts containing quizzes and revision techniques have been found to be the most valued by students (El Bialy and Jalali 2015), perhaps because students can get timely feedback on their learning, since the correct answer is generally fed back by the educator shortly after the initial post. It has yet to be determined whether such use of social media impacts anatomy knowledge scores (which is perhaps the ultimate goal, particularly for students), with studies to date reporting conflicting results (Arnbjörnsson 2014).
However, perhaps unsurprisingly, it has been consistently shown that the students who contribute and engage more with social media educational tools, by 'liking', 'commenting' and participating in discussions, tend to be the high-achieving students (Michikyan et al. 2015; Hennessy et al. 2016; Jaffar and Eladl 2016; Pickering and Bickerdike 2017).

Another way in which educators use social media is as an access point for directing students to recommended learning materials (Cole et al. 2017), which for some educators may be their own online channels. For example, anatomy faculty at the University of Southampton have created a bank of short videos containing narrated screencasts of anatomical drawings explaining various anatomical structures and concepts, which can be viewed on their YouTube channels 'Soton Brain Hub' and 'Soton Anatomy Hub' (Border 2019). Links to these YouTube videos are regularly posted on the associated Facebook, Twitter and Instagram accounts to remind and encourage students and followers, particularly Southampton medical students, to use the videos, since they are tailor-made for those students' learning. Jaffar (2014) likewise used the Human Anatomy Education Facebook page as a means of directing students to YouTube videos created by the same author, since these were considered most relevant to the learning outcomes for that particular anatomy course. There is evidence that students prefer learning materials that are specifically designed for the anatomy course they are studying, and prefer resources created by their own anatomy educators over generic resources (Pickering and Bickerdike 2017).
6.5.3 Challenges for Educators and the Profession of Anatomy
Visual learning resources and images are the cornerstone of anatomy education, as evidenced by the long-standing connection between anatomy and art, drawing and images. Certain social media platforms, including YouTube, Facebook and Instagram, are particularly suited to sharing images and videos, which is probably why increasing numbers of image-based anatomy learning resources are appearing on social media, most of them freely and publicly available. Of course, in the field of anatomy such images frequently contain human cadaveric material, and herein lies the unique challenge of using social media in the field. Human cadaveric material is commonly shared inappropriately (with no educational purpose) on public, anatomy-related social media accounts (Bond 2013; Anonymous 2014; Hutchinson 2018). This has clear ethical implications for the anatomy profession, since there is no explanation of where the cadavers were sourced or whether consent was received from donors to share such images (Hildebrandt 2019).
6.5.4 Ethics and Consent
The creation of artwork or images, especially those involving human tissue, raises some interesting points for discussion. It could be argued that in the creation of art the intellectual property lies with the artist, although where the art is so accurate and lifelike that it approaches photographic realism, this position becomes less clear (Lee 2019). When considering anatomy and digital images (photographs and videos), there is a question as to who owns the work: the donor, the donor's family, the anatomist, or the student who took the photograph? This raises a further question: is explicit consent needed for digital images to be taken and shared on public social media platforms?
6.5.4.1 Consent
In the United Kingdom, under the Human Tissue Act 2004, donors are required to give signed consent for the capturing of images (HTA 2019). In cases where consent is not provided (e.g. if the donor completed the previous version of the donation form, under the Anatomy Act 1984, which did not specify consent to capture images), consent must not be implied: the donor is deemed not to have given consent unless this is explicitly stated. This indicates the level of caution expected of anatomists in the United Kingdom regarding the appropriate use of cadaveric images, and the strong ethos of respect for the wishes of donors and their families. Although the same level of respect for donors is expected as standard practice for anatomists worldwide, the levels of consent for capturing images of donors vary greatly across the globe. Typically, no explicit consent is received from donors in the United States of America (National Conference of Commissioners on Uniform State Laws 2009) or Australia (UWA 2019), with donation programmes often merely receiving broad consent for educational use of donated bodies. Furthermore, donation programmes are the exclusive source of bodies in only 32% of anatomy laboratories worldwide, and unclaimed bodies remain the primary source of cadavers in 57% of countries (Habicht et al. 2018), meaning that in many cases consent cannot be guaranteed and may not even be possible. This is not to suggest that anatomists routinely abuse the ability to capture images of cadaveric material; however, it does make it more likely that cadaveric images appearing on social media have not been consented to by the individual in question. This is significant because in some countries, such as the United Kingdom, explicit consent for taking images is standard practice, and sharing cadaveric images on social media is rarely considered acceptable. Likewise, the guidance from The Anatomical Society (of Great Britain and Ireland) prohibits members from sharing cadaveric material (Hennessy et al. 2019b), meaning that anatomists there may find it deeply troubling to see cadaveric images on public social media platforms. Given the global reach of social media, anatomists who share cadaveric images must be mindful of their potential audience and of the possibility that viewing such images on social media may conflict with local laws and cultures.
6.5.4.2 Access to Images
Once, cadaveric images were reserved for medical and allied healthcare students in a university or hospital library. Nowadays, however, these images can easily be found through Internet search engines such as Google and on social media platforms. This has led to an unintended change in who is able to see such images, with both positive and negative outcomes. One positive outcome of broadening access to cadaveric images is that it demystifies the human body, and such images can be used to promote health-related messages and to educate the public about anatomy (Rai et al. 2019). It may also inform the public about the option of donating their bodies to anatomical science and help individuals understand how anatomists use donor bodies in anatomy education. This assumes that the cadaveric images shared on social media by anatomists are always of a professional and educational nature, displaying the utmost respect towards the donor, which, of course, cannot be guaranteed.

However, there are many areas of concern arising from the perspectives of the various stakeholders surrounding an anatomical donor. Donors themselves might not have understood their consent for images to extend to those images being shared on online platforms, including social media, because donors might not have received enough information in advance of donation (Farsides and Smith 2020). At the same time, sharing cadaveric images on public social media platforms without explicit consent from donors may cause concern or distress for relatives of donors. It is also possible that users, even those well accustomed to viewing cadaveric material, may find such content on social media inappropriate and distressing. The repercussions of sharing images of patients without consent were highlighted in a recent case in which a man discovered that an image of his amputated leg was being used as a health warning on cigarette packets in France (BBC 2019). Although the intention in this case was to promote health, the man and his family, who were able to identify the leg by its characteristic scars, felt 'betrayed' and 'stunned'. In a similar way, an anatomist may believe that it is ethical to share human cadaveric material if the content is shared for educational purposes; however, without informed consent it cannot be assumed that donors would have anticipated that images of their donated body would be shared publicly on social media.
Two examples of unethical practice in the field of anatomy have been reported in the literature: one involving a student taking a selfie with a donated cadaver (Anonymous 2014) and another in which staff were disciplined for posting images of human body parts on Instagram (Bond 2013). Both highlight that these are real issues facing the anatomy profession in today's digital society.
6.5.5 Solutions
Creating international standards in the field of anatomy is challenging because the methods of, and laws surrounding, body donation vary so greatly across the globe. The International Federation of Associations of Anatomists has previously recommended that only donated bodies be accepted by anatomy departments worldwide, and although this would be an ideal system, it is far from the reality (IFAA 2012). Changing body donation systems and laws takes time, and hence it is likely to be a considerable time before donation forms in all anatomy departments worldwide ask donors for explicit consent to capture and share cadaveric images on social media. Nevertheless, anatomists cannot continue to share cadaveric images on social media without acknowledging the source of the donor. Anatomists once had a dubious reputation owing to ambiguity around how bodies were sourced, so it is imperative that they maintain their more recently earned professional reputation by being explicit about where cadaveric material is sourced and how it is used. With this in mind, we believe that all cadaveric images shared on social media must be accompanied by a statement indicating that informed consent has been received from the donor for images of their body to be shared on social media. We believe this will ensure transparency for donors and donor families about how anatomical donors are used, and will build confidence amongst the public that anatomists share donated material ethically.
6.5.5.1 Statement of Consent
Jones (2019) raised a similar argument about donor expectations in relation to using donor bodies to create 3D-printed anatomy models, stating that donors are likely to expect their bodies to be used for local medical and healthcare education rather than prints or images of their body being distributed and sold worldwide. Cornwall et al. (2016) have also argued that, by using donors indiscriminately, anatomists risk giving the public the impression that the value of body donation is being undermined. Jones (2019) has suggested that explicit informed consent must be received from donors before anatomical 3D-printed material is created, and that the distribution of anatomical 3D prints 'should be accompanied by a statement regarding details of the consent provided by body donors and an acknowledgement of the body donor's contribution' to anatomy education, a suggested standard practice that is transferable to posting cadaveric material on social media.
6.5.5.2 Suggested Actions for Creating More Ethical Social Media Use in Anatomy
• Think twice before hitting 'post': consider the text or images that you are sharing on social media and the impact they might have on your digital footprint, on donors and donor families, and on the profession of anatomy, especially if you are sharing images containing human cadaveric material.
• If sharing images containing human cadaveric material, include a statement confirming that 'consent has been received from the donor' and acknowledge the donor's contribution.
• If images containing human cadaveric material appear on your social media, ask the author of the post whether explicit consent has been received from the donor.
• Report content which is deemed to be unethical.
• Make clear statements to staff and students about the local regulations.
6.6
Conclusion
Art and images will undoubtedly be forever linked to the subject of anatomy. As the sophistication and ease of creating and sharing anatomical images advance, so too do the possibilities for anatomy education. Moreover, the rise of social media means that capturing and sharing images, including cadaveric images, has become boundaryless owing to social media's global reach. However, it must not be forgotten that anatomical donors are an invaluable resource in anatomy education. Anatomists must take a step back and consider whether, and how, sharing cadaveric images on social media can be made more ethical, so that the reputation of, and trust in, the anatomy profession is maintained.
References Abrahams PH, Marks SC, Hutchings RT (2002) McMinn’s color atlas of human anatomy, 5th edn. Elsevier, London Acland R (2019) Acland’s video atlas of human anatomy. Wolters Kluwer. https://aclandanatomy.com. Accessed 13 Jan 2020 Al Wahab A, Al-Hajo S, AlAhmdani G, Al-Mazroua N, Edan M, Nugud S, Jaffar A (2016) The patterns of usage and perceived impact of social networking sites on medical students’ education. J Nur Healthcare 1(2):1–4 Albayrak D, Yildirim Z (2015) Using social networking sites for teaching and learning: students’ involvement in and acceptance of Facebook as a course management system. J Educ Comput Res 52:155–179 Ali A (2016) Medical students’ use of Facebook for educational purposes. Perspect Med Educ 5(3):163–169 Anderhuber F (1996) The concept of anatomy – classical topographic anatomy. Surg Radiol Anat 18:253–256 Anonymous (2014) High school student sparks anger after posting selfie with dead body. News.com.au, 7 February 2014. News Corp Australia, Surry Hills, Sydney. http://www.news.com.au/technology/online/ highschool-student-sparks-anger-after-posting-selfiewith-dead-body/story-fnjwmwrh-1,226,820,128,333. Accessed 13 Jan 2020 Arnbjörnsson E (2014) The use of social media in medical education: a literature review. Creat Educ 5:2057–2061 Barlow CJ, Morrison S, Stephens HO, Jenkins E, Bailey MJ, Pilcher D (2015) Unprofessional behaviour on social media by medical students. Med J Aust 203:439.1.e1–7 Barry DS, Marzouk F, Chulak-Oglu K, Bennett D, Tierney P, O'Keeffe GW (2016) Anatomy education for the YouTube generation. Anat Sci Educ 9:90–96 BBC (2019) British Broadcasting Corporation. Man finds ‘own amputation’ on cigarette packets without consent. BBC News, 19 July 2019. British Broadcasting Corporation, London. https://www.bbc.co.uk/news/ world-europe-49029845. Accessed 13 Jan 2020 Bergl P, Muntz M (2016) Using social media to enhance health professional education. Clin Teach 13:399–404
Bond A (2013) ‘Hello from the stiffs!’ University staff disciplined for posting pictures of body parts on Instagram in Switzerland. Daily Mail.com News, 2 August 2013. Daily Mail and General Trust Ltd., London. https://www.dailymail.co.uk/news/article-2383718/Zurich-University-staff-disciplinedpostingpictures-body-parts-Instagram-Switzerland.html#ixzz3h3LtLi42. Accessed 13 Jan 2020 Border S (2019) Assessing the role of screencasting and video use in anatomy education. In: Biomedical visualisation. Springer, Cham, pp 1–13 Border S, Hennessy C, Pickering J (2019) The rapidly changing landscape of student social media use in anatomy education. Anat Sci Educ 12:577–579 Bosslet GT, Torke AM, Hickman SE, Terry CL, Helft PR (2011) The patient-doctor relationship and online social networks: results of a national survey. J Gen Intern Med 26(10):1168–1174 Bouchet A (1996) In defence of human anatomy – a commentary. Surg Radiol Anat 18:159–165 Bright LF, Kleiser SB, Grau SL (2015) Too much Facebook? An exploratory examination of social media fatigue. Comput Hum Behav 44:148–155 Brooks M (1999) The porn pioneers. The Guardian. https://www.theguardian.com/technology/1999/sep/30/onlinesupplement. Accessed 13 Jan 2020 Cartledge P, Miller M, Phillips B (2013) The use of social-networking sites in medical education. Med Teach 35(10):847–857 Cheston CC, Flickinger TE, Chisolm MS (2013) Social media use in medical education: a systematic review. Acad Med 88:893–901 Choo EK, Ranney ML, Chan TM, Trueger NS, Walsh AE, Tegtmeyer K, McNamara SO, Choi RY, Carroll CL (2015) Twitter as a tool for communication and knowledge exchange in academic medicine: a guide for skeptics and novices. Med Teach 37:411–416 Chretien KC, Greysen SR, Chretien JP, Kind T (2009) Online posting of unprofessional content by medical students. J Am Med Assoc 302:1309–1315 Chytas D (2019) Use of social media in anatomy education: a narrative review of the literature. 
Ann Anat 221:165–172 Cole D, Rengasamy E, Batchelor S, Pope C, Riley S, Cunningham AM (2017) Using social media to support small group learning. BMC Med Educ 17(1). https://doi.org/10.1186/s12909-017-1060-7. Accessed 13 Jan 2020 Cook J (2008) The embryonic disc. University College London. https://www.ucl.ac.uk/innovations/embryonic/. Accessed 13 Jan 2020 Cornwall J, Callahan D, Wee R (2016) Ethical issues surrounding the use of images from donated cadavers in the anatomical sciences. Clin Anat 29:30–36 Cox B (1996) The ultimate 3D skeleton (CD-Rom): a multimedia guide to the human skeleton. Dorling Kindersley Publishers Ltd Dhir A, Yossatorn Y, Kaur P, Chen S (2018) Online social media fatigue and psychological wellbeing – a study
of compulsive use, fear of missing out, fatigue, anxiety and depression. Int J Inf Manag 40:141–152 DiLullo C, McGee P, Kriebel RM (2011) Demystifying the millennial student: a reassessment in measures of character and engagement in professional education. Anat Sci Educ 4:214–226 Donlan L (2014) Exploring the views of students on the use of Facebook in university teaching and learning. J Furth High Educ 38(4):572–588 Dyer G, Thorndike M (2000) Quidne Mortui Vivos Docent? The evolving purpose of human dissection in medical education. Acad Med 75(10):969–979 El Bialy S, Jalali A (2015) Go where the students are: a comparison of the use of social networking sites between medical students and medical educators. JMIR Med Educ 1:e7 Farsides T, Smith CF (2020) Consent in body donation. European Journal of Anatomy. http://sro.sussex.ac.uk/id/eprint/89622/1/FARSIDES_European_Journal_of_Anatomy_JAN_2020_author_copy.pdf Foley NM, Maher BM, Corrigan MA (2014) Social media and tomorrow’s medical students – how do they fit? J Surg Educ 71:385–390 Garg AX, Norman G, Sperotable L (2001) How medical students learn spatial anatomy. Lancet 357(9253):363–364 George DR, Dellasega C (2011) Use of social media in graduate level medical humanities education: two pilot studies from Penn State College of Medicine. Med Teach 33(8):e429–e434 George D, Rovnial L, Kraschnewski J (2013) Dangers and opportunities for social media in medicine. Clin Obstet Gynecol 56(3):1–10 Guckian J, Leighton J, Frearson R, Delgaty L, Finn G, Matthan J (2019) The next generation: how medical students use new social media to support their learning. MedEdPublish 8 Gunn EGM, Cundell D, Duffy S, Thomson N, Patel S, Allsop SA (2016) Anatomy in 140 characters or less: developing @Bristoldemos, a social media based additional learning resource. In: Abstracts of Anatomical Society and British Association of Clinical Anatomists 2016 Summer Meeting, Brighton, UK, July 19–21, 2016. 
Abstract 19. The Anatomical Society, London, UK Habicht JL, Kiessling C, Winkelmann A (2018) Bodies for anatomy education in medical schools: an overview of the sources of cadavers worldwide. Acad Med 93:1293–1300 Hall M, Hanna LA, Huey G (2013) Use and views on social networking sites of pharmacy students in the United Kingdom. Am J Pharm Educ 77:9 Hennessy CM, Kirkpatrick E, Smith CF, Border S (2016) Social media and anatomy education: using Twitter to enhance the student learning experience in anatomy. Anat Sci Educ 9:505–515 Hennessy CM, Smith CF, Greener S, Ferns G (2019a) Social media guidelines: a review for health professionals and faculty members. Clin Teach 16:442–447
Hennessy CM, Keenan ID, Matthan J (2019b) Social media guidelines for engagement with membership and members of the public. Anatomical Society, London. http://www.anatsoc.org.uk/docs/defaultsource/aaac-guidelines/anatomical-socity-some-useguidance-v2c66afd0b7e616b04b21aff0000f035ae.pdf?sfvrsn=410b96a1_2. Accessed 13 Jan 2020 Hildebrandt S (2019) The role of history and ethics of anatomy in medical education. Anat Sci Educ 12:425–431 HTA (2019) Human Tissue Authority. Donating your body. Human Tissue Authority, London. https://www.hta.gov.uk/donating-your-body. Accessed 13 Jan 2020 Hutchinson B (2018) Dental students’ selfie with severed cadaver heads prompts crackdown at Yale. ABC News, 6 February 2018. American Broadcasting Company, New York. https://abcnews.go.com/US/dental-students-selfie-severed-cadaver-heads-prompts-crackdown/story?id=52872606. Accessed 13 Jan 2020 IFAA (2012) Recommendations of good practice for the donation and study of human bodies and tissues for anatomical examination. Plexus: newsletter of the IFAA. January 2012:4–5. http://www.ifaa.net/wp-content/uploads/2017/09/plexus_jan_2012-screen.pdf. Accessed 13 Jan 2020 Iqbal N (2018) Generation Z: ‘We have more to do than drink and take drugs’. The Guardian, 21 July 2018. Guardian News and Media Limited, London. https://www.theguardian.com/society/2018/jul/21/generation-z-has-different-attitudes-says-a-new-report. Accessed 13 Jan 2020 Jaffar AA (2012) YouTube: an emerging tool in anatomy education. Anat Sci Educ 5:158–164 Jaffar AA (2014) Exploring the use of a Facebook page in anatomy education. Anat Sci Educ 7:199–208 Jaffar AA, Eladl MA (2016) Engagement patterns of high and low academic performers on Facebook anatomy pages. J Med Educ Curric Dev 3:1–8 Jain SH (2009) Practicing medicine in the age of Facebook. N Engl J Med 361(7):649–651 JANET (2019) Janet Network. JISC. https://www.ja.net/janet. 
Accessed 13 Jan 2020 Jones DG (2019) Three-dimensional printing in anatomy education: assessing potential ethical dimensions. Anat Sci Educ 12:435–443 Jones C, Shao B (2011) The net generation and digital natives: implications for higher education. The Open University. http://oro.open.ac.uk/30014/. Accessed 13 Jan 2020 Keenan ID, Slater JD, Matthan J (2018) Social media: insights for medical education from instructor perceptions and usage. MedEdPublish 2018:27 Kind T, Patel PD, Lie D, Chretien KC (2014) Twelve tips for using social media as a medical educator. Med Teach 36:284–290 Kitsis EA, Milan FB, Cohen HW, Myers D, Herron P, McEvoy M, Weingarten J, Grayson MS (2016) Who’s misbehaving? Perceptions of unprofessional social media use by medical students and faculty. BMC Med Educ 16:67
Knight-McCord J, Cleary D, Grant N, Herron A, Lacey T, Livingston T, Emanuel R (2016) What social media sites do college students use most. J Undergr Ethnic Minority Psychol 2(21):21–26 Koo K, Ficko Z, Gormley EA (2017) Unprofessional content on Facebook accounts of US urology residency graduates. BJU Int 119:955–960 Langenfeld SJ, Cook G, Sudbeck C, Luers T, Schenarts PJ (2014) An assessment of unprofessional behavior among surgical residents on Facebook: a warning of the dangers of social media. J Surg Educ 71:e28–e32 Lee TC (2019) Anatomy and academies of art II – a tale of two cities. J Anat. https://doi.org/10.1111/joa.13130. Accessed 13 Jan 2020 Marnocha S, Marnocha MR, Pilliow T (2015) Unprofessional content posted online among nursing students. Nurse Educ 40:119–123 McGee JB, Begg M (2008) What medical educators need to know about web 2.0. Med Teach 30:164–169 McMenamin PG, McLachlan J, Wilson A, McBride JM, Pickering J, Evans DJ, Winkelmann A (2018) Do we really need cadavers anymore to learn anatomy in undergraduate medicine? Med Teach 40(10):1020–1029 Michikyan M, Subrahmanyam K, Dennis J (2015) Facebook use and academic performance among college students: a mixed-methods study with a multi-ethnic sample. Comput Hum Behav 45:265–272 Mukhopadhyay S, Kruger E, Tennant M (2014) YouTube: a new way of supplementing traditional methods in dental education. J Dent Educ 78:1568–1571 National Conference of Commissioners on Uniform State Laws (2009) Revised uniform anatomical gift act. https://www.uniformlaws.org/HigherLogic/System/DownloadDocumentFile.ashx?DocumentFileKey=6705441e-40b7-fbd4-edd55748c63fbd79andforceDialog=0. Accessed 13 Jan 2020 Peluchette J, Karl K (2008) Social networking profiles: an examination of student attitudes regarding use and appropriateness of content. CyberPsychol Behav 11:95–97 Pickering JD, Bickerdike SR (2017) Medical student use of Facebook to support preparation for anatomy assessments. 
Anat Sci Educ 10:205–214 Primal Pictures (2019) Primal Pictures powering anatomy. TV. The leading 3D anatomy resource. https://www. primalpictures.com. Accessed 13 Jan 2020 Rai R, Shereen R, Protas M, Greaney C, Brooks KN, Iwanaga J, Loukas M, Tubbs RS (2019) Social media and cadaveric dissection: a survey study. Clin Anat 32:1033–1041 Richardson R (2008) The making of Mr. Gray’s anatomy bodies, books, fortune, fame. Oxford University Press, Oxford Roberts GR (2005) Technology and learning expectations of the net generation. In: Oblinger D, Oblinger J (eds) Educating the net generation, 1st edn. Educause, Boulder, pp 3.1–3.7
Roy D, Taylor J, Cheston CC, Flickinger TE, Chisolm MS (2016) Social media: portrait of an emerging tool in medical education. Acad Psychiatry 40:136–140 Shah V, Kotsenas A (2017) Social media tips to enhance medical education. Acad Radiol 24(6):747–752 Smith CF, Border S (2019) The twelve cranial nerves of Christmas: mnemonics, rhyme, and anatomy – seeing the lighter side. Anat Sci Ed 12(6):673–677 Smith CF, Dilley A, Mitchell BS, Drake RL (2018) Gray’s surface anatomy and ultrasound. A foundation for clinical practice. Elsevier, Edinburgh Szwelnik A (2008) Embracing the Web 2.0 culture in business education: the new face of Facebook. The Higher Education Academy, BMAF Subject Centre Thompson LA, Dawson K, Ferdig R, Black EW, Boyer J, Coutts J, Black NP (2008) The intersection of online social networking with medical professionalism. J Gen Intern Med 23(7):954–957 UWA (2019) The University of Western Australia. Body Donation Program. The University of Western
Australia, Perth, WA, Australia. https://www.uwa.edu.au/science/resources/body-donation-program. Accessed 13 Jan 2020 Ventola CL (2014) Social media and health care professionals: benefits, risk and best practices. Pharm Therapeut 39:491–499, 520 Visible Body (2020) Visible body. https://www.visiblebody.com. Accessed 13 Jan 2020 Walton JM, White J, Ross S (2015) What’s on your Facebook profile? Evaluation of an educational intervention to promote appropriate use of privacy settings by medical students on social networking sites. Med Educ Online 20:28708 Wang J (2013) What higher educational professionals need to know about today’s students: online social networks. Turkish Online J Educ Tech 12:180–193 Young J, Van Merrienboer J, Durning S, Ten Cate O (2014) Cognitive load theory: implications for medical education: AMEE guide no. 86. Med Teach 36(5):371–384
7
Mixed Reality Interaction and Presentation Techniques for Medical Visualisations Ross T. Smith, Thomas J. Clarke, Wolfgang Mayer, Andrew Cunningham, Brandon Matthews, and Joanne E. Zucco Abstract
Mixed, augmented and virtual reality technologies are burgeoning, with new applications and use cases appearing rapidly. This chapter provides a brief overview of the fundamental display presentation methods: head-worn, hand-held and projector-based displays. We present a summary of visualisation methods that employ these technologies in the medical domain, with a diverse range of examples presented, including diagnostic and exploration, intervention and clinical, interaction and gestures, and education.

Keywords
Mixed reality · Augmented reality · Virtual reality · Medical visualization · Presentation techniques
R. T. Smith (*) · T. J. Clarke · A. Cunningham · B. Matthews · J. E. Zucco
IVE: Australian Research Centre for Interactive and Virtual Environments, University of South Australia, Adelaide, Australia
Wearable Computer Laboratory, University of South Australia, Adelaide, Australia
e-mail: [email protected]; [email protected]; [email protected]. au; [email protected]
W. Mayer
AI and Software Engineering Laboratory, University of South Australia, Adelaide, Australia
e-mail: [email protected]
7.1
Introduction
Immersive technologies such as mixed and virtual reality provide a method of interacting with 3D data that is perceived differently from traditional desktop displays. These technologies provide spatial experiences and interactions that go well beyond those available on flat screens with mouse and keyboard. They enable the user to freely explore virtual information by zooming, changing viewpoint, highlighting important aspects while hiding others, and interacting with the virtual world in ways that would be impossible in the real world. Some techniques even enable users to walk around in the virtual environment as if it were real. For example, using a mixed reality display, a virtual human can be presented at 1:1 scale and appear to be sitting at an office desk, unlike on a desktop display, where the virtual human always appears inside the computer. One focus of these technologies is delivering immersive, interactive experiences that allow users to move freely around 3D environments, enabling compelling visualisations. A diverse set of visualisations is being developed in the medical domain, from X-ray vision techniques that allow users to peek inside the human body to abstract data representations that support the planning of procedures or the analysis of scan data.

This chapter provides an overview of emerging visualisations that employ interactive presentation methods as an adjunct to current medical processes. A summary of hardware technologies used to deliver Mixed Reality (MR) and Virtual Reality (VR) experiences, and the importance of calibration for medical applications, is discussed. Following this, we describe several aspects of mixed reality visualisations, with exemplars around diagnostic and exploration, interventional and clinical, interaction and gestures, and educational applications.
7.2
Mixed and Virtual Reality Display Technologies
A diverse set of hardware technologies is currently available for presenting immersive virtual and mixed reality content. This section provides a brief overview of the major hardware categories, examples of hardware devices, and the pros and cons of each. Commonly identified categories include head-worn, hand-held and projector-based displays for presenting mixed and virtual reality environments. For each of these technologies, we use the terms Mixed Reality and Augmented Reality interchangeably to indicate computer-generated content registered to the physical world.
7.2.1 Head-Worn Displays
Head-worn displays offer a means of presenting mixed or virtual reality content suitable for medical visualisations, training, step-by-step guidance and many other applications. The first virtual reality system, developed by Ivan Sutherland in 1968, used a head-worn display (Sutherland 1968). Since then there have been ongoing improvements in the quality, ergonomic design, resolution and capability of these devices. Early displays, such as the Sony Glasstron (1996), provided an augmented reality display solution that connected to a desktop computer. More recent systems are complete in themselves, incorporating several sensors to support standalone operation or wireless transmission of data, thus dispensing with the need for cable attachments. The Microsoft HoloLens, first shipped in 2016 and updated in 2020 with the HoloLens 2 (shown in Fig. 7.1), is a good example of a stand-alone computer integrated with a display device: a complete mixed reality solution that delivers holographic images aligned (or registered) with the physical world. The HoloLens has many technical features, including spatial mapping to register holographic images with the physical environment, gesture and speech recognition, 3D sound and more. Several mixed reality displays are now commercially available, including the Meta, the Magic Leap One and the HoloLens 2, offering improved features such as a wider viewing area, object tracking and automatic calibration.

Fig. 7.1 Microsoft HoloLens mixed reality device

Virtual Reality displays are available in many different forms and are accompanied by supporting technologies, such as position tracking, front-facing cameras, eye tracking, EEG sensors and hand controllers, that deliver compelling virtual experiences. Desktop systems such as the Oculus Rift S and HTC Vive Pro offer high-quality displays that, coupled with a high-end PC, present immersive 3D environments in real time. They also incorporate tracking systems that enable hand-held controllers to support interactions in virtual worlds. Standalone options such as the Oculus Quest and Vive Focus have also become available, removing the need to be tethered to a PC and allowing wide-area virtual environments to be configured. One of the advantages of head-worn displays is that they leave the operator's hands free to perform other tasks, unlike hand-held technologies, such as smartphones, which require the user to hold the device during operation.

Fig. 7.2 Mixed reality annotations delivered through a tablet computer
7.2.2 Hand-held Displays Modern smartphones and tablets provide a suitable platform for presenting video see-through mixed reality experiences that use the device's camera to capture the physical world, which is then augmented with digital content (shown in Fig. 7.2). The wide availability of smartphones has also made them a popular choice for mixed reality experiences. One advantage of this approach is the brightness of the display, which often outperforms head-worn displays that can appear washed out. In comparison to head-worn displays, smartphones need to be held by the user
or require a stand during operation. A use case that has been increasing in popularity is training or education applications where 3D content is presented as an adjunct to a paper-based medium. Figure 7.2 provides an example of content that can be viewed from any angle using a smartphone as a hand-held viewing window into rich 3D content.
7.2.3 Projector-Based Displays Projector-based mixed reality systems, also known as Spatial Augmented Reality, employ projectors to alter the appearance of physical objects that can be highly organic in shape, rather than a typical projector screen. This form of augmented reality has received less attention than head-worn and hand-held solutions. However, projected information provides some unique features that have potential for several applications. An advantage of information projected onto physical objects is that multiple observers can perceive the same projection simultaneously and from multiple points of view. For example, visualisations of internal anatomical details projected directly onto the human body (Hoang et al. 2018) can be viewed collaboratively by a group of people without requiring individuals to wear or hold any equipment (Fig. 7.3).
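The geometry behind such overlays can be illustrated with the standard pinhole projection model: a surface point known in world coordinates is mapped to a pixel on the projector's image plane. The sketch below is illustrative only; the intrinsic and extrinsic values are hypothetical and not taken from any system described in this chapter.

```python
import numpy as np

def project_point(point_world, K, R, t):
    """Project a 3D world point into projector pixel coordinates
    using the standard pinhole model: p ~ K [R|t] X."""
    p_cam = R @ point_world + t          # world -> projector frame
    assert p_cam[2] > 0, "point must lie in front of the projector"
    uvw = K @ p_cam                      # perspective projection
    return uvw[:2] / uvw[2]              # normalise by depth

# Illustrative intrinsics for a 1920x1080 projector (hypothetical values)
K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                            # projector aligned with world axes
t = np.zeros(3)

# A surface point 2 m in front of the projector, 0.1 m to the right,
# projects to pixel (1035, 540)
uv = project_point(np.array([0.1, 0.0, 2.0]), K, R, t)
print(uv)
```

In a real spatial augmented reality setup, K, R, and t would come from a projector calibration procedure rather than being hard-coded, and every vertex of the surface model would be projected this way each frame.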
R. T. Smith et al.
Fig. 7.3 Projector-based visualisation on the human body. (Image provided courtesy of Hoang et al. 2018)
7.2.4 Cave Automatic Virtual Environments (CAVEs)

Cave environments are another instance of spatial augmented reality technology. Caves are three-dimensional cubicles that use several projected images to simulate an immersive 3D environment without users needing to wear a headset. As such, caves can provide a collaborative experience for one or more medical professionals who wish to explore vast amounts of information together. For example, medical data can be visualised in the form of three-dimensional models that can be viewed collaboratively in a cave environment for diagnosis (Knodel et al. 2018). In that work, Digital Imaging and Communications in Medicine (DICOM) files were processed into 3D models using an algorithm that converts voxels to polygons, which are refined into visualisations viewed with 3D glasses in a collaborative environment.

7.2.5 Calibration and Registration

Augmented reality techniques rely on computer-generated content being overlaid on real-world physical objects. The user experience in such settings critically depends on accurate and precise registration (or alignment) between the visualisation of the virtual content and the physical objects. For head-mounted and see-through devices, this also includes the accurate estimation of the user's and the device's position and orientation in the physical and virtual worlds. Alignment must be established and maintained throughout the user's interaction with the augmented world using registration techniques. This section presents several calibration methods that aim to improve registration performance. Calibration concerns how multiple hardware systems operate together, such as a camera and projector pair that need to share a common coordinate space, while registration is the alignment between the physical and virtual information.

Registration between the two spaces has received ongoing attention in order to create a reliable and precise alignment between the physical and digital information. Registration techniques can rely on markers whose positions in both spaces are known (Cutolo et al. 2016). Techniques also exist that use sensing and computer vision methods to infer the location and orientation of key objects in physical space that are also present in the virtual world for alignment purposes (Hoe et al. 2019). The precision of sensing and vision technologies, as well as the latency introduced by the computational complexity of the underlying algorithms, pose challenges for maintaining precise alignment.
In the medical domain, precise registration may not be required for some tasks; for example, when adding annotations such as the temperature of the skin over a person's forehead, a little jitter or movement may not greatly impact the function. However, precise registration is important for several medical applications, for example, an MR system used to support or guide a surgical operation. In such cases, both calibration and registration are important to ensure a positive outcome. The number of markers required for precise registration can be reduced if high-resolution sensors are utilised. Cutolo et al. explored the use of high-resolution cameras for this purpose. In their system, two cameras tracked the position of several monochromatic spheres whose locations in the physical space and the virtual model were known. The higher-resolution sensing and the known positions of the spheres enabled the system to infer accurate distances and angles (Cutolo et al. 2016). Moreover, this technology has the potential to enable precise alignment of virtual information on top of the physical environment as perceived through a head-mounted display. Alignment is also concerned with the physical placement of a head-worn display on the user's head, with the aim of preventing it from slipping or moving during use. Current systems use bands, helmets, and caps, all of which have the potential to slip during use. Recent improvements in head-worn display hardware compensate by using cameras facing the user to detect the exact eye location. This allows the interpupillary distance (the distance measured between the centres of the pupils of the eyes) to be detected and can also compensate for changes in the position of the device. Calibration of mixed reality systems is an important aspect that has an impact on visualisation design.
Although current systems are typically not able to provide sub-millimetre accuracy for the augmented information, much useful information can still be incorporated into visualisations presented through mixed reality hardware, providing new possibilities that can enhance existing practices.
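Marker-based registration of this kind reduces to estimating the rigid transform that carries marker positions measured in physical space onto their known virtual counterparts. A minimal least-squares sketch using the Kabsch/SVD method follows; the marker layout and numbers are invented for illustration.

```python
import numpy as np

def register_rigid(P, Q):
    """Estimate rotation R and translation t mapping points P onto Q
    (least-squares, Kabsch method without scale). P, Q: (N, 3) arrays
    of corresponding marker positions in physical and virtual space."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])           # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Hypothetical markers: the virtual space is the physical space rotated
# 90 degrees about z and shifted by (1, 0, 0).
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
Q = (Rz @ P.T).T + np.array([1.0, 0.0, 0.0])

R, t = register_rigid(P, Q)
print(np.allclose((R @ P.T).T + t, Q))   # True: markers align
```

With three or more non-collinear markers the solution is exact up to measurement noise; a tracking system would re-estimate the transform each frame as the markers move.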
7.3 Mixed and Virtual Reality Visualisations
Visualisation for data exploration is well established in commercial and research domains, providing a means of communicating data. Graphical representations leverage elements such as colour, size, symbols, and relationships to communicate datasets in a visual form. The purpose of exploring visualisation methods is to uncover clear ways in which information can be communicated effectively. This allows users to more easily understand the underlying data, which can be used to support reasoning and decision making. Methods include showing connections or relationships between data entities, quantities, or geometries that are difficult to understand in their raw form. Medical visualisations have a well-established history of supporting diagnosis, from X-rays (late 1890s) through to CT and MRI scans. While current representations of these data work well, 3D representations in a mixed reality setting provide a new opportunity to explore the data and further enhance the tools available to medical professionals. We present several exemplars that highlight three areas of promising research: diagnostics and exploration, intervention and clinical, and education applications.
7.3.1 Diagnostics and Exploration Virtual Reality and Artificial Intelligence technologies have shown potential to shape the future of key activities in the areas of medical imaging and precision medicine (Comaniciu et al. 2016). Medical imaging technologies can help guide the planning and execution of minimally invasive procedures that are tailored to the specific needs of the patient. For example, advanced photorealistic rendering derived from CT and MRI scans, artificial intelligence methods for image understanding, and computational methods providing decision support to doctors are all important technologies that support medical professionals
Fig. 7.4 Original CT image (left) and a generated realistic rendering (right). (Image courtesy of Comaniciu et al. 2016)
Fig. 7.5 VRRRRoom system concept diagram connecting radiologist with 3D visualisation of tomographic data. (Image courtesy of Sousa et al. 2017)
in their decision-making processes and in communicating findings. For example, photorealistic three-dimensional renderings generated from CT images can enhance the raw images and help doctors plan invasive procedures and visualise processes and conditions that may be difficult to comprehend in traditional medical images. Figure 7.4 shows a traditional CT image and its photorealistic rendering in a 3D model using appropriate colour, textures, and illumination. Interpreting the tomographic data used in radiology and associated fields can be affected by the environment in which the data are viewed. When performed on a traditional display, the ambient and direct lighting conditions found in radiology reading rooms, coupled with the radiologist's position relative to the display, can result in errors due to poorly calibrated displays and reflections. Virtual reality has the potential to
diminish or remove these aspects of the environment. Sousa et al. developed VRRRRoom – Virtual Reality for Radiologists in the Reading Room – to examine the viability of using virtual reality to visualise and interact with tomographic data in a radiology context (Sousa et al. 2017). VRRRRoom immerses radiologists in their data by visualising the tomographic data as a full 3D projection in front of the user (Fig. 7.5). The system provides a natural user interface for manipulating these data, with two-handed gestures to slice, adjust the brightness, and change the orientation of the 3D projection. A qualitative study of the system with radiology experts suggests that the immersion afforded by virtual reality reduced environmental distractions and conditions that would normally negatively affect readings. The study also suggests that natural user interfaces are an efficient way of navigating tomographic data. Another example, a multi-modal interactive system in which radiologists can enter patient notes and explore MRI scans in a virtual environment, was presented by Prange et al. (2018). The system provides an immersive experience to doctors, who can walk around the virtual room and interact with the system via speech and hand gestures. The system includes a natural language-based dialogue system, through which doctors can retrieve patient records and interrogate them using natural language questions. The dialogue system has been integrated with a machine learning–based medical
Fig. 7.6 Whole-body CT Scan, identified organs and anatomical landmarks. (Image courtesy of Comaniciu et al. 2016)
expert system that enables doctors to obtain information about therapies that may best suit a given patient from within the virtual environment. Artificial intelligence and machine learning techniques have also been applied to interpret and enhance medical images. Comaniciu et al. (2016) present an overview of recent advances in this area. For example, image segments corresponding to anatomical structures such as organs can be automatically identified, highlighted, and labelled in medical images. Figure 7.6 shows an example of overlaid anatomical landmarks and structures that were identified in medical images. Such methods can lead to improved quality of diagnosis, reduced uncertainty, and increased efficiency through comprehensive automated analysis, reporting, and comparison functions that combine medical images and test results
with data sourced from specific patient history, similar cases, and treatments. Precision medicine has benefitted from Artificial Intelligence methods. Model reconstruction techniques combined with machine learning methods can infer important landmarks, infer anatomy, and help synthesise precision surgical materials tailored to the specific conditions of a given patient. It is expected that three-dimensional models continuously reconstructed from medical sensors and images will be used to guide interventions in real time in the future. Computational models are increasingly being used to infer physiological conditions from medical imaging, thus replacing traditional invasive measurement techniques. For example, fractional flow rates characterising the severity of coronary stenosis can be accurately inferred from CT images and fluid dynamic models. Moreover,
quantitative models supported by machine learning technologies can be used to create patient-specific virtual models and predict outcomes under different scenarios.

Fig. 7.7 Effect of ventricular pacing on cardiac electrophysiology as predicted by a patient-specific model. (Image courtesy of Comaniciu et al. 2016)

Virtual reality renderings of predicted treatment outcomes may assist doctors in comparing treatment options and identifying patient-specific optimal treatment plans. For example, Fig. 7.7 shows the treatment effect on heart function as predicted by a personalised quantitative treatment model for a patient. Beyond the volumetric data we commonly see in medical visualisation, there is a wealth of medically related information that does not naturally project into 3D space. The data visualisation community refers to this type of data as abstract data. Examples of abstract data in the medical domain are the results of biochemical analysis from blood tests or genomics data from DNA sequencing. These types of data can be high-dimensional, and how these dimensions are projected into 2D or 3D space can directly influence the insights that can be derived from the information. ImAxes (Cordeil et al. 2017) demonstrates an immersive VR system for the visualisation of multi-dimensional abstract data. In ImAxes, the dimensions of the data are embodied as physical constructs within the space that can be grabbed and manipulated by the user (shown in Fig. 7.8).

Fig. 7.8 ImAxes used to explore a dataset where the user is able to freely construct visualisations around their body. (Image courtesy of Cordeil et al. 2017)

The physical positioning of dimensions relative to each other generates data visualisations such as scatterplots or parallel coordinate plots of the given dimensions. From this basic set of interactions, emergent visualisations can be developed that reveal clusters and relationships in the data, and potentially new insights that were not previously realised. In the medical context, a system such as ImAxes could support medical experts in exploring and understanding the relationships in their abstract data.
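The axis-placement idea can be sketched in a few lines: each record's values along the two selected dimensions are normalised and laid out along the axis directions the user has placed in space. This is an illustrative reconstruction rather than ImAxes code, and the patient records below are invented.

```python
import numpy as np

def scatter_from_axes(values_x, values_y, origin, dir_x, dir_y, length=1.0):
    """Generate 3D scatterplot positions for records whose two selected
    dimensions are embodied as axes placed in the virtual space.
    Each value is normalised to [0, 1] along its axis."""
    def normalise(v):
        v = np.asarray(v, dtype=float)
        span = v.max() - v.min()
        return (v - v.min()) / span if span > 0 else np.zeros_like(v)
    u, w = normalise(values_x), normalise(values_y)
    return origin + np.outer(u, dir_x) * length + np.outer(w, dir_y) * length

# Hypothetical patient records: systolic blood pressure vs heart rate
bp = [110, 145, 120, 160]
hr = [60, 95, 72, 88]
pts = scatter_from_axes(bp, hr,
                        origin=np.array([0.0, 1.0, 2.0]),  # chest height, 2 m away
                        dir_x=np.array([1.0, 0.0, 0.0]),   # first axis to the right
                        dir_y=np.array([0.0, 1.0, 0.0]))   # second axis upwards
print(pts.shape)   # one 3D point per record
```

Placing the two direction vectors orthogonally, as here, yields a conventional scatterplot in the plane they span.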
7.3.2 Intervention and Clinical Mixed reality provides the opportunity to take detailed visualisations and present them in situ with patients, medical tools, and the operating environment. New techniques are being developed regularly that are exploiting the capabilities
of mixed and virtual reality technologies to better support clinical practices, surgery, and training; they often provide an adjunct capability to a well-established practice. Internal anatomy visualisations that allow users to see human organs within an immersive 3D environment, or in situ with a patient, are well-leveraged use cases for mixed reality systems. One example, developed by Kasprzak et al. (2019), is a system to visualise echocardiography imagery in real time with a mixed reality system. Figure 7.9 provides a depiction of the 3D holographic image at the user's fingertips. The system converts echocardiographic data into a 3D stream of images presented through a Microsoft HoloLens. This process was used to view a patient's heart in a 3D environment rather than on a 2D screen. The data are sent to a PC for 3D rendering and transmitted to the HoloLens for viewing.

Fig. 7.9 Real-time echocardiography visualisation presented with a mixed reality system. (Image courtesy of Kasprzak et al. 2019 under Creative Commons license. No changes were made http://creativecommons.org/licenses/by/4.0/)

Hand gestures and voice commands are used for interaction without the need for any hand-held devices. Overlaying holographic images directly onto the human body allows users to see a visualisation of the internal anatomy inside the body. A recent example of this technique was demonstrated by Pratt et al. (2018), who considered the effectiveness of using 3D overlays to aid in surgeries. Their models were segmented into regions and then presented on a HoloLens, allowing the objects to be moved around the real world and overlaid on the patient. Calibration was done manually by a human moving the 3D object over the patient's limb and positioning the limb in the right spot. Six case studies indicated positive results, with a reduction in operating time, thus reducing the anaesthetic required and the patient's morbidity. In anatomic pathology, co-registration of digital radiographs with real-world specimens has been explored to identify items of interest. Hanna et al. (2018) conducted a study with pathologists' assistants to overlay virtual radiographs onto corresponding gross specimens to identify the location of a biopsy clip or lesions. Voice and hand gestures were employed to scale and position the radiograph in place. Results of a usability study found that users were able to identify the precise location of a biopsy clip faster than using conventional methods. In addition, several other use cases were explored. Live streaming (audio and visual) and bidirectional annotation were employed via Microsoft's Skype collaboration platform to facilitate a remote pathologist guiding a pathology trainee/assistant through an autopsy procedure or the taking of tissue sections. The HoloLens was used to view reconstructed 3D images of cellular and subcellular structures and to view and manipulate (i.e. resize and rotate) 3D specimens. Using AR for needle guidance is another application that has received attention in the research community. Agten et al. (2018) found it possible to match the results of conventional guidance methods with their HoloLens prototype while using a lumbar vertebrae phantom. The HoloLens display showed a 3D model
of the lumbar vertebrae as well as a blue target for the needle insertion. Augmented reality needle guidance techniques have also been investigated for transperineal prostate procedures (e.g. biopsy and ablation). Li et al. (2019) developed a needle guidance application for use with a smartphone or smart glasses. Data derived from pre-procedural MRI/CT images were used to display anatomic information, the planned needle trajectory, and the target lesion. The information was designed to be overlaid onto the patient and tracked using an image marker (along with five fiducial markers placed on the back of the image marker) attached to the patient's perineum. Smartphone functionality allowed the user to select a target, define the needle plan, and view the surrounding anatomy in 3D. In usability studies using a prostate phantom, participants were asked to insert a biopsy needle to reach a target using either a smartphone (iPhone) or smart glasses (ODG R-7) to present the information. The needle end and planned needle path were displayed for each target. Results of the study indicated that, for needle placement, smartphone guidance showed a trend towards fewer errors compared with the smart glasses. The Vivo Light Vein Finder is an example of a commercially available spatial augmented reality device used for vascular imaging. The device provides a real-time visualisation of structures inside the body by projecting images directly onto the skin. This visualisation supports finding an accurate location for injections, which has proven effective with less experienced nursing students, who demonstrated improved time and efficiency (Fukuroku 2016). Veins are located by projecting near-infrared light onto the body; the haemoglobin in the blood absorbs some of this light, allowing veins and vascular structures to be identified. This information is then augmented with projected light revealing the underlying structures on the skin's surface. This method was demonstrated to work on over 90% of patients, depending on their age, health, and race (Chiao et al. 2013). García-Vázquez et al. investigated a new approach to device navigation for endovascular aortic repair (Garcia-Vazquez et al. 2018). Their
This method was demonstrated to work on over 90% of patients depending on their age, health, and race (Chiao et al. 2013). García-Vázquez et al. investigated a new approach to device navigation for endovascular aortic repair (Garcia-Vazquez et al. 2018). Their
Fig. 7.10 Endovascular aortic repair – Figure reproduced with the permission of the Institute for Robotics and Cognitive Systems (University of Lübeck, Germany). https://sciencediscoveries.degruyter.com/augmented-reality-aortic-repair-first-steps-reducing-radiation-exposure/
approach used an electromagnetic tracking system to find the position of the catheter throughout the body, using three or more exterior landmarks that also served as calibration markers for the HoloLens. This enabled the user to see the approximate location and movement of the catheter tip inside the patient through the HoloLens, as shown in Fig. 7.10. This approach, used in conjunction with conventional practices, is a good example of the efficiency gains that mixed reality could bring to medical intervention. Projected overlays have also been employed with the aim of assisting in radiofrequency ablation (RFA) of liver tumours. Si et al. (2018) investigated moving the needle in relation to MRI data that were overlaid on an object. The MRI data were represented as a holographic overlay on a patient (in the case of this research, a physical abdominal phantom). The hologram of the MRI data was overlaid on the phantom using a HoloLens and an NDI tracking system. The system used tracking markers placed on the phantom to gain an accurate estimation of the physical position. Three-dimensional object recognition technology was used to track the position of the needle and show the user where the needle was located within the object. Results of a subsequent
user study suggested that the mixed reality approach benefitted surgeons through its simplicity and is more efficient and precise when compared with the unaided approach. Augmented reality has also been applied to orthopaedics. An example is the use of augmented reality to assist and guide hole drilling along the axis of the femoral neck for hip resurfacing (Liu et al. 2018). Liu et al. employed a depth camera mounted on a robotic arm, a tracked surgical drill, and a HoloLens to guide the drilling of a hole on a femur phantom. A red arrow indicating the entry point and drilling direction for the guide hole was shown in situ, overlaid onto the surgeon's view via the HoloLens (seen in Fig. 7.11). The arrowhead and shaft turned green when the position and orientation were accurate, indicating to the surgeon to proceed with drilling. The system obtained an estimate of the femur pose by processing a scanned 3D model of the femur coupled with depth data (obtained by a depth camera in the form of a 3D point cloud). An accurate femur pose was obtained via an iterative closest point algorithm running during the procedure. In addition, two markers were used to align the HoloLens with the robot and camera, and a cube marker (Fig. 7.12) was attached to the surgical drill so that it could be tracked by the HoloLens. Results
Fig. 7.11 Surgeon’s view via HoloLens. (a) Red arrow depicting entry point and drilling direction; (b) Green arrow indicating correct position; (c) Green arrow and
R. T. Smith et al.
shaft depicted to proceed with drilling. (Image courtesy of Liu et al. 2018 under the Creative Commons Attribution 4 license http://creativecommons.org/licenses/by/4.0/)
Fig. 7.12 Cube marker attached to surgical drill to assist with HoloLens tracking. (Image courtesy of Liu et al. 2018 under the Creative Commons Attribution 4 license http://creativecommons.org/licenses/by/4.0/)
obtained from usability studies revealed that position and direction mean errors of 2 mm and 2 degrees, respectively, were achieved when compared against the preoperative plan.
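The iterative closest point (ICP) step used for femur pose estimation can be sketched compactly: pair each sensed point with its nearest model point, solve the rigid least-squares fit, apply it, and repeat. The point clouds below are invented stand-ins (a regular grid rather than femur data), and a real system would use a spatial index instead of brute-force matching.

```python
import numpy as np

def best_fit_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping P onto Q (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(source, target, iterations=10):
    """Iterative closest point: repeatedly pair each source point with
    its nearest target point, solve for the rigid transform, apply it."""
    src = source.copy()
    for _ in range(iterations):
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(axis=2)
        matched = target[d2.argmin(axis=1)]   # brute-force nearest neighbours
        R, t = best_fit_transform(src, matched)
        src = (R @ src.T).T + t
    return src

# Hypothetical stand-in for the scanned model: a coarse point grid.
g = np.linspace(0.0, 2.0, 5)
model = np.array([[x, y, z] for x in g for y in g for z in g])
scan = model + np.array([0.05, -0.03, 0.02])   # sensed cloud in a shifted pose
aligned = icp(scan, model)
print(np.abs(aligned - model).max())           # near zero once registered
```

Because the initial offset here is small relative to the point spacing, every nearest-neighbour match is correct and the alignment converges in one iteration; real depth data would be noisy, partial, and would need a reasonable initial pose.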
7.3.3 Interactions and Gestures Interaction in mixed reality brings many new challenges and opportunities to users and
developers alike. The tracking of tools, recognition of hand gestures, voice commands, and even gaze gestures are novel options for interaction and enable opportunities to develop applications for nearly all health professionals. Although research in this field has delivered tremendous advances in recent years, many challenges related to accuracy, reliability, technology, and integration into clinical contexts remain the subject of ongoing research activity. Currently, systems have been either in the form of prototypes that have been tested in controlled environments, or have been employed for training of medical professionals. Deployments of advanced AR and MR systems in the routine clinical context with direct involvement of patients are still the exception, not the rule. For many surgical procedures, it may not be feasible for surgeons to use hand gestures to manipulate or interact with digital information. For example, given the nature of vascular interventions, surgeons must have both hands engaged in the surgical task. Although augmented reality guidance systems have been developed, they predominantly rely on hand gestures for navigation and control. Researchers have explored interaction techniques that may address this limitation. For example, Grinshpoon et al. (2018) developed a system that employed voice commands and gaze gestures for interaction. In this system, the user could move, rotate, enlarge and scale digital content by looking at specific points in the environment. Scrub nurses working in a high-stress surgical environment are governed by many strict rules about sanitisation. In such an environment using hand-based interaction techniques and even voice commands may be unsuitable as they may be distracting for the surgeon. Unger et al. (2019) developed a gaze-based gesture control interface that may be suitable for this environment. 
The system supported two main tasks: users could adjust the lighting of the surgical room by repeatedly gazing at specific sections of the room, and users could receive information about surgical items by looking at them. 2D barcodes were placed next to the surgical equipment to facilitate object recognition. This system proved the viability of voice- and gesture-less control interfaces, and participants in a trial performed well and indicated their satisfaction with the proposed interaction technique. Some tasks in the field of medical imaging require medical professionals to view live medical data while the patient is still being scanned. Wearing a headset for this type of work can be impractical due to the magnetic nature of the machines and the lack of physical space within the gantry. Current conventional methods, which use 2D monitors located outside the gantry, require good situational awareness and hand–eye coordination from the user. Mewes et al. (2018) successfully set out to find a more convenient way to present medical data to professionals working within a scanner. Several mirrors were used to move a projected user interface over the top of the patient inside the scanner. This system tracked needles using 3D object recognition and had two different user interfaces: a 2D interface and a 3D interface. The 2D interface aimed to help the surgeon find a suitable point of entry by displaying yellow and red arrows that would guide the surgeon towards the point of insertion. A bar was shown to let the surgeon know how much further inside the patient the needle needed to go. The 3D display showed an approximate insertion point, its target area, and how far the needle had gone within the patient's body. The distance of the needle from the entry point was displayed as a line segment connecting the needle to its entry point. Both user interfaces proved more beneficial than the current methods of performing these needle insertions within a scanner, as reported by both inexperienced and experienced professionals. In VR systems, realistic interaction with virtual objects and visualisations presents an additional challenge. Controllers and gestures can provide interactions; however, they lack the haptic feedback provided through direct touch.
Providing the ability to directly touch, feel, or hold elements of the virtual environment requires a physical counterpart in the real world. Haptic technologies aim to address this issue through active devices such as haptic gloves (e.g. https://haptx.com/), haptic controllers (Choi et al. 2018; Benko et al. 2016), and robotic systems (Vonach et al. 2017; Whitmire et al. 2018). Alternatively, passive haptics or simple static objects can approximate virtual objects and have been shown to enhance virtual experiences (Insko 2001). Passive haptics can be further extended through illusions such as space warping (Kohli 2010) and haptic retargeting (Azmandian et al. 2016), which leverage the dominance of visual perception over haptic perception and proprioception. These illusions enable realistic interaction with a virtual object that is dynamic or more complex than the physical passive haptic that approximates it. These techniques could be applied in medical visualisations to provide direct touch interaction and exploration methods. Spillmann et al. (2013) applied space warping to an arthroscopy surgical simulator for medical training to enhance the detail provided by a passive haptic. Virtual models from a variety of patients can be mapped onto a single physical model, enabling the user to explore and operate on the models with realistic feedback.
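The core of haptic retargeting is a small warp applied to the rendered hand position, blended in as the reach progresses so that the physical hand lands on the prop exactly when the virtual hand lands on the virtual object. The positions below are hypothetical; this is a sketch of the principle, not the implementation of Azmandian et al.

```python
import numpy as np

def retargeted_hand(hand, start, physical_target, virtual_target):
    """Warp the rendered hand position so that when the physical hand
    reaches the physical prop, the virtual hand reaches the virtual
    object. The offset is blended in with progress towards the target,
    keeping the per-frame redirection small."""
    total = np.linalg.norm(physical_target - start)
    travelled = np.linalg.norm(hand - start)
    alpha = np.clip(travelled / total, 0.0, 1.0)   # 0 at start, 1 at the prop
    offset = virtual_target - physical_target
    return hand + alpha * offset

start = np.array([0.0, 0.0, 0.0])
physical = np.array([0.0, 0.0, 0.5])     # the passive prop on the table
virtual = np.array([0.1, 0.0, 0.5])      # virtual object rendered elsewhere

# Halfway through the reach the warp is half applied...
print(retargeted_hand(np.array([0.0, 0.0, 0.25]), start, physical, virtual))
# ...and on contact the virtual hand lands exactly on the virtual object.
print(retargeted_hand(physical, start, physical, virtual))
```

The blending keeps the visual-proprioceptive mismatch gradual, which is what lets visual dominance mask the redirection from the user.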
7.3.4 Education

Virtual and augmented reality have become increasingly popular in medical education, as technology enables students to interact with virtual representations of specimens in ways that would be difficult to achieve using the actual physical objects. In this context, virtual and augmented reality technologies enable instructors to add explanatory material and demonstrate varying physical conditions, while students can easily focus on specific elements of the subject under study by hiding unwanted parts and using zoom and colour to highlight parts of interest. Additionally, teaching using virtual specimens scales to a large audience, whereas teaching using physical objects can be limited by the number, location and physical condition of the available specimens. Balian et al. (2019) investigated the use of augmented reality for training health care providers in providing cardiopulmonary resuscitation
R. T. Smith et al.
(CPR). To this end, the authors employed a HoloLens to provide audio and visual feedback for participants performing CPR on a training manikin. Quantitative chest compression data such as rate, depth and recoil were recorded as hands-only CPR was performed on the manikin. The data were used to provide visual feedback in the form of blood flow to vital organs, depicted in a circulatory system placed in front of the user via the HoloLens head-worn display. Depending on the quality of the chest compressions, blood flow to the vital organs was shown to either improve or deteriorate. An audible heartbeat at 110 bpm was provided if the user performed chest compressions outside of the guideline range of 100–120 cpm. In addition, a CPR quality score was displayed at the end of the session. Results from a usability study indicated that the system was a beneficial training tool that facilitates good-quality CPR and was well received by participants (82% perceived the experience as realistic, 94% of participants were willing to use the application in future training and 98% acknowledged that the visualisations were helpful for training). Karambakhsh et al. presented an application for anatomy education that uses virtual reality techniques for the presentation of anatomy models and leverages augmented reality and machine learning for controlling the interaction with the specimens (Karambakhsh et al. 2019). The system acquires models using 3D scanning techniques, presents them using augmented reality techniques and provides an interface for instructors to annotate the model. The anatomy models are acquired by scanning the physical object using an infrared camera that also senses the distance to the object. Multiple partial scans are then aligned and merged into a complete mesh model based on corresponding key points featuring in multiple models.
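The key-point alignment step described above is, at its core, a least-squares rigid registration problem. A common closed-form solution is the Kabsch/SVD method sketched below. This is an illustrative reimplementation of the general technique, not the authors' code, and the names are hypothetical.

```python
import numpy as np

def rigid_align(src, dst):
    """Closed-form least-squares rigid transform (Kabsch/SVD method)
    mapping matched key points `src` onto `dst` (both (N, 3) arrays),
    as used when merging partial 3D scans into a single mesh.
    Returns rotation R and translation t with (R @ src.T).T + t ~ dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

Given at least three non-collinear corresponding key points, the recovered transform can be applied to one whole partial scan to bring it into the coordinate frame of the other before the meshes are merged.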
Instructors subsequently post-process the model and label key elements using a dedicated user interface application (Fig. 7.13, right). Students explore the resulting anatomy model while wearing a HoloLens, which displays the model from the correct viewpoint, and they control the system using hand gestures,
7 Mixed Reality Interaction and Presentation Techniques for Medical Visualisations
Fig. 7.13 User interface for interacting with the 3D model. Left: whole-body menu. Right: smartphone-based interface for interaction with the model. (Image courtesy of Karambakhsh et al. 2019)
Fig. 7.14 Simulation performance outcome diagrams including undercut and overcut areas. (Image courtesy of Yin et al. 2018)
which are tracked using the camera built into the HoloLens. Supported control gestures include Pan, Pinch, Zoom, Fist and Tap, which the system can learn from examples using deep convolutional neural networks. This approach was shown to provide higher gesture detection accuracy than previous approaches that relied on geometric properties and support vector machines for gesture classification. Applications of MR technologies for education can also be found in the field of dentistry (Huang et al. 2018). For example, student dentists can train to perform tasks, such as cavity and crown preparation, on physical and virtual models. In educational settings, systems exist where virtual tutors monitor the students and assess multiple factors such as ergonomic posture and quality
of the task outcome and provide formative feedback. For example, Yin et al. present a simulator for endodontic surgery that provides formative feedback to dental students about the quality of the resulting drill hole (Yin et al. 2018). Students perform drilling on a virtual 3D model of the tooth and receive feedback about their physical movement (speed, force) and their performance outcome in terms of the deviation from the optimal result. Figure 7.14 shows a view indicating areas where deviations from the optimal result have occurred. The detailed results are additionally presented in a standard scoring approach that is commonly used in dentistry education. This approach was shown to provide scoring results that are consistent with scoring by human experts.
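To make the notions of undercut and overcut in Fig. 7.14 concrete, a deviation measure of this kind can be sketched on voxelised tooth models. This is an illustrative construction only, not the scoring method of Yin et al., and the names are hypothetical.

```python
import numpy as np

def outcome_deviation(planned, drilled):
    """Compare a drilling outcome against the planned cavity, both given
    as boolean 3D voxel masks of the same shape.

    'Overcut' voxels were removed outside the planned cavity;
    'undercut' voxels were planned for removal but left intact.
    Both are returned as fractions of the planned cavity volume."""
    overcut = np.logical_and(drilled, ~planned).sum()
    undercut = np.logical_and(planned, ~drilled).sum()
    volume = planned.sum()
    return overcut / volume, undercut / volume
```

A perfect outcome scores (0, 0); either fraction growing indicates healthy tissue removed or target tissue missed, which is the kind of deviation a simulator can highlight visually and fold into a numeric score.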
7.4 Conclusion

The exciting possibilities afforded by mixed reality systems were foreseen by early visionaries such as Ivan Sutherland, who developed his own hardware to uncover a new presentation method for computer-generated information in the early 1960s. Now, almost 60 years later, we are seeing a rapid acceleration of hardware technologies that can deliver compelling real-time visual information that is accessible and affordable. Mixed and virtual reality technologies are now being considered in more depth for a diverse set of applications and are often providing new supporting roles alongside well-established practices in medical domains.

This chapter described the core presentation methods for mixed and virtual reality systems that have found applications in the medical field today and briefly explored the current limitations in terms of calibration and the current research methods that aim to reduce or overcome these challenges. Exemplars covering a wide spectrum of tasks were presented to demonstrate the breadth of visualisations and non-traditional interaction techniques being employed to enhance medical procedures, from in situ anatomy visualisations to abstract data representations and interaction methods.

References

Agten CA, Dennler C, Rosskopf AB, Jaberg L, Pfirrmann CWA, Farshad M (2018) Augmented reality-guided lumbar facet joint injections. Investig Radiol 53:495–498
Azmandian M, Hancock M, Benko H, Ofek E, Wilson AD (2016) Haptic retargeting: dynamic repurposing of passive haptics for enhanced virtual reality experiences. In: Proceedings of the 2016 CHI conference on human factors in computing systems, pp 1968–1979
Balian S, Mcgovern SK, Abella BS, Blewer AL, Leary M (2019) Feasibility of an augmented reality cardiopulmonary resuscitation training system for health care providers. Heliyon 5:E02205
Benko H, Holz C, Sinclair M, Ofek E (2016) NormalTouch and TextureTouch: high-fidelity 3D haptic shape rendering on handheld virtual reality controllers. In: Proceedings of the 29th annual symposium on user interface software and technology, pp 717–728
Chiao FB, Resta-Flarer F, Lesser J, Ng J, Ganz A, Pino-Luey D, Witek B (2013) Vein visualization: patient characteristic factors and efficacy of a new infrared vein finder technology. Br J Anaesth 110:966–971
Choi I, Ofek E, Benko H, Sinclair M, Holz C (2018) Claw: a multifunctional handheld haptic controller for grasping, touching, and triggering in virtual reality. In: Proceedings of the 2018 CHI conference on human factors in computing systems, p 654
Comaniciu D, Engel K, Georgescu B, Mansi T (2016) Shaping the future through innovations: from medical imaging to precision medicine. Med Image Anal 33:19–26
Cordeil M, Cunningham A, Dwyer T, Thomas BH, Marriott K (2017) ImAxes: immersive axes as embodied affordances for interactive multivariate data visualisation. In: UIST '17: Proceedings of the 30th annual ACM symposium on user interface software and technology, pp 71–83
Cutolo F, Freschi C, Mascioli S, Parchi P, Ferrari M, Ferrari V (2016) Robust and accurate algorithm for wearable stereoscopic augmented reality with three indistinguishable markers. Electronics 5:59
Fukuroku K, Narita Y, Taneda Y, Kobayashi S, Gayle AA (2016) Does infrared visualization improve selection of venipuncture sites for indwelling needle at the forearm in second-year nursing students? Nurse Educ Pract 18:1–9
Garcia-Vazquez V, Von Haxthausen F, Jackle S, Schumann C, Kuhlemann I, Bouchagiar J, Hofer AC, Matysiak F, Huttmann G, Goltz JP, Kleemann M, Ernst F, Horn M (2018) Navigation and visualisation with HoloLens in endovascular aortic repair. Innov Surg Sci 3:167–177
Grinshpoon A, Sadri S, Loeb GJ, Elvezio C, Feiner SK (2018) Hands-free interaction for augmented reality in vascular interventions. In: 25th IEEE conference on virtual reality and 3D user interfaces (VR 2018), pp 751–752
Hanna MG, Ahmed I, Nine J, Prajapati S, Pantanowitz L (2018) Augmented reality technology using Microsoft HoloLens in anatomic pathology. Arch Pathol Lab Med 142:638–644
Hoang TN, Ferdous HS, Vetere F, Reinoso M (2018) Body as a canvas: an exploration on the role of the body as display of digital information. In: Proceedings of the 2018 Designing Interactive Systems conference (DIS '18)
Hoe Z-Y, Lee I-J, Chen C-H, Chang K-P (2019) Using an augmented reality-based training system to promote spatial visualization ability for the elderly. Univ Access Inf Soc 18(2):327–342
Huang TK, Yang CH, Hsieh YH, Wang JC, Hung CC (2018) Augmented reality (AR) and virtual reality (VR) applied in dentistry. Kaohsiung J Med Sci 34:243–248
Insko BE (2001) Passive haptics significantly enhances virtual environments. Ph.D. thesis
Karambakhsh A, Kamel A, Sheng B, Li P, Yang P, Feng DD (2019) Deep gesture interaction for augmented anatomy learning. Int J Inf Manag 45:328–336
Kasprzak JD, Pawlowski J, Peruga JZ, Kaminski J, Lipiec P (2019) First-in-man experience with real-time holographic mixed reality display of three-dimensional echocardiography during structural intervention: balloon mitral commissurotomy. Eur Heart J
Knodel MM, Lemke B, Lampe M, Hoffer M, Gillmann C, Uder M, Bäuerle T (2018) Virtual reality in advanced medical immersive imaging: a workflow for introducing virtual reality as a supporting tool in medical imaging. Comput Vis Sci 18:203–212
Kohli L (2010) Redirected touching: warping space to remap passive haptics. In: 2010 IEEE symposium on 3D user interfaces, pp 129–130
Li M, Xu S, Mazilu D, Turkbey B, Wood BJ (2019) Smartglasses/smartphone needle guidance AR system for transperineal prostate procedures. In: SPIE medical imaging, March 2019. SPIE
Liu H, Auvinet E, Giles J, Rodriguez y Baena F (2018) Augmented reality based navigation for computer assisted hip resurfacing: a proof of concept study. Ann Biomed Eng 46:1595–1605
Mewes A, Heinrich F, Hensen B, Wacker F, Lawonn K, Hansen C (2018) Concepts for augmented reality visualisation to support needle guidance inside the MRI. Healthc Technol Lett 5:172–176
Prange A, Barz M, Sonntag D (2018) Medical 3D images in multimodal virtual reality. In: Companion of the 23rd international conference on Intelligent User Interfaces (IUI '18)
Pratt P, Ives M, Lawton G, Simmons J, Radev N, Spyropoulou L, Amiras D (2018) Through the HoloLens looking glass: augmented reality for extremity reconstruction surgery using 3D vascular models with perforating vessels. Eur Radiol Exp 2:2
Si W, Liao X, Qian Y, Wang Q (2018) Mixed reality guided radiofrequency needle placement: a pilot study. IEEE Access 6:31493–31502
Sousa M, Mendes D, Paulo S, Matela N, Jorge J, Lopes DS (2017) VRRRRoom: virtual reality for radiologists in the reading room. In: Proceedings of the 2017 ACM SIGCHI conference on human factors in computing systems (CHI '17), pp 4057–4062
Spillmann J, Tuchschmid S, Harders M (2013) Adaptive space warping to enhance passive haptics in an arthroscopy surgical simulator. IEEE Trans Vis Comput Graph 19:626
Sutherland IE (1968) A head-mounted three dimensional display. Proc AFIPS 68:757–764
Unger M, Black D, Fischer NM, Neumuth T, Glaser B (2019) Design and evaluation of an eye tracking support system for the scrub nurse. Int J Med Robot Comput Assist Surg 15(1):e1954
Vonach E, Gatterer C, Kaufmann H (2017) VRRobot: robot actuated props in an infinite virtual environment. In: 2017 IEEE Virtual Reality (VR), pp 74–83
Whitmire E, Benko H, Holz C, Ofek E, Sinclair M (2018) Haptic Revolver: touch, shear, texture, and shape rendering on a reconfigurable virtual reality controller. In: Proceedings of the 2018 CHI conference on human factors in computing systems, p 86
Yin MS, Haddawy P, Suebnukarn S, Rhienmora P (2018) Automated outcome scoring in a virtual reality simulator for endodontic surgery. Comput Methods Prog Biomed 153:53–59
8 Pores, Pimples and Pathologies: 3D Capture and Detailing of the Human Skin for 3D Medical Visualisation and Fabrication

Mark Roughley
Abstract

Three-dimensional (3D) scanning of the human skin for 3D medical visualisation and printing does not often produce the desired results due to a number of factors, including the specularity of human skin, difficulties in scanning fine structures such as hair and the capabilities of the scanning technologies utilised. Some additional 3D modelling may be required to make the surfaces more suitable for use in the production of anatomical and medical teaching resources, computerised facial depiction and the design of bespoke prostheses. Three-dimensional scanned surfaces can be enhanced through digital sculpting and embossing of high-resolution photographs of the human skin.

Keywords

Digital sculpting · 3D scanning · 3D modelling · 3D visualisation · ZBrush · Skin

M. Roughley (*)
Liverpool School of Art and Design, Liverpool John Moores University, Liverpool, UK
e-mail: [email protected]

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020
P. M. Rea (ed.), Biomedical Visualisation, Advances in Experimental Medicine and Biology 1260, https://doi.org/10.1007/978-3-030-47483-6_8

8.1 Introduction

Three-dimensional capture of the human body proves useful in a number of medical situations, including the development of custom-designed prostheses (Bibb et al. 2000) and medical visualisation for education purposes. In recent years, the presence of computerised 3D scanning, modelling and fabrication technologies in the studios of the medical artist, forensic artist and maxillofacial prosthetist has become more commonplace (Palousek et al. 2014; Roughley and Wilkinson 2019; Wilkinson 2005). Such technologies have been adopted from engineering laboratories and visual effects (VFX) studios, and can assist in the production of accurate digital science communication materials, reduce manufacture time and improve the aesthetic appearance of high-quality prostheses (Markiewicz and Bell 2011). The use of these technologies often demands higher investment in software, hardware and training for the practitioner (Mahoney and Wilkinson 2010; Palousek et al. 2014). More accessible, low-cost hardware solutions are available but require skilled practitioners and advanced training in order to elevate production quality and finish. Available 3D scanning technologies can be categorised into two types: low-cost solutions and high-cost solutions. Currently, most of these solutions operate without the need for physical
Fig. 8.1 A low-cost structured light scanner (3D Systems Sense© scanner) being used to 3D capture a face
scanner registration targets to be placed on the object being captured. Low-cost solutions are often aimed at novice users or freelance practitioners and are typically USB plug and play, handheld, structured light devices that boast cameras similar to those of high-end smartphones (Fig. 8.1). As described by Erolin (2019), structured light devices work by ‘projecting a known pattern onto an object and capturing a sequence of photographs. The deformation of the pattern is then measured to determine the object’s shape and dimensions’. These low-cost solutions capture shape reasonably well but not fine details such as pores and wrinkles, due to the lower-resolution cameras employed. A USB plug and play structured light handheld scanner or a smartphone capable of 3D scanning can range in value from a few hundred dollars to a few thousand dollars, but such devices are typically seen as affordable and do not require high-performance computers to run their associated software. High-cost solutions range from tens of thousands to hundreds of thousands of dollars and include more advanced handheld scanners, such as laser scanners or professional structured light scanners (Fig. 8.2), and clinical devices such as computed tomography (CT) or magnetic resonance imaging (MRI) machines. In-depth specialist training is
often required in order to operate the scanners and software and is often provided at an extra cost, unlike most low-cost solutions, which are usable out of the box with little or no training. It is a common misconception that clinical imaging machines capture high-resolution images. Eggbeer et al. (2006) state that the ‘typical image size for images output from these scanning devices is 512 × 512 pixels’. When these images are 3D volume rendered in software such as Mimics© or InVesalius©, they often require additional detailing to achieve desired textural finishes for use in medical visualisation or printing. It is possible to enhance low-resolution 3D surfaces by embossing the surface using high-resolution photographic images or by using an array of virtual brushes. High-end computer-aided design (CAD) and VFX software, such as Pixologic ZBrush© and Autodesk Maya©, are required to achieve realistic results. High-cost solutions enable highly accurate replicas of the human skin to be produced (Bibb et al. 2000), and when capturing external surface topography, handheld 3D scanners are preferable to CT scanning as there is no X-ray exposure (Liacouras et al. 2011). In the case of both low-cost and high-cost 3D scanning solutions, a
Fig. 8.2 A high-cost structured light scanner (Artec Spider©) being used to 3D capture a face
number of factors, including the specularity of the human skin (its reflectance value; Vernon and Peckham 2002) and difficulties in scanning fine structures such as hair, mean that some additional 3D modelling may be required to make the captured surfaces more suitable for use. This chapter details a particular workflow specific to sculpting digital skin with realistic results using 3D captured surfaces, a variety of virtual tools and high-resolution photographs.
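The embossing idea mentioned above, transferring photographic skin detail onto a smooth scanned surface, amounts to displacing each vertex along its normal by a height value sampled from an image. A minimal sketch follows; it is illustrative only (real tools such as ZBrush© do this interactively with far more sophisticated sampling), and all names are hypothetical.

```python
import numpy as np

def emboss(vertices, normals, uvs, height_map, depth=0.5):
    """Displace mesh vertices along their (unit) normals by a greyscale
    height image sampled at each vertex's UV coordinate, a minimal
    version of embossing photographic skin detail onto a smooth scan.

    vertices, normals: (N, 3) arrays; uvs: (N, 2) coordinates in [0, 1];
    height_map: 2D array with values in [0, 1]; depth: max displacement."""
    h, w = height_map.shape
    px = np.clip((uvs[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    py = np.clip((uvs[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
    disp = height_map[py, px] * depth
    return vertices + normals * disp[:, None]
```

Bright image regions (pores, wrinkle edges) push the surface outwards while dark regions leave it untouched, so a registered high-resolution photograph acts as a sculpting stencil over the low-resolution scan.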
8.2 Applications of 3D Captured and Modelled Skin Surfaces
8.2.1 Medical Visualisation

As a three-dimensional object, the importance of thinking in 3D when learning about and simulating the human body is paramount to increasing learning and understanding (Cingi and Oghan 2011). Vernon and Peckham (2002) and more recently Erolin (2019) have detailed the historical use of wax and plastic models for medical education alongside two-dimensional (2D) illustrations and cadavers in anatomy labs, before the introduction of cutting-edge digital 3D technologies. Studies by McMenamin et al. (2014), Thomas et al. (2016) and Moore et al. (2017) demonstrate that the use of 3D scans of anatomical specimens and living patients in anatomical, medical and surgical education is a rising trend. These scans can be 3D volume rendered and stored in virtual databases such as BodyParts3D (https://lifesciencedb.jp/bp3d/), to be used in extensive virtual or tactile 3D printed teaching and learning resources. In these cases, the donor of the data, either the body donor or patient, will have consented for their data to be used in these formats. In the production of such resources, 3D human anatomy models are often anonymised in order to desensitise the images and prevent confidentiality breaches. This means creating 3D topographical models with skin that can be virtually cut or presented as see-through, so that the viewer can see the underlying anatomy, and that are void of identifiable skin textures. These human mannequins are created by 3D scanning the external anatomy of a living person using handheld 3D scanners and clinical imaging machines or by using freely available clinical datasets. Anonymised clinical datasets are readily available online, including the National Library of Medicine’s Visible Human Project® (https://www.nlm.nih.gov/research/visible/visible_human.html), and can be used to construct custom 3D mannequins under a licence that allows reuse and distribution (Fujieda and Okubo 2016).

It is possible that advanced virtual sculpting of 3D captured human skin is required when developing anatomical and medical teaching and learning resources. This is mostly commonplace when producing dermatology models or models with a particular disease, where 3D representations better communicate the relief or texture of the surface of the skin disorder than viewing photographs alone (Challoner and Erolin 2013). Sculpted skin textures may also be useful for 3D models used in animations for training plastic surgeons, where an observation of how the skin creases and wrinkles is important for effective surgery and scar management (Vaiude 2017).

8.2.2 Maxillofacial Prosthetics and Medical Devices

A number of studies describe methods for the use of 3D captured skin surfaces in personalised prosthetic design, including Liacouras et al. (2011), Palousek et al. (2014) and Singare et al. (2010). Ciocca et al. (2010) describe the use of a low-cost NextEngine© desktop 3D scanner to obtain 3D digital models of the defective regions of a patient’s face, in conjunction with 3D scans of unaffected, symmetrical anatomical parts. The scans can be mirrored and edited in CAD software to create a customised prosthesis. A further study by the authors details the production of an Ear&Nose Digital Library that comprises a number of 3D models of digitised anatomic stone models (Fantini et al. 2013). These models are designed to be used in conjunction with 3D scans of a patient’s face to create prostheses for non-symmetrical anatomical features such as noses. Other researchers are working out in the field in developing countries and are using portable low-cost plug and play scanners to 3D scan patients’ limbs, so that customised devices can be produced when high-cost solutions are not available. The 3D LifePrints LifeArm, a functional 3D printed trans-radial prosthetic device based upon 3D scans of the patient, is one example of this (http://3dlifeprints.com/3dlp/wp-content/uploads/3DLP-LifeArm-Brief-update-1.pdf).

Of particular note is a pilot study by Eggbeer et al. (2006). The authors note that most studies that discuss the use of 3D scanning and CAD technologies for prosthetic design focus on the capture of 3D shape alone and suggest that fine details, such as wrinkles, make a prosthesis more visually convincing. Here, they note the importance of texture in contributing to the aesthetic success of a prosthesis and describe the effectiveness of the traditional practice of adding details to wax models on plaster replicas of the patient’s topography, assessed by eye and carved by hand by a trained prosthetist (Eggbeer et al. 2006). In the pilot study, they assess suitable methods to either 3D capture topography and create textures manually using the CAD software Geomagic Freeform© or to directly 3D capture topography with visible textures, in order to produce 3D printed prosthesis moulds. They discuss limitations surrounding the capture of extremely fine details by the cameras of the structured light scanner used and the ability of the 3D printer used to effectively replicate fine embossed details, in order for highly convincing impressions to be made during the production of the final prosthesis mould.

Three-dimensional scanning technologies can also be used effectively to produce medical devices. Briggs et al. (2016) describe the use of low-cost 3D scanners in the production of 3D printed treatment masks for localised radiotherapy. Compared to traditional moulding methods that apply thermoplastic directly to the patient’s face, the use of the 3D scanner results in the production of a better-fitting 3D mask.

Almost all of the available studies evidence that low-cost 3D scanning solutions are becoming more commonplace in medical environments. While 3D scanning often requires the patient to remain still for several minutes (Liacouras et al. 2011), the scanners are easy to use, require only simple training to operate and increase the efficiency of the prosthesis design process, which in turn produces better results in shorter amounts of time (Briggs et al. 2016; Palousek et al. 2014). High-cost 3D scanning solutions are less prevalent; they do not often require advanced CAD skills to add additional textures, but they do require further training in post-processing and 3D print preparation to produce detailed models for the production of custom prostheses.
8.2.3 Computerised Facial Depiction

A commonality between medical artists, prosthetists and forensic artists is the use of the same or similar 3D scanning, modelling and manufacturing technologies, which require creative and knowledge-based processes to use effectively (Fantini et al. 2013). In the case of computerised facial depiction, the objective of which is to generate a life-like appearance of an individual from their digitised skeletal remains (Claes et al. 2010), 3D laser and structured light scanners and clinical imaging technologies can produce digital copies of skeletal remains in preparation for virtual facial depiction in CAD and VFX software. Three-dimensional technologies have been used to assess the accuracy of computerised facial depictions, where forensic artists are able to work on digital copies of skulls of living people, taken from CT or MRI scans. These skulls are 3D volume rendered, and then a facial depiction is produced in computer software. A finished facial depiction is then compared to a 3D volume rendering of the individual’s soft tissue face from the same clinical data (Wilkinson et al. 2006; Short et al. 2014; Lee et al. 2012).

Three-dimensional scanned topographic surfaces can also be used as templates to speed up the 3D modelling process in computerised facial depiction. Three-dimensional databases have been developed to facilitate this and include facial features such as noses, lips and ears, plus anatomical components that cannot be estimated from the skull alone but which are integral to the final appearance of the depiction, including the shoulders. This is similar to the Ear&Nose Digital Library developed by Fantini et al. (2013), where ‘clinicians and prosthetists are able to choose different models according to the correct anatomy of the patient in terms of both size and shape’. Some practitioners use entire 3D scanned face surfaces, which are then deformed to fit the anatomical structure of a skull, following skeletal analysis (Miranda et al. 2018). The deformation of a 3D skin surface to fit the skull creates a unique face; however, further modelling or the addition of facial features from a database is often required in order to ensure that the features used cannot be linked to the donor face. Echoing an observation by Eggbeer et al. (2006), these databases mostly comprise 3D shape. The lack of texture details on these 3D scans is especially important in computerised facial depiction, as the primary goal of a facial depiction in a forensic identification context is to identify unidentified individuals. By using 3D scanned facial features that contain highly detailed facial textures, there is a possibility that this will affect the recognition of the unknown individual. In these cases, it is common practice either to use low-cost, low-resolution 3D scanning solutions that capture shape reasonably well but not texture or to edit highly detailed 3D models taken by high-cost, high-resolution 3D scanning solutions so that they have smooth surfaces and are void of identifiable textures. Virtual sculpting of skin textures then occurs manually using CAD software, in order to create unique, non-identifiable skin surfaces.
Sculpting techniques commonplace in the computer games industry can be employed to add marks or wounds to the surface of the model by sculpting additional shapes and textures specific to a particular disease or injury, for example, the raised bumpy skin of someone with early eczema.
8.3 Capturing Skin Surfaces
8.3.1 Human Skin

The skin is the largest organ and covers the human body. It consists of three layers: the epidermis, the dermis and the hypodermis. Each layer contains different cells, fibres, follicles and glands:

• Epidermis – consists mainly of cells, including keratinocytes made from keratin protein and Langerhans cells.
• Dermis – consists of fibres of collagen and elastin, networks of blood vessels and nerves, hair follicles and glands with ducts that pass up through the skin.
• Hypodermis – a layer of fat that also insulates the body.

Each of these components contributes to the overall appearance of the skin due to their differing textural qualities and light scattering and absorption attributes. All layers of the skin are translucent. Light travels through the skin tissues, and the components that make up each layer of the skin cause light to scatter along the way (Jones 2006). This specularity often affects the capability of a 3D scanner to capture skin surfaces effectively. Eggbeer et al. (2006) note that visible skin surface texture is caused by the orientation and depth of lines made by epidermal cells and hair follicles, which form a criss-cross, polygonal pattern on the skin surface and are only noticeable on closer observation.
8.3.2 Three-Dimensional Capture

To capture the faceted surface appearance of the skin, high-resolution 3D scans are ideal (Jones 2006). In this section, the capture of skin surfaces using three 3D scanning technologies will be described: a low-resolution 3D Systems Sense© 3D scanner, a high-resolution Artec 3D Spider© scanner and reconstruction of data captured by clinical imaging machines. For guidance on 3D
capture using photogrammetry from images captured by DSLR cameras, see Erolin (2019), who states that with laser scanners and structured light scanners, dark, transparent and shiny surfaces, including hair, scatter and bounce light in uncontrollable directions, making it difficult for the scanner to capture data. These factors are often not an issue when using photogrammetry systems and software; however, with all of these technologies, standardised lighting conditions are preferred to accentuate the grooves and patterns on the surface of the skin (Jones 2006). The Sense© 3D scanner from 3D Systems is an example of a low-cost structured light solution. It has an accuracy of 0.90 mm, captures colour images and comes with proprietary Sense© software that has a simplified interface and post-processing tools (https://www.3dsystems.com/applications/3d-scanning). After plugging the scanner into a USB port on a computer and launching the Sense© software, the user points the scanner at the object to be scanned, holding it approximately 30–60 cm away from the object depending on its size. The user then slowly moves around the object while the scanner captures a number of images that will later be processed to construct a 3D model. The captured surface appears as a solid colour in the software, showing any missed parts as holes in the scan. Then, the software prompts the user to process the 3D scans and produce a solid, watertight 3D model, with options to crop, erase and edit the newly created shape using a number of virtual tools. Figures 8.3 and 8.4 show a 3D model produced from a Sense© 3D scan (as both a solid object and with captured colour visible) that could be exported as a number of 3D file types for 3D printing or imported into a number of CAD software packages. While this scanner does not capture fine surface details, it captures surface topography relatively well, as shown in Fig. 8.4, and is extremely easy to use.
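The structured-light principle quoted from Erolin above reduces, in its simplest stereo-like form, to triangulation: the lateral shift (disparity) of a projected pattern feature in the camera image encodes depth. The sketch below is a deliberately idealised pinhole-model illustration with hypothetical names; real scanners calibrate lens distortion and projector geometry far more carefully.

```python
def depth_from_disparity(disparity_px, baseline_mm, focal_px):
    """Idealised structured-light/stereo triangulation: a pattern
    feature shifted by `disparity_px` pixels, observed by a camera of
    focal length `focal_px` (in pixels) mounted `baseline_mm` from the
    projector, lies at depth = baseline * focal / disparity (in mm)."""
    return baseline_mm * focal_px / disparity_px
```

The inverse relationship is the key property: nearer surfaces produce larger pattern shifts, which is why the deformation of the projected pattern can be measured to recover an object's shape.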
Fine details would need to be added by virtually sculpting the surface of the 3D model in CAD software. One problem with this method is that the user can only scan surfaces that are visible. For example, if a hand is being 3D scanned but it is laid flat, palm down on a table, the scanner will only
8 Pores, Pimples and Pathologies: 3D Capture and Detailing of the Human Skin for 3D Medical…
Fig. 8.3 A 3D model shown as a solid object, produced from data scanned by a Sense© 3D scanner
Fig. 8.4 A 3D model shown with captured colour visible, produced from data scanned by a Sense© 3D scanner
capture the visible skin and possibly some of the surface of the table that the hand is placed on. Capturing the other side of the hand would require another scan to be taken. The capture and merging of both of these scans raise a number of design problems. The Sense© software does not allow for two separate scans to be imported into one virtual workspace, and there are no alignment tools available in the software. To merge
these surfaces, two separate 3D models would need to be produced, output as 3D files such as .STL files and imported into additional CAD software for manual alignment. This raises a number of issues with accuracy. A problem with the capture of non-rigid surfaces like the arms and hands is that they move. In rotating one’s hand to scan every visible surface with the Sense© 3D scanner, there is a high probability
that the person will move and the two scans will not match due to involuntary movement. The Artec Spider© is an example of a professional high-resolution blue-light scanner from Artec 3D that captures intricate details and complex forms such as the ears, with up to 0.05 mm accuracy, and high-resolution colour textures similar to photographs (Fig. 8.5) (https://www.artec3d.com/portable-3d-scanners). Multiple cameras enable the scanner to capture surfaces with complex morphology, unlike low-cost scanning solutions including some smartphones and the Sense© 3D scanner. The Artec Spider© requires a high-performance computer with a dedicated
M. Roughley
graphics card in order to run its proprietary Artec Studio Professional© software. Unlike the Sense© scanner, the Artec Spider© scanner requires both mains power and a USB connection, and the scanner needs to reach an optimal temperature in order to operate at maximum accuracy. Once the scanner is connected to a computer via USB, the Artec Studio Professional© software is launched. When optimal temperature has been reached, the ‘Scan’ function should be selected and the scanner positioned approximately 10–15 cm away from the surface to be scanned. The ‘Record’ button on the scanner should be pushed upwards to start scanning,
Fig. 8.5 Photograph of an ear that was 3D scanned; the 3D model of the ear captured by an Artec Spider© 3D scanner and the ear scans processed in Artec Studio Professional© to create a 3D model (with and without colour textures)
and on the screen, a green outline around the visualisation of the object in view of the scanner’s cameras will indicate that scanning is taking place. The user should slowly move around the object, keeping their distance within the centre of the green tolerance bars in the scanning workspace in the software interface. To stop scanning, the button on the scanner should be pushed downwards. This process can be repeated multiple times to ensure that the entire object has been scanned. A useful feature of the Artec Studio Professional© software is that multiple scans of an object or surface can be taken. The object being scanned can be repositioned in between scans in order for the user to capture every surface, and rather than having to output them as individual 3D models for manual alignment in other CAD software, the different scans are added as layers in the ‘Workspace’ window, similar to the ‘Layers’ window in Adobe Photoshop CC© (Fig. 8.6). Unwanted scans can be deleted at any time by clicking on the individual scan in this window or by opening the scan’s captured frame list and deleting individually captured image frames, by hitting the ‘Delete’ key on the keyboard. These features provide optimum flexibility when capturing data. Multiple scans can be
aligned using the ‘Align’ tool, which asks the user to mark similar surfaces on each scan before pressing the ‘Align’ button. The software will then match these surfaces to each other with high accuracy (Fig. 8.7). Once the scans are aligned, a variety of functions and tools, including ‘Global Registration’ to register individual frames and improve alignment accuracy and ‘Outlier Removal’ to remove unwanted stray scan data, can be used before the ‘Fusion’ tool is selected and a 3D surface is created from the scans (Fig. 8.8). Settings can be altered in the ‘Fusion’ tool’s drop-down menus in order to increase the quality of the rendered texture detail. Additional tools are available, including the ‘Hole Filling’ tool to fill holes and create a watertight model ready for 3D printing and the ‘Mesh Simplification’ tool to reduce the number of polygons that make up the 3D mesh prior to texturing or animation. Colour textures can be added to the 3D model if required, using the ‘Texture’ tools (Fig. 8.9), and the high-resolution 3D model can be exported as a number of file types. As additional sculpting of intricate skin textures is rarely required here, due to the accuracy and detail of the scanned data, the generated 3D surfaces can simply be cropped and exported for inclusion in a database or to be
Fig. 8.6 Multiple 3D scans of a face captured by an Artec Spider© scanner in Artec Studio Professional©
Fig. 8.7 Alignment of multiple 3D scans in Artec Studio Professional©
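For readers curious about the mathematics behind marker-based alignment of the kind shown in Fig. 8.7, the sketch below implements the classic Kabsch/Procrustes rigid fit on corresponding marker points. This is a generic Python/NumPy illustration under the assumption of known point correspondences; Artec Studio’s own algorithm is proprietary and not reproduced here.

```python
# Recover the rigid rotation R and translation t that best map one set of
# marked points onto another (least-squares, Kabsch/Procrustes method).
import numpy as np

def rigid_align(source, target):
    """Return R (3x3) and t (3,) minimising ||R @ source_i + t - target_i||."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Demo: rotate and translate a point set, then recover the transform.
rng = np.random.default_rng(0)
pts = rng.random((6, 3))
angle = np.pi / 5
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
moved = pts @ R_true.T + np.array([0.1, -0.2, 0.3])
R, t = rigid_align(pts, moved)
err = np.abs(moved - (pts @ R.T + t)).max()
print(f"max alignment error: {err:.2e}")
```

With exact correspondences the residual is at machine precision; real scan alignment additionally has to estimate the correspondences, which is where iterative methods such as ICP come in.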
Fig. 8.8 A ‘fusion’ 3D model of a face generated in Artec Studio Professional©
exported to CAD software for further editing or 3D print preparation. This workflow can be used to develop moulds for custom-designed prostheses. A number of open-source software packages exist for 3D volume rendering of clinical imaging data. InVesalius© (https://invesalius.github.io/) is free volume rendering software that can create 3D models from imported stacks of .TIFF or .DICOM file outputs from clinical imaging technologies (Roughley and Wilkinson 2019). Figure 8.10 shows a volume render of the MELANIX dataset, freely available for teaching and research from the online OsiriX DICOM Image Library (https://www.osirix-viewer.com/resources/dicom-image-library/). Here, it is possible to see that the generated low-resolution skin surface does not have fine or intricate skin surface details, and further
Fig. 8.9 A ‘fusion’ 3D model of a face generated in Artec Studio Professional with captured colour textures applied
Fig. 8.10 The MELANIX dataset 3D volume rendered in InVesalius©
sculpting would be required to add details such as pores and wrinkles.
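The surface-extraction step that volume rendering software such as InVesalius© performs can be understood as thresholding voxel intensities and keeping the tissue boundary. The Python/NumPy sketch below illustrates the principle on a small synthetic volume; the geometry and the threshold value are illustrative and are not taken from the MELANIX dataset.

```python
# Threshold a synthetic "CT" volume and find its boundary (skin-surface) voxels.
import numpy as np

# Synthetic volume: a bright sphere (soft tissue) in an air background,
# standing in for a stack of .DICOM slices.
size = 16
z, y, x = np.mgrid[:size, :size, :size]
radius = np.sqrt((x - 8) ** 2 + (y - 8) ** 2 + (z - 8) ** 2)
volume = np.where(radius < 5, 40, -1000)   # approximate Hounsfield units

threshold = -300                           # illustrative skin/air cut-off
mask = volume > threshold                  # voxels classified as tissue

# Surface voxels: inside the mask but with at least one outside neighbour.
interior = mask.copy()
for axis in range(3):
    for shift in (1, -1):
        interior &= np.roll(mask, shift, axis=axis)
surface = mask & ~interior
print(f"{mask.sum()} tissue voxels, {surface.sum()} on the surface")
```

A meshing step (e.g. marching cubes) would then turn these boundary voxels into the triangle surface seen in Fig. 8.10, at a resolution limited by the slice spacing, which is why fine pores and wrinkles are absent.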
8.4 Sculpting Skin Textures
Virtual sculpting of skin requires specialist 3D modelling software such as Autodesk Mudbox© and Pixologic ZBrush© to produce 3D models
that appear consistent with high-resolution photographs (Wilkinson 2005). This software is affordable and can be purchased with an education licence. When used with a low-cost scanning solution, the overall monetary investment to scan and sculpt 3D surfaces is kept at a minimum. There are a small number of basic operations needed to add detailed textures to the surface of a 3D model, meaning that even novice users can
generate highly detailed textured 3D surfaces (Vernon 2011). Virtual sculpting can be seen as an intuitive and efficient design method for texturing 3D models (YiFan and Kavakli 2006).
8.4.1 Texturing 3D Scanned Surfaces with Virtual Tools and High-Resolution Photographs

Figure 8.4 shows a low-resolution 3D scan of a face captured by a Sense© 3D scanner. Fine skin details can be added to the 3D surface in Pixologic ZBrush©. If the goal of the low-resolution 3D scan is simply to render the surface of the captured skin in VFX software, without applying the colour textures captured during the scanning process as colour, those texture maps can instead be assigned to an applied skin shader material as a bump map. This simulates the textural surface of skin and is useful if you do not need highly detailed sculpted skin and do not mind erroneous information from the original colour texture map, such as freckles, being present in a final render (see Fig. 8.11). Once exported from the Sense© software as an .STL file and imported into Pixologic ZBrush© using the ‘3D Print Importer’, the model should be retopologised using the ‘ZRemesher’ or ‘DynaMesh’ tools. These tools reorganise the polygons that make up the mesh (Fig. 8.12) and create suitable low-poly models prior to digital sculpting (Briggs et al. 2016; Roughley and Wilkinson 2019). While UV maps are not important if the model is to be 3D printed, producing one at this stage means that colour can potentially be painted onto the surface of the model at a later date and used in VFX software for colour rendering and animation. In order to create topology for fine detailing, the number of polygons should be increased by sub-dividing the mesh a number of times with the ‘Divide’ tool. This is good practice, as models intended for animation can be exported as low-poly models, with any high-resolution textures sculpted on other high-poly sub-division layers and exported separately using displacement maps. This makes rendering of the model in VFX software more efficient (see Webster 2017). Once the model has been sub-divided approximately four times, the sculpting of fine skin textures can commence. Sculpting human skin on a 3D model (the primary form) should be approached in three stages: sculpting of micro-details, tertiary details and secondary forms (Spencer 2011). In ‘Zadd’ sculpting mode, the tools raise (emboss) the surface, and in ‘ZSub’ sculpting mode, the tools carve into (deboss) the surface of the 3D model.
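The effect of the ‘Divide’ tool on the polygon budget can be estimated with simple arithmetic: each subdivision level splits every face into four, so n levels multiply the face count by 4^n. The helper below is an illustration (the 5,000-face starting count is hypothetical), showing why around four levels are already enough to carry fine skin detail.

```python
# Estimate face counts after repeated mesh subdivision ('Divide' tool).
# Each level replaces every face with four smaller ones.
def faces_after_divide(base_faces: int, levels: int) -> int:
    return base_faces * 4 ** levels

for level in range(5):
    print(f"level {level}: {faces_after_divide(5000, level):>9,} faces")
```

Four divisions of a 5,000-face scan already yield over a million polygons, which is why the detail is kept on a high subdivision level and baked to a displacement map rather than exported directly.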
8.4.2 Micro-details

Micro-details include features from the dermis – pores and hair follicles – and can easily be added to the surface of a 3D model by using the ‘Standard’ brush with a ‘DragRect’ or ‘Spray’ stroke and a grayscale ‘alpha’ image. Alpha images can be downloaded for free from online libraries, including the Pixologic download centre (https://pixologic.com/zbrush/downloadcenter) (Fig. 8.13), and imported into Pixologic ZBrush© for sculpting. Custom ‘Skin’ brushes can also be downloaded and imported to sculpt skin textures directly without alphas. To apply an alpha to a 3D surface in ‘Edit’ mode, the user selects ‘ZSub’ and a ‘Standard’ brush with a moderate sculpting intensity of approximately 15. A ‘DragRect’ stroke with a ‘pore’ alpha downloaded from the Pixologic download centre can be assigned to the brush; the user then places the cursor on the surface of the 3D model and, holding down the left mouse button, drags the cursor across the surface until the alpha image is stamped into the surface (Fig. 8.14). The user can alter the intensity of the sculpting mode until the desired texture detail is realised.
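Conceptually, stamping an alpha amounts to subtracting (‘ZSub’) or adding (‘Zadd’) a scaled grayscale image to the local surface height. The Python/NumPy sketch below mimics this on a simplified heightfield grid; the radial ‘pore’ alpha and the intensity value of 0.15 are illustrative, not ZBrush internals.

```python
# Stamp a grayscale 'alpha' image into a heightfield, mimicking a DragRect stroke.
import numpy as np

def stamp_alpha(height, alpha, row, col, intensity=0.15, mode="zsub"):
    """Deboss ('zsub') or emboss ('zadd') `alpha` onto `height` at (row, col)."""
    h, w = alpha.shape
    sign = -1.0 if mode == "zsub" else 1.0
    height[row:row + h, col:col + w] += sign * intensity * alpha
    return height

# Synthetic radial 'pore' alpha, brightest in the centre.
yy, xx = np.mgrid[-3:4, -3:4]
pore = np.clip(1.0 - np.sqrt(xx ** 2 + yy ** 2) / 3.0, 0.0, 1.0)

skin = np.zeros((32, 32))                 # toy patch of flat "skin"
stamp_alpha(skin, pore, 10, 10)           # one pore, pressed into the surface
print(f"deepest point: {skin.min():.3f}")
```

Repeating the stamp at randomised positions and sizes is essentially what a ‘Spray’ stroke automates.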
8.4.3 Tertiary Details

Tertiary details include the criss-cross patterning of the epidermis and light wrinkling.
Fig. 8.11 A low-resolution 3D scan from a Sense© 3D scanner viewed in the Sketchfab© model uploader (https://sketchfab.com/) with and without a bump map applied
After micro-details have been sculpted, the tertiary details can be added using the ‘DragRect’ stroke, the ‘Standard’ brush and ‘skin’ and ‘wrinkle’ alphas downloaded from the Pixologic download centre. Custom alphas, including pre-made crow’s feet, eye bags, lip prints and neck and forehead creases, are available for free download from similar online repositories and are used in the same manner. Criss-cross patterning of the epidermis can be added to the surface of the model in ‘ZSub’ edit mode by using the ‘Standard’ brush with a ‘Spray’ stroke and ‘Alpha 56’
selected. The user draws the alpha in a randomly sprayed pattern by using the left mouse button and dragging the cursor over the surface and can alter the brush size until the desired effect is produced (Fig. 8.15). The user can be quite carefree with this approach, and effective results are often achieved with very little effort. ‘Alpha 08’ and ‘Alpha 07’ can also add deeper pore details either using the ‘Spray’ or ‘DragRect’ strokes. The ‘Dam Standard’ brush can be used to detail shallow wrinkles.
Fig. 8.12 A retopologised, low-resolution 3D model using the ‘ZRemesher’ tool in Pixologic ZBrush©
Fig. 8.13 Sample skin alphas (wrinkles on the left and pores on the right) from the Pixologic ZBrush© download centre https://pixologic.com/zbrush/downloadcenter
8.4.4 Secondary Forms

Secondary forms comprise deeper wrinkles and skin creases. The ‘Dam Standard’ brush is highly effective in sculpting these details, especially when the ‘ZSub’ intensity and the brush size are increased. Deeper pores can be added by using the ‘Standard’ brush with ‘Spray’ or ‘DragRect’ strokes and with ‘Alpha 07’ selected (Fig. 8.16). To add additional details such as warts, moles and skin tags, ‘Alpha 51’ and ‘Alpha 15’ can be used and the ‘Zadd’ intensity increased depending on the surface relief required. In addition to the methods described above, high-resolution images, including photographs, can be embossed on to the surface of a 3D model.
Fig. 8.14 Micro-detailing of the surface of a 3D model in Pixologic ZBrush© using imported alphas (see Fig. 8.13)
Fig. 8.15 Tertiary details added to the surface of a 3D model in Pixologic ZBrush© using a variety of alphas and brushes
Human skin micro-maps – extremely detailed grayscale images of the different layers of the human skin – are available for purchase from online libraries including Texturing XYZ (https://texturing.xyz/). They add both tertiary and micro-details to the surface of a 3D model when used as alphas in Pixologic ZBrush© (Fig. 8.17). Potentially, this is an easier method for novice sculptors or for those with limited time, as the maps emboss multiple details at once. However, this method lacks flexibility and relies on the details from a donor face whose topography might be vastly different from the one being sculpted on. High-resolution photographs, preferably those captured by DSLR cameras with standardised
Fig. 8.16 Secondary forms added to the surface of a 3D model in Pixologic ZBrush© using a variety of alphas and brushes
Fig. 8.17 A cheek micro-map from Texturing XYZ being used as an alpha to add both tertiary and micro-details to the surface of a 3D model in Pixologic ZBrush©
studio lighting, can be embossed onto the surface of a 3D model using the ‘Spotlight’ function. High-quality images can be downloaded and used for free from the ZBrush download centre (http://pixologic.com/zbrush/downloadcenter/texture). From the ‘Texture’ menu, the ‘Spotlight’ tool can be selected and a high-resolution photograph imported. When ‘Spotlight’ is activated,
the imported image will be visible above the 3D model in the main view (Fig. 8.18). This image can be repositioned and the opacity altered using the ‘Spotlight’ wheel. Once in the desired position and with ‘Zadd’ or ‘Zsub’ edit mode selected and the intensity set to approximately 15, a ‘Standard’ brush can be drawn over the pho-
Fig. 8.18 High-resolution photograph projected on top of the 3D model in Pixologic ZBrush© using the ‘Spotlight’ function
Fig. 8.19 High-resolution photograph embossed on to the 3D model in Pixologic ZBrush© using the ‘Spotlight’ function and a ‘Standard’ brush
tograph, and the textures will be embossed on to the surface of the 3D model (Fig. 8.19). Figure 8.20 shows a low-resolution 3D scan of a face captured by a Sense© 3D scanner, which was retopologised and sub-divided prior to sculpting. In order to show the realistic finish possible following the methods described here, micro-, tertiary and secondary skin textures were added on one side, with reference to methods described by Kingslien (2011) and Spencer (2010, 2011). In comparison with a 3D scan of the same individual using an Artec Spider© 3D scanner (Fig. 8.8), the newly sculpted skin textures and the 3D captured skin textures are comparable.

Fig. 8.20 A low-resolution 3D scan from a Sense© 3D scanner that has been retopologised and sub-divided, with fine skin textures added on one side

Upon completion of virtual sculpting, the 3D model can be exported as an .OBJ file (with associated UV, texture and displacement maps) for rendering in VFX software or as a .STL file for 3D print preparation ahead of fabrication.
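The .OBJ file mentioned above is a plain-text format, so its essentials are easy to show. The sketch below writes a minimal OBJ with vertex (‘v’) and 1-indexed face (‘f’) records only; real exports from sculpting software also include UV (‘vt’) and normal (‘vn’) data, which are omitted here for brevity.

```python
# Write a minimal Wavefront .OBJ file: 'v' lines for vertices,
# 'f' lines for faces (OBJ face indices start at 1, not 0).
def write_obj(path, vertices, faces):
    with open(path, "w") as fh:
        for x, y, z in vertices:
            fh.write(f"v {x} {y} {z}\n")
        for tri in faces:
            fh.write("f " + " ".join(str(i + 1) for i in tri) + "\n")

verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
tris = [(0, 1, 2)]
write_obj("triangle.obj", verts, tris)
print(open("triangle.obj").read())
```

Because the format is line-oriented text, exported models can be inspected, or lightly post-processed, with ordinary scripting tools before rendering or print preparation.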
8.4.5 Skin Diseases

If the purpose of the 3D model is to be used as a teaching aid to describe a particular disease, either as a 3D printed model or as a rendered 2D image, additional sculpting tools can be used to sculpt the skin surface. Figure 8.21 shows a 3D model that has been edited to depict squamous cell carcinoma. Here, a number of brushes were used to deboss and emboss the surface of the skin. The ‘ClayBuildup’ brush can be used to quickly build up or engrave the surface by dragging the cursor over the required area, increasing the intensity and size of the brush and alternating between ‘Zadd’ and ‘ZSub’ modes. A number of alphas can then be sculpted in layers to create the desired textural finish. The ‘Inflat’ brush can raise the surface with a higher relief; this can be useful for sculpting warts, carbuncles and lipomas.
In order to preserve already sculpted skin textures and focus the sculpting of a skin disease on a defined area, a mask can be used to protect the areas of the 3D model that do not require editing. To mask an area, the user holds down the ‘Ctrl’ key on the keyboard, which activates the masking brush, and paints the area of the model to be protected with an opaque dark grey colour. Once sculpting of the pathology is complete, the mask can be cleared by navigating to the ‘Masking’ tool palette in the right-hand drop-down menus and clicking ‘Clear Mask’. If the skin condition has a smooth, shiny surface, the ‘Smooth’ brush can be used to remove any sculpted details. This follows the same procedure as the masking brush, but holding the ‘Shift’ key instead of the ‘Ctrl’ key.
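The masking behaviour described above amounts to zeroing a brush’s effect wherever the mask has been painted. A minimal Python/NumPy sketch, with a toy one-dimensional ‘surface’ standing in for mesh vertices (names here are illustrative, not ZBrush internals):

```python
# Apply a brush stroke to a surface while respecting a painted mask:
# masked vertices are protected and receive no displacement.
import numpy as np

heights = np.zeros(10)                 # toy 1-D "surface"
mask = np.zeros(10, dtype=bool)
mask[:5] = True                        # first half painted, i.e. protected

brush = np.full(10, 0.2)               # a uniform build-up pass
heights += np.where(mask, 0.0, brush)  # the mask blocks the stroke

print(heights)
```

Clearing the mask simply resets the boolean array, after which strokes affect the whole surface again.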
8.5 Summary
Software and workflows primarily used in engineering and visual effects industries can be adopted to augment 3D scanned skin for use in medical visualisation, medical illustration (2D image rendering and 3D animation), custom prostheses design and computerised facial depiction. Three-dimensional captured surfaces can be
Fig. 8.21 A 3D model with sculpted textures to represent squamous cell carcinoma on the cheek
textured using high-resolution photographs and a variety of virtual tools with realistic results. The 3D scanning technologies and computer modelling software referred to throughout this chapter are currently utilised by research groups and educational and healthcare institutions worldwide, including the Sense© 3D scanner used by Face Lab at Liverpool School of Art and Design,3 the Artec 3D Spider© scanner used by the Royal Hospital for Sick Children,4 and Pixologic ZBrush© utilised by the Centre for Anatomy and Human Identification at the University of Dundee.5
3. Face Lab research group online profile: https://www.ljmu.ac.uk/research/centres-and-institutes/institute-of-art-and-technology/expertise/face-lab. Accessed 12/01/2020.
4. Stewart K (2017) Using 3D scanning and printing to help children with ear deformities. Available online: https://www.artec3d.com/cases/prosthetic-3d-printed-ear-implants. Accessed 12/01/2020.
5. Erolin C (2016) Anatomical 3D visualisation: Scanning bones to reconstruct the appearance of people and animals. Available online: https://www.artec3d.com/news/anatomical-3d-visualization-at-university-of-dundee. Accessed 12/01/2020.
References

Bibb R, Freeman P, Brown R, Sugar A, Evans P, Bocca A (2000) An investigation of three-dimensional scanning of human body surfaces and its use in the design and manufacture of prostheses. Proc Inst Mech Eng H J Eng Med 214(6):589–594
Briggs M, Clements H, Wynne N, Rennie A, Kellett D (2016) 3D printed facial laser scans for the production of localised radiotherapy treatment masks – a case study. J Vis Commun Med 39(3–4):99–104
Challoner A, Erolin C (2013) Creating pathology models from MRI data: a comparison of virtual 3D modelling and rapid prototyping techniques. J Vis Commun Med 36(1–2):11–19
Cingi C, Oghan F (2011) Teaching 3D sculpting to facial plastic surgeons. Facial Plast Surg Clin 19(4):603–614
Ciocca L, De Crescenzio F, Fantini M, Scotti R (2010) CAD/CAM bilateral ear prostheses construction for Treacher Collins syndrome patients using laser scanning and rapid prototyping. Comput Methods Biomech Biomed Engin 13(3):379–386
Claes P, Vandermeulen D, De Greef S, Willems G, Clement J, Suetens P (2010) Computerized craniofacial reconstruction: conceptual framework and review. Forensic Sci Int 201(1–3):138–145
Eggbeer D, Evans PL, Bibb R (2006) A pilot study in the application of texture relief for digitally designed facial prostheses. Proc Inst Mech Eng H J Eng Med 220(6):705–714
Erolin C (2016) Anatomical 3D visualization: scanning bones to reconstruct the appearance of people and animals. Available online: https://www.artec3d.com/news/anatomical-3d-visualization-at-university-of-dundee. Accessed 12/01/2020
Erolin C (2019) Interactive 3D digital models for anatomy and medical education. In: Rea P (ed) Biomedical visualisation. Springer, Cham, pp 1–16
Fantini M, De Crescenzio F, Ciocca L (2013) Design and rapid manufacturing of anatomical prosthesis for facial rehabilitation. Int J Interact Des Manuf 7(1):51–62
Fujieda K, Okubo K (2016) A reusable anatomically segmented digital mannequin for public health communication. J Vis Commun Med 39(1–2):18–26
Jones B (2006) Approximating the appearance of human skin in computer graphics. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.99.2576&rep=rep1&type=pdf
Kingslien R (2011) ZBrush studio projects: realistic game characters. Wiley, New York
Lee W-J, Wilkinson CM, Hwang H-S (2012) An accuracy assessment of forensic computerized facial reconstruction employing cone-beam computed tomography from live subjects. J Forensic Sci 57:318–332
Liacouras P, Garnes J, Roman N, Petrich A, Grant GT (2011) Designing and manufacturing an auricular prosthesis using computed tomography, 3-dimensional photographic imaging, and additive manufacturing: a clinical report. J Prosthet Dent 105(2):78–82
Mahoney G, Wilkinson C (2010) Computer generated facial depiction. In: Wilkinson CM, Rynn C (eds) Craniofacial identification. Cambridge University Press, Cambridge, pp 222–237
Markiewicz MR, Bell RB (2011) The use of 3D imaging tools in facial plastic surgery. Facial Plast Surg Clin 19(4):655–682
McMenamin PG, Quayle MR, McHenry CR, Adams JW (2014) The production of anatomical teaching resources using three-dimensional (3D) printing technology. Anat Sci Educ 7(6):479–486
Miranda GE, Wilkinson CM, Roughley M, Beaini TL, Melani RFH (2018) Assessment of accuracy and recognition of three-dimensional computerized forensic craniofacial reconstruction. PLoS ONE 13:5
Moore CW, Wilson TD, Rice CL (2017) Digital preservation of anatomical variation: 3D-modeling of embalmed and plastinated cadaveric specimens using uCT and MRI. Ann Anat Anatomischer Anzeiger 209:69–75
Palousek D, Rosicky J, Koutny D (2014) Use of digital technologies for nasal prosthesis manufacturing. Prosthetics Orthot Int 38(2):171–175
Roughley MA, Wilkinson CM (2019) The affordances of 3D and 4D digital technologies for computerized facial depiction. In: Rea P (ed) Biomedical visualisation. Springer, Cham, pp 87–101
Short LJ, Khambay B, Ayoub A, Erolin C, Rynn C, Wilkinson C (2014) Validation of a computer modelled forensic facial reconstruction technique using CT data from live subjects: a pilot study. Forensic Sci Int 237
Singare S, Zhong S, Xu G, Wang W, Zhou J (2010) The use of laser scanner and rapid prototyping to fabricate auricular prosthesis. In: 2010 international conference on E-product E-service and E-entertainment, IEEE, pp 1–3
Spencer S (2010) ZBrush digital sculpting human anatomy. Wiley, Hoboken
Spencer S (2011) ZBrush character creation: advanced digital sculpting. Wiley, Hoboken
Stewart K (2017) Using 3D scanning and printing to help children with ear deformities. Available online: https://www.artec3d.com/cases/prosthetic-3d-printed-ear-implants. Accessed 12/01/2020
Thomas DB, Hiscox JD, Dixon BJ, Potgieter J (2016) 3D scanning and printing skeletal tissues for anatomy education. J Anat 229(3):473–481
Vaiude P (2017) Surgical-Art: art in surgery, presented at the Liverpool Medical Institution, 16/10/18
Vernon T (2011) ZBrush. J Vis Commun Med 34(1):31–35
Vernon T, Peckham D (2002) The benefits of 3D modelling and animation in medical teaching. J Audiov Media Med 25(4):142–148
Webster NL (2017) High poly to low poly workflows for real-time rendering. J Vis Commun Med 40:40–47
Wilkinson C (2005) Computerized forensic facial reconstruction. Forensic Sci Med Pathol 1(3):173–177
Wilkinson C, Rynn C, Peters H, Taister M, Kau CH, Richmond S (2006) A blind accuracy assessment of computer-modeled forensic facial reconstruction using computed tomography data from live subjects. Forensic Sci Med Pathol 2:179–187
YiFan GAO, Kavakli M (2006) VS: facial sculpting in the virtual world. In: 2006 international conference on computational intelligence for modelling control and automation and international conference on intelligent agents web technologies and international commerce (CIMCA’06), IEEE, pp 35–35
9 Extending the Reach and Task-Shifting Ophthalmology Diagnostics Through Remote Visualisation

Mario E. Giardini and Iain A. T. Livingstone
Abstract
Driven by the global increase in the size and median age of the world population, sight loss is becoming a major public health challenge. Furthermore, the increased survival of premature neonates in low- and middle-income countries is causing an increase in developmental paediatric ophthalmic disease. Finally, there is an ongoing change in health-seeking behaviour worldwide, with consequent demand for increased access to healthcare, including ophthalmology. There is therefore the need to maximise the reach of resource-limited ophthalmology expertise in the context of increasing demand. Yet, ophthalmic diagnostics critically relies on visualisation, through optical imaging, of the front and of the back of the eye, and teleophthalmology, the remote visualisation of diagnostic images, shows promise to offer a viable solution. In this chapter, we first explore the strategies at the core of teleophthalmology and, in particular, real-time vs store-and-forward remote visualisation techniques, including considerations on suitability for different tasks and environments. We then introduce the key technologies suitable for teleophthalmology: anterior segment imaging, posterior segment imaging (retinal imaging) and, briefly, radiographic/tomographic techniques. We highlight enabling factors, such as high-resolution handheld imaging, high data rate mobile transmission, cloud storage and computing, 3D printing and other rapid fabrication technologies and patient and healthcare system acceptance of remote consultations. We then briefly discuss four canonical implementation settings, namely, national service provision integration, field and community screening, optometric decision support and virtual clinics, giving representative examples. We conclude with considerations on the outlook of the field, in particular, on artificial intelligence and on robotic actuation of the patient end point as a complement to televisualisation.

M. E. Giardini
Department of Biomedical Engineering, University of Strathclyde, Glasgow, Scotland, UK
e-mail: [email protected]

Keywords
Teleophthalmology · Remote visualisation · Teleconsultation · Store-and-forward · Retinal imaging · Anterior segment imaging · Virtual clinics
I. A. T. Livingstone (*)
NHS Forth Valley, Larbert, Scotland, UK
e-mail: [email protected]

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020
P. M. Rea (ed.), Biomedical Visualisation, Advances in Experimental Medicine and Biology 1260, https://doi.org/10.1007/978-3-030-47483-6_9
9.1 Background
The world population is in constant growth. From 7.7 billion in 2019, it is expected to reach 9.8 billion in 2050, with a current growth rate of 83 million/year (United Nations 2019). Simultaneously, life expectancy is rapidly increasing. In 2015, the populations aged 60 and over and 80 and over were 600 million and 125 million, respectively, and are expected to increase to 2 billion and 430 million, respectively, by 2050, with 80% of these living in low- and middle-income countries (World Health Organization 2018). One of the biggest challenges of healthcare provision in the twenty-first century is therefore to find innovative solutions to build capacity, yet without sacrificing the quality of care, in a context of a globally decreasing resource-to-demand ratio. In lower-income countries, eye diseases such as river blindness and trachoma constitute the primary cause of sight loss. In higher-income countries, other population-level threats to eyesight are emerging. Driven by lifestyle changes and age, diabetes is increasing across population groups, both older and younger, and diabetic retinopathy has been added to the World Health Organization's priority list for visual impairment. Glaucoma remains a public health priority due to its complex early diagnosis and its requirement for lifelong treatment. The prevalence of age-related macular degeneration (AMD), currently 8.7% globally, is rapidly increasing, and AMD is now the leading age-related cause of visual impairment in high-income countries (World Health Organization 2019a). In 2010, the International Council of Ophthalmology conducted a survey to establish, on a global scale, the size of the population needing ophthalmic services, the number of ophthalmologists in practice and training and the related temporal trends (Resnikoff et al. 2012).
The survey highlighted that, globally, the ophthalmic population is growing faster than the number of ophthalmologists, albeit somewhat slower than the general population. Amongst those aged 60 or more, the growth rate is double that of the number of available ophthalmologists. In the current service delivery model, a shortfall in the number of practising ophthalmologists is therefore expected, both in high- and in low-income countries. Indeed, already at present, there is a clear inverse correlation between the prevalence of visual impairment and the number of locally available ophthalmologists (Bastawrous and Hennig 2012), indicating that the incidence of visual impairment is resource-limited by eye care availability. Additionally, health-seeking behaviour (HSB) is rapidly increasing, with a direct impact on service utilisation. Population and healthcare system characteristics and the external environment, including social and economic determinants, are key influencing factors (Mackian et al. 2004). While the modelling of the interactions between these factors and service utilisation is complex, it is clear that, on a global scale, there is a trend towards an HSB-driven increase in the global service utilisation burden, both in high- and in low-income countries and across multiple medical disciplines (Clewley et al. 2018; Shaikh and Hatcher 2004; Ahmed et al. 2000). Ophthalmology is no exception, and the understanding of the interrelation between the perceived severity of manifest conditions and the necessity to proactively seek eye health is common to both high- and low-income countries (Ebeigbe 2018; Fallatah 2018). Finally, worldwide 10% of births are preterm. Yet, oxygen administration in incubators, while necessary for survival, can induce anomalous retinal vessel growth (retinopathy of prematurity, ROP), ultimately resulting in complete retinal detachment and consequent blindness. Incidence is difficult to estimate in a single comprehensive figure, as it critically depends on gestational age at birth, weight at birth, preterm management protocols and survival rates, amongst other factors.
9 Extending the Reach and Task-Shifting Ophthalmology Diagnostics Through Remote Visualisation

M. E. Giardini and I. A. T. Livingstone

For reference, incidences of 25–35% of severe ROP in infants with a gestational age at birth of 27 weeks or less have been reported in high-income countries (Hellstrom et al. 2013) and, in 2010, approximately 170,000 preterm infants developed some stage of ROP globally, of which 20,000 became blind or severely visually impaired, and a further 12,000 mildly/moderately impaired (Blencowe et al. 2013). Importantly, 65% of ROP-related visually impaired infants were born in middle-income countries, where survival rates are increasing due to improving perinatal care (Freitas et al. 2018). Similar growth trends are reported in high-income countries (Holmstrom et al. 2018). It is therefore clear that there is a need to maximise the reach of resource-limited ophthalmology expertise in a context of increasing demand.

Beyond the resource limitations inherently encountered in low-income countries, a clear, direct link between scarcity of ophthalmology resources and social deprivation is beginning to emerge in high-income countries as well. As early as 2014, electronic connectivity was put forward in policy recommendations by several UK charities as a viable solution to extend the reach of ophthalmology services, for the provision of accessible, equitable and timely integrated access to eye services (The College of Optometrists 2014).

9.2 Strategies

In a client-to-provider context, the World Health Organization recommends telemedicine as a complement, rather than a replacement, to face-to-face healthcare delivery, with institutional monitoring of standard operating procedures, patient consent collection, data protection and provider licensing and credentials. Provider-to-provider telemedicine is seen as a method to extend coverage for individual healthcare providers (World Health Organization 2019b). In both cases, telemedicine consultations can proceed in two modalities. In real-time telemedicine, the remote ('doctor') and local ('patient') sites are connected through a direct video and audio feed, enabling live interaction between doctor and patient. Alternatively, in store-and-forward (asynchronous) telemedicine, information such as health records, radiology scans and other imaging diagnostics, and test results is gathered at the local site and transmitted to, or retrieved by, the remote site at a later time, for review when the patient is no longer present.

Both modalities are employed in the widest range of environments, from clinics to community care to field medicine. The use cases are, however, significantly different, as the two approaches imply significantly different prerequisites in the need for patient presence. Real-time telemedicine is suitable for providing remote consultation to patients who require immediate advice. To all intents, it aims to replace in-person meetings, reducing the need for the patient or physician to travel to a separate facility, for example, for a separate consultation (AMD Global Telemedicine 2019). Akin to a traditional face-to-face consultation, it allows the clinician to explore additional history or examination findings and enables dialogue between the clinician, referrer and/or patient. Conversely, as they rely entirely on qualified data collection and interpretation, store-and-forward consultations are predominantly used for doctor–doctor interactions, with significant prevalence in radiology, pathology, dermatology and ophthalmology (National Center for Connected Health Policy 2019a).

Unsurprisingly, therefore, the two techniques, if used within the same context, can yield significantly different outcomes. Comparative studies in teledermatology highlight that, when used without use case differentiation, real-time and store-and-forward approaches agree in only half of cases, with agreement on the resultant treatment plans in less than half. This significant discrepancy has been directly attributed to the absence of patient–doctor interaction in store-and-forward approaches, which, while significantly cheaper than real time, constrains the ability to gather clinically relevant information (Loane et al. 2008). Yet, the same study highlights how, on the same cases, store-and-forward paradigms require an order of magnitude less time than real-time consultations, and this reduction in time may arguably act as a confounding factor in reliability analysis. With specific regard to teleophthalmology, the evidence base for an effectiveness comparison between store-and-forward and real-time consultations is fragmented. Yet, the landscape of teleophthalmology trials is active and dynamic globally, on a very
reasonable presumptive basis, given the prerequisites (Sim et al. 2016; Sreelatha and Ramesh 2016).

Understandably, reimbursement rates and policies regulating the two approaches are significantly different, with both private insurers and public legislation favouring real-time patient–doctor video consultations over store-and-forward doctor–doctor, or non-video real-time, interaction (e.g. phone-based), which are considered inherently less expensive and, in most instances, already intrinsically integrated in the current workflows (National Center for Connected Health Policy 2019b).

9.3 Technologies and Factors

9.3.1 Eye Visualisation Technologies

Traditionally, ophthalmology critically relies on the visualisation of eye structures, supported by a set of auxiliary measurements (intraocular pressure, corneal topography, etc.) and by functional tests (e.g. visual acuity and visual field tests). Within teleophthalmology, imaging constitutes the first-line approach and can be broadly divided into three core families. In anterior segment imaging, optical images of the components of the eye directly visible from a general external inspection (eyelids, conjunctiva, cornea, sclera, anterior chamber, iris and lens) are imaged optically through some form of microscope or close-up inspection lens. In posterior segment imaging, visual access to the retina is gained through the pupil, and the retina is visualised through the lens using appropriate optics, such as an ophthalmoscope or a retinal camera. In radiographic and tomographic techniques (Optical Coherence Tomography – OCT, ultrasound imaging and large-scale tomography), images are derived through computer reconstruction rather than by direct optical visualisation.

Potentially, all these imaging modalities lend themselves to remote transmission and visualisation of images, under the prerequisite that the related instrumentation needs to be compatible in cost, ruggedness and ease of use for deployment at the specific patient locations and for operation by a non-specialist when required.

Both anterior and posterior segment imagers can be implemented using full-featured, full-performance instruments or task-specific low-cost, low-skill instrumentation, as required by the specific use cases.

Anterior segment imaging is traditionally performed in the clinic using a binocular slit lamp (Carl Zeiss Meditec AG n.d.). The slit lamp design has remained essentially unchanged since its invention in 1911 (Timoney and Breathnach 2013). The patient rests their head on a head-and-chin rest, and the ophthalmologist looks at the anterior segment using a long-working-distance stereomicroscope. The illumination is provided by a light source that projects light onto the eye, and that can be sized/rotated to illuminate the eye from different directions. Additionally, for the visualisation of de-epithelialised lesions, fluorescent eye drops can be applied to the eyes, and the eye illuminated with an auxiliary blue light.

Given that the slit lamp is essentially a stereomicroscope with a highly specialised light source, digital images can easily be captured either through a digital microscope camera port or by attaching the camera to the slit lamp eyepieces via add-on adapters. While these solutions can indeed provide an easy path to remote visualisation, for example, by transmitting the video feed or the images captured by the camera through commercial videoconferencing software, at the patient side the eyes are in any case visualised using a slit lamp, thus requiring an appropriately trained operator. Yet, in emergency and field ophthalmology, the majority of necessary anterior segment imaging information can be obtained, much more simply, by observation through a magnifying glass. Indeed, emergency teleophthalmology in the field has been demonstrated using nothing more than a simple macro lens attached to a mobile phone (Ribeiro et al. 2014). Dedicated anterior segment imagers working on this principle, comprising a macro lens, a white/blue light source and a battery, are now commercially available as mobile phone add-ons (Eidolon Optical 2019; Tuteja 2016).
The optical visualisation of the retina (posterior segment imaging) is somewhat more complex. Indeed, the sole optical access to the retina is through the pupil, effectively a small hole at the centre of the iris. Yet, by its intrinsic nature, the retina lies on the focal surface of the imaging optical elements of the eye (primarily the cornea and lens). This means that the retina is optically conjugate to infinity or, in other words, if we were able to look directly into a subject's pupil, we would see the retina as a large surface at infinite distance. Indeed, the simplest way to look at the retina is for the ophthalmologist to move very close to the eye, limited only by contact with the patient, and to inspect the retina through the pupil using an appropriate device to illuminate the interior of the eye. This technique, known as direct ophthalmoscopy, has been the mainstay of posterior segment visualisation since its invention by Hermann von Helmholtz in 1851, building upon prior studies by Charles Babbage (Keeler 2003). As the retina is visualised through the tiny aperture of the pupil, the field of view is small, akin to visualising the interior of a darkened room through the keyhole of the door. Even when the pupil is dilated pharmacologically, the angular aperture does not exceed 5–10°. When a larger field is desired, a large converging lens with short focal length can be interposed in front of the eye, creating a wide-field image of the retina through the pupil, which in turn can be observed using optical instruments (indirect ophthalmoscopy) (Bass 2009). Using this strategy, a slit lamp can indeed be employed also for posterior segment imaging. To do this, the converging lens is held by hand between the slit lamp and the eye, creating an image of the retina through the pupil, which is then inspected using the slit lamp microscope. Alternatively, instead of the slit lamp, an imaging system, effectively a dedicated magnifying loupe with an illuminator, can be worn head-mounted by the ophthalmologist, again holding the converging lens by hand in front of the patient. In both cases, the alignment and mutual distances between patient, lens and slit lamp/head-mounted optics are critical, and the technique requires significant skill and training to master, tending to be limited to secondary/tertiary care.

Simplified monocular versions can be miniaturised as handheld devices, reducing the skills required while somewhat sacrificing the field of view, which reduces to approximately 25° (Welch-Allyn 2019a). To further simplify and de-skill the procedure, in principle, the microscope and lens can be rigidly mounted on a frame. Indeed, this is the operating principle of retinal cameras, whereby a self-contained unit encompasses the converging lens, the light source, the microscope and a (digital) camera for image acquisition. The field of view of retinal cameras can be on the order of 90°. These devices, albeit expensive and requiring a tabletop stationary mount, are sufficiently user friendly to be operated by lay personnel with simple spot training. Due to the significant skills required at the patient end, slit lamps and head-mounted binocular indirect ophthalmoscopes are generally not suitable for community-based teleophthalmology, where they would be operated by lay operators. Retinal cameras, in providing a direct digital image feed, can be easily interfaced to data transmission and teleconferencing software, and have indeed been employed in teleophthalmology trials, including trials in austere environments and using lay operators (Bastawrous 2014). For low- and middle-income settings, major downsides relate to the cost of the instruments, their size and weight, and the need for a connection to mains electricity, often problematic in the field. Yet, handheld teleophthalmology-enabled devices can be obtained by using either direct or monocular indirect ophthalmoscopes, and replacing the operator's eye with the camera of a mobile phone. The accommodating power of the operator's eye, necessary for these devices to function, is in this case replaced by the autofocus features of the phone camera. Indeed, monocular handheld indirect ophthalmoscopes, when used with a mobile phone, can yield fields of view similar to those obtained by direct vision (Welch-Allyn 2019b).
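The figures quoted in this section (the 5–10° direct field, and the wider field with a condensing lens) can be recovered from first-order optics. The following is a textbook-level approximation added here for illustration, not a derivation from the chapter. The field of view through a pupil of diameter a, viewed from a working distance d, is approximately

θ ≈ 2 arctan(a / 2d)

so a pharmacologically dilated pupil (a ≈ 8 mm) viewed from 50–100 mm gives θ ≈ 4.6–9.2°, consistent with the direct ophthalmoscopy range above. For indirect ophthalmoscopy, the transverse magnification of the aerial image formed by the condensing lens is approximately

M ≈ P_eye / P_lens

where P_eye ≈ 60 D is the refractive power of the eye; a typical 20 D condensing lens thus yields roughly 3× magnification, while a stronger 30 D lens trades magnification (≈2×) for a correspondingly wider field.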
Interestingly, the phone camera can be used not only to replace the operator's eye, but also to act as the full imaging microscope/loupe optics. In this case, the sole optical element required to perform indirect ophthalmoscopy using a phone is the large, short-focal-length converging lens held in front of the patient's eye. Indeed, commercial implementations of this principle exist, with a field of view intermediate between 'standard' handheld monocular indirect ophthalmoscopes and retinal cameras (oDocs Eye Care 2019a). Notably, in direct ophthalmoscopy, the simplest posterior segment imaging technique, the field of view is essentially limited by the ophthalmologist and the patient coming into contact or, in other words, by the minimum distance between the operator's and the patient's eyes. Yet, a phone camera can be brought significantly closer to the patient's eye than the eye of a human operator. For this reason, several phone-based direct ophthalmoscopes have been brought to market, with fields of view comparable to monocular handheld indirect ophthalmoscopes (Giardini et al. 2014; Tuteja 2016; D-Eye 2019).

Tomographic techniques have also been reported in teleophthalmology, using both OCT and ultrasonography (Kelly et al. 2011; Lapere 2018). As these imaging techniques are computer-based, they intrinsically lend themselves to remote visualisation, both through computer screen sharing and through storage and retransmission of screenshots and images. In combination with multimodal ultra-wide-field imaging, OCT imaging enables assessment of a range of retinal diseases (macular degeneration, diabetic retinal disease). Such medical retina Virtual Clinics present a popular option within western Hospital Eye Services (Lee et al. 2018), where networked systems afford an opportunity for asynchronous diagnosis and management.
9.3.2 Enabling Factors

While, from a conceptual point of view, teleophthalmology relies essentially on simple remote visualisation of digital images, the practical deployment of clinically meaningful implementations relies on the convergence of a key set of core technologies, policies and cultural aspects, which have only recently become sufficiently mature for realistic impact on healthcare services.
9.3.2.1 High-Resolution Handheld Imaging

Remote visualisation solutions for teleophthalmology are ideally compact, to minimise the need for dedicated facilities during field deployment. In this sense, low-cost miniaturisation of the camera (the sensing element at the core of all non-tomographic ophthalmic imaging) and of the digital displays for local visualisation (viewfinders, monitors) has proven to be a key enabling factor. The early development of teleophthalmology for community screening has indeed been driven by the mobile phone market (Bolster et al. 2014). While no formal theory is available on the actual resolution requirements, it is the experience of the authors that a 3-megapixel resolution is appropriate for meaningful still images during field screening. Indeed, in the early 2010s, the availability of cameras with resolution in the megapixel range on board mainstream smartphones coincided with the commercial or pre-commercial availability of smartphone adapters for direct and indirect ophthalmoscopy (Peek Vision 2019; Welch Allyn 2019b; D-Eye 2019; oDocs Eye Care 2019a). In terms of actual instrumental complexity, we note that the autofocus system of modern smartphone cameras can adjust the imaging optics well beyond the range of accommodation of a human eye, thus enabling the capture of images and video through traditional optical arrangements for slit lamps and ophthalmoscopes with little to no modification, bar simple mechanical adaptation of the phone to the instrument eyepieces (Loomba et al. 2019; Poyser et al. 2019).

9.3.2.2 High Data Rate Mobile Interconnections

While still images can indeed relay meaningful clinical information from the field to a remote ophthalmologist, most screening and diagnostic visualisation protocols rely on some form of real-time observation of the patient's eyes. In a comprehensive teleophthalmology solution, video streaming capabilities are therefore highly desirable.
Also in this case, no formal theory is available on the actual resolution, frame rate and image compression requirements. It is, however,
direct experience of the authors that such parameters are less important than might be expected, as long as the video communication is comparable in quality to that normally attainable using mainstream Internet video calling software. Arguably, this video quality enables all examinations of the front and back of the eye, as long as no direct visualisation of individual cells floating in the transparent components of the eye is required, as in some forms of uveitis. As the resolution of camera sensors and data transmission technology evolves, the limitations on cell visualisation are expected to ease. It has been our experience that 3G data connectivity is sufficient for this level of performance.

Again, mobile phones appear to be the ideal technology platform. For the last decade, mobile phones have integrated a camera sensor with appropriate resolution, sophisticated autofocus optics, a high-resolution display and a full data telecommunications system. Given the extreme global ubiquity of mobile phone technology, the globally high level of mobile phone literacy and the globally available mobile infrastructure (GSM Association 2019), mobile telephony has been identified as the key teleophthalmology enabler for community-based screening programmes on a global scale (Bolster et al. 2014; Giardini 2015).
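The sufficiency of 3G connectivity noted above can be sanity-checked with back-of-envelope arithmetic. The sketch below is our own illustration: the 0.1 bit/pixel compression heuristic and the 2 Mbit/s sustained HSPA throughput are assumed round numbers, not measurements from the trials cited in this chapter.

```python
# Back-of-envelope check: can a 3G link carry a clinically usable video stream?

def compressed_bitrate_bps(width, height, fps, bits_per_pixel=0.1):
    """Rough bitrate estimate for an H.264-class codec, using the common
    bits-per-pixel heuristic (~0.1 bit/pixel for moderate-motion video)."""
    return width * height * fps * bits_per_pixel

vga = compressed_bitrate_bps(640, 480, 25)    # video-call quality
hd = compressed_bitrate_bps(1280, 720, 25)    # HD stream

hspa_uplink_bps = 2e6  # assumed sustained 3G (HSPA) throughput, ~2 Mbit/s

print(f"VGA  @ 25 fps: {vga / 1e6:.2f} Mbit/s")   # 0.77 Mbit/s, fits on 3G
print(f"720p @ 25 fps: {hd / 1e6:.2f} Mbit/s")    # 2.30 Mbit/s, marginal on 3G
```

Under these assumptions, video-call-quality streaming fits comfortably within a 3G uplink, consistent with the experience reported above, while higher resolutions begin to exceed it, in line with the later interest in 4K over 5G.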
9.3.2.3 Cloud Technologies

Teleophthalmology is clinically meaningful when in-person consultations between doctor and patient would not be reasonably viable. In this sense, for community-based teleophthalmology, there is often a requirement for the patient-side end point of the connection to operate in remote geographical areas, with data connectivity of poor quality. In typical community screening scenarios, flexibility therefore needs to be provided to adapt the data exchange between the teleconsultation end points to the respective local network conditions as a matter of course, for example, through buffering and temporary storage, effectively atypical forms of store-and-forward. To our knowledge, comprehensive commercial solutions are still unavailable, and the largest field teleophthalmology campaigns have resorted to custom implementations (Bastawrous et al. 2016). The necessity to process the data streams through appropriate cloud infrastructure, rather than through a mere end-to-end teleconference, offers the opportunity to implement forms of automated analysis of the image streams. Indeed, in the specific case of diabetic retinopathy screening, the most robustly evaluated canonical example of teleophthalmology (Surendran and Raman 2014), there is supporting evidence that automatic pre-screening for obviously non-diseased retinal images is safe, thus reducing the workload for expert human graders (Fleming et al. 2011), with machine learning classification techniques a central part of Scotland's diabetic retinal screening network for over 10 years.

9.3.2.4 Rapid Fabrication Technologies

Given the close interconnection between the evolution of mobile telephony devices and the development of handheld teleophthalmology solutions, the very rapid life cycle of commercial smartphone models puts the corresponding teleophthalmology devices under very high pressure to adapt to smartphone market evolutions and to avoid obsolescence. Indeed, at the time of writing, several of the mainstream phone-based retinal imaging systems are reliant on smartphones no longer available commercially or at the end of their commercial life cycle (D-Eye 2019; Welch-Allyn 2019b; Volk 2019). For this reason, a number of manufacturers of handheld ophthalmology devices have resorted to rapid fabrication in the initial pre-production studies (Bastawrous et al. 2016), at times maintaining this approach well into commercial distribution (oDocs Eye Care 2019b).

In at least one notable case, the commercial implementations of the teleophthalmology devices have been pursued in parallel with an Open Source distribution model, whereby end-users can either buy the instruments or download the full component design from an online repository and, leveraging consumer-grade low-cost rapid fabrication technology or purchasing a pre-fabricated kit, self-assemble a smartphone-based retinal imager (oDocs Eye Care 2019b).
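The safety logic of the automated pre-screening discussed under Cloud Technologies above (automatically clearing only obviously non-diseased images, so that sensitivity is preserved while human grading workload falls) can be sketched as follows. This is a hypothetical illustration, not the software used by the Scottish programme; the classifier scores and the threshold are invented for the example.

```python
# Hypothetical sketch of safety-first pre-screening: a classifier score is used
# only to discard images confidently judged disease-free; everything else still
# goes to a human grader.

def prescreen(scores, refer_threshold=0.05):
    """Partition images by estimated probability of referable disease.

    Only images scoring below a deliberately low threshold are auto-cleared,
    so the automated step removes obvious negatives (preserving sensitivity)
    while reducing the expert grading workload."""
    auto_cleared = [img for img, p in scores.items() if p < refer_threshold]
    to_grader = [img for img, p in scores.items() if p >= refer_threshold]
    return auto_cleared, to_grader

# Toy scores from a stand-in classifier:
scores = {"img_a": 0.01, "img_b": 0.40, "img_c": 0.02, "img_d": 0.90}
cleared, referred = prescreen(scores)
print(cleared)   # removed from the human workload
print(referred)  # still graded by an expert
```

The design choice mirrors the evidence cited above: the threshold is tuned for high sensitivity rather than overall accuracy, since a missed referable case is far costlier than an unnecessary human grading.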
While the humanitarian reasons for this approach can easily be understood, to our knowledge there is currently no established pathway for open-source, self-fabricated devices to comply with the relevant regulatory constraints for human medical use, unless appropriate quality systems are put in place around the full design and fabrication processes. Yet, the democratisation of design enabled by low-cost rapid prototyping, combined with the Open Source ethical drive towards co-creation of technologies and the sharing of ideas, designs, test criteria, and safety and performance data, appears optimally suited to tackling major global healthcare challenges. Indeed, strategic effort is being dedicated by key international organisations to creating a viable infrastructure and methodology for the democratic development of open-source healthcare solutions (UBORA 2019).
9.3.2.5 Patient and Healthcare System Acceptance of Remote Consultations

The evidence base on the acceptance of telemedicine by the clinician and patient communities provides conflicting information, with examples such as the USA, where the number of telemedicine consultations exceeds that of in-person consultations for key healthcare providers (Owens 2018), and South Korea, where, as of 2014, telemedicine was deemed illegal (Rho et al. 2014). Perceived ease of use and perceived usefulness have been identified as the key determining factors in stakeholder acceptance of telemedicine (Yu et al. 2009; Rho et al. 2014). In this sense, teleophthalmology is no exception. While no systematic study is available to our knowledge to date, cohort observations report very high satisfaction rates amongst both patients and clinicians (Grisolia et al. 2017; Rani et al. 2006; Poyser et al. 2019). Indeed, in selected countries, the national healthcare infrastructure is investing in teleconsultation services, with the implementation of dedicated software platforms that, in addition to conventional videoconferencing capabilities, allow for the virtualisation of the full consultation journey, for example, by providing virtual waiting rooms and queuing mechanisms, indicating that the core stakeholders are now shifting towards telemedicine as an accepted working
practice (Attend Anywhere 2019; NHS Near Me 2019). This coincides with our experience implementing teleophthalmology in emergency services in Scotland.
9.4 Implementation Examples
9.4.1 National Healthcare Services: Emergency Teleophthalmology in NHS Scotland

Healthcare in Scotland is provided by the National Health Service (NHS), publicly funded through taxation. The primary care backbone of eye care in Scotland is constituted by optometrists operating in community practices (Jonuscheit 2019). Secondary care in Scottish ophthalmology departments exceeds 400,000 cases per year, with an annual increase close to 15%. The entry point for emergency eye care is provided by general Accident and Emergency (A&E) and Minor Injuries Units (MIU), where unscheduled patients are first seen by a non-specialist healthcare operator, who then decides, often assisted by a phone consultation with an on-call ophthalmologist, whether to refer the patient to an available ophthalmology department, to which the patient will need to transfer, wait for the specialist to be available and receive specialist care. Given the nature of the Scottish territory, where significant segments of the population live in remote areas, this second immediate emergency referral, often requested by the non-specialist A&E staff on a precautionary basis only, is onerous both for the patient and the NHS, with inappropriate referrals creating an avoidable burden upon the Service. At the time of writing, there is a paucity in the literature regarding real-time applications of teleophthalmology, despite evolving examples of real-time video transmission as a novel solution in low-resource settings for the screening of rural populations (Loomba et al. 2019). In 2017, under direct funding by the Scottish Government, the NHS Forth Valley health board, based in Central Scotland, in collaboration with the University of Strathclyde, Glasgow, developed hardware elements and implemented a pilot teleophthalmology pipeline aimed towards optimising triage via
remote decision support, reducing waiting times and providing real-time feedback from ophthalmology specialists to emergency services, with a view to reducing the number of unnecessary specialist referrals from the A&E departments as well as unnecessary urgent A&E visits by ophthalmic staff. Using the Web browser-based tele-consultation platform of the NHS (NHS Near Me 2019), the scheme uses two-way real-time audio-visual communication between the on-call ophthalmologist and the patient and staff in the A&E unit through a tablet computer. Specially designed ergonomic adapters connect the tablet computer to the ophthalmology equipment (slit lamps) in the A&E or MIU, allowing the ophthalmologist to visualise the patient's eyes and to give instructions to the A&E staff, thus enabling in-depth triage and the institution of simple therapy. At the time of writing, the 2017 pilot has been successfully completed, with an estimated 50% reduction in emergency referrals to specialist emergency ophthalmology services. The technology has been introduced into practice and is now standard protocol in the Forth Valley NHS board. In 2019, the Scottish Government agreed to fund the extension of the initial pilot to the NHS Highland, NHS Grampian and NHS Greater Glasgow and Clyde health boards, to gather the evidence base for an eventual national roll-out and to evaluate the applicability of further phone-based handheld retinal imaging solutions. Modifications of the protocol are also undergoing initial concept studies, to expand the reach of the service to community optometrists, involving both slit lamp real-time and OCT offline imaging. In view of potential extensions of the methodology to higher-resolution imaging, a 4K video consultation, utilising a 5G mobile data network, has been demonstrated using a modified version of the NHS Scotland tele-consultation platform (Communications NHS Forth Valley 2019).
9.4.2 Field and Community Screening: Smartphone Glaucoma Screening in Kenya In 2013–2014, the London School of Hygiene and Tropical Medicine, in collaboration with the
University of Strathclyde and NHS Forth Valley, performed a validation study to compare store-and-forward teleophthalmology gradings of optic nerve images captured with a handheld smartphone-based retinal imaging adaptor with those of a reference fundus camera. In particular, more than 2000 images were captured over 100 population clusters in Kenya between 2013 and 2014 and transmitted to the Moorfields Eye Hospital Reading Centre, in the UK, to be independently graded (Bastawrous et al. 2016). The examinations in Kenya were performed in field clinics, after pupil dilation, using the smartphone-based retinal imager and a reference benchtop fundus camera, by two independent examiners when possible. Both experienced ophthalmologists and lay operators, with no prior experience of healthcare, were used for the imaging. The images were subsequently uploaded from Kenya to the Moorfields Reading Centre in London using a custom software platform. The study showed comparable clinical performance between handheld retinal imaging and traditional fundus photography in the grading of the optic disk images for glaucoma. Importantly, there was no observable difference in clinical readability between the images taken by experienced ophthalmologists and those taken by lay community operators. Indeed, the non-clinical photographers using the low-cost smartphone imaging adapter were able to acquire optic nerve images that enabled grading to the same clinical standard as the images acquired using the desktop retinal camera operated by an ophthalmic assistant. The principle of employing lay patient-side operators to task-shift smartphone-based teleophthalmology from secondary care to the field therefore appeared well posed, at least for glaucoma screening, making it attractive for public health interventions.
9.4.3 Optometric Decision Support: Australia and Scotland

Using a combination of store-and-forward and real-time teleophthalmology, investigators in Australia (Bartnick et al. 2018) retrospectively audited the makeup of the referrals over a
12-month period, encompassing 683 remote consultations connecting optometry with hospital eye services in Western Australia. The equipment of the referring optometric practices varied widely, from a slit lamp alone to more extensive setups with OCT, visual fields and wide-angle retinal photography capacity. The referrals represented a mix of scheduled consultations and emergency decision support. Over the year-long period of evaluation, the authors deemed that 287 patients were managed effectively via teleophthalmology, hence saving ten full-day outreach clinics and expediting cataract surgery for those in rural settings via direct bookings from teleophthalmology consultations.

Within Central Scotland, extending the paradigm outlined in Sect. 9.4.1 to primary care optometric practices, the high-fidelity screen mirroring functionality of the NHS browser-based telemedicine platform (NHS Near Me 2019) has also been leveraged to cascade live digital data towards real-time decision support and enhanced triage. At the time of writing, four optometric practices within the NHS Forth Valley community, in a Scottish Government funded trial (presently unpublished), have collectively referred more than 70 consecutive patients to secondary care over a 9-month period. Live video slit lamp imaging of the anterior and posterior segment, as well as OCT volume scans and ultra-wide-field images, are mirrored directly from community practices to ophthalmology, complemented by webcams for face-to-face patient counselling. Although data collection is in progress, secondary care appointments to date have been judged by optometrist and ophthalmologist to be obviated in between 40% and 50% of cases, with high indices of patient, optometrist and clinician satisfaction.
9 Extending the Reach and Task-Shifting Ophthalmology Diagnostics Through Remote Visualisation

9.4.4 Digital Wards: Virtual Clinics

In teleophthalmology models of care, a prevalence of scheduled care (outpatient screening, general services and disease-specific consultations) over unscheduled (emergency and pre-operative) care is observed. These models are typically structured around a collaboration between local and remote care providers (ophthalmologists, optometrists, community care) (Caffery et al. 2019). Scheduled care indeed proceeds along a paradigm whereby patients are seen in groups, in physician-led clinics. Unsurprisingly, as previously mentioned, existing large-scale telemedicine software infrastructure enables virtualisation of patient pathways modelled on in-person clinics, with virtual waiting areas, virtual queues, etc. (Attend Anywhere 2019; NHS Near Me 2019). Indeed, as of 2014, 42% of US hospitals offered some form of outpatient virtual clinics (Adler-Milstein et al. 2014). In the specific case of teleophthalmology, evidence is accumulating that, at least from the point of view of immediate clinical outcomes, this practice is essentially safe, with remote visualisation-based clinics demonstrated for diabetic retinopathy, glaucoma, anterior segment diseases, non-diabetic retinopathies and more (Caffery et al. 2019). Yet, telemedicine suffers from potential unintended consequences related to the change in relations, both interpersonal and interprofessional, between all stakeholders involved. Patients, physicians, nurses and allied health professionals are effectively asked to establish professional and interpersonal trust, rapport and organisational hierarchy with counterparts whom they have potentially never met in person, impacting on the cultural, organisational and socioeconomic correlates of healthcare provision (Harrison et al. 2007), with poorly understood yet potentially far-reaching and disruptive impacts (Kahn 2015).

9.5 Outlook

The ability of computerised systems to carry out tasks that we normally associate with intelligence (Artificial Intelligence, AI) or, more specifically, machine learning, especially in the embodiments that rely on massive datasets of template examples used to optimise the performance of computer algorithms, broadly described as 'deep learning', is opening new pathways to medicine. In particular, it is enabling automated systems to extract accurate diagnostic interpretations from patient data, most often image-based. Further medical AI applications touch upon careflow optimisation and consumer-end processing of healthcare data for health promotion, for example in mainstream consumer fitness monitors or, more recently, in fully regulatory-compliant diagnostic tools based on consumer devices (Topol 2019). The most robust medical AI implementations revolve around imaging-intensive medical specialities, above all radiology, pathology and dermatology. Teleophthalmology relies critically on real-time or deferred remote visualisation of images or videos collected at the patient site. For this reason, teleophthalmology appears an ideal candidate for AI. Indeed, in digital ophthalmology, effective algorithms have been demonstrated for the detection of disease from retinal images (diabetic retinopathy, glaucoma, age-related macular degeneration, retinopathy of prematurity, refractive error, cardiovascular risk factors) and OCT scans (several macular conditions, diabetic macular oedema, early stages of AMD, choroidal neovascular membrane) (Ting et al. 2019). Models of care for AI-enhanced telemedicine envisage the AI engine being deployed either as a cloud-based platform or on one of the imaging end points, arguably reducing the need for high-resolution image transfer and for the related high-data-rate digital infrastructure, and hence better suited to low-resource settings. Indeed, low-cost, low-power AI-dedicated processing platforms are now available from mainstream vendors (Intel Movidius 2019; Coral 2019; nVidia Corp 2019), enabling imaging-based inference algorithms to be implemented on handheld devices at a price point compatible with mass distribution. Yet, all this is not without risk.
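On-device inference of this kind typically depends on integer quantisation of network weights, which is what makes low-power accelerators practical. The following sketch is purely illustrative (it is not any vendor's actual toolchain): it quantises a single hypothetical dense layer to 8-bit integers, performs the multiply in integer arithmetic, and compares the rescaled result with full precision. All sizes and values are invented.

```python
import numpy as np

def quantize(w, bits=8):
    # Symmetric per-tensor quantisation to signed integers.
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale).astype(np.int8), scale

# Hypothetical single dense layer of an image classifier; all values invented.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3)).astype(np.float32)   # layer weights
x = rng.normal(size=(1, 4)).astype(np.float32)   # input feature vector

qw, sw = quantize(w)
qx, sx = quantize(x)

# Integer matrix multiply (accumulate in int32), then rescale to floats.
y = (qx.astype(np.int32) @ qw.astype(np.int32)) * (sw * sx)

y_ref = x @ w  # full-precision reference
print(np.max(np.abs(y - y_ref)))  # small quantisation error
```

The integer path trades a small, bounded quantisation error for far lower memory bandwidth and power, which is the design choice behind the edge platforms cited above.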
Indeed, AI algorithms are ‘black boxes’ delivering an apparently consistent output from raw data input, on the basis of a large set of template examples. The closer the machine behaviour is to a ‘black box’ mimicking human intelligence, with no clear sight of the internal workings, the higher the risk of physician deskilling, loss of context in favour
of data, loss of perception of the intrinsic uncertainty of medical findings (Cabitza et al. 2017), bias, privacy and security breaches, lack of transparency (Topol 2019), susceptibility to attack and manipulation, and lack of regulatory accountability (Kelly et al. 2019), to mention just a few of the risk factors emerging from recent literature. It is the belief of the authors of this chapter that remote visualisation-based teleophthalmology will affect clinical practice, whether with or without AI support, and the potential for AI-enhanced clinical value improvement must be balanced against the risk of negative outcomes on a pragmatic basis.

Innovative approaches are also emerging in the development of teleophthalmology-dedicated hardware. In particular, teleophthalmology platforms have been demonstrated whereby remote slit lamp visualisation is complemented by remote actuation of the slit lamp position, resulting in an effective robotic platform, entirely teleoperated by the ophthalmologist (Chatziangelidis 2014). More recently, a robotised slit lamp platform has been integrated with full stereoscopic visualisation and tested over a long-distance satellite data link (Nankivil et al. 2018). Indeed, teleophthalmology is entering a new, exciting development stage, as an ideal platform where medicine, systems and telecommunications engineering, computer science and robotic technologies converge, allowing remote visualisation, AI and remote actuation to express their full integration potential.
References

Adler-Milstein J, Kvedar J, Bates DW (2014) Telehealth among US hospitals: several factors, including state reimbursement and licensure policies, influence adoption. Health Aff 33:207–215. https://doi.org/10.1377/hlthaff.2013.1054
Ahmed SM, Adams AM, Chowdhury M et al (2000) Gender, socioeconomic development and health-seeking behaviour in Bangladesh. Soc Sci Med 51:361–371
AMD Global Telemedicine (2019) Telemedicine technologies: real-time versus store and forward. Available at https://www.amdtelemedicine.com/blog/article/telemedicine-technologies-real-time-versus-store-and-forward. Accessed 31 Oct 2019
Attend Anywhere, Australia. https://attendanywhere.com. Accessed 13 Dec 2019
Bartnik SE, Copeland SP, Aicken AJ, Turner AW (2018) Optometry-facilitated teleophthalmology: an audit of the first year in Western Australia. Clin Exp Optom 101:700–703. https://doi.org/10.1111/cxo.12658
Bass SJ (2009) Examination of the posterior segment of the eye. In: Rosenfield M, Logan N (eds) Optometry, 2nd edn. Butterworth, pp 277–298
Bastawrous A (2014) Get your next eye exam on a smartphone. TED. Available at https://www.ted.com/talks/andrew_bastawrous_get_your_next_eye_exam_on_a_smartphone. Accessed 13 Dec 2019
Bastawrous A, Hennig BD (2012) The global inverse care law: a distorted map of blindness. Br J Ophthalmol 96:1357–1358. https://doi.org/10.1136/bjophthalmol-2012-302088
Bastawrous A, Giardini ME, Bolster NM et al (2016) Clinical validation of a smartphone-based adapter for optic disc imaging in Kenya. JAMA Ophthalmol 134:151–158. https://doi.org/10.1001/jamaophthalmol.2015.4625
Blencowe H, Lawn JE, Vazquez T et al (2013) Preterm-associated visual impairment and estimates of retinopathy of prematurity at regional and global levels for 2010. Pediatr Res 74:35–49. https://doi.org/10.1038/pr.2013.205
Bolster NM, Giardini ME, Livingstone IAT, Bastawrous A (2014) How the smartphone is driving the eye-health imaging revolution. Expert Rev Ophthalmol 9:475–485. https://doi.org/10.1586/17469899.2014.981532
Cabitza F, Rasoini R, Gensini GF (2017) Unintended consequences of machine learning in medicine. JAMA 318:517–518. https://doi.org/10.1001/jama.2017.7797
Caffery LJ, Taylor M, Gole G, Smith AC (2019) Models of care in tele-ophthalmology: a scoping review. J Telemed Telecare 25(2):106–122. https://doi.org/10.1177/1357633X17742182
Carl Zeiss Meditec AG (n.d.) Eye examination with the slit lamp. Publication no. 000000-1152-355
Chatziangelidis I (2014) Construction of low cost remote controlled slit lamp (R/C/S/L) in order to provide teleophthalmology services to rural areas and islands. Adv Ophthalmol Vis Syst 1:00025. https://doi.org/10.15406/aovs.2014.01.00025
Clewley D, Rhon D, Flynn T et al (2018) Health seeking behaviour as a predictor of healthcare utilization in a population of patients with spinal pain. PLoS One 13:e0201348. https://doi.org/10.1371/journal.pone.0201348
Communications NHS Forth Valley (2019) World's first 5G tele-examination of an eye. Available from: https://nhsforthvalley.com/worlds-first-5g-tele-examination-of-an-eye/. Accessed 10 Dec 2019
Coral. https://coral.ai. Accessed 13 Dec 2019
D-Eye, Italy. D-Eye retina. https://www.d-eyecare.com/. Accessed 1 Nov 2019
Ebeigbe JA (2018) Factors influencing eye-care seeking behaviour of parents for their children in Nigeria. Clin
Exp Optom 101:560–564. https://doi.org/10.1111/cxo.12506
Eidolon Optical, USA: Photo Bluminator II. https://www.slitlamp.com/photo-bluminator-ii. Accessed 1 Nov 2019
Fallatah MO (2018) Knowledge, awareness, and eye care-seeking behavior in diabetic retinopathy: a cross-sectional study in Jeddah, Kingdom of Saudi Arabia. Ophthalmol Ther 7:377–385. https://doi.org/10.1007/s40123-018-0147-5
Fleming AD, Philip S, Goatman KA et al (2011) The evidence for automated grading in diabetic retinopathy screening. Curr Diab Rev 7:246–252. https://doi.org/10.2174/157339911796397802
Freitas AM, Mörschbächer R, Thorell MR, Rhoden EL (2018) Incidence and risk factors for retinopathy of prematurity: a retrospective cohort study. Int J Retin Vitr 4:20. https://doi.org/10.1186/s40942-018-0125-z
Giardini ME (2015) The portable eye examination kit: mobile phones can screen for eye disease in low-resource settings. IEEE Pulse 2015 Nov/Dec. Available at https://pulse.embs.org/november-2015/the-portable-eye-examination-kit. Accessed 13 Dec 2019
Giardini ME, Livingstone IAT, Jordan S et al (2014) A smartphone based ophthalmoscope. Conf Proc IEEE Eng Med Biol Soc 2014:2177–2180. https://doi.org/10.1109/EMBC.2014.6944049
Grisolia ABD, Abalem F, Lu Y et al (2017) Teleophthalmology: where are we now? Arq Bras Oftalmol 80:401–405. https://doi.org/10.5935/0004-2749.20170099
GSM Association (2019) The mobile economy 2019. Available at https://www.gsma.com/r/mobileeconomy/. Accessed 13 Dec 2019
Harrison MI, Koppel R, Bar-Lev S (2007) Unintended consequences of information technologies in health care – an interactive sociotechnical analysis. J Am Med Inform Assoc 14:542–549. https://doi.org/10.1197/jamia.M2384
Hellström A, Smith LEH, Dammann O (2013) Retinopathy of prematurity. Lancet 382:1445–1457.
https://doi.org/10.1016/S0140-6736(13)60178-6
Holmström H, Tornqvist K, Al-Hawasi A et al (2018) Increased frequency of retinopathy of prematurity over the last decade and significant regional differences. Acta Ophthalmol 96:142–148. https://doi.org/10.1111/aos.13549
Intel Movidius. https://www.movidius.com/. Accessed 13 Dec 2019
Jonuscheit S (2019) General ophthalmic services in Scotland: value for (public) money? Ophthalmic Physiol Opt 39:225–231. https://doi.org/10.1111/opo.12632
Kahn JM (2015) Virtual visits – confronting the challenges of telemedicine. N Engl J Med 372:1684–1685
Keeler CR (2003) A brief history of the ophthalmoscope. Optom Pract 4:137–145
Kelly SP, Wallwork I, Haider D, Qureshi K (2011) Teleophthalmology with optical coherence tomography imaging in community optometry. Evaluation of a quality improvement for macular patients. Clin
Ophthalmol 5:1673–1678. https://doi.org/10.2147/OPTH.S26753
Kelly CJ, Karthikesalingam A, Suleyman M et al (2019) Key challenges for delivering clinical impact with artificial intelligence. BMC Med 17:195. https://doi.org/10.1186/s12916-019-1426-2
Lapere S (2018) Tele-ophthalmology for the monitoring of choroidal and iris nevi: a pilot study. Can J Ophthalmol 53:471–473. https://doi.org/10.1016/j.jcjo.2017.11.021
Lee JX, Manjunath V, Talks SJ (2018) Expanding the role of medical retina virtual clinics using multimodal ultra-widefield and optical coherence tomography imaging. Clin Ophthalmol 12:2337–2345. https://doi.org/10.2147/OPTH.S181108
Loane MA, Bloomer SE, Corbett R et al (2000) A comparison of real-time and store-and-forward teledermatology: a cost-benefit study. Br J Dermatol 143:1241–1247. https://doi.org/10.1046/j.1365-2133.2000.03895.x
Loomba A, Vempati S, Davara ND et al (2019) Use of a tablet attachment in teleophthalmology for real-time video transmission from rural vision centers in a three-tier eye care network in India: eyeSmart cyclops. Int J Telemed Appl 5683085. https://doi.org/10.1155/2019/5683085
Mackian S, Bedri N, Lovel H (2004) Up the garden path and over the edge: where might health-seeking behaviour take us? Health Policy Plan 19:137–146. https://doi.org/10.1093/heapol/czh017
Nankivil D, Gonzalez A, Rowaan C et al (2018) Robotic controlled stereo slit lamp. TVST 7:1–13. https://doi.org/10.1167/tvst.7.4.1
National Center for Connected Health Policy (2019a) Store-and-forward (asynchronous). https://www.cchpca.org/about/about-telehealth/store-and-forward-asynchronous. Accessed 31 Oct 2019
National Center for Connected Health Policy (2019b) Current state laws & reimbursement policies. https://www.cchpca.org/telehealth-policy/current-state-laws-and-reimbursement-policies. Accessed 31 Oct 2019
NHS Near Me, UK. https://nhsh.scot/nhsnearme.
Accessed 13 Dec 2019
nVidia Corp (2019) Jetson Nano. https://www.nvidia.com/en-gb/autonomous-machines/embedded-systems/jetson-nano/. Accessed 13 Dec 2019
oDocs Eye Care, New Zealand (2019a) visoScope. https://www.odocs-tech.com/visoscope/#. Accessed 1 Nov 2019
oDocs Eye Care, New Zealand (2019b) Fundus. http://www.odocs-tech.com/fundus/#. Accessed 1 Nov 2019
Owens B (2018) Telemedicine on the rise but lagging in Canada. CMAJ 190:E1149–E1150. https://doi.org/10.1503/cmaj.109-5634
Peek Vision Ltd., UK. Peek Retina. https://www.peekvision.org/en_GB/peek-solutions/peek-retina/. Accessed 13 Dec 2019
Poyser O, Livingstone I, Ferguson A et al (2019) Real-time tele-ophthalmology in the emergency department. IOVS 60:6124
Rani PK, Raman R, Manikandan M et al (2006) Patient satisfaction with tele-ophthalmology versus ophthalmologist-based screening in diabetic retinopathy. J Telemed Telecare 12:159–160
Resnikoff S, Felch W, Gauthier T-M et al (2012) The number of ophthalmologists in practice and training worldwide: a growing gap despite more than 200,000 practitioners. Br J Ophthalmol 96:783–787. https://doi.org/10.1136/bjophthalmol-2011-301378
Rho MJ, Choi IY, Lee J (2014) Predictive factors of telemedicine service acceptance and behavioural intention of physicians. Int J Med Inform 83:559–571. https://doi.org/10.1016/j.ijmedinf.2014.05.005
Ribeiro AG, Rodrigues RAM, Guerreiro AM, Regatieri CVS (2014) A teleophthalmology system for the diagnosis of ocular urgency in remote areas of Brazil. Arq Bras Oftalmol 77:214–218. https://doi.org/10.5935/0004-2749.20140055
Shaikh BT, Hatcher J (2004) Health seeking behaviour and health service utilization in Pakistan: challenging the policy makers. J Public Health 27:49–54. https://doi.org/10.1093/pubmed/fdh207
Sim DA, Mitry D, Alexander P et al (2016) The evolution of teleophthalmology programs in the United Kingdom: beyond diabetic retinopathy screening. J Diabetes Sci Technol 10:308–317. https://doi.org/10.1177/1932296816629983
Sreelatha OK, Ramesh SVS (2016) Teleophthalmology: improving patient outcomes? Clin Ophthalmol 10:285–295. https://doi.org/10.2147/OPTH.S80487
Surendran TS, Raman R (2014) Teleophthalmology in diabetic retinopathy. J Diabetes Sci Technol 8:262–266. https://doi.org/10.1177/1932296814522806
The College of Optometrists (2014) A strategy to improve ophthalmic public health 2014. Available at https://www.college-optometrists.org/resourceLibrary/a-strategy-to-improve-ophthalmic-public-health-2014.html. Accessed 31 Oct 2019
Timoney PJ, Breathnach CS (2013) Alvar Gullstrand and the slit lamp 1911. Ir J Med Sci 182:301–305.
https://doi.org/10.1007/s11845-012-0873-y
Ting DSW, Peng L, Varadarajan AV et al (2019) Deep learning in ophthalmology: the technical and clinical considerations. Prog Retin Eye Res 72:100759. https://doi.org/10.1016/j.preteyeres.2019.04.003
Topol EJ (2019) High-performance medicine: the convergence of human and artificial intelligence. Nat Med 25:44–56. https://doi.org/10.1038/s41591-018-0300-7
Tuteja SY (2016) The Arclight: a 'pocket' ophthalmoscope to revitalise undergraduate teaching? Eye News, 1 Dec 2016. Available from: https://www.eyenews.uk.com/education/trainees/post/the-arclight-a-pocket-ophthalmoscope-to-revitalise-undergraduate-teaching. Accessed 13 Dec 2019
UBORA: Euro-African open biomedical engineering e-platform for innovation through education. http://ubora-biomedical.org/. Accessed 13 Dec 2019
United Nations (2019) World population prospects 2019. https://population.un.org/wpp/Publications/. Accessed 13 Dec 2019
Volk. iNview. https://volk.com/index.php/volk-products/ophthalmic-cameras/volk-inview.html. Accessed 13 Dec 2019
Welch-Allyn, USA (2019a) PanOptic Ophthalmoscope. https://www.welchallyn.co.uk/content/welchallyn/emeai/uk/products/categories/physical-exam/eyeexam/ophthalmoscopes%2D%2Dwide-view-direct/panoptic_ophthalmoscope.html. Accessed 1 Nov 2019
Welch-Allyn, USA (2019b) iEXAMINER. https://www.welchallyn.com/en/microsites/iexaminer.html. Accessed 1 Nov 2019
World Health Organization (2018) Ageing and health. Available at https://www.who.int/news-room/fact-sheets/detail/ageing-and-health. Accessed 13 Dec 2019
World Health Organization (2019a) Priority eye diseases. Available at https://www.who.int/blindness/causes/priority/en/. Accessed 13 Dec 2019
World Health Organization (2019b) Guideline "Recommendations on digital interventions for health system strengthening". Available at https://www.who.int/reproductivehealth/publications/digital-interventions-health-system-strengthening/en/. Accessed 12 Dec 2019
Yu P, Li H, Gagnon MP (2009) Health IT acceptance factors in long-term care facilities: a cross-sectional survey. Int J Med Inform 78:219–229. https://doi.org/10.1016/j.ijmedinf.2008.07.006
Image Overlay Surgery Based on Augmented Reality: A Systematic Review
10
Laura Pérez-Pachón, Matthieu Poyade, Terry Lowe, and Flora Gröning
Abstract
Augmented Reality (AR) applied to surgical guidance is gaining relevance in clinical practice. AR-based image overlay surgery (i.e. the accurate overlay of patient-specific virtual images onto the body surface) helps surgeons to transfer image data produced during the planning of the surgery (e.g. the correct resection margins of tissue flaps) to the operating room, thus increasing accuracy and reducing surgery times. We systematically reviewed 76 studies published between 2004 and August 2018 to explore which existing tracking and registration methods and technologies allow healthcare professionals and researchers to develop and implement these systems in-house. Most studies used non-invasive markers to automatically track a patient's position, as well as customised algorithms, tracking libraries or software development kits (SDKs) to compute the registration between patient-specific 3D models and the patient's body surface. Few studies combined the use of holographic headsets, SDKs and user-friendly game engines, and described portable and wearable systems that combine tracking, registration, hands-free navigation and direct visibility of the surgical site. Most accuracy tests included a low number of subjects and/or measurements and did not normally explore how these systems affect surgery times and success rates. We highlight the need for more procedure-specific experiments with a sufficient number of subjects and measurements and including data about surgical outcomes and patients' recovery. Validation of systems combining the use of holographic headsets, SDKs and game engines is especially interesting as this approach facilitates the easy development of mobile AR applications and thus the implementation of AR-based image overlay surgery in clinical practice.

Electronic Supplementary Material: The online version of this chapter (https://doi.org/10.1007/978-3-030-47483-6_10) contains supplementary material, which is available to authorized users.

L. Pérez-Pachón (*) · F. Gröning
School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK
e-mail: [email protected]

M. Poyade
School of Simulation and Visualisation, Glasgow School of Art, Glasgow, UK

T. Lowe
School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK
Head and Neck Oncology Unit, Aberdeen Royal Infirmary (NHS Grampian), Aberdeen, UK

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020
P. M. Rea (ed.), Biomedical Visualisation, Advances in Experimental Medicine and Biology 1260, https://doi.org/10.1007/978-3-030-47483-6_10
Keywords

Augmented reality · Mixed reality · Surgical guidance · Surgical navigation · Holographic headsets · Head-mounted displays

10.1 Introduction

AR-based image overlay surgery superimposes patient-specific digital data onto the patient's body using Augmented Reality (AR), i.e. it augments the real surgical scene by means of computer graphics (Azuma 1997). This approach helps to reduce surgery times, e.g. by preventing the need for surgeons to recall image data produced in the planning of the surgery or by facilitating the interpretation of 3D data during surgery (Hummelink et al. 2015; Jiang et al. 2018; Khor et al. 2016; Kim et al. 2017; Profeta et al. 2016; Vávra et al. 2017). It also has the potential to reduce intra- and post-operative complications, e.g. by indicating the exact location of high-risk anatomical structures adjacent to the surgical site that are not to be injured or facilitating the accurate placement of implants (Fritz et al. 2013; Liu et al. 2014). Typically, AR-based image overlay surgery consists of three major steps: (1) tracking, i.e. acquisition of positional information about the patient; (2) registration, i.e. scaling and alignment of the patient-specific imaging data with the previously acquired positional information and (3) overlay, i.e. projection of the patient-specific digital data onto the patient's body surface using a display device, e.g. a headset.

Tracking and registration methods determine key technical aspects of AR-based image overlay surgery systems, e.g. the level of technical skill required to implement and/or use these systems within a surgical setup. A recent review by Eckert et al. (2019) used a large sample of studies obtained from PubMed and Scopus to discuss tracking methods in AR-based medical training and treatment. However, their research does not provide a detailed analysis of the state-of-the-art of AR-based image overlay for surgical guidance. Another recent review by Fida et al. (2018) discussed AR-based image overlay in open surgery. The authors used a single database for their systematic search (PubMed) and excluded studies on neurosurgery, orthopaedics and maxillofacial surgery, which resulted in a fairly small sample of 13 studies. In addition, they did not include a critical reflection of the tracking and registration methods used in their reviewed studies.

Our systematic review focuses on AR-based surgical guidance where patient-specific digital data are overlaid onto the patient's body surface (incl. the patient's internal anatomy once exposed during open surgery) and in line with the surgeon's view of the surgical site. In contrast to Eckert et al. (2019), our narrower area of study allowed for a detailed analysis and discussion of the results across studies that share a particular aim: to guide surgeons by overlaying content on the patient's body surface. For instance, we excluded surgical training as well as studies on surgical guidance for minimally invasive surgery because this type of surgery presents different tracking and registration challenges than those in open surgery, e.g. tracking markers or anatomical landmarks inside the patient's body using an endoscopic camera (Li et al. 2016). In addition, we included all types of open surgery in our search and used eight databases, which resulted in a larger sample of studies than in Fida et al. (2018). Finally, we discussed the implications of different registration methods in terms of their application in clinical practice. Other reviews differ from ours in that they cover a particular surgical discipline (Joda et al. 2019; Bertolo et al. 2019; Sayadi et al. 2019; Bosc et al. 2019; Wong et al. 2018) or do not explore the technical aspects of the tracking and registration methods (Contreras López et al. 2019; Sayadi et al. 2019; Yoon et al. 2018; Kolodzey et al. 2017).

The aim of this review is to assess which existing tracking and registration methods and technologies allow healthcare professionals and researchers to develop and implement these systems in-house. As main objectives, we: (a) identify the most commonly used tracking methods and the computational methods that are easiest to implement and (b) explore the registration accuracy of these systems and to what extent they improve surgical outcomes and reduce invasiveness for patients. This work is part of a larger research project which aims to create a methodological and technological framework for AR-based image overlay surgery within the context of reconstructive surgery.
10.2 Materials and Methods

This review follows the Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA) guidelines (Liberati et al. 2009). The following scientific databases were used for the systematic search in August 2018: Ovid, Medline, Embase, Scopus, Web of Science, PubMed, IEEE (accessed via the University of Aberdeen) and Google Scholar. The search was performed using the following search terms: Augmented Reality AND Image Guided Surgery OR Surgery OR Computer Assisted Surgery AND Tracking OR Registration OR Projection OR Head Mounted Display OR Heads up display OR Smart Glasses OR Autostereoscopic OR Microscopy OR Retinal Displays. Specific and generic terminology as well as alternate spellings and plurals were considered in the search. The full systematic search strategy is provided in the appendix: Table 10.5.

We considered research on AR-based image overlay surgery published since 2004, when AR was implemented on a mobile device for the first time (Mohring et al. 2004). Outcomes were restricted to scientific journal and conference papers written in English and involving animals, humans (including cadaveric material and/or in vivo clinical data belonging to males and females of all ages) and phantom representations. A selection of the retrieved studies was done by one author (LP) through the screening of their titles and abstracts after all authors agreed on the eligibility criteria. The selected studies were classified according to the variables described in Table 10.1. The experiments conducted by the selected studies were classified according to the Fiducial Registration Error (FRE) and Target Registration Error (TRE) because they were the most common accuracy metrics considered across the reviewed studies.

To perform a risk of bias assessment, we ranked the individual reviewed studies based on their quality of evidence following the GRADE guidelines (Guyatt et al. 2008): 'high' for randomised control trials and 'low' for observational studies. An upgrade/downgrade of the resulting level of quality was done based on each study's characteristics: inclusion of accuracy metrics, sample size and inclusion of information about the surgical outcomes. To assess the risk of bias across studies, we considered the uniformity of the tracking and registration methods and display technologies used across them. This research did not require the involvement of patients or members of the public.

Table 10.1 Variables used to classify the reviewed studies
Surgical task: Surgical step for which the system provided guidance.
Surgery type: Surgical procedure for which the system provided guidance.
Tracking method: Method used to obtain positional information about the patient.
Non-invasive for patients: The system does not require the use of invasive markers attached to the patient's body (yes/no).
Registration method: Method used to compute the registration between the patient-specific digital data and the patient's body surface.
Compact: The system integrates the tracking, registration and display capabilities in a single device (yes/no).
Wireless: The system does not require the use of cables within the operating room (yes/no).
Surgical site directly visible: The system components do not occlude the surgeon's direct view of the surgical site (yes/no).
Hands-free tracking: The surgical team does not need to manipulate the system throughout surgery (yes/no).
Stand-alone application: The system is presented as a portable program which does not rely on an operating system (yes/no).
Type of display: Type of device used by the system to project the patient-specific digital data on the patient's body surface.
Includes accuracy metrics: The study includes experiments to measure the registration accuracy of their system (yes/no).
N accuracy experiments: Number of accuracy experiments extracted from each reviewed study.
Fiducial and target registration errors (FRE and TRE, respectively): Distance between corresponding real and digital points after registration of the patient-specific digital data with the patient's body. Typically, the FRE is measured at points used to set the registration, while the TRE is measured at points other than those used for registration (Fitzpatrick and West 2001).
Experimental approach: Subject on which the FRE and TRE were measured.
N subjects: Number of subjects per experiment.
N measurements: Number of measurements per experiment.
Success rate reported: The study includes information about the post-operative outcomes (yes/no).
Surgery time reported: The study includes information about the time required to perform the surgery (yes/no).
Long-term study: The study includes monitoring data about the patient's recovery and surgical outcomes (yes/no).
Type of study: Type of study design (randomised control trial or observational study).
Evidence quality: Quality of the evidence provided by the reviewed studies according to GRADE guidelines (Guyatt et al. 2008).
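The FRE and TRE defined in Table 10.1 can be made concrete with a small numerical sketch. The point coordinates, pose and noise level below are invented, and the registration step uses a standard least-squares rigid alignment (the Kabsch algorithm), which is only one of the many registration methods encountered across the reviewed studies:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src points onto dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

def rms_error(a, b):
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

# Hypothetical fiducials (used to set the registration) and held-out targets.
fid_model = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)
tgt_model = np.array([[5, 5, 5], [2, 8, 1]], float)

# Simulate patient-space points: a known pose plus small localisation noise.
th = np.radians(20)
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
t_true = np.array([3.0, -2.0, 7.0])
rng = np.random.default_rng(1)
fid_patient = fid_model @ R_true.T + t_true + rng.normal(0, 0.1, fid_model.shape)
tgt_patient = tgt_model @ R_true.T + t_true

R, t = rigid_register(fid_model, fid_patient)
FRE = rms_error(fid_model @ R.T + t, fid_patient)  # at the registration points
TRE = rms_error(tgt_model @ R.T + t, tgt_patient)  # at the held-out targets
print(FRE, TRE)  # both small relative to the point spread
```

Note the distinction the sketch makes explicit: the FRE is evaluated at the same points used to compute the transform, while the TRE is evaluated at points held out of the registration, which is why the TRE is the more clinically meaningful of the two.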
10.3 Results

The systematic search yielded 1352 publications, 724 after removing duplicates (Fig. 10.1). Publications were selected using the following eligibility criteria: (1) the patient-specific digital data were displayed on the patient's body surface (incl. the patient's internal anatomy once exposed during open surgery) either directly (e.g. using
conventional projection) or indirectly (e.g. on live images of the patient seen through a tablet) and (2) the visualisation was in line with the surgeon's view of the surgical site. Therefore, we excluded studies presenting systems which overlaid the patient-specific digital data onto digital scans or images of the patient's internal anatomy (e.g. as in endoscopic procedures) as well as those requiring the surgeon to look away from the surgical site in order to see the digital images (e.g. on a monitor). Among studies on minimally invasive surgery, we included only those in which the tracked features were part of the patient's external anatomy or environment and the patient-specific digital data were overlaid onto the patient's body surface. In total, we selected 76 publications and generated a database (electronic supplementary material: S10.1). These studies covered a variety of surgical tasks (Table 10.2) and procedures (appendix: Table 10.6) showing that some clinical applications had a much wider representation within our sample than others.
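The duplicate-removal step of a PRISMA search (here, 1352 records collapsing to 724) is usually automated by merging the exports from all databases and keying each record on a normalised title plus publication year. The records, field names and counts below are invented for illustration:

```python
# Minimal sketch of PRISMA-style duplicate removal across database exports.
def normalise(title):
    # Case- and punctuation-insensitive key, so formatting differences
    # between databases do not hide duplicates.
    return "".join(ch for ch in title.lower() if ch.isalnum())

def deduplicate(records):
    seen, unique = set(), []
    for rec in records:
        key = (normalise(rec["title"]), rec["year"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "AR-based image overlay", "year": 2010, "db": "Scopus"},
    {"title": "AR-Based Image Overlay", "year": 2010, "db": "PubMed"},
    {"title": "Markerless tracking",    "year": 2015, "db": "Embase"},
]
print(len(deduplicate(records)))  # 2
```

Keeping the first occurrence preserves one record per study regardless of how many databases returned it, which is exactly the reduction reported in the flow diagram.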
10.3.1 Tracking Methods

We classified the reviewed studies into the following categories: electromagnetic tracking, optical marker-less tracking and optical marker-based tracking with a complex or simple set-up (Fig. 10.2). Most studies used marker-based optical tracking (64%) (Fig. 10.3), e.g. a system which uses a camera to detect the position of a marker fixed to a patient’s teeth and, based on this position, projects osteotomy lines onto the patient’s skull (Zhu et al. 2016). Among these, infrared cameras detecting retro-reflective markers were the most commonly used tracking device (41%) (Ma et al. 2019; Maruyama et al. 2018; Si et al. 2018), followed by RGB cameras (20%) detecting 2D images with easily recognisable features (Jiang et al. 2017; Lin et al. 2015; Zhu et al. 2016) or simple-shaped objects (Cutolo et al. 2016; Sun et al. 2017; Wang et al. 2015). A few studies used marker-less optical tracking (12%) (Gibby et al. 2019; Wu et al. 2018; Zeng et al. 2017), e.g. a camera to detect the contour of the patient’s dentition which is matched with its correspond-
10 Image Overlay Surgery Based on Augmented Reality: A Systematic Review
Fig. 10.1 Flow diagram showing the systematic search strategy used for this review
Table 10.2 Classification of reviewed AR-based image overlay surgery studies according to surgical tasks

Locate internal anatomical structures, tumours and haematomas (36.8%, N = 28): Maruyama et al. (2018), Zhang et al. (2015, 2017), Jiang et al. (2017), Wen et al. (2014, 2017), Yang et al. (2018), Sun et al. (2017), Scolozzi and Bijlenga (2017), Drouin et al. (2017), Hou et al. (2016), Cabrilo et al. (2015), Wang et al. (2014, 2015), Pauly et al. (2015), Suenaga et al. (2015), Yoshino et al. (2015), Kramers et al. (2014), Deng et al. (2014), Parrini et al. (2014), Han et al. (2013), Mahvash and Tabrizi (2013), Müller et al. (2013), Kersten-Oertel et al. (2012), Volonte et al. (2011), Tran et al. (2011), Sugimoto et al. (2010), and Giraldez et al. (2007)

Indicate correct entry points and trajectories of surgical instruments (31.5%, N = 24): Andress et al. (2018), Cutolo et al. (2016), Eftekhar (2016), Fichtinger et al. (2005), Gavaghan et al. (2012), Gibby et al. (2019), Khan et al. (2006), Krempien et al. (2008), Lee et al. (2010), Liang et al. (2012), Liao et al. (2010), Ma et al. (2017, 2018), Martins et al. (2016), Rodriguez et al. (2012), Shamir et al. (2011), Si et al. (2018), Suenaga et al. (2013), Vogt et al. (2006), Wacker et al. (2005), Wang et al. (2016), Wen et al. (2013), Wesarg et al. (2004), and Wu et al. (2014)

Indicate correct soft tissue resection margins and osteotomy lines (21.0%, N = 16): Badiali et al. (2014), Besharati Tabrizi and Mahvash (2015), Kosterhon et al. (2017), Lin et al. (2015, 2016), Marmulla et al. (2005), Mischkowski et al. (2006), Mondal et al. (2015), Pessaux et al. (2015), Qu et al. (2015), Shao et al. (2014), Sun et al. (2016), Tang et al. (2017), Wang et al. (2017), and Zhu et al. (2011, 2016)

Indicate correct position of implants (3.9%, N = 3): Ma et al. (2019), Mahmoud et al. (2017), and Zeng et al. (2017)

Assist more than one surgical task (3.9%, N = 3): He et al. (2016), Hu et al. (2013), and Wu et al. (2018)

Indicate anatomical asymmetry (2.6%, N = 2): Huang et al. (2012) and Mezzana et al. (2011)
L. Pérez-Pachón et al.
Fig. 10.2 Main tracking methods identified in this review: electromagnetic, optical marker-less and optical marker-based with a complex or simple set-up. The diagram also shows the devices used for tracking (yellow), registration (green), overlay (orange) or tracking, registration and overlay using a single device (holographic headset)
Fig. 10.3 Reviewed studies organised according to their tracking method. Marker-based tracking, use of cameras to detect objects attached to the patient’s body; marker-less tracking, use of cameras to detect superficial body features or a striped pattern projected onto the patient’s body surface; electromagnetic tracking, use of an electromagnetic transmitter to detect sensors placed on a surgical instrument’s tip; manual registration, freehand alignment of the patient-specific digital data onto the patient’s body surface. EM electromagnetic, RGB red, green, blue; RGB-D red, green, blue and depth
ing points on video images of the patient (Wang et al. 2017). Some studies used electromagnetic tracking (3%) (Ma et al. 2018; Martins et al. 2016) or a manual approach (10%) (Eftekhar 2016; Hou et al. 2016; Pessaux et al. 2015) to detect the patient’s position. The remaining studies used alternative methods (Andress et al. 2018; Mahmoud et al. 2017; Scolozzi and Bijlenga 2017) or did not specify their tracking method (Rodriguez et al. 2012; Sun et al. 2016). A complete list of the reviewed studies classified based on these categories is available in the appendix: Table 10.7. Henceforth, the data analysis focuses on the studies using automatic optical tracking (58 studies).
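As a concrete illustration of what the optical tracking methods above compute: once a camera has estimated the pose (rotation R and translation t) of a marker fixed to the patient, geometry planned in the marker’s coordinate frame, such as an osteotomy line, can be projected into the camera image with a standard pinhole model. This is a NumPy-only sketch; the intrinsics and pose values are made up for illustration, and a real system (e.g. one built on ARToolKit or OpenCV) would obtain them from camera calibration and marker detection.

```python
import numpy as np

def project_points(points_marker, R, t, K):
    """Project 3D points (defined in the marker frame) into pixel
    coordinates, given the marker-to-camera pose (R, t) and the
    camera intrinsic matrix K (pinhole model, no lens distortion)."""
    pts_cam = points_marker @ R.T + t        # marker frame -> camera frame
    pts_img = pts_cam @ K.T                  # apply intrinsics
    return pts_img[:, :2] / pts_img[:, 2:3]  # perspective divide -> pixels

# Illustrative intrinsics: focal length 800 px, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
# Marker sits 0.5 m in front of the camera, axes aligned with the camera.
R, t = np.eye(3), np.array([0.0, 0.0, 0.5])
# A planned osteotomy line: two points 4 cm apart on the marker plane.
line = np.array([[-0.02, 0.0, 0.0], [0.02, 0.0, 0.0]])
print(project_points(line, R, t, K))  # two pixels either side of the centre
```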
10.3.2 Registration Methods

Most reviewed studies used custom algorithms to align patient-specific digital data with the patient’s position (Ma et al. 2019; Maruyama et al. 2018; Si et al. 2018) (Table 10.3), e.g. matching two sets of 3D points corresponding to the position of markers on the patient’s body and their corresponding points on the patient’s scans (Ma et al. 2019). Some studies used computer tracking libraries and/or Software Development Kits (SDKs) (Cutolo et al. 2016; Wang et al. 2016; Zeng et al. 2017), such as OpenCV (Shao et al. 2014), ARToolKit (http://www.hitl.washington.edu/artoolkit/) (Lin et al. 2016; Qu et al. 2015; Zhu et al. 2016) or the Vuforia SDK (https://www.vuforia.com/) (Kramers et al. 2014; Wen et al. 2017). Both ARToolKit and the Vuforia SDK provide algorithms to track 2D and 3D feature points on images and define a shared coordinate system between the digital data and the real world (e.g. the patient). They are sometimes used in combination with game engines (e.g. Unity, https://unity3d.com/, or Unreal, https://www.unrealengine.com/en-US/) and capture devices such as conventional webcams or other RGB/RGB-D camera systems (Jiang et al. 2017; Wu et al. 2018). Game engines with embedded computer tracking libraries and SDKs (e.g. the Vuforia SDK) are user-friendly tools for the easy development of mobile AR applications that automatically register digital data with real-world

Table 10.3 Reviewed studies organised according to the computation method used for automatic optical tracking and registration. Some studies using fully integrated platforms, tracking libraries/SDKs and game engines also developed custom algorithms
Custom algorithms (56.9%, N = 33): Badiali et al. (2014), Deng et al. (2014), Giraldez et al. (2007), He et al. (2016), Hu et al. (2013), Krempien et al. (2008), Lee et al. (2010), Liang et al. (2012), Liao et al. (2010), Lin et al. (2015), Ma et al. (2017, 2019), Maruyama et al. (2018), Müller et al. (2013), Pauly et al. (2015), Shamir et al. (2011), Si et al. (2018), Suenaga et al. (2013, 2015), Tang et al. (2017), Tran et al. (2011), Vogt et al. (2006), Wacker et al. (2005), Wang et al. (2014, 2015, 2017), Wen et al. (2013, 2014), Wu et al. (2014), Yang et al. (2018), Yoshino et al. (2015), and Zhang et al. (2015, 2017)

Fully integrated platforms (15.5%, N = 9): Cabrilo et al. (2015), Cutolo et al. (2016), Drouin et al. (2017), Gibby et al. (2019), Khan et al. (2006), Kosterhon et al. (2017), Mischkowski et al. (2006), Sun et al. (2017), and Wesarg et al. (2004)

Tracking libraries/SDKs (20.6%, N = 12): Gavaghan et al. (2012), Huang et al. (2012), Kersten-Oertel et al. (2012), Kramers et al. (2014), Lin et al. (2016), Qu et al. (2015), Shao et al. (2014), Wang et al. (2016), Wen et al. (2017), Zeng et al. (2017), and Zhu et al. (2011, 2016)

Tracking libraries/SDKs and game engines (3.4%, N = 2): Jiang et al. (2017) and Wu et al. (2018)

Not specified (3.4%, N = 2): Marmulla et al. (2005) and Parrini et al. (2014)
features. For instance, Wu et al. (2018) used the Vuforia SDK and Unity to deploy the tracking of an image marker placed in the surgical scene. However, their registration strategy also required custom calculations that detect the patient’s position. In contrast, Jiang et al. (2017) used ARToolkit and Unity to deploy both the tracking of an image marker and the registration of the patient-specific digital data with the patient’s body surface without relying on custom calculations. Only 16% of the reviewed studies used fully integrated platforms (Drouin et al. 2017; Gibby et al. 2019; Sun et al. 2017), e.g. the Brainlab neuronavigation system (Brainlab, Germany).
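Many of the custom registration algorithms described above reduce to the same core computation: finding the rigid transform that best aligns the 3D marker positions detected on the patient with their corresponding points in the preoperative scan. A common closed-form solution is the SVD-based (Kabsch) method sketched below; this is a generic illustration of that computation, not the implementation of any specific reviewed study.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping points `src`
    (e.g. marker positions detected on the patient) onto `dst`
    (their corresponding points in the scan), via SVD (Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)  # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    # Correct an improper rotation (reflection) if the SVD produced one.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

# Synthetic check: rotate/translate four fiducials and recover the pose.
rng = np.random.default_rng(0)
src = rng.random((4, 3))
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([5.0, -2.0, 1.0])
R, t = rigid_register(src, dst)
print(np.allclose(src @ R.T + t, dst))  # True
```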
10.3.3 Key Aspects of Augmented-Reality-Based Image Overlay Systems

10.3.3.1 Ease of Use
Most reviewed studies required the set-up of separate pieces of equipment in the operating room (83%), while a minority used compact systems (12%), e.g. those using headsets, smartphones or a microscope with an integrated tracking device (Gibby et al. 2019; Jiang et al. 2017; Sun et al. 2017) (Fig. 10.4). Headsets can be video see-through or optical see-through, displaying digital data on a screen or on transparent lenses in front of the surgeon’s view, respectively. In most cases, the display device occluded
the surgeon’s view of the surgical site (66%), except in those studies which used optical see-through headsets, smart glasses or projectors (28%) (Gibby et al. 2019; Maruyama et al. 2018; Wu et al. 2018). A minority of studies used hands-free tracking (33%) (Gibby et al. 2019; Ma et al. 2017; Yang et al. 2018), while most required the manipulation of tracking devices (66%). For instance, some systems required the use of a navigation pointer to localise predefined registration landmarks on the patient’s body during surgery (Kosterhon et al. 2017). Only a few studies presented their systems as stand-alone applications (7%), combined with smart glasses (Maruyama et al. 2018), smartphones (Kramers et al. 2014) or holographic headsets (i.e. optical see-through AR headsets that integrate tracking, registration and display capabilities and recognise voice and gesture commands) (Gibby et al. 2019; Wu et al. 2018) (appendix: Table 10.8). In addition, most studies relied on hardware with wired connections (84%), while only a few used wireless technology such as holographic headsets, smartphones or tablets (Gibby et al. 2019; Sun et al. 2017; Wu et al. 2018). A classification of the reviewed studies according to the display device used is shown in the appendix: Table 10.9.
10.3.3.2 Registration Accuracy
A total of 38 studies on automatic optical tracking (66%) measured the registration accuracy of
Fig. 10.4 Classification of reviewed automatic optical tracking studies according to a system’s usability
their system, while the remaining studies did not explore this or measured variables not considered in this review, e.g. the area of tumour successfully removed during AR-based image overlay surgery (Scolozzi and Bijlenga 2017). In total, we extracted the mean FRE and/or TRE from 44 experiments (Table 10.4). Most experiments measured the TRE, which has been described as the actual distance between matching real and digital points after registration as it includes all the errors which may occur during the registration process (Fitzpatrick and West 2001; West et al. 2001). This review shows that many authors achieved TREs between 1 and 5 mm (52%), e.g. those using computer tracking libraries/SDKs and game engines (Jiang et al. 2017; Wu et al. 2018) and most studies using
headsets (Badiali et al. 2014; Cutolo et al. 2016; Gibby et al. 2019; Jiang et al. 2017; Si et al. 2018; Wang et al. 2016; Wu et al. 2018). Some studies achieved sub-millimetre accuracy (32%), e.g. a study which used a video see-through headset (Lin et al. 2015) and another using a non-holographic optical see-through headset (Lin et al. 2016). Many reviewed studies included low numbers of subjects and/or measurements in their experiments, and only a few were clinical studies (14%), while most measured the registration accuracy on phantoms. A large number of studies did not measure the accuracy of their systems at all.
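The distinction between FRE and TRE can be made concrete: FRE is the residual misalignment at the fiducials used to compute the registration, whereas TRE is measured at independent target points and therefore captures errors that the fiducial fit alone cannot reveal. A minimal sketch follows; the point coordinates and offsets are illustrative values only.

```python
import numpy as np

def mean_error(real_pts, digital_pts):
    """Mean Euclidean distance between matched real and digital points."""
    return float(np.linalg.norm(real_pts - digital_pts, axis=1).mean())

# Digital fiducials/targets after registration vs. their true positions (mm).
fid_real    = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
fid_digital = fid_real + 0.3                 # residual at the fiducials
tgt_real    = np.array([[5.0, 5.0, 20.0]])   # surgical target, away from fiducials
tgt_digital = tgt_real + np.array([[1.2, 0.0, 0.9]])

fre = mean_error(fid_real, fid_digital)
tre = mean_error(tgt_real, tgt_digital)
# A small FRE does not guarantee a small TRE at the surgical target.
print(f"FRE = {fre:.2f} mm, TRE = {tre:.2f} mm")
```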
Table 10.4 Classification of experiments according to the registration accuracy and measurement approach. Some articles presented more than one experiment

Registration accuracy (FRE):
< 1 mm (11.3%, N = 5): Krempien et al. (2008), Ma et al. (2019), Wang et al. (2014, 2015), and Zeng et al. (2017)
1–5 mm (6.8%, N = 3): Maruyama et al. (2018), Yang et al. (2018), and Zhang et al. (2017)
> 5 mm (0%, N = 0): –
Not specified (81.8%, N = 36): Badiali et al. (2014), Cutolo et al. (2016), Deng et al. (2014), Gibby et al. (2019), Giraldez et al. (2007), He et al. (2016), Jiang et al. (2017), Khan et al. (2006), Lee et al. (2010), Liang et al. (2012), Liao et al. (2010), Lin et al. (2015, 2016), Ma et al. (2017), Maruyama et al. (2018), Mischkowski et al. (2006), Qu et al. (2015), Si et al. (2018), Suenaga et al. (2013, 2015), Wacker et al. (2005), Wang et al. (2016, 2017), Wen et al. (2013, 2014, 2017), Wesarg et al. (2004), Wu et al. (2014, 2018), Yoshino et al. (2015), and Zhu et al. (2016)

Registration accuracy (TRE):
< 1 mm (31.8%, N = 14): Giraldez et al. (2007), He et al. (2016), Liao et al. (2010), Lin et al. (2015, 2016), Mischkowski et al. (2006), Suenaga et al. (2013, 2015), Wang et al. (2014, 2015, 2017), Zeng et al. (2017), and Zhang et al. (2017)
1–5 mm (52.2%, N = 23): Badiali et al. (2014), Cutolo et al. (2016), Deng et al. (2014), Gibby et al. (2019), Jiang et al. (2017), Krempien et al. (2008), Lee et al. (2010), Liang et al. (2012), Ma et al. (2017, 2019), Maruyama et al. (2018), Qu et al. (2015), Si et al. (2018), Wang et al. (2016), Wen et al. (2013, 2014, 2017), Wu et al. (2018), Yoshino et al. (2015), and Zhu et al. (2016)
> 5 mm (11.3%, N = 5): Khan et al. (2006), Wacker et al. (2005), Wesarg et al. (2004), and Wu et al. (2014)
Not specified (4.5%, N = 2): Maruyama et al. (2018) and Yang et al. (2018)
(continued)
Table 10.4 (continued)

Experimental approach:
Surgery performance (13.6%, N = 6): Deng et al. (2014), Krempien et al. (2008), Maruyama et al. (2018), Mischkowski et al. (2006), Qu et al. (2015), and Zhu et al. (2016)
Surgery simulation on phantom (31.8%, N = 14): Cutolo et al. (2016), Gibby et al. (2019), He et al. (2016), Liang et al. (2012), Lin et al. (2015, 2016), Ma et al. (2017, 2019), Si et al. (2018), Wacker et al. (2005), Wen et al. (2013, 2014, 2017), and Wesarg et al. (2004)
Surgery simulation on animal (6.8%, N = 3): Ma et al. (2017), Wacker et al. (2005), and Wu et al. (2014)
Surgery simulation on cadaver (4.5%, N = 2): Khan et al. (2006) and Wang et al. (2016)
Only AR overlay on patient (2.2%, N = 1): Suenaga et al. (2015)
Only AR overlay on phantom (38.6%, N = 17): Badiali et al. (2014), Deng et al. (2014), Giraldez et al. (2007), Jiang et al. (2017), Lee et al. (2010), Liao et al. (2010), Maruyama et al. (2018), Suenaga et al. (2013), Wang et al. (2014, 2015, 2017), Wu et al. (2018), Yang et al. (2018), Yoshino et al. (2015), Zeng et al. (2017), and Zhang et al. (2017)
Only AR overlay on cadaver (2.2%, N = 1): Giraldez et al. (2007)

N subjects per experiment:
< 10 (97.7%, N = 43): Badiali et al. (2014), Cutolo et al. (2016), Deng et al. (2014), Gibby et al. (2019), Giraldez et al. (2007), He et al. (2016), Jiang et al. (2017), Khan et al. (2006), Krempien et al. (2008), Lee et al. (2010), Liang et al. (2012), Liao et al. (2010), Lin et al. (2015, 2016), Ma et al. (2017, 2019), Maruyama et al. (2018), Mischkowski et al. (2006), Qu et al. (2015), Si et al. (2018), Suenaga et al. (2013, 2015), Wang et al. (2014, 2015, 2016, 2017), Wen et al. (2013, 2014, 2017), Wesarg et al. (2004), Wu et al. (2014, 2018), Yang et al. (2018), Yoshino et al. (2015), Zeng et al. (2017), and Zhang et al. (2017)
10–50 (2.2%, N = 1): Zhu et al. (2016)
> 50 (0.0%, N = 0): –

N measurements per experiment:
< 10 (50.0%, N = 22): Badiali et al. (2014), Giraldez et al. (2007), He et al. (2016), Jiang et al. (2017), Lee et al. (2010), Liang et al. (2012), Ma et al. (2017, 2019), Mischkowski et al. (2006), Qu et al. (2015), Si et al. (2018), Suenaga et al. (2013), Wang et al. (2014, 2015, 2017), Wu et al. (2018), Yang et al. (2018), Yoshino et al. (2015), and Zhang et al. (2017)
10–50 (34.0%, N = 15): Cutolo et al. (2016), Deng et al. (2014), Gibby et al. (2019), Khan et al. (2006), Krempien et al. (2008), Liao et al. (2010), Lin et al. (2015), Maruyama et al. (2018), Wang et al. (2016), Wen et al. (2014, 2017), Wesarg et al. (2004), Wu et al. (2014), Zeng et al. (2017), and Zhu et al. (2016)
> 50 (15.9%, N = 7): Deng et al. (2014), Lin et al. (2016), Maruyama et al. (2018), Suenaga et al. (2015), Wacker et al. (2005), and Wen et al. (2013)

Articles with more than one experiment: Maruyama et al. (2018), Wu et al. (2018), Ma et al. (2017), Deng et al. (2014), Giraldez et al. (2007), and Wacker et al. (2005). FRE fiducial registration error, TRE target registration error, AR augmented reality
10.3.3.3 Surgical Outcomes and Invasiveness for Patients
Only a few studies compared the surgical success rates (Cutolo et al. 2016; Gibby et al. 2019; Huang et al. 2012; Liao et al. 2010; Lin et al. 2016; Ma et al. 2017; Qu et al. 2015; Si et al. 2018) and times (Khan et al. 2006; Liao et al. 2010; Mischkowski et al. 2006; Müller et al. 2013) with those achieved in conventional surgery. Similarly, only a few authors performed long-term studies (Kosterhon et al. 2017). In terms of invasiveness, most marker-based optical tracking studies used non-invasive tracking markers (Giraldez et al. 2007; Huang et al. 2012; Kramers et al. 2014; Krempien et al. 2008; Lee et al. 2010; Maruyama et al. 2018; Wang et al. 2015; Wen et al. 2013, 2014). These markers were attached to the patient (Cutolo et al. 2016; Parrini et al. 2014; Si et al. 2018; Sun et al. 2017), to a probe that digitises anatomical landmarks (i.e. superficial body features) (Hu et al. 2013; Kosterhon et al. 2017; Ma et al. 2017; Tang et al. 2017), to a surgical tool (He et al. 2016), or took the form of fiducial markers. Fiducial markers are easily identifiable landmarks fixed to the patient’s body surface at the time of scanning which help to preserve the spatial relationships between the patient-specific digital data obtained from the scans and the patient’s anatomy. Fiducial markers were attached to dental retainers (Ma et al. 2019; Qu et al. 2015; Suenaga et al. 2013; Tran et al. 2011; Yoshino et al. 2015; Zhu et al. 2011, 2016), placed in the surgical scene (Shao et al. 2014), or non-invasively attached to the patient (Besharati Tabrizi and Mahvash 2015; Cutolo et al. 2016; Deng et al. 2014; Drouin et al. 2017; Kersten-Oertel et al. 2012; Liao et al. 2010; Müller et al. 2013; Shamir et al. 2011; Tran et al. 2011; Wu et al. 2014; Yang et al. 2018; Zhang et al. 2015, 2017; Zhu et al. 2016).
10.3.4 Risk of Bias

Most reviewed studies were case series and reports (Maruyama et al. 2018; Tang et al. 2017;
Kosterhon et al. 2017; Sun et al. 2017; Zhu et al. 2016; Cabrilo et al. 2015; Deng et al. 2014; Zhu et al. 2011; Krempien et al. 2008; Giraldez et al. 2007; Mischkowski et al. 2006; Marmulla et al. 2005). Only one reviewed study was a randomised controlled trial (Qu et al. 2015). Due to their non-inclusion of accuracy metrics, the small sample sizes in their experiments and/or the lack of information about surgical outcomes, most reviewed case series and reports were downgraded to ‘very low’ quality of evidence, and the randomised controlled trial was downgraded to ‘moderate’ quality of evidence (electronic supplementary material: S10.1). In addition, a wide variety of tracking and registration methods and display technologies was found across the reviewed studies (Table 10.3 and appendix: Tables 10.7 and 10.9).
10.4 Discussion

To the authors’ knowledge, this is the first review that: (a) identifies the most commonly used tracking and registration methods and technologies that overlay patient-specific digital data onto the patient’s body surface and in line with the surgeon’s view of the surgical site; (b) evaluates the suitability of these methods for in-house implementation by healthcare professionals and researchers without advanced engineering and/or programming skills; and (c) discusses the key challenges of AR-based image overlay surgery. Our results show that the tracking method most commonly used among the reviewed studies is marker-based optical tracking, i.e. the use of markers with an easily recognisable pattern to establish a shared coordinate system between the real environment, including the patient, and the patient-specific 3D dataset (Fig. 10.3). This is in line with the findings of Eckert et al. (2019), who explored a wider area of study: AR-based medical training and treatment. In addition, the registration between the patient-specific digital data and the patient’s body surface is normally achieved by using custom
algorithms, while the combination of tracking libraries/SDKs and game engines is very recent (Table 10.3). This review also demonstrates that these systems, which have normally involved the use of several hardware components and cables, do not normally allow the surgeon’s direct view of the surgical site or hands-free tracking, and have rarely been presented as stand-alone applications (Fig. 10.4). As key challenges for current AR-based image overlay surgery, we identified the need to validate these systems through more extensive accuracy metrics and to explore approaches that minimise invasiveness for patients.
10.4.1 Why Is Marker-Based Tracking the Commonest Approach?

The use of markers to register patient-specific digital data with the patient’s body surface is very common (Fig. 10.3). There are alternatives to using markers, e.g. marker-less optical tracking, where anatomical features with well-defined borders (e.g. the contour of the patient’s dentition) are detected (Suenaga et al. 2015; Wang et al. 2014, 2017). However, the application of marker-less optical tracking is limited, as many surgeries do not involve the exposure of anatomical features with well-defined borders (e.g. soft tissue flap surgery). Similarly, electromagnetic tracking allows the detection of sensors even when they are not visible, e.g. because they are placed in a surgical instrument’s tip inside the patient’s body. However, this method may compromise surgical accuracy in operating theatres containing several metallic items, as magnetic fields are usually affected by metallic artefacts (Poulin and Amiot 2002). In the absence of anatomical features with well-defined borders, or in environments with metallic items, marker-based optical tracking is a convenient tracking method, which might explain its high prevalence among the reviewed studies.

Two aspects must be considered to prevent an increased risk of intra- and post-operative complications when exploring the use of marker-based tracking: (1) avoiding occlusion of the surgeon’s view of the surgical site by the markers and (2) implementing solutions which ensure both optimal accuracy and low invasiveness for patients. This review shows that there is a variety of options that currently allow the efficient use of non-invasive markers attached to the patient’s body surface, minimising discomfort and facilitating recovery, e.g. 2D images detected by holographic headsets can be attached to dental splints (Qu et al. 2015; Zhu et al. 2011, 2016). However, the use of other types of non-invasive markers (e.g. skin adhesives) can lead to a registration mismatch, e.g. due to changes in the soft tissue shape during resection (Jiang et al. 2017).
10.4.2 What Computational Method Is Easiest to Implement?

Traditionally, the development of AR-based image overlay systems has required advanced engineering and programming skills. Fully integrated platforms are highly efficient and easy to implement in the operating room, but they are also expensive and not suitable for in-house adjustment to particular surgical needs (Drouin et al. 2017). The customisation of AR-based image overlay surgery systems often involves the development of tracking and registration algorithms (Badiali et al. 2014; Wen et al. 2013; Yang et al. 2018) and/or the use of computer tracking libraries and/or SDKs (e.g. OpenIGTLink) (Gavaghan et al. 2012; Huang et al. 2012; Kersten-Oertel et al. 2012; Kramers et al. 2014; Wang et al. 2016; Wen et al. 2017; Zeng et al. 2017). For this reason, this type of development is not accessible to a wide range of healthcare professionals and researchers. Some reviewed studies overcame this issue by combining computer tracking libraries (e.g. ARToolKit) or SDKs (e.g. the Vuforia SDK) with game engines that can be used to create simple mobile AR applications (Andress et al. 2018; Jiang et al. 2017; Wu et al. 2018). In addition,
game engines are becoming increasingly popular due to their improved graphics performance. However, the number of studies using these tools is still relatively small (Table 10.3).
10.4.3 What Are the Benefits of Holographic Headsets?

Holographic headsets are compatible with the previously described tracking and registration methods. Game-based applications using tracking libraries and SDKs can be deployed not only on mobile devices such as smartphones, but also on more specialised displays such as holographic headsets (e.g. Microsoft HoloLens®, https://www.microsoft.com/en-us/hololens). In addition, these tools provide easy access to algorithms that detect image markers and align patient-specific digital data with them, allowing for the implementation of automatic optical tracking. Holographic headsets integrate mobile hardware, a Holographic Processing Unit (HPU) and depth (RGB-D) cameras (i.e. cameras able to capture both colour and depth information), allowing their use as a tracking, registration and display device without relying on an external CPU. AR applications can be loaded into their HPU and used as stand-alone applications. Their RGB-D cameras can be easily set up for marker-based optical tracking by using game engines like Unity (Andress et al. 2018; Si et al. 2018; Wu et al. 2018) and computer tracking software such as the Vuforia SDK. In addition, their RGB-D cameras can be used to detect surface patterns in the environment (e.g. a patient’s body surface) and allow aligning patient-specific 3D models with the patient’s body in a fixed position regardless of the user’s movement around the room (Gibby et al. 2019). The digital data are overlaid on the headset’s transparent lenses without occluding the surgeon’s view of the surgical site. Holographic headsets also recognise voice and gesture commands, eliminating the need to manipulate tracking devices and allowing hands-free interaction with the digital data (Andress et al. 2018; Jiang et al. 2017; Si et al. 2018; Wu et al. 2018).

In summary, the combination of holographic headsets, tracking libraries/SDKs and game engines allows a wide range of healthcare professionals and researchers to develop simple AR-based image overlay systems in-house, without relying on engineering expertise or commercial providers of fully integrated platforms. In addition, while a wide variety of wearable technology including AR headsets shows promising results in several clinical areas (Kolodzey et al. 2017; Tepper et al. 2017; Keller et al. 2008), holographic headsets are better suited to facilitating the development of readily available, portable and easy-to-set-up AR-based image overlay surgery systems which do not alter the surgical workflow significantly (Kramers et al. 2014) (Fig. 10.4). However, studies exploring suitable methodological frameworks for the use of holographic headsets and testing their registration accuracy are very scarce to date (appendix: Table 10.9). Two reasons for this are their fairly recent release (e.g. Microsoft HoloLens® in 2016) and relatively high prices: e.g. Microsoft HoloLens® and Magic Leap® currently cost over $2000 (developer editions). Therefore, in spite of their advantages, assessing the potential of holographic headsets for implementation in clinical practice remains a challenge.

10.4.4 Study Limitations

Outcomes from this systematic review show that the number of studies measuring the accuracy of AR-based image overlay surgery systems is low (Table 10.4), especially if they are analysed separately based on specific characteristics of the system such as its tracking and registration method (Table 10.3 and appendix: Table 10.7). Similarly, studies that compare the achieved surgical success rates and times with those of conventional surgery and that include data about the patient’s recovery and surgical outcomes in the long term
are scarce in this review (electronic supplementary material: S10.1). To validate surgical guidance systems that overlay patient-specific digital data onto the patient’s body surface, it is necessary to perform more clinical studies that include larger samples of subjects and accuracy measurements (Table 10.4) and that explore the aforementioned variables. For these reasons, most reviewed studies using automatic optical tracking were ranked as ‘very low’ evidence quality (electronic supplementary material: S10.1), and thus we considered that their accuracy estimates remain uncertain. In spite of our restricted eligibility criteria, and even though we downsized our sample to automatic optical tracking for the analysis, there was a lack of methodological homogeneity between studies, e.g. due to the wide variety of approaches within each tracking method (appendix: Table 10.7), which affects the risk of bias across the reviewed studies. This has also been reported in other reviews with different eligibility criteria, e.g. those focusing on a specific type of surgical procedure (Contreras López et al. 2019; Joda et al. 2019) or on wearable technology (Kolodzey et al. 2017). This lack of homogeneity and the low number of studies using common methodological and technological frameworks impeded statistical comparisons between the categories defined in our classifications (Table 10.4). Such a statistical analysis would have allowed us to explore potential relationships between registration accuracy and tracking and registration methods, and thus to make more specific recommendations for improving registration accuracy in future studies. This contrasts with some AR-based guidance tools for minimally invasive surgery, such as those for laparoscopy, where Eckert et al. (2019) found a high level of research maturity, i.e. they were considered successfully validated.
Incomplete retrieval of relevant publications must also be considered, as our search was limited to publications in English. The search, selection and classification of studies were done by the first author only, and our qualitative assessments may be biased due to their subjective nature. Finally, research published after August 2018 is not included in our review.
10.5 Conclusions

AR-based image overlay surgery is becoming more accessible to healthcare professionals and researchers through the combination of holographic headsets, computer tracking libraries and/or SDKs and game engines. However, manufacturers and researchers face key challenges for the implementation of these systems in clinical practice, such as the need for validation. Current research on AR-based image overlay surgery struggles to provide a sufficient level of registration accuracy for use in clinical practice. There is also a need for more clinical studies that include larger numbers of subjects and measurements, as well as data about patients’ recovery and surgical outcomes. In addition, further research must explore to what extent these systems improve surgery times and success rates and minimise invasiveness for patients. This knowledge would allow manufacturers and researchers to optimise these technologies based on surgical needs and to perform statistical comparisons that facilitate the design of highly efficient systems. Finally, finding a balance between the cost of holographic headsets and their suitability for implementation in clinical practice is important, as these novel devices show key benefits: they are portable and wearable, integrate tracking, registration and hands-free navigation, and offer direct visibility of the surgical site.

Acknowledgements We thank the staff of the Medical Library of the University of Aberdeen for their advice and Prof. Jennifer Cleland and Dr. Jenny Gregory for discussion and support. This work was funded by the Roland Sutton Academic Trust (0053/R/17) and an Elphinstone PhD Scholarship from the University of Aberdeen.
10 Image Overlay Surgery Based on Augmented Reality: A Systematic Review
Appendix (Tables 10.5, 10.6, 10.7, 10.8, and 10.9)

Table 10.5 Search strategy used in this systematic review, illustrated by the search done in MEDLINE (each row gives the search number, the search term/s, and the number of publications retrieved)

1. Surgery, Computer-Assisted/ or Tomography, X-Ray Computed/ or augmented reality.mp. or Endoscopy/ or Laparoscopy/ (N = 483,962)
2. image guided surg$.mp. or Surgery, Computer-Assisted/ (N = 15,684)
3. 1 and 2 (N = 15,367)
4. track$.tw. (N = 100,868)
5. registration.tw. (N = 74,110)
6. fiducial$.tw. (N = 2519)
7. projector.tw. (N = 847)
8. projection.tw. (N = 41,826)
9. Head-mounted display$.tw. (N = 446)
10. Head-mounted display$.mp. or Surgery, Computer-Assisted/ (N = 15,617)
11. head$ up display$.tw. (N = 100)
12. “Head and Neck Neoplasms”/ or Carcinoma, Squamous Cell/ or head$ up display$.mp. (N = 156,219)
13. autostereoscop$.tw. (N = 56)
14. microscop$.tw. (N = 537,608)
15. smart glasses.tw. (N = 26)
16. retinal display$.tw. (N = 14)
17. 4 or 5 or 6 or 7 or 8 or 9 or 10 or 11 or 12 or 13 or 14 or 15 or 16 (N = 909,270)
18. 3 and 17 (N = 15,251)
19. augmented reality.tw. (N = 839)
20. 18 and 19 (N = 263)
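The Ovid-style strategy above builds the final result set by saving numbered searches and combining them with AND/OR: search 17 is the union of the visualisation-term searches 4–16, search 18 intersects it with the surgery set, and search 20 intersects that with the augmented reality set. A minimal sketch of this set algebra, using hypothetical record-ID sets rather than real MEDLINE results (the integers and counts are illustrative only, not the N values in Table 10.5):

```python
# Hypothetical record-ID sets standing in for saved MEDLINE searches.
s1 = set(range(0, 500))        # search 1: surgery/imaging/AR subject headings
s2 = set(range(300, 420))      # search 2: image-guided surgery terms
s3 = s1 & s2                   # search 3: "1 and 2"

# Searches 4-16 each retrieve visualisation-related records;
# search 17 is their union ("4 or 5 or ... or 16").
visualisation_hits = [set(range(350, 360)), set(range(400, 480))]
s17 = set().union(*visualisation_hits)

s18 = s3 & s17                 # search 18: "3 and 17"
s19 = set(range(340, 410))     # search 19: augmented reality.tw.
s20 = s18 & s19                # search 20: final set of candidate records

print(len(s3), len(s18), len(s20))  # prints: 120 30 20
```

Each intersection can only shrink the candidate pool, which is why the reported counts fall monotonically from search 3 (15,367) through search 18 (15,251) to search 20 (263).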
Table 10.6 Reviewed studies organised according to surgical procedure

Neurosurgery (26.3%, N = 20): Cabrilo et al. (2015), Deng et al. (2014), Drouin et al. (2017), Eftekhar (2016), Hou et al. (2016), Huang et al. (2012), Kersten-Oertel et al. (2012), Kramers et al. (2014), Krempien et al. (2008), Liao et al. (2010), Mahvash and Tabrizi (2013), Maruyama et al. (2018), Shamir et al. (2011), Sun et al. (2016, 2017), Besharati Tabrizi and Mahvash (2015), Yang et al. (2018), Yoshino et al. (2015), Zeng et al. (2017), and Zhang et al. (2015)

Dental, craniomaxillofacial and oral (22.3%, N = 17): Badiali et al. (2014), Lee et al. (2010), Lin et al. (2015, 2016), Ma et al. (2019), Marmulla et al. (2005), Mezzana et al. (2011), Mischkowski et al. (2006), Qu et al. (2015), Suenaga et al. (2013, 2015), Tran et al. (2011), Wang et al. (2014, 2015, 2017), and Zhu et al. (2011, 2016)

Assist several surgical procedures (21.0%, N = 16): Cutolo et al. (2016), Fichtinger et al. (2005), Gavaghan et al. (2012), Giraldez et al. (2007), Han et al. (2013), He et al. (2016), Hu et al. (2013), Khan et al. (2006), Martins et al. (2016), Mondal et al. (2015), Shao et al. (2014), Vogt et al. (2006), Wacker et al. (2005), Wen et al. (2017), Zhang et al. (2017), and Wu et al. (2018)

Abdominal (13.1%, N = 10): Mahmoud et al. (2017), Müller et al. (2013), Pessaux et al. (2015), Si et al. (2018), Sugimoto et al. (2010), Tang et al. (2017), Volonte et al. (2011), Wen et al. (2013, 2014), and Wesarg et al. (2004)

Orthopaedic (11.8%, N = 9): Andress et al. (2018), Gibby et al. (2019), Kosterhon et al. (2017), Liang et al. (2012), Ma et al. (2017, 2018), Pauly et al. (2015), Wang et al. (2016), and Wu et al. (2014)

Eye (2.6%, N = 2): Rodriguez et al. (2012) and Scolozzi and Bijlenga (2017)

Endovascular (1.3%, N = 1): Parrini et al. (2014)

Perforator flap (1.3%, N = 1): Jiang et al. (2017)
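The % column in Table 10.6 appears to be each category's share of the 76 included studies, with the fraction truncated (not rounded) to one decimal place: 20/76 = 26.31…%, reported as 26.3, and 17/76 = 22.36…%, reported as 22.3 rather than 22.4. A quick check of that reading (the helper function is ours, and the truncation rule is inferred from the table, not stated in the chapter):

```python
# Recompute the % column of Table 10.6 from the N column.
# Assumption (inferred): percentages are n/76 truncated to one
# decimal place, which matches every row of the table.
counts = {
    "Neurosurgery": 20,
    "Dental, craniomaxillofacial and oral": 17,
    "Assist several surgical procedures": 16,
    "Abdominal": 10,
    "Orthopaedic": 9,
    "Eye": 2,
    "Endovascular": 1,
    "Perforator flap": 1,
}
total = sum(counts.values())  # 76 reviewed studies

def truncated_percent(n: int, total: int) -> float:
    """Percentage truncated (floored) to one decimal place."""
    return (n * 1000 // total) / 10

for procedure, n in counts.items():
    print(f"{procedure}: {truncated_percent(n, total)}% (N={n})")
```

Note that Tables 10.8 and 10.9 use a smaller denominator (the percentages there are consistent with a base of 58 studies rather than 76), so the same helper applies with a different `total`.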
L. Pérez-Pachón et al.
Table 10.7 Classification of reviewed automatic optical tracking studies according to the tracking method

Marker-based, infrared camera (40.7%, N = 31): Cabrilo et al. (2015), Deng et al. (2014), Drouin et al. (2017), Gavaghan et al. (2012), Giraldez et al. (2007), He et al. (2016), Hu et al. (2013), Huang et al. (2012), Kersten-Oertel et al. (2012), Khan et al. (2006), Kosterhon et al. (2017), Lee et al. (2010), Liang et al. (2012), Liao et al. (2010), Lin et al. (2016), Ma et al. (2017, 2019), Maruyama et al. (2018), Shamir et al. (2011), Si et al. (2018), Suenaga et al. (2013), Tang et al. (2017), Tran et al. (2011), Vogt et al. (2006), Wacker et al. (2005), Wang et al. (2016), Wesarg et al. (2004), Yang et al. (2018), Yoshino et al. (2015), and Zhang et al. (2015, 2017)

Marker-based, RGB camera (19.7%, N = 15): Badiali et al. (2014), Cutolo et al. (2016), Jiang et al. (2017), Kramers et al. (2014), Lin et al. (2015), Mischkowski et al. (2006), Müller et al. (2013), Parrini et al. (2014), Qu et al. (2015), Shao et al. (2014), Sun et al. (2017), Wang et al. (2015), Wu et al. (2014), and Zhu et al. (2011, 2016)

Marker-based, RGB-D camera (1.3%, N = 1): Wen et al. (2014)

Marker-based, projector and RGB camera (2.6%, N = 2): Krempien et al. (2008) and Wen et al. (2013)

Marker-less, RGB camera (3.9%, N = 3): Suenaga et al. (2015) and Wang et al. (2014, 2017)

Marker-less, RGB-D camera (6.5%, N = 5): Gibby et al. (2019), Marmulla et al. (2005), Pauly et al. (2015), Wen et al. (2017), and Wu et al. (2018)

Marker-less, projector and RGB camera (1.3%, N = 1): Zeng et al. (2017)

Electromagnetic (2.6%, N = 2): Ma et al. (2018) and Martins et al. (2016)

Manual (10.5%, N = 8): Eftekhar (2016), Hou et al. (2016), Mahvash and Tabrizi (2013), Mezzana et al. (2011), Pessaux et al. (2015), Sugimoto et al. (2010), Besharati Tabrizi and Mahvash (2015), and Volonte et al. (2011)

Other (10.5%, N = 8): Andress et al. (2018), Fichtinger et al. (2005), Han et al. (2013), Mahmoud et al. (2017), Mondal et al. (2015), Rodriguez et al. (2012), Scolozzi and Bijlenga (2017), and Sun et al. (2016)
Table 10.8 Reviewed studies organised according to the system’s usability

Compact (12.0%, N = 7): Cutolo et al. (2016), Gibby et al. (2019), Giraldez et al. (2007), Jiang et al. (2017), Kramers et al. (2014), Parrini et al. (2014), and Sun et al. (2017)

Wireless (8.6%, N = 5): Gibby et al. (2019), Kramers et al. (2014), Müller et al. (2013), Sun et al. (2017), and Wu et al. (2018)

Surgical site directly visible (27.5%, N = 16): Gavaghan et al. (2012), Gibby et al. (2019), Jiang et al. (2017), Krempien et al. (2008), Liang et al. (2012), Lin et al. (2016), Marmulla et al. (2005), Maruyama et al. (2018), Shao et al. (2014), Si et al. (2018), Wang et al. (2016), Wen et al. (2013, 2014), Wu et al. (2014, 2018), and Zeng et al. (2017)

Hands-free tracking (32.7%, N = 19): Badiali et al. (2014), Cabrilo et al. (2015), Cutolo et al. (2016), Gibby et al. (2019), Krempien et al. (2008), Lee et al. (2010), Liang et al. (2012), Ma et al. (2017), Marmulla et al. (2005), Pauly et al. (2015), Suenaga et al. (2013, 2015), Tran et al. (2011), Wang et al. (2014, 2015, 2017), Wen et al. (2013), Yang et al. (2018), and Yoshino et al. (2015)

Stand-alone application (6.9%, N = 4): Gibby et al. (2019), Kramers et al. (2014), Maruyama et al. (2018), and Wu et al. (2018)
Table 10.9 Classification of reviewed automatic optical tracking studies according to display device

Headset, video see-through (15.5%, N = 9): Badiali et al. (2014), Cutolo et al. (2016), Hu et al. (2013), Huang et al. (2012), Lin et al. (2015), Parrini et al. (2014), Shamir et al. (2011), Vogt et al. (2006), and Wacker et al. (2005)

Headset, optical see-through (non-holographic) (5.1%, N = 3): Jiang et al. (2017), Lin et al. (2016), and Wang et al. (2016)

Headset, optical see-through (holographic) (5.1%, N = 3): Gibby et al. (2019), Si et al. (2018), and Wu et al. (2018)

Half-silvered mirror (22.4%, N = 13): He et al. (2016), Liao et al. (2010), Ma et al. (2017, 2019), Pauly et al. (2015), Suenaga et al. (2013, 2015), Tran et al. (2011), Wang et al. (2014, 2015), Yang et al. (2018), and Zhang et al. (2015, 2017)

Projector (15.5%, N = 9): Gavaghan et al. (2012), Krempien et al. (2008), Lee et al. (2010), Liang et al. (2012), Marmulla et al. (2005), Wen et al. (2013, 2014), Wu et al. (2014), and Zeng et al. (2017)

Microscope (8.6%, N = 5): Cabrilo et al. (2015), Drouin et al. (2017), Giraldez et al. (2007), Kosterhon et al. (2017), and Yoshino et al. (2015)

Tablet (8.6%, N = 5): Deng et al. (2014), Mischkowski et al. (2006), Müller et al. (2013), Tang et al. (2017), and Wen et al. (2017)

Semi-transparent screen (3.4%, N = 2): Khan et al. (2006) and Wesarg et al. (2004)

Smartphone (3.4%, N = 2): Kramers et al. (2014) and Sun et al. (2017)

Smart glasses (3.4%, N = 2): Maruyama et al. (2018) and Shao et al. (2014)

Video camera screen (1.7%, N = 1): Kersten-Oertel et al. (2012)

Not specified (6.9%, N = 4): Qu et al. (2015), Wang et al. (2017), and Zhu et al. (2011, 2016)
References

Andress S, Johnson A, Unberath M, Winkler AF, Yu K, Fotouhi J, Weidert S, Osgood G, Navab N (2018) On-the-fly augmented reality for orthopedic surgery using a multimodal fiducial. J Med Imaging 5(2):021209

Azuma RT (1997) A survey of augmented reality. Presence Teleop Virt 6(4):355–385

Badiali G, Ferrari V, Cutolo F, Freschi C, Caramella D, Bianchi A, Marchetti C (2014) Augmented reality as an aid in maxillofacial surgery: validation of a wearable system allowing maxillary repositioning. J Cranio-Maxillo-Facial Surg 42(8):1970–1976

Bertolo R, Hung A, Porpiglia F, Bove P, Schleicher M, Dasgupta P (2019) Systematic review of augmented reality in urological interventions: the evidences of an impact on surgical outcomes are yet to come. World J Urol. Available from: https://doi.org/10.1007/s00345-019-02711-z

Besharati Tabrizi L, Mahvash M (2015) Augmented reality-guided neurosurgery: accuracy and intraoperative application of an image projection technique. J Neurosurg 123(1):206–211
Bosc R, Fitoussi A, Hersant B, Dao T, Meningaud J (2019) Intraoperative augmented reality with heads-up displays in maxillofacial surgery: a systematic review of the literature and a classification of relevant technologies. Int J Oral Maxillofac Surg 48(1):132–139

Cabrilo I, Schaller K, Bijlenga P (2015) Augmented reality-assisted bypass surgery: embracing minimal invasiveness. World Neurosurg 83(4):596–602

Contreras López WO, Navarro PA, Crispin S (2019) Intraoperative clinical application of augmented reality in neurosurgery: a systematic review. Clin Neurol Neurosurg 177:6–11

Cutolo F, Carbone M, Parchi PD, Ferrari V, Lisanti M, Ferrari M (2016) Application of a new wearable augmented reality video see-through display to aid percutaneous procedures in spine surgery. Augment Reality Virtual Reality Comput Graphics 9769(Pt II):43–54

Deng W, Li F, Wang M, Song Z (2014) Easy-to-use augmented reality neuronavigation using a wireless tablet PC. Stereotact Funct Neurosurg 92(1):17–24

Drouin S, Kochanowska A, Kersten-Oertel M, Gerard IJ, Zelmann R, De ND, Beriault S, Arbel T, Sirhan D, Sadikot AF, Hall JA, Sinclair DS, Petrecca K, DelMaestro RF, Collins DL (2017) IBIS: an OR ready open-source platform for image-guided neurosurgery. Int J Comput Assist Radiol Surg 12(3):363–378
Eckert M, Volmerg JS, Friedrich CM (2019) Augmented reality in medicine: systematic and bibliographic review. JMIR Mhealth Uhealth 7(4):e10967

Eftekhar B (2016) App-assisted external ventricular drain insertion. J Neurosurg 125(3):754–758

Fichtinger G, Deguet A, Masamune K, Balogh E, Fischer G, Mathieu H, Taylor R, Zinreich S, Fayad L (2005) Image overlay guidance for needle insertion in CT scanner. IEEE Trans Biomed Eng 52(8):1415–1424

Fida B, Cutolo F, di Franco G, Ferrari M, Ferrari V (2018) Augmented reality in open surgery. Updates Surg 70(3):389–400

Fitzpatrick JM, West JB (2001) The distribution of target registration error in rigid-body point-based registration. IEEE Trans Med Imaging 20(9):917–927

Fritz J, U-Thainual P, Ungi T, Flammang AJ, Fichtinger G, Iordachita II, Carrino JA (2013) Augmented reality visualisation using an image overlay system for MR-guided interventions: technical performance of spine injection procedures in human cadavers at 1.5 Tesla. Eur Radiol 23(1):235–245

Gavaghan K, Oliveira-Santos T, Peterhans M, Reyes M, Kim H, Anderegg S, Weber S (2012) Evaluation of a portable image overlay projector for the visualisation of surgical navigation data: phantom studies. Int J Comput Assist Radiol Surg 7(4):547–556

Gibby JT, Swenson SA, Cvetko S, Rao R, Javan R (2019) Head-mounted display augmented reality to guide pedicle screw placement utilizing computed tomography. Int J Comput Assist Radiol Surg 14(3):525–535

Giraldez JG, Caversaccio M, Pappas I, Kowal J, Rohrer U, Marti G, Baur C, Nolte L-P, Gonzalez BM (2007) Design and clinical evaluation of an image-guided surgical microscope with an integrated tracking system. Int J Comput Assist Radiol Surg 1(5):253–264

Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, Schünemann HJ (2008) GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ 336(7650):924

Han S, Lee C, Kim S, Jeon M, Kim J, Kim C (2013) In vivo virtual intraoperative surgical photoacoustic microscopy. Appl Phys Lett 103(20):203702

He C, Liu Y, Wang Y (2016) Sensor-fusion based augmented-reality surgical navigation system. In: International Instrumentation and Measurement Technology Conference; 2016 May 23–26; Taipei, Taiwan. IEEE; 2016. Available from: https://doi.org/10.1109/I2MTC.2016.7520404

Hou Y, Ma L, Zhu R, Chen X (2016) iPhone-assisted augmented reality localization of basal ganglia hypertensive hematoma. World Neurosurg 94:480–492

Hu L, Wang M, Song Z (2013) A convenient method of video see-through augmented reality based on image-guided surgery system. In: Seventh International Conference on Internet Computing for Engineering and Science; 2013 September 20–22; Shanghai, China. IEEE; 2013. Available from: https://doi.org/10.1109/ICICSE.2013.27
Huang CH, Hsieh CH, Lee JD, Huang WC, Lee ST, Wu CT, Sun YN, Wu YT (2012) A CT-ultrasound-coregistered augmented reality enhanced image-guided surgery system and its preliminary study on brain-shift estimation. J Instrum 7:P08016

Hummelink S, Hameeteman M, Hoogeveen Y, Slump CH, Ulrich DJO, Schultze Kool LJ (2015) Preliminary results using a newly developed projection method to visualize vascular anatomy prior to DIEP flap breast reconstruction. J Plast Reconstr Aesthet Surg 68(3):390–394

Jiang T, Zhu M, Zan T, Gu B, Li Q (2017) A novel augmented reality-based navigation system in perforator flap transplantation – a feasibility study. Ann Plast Surg 79(2):192–196

Jiang W, Ma L, Boyu Z, Yingwei F, Qu X, Zhang X, Liao H (2018) Evaluation of the 3D augmented reality-guided intraoperative positioning of dental implants in edentulous mandibular models. Int J Oral Maxillofac Implants 33:1219–1228

Joda T, Gallucci GO, Wismeijer D, Zitzmann NU (2019) Augmented and virtual reality in dental medicine: a systematic review. Comput Biol Med 108:93–100

Keller K, State A, Fuchs H (2008) Head mounted displays for medical use. J Disp Technol 4:468–472

Kersten-Oertel M, Chen SS, Drouin S, Sinclair DS, Collins DL (2012) Augmented reality visualization for guidance in neurovascular surgery. Stud Health Technol Inform 173:225–229

Khan MF, Dogan S, Maataoui A, Wesarg S, Gurung J, Ackermann H, Schiemann M, Wimmer-Greinecker G, Vogl TJ (2006) Navigation-based needle puncture of a cadaver using a hybrid tracking navigational system. Investig Radiol 41(10):713–720

Khor WS, Baker B, Amin K, Chan A, Patel K, Wong J (2016) Augmented and virtual reality in surgery – the digital surgical environment: applications, limitations and legal pitfalls. Ann Transl Med 4(23):454

Kim Y, Kim H, Kim YO (2017) Virtual reality and augmented reality in plastic surgery: a review. Arch Plast Surg 44(3):179–187

Kolodzey L, Grantcharov PD, Rivas H, Schijven MP, Grantcharov TP (2017) Wearable technology in the operating room: a systematic review. BMJ Innov 3(1):55–63

Kosterhon M, Gutenberg A, Kantelhardt SR, Archavlis E, Giese A (2017) Navigation and image injection for control of bone removal and osteotomy planes in spine surgery. Operative Neurosurg 13(2):297–304

Kramers M, Armstrong R, Bakhshmand SM, Fenster A, de Ribaupierre S, Eagleson R (2014) Evaluation of a mobile augmented reality application for image guidance of neurosurgical interventions. Stud Health Technol Inform 196:204–208

Krempien R, Hoppe H, Kahrs L, Daeuber S, Schorr O, Eggers G, Bischof M, Minter MW, Debus J, Harms W (2008) Projector-based augmented reality for intuitive intraoperative guidance in image-guided 3D interstitial brachytherapy. Int J Radiat Oncol Biol Phys 70(3):944–952
Lee J-D, Huang C-H, Wang S-T, Lin C-W, Lee S-T (2010) Fast-MICP for frameless image-guided surgery. Med Phys 37(9):4551–4559

Li L, Yang J, Chu Y, Wu W, Xue J, Liang P, Chen L (2016) A novel augmented reality navigation system for endoscopic sinus and skull base surgery: a feasibility study. PLOS ONE 11(1):e0146996

Liang JT, Doke T, Onogi S, Ohashi S, Ohnishi I, Sakuma I, Nakajima Y (2012) A fluorolaser navigation system to guide linear surgical tool insertion. Int J Comput Assist Radiol Surg 7(6):931–939

Liao H, Inomata T, Sakuma I, Dohi T (2010) 3-D augmented reality for MRI-guided surgery using integral videography autostereoscopic image overlay. IEEE Trans Biomed Eng 57(6):1476–1486

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Clarke M, Kleijnen J, Moher D (2009) The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. J Clin Epidemiol 62(10):e1–e34

Lin Y, Yau H, Wang I, Zheng C, Chung K (2015) A novel dental implant guided surgery based on integration of surgical template and augmented reality. Clin Implant Dent Relat Res 17(3):543–553

Lin L, Shi Y, Tan A, Bogari M, Zhu M, Xin Y, Xu H, Zhang Y, Xie L, Chai G (2016) Mandibular angle split osteotomy based on a novel augmented reality navigation using specialized robot-assisted arms – a feasibility study. J Cranio-Maxillofac Surg 44(2):215–223

Liu WP, Azizian M, Sorger J, Taylor RH, Reilly BK, Cleary K, Preciado D (2014) Cadaveric feasibility study of da Vinci Si-assisted cochlear implant with augmented visual navigation for otologic surgery. JAMA Otolaryngol Head Neck Surg 140(3):208–214

Ma L, Zhao Z, Chen F, Zhang B, Fu L, Liao H (2017) Augmented reality surgical navigation with ultrasound-assisted registration for pedicle screw placement: a pilot study. Int J Comput Assist Radiol Surg 12(12):2205–2215

Ma L, Zhao Z, Zhang B, Jiang W, Fu L, Zhang X, Liao H (2018) Three-dimensional augmented reality surgical navigation with hybrid optical and electromagnetic tracking for distal intramedullary nail interlocking. Int J Medical Rob Comput Assisted Surg 14(4):e1909

Ma L, Jiang W, Zhang B, Qu X, Ning G, Zhang X, Liao H (2019) Augmented reality surgical navigation with accurate CBCT-patient registration for dental implant placement. Med Biol Eng Comput 57(1):47–57

Mahmoud N, Grasa OG, Nicolau SA, Doignon C, Soler L, Marescaux J, Montiel JMM (2017) On-patient see-through augmented reality based on visual SLAM. Int J Comput Assist Radiol Surg 12(1):1–11

Mahvash M, Tabrizi LB (2013) A novel augmented reality system of image projection for image-guided neurosurgery. Acta Neurochir 155(5):943–947

Marmulla R, Hoppe H, Muhling J, Eggers G (2005) An augmented reality system for image-guided surgery. This article is derived from a previous article published in the journal International Congress Series. Int J Oral Maxillofac Surg 34(6):594–596

Martins S, Vairinhos M, Eliseu S, Borgerson J (2016) Input system interface for image-guided surgery based on augmented reality. In: First International Conference on Technology and Innovation in Sports, Health and Wellbeing (TISHW); 2016 December 1–3; Vila Real, Portugal. IEEE; 2017. Available from: https://doi.org/10.1109/TISHW.2016.7847779

Maruyama K, Watanabe E, Kin T, Saito K, Kumakiri A, Noguchi A, Nagane M, Shiokawa Y (2018) Smart glasses for neurosurgical navigation by augmented reality. Operative Neurosurg (Hagerstown) 15(5):551–556

Mezzana P, Scarinci F, Marabottini N (2011) Augmented reality in oculoplastic surgery: first iPhone application. Plastic Reconstruct Surg 127(3):57e–58e

Mischkowski RA, Zinser MJ, Kubler AC, Krug B, Seifert U, Zoller JE (2006) Application of an augmented reality tool for maxillary positioning in orthognathic surgery – a feasibility study. J Cranio-Maxillofac Surg 34(8):478–483

Mohring M, Lessig C, Bimber O (2004) Video see-through AR on consumer cell-phones. In: Third IEEE/ACM International Symposium on Mixed and Augmented Reality; 2004 Nov 5. IEEE. Available from: https://doi.org/10.1109/ISMAR.2004.63

Mondal SB, Gao S, Zhu N, Sudlow GP, Liang K, Som A, Akers WJ, Fields RC, Margenthaler J, Liang R, Gruev V, Achilefu S (2015) Binocular goggle augmented imaging and navigation system provides real-time fluorescence image guidance for tumor resection and sentinel lymph node mapping. Sci Rep 5. Available from: https://doi.org/10.1038/srep12117

Müller M, Rassweiler M-C, Klein J, Seitel A, Gondan M, Baumhauer M, Teber D, Rassweiler JJ, Meinzer H-P, Maier-Hein L (2013) Mobile augmented reality for computer-assisted percutaneous nephrolithotomy. Int J Comput Assist Radiol Surg 8(4):663–675

Parrini S, Cutolo F, Freschi C, Ferrari M, Ferrari V (2014) Augmented reality system for freehand guide of magnetic endovascular devices. In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp 490–493

Pauly O, Diotte B, Fallavollita P, Weidert S, Euler E, Navab N (2015) Machine learning-based augmented reality for improved surgical scene understanding. Comput Med Imag Graph 41:55–60

Pessaux P, Diana M, Soler L, Piardi T, Mutter D, Marescaux J (2015) Towards cybernetic surgery: robotic and augmented reality-assisted liver segmentectomy. Langenbeck’s Arch Surg 400(3):381–385

Poulin F, Amiot L (2002) Interference during the use of an electromagnetic tracking system under OR conditions. J Biomech 35(6):733–737

Profeta AC, Schilling C, McGurk M (2016) Augmented reality visualization in head and neck surgery: an overview of recent findings in sentinel node biopsy and future perspectives. Br J Oral Maxillofac Surg 54(6):694–696
Qu M, Hou Y, Xu Y, Shen C, Zhu M, Xie L, Wang H, Zhang Y, Chai G (2015) Precise positioning of an intraoral distractor using augmented reality in patients with hemifacial microsomia. J Cranio-Maxillo-Facial Surg 43(1):106–112

Rodriguez PS, Becker BC, Lobes LA Jr, Riviere CN (2012) Comparative evaluation of monocular augmented-reality display for surgical microscopes. In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp 1409–1412

Sayadi LR, Naides A, Eng M, Fijany A, Chopan M, Sayadi JJ, Shaterian A, Banyard DA, Evans GRD, Vyas R, Widgerow AD (2019) The new frontier: a review of augmented reality and virtual reality in plastic surgery. Aesthet Surg J 39(9):1007–1016

Scolozzi P, Bijlenga P (2017) Removal of recurrent intraorbital tumour using a system of augmented reality. Br J Oral Maxillofac Surg 55(9):962–964

Shamir RR, Horn M, Blum T, Mehrkens J, Shoshan Y, Joskowicz L, Navab N (2011) Trajectory planning with augmented reality for improved risk assessment in image-guided keyhole neurosurgery. In: International Symposium on Biomedical Imaging: From Nano to Macro. IEEE. Available from: https://doi.org/10.1109/ISBI.2011.5872773

Shao P, Ding H, Wang J, Liu P, Ling Q, Chen J, Xu J, Zhang S, Xu R (2014) Designing a wearable navigation system for image-guided cancer resection surgery. Ann Biomed Eng 42(11):2228–2237

Si W, Liao X, Qian Y, Wang Q (2018) Mixed reality guided radiofrequency needle placement: a pilot study. IEEE Access 6:31493–31502

Suenaga H, Hoang Tran H, Liao H, Masamune K, Dohi T, Hoshi K, Mori Y, Takato T (2013) Real-time in situ three-dimensional integral videography and surgical navigation using augmented reality: a pilot study. Int J Oral Sci 5(2):98–102

Suenaga H, Tran HH, Liao H, Masamune K, Dohi T, Hoshi K, Takato T (2015) Vision-based markerless registration using stereo vision and an augmented reality surgical navigation system: a pilot study. BMC Med Imaging 15:51

Sugimoto M, Yasuda H, Koda K, Suzuki M, Yamazaki M, Tezuka T, Kosugi C, Higuchi R, Watayo Y, Yagawa Y, Uemura S, Tsuchiya H, Azuma T (2010) Image overlay navigation by markerless surface registration in gastrointestinal, hepatobiliary and pancreatic surgery. J Hepatobiliary Pancreat Sci 17(5):629–636

Sun G, Wang F, Chen X, Yu X, Ma X, Zhou D, Zhu R, Xu B (2016) Impact of virtual and augmented reality based on intraoperative magnetic resonance imaging and functional neuronavigation in glioma surgery involving eloquent areas. World Neurosurg 96:375–382

Sun G, Chen X, Hou Y, Yu X, Ma X, Liu G, Liu L, Zhang J, Tang H, Zhu R, Zhou D, Xu B (2017) Image-guided endoscopic surgery for spontaneous supratentorial intracerebral hematoma. J Neurosurg 127(3):537–542
Tang R, Ma L, Xiang C, Wang X, Li A, Liao H, Dong J (2017) Augmented reality navigation in open surgery for hilar cholangiocarcinoma resection with hemihepatectomy using video-based in situ three-dimensional anatomical modeling: a case report. Medicine 96(37):e8083

Tepper OM, Rudy HL, Lefkowitz A, Weimer KA, Marks SM, Stern CS, Garfein ES (2017) Mixed reality with HoloLens: where virtual reality meets augmented reality in the operating room. Plast Reconstr Surg 140(5):1066–1070

Tran HH, Suenaga H, Kuwana K, Masamune K, Dohi T, Nakajima S, Liao H (2011) Augmented reality system for oral surgery using 3D autostereoscopic visualization. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI), vol 14, no Pt 1, pp 81–88

Vávra P, Roman J, Zonča P, Ihnát P, Němec M, Kumar J, Habib N, El-Gendi A (2017) Recent development of augmented reality in surgery: a review. J Healthcare Eng 2017:4574172. Available from: https://doi.org/10.1155/2017/4574172

Vogt S, Khamene A, Sauer F (2006) Reality augmentation for medical procedures: system architecture, single camera marker tracking, and system evaluation. Int J Comput Vis 70(2):179–190

Volonte F, Pugin F, Bucher P, Sugimoto M, Ratib O, Morel P (2011) Augmented reality and image overlay navigation with OsiriX in laparoscopic and robotic surgery: not only a matter of fashion. J Hepatobiliary Pancreat Sci 18(4):506–509

Wacker F, Vogt S, Khamene A, Sauer F, Wendt M, Duerk J, Lewin J, Wolf K (2005) MR image-guided needle biopsies with a combination of augmented reality and MRI: a pilot study in phantoms and animals. In: CARS 2005: Computer Assisted Radiology and Surgery, vol 1281, pp 424–428

Wang J, Suenaga H, Hoshi K, Yang L, Kobayashi E, Sakuma I, Liao H (2014) Augmented reality navigation with automatic marker-free image registration using 3-D image overlay for dental surgery. IEEE Trans Biomed Eng 61(4):1295–1304

Wang J, Suenaga H, Liao H, Hoshi K, Yang L, Kobayashi E, Sakuma I (2015) Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation. Comput Med Imag Graph 40:147–159

Wang H, Wang F, Leong APY, Xu L, Chen X, Wang Q (2016) Precision insertion of percutaneous sacroiliac screws using a novel augmented reality-based navigation system: a pilot study. Int Orthop 40(9):1941–1947

Wang J, Suenaga H, Yang L, Kobayashi E, Sakuma I (2017) Video see-through augmented reality for oral and maxillofacial surgery. Int J Med Rob Comput Assisted Surg 13(2). Available from: https://doi.org/10.1002/rcs.1754

Wen R, Chui C-K, Ong S-H, Lim K-B, Chang SK-Y (2013) Projection-based visual guidance for robot-aided RF needle insertion. Int J Comput Assist Radiol Surg 8(6):1015–1025

Wen R, Tay W-L, Nguyen BP, Chng C-B, Chui C-K (2014) Hand gesture guided robot-assisted surgery based on a direct augmented reality interface. Comput Methods Prog Biomed 116(2):68–80

Wen R, Chng C, Chui C (2017) Augmented reality guidance with multimodality imaging data and depth-perceived interaction for robot-assisted surgery. Robotics 6(2):13

Wesarg S, Firle E, Schwald B, Seibert H, Zogal P, Roeddiger S (2004) Accuracy of needle implantation in brachytherapy using a medical AR system – a phantom study. In: Medical Imaging 2004: Visualization, Image-Guided Procedures, and Display, vol 5367, pp 341–352

West JB, Fitzpatrick JM, Toms SA, Maurer CR Jr, Maciunas RJ (2001) Fiducial point placement and the accuracy of point-based, rigid body registration. Neurosurgery 48(4):810–816; discussion 816–817

Wong K, Yee HM, Xavier BA, Grillone GA (2018) Applications of augmented reality in otolaryngology: a systematic review. Otolaryngol Head Neck Surg 159(6):956–967

Wu J, Wang M, Liu K, Hu M, Lee P (2014) Real-time advanced spinal surgery via visible patient model and augmented reality system. Comput Methods Prog Biomed 113(3):869–881

Wu ML, Chien JC, Wu CT, Lee JD (2018) An augmented reality system using improved-iterative closest point algorithm for on-patient medical image visualization. Sensors (Basel) 18(8):E2505

Yang G, Hu H, Wang B, Wen C, Huang Y, Fu Y, Su Y, Wu J (2018) A novel method and system for stereotactic surgical procedures. In: IEEE Signal Processing in Medicine and Biology Symposium (SPMB). IEEE. Available from: https://doi.org/10.1109/SPMB.2017.8257036

Yoon J, Chen R, Kim E, Akinduro O, Kerezoudis P, Han P, Si P, Freeman W, Diaz R, Komotar R, Pirris S, Brown B, Bydon M, Wang M, Wharen R, Quinones-Hinojosa A (2018) Augmented reality for the surgeon: systematic review. Int J Med Rob Comput Assisted Surg 14(4):e1914

Yoshino M, Saito T, Kin T, Nakagawa D, Nakatomi H, Oyama H, Saito N (2015) A microscopic optically tracking navigation system that uses high-resolution 3D computer graphics. Neurol Med Chir 55(8):674–679

Zeng B, Meng F, Ding H, Wang G (2017) A surgical robot with augmented reality visualization for stereoelectroencephalography electrode implantation. Int J Comput Assist Radiol Surg 12(8):1355–1368

Zhang X, Chen G, Liao H (2015) A high-accuracy surgical augmented reality system using enhanced integral videography image overlay. In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol 2015, pp 4210–4213

Zhang X, Chen G, Liao H (2017) High-quality see-through surgical guidance system using enhanced 3-D autostereoscopic augmented reality. IEEE Trans Biomed Eng 64(8):1815–1825

Zhu M, Chai G, Zhang Y, Ma X, Gan J (2011) Registration strategy using occlusal splint based on augmented reality for mandibular angle oblique split osteotomy. J Craniofac Surg 22(5):1806–1809

Zhu M, Chai G, Lin L, Xin Y, Tan A, Bogari M, Zhang Y, Li Q (2016) Effectiveness of a novel augmented reality-based navigation system in treatment of orbital hypertelorism. Ann Plast Surg 77(6):662–668