An Introduction to Veterinary Medicine Engineering
Table of contents:
Preface
Contents
Monitoring During Anesthesia: Physiology
1 Introduction
2 What Is the Risk of Anesthesia-Related Death?
3 Monitoring Guidelines
4 When Does Monitoring Begin and End?
5 Body Systems That Can Be Monitored
6 Monitoring Anesthetic Depth
7 Monitoring the Cardiovascular System
8 Monitoring the Respiratory System
References
Monitoring Equipment
1 Introduction
2 Bispectral Index
3 Electrocardiography
4 Blood Pressure Monitoring
5 Pulse Oximetry
6 Capnometry and Capnography
References
The Mechatronics Inside the Animal Kingdom
1 Systems
2 The Reptilian Heart
3 The Design
3.1 Network Physiology
3.2 Cyber-physical Sensor System
References
Frameworks and Platforms for Monitoring Animal Health and Wellness in Human Care and in the Wild
1 Introduction
2 A Brief History of Early Sensing Devices, Information System Frameworks and Computing Platforms for Continuous Health Monitoring
2.1 Early Systems for Continuous Health Monitoring
2.2 Information System Frameworks for Continuous Health Monitoring
3 A Framework for Animal Health and Behaviour Monitoring
3.1 Animal Data
3.2 Data Collection
3.3 Data Acquisition
3.4 Data Buffering and Data Transmission
3.5 Data Transformation and the Adaptive API
3.6 Data Analytics
3.7 Data Storage
3.8 Information Exploration
4 Discussion and Conclusion
References
Functional Sensory and Biomedical Applications of Cellulose Nanocrystals as a Sustainable Source of Material
1 Introduction to Cellulose Nanocrystals
2 Applications of CNCs
2.1 Applications Based on Chiral Nematic Self-Assembly
2.2 As a Strain Sensor
2.3 As a Humidity Sensor
2.4 As a Chemical Sensor
2.5 As a Hydrogel Reinforcing Agent
2.6 As a Drug Delivery Carrier
3 Discussion and Conclusion
References
Introduction to Veterinary Engineering Teaching Veterinary Anatomy: How Biomedical Engineering Has Changed Its Course
1 Traditional Pedagogical Methods in Veterinary Anatomy
2 Pedagogical Evolution of Veterinary Anatomy
3 Technology-Enhanced Resources in Veterinary Medicine and Veterinary Anatomy Curriculum
4 Augmented and Mixed-Reality Technology in Veterinary Anatomy Curriculum to Enhance Learning Outcomes
References
Designing Immersive Environments for Extreme Environments
1 Introduction
1.1 Related Work
2 Underwater Physics
2.1 Buoyancy
2.2 Visual
2.3 Auditory
3 Perception Physiology
3.1 Visual
3.2 Auditory
3.3 Temperature
4 Perception Challenges Underwater
4.1 Haptic–Tactile Perception
4.2 Visual
4.3 Auditory
5 Concepts of Different Realities
6 Technological Approaches
6.1 Augmented Reality
6.2 Virtual Reality
7 Designing Requirements for Dynamic and Immersive Environments
7.1 Spatial Mapping
7.2 Case Study: 3D Horse Head Model to Augmented Reality Model
7.3 Case Study: Methodology
Source Imagery and 3D Modelling Software Selection
Modelling Strategy
Getting Started Creating the Blank Model
Carving Away the Model
Test Printing
8 Discussion
References
3D Printed Models for Veterinary Anatomy Teaching
1 Introduction
2 3D Printing Overview
2.1 Digital Image Acquisitions
2.2 Digital Image Editing
2.3 3D Printing
3 Use of 3D Printed Models in Veterinary Anatomy Education
3.1 Equine 3D Models
3.2 Canine Neuroanatomy 3D Models
3.3 Canine Musculoskeletal Models
3.4 Feline Larynx for Teaching Endotracheal Intubation
3.5 Canine Pulmonic Stenosis Model
4 Conclusion
References
Veterinary Surgery: Overview and Recent Achievements
1 Introduction
1.1 Where Do Surgeons Work?
1.2 What Is Surgery?
2 Surgical Subspecialties
2.1 Orthopedic Surgery
2.2 Surgical Oncology
2.3 Neurosurgery and Advanced Imaging
2.4 General/Soft Tissue Surgery and Minimally Invasive Surgery
3 The Operating Room and Equipment
3.1 Surgical Modalities
3.2 Environmental Impact
4 Future Opportunities in Veterinary Surgery
4.1 Robotic and Robotic-Assisted Surgery
4.2 Artificial Intelligence
5 Summary
References
Index

Nadja Johnson Bressan • Catherine M. Creighton Editors

An Introduction to Veterinary Medicine Engineering

Editors

Nadja Johnson Bressan
Sustainable Design Engineering
University of Prince Edward Island
Charlottetown, PE, Canada

Catherine M. Creighton
Atlantic Veterinary College
University of Prince Edward Island
Charlottetown, PE, Canada

ISBN 978-3-031-22804-9    ISBN 978-3-031-22805-6 (eBook)
https://doi.org/10.1007/978-3-031-22805-6

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Cover illustration by Eagan Boire This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

To our families for following us around the world, for their endless love, support, and encouragement.

Preface

Engineering is defined as the understanding and application of scientific principles to solve everyday challenges. In contrast, veterinary medicine is defined as the science and art that deals with the maintenance of health and the prevention, alleviation, and curing of disease and injury in animals. Veterinary medicine engineering is a novel field of science that applies scientific knowledge to the design and implementation of technologies that solve challenges in veterinary practice, in both clinical and research applications. The development of equipment for animal patient monitoring and assessment is a good example of the complexity of these challenges. The electrocardiogram (ECG), for instance, was designed to determine heart rate and rhythm and to assess human patients for abnormal cardiac rhythms, and it presents some challenges when used in equine patients. The P-wave of the ECG is dependent on heart rate; its duration in horses is ≤160 ms and in humans is ≤100 ms. The PR interval in horses is ≤500 ms and in humans is 120–200 ms, and the QRS duration in horses is ≤140 ms and in humans is 60–100 ms. When a human heart rate monitor is used, some values are therefore skewed and do not accurately represent the horse's heart rate and rhythm. Another challenge in horses compared to humans is the positioning of ECG electrodes on the body: in horses, electrode position varies with clinician preference and with whether the horse is anesthetized, conscious, or being evaluated during motion, while in humans, electrode position is an integral part of the characterization of the ECG. As another example of the challenges veterinarians face in patient assessment and monitoring, reptiles have inconsistent, very low-amplitude heart sounds, which makes auscultation with standard stethoscopes difficult. The low electric amplitude of the reptilian ECG likewise complicates standard electrocardiographic monitoring.

Functional Sensory and Biomedical Applications of Cellulose Nanocrystals as a Sustainable Source of Material

2.3 As a Humidity Sensor

CNC-based photonic films have shown reversible color changes (10 cycles) in the range of green to red in response to a relative humidity (RH) change from 30% to 95% [48]. In a recent work by Babaei-Ghazvini and Acharya, coatings and films of CNC composites with glycerol and polyethylene glycol (PEG) were studied as humidity sensors covering a broad range of relative humidity levels (20–98%). Their results showed that an equal combination of glycerol and PEG gave the best balance of mechanical and photonic properties [5].
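The optics behind these color readouts can be made concrete with a back-of-the-envelope calculation: for a chiral nematic film the peak reflected wavelength is commonly approximated as λ ≈ n̄P, where n̄ is the average refractive index and P the helical pitch, so moisture uptake that swells the film and lengthens the pitch red-shifts the color. A minimal sketch of this standard relation, with illustrative numbers that are not taken from the chapter:

```python
# Illustrative sketch: peak reflection of a chiral nematic CNC film.
# The relation lambda = n_avg * P links the helical pitch P to the peak
# reflected wavelength; swelling at high humidity lengthens the pitch.
# All numbers below are assumed for illustration, not measured values.

N_AVG = 1.55  # assumed average refractive index of a CNC film


def reflected_wavelength_nm(pitch_nm: float, n_avg: float = N_AVG) -> float:
    """Peak reflected wavelength (nm) for a chiral nematic film."""
    return n_avg * pitch_nm


dry_pitch_nm = 330.0      # hypothetical pitch at 30% RH
swollen_pitch_nm = 420.0  # hypothetical pitch at 95% RH, after swelling

print(reflected_wavelength_nm(dry_pitch_nm))      # ~512 nm, green
print(reflected_wavelength_nm(swollen_pitch_nm))  # ~651 nm, red
```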

2.4 As a Chemical Sensor

As mentioned in the previous section, a change in the chiral nematic pitch of CNC photonic films shifts the reflection band of the helical structure [6]. The pitch can also change in response to chemical stimuli, which enables iridescent CNC-based photonic films to sense chemical quantities such as pH, solvent, and ionic strength. Sui et al. prepared a photonic nanocomposite based on chiral nematic CNCs by co-assembling poly(N-isopropylacrylamide)-grafted CNCs with waterborne polyurethane latex. The fabricated nanocomposite was tested for its response to humidity, solvent, and thermal stimuli. For the solvent response, they tested the films in ethanol/water mixtures of different polarity. Upon immersion of the films in solvents with different ethanol/water ratios, the reflection band of the film changed immediately. As the ethanol:water ratio decreased from 90:10 to 50:50, the film color red-shifted, and this shift can be used to measure the ethanol concentration [66].
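In practice, such a film becomes a quantitative sensor through a calibration curve that maps the measured reflection peak back to composition. A minimal sketch of that inversion, using made-up calibration points that only follow the reported trend (less ethanol, larger red shift):

```python
# Hypothetical calibration sketch: inverting a reflection-band red shift
# to an ethanol/water ratio. The wavelengths are invented, monotonic
# calibration points consistent with the trend reported above.
import numpy as np

ethanol_pct = np.array([90.0, 80.0, 70.0, 60.0, 50.0])   # % ethanol (assumed)
peak_nm = np.array([540.0, 565.0, 590.0, 615.0, 640.0])  # assumed peaks (nm)

# np.interp needs ascending x, so interpolate on the wavelength axis.
order = np.argsort(peak_nm)


def ethanol_from_peak(measured_nm: float) -> float:
    """Estimate ethanol content (%) from a measured reflection peak (nm)."""
    return float(np.interp(measured_nm, peak_nm[order], ethanol_pct[order]))


print(ethanol_from_peak(600.0))  # ~66% ethanol on this synthetic curve
```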

In another work, Dai et al. developed and investigated an ammonia gas sensor based on chiral nematic films of copper(II)-doped CNCs. Chiral nematic CNC films with a red shift of more than 50 nm were obtained at a copper loading of 125 mmol/g CNCs. Based on their results, copper ions acted as a color-tuning agent by chelating with the negatively charged CNCs [21]. Colorimetric detection of aldehyde gases with chiral nematic CNCs on solid substrates, without the use of any other chemicals, has been studied by Song et al. Based on their report, the color of the films could be controlled by adjusting the self-assembly pitch of the CNC solution. The fabricated CNC films were functionalized with amine groups, and the color of the films changed upon exposure to aldehyde gases such as formaldehyde or propanal [64]. In another work, Zhao et al. developed chiral nematic CNC films with dual functionality and investigated their sensing response to relative humidity and formaldehyde gas. They attributed the dual responsiveness to a synergistic effect of cooperation and competition between formaldehyde and water molecules, which form an H-bonded cross-linking network with the CNCs. The authors claimed that their work could provide a promising strategy for designing multi-gas-sensitive devices with appropriate detection, suitable stability, and excellent reversibility [84]. In another study, He et al. fabricated chiral photonic films by co-assembly of CNCs and an anthracene-centered luminophore (BPP2VA), resulting in a hybrid photonic and fluorescent composite. The films displayed exceptional responsiveness, visible to the naked eye, to several volatile acid vapors, including HCl at various concentrations (1–10³ ppm), TFA (1 ppm), and HNO3 [31].

2.5 As a Hydrogel Reinforcing Agent

Hydrogels were invented and introduced by Wichterle and Lim [76] in the 1960s, and there has since been great interest in hydrogel applications in both academia and industry [1]. Synthetic hydrogels, dynamic water-swollen polymeric networks, have a soft and wet texture similar to biological soft tissue. Hydrogel-related topics have been hot spots during the past decades due to their elasticity, dynamic self-healing properties, and biocompatibility [19, 45, 46]. The majority of hydrogels are made of isotropic polymeric networks with a less ordered structure, which leads to lower mechanical properties. This can be considered the major drawback of employing hydrogels in biomedical and tissue engineering applications [8, 34]. To overcome it, researchers in this field have attempted to reinforce hydrogel mechanical properties with various methods, including double cross-linking [83], double networking [27, 47], homogeneous networks [60], micro-gel reinforcing [35], and nanocomposite hydrogels [34]. Among these, nanocomposite reinforcement is one of the most reported methods because it is more straightforward than the others. CNCs, due to their hydrophilicity, excellent mechanical performance, and sustainability, are one of the most promising nanomaterials for use as a reinforcing agent.
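The scale of such reinforcement can be illustrated with a standard short-fiber micromechanics estimate, the Halpin–Tsai model (a textbook model, not one used in this chapter), treating CNCs as stiff rods dispersed in a soft matrix:

```python
# Rough sketch using the Halpin-Tsai micromechanics model to estimate the
# stiffening effect of rigid CNC rods in a soft hydrogel matrix. This is a
# generic composite estimate; all inputs are assumed for illustration.

def halpin_tsai(e_matrix: float, e_fiber: float, vol_frac: float,
                aspect_ratio: float) -> float:
    """Longitudinal modulus estimate for short-fiber reinforcement (Pa)."""
    zeta = 2.0 * aspect_ratio              # shape factor for rod-like fillers
    ratio = e_fiber / e_matrix
    eta = (ratio - 1.0) / (ratio + zeta)
    return e_matrix * (1.0 + zeta * eta * vol_frac) / (1.0 - eta * vol_frac)


E_GEL = 10e3   # assumed soft hydrogel matrix modulus, 10 kPa
E_CNC = 140e9  # CNC axial modulus, ~140 GPa (value cited later in Sect. 2.6)

# ~1.8x stiffening at only 2 vol% filler with an aspect ratio of 20:
print(halpin_tsai(E_GEL, E_CNC, vol_frac=0.02, aspect_ratio=20.0))
```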

To date, hydrophilic CNCs have been used to develop hydrogels in many studies with different types of polymeric compounds, including alginate, starch, PVA, chitosan, gelatin, and collagen [22]. Gonzalez et al. demonstrated that CNC/PVA composite hydrogels can be produced by a freezing/thawing process. They found that CNCs acted as nucleation sites, enabling the hydrogels to form with enhanced mechanical properties [28]. Furthermore, Yang et al. studied the effect of the surface charge and aspect ratio of CNCs on the mechanical reinforcement of CNC/PAM nanocomposite hydrogels. The results demonstrated that a higher surface charge concentration improved dispersibility and enhanced stress transfer, and that large CNC aspect ratios improved mechanical reinforcement [79]. By photo-crosslinking PVA with UV irradiation and freezing/thawing the hydrogels, Bai et al. produced PVA/CNC/poly(2-hydroxyethyl methacrylate) (PVA/CNC/polyHEMA) and PVA/CNC/poly(N,N′-methylenebisacrylamide) (PVA/CNC/polyMBA) hydrogels [7]. Compared to non-crosslinked PVA/CNC hydrogels, the crosslinked hydrogels showed higher thermal stability, a lower water loss rate, better swelling and reswelling behavior, and better adsorption performance. In a similar study, Wang et al. used free-radical polymerization of acrylic acid (AA) with CNCs, with an ammonium persulphate (APS) initiator and MBA as the crosslinking agent, to prepare porous poly(acrylic acid)/CNC adsorbents with a high adsorption potential for methylene blue (MB), for removing the dye from wastewater. Based on their report, Langmuir isotherm fitting gave a maximum adsorption capacity for methylene blue of 1734.816 mg g−1 at pH 9.0 and 313 K, a value significantly higher than is usually encountered for adsorbents [75]. In another study, Thinkohkaew et al. reinforced PVA with coconut-derived CNCs and prepared a porous hydrogel by crosslinking with sodium tetraborate. Incorporating CNCs dramatically improved the hydrogel's thermal stability. Based on their report, due to the porosity of the produced hydrogel structure, the glass transition temperature was slightly lowered, while the melting temperature was significantly raised [71].
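Capacities such as the q_max value quoted above are typically obtained by fitting equilibrium adsorption data to the Langmuir isotherm q = q_max·K_L·C_e/(1 + K_L·C_e). A minimal fitting sketch on synthetic data (the functional form is standard; all data points are invented for illustration):

```python
# Sketch of a Langmuir-isotherm fit of the kind used to report a maximum
# adsorption capacity q_max. The data points are synthetic, generated for
# illustration; only the functional form is taken from the text.
import numpy as np
from scipy.optimize import curve_fit


def langmuir(c_eq, q_max, k_l):
    """Adsorbed amount (mg/g) vs. equilibrium concentration (mg/L)."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)


c_eq = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0, 400.0])  # mg/L, assumed
q_obs = langmuir(c_eq, 1735.0, 0.02)  # synthetic "measurements"

(q_max_fit, k_l_fit), _ = curve_fit(langmuir, c_eq, q_obs, p0=(1000.0, 0.01))
print(f"q_max = {q_max_fit:.0f} mg/g, K_L = {k_l_fit:.3f} L/mg")
```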

2.6 As a Drug Delivery Carrier

The field of drug delivery research has become highly interdisciplinary over the past few decades. Scientists from fields such as biomedical engineering, pharmaceutical sciences, and the life sciences investigate a plethora of topics related to their individual specializations. It has been found that mechanical properties can have a significant effect on the drug delivery behavior of microcapsules prepared with natural polysaccharides, including CNCs, starch nanocrystals, and chitin whiskers. Among these polysaccharides, CNCs have gained significant attention due to their high availability, crystallinity, and stiffness (tensile modulus ~140 GPa) [65]. CNCs have great potential as carriers in drug delivery systems because they are biocompatible, biodegradable, and non-toxic. CNCs are also readily cleared from the digestive tract and bloodstream, so they do not interfere with the functioning of human body mechanisms [69]. Their large surface area and strongly negative surface charge can lead to a higher loading capacity for drugs [37].

The use of CNCs as drug carriers has already been studied. CNCs have been successfully used as carriers for hydrophilic drugs, including doxorubicin hydrochloride [37], procaine hydrochloride, imipramine hydrochloride [3], hydroquinone [67], and tetracycline hydrochloride [77]. Due to the abundance of hydroxyl and carboxyl groups on CNC surfaces (see Sect. 1), hydrophilic drugs bind easily to them [37]. CNCs have been modified or functionalized in recent studies to increase their capabilities as drug carriers. Surfactants, oils, polymers, and small compounds with low molecular weight have been employed for CNC surface modification. The primary goal of surface modification is to alter the properties of CNCs, particularly to turn them from hydrophilic to hydrophobic. This is required because many medications, such as therapeutic compounds and anticancer treatments, are hydrophobic [78]. Akhlaghi et al. examined CNCs modified with TEMPO and chitosan for transporting hydrophilic medicines and reported that non-modified CNCs are more suitable as hydrophilic drug carriers. CNCs modified with rarasaponins, on the other hand, had superior loading capacity and release compared with pure CNCs; the rarasaponin structure has abundant hydroxyl and carboxyl groups, and these functional groups helped the hydrophilic medication bind better to the modified CNCs [3]. Putro et al. investigated the effect of CNC modification with cationic, anionic, and nonionic surfactants on hydrophobic drug delivery. The cationic surfactant bound well to the CNCs because of their negative surface charge, and it was demonstrated that cationic CNCs have a larger loading capacity (65.49 mg/g) and encapsulation efficiency (87.32%) than anionic and nonionic CNCs [55]. Various chemical modifications of CNCs aimed at improving their drug delivery performance have been considered in past years, including oxidation, sulfonation, carboxylation, esterification, silylation, cationization, and grafting [15]. However, given health and environmental concerns, the choice of modifying agents must be carefully evaluated.
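Loading figures of the kind quoted above follow from simple mass balances on a loading experiment: loading capacity (LC) is the bound drug per gram of carrier, and encapsulation efficiency (EE) is the bound fraction of the drug initially added. A sketch with hypothetical masses chosen to land near the reported values:

```python
# Sketch of how loading capacity (LC) and encapsulation efficiency (EE)
# are typically computed from a loading experiment. The definitions are
# standard; the masses below are made up to land near the values above.

def loading_metrics(drug_added_mg: float, drug_free_mg: float,
                    carrier_mg: float) -> tuple[float, float]:
    """Return (LC in mg drug per g carrier, EE in %)."""
    loaded_mg = drug_added_mg - drug_free_mg   # drug bound to the CNCs
    lc = loaded_mg / (carrier_mg / 1000.0)     # normalize to grams of carrier
    ee = 100.0 * loaded_mg / drug_added_mg
    return lc, ee


lc, ee = loading_metrics(drug_added_mg=7.5, drug_free_mg=0.95,
                         carrier_mg=100.0)
print(f"LC = {lc:.2f} mg/g, EE = {ee:.2f} %")  # LC = 65.50 mg/g, EE = 87.33 %
```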

3 Discussion and Conclusion

During the past few years, CNCs have gained increasing appreciation as emerging nanomaterials, leading to the development of CNC-based sensors, hydrogels, drug delivery carriers, and numerous other biomedical applications. These materials have many remarkable properties, including high surface area, renewability, nontoxicity, and biocompatibility. In this chapter, we briefly discussed the potential applications of CNCs in developing sensors, hydrogels, and drug delivery agents. The chiral nematic phase of CNCs, through its photonic reflection band, has attracted a lot of attention for developing strain, humidity, and chemical sensors. Because of their exceptional nanostructure, outstanding mechanical properties, and highly reactive surfaces, as well as their biodegradability and biocompatibility, CNCs have proven to be promising materials for fabricating hydrogels, either by themselves or incorporated into a hydrophilic polymer matrix.

In the biomedical field, CNC-based hydrogels have become valuable functional materials because they can be applied to a wide range of applications; however, most CNC-based hydrogel applications in the biomedical field are currently at the laboratory or fundamental level. In drug delivery, the strong physical and chemical interactions between the nanoparticles and drugs also allow control over drug release and delivery to specific locations in the body. Last but not least, the continued use of CNCs as a biocompatible, biodegradable, and sustainable material in biomedical and engineering research will likely transform the use of conventional materials in the near future.

References

1. Ahmed, E. M. (2015) 'Hydrogel: Preparation, characterization, and applications: A review', Journal of Advanced Research, 6(2), pp. 105–121.
2. Akampumuza, O. et al. (2017) 'Review of the applications of biocomposites in the automotive industry', Polymer Composites, 38(11), pp. 2553–2569.
3. Akhlaghi, S. P. et al. (2014) 'Comparative release studies of two cationic model drugs from different cellulose nanocrystal derivatives', European Journal of Pharmaceutics and Biopharmaceutics, 88(1), pp. 207–215.
4. Babaei-Ghazvini, A. et al. (2020) 'Effect of magnetic field alignment of cellulose nanocrystals in starch nanocomposites: physicochemical and mechanical properties', Carbohydrate Polymers, p. 116688.
5. Babaei-Ghazvini, A. and Acharya, B. (2021) 'Humidity-Responsive Photonic Films and Coatings Based on Tuned Cellulose Nanocrystals/Glycerol/Polyethylene Glycol', Polymers. https://doi.org/10.3390/polym13213695.
6. Babaei-Ghazvini, A., Acharya, B. and Korber, D. R. (2022) 'Multilayer photonic films based on interlocked chiral-nematic cellulose nanocrystals in starch/chitosan', Carbohydrate Polymers, 275, p. 118709. https://doi.org/10.1016/j.carbpol.2021.118709.
7. Bai, H. et al. (2018) 'Interpenetrating polymer networks in polyvinyl alcohol/cellulose nanocrystals hydrogels to develop absorbent materials', Carbohydrate Polymers, 200, pp. 468–476.
8. Balakrishnan, B. and Banerjee, R. (2011) 'Biopolymer-based hydrogels for cartilage tissue engineering', Chemical Reviews, 111(8), pp. 4453–4474.
9. Beck-Candanedo, S., Roman, M. and Gray, D. G. (2005) 'Effect of reaction conditions on the properties and behavior of wood cellulose nanocrystal suspensions', Biomacromolecules, 6(2), pp. 1048–1054.
10. Beck, S. et al. (2013) 'Controlled production of patterns in iridescent solid films of cellulose nanocrystals', Cellulose, 20(3), pp. 1401–1411.
11. Beltrame, P. L. et al. (1992) 'Structural features of native cellulose gels and films from their susceptibility to enzymic attack', Journal of Applied Polymer Science, 44(12), pp. 2095–2101.
12. Boott, C. E. et al. (2020) 'Cellulose nanocrystal elastomers with reversible visible color', Angewandte Chemie International Edition, 59(1), pp. 226–231.
13. Bucci, K., Tulio, M. and Rochman, C. M. (2020) 'What is known and unknown about the effects of plastic pollution: A meta-analysis and systematic review', Ecological Applications, 30(2), p. e02044.
14. Bumbudsanpharoke, N. et al. (2018) 'Study of humidity-responsive behavior in chiral nematic cellulose nanocrystal films for colorimetric response', Cellulose, 25(1), pp. 305–317. https://doi.org/10.1007/s10570-017-1571-8.

15. Bundjaja, V. et al. (2020) 'Aqueous sorption of tetracycline using rarasaponin-modified nanocrystalline cellulose', Journal of Molecular Liquids, 301, p. 112433.
16. Cao, Y. et al. (2020) 'Tunable Diffraction Gratings from Biosourced Lyotropic Liquid Crystals', Advanced Materials.
17. Chang, C. and Zhang, L. (2011) 'Cellulose-based hydrogels: Present status and application prospects', Carbohydrate Polymers, 84(1), pp. 40–53.
18. Chen, H. et al. (2020) 'Light- and humidity-responsive chiral nematic photonic crystal films based on cellulose nanocrystals', ACS Applied Materials & Interfaces.
19. Chen, J. et al. (2019) 'Polypyrrole-doped conductive supramolecular elastomer with stretchability, rapid self-healing, and adhesive property for flexible electronic sensors', ACS Applied Materials & Interfaces, 11(20), pp. 18720–18729.
20. Chowdhury, R. A. et al. (2018) 'Cellulose nanocrystal (CNC) coatings with controlled anisotropy as high-performance gas barrier films', ACS Applied Materials & Interfaces, 11(1), pp. 1376–1383.
21. Dai, S. et al. (2017) 'Cholesteric film of Cu(II)-doped cellulose nanocrystals for colorimetric sensing of ammonia gas', Carbohydrate Polymers, 174, pp. 531–539. https://doi.org/10.1016/j.carbpol.2017.06.098.
22. Du, H. et al. (2019) 'Cellulose nanocrystals and cellulose nanofibrils based hydrogels for biomedical applications', Carbohydrate Polymers, 209, pp. 130–144. https://doi.org/10.1016/j.carbpol.2019.01.020.
23. Dunlop, M. J., Acharya, B. and Bissessur, R. (2018) 'Isolation of nanocrystalline cellulose from tunicates', Journal of Environmental Chemical Engineering, 6(4), pp. 4408–4412.
24. Elazzouzi-Hafraoui, S. et al. (2008) 'The shape and size distribution of crystalline nanoparticles prepared by acid hydrolysis of native cellulose', Biomacromolecules, 9(1), pp. 57–65.
25. Ferreira, F. V. et al. (2018) 'How do cellulose nanocrystals affect the overall properties of biodegradable polymer nanocomposites: a comprehensive review', European Polymer Journal, 108, pp. 274–285.
26. Francisco, W. et al. (2015) 'Functionalization of multi-walled carbon nanotube and mechanical property of epoxy-based nanocomposite', Journal of Aerospace Technology and Management, 7(3), pp. 289–293.
27. Gong, J. P. et al. (2003) 'Double-network hydrogels with extremely high mechanical strength', Advanced Materials, 15(14), pp. 1155–1158.
28. Gonzalez, J. S. et al. (2014) 'Poly(vinyl alcohol)/cellulose nanowhiskers nanocomposite hydrogels for potential wound dressings', Materials Science and Engineering: C, 34, pp. 54–61. https://doi.org/10.1016/j.msec.2013.10.006.
29. Grunert, M. and Winter, W. T. (2002) 'Nanocomposites of cellulose acetate butyrate reinforced with cellulose nanocrystals', Journal of Polymers and the Environment, 10(1–2), pp. 27–30.
30. Habibi, Y., Lucia, L. A. and Rojas, O. J. (2010) 'Cellulose nanocrystals: chemistry, self-assembly, and applications', Chemical Reviews, 110(6), pp. 3479–3500.
31. He, J. et al. (2019) 'Volatile Acid Responsiveness of Chiral Nematic Luminescent Cellulose Nanocrystal/9,10-Bis((Z)-2-phenyl-2-(pyridin-2-yl)vinyl)anthracene Composite Films', ACS Sustainable Chemistry & Engineering, 7(14), pp. 12369–12375. https://doi.org/10.1021/acssuschemeng.9b01794.
32. He, Y.-D. et al. (2018) 'Biomimetic optical cellulose nanocrystal films with controllable iridescent color and environmental stimuli-responsive chromism', ACS Applied Materials & Interfaces, 10(6), pp. 5805–5811.
33. Hoffman, A. S. (2012) 'Hydrogels for biomedical applications', Advanced Drug Delivery Reviews, 64, pp. 18–23.
34. Hu, D. et al. (2020) 'Ultrahigh strength nanocomposite hydrogels designed by locking oriented tunicate cellulose nanocrystals in polymeric networks', Composites Part B: Engineering, 197, p. 108118.
35. Hu, J. et al. (2012) 'Structure optimization and mechanical model for microgel-reinforced hydrogels with high strength and toughness', Macromolecules, 45(12), pp. 5218–5228.

36. Huang, H. et al. (2019) 'Liquid-behaviors-assisted fabrication of multidimensional birefringent materials from dynamic hybrid hydrogels', ACS Nano, 13(4), pp. 3867–3874.
37. Jackson, J. K. et al. (2011) 'The use of nanocrystalline cellulose for the binding and controlled release of drugs', International Journal of Nanomedicine, 6, p. 321.
38. Kasirajan, S. and Ngouajio, M. (2012) 'Polyethylene and biodegradable mulches for agricultural applications: a review', Agronomy for Sustainable Development, 32(2), pp. 501–529.
39. Khalil, H. P. S. A., Bhat, A. H. and Yusra, A. F. I. (2012) 'Green composites from sustainable cellulose nanofibrils: A review', Carbohydrate Polymers, 87(2), pp. 963–979.
40. Khan, A. et al. (2012) 'Mechanical and barrier properties of nanocrystalline cellulose reinforced chitosan based nanocomposite films', Carbohydrate Polymers, 90(4), pp. 1601–1608. https://doi.org/10.1016/J.CARBPOL.2012.07.037.
41. Kose, O., Boott, C. E., et al. (2019a) 'Stimuli-responsive anisotropic materials based on unidirectional organization of cellulose nanocrystals in an elastomer', Macromolecules, 52(14), pp. 5317–5324.
42. Kose, O., Tran, A., et al. (2019b) 'Unwinding a spiral of cellulose nanocrystals for stimuli-responsive stretchable optics', Nature Communications, 10(1), p. 510.
43. Kosior, E. and Mitchell, J. (2020) 'Current industry position on plastic production and recycling', in Plastic Waste and Recycling. Elsevier, pp. 133–162.
44. Li, C. and Zhao, Z. K. (2007) 'Efficient acid-catalyzed hydrolysis of cellulose in ionic liquid', Advanced Synthesis & Catalysis, 349(11–12), pp. 1847–1850.
45. Li, M. et al. (2020) 'Two-pronged strategy of biomechanically active and biochemically multifunctional hydrogel wound dressing to accelerate wound closure and wound healing', Chemistry of Materials, 32(23), pp. 9937–9953.
46. Liu, X. et al. (2021) 'Recent Advances on Designs and Applications of Hydrogel Adhesives', Advanced Materials Interfaces, p. 2101038. https://doi.org/10.1002/admi.202101038.
47. Lu, H. et al. (2019) 'Modeling strategy for dynamic-modal mechanophore in double-network hydrogel composites with self-growing and tailorable mechanical strength', Composites Part B: Engineering, 179, p. 107528.
48. Meng, Y. et al. (2020) 'Fabrication of environmental humidity-responsive iridescent films with cellulose nanocrystal/polyols', Carbohydrate Polymers, p. 116281.
49. Moberg, T. et al. (2017) 'Rheological properties of nanocellulose suspensions: effects of fibril/particle dimensions and surface characteristics', Cellulose, 24(6), pp. 2499–2510.
50. Nagalakshmaiah, M. et al. (2019) 'Chapter 9—Biocomposites: Present trends and challenges for the future', in Koronis, G. and Silva (eds.), Woodhead Publishing Series in Composites Science and Engineering. Woodhead Publishing.
51. Nuruddin, M. et al. (2020) 'Influence of Free Volume Determined by Positron Annihilation Lifetime Spectroscopy (PALS) on Gas Permeability of Cellulose Nanocrystal Films', ACS Applied Materials and Interfaces, 12(21), pp. 24380–24389. https://doi.org/10.1021/acsami.0c05738.
52. Palkovits, R. et al. (2010) 'Hydrogenolysis of cellulose combining mineral acids and hydrogenation catalysts', Green Chemistry, 12(6), pp. 972–978.
53. Peng, B. L. et al. (2011) 'Chemistry and applications of nanocrystalline cellulose and its derivatives: A nanotechnology perspective', Canadian Journal of Chemical Engineering, pp. 1191–1206. https://doi.org/10.1002/cjce.20554.
54. Peng, W. and Wu, H. (2019) 'Flexible and stretchable photonic sensors based on modulation of light transmission', Advanced Optical Materials, 7(12), p. 1900329.
55. Putro, J. N. et al. (2019) 'The effect of surfactants modification on nanocrystalline cellulose for paclitaxel loading and release study', Journal of Molecular Liquids, 282, pp. 407–414.
56. Qu, D., Zheng, H., et al. (2019b) 'Chiral photonic cellulose films enabling mechano/chemo responsive selective reflection of circularly polarized light', Advanced Optical Materials, 7(7), p. 1801395.
57. Qu, D., Chu, G., et al. (2019a) 'Modulating the Structural Orientation of Nanocellulose Composites through Mechano-Stimuli', ACS Applied Materials & Interfaces, 11(43), pp. 40443–40450.

58. Revol, J.-F. (1982) 'On the cross-sectional shape of cellulose crystallites in Valonia ventricosa', Carbohydrate Polymers, 2(2), pp. 123–134.
59. Saha, N. C. (2020) 'COVID 19–its Impact on Packaging Research'. Springer.
60. Sakai, T. et al. (2008) 'Design and fabrication of a high-strength hydrogel with ideally homogeneous network structure from tetrahedron-like macromonomers', Macromolecules, 41(14), pp. 5379–5384.
61. Salimi, S. et al. (2019) 'Production of nanocellulose and its applications in drug delivery: A critical review', ACS Sustainable Chemistry & Engineering, 7(19), pp. 15800–15827.
62. Smalyukh, I. I. (2020) 'Thermal Management by Engineering the Alignment of Nanocellulose', Advanced Materials, p. 2001228.
63. Song, J. H. et al. (2009) 'Biodegradable and compostable alternatives to conventional plastics', Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1526), pp. 2127–2139.
64. Song, W. et al. (2018) 'Cellulose nanocrystal-based colored thin films for colorimetric detection of aldehyde gases', ACS Applied Materials & Interfaces, 10(12), pp. 10353–10361.
65. Šturcová, A., Davies, G. R. and Eichhorn, S. J. (2005) 'Elastic modulus and stress-transfer properties of tunicate cellulose whiskers', Biomacromolecules, 6(2), pp. 1055–1061.
66. Sui, Y. et al. (2020) 'Multi-responsive nanocomposite membranes of cellulose nanocrystals and poly (N-isopropyl acrylamide) with tunable chiral nematic structures', Carbohydrate Polymers, 232, p. 115778.
67. Taheri, A. and Mohammadi, M. (2015) 'The use of cellulose nanocrystals for potential application in topical delivery of hydroquinone', Chemical Biology & Drug Design, 86(1), pp. 102–106.
68. Tan, T. H. et al. (2019) 'A review of nanocellulose in the drug-delivery system', Materials for Biomedical Engineering, pp. 131–164.
69. Tang, L. et al. (2018) 'Design and synthesis of functionalized cellulose nanocrystals-based drug conjugates for colon-targeted drug delivery', Cellulose, 25(8), pp. 4525–4536.
70. Thakur, S. et al. (2018) 'Recent developments in recycling of polystyrene based plastics', Current Opinion in Green and Sustainable Chemistry, 13, pp. 32–38.
71. Thinkohkaew, K., Rodthongkum, N. and Ummartyotin, S. (2020) 'Coconut husk (Cocos nucifera) cellulose reinforced poly vinyl alcohol-based hydrogel composite with control-release behavior of methylene blue', Journal of Materials Research and Technology, 9(3), pp. 6602–6611.
72. Titton Dias, O. A. T. et al. (2020) 'Current state of applications of nanocellulose in flexible energy and electronic devices', Frontiers in Chemistry, 8, p. 420.
73. Tran, A., Boott, C. E. and MacLachlan, M. J. (2020) 'Understanding the Self-Assembly of Cellulose Nanocrystals—Toward Chiral Photonic Materials', Advanced Materials, p. 1905876.
74. Vanderfleet, O. M. and Cranston, E. D. (2020) 'Production routes to tailor the performance of cellulose nanocrystals', Nature Reviews Materials, pp. 1–21.
75. Wang, Y. et al. (2017) 'Cellulose-based porous adsorbents with high capacity for methylene blue adsorption from aqueous solutions', Fibers and Polymers, 18(5), pp. 891–899.
76. Wichterle, O. and Lim, D. (1960) 'Hydrophilic gels for biological use', Nature, 185(4706), pp. 117–118.
77. Wijaya, C. J. et al. (2017) 'Cellulose nanocrystals from passion fruit peels waste as antibiotic drug carrier', Carbohydrate Polymers, 175, pp. 370–376.
78. Wijaya, C. J. et al. (2020) 'Hydrophobic modification of cellulose nanocrystals from bamboo shoots using rarasaponins', ACS Omega, 5(33), pp. 20967–20975.
79. Yang, J. et al. (2014) 'Tough nanocomposite hydrogels from cellulose nanocrystals/poly (acrylamide) clusters: influence of the charge density, aspect ratio and surface coating with PEG', Cellulose, 21(1), pp. 541–551.
80. Yao, K. et al. (2017) 'Flexible and responsive chiral nematic cellulose nanocrystal/poly (ethylene glycol) composite films with uniform and tunable structural color', Advanced Materials, 29(28), p. 1701323.

81. Zaman, A. et al. (2020) 'Preparation, properties, and applications of natural cellulosic aerogels: a review', Energy and Built Environment, 1(1), pp. 60–76.
82. Zhang, Z.-L. et al. (2020) 'Chameleon-Inspired Variable Coloration Enabled by a Highly Flexible Photonic Cellulose Film', ACS Applied Materials & Interfaces.
83. Zhao, D. et al. (2016) 'High-strength and high-toughness double cross-linked cellulose hydrogels: a new strategy using sequential chemical and physical cross-linking', Advanced Functional Materials, 26(34), pp. 6279–6287.
84. Zhao, G. et al. (2020) 'Dual Response of Photonic Films with Chiral Nematic Cellulose Nanocrystals: Humidity and Formaldehyde', ACS Applied Materials & Interfaces, 12(15), pp. 17833–17844.
85. Zhong, R. and Ye, Z.-H. (2015) 'Secondary cell walls: biosynthesis, patterned deposition and transcriptional regulation', Plant and Cell Physiology, 56(2), pp. 195–214.
86. Zhou, J. and Hsieh, Y.-L. (2020) 'Nanocellulose aerogel-based porous coaxial fibers for thermal insulation', Nano Energy, 68, p. 104305.
87. Zhu, Q. et al. (2019) 'Stimuli Induced Cellulose Nanomaterials Alignment and its Emerging Applications: a Review', Carbohydrate Polymers, p. 115609.
88. Zhu, Q. et al. (2020) 'Stimuli-Responsive Cellulose Nanomaterials for Smart Applications', Carbohydrate Polymers, p. 115933.

Introduction to Veterinary Engineering Teaching Veterinary Anatomy: How Biomedical Engineering Has Changed Its Course

Tammy Muirhead

Abstract Macroscopic anatomy is an essential course in the veterinary medicine curriculum that students need to fully comprehend to become efficient and successful veterinary professionals. Anatomy has been taught with both descriptive topographical and clinically applied approaches. Historically, detailed textbooks and cadaver dissection have been the foundation for macroscopic anatomy. Over the last few decades, pedagogical resources have evolved from fresh/fixed cadavers, prosections, and plastinated specimens to technologically enhanced models and interactive programs. This evolution has been fueled by limitations of the standard cadaver resources, animal ethics, advances in technology, and students' willingness to embrace technology. There is evidence of successful incorporation of computer-based teaching programs into the veterinary anatomy curriculum. These technologically enhanced resources have been shown to provide engaging, interactive, and authentic learning experiences for students in both the medical and veterinary fields. Virtual reality (VR), augmented reality (AR), and mixed reality (MR) have also been introduced into the veterinary field at various levels to investigate their true value as teaching tools. There is promising potential for all of these modalities to enhance the learning environment for veterinary students; however, more studies are needed to determine their efficacy as teaching resources.

Keywords Anatomy · Dissection · Cadavers · Prosection · Virtual reality · Augmented reality · Mixed reality

1 Traditional Pedagogical Methods in Veterinary Anatomy

Anatomy is one of the fundamental courses in the study of veterinary medicine.

T. Muirhead () Department of Biomedical Sciences, University of Prince Edward Island, Charlottetown, Canada e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 N. J. Bressan, C. M. Creighton (eds.), An Introduction to Veterinary Medicine Engineering, https://doi.org/10.1007/978-3-031-22805-6_6


Veterinary students need to fully understand gross anatomy and be able to integrate that knowledge, no matter what discipline they choose, whether general practice or a specialization. Proper anatomical knowledge is essential to accurately diagnose and treat any species. Physical examinations can be complete and competent only when one understands the anatomical regions, boundaries, and locations of structures. Applied anatomy in the form of diagnostics, such as radiographs, sonograms, MRIs, and CT scans, furthermore requires detailed knowledge of the normal anatomical landscape to properly interpret what could be an abnormal finding. Additionally, as a veterinary professional, one has to know the anatomical differences among species at various levels, which can be very tedious [47]. Variations among mammalian species alone can be overwhelming, yet veterinarians are expected to know not just mammalian anatomy but also that of avian, reptilian, and aquatic species. Teaching anatomy has been described via two methods: descriptive or topographical, and applied or clinical anatomy. The first is the more traditional method, whereas the second focuses on dynamic anatomical functions and the relationships between anatomical structures [49]. In the descriptive method of teaching, the course content focuses on identifying structures, their proximity and relationships to other structures, and their function. In the applied approach, the student learns the important anatomical structures and their relevant facts and then applies that knowledge to relatable clinical applications, for example, using radiographs to enhance the learning of bones with their specific bony structures, or using clinical signs manifested in a disease process to explain the importance of a structure. Textbooks, fresh or preserved cadavers, plastinated specimens, and skeletons have traditionally been used to teach gross anatomy in the veterinary curriculum. These modalities build the foundational knowledge of anatomy for the two aforementioned teaching methods [16]. Textbooks with elaborate illustrations supply the topographical information to convey anatomical details. Initially, macroscopic anatomy texts had straightforward text with simple black-and-white hand-drawn labeled illustrations to complement it. The text was basically a play-by-play on where structures were located in relation to other structures, to allow for simple interpretation. This "matter of fact" type of writing, with an abundance of minor details, could be monotonous and daunting to learn. Historically, cadaver dissection has been the mainstay of investigating animal and human anatomy. The early anatomists carried out countless dissections on a large variety of fresh animals to obtain anatomical and morphological knowledge [47]. The knowledge obtained by Galen (200 A.D.) and Andreas van Wesel (1500s) via fresh animal dissection has provided a blueprint for many species. Through their hard work and dedication, they established the basic anatomical and physiological principles applied in today's medicine [12, 65]. The development of cadaver preservation in the form of formalin-fixed specimens has allowed more than one-time use of the cadaver, as well as the development of prosections. These resources have been used to teach anatomy for decades [47]. Cadaver dissection accompanied by a detailed dissecting guide has proven to be significantly beneficial to veterinary student learning and retention.

Fig. 1 Caprine prosections used for both in-person veterinary labs and digital images. (Atlantic Veterinary College, University of Prince Edward Island, Charlottetown, PE, Canada) (a) Caprine proximal pelvic limb, medial view. The biceps femoris muscle has been reflected to expose the nerves (sciatic, tibial, and common peroneal nerves). (b) Caprine neck and thorax, lateral view of the right side. Superficial muscles of the neck and extrinsic muscles of the thoracic limb

It has been shown that cadaver practical classes have a positive impact on students' perspective of learning anatomy, especially in terms of clinical skills [22]. Whether in human or veterinary medicine, dissection of cadavers has been viewed as the backbone of the rigorous basic science training in anatomy required to reinforce clinical learning [37]. Proper dissection allows a systematic exploration of the cadaver. The sequential tissue layers can be isolated and identified to expose specific structures with the removal of regional fat, connective tissue, or fascia. The purpose is to enhance the learning of gross anatomy through visualization and tactile experience [63] (Fig. 1). A prosection is another useful formalin-fixed cadaver resource: a cadaver, or a certain region of one, dissected by a professional for the purpose of demonstrating a specific dissecting technique or anatomical feature(s) of interest. A major advantage of using prosections in the anatomy curriculum is that they expose students to accurate anatomical structures without countless hours of dissection in the lab [47]. Many anatomists feel prosection is an efficient way to learn anatomy and will complement any dissections the students have to perform [62]. Plastinated specimens have also been fundamental resources for teaching anatomy. Plastination is a process, established by Dr. Gunther von Hagens decades ago, in which a fresh specimen is cleaned, expanded, positioned in a proper manner, and preserved in silicone [24]. Plastinated specimens provide students with a realistic dry study resource to complement the information they are studying.

Fig. 2 Plastinated canine torso with pelvic limbs (a) and thoracic limb (b) prepared at the Atlantic Veterinary College, University of Prince Edward Island, Charlottetown, Prince Edward Island, Canada. These specimens have structures labeled with numbers and a key for easy identification. The torso specimen has a QR (Quick Response) code to scan for anatomical structure identification

These specimens can be used outside the lab, do not require gloves or special clothing to handle, and can be used in coordination with other resources without the fear of contaminating them. Anatomists feel that plastination is the most important dry technique developed to preserve biological specimens [31] (Fig. 2). Skeletons of various species have been used to provide a visual appreciation of how particular bones articulate and their relative size, and to identify the structures on bones that are often used as landmarks. Bones are a great adjunct tool for reinforcing anatomy while teaching veterinary osteology and expanding into radiology. Bones aid in converting the 2D image of the radiograph back to the 3D bones of the patient, which is the clinical application of anatomy [15, 54].

2 Pedagogical Evolution of Veterinary Anatomy

Resources available for teaching veterinary anatomy in the last decade have evolved and been modified based on students' wants, limitations of the standard resources, ethics, and advances in technology. Trying to teach an "old world" course in a new world filled with changes and requests can be challenging; however, modification of the curriculum has successfully advanced the learning of this essential subject. A prime example of this evolution is the advancement of anatomical texts and dissection guides. Textbooks today still contain the standard anatomical information with instructional text to allow for proper dissection; however, they have changed significantly. They have moved away from basic "matter of fact" writing to a more elaborate compilation of information. The most current editions of anatomical textbooks are substantially supplemented with detailed, visually pleasing, colorful illustrations. These esthetically pleasing changes are also often accompanied by supplemental PDF copies and digital images, which is what the majority of the student population prefers today [25, 47]. Textbooks and dissection guides also include supplemental information to complement the essential anatomical facts in the text.

Information that encompasses clinical application has been incorporated into numerous books, which in turn helps solidify the importance of knowing the normal anatomy. Illustrating to students the clinical relevance of anatomical structures is an engaging method of teaching that enhances learning. These additions spark an interest in anatomical structures and can help with retention of the material. They also reinforce for the reader the importance of knowing the basic anatomy and applying it to their future patients. Some veterinary programs across North America have live animal palpation laboratories incorporated in their macroscopic anatomy curriculum. These palpation laboratories are carefully guided by the governing animal care committees, with strict policies to ensure that animals are handled properly and with animal welfare at the forefront. These labs allow students to identify specific anatomical landmarks that they would routinely look for on physical exams. The gentle palpation and manipulation of the animal can enhance the overall understanding of an anatomical structure in relation to where it is located, what structure(s) are in close proximity to it, and in what direction it can move. It establishes the general normal structures of the animal being studied. It also gives students the opportunity to see variations within a particular species or breed. Over the years, dissection of a cadaver has been the most effective resource for learning anatomy. Preserved cadavers for the purpose of dissection provide a spatial 3D realism for location, relative size, and relationships between anatomical structures. Many anatomists feel this is truly important when having to study various species of animals with significantly different gross anatomy. A few recent studies carried out on veterinary students illustrated that cadaveric dissection is among the most effective teaching methods for delivering veterinary anatomy knowledge and developing surgical skills. Students preferred to dissect over the use of prosections and other teaching tools [40, 63]. The utilization of formalin-preserved cadavers, as recently as 10 years ago, was carried out with very few limitations; today, however, this practice has encountered some limiting factors in the veterinary anatomy curriculum. The sourcing and availability of the animals has been a factor in North America. There are fewer companies and institutions processing animals for teaching purposes [40, 47, 55]. Thus, the price of cadavers has significantly increased in response to their declining numbers. In the face of rising student enrollment in veterinary schools, the curriculum had to be modified so that cadavers are reassigned among groups of students. This allows for development of dissection skills along with collaborative interactions within the laboratory setting. It also encourages the 4Rs in the study of anatomy: reduce, replace, refine, and respect, the last referring to ethical sourcing of these animals. Another advantage of this type of dissecting is that it allows all students to see variations within a particular species [47, 55]. Even though the use of formalin-fixed animal cadavers has been the mainstay of anatomy pedagogy, it has had its drawbacks, the biggest being the chemicals used in the preservation and storage of the carcasses. This has been another limiting factor given current views of society, as we have become increasingly aware of the harmful effects of handling these specimens [3]. As a result, a series of steps has been established in facilities and teaching institutions

to lessen any adverse effects. The use of neutralizing solutions, proper ventilation in laboratory settings, established tissue handling protocols, new low-formalin embalming solutions, and formaldehyde monitoring are just a few essential modifications that can result in minimal exposure of students, instructors, and staff [13]. Enforcing some of these changes may be costly and unattainable for some institutions, thus limiting the number of cadavers that can be maintained in a specific laboratory. Ethical concerns over the use of animals as dissection cadavers are a major limiting factor in animal cadaver production. Ethical aspects include where these animals are sourced, maintained or housed, euthanized, and processed. The heightened awareness surrounding animal welfare and ethics has made the use of cadavers for teaching less desirable for the public, institutions, and students [40, 55]. Even though it is well established that the dissection of cadavers is the best teaching tool in veterinary anatomy, both anatomists and students are increasingly questioning whether the benefits truly outweigh the cost of an animal's life. Consequently, there has been a shift toward running the veterinary anatomy curriculum with ethically sourced cadavers: privately owned animals that have died, animal bodies obtained from shelters or pounds, and unhealthy animals that need to be euthanized [55]. Thus, today many veterinary programs are using fewer formalin-fixed cadavers in combination with alternative resources, such as prosections, plastinated specimens, or 3D models. Studies have shown that plastinated models are an effective adjunct tool for teaching anatomy. Veterinary institutions have used this resource for years, and alongside other resources it enhances student learning [31]. There are several significant advantages to using these specimens. First, they are accurate anatomical structures that do not require countless hours of dissection [59]. Second, they can be handled outside the lab setting without the use of gloves, as they are clean, dry, odorless, and have no toxic effects [31]. Unfortunately, these specimens are not as easily accessible as they have been in the past. As previously stated, there are fewer companies and institutions producing these valuable tools. Moreover, the process of producing a plastinated specimen is tedious and requires a series of steps that take an exorbitant amount of time [24]. For this reason, these specimens are often not available when needed, or, when they are, they are not cost-effective. Lastly, with heightened humane and animal welfare issues in the public eye, there is a greater demand for ethically sourced animal body donation programs, which limits the sources available for plastination [47]. Osteology is often one of the first topics covered in the first-year veterinary anatomy course. Students are required to study the bones of many species and identify specific structures on each bone. They are also expected to understand how bones articulate with each other and know the regions where muscles attach. Students usually have both articulated skeletons and individual bones available to use as study tools. It is well known that using bones during instructional class time or in the laboratory gives students a hands-on approach that enhances cognitive learning [54].
They are often used as a team-based learning tool, where small groups of students are involved in peer teaching sessions, which has been shown to have

pedagogical merit [54]. Bones and skeletons of various species do take time to prepare properly, with a step-wise process of tissue removal, boiling, degreasing, chemical fixation, drying, and mounting [59]. Therefore, due to the turnaround time to prepare specimens, many institutions purchase them. In terms of cost, real bones and articulated skeletons can be pricey and therefore cost-prohibitive for a limited teaching budget. These two factors may limit their use in a veterinary program, and instructors may choose other teaching tools. Technological advances have been a big force in developing new anatomical teaching tools. Students are well versed in all the new technologies and expect them to be an integral part of their learning. Repositories of anatomical digital images, access to dissection videos, 3D printed models of specimens or bones, and computer-based learning programs are all commonly used in the anatomy curriculum [17]. In the last 10–15 years, as students read about normal anatomical structures in anatomy courses, they can quickly gravitate to online course material or reliable anatomical sites to find digital images or videos to help solidify what they are learning. The convenience of these resources has helped students make more efficient use of their time and encouraged quality learning [47]. When 3D printing started in the 1980s, products were very expensive with limited availability; currently, however, they are easily accessible and affordable [21, 60]. Many 3D models are produced for veterinary teaching with great accuracy and consistency. Bones tend to be the most common structure produced for teaching purposes. One study compared the 3D model bones produced from a horse to the original bones and concluded that the models were remarkably similar and accurate [1]. Other structures such as the pelvis, reproductive organs, and gastrointestinal tracts have also been printed [60]. Pedagogical research has shown that these models are very effective teaching tools to complement the curriculum [46, 47, 51]. Even though they have proven to be good teaching tools, one major downfall of the models is their durability; they are easily breakable and have a short life expectancy with repeated handling in comparison to real bones [17] (Fig. 3). Computer-based teaching programs have also been introduced into the veterinary anatomy curriculum. The majority of the programs in veterinary anatomy use two-dimensional images and videos to help instruct students. These programs, in combination with the traditional teaching tools, have proven to be effective in both learning and outcomes when compared to traditional tools alone [2]. Advanced reality technologies have been implemented in human medicine and clinical veterinary medicine; however, they have not yet been fully developed in the veterinary anatomy curriculum. There have been some trials investigating their effectiveness in veterinary anatomy programs; these have had some limitations but show promise [10, 34, 52].


Fig. 3 Examples of 3D printed models: (A) canine thoracic and pelvic limb, prepared by the Department of Engineering, University of Prince Edward Island; (B) equine pelvic girdle and reproductive tract, prepared by Dr. Robert Lofstedt (theriogenologist), Atlantic Veterinary College, University of Prince Edward Island

3 Technology-Enhanced Resources in Veterinary Medicine and Veterinary Anatomy Curriculum

The traditional veterinary anatomy curriculum is moving away from the gold standard of cadaver dissection for several reasons. Cadaver dissections are extremely time consuming in terms of cadaver preparation, purchasing, utilization, maintenance, storage, and disposal [22, 27, 33]. Normally, as part of the curriculum, students must devote countless hours to dissection to meet the objectives set out by the course [34]. Along with the time commitment, the financial burden of maintaining cadavers as teaching resources is a major concern in many institutions [22, 27]. Lastly, there are concerns about animal welfare and ethics: animal ethics regulations and animal protection laws, coupled with heightened public awareness through the media, are influencing the practice of animal dissection [50]. Institutions are more apprehensive about using cadaveric dissection, and when they do, they use reduced numbers in hopes of enforcing the 3Rs (reduce, replace, and refine) [23, 34, 42].

The progression toward technology-enhanced resources such as simulation-based medical education (SBME) is a natural path to address all the aforementioned issues. SBME is an interactive method of education in which simulation aids replicate clinical scenarios [32]. Technology-enhanced resources can provide engaging, interactive, and authentic learning experiences for students in both the medical and veterinary fields. SBME has been used in human medical institutions since the 1990s and is already playing a rapidly evolving role in clinical veterinary medicine [50].


These tools are currently being implemented for teaching clinical, surgical, and emergency skills in veterinary medicine. The goal is to simulate realistic situations and improve hand–eye coordination for skills such as instrument handling, suturing, and diagnostic testing [47]. Specifically, simulation has been used to teach rectal palpation of abdominal organs in bovids and equids with the use of a haptic device [4, 7, 14, 29].

The three main types of simulated modalities currently used in clinical medicine are virtual reality (VR), augmented reality (AR), and mixed reality (MR). All of these simulated technologies provide accurate spatial 3D visuals to the user. VR is a technology in which the user is immersed in a simulated environment by means of a head-mounted device. This device completely takes over the user’s vision to give the impression of being somewhere other than reality [45]. The main difference between AR and VR is that AR does not take over the user’s vision; it adds objects to the user’s environment. AR is a technology in which virtual information is overlaid onto the real world. The user can handle the virtual object in the real world in real time; it is an interface between virtual and real objects. AR uses mobile devices, such as the Microsoft HoloLens or smart glasses, that are transparent and enable users to see their own environment in front of them. The advantage of AR over VR is that it creates a more realistic experience with better accuracy of the structure or object in focus [9, 28, 43]. MR is a technology in which the real and virtual worlds coexist and interact in real time. This technology produces new interactive environments and visualizations. The difference between MR and AR is that the virtual objects are not just overlaid on the real world; they are anchored to it [39].

Currently, VR has been employed in both human and veterinary medical education, while AR has been utilized more in human medical education than in veterinary medical education. For example, in human medicine, AR has been used as an educational training tool for diagnostic procedures, such as endoscopy, laparoscopy, and chest tube placement. Medical schools have incorporated more AR technology into their physiology and anatomy courses to facilitate teaching and learning [41]. Outcome assessments concluded that this mode of educational delivery is exceptional for increasing spatial understanding in medical students [6]. Unfortunately, AR and MR implementation in veterinary education has been limited and has lagged behind the human medical field [10].

Several veterinary schools have implemented and investigated VR technology in the first-year veterinary curriculum. At the Virginia-Maryland College of Veterinary Medicine (VMCVM), the faculty implemented a program to enhance the learning of their veterinary students. They developed a VR anatomy program using CT scans of the canine to enhance physical examination skills using a VR dog model. The goal was to build on the students’ conceptual knowledge of anatomical structures and the relationships between them, and then apply that knowledge as the students identified physiological landmarks while examining real dogs. Feedback from the DVM students showed that the VR system, when initially implemented, was significantly effective for learning anatomical features for 40% of the students, and 83% concluded that they could transfer the anatomical landmarks from the VR dog to the real dog.


By the last laboratory, 57% of students found this learning tool effective and reported that they would be highly likely to continue studying with VR [16]. Ross University School of Veterinary Medicine (RUSVM) also incorporated a VR canine anatomy project, IVALA™, into the anatomy curriculum. The goal was to assess the relationship between this supplemental learning tool and the students’ grades. The program allowed students to identify, move, rotate, magnify, and remove individual anatomical structures from the virtual canine while providing anatomical information to the user. Statistical analysis revealed that students who engaged with the VR program scored significantly higher than those who did not supplement their studies with this technology (p = 0.003). The overall feedback from the students was that they enjoyed the experience and saw the benefits of the supplemental program in their learning. However, the students did not want it to replace the traditional dissection laboratories [34].

VR technology can also be used in the veterinary curriculum as a non-invasive learning tool to help develop a wide variety of surgical skills in DVM students. VR technology has been implemented in the third year of the veterinary curriculum to investigate whether it benefits the development of DVM students’ surgical skills. The College of Veterinary Medicine at Lincoln Memorial University investigated whether VR training improved the surgical performance of veterinary students’ first canine surgery. Some students used the VR technology in addition to the normal curriculum to prepare for their initial surgery. The results revealed that both the control group and the test group spent similar amounts of time preparing for their impending surgeries, which was determined to be the reason for the lack of a significant difference in the performance scores of the two groups [26]. The time invested in the VR technology did not improve the students’ surgical skills, but it did not hinder their performance either. This emphasizes the need for more research in this specific area of reality technology.

4 Augmented and Mixed-Reality Technology in Veterinary Anatomy Curriculum to Enhance Learning Outcomes

It is well established that the veterinary and human anatomical education curriculum has shifted from the traditionally long hours in the dissecting laboratory to decreased credit hours and fewer available cadaver resources [5, 19, 25, 33]. In human medicine in particular, it is reported that these factors have left students lacking in anatomical knowledge, ultimately preparing them insufficiently for their profession [35, 58]. Fortunately, computer-assisted learning has proven valuable in anatomic education to help counteract this [11, 30]. Students today are well versed in computer-assisted learning; thus, the move to technologically advanced reality tools for learning anatomy would be a natural transition [38, 64].


AR’s ability to allow virtual objects to coexist in the real world can provide a realistic resource to enhance veterinary anatomical comprehension and knowledge [34, 48, 53]. The experience offered by the HoloLens and the transparent smart glasses currently on the market has the potential to enhance the learning environment in many aspects of the veterinary profession. AR has progressively become more accessible and less expensive, which has directly led to its use in medical education [35]. The ability of AR to increase intuitive spatial learning is a desirable feature when studying anatomy: it has been shown that there is a direct correlation between a student’s spatial ability and increased proficiency in learning anatomy [18, 57]. As previously stated, there is evidence that AR benefits veterinary clinical teaching; there are, however, limited data on implementing and validating it for the instruction of veterinary anatomy [35].

Computer-based technology such as AR can be an excellent supplemental teaching tool to enhance the learning of veterinary anatomy. Incorporating AR technology into the veterinary anatomy curriculum can be accomplished successfully, even though several attempts with various species have fallen short. Some educators tried isolated cases of the horse limb, the rat brain, and the frog; the isolated nature of these cases, along with a lack of sufficient anatomical detail, limited their use in the veterinary curriculum [10, 36, 56]. Another attempt used a 3D database called the Visible Animal Project, which focused on the viscera of the canine trunk. That project was also incomplete and lacked detail on the cranial anatomy of the dog [8]. A lack of anatomical detail was the main determining factor in the poor outcomes of these projects. The development of AR technology with attention to specific, detailed anatomical structures would be an extensive project that could substantially benefit the veterinary curriculum.

At Ross University School of Veterinary Medicine, a pilot project introducing an IVALA AR heart program was initiated with pre-veterinary students to enhance the complex learning of the canine heart. Students were placed in two cohorts: a traditional learning group and an AR learning group. Pre- and post-learning evaluations showed that all students’ assessments of canine cardiology improved regardless of the teaching method. The results also revealed a positive correlation between spatial awareness scores and post-learning testing (p = 0.03). An overwhelming 84% of the students preferred learning with the AR technology over traditional teaching [35]. Another AR simulator study, by Lee et al. in 2013, used the technology to teach veterinary students intravenous (IV) injections. The study was successful: students taught with the simulator were more proficient with IV injections than the control group trained on real dogs (p ≤ 0.01) [32]. Christ et al. in 2018 developed an AR veterinary teaching tool of canine head anatomy that could be used on a mobile device. They focused on the development process, the challenges encountered, the resolutions to those challenges, and the tool’s potential in veterinary medicine. They illustrated that AR technology can be developed and implemented in professional programs [10].
These findings illustrate not only that there is a desire among veterinary students to use AR technology, but also that AR can play a potentially greater role in the enhanced teaching and learning of both the clinical and didactic veterinary curriculum.


There is limited information on the advantages and value of using MR technology in the study of veterinary medicine. This technology interfaces the real and virtual worlds, which interact in real time. A 2020 review article examining the use of MR and its applications in human medicine concluded that the benefits of this technology outweigh those of traditional teaching in various disciplines. It also concluded that there were several challenges and much room for further investigation of this technology [20]. One study in human maxillofacial surgery used MR as a method to improve the accuracy of surgeons when resecting maxillofacial tumors, which are rare. Even though there were some intrinsic limitations and further investigation is needed, this technology proved to be a beneficial teaching tool, improving the surgeons’ accuracy to within millimeters [44]. In veterinary medicine, the use of MR to perform an accurate femoral nerve block was investigated to help improve the recovery time of a canine patient after pelvic limb surgery. This case study used a Microsoft HoloLens focused on a 3D-generated canine cadaver leg to perform the anesthetic nerve block. The case study was promising for the development of the technology, based on the results of the anesthesiologist’s repeated performance of the peripheral nerve block [61]. Further development and research are needed to truly warrant the use of MR technology as a supplemental teaching tool in veterinary pedagogy.

References 1. de Alcântara Leite dos Reis D, Gouveia BLR, Júnior JCR, de Assis Neto AC (2019) Comparative assessment of anatomical details of thoracic limb bones of a horse to that of models produced via scanning and 3D printing. 3D Printing in Medicine 5:13. https://doi.org/ 10.1186/s41205-019-0050-2 2. Al-Khalili S, Coppoc G (2014) 2D and 3D Stereoscopic Videos Used as Pre-Anatomy Lab Tools Improve Students’ Examination Performance in a Veterinary Gross Anatomy Course. Journal of veterinary medical education 41:1–9. https://doi.org/10.3138/jvme.0613-082R 3. Alnagar F, Shmela M, Alrtib A, Benashour F, Buker A, Abdalmula A (2018) Health adverse effects of formaldehyde exposure to students and staff in gross anatomy. International Journal of Scientific Research and Management 6. https://doi.org/10.18535/ijsrm/v6i2.mp02 4. Baillie S, Mellor D, Brewster S, Reid S (2005) Integrating a Bovine Rectal Palpation Simulator into an Undergraduate Veterinary Curriculum. Journal of veterinary medical education 32:79– 85. https://doi.org/10.3138/jvme.32.1.79 5. Bergman EM, Van Der Vleuten CPM, Scherpbier AJJA (2011) Why don’t they know enough about anatomy? A narrative review. Medical Teacher 33:403–409. https://doi.org/10.3109/ 0142159X.2010.536276 6. Bork F, Stratmann L, Enssle S, Eck U, Navab N, Waschke J, Kugelmann D (2019) The Benefits of an Augmented Reality Magic Mirror System for Integrated Radiology Teaching in Gross Anatomy. Anat Sci Educ 12:585–598. https://doi.org/10.1002/ase.1864 7. Bossaert P, Leterme L, Caluwaerts T, Cools S, Hostens M, Kolkman I, de Kruif A (2009) Teaching Transrectal Palpation of the Internal Genital Organs in Cattle. Journal of Veterinary Medical Education 36:451–460. https://doi.org/10.3138/jvme.36.4.451


8. Böttcher P, Maierl J, Schiemann T, Glaser C, Weller R, Hoehne KH, Reiser M, Liebich HG (1999) The visible animal project: a three-dimensional, digital database for high quality threedimensional reconstructions. Vet Radiol Ultrasound 40:611–616. https://doi.org/10.1111/ j.1740-8261.1999.tb00887.x 9. Chien C-H, Chen C-H, Jeng T-S (2010) An Interactive Augmented Reality System for Learning Anatomy Structure. Hong Kong 6 10. Christ R, Guevar J, Poyade M, Rea PM (2018) Proof of concept of a workflow methodology for the creation of basic canine head anatomy veterinary education tool using augmented reality. PLOS ONE 13:e0195866. https://doi.org/10.1371/journal.pone.0195866 11. Codd AM, Choudhury B (2011) Virtual reality anatomy: Is it comparable with traditional methods in the teaching of human forearm musculoskeletal anatomy? Anatomical Sciences Education 4:119–125. https://doi.org/10.1002/ase.214 12. Conner A (2017) Galen’s Analogy: Animal Experimentation and Anatomy in the Second Century C.E. Anthos 8. https://doi.org/10.15760/anthos.2017.118 13. Coskey A, Gest TR (2015) Effectiveness of various methods of formaldehyde neutralization using monoethanolamine. Clinical Anatomy 28:449–454. https://doi.org/10.1002/ca.22534 14. Crossan A, Brewster S, Reid S, Mellor D (2001) A horse ovary palpation simulator for veterinary training. In: Brewster S, Murray-Smith R (eds) Haptic Human-Computer Interaction. Springer, Berlin, Heidelberg, pp 157–164 15. Croy BA, Dobson H (2003) Radiology as a Tool for Teaching Veterinary Anatomy. Journal of Veterinary Medical Education 30:264–269. https://doi.org/10.3138/jvme.30.3.264 16. DeBose K (2020) Virtual Anatomy: expanding veterinary student learning. Journal of the Medical Library Association 108:647–648 17. Estai M, Bunt S (2016) Best teaching practices in anatomy education: A critical review. Annals of Anatomy - Anatomischer Anzeiger 208:151–157. https://doi.org/10.1016/ j.aanat.2016.02.010 18. Fernandez R, Dror IE, Smith C (2011) Spatial abilities of expert clinical anatomists: Comparison of abilities between novices, intermediates, and experts in anatomy. Anatomical Sciences Education 4:1–8. https://doi.org/10.1002/ase.196 19. Fitzgerald J E. F., White M J., Tang S W., Maxwell-Armstrong C A., James D K. (2008) Are we teaching sufficient anatomy at medical school? The opinions of newly qualified doctors. Clinical Anatomy 21:718–724. https://doi.org/10.1002/ca.20662 20. Gerup J, Soerensen CB, Dieckmann P (2020) Augmented reality and mixed reality for healthcare education beyond surgery: an integrative review. Int J Med Educ 11:1–18. https:/ /doi.org/10.5116/ijme.5e01.eb1a 21. Griffey J (2014) Chapter 2: The Types of 3D Printing. Library Technology Reports 50:8–12 22. Gummery E, Cobb KA, Mossop LH, Cobb MA (2017) Student Perceptions of Veterinary Anatomy Practical Classes: A Longitudinal Study. J Vet Med Educ 1–14. https://doi.org/ 10.3138/jvme.0816-132r 23. Hart LA, Wood MW, Weng H-Y (2005) Mainstreaming Alternatives in Veterinary Medical Education: Resource Development and Curricular Reform. Journal of Veterinary Medical Education 32:473–480. https://doi.org/10.3138/jvme.32.4.473 24. Henry RW (2005) Using Plastinated Specimens in Teaching Veterinary Anatomy. Anatomia, Histologia, Embryologia 34:17–17. https://doi.org/10.1111/j.1439-0264.2005.00669_38.x 25. Heylings DJA (2002) Anatomy 1999–2000: the curriculum, who teaches it and how? Medical Education 36:702–710. https://doi.org/10.1046/j.1365-2923.2002.01272.x 26. 
Hunt J, Heydenburg M, Stacy A, Thompson R (2020) Does virtual reality training improve veterinary students’ first canine surgical performance? Veterinary Record 186:vetrec–2019. https://doi.org/10.1136/vr.105749 27. Jones DG (1997) Reassessing the importance of dissection: a critique and elaboration. Clin Anat 10:123–127. https://doi.org/10.1002/(SICI)1098-2353(1997)10:23.0.CO;2-W 28. Kamphuis C, Barsom E, Schijven M, Christoph N (2014) Augmented reality in medical education? Perspect Med Educ 3:300–311. https://doi.org/10.1007/s40037-013-0107-7


29. Kinnison T, Forrest ND, Frean SP, Baillie S (2009) Teaching bovine abdominal anatomy: Use of a haptic simulator. Anatomical Sciences Education 2:280–285. https://doi.org/10.1002/ ase.109 30. Küçük S, Kapakin S, Gökta¸s Y (2016) Learning anatomy via mobile augmented reality: Effects on achievement and cognitive load. Anatomical Sciences Education 9:411–421. https://doi.org/ 10.1002/ase.1603 31. Latorre RM, García-Sanz MP, Moreno M, Hernández F, Gil F, López O, Ayala MD, Ramírez G, Vázquez JM, Arencibia A, Henry RW (2017) How Useful Is Plastination in Learning Anatomy? Journal of Veterinary Medical Education 34:172–176 32. Lee S, Lee J, Lee A, Park N, Lee S, Song S, Seo A, Lee H, Kim J-I, Eom K (2013) Augmented reality intravenous injection simulator based 3D medical imaging for veterinary medicine. The Veterinary Journal 196:197–202. https://doi.org/10.1016/j.tvjl.2012.09.015 33. Leveritt S, McKnight G, Edwards K, Pratten M, Merrick D (2016) What anatomy is clinically useful and when should we be teaching it? Anatomical Sciences Education 9:468–475. https:/ /doi.org/10.1002/ase.1596 34. Little WB, Artemiou E, Conan A, Sparks C (2018) Computer Assisted Learning: Assessment of the Veterinary Virtual Anatomy Education Software IVALATM . Veterinary Sciences 5:58. https://doi.org/10.3390/vetsci5020058 35. Little WB, Dezdrobitu C, Conan A, Artemiou E (2021) Is Augmented Reality the New Way for Teaching and Learning Veterinary Cardiac Anatomy? MedSciEduc 31:723–732. https:// doi.org/10.1007/s40670-021-01260-8 36. Martinelli MJ, Kuriashkin IV, Carragher BO, Clarkson RB, Baker GJ (1997) Magnetic resonance imaging of the equine metacarpophalangeal joint: three-dimensional reconstruction and anatomic analysis. Vet Radiol Ultrasound 38:193–199. https://doi.org/10.1111/j.17408261.1997.tb00840.x 37. McLachlan JC, Bligh J, Bradley P, Searle J (2004) Teaching anatomy without cadavers. Med Educ 38:418–424. https://doi.org/10.1046/j.1365-2923.2004.01795.x 38. McNulty JA, Sonntag B, Sinacore JM (2009) Evaluation of computer-aided instruction in a gross anatomy course: A six-year study. Anatomical Sciences Education 2:2–8. https://doi.org/ 10.1002/ase.66 39. Milgram P, Takemura H, Utsumi A, Kishino F (1995) Augmented reality: a class of displays on the reality-virtuality continuum. In: Telemanipulator and Telepresence Technologies. SPIE, pp 282–292 40. Mohamed R (2020) Attitude of Veterinary Students to Cadaveric Dissection in Teaching and Learning Veterinary Anatomy in the Caribbean. International Research in Education 8:139. https://doi.org/10.5296/ire.v8i1.16761 41. Moro C, Štromberga Z, Raikos A, Stirling A (2017) The effectiveness of virtual and augmented reality in health sciences and medical anatomy. Anatomical Sciences Education 10:549–559. https://doi.org/10.1002/ase.1696 42. Özkadif S, Eken E (2012) Modernization process in veterinary anatomy education. Energy Education Science and Technology Part B: Social and Educational Studies 4:957–962 43. Parsons D, MacCallum K (2021)
Current Perspectives on Augmented Reality in Medical Education: Applications, Affordances and Limitations. AMEP 12:77–91. https://doi.org/10.2147/AMEP.S249891 44. Pepe A, Trotta GF, Gsaxner C, Wallner J, Egger J, Schmalstieg D, Bevilacqua V (2018) Pattern Recognition and Mixed Reality for Computer-Aided Maxillofacial Surgery and Oncological Assessment. In: 2018 11th Biomedical Engineering International Conference (BMEiCON). pp 1–5 45. Pottle J (2019) Virtual reality and the transformation of medical education. Future Healthc J 6:181–185. https://doi.org/10.7861/fhj.2019-0036 46. Preece D, Williams SB, Lam R, Weller R (2013) “Let’s get physical”: advantages of a physical model over 3D computer models and textbooks in learning imaging anatomy. Anat Sci Educ 6:216–224. https://doi.org/10.1002/ase.1345 47. Rea PM (2020) Biomedical Visualisation: Volume 8. Springer International Publishing


48. Sajid AW, Ewy GA, Felner JM, Gessner I, Gordon MS, Mayer JW, Shub C, Waugh RA (1990) Cardiology patient simulator and computer-assisted instruction technologies in bedside teaching. Medical Education 24:512–517. https://doi.org/10.1111/j.1365-2923.1990.tb02667.x 49. Salazar I (2002) Coming changes in veterinary anatomy: what is or should be expected? J Vet Med Educ 29:126–130. https://doi.org/10.3138/jvme.29.3.126 50. Scalese RJ, Issenberg SB (2005) Effective Use of Simulations for the Teaching and Acquisition of Veterinary Professional and Clinical Skills. Journal of Veterinary Medical Education 32:461–467. https://doi.org/10.3138/jvme.32.4.461 51. Schoenfeld-Tacher RM, Horn TJ, Scheviak TA, Royal KD, Hudson LC (2017) Evaluation of 3D Additively Manufactured Canine Brain Models for Teaching Veterinary Neuroanatomy. J Vet Med Educ 44:612–619. https://doi.org/10.3138/jvme.0416-080R 52. da Silveira EE, da Silva Lisboa Neto AF, Carlos Sabino Pereira H, Ferreira JS, Dos Santos AC, Siviero F, da Fonseca R, de Assis Neto AC (2020) Canine Skull Digitalization and Three-Dimensional Printing as an Educational Tool for Anatomical Study. J Vet Med Educ e20190132. https://doi.org/10.3138/jvme-2019-0132 53. Stanford W, Erkonen WE, Cassell MD, Moran BD, Easley G, Carris RL, Albanese MA (1994) Evaluation of a computer-based program for teaching cardiac anatomy. Invest Radiol 29:248– 252. https://doi.org/10.1097/00004424-199402000-00022 54. Tefera M (2011) Enhancing cognitive learning in Veterinary Osteology through student participation in skeleton preparation project. Ethiopian Veterinary Journal 15. https://doi.org/ 10.4314/evj.v15i1.67688 55. Tiplady C, Lloyd S, Morton J (2011) Veterinary science student preferences for the source of dog cadavers used in anatomy teaching. Altern Lab Anim 39:461–469. https://doi.org/10.1177/ 026119291103900507 56. Toga AW, Santori EM, Hazani R, Ambach K (1995) A 3D digital map of rat brain. Brain Res Bull 38:77–85. https://doi.org/10.1016/0361-9230(95)00074-o 57. Vorstenbosch MATM, Klaassen TPFM, Donders ART (Rogier), Kooloos JGM, Bolhuis SM, Laan RFJM (2013) Learning anatomy enhances spatial ability. Anatomical Sciences Education 6:257–262. https://doi.org/10.1002/ase.1346 58. Waterston S W., Stewart I J. (2005) Survey of clinicians’ attitudes to the anatomical teaching and knowledge of medical students. Clinical Anatomy 18:380–384. https://doi.org/10.1002/ ca.20101 59. Weiglein AH (1997) Plastination in the Neurosciences. CTO 158:6–9. https://doi.org/10.1159/ 000147902 60. Wilhite R, Wölfel I (2019) 3D Printing for veterinary anatomy: An overview. Anatomia, Histologia, Embryologia 48:609–620. https://doi.org/10.1111/ahe.12502 61. Wilkie N, McSorley G, Creighton C, Sanderson D, Muirhead T, Bressan N (2020) Mixed Reality for Veterinary Medicine: Case Study of a Canine Femoral Nerve Block. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine Biology Society (EMBC). pp 6074–6077 62. Williams SR, Thompson KL, Notebaert AJ, Sinning AR (2019) Prosection or Dissection: Which is Best for Teaching the Anatomy of the Hand and Foot? Anatomical Sciences Education 12:173–180. https://doi.org/10.1002/ase.1808 63. Winkelmann A (2007) Anatomical dissection as a teaching method in medical school: a review of the evidence. Med Educ 41:15–22. https://doi.org/10.1111/j.1365-2929.2006.02625.x 64. Yeung JC, Fung K, Wilson TD (2012) Prospective Evaluation of a Web-Based ThreeDimensional Cranial Nerve Simulation. Journal of Otolaryngology 41:13 65. 
Zampieri F, ElMaghawry M, Zanatta A, Thiene G (2015) Andreas Vesalius: Celebrating 500 years of dissecting nature. Glob Cardiol Sci Pract 2015:66. https://doi.org/10.5339/ gcsp.2015.66

Designing Immersive Environments for Extreme Environments

Tobias Cibis, Eagan Boire, and Nadja Johnson Bressan

Abstract Immersive media provide the opportunity for anyone to immerse themselves in an environment which may be entirely virtual or a representation of a real-world environment. The simulation of underwater environments enjoys special attention in the research world, with applications in the tourism sector, in archaeological training, and for orientation and navigation purposes in actual diving scenarios. Underwater, the human senses are distorted due to changes in the prevailing physical quantities in water. The greater density of the medium water, compared with the atmosphere, has negative consequences for the tactile, visual and auditory perception of humans. For a realistic virtual representation of an underwater world, or for the use of augmentation during submersion, a deep understanding of these physical conditions and their subsequent effects on the human sensory systems is required. This work identifies the predominant underwater physical quantities, determines the distortion effects on the human body and highlights technical solutions for image processing in augmented and virtual realities to accurately adapt the underwater world for immersive media.

Keywords Immersive environment · Augmented reality · Underwater · Simulation

T. Cibis (✉)
Joint Research Centre for AI in Health and Wellness, University of Technology Sydney, Sydney, NSW, Australia
Ontario Tech University, Oshawa, ON, Canada
e-mail: [email protected]
E. Boire · N. J. Bressan
Faculty of Sustainable Design Engineering, University of Prince Edward Island, Charlottetown, PE, Canada
e-mail: [email protected]; [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
N. J. Bressan, C. M. Creighton (eds.), An Introduction to Veterinary Medicine Engineering, https://doi.org/10.1007/978-3-031-22805-6_7



1 Introduction

The world of immersive realities is gaining more attention and is finding new, innovative applications. Immersive realities can be divided into augmented reality (AR), mixed reality (MR) and virtual reality (VR). Application areas range from entertainment purposes such as games and design, through subject exposure training, to medical applications [1]. One key element of these applications is the creation of realistic representations of real-life objects, virtual objects or a combination of both inside the immersive environment. The perception of these objects is based on the degree of reality that can be conveyed through the different extended realities.

The generation of virtual objects or the recognition of real-life objects is always dependent on the environment. However, considering different environments, the recognition and creation of objects may prove difficult given the physical quantities prevailing in an environment. These physical quantities can have a direct or indirect impact on the human sensory systems, thereby complicating the installation of an immersive environment.

The underwater environment differs from atmospheric conditions because of the much higher density of water compared to air. The increased density of water has a direct impact on the human body, distorting the sensory systems so that the perception of physical stimuli differs significantly from that in atmospheric conditions. This can lead to a misinterpretation of these stimuli and a mis-perception of the environment [7]. Furthermore, the underwater environment alters the transmission of tactile, visual and auditory stimuli, amplifying the mis-perception of the environment. Thus, altered perception of the environment may challenge the navigation and orientation of a subject exposed to that environment or, in the worst case, pose a serious safety hazard [7].

This work covers three topic areas: the basics of underwater physics, the physiology of perception during submersion, and technological means in augmented reality (AR) and virtual reality (VR) to process and create realistic underwater scenarios in those realities. The investigation of human perception focuses on tactile, visual and auditory sensory system functionality and the distortion that arises when being exposed to the medium water. The identified distortions are used to identify technological means, such as sound and image processing, that address the fundamental physical quantities prevailing in underwater conditions [3]. Technological approaches that are employed to deal with these distortions are reviewed. The focus is on technologies that either aim to counteract these distortions to provide a support system for subjects exposed underwater, or technologies that try to implement the distortions to provide a realistic representation of the underwater world for immersive realities.


1.1 Related Work

The implementation of realistic underwater environments has only been addressed in a few research studies, and these were limited to tourism and archaeological purposes for underwater exposure. To date, the dominant areas for the creation of virtual and augmented underwater worlds are the tourism and archaeological sectors [2]. Nomikou et al. [2] presented a framework for exploring the underwater world and its sunken treasures, such as submerged cities, shipwrecks and sunken harbours, through augmented and virtual environments. Bruno et al. [15] developed a virtual reality for diving exploration of underwater cultural heritage sites. They focused on a realistic representation of the geological structures of the environment from the point of view of scuba divers. Navigation through the environment was implemented in accordance with the kinetics of a moving diver. Other examples, such as the use of tablets for augmented realities underwater [14] or full virtual reality settings to mimic the diving experience [26], are presented in detail later in this chapter.

2 Underwater Physics

The primary difference between the medium water and the medium air is density: water is much denser than air under atmospheric conditions. This has implications for the transmission of the physical quantities which stimulate the human sensory systems. In the following, the main alterations of physical quantities in underwater settings are investigated.

2.1 Buoyancy

The high density of water has a counteracting effect on gravity and thus provides the perception of weightlessness. This effect is called buoyancy. It is described by Archimedes’ principle: an object wholly or partly immersed in water is buoyed up by a force equal to the weight of the water displaced by the object. This buoyant force is the product of the displaced water volume, the water density and the gravitational acceleration. The effects of buoyancy have an impact on the tactile perception of a subject. This tactile sensation is difficult to augment and is thus not considered further in the following; it is still worth mentioning, however, as buoyancy effects have a direct or indirect impact on other human sensory systems.
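As a quick numerical illustration of Archimedes’ principle, the sketch below computes the buoyant force and the resulting net vertical force on a submerged object. The diver mass and displaced volume are assumed illustrative values, not data from this chapter:

```python
import math

# Net force on a submerged object via Archimedes' principle.
RHO_WATER = 1000.0   # density of fresh water, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2

def buoyant_force(volume_m3: float, rho_fluid: float = RHO_WATER) -> float:
    """Weight of the displaced fluid: F_b = rho * V * g (newtons)."""
    return rho_fluid * volume_m3 * G

def net_vertical_force(mass_kg: float, volume_m3: float) -> float:
    """Positive result -> the object is positively buoyant (floats upward)."""
    return buoyant_force(volume_m3) - mass_kg * G

# An assumed 80 kg diver displacing 0.082 m^3 of water:
print(net_vertical_force(80.0, 0.082))  # ~ +19.6 N, slightly positively buoyant
```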


2.2 Visual

The refractive index of water, about 1.33 times that of air, distorts underwater vision. The refracted image perceived underwater causes an enlargement to about 4/3 of the physical size and a reduction of the image distance to about 3/4 of the physical distance.

Light is absorbed by water. Absorption is thereby associated with a loss of energy of the light to the surrounding medium. The energy of light can be expressed as the energy of a photon by its wavelength $\lambda$, Planck’s constant $h$ and the speed of light $c$:

$$E = \frac{h \cdot c}{\lambda} \qquad (1)$$

When transmitted through water, the photon loses energy in accordance with the Lambert–Beer law, given the attenuation coefficient of water $\mu_\lambda$:

$$E_\lambda = \mu_\lambda \cdot c \cdot d \qquad (2)$$

This means that light with a shorter wavelength, and thus higher energy, is capable of penetrating water to greater depths. Thus, the long-wavelength light of the colour red is the first to be absorbed. At 10 msw only 40% of the light is transmitted, and below 15 msw everything has a greenish appearance (Fig. 1).

Increasing water depth not only affects the perceivable colour of light. Within the first 10 m of depth, water absorbs more than 50% of the visible light energy (Fig. 2). The decay in light intensity can be described as

$$I = I_0 \cdot \exp(-\mu x) \qquad (3)$$

where $I_0$ denotes the initial light intensity, $x$ is the progression in depth and $\mu$ is the attenuation coefficient of water. There are further environmental factors that contribute to the distortion of underwater vision. Water turbidity may also affect the transmission of light.
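Equation (3) is easy to explore numerically. The sketch below reproduces the qualitative behaviour described above; the per-colour attenuation coefficients are rough illustrative assumptions, since real values vary with water type and turbidity:

```python
import math

# Exponential decay of light intensity with depth, Eq. (3): I = I0 * exp(-mu * x).
MU = {"red": 0.35, "green": 0.07, "blue": 0.05}  # assumed coefficients, 1/m

def transmitted_fraction(depth_m: float, mu: float) -> float:
    return math.exp(-mu * depth_m)

for colour, mu in MU.items():
    print(f"{colour:>5}: {transmitted_fraction(10.0, mu):.0%} of surface light at 10 m")
# red retains ~3%, green ~50%, blue ~61% -- consistent with the greenish/bluish
# appearance of deeper water described above.
```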

2.3 Auditory

Sound propagates in the form of waves that are created by vibrations. These vibrations create areas of more or less densely packed particles. Hence, sound is always bound to a medium to be able to travel. The medium can be air, liquid or solid. In a vacuum, sound waves are unable to propagate.


[Fig. 1 chart: “Light Penetration in Lake Superior (Open Water, Clear Day)”. Horizontal axis: wavelength in nanometres from UV/400 nm (purple) through 500 nm (blue/green) and 600 nm (yellow) to 700 nm (red); vertical axis: depth in metres, with ticks at 50 m and 100 m.]
Fig. 1 The penetration of light into water can be distinguished by the different wavelengths of individual colours. Longer wavelengths, such as for the colour red, are absorbed in the shallow regions of the water, while shorter wavelengths can penetrate to greater depths [4]

Sound waves travel faster in a dense medium, and they do not travel in a vacuum. Since water has a greater density than air, sound waves travel much faster underwater. For example, in fresh water at a temperature of approximately 25 °C, sound travels about 4.3 times faster than it does in atmospheric conditions. Common values for the travelling speed of sound waves are 343 m/s in air and 1482 m/s underwater.

Sound travelling through air loses its loudness the further it travels away from its source; in air, the waves’ energy is quickly lost. Underwater, however, sound keeps its energy for longer due to the more densely packed particles, which carry the sound waves better. This also means that underwater sound waves reach a submersed diver at a greater pace and keep their intensity longer. Hence, localization of sound underwater is difficult, because there is no difference between air and bone conduction in either ear (the weight of the water against the tympanic membrane makes it lose its elasticity). In addition, each ear is stimulated at the same time [7].
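The effect of the higher sound speed on localization can be illustrated with the interaural time difference (ITD), the timing cue the brain uses to locate a sound source. A minimal sketch, assuming a typical ear-to-ear distance of 0.18 m:

```python
import math

# ITD for a sound source theta degrees off-axis: ITD = d * sin(theta) / c.
HEAD_WIDTH_M = 0.18   # assumed typical ear-to-ear distance
C_AIR = 343.0         # speed of sound in air, m/s
C_WATER = 1482.0      # speed of sound in fresh water, m/s

def itd_seconds(c: float, theta_deg: float = 90.0) -> float:
    return HEAD_WIDTH_M * math.sin(math.radians(theta_deg)) / c

print(f"air:   {itd_seconds(C_AIR) * 1e6:.0f} us")    # ~525 us
print(f"water: {itd_seconds(C_WATER) * 1e6:.0f} us")  # ~121 us, roughly 4.3x smaller,
# shrinking the timing cue available for localizing the source.
```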


Fig. 2 The intensity of light rapidly decreases with depth [4]

3 Perception Physiology

3.1 Visual

Electromagnetic radiation in the wavelength range of 400–750 nm provides an adequate stimulus for the photoreceptors on the retina and is perceived as light [5]. The shortest visible wavelengths are perceived as blue–violet sensations, while the longest lead to red sensations. The distinction of objects is based on the perception of light–dark contrasts (e.g. varying luminance) and contrasts in colour (e.g. different wavelengths), which are caused by different emissions and reflectance of light from the object. Different objects absorb and reflect light of different wavelengths at varying intensity. The difference in luminance of neighbouring objects is used to determine their physical contrast. The human ability to distinguish colour contrasts is best in the range of green sensations (leaf colours). This characteristic may prove important in underwater settings, as with increasing water depth only the shorter wavelengths of green and blue light are transmitted.

The depiction of images in the human eye occurs according to the laws of optical physics. In air, the refractive power of a lens corresponds to the reciprocal of its focal length f and is expressed in diopters (dpt):

[Fig. 3 schematic, labelled in German in the original: ciliary muscle (Ziliarmuskel), posterior and anterior chambers of the eye, cornea (Hornhaut), iris, vitreous body (Glaskörper), Schlemm’s canal, zonular fibres (Zonulafasern), retina, choroid (Chorioidea), sclera (Sklera), fovea and optic nerve papilla; optical quantities: eye length 24.4 mm, image distance b = 17 mm, object distance g = 570 mm, object size G and image size B.]

Fig. 3 Anatomy of the eye and optical depiction of objects [5]

$$D = \frac{1}{f}\ \text{[dpt]} \qquad (4)$$

A healthy eye has a total refractive power of 58.8 dpt. The size of images on the retina is determined using the ray theorem: the image size is given by the relation of the object’s size $G$ and object distance $g$ to the image size $B$ and image distance $b$ on the retina [5] (Fig. 3):

$$\frac{B}{b} = \frac{G}{g} \qquad (5)$$

An upright object of size 10 mm at a distance of 570 mm results in an image size of 0.3 mm on the retina:

$$B = \frac{b}{g} \cdot G = \frac{17}{570} \cdot 10 \approx 0.3\ \text{mm} \qquad (6)$$

The viewing angle $\alpha$ under which the object is seen can be calculated from the relation $G/g$ using the tangent:

$$\tan(\alpha) = \frac{G}{g} \qquad (7)$$
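The worked example of Eqs. (5)–(7) can be reproduced with a few lines of code (a sketch using the b = 17 mm image distance from the example above):

```python
import math

def retinal_image_size_mm(object_size_mm: float, object_distance_mm: float,
                          image_distance_mm: float = 17.0) -> float:
    """Eq. (6): B = (b / g) * G."""
    return image_distance_mm / object_distance_mm * object_size_mm

def viewing_angle_deg(object_size_mm: float, object_distance_mm: float) -> float:
    """Eq. (7): tan(alpha) = G / g."""
    return math.degrees(math.atan(object_size_mm / object_distance_mm))

print(retinal_image_size_mm(10.0, 570.0))  # ~0.30 mm, as in the text
print(viewing_angle_deg(10.0, 570.0))      # ~1.0 degree
```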


3.2 Auditory

The ear is the most sensitive human sensory organ and is responsible for processing sound waves. Sound waves are compression waves, or pressure fluctuations, in the air. These fluctuations are described by sound pressure and frequency. The hearing range comprises frequencies from 20 to 16,000 Hz and loudness levels between 4 and 130 phon [5].

A sound event is characterized by its frequency content and the amplitude of the occurring pressure fluctuations. This pressure is called sound pressure. Like any other pressure, it is measured in pascal (1 Pa = 1 N/m²). The range of sound pressure that can be processed by the ear, the dynamic range of the ear, is very large. In order to have a compact measure of this large range, the unit of the sound pressure level was created. Sound pressure levels are presented in decibels (dB), with human perception covering the range from 0 to 120 dB. The level is characterized by the sound pressure $P_x$ in a logarithmic ratio to a unified reference pressure $P_0$ ($= 2 \cdot 10^{-5}$ Pa). Thus, the sound pressure level is defined as

$$L = 20 \log\left(\frac{P_x}{P_0}\right)\ \text{[dB]} \qquad (8)$$

A slight increase in decibels corresponds to a multiplication of the actual sound pressure: an increase of 20 dB equals a tenfold increase in sound pressure [5].

The human ear is adjusted to hearing in atmospheric conditions. The human head contains tissue which itself contains water and can therefore also transmit sound waves when submersed. As a result, vibrations bypass the eardrum, the part dedicated to picking up sound waves in air. When this happens, losses and distortion occur in the auditory system because of the high impedance of water once it enters the auditory canal. Human ear function thereby loses sensitivity by 30–60 dB [5]. Directional sensitivity and auditory-based orientation are reduced because the intensity difference between the two ears largely disappears: underwater, the intensity of sound is not much absorbed by the diver’s head, and thus it remains approximately the same at both ears, regardless of the angle of incidence.
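Equation (8) can be verified numerically; the following sketch shows, among other things, that a tenfold increase in sound pressure corresponds to +20 dB:

```python
import math

P0 = 2e-5  # reference sound pressure in air, Pa

def spl_db(pressure_pa: float) -> float:
    """Eq. (8): L = 20 * log10(Px / P0)."""
    return 20.0 * math.log10(pressure_pa / P0)

print(spl_db(2e-5))  #   0 dB (threshold of hearing)
print(spl_db(2e-4))  #  20 dB -- tenfold pressure, +20 dB as stated above
print(spl_db(2.0))   # 100 dB
```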

3.3 Temperature

Temperature has direct and indirect influences on the human sensory systems and thereby on the perception of the surroundings. The first sensory organ to perceive an underwater environment is the skin, and thus the initial perception is a tactile stimulation [7]. Cold sensations and a drop in skin temperature can result in numbness, limiting the perception of tactile stimuli. Cold water causes the body core temperature to drop, thereby slowing down brain functions and thus cognitive functions.


The combination of skin and body core temperature drops can initiate further perceptual narrowing, which may impair performance and safety [7].

4 Perception Challenges Underwater

During submersion, a diver is subject to distortions affecting the perception systems, which may negatively limit the diver’s confidence, performance and safety [6, 7]. The underwater environment has multiple effects on a diver’s physical and mental condition. These effects, specifically those on the perception systems, result in alterations of the tactile-kinesthetic and vestibular systems, the auditory system and the visual system. The perception challenges are the result of the changed physical conditions which prevail in underwater environments and which differ significantly from the atmospheric conditions to which human physiology is best adapted [7]. Having investigated the underlying physical effects and the functional physiology in atmospheric conditions in the previous sections, the two will now be compared to investigate the extent to which the perception systems are affected.

Environmental effects typically involve more than one sense, and thus distortion is amplified by the number of affected perception systems [7]. However, even in a distorted state, the combination of multiple senses may generate valuable environmental cues. For example, in underwater settings the sense of hearing is degraded, yet it still provides information which, combined with the input from another sense, can be unified to obtain environmental information [7].

4.1 Haptic–Tactile Perception

The first sensory organ to be exposed during submersion is the skin. Skin perceptions include temperature sensation and haptic/tactile stimuli. Due to the much denser medium water, compared to air, haptic stimuli can be more intense than in air. Buoyancy effects, underwater currents, water wave propagation and other factors that cause water movement are perceived and may be utilized as indicators for orientation. In the absence of visual cues and normal patterns of vestibular cues, divers have to rely on tactile cues, which can be misleading. Although a diver can retain an accurate perception of their body configuration and a spatial representation of their surroundings, it is considered that orientation awareness and object recognition based on tactile information are lost. The buoyancy effect can provide indirect spatial information, based on the observation of negatively or positively buoyant objects; for example, bubbles rising from exhaled gas provide directional information toward the surface [7].


4.2 Visual

For visual perception underwater, changes in temperature, light absorption, reflection, turbidity and illumination all factor into a degraded perception. These effects, as detailed in the physics section, are responsible for weakening the refraction capability of the human eye [6]. Underwater, light is only slightly refracted by the eye and the image is focused far beyond the retina. This relates to a loss in focusing capacity and reduces a diver’s peripheral vision and depth perception. The refracted image of an underwater object causes an angular enlargement by a factor of about 4/3 and a reduction of the image distance to about 3/4 of the physical distance between object and observer. Different wavelengths are absorbed with different efficiency at varying water depths and are also scattered by the particles in water. As a result, visual acuity, luminance contrast and visual image quality are reduced, and objects are harder to detect. Absorption and scattering also significantly alter the perception of colour, reducing the colour appearance of objects and further complicating their detectability underwater.
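The magnification and distance effects are simple to quantify. A minimal sketch applying the 4/3 and 3/4 factors stated above (the object dimensions are arbitrary illustrative values):

```python
# Apparent size and distance of an object viewed underwater, using the
# 4/3 angular-enlargement and 3/4 image-distance factors from the text.

def apparent_underwater(physical_distance_m: float, physical_size_m: float):
    apparent_distance = physical_distance_m * 3.0 / 4.0
    apparent_size = physical_size_m * 4.0 / 3.0
    return apparent_distance, apparent_size

# A 0.5 m object 4 m away appears roughly 0.67 m large and only 3 m away:
print(apparent_underwater(4.0, 0.5))  # (3.0, 0.666...)
```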

4.3 Auditory

The diver needs an accurate appreciation of their orientation, depth and distance from a reference point. In atmospheric conditions, spatial orientation depends on information obtained from the visual, vestibular and often auditory systems. Underwater, human ear function can lose sensitivity by 30–60 dB [5]. Orientation by directional auditory sensitivity is reduced underwater, due to the loss of the intensity difference between the two ears [7]. Underwater, the intensity of sound is not much absorbed by the diver’s head but is transmitted more efficiently through the watery tissues of the head and vestibular system [6]. Thus, the sound intensity is perceived as approximately the same at both ears, regardless of the angle of incidence [6, 7]. A diver should therefore not rely solely on the auditory sense for orientation and directional information.

5 Concepts of Different Realities

Milgram et al. [8] created the reality–virtuality continuum, which spans the classes real environment, augmented reality (AR), mixed reality (MR), augmented virtuality (AV) and virtual reality (VR). Within the described taxonomy, the key criteria are whether a subject is dealing with real or virtual objects, real or virtual images, and direct or non-direct vision of those.


The continuum includes the real world on one side and a fully immersive environment on the opposite side. Generally speaking, augmented reality is the art of superimposing computer-generated content over a live view of the world [1]. The characteristics of virtual environments are that they consist solely of virtual objects and immerse a user in this digital world. The space between the real, natural world and a fully digital world is mixed reality, which comprises real and virtual objects presented together within a single display.

Reality The natural environment can be considered as the state or quality of being real. Real objects are characterized by their existence independent of ideas concerning them [10]. Reality is an environment that can be experienced without the need for any artificial enhancement. Thus, it can be considered the space where humans interact naturally with objects. The perception of the environment occurs through the human sensory systems: vision, via sight through the eyes; audition, via sound through the ears; and haptics, via touch [9].

Augmented Reality The core idea of augmented reality is to provide an information layer consisting of digital objects and cues for the user in addition to the real world [11]. This superposition is performed in real time and is open to user interaction. Using augmented reality, the user remains mainly exposed and immersed in reality. Augmented reality is an unobtrusive, real-time extension of reality, which registers real and virtual objects with each other. In AR, digital content can be used to occlude real-world objects and hence change the perception of the user [11]. Thus, perception is affected through the human sensory systems: the visual system receives “holographic” information through a layer of image projection via a display or glasses, and auditory stimuli are fully augmented through speakers. AR provides no additional haptic sensations beyond those already experienced through the natural world. However, the AR system has an awareness of physical body movement and orientation [9].

Virtual Reality VR immerses a user in a completely computer-generated environment which can be experienced through artificially created sensory stimuli and interacted with comparably to the real world [1]. The principle of stereoscopic vision is applied to create depth perception by displaying slightly different images to each eye, creating a binocular parallax [11]. Within such a VR environment, the virtual world excludes the user from reality and is therefore considered the dominant world. All sensory systems are stimulated artificially during VR exposure. Vision is stimulated through a 3D illusionary world rendered by a computer model. Auditory sensation is limited to sound through headphones. Haptic information can be provided through hand controllers, which give an understanding of body position and respond with vibration feedback from interactions in the virtual world [9].

In underwater scenarios, both augmented reality and virtual reality are attractive technologies for performance and safety enhancement, as well as for educational purposes.


Given a specific requirement, a support system for orientation and object recognition in augmented reality would be of great use to the diver while submersed, still allowing interaction with the real world, while virtual realities and serious immersive games would be performed in atmospheric conditions to complete educational objectives.

6 Technological Approaches

6.1 Augmented Reality

Augmented reality utilizes the superposition of augmented objects or object detection in real-world images. One of the earliest applications utilizing AR in underwater settings was introduced by Gallagher et al. [12], who created a head-mounted display for US Navy divers which was used to supply the diver with additional, virtual information during their dives. To date, in underwater scenarios AR is mostly used for augmented reality games (see Fig. 4) [6, 13, 14] and for educational purposes in archaeological underwater settings [15, 16]. Augmentation is thereby realized by providing the diver with a handheld waterproof tablet. The tablet, including its camera, is used to record the underwater environment and superimpose virtual objects onto the obtained video and image data.

Fig. 4 Example of utilizing augmented reality in underwater scenarios. Virtual objects are superimposed over recorded environmental images using a head-mounted tablet. This demonstrator was introduced by Oppermann et al. [14]


However, images taken in underwater settings are distorted in multiple ways that affect the perception of objects and orientation. Images suffer from unbalanced colours due to the uneven absorption of light in each colour channel. Images can become blurry and desaturated because of the water’s turbidity. Further noise can be present in the images from larger organisms, underwater fauna and bubbles. All these effects have a direct negative impact on the detection of objects, which is the primary task of artificial intelligence for augmented reality [17]. It is thus important to provide counteractive measures to improve a diver’s capability to detect, perceive and understand objects and elements underwater [17].

An initial approach often reported in the literature [18–25] is to artificially improve underwater images before marker detection algorithms are employed. Generally, image enhancement algorithms are considered to provide a better quality of underwater images, which creates the basis for successful marker detection. One approach fuses multiple image frames derived from a single input image, using white-balancing and image enhancement techniques [19]. The application of white-balancing methods addresses an important characteristic of computer vision underwater and significantly improves the results of depth determination. The luminance channel I is computed as a weighted sum of the individual colour channels R, G, B [17]:

$$I = w_r R + w_g G + w_b B \qquad (9)$$
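Equation (9) translates directly into code. In the sketch below, the underwater weights are illustrative assumptions (not values from [17]), chosen to down-weight the strongly absorbed red channel:

```python
import numpy as np

# Eq. (9) as code: a luminance channel as a weighted sum of R, G, B.
def luminance(img: np.ndarray, w_r: float, w_g: float, w_b: float) -> np.ndarray:
    """img: float array of shape (H, W, 3) with channels ordered R, G, B."""
    return w_r * img[..., 0] + w_g * img[..., 1] + w_b * img[..., 2]

img = np.random.rand(4, 4, 3)                     # stand-in for an underwater frame
i_standard = luminance(img, 0.299, 0.587, 0.114)  # common video luma weights
i_underwater = luminance(img, 0.05, 0.60, 0.35)   # assumed red-suppressed weights
```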

By changing the weights in the luminance calculation, the performance of computer vision algorithms underwater is considered to improve. Li et al. [20] created a pipeline of multiple filters to improve degraded images. The filters were designed to counteract the effects of uneven illumination, noise, turbidity and uneven absorption of colours. Li employed a light propagation model that accounts for the underwater distortion effects, creating an underwater optical image model. In this model, the absorption coefficient is different for each colour channel, being highest for red and lowest for blue [20]:

$$I = J(x)\, e^{-(\beta_s + \beta_a)\, d(x)} + \left(1 - e^{-\beta_s\, d(x)}\right) A \qquad (10)$$

The model determines an image I using the haze-free scene radiance J, the distance between camera and object d(x), the veiling colour constant A, and x = (x, y) as a pixel. The underwater distortion factors are described by the attenuation factors $\beta_s$ for scattering and $\beta_a$ for absorption of light [20]. Furthermore, multiple filtering and denoising steps are performed in order to obtain an image that can be used for a shape-based recognition algorithm.

Chiang et al. [21] used foreground and background selections to estimate the depth of submersed objects in an input image, on the assumption that depth information can be used to reduce uneven illumination effects and noise introduced by turbidity.


Although the approach showed successful applicability in atmospheric conditions, dark channel operations are unsuitable in underwater settings due to the missing red colour channel. Chiang’s dehazing algorithm attempts to compensate for the attenuation discrepancy along the propagation path of light from the object to the camera. The underwater model thereby incorporates the image formation model as follows [21]:

$$I_\lambda(x) = J_\lambda(x) \cdot t_\lambda(x) + (1 - t_\lambda(x)) \cdot B_\lambda \qquad (11)$$

$$\lambda \in \{\text{red, green, blue}\} \qquad (12)$$

with x as a point in the underwater scene, I_λ(x) the captured image by the observer, J_λ(x) the scene radiance at point x and t_λ(x) the residual energy ratio of J_λ(x). The energy ratio can also be expressed in terms of the energy of the light beam with wavelength λ before (E_λ^initial(x)) and after (E_λ^residual(x)) travelling the distance d(x) between object and observer. The model itself is processed through multiple steps consisting of:

1. Estimation of the distance between object and observer
2. Determination of the light intensities of foreground and background
3. Correction of haze and wavelength attenuation along the underwater propagation path
4. Water depth estimation based on the residual energy ratios of the different colour channels of the background light
5. Colour compensation based on the amount of attenuation corresponding to each light wavelength

Carlevaris-Bianco et al. [22] used differences between the maxima of the red, green and blue channels to obtain initial depth estimates, while other approaches suggest ignoring the red channel completely [23]. Cho et al. [25] determined the depth at only a few points in the input image and used these points in combination with an incremental Gaussian process to estimate the depth in the rest of the image. Drews et al. [23, 24] designed a model which accounts for the effects of absorption, scattering and backscattering, which are responsible for underwater image degradation. The basis is an underwater light attenuation model providing a restoration method based on a physical model of light propagation combined with statistical information about the scene. Priors thereby consist of a priori knowledge about the underwater scene, comprising information based on statistical and physical properties, ad hoc rules and heuristic assumptions (Fig. 5). Drews et al. [24] created an approach that is able to simultaneously recover the medium transmission and scene depth and to restore the visual quality of images acquired in underwater scenarios. The underwater attenuation light modelling is based on an underwater restoration methodology utilizing monocular sequences of images. Underwater images are the result of complex interactions of light beams travelling through the water medium. The image is composed of three components: a direct illumination E_d, forward scattering E_fs and backscattering E_bs.


Fig. 5 Schematic diagram of the vision degrading factors including backscattering, forward scattering and absorption [23]

E_T = E_d + E_fs + E_bs    (13)

Backscattering was identified as the main reason for image contrast degradation. The direct component is thereby defined as

E_d = J · exp(-ηd) = J · t_r    (14)

where J describes the underwater scene radiance and d the scene depth. t_r is the modelled transmission and η the attenuation coefficient. In underwater settings, the attenuation factor η is a combination of the scattering coefficient β and the absorption coefficient α, both of which depend on the light's wavelength. The prime component causing underwater image degradation, the backscattering component, results from the interaction of ambient illumination sources with particles dispersed in the water medium:

E_bs = A · (1 - exp(-ηd)) = A · (1 - t_r)    (15)

A constitutes the global light in the scene, a wavelength-dependent scalar. As a result, Drews et al. [24] formulated an enhanced model of image formation in underwater settings as

I(x) = J(x) · t_r(x) + A(1 - t_r(x))    (16)

where x holds the pixel coordinate information.
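Restoration methods invert this formation model: given estimates of the transmission t_r and the global light A, the scene radiance follows from Eq. (16) as J = (I - A(1 - t_r)) / t_r. A minimal sketch, assuming the transmission map and global light have already been estimated by some prior (the estimation step itself is not shown):

```python
import numpy as np

def restore(I, t_r, A, t_min=0.1):
    """Invert Eq. (16): J = (I - A*(1 - t_r)) / t_r.

    I:     H x W x 3 observed underwater image in [0, 1]
    t_r:   H x W estimated transmission map in (0, 1]
    A:     per-channel global light estimate, shape (3,)
    t_min: lower clamp on the transmission to avoid amplifying noise
           in regions where almost no direct signal survives
    """
    t = np.clip(t_r, t_min, 1.0)[..., None]
    J = (I - A * (1.0 - t)) / t
    return np.clip(J, 0.0, 1.0)
```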


6.2 Virtual Reality

The use of virtual reality for a full-immersion sensation in underwater environments is realized via VR glasses and additional feedback systems. Depending on the level of immersion fidelity that is intended to be created, the number of feedback mechanisms can be scaled up significantly [15, 26]. Common application scenarios for underwater VR simulations to date are serious games and archaeological site scenarios [15, 16]. While VR games are commonly available through game providers [27], VR scenarios for archaeological purposes are designed specifically for pre-defined, often real-world, locations. Depending on the level of immersion, virtual reality simulations require a number of sensors that measure the user's interactions and translate these actions into the virtual world in real time. Jain et al. [26] developed a full-immersion virtual diving simulator in atmospheric lab conditions. Their goal was to recreate the physical and sensational conditions of SCUBA diving in the laboratory (see Fig. 6). A subject was connected to a prototype consisting of multiple sensor and feedback mechanisms which acted to simulate the sensation of diving. The sensor and feedback modalities used to set up the virtual environment included inertial measurement units (IMUs), thermoception (sense of temperature), equilibrioception (sense of balance), proprioception (sense of kinesthesia), visual stimulation via Oculus Rift glasses, auditory stimuli and a breathing sensor attached to a snorkel [26]. All sensor and feedback units combined contributed to a high-fidelity level of immersion. The visual stimuli were based on underwater games provided by Oculus Rift [27].

Fig. 6 Setup of the terrestrial full-immersion virtual diving simulator from [26]


7 Designing Requirements for Dynamic and Immersive Environments

The design of applications for immersive environments has three fundamental components: the user's neurophysiology, the head-mounted displays and the physical and virtual environment boundaries. To effectively create immersive environment applications, one must understand how the brain recognizes objects and how we recall the location of an object or the occurrence of an event through our spatial memory. DiCarlo stated that core object recognition is the result of a cascade of reflexive feed-forward computations in the inferior temporal (IT) cortex and that subsets of the IT region are responsible for differentiating objects. Temporal continuity, i.e. the exposure to two or more discrete events at the same physical dimension, time and space, as well as object consistency, colour and shape are some of the properties that drive the rapid identification of objects [30]. Bao and Tsao designed the map of object space, a map that determines the type of object and the IT region that is activated when exposed to particular object shapes, establishing a mechanism for object recognition [31]. In another study, Teichmann demonstrated that object representation is influenced by object-colour knowledge and by temporal dynamics, both being essential in the process of core object recognition and our interaction with the world that surrounds us. Core object recognition is driven first by shape and then by colour and knowledge of previous experiences, with colour and shape not being interdependent features for object identification. Figure 7 illustrates the intersection between the map of objects and the influence of colours in identifying objects [32].

Fig. 7 Example plot of shape and colour for core object recognition


When designing for augmented reality, the identification of shape can be challenging due to the optical see-through background of head-mounted devices. The selection of the colour palette is fundamental to the success of an effective application. Merenda et al. determined in their study a defined colour palette for a set of real-world backgrounds. The study analysed backgrounds, AR colour palettes and source lighting, with results pointing to different levels of luminance and contrast as features enabling human vision to perform rapid core object recognition accurately [33].

7.1 Spatial Mapping

Another requirement of AR applications is the delimitation of physical and virtual environment boundaries. Human spatial memory is described as the process of storing and retrieving information to plan a route to a location and to recall the location of an object during an event. O'Keefe discovered place cells in 1971, and over several decades he demonstrated that place cells can express the location of the individual in addition to information about events and locations, suggesting that place cells are redefined by experience-dependent long-term synaptic plasticity [34]. In 2014, Moser and Moser described grid cells, multiple functionally specialized cells of the hippocampal formation that compose a coordinate system for spatial navigation. Grid cells in combination with place and head direction cells recognize the location, direction and limits of space, forming a circuit in the hippocampus similar to a global positioning system (GPS) [35]. The manner in which our brain perceives spatial location and creates spatial memory is highly relevant for the design of augmented reality applications, because of its role in learning and recall and because higher environmental fidelity of these applications reinforces the spatial memory mechanism and object location memory accuracy, as shown in the experiment from [36]. To better understand the complexity of spatial mapping for augmented reality applications, we must define three basic concepts:

1. Tracking: the problem of estimating the position (x, y, z) and orientation (roll, pitch, yaw) of the AR user's viewpoint, assuming the user carries a wearable camera, continually locating the viewpoint while the user moves.
2. Registration: the process of finding in real time the position and orientation of virtual objects so that they can be integrated in a plausible way into the physical world, fixing virtual objects on real objects while the view is fixed.
3. Placement: the process that automatically moves dynamic objects until they come into contact with the static objects of the physical world.


Fig. 8 Spatial mapping coordinate planes

The challenge of placing, registering and tracking can be exemplified by Fig. 8, where the position of the user (x, y, z) differs from the coordinates of the camera, which differ from the coordinates of the augmented object, which in turn differ from the intersection between the physical world and the augmented application. Several techniques, from global, Cartesian and projective coordinates through pinhole cameras to homography, are utilized to position augmented objects in the physical environment. A homography is by definition a projective transformation that takes advantage of the individual projections of two related images. Developing our augmented applications, we are constrained to four visible-light cameras for head tracking, two infrared cameras for eye tracking and one 8 MP still camera to project images into the physical environment, guided by iris tracking to mimic the human vision system. Utilizing the homography algorithm, an object can be seen through the camera lens from different perspectives and angles. The homography camera model enables the designer to determine the object location in the virtual environment based on the physical environment, the camera position and the camera characteristics, as follows:

\begin{bmatrix} u \\ v \\ w \end{bmatrix} =
\begin{bmatrix} \frac{1}{\rho_u} & 0 & u_0 \\ 0 & \frac{1}{\rho_u} & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} R & t \\ 0_{1\times3} & 1 \end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}

where ρ_u represents the intrinsic pixel-scale parameter of the camera, f the focal length, R the rotation matrix describing the orientation of the camera and t the position (translation) of the camera.
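A small sketch of this projection, with entirely illustrative camera parameters (not the specifications of any particular headset):

```python
import numpy as np

def project_point(X_world, R, t, f, rho_u, u0, v0):
    """Project a 3D world point to pixel coordinates with the pinhole model above.

    X_world: (3,) point in world coordinates
    R, t:    3x3 rotation and (3,) translation of the camera (extrinsics)
    f:       focal length; rho_u: pixel size; u0, v0: principal point
    """
    K = np.array([[1.0 / rho_u, 0.0, u0],
                  [0.0, 1.0 / rho_u, v0],
                  [0.0, 0.0, 1.0]])
    P = np.array([[f, 0, 0, 0],
                  [0, f, 0, 0],
                  [0, 0, 1, 0]], dtype=float)
    E = np.eye(4)
    E[:3, :3], E[:3, 3] = R, t            # extrinsic block [R t; 0 1]
    u, v, w = K @ P @ E @ np.append(X_world, 1.0)
    return u / w, v / w                   # perspective division

# Camera at the origin looking down the z-axis (illustrative values).
uv = project_point(np.array([0.2, -0.1, 3.0]), np.eye(3), np.zeros(3),
                   f=0.004, rho_u=2e-6, u0=320, v0=240)
```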


Now, when utilizing artificial vision underwater, we run into a few challenges, such as confusion from false targets, complex and unexpected backgrounds, and the extensibility of the recognition ability to diverse objects. The ability to navigate the physical world and reach landmarks is linked to the ability to form, retain and utilize cognitive representations of the environment. To minimize errors and maximize the efficacy of the tool, our research group utilized template matching. Template matching is an effective method for target detection and recognition, utilizing a known object/environment to calculate similarities between the real object/environment and the augmented object/environment. To accomplish the design of underwater augmented reality tools for marine life, we chose to design a known terrestrial animal. This exercise with a known target, location, position and placement in the physical world represents the fundamental basis for the development of an effective underwater augmented reality application; this case study is described below.
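As an illustration of the technique, and not a reproduction of the group's actual implementation, OpenCV's normalized cross-correlation can locate a known template in a scene frame; the file names and threshold below are hypothetical:

```python
import cv2

# Load a scene frame and a template of the known object (hypothetical files).
scene = cv2.imread("scene_frame.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("horse_head_template.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation tolerates uniform brightness changes,
# which helps under the uneven illumination found underwater.
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

if max_val > 0.7:  # empirical similarity threshold (assumption)
    h, w = template.shape
    top_left = max_loc
    bottom_right = (top_left[0] + w, top_left[1] + h)
    # The matched region is where the augmented overlay would be anchored.
    cv2.rectangle(scene, top_left, bottom_right, 255, 2)
```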

7.2 Case Study: 3D Horse Head Model to Augmented Reality Model

Equine colic is a broad term which describes abdominal pain in horses; the cause of pain is often related to the gastrointestinal system of the horse, and colic remains one of the most common clinical presentations in equine emergency practice [28]. Some causes of colic can be fatal without surgical intervention. Nasogastric intubation, the medical term for inserting a tube into the nasal passage, down the throat and into the stomach, is frequently required for diagnostic and therapeutic purposes related to colic. If an intestinal obstruction is suspected as the cause, the horse may need to have the stomach emptied or require oral fluids and laxatives. Consequently, being able to pass a nasogastric tube is an essential, day-one skill for equine practitioners. One of the complications of poor nasogastric placement is epistaxis (nosebleed), which is caused by trauma to the ethmoids (a region of spongy bone located in the horse's sinuses, near the opening to the oesophagus) [29]. Unfortunately, veterinary students' opportunities to practise this skill are often limited by equine welfare and safety considerations [37]. This case study follows the process of applying engineering methodology and tools to remove the impediments to veterinary students having every opportunity to practise this fundamental skill. An anatomically correct facsimile of a horse's head, complete with a realistic and precise reproduction of the nasal passage and ethmoids, is to be made using traditional medical imaging data, converted to 3D and manipulated for effective 3D printing to create moulds. These methods can also be used to bring 3D assets into an augmented reality environment for any number of comparative or analytical purposes.


7.3 Case Study: Methodology

Source Imagery and 3D Modelling Software Selection

Computed tomography (CT) images from a 7-year-old Newfoundland pony were used to design the 3D printed model. While costly software solutions that can convert DICOM CT data into printable 3D models do exist, a manual approach was used in this case, both as a proof of concept and to manage costs. CT scans produce images with which most would be familiar: they show a cross-section "slice" of the patient being scanned, in this case, a horse. These images are presented in greyscale, which indicates the relative radiodensity of the material being scanned. Figure 9 shows an example of one of the images used in this case study. In this image, the different shades show clear delineations between hard matter like bone/tooth in white, soft tissue in shades of grey, and cavities in black, seen here as the nostril openings of the nasal passage. These images are the roadmap for the 3D model and the assurance that what is produced is anatomically representative of the subject. Of note, different CT scan hardware will have varying levels of axial resolution in its imaging output. The images used for this case study were taken from a 3 mm resolution scan, meaning each image in the CT scan package represents an axial translation of 3 mm.

Fig. 9 CT scan image of Newfoundland pony showing nostrils, teeth and soft tissue


In order to create a reliable 3D print using additive manufacturing processes, it is important that the models used are solid bodies. Meshes, textures and wireframe models not specifically designed for 3D printing can cause errors when printing is attempted. Solidworks is a software package created by Dassault Systèmes for all manner of 3D CAD design and simulation work. There are several competitors in the 3D CAD design space with similar functionality, and the methods outlined could be adapted to their software relatively easily; examples are Autodesk Inventor and OnShape. Solidworks was chosen for this exercise due to existing familiarity with the software, its ability to create solid body models and the availability of the software.

Modelling Strategy

While Solidworks was the 3D modelling software chosen in this case, it was not specifically designed for this type of workflow. This creative use of the software made it necessary to test different methods to assess which would be optimal for creating the model from the DICOM images. The first method used the lofting function to create smooth transitions from one cross-section sketch to the next. While initially promising, the function encountered repeated errors as the complexity of the cavities increased. This method would be the first choice when recreating relatively simple organic shapes; however, it should be avoided if there are multiple nested cavities within the cross section of the model. The next method was fundamentally less complex and relied less on automated functions within the modelling software. In this method, the cross-section sketches were extruded one by one, slowly building the model one 3 mm layer at a time, using the cross sections from the DICOM images as boundaries. Like the first method, this had initial positive outcomes. Issues with this method were encountered when new cavities appeared from one cross-section sketch to the next, resulting in "zero-thickness geometry" errors when extruding the subsequent layers. Since this method showed promise and the errors encountered were predictable, the next step was modifying it to be simpler, with less room for geometric errors. The main issue with the extrusion method presented itself when new cavities began in the cross section of the scan. To avoid this, the method was essentially reversed: a solid slightly larger than the final model was extruded, and the voids were removed layer by layer via the cut function. This strategy was chosen going forward as it was the least complicated, relied the least upon closed-source automated functions and proved to be repeatable.

Getting Started Creating the Blank Model Since there is no automated function within Solidworks to convert a DICOM image package into a 3D model, it stood to reason that applying the method chosen above


Fig. 10 Blank solid model in Solidworks

would represent a significant time commitment. Each image layer of the DICOM package needed to be visually inspected and the profile of the boundaries between cavities and soft tissue traced to produce the sketch which would be cut from each layer of the blank solid model. First, a cylinder somewhat larger in dimension than what was to be modelled was made using the boss extrude function (Fig. 10). This provides the solid from which the layered cuts will shape the anatomically correct network of cavities and lumen which make up the nasal passages of the horse. Each of the cuts needs only to affect the 3 mm "slice" of the total model, with the profile of that cut identified in its respective CT image. To be able to make cuts that began at the correct offset and did not interfere with adjacent "slices" of the model, it was necessary to create reference planes along the body at 3 mm increments; these provide a starting point for each of the cuts along the axis of the model. In this case, since the relevant anatomy being modelled had a length of 30 cm along the axis of the scan, 99 planes needed to be created, each offset 3 mm from its neighbouring plane. Conveniently, when creating a new reference plane in Solidworks, the software gives the option to create multiple instances of the plane, equally spaced from each other.

Carving Away the Model

With the blank slate prepared, it was time to begin removing the material which would eventually create the profile of the 3D model. The concept is relatively simple: a sketch is made on one of the reference planes following the contours of the cavities identified visually in the DICOM image, and this shape is then removed through 3 mm of the model using the extruded cut function within Solidworks. This is then repeated 99 times in the case of this model, resulting in a relatively accurate representation of the targeted anatomy. While this is not complicated, the challenge lies in being able to create an accurate sketch for each reference plane. To streamline the process, each image from the DICOM package was assigned a plane in the order


the photos were scanned; each image was then imported into the sketch for its plane, and, using a combination of line segments, splines and three-point arcs, the profile of the desired cavities was traced by hand. Of note, Solidworks does have a built-in automated trace functionality which can be used to automatically generate outline profiles using high-contrast images as reference. The function was not used in this case for the following reasons:

(a) The passages and cavities of interest for this case had to be manually distinguished from the rest of the anatomy present in the CT image; much of the information present in the scan images was of no consequence to this case, and modelling it would have required additional time for its identification and removal after the fact.
(b) As CT images use greyscale to differentiate different densities of material, there were shades and gradients where human judgement was needed to determine the correct placement of boundary lines between regions.
(c) Finally, the accuracy of the auto-trace was found to be inconsistent due to the low contrast of the images. In a small test to determine the efficacy of auto-trace (in this specific use case), correcting the sketches produced by auto-trace took on average 63% more time than tracing the sketch manually.

Figure 11 shows an example of one of the CT images projected on its assigned reference plane; the sketch of the region to be cut can be seen outlined in blue, created using a combination of three-point arcs and splines. We can also see in this image that using this method provides a reliable visual aid for isolating certain details of anatomy without having to model the entirety of the scanned animal.

Fig. 11 Example of CT scan image used to guide sketch


Fig. 12 Complete model of equine nasal passage and ethmoids

Only the nasal passages and ethmoids were modelled in this case; however, the other anatomical features visible in the CT images were helpful for visualizing progress. This surplus of visual information is also an example of the quantity of detail which would need to be identified and removed by an automated system translating these images to a 3D model. After performing this procedure for each reference plane, the result is a relatively accurate approximation of the nasal passage, ethmoids and transition to the oesophageal passage of a horse, shown in Fig. 12. At this point, the process simply provides a 3D model of a cavity, not something which can be easily 3D printed or referenced in an augmented reality environment. To create a solid from the void, another body was placed around the model without merging them, and the "combine" function was then used to subtract everything but that which was filling the void. The result is shown in Fig. 13.

Test Printing

With the model successfully converted from the traditional medical imaging source, it is possible to manipulate the file as if it were any other traditional 3D solid body model created in Solidworks. The first test was a simple 3D print at 1:1 scale using an EOS 3D printer. For a complex shape with multiple voids such as this, it was important to use a printer which uses dissolvable support filament. A traditional low-cost 3D printer using print filament for the support structure would have created an unusable mass, with the support material and the model itself bordering on indistinguishable. Post-printing, the model was soaked in a warm caustic bath for 48 h to dissolve the support material. The result, shown in Fig. 14, was the first point in the project where the model could be physically manipulated and visualized.


Fig. 13 Inversion of equine nasal passage model

Fig. 14 3D print of equine nasal passage inversion model

It was a milestone, as this model shows the complex and intricate anatomy within the nasal passages of a horse in a unique way. Veterinary students could manipulate and examine the structure in a way that would not have been possible before this project. An invaluable attribute in engineering is the ability to recognize challenges in the midst of a project, adjust, reiterate and adapt. The result of the 3D print showed that making a silicone mould for the training analogue of the nasal passages would present challenges. The abrupt changes from layer to layer of the model, due to the 3 mm resolution of the CT scan, created a surface profile which would make removing the model from a silicone casting difficult or impossible. The model needed to be


Fig. 15 3D model imported into Blender

smoothed. Additionally, through consultation with veterinarians, it was agreed that significant portions of the model could be omitted, as their absence was not likely to impact the sense of reality in the eventual training analogue. This was an important scope refinement which would lead to less wasted time. Solidworks is not well suited to creating the more complex organic curved shapes needed to make a smoother, more life-like model. To facilitate moving forward with other modelling software, the file was converted into a VRML file via the export function built into Solidworks. VRML, or Virtual Reality Modelling Language, is a standard 3D vector-based file format. Exporting the model into this format would make working with it in other 3D modelling software possible. Blender is a powerful, free-to-use 3D modelling software suited to complex curved models and manipulation on a vertex-by-vertex scale. In Fig. 15, we see the file imported into Blender directly from the exported VRML file. The level of detail and sheer quantity of vertices are visible in Fig. 15. Simplifying the model required the identification of the areas which no longer needed to be included; in this case, most of the sinus cavity was removed via deletion of vertices and faces, leaving the ethmoids, with the voids left behind filled with new, simpler faces to maintain the integrity of the model. Mentioned earlier, but very important to note, is that 3D printing requires models of solid bodies. Because of this, every time a portion of the model was deleted, there was the possibility of "breaking" the model by creating a void in it. This work, like the initial conversion of the model, involved significant manual manipulation of individual vertices and faces. Conveniently, the anatomy of interest in this case study was relatively symmetrical, which allowed time to be saved by slicing the entire model in half, simplifying one half and then mirroring the modified half to recreate a symmetrical analogue of the nasal passage, shown below in Fig. 16. Also shown in that image is the result of significant manual smoothing at the vertex and facet level, along with the associated repairs meant to maintain the integrity of the model. An attempt was made early in the process to use the automated smoothing functions within Blender, specifically the “subdivision


Fig. 16 Symmetrical, manually smoothed 3D model of equine nasal passages

surface” modifier. Unfortunately, due to the complexity of the model and the intricacy of the shapes involved, these functions broke the model for the purposes of 3D printing. Once the model had been simplified, mirrored and manually smoothed using the sculpt functions, however, one pass of the subdivision surface modifier was possible without creating errors, giving a final realistic and smoothed appearance.
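A minimal Blender Python sketch of this mirror-then-smooth workflow, assuming Blender's bundled X3D/VRML and STL add-ons are enabled; the file names are hypothetical, and the extensive manual vertex clean-up described above cannot be reproduced in a few lines:

```python
import bpy

# Import the VRML file exported from Solidworks (reads .wrl/.x3d).
bpy.ops.import_scene.x3d(filepath="nasal_passage.wrl")
obj = bpy.context.selected_objects[0]
bpy.context.view_layer.objects.active = obj

# Mirror the simplified half across the X axis to restore symmetry.
mirror = obj.modifiers.new(name="Mirror", type='MIRROR')
mirror.use_axis[0] = True

# One conservative subdivision pass for the final smoothing.
subsurf = obj.modifiers.new(name="Smooth", type='SUBSURF')
subsurf.levels = 1
subsurf.render_levels = 1

# Apply the modifiers so the exported mesh is a single solid body.
for mod in list(obj.modifiers):
    bpy.ops.object.modifier_apply(modifier=mod.name)

# Export the result for 3D printing.
bpy.ops.export_mesh.stl(filepath="nasal_passage_smoothed.stl")
```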

8 Discussion

This chapter investigated the human perception systems, the distortions caused by the underwater environment to which these systems are exposed, and the identification of technological means to accurately implement augmented reality and virtual reality applications for and in underwater scenarios. The investigation of the alterations of physical quantities in underwater environments compared to atmospheric conditions showed that tactile, visual and auditory signals are transmitted differently through water. These alterations mean that the perception of the environment, of objects in the environment, and of navigation and orientation is significantly impaired. Impaired perception may have negative effects on a human's physical and mental capacity and therefore pose a hazard risk. Immersive media can be used in two different ways to address the alterations of physical quantities and thus the distorted perception of them. Technological means can be designed either to mimic the altered physical quantities for a realistic representation of the environment, or to counteract and correct the alterations of the physical quantities in order to support a diver's navigation and orientation.


Research into immersive realities addressing underwater settings has utilized both, whereby virtual realities have focused on a realistic representation of the underwater environment and augmented realities have been used to counteract the effects of altered physical quantities. The technological means, especially those addressing the effects of visual alterations in underwater scenarios, provide extensive methods to improve vision and navigation while submersed, by creating a perceptual standard similar to that expected in atmospheric conditions. This is mostly achieved through image processing techniques, as shown in Sect. 6. However, to date, virtual realities still lack a realistic representation of the underwater environment, mostly by neglecting the effects of altered physical quantities. Virtual realities mostly rely on designs of serious games [15], which do not account for the full range of underwater alterations. In order to address these shortcomings, the technological means applied in augmented reality settings to counteract the physical alterations could be repurposed, by inverting the image processing, to apply the distortion effects to the created virtual reality. In detail, this would mean that the equations presented in Sect. 6 could be rearranged to calculate the distortion effect in dependence of underwater exposure, as a function of depth. In our case study of the equine nasal passage, the structure, shape and scale of the model are taken directly from DICOM CT scan imagery of an actual horse, which lends a sense of anatomical realism and will lead to a more life-like experience in the eventual training analogue. This methodology can now be replicated to produce models of marine animals in their environment. Overall, the creation of underwater realities through immersive technologies is in its infancy. Different applications require different technological means to address the effects of underwater perception.

References

1. Wirth M, Mehringer W, Gradl S. Extended realities (xR) in elite sports: how immersive technologies influence assessment and training. In: Engineering and Medicine in Extreme Environments. Springer, 2021.
2. Nomikou, P., Pehlivanides, G., El Saer, A., Karantzalos, K., Stentoumis, C., Bejelou, K., Antoniou, V., Douza, M., Vlasopoulos, O., Monastiridis, K. and Dura, A. Novel Virtual Reality Solutions for Captivating Virtual Underwater Tours Targeting the Cultural and Tourism Industries. In Proceedings of the 6th International Conference on Geographical Information Systems Theory, Applications and Management (GISTAM 2020), pages 7–13.
3. Cibis, T., McEwan, A., Sieber, A., Eskofier, B., Lippmann, J., Friedl, K., & Bennett, M. (2017). Diving into research of biomedical engineering in scuba diving. IEEE Reviews in Biomedical Engineering, 10, 323–333.
4. University of Hawai'i. Exploring our Fluid Earth. Last visited: 23.03.2021. Web: https://manoa.hawaii.edu/exploringourfluidearth/physical/ocean-depths/light-ocean
5. Schmidt, Robert F., Florian Lang, and Manfred Heckmann, eds. Physiologie des Menschen: mit Pathophysiologie. Springer-Verlag, 2011.


6. Morales, Rogelio, et al. "An underwater augmented reality system for commercial diving operations." OCEANS 2009. IEEE, 2009.
7. Stanley, Jacqueline V., and Carl Scott. "The effects of the underwater environment on perception, cognition and memory." 'Challenges of Our Changing Global Environment'. Conference Proceedings. OCEANS '95 MTS/IEEE. Vol. 3. IEEE, 1995.
8. Milgram, P., and F. Kishino. "A taxonomy of mixed reality visual displays." IEICE Transactions on Information Systems 77.12 (1994): 1321–1329.
9. AR-eye. Defining the Architecture of Augmented Reality. Web: https://www.ar-eye.com/2018/10/18/defining-the-architecture-of-augmented-reality/. Last visited: 24.03.2021.
10. Azuma, R., Y. Baillot, R. Behringer, S. Feiner, S. Julier, and B. MacIntyre. "Recent advances in augmented reality." IEEE Computer Graphics and Applications 21.6 (2001): 34–47.
11. Schmalstieg, D., and T. Hollerer. Augmented Reality: Principles and Practice. Addison-Wesley Professional, 2016.
12. Gallagher, Dennis G. "Development of miniature, head-mounted, virtual image displays for navy divers." Oceans '99 MTS/IEEE. Riding the Crest into the 21st Century. Conference and Exhibition. Conference Proceedings (IEEE Cat. No. 99CH37008). Vol. 3. IEEE, 1999.
13. Bellarbi, Abdelkader, et al. "Underwater augmented reality game using the DOLPHYN." Proceedings of the 18th ACM Symposium on Virtual Reality Software and Technology. 2012.
14. Oppermann, Leif, et al. "Areef multi-player underwater augmented reality experience." 2013 IEEE International Games Innovation Conference (IGIC). IEEE, 2013.
15. Bruno, Fabio, et al. "Virtual dives into the underwater archaeological treasures of South Italy." Virtual Reality 22.2 (2018): 91–102.
16. Liarokapis, Fotis, et al. "Underwater Search and Discovery: From Serious Games to Virtual Reality." International Conference on Human-Computer Interaction. Springer, Cham, 2020.
17. Cejka, J., Agrafiotis, P., Bruno, F., Skarlatos, D., & Liarokapis, F. (2018). Improving marker-based tracking for augmented reality in underwater environments.
18. Brown, Hunter C., and Haozhu Wang. "Underwater augmented reality: Navigation and identification." 2013 OCEANS-San Diego. IEEE, 2013.
19. Ancuti, Cosmin, et al. "Enhancing underwater images and videos by fusion." 2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2012.
20. Li, Yujie, et al. "Real-time visualization system for deep-sea surveying." Mathematical Problems in Engineering 2014 (2014).
21. Chiang, John Y., and Ying-Ching Chen. "Underwater image enhancement by wavelength compensation and dehazing." IEEE Transactions on Image Processing 21.4 (2011): 1756–1769.
22. Carlevaris-Bianco, Nicholas, Anush Mohan, and Ryan M. Eustice. "Initial results in underwater single image dehazing." OCEANS 2010 MTS/IEEE Seattle. IEEE, 2010.
23. Drews, Paulo L. J., et al. "Underwater depth estimation and image restoration based on single images." IEEE Computer Graphics and Applications 36.2 (2016): 24–35.
24. Drews, Paulo, et al. "Automatic restoration of underwater monocular sequences of images." 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2015.
25. Cho, Younggun, Young-Sik Shin, and Ayoung Kim. "Online depth estimation and application to underwater image dehazing." OCEANS 2016 MTS/IEEE Monterey. IEEE, 2016.
26. Jain, Dhruv, et al. "Immersive terrestrial scuba diving using virtual reality." Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. 2016.
27. Ocean Rift. http://ocean-rift.com/
28. Cohen, N.D. "Epidemiology of colic." Veterinary Clinics of North America: Equine Practice. 1997; 13(2).
29. Hardy, J., et al. "Complications of nasogastric intubation in horses: nine cases." JAVMA. 1992; 201: 483–486.
30. DiCarlo, James J., Davide Zoccolan, and Nicole C. Rust. "How does the brain solve visual object recognition?" Neuron 73.3 (2012): 415–434.
31. Bao, Pinglei, et al. "A map of object space in primate inferotemporal cortex." Nature 583.7814 (2020): 103–108.


32. Teichmann, Lina, et al. "The influence of object-color knowledge on emerging object representations in the brain." Journal of Neuroscience 40.35 (2020): 6779–6789.
33. Merenda, Coleman, et al. "Effects of real-world backgrounds on user interface color naming and matching in automotive AR HUDs." 2016 IEEE VR Workshop on Perceptual and Cognitive Issues in AR (PERCAR). IEEE, 2016.
34. O'Keefe, J., and J. Dostrovsky. "The hippocampus as a spatial map: Preliminary evidence from unit activity in the freely-moving rat." Brain Research 34.1 (1971): 171–175.
35. Moser, Edvard I., May-Britt Moser, and Yasser Roudi. "Network mechanisms of grid cells." Philosophical Transactions of the Royal Society B: Biological Sciences 369.1635 (2014): 20120511.
36. Murcia-López, María, and Anthony Steed. "The effect of environmental features, self-avatar, and immersion on object location memory in virtual environments." Frontiers in ICT 3 (2016): 24.
37. Gronqvist, G., et al. "The challenges of using horses for practical teaching purposes in veterinary programmes." Animals (Basel). 2016; 6(11).

3D Printed Models for Veterinary Anatomy Teaching

W. Ben Stoughton

Abstract With the advent of widespread, affordable 3D printers and a demand for adjunctive veterinary anatomy models, the use of 3D printing in veterinary education is expanding. In this chapter, the principles of 3D printing will be reviewed including examples of various methods, tools, and techniques to be considered when designing anatomy models. Examples of these principles are discussed through evaluation of seven peer-reviewed publications from around the world reporting the development, piloting, and testing of 3D models in veterinary anatomy education. Areas of focus included equine, canine, and feline anatomy with examples of basic anatomy models and advanced clinical skill training using 3D printed simulators. Keywords Canine image · Equine model · 3D printing · Neuroanatomy · Musculoskeletal models

1 Introduction

Teaching veterinary anatomy remains a core tenet in training veterinarians. Students learn medical terminology and anatomic form and function, setting the foundation for future courses and their professional careers. Typically, veterinary anatomy courses occur in the first year of training and include hands-on experience with preserved small and large animal specimens. Students are actively involved in cadaver dissection, often leading to specimen destruction throughout the semester. Due to the time, expense, and harsh chemicals used in tissue preservation for teaching, there has been a growing demand for complementary, non-inferior methods for teaching veterinary anatomy. One such method is 3D printing of anatomic models [1–3].

W. B. Stoughton () Department of Health Management, University of Prince Edward Island, Charlottetown, Canada e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 N. J. Bressan, C. M. Creighton (eds.), An Introduction to Veterinary Medicine Engineering, https://doi.org/10.1007/978-3-031-22805-6_8


Additive manufacturing (i.e., 3D printing) is a type of technological production that involves layer-by-layer addition of materials to create a 3D structure [4]. This is distinct from subtractive manufacturing, in which machining is used to remove material to create the desired structure. There are many types of 3D printers and a growing list of industrial applications and uses in education and medical practice [5–12]. As advancements in digital imaging and software development grow with the 3D printer market, the creative opportunities for use in veterinary medicine are expanding. One area in which 3D printing can advance education is through the refinement and replacement of animals used in teaching veterinary anatomy and clinical skill development. In this chapter, an overview of the process and example tools required for 3D printing will be provided alongside recent publications describing the practical outworking of these principles in veterinary medical education.

2 3D Printing Overview

Although the type, cost, and durability of 3D models can vary, the potential for customizable anatomic models for clinical and teaching purposes is vast [13, 14]. In general, there are three key steps required for 3D printing: (1) image acquisition, (2) image editing, and (3) printing [15]. Image acquisition is needed to create the digital file of the 3D image to be printed. Image editing is required to ensure model accuracy and plan 3D printing logistics. Third, the uploaded image file is printed using the materials of choice and in the order needed to achieve the desired 3D model goals. In the sections below, these three key steps are reviewed further.

2.1 Digital Image Acquisitions

The first step in preparing a 3D print is creating the digital file. There are several ways to obtain this. Most reported techniques include magnetic resonance imaging (MRI), computed tomography (CT), optical 3D scanning, and manually created computer-aided designs (CAD). The choice will depend on the project, cost, and resources available. For veterinary anatomy models, 3D scanning tools like MRI and CT are extremely useful. This technology provides a slice-by-slice scan of the tissue and provides a 3D surface reconstruction for printing. If MRI or CT scanning is not available, a less expensive option for image acquisition is 3D optical scanning. A variety of 3D scanning tools are available and range widely in cost. Example options include: (1) the Matter and Form© ($749) desktop scanner and (2) the Creaform Go!SCAN 3D© ($60K) handheld scanner [16]. The advantage of this technique is that a high-resolution surface scan is produced that quickly creates a 3D digital image that can be prepared for printing. The degree of accuracy is nearly as good as MRI scans, and the reported accuracy can be as low as 0.05 mm.


2.2 Digital Image Editing

After creating a digital image file, revisions are needed to correct, modify, or generate the desired changes to the model. The typical file type created after scanning is called Digital Imaging and Communications in Medicine (DICOM). After viewing and editing the DICOM file, it must be converted into a printable file type, like Standard Tessellation Language (STL). There are many DICOM viewers available, with OsiriX being the most famous medical image viewing software available [17]. The STL file type is supported by commonly used 3D printers and provides precise 3D image detail of surface geometry to guide printing. Each 3D printer will have computer software that communicates with the printer, allowing for modification and print planning. Two great 3D printing software examples are Ultimaker Cura and Simplify3D®, which allow for preprint preparation and custom settings for in-depth print control. One important factor in planning is determining what support structures are needed to print successfully. For example, if a canine radius and ulna need to be printed, the bones will likely need support structures to keep the print from collapsing and deforming during the process [13]. The 3D printer cannot print into thin air; the item must be built layer by layer. The type of support structure and the areas in which it is needed will be determined by the material used, the weight, and the angles of the print. For Fused Deposition Modeling (FDM), any overhang angle >45° will require support structure. A lattice or tree-like support can be used, and a water-dissolvable material like polyvinyl alcohol (PVA) is helpful to ease support removal. After preparing the STL and any needed support structures, the 3D printing begins.
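As an illustration of the DICOM-to-STL step, a minimal sketch using the pydicom, scikit-image and numpy-stl packages; the directory, threshold value and the assumption of an axial CT series are all hypothetical:

```python
import glob
import numpy as np
import pydicom
from skimage import measure
from stl import mesh

# Stack the DICOM slices into a 3D volume, ordered along the scan axis.
files = sorted(glob.glob("ct_slices/*.dcm"))
slices = [pydicom.dcmread(f) for f in files]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
volume = np.stack([s.pixel_array for s in slices]).astype(np.float32)

# Segment bone with a simple intensity threshold (illustrative value),
# then extract the surface with marching cubes.
verts, faces, _, _ = measure.marching_cubes(volume, level=300.0)

# Write the triangulated surface as an STL file for the slicer software.
solid = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
solid.vectors = verts[faces]
solid.save("model.stl")
```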

2.3 3D Printing

Reviews of different 3D printers, their costs, applications, and limitations in veterinary medicine have been previously described [13, 18]. In the studies of 3D models in veterinary anatomy education discussed below, FDM printers are common. FDM is currently the most affordable and robust type of 3D printing available for creating veterinary educational models. For example, the ANYCUBIC i3 Mega FDM desktop 3D printer costs around $239, can print with materials like PLA, ABS, HIPS (high-impact polystyrene), and wood, and has a build size of 21 × 21 × 20.5 cm. With a printer of this caliber, many smaller anatomic models can be created with durable materials at a relatively inexpensive cost [19]. Depending on the budget, projects and teaching goals, and the printer type and print quality needed, there is a growing list of suitable options (Table 1).

Table 1 Summary of studies of 3D printing in veterinary anatomy education

1. Title: 3D printed models of the digital skeleton of the horse. Anatomy model: equine distal limb including phalanges and sesamoid bones. Printer type: Fused Deposition Modeling (FDM). Printer model: ANYCUBIC i3 Mega. Material: polylactic acid (PLA). Cost and time: 13 h. Limitations: due to printer size limitations, longer bones could not be printed.
2. Title: Use of 3D Printing Models for Veterinary Medical Education: Impact on Learning How to Identify Canine Vertebral Fractures. Anatomy model: canine 3D models of cervical vertebral fractures. Printer type: FDM. Printer model: Witbox 2. Material: unknown. Cost and time: 16 h; $35 US each. Limitations: education study did not include a pre-test or control group.
3. Title: Evaluation of 3D Additively Manufactured Canine Brain Models for Teaching Veterinary Neuroanatomy. Anatomy model: canine brain. Printer type: polyjet photopolymerization. Printer model: Objet Connex350 Polyjet Machine. Material: acrylic-based photopolymer. Cost and time: 144 h for 108 brain models; $209 for the full set. Limitations: required use of high-resolution MRI and significant post-imaging editing.
4. Title: 3D Anatomical Model for Teaching Canine Lumbosacral Epidural Anesthesia. Anatomy model: canine epidural anesthesia model of vertebrae, sacrum, and pelvis. Printer type: FDM. Printer model: UP 3D Mini. Material: ABS. Cost and time: 9 h for creation; 11.9 h for printing; $2.77. Limitations: no soft tissue structures simulated.
5. Title: 3D Printing of Canine Hip Dysplasia: Anatomic Models and Radiographs. Anatomy model: canine coxofemoral joint. Printer type: FDM. Printer model: UP 3D Mini. Material: ABS. Cost and time: 60 h for creation; 73.5 h for printing; $20.25 for all models. Limitations: extended time required to manually create digital images.
6. Title: A low-cost portable simulator of a domestic cat larynx for teaching endotracheal intubation. Anatomy model: feline larynx. Printer type: FDM. Printer model: Prusa i3. Material: PLA. Cost and time: 3 h for printing; $6 for model; $15 for 3D print. Limitations: simulator did not provide movement like laryngospasm.
7. Title: Computed tomography angiography and three-dimensional printed anomalous coronary arteries. Anatomy model: canine heart. Printer type: FDM. Printer model: uPrint SE Plus. Material: ABS. Cost and time: unknown. Limitations: motion artifacts; significant post-imaging manual segmentation.


3 Use of 3D Printed Models in Veterinary Anatomy Education

With the advent of 3D printing, various veterinary education programs around the world have begun to develop novel 3D printed anatomic models to aid in the training of veterinarians. In this section, seven different published examples of veterinary anatomic models used in education will be discussed. The array of topics includes the equine digit, canine heart, canine skeleton, feline larynx, and brain. The works range from pilot studies for product development to human trials establishing a model's educational merit. Each study discussed below is summarized in Table 1.

3.1 Equine 3D Models

1. In 2021, researchers from Turkey's Ankara University published a report describing 3D printing of the equine digital skeleton [19]. The equine digit consists of three phalanges: the proximal, middle, and distal phalanx. This is a challenging area for veterinary students to grasp anatomically, and developing durable 3D models to assist in learning the material is warranted. The authors were able to successfully reproduce the equine phalanges, including the distal and proximal sesamoid bones. The specimens used for the study included distal limbs from four thoroughbred racehorses. Imaging data were collected using CT, followed by segmentation and smoothing for creation of the 3D computer model file. The CT scanning provided fair slice thickness (5 mm) and image resolution (512 × 512 pixels). The software Ultimaker Cura (Version 4.8.0, USA) was used for printing the file on the Anycubic i3 Mega (Shenzhen Technology, China), which uses fused deposition modeling (FDM) technology. The material used was polylactic acid (PLA), a meltable plastic commonly used in FDM-type 3D printing; it is lightweight and odorless. This 3D printer is relatively inexpensive ($239 USD) and produced accurate, life-size models of the equine digit. Each bone was printed separately, with a total of 13 h required to create the models. The authors report excellent accuracy compared to the real equine skeleton, with various notches, protuberances, and surface irregularities reproduced well. Printing each piece separately was likely helpful in ensuring accuracy and limiting artifact formation. Additionally, it allows the student to put the model together or take it apart. The educational merit was not investigated in this report, but the work clearly serves as a building block for future 3D printed models of the equine limb. An additional consideration for 3D printing of the equine digit is the soft tissue structures. The tendons and ligaments in this region are extremely complex. Having 3D models that include the soft tissue structures, with color coding, could be helpful for students in learning the regional anatomy [13].


3.2 Canine Neuroanatomy 3D Models

1. In 2017, a team from North Carolina State University published findings demonstrating successful use of 3D printed models of canine brains for teaching veterinary neuroanatomy [20]. This vital work not only provided the steps needed to create 3D brain models but also tested the effectiveness of the models compared to the classic plastinated models. The study goals were to produce low-cost, durable, anatomically correct brain models and to determine whether the models impaired learning compared to classic brain models. The authors' hypothesis was that 3D models would lead to equal or better gains in student learning. To create an accurate canine brain model, a high-resolution MRI scan was performed of an excellent-quality, formalin-fixed canine brain specimen. Image acquisition used a T2-weighted, Bruker 9.4T MRI with a 3D Rapid Acquisition with Relaxation Enhancement sequence. The scan of a single, formalin-fixed canine brain took 12 h and 13 min. This step was essential for creating the high-resolution imaging data on which to base the 3D prints. Prior to 3D printing, the DICOM file format was transferred and modified into the standard tessellation language (STL) file format. Interestingly, in this transfer process, the authors had to manually edit the images to improve structural details. For example, they describe manually accentuating the external surface folds of the brain. This is an important point, because even with high-resolution imaging, it is essential to confirm the accuracy of the STL file data, since this is what will get printed. Three different models were created: the whole brain, a median-sectioned brain, and the brain with the left cerebral hemisphere and cerebellum removed to allow visualization of the brain stem. The 3D printer was the legacy Stratasys Connex 350, a unique high-resolution poly-jet polymerization printer that can accommodate materials of variable flexibility within a single print job. This was helpful in creating the brain models because it allowed different-colored materials (black and white) to be used to create shades of gray to accentuate different areas. Overall, the estimated cost of printing the full model set ($209), excluding preparation time, was less than creating plastinated models ($375). After creation of the 3D printed canine brains, student learning was measured using the standard in-class exams associated with the first-year veterinary anatomy course. To allow for causal inferences, a novel study design using propensity score matching (PSM) was utilized [21]. Application of this tool resulted in pairs of students across class years (2014 and 2015) based on common baseline variables, including gender and the first anatomy exam score. A total of 58 pairs were generated for enrollment in the study. The students' performance on 24 questions was used to assess the efficacy of the 3D printed models used in the 2015 anatomy course vs. the plastinated models used in 2014. Based on the results, no significant difference in mean score performance was identified between the years. Therefore, the authors concluded that the models were at least as useful as the plastinated models and showed no sign of a negative impact on veterinary students' learning of anatomy. This work provided the foundational educational merit for future development of veterinary anatomy models and further evidenced the use of PSM for evaluating novel educational tools in the curriculum.


2. In 2019, the first study describing 3D printed canine spinal anatomy and vertebral fractures was published by a team from Barcelona, Spain [22]. Their pivotal work showed markedly improved student identification of canine spinal fractures through use of 3D printed models. They compared these models to student performance with digital 3D models and CT reconstructions. The team used raw CT or CT myelography images from three dogs with vertebral fractures that had presented to veterinary clinics. Each dog had different vertebral fractures, which were identified using a 16-slice GE CT scanner with 0.625 mm slice thickness. The imaging data were reconstructed for 3D modeling using OsiriX Lite, and printing was completed using the Witbox 2. This 3D printer uses FDM technology, and the authors reported anatomically correct reproduction of each vertebral fracture. The most important aspect of this study was the experimental design used to determine whether the 3D models were more effective than computer models. The eligible study population consisted of 140 first-year veterinary students who had completed the first semester. Sixty students (42%) volunteered to participate and were randomly assigned to three groups: CT, 3D-CT, and 3D printed models. Each student received a short technical description of how to use the OsiriX Lite program and had a review lecture on vertebral anatomy prior to engaging with the intervention. After the lecture and model engagement, each student completed a 12-question quiz evaluating vertebral anatomy and fracture identification. Demographic and subjective data were collected regarding the students' learning experience and confidence. Results indicated that students learning from the hands-on 3D printed vertebrae performed better than the CT and 3D-CT groups. However, no difference was detected between the groups regarding the students' learning experience and self-confidence. Suñol et al. provide a solid introduction to the use of 3D printed axial skeletal fractures for teaching clinical anatomy to first-year veterinary students. It would have been informative to have a pre-test sample for the students to confirm their knowledge was similar and to balance the groups, if necessary. Additionally, a control group using radiographs would have been interesting to include, considering this is the more common clinical diagnostic used in practice. However, given the low cost of the printing and the improved learning identified in their study, the incorporation of this tool in anatomy courses is warranted.
3. In 2020, a Brazilian team from Rio Branco at the Federal University of Acre reported the creation of a 3D anatomical model to aid in teaching canine lumbosacral epidural anesthesia [23]. Although this is a clinical skill development tool, it is an excellent example of the variety of applications that 3D anatomic models can have and of a way to test efficacy. To create the model, a unique method of image acquisition using a 3D optical light scanner for 360° image capture was utilized.


3. In 2020, a Brazilian team from Rio Branco at the Federal University of Acre reported the creation of a 3D anatomical model to aid in the teaching of canine lumbosacral epidural anesthesia [23]. Although this is a clinical skill development tool, it is an excellent example of the variety of applications that 3D anatomic models can have and of a way to test efficacy. To create the model, a unique method of image acquisition using a 3D optical light scanner for 360° image capture was utilized. After the light scanning procedure, post-scan editing was performed using the free Autodesk Meshmixer software prior to 3D printing. The prototype was printed using FDM with ABS thermoplastic on a UP Mini® 3D printer. Based on the methods, it appears that each section of the model, including the lumbar vertebrae, sacrum, and pelvis, was printed separately and then attached with unspecified flexible filaments. This makes sense, as the printer's build volume is too small to print the whole model at once. It also appears that this method was chosen to incorporate flexibility into the model, mimicking the hip flexion required for lumbosacral epidural injection.

After creation of the skeletal model, its impact on student learning was evaluated. The authors asked a simple question: would an epidural anesthesia 3D model help students' training in performing epidural anesthesia in dogs? To test this hypothesis, the authors conducted a controlled study with clinical cases using 20 students who had completed their anesthesiology coursework but had never performed the procedure. The students were not stratified by course grades, gender, or age and were randomly assigned to either the model (n = 10) or control (n = 10) group. Both groups watched a 3-min video demonstrating the epidural technique; afterwards, the control group was able to ask questions for 20 min, while the 3D model group was allowed to train with the model. The model group primarily trained on palpating the iliac prominences and the dorsal spinous process of L7 required for needle positioning. Palpation of these bony prominences is essential for correct placement of the epidural.

To evaluate the clinical impact of the pelvic model, a small clinical trial using 20 dogs was performed, with each student performing an epidural on a dog prior to orchiectomy. A successful epidural was required to induce tail relaxation, absence of pain on phalangeal clamping of the pelvic limbs, and stability of heart rate, arterial blood pressure, and respiratory rate during incision. In addition to comparing the number of students successfully completing the epidural in each group, the students were asked to rate the procedure's difficulty from 1 to 10 via a short questionnaire. No significant difference was detected between the groups' success in performing the epidural, although 6/10 students in the model group were successful compared to 3/10 in the control group. There was, however, a significant difference in the students' perceived degree of difficulty, with the model group reporting less difficulty than the control group.

Although this study had limited statistical power, it is a solid preliminary study indicating that inexpensive, basic 3D printed models can aid in teaching clinical anatomy to veterinary students. The model focused on providing an additional opportunity for the trainee to palpate the key bony landmarks necessary for performing the epidural. It was neither designed nor able to simulate other components of the procedure, such as the sensation of palpating the ligamentum flavum or the negative pressure felt when the needle correctly enters the epidural space. Despite these limitations, evidence for improved confidence was identified, further supporting the hypothesis that 3D printed tools are advantageous in veterinary education.
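The limited power the authors acknowledge is easy to see by putting the reported 6/10 vs. 3/10 success counts through Fisher's exact test. The short Python check below (using scipy, an assumption of convenience) shows why a trial of two 10-student groups cannot declare that difference significant.

    # Fisher's exact test on the success counts reported in the text.
    from scipy.stats import fisher_exact

    #            success  failure
    table = [[6, 4],   # 3D model group
             [3, 7]]   # control group
    odds_ratio, p = fisher_exact(table)
    print(f"odds ratio = {odds_ratio:.2f}, p = {p:.3f}")  # ~3.50 and ~0.37: not significant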


3.3 Canine Musculoskeletal Models

1. In 2020, Nunez et al. from Brazil reported the successful creation of a 3D printed canine hip dysplasia model for veterinary education [24]. The primary goal of the study was to use 3D printing to reproduce the bony deformations found in canine hip dysplasia. The authors applied a unique approach: high-definition scanning of a normal canine pelvis, followed by manual anatomical changes to the digital images to create different degrees of hip dysplasia. The type of 3D scanner (Matter and Form©) used in this study is relatively inexpensive ($749 USD), and the currently available model (2021) has an even better scan accuracy of ±0.1 mm; the scanner accuracy described by Nunez et al. was ±0.25 mm. Based on the 3D printed model presented in the study, the scanning was excellent and captured the bony details well. Manual software editing of the scanned normal canine pelvis allowed five different degrees of hip dysplasia severity to be created in accordance with reported radiographic standards [25]. The bones for the models were 3D printed individually using a UP Mini© printer with ABS filament. After printing, the pelvis models were assembled using elastic filament and radiographed to evaluate their suitability for teaching. Each radiographed model was measured to determine whether the degree of acetabular deformity and dislocation matched the expected level for that grade of dysplasia.

Although no educational trial was reported as part of this study, the authors confirmed the model's accuracy and suitability for adjunctive teaching. A key advantage is the student's ability to manipulate the model and view the corresponding radiographs: this takes the 2D radiographic image and provides a 3D reproduction for students to touch, rotate, and manipulate. The full canine hip dysplasia model was relatively low cost in terms of materials ($20.25) but very time consuming, with 60 h of creation time and 73.5 h of printing time. The additional creation time was due to the manual digital image editing required to create the correct degrees of hip dysplasia. This study provides a template for using digital editing to create abnormalities from normal tissue scans that can be printed and used for teaching, including radiographic imaging.
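The digital-editing workflow here lends itself to scripting. The Python sketch below uses trimesh to generate graded subluxation from a scanned pelvis and femoral head; the library, file names, and millimeter offsets are all illustrative assumptions, since the study's editing software and the radiographic standards in [25] define the real parameters.

    # Illustrative mesh editing: graded femoral head displacement (assumed tooling).
    import trimesh

    pelvis = trimesh.load("normal_pelvis.stl")   # hypothetical scan exports
    femur = trimesh.load("femoral_head.stl")

    # Translate the femoral head laterally to mimic increasing subluxation;
    # offsets are placeholders, not validated radiographic grades.
    for grade, offset_mm in enumerate([0.0, 1.5, 3.0, 4.5, 6.0], start=1):
        displaced = femur.copy()
        displaced.apply_translation([offset_mm, 0.0, 0.0])
        model = trimesh.util.concatenate([pelvis, displaced])
        model.export(f"hip_dysplasia_grade_{grade}.stl")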

3.4 Feline Larynx for Teaching Endotracheal Intubation

1. In 2020, Clausse et al. reported the successful creation of a feline laryngeal model for teaching endotracheal intubation [26]. In this study, the authors demonstrated how to build a low-cost, portable clinical skill simulator using 3D printing. Oberi et al. similarly reported a high-fidelity rabbit nasotracheal intubation simulator [27]. The most unique part of this feline study design was how 3D printing was used to create the simulator.


Often the printed material is used directly as the primary teaching and learning tool; in this study, however, the 3D printed model of the cat laryngeal lumen was used as a casting core for a silicone mold of the larynx. Creating the 3D printed feline laryngeal lumen required several steps. First, a CT scan was performed on a cat to obtain the imaging data. Next, significant manual digital editing was performed to correct minor anomalies and to make the size adjustments and structural edits needed to facilitate printing and molding of the larynx. Interestingly, the authors chose to print the laryngeal lumen in two separate pieces to ensure strength, gluing the pieces together afterwards (see the sketch below). Lastly, printing was performed on a Prusa i3 Steel RepRap machine using PLA material and took 3 h. This is a high-quality, reliable, relatively inexpensive workhorse of a 3D printer with wide applications.

At this stage in the process, the authors switched to constructing the physical model using molding techniques: the 3D printed laryngeal lumen was used to cast the silicone walls of the larynx. After several prototypes, the final version of the simulator was an inexpensive wooden box costing $6, with the 3D printed mold costing $15. This prototype, called the LaryngoCUBE, was evaluated at a veterinary conference by 46 veterinarians and anesthesiologists. Feedback on the simulator's anatomic and material fidelity was good overall, and it was deemed suitable for training veterinary students. As a pilot study, this inexpensive model is authentic and should serve as a good adjunct in clinical skill training for veterinary students. This report provides evidence for the successful use of 3D printed anatomic structures to make silicone molds for endotracheal intubation skill training.
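Splitting a model into separately printable pieces, as the authors did with the laryngeal lumen, can also be scripted. The sketch below uses trimesh's plane-slicing utility with an arbitrary transverse cut; the library choice, file names, and cut plane are assumptions rather than the authors' method.

    # Split a model into two capped halves for separate printing (illustrative).
    import trimesh

    larynx = trimesh.load("feline_larynx_lumen.stl")  # hypothetical file name
    origin = larynx.centroid

    # Slice along a transverse plane through the centroid; cap=True closes
    # each half so both pieces remain watertight solids for printing.
    upper = trimesh.intersections.slice_mesh_plane(
        larynx, plane_normal=[0, 0, 1], plane_origin=origin, cap=True)
    lower = trimesh.intersections.slice_mesh_plane(
        larynx, plane_normal=[0, 0, -1], plane_origin=origin, cap=True)

    upper.export("larynx_upper.stl")
    lower.export("larynx_lower.stl")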

3.5 Canine Pulmonic Stenosis Model

1. In 2019, Steiger-Vanegas et al. reported success in printing canine hearts with anomalous coronary artery anatomy for improved client communication and veterinarian training [28]. This work builds on previous findings indicating a significant advantage of physical 3D models over textbooks and 3D computer models [2]: tactile manipulation promotes better comprehension of concepts and spatial relationships, which can be very difficult to grasp with cardiovascular anomalies. The authors hypothesized that computed tomography angiography (CTA) images would successfully identify the anomalous coronary arteries in dogs and allow full-sized 3D printed models to be created. To test this hypothesis, diagnostic CTA images collected from dogs on a Toshiba 64-slice CT scanner were used. The authors reviewed records from 2009 to 2016 and identified six dogs with CTA scans. Although the diagnostic image quality was adequate to confirm coronary artery anomalies in all six dogs, only four of the dogs' datasets were sufficient for 3D printing. As with all 3D printing projects, post-imaging editing and segmentation were needed to create the final printable model. In the two dogs without printable image data, the cause was identified as the use of CTA without electrocardiographic (ECG) gating. With this protocol, there was too much motion artifact from the beating heart to produce accurate models for printing. This is an important point, as image quality limits both the automatic and the manual segmentation process. If the goal is to create anatomically correct 3D models of a patient's heart condition, it is essential to minimize motion artifact during CTA.
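Because ungated CTA ruled out printing in two of the six dogs, screening a series for gating metadata before attempting segmentation is a sensible first step. The pydicom sketch below checks two common cardiac DICOM tags; tag usage varies by scanner, so this is a heuristic illustration rather than the authors' workflow.

    # Heuristic check for ECG gating in a CTA series (illustrative).
    import glob
    import pydicom

    def looks_ecg_gated(path: str) -> bool:
        ds = pydicom.dcmread(path, stop_before_pixels=True)
        # Trigger Time (0018,1060) and Nominal Percentage of Cardiac Phase
        # (0020,9241) are common markers of cardiac-gated acquisitions.
        return "TriggerTime" in ds or "NominalPercentageOfCardiacPhase" in ds

    files = glob.glob("cta_case/*.dcm")  # hypothetical path
    if files and all(looks_ecg_gated(f) for f in files):
        print("Series appears ECG-gated")
    else:
        print("Warning: likely ungated; expect cardiac motion artifact")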


The 3D models were printed using FDM technology with a uPrint SE Plus machine and ABS filament. The printing details were not provided in the report; however, this is a high-quality, affordable desktop 3D printer with sufficient resolution for printing blood vessels