Bopaya Bidanda Paulo Jorge Bártolo Editors
Virtual Prototyping & Bio Manufacturing in Medical Applications Second Edition
Editors Bopaya Bidanda University of Pittsburgh Department of Industrial Engineering Pittsburgh, PA, USA
Paulo Jorge Bártolo Centre for Rapid and Sustainable Product Polytechnic Institute of Leiria Leiria, Portugal
ISBN 978-3-030-35879-2    ISBN 978-3-030-35880-8 (eBook)
https://doi.org/10.1007/978-3-030-35880-8

1st edition: © Springer Science+Business Media, LLC 2008
© Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
This book is dedicated to our parents
Lucilia Pinto Dias and Antonio Dias (late)
Neena and Monapa Bidanda (late)
Claudina Coelho da Rocha and Arlindo Terreira Galha (late)
Maria Alice and Francisco Bartolo
And our families
Helena and Pedro
Louella, Maya, and Rahul
For their constant support throughout this project
Finally, we would like to give a special acknowledgement to Dr. Fengyuan Liu. Without her untiring efforts this book would never have been completed.
Preface
We are especially pleased to present the second edition of our edited book in an area that is quickly emerging as one of the most active research areas that integrates both engineering and medicine. Research in this area is growing by leaps and bounds. Preliminary research results show significant potential in effecting major breakthroughs ranging from a reduction in the number of corrective surgeries needed to the "scientific miracle" of generating tissue growth. Billions of dollars/euros/pounds have been invested in tissue engineering over the past decade; a large and significant component of this is in the area of virtual and physical prototyping. As a result, we have made significant changes to this edition, as you will see from our summary of chapters below.

Virtual and physical prototyping can broadly be divided into three categories: modeling, manufacturing, and materials. This book focuses on the first part and some areas of the second. The second book in this series will focus on the areas in the second and third categories. As you will see from this book, the principles utilized draw heavily from the more traditional engineering fields including mechanical engineering, industrial engineering, civil engineering (structures), and bioengineering.

The first chapter by Ekinci et al. provides insights into the essentials and efficient methods to design and fabricate an optimal vascular network for tissue engineering. The physiological considerations in design, the advantages, and in vitro studies of the fabricated optimal vascular vessels are described and explored.

In Chap. 2, Ming Leu and his group of researchers review the use of virtual reality technology for a virtual bone surgery simulation system, which can be used for training in orthopedic surgery and planning of bone surgery procedures. Then, they discuss the basic methods and techniques used to develop these systems.

Amorim et al. present fundamental aspects related to the generation and visualization of models for three-dimensional printing in Chap. 3. Concepts including medical imaging, preprocessing, segmentation, volume rendering, image data representation, 3D printing, and biofabrication are presented.

In Chap. 4, Naing et al. present a system of CAD structures based on convex polyhedra for use with rapid prototyping (RP) technology in tissue engineering
applications that allows the designer (given the unit cell and the required dimensions) to automatically generate a structure that is suitable for the intended tissue engineering application.

Scaffolds are key structures in tissue engineering as they provide an initial biochemical substrate for the novel tissue until cells can produce their own extracellular matrix. Therefore, scaffolds not only define the 3D space for the formation of new tissues but also serve to provide tissues with appropriate functions. Several techniques have been developed to produce scaffolds. In Chap. 5, Huang et al. review the current state of the art of additive manufacturing techniques used for tissue engineering. Different additive techniques are described, and their main advantages and disadvantages are analyzed.

Antman-Passig and Shefi, in Chap. 6, present and describe the tissue engineering techniques and advanced fabrication strategies for oriented scaffolds and nerve conduits for nerve repair.

Bioengineering strategies strongly depend on both material and manufacturing processes. It is currently difficult for a single fabrication technique to meet the requirements of tissue regeneration at all scales. Many advanced biofabrication techniques have been developed to produce tissue constructs with improved properties in both mechanical and biological aspects for broad biomedical engineering applications. The seventh chapter by Aslan et al. reviews and discusses the use of different electrospinning techniques and hybrid electrospinning and melt-extrusion techniques to produce polymer and polymer composite nanofiber meshes for skin repair and regeneration.

In Chap. 8, Liu et al. describe and discuss different types of bioprinting techniques and classify them into three main categories: basic, semi-hybrid, and fully hybrid additive systems. The main advantages and disadvantages of the systems are analyzed.

In Chap. 9, Guardado and Cooper review the various attempts to use rapid prototyping techniques to directly or indirectly produce scaffolds with a defined architecture from various materials for intervertebral disc components, the nucleus pulposus, and annulus fibrosus replacement.

In Chap. 10, Al-Tamimi et al. describe recent work on powder bed fusion for making tissue engineering metallic fixation implants. A general overview of the fundamentals of bone characteristics, metallic biomaterials, and metallic powder bed fusion techniques is provided, as well as models of heat, mass, and momentum transport phenomena associated with melting and solidification of metallic powders. The chapter also provides in-depth information about powder bed fusion of titanium and titanium alloys, cobalt-based alloys, and stainless steel for making bone tissue engineering fixation implants. Examples of internal implants produced by EBM or SLM are presented, discussing their mechanical and biological performance, stress shielding, personalization, and the reduction of the total surgical procedure.

The last chapter by Xu and Bártolo introduces the structure and the regeneration process of the nerve tissue and explains the current strategies used to treat nerve injury. A review on the current state of the art in the scaffold design requirements and additive manufacturing techniques for nerve scaffold fabrication within the tissue engineering context is provided. The techniques focus on the extrusion-
based techniques, vat photopolymerization, and electrospinning. The associated advantages and limitations are also discussed.

The production of this book has been a most enjoyable experience. We thank the authors for their valuable and timely contributions to this volume. We would also like to thank Dean Jimmy Martin, US Steel Dean of Engineering, University of Pittsburgh; Prof. Martin Schröder, Dean of the Faculty of Science and Engineering; Prof. Alice Larkin, Head of the School of Engineering; and Prof. Tim Stallard, Head of the Department of Mechanical, Aerospace and Civil Engineering, University of Manchester, for their support of our academic endeavors. We would especially like to acknowledge the unlimited patience and constant support of Brinda Megasyamalan of Springer.

Finally, we would like to acknowledge the multiple contributions of Dr. Fengyuan Liu. Without her untiring efforts, abundance of patience at our tardiness, and outstanding project management skills, this edition would never have been completed. Thank you Fengyuan!

Pittsburgh, PA
Leiria, Portugal
Bopaya Bidanda Paulo Jorge Bártolo
Contents

1  Optimised Vascular Network for Skin Tissue Engineering by Additive Manufacturing . . . . . 1
   Alper Ekinci, Xiaoxiao Han, Richard Bibb, and Russell Harris
2  Virtual Bone Surgery . . . . . 21
   Ming C. Leu, Wenjin Tao, Qiang Niu, and Xiaoyi Chi
3  Three-Dimensional Medical Imaging: Concepts and Applications . . . . . 51
   Paulo Henrique Junqueira Amorim, Thiago Franco de Moraes, Jorge Vicente Lopes da Silva, and Helio Pedrini
4  Computer Aided Tissue Engineering Scaffolds . . . . . 77
   M. W. Naing, C. K. Chua, and K. F. Leong
5  Additive Biomanufacturing Processes to Fabricate Scaffolds for Tissue Engineering . . . . . 95
   Boyang Huang, Henrique Almeida, Bopaya Bidanda, and Paulo Jorge Bártolo
6  Engineering Oriented Scaffolds for Directing Neuronal Regeneration . . . . . 125
   Merav Antman-Passig and Orit Shefi
7  The Electrospinning Process . . . . . 153
   Enes Aslan, Henrique Almeida, Salem Al-Deyab, Mohamed El-Newehy, Helena Bartolo, and Paulo Jorge Bártolo
8  A Review of Hybrid Biomanufacturing Systems Applied in Tissue Regeneration . . . . . 187
   Fengyuan Liu, Cian Vyas, Jiong Yang, Gokhan Ates, and Paulo Jorge Bártolo
9  Low Back Pain: Additive Manufacturing for Disc Degeneration and Herniation Repair . . . . . 215
   Alexandra Alcántara Guardado and Glen Cooper
10 A Review on Powder Bed Fusion Additive Manufacturing for Metallic Fixation Implants . . . . . 235
   Abdulsalam Abdulaziz Al-Tamimi, Mohammed S. Al-Qahtani, Fengyuan Liu, Areej Alkahtani, Chris Peach, and Paulo Jorge Bártolo
11 Scaffold Design for Nerve Regeneration . . . . . 257
   Zhanyan Xu and Paulo Jorge Bártolo
Index . . . . . 285
Chapter 1
Optimised Vascular Network for Skin Tissue Engineering by Additive Manufacturing Alper Ekinci, Xiaoxiao Han, Richard Bibb, and Russell Harris
1.1 Introduction

Many clinical therapies utilise autologous grafts and allografts to repair skin defects resulting from genetic disorders, acute trauma, chronic wounds or surgical interventions. Tissue engineering (TE) of skin is an emerging technology that offers many potential advantages in repairing skin defects over conventional autologous grafts [1]. It overcomes the shortage of donor organs and reduces the added cost and complications of tissue harvesting. Tissue-engineered skin can also be used as a skin equivalent for pharmaceutical or cosmetics testing, eliminating the need for animal testing [2]. A major issue in tissue engineering is that the artificial skin may not develop adequate vascularisation for long-term survival [3]. An artificial vascular system can be pre-embedded in a skin equivalent before it is implanted. The embedded network has three primary functions: (1) to supply nutrients and other soluble factors and to remove waste products from the surrounding cells, (2) to act as scaffolds for culturing vascular endothelial cells and (3) to develop
A. Ekinci
Wolfson School of Mechanical, Electrical and Manufacturing Engineering, Loughborough University, Loughborough, UK
X. Han ()
HNU College of Mechanical and Vehicle Engineering, Hunan University, Changsha, China
R. Bibb
Design School, Loughborough University, Loughborough, UK
R. Harris
Mechanical Engineering, University of Leeds, Leeds, UK
© Springer Nature Switzerland AG 2021
B. Bidanda, P. J. Bártolo (eds.), Virtual Prototyping & Bio Manufacturing in Medical Applications, https://doi.org/10.1007/978-3-030-35880-8_1
small sprouting capillaries that can be connected with existing blood vessels, also known as angiogenesis [1, 4–6].

Nutrition supply in the human body is realised by a very complex blood vessel network. It consists of vessels with diameters ranging from several millimetres down to several micrometres. To mimic the system, flexible structuring processes are needed. Traditional manufacturing technologies, such as spinning, dip-coating or extrusion, can produce linear tubes with different inner diameters [7]. However, it is not possible to generate branched vessels with decreasing or increasing internal diameters to mimic the natural changes in blood vessel networks.

Additive manufacturing (AM) technologies have made it possible for the first time to manufacture artificial blood vessels and their networks of any sophisticated geometry and connections. With AM, three-dimensional (3D) objects can be produced from 3D computer-aided design (CAD) data by joining materials together in a layer-by-layer manner. There are many AM technologies classified as bioprinting systems, based on microvalve deposition, ink-jetting, material extrusion and stereolithography (SLA) techniques [8, 9]. SLA has advantages in 3D printing microvascular vessel networks due to (1) its high resolution, (2) its ability to produce flexible materials and (3) excellent process control. The use of these AM technologies will enable the generation and mimicking of complex blood vessel networks under controlled conditions. Currently, various research groups have successfully 3D printed and tested such vascular vessels [4, 10–13]. Wu et al. [13] used transient inks to print a solid template within the substrate and then removed the ink to create microchannels. Hinton et al. [14] invented a freeform reversible embedding of suspended hydrogels method (called FRESH in their paper) to print hydrated materials that enables the printing of complex vascular architectures. However, in their work, vascular networks were printed with little understanding of the physiological demands. Therefore, general design guidance is missing.

Design parameters such as branch levels, branching points, branch angles, vessel diameters, the daughter vessel asymmetry ratio, wall shear stress (WSS) and recirculation areas should be considered carefully in the design of vascular vessels. Based on these parameters, this chapter presents guidance on the design optimisation of a vascular network manufactured by SLA for skin tissue engineering.
1.2 Design of Vascular Network

The main parameters considered in the design of a vascular network can be described in two categories: (1) the macro-scale parameters and (2) the micro-scale parameters. The macro-scale parameters include branch levels and branching point locations, while the micro-scale parameters include branch angles, vessel diameters, the daughter vessel asymmetry ratio, the WSS and the recirculation areas. Their definitions and illustrations are shown in Fig. 1.1.
Fig. 1.1 The macro-scale (a) and the micro-scale (b) parameter definitions [15]
In Fig. 1.1a, branch levels and branching points are illustrated, while Fig. 1.1b shows the parent diameter Rp, the daughter diameters Rd1 and Rd2, the total branching angle, the WSS and the recirculation areas.
1.2.1 Macro-Scale Design

The design of vascular networks is focused on bifurcations, because, in normal vasculature, around 98% of blood vessels bifurcate at each junction, while only
Fig. 1.2 Distributed configuration of the vascular network with different branching levels: (a) 2 levels, (b) 3 levels, (c) 4 levels [15]
Fig. 1.3 The first version of the vascular system [15]
2% trifurcate [16, 17]. As the 3D structures can be formed by stacking 2D vascular systems, the locations of the branching points having different branching levels, such as 2, 3 and 4 levels, are evenly distributed on the skin patch, as illustrated in Fig. 1.2a–c. The formula and calculation of the different branching levels are given in detail in [15]. Based on this calculation, the first configuration sketch of the vascular system is shown in Fig. 1.3. It is shown that sharp junctions are used at all bifurcation points. These sharp apices at junctions of bifurcated vessels need to be avoided because they are considered risk factors for local mechanical weakness [18]. Rounding (increasing the radius of) the apex at each junction can be one of the solutions. However, larger recirculation areas of blood are found in bifurcation vessels with rounded apices compared with sharp junctions [18]. Thus, a careful design of the bifurcation
junctions is necessary. At the macro-scale, the main objective of the design is to maximise the nutrient supply and the waste exchange to surrounding tissues and cells; nevertheless, the local bifurcation design needs to ensure that the shear stress on the vessel wall is in the healthy range at the micro-scale.
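The even distribution of branching points over the skin patch lends itself to a simple calculation. The sketch below only illustrates that idea and is not taken from [15]: it assumes a one-dimensional patch width, that every junction bifurcates, and that each branching point sits at the midpoint of the two outlets it feeds; the exact placement formulas behind Figs. 1.2 and 1.3 are given in [15].

```python
import numpy as np

def terminal_outlet_positions(patch_width_mm: float, levels: int) -> np.ndarray:
    """Evenly spaced terminal-branch positions across a skin patch for a tree
    that bifurcates at every junction (2**levels outlets), mirroring the
    distributed configurations of Fig. 1.2."""
    n_outlets = 2 ** levels
    spacing = patch_width_mm / n_outlets
    return spacing * (np.arange(n_outlets) + 0.5)

def branching_point_positions(patch_width_mm: float, levels: int) -> list:
    """x-positions of the branching points at each level, taken here as the
    midpoints of the two outlets each parent feeds (a simplifying assumption)."""
    tree = [terminal_outlet_positions(patch_width_mm, levels)]
    for _ in range(levels):
        prev = tree[-1]
        tree.append(0.5 * (prev[0::2] + prev[1::2]))  # one parent per pair of children
    return tree[::-1]  # inlet level first, terminal outlets last

# A 10 mm wide patch with 3 branching levels -> 8 outlets spaced 1.25 mm apart
for level, xs in enumerate(branching_point_positions(10.0, 3)):
    print(level, np.round(xs, 3))
```

For a 10 mm wide patch with 3 branching levels this yields 8 evenly spaced terminal outlets, 1.25 mm apart, fed through 4, then 2, then 1 branching point.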
1.2.2 Micro-Scale Design

The WSS is a critical haemodynamic indicator that affects endothelial cell development [17–20]. Many researchers have found that branching angles have a significant effect on WSS in the bifurcation of a branch vessel [21–24]. The maximum curvature of the junction is the most important factor that influences WSS. High curvature also leads to stress concentration, which weakens the system mechanically [18, 19, 25, 26]. The volume (V) of the junction is another important factor in haemodynamics [27, 28]. A large volume leads to local recirculation of the blood [27, 28]. Another physiological requirement at the micro-scale is to ensure minimal recirculation areas where nutrients and oxygen may be trapped.

1.2.2.1 Branch Angle and Vessel Diameters
Design approaches to optimise a vascular network have been based on the minimisation of the sum of the energy required for pumping blood through the network and the energy required for the metabolic supply of the blood volume. To minimise the energy, Murray's law given in Eq. (1.1) is applied [29, 30]:

$R_p^3 = R_{d1}^3 + R_{d2}^3$   (1.1)
Using Murray’s law, the radii of daughter vessels (Rd1 and Rd2 ) can be obtained based on the radius of their parent vessels (Rp ). It has been confirmed that most natural vascular systems follow Murray’s law [31]. It is widely recognised that local geometries of a vascular bifurcation, such as bifurcation angles, junction curvatures and branching, are major features of the arterial system [20, 25]. The basic principle for a good junction design is therefore to ensure that the volume of the junction remains in a desired narrow range while limiting the maximum curvature. The exact range and limit depend on specific applications. Han et al. [32] developed a mathematical model using parameters such as bifurcation angles, and diameters of parent and daughter vessels. All the parameters in the model influence the junction volume V and the maximum curvature Cmax . In their paper, a systematic parametric study was carried out to establish a set of simple design rules to achieve a balance between V and Cmax . A parametric map, which can be used as a guide for designers, is provided based on the parametric study. The parametric study shown in Fig. 1.4 calculated Cmax and V for bifurcation angle of 30◦ , 50◦ and 85◦ and more detailed information can be found in the study completed by Han et al. [32].
Fig. 1.4 A Cmax–V map for random parameters [32]
Fig. 1.5 Cmax–V plots for different bifurcation angles: φtotal = 45°, φtotal = 60°, φtotal = 70° and φtotal = 85° [32]
A further analysis was completed for four different values of ϕtotal, the total bifurcation angle of the vascular branch, shown in Fig. 1.5, to understand the relationship between Cmax and V for each bifurcation angle. ϕtotal is the sum of ϕ1 and ϕ2, where ϕ1 and ϕ2 are the bifurcation angles of the two daughter vessels. For each bifurcation angle ϕtotal, one dashed line and one solid line are plotted, reflecting a band corresponding to different values of α, where α is defined in Eq. (1.2) as:

$\alpha = \min(\varphi_1, \varphi_2) / \max(\varphi_1, \varphi_2)$   (1.2)
The band is rather narrow showing the insensitivity to α. Figure 1.5 can be used as a design guide to find the possible combinations of Cmax and V.
1.2.2.2 WSS and Recirculation Areas
Computational fluid dynamics (CFD) studies of blood flow behaviour have to be based on accurate modelling of local vascular geometries [19, 20, 26–28]. Computer modelling of vascular bifurcations can be achieved in three ways: (1) skeleton-based implicit surfaces [33–35], (2) blending objects obtained by canal surfaces [36, 37] and (3) sweeping disks or spheres along curves [34, 38, 39]. Of the previous research on modelling vascular structures, Cai et al. proposed a relatively simple method based on surface sweeping techniques [38, 39]. In medical imaging, every detail of the vascular geometry has to be captured accurately. In tissue engineering, on the other hand, one only needs to control some key factors when designing an artificial vascular network. It is therefore possible to select a simple method for the convenience of tissue engineering researchers. Ensuring the smoothness of the junction and keeping a relatively small branch angle, as observed in the human body, is very important to avoid high WSS and recirculation.

Vascular branches were constructed using the algorithm described in [32]. Figure 1.6 shows the mid-sections of the smooth branches with three different joining angles of 45°, 85° and 125°. CFD simulations were carried out for haemodynamic analysis of the different branch designs given in Table 1.1. The definition of the parameters used in this table is fully explained in Han et al. [40]. The purpose of the analysis is to compare the different designs in terms of the WSS and flow behaviour. CFD simulations were performed for cases 1, 2 and 5. In all cases two flows merge from the daughter vessels into the branch, leading to a volume expansion. Using the same branching volume for all three cases, the effect of branch geometry, such as the branching angles, can be analysed.
Fig. 1.6 Mid-sections of the smooth branches with joining angles of 45◦ , 85◦ and 125◦ [40]
Table 1.1 Value of controlled variables for different cases [40]

Case   Branch angle   Cmax   V
1      125°           0.44   5.4
2      85°            1.43   5.4
3      45°            1.34   5.9
4      45°            3.02   5.56
5      45°            5.37   5.4
6      45°            7.5    5.3
7      45°            14     5.2
Fig. 1.7 Recirculation area for branching angles 45◦ (case 5), 85◦ (case 2) and 125◦ (case 1) [40]
The negative velocity observed in all the cases indicates backflow. In Fig. 1.7, it can be observed that the backflow induces recirculation in the branching area. Two vortices can be seen in the branching area, although their magnitude is small compared with the surrounding velocity field. A region with low-velocity vortices is known as a flow recirculation area. Nutrients in arteries or waste in veins can be trapped in such an area. Therefore, it is important to understand how the recirculation area of a rounded junction affects the flow velocity profile and the WSS downstream. In Fig. 1.7, the ratios of recirculation area over the whole branch are (1) 45°: 26.4%, (2) 85°: 24.9% and (3) 125°: 21.8%. The junction with a 45° branch angle has the largest recirculation area, while the junction with a 125° branch angle has the smallest.

WSS is one of the most significant haemodynamic factors related to blood vessel development and cardiovascular diseases [19, 25, 26]. In healthy cerebral arteries, the WSS ranges from 1 to 7 Pa [41]. WSS higher than 7 Pa can damage the endothelial cells during vascular remodelling, while WSS lower than 1 Pa can lead to the formation of plaque due to insufficient mechanical stimulation of the endothelial cells [41]. WSS distributions for the different smoothed cases and their sharp counterparts are shown in Fig. 1.8. In the junction, the WSS can be many times higher than that in the straight vessel. WSS in the smoothed junction has a different distribution compared with those in the sharp junctions. A high WSS (12 Pa) is found on the sharp junction compared to the rounded one (10 Pa), as shown in Fig. 1.8a.
Fig. 1.8 Wall shear stress distribution for branching angles (a) 45°, (b) 85° and (c) 125° for the smoothed model (left) and sharp model (right) [40]
The area of low WSS in the smoothed junction is larger than that in the sharp one due to recirculation. Figure 1.8a shows that at z = 15 mm, the WSS distribution is more uniform, with a low average value of 4 Pa in comparison with 5 Pa in a sharp junction. In Fig. 1.8b, similar values of maximum WSS can be observed for both models (~14 Pa). In the smoothed model, the distribution of WSS is more intense at the beginning of the downstream flow, but a more uniform distribution of low values is found at z = 20 mm in comparison with the sharp model. It is found in Fig. 1.8c that the recirculation area has a similar but weak influence on the WSS distribution in a rounded junction in comparison to the sharp junction.

A correlation between the WSS reduction and the bifurcation angle is shown in Fig. 1.9. From Fig. 1.9, it is seen that junctions with larger bifurcation angles result in a smaller WSS reduction. With further increase in the bifurcation angle, the WSS reduction decreases more slowly. This indicates that the smoothed design has less effect on WSS reduction downstream for bifurcations with larger angles. In the parametric design model of a branch junction, Cmax and its corresponding V are the most important geometric parameters. A larger Cmax leads to a smaller V, thus a smaller branching area.
Fig. 1.9 WSS reduction at z = 0.15 m using the parametric model compared with sharp bifurcations versus bifurcation angles [40]
Further increasing Cmax, however, has a limited effect on the branching area as V will decrease more slowly. In this section, CFD simulations are presented for smoothed junctions with different Cmax for a branching angle of 45° (cases 3–7).
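The 1–7 Pa healthy range quoted above can also serve as a quick screening criterion for straight vessel segments before running full CFD. The sketch below uses the fully developed laminar (Poiseuille) relation τ = 4μQ/(πR³), which is a simplification introduced here and not the CFD approach used in the chapter; the blood viscosity default and the example channel radius are likewise assumed values.

```python
import math

def poiseuille_wss(flow_rate_ul_per_min: float, radius_mm: float,
                   viscosity_pa_s: float = 3.5e-3) -> float:
    """Wall shear stress (Pa) in a straight vessel under fully developed
    laminar flow: tau = 4*mu*Q / (pi * R**3). The 3.5 mPa*s blood viscosity
    is an assumed typical value, not a figure from the chapter."""
    q_m3_per_s = flow_rate_ul_per_min * 1e-9 / 60.0   # uL/min -> m^3/s
    r_m = radius_mm * 1e-3                            # mm -> m
    return 4.0 * viscosity_pa_s * q_m3_per_s / (math.pi * r_m ** 3)

def in_healthy_range(wss_pa: float, low: float = 1.0, high: float = 7.0) -> bool:
    """Compare against the 1-7 Pa healthy range quoted for cerebral arteries [41]."""
    return low <= wss_pa <= high

# Example: the 620 uL/min bioreactor flow through a hypothetical 0.5 mm radius channel
tau = poiseuille_wss(620.0, 0.5)
print(f"WSS = {tau:.2f} Pa, within healthy range: {in_healthy_range(tau)}")
```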
1.2.2.3 Daughter Vessel Asymmetry Ratio
According to [42], the WSS in a vascular bifurcation is related to two local parameters: R+, the asymmetry ratio, and the total bifurcation angle, both shown in Fig. 1.1b. R+ is calculated using Eq. (1.3):

$R^{+} = R_{d1} / R_{d2}$   (1.3)
Khamassi et al. [42] established a CFD simulation to analyse how α and R+ affect the minimal WSS at bifurcation junctions. They also generated a diagram to explain their correlations. This diagram is used as a guide for the selection of bifurcation angles, shown in Fig. 1.10. In Fig. 1.10, the circles represent the WSS values found using different combinations of R+ and α. The contour lines in Fig. 1.10 show the WSS values interpolated from these results. A distinct optimum appears, shown as a square in Fig. 1.10. This diagram suggests that the branch angle and the asymmetry are the major geometric parameters of physiological bifurcations. The selection of the bifurcation angle and the asymmetry should be in the range of the contour lines to lead to proper function: the bifurcation angle ranges from 60° to 140°, while the asymmetry ratio ranges from 0.6 to 1.
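Those two ranges can be turned into a simple design check for each junction before printing. The sketch below is only an illustration of that rule of thumb; taking R+ as the smaller daughter radius over the larger one is an assumed convention, and the function and dictionary layout are not from the chapter.

```python
def check_bifurcation_geometry(total_angle_deg: float,
                               rd1_mm: float, rd2_mm: float) -> dict:
    """Check a junction against the ranges read from Fig. 1.10: total
    bifurcation angle between 60 and 140 degrees and asymmetry ratio between
    0.6 and 1. R+ is taken as the smaller radius over the larger one."""
    small, large = sorted((rd1_mm, rd2_mm))
    r_plus = small / large
    return {
        "angle_ok": 60.0 <= total_angle_deg <= 140.0,
        "asymmetry_ok": 0.6 <= r_plus <= 1.0,
        "R_plus": round(r_plus, 3),
    }

# A 45-degree junction with daughters of 0.70 mm and 0.87 mm:
print(check_bifurcation_geometry(45.0, 0.70, 0.87))
# {'angle_ok': False, 'asymmetry_ok': True, 'R_plus': 0.805}
```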
Fig. 1.10 WSS as a function of the bifurcation geometry [15]
Fig. 1.11 The new version of the vascular system [15]
In the first version of the vascular system, bifurcation angles and the asymmetry were checked, and it was found that two bifurcation angles were out of the range. An algorithm was then developed to fix the problem [15]. By updating this algorithm, the bifurcation points with sharp apices are rounded as can be seen in Fig. 1.11. To optimise a vascular network embedded in the skin patch to supply tissues and cells with nutrients and oxygen, exchange waste and to support angiogenesis, design criteria are considered in both macro-scale and micro-scale. It can be described as four criteria, which are as follows: • • • •
to maximise the nutrient supply and waste exchange to minimise the resistance to blood flow to ensure the shear stress on the vessel wall is in the healthy range to avoid the blood recirculation.
1.3 The Application: Optimised Vascular Network Design for Skin Tissue Engineering

Selecting the material and manufacturing technology is important when designing a vascular network for skin tissue engineering using SLA. The material selected for manufacturing a vascular network needs to have appropriate viscosity and polymerisation characteristics to allow it to be 3D printed successfully. Additionally, it also needs to have vessel-like properties, such as appropriate elasticity, biocompatibility and surface readiness for bio-coatability. The well-proven AM process SLA makes it possible to manufacture complex geometries such as a vascular network.
1.3.1 Additive Manufacturing Technologies for Biomanufacturing

SLA was developed in the 1980s and was one of the first commercial AM processes [43, 44]. Conventional SLA machines have vertical resolutions in the range of 150 μm. Further developments known as "micro SLA" can create geometries with high complexity [45] and with resolutions below 150 μm in all three spatial directions. Layer heights of less than 10 μm allow the replication of capillaries that are essential for the metabolism in the tissue. Alternative AM methods are not able to produce such high-resolution structures [10, 46]. The high resolution of AM cell scaffolds or membranes enables targeted cell alignment, cell growth and cell interaction [46, 47].

The SLA process relies on photo-polymerisation, and suitable resins consist of monomers and photoinitiators (PI) that are typically toxic. Consequently, for biomedical applications, it is paramount to guarantee that the PI degrades completely during the polymerisation process. This challenge can only be overcome by interdisciplinary process improvement, including the material, the SLA process and the environmental conditions, since in state-of-the-art implementations a typical degree of polymerisation is between 40 and 70%, resulting in a considerable amount of remaining PI [48]. A promising approach to guarantee complete polymerisation and to prevent the formation of unwanted compounds uses an inert atmosphere. Only in the absence of oxygen is it possible to achieve complete crosslinking and full disappearance of cytotoxic PI and monomers.
1.3.2 Materials and Methods

The material chosen for the application consisted of three monomers plus a photoinitiator. The monomers are BPA-ethoxylated-diacrylate, lauryl acrylate and isobornyl acrylate (BLI). Three types of photoinitiator were tested to select the most appropriate one with the lowest cytotoxicity. They are Irgacure® 184,
Irgacure® 2959 and Irgacure® 369. BLI samples with the three photoinitiators were investigated by two methods: the WST-1 assay (three replicates of each kind of material; sample characteristics: diameter 14 mm, height 1–2 mm, weight 200 mg) and the live/dead assay (sample characteristics: diameter 14 mm, height 2–3 mm, weight 300 mg). All samples were disinfected in 70% ethanol for 30 min (during which the samples swelled), washed two times for 30 min with PBS and equilibrated in cell culture medium.

In the WST-1 assay, eluates from different specimens were tested for cytotoxic components. The specimens were eluted by incubation in complete cell culture medium for 24 h and following periods at 37 °C. According to ISO norm 10993-5, a volume of 1 mL medium was applied per 0.2 g material. Eluate samples were taken after 1, 2, 3, 6, 7, 8, 9, 10, 13, 15, 17, 21 and 24 days, and the medium was completely renewed. The elution periods of days 0–1, 1–2, 2–3, 6–7, 10–13 and 21–24 were finally tested. For this, 3T3 cells were pre-cultured for 1 day in a 96-well tissue culture plate carefully inoculated with about 8000 cells per well. Then the medium was exchanged for the eluate samples (four replicates from each eluate sample, resulting in 12 replicates representing the same material sample), and the cells were incubated with the eluates for 24 h under cell culture conditions at 37 °C in a 5% CO2 atmosphere. The negative control (representing no cytotoxic influence) received pure cell culture medium, whereas the positive control (representing the highest level of cytotoxicity and complete inhibition of dehydrogenase activities) was obtained from wells without cells containing only culture medium. Eluate samples and controls were exchanged after 24 h for medium with WST-1® reagent but without phenol red and cultured for about 20 min to 1 h. Formation of coloured formazan was measured by the optical density at 450 nm. The development of dye intensity was monitored so that measurements were taken at a time point when the optical density of the negative control was between 0.2 and 0.6.
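The ISO 10993-5 ratio used here (1 mL of medium per 0.2 g of material) gives the elution volumes directly. A minimal sketch, with the 500 mg case added purely as a hypothetical illustration:

```python
def elution_volume_ml(sample_mass_mg: float) -> float:
    """Elution medium volume following the ISO 10993-5 ratio used above:
    1 mL of complete cell culture medium per 0.2 g of material."""
    return (sample_mass_mg / 1000.0) / 0.2

print(elution_volume_ml(200))  # 1.0 mL for the 200 mg WST-1 samples
print(elution_volume_ml(500))  # 2.5 mL for a hypothetical 500 mg sample
```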
1.3.3 In Vitro Testing

Human adipose-derived stem cells (hASCs) and pericytes were isolated from human tissue derived from patients who underwent regular surgical treatment and signed an informed consent at the BG University Hospital Bergmannsheil in Bochum, Germany. The hASCs were cultured in DMEM-HAMS-F12, and the pericytes were cultured in a pericyte growth medium (PGM, PromoCell). In the context of the study, methacrylated gelatine (5%; IGB) was used as a scaffold [4]. Using methacrylated gelatine and a photoinitiator (LAP; INN), a stable hydrogel was created. Within this hydrogel, three different cell types were distributed: 600,000 HUVECs, 600,000 hASCs and 60,000 pericytes. In the hydrogel three different shapes of tubes were created: (1) a stainless steel moulded tube, (2) a single tube 3D printed by SLA using BLI with Irgacure® 184 and (3) a branched network 3D printed by SLA using BLI with Irgacure® 184.
Fig. 1.12 The bioreactor system used for the tube supported tissue culture [15]
The bioreactor system developed in [22] was driven by a pump-sleeve system to deliver medium (620 μl/min) to the hydrogels. The pump was connected to a nutrient bottle and the hydrogel-containing chamber (see Fig. 1.12). To run the bioreactor system, a medium mixture was prepared in the same ratio as the corresponding cells were distributed. The hydrogels were cultured at 37 °C and 5% CO2 for 7 days. After 7 days, a live/dead assay was performed, staining the hASCs with calcein (green) and propidium iodide (red).
1.4 Results and Discussion

1.4.1 Cytotoxicity Testing for Photoinitiators

Firstly, in the live/dead assay, to evaluate the cytotoxic effect observable in case of direct contact of the cells with the material, the percentage of dead cells was determined by manually counting live and dead cells from fluorescence micrographs. Below 5% dead cells (below 10% during longer culture) was considered to be not cytotoxic, since this can be observed in control cultures too. Low cytotoxicity is assumed from 5% to 20% dead cells, moderate cytotoxicity from 20% to 50%, and high cytotoxicity above 50%. The viability staining already revealed the high cytotoxicity of BLI with Irgacure® 2959, where no viable cells could be observed. BLI with Irgacure® 184 and BLI with Irgacure® 369 developed higher cell densities, from about 135–225 to 360–625 cells/mm² during the culture time, and had less than 10% dead cells (Table 1.2).
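The classification thresholds just described can be expressed as a small helper, useful when scoring counted micrographs. This is only a sketch of the stated rule; the handling of values falling exactly on the 20% and 50% boundaries is an assumption.

```python
def cytotoxicity_level(dead_cell_fraction: float, long_culture: bool = False) -> str:
    """Classify cytotoxicity from the fraction of dead cells using the
    thresholds stated above: below 5% (10% for longer cultures) not cytotoxic,
    5-20% low, 20-50% moderate, above 50% high."""
    pct = dead_cell_fraction * 100.0
    limit = 10.0 if long_culture else 5.0
    if pct < limit:
        return "not cytotoxic"
    if pct <= 20.0:
        return "low cytotoxicity"
    if pct <= 50.0:
        return "moderate cytotoxicity"
    return "high cytotoxicity"

# e.g. ~8% dead cells, as reported for BLI with Irgacure® 184 and 369
print(cytotoxicity_level(0.08))   # 'low cytotoxicity'
```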
Table 1.2 Levels of cytotoxicity determined by live/dead staining [15]

Material | Day 1 | Day 4 | Day 14
Control (glass) | No cytotoxicity | No cytotoxicity | No cytotoxicity
BLI with Irgacure® 184 | No cytotoxicity | Low cytotoxicity; increased number of cells from day 1 to day 4 | Low cytotoxicity
BLI with Irgacure® 2959 | High cytotoxicity | High cytotoxicity | High cytotoxicity
BLI with Irgacure® 369 | No cytotoxicity | Low cytotoxicity; increased number of cells from day 1 to day 4 | Low cytotoxicity
1.4.2 The Printed Vascular Network

The SLA process was developed in this work to produce the vascular network. It consists of a polymer bath, a laser and a scanner system with an F-theta lens (f = 100 mm) for fast beam deflection in the x-y direction. The polymer bath was positioned on a platform connected to a piezo-axis to allow positioning in the z-direction. For process development, different photo resins in combination with a photoinitiator (355 nm) were investigated. The detailed SLA setup is shown in Fig. 1.13. To define a reliable process regime, the PI concentration (0.5 wt% and 1 wt%), scan speed (5–600 mm/s), layer thickness (30 μm, 100 μm, 150 μm) and the distance between two lines were varied. Firstly, photo resin 3D-03H-87 (Marabu) with 1 wt% PI was used in printing to obtain the best set of parameters, which were 15 kHz, power 10.1 mW, scan speed 80 mm/s, line distance 30 μm and a layer thickness of 800 μm. Non-crosslinked photo resin was washed away with ethanol (70%). A branched vessel system was also printed from this material using the above parameters. It was shown that, by using the SLA technique, the designed branched blood vessel

Fig. 1.13 SLA process setups [15]
Fig. 1.14 The structures of the printed branched vessel network using BLI with Irgacure® 184 with pores [15]
network could be constructed. The structure of the printed vascular network using BLI with Irgacure® 184 with pores is shown in Fig. 1.14. The printing accuracy of angles and pores was tested by flow test with dye solution. It demonstrated that all pores were open. In the first experiments it was observed that branching angles show irregularities due to pores or problems from data slicing.
1.4.3 In Vitro Testing

hASCs were evaluated regarding their viability using a live/dead assay over 7 days of culture. Results of cell viability are shown in Fig. 1.15. Figure 1.15 illustrates the cell vitality within a 1 × 1 cm hydrogel supported via a stainless steel moulded central tube (Fig. 1.15a), an SLA-formed branched PA tube containing pores (Fig. 1.15b, c) and a single central SLA-formed PA tube containing pores (Fig. 1.15d). By comparing Fig. 1.15a, b, it is demonstrated that a branched tube can support the whole volume more appropriately than the central steel tube. Additionally, it is also shown that the surrounding cells get in contact with parts of the material, infiltrating the pores (Fig. 1.15c) and forming more complex structures within the hydrogel (Fig. 1.15d). Figure 1.16 gives a preliminary comparison of the dead cell rate with different embedded tubes. It can be seen that after 7 days the cell death rate in the branched vessel is the lowest (27%) compared with the pure hydrogel (35%) and the hydrogel with a single tube (55%). The pure hydrogel used as a scaffold in this work has proven to be non-toxic and has good biocompatibility. It can also be seen in Fig. 1.16 that the pure hydrogel scaffold has less than 50% cell death rate after 7 days. The curable resin made of BLI with Irgacure® 184 was proven to be biocompatible and cytocompatible.
Fig. 1.15 Cell vitality within a 1 × 1 cm hydrogel supported via (a) a stainless steel moulded central tube; (b, c) a branched BLI with Irgacure® 184 tube containing pores [15]; (d) single central SLA-formed BLI with Irgacure® 184 tube containing pores Fig. 1.16 The dead cell rate (%) after 7 days with no embedded tube, single tube and branched tube (sample size for each is 3) [15]
1.5 Conclusion

In this chapter, an optimised vascular network was developed using a set of comprehensive design rules. These design rules considered the physiological requirements in both macro- and micro-scales. The vascular network is optimised not only to provide the maximum nutrient supply with minimal complexity but also to minimise
recirculation areas and to keep WSS in a healthy range. For an application study, a suitable photo-curable resin which is elastic, biocompatible and bio-coatable with photoinitiators was selected. Among the three photoinitiator formulations, the results show that BLI with Irgacure® 184 has the lowest cytotoxicity, and it was used with SLA equipment to 3D print the design. SLA enables the manufacture of complex three-dimensional bifurcated vascular networks with controllable geometries. The results of the 3D printed design in preliminary in vitro studies showed that the branched resin vascular network had the lowest cell death rate. The design and manufacturing route for skin tissue engineering presented in this chapter can be used as a guide to design and manufacture an optimised vascular network.
References 1. R.A. Kamel, J.F. Ong, E. Eriksson, J.P.E. Junker, E.J. Caterson, Tissue engineering of skin. J. Am. Coll. Surg. 217(3), 533–555 (2013). https://doi.org/10.1016/j.jamcollsurg.2013.03.027 2. NC3RS, National Centre for the Replacement Refinement & Reduction of Animals in Research (2018). Retrieved November 12, 2018, from https://www.nc3rs.org.uk/ 3. F.R.A.J. Rose, R.O.C. Oreffo, Bone tissue engineering: hope vs hype. Biochem. Biophys. Res. Commun. 292(1), 1–7 (2002). https://doi.org/10.1006/bbrc.2002.6519 4. E. Hoch, G.E.M. Tovar, K. Borchers, Bioprinting of artificial blood vessels: current approaches towards a demanding goal. Eur. J. Cardiothorac. Surg. 46(5), 767–778 (2014). https://doi.org/ 10.1093/ejcts/ezu242 5. R.Y. Kannan, H.J. Salacinski, K. Sales, P. Butler, A.M. Seifalian, The roles of tissue engineering and vascularisation in the development of micro-vascular networks: A review. Biomaterials 26(14), 1857–1875 (2005). https://doi.org/10.1016/j.biomaterials.2004.07.006 6. C.W. Patrick Jr., Adipose tissue engineering: The future of breast and soft tissue reconstruction following tumor resection. Semin. Surg. Oncol. 713, 302–311 (2000). Retrieved from http:// oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=ADA410572 7. R.J. Zdrahala, Small caliber vascular grafts. Part II: Polyurethanes revisited. J. Biomater. Appl. 11, 37–61 (1996). https://doi.org/10.1177/088532829601100102 8. F.P.W. Melchels, M.A.N. Domingos, T.J. Klein, J. Malda, P.J. Bartolo, D.W. Hutmacher, Additive manufacturing of tissues and organs. Prog. Polym. Sci. 37(8), 1079–1104 (2012). https://doi.org/10.1016/j.progpolymsci.2011.11.007 9. W.L. Ng, J.M. Lee, W.Y. Yeong, M. Win Naing, Microvalve-based bioprinting-process, bio-inks and applications. Biomater. Sci. 5(4), 632–647 (2017). https://doi.org/10.1039/ c6bm00861e 10. D.B. Kolesky, R.L. Truby, A.S. Gladman, T.A. Busbee, K.A. Homan, J.A. Lewis, 3D bioprinting of vascularized, heterogeneous cell-laden tissue constructs. Adv. Mater. 26(19), 3124–3130 (2014). https://doi.org/10.1002/adma.201305506 11. C. Kucukgul, B. Ozler, H.E. Karakas, D. Gozuacik, B. Koc, 3D hybrid bioprinting of macrovascular structures. Proc Eng 59, 183–192 (2013). https://doi.org/10.1016/j.proeng.2013.05.109 12. J.S. Miller, K.R. Stevens, M.T. Yang, B.M. Baker, D.H.T. Nguyen, D.M. Cohen, et al., Rapid casting of patterned vascular networks for perfusable engineered three-dimensional tissues. Nat. Mater. 11(9), 768–774 (2012). https://doi.org/10.1038/nmat3357 13. W. Wu, A. Deconinck, J.A. Lewis, Omnidirectional printing of 3D microvascular networks. Adv. Mater. 23(24), 178–183 (2011). https://doi.org/10.1002/adma.201004625 14. T.J. Hinton, Q. Jallerat, R.N. Palchesko, J.H. Park, M.S. Grodzicki, H.-J. Shue, et al., Threedimensional printing of complex biological structures by freeform reversible embedding of suspended hydrogels. Sci. Adv. 1(9), e1500758 (2015). https://doi.org/10.1126/sciadv.1500758
15. X. Han, J. Courseaus, J. Khamassi, N. Nottrodt, S. Engelhardt, F. Jacobsen, et al., Optimized vascular network by stereolithography for tissue engineered skin. Int. J. Bioprinting 4(2), 1–17 (2018). https://doi.org/10.18063/ijb.v4i2.134 16. G.S. Kassab, C.A. Rider, N.J. Tang, Y.C. Fung, Morphometry of pig coronary arterial trees. Am. J. Phys. Heart Circ. Phys. 265(1), H350–H365 (1993). https://doi.org/10.1152/ ajpheart.1993.265.1.H350 17. J. Ravensbergen, J.K.B. Krijger, B. Hillen, H.W. Hoogstraten, Merging flows in an arterial confluence: the vertebro-basilar junction. J. Fluid Mech. 304, 119–141 (1995). https://doi.org/ 10.1017/S0022112095004368 18. J. Ravensbergen, J.K.B. Krijger, B. Hillen, H.W. Hoogstraten, The influence of the angle of confluence on the flow in a vertebro-basilar junction model. J. Biomech. 29(3), 281–299 (1997). https://doi.org/10.1016/0021-9290(95)00064-X 19. U. Köhler, I. Marshall, M.B. Robertson, Q. Long, X.Y. Xu, P.R. Hoskins, MRI measurement of wall shear stress vectors in bifurcation models and comparison with CFD predictions. J. Magn. Reson. Imaging 14(5), 563–573 (2001). https://doi.org/10.1002/jmri.1220 20. I. Marshall, S. Zhao, P. Papathanasopoulou, P. Hoskins, X.Y. Xu, MRI and CFD studies of pulsatile flow in healthy and stenosed carotid bifurcation models. J. Biomech. 37(5), 679–687 (2004). https://doi.org/10.1016/j.jbiomech.2003.09.032 21. E.R. Edelman, Vascular tissue engineering: designer arteries. Cir. Res. 85(12), 1115–1117 (1999) 22. G. Liu, J. Wu, D.N. Ghista, W. Huang, K.K.L. Wong, Hemodynamic characterization of transient blood flow in right coronary arteries with varying curvature and sidebranch bifurcation angles. Comput. Biol. Med. 64, 117–126 (2015). https://doi.org/10.1016/ j.compbiomed.2015.06.009 23. M.H. Friedman, O.J. Deters, F.F. Mark, C. Brent Bargeron, G.M. Hutchins, Arterial geometry affects hemodynamics. A potential risk factor for atherosclerosis. Atherosclerosis 46(2), 225– 231 (1983) 24. C. Peng, X. Wang, Z. Xian, X. Liu, W. Huang, The impact of the geometric characteristics on the hemodynamics in the stenotic coronary artery. PLoS One 11(6), 1–18 (2016). https:// doi.org/10.1371/journal.pone.0157490 25. C.G. Caro, Discovery of the role of wall shear in atherosclerosis. Arterioscler. Thromb. Vasc. Biol. 29, 158–161 (2009). https://doi.org/10.1161/ATVBAHA.108.166736 26. G. Coppola, C. Caro, Arterial geometry, flow pattern, wall shear and mass transport : Potential physiological significance. J. R. Soc. Interface 6, 519–528 (2009). https://doi.org/10.1098/ rsif.2008.0417. (November 2008) 27. Q. Long, X.Y. Xu, M. Bourne, T.M. Griffith, Numerical study of blood flow in an anatomically realistic aorto-iliac bifurcation generated from MRI data. Magn. Reson. Med. 43(4), 565–576 (2000). https://doi.org/10.1002/(SICI)1522-2594(200004)43:43.0.CO;2-L 28. H. Meng, Z. Wang, Y. Hoi, L. Gao, E. Metaxa, D.D. Swartz, J. Kolega, Complex hemodynamics at the apex of an arterial bifurcation induces vascular remodeling resembling cerebral aneurysm initiation. Stroke 38(6), 1924–1931 (2007). https://doi.org/10.1161/ STROKEAHA.106.481234 29. C.D. Murray, The physiological principle of minimum work: I. The vascular system and the cost of blood volume. Proc. Natl. Acad. Sci. U. S. A. 12(3), 207–214 (1926). Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/16576980 30. C.D. Murray, The physiological principle of minimum work: II. Oxygen exchange in capillaries. Proc. Natl. Acad. Sci. U. S. A. 12(5), 299–304 (1926). 
https://doi.org/10.1073/ pnas.12.3.207 31. M. Zamir, Optimality principles in arterial branching. J. Theor. Biol. 62(1), 227–251 (1976). https://doi.org/10.1016/0022-5193(76)90058-8 32. X. Han, R. Bibb, R. Harris, Design of bifurcation junctions in artificial vascular vessels additively manufactured for skin tissue engineering. J. Vis. Lang. Comput. 28, 238–249 (2015). https://doi.org/10.1016/j.jvlc.2014.12.005
33. J. Bloomenthal, Skeletal Design of Natural Forms (University of Calgary, Calgary, 1995). Retrieved from http://www.unchainedgeometry.com/jbloom/pdf/dis-front-matter.pdf 34. J. Kretschmer, C. Godenschwager, B. Preim, M. Stamminger, Interactive patient-specific vascular modeling with sweep surfaces. IEEE Trans. Vis. Comput. Graph. 19(12), 2828–2837 (2013). https://doi.org/10.1109/TVCG.2013.169 35. Y. Zhang, Y. Bazilevs, S. Goswami, C.L. Bajaj, T.J.R. Hughes, Patient-specific vascular NURBS modeling for isogeometric analysis of blood flow, in Proceedings of the 15th International Meshing Roundtable, IMR 2006, 196, (2006), pp. 73–92. https://doi.org/10.1007/ 978-3-540-34958-7_5 36. O. Gourmel, L. Barthe, M. Cani, B. Wyvill, A. Bernhardt, M. Paulin, H. Grasberger, A gradient based implicit blend. ACM T. Graphic 32(2), 1–12 (2013). https://doi.org/10.1145/ 2451236.2451238 37. R. Krasauskas, Branching blend of natural quadrics based on surfaces with rational offsets. Comput. Aided Geom. Des. 25(4–5), 332–341 (2008). https://doi.org/10.1016/ j.cagd.2007.11.005 38. Y. Cai, X. Ye, C. Chui, J.H. Anderson, Constructive algorithms of vascular network modeling for training of minimally invasive catheterization procedure. Adv. Eng. Softw. 34(7), 439–450 (2003). https://doi.org/10.1016/S0965-9978(03)00035-8 39. X. Ye, Y.Y. Cai, C. Chui, J.H. Anderson, Constructive modeling of G1 bifurcation. Computer Aided Geometric Design 19(7), 513–531 (2002). https://doi.org/10.1016/S01678396(02)00131-0 40. X. Han, R. Bibb, R. Harris, Engineering design of artificial vascular junctions for 3D printing. Biofabrication 8(2), 1–13 (2016). https://doi.org/10.1088/1758-5090/8/2/025018 41. T. Papaioannou, C. Stefanadis, Vascular wall shear stress: basic principles and methods. Hell. J. Cardiol. 46(January 2005), 9–15 (2014) 42. J. Khamassi, C. Bierwisch, P. Pelz, Geometry optimization of branchings in vascular networks. Phys. Rev. E 93(6), 62408 (2016). https://doi.org/10.1103/PhysRevE.93.062408 43. A. Gebhardt, Generative Fertigungsverfahren (Carl Hanser Verlag GmbH & Co. KG, München, 2013). https://doi.org/10.3139/9783446436527 44. I. Gibson, D.W. Rosen, B. Stucker, Additive Manufacturing Technologies (Springer, Boston, 2010). https://doi.org/10.1007/978-1-4419-1120-9 45. S. Maruo, K. Ikuta, Submicron stereolithography for the production of freely movable mechanisms by using single-photon polymerization. Sens. Actuators A Phys. 100(1), 70–76 (2002). https://doi.org/10.1016/S0924-4247(02)00043-2 46. S. Engelhardt, 3D-microfabrication of polymer-protein hybrid structures with a Qswitched microlaser. J Laser Micro Nanoeng 6(1), 54–58 (2011). https://doi.org/10.2961/ jlmn.2011.01.0012 47. S. Engelhardt, E. Hoch, K. Borchers, W. Meyer, H. Krüger, G.E.M. Tovar, A. Gillner, Fabrication of 2D protein microstructures and 3D polymer-protein hybrid microstructures by two-photon polymerization. Biofabrication 3(2), 025003 (2011). https://doi.org/10.1088/17585082/3/2/025003 48. S.S. Labana, Chemistry and Properties of Crosslinked Polymers (Academic Press, New York, 1977)
Chapter 2
Virtual Bone Surgery Ming C. Leu, Wenjin Tao, Qiang Niu, and Xiaoyi Chi
2.1 Introduction To become a skillful surgeon requires rigorous training and iterative practice. Traditional training and learning methods for surgeons are based on the Halstedian apprenticeship model, i.e., “see one, do one, teach one,” which is more than 100 years old [1]. For bone surgery, students often watch and perform operations on cadaveric or synthetic bones under the tutelage of an experienced physician before performing the procedure on patients under expert supervision. In the learning process they need to learn how to perform material removal operations including drilling, broaching, sawing, reaming, and milling, etc., which simulate real operations as shown in Fig. 2.1. Mistakes can lead to irreparable defects to the bone and surrounding soft tissue during these procedures, which can result in complications such as early loosening, malalignment, dislocation, altered gait, and leg length discrepancy [3]. The current system of surgery education has many challenges in terms of flexibility, efficiency, cost, and safety. In addition, as new types of operations are developed rapidly, more efficient methods of surgical skill education are needed for practicing surgeons [4]. Virtual Reality (VR) is one of the most active research areas in computer simulation. Virtual reality systems use computers to create virtual environments to simulate real-world scenarios. Special devices such as head-mounted displays, haptic devices, and data gloves are used for interacting with virtual environments to provide realistic feedback to the user. The most important contributing factor to VR development has been the arrival of low-cost, industry-standard multimedia com-
M. C. Leu · W. Tao () · Q. Niu · X. Chi
Department of Mechanical and Aerospace Engineering, Missouri University of Science and Technology, Rolla, MO, USA
© Springer Nature Switzerland AG 2021
B. Bidanda, P. J. Bártolo (eds.), Virtual Prototyping & Bio Manufacturing in Medical Applications, https://doi.org/10.1007/978-3-030-35880-8_2
Fig. 2.1 Comparison of (a) actual and (b) virtual surgical operations [2]
puters and high-performance graphics hardware. VR has been integrated into many aspects of the modern society such as engineering, architecture, entertainment, etc. The concept of developing and integrating computer-based simulation and training aids for surgery training begins with VR simulators. VR techniques provide a realistic, safe, and controllable environment for novice surgeons to practice surgical operations, allowing them to make mistakes without serious consequences. It promises to change the world of surgical training and practice. With a VR simulator, novice surgeons can practice and perfect their skills on simulated human models, and experienced surgeons will be able to use the simulator to plan surgical procedures. VR training also offers the possibility of providing a standardized performance evaluation for the trainees. Bone surgery is one of the medical applications which can be simulated using VR technology. There exist some surgical simulation tools for orthopedic applications such as knee surgery, but most of them involve only soft tissues. Few have considered the simulation of cutting, sawing, burring, etc., which involve operating on bones as well as on ligaments and muscles. The development of a virtual bone surgery system is very desirable for training surgeons, allowing them to visualize surgical operations such as drilling and cutting through the bone with the added realistic sense of touch during the process. As the minimally invasive surgery (MIS) becomes more prevalent in orthopedics in the future, VR technology will become more and more valuable for assisting actual surgical operations. As surgical techniques are developed to reduce access to the surgical site (via smaller incisions), and instruments and implants are miniaturized to accommodate for these techniques, surgical dexterity and bone preparation and implant positioning will become a less and less forgiving part of the operation. It will be necessary to integrate VR models with images obtained during the surgical operations, the so-called augmented reality (AR) technology, in order to assist the surgeons in performing the MIS process. This book chapter reviews the current virtual bone surgery systems developed in various research laboratories and discusses the basic methods and techniques used to develop these systems.
2.2 State of the Art in Bone Surgery Simulation

2.2.1 Current State of Surgical Simulation

Surgical simulation is not a newly emerging field; some early efforts date back to the 1990s. It has been studied intensively for decades, exploring a wide range of surgical operations such as endoscopic sinus surgery [5], tissue cutting [6], kidney removal surgery [7], venipuncture [8], wound suturing [9], coronary anastomosis [10], temporal bone surgery [11–14], and petrous bone surgery [15–18]. Although surgical simulation has been studied for over 20 years, it remains an active research area as virtual simulation technologies continue to develop. Some recent representative studies are briefly reviewed as follows. Lin et al. [19] developed a surgical training simulator with both visual and haptic feedback for the user to learn the skills of the bone-sawing operation (i.e., operating at an appropriate feed velocity with a suitable force) in maxillofacial surgery. The voxel-based maxillofacial model was created based on CT scanning data, and the virtual tools were modeled through reverse engineering; see Fig. 2.2a. Multi-point collision detection algorithms were utilized to simulate the tool-bone interaction. Similarly, Gray et al. [20] applied pre-operative virtual surgical simulation to pediatric craniofacial surgeries, allowing for safe and precise craniofacial reconstruction in complex pediatric cases with a reduction of operative time (see Fig. 2.2b). Chan et al. [2] described the design of a virtual surgical environment for patient-specific simulation of temporal bone surgery using pre-operative medical data. Six-degree-of-freedom haptic feedback was provided during manipulation to convey both force and torque feedback. The virtual bone dissection was modeled and simulated based on the mechanical principles of orthogonal cutting and abrasive wear. A volume rendering engine based on the technique of ray-casting was developed to provide a high-fidelity visual interface during the surgical manipulation of virtual anatomy (see Fig. 2.2c). In the above virtual surgical simulators and some recent studies [21, 22], most of the researchers focused on temporal bone surgery.
Fig. 2.2 Some examples of surgical simulation: (a) Lin et al. [19]; (b) Gray et al. [20]; (c) Chan et al. [2]
Fig. 2.3 Virtual bone burring, free drilling, and guided drilling [24, 25]
Only a small portion of the temporal bone was used in the simulation, the data set was not large, and tool-bone interaction was limited to burring/milling. In a real orthopedic surgery, however, there are also other machining operations such as drilling, broaching, sawing, reaming, and milling. These operations are often needed prior to an orthopedic operation, such as pin or screw insertion into the bone. To accomplish these tasks, a virtual bone surgery system was developed at Missouri S&T. The various system components were integrated in a Windows GUI environment for implementation. The system development involved medical image processing, geometric modeling and data manipulation, force modeling, graphics rendering, and haptic rendering [23–27]. Some of the simulated operations are shown in Fig. 2.3. The simulation of material removal for bone surgery, such as drilling or milling, can be achieved in a manner similar to the simulation of a virtual sculpting process for creating a 3D freeform object from a CAD model. It should be noted, however, that bone surgery simulation deals with inhomogeneous materials, while virtual sculpting deals with homogeneous materials. The material removal process can be simulated by continuously performing Boolean subtraction of the tool model from the bone model. Galyean and Hughes [28] introduced the concept of voxel-based sculpting as a method of creating freeform 3D shapes by interactively editing a model represented in a voxel raster. Wang and Kaufman [29] presented a similar sculpting system with carving and sawing tools. In order to achieve real-time interaction, their system reduced the operations between the tool and the object to voxel-by-voxel operations. Gibson et al. [30] used a volumetric approach to model organs and presented some early results of their effort to develop an arthroscopic knee surgery simulator. Computers were still too slow at that time to allow realistic deformation of a volumetric representation. Bærentzen [31] proposed octree-based volume sculpting and discussed the possibility of using it to support multi-resolution sculpting. To further enhance the realism of the surgical simulation, auditory feedback can be provided to augment the visual and haptic interfaces in the virtual environment. For example, the drilling sound can help operators perceive and maintain a specific drilling speed and force during a surgical operation. Auditory feedback was absent
in most of the previous studies, but some researchers [26, 32–34] have explored including acoustic feedback in their virtual surgical simulators. To realize remote surgical collaborations among surgeons or online instruction between mentors and mentees in a virtual environment, network-based multiuser surgical simulators have been investigated in some studies [35–38]. In such systems, the virtual environment is shared across multiple remotely located participants to allow them to visualize and interact with the shared digital contents. The virtual contexts and multiuser interactions need to be synchronized at a high refresh rate to realize collaborations or instructions in real time. There are some virtual surgical systems commercially available in the market. Voxel-Man has developed surgery and training simulators for medical education, with functionalities including model import from CT, ear surgery, endoscopic sinus surgery, and dental training. The Voxel-Man ENT simulator has been used by many researchers and proved to be effective for improving surgical skills [21, 39, 40]. Another commercial simulator for temporal bone surgery is the Mediseus Surgical Drilling Simulator, which was initially developed at the University of Melbourne. This simulator offers a VR environment with haptic feedback and manually segmented CT rendering. Distinct from other simulators, it is designed with a microscope-like interface on a stand-alone mobile platform. This platform has been assessed and validated [34, 41, 42], showing that participants trained on this simulator performed significantly better than those trained using conventional methods. Another alternative temporal bone simulator is the Visible Ear Simulator (VES) [43], an academic freeware platform for the training of novice and experienced ear surgeons. It is a fully functional 3-D simulator for temporal bone drilling with force feedback and photo-realistic graphics. Comparison and discussion of different VR surgical simulators can be found in the surveys by Sethia and Wiet [44] and Bhutta [45].
2.2.2 Key Technologies

The schematic of a virtual bone surgery system is shown in Fig. 2.4. The user can use a personal computer-based system to manipulate the interaction between the virtual bone and the virtual surgical tool, and perform virtual bone surgery by "seeing" bone material removal through a graphic display, "feeling" the machining force via a haptic device, and "hearing" the sound of tool-bone interaction. Generally speaking, virtual bone surgery includes the following key elements: image acquisition and processing, geometric modeling, physical modeling, visualization, and haptic interaction. The relationships between these key elements are illustrated in Fig. 2.5. Usually, image acquisition and processing precedes the simulation and is done off-line in order to save data processing time during the online simulation. A virtual bone surgery system consists of the following main elements (Fig. 2.5):
Fig. 2.4 Schematic of a basic bone surgery simulation system
Fig. 2.5 Key elements involved in bone surgery simulation
1. Input the CT or MRI data of the bones to construct a geometric model with properties such as materials and densities. 2. Develop mathematical models to represent the physics of tool-bone interaction, based on which the interactive force and sound generation are updated continuously (e.g., to simulate the drilling of a bone). 3. Implement real-time graphic rendering of volumetric data to obtain realistic visualization of bone surgery. 4. Provide force feedback to the user with haptic rendering. 5. Supply sound feedback to the user with auditory rendering. To develop a meaningful virtual bone surgery system with realistic visual effects, force feedback, and auditory rendering, several requirements that must be met are as follows: 1. The medical data obtained from image acquisition must be processed to minimize noise and irrelevant data [15, 25]. This data processing must be done before bone surgery simulation.
2. The virtual surgery system must update various data at different frequencies: above 30 Hz for visual rendering and above 1000 Hz for haptic rendering [46]. For a system that also includes auditory rendering, an update rate of 20 kHz is required to update the collision-checking flag and send the calculated sound signal to the audio hardware [26] (a sketch of such a multi-rate loop structure follows this list).
3. Data modification calculations should involve only local data in order to reduce the update time [47, 48].
4. The force computation time must be small enough for real-time haptic rendering [48].
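To make the different update rates concrete, the following is a minimal Python sketch (not from the chapter) of how a simulator might run its graphics, haptic, and audio update loops on separate threads at different target rates. The update functions are placeholders; a production system would implement these loops in compiled code, and the 20 kHz audio stream would normally be produced in blocks inside an audio callback rather than by a timed loop.

```python
import threading
import time

def run_loop(rate_hz, update_fn, stop_event):
    """Call update_fn repeatedly at approximately rate_hz until stop_event is set."""
    period = 1.0 / rate_hz
    while not stop_event.is_set():
        start = time.perf_counter()
        update_fn()
        elapsed = time.perf_counter() - start
        if elapsed < period:
            time.sleep(period - elapsed)   # hold the loop near its target rate

def update_graphics():   # placeholder: redraw the scene (>= 30 Hz)
    pass

def update_haptics():    # placeholder: collision check and force output (>= 1000 Hz)
    pass

def update_audio():      # placeholder: push the next block of sound samples (20 kHz stream)
    pass

stop = threading.Event()
threads = [threading.Thread(target=run_loop, args=(rate, fn, stop))
           for rate, fn in [(30, update_graphics), (1000, update_haptics), (20000, update_audio)]]
for t in threads:
    t.start()
time.sleep(1.0)     # let the simulation loops run for one second
stop.set()
for t in threads:
    t.join()
```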
2.3 Medical Image Processing and Segmentation

2.3.1 Imaging Procedures

Computer imaging techniques have become an important diagnostic tool in the practice of modern medicine. Today, advanced medical scanners can provide high-quality and exceptionally detailed images for surgeons before performing the actual surgical procedures. Medical data obtained from imaging techniques typically represent the values of some properties at various 3D locations [49]. The most commonly used medical imaging techniques include CT (computed tomography), MRI (magnetic resonance imaging), SPECT (single-photon emission computed tomography), and PET (positron emission tomography), as shown in Fig. 2.6. These techniques use a data acquisition process to capture information about the internal anatomy of a patient. This information is in the form of slice-plane images, similar to conventional photographic X-rays [50]. CT and MRI are the most commonly employed modalities for obtaining medical images. CT provides high-spatial-resolution bone images, while MRI provides better images of soft tissues. For most bone surgery simulators, CT scan data are used because they show good contrast between bones and soft tissues. For reporting and displaying reconstructed CT values, the Hounsfield Unit (HU) is the standardized and accepted unit.
Fig. 2.6 The most commonly used medical imaging techniques (Photos from Wikipedia)
There are good correlations between CT scan data and bone’s material properties such as density and mechanical strength [51], so HU value is usually used to represent bone density for each data point. The process of constructing a VR environment from the imaging data is a major challenge. This process can be divided into three stages: (1) spatial co-registration of data from multiple modalities; (2) identification of tissue types (segmentation); and (3) definition of tissue boundaries for the VR environment [15].
2.3.2 Image Processing

Noise and other artifacts are inherent in all methods of data acquisition. Because of noise in many signals and a large amount of irrelevant information in the medical data, image processing is necessary. Filtering and smoothing techniques, e.g., Gaussian filters and median filters, are usually used to reduce noise in images [50]. Since the information gained from two images acquired in medical imaging procedures is usually complementary, proper integration of the useful data obtained from the separate images is often desired. Image registration is the process of determining the spatial transform that maps points from one image to homologous points on the same object in the second image [52]. These images may have the same or different formats. The most common registration methods can be found in the survey of medical image registration by Maintz and Viergever [53]. It is also necessary to identify which type of tissue is present in the data space and to identify the precise location of the edges between different tissue types. Image segmentation is the process of identifying the distribution of different tissue types within the data set. Bones can be extracted by manual or partially automated segmentation methods. Usually, threshold segmentation is used to distinguish pixels or voxels within an image by their gray-scale values. An upper and a lower threshold can be defined, separating the image into the structure of interest and the background. This method works very well for bone segmentation from CT scans, since bone tissue attenuates the X-ray beam significantly more during image acquisition and is therefore represented by much higher values on the Hounsfield scale compared to soft tissues. Whereas thresholding focuses on the difference of pixel intensities, other segmentation methods look for regions of pixels or voxels with similar intensities [54]. Segmentation methods are usually divided into two types: region-based and edge-based [55]. The region-based methods search for connected regions of pixels/voxels with similar features such as brightness, texture pattern, etc. After dividing the medical image into regions in some way, similarity among pixels is checked for each region; neighboring regions with similar features are then merged into a bigger region, and regions with no similar features are split into smaller regions. These steps are repeated until there is no more splitting or merging. A main issue with this approach is determining the exact borders of objects, because regions do not necessarily split along the natural borders of the object.
Edge-based algorithms search for pixels with high gradient values, which are usually edge pixels, and then try to connect them to form a curve that represents a boundary of the object. A difficult problem here is how to connect high-gradient pixels, because in real images they are often not neighbors. Another problem is noise: since a gradient operator is of a high-pass nature and noise also lies mostly at high frequencies, it can sometimes create false edge pixels.
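As a simple illustration of the thresholding approach described above, the following Python sketch (using NumPy, with illustrative HU thresholds rather than values taken from the chapter) produces a binary bone mask from a CT volume expressed in Hounsfield Units.

```python
import numpy as np

def segment_bone(ct_volume_hu, lower=300, upper=3000):
    """Return a boolean mask of voxels whose Hounsfield value falls in [lower, upper].

    ct_volume_hu: 3D numpy array of CT values in Hounsfield Units.
    The thresholds are illustrative; suitable values depend on the scanner
    and on whether cortical or trabecular bone is targeted.
    """
    return (ct_volume_hu >= lower) & (ct_volume_hu <= upper)

# Example: a synthetic 64^3 volume with a block of "bone" in the middle.
volume = np.full((64, 64, 64), -1000, dtype=np.int16)   # air
volume[20:40, 20:40, 20:40] = 1200                       # dense bone
mask = segment_bone(volume)
print(mask.sum(), "voxels classified as bone")
```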
2.4 Geometric Modeling and Data Manipulation

2.4.1 Volume Modeling

The sequence of 2D slices of data obtained by CT, MRI, or ultrasound can be represented as a 3D discrete regular grid of voxels (volume elements), as shown in Fig. 2.7. For virtual surgery, voxel-based modeling has some advantages over the use of polygons or solid geometric primitives. First, voxel-based representation is natural for the 3D digital images obtained by medical scanning techniques such as MRI or CT. Second, since no surface extraction or data reformatting is required, errors introduced by fitting surfaces or geometric primitives to the scanned images can be avoided. Third, volumetric objects can incorporate detailed information about the internal anatomical or physiological structure of organs and tissues. This information is particularly important for realistic modeling and visualization of complex tissues [30]. In volume representation, the basic elements are voxels. Just as a pixel is a small rectangle, a voxel can be viewed as a small block. A voxel can be represented by the coordinates of its center point and the three orthogonal dimensions plus some attributes. If the voxels have fixed dimensions, then they can be represented by the vertices of a 3D lattice, which are characterized by their positions and associated attribute values. For example, a voxel can be expressed as an array (x, y, z, v1, v2, ..., vn), where (x, y, z) represents the position of the voxel and vi represents a property. These physical properties can be density, material classification, stiffness, and viscosity, as well as display properties such as color, shading, etc.
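A minimal Python sketch of such a voxel record is shown below; the attribute names are illustrative choices, not a data structure prescribed by the chapter.

```python
from dataclasses import dataclass

@dataclass
class Voxel:
    # Position of the voxel centre plus a set of per-voxel attributes (the
    # v1, v2, ..., vn of the text).  A real system would store whatever
    # properties the simulation needs: density, stiffness, colour, and so on.
    x: float
    y: float
    z: float
    density: float        # e.g. bone density mapped from the HU value
    stiffness: float
    color: tuple          # display attribute (r, g, b)

v = Voxel(x=1.0, y=2.0, z=3.0, density=1.85, stiffness=15.0e9, color=(230, 225, 210))
print(v)
```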
Fig. 2.7 A volume seen as a stack of images and a volume seen as a 3D lattice of voxels
In general, the samples may be taken at random locations. Depending on how the samples are connected to form a grid structure, there are two classes of volumetric data: structured and unstructured. Structured data have two components: a logical organization of the samples into a three-dimensional array, and a mapping of each sample to the physical domain. Unstructured data are a set of connected samples in space. They are not based upon a logical organization of arrays, but instead upon a group of cells of certain shapes, such as tetrahedra, hexahedra, or prisms. An interpolation function is used to produce a continuous scalar field for each property. This is critical for producing smooth volume and haptic rendering [48]. In order to meet the system requirements, it is often desirable to pre-compute and store the contents of each voxel, so there is no need to change every voxel during the surgical operation simulation. By storing the volumetric data in a space-efficient, hierarchical structure such as an octree, the storage requirements can be reduced.
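The interpolation function mentioned above is commonly a trilinear interpolation over the eight voxels surrounding a query point. The following NumPy sketch (an assumption of the usual approach, not code from the chapter) samples a structured voxel grid at a continuous position; it assumes unit voxel spacing and that the query point lies strictly inside the grid.

```python
import numpy as np

def trilinear_sample(volume, point):
    """Sample a 3D scalar volume at a continuous (x, y, z) position.

    volume: 3D numpy array indexed as volume[ix, iy, iz].
    point:  (x, y, z) in voxel coordinates, strictly inside the grid.
    Returns the trilinearly interpolated value, giving the continuous scalar
    field needed for smooth graphic and haptic rendering.
    """
    x, y, z = point
    ix, iy, iz = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    fx, fy, fz = x - ix, y - iy, z - iz
    # Gather the 8 surrounding voxel values.
    c = volume[ix:ix + 2, iy:iy + 2, iz:iz + 2].astype(float)
    # Interpolate along x, then y, then z.
    c = c[0] * (1 - fx) + c[1] * fx
    c = c[0] * (1 - fy) + c[1] * fy
    return c[0] * (1 - fz) + c[1] * fz

grid = np.random.rand(8, 8, 8)
print(trilinear_sample(grid, (3.2, 4.7, 1.5)))
```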
2.4.2 Data Manipulation

The data set for virtual surgery is usually huge. For example, for a medium resolution of 512³ with two bytes per voxel, the volume buffer must hold 256 MB [49]. Therefore, how to organize and manipulate such huge data is a challenging problem. Zhu et al. [56] used a finite element method (FEM) in their analysis of muscle deformation. A muscle was modeled with 8-node, 3D brick elements equivalent to the voxel structure. The simulation was achieved by solving a sparse linear system of equations which governs the behavior of the muscle. As with most other FEM models, computation is costly and pre-computation is often required for real-time applications. Gibson et al. [30] developed a linked volume model to represent the volumetric data. The links were stretched, contracted, or sheared during object deformation, and they were deleted or created when objects were cut or joined. Compared with the FEM method, the linked volume approach can be used for creating models with high geometric complexity, and it can achieve interactivity with the use of low-cost mathematical modeling. Bærentzen [31] proposed an octree-based volume sculpting method in order to quickly separate the many homogeneously empty regions outside the object of interest. An octree structure as shown in Fig. 2.8 was chosen to organize the huge set of volumetric data and to improve the efficiency of data storage. A volume is subdivided until the leaf level of a prescribed size has been reached. This significantly reduces the memory requirement and speeds up the graphics rendering and modeling tasks. Basically, octrees are a hierarchical variant of spatial-occupancy enumeration that can be used to address the demanding storage requirements in volume modeling [57]. In virtual bone surgery, operation tools such as drills, mills, and broaches remove voxels occupied by the cutting tool's volume during the course of the machining operation.
Fig. 2.8 Octree representation
With a static data structure, e.g., a 3D array, voxels can only be removed at the defined size. That is, the cells representing the interaction between the cutting tool and the bone are constant in size, and thus the resolution is static. Due to this limitation, voxel removal can only be done at a rough level. Octree modeling can provide a flexible data structure for performing material removal simulation dynamically. High resolution can be achieved in the region of interest, which is usually the current surgical tool location and its neighborhood. The octree nodes representing cells in the region of interest are subdivided to generate children nodes representing sub-cells. The material removal operation is then done on the children node level. The subdivision process can be repeated until the desired resolution is reached. To control the resolution automatically, a criterion to end the subdivision can be set. For example, one criterion could be that the smallest linear dimension of the voxel is equal to the radius of the drill or mill multiplied by a factor. Another method uses a bounding volume together with quadtree subdivision [25] to deal with irregular long bones. This method uses an AABB (axis-aligned bounding box) as the bounding volume type to determine a tight bounding box for the bone model. The whole bone volume is divided into many sub-volumes, which have certain slices/layers in the Z direction and different dimensions in the X and Y directions. All these sub-volumes should have relatively tight bounding boxes around the objects, as shown in Fig. 2.9a. Then, quadtree subdivision is obtained by successively dividing the sub-volumes from 1 to n in both the x and y dimensions to form quadrants, as shown in Fig. 2.9b. Each quadrant of the sub-volumes may be full, partially filled, or empty, depending on whether the entity of consideration intersects the area of concern. This method has been applied to remove irrelevant data and to organize the remaining data, in order to make the virtual surgery system interactive in real time [25].
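The following Python sketch (a minimal illustration under simplifying assumptions, not the Missouri S&T implementation) shows the idea of subdividing octree cells near a spherical tool and emptying only the finest cells the tool sweeps through; the minimum cell size plays the role of the subdivision-ending criterion described above.

```python
class OctreeNode:
    """Minimal octree cell for a material-removal sketch."""

    def __init__(self, center, half_size, filled=True):
        self.center = center            # (x, y, z) of the cell centre
        self.half_size = half_size
        self.filled = filled            # True while the cell still holds bone
        self.children = None            # None for a leaf

    def subdivide(self):
        cx, cy, cz = self.center
        h = self.half_size / 2.0
        self.children = [
            OctreeNode((cx + dx * h, cy + dy * h, cz + dz * h), h, self.filled)
            for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)
        ]

    def remove_material(self, tool_center, tool_radius, min_size):
        """Empty the cells swept by a spherical tool of radius tool_radius."""
        if not self.filled and self.children is None:
            return                                        # already empty
        dist = sum((c - t) ** 2 for c, t in zip(self.center, tool_center)) ** 0.5
        half_diag = self.half_size * 3 ** 0.5
        if dist > tool_radius + half_diag:
            return                                        # cell entirely outside the tool
        if dist + half_diag <= tool_radius:
            self.filled = False                           # cell entirely inside: remove wholesale
            self.children = None
            return
        if self.half_size > min_size:                     # partial overlap: refine further
            if self.children is None:
                self.subdivide()
            for child in self.children:
                child.remove_material(tool_center, tool_radius, min_size)
        else:
            self.filled = dist > tool_radius              # finest level: in/out test at the centre

root = OctreeNode(center=(0.0, 0.0, 0.0), half_size=32.0)
root.remove_material(tool_center=(10.0, 0.0, 0.0), tool_radius=4.0, min_size=1.0)
```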
2.5 Graphic Rendering

Volume visualization is the technique used to display the information inside volumetric data using interactive graphics and imaging.
Fig. 2.9 (a) Bounding volume and (b) quadtree subdivision for human bone
The methods of graphic rendering of three-dimensional (volumetric) data can be grouped into two categories: (1) surface rendering, or indirect rendering, and (2) volume rendering, or direct rendering. To choose which kind of rendering method is suitable for a bone surgery system, the following considerations are important: (1) real-time rendering and (2) surface quality. Surface rendering extracts polygons from the volumetric data and renders the surface interactively. It is more difficult for volume rendering to achieve interactive performance.
2.5.1 Surface Rendering

Marching Cubes [58] is the most popular algorithm in surface rendering. The marching cube algorithm traverses all boundary cells of the volume and determines the triangulation within each cell based on the values of the cell vertices. This method first partitions the volume data into cubes. Each cube consists of eight voxels. It then decides the surface configuration of each cube according to 15 configurations (Fig. 2.10). Marching cubes leads to satisfactory results for small or medium datasets. However, for simulation in the medical field, datasets are usually huge, which may restrict interactive manipulation. The use of octrees for faster isosurface generation [59] is an improved algorithm for extracting surfaces from volume data. This algorithm stores min/max voxel values at each octree node, and then traverses the octree nodes that may contain an isosurface to obtain the triangles forming the surface. Other researchers [60–62] also presented improved octree-based marching cube algorithms and their applications. These methods used various techniques to save storage space and improve performance, but none of them supported multi-resolution isosurface extraction. Adaptive-resolution surface rendering is the method mostly used for virtual bone surgery. Some researchers [63, 64] presented ideas on this surface rendering method.
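To make the cell classification step concrete, the following NumPy sketch (not from the chapter) computes the 8-bit marching-cubes configuration index for every cell of a scalar volume; a complete implementation would then look up the triangles for each index in the standard edge and triangle tables, which are omitted here.

```python
import numpy as np

def cube_indices(volume, iso):
    """Compute the marching-cubes configuration index (0-255) for every cell.

    volume: 3D scalar array; iso: isosurface threshold.
    Bit k of the index is set when the k-th cube corner lies inside the surface.
    """
    inside = (volume >= iso).astype(np.uint8)
    # The 8 corners of each cell, in a conventional corner ordering.
    corners = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
               (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
    nx, ny, nz = volume.shape
    index = np.zeros((nx - 1, ny - 1, nz - 1), dtype=np.uint8)
    for bit, (dx, dy, dz) in enumerate(corners):
        index |= inside[dx:nx - 1 + dx, dy:ny - 1 + dy, dz:nz - 1 + dz] << bit
    return index

vol = np.random.rand(16, 16, 16)
idx = cube_indices(vol, iso=0.5)
# Cells whose index is neither 0 nor 255 straddle the isosurface and produce triangles.
print("boundary cells:", np.count_nonzero((idx != 0) & (idx != 255)))
```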
Fig. 2.10 The marching cube algorithm for surface rendering of voxel data
The rendering algorithms are based on an extended marching cube algorithm for octree data as follows: 1. Find the region of interest (i.e., the current surgical tool location and its neighborhood). 2. The region of interest is rendered in high resolution, meaning that the cells are subdivided into sub-cells, and the surface is extracted on the sub-cell level using the marching cube algorithm. 3. Regions not in the region of interest are rendered in lower resolution. The cells are merged to form coarser level cells. Trade-off exists between surface quality and interactivity. Although octree can address this problem to some extent, interactivity is still challenging to achieve for a large set of data. In order to improve performance, the initial resolution (usually not a very fine level) for the surface rendering needs to be specified. The dynamic
resolution depends on how the surgical tool interacts with the bone material. Parallel computing can be used to increase the resolution.
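One simple way to realize the adaptive resolution described above is to pick a subdivision level for each cell from its distance to the surgical tool. The following sketch is only a hypothetical heuristic with made-up parameter values, not the scheme used in any particular system.

```python
def cell_render_level(cell_center, tool_position, base_level=2, max_level=5, roi_radius=10.0):
    """Choose a subdivision level for surface extraction of one cell.

    Cells near the surgical tool are extracted at the finest level; cells
    farther away are rendered coarsely.  The radius and level values are
    illustrative parameters only.
    """
    dist = sum((c - t) ** 2 for c, t in zip(cell_center, tool_position)) ** 0.5
    if dist <= roi_radius:
        return max_level                     # region of interest: subdivide into sub-cells
    if dist <= 2 * roi_radius:
        return (base_level + max_level) // 2 # transition band: intermediate resolution
    return base_level                        # far away: merge to coarser cells

print(cell_render_level(cell_center=(2.0, 1.0, 0.0), tool_position=(0.0, 0.0, 0.0)))
```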
2.5.2 Volume Rendering

In this rendering method, the volume data are displayed directly, which means that the images are generated through the transformation, shading, and projection of 3D voxels into 2D pixels. Volume rendering demands greater computational processing but produces images with greater versatility. Since all the voxels located in the line of view are used in the image generation, this method allows the visualization of parts inside the surface. Although real-time rendering can hardly be achieved, this method is a good choice for applications with special visualization requirements. Volume rendering will become more attractive in the future as computers become faster and cheaper with larger memory. The most popular algorithm for volume rendering is ray-casting [65, 66]. Traditionally, the ray-casting algorithm spans the projection plane and casts rays into the scene. Usually, parallel rays orthogonal to the projection plane are cast. These rays are cast from the observer position into the volume data. For each ray, sample points are calculated considering a fixed step on the path traced by the ray. The algorithm calculates and accumulates both color and opacity values along the ray to obtain the pixel color. Besides ray-casting, there are other popular algorithms in the volume rendering approach, e.g., splatting [67], shear-warp [68], and 3D texture-mapping [69]. Meißner [70] did an extensive survey on these various volume rendering algorithms. Currently, most bone surgery simulation systems do not use volume rendering because of the interactivity restriction, the need for expensive dedicated graphics hardware for this rendering method, and the need for huge amounts of computation time and substantial amounts of storage space. However, the merits of volume rendering, along with the continuing decrease in computation costs, may compel researchers to use this method in the future.
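The following Python sketch (a simplified illustration, not production renderer code) shows the front-to-back accumulation of color and opacity along a single ray through a scalar volume; it uses nearest-neighbour sampling, treats the scalar value as both grey level and opacity, and terminates early once the ray is nearly opaque. A real renderer would map the scalar through transfer functions and trace one ray per pixel.

```python
import numpy as np

def cast_ray(volume, origin, direction, step=0.5, n_steps=400):
    """Accumulate colour and opacity along one ray (front-to-back compositing)."""
    color, alpha = 0.0, 0.0
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    for _ in range(n_steps):
        i, j, k = np.round(pos).astype(int)          # nearest-neighbour sample
        if (0 <= i < volume.shape[0] and 0 <= j < volume.shape[1]
                and 0 <= k < volume.shape[2]):
            s = float(volume[i, j, k])
            a = s * step                              # crude opacity per unit length
            color += (1.0 - alpha) * a * s            # front-to-back compositing
            alpha += (1.0 - alpha) * a
            if alpha > 0.99:                          # early ray termination
                break
        pos += d * step
    return color

vol = np.clip(np.random.rand(32, 32, 32), 0, 1)
print(cast_ray(vol, origin=(0.0, 16.0, 16.0), direction=(1.0, 0.0, 0.0)))
```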
2.6 Haptic Rendering

A haptic interface can enhance the realism of virtual surgery by providing a realistic feel of the surgical operation. Haptic rendering is the process of applying reactive forces to the user through a force-feedback device [71]. The rendering consists of using information about the tool-object interface to determine the forces to be displayed, given the action of the operational point. The major challenge in simulating force-reflecting volume models is to achieve an optimal balance between the complexity of the geometric models and the realism of the visual and haptic displays in real time.
Fig. 2.11 Structure of haptic rendering
The following issues must be addressed in order to provide meaningful force feedback [27, 72]:
1. Force computation rate: this rate must be high enough, and the latency low enough, to generate a proper feel of the operation.
2. Generation of contact force: this creates the feel of the object during the surgical simulation. Interaction forces between the tool and the bone can be calculated using mathematical models.
Haptic rendering involves several important components: force modeling, collision detection, and force generation, as shown in Fig. 2.11.
2.6.1 Force Modeling

Bone material removal operations are of considerable importance in orthopedic surgery [73]. In hip and knee replacement procedures, for instance, the geometrical accuracy of the prepared bone surface is particularly relevant to achieving accurate placement and good fixation of the implant. Bone drilling is needed prior to many orthopedic operations, such as pin or screw insertion into the bone, and it requires a high level of surgical skill. Several studies on bone drilling have been reported in the literature. Wiggins and Malkin [74] investigated the interrelationships between thrust pressure, feed rate, torque, and specific cutting energy (energy per unit volume required to remove material) for three types of drill bits. Jacob et al. [75] presented research results showing that the drill point geometry is critical when attempting to minimize drilling forces and that a softening effect occurs when the bone is drilled at relatively high speeds. Hobkirk and Rusiniak [76] studied the relationships between drilling speeds, operator techniques, types of drills, and the applied forces in bone drilling. Through experiments they showed that the peak force exerted on the drill varied between 5.98 and 24.32 N, and that the mean vertical force ranged from 4.22 to 18.93 N. Karalis and Galanos [77]
tested the drilling force against the bone hardness and triaxial strength, and found a linear correlation between the triaxial compressive strength and the drilling force. Abouzgia and James [78] investigated the dependence of force on drill speed and measured the energy consumption during drilling. They found that the drilling force increased slightly with increase in speed at low starting speeds and decreased with increase in speed at high starting speeds. Some machining force models proposed later by other researchers are given below with specific equations. Allotta et al. [79] developed an experimental model for the description of a breakthrough during the penetration of a twist drill in a long bone, as illustrated in Fig. 2.12. They presented an equation for the thrust force required to drill a hole and reported its good correlation with experimental data. The thrust force required to drill a bone is

T = K_s \, a \, \frac{D}{2} \sin\left(\frac{\beta}{2}\right) \qquad (2.1)
where T is the thrust force, K_s is the total energy per unit volume, a is the feed rate expressed in unit length per revolution, D is the diameter of the drill bit, and β is the convex angle between the main cutting lips (see Fig. 2.12). K_s represents the sum of the shear energy required to produce gross plastic deformation; it is primarily the friction energy of the chip sliding past the tool plus other minor energies. K_s has been shown to vary between 4.8R_u and 6R_u, where R_u is the unitary ultimate tensile load; K_s = 5R_u is a practically acceptable value. During rotation and penetration across the bone, the drill bit is subject to a resistant torque (besides the thrust force) of

M_z = 5 R_u \, a \, \frac{D^2}{8} \qquad (2.2)

Fig. 2.12 Modeling force in drilling a long bone (cortical and trabecular bone)
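As a worked illustration of Eqs. (2.1) and (2.2), the following Python sketch evaluates the thrust force and resistant torque of the Allotta et al. model. Keeping the units consistent is left to the caller, and the default K_s = 5R_u simply follows the practically acceptable value quoted above.

```python
import math

def drilling_thrust_and_torque(R_u, feed_per_rev, drill_diameter, beta_rad, K_s=None):
    """Evaluate thrust force (2.1) and resistant torque (2.2) of the Allotta et al. model.

    R_u: unitary ultimate tensile load of the bone material,
    feed_per_rev: feed a in length per revolution,
    drill_diameter: D, beta_rad: convex angle between the main cutting lips (radians).
    """
    if K_s is None:
        K_s = 5.0 * R_u                     # practically acceptable value K_s = 5*R_u
    thrust = K_s * feed_per_rev * (drill_diameter / 2.0) * math.sin(beta_rad / 2.0)
    torque = 5.0 * R_u * feed_per_rev * drill_diameter ** 2 / 8.0
    return thrust, torque

# Illustrative (made-up) parameter values, just to show the call.
T, Mz = drilling_thrust_and_torque(R_u=100.0, feed_per_rev=0.1, drill_diameter=3.0,
                                   beta_rad=math.radians(118))
print(T, Mz)
```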
Udilijak et al. [80] investigated the key parameters affecting bone drilling and modeled the drilling force as a function of influencing parameters including axial feed, cutting speed, and drill tip angle. From their experiments, they obtained the
mathematical dependence of the axial drilling force on the influencing parameters as follows:

F = 58.42 \, f_z^{0.439} \, \varepsilon^{3.024} \qquad (2.3)
where F is the axial drilling force in N, f_z is the feed rate per tooth in mm, and ε is the drill tip angle in rad. Chi et al. [24] presented another drilling force model obtained by performing regression of the measured drilling force versus process and material parameters. The obtained force model was validated by performing additional experiments with different sets of parameter values. The thrust force model can be written as:

T = 134.6 \, N^{-0.3327} \, v^{0.5189} \, \rho^{1.1841} \qquad (2.4)
where T represents the thrust force, N is the speed of the drill bit in rotations per minute, v is the feed rate in mm/s, and ρ is the bone material density in g/cc. Bone burring is also an important surgical procedure used in temporal bone surgery. Agus et al. [11] presented a bone-burr interaction model. For a burr with a spherical bit of radius R rotating at angular velocity ω, they used Hertz's contact theory to derive the following elastic deformation force exerted on the burr:

\vec{F}_e = C_1 R^2 \left(\frac{h}{R}\right)^{3/2} \hat{n} \qquad (2.5)
where C_1 is a constant that depends on the elastic properties of the material, h is the tool embossing height, and \hat{n} is the normal direction of the contact surface. The friction force can be obtained as

\vec{F}_\mu = \mu \int_S P(\vec{\xi}) \, \frac{\vec{r}(\vec{\xi}) \times \vec{\omega}}{\left\| \vec{r}(\vec{\xi}) \times \vec{\omega} \right\|} \, d\sigma \qquad (2.6)

where μ is a friction coefficient, \vec{\xi} represents a point on the contact surface, P(\vec{\xi}) is the pressure exerted by the burr at point \vec{\xi}, \vec{r}(\vec{\xi}) is the displacement measured from the center of the spherical burr bit to point \vec{\xi}, and dσ represents a differential area on the contact surface. The total force that should be provided by the haptic feedback device is

\vec{F}_T = \vec{F}_e + \vec{F}_\mu \qquad (2.7)
Other force models can also be applied in developing a virtual bone surgery system. For example, Eriksson et al. [81] used an energy-based approach to
determine how the force relates to the material removal rate in the milling process. This model is the same as the following simplified milling force model [82, 83]:

F_t = K_t \, \mathrm{MRR} / f \qquad (2.8)

where F_t is the tangential cutting force, f is the feed rate, and MRR is the material removal rate. The radial cutting force is

F_r = K_r \, F_t \qquad (2.9)
where K_t and K_r are constants whose values depend on the workpiece material, the cutting tool geometry, and the cutting conditions. There are other force models, e.g., the spring-damping force model [48, 72, 84], that could also be applied to virtual bone surgery. A haptic device can be used to give the user of the virtual bone surgery system realistic force feedback by rendering the force and torque computed using the cutting force models. Most virtual bone surgery systems use the PHANToM device (SensAble Company) and the GHOST SDK for haptic rendering. Two examples of such a system are shown in Fig. 2.13. This PHANToM has three motors and six encoders to enable 6-DOF motion tracking and 3-DOF force feedback. The GHOST (General Haptics Open Software Toolkit) SDK is a C++ object-oriented software toolkit that enables developers to interact with the haptic device and create a virtual environment at the object level. GHOST SDK provides a special class of functions called gstEffect, which allows adding "global" forces directly to the PHANToM. At each iteration of the servo loop, the pointer of the Effect object is passed to a PHANToM node.
Fig. 2.13 Virtual bone surgical with haptic feedback: (a) Agus et al. [11]; (b) Chi et al. [24]
By generating the Effect force when a non-null intersection between the virtual tool and the virtual bone is detected, the system gives the user a realistic feel of force in real time. In order to run the components of a virtual bone surgery system asynchronously, a multithreading virtual environment can be implemented. The multithreading computation environment allows maintaining suitable update rates for the various components and subsystems of the simulation system. The haptic loop must maintain an update rate of above 1000 Hz, while the graphics loop can get by with an update rate of above 30 Hz.
2.6.2 Collision Detection and Force Generation

In a bone surgery simulator, the haptic rendering consists of two parts: collision detection and force generation. The goal of collision detection, also known as interference detection or contact determination, is to report a geometric contact when it is about to occur or has just occurred [85]. Fast and accurate collision detection between geometric models is a fundamental issue in computer-based surgery simulation. In developing a virtual bone surgery system, it is necessary to perform collision detection for the purpose of simulating material removal and force feedback. An early approach to haptic rendering used a single-point representation of the tool for collision detection and penalty-based methods for force generation [48, 86]. Collision detection was done by checking whether the point representing the tool was inside the object of consideration, such as a bone. The surface information of an anatomic model can be obtained in terms of triangular facets using the marching cube algorithm previously described or by a method of surface reconstruction from dexel data [87]. Penalty-based methods generate a pre-computed force field based on the shortest distance from an interior point of an object to the object's surface. Figure 2.14 shows the problems of penalty-based haptic rendering. One problem with this approach is that there may be points in an object which have the same distance to the surface (see Fig. 2.14a).

Fig. 2.14 Problems of penalty-based haptic rendering: (a) points in an object which have the same distance to the surface, and (b) pressing an object with a sharp tip or fine feature
Fig. 2.15 Haptic rendering by virtual proxy [88]
Another problem is that when pressing an object with a sharp tip or fine feature, such as the one shown in Fig. 2.14b, the user will quickly feel the change of force direction from one side of the object to the other side and then feel no force at all. This can be a serious problem, especially when working with highly detailed models and small structures. Constraint-based methods were introduced by Zilles and Salisbury [88], Ruspini and Khatib [89], and Ruspini et al. [90]. These methods use an intermediate object (representing the tool) which never penetrates a given workpiece, such as a bone in the environment, as shown in Fig. 2.15. The intermediate object (called the God-Object or Proxy) remains on the surface of the workpiece during the simulation process. The force generated by the haptic device is proportional to the vector difference between the physical position of the virtual tool and the proxy position of the virtual tool; a minimal sketch of this proxy-based force computation is given after the list below. The haptic rendering algorithm updates the proxy position with respect to the physical position by locally minimizing the distance from the proxy position to the physical position. Since these calculations have to be performed on-the-fly, constraint-based approaches are computationally more expensive than penalty-based approaches. The single-point representation of an object for collision detection, as described above, has the following drawbacks:
1. It is not suitable for inhomogeneous workpiece material, e.g., human bone.
2. It does not represent the 3D shape of the surgical tool.
3. The virtual tool can reach points which may not be reachable by the real tool, e.g., entering a small hole with a large tool [91].
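The proxy-based force computation mentioned above reduces, in its simplest form, to a spring between the physical tool position and the proxy held on the surface. The following Python sketch is an illustration of that idea only; the stiffness value is made up, and a real system tunes it to the maximum stable stiffness of the haptic device.

```python
import numpy as np

def proxy_force(physical_pos, proxy_pos, stiffness=800.0):
    """Spring-like force for a constraint-based (god-object/proxy) scheme.

    The force sent to the haptic device is proportional to the vector from
    the physical tool position to the proxy constrained to the bone surface.
    """
    return stiffness * (np.asarray(proxy_pos, float) - np.asarray(physical_pos, float))

# Tool tip has penetrated 2 mm below a surface at z = 0; the proxy stays on the surface.
f = proxy_force(physical_pos=(0.0, 0.0, -0.002), proxy_pos=(0.0, 0.0, 0.0))
print(f)   # force pushes the tool back out along +z
```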
Multi-point collision detection methods have been developed more recently [17, 84]. These methods represent 3D shapes using multiple points on the surface of the tool. Using these methods, more realistic simulations of tools and tool-object interaction can be achieved and the drawbacks of the single-point approach can be overcome. However, multi-point collision detection is computationally more expensive. Moreover, this force feedback scheme may generate an unstable force in some cases [92], especially when the number of points on the tool surface is not adequate.
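To illustrate the multi-point idea, the following sketch (a simplified assumption of how such a check could be written, not code from the cited systems) tests a set of sample points on the tool surface against a voxelized bone model and reports which points are currently in contact; a force model would then sum a contribution from each contact point.

```python
import numpy as np

def multipoint_contacts(bone_mask, tool_points, voxel_size=1.0):
    """Check tool-surface sample points against a voxel bone model.

    bone_mask:   3D boolean array (True where bone material is present).
    tool_points: (N, 3) array of sample-point positions in world units.
    Returns the indices of the points currently inside bone.
    """
    idx = np.floor(np.asarray(tool_points) / voxel_size).astype(int)
    inside = []
    for n, (i, j, k) in enumerate(idx):
        if (0 <= i < bone_mask.shape[0] and 0 <= j < bone_mask.shape[1]
                and 0 <= k < bone_mask.shape[2] and bone_mask[i, j, k]):
            inside.append(n)
    return inside

bone = np.zeros((32, 32, 32), dtype=bool)
bone[:, :, :16] = True                      # lower half of the volume is bone
points = np.array([[5.0, 5.0, 10.0], [5.0, 5.0, 20.0]])
print(multipoint_contacts(bone, points))    # -> [0]: only the first point touches bone
```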
2.7 Auditory Rendering

Sound cues can enhance haptic feedback when a user is interacting with an object in a virtual environment. In bone surgery, sound can provide information about the nature of the tool-bone contact region where the material removal operation occurs. For example, the change of sound from higher to lower pitches in bone drilling could signal reaching the interface between the bone and the soft tissues. Thus, it is desirable to include auditory rendering in the system development, so that the VR system can be enriched into a full multimodal interaction environment that includes auditory rendering in addition to graphics and haptics rendering. Therefore, the user can perform virtual bone surgery by simultaneously "seeing" bone material removal through a graphics display device, "feeling" the force via a haptic device, and "hearing" the sound of tool-bone interaction. In a virtual reality system with sound rendering, two kinds of sounds can be used: pre-recorded sound and synthesized sound. Pre-recorded sound is easy to acquire and play back. However, there are several drawbacks associated with using pre-recorded sounds [93]. Most importantly, the sound is static and cannot be changed in response to changes in the simulation environment, including user interactions. Also, a large sound library is required to create a VR system with an acoustically rich virtual environment. Furthermore, it is difficult and impractical to obtain an application-specific sound sequence for every application. Synthesized sound, on the other hand, is flexible, dynamic, and especially advantageous for user-action-related virtual reality scenarios compared to pre-recorded sound. Thus, in the virtual bone surgery system developed by Niu and Leu [91], synthesized sounds were used to simulate the material removal process in drill-bone interaction. Although some research work can be found in the literature regarding the synthesis of contact sound for interactive simulation in a virtual environment [94, 95], there has been little effort on sound synthesis for material removal. Most studies in virtual bone surgery concentrate on graphics and haptic interfaces, and few papers [26, 32–34] can be found in the literature about auditory rendering for virtual bone surgery. The information in these studies did not address the subtle change in sound characteristics [96]. The most challenging issue of sound synthesis in virtual bone surgery is to have a sound model that allows real-time simulation while being
sufficiently accurate to represent the important features of the sound during tool-bone interaction [26].
2.7.1 Sound Modeling

There has been some initial work on sound modeling for interactive bone surgery simulation in a virtual environment. Most of these methods can be categorized into physical modeling and spectral modeling. Physical modeling employs the knowledge of the physical laws that govern the motions and interactions within the system under study and expresses them as mathematical formulae. Spectral modeling is based on modeling the properties of sound waves as they are perceived by the listener [97]. Besides the two main categories of sound modeling synthesis methods mentioned above, there are also other methods found in the literature, such as the frequency modulation (FM) method and the autoregressive (AR) method. FM modeling, originally introduced by Chowning [98], is a fundamental digital sound synthesis technique that employs an oscillating function. It combines two or more sinusoidal waves to form more complex waveforms. AR modeling was used by Kim et al. [99] to simulate small drill sounds for a dental simulator. This mathematical modeling of a time series assumes that each value of the series depends only on a weighted sum of the previous values of the same series plus noise. The linear models give rise to rapid and robust computations. Although there are many methods of sound modeling, there has been little work on sound synthesis associated with material removal [26, 96]. It is difficult to use a physics-based method to model the machining sound because the mechanism of sound generation in the bone material removal process is highly complex. The primary objective of sound modeling and rendering for virtual bone surgery is to generate the sound of tool-bone interaction during the bone material removal operation. Thus, the virtual bone surgery system development consists of sound acquisition in the real world, sound characteristics analysis, mathematical model generation, and sound rendering for auditory display [26]. Niu and Leu [91] developed a virtual bone surgery system based on spectral modeling. A sound model was developed and used to generate the synthetic sound in virtual bone surgery. The sound was modeled as the sum of a set of sinusoids plus a noise residual. Spectral modeling synthesis (SMS) was used for the virtual bone surgery simulation to determine the sinusoids and residual. SMS was used to find the mathematical models for free drilling, cortical bone drilling, cancellous bone drilling, etc. The general form of SMS can be written as [100]:

s(t) \approx \hat{s}(t) = \sum_{k=1}^{K} A_k \sin(\omega_k t + \theta_k) + r(t)
Fig. 2.16 Different stages of drilling in the bone material (spectrogram: before drilling, free running, drilling on cortical bone material, drilling on cancellous bone material, drilling out of the bone material (perforation), after drilling) [26]
where s(t) is the input signal; A_k, ω_k, and θ_k are the amplitude, frequency, and phase, respectively, of the kth sinusoid; and r(t) is the residual component of the signal at time t. In developing the virtual bone surgery system, Niu and Leu [91] conducted experiments to record sound clips from the drilling of different bone materials, and the power spectra of those sounds were obtained by the Fast Fourier Transform (FFT). It was found that the power spectra of the sounds obtained from the drill's free running, cortical bone drilling, and cancellous bone drilling were all similar, as shown in Fig. 2.16. Compared to free running, bone drilling influences primarily the amplitudes of the sound spectrum at peak frequencies, although the frequencies of some of the spectral peaks may shift slightly. The level of sound generated from the drilling of cortical bone material is higher than that generated from the drilling of cancellous bone material, indicating that the denser the bone material, the higher the sound amplitude. Niu [26] performed spectral modeling on various bone drilling sounds to obtain the sinusoidal and residual parts for each of these sounds. It was shown that the
resulting residual parts shared a high level of similarity. Therefore, in the synthesis of bone drilling sounds, the residual part was kept the same as that obtained from the free running sound, and only the sinusoidal components were varied. Magnitude changes and frequency shifts, if any, were then applied to the sinusoidal components for generating the synthesized sounds for cortical bone drilling and cancellous bone drilling. The input peak frequencies, magnitudes, and phases were transformed into time-domain sinusoids and then added together frame by frame, in what is called the additive synthesis process. The synthesis of the residual part of the sound took the residue's enveloped spectrum, and the Inverse Fast Fourier Transform (IFFT) with a window function was applied to this spectrum to generate a stochastic signal in the time domain. Finally, the sinusoidal and residual parts were added together frame by frame to create the synthesized sound.
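The additive part of this process can be sketched in a few lines of Python. The example below uses made-up spectral peaks, and low-level white noise stands in for the IFFT-generated residual; it simply sums time-domain sinusoids and a residual, as in the synthesis procedure described above.

```python
import numpy as np

def synthesize_drill_sound(peaks, duration=1.0, fs=44100, noise_level=0.01):
    """Additive synthesis of a drilling sound: sum of sinusoids plus a residual.

    peaks: list of (amplitude, frequency_hz, phase) tuples taken from spectral
    analysis of a recorded sound.  The residual is approximated here by
    white noise rather than by an IFFT of the residue's spectral envelope.
    """
    t = np.arange(int(duration * fs)) / fs
    signal = np.zeros_like(t)
    for amplitude, freq, phase in peaks:
        signal += amplitude * np.sin(2 * np.pi * freq * t + phase)
    residual = noise_level * np.random.randn(t.size)
    return signal + residual

# Hypothetical spectral peaks for a free-running drill.
sound = synthesize_drill_sound([(0.6, 180.0, 0.0), (0.3, 360.0, 0.5), (0.1, 720.0, 1.0)])
print(sound.shape)
```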
2.7.2 Sound Rendering

Sound rendering, first introduced by Takala and Hahn [101], is a technique for generating a synchronized soundtrack for animations in a virtual environment. The synthesized sound in the time domain can be used for sound rendering in a virtual bone surgery system. Sound rendering generates the sound output and sends it to suitable hardware (sound card, loudspeaker, etc.) for the user to hear. For auditory rendering of the synthesized sound, the virtual bone material removal system can communicate with a sound card on a PC and create sound buffers using the Microsoft DirectSound API. A set of sound signals including the sinusoidal and residual parts can be generated and placed in the secondary buffers, with DirectSound adding these signals and writing the result into the primary buffer to render the sound [26].
2.8 Conclusion

Developing a bone surgery simulation system is a major undertaking and poses many technical challenges. The overarching objective of such a development is to build a high-fidelity simulation system which incorporates the latest technologies in virtual reality, including computer graphics, haptics, and auditory rendering. This book chapter has reviewed the current bone surgery simulation systems and the methods and techniques used to develop such systems. The described virtual bone surgery system development consists of the following tasks: image processing, geometric modeling, physical modeling, graphic rendering, haptic rendering, and auditory rendering. A virtual bone surgery system usually takes preprocessed CT or MRI image data to construct a geometric model of the bone and soft tissue using volume or surface modeling methods, and updates the geometric model continuously during the virtual surgery. Special data structures
such as octree or bounding volume plus quadtree are used to handle the large set of medical data. To perform graphic displays in real time, surface rendering with a marching cube algorithm is used in most virtual bone surgery systems. For force feedback, physics-based models are used to represent the interface forces between the surgical tools and the bone/soft tissue in deformation and material removal. Auditory rendering can play an important role in the generation of an immersive virtual environment, and the sound can be modeled by physical modeling or spectral modeling. Overall, graphic rendering, haptic rendering, and auditory rendering are generated in real time using multithreading computations to provide realistic graphic, haptic, and auditory feedback during the bone surgery simulation. Research and development work on virtual bone surgery is far from mature. An ideal virtual bone surgery system should be able to provide high-fidelity dynamic graphic displays with realistic force and sound feedback during the simulated surgery process. In the future, with newly emerging computer hardware, algorithms, and technologies, it will be possible to increase the level of realism by adding more virtual reality aspects to the bone surgery simulation system. For example, more realistic force, sound, and visual effects, such as bleeding, debris formation, and fluid flow during bone surgery, could be included to make a virtual bone surgery system more immersive, intuitive, and interactive.
References 1. R.S. Haluck, T.M. Krummel, Computers and virtual reality for surgical education in the 21st century. Arch. Surg. 135, 786–792 (2000) 2. S. Chan, P. Li, G. Locketz, K. Salisbury, N.H. Blevins, High-fidelity haptic and visual rendering for patient-specific simulation of temporal bone surgery. Comput Assist Surg 21(1), 85–101 (2016) 3. M. Conditt, P.C. Noble, M.T. Thompson, S.K. Ismaily, G. Moy, K.B. Mathis, Quantitative analysis of surgical technique in total knee replacement, in Proc. of the 49th Annual Meeting of the Orthopaedic Research Society (2003), pp. 13–17 4. P.J. Gorman, A.H. Meier, T.M. Krummel, Computer-assisted training and learning in surgery. Comput. Aided Surg. 5, 120–130 (2000) 5. C.V. Edmond, D. Heskamp, D. Sluis, D. Stredney, G.J. Wiet, R. Yagel, S. Weghorst, P. Oppenheimer, J. Miller, M. Levin, L. Rosenberg, Simulation for ENT endoscopic surgical training, in Proc. Medicine Meets Virtual Reality 5, San Diego, CA (1997), pp. 518–528 6. S.L. Delp, P. Loan, C. Basdogan, J.M. Rosen, Surgical simulation: an emerging technology for training in emergency medicine. Presence 6(2), 147–159 (1997) 7. M. Bro-Nielsen, D. Helfrick, B. Glass, X. Zeng, H. Connacher, VR simulation of abdominal trauma surgery, in Medicine Meets Virtual Reality 6 (MMVR-6) (IOS Press, San Diego, 1998), pp. 117–123 8. V.L. Barker, in Cathsim, ed. by J.D. Westwood, H.M. Hoffman, R.A. Robb, D. Stredney (IOS Press, San Francisco, 1999), pp. 36–37 9. J. Berkley, S. Weghorst, H. Gladstone, G. Raugi, D. Berg, M. Ganter, in Fast Finite Element Modeling for Surgical Simulation, ed. by J.D. Westwood, H.M. Hoffman, R.A. Robb, D. Stredney (IOS Press, San Francisco, 1999), pp. 55–61 10. J.S. Røtnes, J. Kaasa, G. Westgaard, M. Grimnes, T. Ekeberg, A tutorial platform suitable for surgical simulator training (SimMentor™), in Medicine Meets Virtual Reality (IOS Press, Amsterdam, 2002)
Chapter 3
Three-Dimensional Medical Imaging: Concepts and Applications Paulo Henrique Junqueira Amorim, Thiago Franco de Moraes, Jorge Vicente Lopes da Silva, and Helio Pedrini
3.1 Introduction Medical imaging has been explored since the discovery of the X-ray in the late nineteenth century, becoming more popular with the advent of computers and of several medical imaging modalities [43], such as computed tomography (CT), ultrasonography (US), magnetic resonance imaging (MRI), and positron emission computed tomography (PET-CT), among others. Each modality is more suitable for acquiring images of a certain type of tissue or for visualizing certain pathologies. CT, for instance, is adequate for visualizing hard tissues such as bone and teeth, whereas MRI is better suited to soft tissues, brain, and ligaments. In the past, the only way to make an examination available to the doctor or patient was to print it on radiographic or paper film. Although this practice still exists, nowadays the most common procedure in large centers is to provide the examination digitally. Besides reducing the associated costs, digital media provide efficient and easy access to all the images acquired in an examination, whereas with printed film the radiologist used to select only the images considered most important. During this transition, DICOM (Digital Imaging and Communications in Medicine) emerged as a protocol for exchanging information between medical devices, integrating medical hardware and software to interpret images from different manufacturers. This protocol leveraged the diffusion of medical images, as well as the development of numerous tools for processing, analysis, and visualization
P. H. J. Amorim · T. F. de Moraes · J. V. L. da Silva () Division of 3D Technologies, Center for Information Technology Renato Archer, Campinas, SP, Brazil e-mail: [email protected]; [email protected]; [email protected] H. Pedrini Institute of Computing, University of Campinas, Campinas, SP, Brazil e-mail: [email protected] © Springer Nature Switzerland AG 2021 B. Bidanda, P. J. Bártolo (eds.), Virtual Prototyping & Bio Manufacturing in Medical Applications, https://doi.org/10.1007/978-3-030-35880-8_3
of medical images, such as the open-source InVesalius [6, 18, 30], 3D Slicer [19], OsiriX [63], among others. This chapter focuses on fundamental aspects related to the generation and visualization of models for 3D printing. The following sections present concepts on medical imaging, preprocessing, segmentation, volume rendering, image data representation, 3D printing, and biofabrication.
3.2 Acquisition This section presents concepts related to the acquisition of medical images, with a focus on the most popular modalities, which allow for the visualization of anatomical regions in three dimensions. Although there are filters to improve the quality of the input images, as presented in Sect. 3.3, a proper acquisition process should already facilitate the subsequent tasks of diagnosis or 3D printing of an anatomical model.
3.2.1 Computed Tomography (CT) Computed tomography (CT) [33] corresponds to an imaging procedure in which a narrow beam of X-rays, discovered by physicist Wilhelm Conrad Röntgen in 1895, is aimed at a patient to allow a non-invasive visualization of internal parts of the human body. CT employs X-rays as an energy source to generate three-dimensional (3D) images, unlike conventional radiography, which overlaps several anatomical structures in a single two-dimensional (2D) image. CT generates several images in a transverse orientation of the region under analysis. By using specific computer tools, it is possible to stack the generated images and provide the expert with the possibility of visualizing only the tissues of interest in 3D. A tomography scanner consists of a ring with several sensors and an X-ray emitter. A table is arranged in the middle of the ring to support the patient to be scanned. This table is shifted while X-rays are emitted and the sensors read the amount of X-ray that has passed through the patient's body [5]. The development of the first clinical CT equipment began in 1967, in England, when Godfrey Hounsfield observed that it would be possible to visualize internal structures without overlap by taking different X-ray projections through the human body [28]. The projections are represented by sinograms [38] and reconstructed by means of the back-projection technique in order to generate the final image. CT scanners are currently in the sixth generation, also known as multi-slice, since they can acquire multiple images with only one dose of X-rays. Each pixel of the CT image is described in the Hounsfield scale (HU). The HU value of a pixel with average linear attenuation coefficient μ is given by

$$HU = 1000 \times \frac{\mu - \mu_{water}}{\mu_{water} - \mu_{air}} \tag{3.1}$$
where $\mu_{air}$ and $\mu_{water}$ are the linear attenuation coefficients of air and water, respectively. In this scale, water is represented by 0, air by −1000, and the densest bones by about 3000 HU [28]. A variation of CT is angiotomography, which uses an iodine-based contrast agent to enhance arteries and blood vessels in the images. PET-CT (Positron Emission Tomography Computed Tomography) is another variation that uses computed tomography to acquire anatomical images along with a positron detector. To emphasize the anatomical regions with high metabolism, the patient receives a solution with isotopes. A region with some type of tumor will consume larger amounts of this solution and, consequently, will emit larger amounts of positrons to be captured by the sensors [11]. At the end of the image acquisition, it is necessary to perform a registration between the anatomical and metabolic images.
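As a minimal illustration of Eq. (3.1), the Python sketch below converts a few hypothetical linear attenuation coefficients into Hounsfield units; the numerical values are placeholders chosen only so that the output spans roughly air, water, and dense bone.

```python
import numpy as np

# Placeholder linear attenuation coefficients (arbitrary units) for three
# voxels: roughly air, water, and a dense material.
mu = np.array([0.0, 0.19, 0.38])
mu_water, mu_air = 0.19, 0.0

# Eq. (3.1): Hounsfield units from the average attenuation coefficient.
hu = 1000.0 * (mu - mu_water) / (mu_water - mu_air)
print(hu)  # approximately [-1000, 0, 1000]
```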
3.2.2 Cone Beam Computed Tomography (CBCT) Cone beam computed tomography was created to provide images of the dental and craniofacial region with lower radiation emission and cost compared to computed tomography (CT). Analogously to CT, CBCT has a transmitter and several detectors, although the X-rays are emitted in the form of a conical beam. During the acquisition process, the patient remains seated in a chair, while the emitter and detectors make a turn around the patient's head. Due to its compact characteristics, the device is widely used in dental clinics [34]. After acquiring and reconstructing the signal to form a volume, the device performs volume slicing so that it is possible to view each image individually or perform a 3D reconstruction again. Another advantage of this type of exam is the possibility of reconstructing panoramic images from the volume. Unlike CT, the pixels of a CBCT image are not described on the Hounsfield scale. Each manufacturer maintains its own standard; however, lighter gray tones represent denser tissues, whereas darker gray tones represent soft tissues.
3.2.3 Magnetic Resonance Imaging (MRI) Magnetic resonance imaging (MRI) equipment uses a strong magnetic field to align the nuclei of hydrogen atoms. An atomic nucleus possesses protons with a positive electric charge and neutrons whose positive and negative charges cancel out. This electric charge causes the protons to rotate along their central axis, which results in their angular momentum, also known as spin. When they are surrounded by a strong magnetic field, they line up pointing in one direction, and they become misaligned when they suffer a disturbance from electromagnetic signals. At the end of the disturbance, they align again, at which point the signal is captured and later filtered and processed to generate the image [10].
Fig. 3.1 Examples of images acquired under (a) T1 and (b) T2 imaging
When the protons align again, releasing energy, it is possible to capture signals to produce the images. Magnetic resonance equipment allows generating images with enhancement of different regions. These variations are known as sequences. The most common sequences are T1 and T2: in T1, the equipment measures the longitudinal relaxation of the spins, whereas in T2 it measures the transverse relaxation. As shown in Fig. 3.1, images from the T1 sequence highlight regions that have less hydrogen, such as fat and fibers. The T2 sequence emphasizes regions rich in hydrogen, such as the eyeball and the cortex of the brain. In addition to these sequences, there are FLAIR (fluid attenuated), DWI (diffusion weighted), tractography, among others [60]. Analogously to CT, a solution can be used to contrast blood vessels, usually gadolinium-based [41].
3.2.4 Ultrasonography (US) Ultrasonography is an exam modality created around 1950 that uses echoes of sound pulses that reach an anatomical region to form images. In this modality, one or more transducers are employed to transform electrical signals into high frequency sounds, usually between 1 and 10 MHz. The ultrasound echo is captured by the transducer, then converted into electrical signals and sent to a computer to translate the signals into images [16]. Similarly to other exam modalities, there are variations in the acquisition of ultrasound images depending on the region of interest and pathology investigated. For example, A-mode offers mechanisms for analysis of heart valve abnormalities, in which only a one-dimensional chart is provided. B-mode allows the visualization of cross-section images of the anatomical region under analysis. M-mode generates
successive signals in A-mode along with an image that allows measuring changes of structures over time. Doppler imaging highlights regions with movements at different frequencies, such that these regions are presented in different colors in the images [59]. It is also possible to generate three-dimensional images with ultrasound. This type of image is commonly used in obstetrics for fetal visualization. For this, the device is equipped with multiple transducers that capture multiple signals and generate multiple images. These images are interpolated, resulting in a volume. In order to separate the fetus, the volume is segmented and presented on the screen of the equipment with ray casting techniques. Some devices, known as 4D ultrasound, are able to present the captured 3D images in real time [37].
3.2.5 Digital Imaging and Communications in Medicine (DICOM) Some medical examinations, such as computed tomography scans acquired prior to 1980, were made available on film or, in the case of ultrasound, on paper. With the popularization of personal computers, examinations became available in digital format from the manufacturers. Another advance was the networking of computers in the radiological environment. Since each manufacturer had its own file format or specific protocol of communication between equipment and computers, many problems occurred in the integration and management of medical data. Digital Imaging and Communications in Medicine (DICOM) is the result of a committee created by several medical device manufacturers to standardize the printing, transmission, and storage of medical images [57]. The first version was released by the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) in 1985. The second and third versions were released in 1988 and 1993, respectively. Known as the ACR/NEMA standard in the previous versions, the DICOM name and services were adopted in the third version. In addition to pixel intensities, images in DICOM format contain meta-information about the patient, equipment, acquisition, among others. The DICOM standard also defines policies for interoperability between devices over the TCP/IP layer by defining how downloads, searching, and image caching are performed on PACS [17] servers. Furthermore, DICOM currently supports a number of other data types, such as videos, triangular meshes, and waveforms [2].
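To make the role of DICOM meta-information concrete, the sketch below reads one CT slice with the pydicom library and applies the standard rescale tags to obtain Hounsfield units. The file name is hypothetical, and the rescale tags are assumed to be present as in typical CT series.

```python
import pydicom

# Hypothetical path to a single CT slice stored as a DICOM file.
ds = pydicom.dcmread("ct_slice_0001.dcm")

# Meta-information travels with the pixel data in the same file.
print(ds.PatientID, ds.Modality, ds.Rows, ds.Columns)

# Stored pixel values are usually mapped to HU by a linear rescale.
pixels = ds.pixel_array.astype("float32")
slope = float(getattr(ds, "RescaleSlope", 1))
intercept = float(getattr(ds, "RescaleIntercept", 0))
hu = pixels * slope + intercept
```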
3.3 Preprocessing After the acquisition process, medical images undergo a series of steps, according to the application under consideration. Common stages include segmentation,
registration, and classification (for instance, the identification of a tumor in a tomography image). However, the acquired images are frequently not suitable for these steps due to the occurrence of noise and low contrast. Additionally, certain characteristics of the image must be highlighted. The preprocessing step addresses these problems.
3.3.1 Noise Filtering Noise corresponds to variations in color or brightness of the image that were not present in the physical object during its acquisition. It occurs due to a number of factors, such as differences in radiation scattering on the surface of an object before reaching the sensor of a CT scanner, or bit flips during image transmission. Noise is an undesirable artifact that can compromise the efficiency of segmentation, registration, and classification algorithms, as well as the image analysis. Figure 3.2a shows an example of a noisy image. There are several methods for noise reduction. In these methods, pixels or voxels are modified taking into account the intensities of their neighboring elements. However, many of these methods may blur the image, making the contours of the objects diffuse and affecting subsequent processing. The mean filter [23, 54, 65] smooths the image by replacing the value of each center pixel (or voxel) within a local neighborhood (sliding window) by the average intensity of its neighbors. The median filter [23, 54, 65] replaces the value of each center pixel (voxel) with the median of its neighborhood. This approach is usually more robust than the mean filter because outliers have little impact on the median value. The Gaussian filter [23, 54, 65] replaces the value of the central element with a weighted average of the pixel values in the neighborhood, whose weights decrease with distance from the neighborhood center according to the normal (Gaussian) distribution. This method does not preserve image edges. Bilateral filtering [71] is a non-iterative approach to edge-preserving smoothing. In this method, each pixel or voxel is replaced with a weighted average of its neighbors; however, unlike the Gaussian filter, the weights depend on both the Euclidean distance of the neighbor to the pixel to be changed and the difference in values between them. Figure 3.2 illustrates the application of the mean, median, Gaussian, and bilateral filters to an input medical image for noise reduction purposes.
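A minimal sketch of the four filters discussed above, using SciPy and scikit-image as stand-ins for any image processing library; the random array merely takes the place of a real CT slice, and the window sizes and sigmas are illustrative.

```python
import numpy as np
from scipy import ndimage
from skimage.restoration import denoise_bilateral

slice_img = np.random.rand(256, 256).astype("float32")   # placeholder slice

mean_f = ndimage.uniform_filter(slice_img, size=3)        # mean filter
median_f = ndimage.median_filter(slice_img, size=3)       # median filter
gauss_f = ndimage.gaussian_filter(slice_img, sigma=1.0)   # Gaussian filter
bilat_f = denoise_bilateral(slice_img,                    # edge-preserving
                            sigma_color=0.05, sigma_spatial=2.0)
```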
3.3.2 Edge Detection Edge detection techniques [23, 54, 65] are methods that search for regions in the image whose change of intensity of the pixels or voxels is abrupt, that is, a discontinuity. These regions correspond to the edges of the image, identifying a
Fig. 3.2 Different low-pass filtering techniques for noise reduction. (a) Original slice (with noise). (b) Mean filtering. (c) Median filtering. (d) Gaussian filtering. (e) Bilateral filtering
Fig. 3.3 Example of edge map obtained through gradient magnitude. (a) Original slice. (b) Gradient magnitude
possible boundary between two or more objects. Edge detection is an important process for image analysis because it can significantly reduce the data to be processed while preserving useful information, such as corners and object contours. Several edge detection methods are based on the concept of gradient, whose direction indicates the largest change in intensity in a neighborhood of a pixel or voxel. Mathematically, it is expressed as

$$\nabla f = \left( \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y} \right) \tag{3.2}$$

The magnitude of the gradient is the rate of change of the gradient vector, given by

$$|\nabla f| = \sqrt{\left( \frac{\partial f}{\partial x} \right)^2 + \left( \frac{\partial f}{\partial y} \right)^2} \tag{3.3}$$

The gradient direction, perpendicular to the image edge, is expressed as

$$\theta = \tan^{-1}\left( \frac{\partial f}{\partial y} \Big/ \frac{\partial f}{\partial x} \right) \tag{3.4}$$
Figure 3.3 shows an example of image edges obtained by calculating the gradient magnitude. Finite-difference approximations of the first-order and second-order derivatives can be used to estimate the image gradient. Prewitt and Sobel edge detectors with 3 × 3 kernels are expressed, respectively, in Eqs. (3.5) and (3.6) as
Fig. 3.4 Illustration of edge map computed through the Sobel operator. (a) Horizontal derivative. (b) Vertical derivative. (c) Gradient magnitude
$$\frac{\partial f}{\partial x} \approx \begin{bmatrix} -1 & 0 & +1 \\ -1 & 0 & +1 \\ -1 & 0 & +1 \end{bmatrix} \qquad \frac{\partial f}{\partial y} \approx \begin{bmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ +1 & +1 & +1 \end{bmatrix} \tag{3.5}$$

$$\frac{\partial f}{\partial x} \approx \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} \qquad \frac{\partial f}{\partial y} \approx \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix} \tag{3.6}$$
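As a sketch of how the kernels in Eqs. (3.5) and (3.6) are used in practice, the code below applies the Sobel operator with SciPy and combines the two derivatives into the gradient magnitude of Eq. (3.3) and direction of Eq. (3.4); the input array is a placeholder for a real slice.

```python
import numpy as np
from scipy import ndimage

img = np.random.rand(256, 256).astype("float32")  # placeholder slice

gx = ndimage.sobel(img, axis=1)   # horizontal derivative, Eq. (3.6)
gy = ndimage.sobel(img, axis=0)   # vertical derivative, Eq. (3.6)

magnitude = np.hypot(gx, gy)      # Eq. (3.3)
direction = np.arctan2(gy, gx)    # Eq. (3.4)
```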
Figure 3.4 illustrates the results after applying the Sobel operator to the image shown in Fig. 3.3a. Horizontal and vertical derivative approximations are computed and combined to generate the final edge map. The Canny edge detector [9] aims to optimize the location of the edge center in the presence of noise. To mitigate the effect of image noise, the image is initially smoothed using a Gaussian filter. The gradient is then calculated using the Prewitt or Sobel operators. The next step is called non-maximum suppression, whose intent is to suppress all gradients to 0, except for the local maxima, which are the regions with the largest change in intensity. This step works as an edge-thinning technique. Since the resulting edges can contain pixels affected by noise or varying intensity, a double threshold is applied to remove these pixels, called spurious responses. Low and high thresholds are determined empirically according to the application under consideration and the available images. Edge pixels with gradient magnitude greater than the high threshold are called strong edge pixels. Pixels whose gradient magnitude values are between the high and low thresholds are called weak edge pixels. Values lower than the low threshold are suppressed to 0. Strong edge pixels are retained; weak edge pixels are retained only if they are connected to strong edge pixels with respect to their 8-connectivity. This last step is called hysteresis. Figure 3.5 illustrates the application of the Canny edge detector to the image in Fig. 3.3a.
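A hedged example of the Canny pipeline using the implementation in scikit-image; sigma controls the initial Gaussian smoothing, and the two thresholds drive the hysteresis step. All parameter values are illustrative, not recommendations.

```python
import numpy as np
from skimage import feature

img = np.random.rand(256, 256).astype("float32")  # placeholder slice

# Gaussian smoothing, gradient computation, non-maximum suppression, and
# hysteresis are performed internally by the library call.
edges = feature.canny(img, sigma=2.0, low_threshold=0.05, high_threshold=0.2)
```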
Fig. 3.5 Canny edge detector applied to an input image
3.3.3 Contrast Enhancement The main purpose of image enhancement techniques [23, 54, 65] is to enhance the quality of images to facilitate interpretation by medical experts. An important aspect is the image contrast, which is the difference in brightness between two or more regions of the image. Low-contrast images make the localization of certain structures more difficult. Figure 3.6a shows an example of a low-contrast image. In this image, the kidneys are barely apparent due to poor contrast. There are several methods for enhancing contrast in an image. Many methods use the histogram of the image to adjust the intensity values. A histogram is a graphical representation of the number of pixels at each intensity value of an image. Figure 3.6b shows an example of a histogram computed from Fig. 3.6a. The histogram equalization technique redistributes the intensity values of the pixels or voxels so that the image has a uniform distribution of intensity values. Figure 3.6 illustrates the application of this technique. In this example, Fig. 3.6a presents low contrast, where it is difficult to differentiate the kidneys from other nearby structures. Figure 3.6b shows the image histogram. After applying the histogram equalization algorithm, we have an image with better contrast (Fig. 3.6c), where the kidneys are more apparent and whose histogram (Fig. 3.6d) is more uniform.
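The sketch below implements global histogram equalization directly with NumPy for an 8-bit image: intensities are remapped through the normalized cumulative histogram. The random input merely stands in for a low-contrast slice.

```python
import numpy as np

img = (np.random.rand(256, 256) * 255).astype(np.uint8)  # placeholder slice

hist, _ = np.histogram(img, bins=256, range=(0, 256))
cdf = hist.cumsum() / hist.sum()               # normalized cumulative histogram
equalized = (cdf[img] * 255).astype(np.uint8)  # remap intensities through the CDF
```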
3.4 Segmentation Image segmentation techniques aim to extract objects or regions of interest from an image. The extraction takes place according to certain selection criteria, for example, color or intensity of the region of interest, texture, connectivity of objects, among other information. An example of application of the segmentation in medical imaging is the extraction of arteries to facilitate aneurysm analysis. There are numerous image segmentation methods available in the literature [26, 53, 74].
Fig. 3.6 Histogram equalization technique applied to an input image. (a) Original slice. (b) Original slice histogram. (c) Image after histogram equalization. (d) Equalized histogram
3.4.1 Thresholding Thresholding is a simple yet powerful image segmentation technique to separate regions or objects of interest from the background using pixel color or intensity. The technique can be considered either within a local neighborhood or globally, where a pixel intensity or color range is selected throughout the image. For example, given an interval $[t_{min}, t_{max}]$, pixels outside this range are assigned the value 0, and 1 otherwise. Let $f(x, y)$ and $g(x, y)$ be the input image and the resulting image after the thresholding process, respectively. Then,

$$g(x, y) = \begin{cases} 1 & \text{if } t_{min} \le f(x, y) \le t_{max} \\ 0 & \text{otherwise} \end{cases} \tag{3.7}$$
A thresholding technique was developed by Otsu [52], which considers the existence of two classes in an image, that is, background pixels (values that are lower than a computed TOtsu ) and object pixels (values that are higher than or equal to TOtsu ). In this method, the variance and the overall mean of the image are calculated. Then, the variance ratio between the classes is maximized with respect to the total variance. Figure 3.7 shows an example of application of the threshold TOtsu to remove the background pixels. In Fig. 3.7c, it is possible to observe the normalized histogram with the line of separation between the two classes.
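A minimal sketch of interval thresholding (Eq. 3.7) and of Otsu's automatic threshold, the latter via scikit-image; the interval limits and the synthetic input are placeholders.

```python
import numpy as np
from skimage.filters import threshold_otsu

img = np.random.rand(256, 256).astype("float32")  # placeholder slice

# Interval thresholding, Eq. (3.7): 1 inside [t_min, t_max], 0 outside.
t_min, t_max = 0.3, 0.7
mask_interval = (img >= t_min) & (img <= t_max)

# Otsu's threshold separates background and object classes automatically.
t_otsu = threshold_otsu(img)
mask_otsu = img >= t_otsu
```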
3.4.2 Region Growing Traditional techniques of region growing require an initial pixel, known as a seed, that belongs to the object or region of interest to be segmented. This seed expands to the other pixels that satisfy a certain stopping criterion, such as similarity of color, grayscale, or texture. It is also necessary to define the neighborhood size for the seed pixel. In two-dimensional images, typical values are 4 and 8, whereas common values in three-dimensional images are 6, 18, and 26. There are variations of the region growing technique, such as automatic seed insertion [58, 69] and statistical-based region growing [27, 50], among others.
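A simple region growing sketch over a 4-connected neighborhood, assuming the stopping criterion is a fixed intensity tolerance around the seed value; the seed position and tolerance are illustrative.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=0.1):
    """Grow a region from `seed` (row, col) over 4-connected pixels whose
    intensity differs from the seed value by at most `tol`."""
    visited = np.zeros(img.shape, dtype=bool)
    seed_val = img[seed]
    queue = deque([seed])
    visited[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # 4-neighborhood
            rr, cc = r + dr, c + dc
            if (0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
                    and not visited[rr, cc]
                    and abs(img[rr, cc] - seed_val) <= tol):
                visited[rr, cc] = True
                queue.append((rr, cc))
    return visited

img = np.random.rand(128, 128).astype("float32")   # placeholder slice
mask = region_grow(img, seed=(64, 64), tol=0.05)
```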
3.4.3 Watershed Watershed [24] uses the concept of topographic surface flood to separate objects and regions present in the images. An image is represented by valleys and peaks according to their pixel values, analogous to a catchment basin of an elevation map.
Fig. 3.7 Illustration of Otsu technique to separate objects from the background. (a) Input image. (b) Output image. (c) Histogram
The process starts with markers, which act as water sources that begin filling the surface with water following the gradient of the image. As the lower parts of the surface become flooded, the watershed lines of the relief correspond to the limits of adjacent basins, forming lines of containment. This process is repeated until the water floods the entire surface. The different regions or objects separated by such lines are the result of the segmentation.
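A hedged example of marker-based watershed with scikit-image: the gradient magnitude plays the role of the topographic relief, and crude intensity thresholds provide the markers (a real application would derive markers from anatomy-specific rules).

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

img = np.random.rand(256, 256).astype("float32")   # placeholder slice

elevation = sobel(img)                  # gradient magnitude as the "relief"

# Markers: assume very low intensities are background and very high are object.
markers = np.zeros(img.shape, dtype=np.int32)
markers[img < 0.2] = 1
markers[img > 0.8] = 2

labels = watershed(elevation, markers)  # flood the relief from the markers
```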
3.5 Representation Images are usually represented as 2D matrices of pixels or 3D matrices of voxels. Although this type of representation is simple, the search for pixels according to certain characteristics when performed sequentially may demand high computational cost. Some segmentation, registration, and 3D visualization techniques, for
instance, require a large number of search operations for pixel values. Furthermore, the matrix representation is not efficient in terms of memory storage, since pixels or voxels with the same value in a homogeneous region are repeated. An alternative for reducing the cost of image processing is to use spatial data structures that allow efficient access only to the desired regions when compared to the uniform matrix representation. In this section, we briefly describe some important spatial data structures for image representation.
3.5.1 Quadtree A quadtree [13, 48] is a hierarchical structure that decomposes the image recursively into quadrants. The decomposition stopping criterion is usually based on pixel similarity, such as color, intensity, or texture. Some applications require the image to be decomposed until each pixel is reached individually [67]. It is possible to observe in Fig. 3.8 that several regions are fully homogeneous after the second subdivision level. These regions can be represented with only one value in the data structure, which makes the quadtree a compact representation in memory. Figure 3.9 illustrates a tree for the third level of the subdivision (Fig. 3.8c). It is possible to observe that the nodes in black represent a homogeneous region. To optimize the search for pixels, when necessary, only the boundary coordinates of the quadrant and the link to the next level are stored at each node. Additionally, depending on the application, other information can also be stored, such as the mean intensity of the pixels, the median, among others. A variation of the quadtree for three-dimensional space is called an octree [29]. An octree has similar characteristics to a quadtree; however, the three-dimensional space is recursively subdivided into octants.
Fig. 3.8 Example of four subdivisions of a segmented CT image for a quadtree construction, (a) first, (b) second, (c) third, (d) fourth
Fig. 3.9 Tree for the third level of subdivision (Fig. 3.8c)
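A compact sketch of the quadtree decomposition described above, assuming a square image whose side length is a power of two and a simple homogeneity criterion (intensity range within the block); the node contents and the threshold are illustrative.

```python
import numpy as np

def build_quadtree(img, y=0, x=0, size=None, max_range=10):
    """Recursively split a square image region into quadrants until each
    quadrant is homogeneous (intensity range <= max_range) or one pixel."""
    if size is None:
        size = img.shape[0]                      # assumes a square 2^n image
    block = img[y:y + size, x:x + size]
    if size == 1 or int(block.max()) - int(block.min()) <= max_range:
        return {"y": y, "x": x, "size": size, "value": float(block.mean())}
    half = size // 2
    return {"y": y, "x": x, "size": size, "children": [
        build_quadtree(img, y,        x,        half, max_range),
        build_quadtree(img, y,        x + half, half, max_range),
        build_quadtree(img, y + half, x,        half, max_range),
        build_quadtree(img, y + half, x + half, half, max_range),
    ]}

img = (np.random.rand(64, 64) * 255).astype(np.uint8)   # placeholder image
tree = build_quadtree(img)
```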
3.5.2 Pyramid (Multiscale Imaging) A pyramid is a structure that represents images either at different resolutions or in different modalities. In the case of multiple levels of resolution, the hierarchical structure is formed by a sequence of several intermediate images, each one usually with twice the resolution of the previous one. The last level corresponds to the image at the original resolution. In the case of multiple modalities, the spatial resolution of each pixel is larger in relation to the images at the lower levels of the hierarchy. For instance, computed tomography images that have pixels in millimeter scale would be located at the top of the hierarchy, whereas nanotomography images with pixels in nanometer scale would be placed at the base of the hierarchy. Due to the ability of pyramids to provide a better comprehension of the structure and functioning of organs and tissues at different levels [55], pyramid structures have been used in several image processing problems, such as registration [36, 39], segmentation [61], and compression [3], among others. The structure can be implemented through a quadtree, where the root node represents the lowest resolution image or one of the modalities that generates pixels of higher spatial resolution.
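The following sketch builds a simple resolution pyramid by repeated 2×2 block averaging; this is only one possible construction (Gaussian pyramids are more common in practice), and the level count is arbitrary.

```python
import numpy as np

def build_pyramid(img, levels=4):
    """Resolution pyramid by repeated 2x2 block averaging; level 0 is the
    original image and deeper levels are progressively coarser."""
    pyramid = [img.astype("float32")]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        h, w = prev.shape[0] // 2 * 2, prev.shape[1] // 2 * 2
        coarse = prev[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(coarse)
    return pyramid

img = np.random.rand(256, 256)             # placeholder slice
levels = build_pyramid(img, levels=4)      # shapes 256, 128, 64, 32
```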
3.6 Volume Rendering Volume rendering [47] can be defined as a set of techniques for displaying two-dimensional projections of three-dimensional data sets. These techniques can be categorized into two groups, isosurface rendering and direct volume rendering, described in more detail as follows.
3.6.1 Isosurface Rendering Isosurface rendering methods [7] extract an intermediate representation of a volume data set. The most commonly used representation is the triangle mesh, which is
Fig. 3.10 Illustration of (a) an isosurface extracted from a volumetric medical data and (b) a magnified region of the triangle mesh
formed by vertices and triangles. It is a surface representation, in which the vertices are located on the surface of the object to be represented, separating the internal region from the external one. In addition to the triangles and vertices, a directional normal vector indicates the external side of each triangle. Figure 3.10 illustrates a triangle mesh and an enlarged region that shows the mesh of triangles in more detail. Triangle meshes are widely used for visualization and animation because graphics processing units (GPUs) are optimized for this type of representation. They are also very useful in 3D printing. Furthermore, they can be edited in CAD tools to support surgical planning or prosthesis generation. Marching Cubes [42] is one of the most commonly used methods for extracting a triangle mesh from a volume data set. The inputs to the method are the volume data set and an iso-value. Values higher than or equal to the iso-value are located inside the surface to be extracted. Lower values are not part of the surface. In this method, a 3D grid formed by cubes overlaps the volume data, so that the cube vertices match the scalar values of the volume. For each cube, it is checked which edges are intercepted by the surface. An edge intercepts the surface when one of its vertices is larger than the iso-value and the other is smaller. To find the point of the edge intercepted by the surface, a linear interpolation is applied. After finding the intersections, triangles are created to connect these intersections following a table of 256 triangle configurations. Figure 3.11 shows some of these configurations. Surface nets [22] is another method for triangle mesh generation. It also employs a 3D grid formed by cubes, but generates one vertex per cube that intercepts the surface, unlike Marching Cubes, which generates one vertex per edge intercepted by the surface. Another important isosurface rendering method is marching tetrahedra [4], which uses a 3D grid formed by tetrahedra. These tetrahedra are created by dividing the cubes of Marching Cubes into 6 tetrahedra. A table of triangle configurations is used in the process of triangle mesh construction. The quality of an isosurface depends on the method used to create it and on the segmentation process. In general, the segmentation generates binary volumes, where
Fig. 3.11 Example of cube configurations for triangle generation in the Marching Cubes algorithm. Vertices whose values are greater than the iso-value are marked with a yellow sphere. Image extracted from Wikimedia Commons: https://commons.wikimedia.org/wiki/File:MarchingCubes.svg
Fig. 3.12 (a) Isosurface with staircase artifacts and (b) same isosurface after applying context aware smoothing algorithm to attenuate staircase artifacts
1 indicates a voxel or pixel as part of the segmented object and 0 as not being part of it. Isosurfaces generated from binary volumes usually present staircase artifacts, such as those illustrated in Fig. 3.12a. These artifacts are not natural to the patient’s anatomy and occur mainly in regions of high curvature. To attenuate these artifacts, it is possible to smooth the binary volume before isosurface generation using the Gaussian filter. However, this method can lead to loss of fine details in the generated isosurface.
The method developed by Whitaker [72] iteratively smooths the edges of the object towards its gradient in order to generate a surface with minimum area. The context aware smoothing method [46] smooths the generated isosurface. To mitigate the loss of fine details, this method operates with weights that control the level of smoothing. In a first step, the method locates regions with staircase artifacts by searching for regions with approximately right angles. Larger weights are assigned to these regions and their adjacencies. Therefore, regions containing artifacts are smoothed more strongly, reducing the loss of detail elsewhere. The smoothing itself is then performed by some method, for instance, the one developed by Taubin [70], taking into account the weights previously calculated. Figure 3.12b illustrates the result after applying the context aware smoothing to the isosurface shown in Fig. 3.12a.
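A sketch of isosurface extraction with the Marching Cubes implementation in scikit-image, applied to a synthetic binary sphere; the optional Gaussian smoothing of the binary volume before extraction mirrors the artifact attenuation discussed above, and the sigma value is illustrative.

```python
import numpy as np
from scipy import ndimage
from skimage import measure

# Synthetic binary segmentation: a sphere inside a 64x64x64 volume.
z, y, x = np.mgrid[:64, :64, :64]
binary = ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2) < 20 ** 2

# Smoothing the binary volume attenuates staircase artifacts at the cost
# of some fine detail.
smooth = ndimage.gaussian_filter(binary.astype("float32"), sigma=1.0)

# Marching Cubes: extract the isosurface at iso-value 0.5.
verts, faces, normals, values = measure.marching_cubes(
    smooth, level=0.5, spacing=(1.0, 1.0, 1.0))
```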
3.6.2 Direct Volume Rendering Direct volume rendering (DVR) [64] is a set of volumetric visualization techniques that do not require an intermediate representation. In these approaches, the value of each voxel is mapped to a color and opacity according to a transfer function. Figure 3.13 shows an example of a transfer function. A widely used direct volume rendering technique is volume ray casting [40]. In this technique, rays are traced from the observer, passing through an image plane, to the volumetric data set. The rays intercept the voxels during their traversal, such that their color and opacity are accumulated to generate the final image. The emission-absorption model [20] is an example of a model to accumulate color and opacity values, expressed as

$$C = \sum_{i=1}^{n} C_i \prod_{j=1}^{i-1} \left( 1 - \alpha_j \right), \qquad \alpha = 1 - \prod_{j=1}^{n} \left( 1 - \alpha_j \right) \tag{3.8}$$
Fig. 3.13 Example of a linear transfer function, which maps the scalar value 0 to black with 0% opacity. The scalar value 650 is mapped to white with 100% opacity, whereas intermediate values are linearly interpolated to obtain color and opacity values
where C and α are the accumulated color and opacity values computed iteratively, whereas $C_i$ and $\alpha_i$ correspond to the color and opacity, respectively, obtained from segment i along the viewing ray. In volume ray casting, structures can be occluded by others that are closer to the observer. For example, in a CT data set, internal organs and the skeleton may not be visible to the observer because the external skin is occluding their view. By changing the transfer function, which can be done interactively by the user or by automatic methods, it is possible to make certain structures transparent, reducing the obstruction problem. Figure 3.14 illustrates a change in the transfer function to highlight different body structures. In maximum intensity projection (MIP) [8], instead of accumulating values along the projection rays, only the highest value found along the ray traversal is considered. This reduces the problem of occlusion. However, it suffers from a lack of visual depth information,
Fig. 3.14 Ray casting technique applied to allow different views of the same volume data set by modifying the transfer function. (a) Ray casting for soft tissue. (b) Transfer function adapted to soft tissues. (c) Ray casting for bones. (d) Transfer function adapted to bones
Fig. 3.15 A volume data set rendered through different volume rendering techniques. (a) MIP. (b) MinIP. (c) AIP. (d) MIDA. (e) Contour MIP. (f) Contour MIDA
which can lead to visualization ambiguities. MIP is useful to visualize bone structures, tumors, and regions with different contrast. Figure 3.15a illustrates an example of MIP rendering. Minimum intensity projection (MinIP) [45] and average intensity projection (AIP) [73] are variations of MIP, where rays select the minimum and mean values, respectively, intercepted along their traversal. MinIP is useful, for instance, to provide information about blood flow deviation, since blood usually has low intensity values in CT images. Figure 3.15b shows an example of visualization with MinIP technique. AIP provides a wide view of structures in the image. Both
MinIP and AIP suffer from visualization ambiguities due to the lack of visual depth information. Figure 3.15c shows an example of visualization with AIP. Maximum intensity difference accumulation (MIDA) [8] avoids the problem of occlusion by taking into account that the regions of interest are those of high intensity values, as with MIP. This method works with an accumulation of color and opacity that is averaged by weights, which are calculated according to the maxima found during the ray traversal. The greater the difference between the current maximum value and the previous one, the greater the weight. By using accumulation, as in the ray casting approach, MIDA presents visual depth information, thus reducing ambiguities during visualization. Figure 3.15d shows an example of MIDA projection. Similar to MIP, MIDA is useful to visualize bone structures, tumors, and regions with contrast. Contour MIP [12] employs the angle between the gradient of the voxel intercepted by the ray and the view direction, expressed as

$$s(P, V) = \left( 1 - \left| \nabla(P) \cdot V \right| \right)^{n} \tag{3.9}$$

where P is the intercepted voxel, ∇(P) is the voxel gradient, and V is the view direction vector. The higher the sharpness coefficient n, the more the edge regions are highlighted in the final image. MIP is used to generate the final image, that is, the final image is composed of the highest value of s(P, V) found along each ray. Due to the use of MIP, Contour MIP can also suffer from ambiguities caused by the lack of visual depth information. Contour MIDA [47] is a variation of Contour MIP that uses MIDA to compose the final image, which attenuates ambiguities when compared to Contour MIP. Both techniques emphasize voxels with higher gradient values, highlighting the surfaces of different objects. Examples are illustrated in Fig. 3.15e, f.
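To make the projection strategies concrete, the sketch below computes a maximum intensity projection and a front-to-back emission-absorption accumulation (Eq. 3.8) for axis-aligned "rays", i.e., each (row, column) position of a synthetic volume is treated as one ray. The opacity mapping is an arbitrary stand-in for a real transfer function, and $C_i$ is taken as an opacity-weighted color.

```python
import numpy as np

vol = np.random.rand(64, 128, 128).astype("float32")  # placeholder volume

# Maximum intensity projection: keep only the largest sample of each ray.
mip = vol.max(axis=0)

# Emission-absorption compositing, Eq. (3.8), with an arbitrary linear
# opacity mapping; C_i is treated as an opacity-weighted (associated) color.
alpha = np.clip(vol, 0.0, 1.0) * 0.1
assoc_color = vol * alpha
accum_c = np.zeros(vol.shape[1:], dtype="float32")
transparency = np.ones(vol.shape[1:], dtype="float32")
for i in range(vol.shape[0]):        # front-to-back accumulation along rays
    accum_c += transparency * assoc_color[i]
    transparency *= 1.0 - alpha[i]
accum_alpha = 1.0 - transparency     # accumulated opacity per ray
```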
3.7 3D Printing and Biofabrication Medical imaging is nowadays a great asset for diagnosis and treatment in modern medicine. It is becoming more representative of the anatomy due to the evolution of medical scanners, with higher definition, better contrast, and greater capacity for pre- and post-processing. For the successful application of additive manufacturing (AM), a groundbreaking technology also known in the popular media as "3D printing", the support of medical scanners is mandatory to obtain a 3D model of the anatomy of interest. It is a reverse engineering process that is based not only on the important imaging modalities mentioned before, such as CT, CBCT, MRI, and 3D US, but also on micro-CT to reproduce micro- to nanostructure details [56] and on photogrammetry to replace expensive laser and light surface scanners in some reconstruction applications, such as facial ones [66]. The standard for interoperability among medical equipment is the DICOM protocol [2], which provides data for processing, visualization, and analysis of medical images using specific software tools that are able to generate 3D models
to be processed by AM equipment. On the other hand, AM can automatically produce accurate physical models from a virtual one with considerable internal and external geometric complexity, based on a layer-by-layer paradigm for the deposition of materials. Today, there are more than 50 companies commercializing AM technologies. These technologies are clustered by ISO (International Organization for Standardization) into seven categories of processes [32]. Specifically in healthcare, as a downstream process for medical imaging, AM plays a key role in the production of 3D physical models for accurate surgical planning, also called biomodels, as well as in the development of customized prostheses, medical devices, and surgical instruments, just to name a few [21, 35, 51, 68]. The most used 3D file format for AM is STereoLithography (STL), which is a simple but poor and redundant representation for 3D data with no associated topological information. It is the "de facto" standard for industry, created 30 years ago, whose syntax is composed of triangles, each with its three vertex coordinates and a normal vector pointing out of the triangle according to the right-hand rule. Surfaces composed of triangles with their respective outward-pointing normals define where the material of the object is located. This information is used to slice the object and realize it as layers of material. STL can be represented as a binary or ASCII file. To overcome such limitations of the STL format, the ASTM (American Society for Testing and Materials) and ISO created the AMF (Additive Manufacturing File Format) in 2013. AMF is intended to be more accurate by means of curved triangles and is represented using XML (eXtensible Markup Language). AMF also incorporates information such as units, colors, materials, gradients, and multiple copies, among others. It is proposed to be independent of specific technologies, simple to implement and interpret, non-redundant, adaptable to new and improved AM technologies, computationally efficient, and backward and forward compatible [31]. An alternative file format for representing 3D models for AM, called 3MF, is being proposed by a joint development foundation formed by AM system and software solution providers. Its specification and project details are available on the 3MF consortium webpage [1].
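Since the ASCII STL syntax is simple, a small writer can be sketched directly; the function and example mesh below are hypothetical and omit the validity checks (closed, consistently oriented surfaces) that production tools perform.

```python
import numpy as np

def write_ascii_stl(path, verts, faces, name="model"):
    """Write a triangle mesh to an ASCII STL file. `verts` is an (N, 3) array
    of vertex coordinates and `faces` an (M, 3) array of vertex indices."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for tri in faces:
            v0, v1, v2 = verts[tri[0]], verts[tri[1]], verts[tri[2]]
            n = np.cross(v1 - v0, v2 - v0)           # right-hand rule normal
            norm = np.linalg.norm(n)
            n = n / norm if norm > 0 else n
            f.write(f"  facet normal {n[0]:e} {n[1]:e} {n[2]:e}\n")
            f.write("    outer loop\n")
            for v in (v0, v1, v2):
                f.write(f"      vertex {v[0]:e} {v[1]:e} {v[2]:e}\n")
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write(f"endsolid {name}\n")

# Example: a single triangle in the z = 0 plane.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2]])
write_ascii_stl("triangle.stl", verts, faces)
```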
as cells and/or tissue spheroids are deposited in an automated way in specific places according to a digital specification or blueprint. In this case, the biological material is deposited as a “bioink” in a “biopaper” as a gel or another soft material [44]. The third approach is based on a mix of the first two where cells are deposited in microscaffolds that mechanically can attach to each other called “lockyballs” [14, 62]. Therefore, the understanding of the composition and organization of tissues and organs is a key requirement to reproduce its complex heterogeneous architecture [49]. Moreover, imaging can be useful as a non-invasive method to control the quality and vascularization of the biofabricated new tissue.
3.8 Conclusions Recent advances in medical imaging techniques and equipment have aided healthcare professionals in several diagnostic tasks. Non-invasive procedures produce high quality images of internal regions of the patient’s body, reducing medical costs, improving patient’s quality of life, as well as reducing risks associated with clinical intervention. The integration of imaging techniques and manufacturing technologies have contributed to a wide range of medical applications. This chapter presented relevant concepts associated with the generation of three-dimensional models for visualization and biofabrication purposes. The comprehension of all components that constitute a medical imaging system is very challenging. Research in new devices and novel tools will push forward the boundaries of knowledge in healthcare, expanding the solutions to several medical applications. Acknowledgements The authors are thankful to São Paulo Research Foundation (FAPESP grants #2013/07559-3 and #2015/12228-1) and Brazilian National Council for Scientific and Technological Development (CNPq grants #305169/2015-7 and #465656/2014-5) for their financial support.
References 1. 3MF Consortium, 3MF Materials and Properties Extension Specification and Reference Guide. Accessed Feb. 2017. http://3mf.io/ 2. D. ACR/NEMA, Digital Imaging and Communications in Medicine (2017). http://dicom.nema. org/ 3. B. Aiazzi, L. Alparone, S. Baronti, F. Lotti, Lossless image compression by quantization feedback in a content-driven enhanced Laplacian pyramid. IEEE Trans. Image Process. 6(6), 831–843 (1997) 4. D. Akio, A. Koide, An efficient method of triangulating equi-valued surfaces by using tetrahedral cells. IEICE Trans. Inf. Syst. 74(1), 214–224 (1991) 5. P.H.J. Amorim, T.F. Moraes, J.V.L. Silva, H. Pedrini, R.B. Ruben. Automatic Reconstruction of Dental CT Images using Optimization. III International Conference on Biodental Engineering (BioDental). Porto, Portugal, pp. 57–62, June 22–23, (2014)
6. P. Amorim, T. Moraes, J. Silva, H. Pedrini, InVesalius: an interactive rendering framework for health care support, in Lecture Notes in Computer Science, vol .9474, ed. by G. Bebis, R. Boyle, B. Parvin, D. Koracin, I. Pavlidis, R. Feris, T. McGraw, M. Elendt, R. Kopper, E. Ragan, Z. Ye, G. Weber (Springer, Berlin, 2015), pp. 45–54 7. I. Bankman, Handbook of Medical Image Processing and Analysis (Academic Press, New York, 2008) 8. S. Bruckner, M.E. Gröller, Instant Volume visualization using maximum intensity difference accumulation. Comput. Graphics Forum 28(3), 775–782 (2009) 9. J. Canny, A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986) 10. H. Chrysikopoulos, Clinical MR Imaging and Physics: A Tutorial (Springer, Berlin, 2008) 11. P. Conti, H. Wagner, D. Cham, PET-CT: A Case Based Approach (Springer, New York, 2005) 12. B. Csébfalvi, L. Mroz, H. Hauser, A. König, M.E. Gröller, Fast visualization of object contours by non-photorealistic volume rendering. Comput. Graphics Forum 20(3), 452–460 (2001) 13. O.J. Dahl, E.W. Dijkstra, C.A.R. Hoare, Notes on Structured Programming (Academic, New York, 1972) 14. P. Danilevicius, R.A. Rezende, F.D. Pereira, A. Selimis, V. Kasyanov, P.Y. Noritomi, J.V. da Silva, M. Chatzinikolaidou, M. Farsari, V. Mironov, Burr-like, laser-made 3D microscaffolds for tissue spheroid encagement. Biointerphases 10(2), 021011 (2015) 15. B. Derby, Printing and prototyping of tissues and scaffolds. Science 338(6109), 921–926 (2012) 16. D. Dowsett, P.A. Kenny, R.E. Johnston, The Physics of Diagnostic Imaging (Taylor & Francis, Boca Raton, FL, USA, 2006) 17. K. Dreyer, D. Hirschorn, J. Thrall, A. Mehta, PACS: A Guide to the Digital Revolution (Springer, New York, 2005) 18. D. Fazanaro, P. Amorim, T. Moraes, J. Silva, H. Pedrini, NURBS parameterization for medical surface reconstruction. Appl. Math. 7(02), 137 (2016) 19. A. Fedorov, R. Beichel, J. Kalpathy-Cramer, J. Finet, J.-C. Fillion-Robin, S. Pujol, C. Bauer, D. Jennings, F. Fennessy, M. Sonka, J. Buatti, S. Aylward, J. Miller, S. Pieper, R. Kikinisa, 3D slicer as an image computing platform for the quantitative imaging network. Magn. Reson. Imaging 30(9), 1323–1341 (2012) 20. R. Fernando, GPU Gems: Programming Techniques, Tips and Tricks for Real-Time Graphics (Pearson Higher Education, Boston, 2004) 21. G. Giacomo, J. Silva, R. Martines, S. Ajzen, Computer-designed selective laser sintering surgical guide and immediate loading dental implants with definitive prosthesis in edentulous patient: a preliminary method. Eur. J. Dent. 8(1), 100–106 (2014) 22. S.F.F. Gibson, Constrained elastic surface nets: generating smooth models from binary segmented data. TR99 24 (1999) 23. R.C. Gonzalez, R.E. Woods, Digital Image Processing (Prentice Hall, Upper Saddle River, 2002) 24. V. Grau, A. Mewes, M. Alcaniz, R. Kikinis, S.K. Warfield, Improved watershed transform for medical image segmentation using prior information. IEEE Trans. Med. Imaging 23(4), 447– 458 (2004) 25. J. Groll, T. Boland, T. Blunk, J.A. Burdick, D.-W. Cho, P. D. Dalton, B. Derby, G. Forgacs, Q. Li, V.A. Mironov, Biofabrication: reappraising the definition of an evolving field. Biofabrication 8(1), 013001 (2016) 26. R.M. Haralick, L.G. Shapiro, Image segmentation techniques. Comput. Vis. Graphics Image Process. 29(1), 100–132 (1985) 27. R. Haralick, L. Shapiro, Computer and Robot Vision, vol. 1 (Addison-Wesley, Boston, 1992) 28. J. 
Hsieh, Computed Tomography: Principles, Design, Artifacts, and Recent Advances (SPIE, Bellingham, 2003) 29. G. Hunter, Efficient Computation and Data Structures for Graphics. Ph.D. thesis, Princeton University, Princeton, 1978
3 Medical Imaging
75
30. InVesalius Open Source Software for Reconstruction of Computed Tomography and Magnetic Resonance Images (2017). http://www.cti.gov.br/invesalius/ 31. ISO/ASTM 52915, International Organization for Standardization. Standard Specification for Additive Manufacturing File Format (AMF), version 1.1 (2013) 32. ISO/ASTM 52900 International Organization for Standardization, Additive Manufacturing— General principles—Terminology (2015) 33. A.C. Kak, M. Slaney, Principles of Computerized Tomographic Imaging (SIAM, Philadelphia, 2001) 34. S. Kapila, Cone Beam Computed Tomography in Orthodontics: Indications, Insights, and Innovations (Wiley, New York, 2014) 35. D. Kemmoku, P. Noritomi, F. Toland, J. Silva, Use of BioCAD in the development of a growth compliant prosthetic device for cranioplasty of growing patients, in Innovative Developments in Design and Manufacturing (CRC, Boca Raton, 2010), pp. 127–130 36. S. Klein, M. Staring, K. Murphy, M.A. Viergever, J.P.W. Pluim, elastix: a toolbox for intensitybased medical image registration. IEEE Trans. Med. Imaging 29(1), 196–205 (2010) 37. B. Kline-Fath, R. Bahado-Singh, D. Bulas, Fundamental and Advanced Fetal Imaging: Ultrasound and MRI (Wolters Kluwer Health, Philadelphia, 2014) 38. C.T. Leondes, Medical imaging systems technology: modalities, in Medical Imaging Systems Technology (World Scientific, Singapore, 2005) 39. H. Lester, S.R. Arridge, A survey of hierarchical non-linear medical image registration. Pattern Recogn. 32(1), 129–149 (1999) 40. M. Levoy, Efficient ray tracing of volume data. ACM Trans. Graph. 9(3), 245–261 (1990) 41. P. Libby, R. Kwong, Cardiovascular magnetic resonance imaging, in Contemporary Cardiology (Humana, Totowa, 2008) 42. W.E. Lorensen, H.E. Cline, Marching cubes: a high resolution 3D surface construction algorithm. Comput. Graph. 21(4), 163–169 (1987) 43. A. Macovski, Medical Imaging Systems (Prentice Hall, Upper Saddle River, 1983) 44. V. Mironov, V. Kasyanov, C. Drake, R.R. Markwald, Organ printing: promises and challenges. Regen. Med. 3(1), 93–103 (2008) 45. G. Mistelbauer, A. Morar, A. Varchola, R. Schernthaner, I. Baclija, A. Köchl, A. Kanitsar, S. Bruckner, E. Gröller, Vessel visualization using curvicircular feature aggregation. Comput. Graphics Forum 32(3), 231–240 (2013) 46. T. Moench, R. Gasteiger, G. Janiga, H. Theisel, B. Preim, Context-aware mesh smoothing for biomedical applications. Comput. Graph. 35(4), 755–767 (2011) 47. T.F. Moraes, P.H. Amorim, J.V. Silva, H. Pedrini, M.I. Meurer, Medical volume rendering based on gradient information, in Proceedings of the 5th ECCOMAS Thematic Conference on Computational Vision and Medical Image Processing, 5th edn. (CRC, Tenerife, 2015), p. 181 48. G. Morton, A Computer Oriented Geodetic Data Base and a New Technique in File Sequencing (International Business Machines Company, New York, 1996) 49. S.V. Murphy, A. Atala, 3D bioprinting of tissues and organs. Nat. Biotechnol. 32(8), 773–785 (2014) 50. R. Nock, F. Nielsen, Statistical region merging. IEEE Trans. Pattern Anal. Mach. Intell. 26(11), 1452–1458 (2004) 51. A.T. Oliveira, A.A. Camilo, P.R.V. Bahia, A.C.P. Carvalho, M.F. DosSantos, J.V.L. da Silva, A.A. Monteiro, A novel method for intraoral access to the superior head of the human lateral pterygoid muscle. Biomed. Res. Int. 2014 (2014) 52. N. Otsu, A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 9(1), 62–66 (1979) 53. N.R. Pal, S.K. Pal, A review on image segmentation techniques. Pattern Recogn. 
26(9), 1277– 1294 (1993) 54. J.R. Parker, Algorithms for Image Processing and Computer Vision (Wiley, New York, 2010) 55. A.F. Pereira, D.J. Hageman, T. Garbowski, C. Riedesel, U. Knothe, D. Zeidler, M.L.K. Tate, Creating high-resolution multiscale maps of human tissue using multi-beam SEM. PLoS Comput. Biol. 12(11), e1005217 (2016)
76
P. H. J. Amorim et al.
56. F. Peyrin, P. Dong, A. Pacureanu, M. Langer, Micro-and nano-CT for the study of bone ultrastructure. Curr. Osteoporos. Rep. 12(4), 465–474 (2014) 57. O.S. Pianykh, Digital Imaging and Communications in Medicine (DICOM): A Practical Introduction and Survival Guide (Springer, Berlin, 2009) 58. R. Pohle, K.D. Toennies, Segmentation of medical images using adaptive region growing, in Medical Imaging 2001: Image Processing, vol 4322 (SPIE, 2001), pp. 1337–1346 59. J. Prince, J. Links, Medical Imaging Signals and Systems (Pearson Education, London, 2014) 60. P. Reimer, P. Parizel, J. Meaney, F. Stichnoth, Clinical MR Imaging: A Practical Approach (Springer, Berlin, 2010) 61. M.R. Rezaee, P.M.J. van der Zwet, B.P.E. Lelieveldt, R.J. van der Geest, J.H.C. Reiber, A multiresolution image segmentation technique based on pyramidal segmentation and fuzzy clustering. IEEE Trans. Image Process. 9(7), 1238–1248 (2000) 62. R.A. Rezende, F.D. Pereira, V. Kasyanov, A. Ovsianikov, J. Torgensen, P. Gruber, J. Stampfl, K. Brakke, J.A. Nogueira, V. Mironov, Design, physical prototyping and initial characterisation of ‘Lockyballs’. Virtual Phys. Prototyping 7(4), 287–301 (2012) 63. A. Rosset, L. Spadola, O. Ratib, OsiriX: an open-source software for navigating in multidimensional DICOM images. J. Digit. Imaging 17(3):205–216 (2004) 64. S.D. Roth, Ray casting for modeling solids. Comput. Graphics Image Process. 18(2), 109–144 (1982) 65. J.C. Russ, The Image Processing Handbook (CRC, Boca Raton, 2015) 66. R. Salazar-Gamarra, R. Seelaus, J.V.L. da Silva, A.M. da Silva, L.L. Dib, Monoscopic Photogrammetry to obtain 3D models by a mobile device: a method for making facial prostheses. J. Otolaryngol. Head Neck Surg. 45(1), 33 (2016) 67. H. Samet, Applications of spatial data structures: computer graphics, in Image Processing and GIS (Addison-Wesley, Reading, 1990) 68. E.K. Sannomiya, J.V.L. Silva, A.A. Brito, D.M. Saez, F. Angelieri, G. da Silva Dalben, Surgical planning for resection of an ameloblastoma and reconstruction of the mandible using a selective laser sintering 3D biomodel. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. Endod. 106(1), e36–e40 (2008) 69. F.Y. Shih, S. Cheng, Automatic seeded region growing for color image segmentation. Image Vis. Comput. 23(10), 877–886 (2005) 70. G. Taubin, A signal processing approach to fair surface design, in Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques (1995), pp. 351–358 71. C. Tomasi, R. Manduchi, Bilateral filtering for gray and color images, in Sixth International Conference on Computer Vision (IEEE, Piscataway,1998), pp. 839–846 72. R.T. Whitaker, Reducing aliasing artifacts in ISO-surfaces of binary volumes, in IEEE Symposium on Volume Visualization (IEEE, Piscataway, 2000), pp. 23–32 73. Q. Wu, F. Merchant, K.R. Castleman, Microscope Image Processing, 1st edn. (Academic, New York, 2008) 74. Y.J. Zhang, A survey on evaluation methods for image segmentation. Pattern Recogn. 29(8), 1335–1346 (1996)
Chapter 4
Computer Aided Tissue Engineering Scaffolds
M. W. Naing, C. K. Chua, and K. F. Leong
4.1 Introduction

For the past decade, the focus of tissue engineering (TE) has been on the aspect of culturing organs in the hope of replacing damaged or diseased organs in the body. In TE, expertise from the fields of biological and material sciences and engineering is combined to develop viable biological substitutes of a tissue which help to restore, maintain, and/or improve the functions of that tissue. For matrix-producing connective tissues, the cells are anchorage-dependent, and the presence of 3-D scaffolds with interconnected pore networks is crucial to aid the proliferation and reorganisation of the cells [1, 2]. The scaffolds, which can be fabricated from natural or synthetic biomaterials [3–5], facilitate the creation of functional and structurally appropriate biological replicas of healthy versions of the required tissues. The ultimate aim is for these fabricated structures to be the basis of tissue regeneration, such that patients can readily obtain implants which will neither put them at risk of infection nor require them to be on life-long medication for organ rejection.
At present, scaffold fabrication is mostly done using a variety of conventional methods such as salt and particulate leaching [1, 4–6], which have been successfully applied to establish the viability of various TE applications; engineered skin has been used clinically [7], and organs successfully tested in preclinical studies include blood vessels [8] and the bladder [9]. However, these techniques rely heavily on the user's skill and the applied procedure. Because skills and procedures differ between individual users, the fabricated scaffolds are subject to variation and are not easily reproducible. Such techniques can also only produce scaffolds within a limited range of pore sizes. As a result, these scaffolds prevent the researcher from making consistent analyses.

Since its inception, rapid prototyping (RP) technology has created a significant impact on the medical community. By combining medical imaging, computer aided design (CAD), and RP, it is possible to create accurate patient-specific anatomical models [10–13] and customised prototypes of devices for a wide variety of medical applications [14–19]. This has prompted researchers to experiment with RP techniques to fabricate scaffolds with controlled microarchitecture and higher consistency than those fabricated using conventional techniques. However, the internal microarchitecture of scaffolds built using the original building styles and patterns supplied by RP system manufacturers is limited [20]. As such, further improvements are needed to augment the range of possible pore sizes, their accuracy, and the consistency in their distribution and density. Since RP processes begin with the creation of a 3-D CAD model of the scaffold structure, one possible solution is to design the scaffold's internal microarchitecture during the CAD modelling stage before committing the scaffold designs to RP fabrication [20–27].

To alleviate the difficulty encountered in creating scaffolds with designed internal architectures, Hollister et al. [21] introduced an image-based approach for designing and manufacturing TE scaffolds. In their work, a general program was developed for building up a scaffold's internal architecture by repeating a unit structure of cylinders, spheres, and other entities. Hollister et al. also developed CAD techniques [22–26] for creating sacrificial moulds with designed internal channels or cavities which resembled the negative image of the final required scaffold. The moulds were used to cast hydroxyapatite scaffolds for bone TE applications from a highly loaded "reactive ceramic suspension" following the lost mould shape forming process.

This project takes a different approach and aims to make a significant improvement by employing CAD data manipulation techniques to develop a novel algorithm. This algorithm can be used to design and assemble a wide range of internal scaffold architectures from a selection of open-celled polyhedron shapes. It is used in conjunction with an integrated manufacturing approach [10] that combines medical imaging and RP technologies to achieve rapid, automated production of pre-designed 3-D tissue scaffolds that are not only consistent and reproducible, but also patient-specific. The system is named CASTS, or computer aided system for tissue scaffolds.
In contrast to the fabrication approach adopted by Hollister et al., which is based on the lost mould casting process, CASTS is aimed at direct fabrication of scaffolds on an RP system such as selective laser sintering (SLS). By adopting a direct
fabrication approach, many disadvantages associated with casting can be eliminated. Examples of such disadvantages include the additional production stages (lead time) incurred in developing a suitable suspension of the required material and a mould, increased costs, increased material wastage, and the risk of material contamination.

Scaffolds are necessary for growing bone tissue as they act as temporary substrates for the anchorage-dependent osteoblasts. Bone scaffolds should ideally assume the shape of the defect, provide mechanical support to the defect while healing occurs, and allow cell proliferation and tissue ingrowth into the scaffold. Seeded cells adhere to the scaffold in all three dimensions, proliferate, and produce their own extracellular matrix (ECM), which takes over the function of the biomaterial scaffold [28, 29].
4.1.1 Requirements of Tissue Engineering Scaffolds

Functional properties of the scaffold depend on the characteristics of the scaffold material, the processing techniques, and the scaffold design, which in turn decides how the cells interact with the scaffold. The scaffold must be designed to cater to the conflicting needs of tissues in terms of mechanical strength, porosity, uniformity in pore size, and complexity in three dimensions [30]. Listed below are the desired characteristics of a scaffold [6, 26, 31, 32]:

1. 3-dimensional, highly porous with an interconnected pore network for cell growth and flow transport of nutrients and metabolic waste,
2. Biocompatibility and biodegradability,
3. Suitable surface chemistry and topography for cell attachment, proliferation, differentiation, and also to encourage formation of ECM,
4. Pre-defined microarchitecture,
5. Mechanical properties to match those of the tissues at the site of implantation; for load-bearing tissue, the scaffold must provide additional mechanical support during regeneration of tissue, and
6. Sterilisability.
4.1.2 Application of RP in TE Scaffold Fabrication

Conventional techniques of scaffold fabrication rely heavily on user skill and experience, such that there is poor repeatability between users. Processing parameters are often inconsistent and inflexible, resulting in highly inconsistent micro- and macro-structural properties in the scaffold. The use of organic solvents may have harmful effects on the cells and cause them to die or mutate. Porogen particles employed to induce pores may not be completely removed, making the process inefficient in terms of porosity. Most conventional processes are also limited
to producing scaffolds with simple geometry which may not satisfy the geometric requirements of the defect [33].

Although RP techniques can potentially address most of the macro- and micro-structural requirements of TE scaffolds, only a few RP systems have been used for scaffold fabrication to date [33]. These systems include three-dimensional printing (3DP), fused deposition modelling (FDM), selective laser sintering (SLS), and inkjet printing [34]. 3DP was used to fabricate PLGA scaffolds through the use of moulds and particulate leaching with sucrose as the porogen. While this method was proven to produce viable scaffolds, the lengthy process makes it tedious and ineffectual [35]. FDM-fabricated polycaprolactone (PCL) scaffolds had pore sizes of 160–700 μm and porosities of 48–77%; fibroblast cells seeded onto the scaffolds showed complete ingrowth after a few weeks of culture, and degradation tests proved that the process did not affect the material properties [36]. Sintering of poly-ether-ether-ketone (PEEK), PLLA, and PCL powder stocks achieved different degrees of success. The microporosity of the specimens was varied by changing the laser power (or energy density), scan speed, and part bed temperature, and high porosity and interconnectivity were obtained for all specimens [37]. MMII was used to produce moulds for scaffold manufacturing, and the lost mould method was used to cast out scaffolds made of naturally occurring materials such as tricalcium phosphate [38] and collagen [39].
4.2 Methodology

Even with the incorporation of RP systems, the work carried out on TE scaffolds has been restricted by many limiting factors, one of which is the lack of variety in patterns. What is required is a comprehensive system which can:

1. Provide users with a database of designs to choose from,
2. Generate scaffolds with different parameters, and
3. Customise scaffolds according to patients' specifications.

The aim of this project, therefore, is to develop such a system which can satisfy these requirements. The prototype system is named the Computer Aided System for Tissue Scaffolds, or CASTS. Figure 4.1 shows the process flow.
4.2.1 Concept Verification

To verify the system, a scaffold in the shape of a femur was generated and fabricated. The surface profile of the femur was first extracted from CT scan data using MIMICS™ (Materialise NV, Belgium), as shown in Fig. 4.2a. This surface profile was imported into the Pro/ENGINEER software and further manipulated to obtain a closed volume (Fig. 4.2b).
Fig. 4.1 Flow chart of the tissue scaffold fabrication and implantation process: data acquisition by medical imaging (computed tomography (CT) or magnetic resonance imaging (MRI)); 3D reconstruction of the tomographic 2D slice data into a 3D surface model (Mimics, Materialise NV); CAD modelling in the Pro/ENGINEER environment (parametric library, selection and sizing of polyhedral cells, scaffold assembly algorithm creating the user-defined scaffold micro-architecture, STL model); analyses and fabrication (CAE/FEM mechanical, degradation and diffusion analyses; RP fabrication of the mould/master by SLS, SLA, LOM, etc., giving the scaffold structure); and TE application (in vitro cell culture, in vivo transplantation)
The internal architecture was generated using the algorithm and merged into the volume using Boolean operations (Fig. 4.2c). The selective laser sintering system (Sinterstation 2500) was used to fabricate the generated scaffold. The material used was Duraform™ Polyamide. The laser power was set to 4 W. The scan speed and part bed temperature were set to the default values for Duraform™, which were 200 in/s (5080 mm/s) and 165 °C, respectively. The powder layer thickness was set to 0.006 in (0.152 mm). Figure 4.3 shows the fabricated femur scaffold together with an RP-fabricated model of the femoral head. The fabricated scaffold implant had good interconnectivity, and the features of the microarchitecture could be clearly seen. It was also noted that the powder inside the scaffold was easily removed by manual means. Polishing the scaffold, however, was not possible because of the intricate nature of the structure.
Fig. 4.2 (a) Surface contour of femur extracted from CT scan. (b) Closed volume of femur generated in CAD software. (c) Femur bone scaffold with designed microarchitecture
Fig. 4.3 Fabricated models of the femoral head and scaffold structure for the femoral bone segment
4.2.2 Validation of CASTS

In order to check the repeatability of the system, disc-shaped scaffolds were generated using the algorithm and fabricated using the Sinterstation 2500. The configuration tested was the octahedron-tetrahedron configuration from the scaffold library. Each fabricated specimen was 16 mm in diameter; this was intended to facilitate the cytotoxicity testing that will follow once specimens are fabricated using biomaterials.

Using the user interface of the algorithm in Pro/ENGINEER, the desired configuration was first selected. After the selection, the layout was regenerated to activate the prompts for the secondary input. The other main inputs were the strut length and the strut diameter, which together control the pore size and porosity. In the case of the octahedron-tetrahedron configuration, the strut length is the same throughout the unit cell and only one value needed to be specified. For consistency of results, the strut diameter was fixed at 0.25 mm (approximately twice the resolution of the SLS process) and only the strut length was varied to produce structures with different pore sizes and porosities. Table 4.1 summarises the properties of the scaffolds generated.
Table 4.1 Properties of scaffolds generated

Strut length (mm)   Pore size (mm)   Porosity
1.0                 0.327            0.583
1.5                 0.616            0.815
2.0                 0.905            0.896
2.5                 1.193            0.933
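The strut length–pore size relationship in Table 4.1 is very close to linear (the pore size grows by roughly 0.29 mm for every 0.5 mm of strut length), so a desired pore size can be mapped back to a strut-length input by simple interpolation. The following snippet is not part of CASTS or Pro/ENGINEER; it is a hypothetical Python helper, written only to illustrate how the tabulated data could be used when choosing design parameters.

```python
import numpy as np

# Data taken directly from Table 4.1 (octahedron-tetrahedron cell,
# strut diameter fixed at 0.25 mm).
strut_length_mm = np.array([1.0, 1.5, 2.0, 2.5])
pore_size_mm = np.array([0.327, 0.616, 0.905, 1.193])
porosity = np.array([0.583, 0.815, 0.896, 0.933])

def strut_length_for_pore(target_pore_mm: float) -> float:
    """Interpolate the strut length giving a desired pore size.

    Only meaningful inside the tabulated range; np.interp clamps values
    outside it, so extrapolation is deliberately not attempted.
    """
    return float(np.interp(target_pore_mm, pore_size_mm, strut_length_mm))

def estimated_porosity(length_mm: float) -> float:
    """Interpolate the expected porosity for a given strut length."""
    return float(np.interp(length_mm, strut_length_mm, porosity))

if __name__ == "__main__":
    # Example: a ~600 um pore, close to the 1.5 mm strut configuration.
    L = strut_length_for_pore(0.6)
    print(f"strut length ~ {L:.2f} mm, porosity ~ {estimated_porosity(L):.2f}")
```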
Fig. 4.4 Steps in scaffold generation. (a) Rectangular block scaffold. (b) Surface file of the disc. (c) Scaffold with surface model embedded. (d) Final scaffold in the shape of the disc
The scaffold was generated to a size larger than the desired final scaffold size (see Fig. 4.4a). Concurrently, a set of 16 mm disc surfaces was created in Pro/ENGINEER. The discs were varied in height from 1.0 to 2.5 mm to match one layer of unit cells for the differing strut lengths (see Fig. 4.4b). Once the scaffold structure had been generated, the appropriate disc surface model was appended to the scaffold and a Boolean subtraction was performed, which gave the scaffold the shape of the disc (see Fig. 4.4c, d). The scaffold files (see Fig. 4.5) were exported to .STL format and checked for errors in edges and contours, after which they were sent to the SLS system to be fabricated.
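In CASTS the trimming described above is performed with CAD Boolean operations inside Pro/ENGINEER. The sketch below is only a loose voxel-based analogue of that idea, written in Python under two stated assumptions: a simple cubic strut lattice stands in for the actual octahedron-tetrahedron cell, and the disc "surface model" is represented as a cylindrical mask. It mirrors the generate-oversized-then-cut-to-shape workflow rather than reproducing the authors' implementation.

```python
import numpy as np

# Illustrative voxel model: a 20 x 20 x 3 mm block at 0.1 mm per voxel.
res = 0.1
nx = ny = int(20 / res)   # 200 voxels
nz = int(3 / res)         # 30 voxels
x, y, z = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz), indexing="ij")

# Oversized strut lattice: a simple cubic cell with 1.5 mm pitch and
# roughly 0.2 mm wide struts (the real CASTS cell is octahedron-tetrahedron).
pitch = int(1.5 / res)
half_w = 1
near = lambda a: (a % pitch) < 2 * half_w
lattice = (near(x) & near(y)) | (near(y) & near(z)) | (near(x) & near(z))

# "Surface model": a 16 mm diameter disc centred in the block.
cx = cy = nx / 2
disc = ((x - cx) ** 2 + (y - cy) ** 2) <= (8.0 / res) ** 2

# Boolean intersection trims the lattice to the disc shape.
scaffold = lattice & disc
print("solid voxels:", int(scaffold.sum()))
print("approx. porosity inside the disc:", round(1 - scaffold.sum() / disc.sum(), 3))
```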
Fig. 4.5 An example of the generated scaffolds
Table 4.2 Material specifications—Duraform™ Polyamide

Particle shape               Irregular
Particle size range (μm)     25–92
Average particle size (μm)   60
Powder density (kg/m3)       590
Solid density (kg/m3)        970
Melting point (°C)           186
4.2.3 Duraform™ Polyamide Scaffolds

Nylon 12, with the commercial name Duraform™ Polyamide, a standard material for SLS, was used for this investigation. As this material has been widely studied [18, 40–43] with regard to its application in SLS, using it reduces the number of variables that need to be dealt with. The advantages of Duraform™ are its uniform powder size and consistent melting point (see the material specifications in Table 4.2). Figure 4.6 shows micrographs of a set of fabricated scaffolds. The features of the scaffolds were examined using a Nikon SMZ-U stereoscopic microscope with a Photomic PL 3000 light unit. Scaffolds with strut lengths of 1.5 and 2.0 mm have reasonable pore sizes (0.616–0.905 mm) with little trapped powder. This shows that the scaffold library and the algorithm, together with the SLS system, can produce viable scaffolds with consistent and reproducible microarchitecture.
4.2.4 Biomaterial Scaffolds

Two types of biomaterials were tested with regard to the feasibility of fabricating pre-designed scaffolds using CASTS.
4.2.4.1 Poly-Ether-Ether-Ketone (PEEK) and Hydroxyapatite (HA)
The same PEEK and HA powders, as well as the PEEK-HA composite blend, used by Tan et al. [37] were used for scaffold fabrication. Micrographs of the as-received PEEK and HA powders were taken using a scanning electron microscope (SEM) to
Fig. 4.6 Micrographs of fabricated scaffold samples
Fig. 4.7 SEM micrographs of as-received (a) PEEK and (b) HA powders [31]
study the shape of the powders (Fig. 4.7). PEEK particles are irregular in shape whereas HA particles are relatively more spherical. This difference in morphology is important as it makes it possible to easily distinguish between the two types of materials by visual inspection.
Fig. 4.8 SEM micrographs of as-received PCL [44]
Table 4.3 Material specifications

                                    PEEK        HA          PCL
Brand name                          Victrex     Camceram    Solvay
Particle shape                      Irregular   Spherical   Irregular
Average particle size (μm)          25          5–60
Powder density (kg/m3)              1320        3050
Glass transition temperature (°C)   143         N.A.
Melting temperature (°C)            343         1500

Scaffolds generated with the largest strut length of 2.5 mm have very large pores (>1000 μm) which are not suitable for cell seeding. The large pores may cause the cells to slip through and settle at the bottom instead of attaching to the scaffold surfaces. Also, in the event that some cells do adhere to the surfaces and proliferate, this will result in a weak cell mass, as cells only tend to grow along surfaces, which is not desirable. Therefore, for this stage, only three strut lengths were considered: 1.0, 1.5, and 2.0 mm.

For this investigation, a laser power of 17 W was used at a part bed temperature of 140 °C and a scan speed of 5080 mm/s (200 in/s) [37]. The reason for using one of the lower laser powers is to ensure that the energy density (i.e., temperature) incident on the part during sintering does not become so high that the properties of the material are altered. The first batch of specimens was built at a layer thickness of 0.152 mm (0.006 in). The parts built were found to be fragile and hard to handle. This could be due to the inability of the laser to sinter each layer to the next completely at such a thickness. Therefore, a second batch was fabricated with the layer thickness reduced to 0.10 mm (0.004 in).

In general, the fabricated scaffolds exhibit a well-defined microarchitecture, indicating the possibility of incorporating biomaterials into CASTS, even though the structural integrity of the PEEK scaffolds was observed to be not as good as that of those fabricated with Duraform™. This is expected, since Duraform™ is a commercial material with optimised settings in SLS, while PEEK is a new material. The advantage of PEEK scaffolds over Duraform™ scaffolds is the reduction of powder trapped within the pores and also the ease of powder removal. This may be attributed to the small particle size of the PEEK powder, which, at 25 μm, is less than half the average size of the Duraform™ powder. Upon close inspection, the scaffolds show favourable results with intact struts and well-defined pores (Fig. 4.9). The intended architecture was clearly visible under the microscope. This is further shown in Fig. 4.10, where the top few layers have been removed to expose the interconnected struts within the specimen. Delamination was observed for the first layer (0.10 mm) in some samples with larger pore sizes (>900 μm), indicating that the bond between the first layer and the next is weak. However, the delamination of the first layer did not affect the overall structure of the scaffold, as seen in photos taken under the microscope. Figure 4.11 shows the delaminated first layer of a scaffold.
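A rough way to compare the PEEK settings above with the Duraform™ settings used in Sect. 4.2.1 (4 W at the same 5080 mm/s scan speed) is the energy delivered per unit length of scan, P/v. This is only a first-order comparator, since scan spacing and layer thickness also enter the usual volumetric energy-density expressions and the scan spacing is not reported; the short Python sketch below therefore stops at line energy and uses only values quoted in the chapter.

```python
# Laser power (W) and scan speed (mm/s) as reported for the two materials.
duraform = {"power_W": 4.0, "scan_speed_mm_s": 5080.0}   # Sect. 4.2.1
peek = {"power_W": 17.0, "scan_speed_mm_s": 5080.0}      # this section

def line_energy_mJ_per_mm(power_W: float, speed_mm_s: float) -> float:
    """Energy delivered per unit length of a single scan track."""
    return power_W / speed_mm_s * 1000.0

for name, s in (("Duraform polyamide", duraform), ("PEEK", peek)):
    e = line_energy_mJ_per_mm(s["power_W"], s["scan_speed_mm_s"])
    print(f"{name:18s}: {e:.2f} mJ/mm")
```

On this measure the PEEK builds receive a little over four times the line energy of the Duraform™ builds, consistent with PEEK's much higher melting temperature (343 °C in Table 4.3 versus 186 °C for Duraform™ in Table 4.2).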
4.3.2 PEEK-HA Composite Scaffolds

After successfully sintering the PEEK scaffolds, PEEK-HA composite scaffolds were fabricated. The composite blend is made up of 90 wt.% PEEK and 10 wt.% HA. The physical blend was produced by using PEEK as a base material and
Fig. 4.9 Scaffolds with strut length 1.5 mm. (a) Top view. (b) Bottom view
Fig. 4.10 Sample with top layers exposed to show the inner structure
Fig. 4.11 Delaminated first layer
gradually adding HA in the roller mixer [37]. Figure 4.12a shows the distribution of HA particles (highlighted in red) in the composite after blending. After processing in SLS, it was seen that the HA particles were trapped in the interconnected PEEK matrix (see Fig. 4.12b). This is only possible because PEEK
Fig. 4.12 SEM micrographs of PEEK-10% wt HA (a) before and (b) after sintering [37]
Fig. 4.13 Top and bottom views of the PEEK-HA composite scaffold
has a much lower melting point than HA, and the temperatures that the composite is exposed to in the SLS process are not high enough to affect the HA particles.

Experiments showed that scaffolds with a strut length of 1.5 mm were the most promising in terms of structural shape as well as theoretical pore size (~600 μm) and porosity (>80%). Therefore, a set of scaffolds with a strut length of 1.5 mm was fabricated using the composite. The same parameters used for sintering PEEK were used to process the PEEK-10 wt.% HA composite, i.e., a laser power of 17 W at a part bed temperature of 140 °C, a scan speed of 200 in/s (5080 mm/s), and a layer thickness of 0.004 in (~0.10 mm). The scaffolds showed structural integrity and pore sizes similar to those of the pure PEEK scaffolds. The pores were also clearly visible under the microscope (see Fig. 4.13).

One observation made was the "balling effect" present in the fabricated specimens. This was not observed in the Duraform™ scaffolds or the pure PEEK samples. Nelson [45] attributed this to surface tension present in the powder bed. As the PEEK particles act as the binder and the HA particles remain in the solid phase, droplets of binder form on the scanned surface. Figure 4.14 shows the inner struts
Fig. 4.14 Balling effect seen in the inner struts
which have been removed from the scaffold. The balling effect was visible in some of the clusters of struts (highlighted in circles).
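Because HA is much denser than PEEK, the 10 wt.% loading discussed above corresponds to a noticeably smaller volume fraction of ceramic in the sintered struts. The calculation below is a hypothetical side note rather than something reported in the chapter; it simply converts the stated weight fraction into an approximate volume fraction using the densities listed in Table 4.3, treated here as the densities of the dense phases.

```python
# Densities from Table 4.3 (kg/m3); assumed to represent the dense phases.
rho_peek = 1320.0
rho_ha = 3050.0

def ha_volume_fraction(w_ha: float) -> float:
    """Convert an HA weight fraction into a volume fraction for a PEEK-HA blend."""
    w_peek = 1.0 - w_ha
    v_ha = w_ha / rho_ha
    v_peek = w_peek / rho_peek
    return v_ha / (v_ha + v_peek)

print(f"10 wt.% HA is roughly {ha_volume_fraction(0.10) * 100:.1f} vol.% HA")
```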
4.3.3 Polycaprolactone (PCL) Scaffolds

Polycaprolactone, being a biodegradable material, has a natural advantage over PEEK, and it has also been used to grow bone [6]. The scaffold model was fabricated using the following set of parameters: laser power = 3 W, fill scan speed = 5080 mm/s (200 in/s) (default), powder layer thickness = 0.102 mm (0.004 in), warm-up height = 6.35 mm (0.250 in), and cool-down height = 2.54 mm (0.100 in). PCL disc scaffolds were fabricated using the same set of parameters. Similar to the biocomposite scaffolds, the strut length was 1.5 mm with a strut diameter of 0.25 mm. A sample of the fabricated PCL scaffold is shown in Fig. 4.15.

In general, it was found that PCL scaffolds exhibited better structural integrity and higher strengths than the PEEK or biocomposite scaffolds. There was also a lower percentage of delamination. When observed under the microscope, there was evidence of crystallinity in the scaffolds. However, due to the much larger particle size of PCL, the struts were found to be much thicker than those of the PEEK scaffolds.

One challenge with scaffolds fabricated by SLS is the removal of powder. With PCL scaffolds, powder removal was relatively easy compared to materials such as Duraform™. As the fabricated PCL parts were elastic, powder trapped within the scaffolds was easily removed using a sieve shaker. For analysis through imaging techniques, cubes of 12.0 × 12.0 × 12.0 mm were fabricated (Fig. 4.16). To check for broken struts and residual trapped powder within the scaffolds, micro-computed tomography (micro-CT) was carried out using a SkyScan-1074 portable X-ray micro-CT scanner. Samples were first scanned in their entirety (slice thickness of 16 μm) and then zoomed in to view the volume at the centre (slice thickness of 8 μm). Results showed that there was a through
Fig. 4.15 Poly-ε-caprolactone scaffold
Fig. 4.16 Scaffold cubes built in different orientations
network of pores within the scaffolds. Figure 4.17 shows a scanned slice of the whole cross section of the scaffold built in the XZ-plane and, on the right, a portion of the reconstructed three-dimensional scaffold.
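Micro-CT slice stacks such as those described here are often reduced to simple quantitative checks, for example an overall porosity or trapped-powder estimate, by thresholding the reconstructed volume. The snippet below is not the analysis performed by the authors; it is a hedged numpy illustration that assumes a hypothetical stack of 8-bit grayscale slices already loaded into a 3D array.

```python
import numpy as np

def porosity_from_stack(volume: np.ndarray, threshold: int = 128) -> float:
    """Estimate porosity of a reconstructed micro-CT volume.

    volume    : 3D array of grayscale voxel values (z, y, x).
    threshold : voxels at or above this value are counted as solid material
                (polymer strut or trapped powder); the rest are pore space.
    """
    solid = volume >= threshold
    return 1.0 - solid.mean()

if __name__ == "__main__":
    # Synthetic stand-in for a scanned cube: random values only demonstrate
    # the call; real data would come from the scanner's slice export.
    rng = np.random.default_rng(0)
    fake_stack = rng.integers(0, 256, size=(150, 200, 200), dtype=np.uint8)
    print(f"porosity ~ {porosity_from_stack(fake_stack):.2f}")
```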
4.4 Conclusion

The main advantage of this system is the elimination of the reliance on user skills that is necessary in conventional techniques of scaffold fabrication. From a small range of basic units, many different scaffolds of different architectures and properties can be designed and built. The system interface of CASTS in Pro/ENGINEER is user friendly and allows complete transfer of knowledge between users without the need for complex user manuals. Biomaterial scaffolds fabricated using the system showed much potential. Not only was there consistency in structure, but there was also little problem with
Fig. 4.17 A scanned micro-CT section of the scaffold oriented in XZ
powder trapped within the scaffolds. Hence, CASTS is a viable system for generating and producing scaffolds for tissue engineering applications.
References

1. R.C. Thomson et al., Polymer scaffold processing, in Principles of Tissue Engineering, ed. by R.P. Lanza, R. Langer, J. Vacanti (Academic Press, San Diego, 2000), pp. 251–262
2. P. Zhuang, A.X. Sun, J. An, C.K. Chua, S.Y. Chew, 3D neural tissue models: from spheroids to bioprinting. Biomaterials 154, 113–133 (2018)
3. W.L. Ng, C.K. Chua, Y.F. Shen, Print me an organ! Why we are not there yet. Prog. Polym. Sci. 97, 101145 (2019)
4. M.S. Widmer, A.G. Mikos, Fabrication of biodegradable polymer scaffolds for tissue engineering, in Frontiers in Tissue Engineering, ed. by C.W. Patrick, A.G. Mikos, L.V. Mcintyre (Elsevier Sciences, New York, 1998), pp. 107–120
5. S.F. Yang et al., The design of scaffolds for use in tissue engineering. Part 1. Traditional factors. Tissue Eng. 7(6), 679–689 (2001)
6. D.W. Hutmacher, Scaffolds in tissue engineering bone and cartilage. Biomaterials 21(24), 2529–2543 (2000)
7. L.E. Niklason, R. Langer, Prospects for organ and tissue replacement. JAMA 285(5), 573–576 (2001)
8. L.E. Niklason et al., Functional arteries grown in vitro. Science 284(5413), 489–493 (1999)
9. F. Oberpenning et al., De novo reconstitution of a functional mammalian urinary bladder by tissue engineering. Nat. Biotechnol. 17(2), 149–155 (1999)
10. C.K. Chua et al., An integrated experimental approach to link a laser digitiser, a CAD/CAM system and a rapid prototyping system for biomedical applications. Int. J. Adv. Manuf. Technol. 14(2), 110–115 (1998)
11. K.H. Low, K.F. Leong, C.K. Chua, Z.H. Du, C.M. Cheah, Characterization of SLS parts for drug delivery devices. Rapid Prototyp. J. 7(5), 262–268 (2001)
12. C.K. Chua et al., Rapid prototyping assisted surgery planning. Int. J. Adv. Manuf. Technol. 14(9), 624–630 (1998)
13. J. An, C.K. Chua, V. Mironov, A perspective on 4D bioprinting. Int. J. Bioprinting 2(1) (2016)
14. J. An, J.E.M. Teoh, R. Suntornnond, C.K. Chua, Design and 3D printing of scaffolds and tissues. Eng. 1(2), 261–268 (2015)
15. R.A. Levy et al., CT-generated porous hydroxyapatite orbital floor prosthesis as a prototype bioimplant. Am. J. Neuroradiol. 18(8), 1522–1525 (1997)
16. I. Ono et al., Treatment of large complex cranial bone defects by using hydroxyapatite ceramic implants. Plast. Reconstr. Surg. 104(2), 339–349 (1999)
17. A. Curodeau, E. Sachs, S. Caldarise, Design and fabrication of cast orthopedic implants with freeform surface textures from 3-D printed ceramic shell. J. Biomed. Mater. Res. 53(5), 525–535 (2000)
18. C.M. Cheah et al., Characterization of microfeatures in selective laser sintered drug delivery devices. Proc. Inst. Mech. Eng. H J. Eng. Med. 216(6), 369–383 (2002)
19. N.L. Porter, R.M. Pilliar, M.D. Grynpas, Fabrication of porous calcium polyphosphate implants by solid freeform fabrication: a study of processing parameters and in vitro degradation characteristics. J. Biomed. Mater. Res. 56(4), 504–515 (2001)
20. K.F. Leong, C.M. Cheah, C.K. Chua, Building scaffolds with designed internal architectures for tissue engineering using rapid prototyping. Tissue Eng. 8(6), 1113 (2002)
21. S.J. Hollister et al., An image-based approach for designing and manufacturing craniofacial scaffolds. Int. J. Oral Maxillofac. Surg. 29(1), 67–71 (2000)
22. T.M.G. Chu et al., Hydroxyapatite implants with designed internal architecture. J. Mater. Sci. Mater. Med. 12(6), 471–478 (2001)
23. S.J. Hollister et al., Design and manufacture of bone replacement scaffolds, in Bone Mechanics Handbook, ed. by S. Corwin (CRC Press, Boca Raton, 2001), pp. 1–14
24. S.E. Feinberg et al., Image-based biomimetic approach to reconstruction of the temporomandibular joint. Cells Tissues Organs 169(3), 309–321 (2001)
25. T.M.G. Chu et al., Mechanical and in vivo performance of hydroxyapatite implants with controlled architectures. Biomaterials 23(5), 1283–1293 (2002)
26. S.J. Hollister, R.D. Maddox, J.M. Taboas, Optimal design and fabrication of scaffolds to mimic tissue properties and satisfy biological constraints. Biomaterials 23(20), 4095–4103 (2002)
27. C.K. Chua et al., Development of a tissue engineering scaffold structure library for rapid prototyping. Part 1: investigation and classification. Int. J. Adv. Manuf. Technol. 21(4), 291–301 (2003)
28. J.S. Temenoff, L. Lu, A.G. Mikos, Bone tissue engineering using synthetic biodegradable polymer scaffold, in Bone Engineering, ed. by J.E. Davies (Em Incorporated, Toronto, 1999)
29. G.P. Chen, T. Ushida, T. Tateishi, Development of biodegradable porous scaffolds for tissue engineering. Mater. Sci. Eng. 17(1–2), 63–69 (2001)
30. S.J. Hollister, Porous scaffold design for tissue engineering. Nat. Mater. 4(7), 518–524 (2005)
31. C.M. Agrawal, R.B. Ray, Biodegradable polymeric scaffolds for musculoskeletal tissue engineering. J. Biomed. Mater. Res. 55(2), 141–150 (2001)
32. D.J. Mooney, R.S. Langer, Engineering biomaterials for tissue engineering, in The Biomedical Engineering Handbook, ed. by J.D. Bronzino (CRC Press, Boca Raton, 1995), pp. 109/1–109/8
33. K.F. Leong, C.M. Cheah, C.K. Chua, Solid freeform fabrication of three-dimensional scaffolds for engineering replacement tissues and organs. Biomaterials 24(13), 2363–2378 (2003)
34. C.K. Chua, K.F. Leong, 3D Printing and Additive Manufacturing: Principles and Applications, 5th edn. (World Scientific, Singapore, 2017)
35. M. Lee, J.C.Y. Dunn, B.M. Wu, Scaffold fabrication by indirect three-dimensional printing. Biomaterials 26(20), 4281–4289 (2005)
36. C.X.F. Lam, In vitro degradation studies of customised PCL scaffolds fabricated via FDM, in ICBME, Singapore, 2002
37. K.H. Tan et al., Scaffold development using selective laser sintering of polyetheretherketone-hydroxyapatite biocomposite blends. Biomaterials 24(18), 3115–3123 (2003)
38. S. Limpanuphap, B. Derby, Manufacture of biomaterials by a novel printing process. J. Mater. Sci. 13(12), 1163–1166 (2002)
39. E. Sachlos et al., Novel collagen scaffolds with predefined internal morphology made by solid freeform fabrication. Biomaterials 24(8), 1487–1497 (2003)
40. S. Yuan, F. Shen, C.K. Chua, K. Zhou, Polymeric composites for powder-based additive manufacturing: materials and applications. Prog. Polym. Sci. 91, 141–168 (2019)
41. A.E. Tontowi, T.H.C. Childs, Density prediction of crystalline polymer sintered parts at various powder bed temperatures. Rapid Prototyp. J. 7(3), 180–184 (2001)
42. T.H.C. Childs, A.E. Tontowi, Selective laser sintering of a crystalline and a glass-filled crystalline polymer: experiments and simulations. J. Eng. Manuf. 215(11), 1481–1495 (2001)
43. H.C.H. Ho, I. Gibson, W.L. Cheung, Effects of energy density on morphology and properties of selective laser sintered polycarbonate. J. Mater. Process. Technol. 90, 204–210 (1999)
44. E. Liu, Development of Customised Biomaterial Powder Composite for Selective Laser Sintering (Nanyang Technological University, Singapore, 2005)
45. J.C. Nelson, Selective Laser Sintering: A Definition of the Process and an Empirical Sintering Model (UMI, Ann Arbor, 1993), p. 231
Chapter 5
Additive Biomanufacturing Processes to Fabricate Scaffolds for Tissue Engineering
Boyang Huang, Henrique Almeida, Bopaya Bidanda, and Paulo Jorge Bártolo
5.1 Introduction

Tissue engineering is a rapidly expanding multidisciplinary and interdisciplinary field exploiting biocompatible and biodegradable materials, living cells, and biomolecular signals combined with additive manufacturing approaches to produce constructs to restore, maintain, or enhance the function of tissues or organs [1–8]. Three strategies have been explored for the creation of a new tissue [3, 4, 9–15]:

• Strategy 1: The use of isolated cells or cell substitutes. This strategy, widely used in clinical applications such as the cornea, oesophagus, heart, periodontal ligament, and cartilage, avoids potential surgical complications but has the disadvantages of possible rejection or loss of function (in vivo).
• Strategy 2: Delivery of tissue-inducing substances such as low-molecular-weight drugs, proteins, and oligonucleotides that can stimulate cell proliferation, migration, and differentiation. These signalling molecules are generally divided into: (1) mitogens, which stimulate cell division, (2) growth factors, which mainly induce cell proliferation, and (3) morphogens, which control tissue formation. The
Fig. 5.1 Bottom-up and top-down approaches
success of this strategy depends on the growth factors and controlled release systems (in vitro).
• Strategy 3: Cells placed on or within constructs. This is the most common strategy and involves two approaches: bottom-up and top-down (Fig. 5.1).

The bottom-up approach employs different techniques for creating modular tissues, which are then assembled into engineered tissues with specific micro-architectural features [16–18]. Tissue modules can be created through self-assembled aggregation, microfabrication of cell-laden hydrogels, fabrication of cell sheets, or direct printing [19–22]. The ability of cell aggregates to fuse is based on the concept of tissue fluidity, according to which embryonic tissues can be considered as liquids [23]. The major drawback of this approach is that some cell types are unable to produce sufficient extracellular matrix (ECM), migrate, or form cell–cell junctions [14].

The top-down or scaffold-based approach is based on the use of a temporary scaffold that provides a substrate for implanted cells and a physical support to organize the formation of the new tissue [2, 24, 25]. In this approach, transplanted cells adhere to the scaffold, proliferate, secrete their own ECM, and stimulate new tissue formation. Cell seeding depends on fast attachment of cells to the scaffold, high cell survival, and uniform cell distribution, and these are strongly dependent on the scaffold material, architecture, surface stiffness, and surface energy [26–28].

Cells used in tissue engineering may be allogenic (donor to recipient), xenogenic (cross-species), syngeneic (genetically identical donor), or autologous (donor back
Table 5.1 Relationship between scaffold characteristics and the corresponding biological effect [39]

Scaffold characteristics              Biological effect
Biocompatibility                      Cell viability and tissue response
Biodegradability                      Aids tissue remodelling
Porosity                              Cell migration inside the scaffold; vascularization
Chemical properties of the material   Aids in cell attachment and signalling in the cell environment; allows release of bioactive substances
Mechanical properties                 Affects cell growth and proliferation response; in vivo load bearing capacity
to donor) [9]. They should be nonimmunogenic, highly proliferative, easy to harvest, and have a high capacity to differentiate into a variety of cell types with specialized functions [9]. Mesenchymal stem cells (MSCs) are considered to be a promising approach for tissue engineering due to their immunomodulatory capabilities. Bone marrow, umbilical cord, and adipose-derived stem cells are commonly used [29–36].

In the top-down approach, scaffolds provide an initial biochemical substrate for the novel tissue until cells can produce their own extracellular matrix (ECM). Therefore, scaffolds are 3D degradable porous structures that [2, 37, 38]:

• Allow cell attachment, proliferation, and differentiation;
• Deliver and retain cells and growth factors;
• Enable diffusion of cell nutrients and oxygen;
• Enable an appropriate mechanical and biological environment for tissue regeneration in an organized way.
To achieve these goals an ideal scaffold must satisfy some biological and mechanical requirements (Table 5.1) [24, 25, 40–43]:

1. Biological requirements:
   a. Biocompatibility – the scaffold material must be non-toxic and allow cell attachment, proliferation, and differentiation.
   b. Biodegradability – the scaffold material must degrade into non-toxic products.
   c. Controlled degradation rate – the degradation rate of the scaffold must be adjustable in order to match the rate of tissue regeneration.
   d. Appropriate porosity and pore structure (Fig. 5.2) – a porous structure facilitates cell attachment, proliferation, and differentiation. A large number of pores may enhance vascularization, while smaller pore diameters are preferable to provide a large surface-to-volume ratio. Pore interconnectivity is also critical for cell delivery, nutrient supply, and metabolic processes.
   e. Should encourage the formation of ECM by promoting cellular functions.
   f. Ability to carry biomolecular signals such as growth factors, which play a significant role in instructing cell behaviour and guiding new tissue formation.
Fig. 5.2 The effects of pore size on 3D printed scaffolds
2. Mechanical and physical requirements:
   a. Sufficient strength and stiffness to withstand stresses in the host tissue environment.
   b. Adequate surface finish guaranteeing that a good biomechanical coupling is achieved between the scaffold and the tissue. Surface properties such as surface charge and surface topography can influence biocompatibility.
   c. Easily sterilized, either by exposure to high temperatures or by immersion in a sterilization agent, remaining unaffected by either of these processes.

A variety of biodegradable materials have been used to produce scaffolds, including a wide range of organic, inorganic, and composite materials. The most commonly used materials to produce scaffolds for tissue engineering are indicated in Table 5.2.
5.2 Conventional Fabrication Techniques

Conventional methods to fabricate scaffolds include [37, 44–48]:
Table 5.2 Biomaterials commonly used in tissue engineering

Organic materials – natural polymers: Alginate, Collagen, Chitosan, Hyaluronic acid, Poly(hydroxybutyrate), Gelatine

Organic materials – synthetic polymers: Aliphatic polyesters (i.e., poly(glycolic acid), poly(lactic acid) and their co-polymers, poly(ε-caprolactone)), Polyanhydride, Polyphosphazenes, Poly(dioxanone), Polyethylene oxide/polybutylene terephthalate co-polymers, Poly(propylene fumarate), Polyurethanes

Inorganic materials: Hydroxyapatite (HA) and other types of calcium phosphate like fluorapatite or tricalcium phosphate (TCP), Biphasic hydroxyapatite/tricalcium phosphate ceramics, Bioglass ceramics, Graphene and carbon nanotubes
• Solvent casting/salt leaching: involves mixing solid impurities, such as sieved sodium chloride particles, into a polymer solvent solution, and casting the dispersion to produce a membrane of polymer and salt particles. The salt particles are then leached out with water to yield a porous membrane. Porosity and pore size have been shown to be dependent on the salt weight fraction and particle size. Pore diameters of 100–500 μm and porosities of 87–91% have been reported.
• Phase separation: involves dissolving a polymer in a suitable solvent, placing it in a mould, and then cooling the mould rapidly until the solvent is frozen. The solvent is removed by freeze-drying, leaving behind the polymer as a foam with pore sizes of 1–20 μm in diameter. The pore size is controlled by the freezing rate and pH.
• Foaming: is carried out by dissolving a gas, usually CO2, at elevated pressure or by incorporating a chemical blowing agent that yields gaseous decomposition products. This process generally leads to pore structures that are not fully interconnected and produces a skin-core structure.
• Gas saturation: this technique uses high-pressure carbon dioxide to produce macroporous sponges at room temperature. Polymeric sponges with large pores (~100 μm) and porosities up to 93% have been reported.
• Textile meshes: these processes include all technologies successfully employed to fabricate non-woven meshes of different polymers. Major limitations are due to difficulties in obtaining high porosity and regular pore size.

Each of these techniques presents several limitations, as they usually do not enable proper control of pore size, pore geometry, and the spatial distribution of pores, besides being almost unable to construct internal channels within the scaffold. Beyond these limitations, these techniques usually involve the use of toxic organic solvents and long fabrication times, on top of being labour-intensive processes. Therefore, additive biomanufacturing techniques are considered a viable alternative to fabricate scaffolds for tissue engineering.
5.3 Additive Manufacturing Techniques for Tissue Engineering

Biomanufacturing, initially defined in 2005 during the Biomanufacturing Workshop hosted by Tsinghua University in China as "the use of additive technologies, biodegradable and biocompatible materials, cells, growth factors, etc., to produce biological structures for tissue engineering applications," represents a new group of non-conventional fabrication techniques recently introduced in the medical field. The main advantages of these techniques are both the capacity to rapidly produce very complex 3D models in a layer-by-layer fashion and the ability to use various raw materials. When combined with clinical imaging data, these fabrication techniques can be used to produce constructs that are customized to the shape of the defect or injury [25]. Moreover, some processes operate at room temperature, thus allowing for cell encapsulation and biomolecule incorporation without significantly affecting viability.

The first step to produce a construct through additive biomanufacturing is the generation of the corresponding computer solid model through one of the currently available medical imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), etc., together with specifications for cellular, biomechanical, and signalling characteristics. These imaging methods produce continuous volumetric data (voxel-based data), which provide the input data for the digital model generation [49]. Alternatively, a simplified model can be directly designed using computer-aided design (CAD) software or derived from mathematical equations [50, 51]. The model is then tessellated as an STL file, which is the standard file format for facetted models. In this format, 3D models are represented by a number of three-sided planar facets (triangles), each facet defining part of the external surface of the object. Finally, the STL model is mathematically sliced into thin layers (sliced model) and sent to the additive biomanufacturing system to be produced. The slicing can be uniform, where the layer thickness is kept constant, or adaptive, where the layer thickness changes based on the surface geometry of the CAD model. Figure 5.3 illustrates the main steps to produce a scaffold for tissue engineering. Different technologies can be used, as indicated in Table 5.3.
Fig. 5.3 Main steps to produce a scaffold for tissue engineering through additive biomanufacturing [52]
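To make the slicing step concrete, the fragment below shows one way a uniform slicer could intersect the triangles of an STL-style facet list with horizontal planes to obtain the contour segments of each layer. It is a simplified, hypothetical Python sketch (degenerate cases such as facets lying exactly in a plane are ignored) and not the algorithm of any particular additive biomanufacturing system.

```python
import numpy as np

def slice_triangle(tri: np.ndarray, z: float):
    """Return the segment where one triangular facet crosses the plane at height z.

    tri is a (3, 3) array of vertex coordinates. A facet that straddles the
    plane is cut along two of its edges; facets entirely above or below, or
    touching the plane only at a vertex, are skipped in this simplified sketch.
    """
    points = []
    for i in range(3):
        p, q = tri[i], tri[(i + 1) % 3]
        if (p[2] - z) * (q[2] - z) < 0:          # this edge crosses the plane
            t = (z - p[2]) / (q[2] - p[2])
            points.append(p + t * (q - p))
    return (points[0], points[1]) if len(points) == 2 else None

def uniform_slices(triangles, layer_thickness: float):
    """Group contour segments by layer for a constant layer thickness."""
    zs = np.concatenate([t[:, 2] for t in triangles])
    levels = np.arange(zs.min() + layer_thickness / 2, zs.max(), layer_thickness)
    return [[s for t in triangles if (s := slice_triangle(t, z)) is not None]
            for z in levels]

# Example: a single facet sliced with a 0.1 mm layer thickness.
facet = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.5], [0.0, 1.0, 0.5]])
print(len(uniform_slices([facet], 0.1)), "layers of segments")
```

An adaptive slicer would follow the same pattern but vary the spacing of the z-levels according to the local surface geometry, as described in the text.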
Table 5.3 Additive biomanufacturing processes [52]

Photo-fabrication processes: This technology involves the curing or solidification of a liquid photosensitive polymer through the use of an irradiation light source, which supplies the energy needed to induce a chemical reaction, bonding a large number of small molecules and forming a highly cross-linked polymer.

Powder-bed fusion processes: These comprise SLS (selective laser sintering), SLM (selective laser melting), and EBM (electron beam melting) techniques, which use high-energy sources to consolidate powder material.

Extrusion-based processes: In this process, thin thermoplastic filaments are melted by heating and deposited by a numerically controlled robotic device. The material leaves the extruder in a liquid form and hardens immediately. The process can also operate at physiological temperatures to print hydrogels and living cells.

Binder jetting method: This process deposits a stream of droplets of a binder material over the surface of a powder bed, joining particles together where the object is to be formed. After fabrication, the unbound powder material is removed and the part is usually submitted to a sintering process.
5.3.1 Photo-Fabrication Process

Photo-fabrication processes use light to create acellular scaffolds or cell-laden constructs through:

1. Processes that use light to start a chemical reaction that converts a liquid polymer into a solid structure (vat-photopolymerization process).
2. Processes that use light to create radiation forces or local heat to eject material droplets or cells towards a substrate (non-chemical process, e.g., LGDW and LIFT processes).
5.3.1.1 Vat-Photopolymerization
Vat-photopolymerization processes produce three-dimensional solid objects in a multi-layer procedure through the selective photo-initiated curing reaction of a liquid photosensitive polymer [24, 53–58]. These processes employ two distinct methods of irradiation [59]. The first is the mask-based method, in which an image is transferred to a liquid polymer by irradiating through a patterned mask; the irradiated part of the liquid polymer is then solidified. The second method, the direct writing process, uses a focused UV beam to produce 3D polymer structures.

The direct or laser writing approach consists of a vat containing a photosensitive polymer, a moveable platform on which the model is built, a laser to irradiate and cure the polymer, and a dynamic mirror system to direct the laser beam over the polymer surface, "writing" each layer [59]. After drawing a layer, the platform dips into the polymer vat, leaving a thin film from which the next layer will be formed. The curing reaction can be induced using both single-photon polymerization and two-photon polymerization (2PP) reactions. The chemical principle of these two processes is similar, the main difference being the number of
absorbed photons required to induce the polymerization process [59]. Two-photon-initiated polymerization processes employ a femtosecond infrared laser without photo-masks [60–63]. This method allows a submicron 3D resolution on top of enabling 3D fabrication at greater depth and ultra-fast fabrication. In the mask-based irradiation method, a dynamic pattern generator is used to shape a light beam according to the image of the layer to be built [59, 64, 65]. LCD (liquid crystal display) panels and DMDs (digital micromirror devices) can be used as dynamic pattern generators [59].

A wide range of polymeric materials, including hydrogels and polymer/ceramic materials, can be processed through vat-photopolymerization. Moreover, the UV polymerization of hydrogels occurs at sufficiently mild conditions (low light intensity, short irradiation time, physiological temperature, and low organic solvent levels), enabling the reaction to be carried out in the presence of cells [66]. Hydrogels used in vat-photopolymerization include hyaluronic acid, chitosan, collagen, gelatine, dextran, pectin, poly(propylene fumarate) (PPF), polyethylene glycol (PEG), polyethylene oxide (PEO), poly(hydroxyethyl methacrylate) (PHEMA), hyaluronic acid-based materials, poly(vinyl alcohol) (PVA) derivatives, etc. [66–71].

Single-photon direct writing was used by Lan et al. [72] to produce PPF scaffolds with a highly interconnected porous structure and a porosity of 65%. The produced scaffolds were coated by applying an accelerated biomimetic apatite and arginine-glycine-aspartic acid (RGD) peptide coating to improve the biological performance. The coated scaffolds were seeded with MC3T3-E1 pre-osteoblasts and their biologic properties were evaluated using an MTS assay and histologic staining. Seck et al. [73] produced porous and non-porous biodegradable hydrogel structures using an aqueous photocurable polymer based on methacrylate-functionalized poly(ethylene glycol)/poly(D,L-lactide) macromers and Lucirin TPO-L as a visible light photoinitiator. After photopolymerization, the obtained structures were extracted with distilled water to remove soluble compounds and dried at ambient conditions for 3 days. The structures showed good cell seeding characteristics, and human mesenchymal stem cells adhered and proliferated well on these structures. Vat-photopolymerization was also used to produce 3D scaffolds with embedded growth factor-delivering microspheres [12]. In this study, bone morphogenetic protein 2 (BMP-2)-loaded poly(lactic-co-glycolic acid) (PLGA) microspheres were incorporated into 3D scaffolds photo-fabricated using a suspension of microspheres and a PPF/diethyl fumarate (DEF) photopolymer. The effects of BMP-2 release were assessed in vitro by observing cell differentiation using MC3T3-E1 pre-osteoblasts.

Chan et al. [74] modified a vat-photopolymerization apparatus to produce poly(ethylene glycol) diacrylate (PEGDA) constructs with encapsulated 3T3 cells. Each layer of cell-containing photopolymer was manually added to prevent cells settling to the bottom of the vat due to gravity (Fig. 5.4). This strategy allows the use of multiple hydrogel compositions and cell types and control over the spatial distribution of cells and bioactive molecules. A similar approach was used by Zorlutuna et al. [75] to produce spatially organized 3D co-cultures of multiple cell types to investigate cell–cell interactions and the microenvironments of complex
Fig. 5.4 (a) Approach to produce hydrogel constructs encapsulating cells. (b) Manual deposition of each layer of cell-containing photopolymer [74]
tissues. Two-layer constructs using different cell types and hydrogels were produced as described in Fig. 5.5. The first layer was produced by polymerizing poly(ethylene glycol) methyl ether methacrylate (PEGMA3400) containing adipose-derived stem cells. The second layer contained primary hippocampus neurons and skeletal muscle myoblast cells encapsulated in OMA-PEGMA1100 (oxidized methacrylic alginate and poly(ethylene glycol) methyl ether methacrylate). Morris et al. [76] produced chitosan/PEGDA scaffolds. The composite was formulated by controlling the molecular weight of chitosan (50–190 kDa), the feed ratio (1:7.5), and the photo-initiator concentration. The produced scaffolds showed homogeneous and interconnected pores with a nominal pore size of 50 μm and an elastic modulus of ~400 kPa. Long-term cell viability and cell proliferation were observed by actin filament staining.

Vat-photopolymerization was also used to produce polymer/ceramic scaffolds. Guillaume et al. [77] produced bone scaffolds using poly(trimethylene carbonate)
Fig. 5.5 (a) Fabrication procedure. (b) CAD model. (c) Fabrication sequence. (d) Fluorescence microscopy image of the adipose-derived stem cells (blue) in the first layer (the scale bar is 1 mm). (e) Fluorescence microscopy image of the primary hippocampus neurons (green) and the skeletal muscle myoblast cells (red) in the second layer (the scale bar is 1 mm) [75]
Fig. 5.6 PTMC scaffolds with different amounts of HA. (a) CAD model and (b) macroscopic and (c) microscopic SEM images, (d) 3D reconstruction using micro-CT, (e) scaffold strut diameter, (f) pore diameter, and (g) porosity for PTMC, PTMC 20, and PTMC 40 scaffolds [77]
(PTMC) mixed with 20 and 40 wt.% of hydroxyapatite (HA) (Fig. 5.6). Results showed that the fabrication process leads to a surface enrichment of HA nanoparticles in the composite scaffolds. A significant improvement in bone regeneration was observed using composite scaffolds containing as little as 20 wt.% HA. The scaffolds were assessed in vitro using human bone marrow mesenchymal stem cells (hBMSCs) and in vivo to treat calvarial defects in rabbits. Single-photon polymerization was also used by Killion et al. [78] to process PEGDMA with a specific amount of distilled water, bioactive glass powder, and photo-initiator. The produced scaffolds presented enhanced mechanical properties compared to PEGDMA scaffolds without bioactive glass and did not display the inherent brittleness typically associated with bioactive glass-based scaffolds.

Two-photon and multi-photon polymerization have been used to process different natural and synthetic polymers and proteins such as PEG, gelatin, hyaluronic acid,
Fig. 5.7 Fluorescence confocal microscopy of PEGDA hydrogel scaffolds produced with a predefined spatial pattern: single-layer (a) and multi-layered (b, c) scaffolds carrying either FITC- or Cy5-labelled polystyrene particles [82]
bovine serum albumin (BSA), fibrinogen, fibronectin, and collagen, as well as organically modified ceramic materials [60, 70, 79–81]. Lu et al. [82] used a dynamic masking system to produce hydrogel scaffolds with different pore geometries (hexagons, triangles, honeycombs with triangles, and squares) and pore sizes (165–650 μm) through the polymerization of a polymeric system consisting of PEGDA dissolved in phosphate-buffered saline with 0.1 wt.% of photo-initiator (Irgacure 2959). Murine OP-9 marrow stromal cells were also encapsulated within the PEGDA hydrogel, with an encapsulation efficiency of 73% and cells remaining viable after 24 h of incubation. To achieve efficient cell seeding, the scaffold surfaces were covalently functionalized with fibronectin. The modified scaffolds were seeded with murine mesenchymal stem cells, showing the ability to promote cell attachment and the osteogenic differentiation of the cells upon addition of an osteogenic culture medium. Using different fluorescently labelled polystyrene microparticles, the authors also demonstrated the feasibility of producing scaffolds with multiple entrapped biochemical factors in precisely pre-designed, spatially patterned layers (Fig. 5.7).
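As general background for the single-photon processes described above (the relation below is standard stereolithography theory and is not stated in this chapter; the resin constants are assumptions that must be measured for each photopolymer), the depth of resin cured by a given exposure is commonly estimated with the Jacobs working-curve equation,

C_d = D_p \ln\left(\frac{E_{\max}}{E_c}\right),

where C_d is the cure depth, E_max is the peak exposure at the resin surface, D_p is the resin penetration depth, and E_c is the critical exposure below which no gelation occurs. In practice, the layer thickness is chosen somewhat smaller than C_d so that successive layers bond to each other.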
5.3.1.2 Non-chemical Photo-Fabrication Processes
Non-chemical photo-fabrication processes, such as laser-guided direct writing (LGDW) and laser-induced forward transfer (LIFT), use light energy to generate radiation forces or local heating, respectively, to propel hydrogels, suspended cells, and cell aggregates toward a substrate [5, 83–85]. The LGDW process uses a laser operating at a wavelength of ~800 nm, which is weakly focused on a cell suspension; the radiation forces arising from the refractive-index difference between the cells and the surrounding medium move the cells onto a receiving substrate with micrometer resolution. However, this technique suffers from low cell throughput (2.5 cells/min) and poor reproducibility [5].
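For context on the radiation forces mentioned above, the guidance force exerted on a cell by a weakly focused beam is often written, in standard optical-trapping theory (not given in this chapter), as

F = \frac{Q\, n_m\, P}{c},

where P is the laser power, n_m is the refractive index of the surrounding medium, c is the speed of light, and Q is a dimensionless efficiency factor set by the refractive-index contrast between the cell and the medium. Because Q is typically well below unity for cells in aqueous media, the resulting forces are on the order of piconewtons for typical powers, which is consistent with the low throughput noted above.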
The LIFT process uses a high-energy pulsed laser (typically with a wavelength of 248 nm, a pulse duration of 2.5 ns, and a pulse energy of 5–10 μJ) to induce local heating of a liquid suspension, leading to its ejection (at velocities ranging from 200 to 1200 m/s) towards a receiving substrate. Its setup consists of a laser system, a print ribbon from which the material is ejected, an image acquisition system, and a substrate [5, 86]. The size and shape of the ejected material can be controlled through the incident laser spot [87]. The LIFT process comprises two main techniques: matrix-assisted pulsed laser evaporation (MAPLE) and biological laser printing (BioLP).

The MAPLE technique uses a low-power laser operating in the UV or near-UV region (wavelength of 193 nm and fluence of ~0.02 J/cm²), which is focused by a microscope objective at the interface between the print ribbon and the optically absorbing material. The print ribbon is a laser-transparent quartz disk coated with an organic material dissolved in a laser-absorbent solvent that absorbs the incident light. The laser energy is converted into thermal energy, which promotes the evaporation of the volatile solvent and the deposition of a thin film [87]. The BioLP process (Fig. 5.8), which normally uses a near-IR laser, is characterized by a biocompatible laser-absorbing interlayer (1–100 nm) placed between the print ribbon and the cellular layer (bioink). This interlayer eliminates the direct interaction between the laser and the biological material, reduces heating problems, and allows more efficient droplet formation [5, 88, 89].

Non-chemical photo-fabrication processes have been successfully exploited for tissue engineering. Gaebel et al. [86] used a LIFT process to prepare a polyester urethane urea (PEUU) cardiac patch seeded with human umbilical vein endothelial cells (HUVECs) and human mesenchymal stem cells (hMSCs) in a defined pattern. The patches were cultivated in vitro and successfully transplanted to the infarcted zone of rat hearts. Paun et al. [90] developed a multistep laser-based technique to pattern polymer blends containing polyurethane (PU), PLGA, and polylactide-polyethylene glycol-polylactide (PPP)
Fig. 5.8 Schematic representation of the BioLP process (laser, microscope objective, laser-absorption layer, biological layer of cells in fluid on the ribbon, and receiving substrate; the setup also includes an energy meter and a CCD camera) [88]
in a 1:1:1 blending ratio. The polymers were patterned with periodic micro-channels by direct femtosecond laser ablation and coated with thin layers of polymer blends using a MAPLE process. Depending on the bottom/top layer combination, the produced structures were classified as PU/PU:PLGA:PPP and PU:PLGA:PPP/PU:PLGA:PPP. These micro-patterned structures were designed for guided cell adhesion and localized hyaluronic acid immobilization. Both structures allowed the effective immobilization of hyaluronic acid; however, the highest cellular density was obtained with the PU:PLGA:PPP/PU:PLGA:PPP substrate. Wu and Ringeisen [91] used a BioLP system to fabricate branch/stem structures of human umbilical vein endothelial cells (HUVECs) and human umbilical vein smooth muscle cells (HUVSMCs). These structures were designed to mimic biological vascular structures. The obtained results show that the BioLP process can be used to print lumen or lumen-like vascular structures. The BioLP process was also used by Keriquel et al. [92] for the in situ printing of a nano-hydroxyapatite slurry to treat critical-size mouse calvaria defects (Fig. 5.9). Thirty-six 12-week-old OF-1 male mice were used. Heterogeneous effects on bone formation were observed after printing: in some cases, bone formation was clearly evident one month after printing, while other bone defects appeared almost empty after three months. Nevertheless, BioLP seems to be a promising technology for in vivo printing.
Fig. 5.9 (a) Critical-size (4 mm) calvaria bone defects, (b) Schematic setup, (c) Specific holder for in vivo printing, (d) Micro-CT image obtained 1 week after printing, (e) Micro-CT image obtained 1 month after printing, (f) Micro-CT image obtained 3 months after printing. Scale bar represents 3 mm [92]
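The fluence and pulse-energy figures quoted in this subsection (~0.02 J/cm² for MAPLE, 5–10 μJ per pulse for LIFT) are related simply through the irradiated spot area. The short Python sketch below shows that conversion; the spot diameter used in the example is a hypothetical value chosen for illustration only, not one reported in the cited studies.

import math

def fluence_j_per_cm2(pulse_energy_uj, spot_diameter_um):
    """Laser fluence (J/cm^2) delivered by one pulse over a circular spot."""
    energy_j = pulse_energy_uj * 1e-6            # microjoules -> joules
    radius_cm = (spot_diameter_um * 1e-4) / 2.0  # micrometres -> centimetres
    area_cm2 = math.pi * radius_cm ** 2
    return energy_j / area_cm2

# Hypothetical example: a 7.5 uJ pulse focused to a 50 um spot
print(round(fluence_j_per_cm2(7.5, 50.0), 2))    # ~0.38 J/cm^2

As the example suggests, micro-joule pulses concentrated on spots of a few tens of micrometres already yield fluences well above the MAPLE value, which is why focusing conditions strongly influence droplet ejection.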
5.3.2 Powder-Bed Fusion Process

This technique uses a laser emitting infrared radiation to selectively heat powder material just beyond its melting point. The laser traces the shape of each cross-section of the model to be built, sintering a thin layer of powder. After each layer is solidified, the piston supporting the model lowers to a new position and a new layer of powder is supplied using a mechanical roller [2, 25]. The powder that remains unaffected by the laser acts as a natural support for the model and remains in place until the model is complete. The mechanical properties and resolution are strongly dependent on the manufacturing direction and on processing parameters such as spot size, laser intensity, scan spacing, and particle size [6, 51, 93, 94]. An important limitation of this process is that the pore size of the scaffold depends on both the particle size of the powder material and the compaction pressure.

The potential of powder-bed fusion to produce poly(ε-caprolactone) (PCL) scaffolds for the replacement of skeletal tissues was shown by Williams et al. [94]. The scaffolds were seeded with bone morphogenetic protein-7 (BMP-7) transduced fibroblasts. In vivo results showed that these scaffolds enhance tissue ingrowth while possessing mechanical properties within the lower range of trabecular bone: the compressive modulus (52–67 MPa) and yield strength (2.0–3.2 MPa) were in the lower range of properties reported for human trabecular bone. Similarly, Chen et al. [95] produced a surface-modified PCL scaffold for cartilage tissue engineering. The surface of the scaffold was coated with either gelatine or collagen to improve the hydrophilicity, water uptake, and mechanical strength. Results showed that the collagen-modified scaffold presented the best biological behaviour in terms of cell proliferation and extracellular matrix formation, and chondrocyte/collagen scaffolds implanted into female nude mice enhanced cartilage tissue generation. Weisgerber et al. [96] produced bone scaffolds combining a collagen-glycosaminoglycan-calcium phosphate (CGCaP) composite, fabricated via lyophilization of a CGCaP precursor suspension, with a PCL support frame fabricated via the powder-bed fusion technique. Results showed that the PCL support frame dominates the bulk mechanical response of the composite scaffold, resulting in a 6000-fold increase in the elastic modulus. The higher surface area of the collagen-PCL composite scaffolds increased the initial attachment of porcine adipose-derived stem cells. Powder-bed fusion was also used by Liao et al. [97] to produce PCL and PCL/TCP scaffolds, which were then coated with collagen type I. Results showed that the compressive modulus increases from 6.77 ± 0.19 to 13.66 ± 0.19 MPa when the TCP content is increased from 30 to 70%, with no significant differences between coated and non-coated scaffolds. However, the collagen coating significantly improved the hydrophilicity, the swelling ratio, and tissue regeneration (Fig. 5.10). Shuai et al. [98] fabricated porous β-TCP/HA scaffolds with different weight ratios (0/100, 10/90, 30/70, 50/50, 70/30, and 100/0). Results showed that a weight ratio of 30:70 allowed the fabrication of scaffolds with the best mechanical properties, with a fracture toughness of 1.33 MPa·m^1/2 and a compressive strength of 18.35 MPa (Fig. 5.11).
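The processing parameters listed above (laser power or intensity, scan speed, scan spacing, and layer thickness) are often combined into a single volumetric energy-density figure when comparing sintering conditions. The Python sketch below shows this commonly used relation; it is general powder-bed fusion background rather than a parameter set taken from the cited studies, and the numbers in the example are hypothetical placeholders.

def volumetric_energy_density(power_w, scan_speed_mm_s, scan_spacing_mm, layer_thickness_mm):
    """Volumetric energy density in J/mm^3: E = P / (v * h * t)."""
    return power_w / (scan_speed_mm_s * scan_spacing_mm * layer_thickness_mm)

# Hypothetical example: 10 W laser, 1000 mm/s scan speed,
# 0.15 mm scan spacing, 0.10 mm powder layer thickness
print(round(volumetric_energy_density(10.0, 1000.0, 0.15, 0.10), 3))  # ~0.667 J/mm^3

Comparing scaffolds produced at similar energy densities helps separate the effect of material composition (e.g., TCP or HA content) from that of the sintering conditions themselves.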
Fig. 5.10 Histology images: (a, b) Masson’s trichrome staining of the PCL and PCL/TCP/COL groups, respectively; blue: collagen deposits due to bone formation; (c, d) H&E staining of PCL and PCL/TCP/COL, respectively; boxes indicate higher magnification areas; (e) Abundant granulation tissue (thick black arrow) with fibroblasts within the pores of the PCL scaffold; (f) Woven bone and vascular tissue formation (black circle) within the pores of the PCL scaffold; highly differentiated cells, such as osteoblasts, lining cells (thin black arrow), and osteoclasts (thick black arrow), were found within the new bone tissue. NB new bone formation, S scaffold [97]
Popov et al. [99] proposed the surface-selective laser sintering (SSLS) technique, which extends the range of polymers that can be used for scaffold fabrication. Unlike conventional powder-bed fusion processes, in which the polymer absorbs strongly at the laser wavelength, the SSLS process is based on melting particles that are transparent to the laser radiation, relying on the absorption of the laser beam
Fig. 5.11 (a) Ceramic porous scaffold, (b) Fracture toughness and compressive strength as a function of TCP/HAP ratios [98]
by a small amount (