Lecture Notes in Computer Science Commenced Publication in 1973 Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board David Hutchison Lancaster University, UK Takeo Kanade Carnegie Mellon University, Pittsburgh, PA, USA Josef Kittler University of Surrey, Guildford, UK Jon M. Kleinberg Cornell University, Ithaca, NY, USA Friedemann Mattern ETH Zurich, Switzerland John C. Mitchell Stanford University, CA, USA Moni Naor Weizmann Institute of Science, Rehovot, Israel Oscar Nierstrasz University of Bern, Switzerland C. Pandu Rangan Indian Institute of Technology, Madras, India Bernhard Steffen University of Dortmund, Germany Madhu Sudan Massachusetts Institute of Technology, MA, USA Demetri Terzopoulos University of California, Los Angeles, CA, USA Doug Tygar University of California, Berkeley, CA, USA Moshe Y. Vardi Rice University, Houston, TX, USA Gerhard Weikum Max-Planck Institute of Computer Science, Saarbruecken, Germany
4091
Guang-Zhong Yang Tianzi Jiang Dinggang Shen Lixu Gu Jie Yang (Eds.)
Medical Imaging and Augmented Reality
Third International Workshop
Shanghai, China, August 17-18, 2006
Proceedings
Volume Editors

Guang-Zhong Yang
Imperial College London, Institute of Biomedical Engineering
London SW7 2BZ, UK
E-mail: [email protected]

Tianzi Jiang
The Chinese Academy of Sciences, Institute of Automation
Beijing 100080, China
E-mail: [email protected]

Dinggang Shen
University of Pennsylvania School of Medicine, Department of Radiology
Philadelphia, PA 19104-2644, USA
E-mail: [email protected]

Lixu Gu
Shanghai Jiao Tong University, Department of Computer Science
Shanghai 200240, China
E-mail: [email protected]

Jie Yang
Shanghai Jiaotong University, Institute of Image Processing and Pattern Recognition
Shanghai 200240, China
E-mail: [email protected]

Library of Congress Control Number: 2006930593
CR Subject Classification (1998): I.5, I.4, I.3.5-8, I.2.9-10, J.3, I.6
LNCS Sublibrary: SL 6 – Image Processing, Computer Vision, Pattern Recognition, and Graphics
ISSN 0302-9743
ISBN-10 3-540-37220-2 Springer Berlin Heidelberg New York
ISBN-13 978-3-540-37220-2 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. Springer is a part of Springer Science+Business Media springer.com © Springer-Verlag Berlin Heidelberg 2006 Printed in Germany Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper SPIN: 11812715 06/3142 543210
Preface
The Third International Workshop on Medical Imaging and Augmented Reality, MIAR 2006, was held in Shanghai, China at the Regal International East Asia Hotel during August 17-18, 2006. The goal of MIAR 2006 was to bring together researchers in medical image computing and intervention to present the state-of-the-art developments in this ever-growing research area. The meeting consisted of a single track of oral/poster presentations, with each session led by an invited lecture from our distinguished international faculty members. For MIAR 2006, we received 87 full submissions, which were subsequently reviewed by up to 5 reviewers, resulting in the acceptance of 45 full papers included in this volume. For this workshop, we also included four papers from the invited speakers covering shape modeling, fMRI analysis, and study of cerebral connectivity and plasticity. Running such a workshop requires dedication, and we appreciate the commitment of the MIAR 2006 Programme Committee and reviewers who worked to a tight deadline in putting together this workshop. We would also like to thank members of the local Organizing Committee, who have been working so hard behind the scenes to make MIAR 2006 a great success. It was our great pleasure to welcome this year’s MIAR attendees to Shanghai, which is the largest base of Chinese industrial technology, an important seaport and China's largest commercial and financial center. Shanghai is a multicultural metropolis with both modern and traditional Chinese features and we trust that the attendees took the opportunity to explore many different aspects of the city in addition to attending the workshop. For those unable to attend, we hope this volume will act as a valuable reference to the MIAR disciplines, and we look forward to meeting you at future MIAR workshops.
August 2006
Guang-Zhong Yang, Tianzi Jiang, Dinggang Shen, Lixu Gu, Jie Yang
MIAR 2006 The Third International Workshop on Medical Imaging and Augmented Reality
Advisory Co-chairs
Yuqing Liu, Fuwai Hospital, China
Xiaowei Tang, Bio-X Laboratory, Zhejiang University, China

Program Committee Co-chairs
Guang-Zhong Yang, Imperial College, UK
Tianzi Jiang, NLPR, Institute of Automation, CAS, China

Local Organization Co-chairs
Lixu Gu, Shanghai Jiaotong University, China
Dinggang Shen, UPenn, USA
Jie Yang, Shanghai Jiaotong University, China

Program Committee Members
Nicholas Ayache, INRIA, France
Christian Barillot, IRISA, Rennes, France
Adrian Chung, Imperial College, UK
Albert Chung, Hong Kong UST, China
Christos Davatzikos, University of Pennsylvania, USA
Rachid Deriche, INRIA, France
James S. Duncan, Yale University, USA
Gary Egan, Howard Florey Institute, Australia
Gabor Fichtinger, Johns Hopkins University, USA
David Firmin, Imperial College, UK
Jia-Hong Gao, UTHSCSA, USA
Guido Gerig, UNC at Chapel Hill, USA
James Gee, University of Pennsylvania, USA
Nobuhiko Hata, Harvard University, USA
Yoko Hoshi, Tokyo Institute of Psychiatry, Japan
Karl Heinz Hoehne, University Hospital Eppendorf, Germany
Xiaoping Hu, Emory University, USA
Horace Ip, City University of Hong Kong, China
Kuncheng Li, Xuanwu Hospital, China
Hongen Liao, The University of Tokyo, Japan
Shuqian Luo, CUMS, China
Anthony Maeder, CSIRO, Australia
Dimitris Metaxas, Rutgers, USA
Xiaochuan Pan, University of Chicago, USA
Terry Peters, Robarts Research Institute, Canada
Jerry L. Prince, Johns Hopkins University, USA
Stephen Riederer, Mayo Clinic, USA
Chuck Stewart, Rensselaer Polytechnic Institute, USA
Paul Thompson, UCLA, USA
Max Viergever, University Medical Center Utrecht, The Netherlands
Yue Wang, Virginia Tech, USA
Stephen TC Wong, Harvard University, USA
Chenyang Xu, Siemens Research, USA
Xiaohong Zhou, University of Illinois at Chicago, USA

Student Paper Award Co-chairs
Mingyue Ding, Huazhong University, China
Yanxi Liu, Carnegie Mellon University, USA
Adrian Chung, Imperial College, UK
Table of Contents
Invited Contributions

Statistics of Pose and Shape in Multi-object Complexes Using Principal Geodesic Analysis . . . . . 1
M. Styner, K. Gorczowski, T. Fletcher, J.Y. Jeong, S.M. Pizer, G. Gerig

Geodesic Image Normalization and Temporal Parameterization in the Space of Diffeomorphisms . . . . . 9
B.B. Avants, C.L. Epstein, J.C. Gee

Connectivity Analysis of Human Functional MRI Data: From Linear to Nonlinear and Static to Dynamic . . . . . 17
G. Deshpande, S. LaConte, S. Peltier, X. Hu

Lessons from Brain Mapping in Surgery for Low-Grade Gliomas: Study of Cerebral Connectivity and Plasticity . . . . . 25
H. Duffau
Shape Modeling and Morphometry

Multi-scale Voxel-Based Morphometry Via Weighted Spherical Harmonic Representation . . . . . 36
M.K. Chung, L. Shen, K.M. Dalton, R.J. Davidson

An Embedding Framework for Myocardial Velocity Processing with MRI . . . . . 44
L. Cong, S.-L. Lee, A. Huntbatch, T. Jiang, G.-Z. Yang

A Multiscale Morphological Approach to Topology Correction of Cortical Surfaces . . . . . 52
K. Li, A.D. Malony, D.M. Tucker

Finding Deformable Shapes by Point Set Matching Through Nonparametric Belief Propagation . . . . . 60
X. Dong, G. Zheng

Robust and Accurate Reconstruction of Patient-Specific 3D Surface Models from Sparse Point Sets: A Sequential Three-Stage Trimmed Optimization Approach . . . . . 68
G. Zheng, X. Dong, L.-P. Nolte
Generalized n-D C^k B-Spline Scattered Data Approximation with Confidence Values . . . . . 76
N.J. Tustison, J.C. Gee

Improved Shape Modeling of Tubular Objects Using Cylindrical Parameterization . . . . . 84
T. Huysmans, J. Sijbers, F. Vanpoucke, B. Verdonk
Patient Specific Modeling and Quantification

Role of 3T High Field BOLD fMRI in Brain Cortical Mapping for Glioma Involving Eloquent Areas . . . . . 92
T. Jiang, Z. Li, S. Li, S. Li, Z. Zhang
Noninvasive Temperature Monitoring in a Wide Range Based on Textures of Ultrasound Images . . . . . 100
S. Zhang, W. Yang, R. Yang, B. Ye, L. Chen, W. Ma, Y. Chen

A Novel Liver Perfusion Analysis Based on Active Contours and Chamfer Matching . . . . . 108
G. Chen, L. Gu

Cerebral Vascular Tree Matching of 3D-RA Data Based on Tree Edit Distance . . . . . 116
W.H. Tang, A.C.S. Chung

Application of SVD-Based Metabolite Quantification Methods in Magnetic Resonance Spectroscopic Imaging . . . . . 124
M. Huang, S. Lu

An Inverse Recovery of Cardiac Electrical Propagation from Image Sequences . . . . . 132
H. Zhang, C.L. Wong, P. Shi
Surgical Simulation and Skills Assessment

Optical Mapping of the Frontal Cortex During a Surgical Knot-Tying Task, a Feasibility Study . . . . . 140
D. Leff, P.H. Koh, R. Aggarwal, J. Leong, F. Deligianni, C. Elwell, D.T. Delpy, A. Darzi, G.-Z. Yang

Tracking of Instruments in Minimally Invasive Surgery for Surgical Skill Analysis . . . . . 148
S. Speidel, M. Delles, C. Gutt, R. Dillmann

The Effect of Depth Perception on Visual-Motor Compensation in Minimal Invasive Surgery . . . . . 156
M. Nicolaou, L. Atallah, A. James, J. Leong, A. Darzi, G.-Z. Yang
Efficient and Accurate Collision Detection Based on Surgery Simulation . . . . . 164
K. Xie, J. Yang, Y.M. Zhu

Medical Simulation with Haptic and Graphic Feedback . . . . . 171
S.-Y. Kim
Surgical Guidance and Navigation

Towards a Hybrid Navigation Interface: Comparison of a Slice Based Navigation System with In-Situ Visualization . . . . . 179
J. Traub, P. Stefan, S.M. Heining, C. Riquarts, T. Sielhorst, E. Euler, N. Navab

Surgical Navigation of Integral Videography Image Overlay for Open MRI-Guided Glioma Surgery . . . . . 187
H. Liao, T. Inomata, I. Sakuma, T. Dohi

Automatic Pose Recovery of the Distal Locking Holes from Single Calibrated Fluoroscopic Image for Computer-Assisted Intramedullary Nailing of Femoral Shaft Fractures . . . . . 195
G. Zheng, X. Zhang, L.-P. Nolte

A Framework for Image-Guided Breast Surgery . . . . . 203
T.J. Carter, C. Tanner, W.R. Crum, N. Beechey-Newman, D.J. Hawkes

3D US Imaging System for the Guidance of Uterine Adenoma and Uterine Bleeding RF Ablation . . . . . 211
M. Ding, X. Luo, C. Cai, C. Zhou, A. Fenster
Image Registration

A General Learning Framework for Non-rigid Image Registration . . . . . 219
G. Wu, F. Qi, D. Shen

Learning-Based 2D/3D Rigid Registration Using Jensen-Shannon Divergence for Image-Guided Surgery . . . . . 228
R. Liao, C. Guetter, C. Xu, Y. Sun, A. Khamene, F. Sauer

Sparse Appearance Model Based Registration of 3D Ultrasound Images . . . . . 236
K.Y.E. Leung, M. van Stralen, G. van Burken, M.M. Voormolen, A. Nemes, F.J. ten Cate, N. de Jong, A.F.W. van der Steen, J.H.C. Reiber, J.G. Bosch

A Neighborhood Incorporated Method in Image Registration . . . . . 244
C. Yang, T. Jiang, J. Wang, L. Zheng
Robust Click-Point Linking for Longitudinal Follow-Up Studies . . . . . 252
K. Okada, X. Huang, X. Zhou, A. Krishnan

3D Gabor Wavelets for Evaluating Medical Image Registration Algorithms . . . . . 261
L. Shen, D. Auer, L. Bai

A Novel 3D Correspondence-Less Method for MRI and Paxinos-Watson Atlas of Rat Brain Registration . . . . . 269
C. Cai, M. Ding, H. Lei, J. Cao, A. Liu

Multi-modality Image Registration Using Gradient Vector Flow Intensity . . . . . 277
Y. Guo, C.-H. Lo, C.-C. Lu

Multi-stage Registration for Quantification of Lung Perfusion in Chest CT Images . . . . . 285
H. Hong, J. Lee
PET Image Reconstruction

List-Mode Affine Rebinning for Respiratory Motion Correction in PET Cardiac Imaging . . . . . 293
A.J. Chung, P.G. Camici, G.-Z. Yang

Simultaneous Estimation of PET Attenuation and Activity Images with Divided Difference Filters . . . . . 301
H. Liu, Y. Tian, P. Shi

Convergent Bayesian Reconstruction for PET Using New MRF Quadratic Membrane-Plate Hybrid Multi-order Prior . . . . . 309
Y. Chen, W. Chen, Y. Feng, Q. Feng
Image Segmentation

Automatic Segmentation of the Aortic Dissection Membrane from 3D CTA Images . . . . . 317
T. Kovács, P. Cattin, H. Alkadhi, S. Wildermuth, G. Székely

Inferring Vascular Structures in Coronary Artery X-Ray Angiograms Based on Multi-Feature Fuzzy Recognition Algorithm . . . . . 325
S. Zhou, W. Chen, J. Zhang, Y. Wang

Hierarchical 3D Shape Model for Segmentation of 4D MR Cardiac Images . . . . . 333
Y. Shang, G. Su, O. Dössel
An Improved Statistical Approach for Cerebrovascular Tree Extraction . . . . . 341
J.T. Hao, M.L. Li, F.L. Tang

Segmentation of 3-D MRI Brain Images Using Information Propagation . . . . . 348
J. Wang, J. Kong, Y. Lu, J. Zhang, B. Zhang

Pulsative Flow Segmentation in MRA Image Series by AR Modeling and EM Algorithm . . . . . 356
A. Gooya, H. Liao, K. Matsumiya, K. Masamune, T. Dohi

Abdominal Organ Identification Based on Atlas Registration and Its Application in Fuzzy Connectedness Segmentation . . . . . 364
Y. Zhou, J. Bai

An Improved 2D Colonic Polyp Segmentation Framework Based on Gradient Vector Flow Deformable Model . . . . . 372
D. Chen, M.S. Hassouna, A.A. Farag, R. Falk

Segmentation for Medical Image Using a Statistical Initial Process and a Level Set Method . . . . . 380
W.H. Cho, S.C. Park, M.E. Lee, S.Y. Park

Leukocyte Detection Using Nucleus Contour Propagation . . . . . 389
D.M. Ushizima, R.T. Calado, E.G. Rizzatti

Author Index . . . . . 397
Statistics of Pose and Shape in Multi-object Complexes Using Principal Geodesic Analysis

Martin Styner(1,2), Kevin Gorczowski(1), Tom Fletcher(1), Ja Yeon Jeong(1), Stephen M. Pizer(1), and Guido Gerig(1,2)

(1) Department of Computer Science, (2) Department of Psychiatry, The University of North Carolina, Chapel Hill, NC 27599
Abstract. A main focus of statistical shape analysis is the description of variability of a population of geometric objects. In this paper, we present work in progress towards modeling the shape and pose variability of sets of multiple objects. Principal geodesic analysis (PGA) is the extension of the standard technique of principal component analysis (PCA) into the nonlinear Riemannian symmetric space of pose and our medial m-rep shape description, a space in which use of PCA would be incorrect. In this paper, we discuss the decoupling of pose and shape in multi-object sets using different normalization settings. Further, we introduce new methods of describing the statistics of object pose using a novel extension of PGA, which previously has been used for global shape statistics. These new pose statistics are then combined with shape statistics to form a more complete description of multi-object complexes. We demonstrate our methods in an application to a longitudinal pediatric autism study with object sets of 10 subcortical structures in a population of 20 subjects. The results show that global scale accounts for most of the major mode of variation across time. Furthermore, the PGA components and the corresponding distribution of different subject groups vary significantly depending on the choice of normalization, which illustrates the importance of global and local pose alignment in multi-object shape analysis.
1 Introduction

Statistical shape modeling and analysis [1,2] is emerging as an important tool for understanding anatomical structures from medical images. Statistical shape modeling's goal is to construct a compact and stable description of the mean and variability of a population. Principal Component Analysis (PCA) is probably the most widely used procedure for generating shape models of variability. These models can provide understanding for processes of growth and disease observed in neuroimaging [3]. Clinical applications favor a statistical shape modeling of multi-object sets rather than one of single structures outside of their multi-object context. Neuroimaging studies of mental illness and neurological disease, for example, are interested in describing group differences and changes due to neurodevelopment or neurodegeneration. These processes most likely affect multiple structures rather than a single one. A description of the change of the object set might help to explain underlying neurobiological processes
affecting brain circuits. Whereas Tsai et al. [4] and Yang et al. [5] describe statistical object modeling by level-sets, we propose explicit deformable shape modeling with a sampled medial mesh representation called m-rep, introduced by Pizer et al. [6]. Deformable shape models represent the underlying geometry of the anatomy and then use a statistical analysis to describe the variability of that geometry. Several different geometric representations other than m-reps have been used to model anatomy, such as landmarks [7], dense collections of boundary points [8], or spherical harmonic decompositions [9]. Another shape variability approach focuses on the analysis of deformation maps [10,3,11,12].

A fundamental difficulty in statistical shape modeling is the high dimensionality of the set of features with a relatively small sample size, typically in the range of 20 to 50 in neuroimaging studies. This problem is even more evident for modeling sets of multiple objects. The analysis of transformation fields has to cope with the high dimensionality of the transformation, which renders the computation of the PCA basically unstable with respect to the training set. Adding or removing a single subject from the training set results in strikingly different principal components. In most shape modeling approaches, the underlying geometry is parameterized as a Euclidean feature space. As Davis and Joshi [13] note, the space of diffeomorphisms is a curvilinear space. Treating it as a Euclidean space is a linear simplification of a higher-dimensional, curvilinear problem. For medial descriptions, as well as for descriptions of pose, the feature space clearly contains elements of a non-Euclidean vector space. These features need to be parameterized in a nonlinear Riemannian symmetric space. We will use curved statistics for these parameters, modeling major modes of deformation via principal geodesic analysis (PGA) [14], a nonlinear extension of PCA.

This paper summarizes work in progress towards an efficient and compact representation and statistical analysis of sets of objects. We choose the sampled medial m-rep representation and a statistical framework based on Riemannian metrics. The driving application is a longitudinal pediatric neuroimaging study.
2 Methodology

This research is driven by the challenge to describe the shape statistics of a set of 3-D objects. Whereas analysis of single shapes is well advanced and has been described extensively using a variety of shape parametrization techniques, the research community does not yet have access to tools for statistical modeling and analysis of sets of objects.

Estimating Variability of Multi-object Sets: In linear space, variability of parameterized objects can be described by principal component analysis (PCA) of spherical harmonics [9] or point distribution models (PDM) [8]. However, a linear PCA cannot describe object rotations and the modeling cannot be extended to model points and normals. Non-linear modeling is achieved by principal geodesic analysis (PGA), developed by Fletcher et al. [14]. PGA extends linear PCA into nonlinear space using "curved statistics" and is a natural generalization of PCA for data that are parameterized as curved manifolds. To recall, the intrinsic mean of a collection of points $x_1, \ldots, x_N$ on a Riemannian manifold $M$ is the Fréchet mean $\mu = \arg\min_{x} \sum_{i=1}^{N} d(x, x_i)^2$, where $d(\cdot, \cdot)$ denotes Riemannian distance on $M$. Whereas PCA in $\mathbb{R}^3$ generates linear subspaces that maximize the variance of projected data, geodesic manifolds are images of linear subspaces under the exponential map and are defined as the manifolds that maximize projected variance. Principal geodesics can be found by a recursive gradient descent or with an approximation by the log map and a linear PCA in the tangent space of the map (please see [15] for details). An important fact is that PGA can be used with parametrization schemes that include point locations, scale, and angle parameters.
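To make the tangent-space approximation concrete, the following sketch (our editorial illustration, not the authors' implementation) computes a Fréchet mean on the unit sphere S^2 by gradient descent on the log map and then performs linear PCA in the tangent space at the mean; the same pattern extends componentwise to the full medial atom space.

    import numpy as np

    def sphere_log(p, q):
        """Log map on S^2: tangent vector at p pointing toward q, of length d(p, q)."""
        c = np.clip(np.dot(p, q), -1.0, 1.0)
        theta = np.arccos(c)                       # geodesic (arc) distance
        if theta < 1e-12:
            return np.zeros(3)
        u = q - c * p                              # component of q orthogonal to p
        return theta * u / np.linalg.norm(u)

    def sphere_exp(p, v):
        """Exp map on S^2: follow the geodesic from p in tangent direction v."""
        t = np.linalg.norm(v)
        return p if t < 1e-12 else np.cos(t) * p + np.sin(t) * v / t

    def frechet_mean(points, iters=100, step=1.0):
        """Gradient descent for the intrinsic mean mu = argmin sum_i d(x, x_i)^2."""
        mu = points[0]
        for _ in range(iters):
            grad = np.mean([sphere_log(mu, q) for q in points], axis=0)
            mu = sphere_exp(mu, step * grad)
        return mu

    def pga(points):
        """Approximate PGA: linear PCA of log-mapped data in the tangent space at mu.
        Returns the mean, principal directions, eigenvalues, and per-subject weights."""
        mu = frechet_mean(points)
        X = np.array([sphere_log(mu, q) for q in points])   # N x 3 tangent vectors
        _, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
        return mu, Vt, s ** 2 / len(points), X @ Vt.T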
Fig. 1. Visualization of the 10 selected deep brain structures. Left: Binary voxel objects (top) and implied solid surface of medial atoms (bottom). Right (with lateral ventricles): Left side view (top row), Top view (bottom row). The inter-subject shape differences clearly are larger than the longitudinal differences, which seem quite small.
Object Representation by a Mesh of Medial Samples: Medial representations represent an alternative to parametrization of 3-D objects via surfaces. Changes in terms of local translation, bending and widening can be more naturally expressed by medial, rather than surface, representations. Pizer et al. [6,16] developed an object representation by a mesh of medial atoms. The shape and structure of the skeletal sheet of atoms, as well as a local width function, define the object. Each atom is characterized as a tuple with position, radius, and the normal vectors to the boundary: $m = \{x, r, n_0, n_1\} \in \mathcal{M}$, with $\mathcal{M} = \mathbb{R}^3 \times \mathbb{R}^+ \times S^2 \times S^2$. The object surface can be interpolated from endpoints of the sets of medial atoms, but this representation also allows a continuous interpolation of the whole object interior and a rim exterior to the object boundary. Since the parameter vector of medial atoms includes position, length and angle (between normals), mean and variability of a population of object shapes are calculated via the Fréchet mean and PGA framework as discussed before.

Anatomical structures of interest, including left and right hippocampus, amygdala, putamen, caudate, and globus pallidus, have been segmented by well-trained experts (segmentation protocols: http://www.psychiatry.unc.edu/autismresearch/mri/roiprotocols.htm) using semi-automated procedures. The segmented objects are represented as binary voxel objects (see Fig. 1). We constructed sampled medial models from populations of objects, using the modeling scheme developed by Styner et al. [17] to determine the minimum sampling of each medial mesh model. The m-rep models are deformed to optimally fit the original segmentations of each anatomical object [16]. This process is applied individually to each of the 10 anatomical objects in each of the 20 image datasets. Correspondence across datasets is established by deformable m-rep modeling.

Normalization and Statistics of Pose: The normalization of the m-rep pose is based on a procedure similar to Procrustes analysis. In the m-rep pose normalization, the sum-of-squared geodesic distances, instead of Euclidean distances, between corresponding
medial atoms is minimized, as described in [14]. In this paper we discuss two types of pose normalization in the context of multiple object sets. The first one applies the above procedure to all objects jointly. We call this the global pose normalization of the objects. After this global pose normalization, the individual objects will likely have residual pose variation relative to the global pose (see Fig. 2a,b). Thus, we also perform an object-specific normalization called local pose normalization. The resulting global-to-local pose change parameters are the features we use to create new statistics of the object pose with a novel PGA extension. As with atoms, the PGA is approximated by mapping the pose parameters to the tangent space and running linear PCA. The mapping is accomplished by taking the log of rotations and scales. The log map of a unit-length quaternion $q = (w, v)$ is defined as $\log(q) = \frac{\theta}{\sin(\theta/2)} \cdot v$, where $\theta = 2\arccos(w)$. The extension of PGA beyond describing statistics of medial atoms into pose changes allows us to analyze pose and shape simultaneously. This is done by concatenating the pose parameters to the feature vectors of the atoms. Due to their greater magnitude, the pose parameters will tend to dominate the PGA. Therefore, a prewhitening is done to make each feature have standard deviation 1.0 across all samples in the tangent space.
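A minimal sketch of this rotation log map, assuming unit quaternions stored as (w, x, y, z) (our illustration, not the authors' code):

    import numpy as np

    def quat_log(q):
        """Log map of a unit quaternion q = (w, v): returns (theta / sin(theta/2)) * v,
        a rotation vector of length theta, where theta = 2 * arccos(w)."""
        w, v = np.clip(q[0], -1.0, 1.0), np.asarray(q[1:], dtype=float)
        theta = 2.0 * np.arccos(w)
        s = np.sin(theta / 2.0)
        return np.zeros(3) if s < 1e-12 else (theta / s) * v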
3 Results

Motivation and Clinical Data: The driving clinical problem of this research is the need for a joint analysis of the set of subcortical brain structures, over and above that of individual structures. The image data used in this paper is taken from an ongoing clinical longitudinal pediatric autism study. This study includes autistic subjects (AUT) and typically developing, healthy controls (TYP) with baseline at age 2 and follow-up at age 4. Through this longitudinal design, we can study growth (see Fig. 1), cross-sectional differences between groups and even group growth patterns. For the preliminary analysis shown here, we have selected 5 subjects each from the TYP and AUT groups. For eight of these subjects, we had longitudinal data with successful scans at ages 2 and 4. Our main goal is to systematically study the effect of different pose normalization settings on the analysis of longitudinal and cross-sectional shape changes.

Principal Geodesic Analysis: PGA performs a compression of the multi-object shape variability to a small set of major eigenmodes of deformation. We assume that the first few modes describe most of the shape variability, the rest representing mostly noise. The quality of this compression can be evaluated with the criteria of compactness, sensitivity and specificity as discussed in [17]. As a preliminary test, we followed the standard procedure of projecting the multi-object sets into the shape space of the eigenmodes λi. This leads to a set of weights in the shape space that describe the deviation of individual shapes from the mean. In our case, each weight vector represents a multi-object shape set. We applied PGA to the whole set of objects for the four subject groups, which ensures projection into the same geometric domain for all subjects.

Shape: Differences in Normalization: First, the sets of objects are aligned purely by a global process, including global translation, rotation and scaling (see Fig. 2b). The resulting PGA captures variability in shape and in residual local pose.
Fig. 2. Normalization examples: a) Global rotation and translation (R/T) normalization; b) global R/T/S; c) global R/T/S plus local R/T normalization; d) global R/T plus local R/T/S
Fig. 3. Top row: λ1 vs λ2 plot in pure global normalization settings. a) Global rotation and translation normalization; b) Global R/T/S normalization. The arrows indicate corresponding subject pairs. The range of (a) is larger than (b). Bottom row: c) M-reps deformed to -3 standard deviations of the first eigenmode of PGA with global R/T; d) +3 standard deviations; e) Euclidean distance map between mean of global R/T aligned m-reps at age 2 and age 4. The colormap ranges from red (Age 2 > Age 4) over green (Age 2 = Age 4) to blue (Age 2 < Age 4).
In a second step, we varied the global normalization procedure by disabling scaling normalization and again applied PGA. All objects were thus left in their original size and only rotation and translation were normalized (see Fig. 2a). In order to compare the differences in the two global PGA’s (with and without scaling normalization), we plotted the values of the first two major eigenmodes (λ1 , λ2 ) of deformation (see Fig. 3a,b). The arrows connect corresponding longitudinal pairs, which allows qualitative evaluation of the correlation between PGA modes and longitudinal changes. The λ1 axis in the PGA without global scaling normalization seems to characterize mainly differences between age 2 and 4 (Fig. 3a), as indicated by the parallel alignment of the connecting arrows to the λ1 axis. After scaling normalization (Fig. 3b) no coherent alignment of the arrows is visible. This suggests that the main effect of longitudinal change is reflected in the scaling normalization and thus the overall size of the object
sets. Also, corresponding longitudinal pairs cluster quite well in the plot including scaling normalization, which supports our hypothesis that shape changes due to growth are considerably smaller than shape differences between subjects (see Fig. 1). As seen in Fig. 2, the individual objects still have relative pose differences after global pose normalization. Therefore, we proceeded by calculating the PGA shape space of objects after a global, then local pose normalization. The top row of Fig. 4 shows the λ1 vs. λ2 plot in the global plus local normalization settings. The left plot shows global normalization followed by local normalization including scaling. It is noteworthy that the use of scaling in the global normalization is irrelevant, as the local scaling operation supersedes the global one. This pose normalization setting is similar to the one commonly used in shape analysis of single objects with full Procrustes alignment. The top right plot of Fig. 4 shows the objects with global scaling normalization but no local scaling normalization, which is similar to another common technique of scaling normalization with brain volume.
Fig. 4. λ1 vs λ2 plots of different PGA’s. Top row: The left plot shows global rotation and translation normalization followed by local rotation, translation and scaling. The right plot has global scaling normalization, but no local scaling normalization. Bottom row: The left plot includes only the local pose change parameters. The right uses both medial atoms and pose parameters.
The high degree of difference between the PGA values in the two different local scaling settings can be due to two different factors: a) inter-subject variability of the residual scaling factors after global normalization or b) instability in the computation of the PGA directions. We are currently working on evaluating the stability of the PGA, but our earlier studies indicate that this cannot be the sole reason for the discrepancy. Also, the arrangement of the groups looks quite different in the two plots. In neither plot does there seem to be a clustering according to age, but clustering according to diagnosis is
different. Although it is premature to draw any conclusions due to the small sample size and limited knowledge about the PGA stability, the preliminary results demonstrate the potential of multi-object shape analysis.

Pose and Shape: Local Pose Change Parameters: The purpose of the normalization discussed previously is to decouple pose and shape differences in order to study only shape in the PGA. In a multi-object setting, however, the pose changes may be of interest. Therefore, we ran PGA on the local translation, rotation, and scale pose parameters as seen in Fig. 4, lower left. These represent the local residual pose changes after a global translation and rotation alignment. An advantage of this PGA calculation is that it suffers less from the high dimensionality, low sample size problem of the PGA on the medial atoms. Here, each subject is represented by a feature vector of length 70 (10 objects, each with translation, rotation, scale) as opposed to about 1900 for the medial atoms (210 atoms, each with 9 parameters). By analyzing only pose, we see a somewhat similar separation according to diagnosis as in the upper right of Fig. 4. Having analyzed pose and shape separately, our final step was to run PGA on both simultaneously. The lower right plot of Fig. 4 shows that the prewhitening discussed earlier has a large effect on the eigenmodes as compared with the upper left. A prewhitened PGA on only the medial atoms is not altered significantly by including the pose parameters, meaning the prewhitening itself accounts for the changes between the two plots and that including the pose does not destabilize the computation.
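For illustration, a sketch of how such a 70-dimensional pose feature vector could be assembled and prewhitened (names and storage conventions are our assumptions; quat_log is the sketch given earlier):

    import numpy as np

    def pose_features(translations, quaternions, scales):
        """One subject's pose features for 10 objects: per object, 3 translation
        components, 3 rotation components (quaternion log), and 1 log-scale,
        i.e. 7 parameters per object and 70 in total."""
        feats = []
        for t, q, s in zip(translations, quaternions, scales):
            feats.extend(t)               # translations already live in a linear space
            feats.extend(quat_log(q))     # rotations mapped to the tangent space
            feats.append(np.log(s))       # scales mapped by the scalar log
        return np.asarray(feats)

    def prewhiten(X):
        """Scale every tangent-space feature to standard deviation 1.0 so that
        large-magnitude pose parameters do not dominate the PGA (X: subjects x features)."""
        sd = X.std(axis=0)
        sd[sd < 1e-12] = 1.0
        return (X - X.mean(axis=0)) / sd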
4 Discussion

We have discussed work in progress towards extending statistical analysis of anatomical shape from single structures to multi-object sets. Key issues addressed are the extraction of a small set of key features representing both the shape and pose of object sets and calculation of mean and variability via Riemannian metrics. The current results suggest that after removing global scale, longitudinal data of the same subject cluster closely in the PGA space and thus that longitudinal shape change is smaller than shape variability across subjects, a driving hypothesis shown in Fig. 1. Also, projections of subject groups into PGA components vary significantly depending on choice of normalization. Several open issues remain and need to be addressed by our group and the international research community. In regard to the m-rep object parametrization used here, we still need to demonstrate the quality and stability of correspondence, as well as the robustness, sensitivity and specificity of PGA-based compression of features. Although our results indicate a possible separation between groups, it is important to note that PGA, similar to PCA, selects a subspace based on maximum common variability, not maximum separation. An extension of independent component analysis (ICA) to curved space or supervised training of a subspace of maximum separation will be explored in our future research. Applications in neuroimaging further require hypothesis testing schemes that combine shape and pose features with clinical variables, and that have to properly address the problems of nonlinear modeling and multiple comparison testing. Encouraging progress is shown by recent work of Terriberry et al. [18].
Acknowledgements

This research is supported by the NIH NIBIB grant P01 EB002779, the NIH Conte Center MH064065, and the UNC Neurodevelopmental Research Core NDRC, subcore Neuroimaging. The MRI images of infants, caudate images and expert manual segmentations are funded by NIH RO1 MH61696 and NIMH MH64580.
References

1. Dryden, I., Mardia, K.: Multivariate shape analysis. Sankhya 55 (1993) 460-480
2. Small, C.G.: The statistical theory of shape. Springer (1996)
3. Csernansky, J., Joshi, S., Wang, L., Haller, J., Gado, M., Miller, J., Grenander, U., Miller, M.: Hippocampal morphometry in schizophrenia via high dimensional brain mapping. Proc. Natl. Acad. Sci. USA 95 (1998) 11406-11411
4. Tsai, A., Yezzi, A., Wells, W., Tempany, C., Tucker, D., Fan, A., Grimson, E., Willsky, A.: Shape-based approach to curve evolution for segmentation of medical imagery. IEEE Transactions on Medical Imaging 22(2) (2003) 137-154
5. Yang, J., Staib, L.H., Duncan, J.S.: Neighbor-constrained segmentation with level set based 3d deformable models. IEEE Transactions on Medical Imaging 23(8) (2004) 940-948
6. Pizer, S., Fritsch, D., Yushkevich, P., Johnson, V., Chaney, E.: Segmentation, registration, and measurement of shape variation via image object shape. IEEE Trans. Med. Imaging 18 (1999) 851-865
7. Bookstein, F.: Shape and the information in medical images: A decade of the morphometric synthesis. In: MMBIA (1996)
8. Cootes, T.F., Taylor, C.J., Cooper, D.H., Graham, J.: Active shape models - their training and application. Computer Vision and Image Understanding (1995) 38-59
9. Kelemen, A., Székely, G., Gerig, G.: Elastic model-based segmentation of 3d neuroradiological data sets. IEEE Trans. Med. Imaging 18 (1999) 828-839
10. Davatzikos, C., Vaillant, M., Resnick, S., Prince, J., Letovsky, S., Bryan, R.: A computerized method for morphological analysis of the corpus callosum. J. of Comp. Assisted Tomography 20 (1996) 88-97
11. Thompson, P., Mega, M., Toga, A.: Disease-Specific Brain Atlases. In: Brain Mapping: The Disorders. Academic Press (2000)
12. Thompson, P., Giedd, J., Woods, R., MacDonald, D., Evans, A., Toga, A.: Growth patterns in the developing brain detected by using continuum mechanical tensor maps. Nature 404 (2000) 190-193
13. Davis, B., Lorenzen, P., Joshi, S.: Large deformation minimum mean squared error template estimation for computational anatomy. In: ISBI (2004) 173-176
14. Fletcher, P., Lu, C., Pizer, S., Joshi, S.: Principal geodesic analysis for the study of nonlinear statistics of shape. IEEE Transactions on Medical Imaging 23 (2004) 995-1005
15. Fletcher, P.: Statistical Variability in Nonlinear Spaces: Application to Shape Analysis and DT-MRI. PhD thesis, The University of North Carolina at Chapel Hill (2004)
16. Pizer, S., Fletcher, T., Fridman, Y., Fritsch, D., Gash, A., Glotzer, J., Joshi, S., Thall, A., Tracton, G., Yushkevich, P., Chaney, E.: Deformable m-reps for 3d medical image segmentation. International Journal of Computer Vision IJCV 55(2) (2003) 85-106
17. Styner, M., Gerig, G., Lieberman, J., Jones, D., Weinberger, D.: Statistical shape analysis of neuroanatomical structures based on medial models. Medical Image Analysis (MEDIA) 7(3) (2003) 207-220
18. Terriberry, T., Joshi, S., Gerig, G.: Hypothesis testing with nonlinear shape models. In Christensen, G.E., Sonka, M., eds.: Information Processing in Medical Imaging (IPMI). Number 3565 in Lecture Notes in Computer Science LNCS, Springer Verlag (2005)
Geodesic Image Normalization and Temporal Parameterization in the Space of Diffeomorphisms

Brian B. Avants, C.L. Epstein, and J.C. Gee

Depts. of Radiology and Mathematics, University of Pennsylvania, Philadelphia, PA 19101
Abstract. Medical image analysis based on diffeomorphisms (differentiable one to one and onto maps with differentiable inverse) has placed computational analysis of anatomy and physiology on firm theoretical ground. We detail our approach to diffeomorphic computational anatomy while highlighting both theoretical and practical benefits. We first introduce the metric used to locate geodesics in the diffeomorphic space. Second, we give a variational energy that parameterizes the image normalization problem in terms of a geodesic diffeomorphism, enabling a fundamentally symmetric solution. This approach to normalization is extended for optimal template population studies using general imaging data. Finally, we show how the temporal parameterization and large deformation capabilities of diffeomorphisms make them appropriate for longitudinal analysis, particularly of neurodegenerative data.
1 Introduction

Computational anatomy (CA) uses imaging to make quantitative measurements of the natural world. One may view CA as the science of biological shape and its variation, with roots in the work of Charles Darwin and D'Arcy Thompson [1,2]. The wide availability of high resolution in vivo functional and structural imaging has caused a rapid increase in CA's relevance and prominence. Currently, this developing science's primary tools are the topology preserving diffeomorphic transformations. These transformations are used to map an individual image J into the space of a template image, Ī, which serves as a common coordinate system. When a population of images is mapped together by these transformations, each voxel in individual space corresponds smoothly with a single voxel in the reference space. This process creates a continuous spatial map of population information detailing, for example, the relative volume, functional activation or diffusion at a given anatomical position, such as the anterior hippocampus or occipital lobe gray matter. Topology preserving transformations (TPT) are of special interest for this technology as they will introduce neither folds nor tears in this map and will preserve the continuity of curves and surfaces. An object that is whole and continuous will remain so after TPT and neighboring objects (or image coordinates) will
remain neighbors. Topology preserving normalization permits comparisons to be made across time points in an individual's disease process or to study development patterns across a large population.

Topology preserving normalization techniques have been successfully applied to quantify both functional and anatomical variables. Surface and curve topology preserving maps are often used to study cortical activation [3] as well as gray matter thickness between individuals [4]. Image-based maps operate directly in the image's domain and are often used to study individual structures [5], to make volumetric measurements [6] or to normalize functional data [7]. Miller et al. showed that Large Deformation Diffeomorphic Metric Matching (LDDMM) is able to localize hippocampal activation to provide increased statistical significance in functional imaging studies [8]. Avants et al. used a Lagrangian diffeomorphic normalization technique to map and statistically differentiate functionally homologous structures between species [9]. Furthermore, large deformation mappings are better able to separate structural and signal (intensity) differences in population studies, particularly in the presence of atrophy or significant shape variation. Diffeomorphic methods may also be extended to normalize vector or tensor images [10].

Many of CA's most important developments were enabled by V. Arnol'd's work on diffeomorphic theory [11] and its extension [12]. Miller, Younes, Trouve [13] and Grenander [14] have been essential in transferring this technology to medical imaging and vision. An important consequence of Arnol'd's work is that one may search for geodesics in the diffeomorphic space as an alternative to directly solving problems of fluid mechanics. Furthermore, one gains the ability to work in a deformation space that is much larger than previously considered. The space of diffeomorphisms, by definition, encompasses any elastic or viscoelastic transformation that is diffeomorphic. At the same time, it also enables much larger deformations than captured by elastic transformations and a broader range of vector field regularizers than traditionally used by fluid models. Finally, one gains a rigorous measure of deformation that is naturally temporal, is symmetric and also satisfies the triangle inequality.

Our approach to image normalization is based upon Arnol'd's definition of a symmetric, time-parameterized shortest path (geodesic) between two diffeomorphic configurations of a domain. This view is fundamental to the theoretical foundation of diffeomorphic normalization, essential for computing true metric distances and desirable for its symmetry properties. It enables us to guarantee, in an algorithm, the symmetry implied by a geodesic connection between images, even when similarity terms such as mutual information are used [15]. This yields a new algorithm, geodesic normalization (GN), that parameterizes the deformation between an image pair with respect to both ends of a geodesic path.
2 Diffeomorphisms and Image Normalization
We now discuss some basic facts from the theory of diffeomorphisms, the mathematical underpinnings of GN. Recall that a diffeomorphism is a smooth one-to-one and onto map with a smooth inverse. Families of diffeomorphisms, here
denoted φ(x, t), can be generated by integrating time-dependent velocity (vector) fields through an ordinary differential equation [16]:

$$\frac{d\phi(x, t)}{dt} = v(\phi(x, t), t). \quad (1)$$
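As an illustration of how a map can be generated from equation (1), a schematic forward-Euler integrator for a 2-D grid follows; the velocity sampler v(points, t) stands in for whatever regularized field an optimizer supplies and is an assumption of this sketch, not part of the original method's code.

    import numpy as np

    def integrate_flow(v, shape, n_steps=20):
        """Forward-Euler integration of dphi/dt = v(phi, t) with phi(0) = Id on a 2-D grid.
        v(points, t) must return an (N, 2) array of velocities at the given (x, y) points."""
        ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
        phi = np.stack([xs, ys], axis=-1).reshape(-1, 2).astype(float)  # identity map
        dt = 1.0 / n_steps
        for k in range(n_steps):
            phi = phi + dt * v(phi, k * dt)      # advect each particle along the field
        return phi.reshape(shape[0], shape[1], 2)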
This differential equation defines the change of the map, φ, by the value of a velocity field which is a smooth vector field. Velocities tell us how particles are moving through space: material velocity, V, assigns motion to a specific material point while v defines motion in a fixed coordinate system. The v in equation 1 is a spatial velocity in the Eulerian frame. Spatial velocity is computed at the tangent space to the diffeomorphism at time, t, such that v(φ(x, t), t) = V(x, t). We use the group G0 of diffeomorphisms on a bounded domain, Ω, where the map is the identity at the boundaries. The length of a diffeomorphic path between elements in this space is similar to the length of a curve, C, connecting two points in Euclidean space, $l(C) = \int_0^1 \|dC/dt\|\, dt$, where the Euclidean length of the curve's tangent vector is integrated over its parameterization. Distances in the space of diffeomorphisms are infinite-dimensional analogies of curve length, where the infinitesimal increment in distance is given by a Sobolev norm, $\|\cdot\|$, operating on the tangent to the diffeomorphism (the spatial velocity) [17]. A geodesic between ψ1 and ψ2, two elements of G0, is defined by taking the infimum over all such paths [13]:

$$D(\phi(0), \phi(1)) = \inf_{\phi} \int_0^1 \|v(\phi(x, t))\|_L\, dt, \quad (2)$$

with φ(0) = ψ1 and φ(1) = ψ2, where $\|\cdot\|_L$ is the Sobolev norm with respect to the linear operator, L. Taking the infimum guarantees that we have a geodesic between the elements in G0. The length of the geodesic gives a metric distance, does not depend on the origin of its measurement (it has right invariance) and is the basis for GN as well as Miller, Trouve and Younes's work [13]. A geodesic in the space of diffeomorphisms thus defines the shortest route between two diffeomorphic transformations.

Fig. 1. An illustration of the geodesic path taken, in time, by the particle at position x in domain Ω. The path is geodesic if the diffeomorphism associated with the domain minimizes the distance metric in equation 2. The geodesic, φ, is the whole path. Its points are traversed via φ1 and φ2.
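A discrete version of the path length in equation (2) can be evaluated directly from a time-sequence of velocity fields. The sketch below is our illustration under the common, but here assumed, choice L = id - αΔ, applied per component in the Fourier domain:

    import numpy as np

    def path_length(velocities, alpha=1.0):
        """Approximate D = integral ||v(t)||_L dt for 2-D velocity fields of shape
        (T, H, W, 2), realizing the Sobolev norm as the L2 norm of L v with
        L = id - alpha * Laplacian (a modeling assumption of this sketch)."""
        T, H, W, _ = velocities.shape
        ky = 2 * np.pi * np.fft.fftfreq(H)
        kx = 2 * np.pi * np.fft.fftfreq(W)
        symbol = 1.0 + alpha * (ky[:, None] ** 2 + kx[None, :] ** 2)  # Fourier symbol of L
        dt = 1.0 / T
        total = 0.0
        for v in velocities:
            Lv = np.stack([np.fft.ifft2(symbol * np.fft.fft2(v[..., c])).real
                           for c in range(2)], axis=-1)
            total += np.sqrt(np.sum(Lv ** 2) / (H * W)) * dt
        return total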
Each transformation defines a single, unique configuration of the coordinate system. The length of the path itself is (trivially) symmetric, that is, D(φ(0), φ(1)) = D(φ(1), φ(0)), and satisfies the three metric properties. Furthermore, for all time t ∈ [0, 1], we have φ2⁻¹(φ1(x, t), 1 − t) = φ1(x, 1) = z. This path is shown in figure 1. Rearranging this equation, we gain intermediate points along the geodesic from φ2(z, 1 − t) = φ2(φ1(x, 1), 1 − t) = φ1(x, t). In this way, we see that points along the geodesic are parameterized equivalently from coordinates at either endpoint. We will now introduce this coordinate system invariant parameterization into our normalization technology.

The goal of image registration, in general, is to find, for each x in I, the z in J that gives I(x) = J(z) or, alternatively, f(J(z)) where f is an intensity-space transformation. If f is the identity, then the intensity at x in I should be equivalent to the intensity from coordinate z in J. The mapping from x to z may be written φ(x) = z, from image I to image J, such that points in I are in one-to-one correspondence with points in J. When such maps are diffeomorphisms, we include a time parameter, t, that indexes the temporal evolution of φ. We apply a diffeomorphism to image I in order to change its coordinate system. This warping is denoted φ(x, 1)I or, with x implied, φ(1)I.

Let us now consider the case when we are given two images, I and J, of the same class, known to be (approximately) diffeomorphic. Here, we know neither the path in time nor which image should be considered as the template or reference image. To make this problem well-posed, we seek the shortest diffeomorphism between these images such that φ1(t)I = φ2(1 − t)J, where the geodesic is parameterized as above for any t ∈ [0, 1]. Then our similarity term is |φ1(t)I − φ2(1 − t)J|² and we include the metric distance term as regularizer to yield the geodesic normalization variational energy:

$$E_{GN}(I, J) = \inf_{\phi_1}\inf_{\phi_2} \int_{t=0}^{\bar{t}} \omega \{\|v_1(t)\|_L^2 + \|v_2(t)\|_L^2\}\, dt + \int_{\Omega} |\phi_1(\bar{t})I - \phi_2(\bar{t})J|^2\, d\Omega, \quad (3)$$

subject to $\bar{t} = 0.5$ and each $\phi_i \in G_0$ the solution of

$$d\phi_i/dt = v_i(\phi_i(t)) \text{ with } \phi_i(0) = \mathrm{Id}. \quad (4)$$
Minimization with respect to φ1 and φ2, and optimizing the shortest length constraint, provides symmetric geodesic normalization. We optimize to time t̄ = 0.5 such that the deformation and optimization act equally on both images. Landmarks may also be included in this energy, as in our previous work [9], by dividing the similarity term, as done with the image match terms above. A similar image matching equation appeared in [18] and [19] as part of a derivation for template generation.

After minimization to time 0.5, the total symmetric normalization from I to J is φ1(x, 1) = φ2⁻¹(φ1(x, 0.5), 0.5) and from J to I, φ2(z, 1) = φ1⁻¹(φ2(z, 0.5), 0.5). This is distinct from inverse consistent image registration [20], in which a variational term is used to estimate "inverse consistency" and symmetry and invertibility are not guaranteed. The inverse consistency is inherent to our method. That is, the paths computed between template and individual images, illustrated in figure 1 and figure 2, are identical when computed from either direction.
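In this notation, the exactness of that symmetry can be checked in one line: the two half-way compositions are algebraic inverses of one another,

$$\phi_{I \to J} = \phi_2^{-1}(\,\cdot\,, 0.5) \circ \phi_1(\,\cdot\,, 0.5), \qquad \phi_{J \to I} = \phi_1^{-1}(\,\cdot\,, 0.5) \circ \phi_2(\,\cdot\,, 0.5),$$

$$\phi_{J \to I} \circ \phi_{I \to J} = \phi_1^{-1}(0.5) \circ \phi_2(0.5) \circ \phi_2^{-1}(0.5) \circ \phi_1(0.5) = \mathrm{Id},$$

so inverse consistency holds by construction rather than through a penalty term.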
The algorithm is useful for generating shape means as well as symmetric geodesic image interpolation, formulated in [21]. The pairwise approach given above may be generalized to define a symmetric population study. The novelty in this approach is that we find an optimal template $\bar{I}(\psi(x))$ which minimizes the total length of a set of diffeomorphisms operating on a set of images, {Ji}, via associated geodesics, {φi}, as in figure 2. This image is a reference coordinate system which, when ψ is varied during optimization, allows us to minimize the following equation with respect to the template shape:

$$\sum_i \inf_{\phi_1^i}\inf_{\phi_2^i} \int_{t=0}^{0.5} \{\|v_1^i\|_L^2 + \|v_2^i\|_L^2\}\, dt + \int_{\Omega} \omega\, |\phi_1^i(0.5)\bar{I} - \phi_2^i(0.5)J_i|^2\, d\Omega, \quad (5)$$

where, for all i, $\phi_1^i(0) = \psi$ and $\phi_1^i(x, 1)\bar{I} = J_i$, and each pairwise problem is solved with GN.
This equation is minimized iteratively. All φi and ψ are initialized as identity. The template at the first iteration is either the average of all individual images (after affine alignment) or a user-selected estimate of the template. The remaining iterations optimize the shape and/or image appearance until the energy no longer decreases, as described in [19].
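The structure of this iteration can be sketched as follows (a simplified outline under our assumptions; geodesic_normalize and warp are hypothetical placeholders for a pairwise GN solver and an image resampler, not a published API):

    import numpy as np

    def estimate_template(images, n_iters=5):
        """Schematic optimal-template iteration for equation (5): initialize with the
        voxelwise average, then repeatedly solve pairwise GN problems to t = 0.5 and
        re-average the half-way-warped images to update the template."""
        template = np.mean(images, axis=0)
        for _ in range(n_iters):
            half_warped = []
            for J in images:
                phi1, phi2 = geodesic_normalize(template, J)  # hypothetical GN solver
                half_warped.append(warp(J, phi2))             # bring J halfway to template
            template = np.mean(half_warped, axis=0)
        return template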
Fig. 2. The role of the template in the population analysis methods that we propose. Circles represent individuals from a population of images. Arrowed lines represent the symmetric paths between the individuals and the template, indicated by the differently colored circle. The population-biased analysis (at right) uses one of the individuals as a template, thereby reducing the size of the dataset by one. Results will also be biased toward individuals that are "closer" to the selected template, in terms of appearance and topology. Our method, illustrated at left, derives a most-representative image that is not tied to any specific individual. Rather, it is found by computing a shape and appearance average with respect to all the individuals in the dataset.
3 Application to Longitudinal Imaging
We now illustrate the application of our techniques to a population study of shape change in time, although our methods are also currently in use for analyzing diffusion in white matter, cortical functional activation and cerebral blood flow. This study illustrates the importance of large deformation capabilities and temporal parameterization for longitudinal studies.
Fig. 3. The average annual expansion and atrophy caused by frontotemporal dementia (FTD) is shown in the average FTD atlas space, as generated with respect to the MNI template. Expansion focuses in the anterior portion of the ventricles (blue-green scale) while atrophy focuses in the frontal and temporal lobes (red-yellow scale). Relative sparing of the occipital lobes is also visible. The white coordinate lines in the image correspond to the same point in the lower left, upper left and upper center images. The green coordinate lines show the same point in the lower right, lower center and upper right of the image.
Dementia is a problem that affects nearly everyone, whether through relatives, the cost to society or one’s own health. Its prevalence has the potential to double within the next two decades. However, dementia, in its many forms, remains poorly understood. Frontotemporal dementia (FTD), for instance, may occur at a rate that is much higher than previously expected and may often be misdiagnosed as Alzheimer’s Disease (AD). One tool in understanding this disease and its diagnosis is structural imaging, the power of which may be increased through patient-specific longitudinal protocols.
One potential tool for differentiating this disease from other types of dementia is to use structural MRI to make measurements of annual disease progression. We performed a study in which we investigated the feasibility of normalizing FTD data measuring annual atrophy [22]. Two considerations came into play for this study. First, FTD induces rapid and severe atrophy of the brain, making normalization challenging. Diffeomorphic normalization, however, is able to capture large deformations that enable one to map FTD subjects to an optimal shape template. The second consideration is that patients may not be imaged at exact one year increments. This requires atrophy estimates to be annualized. Our methods, which include a time parameterization, enable one to naturally capture an estimate of annual atrophy from an image pair that is separated by more than a year. For example, if we suppose that four years separate image one and two and that φ maps between these images, then we compute the atrophy from the value of the jacobian determinant at φ(0.25), which gives a quarter of the appropriate diffeomorphic map between the images. Results of annualized atrophy measurements gained from 13 patients are shown in figure 3.
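As an illustration of this annualization step, the sketch below evaluates the log-Jacobian of the time-parameterized map at the fractional time corresponding to one year. Here `phi_at` is a hypothetical callable returning the displacement field of the geodesic path at a given t, and the finite-difference Jacobian is one simple choice, not the estimator used in our implementation.

```python
import numpy as np

def annualized_log_jacobian(phi_at, years_between_scans):
    """Annualize atrophy from a time-parameterized diffeomorphism (sketch).

    `phi_at(t)` is assumed to return the displacement field u of the geodesic
    path at time t in [0, 1], with shape nx x ny x nz x 3. Evaluating at
    t = 1/years gives the one-year portion of the map, e.g. phi(0.25) when
    four years separate the two images.
    """
    t = 1.0 / years_between_scans
    u = phi_at(t)
    # Jacobian of the map x -> x + u(x): J = I + grad(u), central differences.
    grads = [np.gradient(u[..., k]) for k in range(3)]
    jac = np.zeros(u.shape[:-1] + (3, 3))
    for k in range(3):
        for l in range(3):
            jac[..., k, l] = (1.0 if k == l else 0.0) + grads[k][l]
    # log|J| < 0 marks annual atrophy (contraction), > 0 annual expansion.
    return np.log(np.linalg.det(jac))
```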
4 Conclusion
Geodesic normalization provides high resolution maps in both space and time, computes distances that are truly symmetric and enables unbiased template generation. Our methodology explicitly optimizes the length of φ, parameterized symmetrically by its components, $\phi_1, \phi_2$, in order to minimize a geodesic distance between image pairs or between an optimal template, $\bar{I}$, and a dataset. Optimizing the length of a diffeomorphism is an alternative to finding a geodesic by solving the Euler equations [11]. We have shown the appropriateness of these methods for addressing the difficulties of analyzing longitudinal neurodegenerative data. Future work will focus on using these methods in diffusion tensor and other alternative modality studies.
References
1. C. Darwin, Origin of Species, John Murray: London, UK, 1856.
2. D. W. Thompson, On Growth and Form, Cambridge University Press, England, 1917.
3. A. M. Dale, B. Fischl, and M. I. Sereno, “Cortical surface-based analysis i: Segmentation and surface reconstruction,” Neuroimage, vol. 9, no. 2, pp. 179–194, 1999.
4. P. Thompson and A. Toga, “A surface-based technique for warping 3-dimensional images of the brain,” IEEE Trans. Medical Imaging, vol. 15, no. 4, pp. 402–417, 1996.
5. J. G. Csernansky, S. Joshi, L. Wang, J. W. Haller, M. Gado, J. P. Miller, U. Grenander, and M. I. Miller, “Hippocampal morphometry in schizophrenia by high dimensional brain mapping,” Proc. Natl. Acad. Sci. (USA), vol. 95, no. 19, pp. 11406–11411, 1998.
6. C. Studholme, V. Cardenas, R. Blumenfeld, N. Schuff, H. J. Rosen, B. Miller, and M. Weiner, “Deformation tensor morphometry of semantic dementia with quantitative validation,” Neuroimage, vol. 21, no. 4, pp. 1387–1398, 2004.
7. R. Dann, J. Hoford, S. Kovačič, M. Reivich, and R. Bajcsy, “Evaluation of elastic matching system for anatomic (CT, MR) and functional (PET) cerebral images,” J. Comput. Assist. Tomogr., vol. 13, pp. 603–611, 1989.
8. M. I. Miller, M. F. Beg, C. Ceritoglu, and C. Stark, “Increasing the power of functional maps of the medial temporal lobe by using large deformation diffeomorphic metric mapping,” Proc. Natl. Acad. Sci. (USA), vol. 102, no. 27, pp. 9685–9690, 2005.
9. B. Avants, P. T. Schoenemann, and J. C. Gee, “Landmark and intensity-driven lagrangian frame diffeomorphic image registration: Application to structurally and functionally based inter-species comparison,” Medical Image Analysis, June 2005.
10. C. Yan, M. I. Miller, R. L. Winslow, and L. Younes, “Large deformation diffeomorphic metric mapping of vector fields,” IEEE Trans. Med. Imaging, vol. 24, no. 9, pp. 1216–1230, 2005.
11. V. I. Arnold and B. A. Khesin, “Topological methods in hydrodynamics,” Ann. Rev. Fluid Mech., vol. 24, pp. 145–166, 1992.
12. D. Ebin and J. Marsden, “Groups of diffeomorphisms and the motion of an incompressible fluid,” The Annals of Mathematics, vol. 92, no. 1, pp. 102–163, 1970.
13. M. Miller, A. Trouve, and L. Younes, “On the metrics and Euler-Lagrange equations of computational anatomy,” Annu. Rev. Biomed. Eng., vol. 4, pp. 375–405, 2002.
14. U. Grenander and M. I. Miller, “Computational anatomy: An emerging discipline,” Quarterly of Applied Mathematics, vol. 56, no. 4, pp. 617–694, 1998.
15. G. Hermosillo, C. Chefd’Hotel, and O. Faugeras, “A variational approach to multimodal image matching,” Intl. J. Comp. Vis., vol. 50, no. 3, pp. 329–343, 2002.
16. V. I. Arnold, Ordinary Differential Equations, Springer-Verlag: Berlin, 1991.
17. P. Dupuis, U. Grenander, and M. I. Miller, “Variational problems on flows of diffeomorphisms for image matching,” Quarterly of Applied Mathematics, vol. 56, no. 3, pp. 587–600, 1998.
18. S. Joshi, B. Davis, M. Jomier, and G. Gerig, “Unbiased diffeomorphic atlas construction for computational anatomy,” Neuroimage, vol. Suppl. 1, pp. S151–S160, September 2004.
19. B. Avants and J. C. Gee, “Geodesic estimation for large deformation anatomical shape and intensity averaging,” Neuroimage, vol. Suppl. 1, pp. S139–S150, 2004.
20. H. J. Johnson and G. E. Christensen, “Consistent landmark and intensity-based image registration,” IEEE Trans. Med. Imaging, vol. 21, no. 5, pp. 450–461, 2002.
21. B. Avants, C. L. Epstein, and J. C. Gee, “Geodesic image interpolation: Parameterizing and interpolating spatiotemporal images,” in ICCV Workshop on Variational and Level Set Methods, 2005, pp. 247–258.
22. B. Avants, M. Grossman, and J. C. Gee, “The correlation of cognitive decline with frontotemporal dementia induced annualized gray matter loss using diffeomorphic morphometry,” Alzheimer’s Disease and Associated Disorders, vol. 19, 2005, Supplement 1:S25–S28.
Connectivity Analysis of Human Functional MRI Data: From Linear to Nonlinear and Static to Dynamic

Gopikrishna Deshpande, Stephen LaConte, Scott Peltier, and Xiaoping Hu

WHC Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Hospital Annex, 531 Asbury Circle, Suite 305, Atlanta, GA 30322, USA
[email protected]
Abstract. In this paper, we describe approaches for analyzing functional MRI data to assess brain connectivity. Using phase-space embedding, bivariate embedding dimensions and delta-epsilon methods are introduced to characterize nonlinear connectivity in fMRI data. The nonlinear approaches were applied to resting state data and continuous task data and their results were compared with those obtained from the conventional approach of linear correlation. The nonlinear methods captured couplings not revealed by linear correlation and were found to be more selective in identifying true connectivity. In addition to the nonlinear methods, the concept of Granger causality was applied to infer directional information transfer among the connected brain regions. Finally, we demonstrate the utility of moving window connectivity analysis in understanding temporally evolving neural processes such as motor learning. Keywords: Functional Magnetic Resonance Imaging, Nonlinear Dynamics, Connectivity Analysis.
1 Introduction

While studying the brain with functional neuroimaging, it is important to keep in mind the inherent dichotomy that exists in the brain. On one hand, there is functional specialization of different brain regions, the investigation of which has been the major focus of functional mapping studies. On the other hand, networks of regions also act together to accomplish various brain functions. In particular, neuroimaging data can be used to infer functional connectivity [1], which permits a systematic understanding of brain activity and allows the establishment and validation of network models of various brain functions. One approach for examining connectivity that has gained a great deal of interest is based on the temporal correlations in functional neuroimaging data [2]. While linear correlation analysis has been quite successful, physiological considerations suggest that the brain is likely to act as a nonlinear system that is not completely stochastic [3]. Therefore, in this paper, methods to characterize nonlinear connectivity in the brain are introduced. In addition to functional connectivity, which does not ascertain the direction of influence, various methods including structural equation modeling [4], Bayesian approaches [5], Kalman filtering [6], dynamic causal modeling [7] and information
theoretic models [8] have been used to assess effective connectivity (EC). An alternate approach using temporal precedence information as inferred through Granger causality [9] is applied in this work.
2 Theory and Methods

2.1 Nonlinear Functional Connectivity

According to dynamical systems theory [10], the state of a system at every instant is controlled by its state variables, and the phase space of its state variables specifies the system. Therefore, the dynamics of a system can be studied by studying the dynamics of its phase space. However, the data measured in an fMRI experiment are not state variables, but only evolving scalar measurements which are the projections of the actual state variables on a lower dimensional space. The problem of converting the observations into state variables is referred to as phase-space reconstruction (embedding) and is solved using Takens' embedding theorem [11]. The general form of multivariate embedding is given by the equation below:

$$y_n = (\varphi_n^1, \varphi_n^2, \ldots, \varphi_n^j) \quad (1)$$
where $\varphi^j = \{x(t), x(t-\tau), x(t-2\tau), \ldots, x(t-(n-1)\tau)\}$ are time-delay vectors formed from the fMRI time series x(t), j is the number of fMRI time series used, n is the embedding dimension [12], and $\tau$ is the embedding lag. The embedding dimension is basically an estimate of the number of independent and orthogonal parameters required to describe the dynamic evolution of the system. The choices of the time series, embedding dimensions and time delays have to be made based on a justifiable criterion. In the present work, the choice of these parameters is addressed by using the basic idea of minimizing a cost function which reflects the prediction error in the embedded state space [12]. The cost function is given by
$$(d_1, d_2, \ldots, d_M) = \arg\min \left\{ E(d_1, d_2, \ldots, d_M) : (d_1, d_2, \ldots, d_M) \in Z^M, \sum_{i=1}^{M} d_i \neq 0 \right\} \quad (2)$$

$$E(d_1, d_2, \ldots, d_M) = \frac{1}{N - J_o + 1} \sum_{n \neq J_o}^{N} \left| x_{1,n+1} - x_{1,\eta(n)+1} \right|, \quad J_o = \max_{1 \leq i \leq M} (d_i - 1)\tau_i + 1 \quad (3)$$
where the $d_i$ are the embedding dimensions, the $x_{i,j}$ are points in the state space, the $\tau_i$ are the time delays, Z is the set of all non-negative integers and $Z^M$ is the M-dimensional vector space corresponding to the scalar set Z. $d = d_1 + d_2 + \ldots + d_M$ represents the multivariate embedding dimension of all the time series taken together.

Bivariate Nonlinear Connectivity Index. The bivariate nonlinear connectivity index (BNC) is based on the bivariate and univariate embedding dimensions to measure nonlinear FC between any two regions of interest (ROI) in the brain. Let $d_1$ and $d_2$ be the univariate embedding dimensions of fMRI time series 1 and 2 obtained from the respective ROIs. Let d be the bivariate embedding dimension of the two time series embedded together (as a special case of the more general multivariate embedding formulation described before). Then, we define
$$BNC = 1 - \frac{|d - d_1| + |d - d_2|}{d_1 + d_2} \quad (4)$$
When the two time series are connected, the bivariate dimension does not provide any extra information, so that $d = d_1 = d_2$ and BNC = 1. Therefore, higher values of BNC are associated with higher connectivity. Since this method makes no assumption of linearity, both linear and nonlinear couplings are accounted for.

Delta-Epsilon Method. As we described in [13], we have adapted the delta-epsilon approach [14,15] to measure deterministic coupling arising from both linear and nonlinear dynamics for resting state functional connectivity studies. The procedure uses a fixed reference voxel, and every brain voxel individually as a candidate voxel, to estimate the spatial pattern of coupling with the reference. In this case "coupling" implies that the candidate voxel provides predictive information about the reference voxel. To do this, the delta-epsilon method looks at the distance between joint phase space locations ($\delta$) at two different times and the distance between the reference voxel values at future times ($\varepsilon$). For example, embedding both $x_{ref}$ and $x_{cand}$ in two dimensions and concatenating coordinates leads to a 4D phase space. In this case, $\delta$ is calculated from $\{x_{ref}(t_i), x_{ref}(t_i-\tau), x_{cand}(t_i), x_{cand}(t_i-\tau)\}$ and $\{x_{ref}(t_j), x_{ref}(t_j-\tau), x_{cand}(t_j), x_{cand}(t_j-\tau)\}$, while $\varepsilon$ is calculated from $x_{ref}(t_{i+1})$ and $x_{ref}(t_{j+1})$ for times i and j, respectively. If there is coupling between the two time courses, small $\delta$ values should result in small $\varepsilon$ values. To obtain estimates of significance, surrogate data is generated by randomly permuting the $x_{cand}$ contribution to the phase space (keeping coordinates together, but randomizing their temporal order). Connectivity is assessed by the statistic

$$S(r) = \frac{\varepsilon(r) - \varepsilon^*(r)}{\sigma^*(r)} \quad (5)$$
where $\varepsilon^*(r)$ and $\sigma^*(r)$ are the average image distance and standard deviation of the surrogate data, respectively. As in Hoyer [15], we used a cumulative $\varepsilon$ as the average image distance for $\delta < r$. To display results as an image, we calculate the mean of S(r) for each candidate voxel.

2.2 Effective Connectivity

EC is inferred from temporal precedence information obtained from Granger causality [9]. Given two time series $x_1(t)$ and $x_2(t)$, they can be modeled as a bivariate autoregressive process (of order p) as given below:
$$x_1(t) = \sum_{j=1}^{p} a_{11}(j) x_1(t-j) + \sum_{j=1}^{p} a_{12}(j) x_2(t-j) + e_1(t)$$
$$x_2(t) = \sum_{j=1}^{p} a_{21}(j) x_1(t-j) + \sum_{j=1}^{p} a_{22}(j) x_2(t-j) + e_2(t) \quad (6)$$
In the frequency domain, the above equations can be written as

$$\begin{pmatrix} X_1(f) \\ X_2(f) \end{pmatrix} = \begin{pmatrix} H_{11}(f) & H_{12}(f) \\ H_{21}(f) & H_{22}(f) \end{pmatrix} \begin{pmatrix} E_1(f) \\ E_2(f) \end{pmatrix}, \quad \text{where} \quad \begin{pmatrix} H_{11}(f) & H_{12}(f) \\ H_{21}(f) & H_{22}(f) \end{pmatrix} = \begin{pmatrix} A_{11}(f) & A_{12}(f) \\ A_{21}(f) & A_{22}(f) \end{pmatrix}^{-1} \quad (7)$$
is the transfer matrix. We define the Granger causality as a function of frequency using the off-diagonal elements of the transfer matrix as
$$\left| H_{ij}(f) \right|^2 = \frac{\left| A_{ij}(f) \right|^2}{\left| A(f) \right|^2} \quad (8)$$
where the causality is from j to i. The values were summed over frequency to obtain one value of Granger causality (GC) for the j→i link. The difference GC (DGC) is defined as given below, and the sign of DGC is used to infer the direction of information transfer:

$$DGC = GC_{i \to j} - GC_{j \to i} \quad (9)$$
A large positive value indicates large information transfer from i to j, a large negative value indicates transfer from j to i, and a value of low magnitude (positive or negative) indicates a tendency towards bidirectional transfer of information.
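As a sketch of how equations (6)-(9) could be computed from two time series, the following code fits the bivariate AR model by ordinary least squares and accumulates the squared off-diagonal transfer-function elements over frequency. The AR order and frequency grid are illustrative assumptions, not the values used in our experiments.

```python
import numpy as np

def difference_granger_causality(x1, x2, p=5, n_freq=128):
    """Sketch of Eqs. (6)-(9): bivariate AR fit, spectral transfer matrix,
    and the difference Granger causality DGC = GC(1->2) - GC(2->1)."""
    X = np.vstack([x1, x2])                              # 2 x N
    N = X.shape[1]
    # Lagged regression X[:, t] = sum_j A_j X[:, t-j] + e(t), Eq. (6).
    Y = X[:, p:].T                                       # (N-p) x 2 targets
    Z = np.hstack([X[:, p - j:N - j].T for j in range(1, p + 1)])
    coeffs, *_ = np.linalg.lstsq(Z, Y, rcond=None)       # (2p) x 2
    A_lags = coeffs.T.reshape(2, p, 2).transpose(1, 0, 2)  # p matrices A_j

    freqs = np.linspace(0.0, 0.5, n_freq)
    gc12 = gc21 = 0.0
    for f in freqs:
        # A(f) = I - sum_j A_j exp(-i 2 pi f j)
        Af = np.eye(2, dtype=complex)
        for j in range(1, p + 1):
            Af -= A_lags[j - 1] * np.exp(-2j * np.pi * f * j)
        H = np.linalg.inv(Af)                            # transfer matrix, Eq. (7)
        gc21 += abs(H[0, 1]) ** 2                        # influence 2 -> 1, Eq. (8)
        gc12 += abs(H[1, 0]) ** 2                        # influence 1 -> 2
    return gc12 - gc21                                   # sign gives direction, Eq. (9)
```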
3 Data Acquisition and Analysis

3.1 Static Analysis

Resting State Paradigm. Two runs of echo-planar imaging (EPI) data were acquired, one during resting state and one during performance of a block-design finger-tapping paradigm, on 3 human subjects using a 3T Siemens Trio. Scan parameters were: repetition time (TR) = 750 ms, echo time (TE) = 34 ms, flip angle (FA) = 50 deg and field of view (FOV) = 22 cm, with 5 axial slices, 5 mm slice thickness, 1120 images and 64 phase and frequency encoding steps. A physiological monitoring unit consisting of a pulse-oximeter and nasal respiratory cannula was used during data acquisition to record cardiac and respiratory signals, respectively. These physiological fluctuations were corrected in the functional data retrospectively [16]. Four regions of interest (ROI) – left motor (LM), right motor (RM), frontal (F) and supplementary motor (SMA) – were identified based on the finger-tapping data. A mean time course was calculated for each ROI. BNC was calculated between the time course of LM and that from each of the ROIs. A Kolmogorov-Smirnov test, based on the null hypothesis that the BNC values are purely attributed to noise, was performed to test the significance. The slice with the most prominent motor activation pattern was examined with both the delta-epsilon method and correlation analysis, using a seed voxel in the left primary motor cortex identified from the activation data. For both methods, a threshold was chosen to select the top 10% of brain voxels. Subsequent to applying this threshold, a spatial contiguity cluster of three voxels was imposed and the linear and nonlinear maps were compared.

3.2 Dynamic Analysis

Continuous Motor Paradigm. EPI data was acquired in 3 healthy volunteers while they performed a continuous self-paced bimanual tapping of the thumb with the index, middle, ring and little fingers (in that order). Scan parameters were: TR = 750 ms, TE = 34 ms, FA = 50°, FOV = 22 cm, 1120 volumes and 10 slices spanning the corpus callosum to the top of the head. Activated voxels were identified using
independent component analysis [17] and a reference region (RR) defined in bilateral motor cortex. The mean time course of the RR was chosen as the seed for further analysis. Linear FC was estimated as the cross-correlation between the seed and all other voxels (candidates) in five slices containing the motor cortex whereas BNC was calculated as a measure of nonlinear FC between the seed and other voxels. FC was calculated using 3 non-overlapping time windows, each containing 373 volumes. The significance of the changes in FC was ascertained using the Wilcoxon rank sum test. In each time window, the DGC of every voxel with respect to the mean voxel of RR was calculated to ascertain the magnitude and direction of information transfer between the voxel under consideration and RR. The DGC of voxels that passed the significance test with 99% confidence was mapped and overlaid onto the T1-weighted anatomical image to produce GC maps in each of the three time windows.
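Before turning to the results, a brief sketch makes the BNC computation of equation (4) concrete. The delay embedding follows equation (1), while the embedding-dimension estimates d1, d2 and d are assumed to come from minimizing the cost of equations (2)-(3), which is not reproduced here.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding of Eq. (1): each row is
    [x(t), x(t - tau), ..., x(t - (dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[(dim - 1 - k) * tau:(dim - 1 - k) * tau + n]
                            for k in range(dim)])

def bnc(d_joint, d1, d2):
    """Bivariate nonlinear connectivity index of Eq. (4); d1, d2 are the
    univariate embedding dimensions, d_joint the bivariate one, all assumed
    to be estimated elsewhere (e.g. by the cost of Eqs. (2)-(3))."""
    return 1.0 - (abs(d_joint - d1) + abs(d_joint - d2)) / (d1 + d2)

# Fully coupled series carry no extra joint information: d = d1 = d2.
assert bnc(5, 5, 5) == 1.0
```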
4 Results and Discussion

4.1 Resting State Paradigm

Table 1 lists the BNC and linear correlation (LC) of the resting state fMRI data. LM↔RM and LM↔SMA exhibit strong connectivity, in agreement with results reported earlier [7]. Also, BNC rankings of the connections were more consistent across subjects than LC. Granger causality analysis showed that there was a strong unidirectional causal influence from SMA and RM to LM, while LM's influence on F was probably more bidirectional.

Table 1. Significant FC and EC for resting-state fMRI data for two representative subjects
Subject  Network   Linear (LC)  Nonlinear (BNC)  GC Direction  DGC
Sub-1    LM↔SMA    0.76         0.52             LM→SMA        -0.52
Sub-1    LM↔RM     0.73         0.89             LM→RM         -0.78
Sub-1    LM↔F      0.22         0.15             LM→F           0.20
Sub-2    LM↔SMA    0.49         0.63             LM→SMA        -0.56
Sub-2    LM↔RM     0.57         0.76             LM→RM         -0.65
Sub-2    LM↔F      0.45         0.54             LM→F           0.15
The delta-epsilon approach was able to identify a connection between a seed in the left motor area (yellow arrow) and right motor area, SMA, and a medial frontal region (blue arrow) in the third subject (Fig. 1). Although there are similarities between the connectivity identified with the delta-epsilon approach and that identified by the correlation analysis with the same seed, significant differences are present, indicating that the embedding approach is likely to capture nonlinear correlations that may not be detected using linear correlation. In addition, the region indicated by the green arrow is a spurious correlation near a draining vein. Note that this region was not identified by the delta-epsilon method, suggesting that the nonlinear technique has enhanced specificity to the desired gray matter signal.
[Figure panels, left to right: Correlation, Delta-Epsilon, Correlation]
Fig. 1. Baseline data connectivity maps derived using delta-epsilon and correlation
4.2 Continuous Motor Paradigm

The change in the magnitude of FC with the progression of time was not significant, though it showed an increasing trend. However, the number of significant connected voxels (p

$\sigma_i^2 > 0; \quad m' \leq m - 1$
where $\bar{x}$ and D represent the mean vector and the covariance matrix, respectively; $\{\sigma_i^2\}$ are the non-zero eigenvalues of D, and $\{p_i\}$ are the corresponding eigenvectors. Columns of P span the shape sub-space of the DS-PDM model with $\bar{x}$ as the origin. Then, any instance in this space can be expressed as:
$$x = \bar{x} + \sum_{i=0}^{m'-1} \alpha_i p_i \quad (3)$$
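As a minimal sketch of equation (3), the code below generates a surface instance from the mean shape and the retained eigenvectors. The ±3σ bound on the coefficients is a commonly used constraint and an assumption here, not a prescription from the text.

```python
import numpy as np

def pdm_instance(x_bar, eigvecs, alphas):
    """Generate a shape instance from the DS-PDM via Eq. (3):
    x = x_bar + sum_i alpha_i * p_i.

    x_bar:   mean shape, flattened (3*n_points,)
    eigvecs: columns p_i of P spanning the shape sub-space
    alphas:  shape coefficients alpha_i
    """
    return x_bar + eigvecs @ np.asarray(alphas)

def clamp_coefficients(alphas, eigvals, n_sigma=3.0):
    """Assumed plausibility constraint: bound each alpha_i by +/- n_sigma
    standard deviations, where sigma_i^2 is the i-th eigenvalue of D."""
    bound = n_sigma * np.sqrt(eigvals)
    return np.clip(alphas, -bound, bound)
```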
3 A Sequential Three-Stage Trimmed Optimization Approach

Given the positions of a reduced number $n_p$

For any given p, $\mu_{1p}$, $\sigma_{1p}$, $\mu_{2p}$, and $\sigma_{2p}$ can be computed from $d_l(x)$. Therefore, the likelihood of these two hypotheses can be calculated as:
$$P(H_0 \mid d_l(x)) = \prod_{x_j \in [0.0, 0.5]} \frac{1}{\sqrt{2\pi\sigma_0^2}} \exp\left( -\frac{(d_l(x_j) - \mu_0)^2}{2\sigma_0^2} \right); \quad j = 0, 1, \ldots, S$$

$$P(H_1(p) \mid d_l(x)) = \left( \prod_{x_r \in [0.0, p]} \frac{1}{\sqrt{2\pi\sigma_{1p}^2}} \exp\left( -\frac{(d_l(x_r) - \mu_{1p})^2}{2\sigma_{1p}^2} \right) \right) \cdot \left( \prod_{x_k \in (p, 0.5]} \frac{1}{\sqrt{2\pi\sigma_{2p}^2}} \exp\left( -\frac{(d_l(x_k) - \mu_{2p})^2}{2\sigma_{2p}^2} \right) \right); \quad r = 0, 1, \ldots, \lfloor p \cdot S \rfloor \ \text{and} \ k = \lfloor p \cdot S \rfloor + 1, \ldots, S \quad (9)$$
where S is the number of samples; $x_j$, $x_r$, and $x_k$ are the regular sampling positions. For a given p, we can use a log-likelihood-ratio test statistic to reject H0 as follows:

$$\Lambda(H_0, H_1(p)) = \ln\left( \frac{P(H_1(p) \mid d_l(x))}{P(H_0 \mid d_l(x))} \right) / S \quad (10)$$
Then, by the optimal Neyman-Pearson test, $\Lambda(H_0, H_1(p)) > c$ provides evidence to reject H0. The cut-off point c = ln(2.0) is selected so that the hypothesis H0 can be correctly classified with a probability of 0.95 under the normal distribution assumptions of H0 and H1(p). If H0 is rejected for at least one given p, the optimal value of the outlier rate is then estimated by finding the peak of the log-likelihood-ratio.
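The following sketch illustrates this test on a sampled EDF; the candidate outlier rates and the maximum-likelihood Gaussian fits are illustrative assumptions.

```python
import numpy as np

def gauss_loglik(d):
    """Log-likelihood of samples d under a fitted normal (ML estimates)."""
    d = np.asarray(d, dtype=float)
    mu, var = d.mean(), d.var() + 1e-12
    return np.sum(-0.5 * np.log(2 * np.pi * var) - (d - mu) ** 2 / (2 * var))

def estimate_outlier_rate(d_l, candidates=np.arange(0.05, 0.55, 0.05)):
    """Sketch of the hypothesis test of Eqs. (9)-(10). `d_l` holds d_l(x)
    sampled at S+1 regular positions x in [0, 0.5]; the candidate grid is
    an illustrative choice."""
    d_l = np.asarray(d_l, dtype=float)
    S = len(d_l) - 1
    log_h0 = gauss_loglik(d_l)                    # single-Gaussian model H0
    best_p, best_ratio = None, -np.inf
    for p in candidates:
        split = int(np.floor(p * S)) + 1
        # Two-Gaussian model H1(p): samples in [0, p] versus (p, 0.5].
        log_h1 = gauss_loglik(d_l[:split]) + gauss_loglik(d_l[split:])
        ratio = (log_h1 - log_h0) / S             # Eq. (10)
        if ratio > best_ratio:
            best_p, best_ratio = p, ratio
    # Reject H0 (and report an outlier rate) only above the cut-off ln(2).
    return best_p if best_ratio > np.log(2.0) else 0.0
```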
5 Experimental Results

In this paper, we evaluated the present approach for the handling of outliers on simulated data. The simulations were designed to examine the performance under different outlier rates. The simulation environment was as follows. First of all, a cadaver bone which was not used in the construction of the DS-PDM was employed in our experiment. The surface model of this cadaver bone, which was reconstructed from its CT volume data, was used as the ground truth. Points picked directly from the surface model were used for reconstruction. The experiment trials were carried out in the CT coordinate system. We created four point sets with different simulated outlier rates, i.e., 0%, 10%, 20%, and
30%. During the reconstruction, we assumed that we had no idea whether the input point set was contaminated or not. The approach presented in Section 4 was used to automatically determine the optimal value of the outlier rate in each study. Please note that the accuracy of the outlier rate estimation depends on the sample density of the EDF $d(\xi(i))$. The denser the samples are, the more accurate the estimation will be. But dense samples also mean more running time. For all studies in this experiment, each time we drew samples from the set $\{\xi(i) \mid \xi(i) = 0.05 \cdot i, i = 0, 1, \ldots, l\}$. The reconstruction results were directly compared to the actual surface model obtained from the CT volume, which was taken as the ground truth. An open source tool MESH [15] was adapted to efficiently compute the TRE. The results of these four studies are presented in the top row of Fig. 1. The bottom row of Fig. 1 shows a reconstruction example when the point set with a 30% outlier rate was used. It was found that the present approach could robustly and correctly determine an optimal value for the outlier rate in all studies, which guaranteed an accurate and robust surface reconstruction.
[Figure 1: top row – box plot (5th percentile, first quartile, median, third quartile, 95th percentile) of the target reconstruction error (mm) for the studies with different outlier rates; bottom row – the CT surface model and digitized surface points with 30% outliers; the mean surface model of the DS-PDM and the digitized surface points; the final reconstructed surface model and the digitized surface points; the final reconstructed surface model, the CT surface model, and the digitized surface points]

Fig. 1. Results of automatic outlier rejection; top row: the box plot of the target reconstruction errors for the studies with different outlier rates: 1 – 0%; 2 – 10%; 3 – 20%, and 4 – 30%; bottom row: a reconstruction example when the point set with 30% outliers was used
6 Conclusions

We have presented an enhanced technique for robustly reconstructing patient-specific 3D bone models from sparse point sets. This technique can be seamlessly fit to all
three stages of our previously proposed surface reconstruction approach without any significant modification. We have also proposed a hypothesis testing procedure to automatically determine an optimal value of the outlier rate. Our experiments verified that this enhanced technique was very successful in robustly eliminating outliers and also in enabling stable and accurate reconstructions.
References
1. Zheng G., Rajamani K.T. and Nolte L.-P.: Use of a dense surface point distribution model in a three-stage anatomical shape reconstruction from sparse information for computer-assisted orthopaedic surgery: a preliminary study. Lecture Notes in Computer Science, Vol. 3852, (2006) 52–60.
2. Lavallee S., Merloz P., et al.: Echomorphing: introducing an intra-operative imaging modality to reconstruct 3D bone surfaces for minimally invasive surgery. CAOS 2004, pp. 38–39.
3. Hofstetter R., Slomczykowski M., et al.: Fluoroscopy as an imaging means for computer-assisted surgical navigation. Comp Aid Surg, Vol. 4, pp. 65–76, 2004.
4. Fleute M. and Lavallee S.: Building a complete surface model from sparse data using statistical shape models: application to computer assisted knee surgery system. MICCAI 1998, pp. 879–887, 1998.
5. Chan C.S., Edwards P.J., Hawkes D.J.: Integration of ultrasound-based registration with statistical shape models for computer-assisted orthopedic surgery. SPIE Medical Imaging, pp. 414–424, 2003.
6. Blanz V. and Vetter T.: Reconstructing the complete 3D shape of faces from partial information. it+ti Oldenburg Verlag, pp. 295–302, 2002.
7. Rajamani K.T., et al.: A novel and stable approach to anatomical structure morphing for enhanced intraoperative 3D visualization. SPIE Medical Imaging, pp. 718–725, 2005.
8. Rousseeuw P. and van Zomeren B.: Unmasking multivariate outliers and leverage points. Journal of the American Statistical Association, 86:633–651, 1990.
9. Brechbuehler C., Gerig G., Kuebler O.: Parameterization of closed surfaces for 3D shape description. Comput Vision and Image Under (1995), 61:154–170.
10. Davies R.H., Twining C.H., et al.: 3D statistical shape models using direct optimization of description length. ECCV 2002, pp. 3–20, 2002.
11. Loop C.T.: Smooth subdivision surfaces based on triangles. M.S. Thesis, Department of Mathematics, University of Utah, August 1987.
12. Chetverikov D., Svirko D., Stepanov D., and Krsek P.: The trimmed iterative closest point algorithm. ICPR'02, Volume 3: 545–548, 2002.
13. Bookstein F.: Principal warps: thin-plate splines and the decomposition of deformations. IEEE T Pattern Anal, Vol. 11, pp. 567–585, 1989.
14. Scott D.W.: Multivariate Density Estimation: Theory, Practice, and Visualization. Wiley, 1992.
15. Aspert N., Santa-Cruz D., Ebrahimi T.: MESH: Measuring errors between surfaces using the Hausdorff distance. ICME 2002, pp. 705–708, 2002.
Generalized n-D C^k B-Spline Scattered Data Approximation with Confidence Values

Nicholas J. Tustison and James C. Gee

Penn Image Computing and Science Laboratory, University of Pennsylvania, Philadelphia, PA 19104-6389
Abstract. The ability to reconstruct multivariate approximating or interpolating functions from sampled data finds many practical applications in medical image analysis. Parameterized reconstruction methods employing B-splines have typically utilized least-squares methodology for data fitting. For large sample sets, solving the resulting linear system is computationally demanding as well as susceptible to ill-conditioning. We present a generalization of a previously proposed fast surface fitting technique for cubic B-splines which avoids the pitfalls of the conventional fitting approach. Our proposed generalization consists of expanding the algorithm to n dimensions, allowing for arbitrary spline degree in the different parametric dimensions, permitting wrapping of the parametric domain, and the assignment of confidence values to the data points for more precise control over the fitting results. In addition, we implement our generalized B-spline approximation algorithm within the Insight Toolkit (ITK) for open source dissemination.
1 Introduction

Given a set of uniformly or nonuniformly distributed data samples, the process of reconstructing a function from those samples finds diverse application in generic quantitative data analysis. Due to many of their salient properties (see, for example, [1]), parameterized reconstruction techniques employing B-splines have continued to find application since Riesenfeld first introduced B-splines to approximation problems in computer-aided design [2]. Due to its general applicability as well as ease of use, a popular technique for approximation of scattered data using B-splines is based on the minimization of the sum of the squared residuals between the individual data points and the corresponding function values (commonly referred to as least squares fitting). Such techniques generally involve the construction of a linear system from a set of data points. Standard matrix transformations are then used to solve for the values of the governing control points. Unfortunately, such techniques require the inversion of potentially large matrices which is not only computationally demanding but susceptible to memory problems. In addition, locally insufficient data point placement can lead to ill-conditioned matrices producing undesirable results. Lee et al. proposed a uniform B-spline approximation algorithm in [3] which circumvents the problematic issues associated with conventional least squares fitting for
2-D cubic B-spline surfaces. The authors also introduce this algorithm within the context of a multilevel framework for hierarchical surface fitting. While they discuss the possibility of extending their algorithm to multi-dimensions, both the details of the implementation of their algorithm as well as the applications are restricted to cubic B-spline surfaces (bivariate functions). In this paper, we generalize the algorithm for more encompassing applicability. Our proposed generalization accommodates multivariate B-spline objects of arbitrary degree where the degree is allowed to vary in each parametric direction. We also allow for confidence values to be associated with each data point for refined control of the fitting results. In addition, we allow for a “wrapping” of the parametric domain in user-specified dimensions. Finally, our algorithm is implemented within the open-source Insight software library [4] allowing for public dissemination of the algorithm presented in this paper.
2 B-Spline Scattered Data Approximation

2.1 B-Spline Basics

B-splines, an acronym for basis splines,¹ have become the de facto standard for computational geometric representation. This is, in large part, due to the discovery of a stable, recursive algorithm for calculating the B-splines known as the Cox-de Boor recurrence relation [6,7]. Two important properties of B-splines are the following:

– Locality: Each point of the piecewise B-spline curve of degree d is influenced by the neighboring d + 1 control points. Similarly, each point of an n-dimensional B-spline object is influenced by the corresponding n-dimensional grid consisting of $\prod_{j=1}^{n}(d_j + 1)$ control points, where $d_j$ is the degree of the spline in the j-th dimension.
– Continuity: A B-spline curve of degree d has continuity $C^{d-k}$ at the connecting points of the polynomial segments (called knots), where k is the multiplicity of the knot. It is smooth elsewhere.

B-spline objects are defined by a set of B-splines and a grid of control points. In reconstructing the object, the B-splines serve as a weighted averaging function for the set of control points. As an example, the B-spline surface is a bivariate function given by

$$S(u, v) = \sum_{i=1}^{N} \sum_{j=1}^{M} P_{i,j} B_{i,d}(u) B_{j,e}(v) \quad (1)$$
which can be generalized to formulate a B-spline object of n dimensions:

$$S(u_1, \ldots, u_n) = \sum_{i_1=1}^{M_1} \cdots \sum_{i_n=1}^{M_n} P_{i_1,\ldots,i_n} \prod_{j=1}^{n} B_{i_j,d_j}(u_j) \quad (2)$$
¹ Contrary to popular usage in the CAGD community, we follow de Boor's distinction discussed in [5] of using the term 'B-spline' to denote the shape function of limited support and not the B-spline object (e.g. curve, surface, volume).
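For concreteness, a direct (unoptimized) transcription of the Cox-de Boor recurrence is sketched below; the uniform knot vector is an illustrative choice.

```python
import numpy as np

def bspline_basis(i, d, u, knots):
    """Cox-de Boor recurrence for the B-spline B_{i,d}(u) on a knot vector;
    a direct transcription of the recursion in [6,7]."""
    if d == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    denom = knots[i + d] - knots[i]
    if denom > 0:
        left = (u - knots[i]) / denom * bspline_basis(i, d - 1, u, knots)
    denom = knots[i + d + 1] - knots[i + 1]
    if denom > 0:
        right = (knots[i + d + 1] - u) / denom * bspline_basis(i + 1, d - 1, u, knots)
    return left + right

# Uniform cubic B-splines: the four splines covering (0, 0.25) sum to one.
knots = np.linspace(-3, 4, 8) / 4.0      # illustrative uniform knot vector
total = sum(bspline_basis(i, 3, 0.1, knots) for i in range(4))
assert abs(total - 1.0) < 1e-9           # partition of unity on (0, 0.25)
```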
2.2 Least Squares Fitting

Given a set of sampled data consisting of M points and a set of user-selected parametric values for each point, one can construct the linear system $\bar{S} = B\bar{P}$, where $\bar{S}$ is a column vector composed of the M data points, B is the sparse matrix of size M × N consisting of the B-spline values at the user-specified parametric values (commonly called the observation matrix), and the column vector $\bar{P}$ represents the unknown N control point values.
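A sketch of this conventional construction for a curve is given below; `basis(i, u)` is assumed to evaluate the i-th B-spline at parameter u (e.g. the Cox-de Boor sketch above), and the rank check is one way the instability discussed next can surface.

```python
import numpy as np

def fit_least_squares(points, params, n_ctrl, basis):
    """Classical least-squares B-spline curve fit (the approach our method
    replaces). points: (M, dim) data samples; params: (M,) parametric
    values; basis(i, u) is an assumed B-spline evaluator."""
    M = len(points)
    B = np.zeros((M, n_ctrl))                 # observation matrix
    for r, u in enumerate(params):
        for i in range(n_ctrl):
            B[r, i] = basis(i, u)
    # Solve S = B P in the least-squares sense; rank deficiency signals a
    # violation of the Schoenberg-Whitney conditions discussed below.
    ctrl, _, rank, _ = np.linalg.lstsq(B, np.asarray(points), rcond=None)
    if rank < n_ctrl:
        raise np.linalg.LinAlgError("underdetermined system: data placement "
                                    "violates the Schoenberg-Whitney conditions")
    return ctrl
```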
Fig. 1. (a) The original hypocycloid curve. (b) The 16 sample points of the hypocycloid.
For least squares fitting, one must ensure that the Schoenberg-Whitney conditions (discussed in [8]) are satisfied. These conditions concern the placement of the data within the parametric domain. Violation leads to instability in the system. This is illustrated with the following example. Suppose we wish to fit a B-spline curve to the hypocycloid curve shown in Figure 1(a) using the 16 sample points of the curve shown in Figure 1(b). This curve can be described parametrically by the following set of equations:
(3) (4)
where 0 ≤ φ < 2π. The reconstructed curve using least squares fitting is shown in Figure 2(a) whereas fitting using our method is shown in Figure 2(b). Note that the least squares fitting scenario involves an underdetermined system which violates the previously specified conditions. Fortunately, the method we propose does not suffer from these inherent difficulties.
3 Generalized B-Spline Fitting

Given the limitations associated with least squares fitting using B-splines, we present our generalized n-D B-spline fitting algorithm which is a significant extension of the surface fitting algorithm presented in [3].
Fig. 2. Reconstruction of the hypocycloid (Figure 1(a)) using a periodic cubic B-spline curve of 32 control points. Visual comparison of the resulting solutions using (a) least squares and (b) our proposed method demonstrates the advantages of our method.
3.1 Extension to Multivariate B-Spline Objects with Confidence Values

For a single isolated data point one can solve for the values of the surrounding subset of control points such that the B-spline object interpolates that point. Note that only a subset of control points affects the single data point due to the locality property of B-splines. Since such a system is underdetermined, the constraints inherent in the pseudoinverse produce the solution which minimizes the sum of the squares of the control point values. The solution for the multivariate case (Equation (2)) is acquired from
$$P_{i_1,\ldots,i_n} = \frac{S_c \prod_{j=1}^{n} B_{i_j,d_j}(u_j^c)}{\sum_{k_1=1}^{d_1+1} \cdots \sum_{k_n=1}^{d_n+1} \prod_{j=1}^{n} B_{k_j,d_j}^2(u_j^c)} \quad (5)$$

where $(u_1^c, \ldots, u_n^c)$ are the assigned parametric values for the c-th data point $S_c$. The denominator is calculated from the subset of control points which have non-zero B-spline values for the point $S_c$. With irregularly placed data, it is likely that multiple data points will correspond to overlapping control points. In contrast to isolated data points, such a scenario requires finding a solution which provides an approximation to this subset of data points. Specifically, suppose that a single control point is influenced by M neighboring data points. Lee et al. chose a criterion which minimizes the sum of the error for each of the M data points assuming that each data point determines the control point values separately. We generalize this criterion and add a confidence term, δ, for each of the data points such that the new criterion is

$$e(P_{i_1,\ldots,i_n}) = \sum_{m=1}^{M} \delta_m \left( P_{i_1,\ldots,i_n} \prod_{j=1}^{n} B_{i_j,d_j}(u_j^m) - P_m \prod_{j=1}^{n} B_{i_j,d_j}(u_j^m) \right)^2 \quad (6)$$

Minimization of this criterion leads to the solution for $P_{i_1,\ldots,i_n}$:

$$P_{i_1,\ldots,i_n} = \frac{\sum_{m=1}^{M} \delta_m P_m \prod_{j=1}^{n} B_{i_j,d_j}^2(u_j^m)}{\sum_{m=1}^{M} \delta_m \prod_{j=1}^{n} B_{i_j,d_j}^2(u_j^m)} \quad (7)$$
(7)
80
N.J. Tustison and J.C. Gee
3.2 Extension to Arbitrary Order The original formulation in [3] restricted discussion to cubic B-splines. Such a restriction is a natural choice since it can be shown that a cubic curve which interpolates a set of data points also minimizes its curvature [9]. Also, many algorithms restrict themselves to cubic splines (e.g. [10]). However, some algorithms, such as [11], demonstrate a preference for non-cubic B-splines. Although, for our implementation, the default spline is cubic, we allow for B-spline polynomials of arbitrary degree. 3.3 Wrapping of the Parametric Domain Since B-splines are often used to construct closed curves and other closed objects, e.g. cylindrical and toroidal structures, we accommodate these structures in our algorithm. B-splines of this type are often referred to as periodic. In constructing the periodic Bspline object, one simply constrains the first d control points in a given dimension to be identical to the last d control points. Algorithmically, this adds some simple bookkeeping requirements for the algorithm. 3.4 Multilevel Fitting for n-D C k B-Spline Objects For greater accuracy, a multilevel approach was proposed in the original algorithm. We extend this approach to arbitrary dimension and arbitrary spline degree. Due to computational efficiency and its ubiquity in other multiresolution algorithms, each level is characterized as having twice the grid resolution of the previous level. In ascending to the next higher level of higher resolution, the initial step requires calculating the new control point values which produce the same B-spline object with a higher control point resolution. Unfortunately, the description in [3] only provides the coefficients for calculating the higher resolution cubic B-spline surface from the lower resolution surface. Discussion in [12] describes the derivation of the coefficients for both the univariate and bivariate cubic B-spline case. We present a brief discussion of doubling the resolution for multivariate B-spline objects of arbitrary order. At a given resolution level, the B-splines are simply scaled versions of the B-splines (u) = Bi,d (2u) where B is the B-spline at the at the previous value such that Bi,d higher level. Thus, for each polynomial span in 1-D, finding the control point values entails first formulating the following equality: d+1
Pi Bi,d (u) =
i=1
d+1
Pi Bi,d (2u)
(8)
i=1
Grouping the monomials of the separate B-splines on both sides of the equality and setting like groups of terms equal to each other, we obtain the following linear system: ⎤⎡ ⎤ ⎡ 0 ⎤⎡ ⎤ ⎡ P1 P1 b1,1 . . . b1,d+1 2 b1,1 . . . 20 b1,d+1 ⎥ ⎢ .. ⎥ ⎢ ⎥ ⎢ .. ⎥ ⎢ .. .. . . . . . . . . (9) ⎦⎣ . ⎦ = ⎣ ⎦⎣ . ⎦ ⎣ . . . . . . bd+1,1 . . . bd+1,d+1
Pd+1
2d bd+1,1 . . . 2d bd+1,d+1
Pd+1
where it is assumed that the B-splines are expressed in polynomial form, i.e. $B_{i,d}(u) = b_{d+1,i} u^d + b_{d,i} u^{d-1} + \ldots + b_{1,i}$. This relatively small system is easily solved using standard matrix routines. Since each uniform B-spline is simply a parametric translation of a single representative B-spline function, and since each B-spline is symmetric with respect to its maximum value, the matrix obtained from multiplying the inverse of the B-spline coefficient matrix on the left side of Equation (9) with the B-spline coefficient matrix on the right side consists of "translated" row pairs. Therefore, one can simply use the top two rows of coefficients of the resulting matrix product to calculate the control point values at the given level from the control point values at the previous level. Extending this formulation to the multivariate case is fairly straightforward. One
Fig. 3. Deformation fields in the lungs derived from a quadratic 4-D B-spline object at (a) t = 0, (b) t = 0.25, (c) t = 0.5, and (d) t = 0.75, where t ∈ [0, 1] is the normalized time point in the respiratory cycle
simply calculates the tensor product of the relevant row of coefficients for each dimension. The coefficients for the multivariate (n-D) case are given by the elements of the n-tensor

$$T_n = C_1 \otimes \ldots \otimes C_n \quad (10)$$
where $C_i$ is one of the two rows of coefficients for the i-th dimension and ⊗ denotes the outer or tensor product between two vectors. Note that different dimensions might employ different degrees of B-splines. Similarly, moving to the next level might require refinement in one dimension but not in another. These two cases are also handled by our algorithm.
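The following sketch computes the tensor $T_n$ of equation (10); the cubic doubling rows shown are the classic 1/8-weight refinement masks, used here purely for illustration.

```python
import numpy as np

def refinement_tensor(coeff_rows):
    """Eq. (10): T_n = C_1 (x) ... (x) C_n, the outer product of the chosen
    row of doubling coefficients for each parametric dimension. Dimensions
    not refined at this level contribute a trivial row [1.0]."""
    T = np.asarray(1.0)
    for C in coeff_rows:
        T = np.multiply.outer(T, np.asarray(C))
    return T

# Illustrative cubic-B-spline doubling rows (the two row pairs that the
# linear system of Eq. (9) yields): the classic 1/8 subdivision weights.
even = np.array([1, 6, 1]) / 8.0
odd = np.array([4, 4]) / 8.0
T2 = refinement_tensor([even, odd])   # a 3 x 2 coefficient block for 2-D
```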
4 Applications: Extracting Lung Deformation Using Landmarks

To demonstrate the applicability of our methodology, we apply our B-spline algorithm to calculating smooth displacement fields in the lungs of a normal human volunteer using 21 landmarks placed by experts in 4-D computed tomography (CT) image data [13]. These landmarks are derived from canonical branching points of the airway and vascular tree. They are subsequently tracked over the entire respiratory cycle (inspiration to inspiration) consisting of 10 frames with initial inspiration being the reference time point. Using the displacement information from these 21 landmarks, we fit a 4-D B-spline object to the 4-D data. This B-spline object is characterized by quadratic B-splines in the spatial dimensions and quartic B-splines in the temporal dimension. We used 4 spatial hierarchical levels and 2 temporal hierarchical levels with the minimum level consisting of 3 × 3 × 3 × 5 distinct control points. This B-spline object is wrapped in the temporal dimension to simulate the cyclical nature of the data. This allows us to not only interpolate spatially but temporally as well. Figure 3 contains four snapshots of the deformation field at distinct time points of the respiratory cycle.
5 Discussion

The extensions to the original algorithm of Lee et al. presented in this paper generalize the cubic bivariate B-spline surface fitting algorithm that those authors developed. These extensions include generalization to n dimensions and arbitrary degree of B-spline. Additionally, we allow for wrapping of the parametric domain for closed structures. Some of these extensions were demonstrated by applying our methodology to lung image and landmark data. For those that are interested in this work, we plan to submit this algorithm for inclusion in the Insight Toolkit.

Acknowledgments. We thank Tessa Sundaram for providing the lung landmark data.
References
1. Les Piegl and Wayne Tiller, The NURBS Book, Springer, 1997.
2. R. F. Riesenfeld, Applications of B-Spline Approximation to Geometric Problems of Computer-Aided Design, Ph.D. thesis, Syracuse University, 1975.
3. Seungyong Lee, George Wolberg, and Sung Yong Shin, “Scattered data interpolation with multilevel B-splines,” IEEE Transactions on Visualization and Computer Graphics, vol. 3, no. 3, pp. 228–244, 1997.
4. Luis Ibanez, Will Schroeder, Lydia Ng, and Josh Cates, The ITK Software Guide, Insight Software Consortium, 2nd edition, November 2005.
5. Carl de Boor, Fundamental Developments of Computer-Aided Geometric Modeling, chapter B-spline basics, pp. 27–49, American Press, 1993.
6. M. G. Cox, “The numerical evaluation of B-splines,” Jour. Inst. Math. Applic., vol. 10, pp. 134–149, 1972.
7. C. de Boor, “On calculating with B-splines,” Jour. Approx. Theory, vol. 6, pp. 50–62, 1972.
8. Weiyin Ma and J. P. Kruth, “Parameterization of randomly measured points for least squares fitting of B-spline curves and surfaces,” Computer-Aided Design, vol. 27, pp. 663–675, 1995.
9. I. J. Schoenberg, “Spline functions and the problem of graduation,” Proc. Nat. Acad. Sci., vol. 52, pp. 947–950, 1964.
10. Nicholas J. Tustison, Victor G. Davila-Roman, and Amir A. Amini, “Myocardial kinematics from tagged MRI based on a 4-D B-spline model,” IEEE Trans Biomed Eng, vol. 50, no. 8, pp. 1038–1040, Aug 2003.
11. Nicholas J. Tustison and Amir A. Amini, “Biventricular myocardial strains via nonrigid registration of anatomical NURBS model [corrected],” IEEE Trans Med Imaging, vol. 25, no. 1, pp. 94–112, Jan 2006.
12. Oyvind Hjelle, “Approximation of scattered data with multilevel B-splines,” Tech. Rep., 2001.
13. T. Li, E. Schreibmann, B. Thorndyke, G. Tillman, A. Boyer, A. Koong, K. Goodman, and L. Xing, “Dose reduction in 4D computed tomography,” Medical Physics, vol. 32, pp. 3650–3659, 2005.
Improved Shape Modeling of Tubular Objects Using Cylindrical Parameterization

Toon Huysmans¹, Jan Sijbers¹, Filiep Vanpoucke², and Brigitte Verdonk³

¹ VisionLab, University of Antwerp (CDE), 2610 Antwerp, Belgium
  [email protected]
² Medelec, University of Antwerp (CDE), 2610 Antwerp, Belgium
³ Emerging Computational Techniques, University of Antwerp (CMI), 2020 Antwerp, Belgium
Abstract. Statistical shape modeling is widely used for medical image segmentation and interpretation. The main problem in building a shape model is the construction of a pointwise correspondence between the training objects. Manually corresponding objects is a subjective and time-consuming task. Fortunately, surface parameterization can be automated and it has been successfully used. Mostly, the objects are of spherical nature such that spherical parameterization can be employed. However, for tubular objects, this method falls short. In this paper, a cylindrical parameterization technique is proposed and compared to spherical parameterization. As an application, both methods are applied to establish correspondences for a set of tympani scali of human cochleas and the quality of the models built from these correspondences is assessed. Keywords: statistical shape modeling, mesh parameterization, tubular objects.
1 Introduction

Statistical shape models are able to capture the shape present in a set of example objects. They have a wide range of applications, for example, the quantification of shape differences between groups or bone modeling for prosthesis design. In [1], a shape model was employed for image segmentation, which probably is its most well known application. In order to model a certain object class, a representative set of object surfaces is required. Typically, the objects are obtained from a representative set of 3D images. A surface representation of each object can be obtained by segmentation of the object of interest in each of the images. The most challenging task in building a shape model is the construction of a pointwise correspondence between the objects in the set. A well known method to establish correspondences is surface parameterization. Parameterization assigns a unique 2D parameter value to each point of the surface; equal parameter values then define corresponding points between the surfaces. Usually, sets of spherical objects are encountered and spherical parameterization is used (for example [2]). But for tubular or very elongated objects, spherical
parameterization can produce bad correspondences. For building correspondences for this kind of object, we propose the use of our recently developed cylindrical parameterization technique [3]. Both our cylindrical and spherical parameterization techniques will be explained in Section 2. In Section 3, correspondences for a set of tympani scali are built, using cylindrical and spherical parameterization. As will turn out, these surfaces can be parameterized on the cylinder with less distortion, and it is therefore easier to find a good correspondence.
2 Methods

In this section, the proposed pipeline for shape modeling of cylindrical or spherical objects is outlined. The input to the pipeline is a set of training objects $O = \{O_i\}$ with $i = 1 \ldots N_O$. Each training object $O_i = (V_i, T_i)$ consists of a set of vertices $V_i = \{v_1^i, \ldots, v_{N_{V_i}}^i\}$ with $v_j^i \in R^3$ and a set of triangles $T_i = \{t_1^i, \ldots, t_{N_{T_i}}^i\}$. Each triangle $t^i \in T_i$ is defined as the convex hull of three vertices from $V_i$. The surface of the object $O_i$ is represented by the union of all triangles in $T_i$ and defines a piecewise linear 2-manifold in $R^3$, denoted $M_i = \bigcup_{j=1}^{N_{T_i}} t_j^i$. The pipeline can be summarized as follows: as a first step, the objects in O are put in a common reference coordinate system using object alignment. Next, a correspondence is built between each two objects in O. This correspondence is established using parameterization on the cylinder or the sphere, followed by an alignment of the parameterizations. Finally, using this correspondence, the shape is modeled and, as a result, a parameterized shape M(b) is obtained that expresses the shape present in O as a function of a parameter vector $b = (b_1, \ldots, b_t)$. The following subsections explain these steps in detail.

2.1 Object Alignment
Object alignment removes the differences in position, rotation, and scale between the objects, leaving only the differences in shape. We use the iterative closest point method (ICP) [4] for this purpose. The ICP algorithm is an iterative approach to estimate the transformation that best aligns one object with another in terms of the mean squared error. We allow for similarity transformations, i.e. rotation, translation and scale. In order to align a group of objects, first a master object $O_m \in O$ that has an average shape is chosen manually. Then, using the ICP method, a transformation $T_i$ is calculated for each object $O_i \in O \setminus O_m$ that aligns it with the master object $O_m$. By applying these transformations to the respective objects, a group of aligned objects $O^* = \{O_i^*\}$ is obtained with $O_i^* = (T_i[V_i], T_i)$ for $i = 1 \ldots N_O$ and $T_m$ the identity transform.
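A compact sketch of this alignment step is given below; it pairs vertices by nearest neighbor and solves the similarity Procrustes problem in closed form. The fixed iteration count and the SciPy k-d tree are implementation assumptions, not details prescribed by the method.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_similarity(source, target, n_iter=50):
    """ICP sketch aligning `source` to `target` (both (N, 3) vertex arrays)
    with a similarity transform (rotation R, scale s, translation t);
    returns the aligned source vertices."""
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(n_iter):
        # Pair every source vertex with its closest target vertex.
        _, idx = tree.query(src)
        matched = target[idx]
        # Closed-form similarity Procrustes for the current pairing.
        mu_s, mu_m = src.mean(0), matched.mean(0)
        A, B = src - mu_s, matched - mu_m
        U, S, Vt = np.linalg.svd(A.T @ B)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                      # rotation (no reflection)
        s = np.trace(np.diag(S) @ D) / np.sum(A ** 2)
        t = mu_m - s * R @ mu_s
        src = s * (R @ src.T).T + t
    return src
```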
2.2 Correspondence
The modeling process requires that each object is represented by a corresponding set of points. One possibility is to define a set of points on the surface Mm of master object Om and find the corresponding set of points on the surface of each other object in O. Instead of defining a correspondence for a single set of points,
Fig. 1. The relation between the different surfaces, correspondences and parameterizations when the object set O consists of spherical objects. S is the sphere; it is parameterized on [0, 1]² by longitude (u) and latitude (v), and we denote this parameterization with $\phi_S$. $M_i$ and $M_j$ are the surfaces of the two objects $O_i$ and $O_j \in O$. The correspondences between the sphere S and the surfaces $M_i$ and $M_j$ are denoted by $\gamma_{S \to M_i}$ and $\gamma_{S \to M_j}$. The composition of the parameterization $\phi_S$ of the sphere together with the correspondences $\gamma_{S \to M_i}$ or $\gamma_{S \to M_j}$ gives us the parameterizations $\phi_{M_i}$ and $\phi_{M_j}$ of the surfaces $M_i$ and $M_j$ onto [0, 1]². The correspondence $\gamma_{M_i \to M_j}$ is found by composition of $\gamma_{S \to M_j}$ and $\gamma_{S \to M_i}^{-1}$.
we choose to define a dense correspondence, as for example in [2]; this allows the flexibility of choosing an arbitrary set of points in the modeling stage. A dense correspondence is defined between every two surfaces $M_i, M_j \in O$ and is denoted with $\gamma_{M_i \to M_j}$. For the correspondence to be valid, $\gamma_{M_i \to M_j}$ should be a homeomorphism, i.e. it should be bijective, continuous and have a continuous inverse. We do not define the correspondences $\gamma_{M_i \to M_j}$ directly, but instead use a set of correspondences $\gamma_{D \to M_i}$ between each surface $M_i$ and a simple surface D of the same topology. For the case where O is composed of spherical objects, the sphere S is a suitable domain and thus we use D = S. Similarly, objects of cylindrical topology are corresponded with the cylinder, i.e. D = C. Once the correspondences $\gamma_{D \to M_i}$ are obtained, the correspondences between the objects $M_i$ can be found by composition: $\gamma_{M_i \to M_j} = \gamma_{D \to M_j} \circ \gamma_{D \to M_i}^{-1}$. In this work, the correspondences $\gamma_{D \to M_i}$ are found by mesh parameterization and this is explained in the following paragraphs. A schematic view of the parameterization and correspondence functions for the spherical case can be found in Fig. 1.

Spherical Parameterization. When the set O is composed of objects that have spherical topology, a correspondence $\gamma_{S \to M_i}$ between the sphere S and each surface $M_i$ needs to be determined. Such a correspondence can be found by deforming the surface $M_i$ to the sphere S. If this deformation maps the point $p \in M_i$ to $q \in S$, then obviously $\gamma_{S \to M_i}(q) = p$ and thus a correspondence is defined. This process of deformation to the sphere is called spherical parameterization. Note that for $\gamma_{S \to M_i}$ to be a homeomorphism, the parameterization may not result in
overlapping triangles. Also, the surface must be parameterized to the sphere in a consistent way for the correspondence $\gamma_{S \to M_i}$ to be useful, i.e. similar parts of similar surfaces should be deformed to the same portion of the sphere. This can be achieved by minimizing the angle and area distortion while deforming the surface. In this way, a consistent deformation is obtained up to three rotations of the sphere; this correspondence is denoted by $\hat{\gamma}_{S \to M_i}$. The three degrees of freedom are removed later in the parameterization alignment stage. We define the correspondence $\hat{\gamma}_{S \to M_i}$ only for the vertices $v_j^i$ of $M_i$, i.e. $\hat{\gamma}_{S \to M_i}(\cdot) = v_j^i$ for $j = 1 \ldots N_{V_i}$. The correspondence for all other points of $M_i$ can then be found by barycentric interpolation. The method used in this work calculates the deformation in a progressive manner, similar to the work of Praun and Hoppe [5]. The method consists of three steps: as a first step, the progressive mesh of the surface $M_i$ is built, using a quadratic error criterion [6]. Since the surface $M_i$ is of spherical topology, the base mesh of the progressive mesh is a tetrahedron. In the second step, this tetrahedron is deformed into the sphere S by equally separating its vertices on the sphere. The final step adds the vertices back to the mesh, one by one, and optimizes their position on the sphere in order to minimize the area and angle deformation. The optimization of each vertex is restricted to the kernel of its one-ring in order to avoid overlapping triangles. Once all vertices are restored, we have obtained a deformation of $M_i$ into the sphere S and with it the correspondence $\hat{\gamma}_{S \to M_i}$. For a visualization of these steps see Fig. 2.

The objective in the optimization of the last step is to minimize the angle and area distortion introduced by the deformation to the sphere. High area distortion will result in under- or oversampling of certain surface regions. However, one should not optimize for area distortion only, because there can be multiple optima [7] and often very long and thin triangles are produced which can cause numerical problems. Several objectives have been proposed that use a trade-off between area and angle distortion; see [7] for a survey. Distortion can be measured using the singular values $\sigma_{min}$ and $\sigma_{max}$ of the Jacobian of the map $f_{t_j^i}: R^2 \to R^2$,
where $f_{t_j^i}$ is the function that maps the triangle $t_j^i$ of $M_i$, described in a local 2D coordinate system, to its deformed version on the sphere, also in a local 2D coordinate system. The values $\sigma_{max}$ and $\sigma_{min}$ are the maximal and minimal directional distortion that the deformation introduced to the triangle. The objective used in this work was introduced in [8] and allows some control (through λ) over the angle/area trade-off:

$$\left( \frac{\sigma_{max}}{\sigma_{min}} + \frac{\sigma_{min}}{\sigma_{max}} \right) \left( \sigma_{max}\sigma_{min} + \frac{1}{\sigma_{max}\sigma_{min}} \right)^{\lambda} \quad (1)$$

Cylindrical Parameterization. When the object set O is composed of surfaces of cylindrical topology, a correspondence $\gamma_{C \to M_i}$ between the cylinder C and the surface of each object of O needs to be constructed. This is done using cylindrical parameterization. Similar to the spherical case, the surface $M_i$ is deformed to the cylinder in a consistent way and a correspondence is obtained up to a rotation around the axis of the cylinder.
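The per-triangle distortion measure, combined as in (1) above, could be evaluated as sketched below; the local 2D triangle coordinates are assumed to be given, and λ = 2 matches the setting used for the clavicle example in Fig. 3.

```python
import numpy as np

def triangle_distortion(p2d, q2d, lam=2.0):
    """Singular-value distortion of the linear map f taking triangle p2d to
    q2d (each 3 x 2, in local 2-D coordinates), combined as in Eq. (1)."""
    # Jacobian of the affine map defined by the two edge vectors.
    P = np.column_stack([p2d[1] - p2d[0], p2d[2] - p2d[0]])
    Q = np.column_stack([q2d[1] - q2d[0], q2d[2] - q2d[0]])
    J = Q @ np.linalg.inv(P)
    s_max, s_min = np.linalg.svd(J, compute_uv=False)
    angle_term = s_max / s_min + s_min / s_max       # angle distortion
    area_term = s_max * s_min + 1.0 / (s_max * s_min)  # area distortion
    return angle_term * area_term ** lam             # lam trades the two off
```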
Fig. 2. The three phases of the spherical (top) and cylindrical (bottom) parameterization algorithm: surface decimation through consecutive edge collapses, parameterization of the base mesh, and refinement of the parameterization through vertex splits and vertex position optimization
The method used to deform a surface to the cylinder is in fact similar to the spherical case. It also consists of three steps:
– First, the progressive mesh is built by successively collapsing edges. The base mesh of a cylindrical surface is an open prism with a triangular base, i.e. it has three vertices on each boundary.
– Then, the obtained prism is deformed to the cylinder by spacing the vertices of each boundary equally on the cylinder.
– Finally, the previously removed vertices are added back to the surface using vertex splits. In this process, the positions of the vertices are optimized on the cylinder in order to minimize the introduced angle and area distortion. Again, the vertices are kept within the kernel of their one-ring during optimization, which avoids overlapping triangles. The objective used in this optimization is the same as in the spherical case.
Once all vertices are added, the deformation of Mi to the cylinder C is obtained and thus also γˆC→Mi. This cylindrical parameterization process is visualized in Fig. 2. For a more detailed explanation, we refer to our previous work on cylindrical parameterization [3]. Parameterization Alignment. Using the above parameterization techniques, a set of correspondences γˆD→Mi is obtained, which defines the correspondences γD→Mi up to one (D = C) or three (D = S) rotations. These degrees of
freedom are eliminated using optimization. We define the optimal rotations as those that best align the correspondences in terms of the mean squared error between corresponding point sets.
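The paper only states that the rotations minimizing the mean squared error between corresponding point sets are chosen; a standard solver for this is the SVD-based Kabsch solution of the orthogonal Procrustes problem, sketched below for the spherical (three-degree-of-freedom) case. For the cylinder, the same idea reduces to a one-parameter search over the rotation angle about the axis.

```python
# Hedged sketch (not the authors' code): least-squares rotation aligning
# corresponded point sets P and Q, each of shape (n, 3), so that Q ~ P @ R.T.
import numpy as np

def best_rotation(P, Q):
    """Rotation R (3x3) minimizing the MSE between R @ p_i and q_i."""
    H = P.T @ Q
    U, _, Vt = np.linalg.svd(H)
    # correct an improper (reflection) solution if necessary
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T

rng = np.random.default_rng(0)
P = rng.standard_normal((100, 3))
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)       # 30-degree rotation about z
R_true = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
Q = P @ R_true.T
print(np.allclose(best_rotation(P, Q), R_true))   # True
```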
2.3 Modeling
Using the correspondences γD→Mi, i = 1 . . . NO, the shape present in the group O can now be modeled. As in [9], principal component analysis (PCA) is used for this task. Each object can be represented by a finite set of corresponding points, simply by uniformly sampling the correspondence γD→Mi at a fixed set of Np points of D. Then, by concatenating the xyz-values of these samples, each object Mi can be represented by a single vector xi. PCA of the covariance matrix

$$\frac{1}{N_p N_O}\sum_{i=1}^{N_O}(x_i-\bar{x})(x_i-\bar{x})^T$$

results in a set of eigenvectors pj and eigenvalues λj. We thus obtain a parameterized shape M(b) = x̄ + Pb, with P = (p1|p2| . . . |pt) the concatenation of the first t eigenvectors and b the vector of parameters that defines the shape.
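A compact sketch of this model construction follows (standard active-shape-model machinery after Cootes et al. [9]); note that the eigenvalue normalization below uses 1/NO rather than the paper's 1/(Np NO) factor, which only rescales the λj.

```python
# Sketch of the PCA shape model: X holds one row per object, each row the
# concatenated xyz-coordinates of the Np sampled correspondence points.
import numpy as np

def build_pca_model(X, t):
    """X: (N_O, 3*Np) matrix of corresponded shapes; keep t modes."""
    xbar = X.mean(axis=0)
    # eigen-decomposition of the covariance via SVD of the centred data
    U, S, Vt = np.linalg.svd(X - xbar, full_matrices=False)
    eigvals = S ** 2 / X.shape[0]     # lambda_j (up to the normalization factor)
    P = Vt[:t].T                      # (3*Np, t) matrix of modes p_j
    return xbar, P, eigvals[:t]

def synthesize(xbar, P, b):
    """M(b) = xbar + P b."""
    return xbar + P @ b

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 3 * 5000))   # e.g. 6 objects, 5000 samples each
xbar, P, lam = build_pca_model(X, t=3)
shape = synthesize(xbar, P, np.sqrt(lam) * np.array([1.0, 0.0, 0.0]))
```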
3 Results
Fig. 3 shows the cylindrical and the spherical parameterization of a human clavicle. The iso-parameter lines of the cylindrical parameterization are very smooth, in contrast with those of the spherical parameterization. The high distortion of the spherical parameterization results in long, thin triangles on the sphere; this makes the parameterization harder to optimize, and convergence to a local minimum may result. We have also applied both correspondence algorithms to a set of six scalae tympani of human cochleas. The six scalae were segmented manually from micro-CT image data, and the first 1.5 turns of each scala, starting from the cochlear duct, were
Fig. 3. A cylindrical (top) and a spherical (bottom) parameterization of the human clavicle. The clavicle has spherical topology and prior to cylindrical parameterization it was made cylindrical by puncturing it at both poles of the spherical parameterization. We used λ = 2 in the distortion formula (1).
[Fig. 4 here: three panels — compactness (cumulative variance), specificity (sum of squared errors) and generalizability (sum of squared errors) — each plotted against the number of modes.]
Fig. 4. Compactness, specificity and generalizability properties of the spherical (dashed) and cylindrical (solid) models of the six scalae tympani
Fig. 5. The first three modes of the cylindrical and the spherical model, visualized by a positive (right) and a negative (left) offset of one standard deviation from the mean. The surfaces of each mode are color-coded by the amount of variation within that mode. Note that the color map is scaled differently for each model and each mode.
selected as the object of interest. Prior to parameterization, the objects were decimated from ≈ 500k faces to 20k faces, which resulted in an RMS error < 0.3 pixels. The scalae were parameterized smoothly on the cylinder, and this also resulted in a good model. Parameterizing the scalae on the sphere caused high distortion, and the resulting correspondences and model were not satisfactory. This can be seen from Fig. 5, which shows the first three modes of the cylindrical and the spherical model. The models were built using 5000 samples, uniformly distributed over the parameterization domain. The quality of the spherical and cylindrical models can be compared quantitatively using three model properties: (1) the compactness of a model, measured by the cumulative variance; (2) the specificity, which measures how much random instances of the model differ from the training objects; and (3) the generalizability, which tells us how well a model generalizes to unseen members of a class. The generalizability is measured by fitting a leave-one-out version of the model to the object that was
left out. It can be seen from Fig. 4 that the cylindrical model performs better on all three properties.
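For reference, here are hedged sketches of the three measures, following common shape-model practice — the paper does not spell out its exact formulas.

```python
# X: (N_O, 3*Np) matrix of corresponded training shapes, one row per object.
import numpy as np

def compactness(eigvals):
    """Cumulative variance captured by the first m modes, for each m."""
    return np.cumsum(eigvals)

def generalizability(X, t):
    """Leave-one-out sum of squared errors of a t-mode model fitted to the
    left-out shape, averaged over the training set."""
    errs = []
    for i in range(X.shape[0]):
        train = np.delete(X, i, axis=0)
        xbar = train.mean(axis=0)
        _, _, Vt = np.linalg.svd(train - xbar, full_matrices=False)
        P = Vt[:t].T
        r = X[i] - xbar
        errs.append(np.sum((r - P @ (P.T @ r)) ** 2))   # projection residual
    return np.mean(errs)

def specificity(X, xbar, P, eigvals, n=1000, seed=0):
    """Mean squared distance of random model instances to the nearest
    training shape (smaller = more specific)."""
    rng = np.random.default_rng(seed)
    dists = []
    for _ in range(n):
        b = rng.standard_normal(P.shape[1]) * np.sqrt(eigvals)
        x = xbar + P @ b
        dists.append(min(np.sum((x - xi) ** 2) for xi in X))
    return np.mean(dists)
```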
4 Discussion and Conclusions
We have compared spherical and cylindrical parameterization for building correspondences for a set of tubular objects. We observed that the parameterization on the cylinder introduced less distortion than parameterization on the sphere. We have also compared the models built from these correspondences, and in the application under consideration the cylindrical model is more compact, more specific, and also generalizes better. For future work, we would like to compare both of our parameterization methods with the method of Brechbühler et al. [2]. We would also like to use our cylindrical method to build shape models for larger sets of tubular objects. Acknowledgements. This work was financially supported by the Institute for the Promotion of Innovation through Science and Technology in Flanders (IWT-Vlaanderen) and the Fund for Scientific Research (F.W.O.)-Flanders, Belgium.
References
1. Kelemen, A., Szekely, G., Gerig, G.: Elastic model-based segmentation of 3-D neuroradiological data sets. IEEE Trans. Med. Imaging 18(10) (1999) 828–839
2. Brechbühler, C., Gerig, G., Kübler, O.: Parametrization of closed surfaces for 3-D shape description. Comput. Vis. Image Underst. 61(2) (1995) 154–170
3. Huysmans, T., Sijbers, J., Verdonk, B.: Parameterization of tubular surfaces on the cylinder. Journal of WSCG 13(3) (2005) 97–104
4. Besl, P.J., McKay, N.D.: A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 14(2) (1992) 239–256
5. Praun, E., Hoppe, H.: Spherical parametrization and remeshing. ACM Trans. Graph. 22(3) (2003) 340–349
6. Garland, M., Heckbert, P.S.: Surface simplification using quadric error metrics. In: SIGGRAPH '97: Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, New York, NY, USA, ACM Press (1997) 209–216
7. Floater, M.S., Hormann, K.: Surface parameterization: a tutorial and survey. In: Advances in Multiresolution for Geometric Modelling. Springer Verlag (2005) 157–186
8. Degener, P., Meseth, J., Klein, R.: An adaptable surface parameterization method. In: Proceedings of the 12th International Meshing Roundtable (2003) 201–213
9. Cootes, T.F., Taylor, C.J., Cooper, D.H., Graham, J.: Active shape models: their training and application. Comput. Vis. Image Underst. 61(1) (1995) 38–59
Role of 3T High Field BOLD fMRI in Brain Cortical Mapping for Glioma Involving Eloquent Areas Tao Jiang1, Zixiao Li1, Shouwei Li2, Shaowu Li1, and Zhong Zhang1 1 Glioma Center, Beijing Tiantan Hospital Affiliated to Capital University of Medical Sciences, Beijing 100050, P.R. China [email protected] 2 Neurosurgical Department, No.1 Hospital of Harbin Medical University, Harbin 150001, P.R. China
Abstract. This study included 26 patients with glioma involving motor cortical areas and assessed the value of preoperative blood oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI) at 3T high field for brain cortical mapping. With the guidance of the preoperative maps, all patients underwent microsurgery under awake anaesthesia, with the motor areas identified by direct electrical stimulation. The tumor was resected maximally while preserving the eloquent areas. Preoperative mapping succeeded in twenty-three patients; the primary motor cortex, premotor area, supplementary motor area and cerebellar motor-related areas were localized. There was good correlation between the preoperative fMRI and the intraoperative cortical stimulation results. BOLD-fMRI at 3T high field can non-invasively localize the brain motor cortex and show its relationship to the lesion before operation, which may help neurosurgeons to optimize the surgical plan, guide intraoperative motor area mapping, and resect the tumor while preserving the eloquent brain areas.
1 Introduction
The principle of neurosurgical treatment for supratentorial glioma in or close to the sensorimotor cortices has changed from total tumor resection to excising the glioma as much as possible while preserving the brain's critical eloquent domains. It is inappropriate at present to locate the primary motor center (precentral gyrus) using anatomic landmarks such as the "hand knob" [1] in conventional MRI, because of individual differences, tumor mass effects, and functional displacement by nearby cortex. BOLD-fMRI can individually and non-invasively map eloquent areas before operation, but the result may differ from the true eloquent cortices [2]. As the gold standard, intraoperative direct electrostimulation can locate the eloquent domains, but at the cost of rather long operation times [3]. This study used preoperative fMRI to optimize the surgical approach and to guide intraoperative direct electrostimulation in locating the motor cortical areas.
2 Materials and Methods
2.1 Objects
26 glioma patients with tumors involving brain motor areas, including 19 male and 6 female cases, were admitted between Oct. 2004 and Apr. 2005 to the Glioma Center of Tiantan Hospital, Beijing. The patients' average age was 40.8 years.
2.2 Image Data Acquisition
All magnetic resonance images were obtained on a Siemens 3.0T scanner (Siemens Trio) with gradient strength 40 mT/m and slew rate 200 T/m/s.
BOLD-fMRI. The BOLD images were obtained with a gradient-echo echo-planar imaging (EPI) pulse sequence, with repetition time (TR) 400 ms; echo time (TE) 5.19 ms; NEX 1; flip angle 90°; field of view (FOV) 210×210 mm; matrix 256×256. Thirty-six 3-mm-thick slices were acquired in 166 seconds of imaging time. The patient's head was fixed in a single-channel loop coil with sponge padding, and instructions were given to the patient through earphones. No contrast-enhanced scan was carried out before fMRI. Bilateral thumb-index finger opposition movement at maximal extent was used as the activation paradigm for mapping the motor cortex. The movement was performed at a self-paced rate for 30 seconds. The paradigm was composed of 3 ON and 3 OFF blocks, beginning with an OFF block lasting 32.5 seconds; all following ON and OFF blocks lasted 25 seconds. The data acquired during the initial 5 seconds of each block were discarded and not used in the later BOLD analysis. All patients received individual instructions and time to practice; they were asked to concentrate on the movement with hands pressed to the pants seams, to keep the body (especially the head) relaxed and motionless, to keep the eyes lightly closed, and to avoid swallowing saliva during the examination.
Conventional MRI. The technical parameters were as follows: 5.0 mm slice thickness; 24 slices; T1 WI: TR 1680 ms, TE 11 ms, FOV 175 mm × 220 mm, matrix 256×256; T2 WI: TR 6980 ms, TE 103 ms, FOV 240 mm × 240 mm, matrix 307×512. Gd-DTPA (0.2 ml/kg) was used as the contrast agent to obtain enhanced images with the same parameters as the T1 WI, and an additional coronal T1 WI scan was acquired.
2.3 Image Data Processing
The data were transferred to a workstation (Leonardo syngo 2003A, Siemens). All images were aligned, the first two were excluded, and the parameters were optimized to minimize noise signals before analysis. The fMRI image was integrated with the anatomic image to form an alpha image [4]. The mapping results were divided into 3 grades: 3, very good — the BOLD signal unequivocally delineated the functional central sulcus areas; the functional activity in the hemisphere with the intracerebral tumor could be displaced as compared to the activity pattern found in the contralateral hemisphere. 2, fair — information regarding the location of the central sulcus was obtained, but the number of activated voxels was smaller and/or more spread out; alternatively, activity was well delineated but
the investigation was degraded as a result of motion artifacts. 1, unsuccessful — no reliable activation pattern was detected in the hemisphere(s), or activation was clearly delineated in one but not the other hemisphere with bilateral motor paradigms. Grades 3 and 2 were used to make the preoperative surgical plan.
2.4 Neurosurgical Operation
The craniotomy was planned according to the preoperative fMRI. The motor cortex of the hand or face was confirmed by intraoperative direct electrostimulation, and the tumor was excised to the maximal extent with preservation of the critical cortical areas.
2.5 KPS Grading
Patients' postoperative living status was evaluated according to the KPS (Karnofsky Performance Status) score.
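For illustration only — the actual analysis ran on the vendor workstation — the block paradigm of Sec. 2.2 can be turned into an ON/OFF boxcar and a voxel scored by its correlation with it. The boxcar construction below follows the timings given in the text; the correlation scoring is a generic stand-in, not the Siemens algorithm.

```python
# Hedged sketch: boxcar regressor for the block design and a simple
# correlation-based activation score for one voxel time series.
import numpy as np

TR = 0.4  # seconds, as stated in Sec. 2.2
# OFF(32.5 s), then alternating 25 s blocks: 3 ON and 3 OFF in total
blocks = [("OFF", 32.5), ("ON", 25.0), ("OFF", 25.0),
          ("ON", 25.0), ("OFF", 25.0), ("ON", 25.0)]

boxcar, keep = [], []
for state, dur in blocks:
    n = int(round(dur / TR))
    discard = int(round(5.0 / TR))       # first 5 s of each block dropped
    boxcar += [1.0 if state == "ON" else 0.0] * n
    keep += [False] * discard + [True] * (n - discard)
boxcar, keep = np.array(boxcar), np.array(keep)

def activation_score(voxel_ts):
    """Pearson correlation of one voxel's time series with the boxcar."""
    return np.corrcoef(voxel_ts[keep], boxcar[keep])[0, 1]

# toy check: a synthetic 'active' voxel follows the paradigm
rng = np.random.default_rng(0)
ts = boxcar + 0.5 * rng.standard_normal(boxcar.size)
print(round(activation_score(ts), 2))
```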
3 Results
3.1 Pathological Results
According to the 2000 WHO classification of tumors of the central nervous system, there were 1 case of ganglioneuroma, 11 cases of astrocytoma, 7 cases of oligoastrocytoma, 5 cases of anaplastic oligoastrocytoma and 2 cases of medulloblastoma (Table 1).
3.2 Rating of Preoperative BOLD-fMRI Results
There were 19 cases of grade 3 (73%) and 4 cases of grade 2 (15%) among the 23 patients with successful BOLD-fMRI mapping, and 3 cases of failure rated grade 1 (12%), all examined with the bilateral thumb-index finger opposition movement paradigm (Fig. 1).
Fig. 1. Images with different ratings of BOLD-fMRI by bilateral hand movement: A, very good (grade 3): bilateral hemispheric central sulci and motor cortical areas such as M1 and SMA are shown, together with the relationship between the glioma and these areas. B, fair (grade 2): the central sulcus and gyrus are shown with fewer activated voxels; an artifact caused by slight body movement can be seen as an abnormal area over the left parietal bone. C, unsuccessful (grade 1): diffuse, spread-out activated signals are seen in both hemispheres, caused by obvious body movement.
Table 1. Rating of BOLD-fMRI and KPS grading for each individual

No.  fMRI rating  M1   PM   SMA  CE   Preop. KPS  Postop. KPS
1    2            Yes  Yes  Yes  Yes  90          80
2    3            Yes  Yes  Yes  Yes  90          100
3    3            Yes  No   Yes  Yes  90          90
4    3            Yes  Yes  Yes  Yes  70          80
5    3            Yes  No   No   No   80          90
6    3            Yes  Yes  Yes  Yes  90          100
7    1            Yes  Yes  Yes  Yes  80          100
8    3            Yes  Yes  Yes  Yes  60          100
9    2            Yes  No   Yes  Yes  70          100
10   1            Yes  Yes  Yes  Yes  80          90
11   3            Yes  No   Yes  Yes  80          100
12   3            Yes  No   Yes  Yes  80          100
13   3            Yes  Yes  Yes  Yes  80          100
14   3            Yes  No   Yes  Yes  80          90
15   2            Yes  No   Yes  Yes  70          100
16   3            Yes  No   Yes  Yes  90          100
17   1            Yes  Yes  Yes  Yes  70          70
18   3            Yes  No   Yes  Yes  90          100
19   3            Yes  Yes  Yes  Yes  90          90
20   3            Yes  No   Yes  Yes  90          90
21   2            Yes  No   Yes  Yes  90          100
22   3            Yes  No   Yes  Yes  90          100
23   3            Yes  Yes  Yes  Yes  80          100
24   3            Yes  Yes  Yes  Yes  80          100
25   3            Yes  No   Yes  Yes  90          90
26   3            Yes  No   Yes  Yes  90          90

M1, primary motor area; PM, premotor area; SMA, supplementary motor area; CE, cerebellum. The M1, PM, SMA and CE columns indicate whether the corresponding domain was activated.
The signal intensity and location of the activated areas varied among the patients with successful mapping: the primary motor area (M1) was activated in 23 cases (100%), the premotor area (PM) in 9 cases (39%), the supplementary motor area (SMA) in 22 cases (96%) and the cerebellum (CE) in 22 cases (96%) (Table 1).
3.3 Intraoperative Direct Cortical Electrical Stimulation
The M1 areas and the tumor were fully exposed according to the preoperative BOLD-fMRI. The M1 areas verified by direct electrostimulation showed good consistency with the fMRI results (Fig. 2).
Fig. 2. Preoperative BOLD-fMRI image from the hand-movement task, intraoperative pictures of the motor areas confirmed by direct electrostimulation, and the relationship between the glioma (T) and the motor cortices. A, contrast-enhanced T1WI showing a mass in the left frontal lobe; the pathological diagnosis was astrocytoma. B, BOLD-fMRI showing the relationship between the glioma and the motor areas activated by hand movement (M1, SMA): M1 was behind the glioma, and SMA on its inner posterior side. C, intraoperative direct electrostimulation picture showing the relationship between M1 and the glioma, in good consistency with the activated cortical areas in B.
M1 could also be mapped intraoperatively in the cases where preoperative fMRI was unsuccessful. All detected motor domains were protected from excision during the operation.
3.4 KPS Grading Results
For the 21 patients with preoperative KPS scores between 80 and 90, the average KPS score increased from 85.7 preoperatively to 95.2, and 12 patients recovered to 100. For the other 5 patients, with KPS scores between 40 and 70, the average KPS score increased from 68 preoperatively to 90, and 3 patients recovered to 100 (Table 2).
4 Discussion
The development of BOLD-fMRI technology, which uses endogenous hemoglobin as the contrast medium, makes it possible to visualize eloquent brain areas non-invasively on preoperative images. This blood oxygenation (T2*) effect was first reported by Ogawa [5] in 1990 and has since been applied to brain function research and clinical practice [6]. It is now used by neurosurgeons to preoperatively locate important brain areas, such as the motor and language cortices, for treatment planning [7]. A high-field-strength scanner has high sensitivity to the susceptibility effect and therefore produces strong endogenous BOLD signals. At 1.5 T the BOLD signals arise mainly from small blood vessels, whereas at 3.0 T high field they arise from capillaries, which may give results much nearer to the real eloquent neuronal activation domains [8]. The signal-to-noise ratio (SNR) is significantly improved in a 3.0T high-field scanner, increasing roughly linearly (by a factor of about 2.03±0.06) in comparison with a 1.5T scanner; this allows imaging parameters to be improved in many respects, such as reducing scan time, improving spatial resolution, or even both simultaneously. We
used a Siemens 3.0T scanner to acquire 36 slices simultaneously to form the BOLD-fMRI for the patients. It was found that the basal ganglia, cerebellar hemispheres and vermis could also be activated in the bilateral hand movement task. Three patients (12%) had negative BOLD-fMRI results, consistent with previous reports of 0%–30% negative results in healthy volunteers and brain tumor patients [5]. Increased local magnetic field asymmetry due to metal fixation devices and bone fragments left by previous craniotomy may degrade signal quality, and movement introduces further signal noise. Our findings are consistent with previous reports that glioma itself has little effect on the signal activation of cortical eloquent areas [9]. Signal shifting and intensity decrease were found in patients with high-grade glioma or low-grade glioma with a large cyst, indicating that the tumor mass effect may cause signal shifting. The signal decrease may relate to the biological characteristics of tumor growth and the surrounding microenvironmental changes in high-grade glioma, such as active blood vessel hyperplasia, increased cerebral blood flow volume, and local hemodynamic abnormality caused by vessel weakening, arteriovenous anastomosis and peripheral edema [10]. The motor cortical areas include M1 (primary motor area), PM (premotor area) and SMA (supplementary motor area). The activated area of the cortex depends on the complexity of the finger movement: simple movement activates the contralateral M1, while complex movement activates bilateral M1, PM and SMA, and the intensity and extent of activation are more pronounced in complex movement tasks. M1 controls movement; PM activation is associated with the selection, preparation and learning of movement and maintains the body's rapid movement; SMA activation is correlated with the startup mechanism of intrinsic motor function [11]. In addition, complex movement tasks may activate the upper and anterior parts of the parietal lobe, the basal ganglia and the vermis. In these 26 patients, the bilateral finger movement task activated M1 in all cases, PM and/or SMA in part of the cases, and even the cerebellar hemispheres and vermis. Because of motor cortical area translocation and displacement caused by individual differences and pathological changes, it is inappropriate to locate the precentral gyrus with general anatomic signs such as the "hand knob" [1] in conventional imaging. Through individual, preoperative, non-invasive mapping of the brain's critical eloquent areas, doctors can locate these areas on neuroimaging before the operation and choose the best surgical approach. Compared with other imaging technologies for locating brain domains, such as PET and magnetoencephalography [12], BOLD-fMRI is easy to popularize, simple to operate, and can individually map the cortex with high spatial resolution at low cost. On the other hand, BOLD-fMRI demands the patient's cooperation and requires the head to be kept still during the examination; it may show artifacts in patients with metal fixation from previous craniotomy, and the activated areas may be indistinct under the influence of the patient's subjective mood. The principle of direct electrostimulation mapping is to interfere with normal cortical activity using a certain electrical current so as to evoke muscle movements or body sensations, or to inhibit language function, which may be perceived by the patient or observed by others. It is regarded as the gold standard for locating eloquent brain areas [13]. An appropriate craniotomy is important for successful cortex identification.
We used intraoperative electrostimulation in combination with BOLD-fMRI to optimize the craniotomy and confirmed the M1 cortex, such as the areas
controlling the hand and face, in all patients, which allowed tumor resection to the maximal extent with preservation of cortical function. In general, only M1 was localized during the operation, to save time; information about the other areas participating in movement, and about their relationship to the tumor, was obtained by BOLD-fMRI during tumor excision. The non-invasive, continuous and in vivo localization of task-related brain areas such as M1, PM and SMA by BOLD-fMRI is an indirect mapping of functional areas, and may differ from the true cortices found by intraoperative direct electrostimulation by a few millimeters [14]. BOLD-fMRI can indicate which areas to map during the operation, and the operation time and mapping range were largely reduced when both techniques were used together. The validity of preoperative BOLD mapping and its parameters was further confirmed through intraoperative electrostimulation in this study, which is in accordance with other reports [15].
5 Conclusion
Preoperative brain mapping with BOLD-fMRI combined with intraoperative direct electrostimulation can help neurosurgeons resect the tumor maximally while preserving the critical brain cortices, enhancing patients' postoperative living status. Further investigation and observation are needed regarding the recurrence rate and median survival time of these patients.
References
1. Yousry TA, Schmid UD, Alkadhi H, et al. Localization of the motor hand area to a knob on the precentral gyrus. A new landmark. Brain, 1997, 120(Pt 1): 141-57.
2. Tomczak RJ, Wunderlich AP, Wang Y, et al. fMRI for preoperative neurosurgical mapping of motor cortex and language in a clinical setting. J Comput Assist Tomogr, 2000, 24(6): 927-34.
3. Brell M, Conesa G, Acebes JJ. Intraoperative cortical mapping in the surgical resection of low-grade gliomas located in eloquent areas. Neurocirugia (Astur), 2003, 14(6): 491-503.
4. Haberg A, Kvistad KA, Unsgard G, et al. Preoperative blood oxygen level-dependent functional magnetic resonance imaging in patients with primary brain tumors: clinical application and outcome. Neurosurgery, 2004, 54(4): 902-14; discussion 914-5.
5. Ogawa S, Lee TM, Kay AR. Brain magnetic resonance imaging with contrast dependent on blood oxygenation. Proc Natl Acad Sci USA, 1990, 87(24): 9868-9872.
6. Gaillard WD, Hertz-Pannier L, Mott SH, et al. Functional anatomy of cognitive development: fMRI of verbal fluency in children and adults. Neurology, 2000, 54(1): 180-5.
7. Moller M, Freund M, Greiner C, et al. Real time fMRI: a tool for the routine presurgical localisation of the motor cortex. Eur Radiol, 2005, 15(2): 292-5.
8. Marzola P, Osculati F, Sbarbati A. High field MRI in preclinical research. Eur J Radiol, 2003, 48(2): 165-70.
9. Holodny AI, Schulder M, Liu WC, et al. The effect of brain tumors on BOLD functional MR imaging activation in the adjacent motor cortex: implications for image-guided neurosurgery. AJNR Am J Neuroradiol, 2000, 21(8): 1415-22.
10. Holodny AI, Schulder M, Liu WC, et al. Decreased BOLD functional MR activation of the motor and sensory cortices adjacent to a glioblastoma multiforme: implications for image-guided neurosurgery. AJNR Am J Neuroradiol, 1999, 20(4): 609-12.
11. Allison JD, Meador KJ, Loring DW, et al. Functional MRI cerebral activation and deactivation during finger movement. Neurology, 2000, 54(1): 135-42.
12. Sobottka SB, Bredow J, Beuthien-Baumann B, et al. Comparison of functional brain PET images and intraoperative brain-mapping data using image-guided surgery. Comput Aided Surg, 2002, 7(6): 317-25.
13. Berger MS, Ojemann GA, et al. Intraoperative brain mapping techniques in neuro-oncology. Stereotact Funct Neurosurg, 1992, 58: 153-161.
14. Kober H, Nimsky C, Moller M, et al. Correlation of sensorimotor activation with functional magnetic resonance imaging and magnetoencephalography in presurgical functional imaging: a spatial analysis. NeuroImage, 2001, 14: 1214-1228.
15. Jaaskelainen J, Randell T. Awake craniotomy in glioma surgery. Acta Neurochir Suppl, 2003, 88: 31-5.
Noninvasive Temperature Monitoring in a Wide Range Based on Textures of Ultrasound Images
Su Zhang1, Wei Yang1, Rongqian Yang1, Bo Ye1, Lei Chen2, Weiyin Ma3, and Yazhu Chen1
1 Department of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China, 200240 {suzhang, weiyang}@sjtu.edu.cn
2 Sixth Hospital, Shanghai Jiao Tong University, Shanghai, China, 200231
3 Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong, Hong Kong, China
Abstract. B-mode ultrasound is widely used in thermal therapies as a noninvasive technique for temperature measurement and for monitoring the effectiveness of the treatment. With traditional techniques, B-mode ultrasound can only measure a small range of temperature changes using the gray scale of the images. We obtained a series of real-time B-mode ultrasound images over a wide range of temperature variation, from 28°C to 85°C. In this paper, we investigate image texture characteristics with respect to changes of tissue temperature. Results from water bath and radiofrequency ablation experiments show that there is a strong correlation between several texture features and temperature in the wide range of 28–85°C.
1 Introduction
In thermal therapy in oncology, such as ultrasonic hyperthermia and radiofrequency (RF) ablation, it is important to monitor the temperature distribution because the temperature directly affects the treatment. At present, invasive thermometer probes are used clinically for temperature monitoring during thermal therapy, although the invasive method is relatively risky and has several technical disadvantages for temperature measurement in oncology [1, 2]. The most important problem is probably the risk of metastasis formation: whenever an invasive probe penetrates through a catheter into the tumor, there is a possibility that the probe carries tumor cells into the surrounding tissue or even into the blood and lymph systems, with potentially fatal consequences. In recent years, noninvasive temperature estimation as a means of monitoring and guidance for minimally invasive thermal therapy has attracted the attention of researchers. Currently, magnetic resonance imaging (MRI) and ultrasound have both been proven to have the temperature sensitivity and spatial resolution necessary to provide noninvasive temperature feedback. Of these two techniques, MRI is expensive and is difficult to interface with the heating devices. In contrast, ultrasound is a nonionizing modality that is convenient and inexpensive. These attributes make it an attractive method for temperature estimation if ultrasonic
parameters, which are dependent on temperature, can be found, measured, and calibrated. Amini et al. [3] investigated the use of a high-resolution spectral estimation method for tracking frequency shifts at two or more harmonic frequencies associated with temperature change. Arthur et al. [4] used the monotonic changes in the backscattered energy of ultrasound to estimate temperature changes. In their studies, however, the temperature estimated using ultrasound echo signals was generally limited to a narrow range, and special equipment was required to acquire the radio-frequency signal data. The dependence of the induced acoustic parameters, such as ultrasonic speed, acoustic impedance, and attenuation coefficient, on temperature is rather complex, and it is difficult to describe these relations by analytical equations. In addition, thermal expansion in the heated field also affects the ultrasound echoes. One solution is to use a statistical approach and B-mode image processing to estimate the temperature changes by analyzing the image textures in the ultrasound images [1]. In early research, Gertner et al. [5] found that the echogenicity (B-mode image brightness) of the liver increased slightly for tissue temperatures up to 40°C, but became lower than that of unheated tissue for temperatures above 40°C. Recently, the relationships between the tissue temperature and B-mode image texture features, especially the mean gray scale, have been investigated [1, 6, 7]. The temperature ranges estimated in these papers are 37-44°C [1], 27-63°C [6] and 28-45°C [7], respectively. The results in [1, 6, 7] suggested that the mean gray scale of B-mode images is related to the invasively measured temperature. However, Hou et al. [6] found that the mean gray scale tendency was non-linear in different temperature phases and showed strong hysteresis. In clinical applications such as high-intensity focused ultrasound (HIFU) hyperthermia and RF ablation, the operating temperature is commonly elevated to the range of 60-100°C, so it is insufficient to estimate the tissue temperature using only the non-monotone change of the mean gray scale over such a wide temperature range. To tackle this problem, a method using B-mode image textures for temperature estimation is developed in this research. We investigated the relationships between sixteen B-mode image texture features and temperature change over a wide temperature range of 28–85°C. These relationships can be applied in real-time monitoring of high temperatures in HIFU hyperthermia or RF ablation.
2 Image Texture Features for Monitoring Temperature In statistical texture analysis, texture features are computed from the statistical distribution of observed combinations of intensities at specified positions in the image. Texture parameters are then calculated to quantify changes in B-mode images caused by the temperature change. 2.1 First-Order Gray Level Texture Parameters First-order gray level texture parameters are derived from the gray level histogram. They describe the first-order gray level distribution without considering spatial interdependence. The three selected texture parameters in this study are:
– Mean Gray Scale (MGS)
– Standard Deviation (STD)
– Entropy of intensities (ENT)
2.2 Second-Order Texture Parameters
This category of parameters describes the gray level spatial inter-relationships and hence provides efficient measures of gray level texture homogeneity. The second-order texture parameters used in this study are derived from the gray level co-occurrence matrix [8] and the gray level-gradient co-occurrence matrix [9]. The five texture parameters derived from the gray level co-occurrence matrix are:
– Contrast (CCON)
– Inverse Difference Moment (CIDM)
– Entropy (CENT)
– Angular Second Moment (CASM)
– Correlation (CCOR)
The texture parameters extracted from the gray level-gradient co-occurrence matrix are:
– Small Gradient Emphasis (GSGE)
– Large Gradient Emphasis (GLGE)
– Intensity Asymmetry (GIA)
– Gradient Asymmetry (GGA)
– Energy (GEN)
– Mixed Entropy (GME)
– Inertia (GIN)
– Inverse Difference Moment (GIDM)
In the analysis, 3×3 Sobel operators were applied to obtain the gradient images. In addition, the gray and gradient images were quantized to 16 intensity levels for the calculation of these second-order texture parameters. The relationships between the temperature and these sixteen texture features, calculated from regions of interest (ROI) in the B-mode images, will be examined; we can then select the superior texture features for temperature estimation and monitoring.
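A hedged sketch of computing a few of these features for one ROI follows; the first-order features come from the gray level histogram, and the co-occurrence features follow Haralick [8] for a single (horizontal) displacement, which the paper does not specify.

```python
# Illustrative sketch for one ROI (a 2D uint8 array), quantized to 16
# levels as in the paper. Not the authors' implementation.
import numpy as np

def first_order(roi):
    hist, _ = np.histogram(roi, bins=256, range=(0, 256))
    p = hist / hist.sum()
    mgs = roi.mean()                                # Mean Gray Scale (MGS)
    std = roi.std()                                 # Standard Deviation (STD)
    ent = -np.sum(p[p > 0] * np.log2(p[p > 0]))     # Entropy (ENT)
    return mgs, std, ent

def glcm_features(roi, levels=16):
    q = np.clip(roi.astype(int) * levels // 256, 0, levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1                             # horizontal neighbor pairs
    p = glcm / glcm.sum()
    d = np.arange(levels)[:, None] - np.arange(levels)[None, :]
    ccon = np.sum(d ** 2 * p)                       # Contrast (CCON)
    cidm = np.sum(p / (1.0 + d ** 2))               # Inverse Difference Moment (CIDM)
    cent = -np.sum(p[p > 0] * np.log2(p[p > 0]))    # Entropy (CENT)
    casm = np.sum(p ** 2)                           # Angular Second Moment (CASM)
    return ccon, cidm, cent, casm
```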
3 Experiments and Results
We carried out two experiments, using water bath heating and RF heating, to investigate the relationships between the textures and temperature. Because the water bath and RF ablation are different heating processes, the effects of the different heating processes on these relationships were also examined.
3.1 Water Bath Experiment
The water bath experiment was carried out on a phantom with approximate dimensions of 5 cm × 6.5 cm × 7 cm, which was used as the research object. Measurements
were carried out based on the experimental configuration depicted in Fig. 1. The phantom was set in a beaker filled with degassed water and heated in a water bath (DK-S22). The phantom was supported on a 1 cm thick layer of rubber material with the same speed of sound and density as water, to minimize interface reflections. The water in the tank was heated by a circulating heater. The temperature of the phantom was monitored by a needle-type thermocouple. The temperature range covered in the experiment was 28°C to 85°C. The ultrasound probe and the phantom were kept fixed during the heating process.
Fig. 1. Configuration for the water bath experiment. The temperature was monitored by an invasive thermocouple probe in the phantom. The transducer was placed on the top of the phantom. B-mode images and temperature were recorded for subsequent analysis.
Ultrasound imaging of the phantom was performed using a GE 8L-RS diagnostic ultrasound system. The imaging parameters were fixed during the heating process. The original B-mode images had 256 gray levels. Three ROIs of 100×100 pixels were selected in different areas of each image.
Fig. 2. GME change vs. temperature change. The left pane shows the relative changes of GME and temperature with heating time. The right pane is the scatter plot of GME change against temperature change.
The texture mixed entropy (GME) is used to illustrate the relation between the texture features and the temperature change. The value of GME and the phantom temperature tend to increase consistently with the heating time (left pane of Fig. 2). The curves in Fig. 2 show that the increase of GME in the three different ROIs was monotonic; the phantom temperature fluctuated slightly due to minor noise. Correlation analysis showed a strong correlation between the GME changes in all three ROIs and the invasively measured temperature changes in the range of 28–85°C; the correlation coefficient between GME and temperature change was 0.977 (at a significance level of 0.01). The right pane of Fig. 2 shows the scatter plot of GME change against temperature change. Least-squares linear regression was applied to the data, giving the relation between GME change (ΔGME) and measured temperature change (ΔTemp):

ΔGME = 0.0175 · ΔTemp.
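A minimal sketch of this calibration step — the least-squares slope of texture change against measured temperature change, here fitted through the origin on synthetic data, with the inverse mapping used for temperature estimation:

```python
# Illustrative sketch; the data below are synthetic, not the experiment's.
import numpy as np

def fit_slope(d_temp, d_feature):
    """Least-squares slope through the origin and Pearson correlation."""
    slope = np.dot(d_temp, d_feature) / np.dot(d_temp, d_temp)
    r = np.corrcoef(d_temp, d_feature)[0, 1]
    return slope, r

d_temp = np.linspace(0.0, 57.0, 50)               # 28 -> 85 degrees C
rng = np.random.default_rng(2)
d_gme = 0.0175 * d_temp + 0.01 * rng.standard_normal(50)
slope, r = fit_slope(d_temp, d_gme)
est_d_temp = d_gme / slope    # temperature change estimated from texture change
```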
The results for all sixteen texture features outlined above are shown in Table 1, from which textures with superior performance can be identified. The texture features with the strongest correlations with temperature are GIA, GGA, GEN, GME and ENT. Table 1 also shows that, in the range of 28–85°C, the mean gray scale is less correlated with temperature than other texture features such as GIA.

Table 1. Correlation and regression coefficients of texture change with temperature change

Texture  Corr. Coef.  Reg. Coef.
MGS      0.843        0.084
STD      0.715        0.114
ENT      0.966        0.019
CCON     0.875        0.011
CIDM     -0.879       0.0005
CENT     0.884        0.0095
CASM     -0.872       -0.0027
CCOR     0.198        /
GSGE     -0.88        -0.0004
GLGE     0.72         0.026
GIA      -0.93        -0.004
GGA      -0.95        0.0047
GEN      -0.964       -0.005
GME      0.977        0.0175
GIN      -0.924       -0.071
GIDM     -0.931       -0.003
3.2 RF Ablation Experiment
The RF ablation experiment (Fig. 3) was carried out using a phantom with the same configuration and dimensions as in the water bath experiment. A commercial RF ablation system (Model 1500X RITA Generator) was used, consisting of a 250 W RF generator to which a needle and a neutral electrode are connected. The power of the RF generator was set to 20 W and the desired temperature was set to 85°C. During the heating process, the transducer was placed on the side of the phantom and the heated field was scanned. A series of B-mode ultrasound images and the temperatures monitored by the thermocouples located within the insulated portion of the device were recorded. The ultrasound imaging parameters were set by a skilled physician and were fixed during the heating process. Three ROIs (100×100 pixels, near the RF needle and without overlapping areas) were selected in each image. The average temperature over all thermocouples was used as the measured temperature.
Fig. 3. Configuration for the RF ablation experiment
Fig. 4. Relative changes of GSGE and temperature with the heating time
Fig. 4 shows the tendency of the small gradient emphasis (GSGE) change and the measured temperature change with heating time. The temperature of the heated field increased rapidly when the RF heating started, and after reaching its peak dropped smoothly to 85°C. GSGE is found to change inversely with the temperature. Similarly, the changes of some other textures, such as GME, MGS, and ENT, can also reflect the change of temperature during RF heating. The temperature curve can be divided into three segments: an increasing segment, a dropping segment and a steady segment (Fig. 4). The texture features that correlate strongly with the temperature change in the increasing segment are GSGE, GME, MGS and ENT; GSGE and MGS correlate strongly with the temperature change in the dropping segment. The correlations between the texture features and the temperature change differ among the three ROIs. Results for the texture features strongly related to temperature in all three ROIs are shown in Tables 2 and 3, from which it is observed that the regression
coefficients of the same texture feature differ between the increasing segment and the dropping segment. This suggests that the relationship between the texture features and the temperature change is not the same for different heating processes.

Table 2. Correlation/regression coefficients of texture changes with temperature changes during the temperature-increasing period in different ROIs

Texture  ROI1           ROI2           ROI3
MGS      0.96/0.312     0.93/0.149     0.93/0.378
ENT      0.92/0.007     0.90/0.035     0.93/0.007
GSGE     -0.88/0.0014   -0.97/0.0010   -0.85/0.0014
GLGE     0.98/0.098     0.76/0.041     0.96/0.119
GGA      -0.94/0.0003   -0.87/-0.0035  -0.91/0.0006
GME      0.97/0.004     0.93/0.017     0.94/0.004
Table 3. Correlation/regression coefficients of texture changes with temperature changes during the temperature-dropping period in different ROIs

Texture  ROI1            ROI2            ROI3
MGS      0.97/1.038      0.98/0.613      0.99/1.320
GSGE     -0.99/-0.0024   -0.98/-0.0040   -0.99/-0.0027
GLGE     0.90/0.379      0.97/0.220      0.99/0.470
4 Discussion
The water bath and RF ablation experiments clearly have quite different characteristics. During RF heating the ROIs were heated non-uniformly (compared to the water bath), and the texture changes of the ROIs were not consistent. Moreover, the temperature in the center of the RF heating field can reach as high as 100°C; unlike in the water bath experiment, the center of the RF heating field can coagulate quickly at such high temperature. By applying an appropriate transformation function to the ultrasound image textures, the temperature changes in the scanned area are measurable. To monitor the absolute temperature distribution in the tissue, however, another system would be needed to determine the native temperature distribution before the therapy starts; for this purpose the MRI technique seems more suitable. Using the native temperature distribution determined by MRI as a reference, the temperature distribution estimated by the ultrasound method becomes an absolute temperature distribution. For full use of the method, the temperature dependence of image textures in various tissue types needs to be studied.
5 Conclusion
Several texture features of B-mode images, such as GSGE, GME, and MGS, have strong correlations with the temperature changes during the heating process. Using these relationships, the real-time temperature distribution can be estimated and monitored over a wide temperature range during thermal therapy. This method of noninvasive temperature measurement has great potential in clinical applications.
The experimental results suggest that the way image textures change is associated with the heating process. Therefore, the methods and regression coefficients used to estimate the temperature distribution need to be adapted to the specific heating process. Acknowledgements. This work is supported by the National Basic Research Program of China (No. 2003CB716103) and NSFC (No. 60573033). The authors would like to thank Prof. Bing Hu (Director of the Ultrasound Department, Sixth Hospital) for providing the ultrasonic equipment, Dr. Jingfeng Bai for his helpful discussions, and Dr. Y.B. Shi of Queen Mary, University of London for the English revision of this paper.
References
1. Novak P.: Noninvasive Temperature Monitoring Using Ultrasound Tissue Characterization Method. Medical Imaging 2001: Ultrasonic Imaging and Signal Processing, SPIE Vol. 4325 (2001) 566-574
2. Varghese T., Zagzebski J.A., Chen Q., Techavipoo U., Frank G., Johnson C., Wright A., Lee F.T.: Ultrasound Monitoring of Temperature Change during Radiofrequency Ablation: Preliminary in-vivo Results. Ultrasound in Medicine and Biology, Vol. 28, No. 3 (2002) 321-329
3. Amini A.N., Ebbini E.S., Georgiou T.T.: Noninvasive Estimation of Tissue Temperature via High-Resolution Spectral Analysis Techniques. IEEE Transactions on Biomedical Engineering, Vol. 52, No. 2 (2005) 221-228
4. Arthur R.M., Trobaugh J.W., Straube W.L., Moros E.G.: Temperature Dependence of Ultrasonic Backscattered Energy in Motion-Compensated Images. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, Vol. 52, No. 10 (2005) 1644-1652
5. Gertner M.R., Worthington A.E., Wilson B.C., Sherar M.D.: Ultrasound Imaging of Thermal Therapy in in vitro Liver. Ultrasound in Medicine and Biology, Vol. 24, No. 7 (1998) 1023-1032
6. Hou Zhenxiu, Xu Zhenxiang, Jin Changshan: Experimental Study on Noninvasive Thermometry in HIFU. Chinese Journal of Ultrasound in Medicine, Vol. 18, No. 9 (2002) 653-657
7. Ren Xinying, Wu Shuicai, Du Xu, Bai Yanping: The Study on the Relativity of B-mode Ultrasound Image Gray and Temperature in Tissue. Beijing Biomedical Engineering, China, Vol. 123, No. 2 (2004) 116-118
8. Haralick R.M., Shanmugam K., Dinstein I.: Textural Features for Image Classification. IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-3, No. 6 (1973) 610-621
9. Hong Ji-guang: Gray Level-gradient Co-occurrence Matrix Texture Analysis Method. Acta Automatica Sinica, Vol. 10, No. 1 (1984) 22-25
A Novel Liver Perfusion Analysis Based on Active Contours and Chamfer Matching Gang Chen and Lixu Gu Computer Science, Shanghai Jiao Tong University 800 Dongchuan Road, Shanghai, P.R. China 200240
Abstract. Liver perfusion gives important information about the blood supply of the liver. However, in daily clinical diagnosis, radiologists have to mark the perfusion position manually in time-sequence images because of the motion of the liver caused by respiration. In this paper, we propose a novel hybrid method, using a variation of active contours and modified chamfer matching with a single shape prior, to automatically detect the liver perfusion position and measure its intensity. The experiments are performed on abdominal MRI series, and the results reveal that after extracting the liver's rough boundary by active contours, precise perfusion positions can be detected by the modified chamfer matching algorithm, and finally a refined intensity curve free of respiration effects can be achieved.
1 Introduction
Liver perfusion is a quantitative measurement of the blood flow of the liver, which plays an important role in the assessment and treatment of various liver diseases. For example, it can be used as a noninvasive and repeatable technique in the diagnosis of acute rejection after liver transplantation [1]. In the clinic, by injecting a contrast agent into the liver while taking abdominal MRI images at fixed time intervals, the concentration of the contrast agent can be tracked and analyzed. The change of perfusion intensity is tracked in time, resulting in a perfusion curve. In clinical application, however, the problem is that the liver moves because the patient breathes throughout the series, which changes the perfusion position. Moreover, it is impractical to ask patients to hold their breath during the whole process, so the radiologist has to mark the position manually, which is very tedious and time-consuming. The automated perfusion measurement process has therefore received a large amount of attention. In [2], Sebastian modeled the problem using a registration method: the Fast Marching Method (FMM) and the Level Set Method [3,4,5] were used to segment the liver region, and a distance vector transform was then employed to identify the perfusion position along the time-sequence images. The result was promising, but the stopping criterion of the FMM depends strongly on the size of the liver region, which radiologists may not know exactly; this can lead to over- or under-segmentation when the liver contours are not clear enough, and both situations affect the precision of perfusion localization. In [6], a new approach was proposed to obtain kidney perfusion curves that represent the transportation of the contrast agent into the kidney. The curves are then used in the
classification of normal and acute rejection in kidney transplants. It used a deformable model with shape prior constraints, the prior being a mean signed distance map of many segmented kidneys. However, this is not suitable for our application: firstly, it needs a large number of prior shape models, which radiologists can hardly obtain; secondly, the liver's shape and size change significantly between different views and different slices, which makes it harder to build a mean model. In this paper, in order to help radiologists track the liver's perfusion position automatically with little manual intervention, we employ a hybrid method combining a variation of active contours [7] for segmentation and a modified chamfer matching algorithm [8,9]. The active contour model is used to segment the liver region, whose boundary has a low gradient response. The segmentation results are distance maps, which can be converted to contour lines. A modified, edge-pixel-based chamfer matching algorithm is then applied to match these contour lines to a single template shape contour whose perfusion position has been marked manually in advance. Finally, we obtain the perfusion position of every abdominal MRI slice relative to the template's. The outline of the paper is as follows: Section 2 is a brief review of the active contour model and the chamfer matching algorithm; Section 3 illustrates our hybrid method; Section 4 presents the experimental results; and Section 5 gives the discussion and conclusion.
2 Active Contours and Chamfer Matching
2.1 Active Contours
Active contours can be used to segment objects automatically, based on the evolution of a curve; the objective is to minimize a metric function defined by the curvature. For example, starting with a curve around the object to be segmented, the curve moves toward its interior normal and stops on the boundary of the object. Let Ω be a bounded open subset of R², C = ∂Ω its boundary, C(s): [0,1] → R² a parameterized curve, and u₀ the image. The classical active contour model is defined below [7]:

$$J_1(C) = \alpha \int_0^1 |C'(s)|^2\,ds + \beta \int_0^1 |C''(s)|\,ds - \lambda \int_0^1 |\nabla u_0(C(s))|^2\,ds. \qquad (1)$$
Here, α, β, and λ are positive parameters. The first two terms control the smoothness of the evolution (the internal energy), and the third term draws the evolving curve toward the object boundary (the external energy). An edge-stop criterion is used to stop the evolving curve on the object's boundary. The edge-stop criterion takes the form of an edge-stopping function and depends on the gradient of the image u₀, whose boundary is defined by the gradient. A typical edge-stopping function [10] is

$$g(u_0) = \frac{1}{1 + |\nabla G_\sigma(x,y) * u_0(x,y)|^p}, \quad p \ge 1, \qquad (2)$$

where G_σ(x,y) ∗ u₀(x,y) is the convolution of u₀ with the Gaussian

$$G_\sigma(x,y) = \sigma^{-1/2}\, e^{-(x^2+y^2)/4\sigma}. \qquad (3)$$
The edge-stopping function is close to zero in regions where the gradient response is strong, and such regions are usually the boundary of the object.
2.2 Chamfer Matching
Chamfer matching was first introduced by Barrow [9]; it is a method that matches edge points or other low-level feature points extracted from a 2D image. Two binary images are involved in the process: the first is a source image in the form of a distance map, and the second is a template image containing the object's shape contour. After transforming the template image with respect to the source image, the shape contour overlaps the boundary region of the source image. The average of the overlapped pixels' values in the source image is the measure of the correspondence between them: a perfect match means that the average is close to zero. Transformations with different parameters are applied and the one with the minimum average is selected. Often the root mean square average distance (rms) is used as the measure:

$$\text{rms} = \frac{1}{3}\left(\frac{v_1^2 + v_2^2 + \cdots + v_N^2}{N}\right)^{1/2}. \qquad (4)$$
Here, vi is the value of the source-image pixel overlapped by the i-th point of the shape contour, and N is the number of such pixels.
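A small sketch of this classical matching measure follows, using an exact Euclidean distance transform in place of the chamfer approximation (hypothetical data; not the authors' C code):

```python
# Illustrative sketch: distance map of the source edges, sampled under a
# translated template contour to evaluate the rms measure (4).
import numpy as np
from scipy.ndimage import distance_transform_edt

def rms_score(dist_map, template_pts, dx, dy):
    """Template points (row, col), shifted by (dy, dx), over the distance map."""
    r = template_pts[:, 0] + dy
    c = template_pts[:, 1] + dx
    ok = (r >= 0) & (r < dist_map.shape[0]) & (c >= 0) & (c < dist_map.shape[1])
    v = dist_map[r[ok], c[ok]]
    return np.sqrt(np.mean(v ** 2)) / 3.0

edges = np.zeros((256, 256), bool)
edges[100, 50:200] = True                     # toy edge set
dist_map = distance_transform_edt(~edges)     # distance to nearest edge pixel
template_pts = np.argwhere(edges)             # a perfectly matching template
print(rms_score(dist_map, template_pts, dx=0, dy=0))   # 0.0
```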
3 Hybrid Perfusion Analysis
In our MRI images, part of the liver boundary is not clear and has no gradient defined, so we employ a modified energy-minimization active contour model, the active contours without edges proposed in [10]. Our segmentation method is a simplified implementation of [10], based on two observations: firstly, the movement of the liver is slight, and its rotation can be ignored; secondly, since the upper half of the liver has a strong gradient response, it can always be segmented well, whereas the lower half has nearly no gradient response and its segmentation is poor. Our hybrid method has four steps. Firstly, a circle's center point and radius are set as the initial curve for the modified active contours segmentation; all images can share one center point and radius. In the second step, we apply the active contours without edges method to yield contour lines and edge pixels; the segmentation result is a distance map, and the liver shape can be extracted by an edge-detection algorithm on that distance map. The third step is to manually select an image, refine its contour into a nearly complete liver shape to serve as the template contour image for chamfer matching, and mark the perfusion position. The fourth step applies the modified chamfer matching to the segmentation results, maximizing the quotient of the number of matched edge pixels and the rms in (4). After matching, all slices are related to the template image and their perfusion positions can be obtained. Finally, we can measure the perfusion intensity and draw the perfusion curve.
3.1 Set the Initial Curve's Center Point and Radius
Because the modified active contours segmentation method is based on curve evolution, we use a circle as the initial curve, whose center and radius need to be set carefully. Theoretically, the center point should be at the center of the liver; but, as mentioned above, the upper half of the liver has a strong gradient response, so it is better to put
the center point near the center of the upper half of the liver. The radius should be big enough that the initial circle covers most of the upper half of the liver. Once the center point and radius have been set for one image, the same parameters can be used for the other images, based on the observation that the liver's total movement due to the patient's breathing is small.
3.2 Segmentation Using Active Contours Without Edges
In the segmentation stage, we restrict the curve evolution to a manually marked sub-area that covers the whole liver shape. Because the movement of the liver is slight, the sub-area's definition is suitable for all of the images; in this way we reduce the computational cost of the active contour model. The original energy function of the active contour model introduced in [10] is:

$$F(c_1, c_2, C) = \mu \cdot \mathrm{Length}(C) + \nu \cdot \mathrm{Area}(\mathrm{inside}(C)) + \lambda_1 \int_{\mathrm{inside}(C)} |u_0(x,y) - c_1|^2\,dx\,dy + \lambda_2 \int_{\mathrm{outside}(C)} |u_0(x,y) - c_2|^2\,dx\,dy. \qquad (5)$$
Here, c₁ and c₂ represent the average intensities inside and outside the curve, respectively. The length of the curve C and the area inside C are regularizing terms. If we write the third term as F₁(C) and the fourth term as F₂(C), ignoring λ₁ and λ₂, we can see that if the curve C is outside the object, F₁(C) > 0 and F₂(C) ≈ 0; if it is inside the object, F₁(C) ≈ 0 and F₂(C) > 0; if it is partially inside and partially outside the object, then F₁(C) > 0 and F₂(C) > 0; finally, if the curve C lies exactly on the object boundary, then F₁(C) ≈ 0 and F₂(C) ≈ 0, and the energy function is minimized [10]. By formulating the energy function using level sets, the evolving curve C can be represented by the zero level set of the signed distance function φ. After simplifying the energy function and adding the curvature term, the liver can be segmented by solving the Euler-Lagrange partial differential equation:

$$\frac{\partial \phi}{\partial t} = \delta(\phi)\left[\mu \cdot \mathrm{div}\!\left(\frac{\nabla \phi}{|\nabla \phi|}\right) - \nu - \lambda_1 (u_0 - c_1)^2 + \lambda_2 (u_0 - c_2)^2\right] = 0. \qquad (6)$$
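One explicit update step for (6) might look like the following sketch (a generic Chan-Vese iteration, not the authors' C implementation); c₁ and c₂ are re-estimated from the current zero level set, and φ is advanced with a smoothed Dirac delta:

```python
# Hedged sketch of one explicit Chan-Vese update step for Eq. (6).
import numpy as np

def curvature(phi, eps=1e-8):
    """div(grad(phi) / |grad(phi)|) by finite differences."""
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + eps
    return np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)

def chan_vese_step(phi, u0, mu=0.2, nu=0.0, lam1=1.0, lam2=1.0,
                   dt=0.5, eps=1.0):
    inside = phi >= 0
    c1 = u0[inside].mean() if inside.any() else 0.0      # mean inside
    c2 = u0[~inside].mean() if (~inside).any() else 0.0  # mean outside
    delta = (eps / np.pi) / (eps ** 2 + phi ** 2)        # smoothed Dirac delta
    force = (mu * curvature(phi) - nu
             - lam1 * (u0 - c1) ** 2 + lam2 * (u0 - c2) ** 2)
    return phi + dt * delta * force
```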
Fig. 1. Active contours procedure. (left) The initial stage. (middle) The middle stage. (right) The final stage.
The time complexity is O(n²). After nearly 1500 iterations, a rough distance map of the liver contour can be obtained. The modified active contour segmentation method works well but is not fast enough for a real-time application. It also needs a lot of parameter tweaking and is still very application specific; for example, it cannot segment images that are very dark. Fig. 1 shows the evolving curve's initial, middle, and final stages.
3.3 Select and Refine Template Image
After segmentation, we get the distance map of the liver's shape. We use an approximation method to extract the zero level set, which is the boundary of the liver. If a pixel φ(i, j) is on the zero level set, the signs of its 4-neighbourhood pixels cannot all be the same; in mathematical terms:

$$\max(\phi_{i,j}, \phi_{i+1,j}, \phi_{i,j+1}, \phi_{i+1,j+1}) > 0, \qquad \min(\phi_{i,j}, \phi_{i+1,j}, \phi_{i,j+1}, \phi_{i+1,j+1}) < 0 \quad (7)$$
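The sign test of (7) transcribes directly; a short numpy sketch:

```python
import numpy as np

def zero_level_set(phi):
    """Mark pixels where the 2x2 neighbourhood of phi changes sign, Eq. (7)."""
    # stack the four neighbours phi(i,j), phi(i+1,j), phi(i,j+1), phi(i+1,j+1)
    block = np.stack([phi[:-1, :-1], phi[1:, :-1], phi[:-1, 1:], phi[1:, 1:]])
    return (block.max(axis=0) > 0) & (block.min(axis=0) < 0)
```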
Once the shape contour is obtained, we need to manually select an image and refine its contour into a nearly complete liver shape, which is used as the template contour image for the modified chamfer matching. We also need to mark the perfusion position in the selected image. The modified chamfer matching algorithm can then relate the other slices' perfusion positions to the template image. Fig. 2 shows the selected shape before and after the refinement.
Fig. 2. Selection of the template image. (left) Shape before refinement. (right) Shape after refinement.
3.4 Modified Chamfer Matching
In chamfer matching, the two input images are not symmetrical. The source image is a distance map, formed by assigning to each pixel its distance to the nearest edge pixel. The template image is a binary image containing the shape to be matched. In our application, the active contour model's evolution result is naturally a distance map, and the template image is the one selected and refined in step 3, so the two images come from different slices. Because of the low quality of our MRI image series, it is always a challenge to distinguish the liver's boundary region from other regions, which results in a very noisy distance map.
To solve this problem, we propose two measures. Firstly, the noisy part of the distance map is ignored, which is realized by deleting the corresponding noisy pixels in the template image; these noise regions then no longer influence the root mean square average. Secondly, the number of hit edge pixels is introduced. After a transformation, the template image is aligned to the source image; a corresponding pixel in the source image whose value is near zero lies on the boundary of the liver shape. These pixels are called hit edge pixels, and the more hit edge pixels there are, the better the match. So, letting v_i be the hit edge pixels and N their number, we want to maximize the following quantity over different transformations:

$$F(N, v_i) = \frac{N}{\left((v_1^2 + v_2^2 + \cdots + v_N^2)/N\right)^{1/2}/3} \quad (8)$$
Then we can obtain the relative translations dx_i and dy_i along the X and Y axes of the i-th image with respect to the template image's perfusion position. If x_0 and y_0 denote the template image's perfusion position and x_i, y_i the i-th image's, we have:

$$x_i = x_0 + dx_i, \qquad y_i = y_0 + dy_i \quad (9)$$
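A sketch of evaluating the score (8) for one candidate translation and searching for the best shift of (9); the hit threshold and the brute-force search radius are our own assumptions, not values from the paper:

```python
import numpy as np

def match_score(dist_map, template_pts, dx, dy, hit_tol=1.0):
    """Score of Eq. (8); template_pts is an (N, 2) array of integer
    (row, col) edge-pixel coordinates of the template shape."""
    h, w = dist_map.shape
    pts = template_pts + np.array([dy, dx])
    ok = (pts[:, 0] >= 0) & (pts[:, 0] < h) & (pts[:, 1] >= 0) & (pts[:, 1] < w)
    v = dist_map[pts[ok, 0], pts[ok, 1]]
    v = v[np.abs(v) < hit_tol]            # hit edge pixels
    n = len(v)
    if n == 0:
        return 0.0
    rms = np.sqrt(np.mean(v**2)) / 3.0
    return n / max(rms, 1e-8)

def best_shift(dist_map, template_pts, radius=10):
    """Exhaustive search over small shifts gives (dx_i, dy_i) of Eq. (9)."""
    shifts = [(dx, dy) for dx in range(-radius, radius + 1)
                       for dy in range(-radius, radius + 1)]
    return max(shifts, key=lambda s: match_score(dist_map, template_pts, *s))
```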
4 Experiments
We implemented the active contour model, using level sets to solve the partial differential equation, and the modified chamfer matching in the C programming language. To evaluate our implementation of the proposed hybrid liver perfusion analysis method, we use a series of two-dimensional 256 × 256 abdominal MRI images. They were acquired with a GE Medical Systems Genesis Signa HiSpeed CT/i system at the Shanghai First People's Hospital, with the following parameters: slice thickness 15.0, repetition time 4.7, echo time 1.2, magnetic field strength 15000, flip angle 60 degrees. The experiments were performed on a PC with a Pentium-M 1600 MHz CPU and 512 MB RAM. Most of the liver's contour can be extracted from the segmentation result by finding the hit edge pixels after the modified chamfer matching. In order to compare the
Fig. 3. Comparison between our segmentation and the FMM+Level Set Method. From left to right: hit edge pixels; complete liver shape obtained by connecting hit edge pixels; segmented liver by our method using the level set method to smooth; segmented liver by the FMM+Level Set Method.
segmentation quality with the FMM + Level Set Method, we connect the contour and make it complete. By taking it as the initial zero level set for the level set method, we can get a smooth segmented liver shape after 20 iterations. This is compared to the segmentation result obtained using only the FMM + Level Set Method. Fig. 3 shows the comparison between our model and the FMM + Level Set Method. It shows that FMM (2500 iterations) + Level Set Method (200 iterations) can only segment the upper part of the liver, while more iterations result in over-segmentation. Our method has the potential to segment the whole liver more precisely. The perfusion curve was confirmed by radiologists from the Shanghai First People's Hospital. Our hybrid method can effectively compensate for the liver's movement. Fig. 4 shows the perfusion intensity curve, compared to the perfusion curve that ignores the liver's movement by using a fixed position across the whole series.
Fig. 4. Liver perfusion intensity curve. Points labeled 'x' are obtained by our hybrid method and smoothed by a Gaussian filter. The curve is compared to the result of the original method using a fixed position across the whole series.
5 Conclusions
This paper introduces a hybrid method combining an active contour model and a chamfer matching algorithm to automatically detect the liver perfusion position and measure its intensity. The experiments reveal that the hybrid method can segment most of the liver, and that the modified chamfer matching algorithm can relate the other slices' perfusion positions to the selected template slice. The modified chamfer matching algorithm is effective because it not only calculates the rms in (4), but also takes the number of matched edge pixels into consideration. We also compare our segmentation result to the FMM+Level Set Method, which reveals that our hybrid method with level set smoothing has the potential to segment the liver region correctly even where the gradient is low.
Future work is to extend the approach to automatically locate the perfusion area in 3D volume data, in order to meet the challenge of the more complex movement of the liver.
Acknowledgements
The authors would like to thank all the members of the image-guided surgery and therapy laboratory at Shanghai Jiaotong University, and Sebastian Nowozin for his patient discussions. Thanks also to Dr. Zhang Hao, Mr. Xie Xueqian, Ms. Guai Hua and Mr. Zhang Tiannin from the Shanghai First People's Hospital for providing the data.
References
1. Bader, T.R., Herneth, A.M., Blaicher, W., Steininger, R., Muhlbacher, F., Lechner, G., Grabenwoger, F.: Hepatic perfusion after liver transplantation: noninvasive measurement with dynamic single-section CT. Radiology, vol. 209, pp. 129-134, 1998.
2. Nowozin, S., Gu, L.: A Novel Liver Perfusion Analysis Method. IEEE Engineering in Medicine and Biology 27th Annual Conference, 2005.
3. Sethian, J.A.: A Fast Marching Level Set Method for Monotonically Advancing Fronts. Proceedings of the National Academy of Sciences, vol. 93, no. 4, pp. 1591-1595, 1996.
4. Osher, S., Sethian, J.: Fronts Propagating with Curvature-Dependent Speed: Algorithms Based on Hamilton-Jacobi Formulations. Journal of Computational Physics, vol. 79, pp. 12-49, 1988.
5. Malladi, R., Sethian, J.A.: An O(N log N) algorithm for shape modeling. Proceedings of the National Academy of Sciences, vol. 93, pp. 9389-9392, 1996.
6. Seniha, E.Y., Ayman, E., Aly, A., Mohamed, E., Tarek, A., Mohamed, A.: Automatic Detection of Renal Rejection after Kidney Transplantation. Computer Assisted Radiology and Surgery, pp. 773-778, 2005.
7. Kass, M., Witkin, A., Terzopoulos, D.: Snakes: Active contour models. First International Conference on Computer Vision, pp. 259-268, 1987.
8. Borgefors, G.: Hierarchical chamfer matching: A parametric edge matching algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 10, no. 6, pp. 849-856, 1988.
9. Barrow, H.G., Tenenbaum, J.M., Bolles, R.C., Wolf, H.C.: Parametric correspondence and chamfer matching: Two new techniques for image matching. Proc. 5th Int. Joint Conf. Artificial Intelligence, pp. 659-663, 1977.
10. Chan, T., Vese, L.: Active contours without edges. IEEE Trans. Image Processing, pp. 266-277, 2001.
Cerebral Vascular Tree Matching of 3D-RA Data Based on Tree Edit Distance W.H. Tang and Albert C.S. Chung Lo Kwee-Seong Medical Image Analysis Laboratory, Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong {cstommy, achung}@cse.ust.hk
Abstract. In this paper, we present a novel approach to matching cerebral vascular trees obtained from 3D-RA data-sets, based on minimization of tree edit distance. Our approach is fully automatic, requiring no human intervention. Tree edit distance is a term used in theoretical computer science to describe the similarity between two labeled trees. In our approach, we abstract the geometry and morphology of vessel branches into the labels of tree nodes and then use combinatorial optimization strategies to compute an approximate edit distance between the trees. Once the optimal tree edit distance is computed, the spatial correspondences between the vessels can be established. By visual inspection of the experimental results, we find that our approach is accurate.
1 Introduction
1.1 Why 3D-RA Data-Sets Are Used?
Three-Dimensional Rotational Angiography (3D-RA) images are playing an increasingly important role in the study of intracranial vasculatures. Due to their higher voxel resolution and better contrast than other imaging modalities, a more accurate surface can be segmented out for further geometric and morphological studies.
1.2 Clinical Motivations
Properties of intracranial aneurysms are under active research in the medical field. Beck et al. [1] and Weir et al. [2] studied the relations between the probability of rupture and the size and site of aneurysms. In specifying the locations of aneurysms, they simply used the names of the arteries where the aneurysms were located. We believe that a more accurate and specific coordinate system for specifying the locations of aneurysms is needed. In specifying the structure of aneurysms, they usually used the diameters of the aneurysms. However, information about the size and orientation of the vessels where the aneurysms were located was not
considered. We believe that an accurate matching between normal vessels and problematic vessels can help locate and quantify the position of the defect, and can give more information to assess the chance of aneurysm rupture.
1.3 Previous Work
The goal of vascular matching is to find spatial correspondences between two sets of vessels. If they are acquired from the same patient, the problem can be solved easily by using rigid registration. We therefore focus on vascular mapping across different patients. This problem is ill-posed since the underlying transformation is unknown and the vascular structures can vary significantly across patients. Up to now, there is no standardized procedure for quantifying the goodness or error of a matching. There is not much research on vascular mapping in the literature; we review some previous work below. Antiga & Steinman [3] suggested a methodology for mapping bifurcating vessels, in which a vessel junction is decomposed into a geometric reference system. However, the emphasis is on the bifurcating junction, not on the whole vascular tree. Cool et al. [4] presented a tissue-based registration approach to forming a vascular density atlas. However, the registration was based on brain tissues rather than the real geometry of the vessels: it only maps the position of the vessels with respect to the embedded brain tissue, and the geometry and morphology of the vessels, such as radius and orientation information, were not used in the registration process. Chillet et al. [5] suggested a method to form a vascular atlas using vessel-to-image affine registration. However, the affine registration used may not be sufficient to represent the underlying deformation needed to match the vessels, especially those which are highly curved and rolled. Jomier & Aylward [6] presented a model-to-image registration approach to mapping vessels. In their work, a global rigid transformation is performed first, followed by local and piece-wise deformation of branches via propagation. However, if the structural difference between the vessels is significant, this method will not work, since a mismatch of branch points will propagate to the child branches. Therefore, a global matching method which makes use of the geometry and morphology of the vessels is desirable.
1.4 Outline
In our method, we abstract the geometry and morphology of vessel branches into the labels of tree nodes. By doing so, we are able to use a tree edit distance approach to find the best matching in a global perspective but without ignoring the geometric and morphological properties of each branch. We will then use combinatorial optimization strategies to compute the approximated edit distance between the trees. Once the optimal tree edit distance is computed, the spatial correspondences between the vessels can be found. Detailed procedures will be explained in Section 2. In Section 3, experimental results will be shown and explained. Finally, in Section 4, a conclusion will be made and possible future work will be suggested.
2 Methodology
2.1 Centreline Extraction
The first step is to obtain the centreline from a pre-segmented 3D-RA data-set. Since we are interested in a tree representation of the centreline, we use a Voronoi diagram-based approach. The Voronoi diagram of the surface points on the segmented vessel is computed. It is a graph G = (V, E, W), where each node v ∈ V represents a point in R³, each edge e ∈ E represents a line segment in R³, and W : E → R is a weight assignment function on the edges. Once W is defined, we can find the weighted shortest path tree G′ = (V, E′) of G, where E′ ⊂ E. We set W(e) to be the reciprocal of the average Euclidean distance from the edge e to the segmented surface. It is not difficult to observe that with this weight assignment function, for every pair of nodes v₁, v₂ ∈ V, the unique simple path between v₁ and v₂ defined in G′ is one of the many paths between v₁ and v₂ defined in G, and this simple path tends to keep the distance from the segmented surface as large as possible throughout. The next step is to detect the endpoints of the vessel. This can be done automatically, as the major vessels usually intersect the bounding box of the 3D-RA data-set, since 3D-RA covers only the region of interest in the brain. Once the endpoints are detected, the centreline is just the union of all paths from the endpoints to the root in the shortest path tree G′. Note that the root is just the endpoint where the internal carotid artery intersects the bounding box of the 3D-RA data-set, which can also be detected easily because of its large cross-sectional radius. Besides the centreline itself, the cross-sectional radius of each centreline point, which can be obtained from the Voronoi diagram, is also stored.
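The shortest path tree G′ is a single Dijkstra pass over the weighted Voronoi graph. The sketch below is a generic illustration, with the graph assumed given as an adjacency dictionary (our own data layout), not the authors' implementation:

```python
import heapq

def shortest_path_tree(adj, root):
    """Dijkstra from root; adj[u] = [(v, w), ...] with w = 1 / (mean distance
    of the Voronoi edge (u, v) to the segmented surface).  Returns the parent
    map defining the tree G' = (V, E')."""
    dist = {root: 0.0}
    parent = {root: None}
    heap = [(0.0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(heap, (nd, v))
    return parent
```

The centreline is then the union of the parent-chains from the detected endpoints back to the root.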
2.2 Branch Representation, Similarity and Importance Measure
A branch in a vascular tree is a vessel both of whose ends are either a vessel junction or an endpoint on the bounding box of the 3D-RA data-set, with no other junctions in between. The whole vascular tree can be represented as a union of all branches, which intersect at junctions. Since we will abstract the geometry of branches into nodes of a theoretical tree, before outlining the tree edit distance algorithm we first present the similarity function between two different branches and the importance function of a branch, which are equivalent to the node relabeling cost and node removal cost, respectively, in the theoretical tree. A branch b can be represented by two parametric functions on the range [0, 1], x_b(s) and r_b(s). x_b(s) is a vector function representing the normalized arc-length-parameterized centreline curve of b, with x_b(0) and x_b(1) the ends closer to and further away from the root, respectively. r_b(s) is a scalar function representing the radius of b along the centreline curve x_b(s) under the same parameterization, i.e., r_b(t) is the radius associated with the centreline point x_b(t), where t ∈ [0, 1]. The similarity function (matching cost) S(b_1, b_2) of two branches b_1 and b_2 with lengths l_1 and l_2, respectively, is defined as

$$S(b_1, b_2) = \alpha S_1(b_1, b_2) + \beta S_2(b_1, b_2) + \gamma S_3(b_1, b_2) \quad (1)$$
where

$$S_1(b_1, b_2) = \frac{l_1 + l_2}{2} \int_0^1 |r_{b_1} - r_{b_2}| \, ds \quad (2)$$
is the cost of matching the radii of the two branches b_1 and b_2;

$$S_2(b_1, b_2) = \frac{l_1 + l_2}{2} \int_0^1 \frac{r_{b_1} + r_{b_2}}{2} \cdot \frac{1}{\pi} \cos^{-1}\!\left(\frac{x'_{b_1} \bullet x'_{b_2}}{\|x'_{b_1}\| \, \|x'_{b_2}\|}\right) ds \quad (3)$$
is the cost of matching the orientations of the two branches b_1 and b_2, weighted by their average radius; and

$$S_3(b_1, b_2) = |l_1 - l_2| \quad (4)$$
is the cost of matching the lengths of the two branches. α, β and γ are weighting parameters, and we set α = β = γ = 1 in our experiments. Note that S(b_1, b_2) is zero if and only if the two branches b_1 and b_2 are identical; otherwise S(b_1, b_2) > 0. The importance function (removal cost) D(b) of a branch b with length l is defined as

$$D(b) = l \int_0^1 r_b \, ds \quad (5)$$
which can be viewed as the surface area of the branch.
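The costs (1)-(5) reduce to simple quadrature over sampled centrelines. A sketch assuming each branch is given as uniformly resampled points and radii (our own data layout, not the authors' code):

```python
import numpy as np

def branch_costs(x1, r1, l1, x2, r2, l2, alpha=1.0, beta=1.0, gamma=1.0):
    """Matching cost S(b1, b2) of Eq. (1); x1, x2: (n, 3) resampled
    centreline points; r1, r2: (n,) radii; l1, l2: branch lengths."""
    ds = 1.0 / (len(r1) - 1)
    s1 = 0.5 * (l1 + l2) * np.sum(np.abs(r1 - r2)) * ds            # Eq. (2)

    t1 = np.gradient(x1, axis=0)                                   # tangents
    t2 = np.gradient(x2, axis=0)
    cosang = np.sum(t1 * t2, axis=1) / (
        np.linalg.norm(t1, axis=1) * np.linalg.norm(t2, axis=1) + 1e-12)
    ang = np.arccos(np.clip(cosang, -1.0, 1.0)) / np.pi
    s2 = 0.5 * (l1 + l2) * np.sum(0.5 * (r1 + r2) * ang) * ds      # Eq. (3)

    s3 = abs(l1 - l2)                                              # Eq. (4)
    return alpha * s1 + beta * s2 + gamma * s3

def removal_cost(r, l):
    """Importance (removal) cost D(b) of Eq. (5)."""
    return l * np.mean(r)
```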
2.3 Tree Edit Distance
With the definitions of matching cost and removal cost of branches in the previous sub-section, we can now abstract the geometry of the branches into nodes of a theoretical tree, where two nodes are connected if and only if the two branches they represent form a junction. The theoretical tree can therefore be viewed as a dual graph of the geometric tree of the centreline; for an example, please see Fig. 1. Note that since it is difficult to assign an order to the children of a node in a 3D tree, the theoretical tree is unordered, i.e., the order of the children of any node is not important, only the parent-child relationship. From this point on, everything about tree edit distance is manipulated on the theoretical tree. The returned result will be the set of nodes to be removed and the correspondences between the unremoved nodes of the two theoretical trees, which are respectively equivalent to the set of branches to be removed in order to yield the best matching and the correspondences between the unremoved branches in the two vascular structures. We apply the algorithm in [7] to perform unordered tree matching. Let T_1 and T_2 be two trees with node sets N_1 and N_2, respectively. The paper suggested a concept called marking. A marking K = (S_1, S_2) is a pair of node sets S_1 ⊆ N_1 and S_2 ⊆ N_2 to be removed from T_1 and T_2, respectively. Removing a node n means setting the parent of all the children of n to be the parent of n and then ignoring n. Note that for a geometric vascular tree, removing a branch may cause two existing unremoved branches to join together into a longer branch;
Fig. 1. The top row shows three geometric trees with the branches labeled and the bottom row shows the corresponding theoretical trees with the nodes labeled. The left column shows the trees before removing F , the middle column shows the trees after removing F but before joining E and G, the right column shows the trees after joining E and G.
in the theoretical tree context, this is handled by joining a node which has only one child into its parent. For a visual example, please see Fig. 1. A marking K = (S_1, S_2) is legal if and only if the resultant trees after the removal of S_1 and S_2, denoted by K(T_1) and K(T_2) respectively, are isomorphic, i.e., there exists a one-to-one correspondence of all nodes between K(T_1) and K(T_2) such that the parent-child relationship is preserved in the mapping. Note that since we are considering unordered trees, the number of such isomorphic mappings can be more than one for a marking. The removal cost D(K) of a marking K = (S_1, S_2) is defined as

$$D(K) = \sum_{s \in S_1} D(s) + \sum_{s \in S_2} D(s) \quad (6)$$
which is the total removal cost of all removed nodes. The matching cost S(K) of a marking K is defined as

$$S(K) = \min_{im \in IM} \sum_{(n_1, n_2) \in im} S(n_1, n_2) \quad (7)$$
where IM is the set of all isomorphic mappings between K(T_1) and K(T_2), (n_1, n_2) is a corresponding pair of nodes in the isomorphic mapping im, and S(n_1, n_2) is the matching cost between the nodes n_1 and n_2. Also, S(K) is ∞ if K is not a legal marking. Thus S(K) is the total matching cost of all corresponding pairs in the best isomorphic mapping between K(T_1) and K(T_2). To find S(K), we use the bottom-up matching algorithm suggested in [7], which uses bipartite
Fig. 2. (Color image) The two figures are two synthetic simple structures to be matched. The branches in black are those to be removed in order to obtain the best matching. Two branches of the same color across the figures are matched branch pairs.
matchings to find the best mapping between the children sets of the two nodes. Finally, the total tree edit distance Dist(T_1, T_2) between two trees T_1 and T_2 is defined as

$$Dist(T_1, T_2) = \min_{K \in 2^{N_1} \times 2^{N_2}} \left[ D(K) + S(K) \right] \quad (8)$$
which is the minimum of the sum of the removal cost and the matching cost over all possible markings. Note that 2^{N_1} × 2^{N_2} in the equation is the Cartesian product of the two power sets, which is the set of all possible markings. Since its size is exponential in the total number of nodes of the two trees, it is computationally infeasible to find the exact tree edit distance, which would require an exhaustive search of all possible markings. Instead, we use an iterative improvement approach suggested in [7], which randomly adds or removes a node from either or both trees to find an approximate solution by downhill optimization. Since this approach is stochastic and easily gets trapped in local minima, it is executed several times and the best solution is taken.
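The bipartite step used when computing S(K) bottom-up can be realized with the Hungarian algorithm; a sketch with scipy, where the subtree cost function S is assumed to be supplied by the enclosing recursion:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def children_matching_cost(children1, children2, S):
    """Minimum-cost pairing between two children sets.  S(a, b) is the
    (recursively computed) matching cost of the subtrees rooted at a and b."""
    if len(children1) != len(children2):
        return float("inf")       # no isomorphism possible at this node
    if not children1:
        return 0.0
    cost = np.array([[S(a, b) for b in children2] for a in children1])
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum()
```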
3 Experimental Results
We first apply our method to a pair of simple synthetic tubular structures. Since the structures consist of just a few branches, we use the exhaustive search method to enumerate all possible markings. Fig. 2 shows the two structures and the matching result. The branches in black are those to be removed in order to obtain the best matching. Branches of the same color across the two structures are matched branch pairs. The matching result agrees with the intuitive mapping from visual inspection. We then apply our method to two sets of real 3D-RA data from two different patients. There are more than 30 branches in total in the data-sets, making exhaustive search impossible. So we execute the stochastic iterative improvement
Fig. 3. (Color image) The left and right columns respectively show the vascular data of two different patients. The top and middle rows are two different views of the segmented surfaces superimposed with the centerlines. The bottom row shows the matching result of our method. The branches in black color are those to be removed in order to obtain the best matching. Two branches of same color across the figures are matched branch pairs (Some colors are used twice in the same figure).
method several times and take the best result. Fig. 3 shows the data and the result. We can see that the major branches are matched correctly. For other branches, the matching is consistent with their relative positions, although we do not apply any positional mismatch penalty in our cost function. Therefore, by visual inspection, our method is accurate. Our current research work is to further test the method on a larger set of 3D-RA data.
4 Conclusion
We have presented a novel method for matching vascular trees. Our method is based on the concept of tree edit distance. The geometry and morphology of the vascular branches are abstracted into the nodes of theoretical trees. The branch similarity and importance measures are incorporated into the node relabeling and removal costs of the theoretical trees. This makes our method able to perform the matching from a global perspective while still considering the geometry of every branch. Experimental results show that our method works on both synthetic data and two sets of segmented 3D-RA vessels from different patients. In the future, we will use different optimization strategies to improve the suboptimal solution and perform experiments on more real data-sets.
References
1. Beck, J., Rohde, S., Berkefeld, J., Seifert, V., Raabe, A.: Size and location of ruptured and unruptured intracranial aneurysms measured by 3-dimensional rotational angiography. Surg. Neurol. 65 (2006) pp. 18-25
2. Weir, B., Disney, L., Karrison, T.: Sizes of ruptured and unruptured aneurysms in relation to their sites and the ages of patients. J. Neurosurg. 96 (2002) pp. 64-70
3. Antiga, L., Steinman, D.A.: Robust and objective decomposition and mapping of bifurcating vessels. IEEE Trans. on Med. Img. 23 (2004) pp. 704-713
4. Cool, D., Chillet, D., Kim, J., Guyon, J.P., Foskey, M., Aylward, S.: Tissue-based affine registration of brain images to form a vascular density atlas. MICCAI (2003) pp. 9-15
5. Chillet, D., Jomier, J., Cool, D., Aylward, S.: Vascular atlas formation using a vessel-to-image affine registration method. MICCAI (2003) pp. 335-342
6. Jomier, J., Aylward, S.R.: Rigid and deformable vasculature-to-image registration: A hierarchical approach. MICCAI (2004) pp. 829-836
7. Shasha, D., Wang, T.L., Zhang, K., Shih, F.Y.: Exact and approximate algorithms for unordered tree matching. IEEE Trans. on Systems, Man, and Cybernetics 24 (1994) pp. 668-678
Application of SVD-Based Metabolite Quantification Methods in Magnetic Resonance Spectroscopic Imaging
Min Huang¹,² and Songtao Lu²
¹ School of Electronic Engineering, South-Central University for Nationalities, Wuhan, 430074, China
² Institute of Biomedical Engineering, Huazhong University of Science and Technology, Wuhan, 430074, China
[email protected], [email protected]
Abstract. MRSI can reveal abnormal metabolite information for different diseases in clinical diagnosis. We investigate the application of SVD-based metabolite quantification methods in 2D MRSI by comparing two different SVD algorithms. In the quantification process, the FID signals are first rearranged into a data matrix. Then a full SVD can be computed with the Golub algorithm, or a partial SVD with the Lanczos algorithm. Finally, the parameter estimates for each metabolite are obtained from the definition of the linear parameter model. The ordinary full SVD must compute all singular values, at a large cost in time. The partial SVD needs to calculate only a few singular values, exploiting the structure of the Hankel matrix to improve the estimation speed. When the SNR of the MRS signals is higher than ten, the computation time of the partial SVD is about one thirteenth of that of the ordinary method; however, the quantification is only half as fast as the ordinary one when the SNR is lower than one. Improvements in the speed and accuracy of metabolite quantification are key factors for 2D MRSI to become a clinical tool in the future.
1 Introduction
As an important application of NMR and a noninvasive imaging method, magnetic resonance spectroscopic imaging (MRSI) has been proposed as a method to diagnose and localize brain tumors [1], temporal lobe epilepsy [2] and many other diseases at an early stage. It presents information in the form of a spectrum for each voxel and a map for each metabolite; the latter represents not only anatomy but also local metabolic states or local tissue abnormalities. In vivo 2D MRSI generally suffers from long imaging times and poor spectral SNR. After spatial reconstruction of the MRSI raw data, the MRS parameters in each voxel need to be quantified to observe the difference between normal and abnormal tissues. Unlike single-voxel MRS, not only the accuracy but also the speed of the quantification process for MRS data in multiple voxels is an important factor for MRSI to become a clinical tool. The least-squares (LS) algorithm is usually used to estimate the nonlinear model parameters in single-voxel MRS [3]. But it requires a number of iteration steps, and start values must be set for the nonlinear parameters, which is very time consuming and thus prohibitive in clinical applications of MRSI. Based on the fact that the MR signal can be fitted with the exponential
decay model, a linear model avoiding iterative calculation can replace the nonlinear model to improve the speed of quantification. We investigate the application of SVD-based metabolite quantification methods in 2D MRSI by comparing two different SVD algorithms using the linear parameter model.
2 Theory
The sampled raw-data signal M(k_x, k_y, t) in the k-domain for 2D MRSI [4] is given by

$$M(k_x, k_y, t) = \sum_{k=1}^{K} \int_x \int_y m_k(x, y) \exp[i 2\pi (k_x x + k_y y)] \exp[(-\lambda_k + i 2\pi f_k) t] \, dx\,dy \quad (1)$$
where m_k(x, y) is the density at position (x, y), λ_k = 1/T_{2k} is the decay constant of the k-th metabolite, and f_k is the chemical shift frequency.
After the spatial reconstruction of the MRSI raw data with an algorithm similar to that of MRI, we get the spatial-domain signal m(x, y, t) in each voxel position, which is a summation of the FID signals of the different metabolites:

$$m(x, y, t) = \sum_{k=1}^{K} m_k(x, y) \exp[(-\lambda_k + i 2\pi f_k) t] \quad (2)$$
We also know that the FID signal of a single metabolite can be expressed as

$$s(t) = a \exp\!\left(-\frac{t}{T_2}\right) \exp[i (2\pi f t + \varphi)] \quad (3)$$
Considering the implied phase at a position (x, y), we can express equation (2) as

$$s(t) = \sum_{k=1}^{K} a_k \exp(i\varphi_k) \exp[(-\lambda_k + i 2\pi f_k) t] \quad (4)$$
The task of metabolite quantification is to estimate the four parameters a_k, λ_k, f_k and φ_k for the k-th metabolite.
2.1 Linear Model and Parameter Estimation
A linear parameter model [5] avoiding iterative calculation can replace the nonlinear equation (4) to improve the speed, with the following form:

$$s_n = \sum_{k=1}^{K} c_k \zeta_k^{n+\delta} + \varepsilon_n, \qquad n = 0, 1, \ldots, N-1 \quad (5)$$
where c_k = a_k exp(iφ_k) denotes the k-th complex-valued amplitude, ζ_k = exp[(-λ_k + i2πf_k)Δt], δ accounts for the delay time t_beg = δΔt, and ε_n is the noise.
A data matrix S with a Hankel structure can be defined:

$$S = \begin{pmatrix} s_0 & s_1 & s_2 & \cdots & s_{M-1} \\ s_1 & s_2 & s_3 & \cdots & s_M \\ \vdots & \vdots & \vdots & & \vdots \\ s_{L-1} & s_L & s_{L+1} & \cdots & s_{N-1} \end{pmatrix} \quad (6)$$
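Building S from an N-point FID is a one-liner with scipy; in this sketch the split L = N/2 is our own assumption (the paper leaves L and M free):

```python
from scipy.linalg import hankel

def fid_to_hankel(s):
    """Arrange an N-point FID s into the L x M Hankel matrix of Eq. (6),
    with H[i, j] = s[i + j]."""
    N = len(s)
    L = N // 2                       # assumed split, so M = N - L + 1
    return hankel(s[:L], s[L - 1:])  # first column s0..s_{L-1}, last row ends s_{N-1}
```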
If the FID signal is noiseless and contains K exponentially decaying sinusoids, the rank of the matrix S is exactly K. But if there is noise in the FID signal, the rank becomes full, i.e., equal to min(M, L). However, if the SNR is not too low, we can still obtain the approximate number K from the position of maximum slope in the singular-value sequence. Our objective therefore becomes computing the rank of the data matrix S by singular value decomposition (SVD) [6]:

$$S_{LM} = U_{LL} \Lambda_{LM} V_{MM}^H \quad (7)$$
If the rank is full, we can set the noise-related singular values to zero to remove the noise, so the matrix S_{LM} can be truncated as follows:

$$S_{LM}^{T} = U_{LK} \Lambda_K V_{MK}^H \quad (8)$$

Then we can compute the LS solution Q_{KK} of the following equation:

$$U_{LK}^{(\uparrow)} Q_{KK} = U_{LK}^{(\downarrow)} \quad (9)$$

where U_{LK}^{(\uparrow)} and U_{LK}^{(\downarrow)} are derived from U_{LK} by deleting the first and the last row, respectively. The K eigenvalues ζ_k of Q_{KK} can then be computed. According to ζ_k = exp[(-λ_k + i2πf_k)Δt], the estimates of f_k and λ_k can be solved for, followed by the LS solution for c_k from equation (5). Finally, from c_k = a_k exp(iφ_k) we obtain the estimates of a_k and φ_k.
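The estimation chain of (7)-(9) followed by the linear solve of (5) fits in a few numpy calls. A sketch, not the authors' code; the model order K is assumed known and the delay δ is taken as zero:

```python
import numpy as np
from scipy.linalg import hankel

def svd_quantify(s, K, dt):
    """Estimate (a_k, lambda_k, f_k, phi_k) from an FID s via Eqs. (7)-(9)."""
    N = len(s)
    L = N // 2
    S = hankel(s[:L], s[L - 1:])

    U, _, _ = np.linalg.svd(S)                                # Eq. (7)
    Uk = U[:, :K]                                             # truncation, Eq. (8)
    Q, *_ = np.linalg.lstsq(Uk[:-1], Uk[1:], rcond=None)      # Eq. (9)
    zeta = np.linalg.eigvals(Q)                               # signal poles

    lam = -np.log(np.abs(zeta)) / dt                          # decay constants
    freq = np.angle(zeta) / (2 * np.pi * dt)                  # frequencies

    # linear amplitudes c_k from Eq. (5)
    n = np.arange(N)[:, None]
    basis = zeta[None, :] ** n
    c, *_ = np.linalg.lstsq(basis, s, rcond=None)
    return np.abs(c), lam, freq, np.angle(c)
```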
2.2 Full SVD and Partial SVD in Parameter Estimation
The SVD of the matrix S in equation (8) is the most time-consuming part of the metabolite quantification process. The full SVD by Golub [6] includes the following steps.

Step 1: reduce the matrix S to a bidiagonal matrix by the Householder method:

$$B = U_1^H S V_1 = \begin{pmatrix} \alpha_1 & \beta_1 & & & \\ & \alpha_2 & \beta_2 & & \\ & & \ddots & \ddots & \\ & & & \alpha_{M-1} & \beta_{M-1} \\ & & & & \alpha_M \end{pmatrix} \quad (10)$$
Step 2: perform the QR iteration with Givens rotations to get the matrix Λ_{LM}:

$$\Lambda_{LM} = U_2^H B V_2 \quad (11)$$
Step 3: form the matrices U and V:

$$U = U_1 U_2, \qquad V = V_1 V_2 \quad (12)$$
But Golub's algorithm takes a long time because of the iterations in Step 2. The full SVD of an L×M matrix takes (2LM² + 4M³) complex multiplications. It does not exploit the structure of the Hankel matrix, wasting memory space and computation time. From equation (8), we know that only the few largest singular values and the related singular vectors need to be calculated for the quantification of K metabolites. Since the data matrix S has a Hankel structure, i.e., S_{lm} = s_{l+m}, we can use a partial singular value decomposition (PSVD) to reduce the time cost greatly. By the Lanczos algorithm [7-8], the PSVD needs only the following recursion:

$$r_j = S^H u_j - \beta_j v_{j-1}, \quad \alpha_j = \|r_j\|_2, \quad v_j = r_j/\alpha_j$$
$$p_j = S v_j - \alpha_j u_j, \quad \beta_{j+1} = \|p_j\|_2, \quad u_{j+1} = p_j/\beta_{j+1}$$

where p_0 ≠ 0, β_1 = ‖p_0‖_2, u_1 = p_0/β_1 and v_0 = 0. If the matrices U_{k+1} = [u_1, u_2, ..., u_{k+1}] and V_k = [v_1, v_2, ..., v_k] remain orthonormal after K steps, we obtain the lower bidiagonal matrix B_K:
$$B_K = \begin{pmatrix} \alpha_1 & & & \\ \beta_2 & \alpha_2 & & \\ & \ddots & \ddots & \\ & & \beta_K & \alpha_K \\ & & & \beta_{K+1} \end{pmatrix} \quad (13)$$
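A direct transcription of the recursion (a sketch; practical Lanczos codes also reorthogonalize U and V against loss of orthogonality, which is omitted here):

```python
import numpy as np

def lanczos_bidiag(S, K, seed=0):
    """K steps of Lanczos bidiagonalization of S, yielding B_K of Eq. (13)."""
    rng = np.random.default_rng(seed)
    L, M = S.shape
    p = rng.standard_normal(L) + 1j * rng.standard_normal(L)   # p0 != 0
    beta = [np.linalg.norm(p)]
    U = [p / beta[0]]
    V = [np.zeros(M, dtype=complex)]                           # v0 = 0
    alpha = []
    for j in range(K):
        r = S.conj().T @ U[j] - beta[j] * V[j]
        alpha.append(np.linalg.norm(r))
        V.append(r / alpha[j])
        p = S @ V[j + 1] - alpha[j] * U[j]
        beta.append(np.linalg.norm(p))
        U.append(p / beta[j + 1])
    B = np.zeros((K + 1, K))
    B[range(K), range(K)] = alpha                              # diagonal
    B[range(1, K + 1), range(K)] = beta[1:]                    # subdiagonal
    return np.array(U).T, np.array(V[1:]).T, B
```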
The process needs only a few recursive steps, with little time cost, and can automatically remove the noise. Now the SVD of the matrix B_K can be obtained, which needs only O(6K³) complex multiplications, a large reduction in time because K is far smaller than the matrix dimensions.

Only the DCT basis functions whose wavelengths are larger than C_DCT are chosen. The lower the value of C_DCT, the more DCT basis functions are used. As C_DCT tends to positive infinity (Inf), no DCT basis is applied and the nonlinear normalization algorithm thus becomes a linear affine one, i.e., only the global affine transforms such as translation, rotation and shear are considered. Since we are investigating the effects of C_DCT, the default values are used for RS and N_Ite, i.e., RS = 1 and N_Ite = 16. Fig. 1 shows the values of H_f, H_θ,
H_φ and H_{f,θ,φ} for various values of C_DCT. One can observe from the figure that the values of H_f, H_θ, H_φ and H_{f,θ,φ} increase monotonically with C_DCT, since fewer DCT basis functions are used. When C_DCT becomes Inf, i.e., no nonlinear deformation is considered, the values
Fig. 1. The effects of cut-off wavelength on anatomical variability
of H_θ and H_φ reach their maximum. However, when C_DCT ≥ 80, the available DCT bases seem unable to describe local deformation precisely, i.e., the values of H_f and H_{f,θ,φ} are even higher than those when C_DCT = Inf. It seems that 20 is the optimal value for C_DCT. The results match well with what one would expect, which proves the effectiveness of MRGW in measuring local structure. Note that when the number of DCT bases increases, the computation and memory costs rise as well. The average time required to finish the spatial normalization process is about 120 seconds when C_DCT = 20 on a PC with a P4 3.0 GHz CPU (2 GB RAM). The PC could not complete the normalization process for smaller values of C_DCT.

$$V_o(s) = \begin{cases} 0 & \text{if } \lambda_2 > 0 \text{ or } \lambda_3 > 0 \\ \left(1 - \exp\!\left(-\dfrac{R_A^2}{2\alpha^2}\right)\right) \exp\!\left(-\dfrac{R_B^2}{2\beta^2}\right) \left(1 - \exp\!\left(-\dfrac{S^2}{2c^2}\right)\right) & \text{elsewhere} \end{cases} \quad (5)$$
where α, β and c are thresholds which control the sensitivity of the line filter to the measures. With the eigenvalues sorted such that |λ_1| ≤ |λ_2| ≤ |λ_3|, three quantities are defined to differentiate blood vessels from other structures: R_A = |λ_1|/|λ_3|, R_B = |λ_1|/(|λ_2 λ_3|)^{1/2}, and S = (λ_1² + λ_2² + λ_3²)^{1/2}. R_B is nonzero only for blob-like structures, the R_A ratio differentiates sheet-like objects from other structures, and S, the Frobenius norm, is used to ensure that random noise effects are suppressed from the response.
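The response (5) at a single voxel, given the ordered Hessian eigenvalues, can be sketched as follows; the defaults α = β = 0.5 and c = 5 are the values used in the experiments below, and the multiscale maximum over Hessian scales is omitted:

```python
import numpy as np

def vesselness(eigvals, alpha=0.5, beta=0.5, c=5.0):
    """Eq. (5) for one voxel; eigvals sorted so |l1| <= |l2| <= |l3|."""
    l1, l2, l3 = eigvals
    if l2 > 0 or l3 > 0:          # bright tubes have large negative l2, l3
        return 0.0
    Ra = abs(l1) / (abs(l3) + 1e-12)
    Rb = abs(l1) / (np.sqrt(abs(l2 * l3)) + 1e-12)
    S = np.sqrt(l1**2 + l2**2 + l3**2)
    return ((1 - np.exp(-Ra**2 / (2 * alpha**2)))
            * np.exp(-Rb**2 / (2 * beta**2))
            * (1 - np.exp(-S**2 / (2 * c**2))))
```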
4.2 Improved Prior Model
The Hammersley-Clifford theorem shows that the prior joint probability p(x) of a hidden label random field can be described by a Gibbs distribution [9]:

$$p(x) = \frac{1}{z} e^{-U(x)} \quad (6)$$
where z is the partition function used to normalize the probability, and U(x) is called an energy function. In this research, the energy function of this model is given by

$$U(x_s) = \sum_{r \in \eta} \left( V_S(x_s, x_r) + V_D(x_s) \right) \quad (7)$$
The spatial component of the energy function is given by

$$V_S(x_s, x_r) = \begin{cases} +\beta_s, & x_s = x_r \\ -\beta_s, & \text{elsewhere} \end{cases} \quad (8)$$
where β_s is a constant, and we introduce the shape component:

$$V_D = \begin{cases} -\beta_D & \text{if } V_o(s) = 0 \text{ and } x_s = \text{background} \\ +\beta_D & \text{if } V_o(s) \neq 0 \text{ and } x_s = \text{vessel} \\ +f(V_o(s)) & \text{if } V_o(s) \neq 0 \text{ and } x_s = \text{background} \\ -f(V_o(s)) & \text{if } V_o(s) = 0 \text{ and } x_s = \text{vessel} \end{cases} \quad (9)$$
where f(V_o(s)) is an ascending function. Since Frangi's vesselness measure is incorporated into the prior model, whether a voxel belongs to vessel or background is determined not only by its intensity and the labels of its neighbors but also by its shape information. Our shape component improves the detection of small vessels that may be missed by the traditional prior model because of their low intensity. The derived objective function in (1) can be optimized by the iterated conditional modes (ICM) method [13]. ICM persistently seeks a lower-energy configuration and never allows increases in energy, which guarantees a fast convergence rate.
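ICM itself reduces to a greedy per-voxel sweep; a minimal sketch, assuming a caller-supplied local energy function that sums the data term of (1) with the clique potentials of (8)-(9) over the voxel's neighborhood:

```python
import numpy as np

def icm(labels, local_energy, max_sweeps=10):
    """Iterated conditional modes for a two-class (0 = background,
    1 = vessel) label field.  local_energy(s, lab) returns the local
    posterior energy of assigning label lab at voxel index s."""
    for _ in range(max_sweeps):
        changed = False
        for s in np.ndindex(labels.shape):
            best = min((0, 1), key=lambda lab: local_energy(s, lab))
            if best != labels[s]:
                labels[s] = best
                changed = True
        if not changed:   # ICM never increases the energy; stop at a fixed point
            break
    return labels
```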
5 Experimental Results
To examine the performance of the proposed method, we have applied it to two clinical datasets. The first TOF clinical dataset was acquired on a 1.5 T GE MRI scanner. The data volume is 256 × 256 × 60 voxels with a voxel size of 0.938 × 0.938 × 2 mm³. In the following two experiments, the parameter configurations were: α = β = 0.5, c = 5, β_s = 1, β_D = 1. Fig. 1 shows the segmentation results using different methods. Vessel surfaces are rendered in 3D using the Visualization Toolkit (VTK). In this experiment f(V_o(s)) is defined as a linear function, f(V_o(s)) = ωV_o(s), where the parameter ω is used to control the influence of the shape component in the prior model. The MIP image is shown in Fig. 1(a); (b) shows the 3D surface rendering of the segmentation result by the traditional prior model. Segmentation results by the proposed shape-incorporated prior model are shown in (c)-(f) with different parameters.
Fig. 1. (a) MIP image, (b) segmentation by the traditional prior model, (c)-(f) segmentation by the proposed prior model with different parameters: ω = 2, ω = 4, ω = 6, ω = 8, ω = 10, respectively
The second clinical data volume is 512 × 512 × 115 voxels with a voxel size of 0.43 × 0.43 × 1.017 mm³. The MIP image is shown in Fig. 2(a), and the segmentations by the traditional prior model and the proposed model are shown in (b) and (c), respectively.
Fig. 2. (a) MIP image, (b) segmentation by the traditional prior model, (c) segmentation by the proposed prior model with ω = 12
Comparing the segmentation results of the traditional prior model with those of the proposed shape-incorporated prior model, we can see that the proposed model improves the capability of detecting tiny vessel branches.
6 Conclusion
In this research, we present a prior model incorporating shape information to extract the whole vasculature. Whether a voxel belongs to vessel or background is determined not only by intensity and spatial information but also by shape information. Therefore, the proposed method can well overcome the drawback of TOF MRA that the intensity of tiny branches is relatively low, and it improves the capability of detecting tiny vessel branches.
References
1. Kirbas, C., Quek, F.: Vessel Extraction Techniques and Algorithms: A Survey. Proceedings of the Third IEEE Symposium on Bioinformatics and Bioengineering, pp. 238-245, 2003.
2. Suri, J.S., Liu, K., et al.: A Review on MR Vascular Image Processing: Skeleton Versus Nonskeleton Approaches: Part II. IEEE Transactions on Information Technology in Biomedicine, vol. 6, no. 4, pp. 338-350, 2002.
3. Hassouna, M.S., Farag, A.A.: Cerebrovascular segmentation from TOF using stochastic models. Medical Image Analysis 10 (2006) 2-18.
4. McLachlan, G., Peel, D.: Finite Mixture Models. New York: Wiley, 2000.
5. Dempster, A., Laird, N., Rubin, D.: Maximum likelihood from incomplete data via the EM algorithm. J. Royal Stat. Soc. B (1977) 1-38.
6. Wong, W.C.K., Chung, A.C.S.: Bayesian Image Segmentation Using Local Iso-Intensity Structural Orientation. IEEE Trans. Image Processing, vol. 14, no. 10, 2005.
7. Meier, T., Ngan, K.N., Crebbin, G.: A robust Markovian segmentation based on highest confidence first (HCF). IEEE Int. Conf. on Image Processing 1 (1997) 216-219.
8. Chung, A.C.S., Noble, J.A., Summers, P.: Fusing speed and phase information for vascular segmentation of phase contrast MR angiograms. Med. Image Anal. 6 (2) (2002) 109-128.
9. Geman, S., Geman, D.: Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-6 (1984) 721-741.
10. Descoteaux, M., Collins, L., Siddiqi, K.: Geometric Flows for Segmenting Vasculature in MRI: Theory and Validation. MICCAI (2004) 500-507.
11. Sato, Y., Nakajima, S., Shiraga, N., Atsumi, H., Yoshida, S., Koller, T., Gerig, G., Kikinis, R.: 3D multi-scale line filter for segmentation and visualization of curvilinear structures in medical images. Medical Image Analysis 2 (2) (1998) 143-168.
12. Frangi, A., Niessen, W., Vincken, K.L., Viergever, M.A.: Multiscale vessel enhancement filtering. MICCAI'98 (1998) 130-137.
13. Besag, J.: On the statistical analysis of dirty pictures. J. R. Stat. Soc. B 48 (3) (1986) 259-302.
Segmentation of 3-D MRI Brain Images Using Information Propagation
Jianzhong Wang¹,², Jun Kong¹,²,*, Yinghua Lu¹, Jingdan Zhang¹, and Baoxue Zhang²
¹ Computer School, Northeast Normal University, Changchun, Jilin Province, China
² Key Laboratory for Applied Statistics of MOE, China
{wangjz019, kongjun, luyh, zhangjz358}@nenu.edu.cn
Abstract. This paper presents an integrated method for adaptive segmentation of brain tissues in three-dimensional (3-D) MRI (Magnetic Resonance Imaging) images. The method performs the volume segmentation in a slice-by-slice manner. Firstly, some slices in the volume are segmented using an automatic algorithm composed of watershed, fuzzy clustering (Fuzzy C-Means) and re-segmentation. Then their adjacent slices can be segmented conveniently by propagating their information, which consists of the watershed lines and the thresholds obtained from the re-segmentation approach. This integrated approach yields a robust and precise segmentation. The efficacy of the proposed algorithm is validated with extensive experiments.
1 Introduction
With the increasing size and number of medical images, the use of computers to facilitate their processing and analysis has become necessary. Segmentation of medical images is often required as a preliminary stage and plays a crucial role in medical image analysis. Because of the advantages of MRI over other diagnostic imaging [2], the majority of research in medical image segmentation pertains to its use for MR images, and many methods are available for MRI image segmentation [1-4]. Niessen et al. roughly grouped these methods into three main categories: classification methods, region-based methods and boundary-based methods. As pointed out in [4], the methods in the first two categories are limited by the difficulties due to intensity inhomogeneities, partial volume effects and susceptibility artifacts, while those in the last category suffer from spurious edges. We present an integrated method for 3-D MRI brain image segmentation in this paper. Firstly, a 2-D image segmentation algorithm consisting of watershed, fuzzy clustering (here, Fuzzy C-Means) and a re-segmentation approach is applied to a few slices of a volume. Then these already segmented slices are considered as key images, and their neighboring slices can be segmented simply by propagating their information. The rest of this paper is organized as follows. Section 2 presents the
Corresponding author. This work is supported by science foundation for young teachers of Northeast Normal University, No. 20061002, China.
three-stage 2-D segmentation method. In Section 3, we describe the slice-by-slice segmentation of 3-D MRI volumes. Experimental results are presented in Section 4, and we conclude this paper in Section 5.
2 2-D Slice Segmentation
2.1 Watershed Algorithm
The watershed algorithm is a popular segmentation method originating from the field of mathematical morphology, and it has been widely used in many fields of image processing. In this study, we apply the Vincent and Soille [5] version of the watershed algorithm, which is based on an immersion simulation: the topographic surface is immersed from its lowest altitude until water reaches all pixels.
Fig. 1. (a) Original image simulated from the MRI brain phantom. (b) Partition result after using the watershed algorithm. (c) Some regions that are not divided completely.
The watershed algorithm possesses a number of advantages. However, some drawbacks also exist. Firstly, the result of the classical watershed algorithm on grey-level images such as tissue images is over-segmentation, as shown in Fig. 1b. Secondly, some regions are not divided completely due to the low contrast within them, particularly in the transitional regions between gray matter and white matter, or cerebrospinal fluid and gray matter. This is clearly shown in Fig. 1c, obtained by zooming in on one part of Fig. 1b.
2.2 Merging the Over-Segmented Regions
The output of the watershed algorithm is the segmentation of I_G into a set of non-overlapping regions denoted by R_i, i = 1, 2, ..., n, where n is the number of regions. To implement the merging of similar regions, we use the FCM clustering method in our study. The mean value m_i, i = 1, 2, ..., n, of each region R_i is needed. The classical FCM clustering algorithm used in this paper is formulated as:
$$J_{FCM}(U, v) = \sum_{i \in \Omega} \sum_{k=1}^{C} u_{ik}^m \, \|m_i - v_k\|^2, \quad \text{subject to} \quad \sum_{k=1}^{C} u_{ik} = 1, \quad \Omega = \{1, 2, \ldots, n\} \quad (1)$$
where the matrix U = {u_ik} is a fuzzy c-partition of I_G, and u_ik gives the membership of region R_i in the k-th cluster c_k. C is the total number of clusters and is set to 3 in our study, because three brain tissues are of interest: CSF (cerebrospinal fluid), GM (gray matter) and WM (white matter). v = {v_1, v_2, v_3} is the set of fuzzy cluster centroids, where v_1, v_2, v_3 denote the centroids of CSF, GM and WM, respectively, and m ∈ (1, ∞) is the fuzzy index (in our study, m = 2).
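The alternating updates that minimize (1) over the region means are standard; a minimal numpy sketch (random centroid initialization is our own choice, not the authors' code):

```python
import numpy as np

def fcm(region_means, C=3, m=2.0, n_iter=100, seed=0):
    """Fuzzy C-Means on the n region means of Eq. (1).
    Returns memberships u (n x C) and centroids v (C,)."""
    rng = np.random.default_rng(seed)
    v = rng.choice(region_means, size=C, replace=False).astype(float)
    for _ in range(n_iter):
        d = np.abs(region_means[:, None] - v[None, :]) + 1e-12  # distances
        u = d ** (-2.0 / (m - 1))
        u /= u.sum(axis=1, keepdims=True)                       # sum_k u_ik = 1
        v = (u**m).T @ region_means / (u**m).sum(axis=0)        # centroid update
    return u, v
```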
Fig. 2. (a) Image after using FCM. (b) A part of (a) zoomed in. (c) The image after the watershed lines are removed from (b). (d) The same part as (c) in the "ground truth".
The image after using FCM on Fig. 1c is shown in Fig. 2a; when we zoom in on a part of this image it becomes apparent that some regions are not partitioned as they should be (see Fig. 2b). Another disadvantage of the watershed algorithm highlights itself when we remove the watershed lines from Fig. 2b and compare it with the "ground truth" (see Fig. 2c and d).
2.3 The Re-segmentation Processing
Before classifying pixels in the incompletely segmented regions, it is necessary to first detect the regions that contain more than one tissue class. Three characteristics of each region are considered in this section: variance, mean value and membership. Because the inhomogeneous regions contain more than one tissue, their variances must be higher than those of the homogeneous ones. Moreover, if a region after watershed segmentation is divided completely and accurately, the region's mean value will be close to the centroid of the class it belongs to, and its membership obtained after FCM will be high. Thus, the feature space is constructed from the variance, mean value and membership of each region (Fig. 3a). Assuming that most of the regions after the watershed algorithm are homogeneous, the incompletely segmented regions can be considered as data outliers in the feature space. We therefore exploit a fast Minimum Covariance Determinant (MCD) estimator [8] in our study for detecting the transitional regions which are not divided completely. Fig. 3b shows the fast MCD detection results. After removing the outliers, the remaining inliers are the homogeneous regions. The MCD estimator not only detects the regions to be re-segmented; the inliers of the region class samples can also be used as a training set for the subsequent supervised classification.
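The outlier-detection step maps onto scikit-learn's fast-MCD implementation; a sketch with an assumed chi-squared rejection threshold, since the paper does not state its cut-off:

```python
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

def detect_inhomogeneous(features, alpha=0.975):
    """features: (n, 3) array of (mean, variance, membership) per region.
    Returns a boolean mask of outlier regions, i.e. regions to re-segment."""
    mcd = MinCovDet().fit(features)
    d2 = mcd.mahalanobis(features)          # squared robust distances
    return d2 > chi2.ppf(alpha, df=features.shape[1])
```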
Fig. 3. The feature space; the x, y and z axes represent the mean value, variance and membership, respectively; different colors and point styles are assigned to different tissue classes. (a) Original samples obtained from the FCM clustering result. (b) The samples after removing the outliers.
Fig. 4. (a) Re-segmented result image using our method. (b) Final result image without watershed lines. (c) Zoom in on the same part as Fig. 2b. (d) The image after the watershed lines are removed from (c).
For the re-segmentation itself, we exploit the classical k-nearest-neighbor (kNN) classifier [9] to partition the regions needing re-segmentation. We choose the means of the inlier regions obtained from the fast MCD estimator as the training set. For each data point to be classified in the re-segmented regions, kNN finds the point's k closest training samples in feature space; the data point is then assigned the label most represented among these k nearest neighbors. We choose k = 7 in our experiments, following the suggestion given by Enas in [10]. To illustrate our re-segmentation approach, the images after re-segmentation of Fig. 2a and b are shown in Fig. 4a and c, respectively; Fig. 4b and d are the images without watershed lines. Comparing Fig. 4d with Fig. 2c and d, the precision and veracity of our method are clearly validated.
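The supervised step maps directly onto an off-the-shelf kNN classifier, with k = 7 as in the paper; a sketch:

```python
from sklearn.neighbors import KNeighborsClassifier

def resegment_pixels(train_X, train_y, pixel_features, k=7):
    """Classify pixels of the detected inhomogeneous regions.
    train_X: inlier-region features; train_y: their FCM tissue labels."""
    knn = KNeighborsClassifier(n_neighbors=k).fit(train_X, train_y)
    return knn.predict(pixel_features)
```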
3 3-D Volume Segmentation
Although the three-stage method described in Section 2 is validated for 2-D image segmentation, applying all three stages to each slice of a 3-D MRI volume is very time and computation consuming. We therefore propose an extension of the 2-D method for volume segmentation. In 3-D MRI volumes, the size and intensity of each structure vary only slightly from one image to the adjacent one. In other words, the differences between a few
adjacent images are very small, as can be seen clearly from Fig. 5a, c and e. So after one slice of a 3-D MRI volume has been segmented by the method described in Section 2, its neighboring images can be segmented simply by propagating its information, which consists of the watershed lines and the mean values and variances of the homogeneous regions. We call the already segmented image I_key, and the adjacent images under segmentation I_seg. Because of the small difference between them and the over-segmentation of the watershed, when we put the watershed lines of I_key onto I_seg directly and partition I_seg into the same regions as I_key, most regions' mean values and variances do not change, or change only a little. We can assume that every region in I_seg still belongs to the class obtained from FCM in I_key. From the MCD estimator used during the segmentation of I_key, we have already obtained the homogeneous regions. Three parameters of these regions in each tissue class are used to detect the homogeneous regions in I_seg: the maximum variance and the minimum and maximum mean values, denoted by V_max, M_min and M_max, respectively. For each region in I_seg, if its mean value is greater than M_min and smaller than M_max of its class, and its variance is less than V_max of its class, the region contains only one tissue and we can assign all pixels in it to its class. Otherwise, the region is inhomogeneous and we use the kNN classifier with the training set obtained in I_key to classify the pixels in it. Fig. 5b, d and f show the segmentation results for the images in Fig. 5a, c and e, respectively. Only the image in Fig. 5d is segmented by the three-stage method; the others are segmented using the information of Fig. 5d.
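The propagation test itself is a pair of threshold checks per region; a sketch with the per-class statistics assumed precomputed on I_key:

```python
def propagate_label(region_mean, region_var, cls, vmax, mmin, mmax):
    """Homogeneity test for one region of I_seg against the statistics of
    its class learned on I_key.  Returns True if the whole region may keep
    the class label cls; False sends its pixels to the kNN classifier."""
    return (mmin[cls] < region_mean < mmax[cls]) and (region_var < vmax[cls])
```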
Fig. 5. (a), (c) and (e) are three adjacent images of a 3-D MRI volume, numbered 77, 78 and 79, respectively; they vary only slightly from each other. (b), (d) and (f) show their segmentation results.
Using the extension method, less than half of the slices in a volume need the watershed algorithm, FCM clustering and MCD estimator, and the remaining slices contain only a small number of pixels that need classifying by the kNN. Thus, the overall execution time and computation for 3-D MRI volumes are reduced.
4 Experimental Results
We implemented our proposed algorithm in Matlab on a Pentium 4 2.40 GHz computer. Both simulated MRI images obtained from the BrainWeb Simulated Brain Database at the McConnell Brain Imaging Centre of the Montreal Neurological Institute (MNI), McGill University [7], and real MRI data obtained from the Internet Brain Segmentation Repository (IBSR) [11] are used in our experiments. Extra-cranial
tissues are removed from all images, and we de-noise the images using a versatile spatially adaptive wavelet-based algorithm [6] prior to segmentation.
4.1 Result Evaluation
In order to evaluate our method for 3-D MRI segmentation, we use in this section a simulated data volume with a T1-weighted sequence, a slice thickness of 1 mm, and a volume size of 21. For every three neighboring slices in the volume, we first process the middle image by the three-stage method in Section 2, and then the other two images are segmented using the information of the already segmented one. Fig. 6a and d are two MRI slices selected from the data volume. The corresponding segmentation results of our approach are shown in Fig. 6b and e, with their "ground truth" in Fig. 6c and f. The result in Fig. 6b is segmented by the three-stage method, and Fig. 6e is segmented using its adjacent image's information. Three different indices (false positive ratio γ_fp, false negative ratio γ_fn, and similarity index ρ [12]) are computed for each of the three brain tissues as quantitative measures to validate the accuracy and reliability of our method. The validation results are shown in Table 1. It can be seen from the table that although the slice numbered 129 is segmented using the information of its neighbor, the similarity indices of all the tissues are still larger than 90%, which indicates an excellent agreement between our segmentation results and the "ground truth". We also compare the execution time between the extension method and applying all three stages to each slice of the data volume. The average processing time per slice is shown in Table 2; the extension method is much quicker for volume segmentation. Fig. 7 shows the 3-D segmentation results of the whole brain and each tissue.
Fig. 6. Segmentation of simulated images from the MRI brain phantom. (a) and (d) are images number 77 and 129, with (b) and (e) their corresponding segmentation results using our proposed approach. (c) and (f) are the "ground truth" of (a) and (d).

Table 1. Validation results for different slices
            Number 77                  Number 129
        γ_fp   γ_fn     ρ         γ_fp   γ_fn     ρ
WM      2.36   7.25   95.01       2.18   7.01   94.93
GM      1.99   9.01   93.25       1.83   9.20   92.32
CSF     6.29   6.18   92.10       6.63   6.15   92.15
Table 2. Average execution time for each slice in the volume
                        Extension method   Three-stage method
Time per slice (s)            2.38                3.97

Fig. 7. 3-D segmentation results for the whole brain and individual tissues. (a) Whole brain, (b) CSF, (c) GM and (d) WM.
4.2 Performance on Actual MRI Data and Result Comparison
We compare the segmentation results of FCM clustering with those of our proposed approach on actual MRI data. Two real MRI images are used in this section. Fig. 8a shows one slice of a real T1-weighted normal MRI image, and Fig. 8d shows a T2-weighted slice with a tumor. Fig. 8b and e are the FCM segmentation results. Using our re-segmentation processing, the resulting images are shown in Fig. 8c and f. During the segmentation of Fig. 8d, the tumor is not considered as an additional tissue, and the cluster number in FCM is kept at three. Visual inspection shows that our approach produces better segmentation than FCM, especially in the transitional regions.
Fig. 8. Segmentation of real MRI images. (a) and (d) Original images. (b) and (e) FCM segmentation. (c) and (f) Segmented results using our approach.
5 Conclusions
We propose a novel approach for segmenting brain tissues in 3-D MRI images. The algorithm segments the MRI volume in a slice-by-slice manner. A 2-D slice segmentation method is first introduced; it is composed of three stages. In the first stage, the watershed algorithm is applied to the brain tissues as an initial segmentation. The following stage is a merging process for the over-segmented regions using FCM clustering. In the third stage, we exploit a method based on the
Minimum Covariance Determinant estimator to detect the regions that need to be segmented again, and then partition them with a supervised k-Nearest Neighbor classifier. Furthermore, we illustrate how to apply our method to 3-D MRI volumes. Experiments show that this integrated scheme yields a robust and precise segmentation.
References
1. Pham, D.L., Xu, C.Y., Prince, J.L.: A survey of current methods in medical image segmentation. Ann. Rev. Biomed. Eng. 2 (2000) 315–337 [Technical report version, JHU/ECE 99–01, Johns Hopkins University].
2. Pham, D., Xu, C., Prince, J.: Current methods in medical image segmentation. Annu. Rev. Biomed. Eng. 2 (2000) 315–337.
3. Clarke, L., Velthuizen, R., Camacho, M., Heine, J., Vaidyanathan, M., Hall, L., Thatcher, R., Silbiger, M.: MRI segmentation: methods and applications. Magn. Reson. Imaging 13(3) (1995) 343–368.
4. Niessen, W., Vincken, K., Weickert, J., ter Haar Romeny, B., Viergever, M.: Multiscale segmentation of three-dimensional MR brain images. Internat. J. Comput. Vision 31(2/3) (1999) 185–202.
5. Vincent, L., Soille, P.: Watersheds in digital spaces: an efficient algorithm based on immersion simulations. IEEE Transactions on Pattern Analysis and Machine Intelligence 13(6) (1991).
6. Pizurica, A., Philips, W., Lemahieu, I., Acheroy, M.: A versatile wavelet domain noise filtration technique for medical imaging. IEEE Trans. Med. Imaging 22(3) (2003) 323–331.
7. Kwan, R.-S., Evans, A., Pike, G.: MRI simulation-based evaluation of image-processing and classification methods. IEEE Trans. Med. Imaging 18(11) (1999) 1085–1097. Available: http://www.bic.mni.mcgill.ca/brainweb.
8. Rousseeuw, P.J., Driessen, K.: A fast algorithm for the minimum covariance determinant estimator. Technometrics 41(3) (1999) 212–223.
9. Cocosco, C.A., Zijdenbos, A.P., Evans, A.C.: A fully automatic and robust brain MRI tissue classification method. Medical Image Analysis 7 (2003) 513–527.
10. Enas, G., Choi, S.: Choice of the smoothing parameter and efficiency of k-nearest neighbour classification. Computers and Mathematics with Applications 12A(2) (1986) 235–244.
11. Kennedy, D.N., Filipek, P.A., Caviness, V.S.: Anatomic segmentation and volumetric calculations in nuclear magnetic resonance imaging. IEEE Transactions on Medical Imaging 8 (1989) 1–7. Available: http://www.cma.mgh.harvard.edu/ibsr/.
12. Zijdenbos, A., Dawant, B.: Brain segmentation and white matter lesion detection in MR images. Crit. Rev. Biomed. Eng. 22(5–6) (1994) 401–465.
Pulsative Flow Segmentation in MRA Image Series by AR Modeling and EM Algorithm

Ali Gooya, Hongen Liao, Kiyoshi Matsumiya, Ken Masamune, and Takeyoshi Dohi

Graduate School of Information Science and Technology, the University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
{gooya, liao, kiyoshi, masa, takdohi}@atre.t.u-tokyo.ac.jp
http://www.atre.t.u-tokyo.ac.jp/
Abstract. Segmentation of CSF and pulsative blood flow based on a single phase contrast MRA (PC-MRA) image can lead to imperfect classifications. In this paper, we present a novel automated flow segmentation method that uses a PC-MRA image series. The intensity time series of each pixel is modeled as an autoregressive (AR) process, and features including the Linear Prediction Coefficients (LPC), the covariance matrix of the LPC, and the variance of the prediction error are extracted from each profile. Bayesian classification of the feature space is then achieved using a non-Gaussian likelihood probability function, and the unknown parameters of the likelihood function are estimated by a generalized Expectation-Maximization (EM) algorithm. The performance of the method, evaluated on both synthetic and real retrospectively gated PC-MRA images, indicates that robust segmentation of CSF and vessels can be achieved with this method.
1 Introduction

Segmentation of CSF and pulsative arteries in phase contrast MRA has important clinical benefits [1]. On the other hand, segmentation of such pulsative flow based on a single phase contrast MRA (PC-MRA) image can produce improper classifications, since the flow cannot be identified by a unique intensity. Moreover, because of a relatively small flow area, achieving a high signal-to-noise ratio with a single PC-MRA image is fundamentally limited. In order to obtain higher efficiency, for instance, A.C.S. Chung et al. [2] use a statistical mixture model to describe background and vascular signals in PC-MRA speed images and incorporate phase information, embedded in the direction of flow vectors, into the segmentation procedure. However, they also report that an imbalance between the intensity distributions of vessels and background can lead to inaccurate statistical mixture modeling [3]. Motivated by these observations, in this paper we present a novel statistical flow segmentation method for series of gated cine PC-MRA images obtained from the same location. This method derives its higher efficiency from characterizing the time series of pixels. In similar work, N. Alperin et al. [4] have proposed a pulsatility-based segmentation method for lumens conducting non-steady flow. Their method requires the user to select some reference points on the lumen to obtain correlation maps and
specify a threshold for segmentation of these maps. In [5], Hata et al. report a Welch t-test statistical method to identify significant changes in pixel intensity profiles as a clue to identifying non-background pixels. However, in their method, to achieve a proper identification of flow, a couple of statistical parameters must be tuned by the user. Our method is fundamentally different from those methods and requires no user interaction. We observe that the time series of white noise exhibits no predictability, while for a real flow, given the history of the previous values of the signal, it should be possible to predict the next value using an autoregressive (AR) model. This paper is based on the work of R.L. Kashyap [6] on optimal feature selection and classification problems based on the Linear Prediction Coefficients (LPC) feature set of time series. Although LPC has already been applied in one-dimensional signal applications such as ECG characterization and speech recognition, the extension of this method to flow identification in cine PC-MRA images with limited samples is not straightforward. In contrast to speech recognition, where the signal has an asymptotically zero mean, the average of the pixel values over its intensity profile is an important feature and may not be subtracted. The contributions of this paper are twofold: first, we have tailored the theory in [6] to the identification of flow in PC-MRA by relaxing the zero-mean assumption of the underlying signals; second, we present a generalized EM algorithm [7] based method to estimate the unknown parameters. The rest of the paper is organized as follows: Section 2 describes the feature sets used for flow identification, Section 3 presents the backbone of our EM-based algorithm, the implementation details and results are presented in Section 4, and Section 5 concludes the paper.
2 Feature Selection

During the gated cine PC-MRA protocol, several images are obtained from the same location. Consequently, if we ignore possible mismatching caused by motion, pixel $k$ can be characterized by a series of observations, namely $z_N^{(k)} = \{y^{(k)}(1), \ldots, y^{(k)}(N)\}$. Given $z_N^{(k)}$, the objective is the classification into one of $C_i$, $i = 1, 2$, the flow and background classes. To this end, we assume that $\{y^{(k)}(\cdot)\}$ is generated from a non-zero-mean AR equation using $y^{(k)}(1), \ldots, y^{(k)}(m)$ as the initial condition:

$$y^{(k)}(t) = \sum_{j=1}^{m} \phi_j^{(k)} y^{(k)}(t-j) + \phi_0^{(k)} + \omega^{(k)}(t), \quad t > m \qquad (1)$$
In (1), $\{\omega^{(k)}(\cdot)\}$ is a sequence of independent random variables with normal distribution $N(0, \rho^{(k)})$. Thus the process $\{y^{(k)}(\cdot)\}$ can be characterized by an $(m+1)$-dimensional vector $(\theta^{(k)})^T = ((\phi^{(k)})^T, \rho^{(k)})$, where $(\phi^{(k)})^T = (\phi_0^{(k)}, \ldots, \phi_m^{(k)})$. If we further assume that the first $m$ samples and the vector $\theta^{(k)}$ are statistically independent, the probability of observing $z_N^{(k)}$ given $\theta^{(k)}$ is:

$$p(z_N^{(k)} \mid \theta^{(k)}) = \prod_{t=m+1}^{N} p\big(y^{(k)}(t) \mid y^{(k)}(t-1), \ldots, y^{(k)}(t-m), \theta^{(k)}\big) \cdot p\big(y^{(k)}(1), \ldots, y^{(k)}(m)\big) \qquad (2)$$
Let $z^{(k)}(t-1) = (1, y^{(k)}(t-1), \ldots, y^{(k)}(t-m))^T$; then equation (2) can be written as:

$$p(z_N^{(k)} \mid \theta^{(k)}) = \big(1/2\pi\rho^{(k)}\big)^{(N-m)/2} \exp\Big[-(1/\rho^{(k)}) \sum_{t=m+1}^{N} \big(y^{(k)}(t) - (z^{(k)}(t-1))^T \phi^{(k)}\big)^2\Big] \cdot p\big(y^{(k)}(1), \ldots, y^{(k)}(m)\big) \qquad (3)$$
Following the definition of complex classes in [6], in order to have higher flexibility in grouping different time series into a unique flow class, the classes are assumed to possess the following probability density functions $p(\theta^{(k)} \mid C_i)$:

$$p\big(((\phi^{(k)})^T, \rho^{(k)})^T \mid C_i\big) = p(\phi^{(k)} \mid \rho^{(k)}, C_i) \cdot p(\rho^{(k)} \mid C_i),$$
$$p(\phi^{(k)} \mid \rho^{(k)}, C_i) = N(\phi_{0i},\, S_{0i}\,\rho^{(k)}),$$
$$p(\rho^{(k)} \mid C_i) = \alpha_i^{\beta_i}\, \Gamma^{-1}(\beta_i) \exp\big[-(\alpha_i/\rho^{(k)}) - (\beta_i + 1)\ln(\rho^{(k)})\big] \qquad (4)$$
In this case the total probability $p(z_N^{(k)} \mid C_i)$ can be obtained as follows:

$$p(z_N^{(k)} \mid C_i) = \int p(z_N^{(k)} \mid \theta)\, p(\theta \mid C_i)\, d\theta \qquad (5)$$
Substituting $p(z_N^{(k)} \mid \theta^{(k)})$ and $p(\theta^{(k)} \mid C_i)$ defined in (4) into (5), it can be shown that $p(z_N^{(k)} \mid C_i) / p(y^{(k)}(1), \ldots, y^{(k)}(m))$ reduces to the following expression:

$$p(z_N^{(k)} \mid C_i) / p\big(y^{(k)}(1), \ldots, y^{(k)}(m)\big) = (2\pi)^{-(N-m)/2}\, \alpha_i^{\beta_i}\, \Gamma\big(\beta_i + (N-m)/2\big)\, \Gamma^{-1}(\beta_i)\, \big(\det S_i^{(k)} / \det S_{0i}\big)^{1/2} \big[\alpha_i + N\rho^{(k)} + (1/2)(\phi^{(k)} - \phi_{0i})^T B_i^{(k)} (\phi^{(k)} - \phi_{0i})\big]^{-(\beta_i + (N-m))/2} \qquad (6)$$
where

$$S^{(k)} = \frac{1}{N}\Big[\sum_t z^{(k)}(t-1)\,(z^{(k)}(t-1))^T\Big]^{-1}, \quad \phi^{(k)} = N S^{(k)} \Big(\sum_t z^{(k)}(t-1)\, y^{(k)}(t)\Big),$$
$$\rho^{(k)} = \frac{1}{N}\sum_t \big(y^{(k)}(t) - (z^{(k)}(t-1))^T \phi^{(k)}\big)^2, \quad S_i^{(k)} = \big[S_{0i}^{-1} + N S^{(k)}\big]^{-1}, \quad B_i^{(k)} = S_{0i}^{-1} - S_{0i}^{-1} S_i^{(k)} S_{0i}^{-1} \qquad (7)$$
In all of these expressions, the summation over $t$ runs from $(m+1)$ to $N$. Equation (7) implies that all the information contained in $z_N^{(k)}$ is included in the vector $\theta^{(k)} = (\phi^{(k)}, \rho^{(k)}, S^{(k)})$, independent of the index $i$. Therefore $\theta^{(k)}$ is the required feature vector and can be obtained for each pixel independently.
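A minimal sketch of this per-pixel feature extraction is given below (Python/NumPy); it is our illustration of Eq. (7) under its least-squares interpretation, not the authors' implementation.

```python
import numpy as np

def ar_features(y, m=1):
    """Extract (phi, rho, S) of Eq. (7) from one pixel's intensity profile y."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    # regressor vectors z(t-1) = (1, y(t-1), ..., y(t-m))^T for t = m+1..N
    Z = np.array([np.concatenate(([1.0], y[t - m:t][::-1])) for t in range(m, N)])
    targets = y[m:]
    # S = (1/N) [sum_t z z^T]^{-1}
    S = np.linalg.inv(Z.T @ Z) / N
    # phi = N S (sum_t z(t-1) y(t)) -- the least-squares AR coefficients
    phi = N * S @ (Z.T @ targets)
    # rho = (1/N) sum_t (y(t) - z(t-1)^T phi)^2 -- prediction error variance
    resid = targets - Z @ phi
    rho = float(resid @ resid) / N
    return phi, rho, S
```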
3 Classification and Parameter Estimation

Let $S = \{s^{(k)} \mid k \in S_M\}$, where $S_M = \{1, \ldots, M\}$ represents a regular lattice structure with $M$ sites (pixels). Let $X = \{x^{(k)} \mid k \in S_M\}$ and $Z = \{z_N^{(k)} \mid k \in S_M\}$ be the true pixel labels and the observed image series respectively, where $x^{(k)} \in \{C_i \mid i = 0, 1\}$ and $z_N^{(k)}$ has the same definition as in the previous section. Also let $X$ be modeled as a set of i.i.d. random variables with priors $p(x^{(k)} = C_i) = w_i$, $\sum_i w_i = 1$. Let $\psi = \{\alpha_i, \beta_i, \phi_{0i}, S_{0i}, w_i \mid i = 0, 1\}$ denote the set of
parameters to be estimated, where $\alpha_i, \beta_i, \phi_{0i}$ and $S_{0i}$ have the same definitions as in Section 2. Bayesian estimation of $X$ can be achieved by the MAP principle, which maximizes $\log(p(X \mid Z, \psi)) \propto \log(p(X, Z \mid \psi))$ with respect to $X$ and $\psi$, i.e.

$$\hat{X} = \arg\max_X \log(p(X, Z \mid \psi))$$

However, $\log(p(X, Z \mid \psi)) = \log(p(Z \mid X, \psi)) + \log(p(X \mid \psi))$; letting $Z$ be modeled as a set of conditionally independent random vectors, we have:

$$\log p(X, Z \mid \psi) = \sum_{k=1}^{M} \big[\log p(z_N^{(k)} \mid x^{(k)}, \psi) + \log p(x^{(k)} \mid \psi)\big]$$
Since both $X$ and $\psi$ are unknown and strongly interdependent, the data set is said to be "incomplete". The problem of maximizing $\log(p(X, Z \mid \psi))$ can be solved by the EM algorithm [7], which is widely used in such situations. EM is an iterative algorithm consisting of two steps: an E step, which calculates the conditional expectation of the objective function based on the latest estimate of the parameters, and an M step, in which the calculated conditional expectation is maximized with respect to the parameter set. In this case, at iteration $n$, the E step is:

$$Q(\psi \mid \psi^{(n-1)}) = E_{X|Z}\{\log p(X, Z \mid \psi) \mid \psi^{(n-1)}, Z\} = \sum_{k=1}^{M} \sum_{i=0}^{1} p(x^{(k)} = C_i \mid z_N^{(k)}, \psi^{(n-1)}) \big[\log p(z_N^{(k)} \mid x^{(k)} = C_i, \psi) + \log w_i\big] \qquad (8)$$
where the posterior probabilities are computed as:

$$p(x^{(k)} = C_i \mid z_N^{(k)}, \psi^{(n-1)}) = \frac{p(z_N^{(k)} \mid x^{(k)} = C_i, \psi^{(n-1)})\, w_i^{(n-1)}}{\sum_{j=0}^{1} p(z_N^{(k)} \mid x^{(k)} = C_j, \psi^{(n-1)})\, w_j^{(n-1)}} \qquad (9)$$
In (9), $p(z_N^{(k)} \mid x^{(k)} = C_i, \psi^{(n-1)})$ represents the likelihood probability function defined in (6), ignoring the term $p(y^{(k)}(1), \ldots, y^{(k)}(m))$. In the M step, the maximization of $Q(\psi \mid \psi^{(n-1)})$ with respect to the parameters $w_i$ can be done analytically: taking derivatives with respect to $w_i$ and using the Lagrange multiplier method to enforce the constraint $\sum_i w_i = 1$, one can easily verify the following update equation:

$$w_i^{(n)} = \frac{1}{M}\sum_{k=1}^{M} p(x^{(k)} = C_i \mid z_N^{(k)}, \psi^{(n-1)}) \qquad (10)$$
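The E step of Eqs. (8)-(9) and the weight update of Eq. (10) can be sketched as follows (Python/NumPy); the log-sum-exp stabilization is our addition, not something discussed in the paper.

```python
import numpy as np

def e_step(log_lik, w):
    """Posterior class probabilities per pixel, Eq. (9).

    log_lik : (M, 2) array of log p(z^(k) | C_i, psi) from Eq. (6)
    w       : (2,) current class priors
    """
    a = log_lik + np.log(w)               # unnormalized log posteriors
    a -= a.max(axis=1, keepdims=True)     # stabilize before exponentiating
    post = np.exp(a)
    post /= post.sum(axis=1, keepdims=True)
    return post                           # (M, 2)

def update_weights(post):
    """Eq. (10): w_i = (1/M) sum_k p(x^(k) = C_i | z^(k))."""
    return post.mean(axis=0)
```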
In contrast to [6], which assumes a large $N$ (typically more than one thousand in speech processing applications) and suggests closed-form equations for updating the other parameters, obtaining such equations for small $N$ is unfortunately extremely difficult, if not impossible. In this case, rather than maximizing $Q(\psi \mid \psi^{(n-1)})$, simply finding a $\psi^{(n)}$ such that $Q(\psi^{(n)} \mid \psi^{(n-1)}) > Q(\psi^{(n-1)} \mid \psi^{(n-1)})$ results in a generalized EM algorithm [7]. The search for such a $\psi^{(n)}$ is performed numerically in a constrained parameter space specified by $0 < \alpha_i$, $2 < \beta_i$ and $0 \leq (S_{0i})_{pp}$, where $(S_{0i})_{pq}$ denotes the $(p, q)$th entry of the matrix $S_{0i}$. Also, since $S_{0i}$ is a symmetric matrix, the dimensionality of the search space is reduced, and optimization is only performed for $(S_{0i})_{pq}$ with $p \geq q$. Updating $\alpha_i, \beta_i, \phi_{0i}$ and $S_{0i}$ is therefore a general constrained optimization problem and can be solved with any known technique; in this paper, we have used the MATLAB fmincon function, which implements an interior trust-region optimization
method [9]. At every iteration, specified by a unique point in the parameter space, the input to this function includes $Q(\psi \mid \psi^{(n-1)})$ and its partial gradients with respect to the entries of $\alpha_i, \beta_i, \phi_{0i}$ and $S_{0i}$, and its output is a new updated point. The algorithm iterates between the E and M steps until the change in the value of $Q(\psi \mid \psi^{(n-1)})$ drops below a threshold. Pixel labels are then decided using the Bayesian decision rule, i.e., each pixel is assigned to the class $C_i$ for which $p(x^{(k)} = C_i \mid z_N^{(k)}, \psi^{(\cdot)})$ is larger.
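A rough sketch of this generalized M step is shown below. The paper uses MATLAB's fmincon; the SciPy optimizer here is merely a stand-in under the same "improve Q, not necessarily maximize it" logic, and the packing of $\psi$ into a flat vector (the names Q_of_psi, psi0, bounds) is our assumption.

```python
import numpy as np
from scipy.optimize import minimize

def m_step(Q_of_psi, psi0, bounds):
    """Generalized-EM M step: return any psi with Q(psi) > Q(psi_prev).

    Q_of_psi : callable mapping a flat parameter vector (alpha_i, beta_i,
               phi_0i and the upper triangle of S_0i) to Q(psi | psi^(n-1)).
    bounds   : box constraints encoding 0 < alpha_i, 2 < beta_i, etc.
    """
    res = minimize(lambda p: -Q_of_psi(p), psi0,
                   method="L-BFGS-B", bounds=bounds)
    # accept the new point only if Q actually improved
    return res.x if -res.fun > Q_of_psi(psi0) else psi0
```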
4 Implementation and Results

The intensity time series of the pixels are modeled as a first-order AR process ($m = 1$), and the features defined in $\theta^{(k)} = (\phi^{(k)}, \rho^{(k)}, S^{(k)})$ are extracted individually for each pixel; an initial segmentation of the feature space is obtained using the K-means algorithm. Initially, for both classes, $w_i$ is set to 0.5, and $\alpha_i, \beta_i$ are chosen so that $1/\rho^{(\cdot)}$ has a gamma distribution $G(\alpha_i, \beta_i)$ [6]. For a first-order AR process, $\phi_{0i}$ and $S_{0i}$ are a $2 \times 1$ vector and a $2 \times 2$ matrix respectively; the parameter update is achieved using a MATLAB-supplied function to optimize $Q(\psi \mid \psi^{(n-1)})$ with respect to 7 parameters (including $\alpha_i$ and $\beta_i$) for each class. The forms of the gradients are not difficult to obtain and are omitted here for the sake of space. The cost function and its partial derivatives are evaluated using a C MEX function on a 3.2 GHz PC platform with 4 GB of memory.

4.1 Synthetic Image Data

In this section, the performance of the proposed method is evaluated on series of synthetic images in the presence of different noise levels. We aim at assessing the ability of the algorithm to deal with different pulsation patterns so as to simulate a typical situation. As shown in Fig. 1 (upper row), our phantom image consists of small areas ($3 \times 3$) distributed evenly, each representing unique average and pulsative flow components; the average and the pulsation increase from bottom to top and from left to right, respectively. We made 4 series of images (each including 20 frames), each representing a different level of noise power; the noise is modeled by a Gauss-Maxwell distribution as in [2], and the corresponding histograms are shown in the middle row. After extracting feature sets from the time series of individual pixels, an initial segmentation is made using the K-means algorithm and then refined by the proposed algorithm until convergence. Fig. 1 (bottom row) shows the result after around 50 iterations. This procedure is repeated 10 times for each series, and the average SNR ratios and global segmentation errors are calculated. As indicated, the method performs reasonable segmentations, except for areas where the characteristics of the noise and flow signals become very similar. In practice, during our simulations we found the algorithm quite robust even against strong noise.
Fig. 1. Sample images including hypothetical flow areas with sinusoidally changing intensities. Upper row, from left to right: SNR = -5.9, -11.32, -15.25, -18.37 dB. Middle row: corresponding histograms of the flow (red) and background (blue) areas, indicating different overlaps of the two distributions. Bottom row: segmentation obtained from our method. Global segmentation errors, from left to right: 0.74%, 1.92%, 6.9%, 7.52%
4.2 Clinical Evaluations

The method is applied to several real clinical PC-MRA image series of the neck and head obtained from a 1.0 T MR system (Philips Medical Systems). For the first study we made a series of 16 retrospectively gated PC-MRA cine frames (Fig. 2). The scan details are as follows: slice thickness = 10 mm, FOV = 25 cm, matrix = 180×256, TE = 6.6 ms, flip angle = 10, VENC = 10.0 cm/sec. As shown in the upper row, the CSF flow around the neck was of interest. After extracting the feature sets, an initial segmentation is made by the K-means algorithm, as in Fig. 2(a). This segmentation is then refined by our EM-based algorithm, Fig. 2(b). Notice that despite heavy noise, our algorithm has not only segmented the strong pulsative flow of CSF but also identified some hardly noticeable flows. These slow-rate flows remain difficult to segment using single-image-based segmentation techniques. Some sparse points are also noticeable; however, based on visual inspection we strongly believe that these pixels belong to thin vessels around the neck. We have also applied the method to other data sets of vasculature (Fig. 3). The same imaging parameters are used as before, except for an increased number of 22 frames gated within a heart cycle. Similarly, the time series are initially classified by the K-means algorithm and then refined by our proposed method. We can observe that the method has clearly identified most of the in-plane vasculature. In the future, we will emphasize concrete validation of this method in consultation with experienced radiologists and with access to more data sets.
Fig. 2. Cine retrospective PC-MRA images indicating pulsative flow of CSF (6 frames out of 16 are shown in the upper row). (a) Initial segmentation obtained from the K-means algorithm. (b) Segmentation obtained from our algorithm.
Fig. 3. Cine retrospective PC-MRA images indicating pulsative flow of CSF and vasculature (6 frames out of 22 are shown in the upper row). (a) Initial segmentation obtained from the K-means algorithm. (b) Segmentation obtained from our algorithm.
5 Conclusion and Summary

In this paper we presented an automatic segmentation technique for pulsative PC-MRA image series based on AR modeling of pixel time series. The algorithm takes advantage of the pulsation information in the time series and does not rely only on average intensity levels. We observe that this method makes the identification of low flow rates possible; this highlights its future applications in the identification of capillaries
and small vessels. However, in the case of strong pulsative flows and ineffective gating, ghost-like flow artifacts may make it difficult to apply the method to the full FOV. Therefore, the application of this method in such cases can be limited to artifact-free areas. Another limitation is the relatively long scanning time (particularly for volume data) and possible motion artifacts and mismatching. One possible solution to alleviate this problem is to model the motion and estimate its parameters along with the other classification parameters within the same framework as our EM-based algorithm; however, this would increase the dimensionality of the search space and the computation time. We are currently embedding the Markov random field notion in the segmentation procedure; this can eliminate the sparse points and improve the connectivity of the segmented areas. Although the method is originally intended for functional PC-MRA, it can also have other applications in perfusion MRI, functional imaging, and contrast-enhanced CT image series analysis. This work was supported in part by a Grant-in-Aid for Scientific Research of the Japan Society for the Promotion of Science (17100008) and the Ministry of Education, Culture, Sports, Science and Technology of Japan (17680037).
References
1. O. Baledent, M. C. H. Feugeas, and I. I. Peretti, "Cerebrospinal fluid dynamics and relation with blood flow," Invest. Radiol., Vol. 36, No. 7, pp. 368-377, 2001.
2. A. C. S. Chung, J. A. Noble, and P. Summers, "Vascular segmentation of phase contrast magnetic resonance angiograms based on statistical mixture modeling and local phase coherence," IEEE Trans. Med. Imaging, Vol. 23, No. 12, pp. 1490-1507, 2004.
3. R. Gan, W. C. K. Wong, and A. C. S. Chung, "Statistical cerebrovascular segmentation in three-dimensional rotational angiography based on maximum intensity projections," Med. Phys., Vol. 32, No. 9, pp. 3017-3028, 2005.
4. N. Alperin and S. H. Lee, "PUBS: Pulsatility-based segmentation of lumens conducting non-steady flow," Magnetic Resonance in Medicine, Vol. 49, pp. 934-944, 2003.
5. N. Hata, T. Wada, K. Kashima, Y. Okada, M. Kitagawa, and T. Chiba, "Non-gated fetal MRI of umbilical blood flow in an acardiac twin," Pediatr. Radiol., Vol. 35, No. 8, pp. 826-829, 2005.
6. R. L. Kashyap, "Optimal feature selection and decision rules in classification problems with time series," IEEE Trans. Info. Theory, Vol. 24, No. 3, pp. 281-288, 1978.
7. A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," J. Roy. Stat. Soc., Series B, Vol. 39, pp. 1-38, 1977.
8. G. A. F. Seber, Multivariate Observations, Wiley, New York, 1984.
9. T. F. Coleman and Y. Li, "An interior, trust region approach for nonlinear minimization subject to bounds," SIAM Journal on Optimization, Vol. 6, pp. 418-445, 1996.
Abdominal Organ Identification Based on Atlas Registration and Its Application in Fuzzy Connectedness Segmentation

Yongxin Zhou and Jing Bai

Tsinghua University, Beijing, 100084, P.R. China
[email protected], [email protected]
Abstract. A framework based on atlas registration is proposed for the automatic identification and segmentation of abdominal organs. The VIP-Man atlas is adopted to guide the whole process. The atlas is registered onto the subject through global registration and organ registration. In global registration, an affine transformation is found to eliminate the global differences between the atlas and the subject, using normalized mutual information as the similarity measure. In organ registration, the organs of interest are registered individually to achieve better alignment, using an original similarity measure proposed here. The registered atlas can be viewed as an initial segmentation of the subject that identifies the organs of interest. As an application of the registered atlas, novel methods are designed to estimate the necessary parameters for fuzzy connectedness (FC) segmentation. Manual intervention is avoided, thus increasing the degree of automation of the method. This atlas-based method was tested on abdominal CT images of Chinese patients. Experimental results indicate the validity of the method for both male and female subjects of different ages. Keywords: Atlas registration, fuzzy connectedness, abdominal organ.
1 Introduction

Organ segmentation is a primary step in image processing for many applications, such as 3D visualization, radiation treatment planning, and medical database construction. Manual segmentation by radiologists is reliable, but tedious and time-consuming. Manual segmentation has the advantage of the expert's a priori anatomical knowledge. This inspires us to complement traditional, solely intensity-based algorithms with anatomical knowledge, which leads to the strategy of atlas-based segmentation. Methods for multiple abdominal organ segmentation can be found in [1-3]. In M. Kobashi et al. [1], organs were segmented with dynamic thresholds under a priori shape information constraints; the segmentation is disturbed by intensity inhomogeneity, and the results tend to be fragmented. In L. Chien-Cheng et al. [2], spatial fuzzy rules based on anatomical knowledge were constructed for identifying abdominal organs; the design of such a set of fuzzy rules is not an easy task, and each organ of interest requires a specific description. In [3],
a probabilistic atlas was mapped onto a subject image to estimate the Gaussian parameters of organ intensities. Manual placement of initial control points in the subject image was required, and organs tend to be underestimated in boundary regions due to the low probabilities assigned by the atlas. In this paper, we are interested in the integration of atlas registration and fuzzy connectedness (FC) segmentation. A novel method for atlas-subject registration is proposed, which consists of global registration and organ registration. The goal of global registration is to eliminate the overall misalignment between the atlas and the subject, e.g., differences in imaging position and individual stature. In organ registration, we try to align each organ of interest individually, that is, separate registrations are carried out for every organ. In global registration, mutual information is used as the similarity measure; in organ registration, an original similarity measure is proposed. The registered atlas can be viewed as an initial segmentation of the subject, in which the organs of interest are identified. As an application of the registered atlas, novel techniques are designed to estimate the necessary parameters for fuzzy connectedness (FC) segmentation. The operator is thus released from the burden of viewing volume data and assigning these parameters manually. We chose the VIP-Man [4] as the atlas, which is a manually segmented whole-body model constructed from the male transversal color photographic images of the Visible Human Project (VHP). Organs and tissues are labeled with unique integers in VIP-Man. Fig. 1 shows a slice from the abdominal part of the VHP and the corresponding slice from the VIP-Man atlas.
Fig. 1. Sample slices from VHP and VIP-Man atlas. Left: a sample slice of the male color photographic images; Right: the corresponding slice of the VIP-Man atlas.
2 Method

2.1 Global Registration

The goal of global registration is to eliminate the overall misalignment between the atlas and the subject. Four concepts are entailed: normalized mutual information as the similarity measure, the similarity transformation, Powell optimization, and nearest neighbor interpolation. Mutual information (MI) as a similarity measure between a gray-level image and a labeled image is worth discussion. MI can be viewed as a measure of how well one image explains the other, and it achieves its maximum when both images are aligned. Images of different modalities can be registered by MI as long
as homogeneous areas in the images correspond to each other [5]. This holds between our atlas and the subject. In other words, the integer-labeled atlas can also be viewed as an intensity image, in which each organ or tissue has a uniform intensity. One drawback of MI is that it is not invariant with respect to overlap, so we use the normalized MI [5], which has been shown to be robust and overlap independent. The similarity transformation $T = (p, q, r, u, v, w, \phi, \omega, \theta)$ consists of 9 parameters: $p, q, r$ for translation, $u, v, w$ for scaling, and $\phi, \omega, \theta$ for rotation. Although we lose some flexibility with a similarity transformation, we gain considerable computational efficiency and optimization robustness. Meanwhile, we rely on the subsequent organ registration and FC segmentation to compensate for the weakness of global registration. Let $S$ denote the subject image and $A$ the atlas image, and let $T * A$ denote the atlas transformed by the similarity transformation $T$. The normalized MI is calculated as
$$nMI(S, T * A) = \frac{H(S) + H(T * A)}{H(S, T * A)}$$
where $H(S)$ is the entropy of $S$, $H(T * A)$ is the entropy of $T * A$, and $H(S, T * A)$ is the joint entropy of $S$ and $T * A$. The transformed atlas $T * A$ is constructed using nearest neighbor interpolation and has the same resolution and size as $S$. The atlas's integer-labeled nature does not allow higher-order interpolation. Meanwhile, the VIP-Man atlas has a rather high resolution (0.33 × 0.33 × 1 mm³), and CT images often have a relatively low resolution, so nearest neighbor interpolation does not introduce apparent aliasing into $T * A$; in other words, the nearest neighbor method is sufficient here. A modified Powell optimization algorithm is utilized to find the optimal global transformation, which maximizes the normalized MI:
$$T_g = \arg\max_T \big(nMI(S, T * A)\big)$$
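For illustration, the normalized MI being maximized can be computed from a joint histogram as in the following sketch (Python/NumPy; the bin count is an arbitrary choice of ours, not from the paper):

```python
import numpy as np

def normalized_mi(subject, atlas_labels, bins=64):
    """nMI = (H(S) + H(T*A)) / H(S, T*A) from a joint histogram.

    subject : gray-level subject image; atlas_labels : transformed
    integer-labeled atlas resampled to the same grid (nearest neighbor).
    """
    hist2d, _, _ = np.histogram2d(subject.ravel(), atlas_labels.ravel(),
                                  bins=bins)
    pxy = hist2d / hist2d.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```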
The Powell optimization algorithm converts a multi-dimensional optimization problem into iterations of one-dimensional optimizations. During the iterations of one-dimensional optimization, the search direction keeps changing. In the modified Powell algorithm, we fix the search directions to the nine directions of the similarity transformation parameters. This modification allows us to set the optimization order of the parameters and to limit the search range for each parameter.

2.2 Organ Registration

In organ registration, we try to align each organ of interest individually, that is, separate registrations are carried out for every organ. The registration of organ $k$ is similar to the global registration in that it also consists of a similarity transform, Powell
optimization and nearest neighbor interpolation. The difference is that mutual information is no longer valid, so a novel similarity measure for organ registration is defined. Each organ of interest in the registered atlas has a corresponding region in the subject. The corresponding region of organ $k$ is defined as $C_k(T) = \{x \mid x \in S,\ T * A(x) = k\}$, where $x$ represents a pixel in $S$. The goal of organ registration is to make the organ's corresponding region cover more organ pixels and fewer non-organ pixels. At this point, we cannot ascertain whether or not a pixel belongs to the organ, so we first define a potential intensity range for the organ:
$$R_k = [\mu_k - \lambda\sigma_k,\ \mu_k + \lambda\sigma_k]$$

where $\mu_k$ and $\sigma_k$ are the pixel intensity mean and standard deviation in $C_k$, and $\lambda$ is a constant coefficient that adjusts the width of $R_k$; $\lambda$ is set to 1.2~1.3. If the intensity distribution of organ $k$ obeys a Gaussian distribution, $R_k$ will cover about 90% of the organ intensity range.
The number of organ pixels and the number of non-organ pixels in $C_k$ can be defined as

$$N_k^{in}(T) = N\{x \mid x \in C_k(T),\ S(x) \in R_k\}, \qquad N_k^{out}(T) = N\{x \mid x \in C_k(T),\ S(x) \notin R_k\}$$

where $N\{\cdot\}$ represents the number of pixels in the set. Then the similarity measure for organ registration can be defined as
$$M_k(T) = N_k^{in}(T) - N_k^{out}(T)$$

When we maximize $M_k(T)$, we in fact try to make organ $k$'s corresponding region under transformation $T$ cover more organ pixels and fewer non-organ pixels. At one time, we register one organ while keeping the other organs unchanged; the new position of the organ being registered must not collide with the other organs' current positions. Considering the interactions among organs, we carry out organ registration in an iterative way. To conclude the atlas registration: the original atlas is registered onto the subject through global registration followed by organ registration, and a new registered atlas is constructed that has pixel-by-pixel correspondence with the subject image. Clearly, the two-step registration is not capable of achieving a perfect alignment between the atlas and the subject. However, the correspondences of organs between the atlas and the subject are established, and thus all organs of interest are identified. Fig. 2 shows sample slices of global registration and organ registration.
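A minimal sketch of the organ similarity measure $M_k(T)$ follows (Python/NumPy, our illustration only); it assumes the transformed atlas labels have already been resampled onto the subject grid:

```python
import numpy as np

def organ_similarity(subject, atlas_T, k, lam=1.25):
    """M_k(T) = N_in - N_out for organ k under candidate transformation T.

    subject : gray-level subject image
    atlas_T : atlas label image under T, resampled to the subject grid
    lam     : the coefficient lambda (1.2~1.3 in the paper)
    """
    region = (atlas_T == k)                  # corresponding region C_k(T)
    vals = subject[region]
    mu, sigma = vals.mean(), vals.std()      # mu_k, sigma_k over C_k(T)
    in_range = (vals >= mu - lam * sigma) & (vals <= mu + lam * sigma)
    n_in = int(in_range.sum())
    n_out = int(vals.size - n_in)
    return n_in - n_out
```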
Fig. 2. Sample slices of atlas registration. Left: global registration; Right: organ registration. The black contours delineate the boundaries of organs in the registered atlas.
The registered atlas can be viewed as an initial segmentation of the subject. Based on this initial segmentation, we can initialize subsequent segmentation methods to obtain a precise segmentation of the organs. In this paper, we select the fuzzy connectedness (FC) method and utilize the registered atlas to estimate the necessary parameters of the FC method.

2.3 FC Segmentation

The fuzzy connectedness framework was first proposed in [6] and further extended in [7], [8]; a tutorial on FC methods appears in [9]. Prior to applying the FC method, we need to define the fuzzy affinity function between neighboring pixels. In our implementation, the fuzzy affinity function of organ $k$ between two pixels $x$ and $y$ is defined as [7]
$$\mu_k(x, y) = \mu_\alpha(x, y)\, \mu_{\phi k}(x, y) \cdot \mu_{\psi k}(x, y)$$

where $\mu_\alpha(x, y)$, $\mu_{\phi k}(x, y)$ and $\mu_{\psi k}(x, y)$ are the space adjacency function, the intensity feature function, and the intensity difference feature function, respectively [7]:

$$\mu_\alpha(x, y) = \begin{cases} 1, & \text{if } x, y \text{ are adjacent} \\ 0, & \text{otherwise} \end{cases}$$
$$\mu_{\phi k}(x, y) = \min\big[P_k(S(x)),\ P_k(S(y))\big], \qquad \mu_{\psi k}(x, y) = Q_k\big(|S(x) - S(y)|\big)$$
Note that the space adjacency function can be defined in a much more sophisticated way than above; the general principle is that the closer $x$ is to $y$, the greater the value of $\mu_\alpha(x, y)$. $P_k$ is the intensity probability density function (PDF) of organ $k$, and $Q_k$ is the PDF of absolute intensity differences between neighboring pixels in organ $k$. The organ intensity PDF can be estimated as the normalized intensity histogram in
$C_k$:

$$P_k(l) = \frac{H_k(l)}{\sum_l H_k(l)}$$
where $H_k(l)$ is the intensity histogram of $C_k$, and $l$ represents an intensity level. Note that $C_k$ is the corresponding region of organ $k$ based on the organ-registered atlas. Let $DH_k(l)$ be the absolute intensity difference histogram between neighboring pixels in $C_k$. Then $Q_k$ is defined as

$$Q_k(l) = \frac{DH_k(l)}{\sum_l DH_k(l)}$$
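The two PDFs can be estimated from the corresponding region as in the sketch below (Python/NumPy, our illustration; the use of 4-adjacent pixel pairs for the difference histogram is an assumption):

```python
import numpy as np

def organ_pdfs(subject, region_mask, n_levels=256):
    """Estimate P_k (intensity PDF) and Q_k (difference PDF) from C_k."""
    subject = subject.astype(np.int64)
    vals = subject[region_mask]
    Hk, _ = np.histogram(vals, bins=n_levels, range=(0, n_levels))
    Pk = Hk / max(Hk.sum(), 1)

    diffs = []
    # vertical neighbor pairs that both lie inside C_k
    pair = region_mask[:-1, :] & region_mask[1:, :]
    diffs.append(np.abs(subject[:-1, :] - subject[1:, :])[pair])
    # horizontal neighbor pairs that both lie inside C_k
    pair = region_mask[:, :-1] & region_mask[:, 1:]
    diffs.append(np.abs(subject[:, :-1] - subject[:, 1:])[pair])
    DHk, _ = np.histogram(np.concatenate(diffs), bins=n_levels,
                          range=(0, n_levels))
    Qk = DHk / max(DHk.sum(), 1)
    return Pk, Qk
```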
Besides the fuzzy affinity function, we also need to assign an initial seed pixel for each organ of interest. $P_k(l)$ gives the probability that a pixel with intensity value $l$ belongs to organ $k$, so we can specify the seed pixel for organ $k$ according to $P_k(l)$. The seed pixel is specified according to the following rules: (1) it should lie in $C_k$; (2) the seed itself should have the largest probability of belonging to the organ; (3) among all pixels satisfying rules (1) and (2), the pixel that has the largest overall probability of belonging to the organ in its neighboring region is selected as the seed pixel. That is,
$$o_k = \arg\max_x \sum_{y \in N(x)} P_k(S(y)), \qquad x \in C_k \ \text{and}\ P_k(S(x)) = \max P_k$$
where $N(x)$ is the set of neighboring pixels of $x$, and $\max P_k$ is the maximum value of $P_k$.
After assigning the fuzzy affinity function and the organ seed pixel, the fuzzy connectedness scene (FCS) of each organ can be calculated using the dynamic programming technique in [9]. Unlike a common segmentation result, the FCS of an organ does not tell us whether or not a pixel belongs to the organ, but rather the probability that it does, so a threshold on the FC strength is needed to convert the FCS into a binary segmentation. We first set the initial threshold, denoted $T$, at the high end of a predefined range $[T_{low}, T_{high}]$, and then segment the FCS with $T$ to get an initial region for the organ, which covers most pixels of the organ. $T$ is then decreased by a small step value, and the shape changes of the segmented region are checked at each step. Only moderate shape changes are allowed: if an excessive change occurs at a step, the procedure stops, and the value of $T$ at the previous step is selected as the optimal threshold. If no excessive shape change occurs, $T_{low}$ is selected as the optimal threshold. Fig. 3 shows sample slices of FC segmentation.
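The threshold selection rule can be sketched as follows (Python/NumPy); the specific shape-change measure used here (relative growth of the region) and its tolerance are our assumptions, since the paper only requires that "moderate" changes be allowed:

```python
import numpy as np

def select_threshold(fcs, t_low, t_high, step=0.02, max_change=0.1):
    """Pick the FC-strength threshold by the shape-change rule above."""
    t = t_high
    prev = fcs >= t
    while t - step >= t_low:
        cur = fcs >= (t - step)
        growth = (cur.sum() - prev.sum()) / max(prev.sum(), 1)
        if growth > max_change:      # excessive change: keep previous T
            return t
        t -= step
        prev = cur
    return t_low                     # no excessive change occurred
```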
Fig. 3. Sample slices of FC segmentation. Left: the liver region in CT image; Middle: FCS of liver; Right: binary segmentation of liver.
3 Results

The proposed segmentation method was tested on 5 CT data sets to verify its versatility across subjects of different sexes and ages. All the images were provided by a hospital in China. The CT images have a pixel size of 0.6~0.7 mm in the transverse slices and a slice thickness of 5 mm. We chose the liver, kidneys, and spleen as the organs of interest. In order to quantitatively evaluate the algorithm's performance, we computed the false positive rate (FPR) and false negative rate (FNR) of the segmentation, taking manual segmentation by an expert as the ground truth. For comparison, we quote the false rates of abdominal organ segmentation reported in reference [3]. Table 1 gives the results. From the last two rows, we can see that the FPR values of our algorithm are somewhat higher than those reported in [3]. As stated in the introduction, the algorithm described in [3] tends to be conservative due to the probabilistic atlas adopted, and its false negative rates are generally larger than its false positive rates. Reference [3] did not select the spleen as an organ of interest, but rather the spinal cord; for our algorithm, the area of the spinal cord is too small, which makes accurate estimation of its intensity PDF difficult.

Table 1. False rates of CT image segmentation
                              Liver            Right Kidney       Left Kidney        Spleen
No.   Sex   Age       FPR      FNR        FPR      FNR        FPR      FNR        FPR      FNR
1     F     70        0.016    0.064      0.066    0.031      0.055    0.015      0.048    0.036
2     F     51        0.062    0.109      0.121    0.068      0.115    0.056      0.066    0.069
3     M     29        0.172    0.066      0.043    0.105      0.055    0.045      0.105    0.175
4     M     11        0.054    0.160      0.129    0.077      0.103    0.082      0.044    0.119
5     F     48        0.027    0.130      0.066    0.055      0.052    0.054      0.056    0.057
Average               0.066    0.106      0.085    0.067      0.076    0.050      0.064    0.091
Average in [3]        0.007    0.078      0.001    0.105      0.001    0.107      n/a      n/a
From Table 1 we can see that our algorithm demonstrates promising results in CT image segmentation. Since different data sets were tested by us and by the authors of reference [3], the comparisons in Table 1 should only be taken as a reference.
4 Conclusions

In this paper, an automatic method for the identification and segmentation of multiple abdominal organs is proposed. The main contribution of our work is twofold. First, we propose a two-step registration method to establish the correspondence between a pre-labeled atlas and a CT image. The global registration step is robust in organ identification and efficient in computation; the organ registration step achieves a better alignment of the organs of interest while avoiding complex deformation of the organs. A novel similarity measure is defined for our organ registration and has been proved effective by our experiments. Second, we utilize the registered atlas to initialize the fuzzy connectedness segmentation, as an alternative to human intervention.
Experiments on CT images indicated the validity of the atlas-based fuzzy connectedness segmentation. Acknowledgement. This work is supported by the National Natural Science Foundation of China, grant #60331010.
References
1. Kobashi, M., Shapiro, L.G.: Knowledge-based organ identification from CT images. Pattern Recognition, vol. 28, pp. 475-491, 1995.
2. Chien-Cheng, L., Pau-Choo, C., Hong-Ming, T.: Identifying multiple abdominal organs from CT image series using a multimodule contextual neural network and spatial fuzzy rules. vol. 7, pp. 217, 2003.
3. Park, H., Bland, P.H., Meyer, C.R.: Construction of an abdominal probabilistic atlas and its application in segmentation. IEEE Transactions on Medical Imaging, vol. 22, pp. 483-492, 2003.
4. Xu, X.G., Chao, T.C., Bozkurt, A.: VIP-Man: An image-based whole-body adult male model constructed from color photographs of the Visible Human Project for multi-particle Monte Carlo calculations. Health Physics, vol. 78, pp. 476-486, 2000.
5. Studholme, C., Hill, D.L.G., Hawkes, D.J.: An overlap invariant entropy measure of 3D medical image alignment. Pattern Recognition, vol. 32, pp. 71-86, 1999.
6. Udupa, J.K., Samarasekera, S.: Fuzzy connectedness and object definition: theory, algorithms, and applications in image segmentation. Graphical Models and Image Processing, vol. 58, pp. 246-261, 1996.
7. Saha, P.K., Udupa, J.K., Odhner, D.: Scale-based fuzzy connected image segmentation: theory, algorithms, and validation. Computer Vision and Image Understanding, vol. 77, pp. 145-174, 2000.
8. Saha, P.K., Udupa, J.K.: Relative fuzzy connectedness among multiple objects: theory, algorithms, and applications in image segmentation. Computer Vision and Image Understanding, vol. 82, pp. 42-56, 2001.
9. Udupa, J.K., Saha, P.K.: Fuzzy connectedness and image segmentation. Proceedings of the IEEE, vol. 91, pp. 1649-1669, 2003.
An Improved 2D Colonic Polyp Segmentation Framework Based on Gradient Vector Flow Deformable Model

Dongqing Chen1, M. Sabry Hassouna1, Aly A. Farag1, and Robert Falk2

1 Computer Vision and Image Processing (CVIP) Lab, Department of Electrical and Computer Engineering, University of Louisville, Louisville, KY 40292, USA
{dqchen, msabry, farag}@cvip.louisville.edu
2 Department of Medical Imaging, Jewish Hospital, Louisville, KY 40202, USA
[email protected]
Abstract. Computed tomography colonography has been proved to be a valid technique for detecting and screening colorectal cancers. In this paper, we present a framework for colonic polyp detection and segmentation. First, we propose to use four different geometric features for colonic polyp detection: the shape index, the curvedness, the sphericity ratio, and the absolute value of the inner product of the maximum principal curvature and the gradient vector flow. Then, we use the bias-corrected fuzzy c-mean algorithm and a gradient vector flow based deformable model for colonic polyp segmentation. Finally, we measure the overlap between the manual segmentation and the algorithm segmentation to test the accuracy of our framework. The quantitative experimental results show that the average overlap is 85.17% ± 3.67%.
1 Introduction

Colorectal cancer, which includes cancer of the colon, rectum, anus, and appendix, is the second leading cause of death among cancers and the third most common form of cancer in the United States [1]. Computed tomographic colonography (CTC), also referred to as virtual colonoscopy (VC), is a minimally invasive technique and rapidly evolving diagnostic tool for the location, detection and identification of benign polyps on the colon wall at an early stage, before their malignant transformation. Polyp segmentation provides the entire voxel set of a polyp and is thus helpful for quantifying the characteristics of the polyp. Segmentation of a polyp in [2]-[3] deals with a surface enclosing the polyp and its interior voxels; in 2D, it is the closed contour at the boundary of the polyp. Several methods have been proposed for colonic polyp segmentation and detection [2]-[4]. Yoshida and Näppi [5] first computed geometric features to characterize polyps, folds and colonic walls at each voxel in the extracted colon. The polyp candidates were then detected by locating the voxels with low curvedness values and high shape index using hysteresis thresholding. Finally, they
applied fuzzy clustering to the candidates to segment the polyps. However, their algorithm may fail to capture polyps with large masses and irregular shapes, where voxels in the polyp region have different curvedness and shape index values. Napel et al. [6] attempted to develop an algorithm to classify the output of CTC. They aimed at eliminating false positives (FPs) only and increasing specificity without sacrificing sensitivity. However, these existing schemes still suffer from too many FPs. Yao et al. [4] presented an automatic method to segment colonic polyps based on a combination of knowledge-based intensity adjustment, fuzzy c-mean (FCM) clustering and a deformable model. In their method, they employed the fuzzy c-mean clustering proposed by Xu, Pham and Prince [7] and the balloon force model proposed by Cohen [8], based on the traditional snake [9]. In [3], two major improvements were made to extend this to 3D polyp segmentation: 1) the knowledge-guided intensity adjustment was extended to 3D, and 2) the active contour models for the 2D case were replaced with 3D dynamic deformable surfaces. As a result, Yao et al. [2] proposed their entire framework for colonic polyp segmentation based on fuzzy clustering and deformable models. Once the 2D polyp segmentation was finished on one slice, the procedure was propagated to the neighboring slices. Finally, all 2D segmentations were stacked up to generate a 3D segmentation of the whole volumetric dataset. In this paper, we present a framework for colonic polyp detection and segmentation. First, we propose to use four different geometric features for colonic polyp characterization and detection: the shape index, the curvedness, the sphericity ratio, and the absolute value of the inner product of the maximum principal curvature and the gradient vector flow (GVF) [10] field. The proposed features dramatically reduce the number of false positive polyps. Then, we use the bias-corrected fuzzy c-mean (BCFCM) algorithm proposed by Farag et al. [11] to solve the problem of noise sensitivity and computational complexity in the method of Xu and Prince, and we employ the GVF-based deformable model [10] for segmentation, to solve the problem of poor convergence associated with the traditional snake. Finally, we measure the overlap between the manual segmentation and the algorithm segmentation to test the accuracy of our framework. The quantitative experimental results show that the average overlap is 85.17% ± 3.67%.
2 Colonic Polyp Detection Using Accurate Principal Curvature Estimation

In this section, we use an improved method for accurate principal curvature estimation [12]. Based on the estimated principal curvatures, we propose four different geometric features for colonic polyp detection: 1) the shape index (SI), 2) the curvedness (CV), 3) the sphericity ratio (SP), and 4) the absolute value of the inner product of the maximum principal curvature and the GVF (MPGVF). The shape index and curvedness [5] are defined in terms of the principal curvatures as
Fig. 1. (a) Three different tissues: (i) polyp: dome-like structure, (ii) folds: ridge-like structure, and (iii) colon wall: near flat or small cup-like structure, and (b) Shape index scales of different 3D geometric shapes ranging from 0 to 1
$$SI = \frac{1}{2} - \frac{1}{\pi}\arctan\frac{\kappa_1 + \kappa_2}{\kappa_1 - \kappa_2} \qquad (1)$$
$$CV = \sqrt{\frac{\kappa_1^2 + \kappa_2^2}{2}} \qquad (2)$$

The sphericity ratio can be computed as

$$SP = \left|\frac{\kappa_1 - \kappa_2}{H}\right| \qquad (3)$$
where $\kappa_1$, $\kappa_2$ and $H$ are the maximum principal curvature, the minimum principal curvature, and the mean curvature, respectively. As we can see in Fig. 1(a), colonic polyps generally appear as dome-like structures growing from the colonic wall into the lumen air, with small curvedness. The haustral folds appear as ridge-like structures with large curvedness values, while the colonic walls look nearly flat or cup-like, with small curvedness. From the shape index scale ranging from 0 to 1, as shown in Fig. 1(b), and the curvedness, we can easily set thresholds on the shape index, $th_{SI}$, and curvedness, $th_{CV}$, to distinguish colonic polyps from other tissues. The generated colonic polyp candidates include true positives (TP) and false positives; hence, SP and MPGVF are used to reduce the false positives.
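A direct transcription of Eqs. (1)-(3) is straightforward; the sketch below (Python/NumPy) is our illustration, with a small epsilon added to guard the divisions:

```python
import numpy as np

def geometric_features(k1, k2, eps=1e-12):
    """Shape index, curvedness and sphericity ratio of Eqs. (1)-(3).

    k1, k2 : arrays of maximum/minimum principal curvature.
    The eps guard for the divisions is our addition.
    """
    H = 0.5 * (k1 + k2)                                   # mean curvature
    SI = 0.5 - (1.0 / np.pi) * np.arctan((k1 + k2) / (k1 - k2 + eps))
    CV = np.sqrt(0.5 * (k1 ** 2 + k2 ** 2))
    SP = np.abs((k1 - k2) / (H + eps))
    return SI, CV, SP
```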
3 Colonic Polyp Segmentation Using BCFCM and GVF Deformable Model

For an overview of the knowledge-guided image intensity adjustment, one can refer to the work of Summers and his group (e.g., Yao et al. [2]-[3]).

3.1 Extraction of Colon Surface

The colon surface is first extracted using region growing and an iso-surface technique. A small local region around the polyp candidate, of size 64 × 64 pixels and centered at the centroid, is obtained to identify the polyp.
3.2 Knowledge Guided Intensity Adjustment
The following basic observations are employed for image intensity adjustment to generate the iso-boundary: 1) a colonic polyp grows from the colon wall into the lumen air, and 2) polyp-lumen boundaries tend to have convex curvatures. We employ a score evaluation system and an image intensity adjustment criterion similar to those in [4]. Finally, the image intensity is decreased if it is not located inside a potential polyp; otherwise, the intensity is increased.
3.3 Bias-Corrected Fuzzy C-Mean Clustering
To solve the problem of noise sensitivity and computational complexity of FCM, we employ the bias-corrected FCM (BCFCM) [11]. The standard FCM objective function $J$ for partitioning $\{x_k\}_{k=1}^{N}$ into $c$ clusters is expressed as

$$J = \sum_{i=1}^{c}\sum_{k=1}^{N} u_{ik}^p \|x_k - v_i\|^2 \qquad (4)$$
where $x_k$ is the true intensity at the $k$th voxel, $\{v_i\}_{i=1}^{c}$ are the prototypes of the clusters, and the array $[u_{ik}]$ represents a partition matrix. We modify (4) by introducing a term that allows the labelling of a pixel (voxel) to be influenced by the labels in its immediate neighborhood. The modified objective function is given by

$$J_m = \sum_{i=1}^{c}\sum_{k=1}^{N} u_{ik}^p \|y_k - \beta_k - v_i\|^2 + \frac{\alpha}{N_R}\sum_{i=1}^{c}\sum_{k=1}^{N} u_{ik}^p \Big(\sum_{y_r \in N_k} \|y_r - \beta_r - v_i\|^2\Big) \qquad (5)$$
where $N_k$ stands for the set of neighbors that exist in a window around $x_k$, and $N_R$ is the cardinality of $N_k$. The effect of the neighbor term is controlled by the parameter $\alpha$; this regularizing term is inversely proportional to the signal-to-noise ratio (SNR). The membership estimator $u_{ik}$, the cluster prototype $v_i$, and the bias field $\beta_k$ can be calculated as:

$$u_{ik} = \frac{1}{\sum_{j=1}^{c}\Big(\dfrac{D_{ik} + \frac{\alpha}{N_R}\gamma_i}{D_{jk} + \frac{\alpha}{N_R}\gamma_j}\Big)^{1/(p-1)}} \qquad (6)$$

$$v_i = \frac{\sum_{k=1}^{N} u_{ik}^p \Big((y_k - \beta_k) + \frac{\alpha}{N_R}\sum_{y_r \in N_k}(y_r - \beta_r)\Big)}{(1 + \alpha)\sum_{k=1}^{N} u_{ik}^p} \qquad (7)$$

$$\beta_k = y_k - \frac{\sum_{i=1}^{c} u_{ik}^p v_i}{\sum_{i=1}^{c} u_{ik}^p} \qquad (8)$$
− βr )
(7) (8)
376
4
D. Chen et al.
Results and Discussion
We demonstrate the effectiveness of the proposed algorithm on the clinical colon datasets. The datasets are acquired using Siemens Sensation CT scanner. The dateset volume is 512 × 512 × 580 with voxel size 0.74 × 0.74 × 0.75 mm3 . Before fly-through navigation for colonic polyp detection, we use our algorithm [13] to compute continuous curve skeletons from 3D volumetric objects. The results of continuous curve skeletons of four different 3D volumetric colon datasets are shown in Fig. 2. Normally, the regular method [5] computes the maximum and minimum principal curvatures as + (9) κ1 = H + H 2 − K + κ2 = H − H 2 − K (10) where, K and H are Gaussian curvature and mean curvature, respectively.
(a)
(b)
(c)
(d)
Fig. 2. Accurate extraction of continuous skeletons from 3D volumetric colon datasets
Unfortunately, they suffered from some problems summarized as follows: 1) √ √ under discrete cases, H 2 − K in κ1 = H + H 2 − K and κ2 = H − H 2 − K can not be always guaranteed to be greater than or equal to zero, 2) κ1 and κ2 computed in this way can not provide any direction information, and 3) large neighborhood for the high accuracy of principal curvatures, increases the computational complexity. The traditional solution to the first problem is that: if H 2 − K is less than 0, κ1 and κ2 are simply set to be zeros. We just call such points on the surface as ”mistreated points”. Some comparison results are shown in Table 1. Total vertices denotes the total number of triangle vertices of mesh surface. It is found that total number of the mistreated points by our method drops dramatically. After the colonic polyp candidates are found (e.g. an example polyp as shown in Fig. 3(a)), since the image intensity of soft tissues around the polyp (≥ −350 Hounsf ield U nit (HU )) is greater than that of lumen air, which is around −1200 HU ∼ −1000 HU , the iso-boundary between lumen air and colon wall is easily identified using an iso-value of −700 HU .
Table 1. Comparison results for mistreated vertices by regular method and our method
                  Colon Dataset1   Colon Dataset2   Colon Dataset3   Colon Dataset4
Total vertices       112273           139042           219619           231042
Regular method         1050             1414             1445              761
Our method                7                3                5                0
Fig. 3. An example polyp. (a) Local region around polyp, (b) result after intensity adjustment, and (c) result after BCFCM.
A curvature threshold $C_{th}$ is set to 0.2 mm⁻¹ to classify the different types of boundaries. The classification criterion is: 1) concave boundaries, if $\kappa > C_{th}$; 2) convex boundaries, if $\kappa < -C_{th}$; 3) otherwise, flat boundaries. Each pixel in the image is given a score based on how far the pixel is located from the iso-boundaries. The score of pixel $x$ is expressed as:

$$score(x) = \sum_{k=1}^{N_d} E(d_k(x), s) \qquad (11)$$
where the $d_k$ are the direction angles along which a bundle of evenly spaced rays is shot from the pixel $x$, and $N_d$ is the number of ray directions. In this paper, $N_d$ is set to 36, which implies an equally spaced direction angle of 10° each. The maximum diameter of a polyp, $s$, is assumed to be 25 mm, since the sizes of the large polyps in our dataset lie within the range of 10-19 mm. We employ the same scoring criterion $E(d_k(x), s)$ given in [2] and [4]. As a result, the image intensity at each pixel $x$ is adjusted according to the assigned score as follows (a sketch of this mapping follows the list):

1) 100 HU, if $score(x) > N_d/2$;
2) 50 HU, if $N_d/2 \geq score(x) \geq N_d/4$;
3) 0 HU, if $N_d/4 > score(x) \geq 0$;
4) −50 HU, if $score(x) < 0$.
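The piecewise adjustment just listed can be implemented as in the following sketch (Python/NumPy, our illustration):

```python
import numpy as np

def adjust_intensity(image, score, n_d=36):
    """Apply the score-based HU adjustment of rules 1)-4) above."""
    delta = np.full(image.shape, -50.0)     # rule 4: score < 0
    delta[score >= 0] = 0.0                 # rule 3: 0 <= score < N_d/4
    delta[score >= n_d / 4] = 50.0          # rule 2: N_d/4 <= score <= N_d/2
    delta[score > n_d / 2] = 100.0          # rule 1: score > N_d/2
    return image + delta
```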
Finally, the image intensity is increased if it is located inside a potential polyp. The results after intensity adjustment and BCFCM clustering are given
in Fig. 3(b) and Fig. 3(c), respectively. From Fig. 3(c), we can see that the three different classes, polyp tissue, non-polyp tissue and lumen air, are clearly distinguished by assigning them higher to lower intensities accordingly. It is then easy to apply the GVF-based active contour for polyp segmentation. The initial active contour is set to be a circle with a half-pixel radius. A maximum displacement of the vertices between iterations of less than a threshold of 0.5 pixel is employed as the stopping criterion for the evolution of the GVF-based active contour. The initial contour, the contour after 5 iterations, the contour after 10 iterations, and the final result are shown in Fig. 4(a), (b), (c) and (d). The evolution is fast, and convergence is guaranteed very well.
Fig. 4. GVF based active contour. (a) Initial contour, (b) after 5 iterations, (c) after 10 iterations, (d) final result, and (e) combined results of algorithm segmentation (red line) and manual segmentation (blue line).
We validate the accuracy of the segmentation results by quantitatively computing the overlap between the manual segmentation and the algorithm segmentation [2]:
$$\Phi = \frac{2\|S_c \cap S_m\|}{\|S_c\| + \|S_m\|} \times 100\% \qquad (12)$$
where $\Phi$ is the overlap measure, $S_c$ is the algorithm segmentation, $S_m$ is the manual segmentation, and $\|\cdot\|$ is the total number of pixels in a segmentation. Table 2 lists the validation results for 7 polyps of different sizes, including three medium polyps and four large polyps. The average overlap is 85.17%, and the standard deviation is about 3.67%. A combined segmentation result is shown in Fig. 4(e), where the red line and blue line denote the results of the algorithm segmentation and the manual segmentation, respectively.

Table 2. Overlap computed by our framework

             Polyp 1   Polyp 2   Polyp 3   Polyp 4   Polyp 5   Polyp 6   Polyp 7   Average   Std
Overlap (Φ)  79.26%    82.84%    84.52%    86.47%    83.47%    88.86%    89.80%    85.17%    3.67%
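Eq. (12) is essentially the Dice similarity expressed as a percentage; a one-function sketch (Python/NumPy, our illustration):

```python
import numpy as np

def overlap(seg, manual):
    """Eq. (12): overlap between algorithm and manual binary masks, in %."""
    seg = seg.astype(bool)
    manual = manual.astype(bool)
    return 200.0 * np.logical_and(seg, manual).sum() / (seg.sum() + manual.sum())
```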
5 Conclusions
In this paper, we have presented a framework for colonic polyp segmentation with substantial false-positive reduction. Four different geometric features, including SP, CV and MPGVF, are used for colonic polyp detection. Colonic polyp segmentation is based mainly on modified fuzzy c-means clustering and a gradient vector flow deformable model. Our preliminary results demonstrate that the proposed approach can produce good segmentation results for colonic polyps. In future work, we plan to combine 3D automatic colonic polyp segmentation and detection with the 3D VC endoscopy developed in our lab, to provide clinicians with a powerful diagnostic tool for screening colonic polyps at an early stage.
References
1. Abbruzzese, J., Pollock, R.: Gastrointestinal Cancer. Springer (2004)
2. Yao, J., Miller, M., Franazek, M., Summers, R.M.: Colonic polyp segmentation in CT colonography based on fuzzy clustering and deformable models. IEEE Transactions on Medical Imaging 23 (2004) 1344-1352
3. Yao, J., Summers, R.M.: Three-dimensional colonic polyp segmentation using dynamic deformable surfaces. (2004) 280-289
4. Yao, J., Miller, M., Summers, R.M.: Automatic segmentation and detection of colonic polyps in CT colonography based on knowledge-guided deformable models. Volume 5031. (2003) 370-380
5. Yoshida, H., Näppi, J.: Three-dimensional computer-aided diagnosis scheme for detection of colonic polyps. IEEE Transactions on Medical Imaging 20 (2001) 1261-1274
6. Acar, B., Napel, S.: Edge displacement field-based classification for improved detection of polyps in CT colonography. IEEE Transactions on Medical Imaging 21 (2002) 1461-1467
7. Xu, C., Pham, D.L., Prince, J.L.: Finding the brain cortex using fuzzy segmentation, isosurface, and deformable surface model. (1997) 399-404
8. Cohen, L.D.: On active contour models and balloons. Computer Vision, Graphics, and Image Processing: Image Understanding 53 (1991) 211-218
9. Kass, M., Witkin, A., Terzopoulos, D.: Snakes: active contour models. International Journal of Computer Vision 1 (1987) 321-331
10. Xu, C., Prince, J.L.: Snakes, shapes, and gradient vector flow. IEEE Transactions on Image Processing 7 (1998) 359-369
11. Ahmed, M.N., Yamany, S.M., Farag, A.A.: A modified fuzzy c-means algorithm for bias field estimation and segmentation of MRI data. IEEE Transactions on Medical Imaging 21 (2002) 193-199
12. Taubin, G.: Estimating the tensor of curvature of a surface from a polyhedral approximation. Proceedings of the Fifth International Conference on Computer Vision (ICCV'95) (1995) 902-907
13. Hassouna, M.S., Farag, A.A., Falk, R.: Differential fly-throughs (DFT): a general framework for computing flight paths. Proc. of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) (2005) 654-661
Segmentation for Medical Image Using a Statistical Initial Process and a Level Set Method

WanHyun Cho¹, SangCheol Park¹, MyungEun Lee², and SoonYoung Park²

¹ Department of Statistics, Chonnam National University, Korea
[email protected]
² Department of Electronics Engineering, Mokpo National University, Korea
{melee, sypark}@mokpo.ac.kr
Abstract. In this paper, we present a segmentation method for medical images based on a statistical clustering technique and the level set method. The segmentation method consists of a pre-processing stage for initialization and a final segmentation stage. First, in the initial segmentation stage, we adopt the Gaussian mixture model (GMM) and the Deterministic Annealing Expectation Maximization (DAEM) algorithm to compute the posterior probability of each pixel belonging to each region, and we obtain an initial segmentation by assigning each pixel to the object with the maximum posterior probability. Next, we use the level set method to achieve the final segmentation. By using the level set method with a newly defined speed function, the segmentation accuracy can be improved while making the boundaries of each object much smoother. This function combines the alignment term, which brings the level set as close as possible to an object boundary, the minimal variance term, which best separates the interior and exterior of the contour, and the mean curvature term, which makes a segmented boundary less sensitive to noise. We also use the Fast Marching Method for re-initialization, which greatly reduces the computing time. The experimental results show that our proposed method can accurately segment synthetic and CT images.
1 Introduction

Medical image processing has revolutionized the field of medicine by providing novel methods to extract and visualize information from medical data acquired using various acquisition modalities. Image segmentation is one of the most important steps in the analysis of preprocessed patient image data, and can help diagnosis, treatment planning and treatment delivery. It is the process of labeling each pixel in a medical image dataset to indicate its tissue type or anatomical structure. However, owing to the noise corruption and sampling artifacts of medical images, classical segmentation techniques such as edge detection and histogram thresholding may cause considerable difficulties [1]. To overcome these difficulties, deformable models have been extensively studied and widely used in medical image segmentation with promising results. Deformable models are curves or surfaces defined within an image domain that can move under the influence of internal forces, which are defined within the curve or surface, and external forces, which are computed from the image data [2].
Deformable models are classified into two types: parametric deformable models and geometric deformable models. Parametric deformable models represent curves and surfaces explicitly in their parametric forms during deformation. This representation allows direct interaction with the model and can lead to a compact representation for fast real-time implementation. Adaptation of the model topology, however, such as splitting or merging parts during the deformation, can be difficult using parametric models. On the other hand, geometric deformable models can handle topological changes naturally. These models represent curves and surfaces implicitly as a level set of a higher-dimensional scalar function [3]. Here we are only interested in image segmentation using geometric deformable models. These models are based on evolution theory and the level set method. Evolution theory studies the deformation process of curves or surfaces using only geometric measures such as the unit normal and curvature [4]. Our segmentation framework consists of a preprocessing technique for initialization and a level set segmentation process for the final segmentation. First, to obtain a proper initial segmentation, we adopt the GMM to characterize the data in each cluster statistically. In particular, the DAEM algorithm is chosen to estimate the parameters of the GMM, and we compute the posterior probabilities for each pixel using the estimated GMM. We then segment the given image by assigning each pixel to the cluster with the highest posterior probability. Second, the level set segmentation process is applied to the image initially segmented by the DAEM algorithm. This method contains three terms, called the smoothing term, the alignment term and the minimal variance term. The smoothing term makes the curve smooth by suppressing noise; it is determined by the mean curvature computed at a given point. The alignment term moves the model toward the boundaries of objects in the input image; it is determined by the inner product of the curve normal and the gradient of the input image, so that the normal best aligns with the image gradient. Finally, the minimal variance term efficiently separates the interior and exterior of the object boundary. This method is applied repeatedly until a proper segmentation result is obtained.
2 Initial Segmentation Using Statistical Modeling and the DAEM Algorithm

Since the deformable model constructed by the level set process generally moves using gradient descent, it seeks local solutions, and therefore the results depend strongly on the initial values. Thus, we control the nature of the solution by specifying an initial model from which the deformation process proceeds. Here we use a novel method for the initial segmentation of CT images, called Deterministic Annealing EM segmentation, which incorporates a Gaussian mixture model into the DAEM algorithm. First, we suppose that a given image consists of a set of disjoint pixels labeled 1 to N, and that it is composed of K distinct materials or classes. We also let y_i denote the gray value observed at the i-th pixel (i = 1, ..., N). If we employ a GMM to characterize the intensity distribution of the gray values observed at
each pixel of the given image, then the density function of intensity y_i is defined by the model

    p(y_i | Θ) = Σ_{k=1}^{K} π_k φ(y_i; μ_k, σ_k²)    (1)
where π_k is the mixture coefficient for each component and φ(y_i; μ_k, σ_k²) denotes a normal distribution with mean μ_k and variance σ_k². Furthermore, we let
Z_1, ..., Z_N denote the hidden class indicator vectors for each pixel, where the k-th element z_ik of Z_i is taken to be one or zero according to whether or not the i-th pixel belongs to the k-th cluster. If the parameter vector π denotes the prior probabilities with which each pixel belongs to a particular cluster, then the probability function of Z_i is given as

    p(z_i; π) = Π_{k=1}^{K} π_k^{z_ik}    (2)
Thus, the joint probability model for the given image can be represented in the form

    p(y_1, ..., y_N; z_1, ..., z_N | Θ, π) = Π_{k=1}^{K} Π_{i=1}^{N} (π_k φ(y_i; μ_k, σ_k))^{z_ik}    (3)
Here, in order to use this model for image segmentation, we need a technique that can obtain globally optimal estimates of the GMM parameters. We use the Deterministic Annealing Expectation Maximization technique. The algorithm proceeds as follows. It starts with an initial value β_0 for the temperature parameter β and (Θ^(0), π^(0)) for the parameter vector Θ and the prior probability π. We first iteratively generate
successive estimates (Θ^(t), π^(t)) at the given value of β by applying the following annealing E-step and M-step for t = 1, 2, ..., and we then repeat the annealing EM steps as we increase the temperature parameter β.

DA-E-Step: Introducing the annealing parameter β, we consider the following objective function:

    ϑ(P_z^(t), Θ) = E_{P_z^(t)}[-log p(y | z, Θ) p(z)] + β · E_{P_z^(t)}[log P_z^(t)]    (4)
The solution of the minimization problem associated with the generalized free energy ϑ(P_z^(t), Θ), with respect to the probability distribution p(z; π) and with the parameter Θ fixed, is the following Gibbs distribution:

    P_β(z | y, Θ) = (p(y | z, Θ) p(z))^β / Σ_{z'∈Ω_z} (p(y | z', Θ) p(z'))^β    (5)
Hence we obtain a new posterior distribution p_β(z | y, Θ) parameterized by β. Using this new posterior distribution, we can obtain the conditional expectation of the hidden variable Z_ik given the observed feature data; this is the posterior probability that the i-th pixel belongs to the k-th cluster:

    τ_k^(t)(y_i) = E(Z_ik) = (π_k^(t-1) φ(y_i; μ_k^(t-1), σ_k^(t-1)))^β / Σ_{j=1}^{K} (π_j^(t-1) φ(y_i; μ_j^(t-1), σ_j^(t-1)))^β    (6)
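For concreteness, a minimal numpy sketch of this annealed E-step; the parameter layout (flat arrays of length K) and the function name are our own assumptions, not from the paper.

    import numpy as np

    def da_e_step(y, pi, mu, sigma, beta):
        """Annealed responsibilities tau[i, k] of Eq. (6).

        y: (N,) gray values; pi, mu, sigma: (K,) current estimates;
        beta: temperature parameter.
        """
        # Gaussian densities phi(y_i; mu_k, sigma_k^2), shape (N, K)
        z = (y[:, None] - mu[None, :]) / sigma[None, :]
        phi = np.exp(-0.5 * z ** 2) / (np.sqrt(2.0 * np.pi) * sigma[None, :])
        w = (pi[None, :] * phi) ** beta        # tempered numerator terms
        return w / w.sum(axis=1, keepdims=True)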
DA-M-Step: Next, we find the minimum of ϑ(P_z^(t), Θ) with respect to Θ with the posterior distribution p_β(z | y, Θ) fixed; that is, we find the estimates Θ^(t) that minimize ϑ(P_z^(t), Θ). Since the second term on the right-hand side of the generalized free energy in Equation (4) is independent of Θ, we only need to find the value of Θ minimizing the first term:

    Q_β(Θ) = E_{P_z^(t)}[-log p(y | z, Θ) p(z)]    (7)
From this minimization, we obtain the following estimators of the mixing proportions, the component means and the variances, respectively given by

    π̂_k = (1/N) Σ_{i=1}^{N} τ_k^β(y_i),

    μ̂_k = Σ_{i=1}^{N} τ_k^β(y_i) y_i / Σ_{i=1}^{N} τ_k^β(y_i),

    σ̂_k² = Σ_{i=1}^{N} τ_k^β(y_i)(y_i - μ̂_k)² / Σ_{i=1}^{N} τ_k^β(y_i),    k = 1, ..., K    (8)
Finally, we use this statistical clustering technique for the segmentation of medical images. Suppose that a given image consists of a set of K distinct objects or clusters C_1, ..., C_K. We segment the image by assigning each pixel to the cluster with the maximum posterior probability; that is, we find the cluster with the largest estimated posterior probability obtained by the DAEM algorithm,

    ẑ_i = arg max_{1≤k≤K} τ_k^(t)(y_i),    i = 1, ..., N    (9)

and then segment the image by assigning the i-th pixel to the cluster C_{ẑ_i} having the maximum posterior probability.
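A matching sketch of the annealed M-step of Eq. (8) and the labeling rule of Eq. (9); in a full DAEM run, the hypothetical da_e_step and da_m_step above would alternate at each temperature, with beta gradually increased.

    import numpy as np

    def da_m_step(y, tau):
        """Parameter updates of Eq. (8) from responsibilities tau (N, K)."""
        nk = tau.sum(axis=0)                             # effective cluster sizes
        pi = nk / y.shape[0]                             # mixing proportions
        mu = (tau * y[:, None]).sum(axis=0) / nk         # component means
        var = (tau * (y[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk
        return pi, mu, np.sqrt(var)

    def map_labels(tau):
        """Initial segmentation of Eq. (9): most probable cluster per pixel."""
        return tau.argmax(axis=1)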
3 Final Segmentation of a Medical Image Using the Level Set Method

The level set method specifies a deformable curve or surface as the zero level set of a scalar signed distance function φ : U → ℝ, where we can think of φ(s, t) as a dynamic
volumetric function changing in time t [5,6]. Thus, a curve C can be expressed as the set C = {(x, y) ∈ R² | φ(s, t) = 0}. This deformable model separates the inside and the outside of an object; it is therefore often referred to as the interface. The curve or interface is given by the zero level set of the time-dependent level set function φ at any time t. The geometric curve evolution equation is assumed to be given by

    C_t = ∂C/∂t = F n,    (10)
where F is any speed quantity that does not depend on a specific choice of parameterization. The implicit level set evolution equation that tracks the evolving contour is then given by

    φ_t = ∂φ/∂t = F |∇φ|    (11)
This relation is easily proven by applying the chain rule and using the fact that the normal of any level set is given by the gradient:

    φ_t = F ⟨∇φ/|∇φ|, ∇φ⟩ = F |∇φ|    (12)
This formulation enables us to implement curve evolution in the (x, y) coordinate system, and it automatically handles topological changes of the evolving curve. We now consider solving the level set equation. Let φ(s, t_n) represent the current values of φ at some grid point s and time t_n. Updating in time consists of finding new values of φ at every grid point after some time increment Δt. A rather simple first-order accurate method for the time discretization of equation (11) is the forward Euler method, given by

    φ_t = (φ(s, t_{n+1}) - φ(s, t_n)) / Δt = F |∇φ|.    (13)
So along the time axis, solutions of the level set equation are obtained using finite forward differences, beginning with an initial model and stepping sequentially through a series of discrete time steps. Thus the update equation is given by the following solution:

    φ(s, t_{n+1}) = φ(s, t_n) + Δt · F · |∇φ|    (14)

where F is a user-defined speed term which generally depends on a set of order-n derivatives of φ as well as other functions of s.
The first issue in choosing a proper speed function is the image regularization problem, i.e., removing noise and spots. In general, we need techniques that can remove noise without too much blurring. The speed function in the level set equation is therefore chosen as a smoothing term using mean curvature [7,8]. As an example, we consider the explicit curve evolution given by the geodesic active contour term, C_t = (g(C)κ - ⟨∇g, n⟩)n. The corresponding level set evolution is

    φ_t = div(g(C) ∇φ/|∇φ|) |∇φ|    (15)
Here, if we take the function g(·) to be one, then the speed function in the corresponding level set equation becomes the mean curvature κ_M of the level set in the direction of the curve normal n:

    F_s = α k_I (∇ · (∇φ/|∇φ|)).

This speed function is weighted by a factor α, allowing the user to control the amount of smoothing, and is tuned for each dataset. A multiplicative stopping term k_I slows down the deformable curve near the boundary and stops it at the boundary or edge of the object. It is given as

    k_I = 1 / (1 + |∇(G * I)|).
Second, we can also consider another speed term. This term can lead the deformable curve toward the edges in the input data; it attracts the curve model to certain gray-scale features in the input data. The idea behind this measure is that, in many cases, the gradient direction is a good estimator of the orientation of the edge contour. We note that this measure takes a high value if the curve normal has the same direction as the image gradient. As an example, we also consider the explicit curve evolution given by the robust alignment term and the threshold term [9,10]. The first variation, as a gradient descent process, is

    C_t = sign(⟨∇I, n⟩) ΔI n + β(c_2 - c_1)(I - (c_1 + c_2)/2) n,    (16)

where c_1 = (1/|Ω_C|) ∫∫_{Ω_C} I(x, y) dx dy and c_2 = (1/|Ω\Ω_C|) ∫∫_{Ω\Ω_C} I(x, y) dx dy.
The corresponding level set evolution is

    φ_t = (sign(⟨∇I, ∇φ⟩) ΔI + β(c_2 - c_1)(I - (c_1 + c_2)/2)) |∇φ|    (17)
Finally, by combining the two kinds of terms derived above, we obtain the final level set evolution equation:

    φ^t = φ^{t-1} + Δt · (α k_I (∇ · (∇φ/|∇φ|)) + sign(⟨∇I, ∇φ⟩) ΔI + β(c_2 - c_1)(I - (c_1 + c_2)/2)) · |∇φ|    (18)
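A sketch of how the three terms of Eq. (18) might be assembled into a single speed map; the sign convention (φ > 0 marks the interior), the smoothing scale, the divide-by-zero guard eps and the weights alpha and beta are illustrative assumptions, not values from the paper.

    import numpy as np
    from scipy.ndimage import gaussian_gradient_magnitude, laplace

    def eq18_speed(phi, image, alpha=0.2, beta=1.0, eps=1e-8):
        """Combined speed of Eq. (18), already multiplied by |grad phi|."""
        gy, gx = np.gradient(phi)
        grad_norm = np.sqrt(gx ** 2 + gy ** 2) + eps

        # smoothing: alpha * k_I * div(grad phi / |grad phi|) (mean curvature)
        k_i = 1.0 / (1.0 + gaussian_gradient_magnitude(image, sigma=1.0))
        curvature = (np.gradient(gx / grad_norm, axis=1)
                     + np.gradient(gy / grad_norm, axis=0))

        # alignment: sign(<grad I, grad phi>) * Laplacian(I)
        iy, ix = np.gradient(image)
        alignment = np.sign(ix * gx + iy * gy) * laplace(image)

        # minimal variance: beta * (c2 - c1) * (I - (c1 + c2)/2)
        inside = phi > 0
        c1, c2 = image[inside].mean(), image[~inside].mean()
        min_var = beta * (c2 - c1) * (image - 0.5 * (c1 + c2))

        return (alpha * k_i * curvature + alignment + min_var) * grad_norm

One evolution step of Eq. (18) is then phi = phi + dt * eq18_speed(phi, image).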
4 Experimental Results

To assess the performance of the level set procedure, we first conducted an experiment on a synthetic image using the level set method defined in Section 3. Our synthetic image consists of three binary objects, as shown in Fig. 1(a); a noisy image and a blurred image are obtained from the original image, as shown in Fig. 1(b) and (c), respectively. Here, we add Gaussian noise with mean 10 and standard deviation 60 to the original image, and the blurred image is obtained by applying a 10×10 lowpass filter to the original image. The level set segmentation method is applied to the images with an initial level set model of a circle located at the center of each image.
Fig. 1. Synthetic images and initial level set curve superposed over the images: (a) original image, (b) noisy image, (c) blurred image
The segmented results for the synthetic images using the level set method are shown in Fig. 2. As expected, we observe that the level set method can partition the objects accurately regardless of the presence of noise and blurring effects.
Fig. 2. The segmented results of the level set procedure: (a) original image, (b) noisy image, (c) blurred image
To demonstrate the performance of the proposed segmentation method on medical images, we applied the algorithm consisting of the DAEM and level set procedures to the CT section image in Fig. 3(a). The DAEM algorithm first estimates the parameters of the GMM, and then segments the CT image by assigning each pixel to the region with the maximum posterior probability. The bold curves along the boundary of the lungs are the results extracted using DAEM, as shown in Fig. 3(b). Finally, the lung region is segmented by applying the level set procedure to the image with the initial position in Fig. 3(b). The level set method segments the CT image accurately, including the interior detailed information of the lung, as shown in Fig. 3(c).
Fig. 3. Results of the proposed segmentation scheme: (a) original lung CT image, (b) initial level set position extracted using the DAEM, (c) segmentation result of the level set method
Fig. 4 shows a zoomed-in view of the square area located on the right-hand side of the segmented lung image. It is noted that the level set method segments the blood vessels more accurately than the DAEM method.
Fig. 4. The partly zoomed result of the segmented lung images using (a) DAEM, (b) the level set method
5 Conclusions

In this paper, we have proposed a segmentation method consisting of a pre-processing stage for initialization and a final segmentation stage. During the first stage, we adopt the GMM to characterize the data in each image cluster statistically, and the DAEM algorithm is employed to estimate the parameters of the GMM. After computing the posterior probabilities for each pixel using the estimated GMM, we segment the given image by assigning each pixel to the cluster with the highest posterior probability. During the second stage, the level set segmentation process is applied to the initial segmentation obtained by the DAEM algorithm. This level set procedure can segment the image much more accurately, smoothly and insensitively to noise by exploiting three terms: a smoothing term, an alignment term and a minimal variance term. We conclude from the experiments that the level set operation is robust to noise and blurring effects. The proposed segmentation method consisting of the DAEM and level set procedures has also provided accurate segmentation results for CT images.
References
1. Armato, S.G., Sensakovic, W.F.: Automated lung segmentation for thoracic CT. Academic Radiology 11 (2004) 1011-1021
2. Itai, Y., et al.: Automatic extraction of abnormal areas on CT images of the lung area. International Symposium on Advanced Intelligent Systems, volume 1 (2005) 360-392, Korea
3. Kimmel, R.: Fast edge integration. In: Geometric Level Set Methods, Springer (2003) 59-77
4. Caselles, V., Kimmel, R., Sapiro, G.: Geodesic active contours. In: Proceedings of ICCV 95, Boston, USA (1995) 694-699
5. Malladi, R., Sethian, J.A., Vemuri, B.C.: Shape modeling with front propagation: a level set approach. IEEE Transactions on PAMI 17 (1995) 158-175
6. Pham, D.L., Xu, C., Prince, J.L.: A survey of current methods in medical image segmentation. Technical Report JHU/ECE 99-01, The Johns Hopkins University (1998)
7. Sethian, J.A.: Level Set Methods and Fast Marching Methods. Cambridge University Press (2005)
8. Sonka, M., Fitzpatrick, J.M.: Handbook of Medical Imaging, Volume 2: Medical Image Processing and Analysis. SPIE Press (2000)
9. Vese, L.A., Chan, T.F.: A multiphase level set framework for image segmentation using the Mumford and Shah model. International Journal of Computer Vision 50 (2002) 271-293
10. Whitaker, R.: Modeling deformable surfaces with level sets. IEEE Computer Graphics and Applications 24(5) (2004) 6-9
Leukocyte Detection Using Nucleus Contour Propagation

Daniela M. Ushizima¹, Rodrigo T. Calado², and Edgar G. Rizzatti²

¹ Catholic University of Santos, Intelligent System Group
[email protected]
² University of Sao Paulo, College of Medicine
{calador, rizzattie}@nhlbi.nih.gov
Abstract. We propose a new technique for medical image segmentation, focused on front propagation in blood smear images to fully automate leukocyte detection. The current approach also incorporates contextual information, which is especially important in directing general algorithms to the applied problem. A Bayesian classification of pixels is used to estimate cytoplasm color and is embedded in the speed function to accomplish cytoplasm boundary estimation. We report encouraging results, with evaluations considering difficult situations such as cell adjacency and filamentous cytoplasmic projections.
1 Introduction

The complete blood cell count is one of the most straightforward tests; it reports the white blood cell (WBC) count, the differential count and, as an ancillary test, the morphological evaluation of the blood smear at the microscope [1]. While the standard available systems are quite accurate in the differential counting of normal and malignant cells, they lack subclassification, which is accomplished using cell morphology. Morphological analysis still relies on microscopic observation by humans, and specific cells and/or morphological abnormalities are evaluated through manual observation by experts. Accurate identification and analysis is still a challenging problem in computer vision and is demanding in massive leukocyte recognition [2]. As part of the project Computer Vision for Leukemia Diagnosis, the development of the software Leuko involves improvement of leukocyte segmentation. This paper proposes a fully automated leukocyte detection scheme (Fig. 1), using a dual-component speed function for front propagation based on level set methods [3]. Previous Leuko experiments yielded a high accuracy in leukocyte differentiation [4,5]; however, WBC segmentation still required human interaction. Early tests involving level sets for leukocyte detection are reported in [6], regarding a curvature-dependent speed function; we suspect this approach may embed artificial smoothing into the interface. Sec. 2 introduces blood smear micrographs, and Sec. 3 gives an overview of the clinical decision support system Leuko. Sec. 4 describes how to perform automatic
leukocyte identification using the proposed approach, combining image processing techniques, Bayesian classification and level set methods. The Experiments and Results section (Sec. 5) evaluates the accuracy of the presented scheme on artificial and real images, and the Discussion (Sec. 6) summarizes the contribution of this paper and future developments.
2 Blood Images

Leishman-stained blood smear slides were digitized using a 2.1-megapixel digital camera. The 24-bit color pictures, 720 by 480 pixels in size, provide sufficient resolution for chromatin texture description [2], since the pixel size is 0.06 μm. The camera is attached to a bright-field microscope set up to magnify 1,250x. Basically, the leukocytes can be subdivided into mononucleated and polymorphonucleated types. Types L and M are mononucleated, since their nuclei are typically round and cleft, respectively, and presented as a single region; the cytoplasm rarely has granules. The M type can present vacuolation, and M cells are larger than L cells. Polymorphonucleated cells, or granulocytes, are the B, E and N types, which present segmented (E and N) or diffuse (B) nuclei and cytoplasmic granules. The granules of B are larger than those of N, although both have similar azurophilic staining; E granules are eosinophilic (reddish), but similar in size to those of B. Lymphocytes can present malignancies such as those considered in Leuko: chronic lymphocytic leukemia cells, prolymphocytes and hairy cells [1]. The cytoplasm color, texture and shape can vary widely; consequently, template models mostly fail. An adaptive algorithm, which takes contextual information into account and employs a flexible description of how the cytoplasm may occur around the nucleus, has considerable potential.
3 Leuko Software

Most automatic systems for WBC counting rely on flow cytometry and are able to distinguish only a small subset of cell types. Leuko is a computer vision tool in which one can segment images for nucleus and cytoplasm feature extraction, so that cells can be classified into one of eight considered classes of leukocytes: five normal types (basophil, eosinophil, lymphocyte, monocyte and neutrophil) and malignancies such as chronic lymphocytic leukemia, prolymphocytic leukemia and hairy-cell leukemia [5], with promising classification accuracy. Since the nuclei stain much darker than other structures [1], they can be easily detected using green-channel dynamic thresholding, as extensively addressed in previous papers [4,5]. After morphological filtering [7], the nucleus serves as a basis to determine the nuclear diameter (d_n) and to restrict color segmentation to the cell area c, with the cell diameter d_c = min{d_n + 200, 400} and c ≈ d_c², according to the contextual information that 400 pixels ≈ 24 μm approximates the diameter of the largest leukocyte. Both nucleus and cytoplasm
were segmented in Leuko; however, cytoplasm segmentation usually required human interaction to determine some cytoplasm color samples before automatic segmentation [4].

Fig. 1. Computational pipeline: (i) block diagram (BW = black and white, DT = distance transform, prob. distrib. = probability distribution); (ii) images of the pipeline: (a) distance transform of the nucleus, (b) 1 - nucleus, (c) grayscale version of the image, (d) color probability matrix
4 Automatic Cytoplasm Detection

The proposed approach to automating leukocyte segmentation is to model the micrograph as a two-region problem, so that we can split the image into leukocyte and non-leukocyte. After nuclear material detection using dynamic thresholding algorithms [4,5], the initial guess is that the cytoplasm surrounds the nucleus. The colors of the pixels around the nucleus are the first assumption toward cytoplasm color modeling for posterior segmentation. Since we have detected the contour of the nucleus and collected color information about the cytoplasm using samples around the nucleus, we designed a scheme for automatic segmentation using front propagation, where the initial curve is the nucleus contour and the speed function combines texture and color properties of the image. This section describes the development of a computational pipeline (Fig. 1), starting from raw color images, through the calculation of color pixel probabilities, to front propagation combining color and gradient information in the speed function. This paper uses different image processing routines, such as conversion of the RGB color image (I_RGB) to grayscale (I), the gradient of the grayscale image (∇I), smoothing of ∇I using Gaussian filtering controlled by a parameter σ, and transformations between color spaces (RGB to XYZ and XYZ to CIELab) [8,7]. Fig. 1 presents some of the transformations along the computational pipeline for automated cytoplasm detection.
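As an illustration, the basic conversions of this pipeline map onto standard library calls; the following sketch uses scikit-image, and the composition order of the smoothing and gradient operations, as well as sigma, are our choices rather than the paper's.

    from skimage import color, filters

    def preprocess(rgb):
        """Grayscale, smoothed gradient magnitude and CIELab conversion."""
        gray = color.rgb2gray(rgb)                               # I_RGB -> I
        grad = filters.sobel(filters.gaussian(gray, sigma=2.0))  # smoothed |grad I|
        lab = color.rgb2lab(rgb)                                 # RGB -> XYZ -> CIELab
        return gray, grad, lab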
4.1 Modeling Cytoplasm Color
The term cytoplasm refers to everything between the cell membrane and the nuclear envelope, so it would be reasonable to assume that the cytoplasm is located around the nucleus. Unfortunately, images of smeared cells may occasionally present burst or thinned cytoplasm, so the nucleus may lie more externally than one would theoretically expect. We establish a compromise between the number of color samples and the number of level curves in order to estimate the parameters of the color probability distribution for cytoplasm identification: we fit a normal distribution to the pixels lying on the first six level curves, considering the image converted to Lab space. Then, for each pixel I_Lab(x, y), representing a 3D vector in Lab space, we classify the pair (a, b) and its neighborhood using Bayesian classification over the estimated color probability distribution. To circumvent lightness sensitivity and overfitting, we discard the L channel in the probability calculation and consider a neighborhood in color space, respectively. The color distribution uses the information of a and b, where a is the red/green coordinate (+a indicating red, -a indicating green) and b is the yellow/blue coordinate (+b indicating yellow, -b indicating blue). We consider the hypotheses (H) that a pixel does or does not belong to the cytoplasm to be equally probable, so the maximum likelihood (ML) hypothesis for the data (D) is the most probable one. It can be represented by

    ML ≡ argmax_{h∈H} P(D|h)

This machine learning scheme accomplishes the first step toward cytoplasmic color representation with an inexpensive computational routine. The color probability distribution of a neutrophil is shown in Fig. 1(d); the brighter the gray level, the more probable it is that the pixel belongs to the cytoplasm, according to our model.
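A minimal sketch of this color model, fitting an independent (diagonal-covariance) normal distribution to the sampled (a, b) pairs and evaluating it over the whole image; the names and the per-channel independence assumption are ours.

    import numpy as np

    def cytoplasm_probability(lab, samples_ab):
        """Likelihood of each pixel's (a, b) under a Gaussian cytoplasm model.

        lab: (H, W, 3) CIELab image; samples_ab: (M, 2) a/b values taken
        from the first level curves around the nucleus. The L channel is
        discarded, as described above.
        """
        mean = samples_ab.mean(axis=0)
        std = samples_ab.std(axis=0) + 1e-6       # guard against zero spread
        z2 = (((lab[..., 1:3] - mean) / std) ** 2).sum(axis=-1)
        return np.exp(-0.5 * z2) / (2.0 * np.pi * std.prod())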
4.2 The Propagation Speed Model
Using the work described in Section 3, we can produce a robust and reliable segmentation of the nucleus. Our goal now is to use this, plus a propagation algorithm, to evolve this initial nucleus boundary until it is stopped on the edge of the cytoplasm. We characterize this evolving curve through the use of a level set embedding, following the work of Malladi, Sethian, and Vemuri [9], which used the Osher-Sethian level set approach [10] to segment medical images; see also [11]. Consider now a closed curve, starting at the boundary of the nucleus, and propagating with a speed F in its normal direction. A level set approach embeds the curve in a higher-dimensional function using the signed distance function such that the zero level set φ(x, t = 0) corresponds to the initial position; an initial value partial differential equation for the evolution of φ is given by

    φ_t + F |∇φ| = 0,   φ(x, t = 0) given    (1)
In order to allow this evolving curve to segment the desired boundary, we imagine an initial curve propagating in the image domain with speed that is a decreasing function of the image gradient. A natural way to do this is as follows. Let I(x) be the image pixel value at an image point x. Then, let

    k_I(x) = 1 / (1 + σ|∇I(x)|)    (2)
This function is small where the image gradient |∇I| is large, and is close to unity in areas of constant image value. A large collection of more sophisticated flow-based PDE schemes have been introduced since these original works; we refer the reader to [3,12,13]. The algorithm, as described above, applies to a gray-scale image, so that the gradient involved in the determination of the speed function is naturally defined. Our goal now is to apply this to the segmentation and extraction of the cytoplasm boundary. We do this by building a dual-component speed function, one part of which depends on a speed function built as above and synthesized from a gray-level conversion, and the second built on a probability distribution around an extracted color representation of the typical cytoplasm pixel. More precisely, consider a speed function of the form

    F(x, y) = F_1(x, y) + F_2(x, y)    (3)
We do a gray-level conversion of the image to produce F_1, chosen with an appropriate sigma. In order to compute F_2, we compute a small offset of the nucleus boundary into the cytoplasm. Along this offset boundary, we compute the mean and standard deviation, assuming a normal distribution, and based on this distribution (F_2) we classify the pixels of the whole image. Thus, our algorithm contains a grayscale-synthesized speed function, plus a color probability distribution based on a nearest-neighbor representation of the cytoplasm color pixels. Several improvements can be made to this model, including the use of image smoothing/enhancement pre-processing steps and faster versions of the segmentation algorithms, such as an initial step based on Fast Marching methods. These improvements are mathematically appealing, but the wide range of pathologies presented in our image database often stymies more refined approaches.
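Putting Eqs. (2) and (3) together, a sketch of the dual-component speed; it reuses the hypothetical cytoplasm_probability map from the sketch in Sec. 4.1, and the sigma values and the normalization of F_2 are illustrative choices.

    import numpy as np
    from scipy.ndimage import gaussian_gradient_magnitude

    def dual_speed(gray, color_prob, sigma=2.0):
        """Dual-component speed F = F1 + F2 of Eq. (3).

        F1 is the edge-stopping term k_I of Eq. (2); F2 is the
        (normalized) cytoplasm color probability map.
        """
        f1 = 1.0 / (1.0 + sigma * gaussian_gradient_magnitude(gray, sigma=1.0))
        f2 = color_prob / (color_prob.max() + 1e-12)
        return f1 + f2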
5 Experiments and Results

We use Leuko and Matlab to model the leukocyte segmentation problem and test the described algorithms. First, the proposed computational pipeline (Fig. 1) processes the raw color image and detects its nucleus using the Leuko tools described in Sec. 3, so that the most probable coverage area of the cell is cropped from the original image and the nucleus image I_nuc is resized accordingly. The cropped image I_RGB is then converted into both grayscale and the Lab color space. Using I_nuc, we calculate the distance transform and extract the boundary of the nucleus. From these inputs, the dual-component speed plays an important role in guiding the front propagation until the motion of the curve is smaller than a small
Fig. 2. (a) Synthetic image, (b) evolution of the front with the dual-component speed, and (c) without the color speed term: the inner contour (black) is the result after the same number of iterations as in (b), and the outer contour is the closest contour to the real boundary
parameter ε. The convergence of the curve to a contour indicates that it has reached the leukocyte boundary. We evaluate this scheme using artificial and real images, particularly in difficult situations such as non-central nucleation, cell adjacency and filamentous cytoplasmic projections. A synthetic image is illustrated in Fig. 2, presenting a nucleus displaced from the center of mass of the whole cell. Previous experiments have shown that an interface propagating using only the first, gradient gray-scale based component F_1 of the speed function, as given in Eq. 2, often escapes the cytoplasm boundary, mainly for non-circular shapes. Another drawback is the convergence time to the cytoplasm boundary, as illustrated in Fig. 2(c): the inner curve represents the contour after the same number of iterations considered in Fig. 2(b), and the outer curve (red) indicates the closest contour to the real boundary, which takes almost twice as long to reach. In addition to artificial images, we selected difficult images to segment, such as Fig. 3(a), containing red blood cells touching a neutrophil cytoplasm. Our model accurately deals with the adjacency and color issues, since the cytoplasm is correctly detected, even though the reddish granules of the neutrophil cytoplasm closely resemble the red blood cell cytoplasm color. Other examples are listed in Fig. 3, presenting original and segmented leukocytes, propagating the nucleus boundary (inner curve) until the estimated cytoplasm boundary (outer curve) is found. Normal (Fig. 3(a)-(d)) and abnormal (Fig. 3(e)-(f)) cells are considered, with sufficient accuracy for posterior feature extraction and classification using Leuko. The segmentation result is particularly interesting for hairy cells, as presented in Fig. 3(e): a very hard cell to segment, characterized by filamentous projections of the cytoplasm, usually presenting a moderate amount of pale blue staining. Our current prototype implementation is not as fast as it could be, with code written in Matlab, which is not optimal. More importantly, we are using a full level set method, updating all the level contours, not just the ones near the front. We are currently implementing a narrow band technique (see [3]); after finishing this, we expect to implement a Fast Marching Method [3] to obtain a good initial guess faster, which should dramatically improve performance.
Fig. 3. Leukocytes and the respective cytoplasm detection by nucleus propagation toward the cytoplasm boundary: (a) neutrophil, (b) lymphocyte, (c) monocyte, (d) eosinophil, (e) hairy cell, (f) prolymphocyte
6 Discussion

Segmentation is a key step in a wide range of computer vision applications. Even considering state-of-the-art algorithms, user interaction is often required, a bottleneck in systems dealing with massive data, such as counting processes. Active contour models have been ubiquitous in achieving better approximations of segment boundaries. A skilled eye can pick out the desired boundaries from a noisy image, even those delineated by slight changes in image intensity; nonetheless, the drawn outline is often inexact, even when performed by a physician, as we have shown in Fig. 3. Most straightforward methods search for abrupt variations, so that an edge relies on a large intensity difference between neighboring pixels. The correct balance in this variation is essential, so that one neither misses points of the boundary nor gets extra points. Level set methods turn out to be a suitable strategy for combining global and local parameters such that an edge can be detected accurately. The proposed model combines the global texture variation of the image, extracted via the gradient of the grayscale version of the micrograph, with a local description of the pixels, regarding the color of cytoplasm pixels, using an
unsupervised technique to estimate the probability of a pixel belonging to the cytoplasm, given the samples lying on the first level curves of the distance transform of the nucleus. The color probability matrix misclassifies some pixels, as shown with high intensity levels in Fig. 1(ii), where some red blood cell pixels are computed as belonging to the cytoplasm. This misclassification can be ignored in the presented images; however, future developments will include a component to weight the probability matrix according to the distance to the nuclear center of mass. The biological descriptions concerned with morphological features of the cytoplasm are essential for clinical diagnosis [1]. However, biohazards and the inherently manual observation have led blood description to often be done through molecular biology and immunophenotyping, techniques that require expensive capital investments and limit public access to treatment in developing countries. The authors would like to acknowledge the Brazilian research council FAPESP for financial support.
References
1. Agosti, S.J., Cornbleet, P.J., Galagan, K., Gewirtz, A.S., Glassy, E.F., Novak, R., Spier, C.: Color Atlas of Hematology: An Illustrated Field Guide Based on Proficiency Testing. 1st edn. (1998)
2. Sabino, D.M.U., da F Costa, L., Calado, R.T., Zago, M.A.: Automatic leukemia diagnosis. Acta Microscopica 12(1) (2003) 1-6
3. Sethian, J.A.: Level Set Methods and Fast Marching Methods: Evolving Interfaces in Computational Geometry. Cambridge University Press (1999)
4. Sabino, D.M.U., da F Costa, L., Rizzatti, E.G., Zago, M.A.: A texture approach to leukocyte recognition. Real-Time Imaging 10(4) (2004) 205-216
5. Ushizima, D.M., Lorena, A.C., Carvalho, A.C.P.L.F.: Support vector machines applied to white blood cell recognition. In: V Int. Conf. Hybrid Intel. Systems (2005)
6. Nilsson, B., Heyden, A.: Model-based segmentation of leukocyte clusters. In: 16th International Conference on Pattern Recognition, Quebec, Canada, IEEE (2002) 727-730
7. Gonzalez, R., Woods, R.: Digital Image Processing. Addison-Wesley Pub Co (1992)
8. Castleman, K.R.: Digital Image Processing. 1st edn. Prentice Hall (1996)
9. Malladi, R., Sethian, J.A., Vemuri, B.C.: A topology independent shape modeling scheme. Proc. of SPIE Conf. on Geometric Methods in Computer Vision II 2031 (1993) 246-258
10. Sethian, J., Osher, S.: Fronts propagating with curvature-dependent speed: algorithms based on Hamilton-Jacobi formulations. Journal of Comp. Physics 79(1) (1988) 12-49
11. Caselles, V., Catte, F., Coll, T., Dibos, F.: A geometric model for active contours. Numerische Mathematik 66 (1993)
12. Kimmel, R.: Numerical Geometry of Images: Theory, Algorithms, and Applications. Springer-Verlag (2004)
13. Sapiro, G.: Geometric Partial Differential Equations and Image Processing. Cambridge University Press (2001)
Author Index
Aggarwal, R. 140; Alkadhi, H. 317; Atallah, L. 156; Auer, D. 261; Avants, B.B. 9
Bai, J. 364; Bai, L. 261; Beechey-Newman, N. 203; Bosch, J.G. 236
Cai, C. 211, 269; Calado, R.T. 389; Camici, P.G. 293; Cao, J. 269; Carter, T.J. 203; Cattin, P. 317; Chen, D. 372; Chen, G. 108; Chen, L. 100; Chen, W. 309, 325; Chen, Y. 309; Chen, Y.Z. 100; Cho, W.H. 380; Chung, A.C.S. 116; Chung, A.J. 293; Chung, M.K. 36; Cong, L. 44; Crum, W.R. 203
Dalton, K.M. 36; Darzi, A. 140, 156; Davidson, R.J. 36; de Jong, N. 236; Deligianni, F. 140; Delles, M. 148; Delpy, D.T. 140; Deshpande, G. 17; Dillmann, R. 148; Ding, M. 211, 269; Dohi, T. 187, 356; Dong, X. 60, 68; Dössel, O. 333; Duffau, H. 25
Elwell, C. 140; Epstein, C.L. 9; Euler, E. 179
Falk, R. 372; Farag, A.A. 372; Feng, Q. 309; Feng, Y. 309; Fenster, A. 211; Fletcher, T. 1
Gee, J.C. 9, 76; Gerig, G. 1; Gooya, A. 356; Gorczowski, K. 1; Gu, L. 108; Guetter, C. 228; Guo, Y. 277; Gutt, C. 148
Hao, J.T. 341; Hassouna, M.S. 372; Hawkes, D.J. 203; Heining, S.M. 179; Hong, H. 285; Hu, X. 17; Huang, M. 124; Huang, X. 252; Huntbatch, A. 44; Huysmans, T. 84
Inomata, T. 187
James, A. 156; Jeong, J.Y. 1; Jiang, T. 92; Jiang, T.Z. 44, 244
Khamene, A. 228; Kim, S.-Y. 171; Koh, P.H. 140; Kong, J. 348; Kovács, T. 317; Krishnan, A. 252
LaConte, S. 17; Lee, J. 285; Lee, M.E. 380; Lee, S.-L. 44; Leff, D. 140; Lei, H. 269; Leong, J. 140, 156; Leung, K.Y.E. 236; Li, K. 52; Li, M.L. 341; Li, S. 92; Li, S.W. 92; Li, Z. 92; Liao, H. 187, 356; Liao, R. 228; Liu, A. 269; Liu, H. 301; Lo, C.-H. 277; Lu, C.-C. 277; Lu, S. 124; Lu, Y. 348; Luo, X. 211
Ma, W. 100; Malony, A.D. 52; Masamune, K. 356; Matsumiya, K. 356
Navab, N. 179; Nemes, A. 236; Nicolaou, M. 156; Nolte, L.-P. 68, 195
Okada, K. 252
Park, S.C. 380; Park, S.Y. 380; Peltier, S. 17; Pizer, S.M. 1
Qi, F. 219
Reiber, J.H.C. 236; Riquarts, C. 179; Rizzatti, E.G. 389
Sakuma, I. 187; Sauer, F. 228; Shang, Y. 333; Shen, D. 219; Shen, L. 36; Shen, L.L. 261; Shi, P. 132, 301; Sielhorst, T. 179; Sijbers, J. 84; Speidel, S. 148; Stefan, P. 179; Styner, M. 1; Su, G. 333; Sun, Y. 228; Székely, G. 317
Tang, F.L. 341; Tang, W.H. 116; Tanner, C. 203; ten Cate, F.J. 236; Tian, Y. 301; Traub, J. 179; Tucker, D.M. 52; Tustison, N.J. 76
Ushizima, D.M. 389
van Burken, G. 236; van der Steen, A.F.W. 236; van Stralen, M. 236; Vanpoucke, F. 84; Verdonk, B. 84; Voormolen, M.M. 236
Wang, J. 244; Wang, J.Z. 348; Wang, Y. 325; Wildermuth, S. 317; Wong, C.L. 132; Wu, G. 219
Xie, K. 164; Xu, C. 228
Yang, C. 244; Yang, G.-Z. 44, 140, 156, 293; Yang, J. 164; Yang, R. 100; Yang, W. 100; Ye, B. 100
Zhang, B. 348; Zhang, H. 132; Zhang, J.D. 348; Zhang, J.G. 325; Zhang, S. 100; Zhang, X. 195; Zhang, Z. 92; Zheng, G. 60, 68, 195; Zheng, L. 244; Zhou, C. 211; Zhou, S. 325; Zhou, X. 252; Zhou, Y. 364; Zhu, Y.M. 164