Introduction to Biomedical Imaging [2nd ed.] 9781119867715



Language: English. Pages: 387. Year: 2023.


Table of Contents:
Cover
Title Page
Copyright
Contents
Preface
Introduction
About the Companion Website
Chapter 1 Image and Imaging System Characteristics
1.1 General Image and Imaging System Characteristics
1.2 Concept of Spatial Frequency
1.3 Spatial Resolution
1.3.1 Imaging System Point Spread Function
1.3.2 Imaging System Resolving Power
1.3.3 Imaging System Modulation Transfer Function
1.4 Signal‐to‐Noise Ratio
1.5 Contrast‐to‐Noise Ratio
1.6 Signal Digitization: Dynamic Range and Resolution
1.7 Post‐acquisition Image Filtering
1.8 Assessing the Clinical Impact of Improvements in System Performance
1.8.1 The Receiver Operating Characteristic Curve
1.A.1 Fourier Transforms
1.A.2 Fourier Transforms of Time Domain and Spatial Frequency Domain Signals
1.A.3 Useful Properties of the Fourier Transform
Exercises
References
Further Reading
Chapter 2 X‐ray Imaging and Computed Tomography
2.1 General Principles of Imaging with X‐rays
2.2 X‐ray Production
2.2.1 The X‐ray Tube
2.2.2 The X‐ray Energy Spectrum
2.3 Interactions of X‐rays with Tissue
2.3.1 Compton Scattering
2.3.2 The Photoelectric Effect
2.4 Linear and Mass Attenuation Coefficients of X‐rays in Tissue
2.5 Instrumentation for Planar X‐ray Imaging
2.5.1 Collimator
2.5.2 Anti‐scatter Grid
2.6 Digital X‐ray Detectors
2.7 X‐ray Image Characteristics
2.7.1 Signal‐to‐Noise
2.7.2 Spatial Resolution
2.7.3 Contrast‐to‐Noise
2.8 X‐ray Contrast Agents
2.8.1 Contrast Agents for the Gastrointestinal Tract
2.8.2 Iodine‐Based Contrast Agents
2.9 X‐ray Imaging Methods
2.9.1 X‐ray Fluoroscopy
2.9.2 Digital Subtraction Angiography
2.10 Clinical Applications of X‐ray Imaging
2.10.1 Digital Mammography
2.10.2 Abdominal X‐ray Scans
2.11 Computed Tomography
2.12 CT Scanner Instrumentation
2.12.1 Beam Filtration
2.12.2 Detectors for Computed Tomography
2.13 Image Processing for Computed Tomography
2.13.1 Filtered Backprojection (FBP) Techniques
2.13.2 Fan‐Beam and Spiral Reconstructions
2.14 Iterative Algorithms
2.15 Radiation Dose
2.16 Spectral/Dual Energy CT
2.17 Photon‐Counting CT
2.18 Cone Beam, Mobile, and Portable CT Units
2.19 Clinical Applications of Computed Tomography
2.19.1 Head and Neurovascular Scans
2.19.2 Pulmonary Disease
2.19.3 Abdominal Imaging
2.19.4 Cardiovascular Imaging
Exercises
References
Further Reading
Chapter 3 Nuclear Medicine
3.1 General Principles of Nuclear Medicine
3.2 Radioactivity and Radiotracer Half‐life
3.3 Common Radiotracers Used for SPECT
3.4 The Technetium Generator
3.5 The Distribution of Technetium‐Based Radiotracers within the Body
3.6 Instrumentation for SPECT and SPECT/CT
3.6.1 Collimators
3.6.2 Scintillation Crystal and Photomultiplier Tube‐Based Detectors
3.6.3 The Anger Position Network and Pulse Height Analyzer
3.6.4 Solid‐State Detectors and Specialized Cardiac Scanners
3.7 Image Reconstruction
3.7.1 Attenuation Correction
3.7.2 Scatter Correction
3.8 Image Characteristics
3.8.1 Signal‐to‐Noise
3.8.2 Spatial Resolution
3.8.3 Contrast‐to‐Noise
3.9 Clinical Applications of SPECT
3.9.1 Brain Imaging
3.9.2 Bone Scanning and Tumor Detection
3.9.3 Cardiac Imaging
3.9.4 The Respiratory System
3.9.5 The Liver and Reticuloendothelial System
3.10 Positron Emission Tomography
3.11 Radiotracers Used for PET
3.12 Instrumentation for PET
3.12.1 Scintillation Crystals and Detector Electronics
3.13 Image Reconstruction
3.13.1 Annihilation Coincidence Detection and Removal of Accidental Coincidences
3.13.2 Attenuation Correction
3.13.3 Scatter Correction
3.13.4 Dead‐Time Correction
3.14 Image Characteristics
3.14.1 Spatial Resolution
3.14.2 Signal‐to‐Noise
3.14.3 Contrast‐to‐Noise
3.15 Acquisition Methods for PET
3.16 Total Body PET Systems
3.17 Clinical Applications of PET/CT
3.17.1 Body Oncology
3.17.2 Brain Imaging
3.17.3 Cardiac Imaging
Exercises
References
Further Reading
Chapter 4 Ultrasound Imaging
4.1 General Principles of Ultrasound Imaging
4.2 Wave Propagation and Acoustic Impedance
4.3 Wave Reflection
4.4 Energy Loss Mechanisms in Tissue
4.4.1 Scattering
4.4.2 Absorption
4.4.3 Overall Wave Attenuation
4.5 Instrumentation
4.5.1 Transducer Construction
4.5.2 Transducer Arrays
4.5.2.1 Linear Sequential Array
4.5.2.2 Curvilinear/Convex Sequential Array
4.5.2.3 Linear‐Phased Array
4.6 Signal Detection and Processing
4.6.1 Time Gain Compensation
4.6.2 Receive Beam Forming
4.7 Diagnostic Scanning Modes
4.7.1 A‐Mode, M‐Mode, and B‐Mode Scans
4.7.2 Three‐Dimensional Imaging
4.7.3 Compound Imaging
4.7.4 Other Transmit and Receive Beamforming Techniques
4.8 Image Characteristics
4.8.1 Signal‐to‐Noise
4.8.2 Spatial Resolution
4.8.2.1 Axial Resolution
4.8.2.2 Lateral Resolution
4.8.3 Contrast‐to‐Noise
4.9 Artifacts in Ultrasound Imaging
4.10 Blood Velocity Measurements Using Ultrasound
4.10.1 The Doppler Effect
4.10.2 Pulsed‐Mode Doppler Measurements
4.10.3 Color Doppler/B‐mode Duplex and Triplex Imaging
4.10.4 Continuous Wave Doppler (CWD) Measurements
4.11 Ultrasound Contrast Agents
4.11.1 Harmonic and Pulse Inversion Techniques
4.11.2 Super‐Resolution in Ultrasound Imaging
4.12 Safety and Bioeffects in Ultrasound Imaging
4.13 Point‐of‐Care Ultrasound Systems
4.14 Clinical Applications of Ultrasound
4.14.1 Obstetrics and Gynecology
4.14.2 Breast Imaging
4.14.3 Musculoskeletal Structure
4.14.4 Abdominal
Exercises
References
Further Reading
Chapter 5 Magnetic Resonance Imaging
5.1 General Principles of MRI Acquisition and Hardware
5.2 Nuclear Magnetization
5.2.1 Quantum Mechanical Description
5.2.2 Classical Description
5.2.3 Hydrogen Nuclei in Water and Lipid
5.2.4 Radiofrequency Pulses and the Creation of Transverse Magnetization
5.2.5 Signal Detection and Fourier Transformation
5.3 T1 and T2 Relaxation Mechanisms and Tissue Relaxation Times
5.3.1 Tissue‐Dependent Relaxation Times
5.3.2 Measurement of T1 and T2: Inversion‐Recovery and Spin‐Echo Sequences
5.4 The MR Free Induction Decay
5.5 Magnetic Resonance Imaging
5.5.1 Spatial Localization
5.5.2 Imaging Concepts
5.5.2.1 Slice Selection
5.5.2.2 Phase‐encoding
5.5.2.3 Frequency‐encoding
5.5.2.4 The k‐Space Formalism and Image Reconstruction
5.6 Imaging Sequences and Techniques
5.6.1 Multislice Gradient‐Echo Sequences
5.6.2 Multislice Spin‐Echo and Turbo‐Spin‐Echo Sequences
5.6.3 Three‐Dimensional Gradient‐Echo and Spin‐Echo Sequences
5.6.4 Proton Density, T1‐, T2‐, and T2*‐Weighted Sequences
5.6.5 Lipid Suppression Techniques
5.7 MRI Contrast Agents
5.8 Advanced Sequences
5.8.1 Magnetic Resonance Angiography
5.8.2 Diffusion‐Weighted Imaging with Echo Planar Readout
5.8.3 In Vivo Localized Spectroscopy
5.8.4 Functional MRI
5.9 Instrumentation
5.9.1 Magnet Design
5.9.1.1 Clinical Superconducting Magnets
5.9.1.2 Very High Field Magnets
5.9.1.3 High‐Temperature Superconductors
5.9.1.4 Mid‐ and Low‐Field Magnets
5.9.2 Magnetic Field Gradient Coils
5.9.3 Radiofrequency Coils
5.9.3.1 Transmit Coil
5.9.4 Receiver Coil Array
5.9.5 Receiver Electronics
5.10 Image Reconstruction from Undersampled Data
5.10.1 Parallel Imaging Using an Array of Receiver Coils
5.10.2 Compressed Sensing
5.11 Image Characteristics
5.11.1 Signal‐to‐Noise
5.11.2 Spatial Resolution
5.11.3 Contrast‐to‐Noise
5.12 Image Artifacts
5.13 RF Safety Considerations
5.14 Clinical Applications of MRI
5.14.1 Neurological
5.14.2 Body Imaging
5.14.3 Musculoskeletal
5.14.4 Cardiac
Exercises
References
Further Reading
Chapter 6 Optical Imaging
6.1 General Properties of Optical Imaging Methods
6.2 Propagation of Light Through Tissue
6.3 Body Emissivity Techniques – Infrared Thermography
6.4 Direct Imaging with Visible Light
6.4.1 Fundus Photography
6.4.2 Scheimpflug Camera
6.5 Optical Coherence Tomography (OCT)
6.5.1 Basic Principles of Interferometry
6.5.2 Instrumentation for OCT
6.5.2.1 Light Sources
6.5.2.2 Beam‐Splitter
6.5.2.3 Photodetectors
6.5.3 Image Characteristics of OCT
6.5.4 OCT Angiography
6.5.5 Clinical Applications of OCT
6.6 Fluorescence‐Guided Surgery (FGS)
6.6.1 Principle of Fluorescence
6.6.2 Fluorescent Probes
6.6.3 Instrumentation for Fluorescence Imaging
6.6.4 Clinical Applications of Fluorescence‐Guided Surgery
6.7 Near‐Infrared Spectroscopy (NIRS) and Diffuse Optical Tomography (DOT)
6.7.1 Principle of NIRS
6.7.2 Instrumentation for NIRS
6.7.3 Principle of DOT
6.7.4 Clinical Applications of DOT
6.8 Photoacoustic Imaging (PAI)
6.8.1 Principles of PAI
6.8.2 Photoacoustic Microscopy and Photoacoustic Computed Tomography
6.8.3 Instrumentation for PAI
6.8.4 Clinical Applications of PAI
References
Further Reading
Chapter 7 Artificial Intelligence
7.1 Artificial Intelligence in Biomedical Imaging
7.2 Artificial Intelligence, Machine Learning, Deep Learning, and Neural Networks
7.2.1 Neural Networks
7.3 Deep Learning in Image Reconstruction
7.4 Convolutional Neural Networks (CNNs)
7.5 Artificial Intelligence in X‐ray and CT
7.5.1 Image Reconstruction
7.5.2 Clinical Applications
7.6 Artificial Intelligence in SPECT and PET
7.6.1 Image Reconstruction
7.6.2 Clinical Applications
7.7 Artificial Intelligence in Ultrasound
7.7.1 Improved Data Acquisition
7.7.2 Image Post‐processing
7.7.3 Image Analysis and Clinical Applications
7.8 Artificial Intelligence in MRI
7.8.1 Image Reconstruction
7.8.2 Clinical Applications
7.9 Artificial Intelligence in Optical Imaging
7.10 AI and Radiomics
7.11 Challenges for AI in Biomedical Imaging
References
Further Reading
Index
EULA

Introduction to Biomedical Imaging

IEEE Press
445 Hoes Lane
Piscataway, NJ 08854

IEEE Press Editorial Board
Sarah Spurgeon, Editor in Chief

Jón Atli Benediktsson, Anjan Bose, Adam Drobot, Peter (Yong) Lian, Andreas Molisch, Saeid Nahavandi, Jeffrey Reed, Thomas Robertazzi, Diomidis Spinellis, Ahmet Murat Tekalp

Introduction to Biomedical Imaging Second Edition

Andrew Webb
Leiden University Medical Center
Leiden, The Netherlands

IEEE Press Series in Biomedical Engineering

Copyright © 2023 by The Institute of Electrical and Electronics Engineers, Inc. All rights reserved. Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read.
For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.

Library of Congress Cataloging-in-Publication Data applied for:
Hardback ISBN: 9781119867715

Cover Design: Wiley
Cover Image: © Monty Rakusen/Getty Images
Set in 9.5/12.5pt STIXTwoText by Straive, Chennai, India

Preface

Looking back 20 years to when the first edition of this book came out, it is impressive to see by how much medical imaging technology has changed and improved, and the extent to which that first edition is now completely out of date. This second edition tries to incorporate as much of the current state-of-the-art technology as possible, with significant updates to the sections on computed tomography, nuclear medicine, ultrasound, and magnetic resonance imaging. Specifically, new sections have been added on:

(i) Digital flat-panel X-ray detectors, modern-day 256–320 slice CT system design, fast and efficient CT iterative reconstruction schemes, spectral/dual energy CT methods, mobile and portable CT systems, and the first photon-counting CT scanners.
(ii) New solid-state detectors for SPECT and PET, integrated SPECT/CT and PET/CT scanners, updated scatter and attenuation correction methods, developments in iterative reconstruction, and new total body PET scanners.
(iii) Beam-forming for ultrasound imaging, new contrast agents, super-resolution techniques, and point-of-care systems.
(iv) New helium-free and high-temperature superconductor magnet designs for MRI, point-of-care scanners, and image reconstruction methods using undersampled data for rapid scanning.

A general introduction has been added at the start of the book, in which the historical development of medical imaging techniques is summarized, and an outlook for future developments is given. Two new short chapters have been added, one on optical imaging techniques and the other on the ever-increasing role of artificial intelligence (AI) in biomedical imaging. Overall, about 30% new material has been added, there is a significant expansion of the problem sets, and errors in the original text have been corrected without, hopefully, introducing an equal number of new ones!


The general aim of the textbook remains the same, namely to have a textbook that is suitable for a one-semester/one-term course. As part of the IEEE Press Series in Biomedical Engineering, the approach and level of the material are aimed at junior- to senior-level undergraduates in bioengineering and/or other engineering disciplines. The content, however, should also be suitable for practitioners in more clinically related professions, in which imaging plays an important role. Overall, this means that the coverage is necessarily more succinct than other, more encyclopedic volumes either on medical imaging as a whole, or one of the specific modalities. Reference to these textbooks is given at appropriate places in the text. The approach of this book is to cover the physical principles, instrumental design, data acquisition strategies, image reconstruction techniques, and clinical applications of the imaging techniques most commonly used in clinical medicine as well as in academic and commercial research. Emphasis is very much on human rather than animal imaging, but reference to the latter is made where appropriate. The sections on clinical applications are relatively brief, comprising a few examples illustrative of the types of images that provide useful diagnostic information. Many hundreds of specialized diagnostic clinical imaging books exist, written by authors with far more expertise in these areas. Suggestions are made at the end of each chapter for further reading. These cover recent books, journal publications, and scientific review articles. Andrew Webb Leiden, The Netherlands


Additional Resources

Medical Imaging Textbooks

Prince, J.L. and Links, J.M. (2014). Medical Imaging: Signals and Systems, 2nd ed. Pearson.
Suetens, P. (2017). Fundamentals of Medical Imaging, 3rd ed. Cambridge University Press.
Samei, E. (2018). Hendee's Medical Imaging Physics, 5th ed. Wiley-Blackwell.
Bushberg, J.T., Seibert, J.A., Leidholdt, E.M., and Boone, J.M. (eds.) (2020). The Essential Physics of Medical Imaging, 4th ed. Wolters Kluwer.
Azhari, H., Kennedy, J.A., Weiss, N., and Volokh, L. (2020). From Signals to Image: A Basic Course on Medical Imaging for Engineers. Springer.

Journals Containing Articles from a Wide Range of Biomedical Imaging Modalities

Physics in Medicine and Biology
IEEE Transactions on Medical Imaging
IEEE Transactions on Biomedical Engineering
Medical Physics
Journal of Medical Imaging
Radiology
Investigative Radiology
European Journal of Radiology


Introduction

Biomedical Imaging: Past, Present, and Future

Clinical imaging is an essential part of the diagnostic process for an enormous number of diseases. Many tens of millions of investigations are carried out per year, ranging from a relatively simple ultrasound sonogram all the way up to a whole-body positron emission tomography (PET) scan. The number of scans continues to increase by several percent year-on-year, with ultrasound and computed tomography (CT) being the most common, closely followed by magnetic resonance imaging (MRI), and nuclear medicine at about half these numbers. This chapter gives a brief historical overview of the different clinical imaging modalities in terms of instrumental and technique developments, and then discusses where medical imaging is heading in the future, including a section on the increasingly important role that artificial intelligence (AI) is starting to play.

Listing important historical discoveries, and linking them with their corresponding protagonists, is a dangerous task, since there are often multiple claims to simultaneous academic publications, commercial patents, or public demonstrations. This is particularly true given the tendency towards a western-world-centered viewpoint, which has often ignored research performed in continents where the dissemination language is not English. To minimize the risk of controversial attributions, in this short chapter the major developments have not been associated with specific scientists unless they are unambiguous. The further reading section provides a wealth of articles where much more detail can be found concerning the development of each of the imaging modalities.

With the exception of ultrasound, which is a mechanical wave, each of the major imaging modalities transmits electromagnetic (EM) energy into the body. Figure 1 shows a schematic of the relevant ranges of wavelength, frequency, and energy. Radiation can be classified as either ionizing or nonionizing.
In general, EM energy with a frequency above the near-UV region is defined as ionizing, whereas below this it is termed nonionizing.
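This classification can be made numerical through the Planck relation E = hc/λ. The sketch below is illustrative only: the ~10 eV threshold is a round number standing in for the near-UV boundary described above, not a value taken from the text, and the example wavelengths are typical figures for each modality.

```python
# Photon energy from wavelength via the Planck relation E = h*c/lambda,
# then a simple ionizing/nonionizing classification.
# The ~10 eV threshold is an illustrative round number for the near-UV
# boundary, not an exact physical cutoff.

H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron-volt

def photon_energy_ev(wavelength_m: float) -> float:
    """Photon energy in eV for a given wavelength in meters."""
    return H * C / wavelength_m / EV

def is_ionizing(wavelength_m: float, threshold_ev: float = 10.0) -> bool:
    """Classify radiation as ionizing if photon energy exceeds the threshold."""
    return photon_energy_ev(wavelength_m) > threshold_ev

# Representative wavelengths for the modalities shown in Figure 1:
examples = {
    "MRI (RF, ~128 MHz at 3 T)": C / 128e6,   # wavelength from frequency
    "visible light (green)": 550e-9,
    "diagnostic X-ray (~60 keV)": 2.07e-11,
}
for name, wl in examples.items():
    print(f"{name}: {photon_energy_ev(wl):.3g} eV, ionizing={is_ionizing(wl)}")
```

Running this confirms the ordering in Figure 1: radiofrequency photons used in MRI carry sub-microelectronvolt energies, visible light a few eV, and diagnostic X-rays tens of keV, placing only the latter in the ionizing regime.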

[Figure 1: Electromagnetic spectrum with wavelength (nm), frequency (Hz), and energy (eV) axes, highlighting the range of wavelengths/frequencies/energies relevant to medical imaging modalities, from MRI (radiofrequency) through infrared and visible light to X-rays and γ-rays.]


Historical Developments

On 8 November 1895, Wilhelm Röntgen saw the bones of his hand on a photographic plate placed on one side of a Crookes cathode ray tube. Röntgen also imaged his wife's hand, producing an image that showed her wedding ring very clearly and is probably the most reproduced medical image in history. Röntgen was awarded the Nobel Prize for Physics in 1901. Since the technology was relatively simple, it was very quickly taken up by physicians across the world, finding widespread application in military medicine during the First World War.

X-ray and Computed Tomography

X-ray technology continued to advance throughout the twentieth century and was essentially the only imaging modality for many decades. The introduction of film technology, replacing glass plates, was a major step forward and made the storage of a patient's data possible. Improvements in all parts of the instrumentation slowly increased the diagnostic quality of the images, with barium- and iodine-based contrast agents added to the imaging capabilities. A major breakthrough was the development of CT in the late 1960s by Godfrey Hounsfield and Allan Cormack, with the first patient scanned in 1971. CT enabled multiple thin slices, rather than a single projection image, to be acquired. Cormack and Hounsfield jointly received the Nobel Prize for Physiology or Medicine in 1979. Scans at that time took tens of minutes, with resolutions in the several-millimeter range. Nowadays an entire CT of the body takes only a few seconds, with spatial resolutions of fractions of a millimeter. Key developments included the instrumentation required to perform multislice spiral CT (in which the patient bed slides through the X-ray beam), highly efficient semiconductor-based flat-panel detectors, and iterative reconstruction techniques.

X-rays first discovered by Roentgen

1900

First X-ray fluoroscopy demonstration

1913

Development of the anti-scatter grid

1913

Development of the thermionic emission X-ray tube

1918

X-ray film developed

1920s

Barium used as a contrast agent for abdominal X-rays

1930

First clinical mammography trial

1950s

Concept of subtraction angiography described

1970s

First CT system designed by Hounsfield and Cormack

1980s

Multislice CT developed

1980s

Development of slip ring technology and solid state detectors

xxi

xxii

Introduction

1990s

Spiral multi-detector CT and reconstruction techniques developed

2000s

Flat panel detector technology introduced

2010s

Dual energy CT introduced commercially

2020

First commercial photon counting devices available

Iterative reconstruction techniques integrated into commercial systems

Nuclear Medicine

Radioactivity was first "discovered" by Antoine Henri Becquerel in 1896, with Marie and Pierre Curie making many of the early breakthroughs in elucidating the nature of the phenomenon. In the 1920s and 1930s, the production of artificial radioactivity through nuclear bombardment was studied in many countries, resulting in the development of the cyclotron. The first radiotracer experiments were performed in rabbits using bismuth-210 labeled antisyphilitic drugs. Human nuclear medicine has its origins in both therapy and imaging. In 1946, radioactive iodine-131 was used in the treatment of thyroid tumors, and it was noted that if the γ-rays could be detected then an image of the radioactivity could be produced. The instrumentation required to perform such imaging was developed in the 1950s, with the result being termed an "Anger camera" after its inventor, Hal Anger. This camera formed the basis for planar imaging of many different radioactive isotopes. Clinical utilization was significantly enhanced by the development and commercialization of the technetium generator, which could be delivered on a weekly basis to a nuclear medicine department. Parallel developments occurred in PET instrumentation, a key one being the synthesis of what is still the most widely used imaging agent, 18F-fluorodeoxyglucose. Clinically, two-camera single-photon emission computed tomography (SPECT) systems were introduced in the 1980s for both brain and cardiac applications. PET scanners were also introduced commercially into the clinic, with improved spatial resolution using time of flight (TOF) becoming available in the 1990s. Improvements in detector technology for both SPECT and PET in the last two decades have resulted in higher spatial resolution and faster imaging times.

The other major development has been the integration of a CT scanner into essentially all SPECT and PET systems, in order to provide efficient attenuation correction and also a high resolution anatomical image on which to overlay the functional radioactivity scans.

A brief timeline of nuclear medicine development:

1940s: Production of radionuclides for medical purposes (Oak Ridge National Laboratory)
1950s: Development of the first "rectilinear scanner" by Cassen
1960s: Development of the 99mTc generator; development of the gamma camera by Anger
1961: First single slice PET scanners
1970s: 18F-fluorodeoxyglucose first produced
1980s: Development of SPECT technology
1990s: First SPECT/CT prototype; first PET/CT prototype; first TOF PET scanners demonstrated
2000s: Introduction of iterative reconstruction schemes for PET
2010s: First commercial PET scanners with LSO scintillators
2020: First commercial total body PET system

Ultrasound

Similar to the case of nuclear medicine, ultrasound was first developed for therapeutic applications, specifically noninvasive procedures in which a focused beam was used to thermally destroy various pathologies. The history of ultrasound imaging began in the 1940s, with the first gynecological images of the unborn fetus, uterus, and pelvis published in 1958 by Ian Donald. Advances in transducer technology enabled real-time images to be acquired, and many researchers, particularly in Japan, developed the instrumentation for Doppler techniques to image blood flow. Ultrasound developed rapidly as a clinical technique in the 1970s, with phased array transducer technology improving the image quality and ease-of-use tremendously. Developments in transmit and receive beamforming, increased channel count, and three-dimensional capability have continued to improve the imaging capabilities. Ultrasound contrast agents were introduced in the 1980s, and new agents based on gas-filled microbubbles, together with techniques such as harmonic imaging, have improved tissue contrast and reduced the contribution of image speckle.

A brief timeline of ultrasound development:

1940s: First reports of in vivo ultrasound with potential medical applications
1950s: Implementation of Doppler techniques by Satomura and Nimura; first obstetric studies using ultrasound by Donald
1960s: First real-time ultrasound scanning using three rotating transducers; first description of fetal malformation using ultrasound
1970s: Development of the mechanical oscillating echocardiography transducer; first commercial linear array scanner and concept of the phased array developed; Doppler ultrasound introduced commercially
1980s: Albunex microbubble contrast agent FDA approved
1990s: Tissue harmonic imaging demonstrated for use with contrast agents; 3D fetal ultrasound demonstrated; intravascular ultrasound probes used in the clinic
2000s: Portable point-of-care ultrasound systems developed
2010s: Shear-wave elastography clinically approved; virtual beamforming technology available on commercial systems
2020s: Super-resolution techniques become available

Magnetic Resonance Imaging

Until the early 1970s, nuclear magnetic resonance (NMR) spectroscopic techniques were used mainly to study the molecular structure of biologically active molecules such as proteins. In 1971, Paul Lauterbur invented the concept of MRI. During the 1970s, the technique evolved relatively slowly, with academic centers working at magnetic field strengths of ∼0.1 T, producing images with relatively low spatial resolution. Development into a widespread clinical imaging modality occurred in the 1980s when General Electric introduced a 1.5 T superconducting magnet, which also spawned a slew of new imaging techniques capable of forming images with different contrasts such as diffusion, perfusion, and angiography. Lauterbur received the 2003 Nobel Prize in Physiology or Medicine, together with Peter Mansfield, who developed the basics of very fast acquisition techniques still used today. In the late 1990s the speed and signal-to-noise ratio (SNR) were increased substantially by the introduction of multiple detectors and data undersampling techniques. Advances in image reconstruction in the 2010s using compressed sensing technology have further shortened the time required for MRI scans. In the late 2010s, superconducting magnets which are essentially cryogen-free were first demonstrated, coincident with new developments in point-of-care low-field systems.

A brief timeline of MRI development:

1970s: First MR image produced by Lauterbur
1980s: First whole body MRI system and in vivo scans performed in Aberdeen; first rapid echo planar imaging sequences developed by Mansfield; commercial 1.5 T systems become widespread in hospitals; development of diffusion, perfusion, and relaxation-weighted sequences
1990s: First functional MRI (fMRI) scans performed; introduction of multi-element receiver coil arrays; development of parallel imaging techniques to speed up data acquisition
2000s: Development of compressed sensing to further increase imaging speed
2010s: Introduction of "cryogen-free" magnet technology
2020s: Development of portable point-of-care MRI systems


Optical Imaging

Unlike the four imaging modalities described previously, which are generally located in a radiology department and serve an entire hospital, optical imaging systems are dedicated to specific applications and are found in ophthalmological, surgical, cancer, and other centers within a hospital. By far the most commonly used technique is optical coherence tomography (OCT), which was developed in the early 1980s and almost immediately used in diagnosing eye diseases. OCT underwent major technological advances in the 1990s, turning it from a relatively slow, inefficient time-domain technique into a very fast real-time frequency-domain method. Near-infrared spectroscopy (NIRS) and diffuse optical tomography (DOT) techniques were developed in the 1970s, with the main advances being in the past decade due to improved laser diode and detector technology. Together with photoacoustic imaging (PAI), these techniques are just starting to acquire Food and Drug Administration (FDA) approvals for specific clinical applications.

A brief timeline of optical imaging development:

1950s: Development and FDA approval for clinical use of indocyanine green
1960s: First use of infrared thermography in the clinic
1970s: First demonstration of NIRS in humans
1980s: First demonstration of OCT for measurements of eye axial length
1990s: Development of broadband light sources in OCT improves spatial resolution; first demonstrations of PAI systems; development of time-resolved and frequency-domain NIRS; first demonstrations of diffuse optical tomography
2000s: Development of frequency-domain OCT; first clinical demonstrations of photoacoustic microscopy; fluorescein approved by FDA for clinical use
2010s: First commercial PAI device

Current and Future Trends in Biomedical Imaging

Despite the fundamentally different physical principles behind each of the imaging modalities covered in this book, there are many commonalities in terms of the instrumentation used to acquire the data, the use of exogenous agents to increase tissue contrast, and the different mathematical algorithms used to process data. It is not surprising, therefore, that there are many general trends in technological development which cut across the modalities. A few of these trends are summarized below.


Trends in Instrumentation

In 2021, the first photon counting devices (PCDs) were introduced into commercial CT scanners, both whole-body systems and smaller systems dedicated to breast imaging. These PCDs convert X-ray energy into an electrical signal directly, rather than going through the traditional two-step conversion process. This gives significant improvements in signal-to-noise, which can be translated into reduced patient radiation dose. There are also ongoing improvements in the detector technology for PET and SPECT in the form of new solid-state materials, which are increasing the spatial resolution and signal-to-noise. In PET, the first commercial extended axial range scanners, either 1 m long for upper-body applications, or almost 2 m long for whole-body examinations, were introduced in 2021. The vastly increased number of detectors means that the intrinsic SNR increases by factors of up to forty. In optical techniques, laser and superluminescent technology continues to evolve. MRI and ultrasound detectors are relatively mature, but new magnet technology in MRI aims to reduce cryogen consumption to only a few liters, while in ultrasound improvements in transducer and receiver technology have enabled new techniques such as elastography and real-time volumetric measurements.

Trends in Data Collection

Much of the processing of the raw data which was previously done in hardware using analog electronic circuits is now performed in software after the data have been digitized. This removes many of the constraints on data acquisition rates, for example, as well as enabling higher performance from digital versus analog filters. This switch has resulted in the technology being designed to acquire as much raw data in as efficient a manner as possible. Examples include virtual beamforming in ultrasound imaging, and continuous acquisition of dynamic MRI data without gating to the cardiac or respiratory cycle.

Trends in Image Processing

Up until ∼2000, many of the algorithms used for image processing were classical analytically based methods such as backprojection and inverse Fourier transformation, with the data being acquired strictly according to the Nyquist sampling criterion. In CT, SPECT/PET, and increasingly MRI, these algorithms are being replaced by model-based and iterative reconstruction methods. Iterative techniques have the advantage that they are able to incorporate known characteristics of the measurement system, as well as image priors such as total variation, which means that image artifacts can be reduced and the signal-to-noise is higher than with conventional reconstruction techniques. This can be used, for example, to reduce the patient radiation dose in CT or the total imaging time in PET. Iterative techniques are also able to deal with data which are undersampled with respect to the Nyquist criterion, and this has been applied extensively in MRI to speed up data acquisition. As covered in Chapter 7, machine learning algorithms are also increasingly being combined with iterative reconstruction to make the algorithms less noise sensitive, and to further decrease the amount of data, or the SNR of the raw data, required for reconstruction of diagnostically useful images.

Trends in Contrast Agent Use and Development

The future role of contrast agents appears very different for the various modalities covered in this book. In optical imaging, in particular fluorescence guided surgery, there are a large number of promising new fluorophores which are undergoing Phase III clinical trials. Rather than nonspecific or untargeted agents such as fluorescein and indocyanine green, these newer agents are biologically targeted, normally towards tumors. The case is diametrically opposite for MRI, where the use of various Gd-based agents, thought to be completely safe for decades, is now being restricted due to nephrogenic systemic fibrosis (NSF). As a result, there is a new emphasis on designing MRI sequences which require lower doses of agent, or no agent at all. There is a similar, though much less drastic, situation in CT and PET in which the use of techniques such as dual-energy CT or total body PET may also lead to reduced concentrations of iodinated contrast agents or radiotracers, and therefore lower radiation dose.

Trends Towards Increasing Portability, Sustainability, and Accessibility

Much of the medical equipment described in this book is expensive, requires fixed siting in a well-controlled environment, and is available only in relatively large hospitals. As such, much of the world has little access to such technology, particularly those living in low and middle income countries (LMICs). In the financially developed world there are long waiting times for radiological scans, and the costs can be prohibitively high. All of these issues have led to an increase in the number of point-of-care (POC) and less expensive, portable, and accessible medical imaging systems. Examples include portable X-ray, low field MRI, ultrasound systems that can be run from a smartphone, and handheld OCT systems. Designing such systems inevitably means some compromise in terms of performance compared to a much larger conventional system, but advances in electronic miniaturization as well as local and cloud-based computing power mean that substantial improvements are being seen every year.


Further Reading

Books

T. Doby and G. Alker, Origins and Development of Medical Imaging, Southern Illinois Press, Bloomington, USA (1997).
B. Kevles, Naked to the Bone: Medical Imaging in the Twentieth Century, Basic Books, New York, USA (1998).
R. B. Gunderman, X-Ray Vision: The Evolution of Medical Imaging and Its Human Significance, Oxford University Press, Oxford, UK (2012).
A. M. K. Thomas and A. K. Banerjee, The History of Radiology, Oxford University Press, Oxford, UK (2013).

Historical Review Articles

M. L. Gabriele, G. Wollstein, H. Ishikawa et al., Optical coherence tomography: history, current status, and laboratory work, Invest. Ophthalmol. Visual Sci. 52, 2425 (2011).
M. Hayden and P.-J. Nacher, History and physical principles of MRI, in Magnetic Resonance Imaging Handbook, 1, CRC Press, 978-1482216288 (2016).
S. Manohar and D. Razansky, Photoacoustics: a historical review, Adv. Opt. Photonics 8, 586 (2016).
J. F. de Boer, R. Leitgeb, and M. Wojtkowski, Twenty-five years of optical coherence tomography: the paradigm shift in sensitivity and speed provided by Fourier domain OCT, Biomed. Opt. Express 8, 3248 (2017).
T. Hamaoka and K. K. McCully, Review of early development of near-infrared spectroscopy and recent advancement of studies on muscle oxygenation and oxidative metabolism, J. Physiol. Sci. 69, 799 (2019).
C. J. Anderson, X. Ling, D. J. Schlyer, and C. S. Cutler, A short history of nuclear medicine, in Radiopharmaceutical Chemistry, Springer, pp. 11–26 (2019).
J. S. Oh, Nuclear medicine physics: review of advanced technology, Prog. Med. Phys. 31, 81 (2020).
P. J. La Riviere and C. R. Crawford, From EMI to AI: a brief history of commercial CT reconstruction algorithms, J. Med. Imaging 8, 052111 (2021).
J. Hsieh and T. Flohr, Computed tomography history and future perspectives, J. Med. Imaging 8, 052109 (2021).
R. A. Schultz, J. A. Stein, and N. J. Pelc, How CT happened: the early development of medical computed tomography, J. Med. Imaging 8, 052110 (2021).
W. P. Dillon, 50th anniversary of computed tomography: past and future applications in clinical neuroscience, J. Med. Imaging 8, 052112 (2021).
R. Fahrig, D. A. Jaffray, I. Sechopoulos, and J. W. Stayman, Flat-panel conebeam CT in the clinic: history and current state, J. Med. Imaging 8, 052115 (2021).
J. M. Obaldo and B. E. Hertz, The early years of nuclear medicine: a retelling, Asia Ocean J. Nucl. Med. Biol. 9, 207 (2021).
Society of Nuclear Medicine and Molecular Imaging historical page: http://www.snmmi.org/AboutSNMMI/Content.aspx?ItemNumber=4175
M. B. Nielsen, S. B. Sogaard, S. B. Andersen et al., Highlights of the development in ultrasound during the last 70 years: a historical review, Acta Radiol. 62, 1499 (2021).
F. Duck, Ultrasound – the first fifty years, Med. Phys. Int. Special Issue, Hist. Med. Phys. 5, 470 (2021).
A History of Medical Ultrasound Physics (various authors): http://www.mpijournal.org/pdf/2021-SI-06/MPI-2021-SI-06.pdf
P. Boernert and D. G. Norris, A half-century of innovation in technology – preparing MRI for the 21st century, Br. J. Radiol. 93, 00113 (2021).
H.-J. Smith, The history of magnetic resonance imaging and its reflections, Acta Radiol. 62, 1481 (2021).
H. M. Schouw, L. A. Huisman, Y. F. Janssen et al., Targeted optical fluorescence imaging: a meta-narrative review and future perspectives, Eur. J. Nucl. Med. Mol. Imaging 48, 4272 (2021).


About the Companion Website

This book is accompanied by a companion website: www.wiley.com/go/webb2e

The website includes Exercise Solutions.


1 Image and Imaging System Characteristics

1.1 General Image and Imaging System Characteristics

A clinical diagnosis may require radiological scans from multiple imaging modalities. For example, a patient may have an exploratory ultrasound followed by some combination of CT, MRI, and/or PET. Each of these modalities provides different types of clinical information (anatomical and/or functional, static and/or dynamic) which together give as complete as possible an “inside view” of what is happening in the body. Some of these modalities such as ultrasound are point-of-care, which means that small portable units can be used relatively easily, quickly, and cheaply: others are very expensive, large, and heavy fixed-site systems which might require the patient to wait several weeks before an appointment is available. In addition to the different types of information provided, each of the modalities also has specific image characteristics: these include intrinsic differences in spatial resolution (from the low micrometer range for OCT to several millimeters for SPECT), as well as signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR). Over several decades there have been constant technological improvements in imaging hardware and image processing algorithms which have led to enormous increases in the performance and diagnostic quality of clinical scans from each of the modalities covered in this book. Quantitatively, we can evaluate system performance via the three basic measures of spatial resolution, SNR, and CNR (taking into account the scan time required) [1–7]. There is a strong interdependence between these three measures, in terms of the parameters and hardware used to acquire and process the images. For optimal system design and analysis it is very important to understand the relationship between these measures, and also to realize that this relationship depends on which particular imaging modality is being considered. 
This chapter covers several of the quantitative aspects of assessing image quality, some of the trade-offs between SNR, CNR, and spatial resolution, and a basic description of the data acquisition principles which are common to all of the digital data acquired, processed, and stored by modern clinical imaging modalities.

Introduction to Biomedical Imaging, Second Edition. Andrew Webb. © 2023 The Institute of Electrical and Electronics Engineers, Inc. Companion website: www.wiley.com/go/webb2e

1.2 Concept of Spatial Frequency

The concept of spatial frequency is very useful in characterizing the performance of an imaging system. The spatial frequency modulation transfer function (MTF), covered later in this chapter, is one of the key manufacturer specifications for many different components of a system. As a simple example of spatial frequency, consider a series of black lines on a white background, as shown in Figure 1.1a–c. The spatial frequency, k, is defined as the number of lines/mm in a particular dimension, x, y, or z. The closer together the lines, the higher the spatial frequency, and the greater must be the resolving power of an imaging system in order to produce an image in which the lines are distinct. This concept can be extended to multiple dimensions, as shown in Figure 1.1d, with the object characterized by spatial frequencies, kx and ky, in this case.

Figure 1.1 Illustration of spatial frequency. (a–c) The object to be imaged consists of ten narrow lines with different separations in either the x- or y-dimension. The respective k-value is given in units of lines/mm, or more commonly mm−1. (d) For a two-dimensional object the corresponding parameters are kx and ky.

Obviously, the body does not consist of regularly spaced structures such as those shown in Figure 1.1, but rather the locations and geometries of specific organs and structures within these organs correspond to a range of spatial frequencies. Take for example an MRI of the head, as shown in Figure 1.2a. In this particular image, there are different signal intensities from subcutaneous fat (bright), bone (zero), cerebrospinal fluid (bright), and white and gray matter (intermediate). A plot of the signal intensity in one dimension (along the dotted line) shows that there are regions of the brain where the signal changes rapidly as a function of x-coordinate (corresponding to high kx values) and other regions where there is very little change (low kx values). Figure 1.2b shows the areas corresponding to high kx values, and Figure 1.2c those corresponding to low kx values. These two images illustrate the important point that sharp edges and small features are represented by high spatial frequencies, and areas of slowly varying contrast by low spatial frequencies.

Figure 1.2 (a) An MRI of the head, showing many different tissues in the brain and skull. The line plot below the image shows the projection of the signal intensity along the dotted line. Areas where the signal intensity changes rapidly represent high kx values, and regions of relatively uniform signal intensity correspond to low kx values. (b) A map of areas which correspond to high kx and ky values. (c) A map of areas corresponding to low kx and ky values.

It might seem, therefore, that we want to design an imaging system which is very sensitive to high spatial frequencies, so that we can detect very small pathologies. However, we also have to consider that every image contains noise, which is randomly distributed across the image. The value of this noise changes rapidly from pixel to pixel, and so random noise corresponds to very high spatial frequencies. Therefore, an imaging system, or an image processing algorithm, which can capture high spatial frequencies is able to resolve small features in the patient, but is also sensitive to noise. We will return to this point when considering trade-offs between different image characteristics in Section 1.3.
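The definition of spatial frequency can be illustrated numerically (a sketch with hypothetical values): a one-dimensional pattern whose lines repeat every 0.5 mm has a spatial frequency of k = 1/0.5 = 2 mm−1, which appears as the dominant peak in its Fourier spectrum.

```python
import numpy as np

# A 1D "line pattern" with 0.5 mm between line centres, sampled at
# 0.01 mm/pixel over a 20 mm field of view (hypothetical values).
dx = 0.01                                     # sampling interval, mm
x = np.arange(0, 20, dx)
spacing = 0.5                                 # line spacing, mm
pattern = (np.sin(2 * np.pi * x / spacing) > 0).astype(float)

# Fourier transform: the dominant non-zero spatial frequency should
# equal 1/spacing = 2 lines/mm.
spectrum = np.abs(np.fft.rfft(pattern - pattern.mean()))
k = np.fft.rfftfreq(len(x), d=dx)             # spatial frequencies, mm^-1
k_peak = k[np.argmax(spectrum)]
print(f"dominant spatial frequency: {k_peak:.2f} mm^-1")
```

The same decomposition underlies Figure 1.2: filtering in the spatial frequency domain separates the sharp edges (high k) from the slowly varying regions (low k).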

1.3 Spatial Resolution

There are a number of measures which are used to describe the spatial resolution of an imaging modality: the most common are the point spread function (PSF) in the spatial domain and the MTF in the spatial frequency domain.


1.3.1 Imaging System Point Spread Function

The concept of the PSF of a particular imaging system is simply explained by considering a very small "point source" positioned within the imaging field of view (FOV), as shown in Figure 1.3a. This point source could be a small sphere of water for MRI, a small reflector for ultrasound, or a sphere filled with a γ-ray emitter for nuclear medicine. The image of the point source produced by the system could be a very good approximation to the actual object, as shown in Figure 1.3b, it could be blurred in all three dimensions as in Figure 1.3c, or it could be blurred primarily in only one dimension as in Figure 1.3d, depending upon the particular imaging modality and how the image was acquired and processed. The mathematical relationship between the reconstructed image, I(x, y, z), and the object, O(x, y, z), can be represented by:

I(x, y, z) = O(x, y, z) ∗ h(x, y, z)    (1.1)

where ∗ represents a convolution (see Appendix 1.A.3), and h(x, y, z) is the three-dimensional PSF. In a perfect imaging system, the PSF would be a delta function in all three dimensions, and in this case the image would be an exact representation of the object. In practice, the PSF has a finite width, which may be different in the x-, y-, and z-directions, and which results in image blurring. The PSF may also contain side lobes, as covered in Chapter 4 on ultrasound.

Figure 1.3 (a) The object to be imaged is a very small sphere, termed a point source. (b) An image acquired with a system that has a very narrow PSF: the image is an excellent representation of the actual object. (c) An image with a broad PSF in all three dimensions, resulting in an image which is very blurred. (d) An image with a broad PSF in one dimension and a narrow one in the other two dimensions.

There are several components which contribute to the overall PSF of an imaging system. The first is the intrinsic physics involved in the imaging method. For example, in Chapter 3 we will see that the γ-rays detected in PET arise from a positron–electron annihilation which occurs at a position with an associated "sphere of uncertainty" of ∼1–2 mm. This means that h(x, y, z) due to this process alone represents a sphere with the corresponding dimensions. Second, each component of the detection system, e.g. the lens or charge-coupled device (CCD) camera in an optical imaging device, or the flat panel detector used for computed tomography, also has an associated PSF. Third, we can choose how fine a stepsize to use to collect the data: this is the sampling contribution to the PSF. The finer the stepsize, the better the spatial resolution, but the longer the scan takes to acquire. The final contribution comes from the image reconstruction algorithm and any subsequent image filtering. Overall, the total system PSF, h_total(x, y, z), is given by a series of mathematical convolutions of the individual PSFs for each stage:

h_total(x, y, z) = h_physics(x, y, z) ∗ h_detector(x, y, z) ∗ h_sampling(x, y, z) ∗ h_filter(x, y, z)    (1.2)

A second commonly specified measure of spatial resolution, particularly for flat panel detectors used in CT, is the line spread function (LSF). As the name suggests, this reduces the dimensionality of the three-dimensional PSF to a one-dimensional LSF. Mathematically, the LSF is given by:

LSF(x) = ∬ PSF(x, y, z) dy dz    (1.3)

An edge spread function (ESF) is also sometimes used, defined as the convolution of the LSF with a step function. Experimentally, it is measured using a block of material with a sharp edge.
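The cascade of convolutions in Eq. (1.2) can be checked with a short one-dimensional numerical sketch. The Gaussian PSF widths below are hypothetical values chosen purely for illustration; for Gaussian PSFs the convolution is again a Gaussian, so the individual widths add in quadrature.

```python
import numpy as np

# One-dimensional sketch of Eq. (1.2) with hypothetical Gaussian PSFs
# for each stage of the imaging chain.
x = np.linspace(-10, 10, 2001)            # mm
dx = x[1] - x[0]

def gaussian(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

h_physics  = gaussian(x, 0.8)             # e.g. positron-range blurring
h_detector = gaussian(x, 0.5)
h_sampling = gaussian(x, 0.3)

# Discrete convolutions approximate the continuous ones when scaled by dx
h_total = np.convolve(np.convolve(h_physics, h_detector, mode="same"),
                      h_sampling, mode="same") * dx**2

# Effective width from the second moment of the total PSF
sigma_total = np.sqrt(np.sum(x**2 * h_total) / np.sum(h_total))
print(f"sigma_total = {sigma_total:.3f} mm")   # ≈ sqrt(0.8² + 0.5² + 0.3²) ≈ 0.990
```

This quadrature addition means that the broadest single stage tends to dominate the overall system resolution.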

1.3.2 Imaging System Resolving Power

From a practical point of view, the spatial resolution of an image can be defined as the smallest physical distance between two point sources for which the sources can be resolved as being separate. There are two mathematical functions, a sinc or a Gaussian, which are good approximations to the LSF in several imaging modalities. If the LSF is a sinc function, then the Rayleigh criterion [8] can be applied, which states that two point sources can be resolved if the peak intensity of the LSF from one source coincides with the first zero-crossing point of the LSF of the other, as shown in Figure 1.4a. In this case the spatial resolution is defined as one-half the width of the central lobe of the sinc function. If the LSF is a Gaussian function, then the one-dimensional LSF can be written as:

LSF(x) = (1/√(2πσ²)) exp(−(x − x₀)²/(2σ²))    (1.4)

where σ is the standard deviation of the distribution and x₀ is the center of the function. The full-width-half-maximum (FWHM) of a Gaussian function is given by:

FWHM = 2σ√(2 ln 2) ≈ 2.36σ    (1.5)

Therefore, if the separation between the two structures in the x-dimension is greater than 2.36 times the standard deviation of the Gaussian LSF of the imaging system, then the two structures can be distinguished. If the form of the LSF cannot be described by an analytical function, then it can still be characterized in terms of its FWHM, as shown in Figure 1.4b,c. The criterion for resolution is then that if the two point sources are separated by a distance greater than the FWHM, they can be resolved.

Figure 1.4 (a) For a sinc LSF, the signals from two point sources can be resolved when the separation between them is greater than half the width of the main lobe of the sinc function. (b) For an arbitrary LSF, the two point sources can be resolved when their separation is greater than the FWHM of the function. (c) In this example the two point sources can no longer be resolved due to the broad FWHM of the LSF.
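The FWHM relationship in Eq. (1.5) is easy to verify numerically; the LSF width below is a hypothetical value.

```python
import numpy as np

# Measure the full-width-half-maximum of a sampled Gaussian LSF and
# compare with 2.36*sigma from Eq. (1.5).
sigma = 1.5                               # hypothetical LSF width, mm
x = np.linspace(-10, 10, 100001)
lsf = np.exp(-x**2 / (2 * sigma**2))      # peak normalised to 1

above = x[lsf >= 0.5]                     # region at or above half maximum
fwhm = above[-1] - above[0]
print(f"FWHM / sigma = {fwhm / sigma:.3f}")   # ≈ 2.355 = 2*sqrt(2*ln 2)
```

Two point sources placed further apart than this FWHM (here ∼3.5 mm) would be distinguishable in the image; closer together, they would merge into a single feature.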

1.3.3 Imaging System Modulation Transfer Function

The spatial resolution of a system is also often characterized in terms of its MTF, which is measured in the spatial frequency domain. Since the spatial frequency and spatial domains are related by the Fourier transform (see Appendix 1.A), the MTF and the PSF are mathematically related by:

MTF(kx, ky, kz) = ∫∫∫ PSF(x, y, z) exp(−j2πkx x) exp(−j2πky y) exp(−j2πkz z) dx dy dz    (1.6)

where each integral runs from −∞ to ∞.

A “perfect” imaging system would exactly reproduce features corresponding to both low and high spatial frequencies, and would have an MTF of unity for all values of k, as shown in Figure 1.5a. This corresponds to a delta function for the PSF. The effects of different one-dimensional MTFs and corresponding PSFs are shown in Figure 1.5b,c: as expected, via the Fourier transform relationship, a narrow MTF corresponds to a broad PSF and vice versa. As outlined previously, calculation of the overall PSF of an imaging system involves the mathematical convolution of the PSFs from each of the individual components of that system. In the spatial frequency domain, the individual MTFs are multiplied together to give the overall system MTF.


Figure 1.5 (top right) The object being imaged corresponds to a set of lines with increasing spatial frequency from left to right. (a) An ideal MTF and the corresponding PSF produce an image which is an exact representation of the object. (b) A narrower MTF cannot accurately represent the very high spatial frequency information, and the resulting image is slightly blurred. (c) An even narrower MTF “loses” more of the high spatial frequency information and produces an image which is more blurred.

1.4 Signal-to-Noise Ratio

The second key characteristic of an image is the SNR: if the value is too low then small features may not be visible, and the degree of confidence of any clinical diagnosis is reduced. The simplest way to measure SNR for a given region-of-interest (ROI) in the image is to calculate the mean of the signal intensity within the ROI, and to divide this by the standard deviation of the noise in a background region which lies outside the body, as illustrated in Figure 1.6a,b:

SNR = μ_ROI / σ_background    (1.7)

This simple expression assumes that the noise is distributed uniformly over the image. If this is not the case, then an alternative method to estimate the SNR is to run a scan a number of times in succession and to calculate the mean and standard deviation of the signal within the ROI: this is obviously not really feasible for clinical scans, but can be performed in phantoms.
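As a concrete illustration, equation (1.7) can be applied to a synthetic image; the image size, ROI coordinates, and noise level below are arbitrary choices for the sketch, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 128 x 128 image: uniform "tissue" signal of 100 plus Gaussian noise,
# with a signal-free corner acting as the background region outside the body
image = 100.0 + rng.normal(0.0, 5.0, size=(128, 128))
image[:32, :32] = rng.normal(0.0, 5.0, size=(32, 32))

signal_roi = image[48:80, 48:80]   # ROI inside the "tissue"
background_roi = image[:32, :32]   # ROI outside the "body"

# Equation (1.7): SNR = mean(ROI) / std(background)
snr = signal_roi.mean() / background_roi.std()
print(snr)  # close to 100 / 5 = 20
```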

1 Image and Imaging System Characteristics

Figure 1.6 (top row) Measurement of SNR in an MRI of the brain estimated by the ratio of (a) the mean signal intensity from all of the voxels within a circular ROI to (b) the standard deviation of the noise from an ROI outside the head. (bottom row) Illustration of signal averaging to improve the image SNR. (c) MRI acquired in a single scan, (d) two identical scans averaged together, (e) four scans, and (f) sixteen scans.

If the SNR from a single scan is too low for a clinical diagnosis, then averaging of multiple scans can be used to increase its value. This process assumes that the signal from successive images is coherent (deterministic), and the noise is incoherent. This procedure of signal averaging is an integral component of OCT, and is also often used in MRI. If the measured signal, Ŝ, is represented as:

Ŝ = S + N    (1.8)

where S is the true signal and N is the noise component with a mean value of zero and a standard deviation σ_N, then the SNR for a single scan, SNR_1, is given by:

SNR_1 = |S| / σ_N    (1.9)

If K measurements are acquired and then averaged together, the averaged measured signal, S_K, is given by:

S_K = S + (1/K) Σ_{k=1}^{K} N_k    (1.10)

The SNR for the averaged scans, SNR_K, is given by:

SNR_K = |S| / √( var{ (1/K) Σ_{k=1}^{K} (S + N_k) } ) = √K |S| / σ_N = √K · SNR_1    (1.11)

Equation (1.11) shows that the SNR is proportional to the square root of the number of averaged images. The trade-off in signal averaging is the increased data acquisition duration, which is K-times as long. Figure 1.6c–f shows the effects of signal averaging for an MRI of the brain.
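The √K improvement of equation (1.11) can be checked numerically; the signal amplitude, noise level, and number of trials used here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
true_signal = 10.0
sigma_n = 2.0
n_trials = 20000

def snr_of_average(K):
    # Average K noisy measurements, then estimate SNR over many trials
    measurements = true_signal + rng.normal(0.0, sigma_n, size=(n_trials, K))
    averaged = measurements.mean(axis=1)
    return abs(averaged.mean()) / averaged.std()

snr1 = snr_of_average(1)    # close to 10 / 2 = 5
snr16 = snr_of_average(16)  # close to sqrt(16) * 5 = 20
print(snr16 / snr1)
```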

1.5 Contrast-to-Noise Ratio

Even if the image has a very high SNR, it may not be diagnostically useful unless there is sufficient CNR to distinguish between healthy and pathological tissues. The image contrast, C_AB, between two tissues A and B is defined as:

C_AB = |S_A − S_B|    (1.12)

where S_A and S_B are the image intensities from tissues A and B, respectively. The CNR, CNR_AB, between tissues A and B is defined in terms of their respective SNRs:

CNR_AB = C_AB / σ_N = |S_A − S_B| / σ_N = |SNR_A − SNR_B|    (1.13)

where 𝜎 N is the standard deviation of the noise. Figure 1.7 illustrates how a decrease in the CNR of an image can result in small features no longer being visible. This can occur either due to a poor SNR as shown in Figure 1.7a–c, or due to a poor spatial resolution as shown in Figure 1.7d–f.
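A short worked example of equations (1.12) and (1.13); the intensity and noise values are illustrative, not measurements from the text:

```python
# Hypothetical mean intensities of tissues A and B (arbitrary units)
s_a, s_b = 120.0, 100.0
sigma_n = 4.0          # standard deviation of the noise

snr_a = s_a / sigma_n  # 30.0
snr_b = s_b / sigma_n  # 25.0

# Equation (1.13): CNR is the contrast divided by the noise,
# which equals the difference of the two SNRs
cnr_ab = abs(s_a - s_b) / sigma_n
print(cnr_ab)  # 5.0, the same as |snr_a - snr_b|
```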

Figure 1.7 Images showing the effect of the CNR on the detectability of two internal features. (a–c) Illustration of the effects of a decrease in the CNR due to lower SNR. (d–f) The effects on CNR due to poorer spatial resolution.

1.6 Signal Digitization: Dynamic Range and Resolution

In all of the imaging modalities covered in this book, the output of the image detector is an analog electrical signal (current or voltage), which first passes through a low noise amplifier and is then digitized by an analog-to-digital converter (ADC). The ADC is characterized by several parameters, the most important of which are its voltage range, resolution, and sampling rate. For example, an ADC for MRI might have a voltage range of +10 to −10 V, a resolution of 16 bits, and a sampling rate of 100 megasamples per second. In terms of resolution, the number of different output levels for an N-bit ADC is given by 2^N, e.g. a 14-bit ADC has 16384 levels. The corresponding voltage resolution (also termed the least significant bit) of the ADC is the full voltage range divided by the number of levels. A higher resolution enables more accurate representation of the analog signal, but no matter how high the resolution of the ADC, there is an intrinsic error associated with digitizing an analog signal: this difference is called the quantization error or quantization noise. The higher the ADC resolution, the lower the quantization noise, as shown in Figure 1.8.

There are four different basic types of ADC: flash, successive approximation register (SAR), pipelined, and delta-sigma. These architectures each have different properties in terms of their maximum resolution and sampling rate. Flash has very fast sampling (>1 GHz) but relatively low resolution, pipelined and SAR cover an intermediate range, and delta-sigma has very high resolution but lower sampling rates.

After signal digitization, image reconstruction, and any image filtering (see Section 1.7), the images are stored digitally but must also be made available for the physician to view. The viewing station has a certain dynamic range in terms of the number of graytone levels that it can display. Typically the image is displayed at 8-bit resolution, i.e. with 256 different gray levels. Rather than compress the full dynamic range of the digitized image to this resolution, which would lose information, a “window” within the full dynamic range of the data is chosen, and this window is expanded or compressed into the 256 graytone levels that are displayed.

Figure 1.8 A demonstration of the effects of the number of bits on the quantization noise of a digitized signal. The dotted line represents the analog signal, the solid gray line the digital output, and the solid black line the quantization noise. The vertical axis represents amplitude and the horizontal axis time.
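The behavior shown in Figure 1.8 can be reproduced with a simple uniform quantizer; this sketch assumes a mid-rise quantizer and a sinusoidal test signal, neither of which is specified in the text:

```python
import numpy as np

def quantize(signal, n_bits, v_max=1.0):
    """Mid-rise uniform quantizer for signals in the range [-v_max, +v_max]."""
    levels = 2 ** n_bits
    lsb = 2 * v_max / levels              # voltage resolution (least significant bit)
    codes = np.clip(np.floor((signal + v_max) / lsb), 0, levels - 1)
    return (codes + 0.5) * lsb - v_max    # reconstructed analog value

t = np.linspace(0, 1, 1000)
analog = 0.8 * np.sin(2 * np.pi * 3 * t)  # a 3 Hz, 0.8 V test tone

# The quantization noise (std of the error) roughly halves with each extra bit
noise = {bits: (analog - quantize(analog, bits)).std() for bits in (3, 4, 5)}
print(noise)
```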

1.7 Post-acquisition Image Filtering

After the raw data have been digitized and the images reconstructed, different filters can be applied to improve the image SNR or spatial resolution, or to highlight features such as edges/boundaries between tissues. Since these filters are applied to stored digital data, the effects of different filters can be evaluated without affecting the raw data.

The simplest method to improve SNR is to apply a low-pass filter to the image. The term “low-pass” refers to the characteristics of the filter in the spatial frequency domain, i.e. this type of filter preferentially passes the low spatial frequencies in the image. As described in Section 1.2, low spatial frequencies are associated with areas of relatively uniform signal intensity, whereas high spatial frequencies represent the fine detail within tissue, sharp boundaries between tissues, and also noise. A low-pass filter therefore improves the image SNR by attenuating the contribution from noise, but it also degrades the spatial resolution.

Such a filter can be applied in the spatial domain via a convolution process, as shown in Figure 1.9. Convolution involves placing the filter kernel, in this case a 3 × 3 matrix, over the image pixels, multiplying each pixel by the corresponding component of the kernel, and replacing the center pixel by the weighted sum of these values. The kernel is then displaced by one pixel in the horizontal dimension, and the process is repeated until the kernel has been applied to all the pixels in this horizontal dimension. This process is repeated for the next row of pixels until the whole image has been filtered. Figure 1.9 shows three different simple image kernels for low-pass, high-pass, and edge-enhancing filters. Figure 1.10 shows the effects of applying each of these three filters separately on different MRI brain scans.
In practice, more sophisticated filtering is often performed, in which the filter has different characteristics for different areas of the image based on the local image characteristics: for example a Wiener filter provides an optimum trade-off between SNR and spatial resolution [9].
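A minimal sketch of spatial-domain convolution filtering with the low-pass kernel of Figure 1.9; the normalization by the kernel sum (12) is an added assumption so that uniform regions keep their intensity, and edge pixels are simply left unfiltered:

```python
import numpy as np

def convolve3x3(image, kernel):
    """Apply a 3x3 kernel by direct convolution (edge pixels left unfiltered)."""
    out = image.astype(float).copy()
    k = np.flipud(np.fliplr(kernel))  # flip the kernel for true convolution
    for i in range(1, image.shape[0] - 1):
        for j in range(1, image.shape[1] - 1):
            out[i, j] = np.sum(image[i-1:i+2, j-1:j+2] * k)
    return out

# Low-pass kernel from Figure 1.9, normalized by its sum (an assumption)
low_pass = np.array([[1, 1, 1],
                     [1, 4, 1],
                     [1, 1, 1]]) / 12.0

rng = np.random.default_rng(2)
noisy = 50.0 + rng.normal(0.0, 10.0, size=(64, 64))
smoothed = convolve3x3(noisy, low_pass)

# The low-pass filter reduces the noise standard deviation
print(noisy[1:-1, 1:-1].std(), smoothed[1:-1, 1:-1].std())
```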

Low-pass        High-pass        Edge enhancing
 1  1  1        −1 −1 −1         +1 +1 +1
 1  4  1        −1 +9 −1          0  0  0
 1  1  1        −1 −1 −1         −1 −1 −1
Figure 1.9 Illustration of how image convolution is carried out in the spatial domain. Three different 3 × 3 filter kernels are shown.

Figure 1.10 Illustrations of the effects of different post-acquisition filters. Transforming from image (a) to image (d) shows the effects of an edge-enhancing filter applied to a high-resolution, high SNR image. From image (b) to image (e) the effect of a low-pass filter on a noisy image. From image (c) to image (f) a high-pass filter applied to an image with low spatial resolution.

1.8 Assessing the Clinical Impact of Improvements in System Performance

The ultimate aim of improving the technology for medical imaging is of course to improve patient outcome. Fryback and Thornbury [10] have proposed six hierarchical levels to characterize the path from technological improvements to an increase in the clinical efficacy of diagnostic imaging: an adapted version is shown in Figure 1.11. It is important to note that there is not necessarily a straightforward correlation between technological development (level 1) and patient outcome efficacy (level 5). For example, a technical innovation may make a system much too expensive for widespread use: alternatively, the improvement may be deemed too small to change local medical insurance policies. In other words, traversing between different levels involves many more considerations than just the technical aspects of medical imaging which are considered in this book. Nevertheless, it is important to have a quantitative measure of, for example, how level 1 improvements in technology relate to level 2 improvements in diagnostic accuracy, sensitivity, and specificity, as covered in Section 1.8.1.

Level 6: improvements in cost/benefit of societal healthcare
Level 5: improvements in patient outcome (morbidity/mortality, quality of life)
Level 4: changes in therapeutic decisions
Level 3: changes in diagnostic decisions
Level 2: improvements in diagnostic accuracy, sensitivity, and specificity
Level 1: improvements in technology (speed, SNR, CNR, and spatial resolution)
Figure 1.11 Levels of evidence in evaluating the efficacy of diagnostic imaging. Source: Adapted from Fryback and Thornbury [10].

1.8.1 The Receiver Operating Characteristic Curve

The quantitative effect of a technological improvement on clinical diagnosis can be assessed using a receiver operating characteristic (ROC) curve. The concept of the ROC curve is illustrated here with a simple example. Consider the situation in which a patient is suspected of having a tumor. There are four possibilities for a radiologist making the diagnosis based on a series of images: true positive (where true refers to the correct diagnosis and positive to the tumor being present), true negative, false positive, and false negative, as shown in Figure 1.12. Three measures are commonly used in ROC curve analysis:

Accuracy is the number of correct diagnoses divided by the total number of diagnoses;
Sensitivity is the number of true positives divided by the sum of the true positives and false negatives;
Specificity is the number of true negatives divided by the sum of the number of true negatives and false positives.

The ROC curve plots the sensitivity (also known as the true positive rate) on the vertical axis versus 1 − specificity (also known as the false positive rate) on the horizontal axis. The area under the ROC curve is a measure of the effectiveness

Diagnosis \ Actual situation    Tumor present     Tumor absent
Tumor present                   True positive     False positive
Tumor absent                    False negative    True negative

(ROC axes: vertical, true positive rate = sensitivity, from 0 to 1.0; horizontal, false positive rate = 1 − specificity, from 0 to 1.0.)
Figure 1.12 (a) A table showing the four possible outcomes of a tumor diagnosis. (b) An ROC curve corresponding to this table. The better the diagnosis, the higher the integrated area under the ROC curve. The dotted line shows the situation for a perfect diagnosis.

of the imaging system and/or the clinician’s interpretation of the images. A value of 100% for accuracy, sensitivity, and specificity is represented by a point on the ROC curve given by a true-positive fraction of 1, and a false-positive fraction of 0, i.e. the dashed line in Figure 1.12. The closer the actual ROC curve lies to this ideal line, the better. The integral under the ROC curve, therefore, gives a quantitative measure of the quality of the diagnostic procedure. So, if the performance (SNR, CNR, and spatial resolution) of an imaging system is improved, an ROC curve analysis can be performed comparing the new versus the old system, with the result being a quantitative measure of the improvement in clinical quality.
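The three ROC measures, and the area under an ROC curve sampled at a few operating points, can be computed directly; all counts and operating points below are hypothetical:

```python
import numpy as np

def roc_measures(tp, fp, fn, tn):
    """Sensitivity, specificity, and accuracy from a 2x2 diagnosis table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Hypothetical counts: 100 patients, 32 of whom have the disease
sens, spec, acc = roc_measures(tp=25, fp=6, fn=7, tn=62)
print(sens, spec, acc)

# Area under an ROC curve sampled at a few operating points (trapezoidal rule)
fpr = np.array([0.0, 0.05, 0.2, 0.5, 1.0])    # false positive rate = 1 - specificity
tpr = np.array([0.0, 0.60, 0.85, 0.95, 1.0])  # true positive rate = sensitivity
auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)
print(auc)
```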

1.A Appendix

1.A.1 Fourier Transforms

The Fourier transform is an integral part of image processing for many of the modalities covered in this book. For example, in MRI (Chapter 5), the signals are acquired in the spatial frequency domain, and the signals undergo a multi-dimensional inverse Fourier transform to produce the image. In ultrasound imaging (Chapter 4), spectral Doppler plots are the result of Fourier transformation of the time-domain demodulated Doppler signals. This short appendix summarizes the basic mathematics and useful properties of the Fourier transform.

1.A.2 Fourier Transforms of Time Domain and Spatial Frequency Domain Signals

The forward Fourier transform, S(f), of a time domain signal, s(t), is given by:

S(f) = ∫_{−∞}^{∞} s(t) e^(−j2πft) dt    (1.A.1)

The inverse Fourier transform, s(t), of a frequency domain signal, S(f), is given by:

s(t) = ∫_{−∞}^{∞} S(f) e^(+j2πft) df    (1.A.2)

The forward Fourier transform, S(k), of a spatial domain signal, s(x), is given by:

S(k) = ∫_{−∞}^{∞} s(x) e^(−j2πkx) dx    (1.A.3)

The corresponding inverse Fourier transform, s(x), of a spatial frequency domain signal, S(k), is given by:

s(x) = ∫_{−∞}^{∞} S(k) e^(+j2πkx) dk    (1.A.4)

Some useful Fourier pairs are shown in Figure 1.A.1.

Figure 1.A.1 Some Fourier transform pairs commonly used in imaging:

Comb:         Σ_{n=−∞}^{∞} δ(t − nΔt)   ⇔   (1/Δt) Σ_{n=−∞}^{∞} δ(f − n/Δt)   (Comb)
Exponential:  e^(−a|t|)                  ⇔   2a / (a² + 4π²f²)                 (Lorentzian)
Gaussian:     e^(−ax²)                   ⇔   √(π/a) e^(−π²k²/a)                (Gaussian)
Rectangular:  Π(x)                       ⇔   sin(πk) / (πk)                    (Sinc)

Imaging signals are acquired in more than one dimension, and image reconstruction then requires multi-dimensional Fourier transformation. For example:

S(k_x, k_y) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} s(x, y) e^(−j2π(k_x x + k_y y)) dx dy    (1.A.5)

s(x, y) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} S(k_x, k_y) e^(+j2π(k_x x + k_y y)) dk_x dk_y    (1.A.6)

and similarly, in three dimensions. Highly efficient computational algorithms make the Fourier transform one of the quickest mathematical transforms to perform.
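In practice the discrete analogs of equations (1.A.5) and (1.A.6) are computed with the fast Fourier transform, for example using NumPy:

```python
import numpy as np

rng = np.random.default_rng(3)
image = rng.normal(size=(64, 64))      # a random 2D "image"

# Forward 2D FFT: spatial domain -> spatial frequency domain
spectrum = np.fft.fft2(image)

# Inverse 2D FFT recovers the original image (up to floating-point error)
recovered = np.fft.ifft2(spectrum)
print(np.allclose(recovered.real, image))  # True: the transform pair is lossless
```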

1.A.3 Useful Properties of the Fourier Transform

In order to understand many aspects of medical imaging, both in terms of the spatial resolution inherent to the particular modality and also the effects of image post-processing, a number of mathematical properties of the Fourier transform are very useful. The most relevant examples are listed below.

(a) Linearity: The Fourier transform of two additive functions is itself additive:

as1(t) + bs2(t) ⇔ aS1(f) + bS2(f)
as1(x) + bs2(x) ⇔ aS1(k_x) + bS2(k_x)    (1.A.7)

This theorem shows that if the acquired time-domain signal consists of, for example, the sum of a number of different sinusoidal functions, each with a different frequency and amplitude, then the relative amplitudes of each component are maintained when the data are Fourier transformed, as shown in Figure 1.A.2.

Figure 1.A.2 Illustration of the linearity of the Fourier transform. A time-domain signal (middle) is composed of three different time-domain signals (left). The Fourier transformed frequency spectrum (right) consists of signals from each of the three different frequencies with the same amplitudes as in the time-domain data.

(b) Convolution: The equivalent of multiplying two signals together in one domain is the convolution (∗) of the two individual Fourier transformed components in the other domain, and vice versa:

s1(t)s2(t) ⇔ S1(f) ∗ S2(f)
S1(k_x)S2(k_x) ⇔ s1(x) ∗ s2(x)    (1.A.8)

The convolution of two functions p(x) and q(x) is defined as:

p(x) ∗ q(x) = ∫_{−∞}^{∞} p(x − τ) q(τ) dτ    (1.A.9)
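The convolution theorem of equation (1.A.8) can be verified numerically for the discrete (circular) case:

```python
import numpy as np

rng = np.random.default_rng(4)
p = rng.normal(size=128)
q = rng.normal(size=128)

# Circular convolution via the convolution theorem: multiply the two
# spectra in the frequency domain, then inverse transform
conv_fft = np.fft.ifft(np.fft.fft(p) * np.fft.fft(q)).real

# Direct circular convolution for comparison:
# (p * q)[k] = sum_n p[n] q[(k - n) mod N]
conv_direct = np.array([np.sum(p * np.roll(q[::-1], k + 1)) for k in range(128)])

print(np.allclose(conv_fft, conv_direct))  # True
```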

(c) Scaling law: if either a time-domain or spatial-domain signal is scaled by a factor b, then its Fourier transform is scaled by the inverse factor 1/b, i.e.:

s(bt) ⇔ (1/|b|) S(f/b)
s(bx) ⇔ (1/|b|) S(k_x/b)    (1.A.10)

One example already encountered in Section 1.3.3 is the correspondence of a narrow PSF to a broad MTF, and vice versa.

Exercises

Section 1.2

1.1 For the one-dimensional objects O(x), and LSFs h(x) shown in Figure 1.13, draw the resulting projections I(x). Write down whether each object contains high spatial frequencies, low spatial frequencies, or both. Which image best represents the object, and which is the most distorted?

Figure 1.13 Exercise to show the effect of a fixed h(x) on four different objects O(x).

Section 1.3

1.2 Show mathematically that the FWHM of a Gaussian function is given by:

FWHM = 2√(2 ln 2) σ ≅ 2.36σ
1.3 Plot the MTF on a single graph for each of the convolution filters shown below (Figure 1.14).

1  1  1        1  1  1        1  1  1
1  4  1        1 12  1        1  1  1
1  1  1        1  1  1        1  1  1
Figure 1.14 Exercise to calculate the MTF associated with three different convolution kernels.

Section 1.4

1.4 If the ROI chosen for an estimate of the SNR is four times bigger than the ROI chosen to estimate the standard deviation of the background noise, how does this affect the SNR measurement?

1.5 If the SNR is very low, then the formula given by equation (1.7) is no longer valid. Explain why this is so, and suggest what corrections could be made in this case.

Section 1.6

1.6 An analog signal of 526.35 mV is to be digitized using (i) a four-bit, (ii) a six-bit, and (iii) an eight-bit ADC, each with a voltage range of 1 V. Calculate the digitized signals for the three cases, and the percentage error compared to the actual analog signal.

1.7 Explain the effects of a signal which has a magnitude higher than the upper limit of an ADC. What does the time-domain signal look like, and what effect does this have on the frequency-domain signal?

1.8 An ultrasound signal is digitized using a 16-bit ADC at a sampling rate of 3 MHz. If the image takes 20 ms to acquire, how much data (in Mbytes) are there in each ultrasound image? If images are acquired for 20 s continuously, what is the total data output of the scan?

1.9 If a signal is digitized at a sampling rate of 20 kHz, at what frequency would a signal at 22 kHz appear?

1.10 A signal is sampled every 1 ms for 20 ms, with the following actual values of the analogue voltage at successive sampling times. Plot the values of the voltage recorded by a 5 V, 2-bit ADC assuming that the noise level is much lower than the signal and so can be neglected. On the same graph, plot the quantization error.

Signal (volts) = −4.3, +1.2, −0.6, −0.9, +3.4, −2.7, +4.3, +0.1, −3.2, −4.6, +1.8, +3.6, +2.4, −2.7, +0.5, −0.5, −3.7, +2.1, −4.1, −0.4

1.11 Using the same signal as in Exercise 1.10, plot the values of the voltage and the quantization error recorded by a 5 V, 3-bit ADC.

Section 1.8

1.12 In a patient study for a new test for multiple sclerosis (MS), thirty-two of the one hundred patients studied actually have MS. For the data given below, complete the two-by-two matrices and construct an ROC. The number of lesions detected in each patient corresponds to the threshold value for designating MS as the diagnosis.

No. lesions detected                     50   40   30   20   10    5    2
Patients with MS (positive test)          2    8   16   22   25   30   32
Patients without MS (positive test)       0    1    3    6   15   35   60

1.13 Choose a medical condition and suggest a clinical test which would have: (a) High sensitivity but low specificity, (b) Low sensitivity but high specificity.

1.14 What does an ROC curve that lies below the random line, i.e. a line at 45° in Figure 1.12b, suggest? Could this be diagnostically useful?

References

1 F. R. Verdun, D. Racine, J. G. Ott, M. J. Tapiovaara, P. Toroi, F. O. Bochud, W. J. H. Veldkamp, A. Schegerer, R. W. Bouwman, I. H. Giron, N. W. Marshall, and S. Edyvean, Image quality in CT: from physical measurements to model observers, Physica Med. 31(8):823–843 (2015).

2 S. D. Yu, G. Z. Dai, Z. Y. Wang, L. D. Li, X. H. Wei, and Y. Xie, A consistency evaluation of signal-to-noise ratio in the quality assessment of human brain magnetic resonance images, BMC Med. Imaging 18, 17 (2018).
3 K. M. Kempski, M. T. Graham, M. R. Gubbi, T. Palmer, and M. A. L. Bell, Application of the generalized contrast-to-noise ratio to assess photoacoustic image quality, Biomed. Opt. Express 11(7):3684–3698 (2020).
4 A. Rodriguez-Molares, O. M. H. Rindal, J. D’hooge, S. E. Masoy, A. Austeng, M. A. L. Bell, and H. Torp, The generalized contrast-to-noise ratio: a formal definition for lesion detectability, IEEE Trans. Ultrason. Ferroelectr. Freq. Control 67(4):745–759 (2020).
5 J. H. Yan, J. Schaefferkoetter, M. Conti, and D. Townsend, A method to assess image quality for low-dose PET: analysis of SNR, CNR, bias and image noise, Cancer Imaging 16, 26 (2016).
6 A. Ng and J. Swanevelder, Resolution in ultrasound imaging, Continuing Educ. Anaesth. Crit. Care Pain 11(5):186–192 (2011).
7 W. W. Moses, Fundamental limits of spatial resolution in PET, Nucl. Instrum. Methods Phys. Res., Sect. A 648 Supplement 1:S236–S240 (2011).
8 L. Rayleigh, Investigations in optics, with special reference to the spectroscope, Philos. Mag. 8(49):261–274 (1879).
9 Y. Q. Zeng, B. C. Zhang, W. Zhao, S. X. Xiao, G. K. Zhang, H. P. Ren, W. B. Zhao, Y. H. Peng, Y. T. Xiao, Y. W. Lu, Y. S. Zong, and Y. M. Ding, Magnetic resonance image denoising algorithm based on cartoon, texture, and residual parts, Comput. Math. Methods Med. 2020, 1405647 (2020).
10 D. G. Fryback and J. R. Thornbury, The efficacy of diagnostic-imaging, Med. Decis. Making 11(2):88–94 (1991).

Further Reading

Books

J. Prince and J. Links, Medical Imaging Signals and Systems, 2nd ed., Pearson, New York City, USA (2014).
H. Azhari, J. A. Kennedy, N. Weiss, and L. Volokh, From Signals to Image: A Basic Course on Medical Imaging for Engineers, Springer, Cham, Switzerland (2020).

Review Articles

A. Kayugawa, M. Ohkubo, and S. Wada, Accurate determination of CT point-spread function with high precision, J. Appl. Clin. Med. Phys. 14(4):216–226 (2013).
L. Z. Chow and R. Paramesran, Review of medical image quality assessment, Biomed. Signal Process. Control 27, 145 (2016).

A. N. Kamarudin, T. Cox, and R. Kolamunnage-Dona, Time-dependent ROC curve analysis in medical research: current methods and applications, BMC Med. Res. Method. 17, 53 (2017).
N. A. Obuchowski and J. A. Bullen, Receiver operating characteristic (ROC) curves: review of methods with applications in diagnostic medicine, Phys. Med. Biol. 63, 07TR01 (2018).
S. Perfetto, J. Wilder, and D. B. Walther, Effects of spatial frequency filtering choices on the perception of filtered images, Vision 4, 4020029 (2020).

2 X-ray Imaging and Computed Tomography

2.1 General Principles of Imaging with X-rays

X-ray imaging is a transmission-based technique in which X-rays generated by a source pass through the patient and strike a flat panel detector (FPD) placed underneath the patient, as shown in Figure 2.1a. Contrast in the image arises from differential attenuation of X-rays as they pass through different tissues. For example, X-rays are very efficiently attenuated in bone but much less so in soft-tissue, as shown in Figure 2.1b. In planar X-ray radiography, the image produced is a simple two-dimensional projection of the tissues lying between the X-ray source and the detector. Planar X-ray radiography is used for a number of different purposes including the assessment of possible bone fractures, chest radiography for diseases of the lung, intravenous pyelography (IVP) to detect diseases of the genitourinary tract including kidney stones, and X-ray fluoroscopy (in which images are acquired continuously over a period of several minutes) for image-guided interventional surgeries such as pacemaker placement.

For many clinical diagnoses, three-dimensional volumetric imaging of thin slices with good soft tissue contrast is required. In these cases, X-ray computed tomography (CT) is used. The basic principles of CT are shown in Figure 2.2a. The X-ray source and detectors together rotate around the patient, producing a series of one-dimensional transmission projections. The patient bed continuously slides through the rotating source/detector plane to give a three-dimensional data set through the body in only a few seconds. These data are reconstructed to give a series of two-dimensional images, which can also be visualized as a 3D surface, as shown in Figure 2.2b. The strengths of CT as an imaging modality include: (i) very high spatial resolution (50% higher count-rates than a NaI(Tl)/PMT SPECT system.
CZT detectors have been primarily incorporated into dedicated cardiac SPECT scanners, designed to measure myocardial perfusion to diagnose coronary artery

Figure 3.11 (a) Photograph of a dedicated cardiac SPECT system. (b) Schematic of the operation of the scanner, which does not require rotation. (c) Layout of the nineteen arrays of four-by-four detector blocks, each of which contains 256 CZT detectors. Source: General Electric Company.

disease, as well as damage to the heart following a heart attack. The fundamental geometry of a dedicated cardiac system is different from a conventional SPECT system, being C-shaped rather than elliptical, and also not requiring rotation. Figure 3.11 shows one commercial setup with 19 sets of four 16 × 16 pixel CZT modules arranged in three vertical rows. Square-shaped collimators are made from lead and the septa are registered to the appropriate square-shaped detectors: the septa reduce the pixel size from a physical 3 × 3 mm to ∼2.5 × 2.5 mm. In addition to standard imaging, the fact that many detectors are acquiring data simultaneously, and that rotation is not required to form the image, means that dynamic SPECT scanning can be performed.

3.7 Image Reconstruction

Although filtered backprojection algorithms can be used for image reconstruction, most systems now use iterative methods, very similar to those outlined for CT. The most common algorithm is ordered subset expectation maximization (OS-EM) [8]

3 Nuclear Medicine

[Figure 3.12 flowchart: the image estimate is forward projected (modeling attenuation, scatter, and the collimator), the calculated projections are compared with the measured projections, and the calculated error is used to update the image estimate; the loop repeats until the error falls below a convergence value, at which point the image estimate is accepted.]
Figure 3.12 Schematic of an iterative reconstruction scheme for SPECT/CT. The initial image estimate can be calculated in many ways including filtered backprojection.

which is very fast and efficient. System characteristics such as position-dependent scatter, slight differences in individual detector efficiencies, position-dependent attenuation and spatial resolution due to the particular collimator used, and the effects of dead-time, can all be built into the iterative reconstruction model, shown schematically in Figure 3.12. Specific methods for attenuation and scatter correction are outlined in Sections 3.7.1 and 3.7.2.
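A toy sketch of the multiplicative expectation-maximization update that underlies OS-EM, for a tiny 1D problem with a hypothetical 3 × 3 system matrix and noiseless data; a real implementation would model attenuation, scatter, and the collimator inside the forward projection, and would update over ordered subsets of the projections:

```python
import numpy as np

# Hypothetical 3 x 3 system matrix A: projections = A @ activity
# (3 detector bins viewing 3 voxels)
A = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
true_activity = np.array([10.0, 50.0, 20.0])
measured = A @ true_activity          # noiseless projections for simplicity

estimate = np.ones(3)                 # uniform initial image estimate
sensitivity = A.sum(axis=0)           # per-voxel sensitivity (column sums of A)

for _ in range(500):
    forward = A @ estimate                         # forward projection
    ratio = measured / np.maximum(forward, 1e-12)  # compare with measured data
    estimate *= (A.T @ ratio) / sensitivity        # multiplicative EM update

print(estimate)  # converges toward [10, 50, 20]
```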

3.7.1 Attenuation Correction

Spatially dependent γ-ray attenuation coefficients due to passing through different tissues can give artifacts in the image, and so need to be incorporated into the iterative image reconstruction scheme. On a SPECT/CT system an anatomical attenuation map can be generated from the CT data. Since each SPECT projection takes several tens of seconds to acquire and so is averaged over many respiratory cycles (meaning that motion blurs out many of the sharp tissue boundaries), the CT maps are smoothed with a Gaussian filter to give approximately the same PSF as the SPECT images. In addition, because attenuation effects vary with energy, it is necessary to convert the CT attenuation data, which are acquired at an effective

Figure 3.13 (a) Plot of the correction factor to convert attenuation coefficients measured by CT into appropriate values for attenuation correction of 140 keV γ-rays. (b) A dual-energy approach to estimating the scatter contribution to each pixel in the image.

energy of ∼68 keV into attenuation maps corresponding to an energy of 140 keV. This is typically accomplished by using a bilinear model [5] relating attenuation coefficients at the desired energy to CT numbers measured at the effective energy, as shown in Figure 3.13a.

3.7.2 Scatter Correction

The number of scattered γ-rays in the raw SPECT image varies from pixel-to-pixel, and so a position-dependent scatter correction needs to be performed. One method for performing this correction uses a dual-energy window detection method, illustrated in Figure 3.13b. One energy window is centered at the 140 keV photopeak, with a “subwindow” set to a lower energy. The main window contains contributions from both scattered and unscattered γ-rays, but the subwindow has contributions only from scattered γ-rays. The main window typically has a fractional width (W_m) of 20%, i.e. between 126 and 154 keV for 99mTc. The subwindow has a fractional width (W_s) of ∼7% centered at 121 keV. The true number of primary γ-rays, C_prim, can be calculated from the total count, C_total, in the main window, and the count, C_sub, in the subwindow:

C_prim = C_total − C_sub W_m / (2W_s)    (3.19)

An alternative method is to use model-based scatter compensation [6]. Whichever method is used, the results from scatter correction are incorporated into the iterative reconstruction algorithm shown in Figure 3.12.
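Equation (3.19) as a small helper function; the window widths default to the typical 20% and 7% quoted in the text, while the count values in the example are purely illustrative:

```python
def primary_counts(c_total, c_sub, w_main=0.20, w_sub=0.07):
    """Dual-energy window scatter estimate, equation (3.19).

    c_total: counts in the main (photopeak) window
    c_sub:   counts in the lower-energy subwindow
    """
    return c_total - c_sub * w_main / (2 * w_sub)

# Illustrative numbers (not from the text): 10000 counts in the main
# window, 700 in the subwindow
print(primary_counts(10000, 700))  # 10000 - 700 * 0.20 / 0.14 = 9000.0
```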


3 Nuclear Medicine

3.8 Image Characteristics

As outlined in the introduction to this chapter, the characteristics of SPECT scans are relatively low SNR and spatial resolution, but extremely high CNR, compared to other imaging modalities.

3.8.1 Signal-to-Noise

As for the case of X-ray production covered in Chapter 2, radioactive decay is a statistical process, so the number of radioactive disintegrations per unit time fluctuates around an average value described by a Poisson distribution. The SNR is therefore proportional to the square root of the total number of counts: the more γ-rays detected, the higher the SNR. Factors which affect the SNR include the following:

1. The radioactive dose administered – the number of γ-rays detected is proportional to the radiotracer dose, but there are clearly patient safety limits to the dose that can be used. There is also an upper limit on the number of counts that can be recorded per unit time, beyond which the dead-time of the system means that further increases in the counts do not improve the SNR. Typically, tens of thousands of times fewer counts are detected using SPECT than for CT.
2. The effectiveness of the radiotracer at targeting a specific organ – the higher the organ-specificity of the radiotracer, the higher the accumulated dose in that organ and the higher the SNR.
3. The total time over which the image is acquired – the longer the acquisition, the larger the number of γ-rays detected. The time is limited by patient comfort and by the radioactive and biological half-lives of the radiotracer.
4. The intrinsic sensitivity of the gamma camera – for a given system, increasing the crystal thickness increases the SNR since more γ-rays are detected (although this decreases the spatial resolution). Decreasing the length or thickness of the lead septa increases the SNR, but decreases the image contrast due to the larger contribution from Compton-scattered γ-rays. The geometry of the collimator, e.g. pinhole, converging, diverging, or other, also affects the image SNR.
5. Post-acquisition image filtering – due to the relatively low SNR of nuclear medicine images, additional processing of the final image to aid diagnosis is standard.
Normally a low-pass filter is applied to the image, which reduces the noise level, but also blurs the image. Since the intrinsic image spatial resolution is relatively poor, quite strong filtering can be applied without introducing significant blurring.
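The square-root dependence of SNR on counts has a direct practical consequence, shown in this short calculation (the count values are illustrative):

```python
import math

def snr_from_counts(n):
    """SNR for Poisson counting statistics: signal N, noise sqrt(N)."""
    return n / math.sqrt(n)  # equal to sqrt(N)

# Doubling the SNR requires 4x the counts, i.e. 4x the dose, scan time,
# or detector sensitivity (or some combination of the three).
print(snr_from_counts(10_000))   # 100.0
print(snr_from_counts(40_000))   # 200.0
```

This is why modest gains in collimator sensitivity or scan time translate into only square-root improvements in image quality.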


3.8.2 Spatial Resolution

There are three major contributions to the spatial resolution of a SPECT scan:

1. The intrinsic spatial resolution of the gamma camera (excluding the collimator), Rgamma, reflects the uncertainty in the exact location at which light is produced in the scintillation crystal. This value is determined by the thickness of the crystal and by the intrinsic resolution of the Anger position encoder. The thicker the crystal, the broader the light spread function, and the poorer the spatial resolution. A typical value of Rgamma lies in the range 3–5 mm.
2. The geometry of the collimator – from Eq. (3.16) the spatial resolution resulting from the use of a parallel-hole collimator is determined by the length and thickness of, and spacing between, the lead septa, as well as by depth within the body. The spatial resolution also depends on the choice of parallel, converging, diverging, or pinhole collimators.
3. Post-acquisition image filtering – as covered in Section 3.8.1.

Considering the first two terms, the overall spatial resolution is given by:

$$R_{\mathrm{system}} = \sqrt{R_{\mathrm{gamma}}^2 + R_{\mathrm{coll}}^2} \qquad (3.20)$$

Typical values for the overall system spatial resolution are ∼10–15 mm deep within the body, and 5–8 mm close to the collimator surface, as shown previously in Figure 3.6.
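The quadrature combination of Eq. (3.20) can be evaluated directly; the collimator resolution values below are illustrative rather than taken from a specific collimator:

```python
import math

def system_resolution(r_gamma, r_coll):
    """Combine intrinsic and collimator resolutions in quadrature (Eq. 3.20)."""
    return math.hypot(r_gamma, r_coll)

# Collimator resolution degrades with depth in the body (Eq. 3.16), so the
# overall resolution worsens for deeper structures.
r_gamma = 4.0  # mm, intrinsic camera resolution (illustrative)
for depth_label, r_coll in [("at surface", 5.0), ("10 cm deep", 10.0)]:
    print(depth_label, round(system_resolution(r_gamma, r_coll), 1), "mm")
```

Note that the larger term dominates: once the collimator resolution is 10 mm, a 4 mm intrinsic resolution adds less than 1 mm to the total.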

3.8.3 Contrast-to-Noise

The intrinsic image contrast is extremely high in nuclear medicine since there is no background signal. However, the presence of Compton-scattered γ-rays does contribute an "apparent" background signal. If the spatial resolution is poor, then the CNR is reduced, since image blurring causes signal "bleed" from areas of high signal intensity into those where no signal should be present. This phenomenon is referred to as the partial volume effect, and is particularly pronounced for small structures. Post-acquisition processing using low-pass filters to increase the SNR also affects the CNR: these filters can either increase the CNR, if the SNR gain outweighs the loss in spatial resolution, or decrease it, if the loss in spatial resolution is dominant.
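The trade-off between noise reduction and blurring can be illustrated with a one-dimensional toy simulation: a small "hot" lesion on a uniform background with Poisson-like noise, smoothed with a simple box filter. All values (lesion size, counts, filter width) are invented for illustration.

```python
import math
import random

random.seed(1)

n = 200
truth = [100.0] * n
truth[95:105] = [400.0] * 10   # small hot lesion on a uniform background

# Poisson-like noise: standard deviation equal to sqrt(counts).
noisy = [random.gauss(t, math.sqrt(t)) for t in truth]

def box_filter(x, w):
    """Simple moving-average low-pass filter of width w (edges truncated)."""
    half = w // 2
    return [sum(x[max(0, i - half):i + half + 1]) /
            len(x[max(0, i - half):i + half + 1]) for i in range(len(x))]

def cnr(img):
    """CNR: (lesion mean - background mean) / background standard deviation."""
    lesion = sum(img[95:105]) / 10
    bg = img[:80]
    bg_mean = sum(bg) / len(bg)
    bg_sd = math.sqrt(sum((v - bg_mean) ** 2 for v in bg) / len(bg))
    return (lesion - bg_mean) / bg_sd

print(round(cnr(noisy), 1), round(cnr(box_filter(noisy, 5)), 1))
```

Here the filter width (5 pixels) is smaller than the lesion (10 pixels), so the noise reduction outweighs the signal bleed and the CNR improves; a filter much wider than the lesion would blur the lesion into the background and reduce the CNR instead.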

3.9 Clinical Applications of SPECT

The major clinical applications of SPECT are the measurement of blood perfusion in the brain, the diagnosis of tumors in various organs, and the assessment of cardiac function. The following descriptions are not exhaustive, but also outline the


use of a wide variety of different radiotracers for the detection of many different types of disease. The most common isotopes are 99mTc, 123I, 201Tl, and 111In.

3.9.1 Brain Imaging

In a SPECT brain scan, each image of a multislice data set is formed from typically 500,000 counts. SPECT studies are performed to measure blood perfusion in the brain, most commonly using 99mTc-HMPAO (Ceretec), the chemical structure of which was shown in Figure 3.4. Ceretec is a neutral complex which passes through the BBB due to its low molecular weight, relative lipophilicity, and electrical neutrality. The agent is metabolized within cells into a more hydrophilic species which cannot easily diffuse back out of the cell. Peak activity occurs 1–2 min after injection, with typically 4–7% of the dose accumulating in the brain.

Brain perfusion studies are also carried out using Neurolite, the chemical structure of which was also shown in Figure 3.4. Neurolite is a neutral, lipophilic 99mTc(V) complex and is also able to penetrate the BBB. This agent undergoes ester hydrolysis in tissue, resulting in the formation of one free acid group: having formed a charged complex, the agent is unable to diffuse back across the BBB.

The healthy brain has symmetric blood perfusion patterns in the two hemispheres, with higher blood flow in cortical gray matter than white matter. Diseases which cause altered perfusion patterns include epilepsy, cerebral infarction, schizophrenia, and dementia. One commonly studied form of dementia is Alzheimer's disease, which is characterized by bilateral decreased flow in the temporal and parietal lobes, with normal flow in the primary sensorimotor and visual cortices. Brain tumors can also be visualized using SPECT via increased blood flow to the lesion, resulting in higher signal from the tumor. In contrast, stroke patients often show a complete lack of blood flow in the affected area of the brain, as seen in Figure 3.14.

3.9.2 Bone Scanning and Tumor Detection

Whole body scanning using 99m Tc phosphonates such as methylenediphosphonate (MDP) or hydroxymethylenediphosphonate (HMDP, Osteoscan) can be used

Figure 3.14 Multislice SPECT brain perfusion images of a patient who has extensive brain damage from a stroke. A striking perfusion deficit can be seen in the lower right side of the brain (arrows).


to detect bone tumors, and also soft-tissue tumors which cause deformation and remodeling of bone structure. The mode of concentration of these agents in bone is thought to involve the affinity of diphosphonate for the metabolically active bone mineral hydroxyapatite, which exhibits increased metabolic turnover during bone growth. The usual response of bone to a tumor is to form new bone at the site or in the periphery of the tumor. For example, spinal tumors which consist of metastatic lesions growing in the spinal marrow space cause the bone structure of the spine to remodel: this results in local uptake of the radiotracer. Scanning starts 2–3 h after injection of the radiotracer, to allow accumulation within the skeletal structure, and ten or more scans are used to cover the whole body, as shown in Figure 3.15. If any suspected tumor sites show up, then more localized scans at higher resolution can be acquired. Bone infarctions or aggressive bone metastases often show up as signal voids in the nuclear medicine scan, since bone necrosis has occurred and there is no longer any blood flow to deliver the radiotracer to that region.

Gallium (67Ga) citrate is another agent used for tumor detection since it concentrates in certain viable primary and metastatic tumors, as well as focal sites of infection, although the exact mechanism of biodistribution is not well understood. Typical applications of 67Ga-scanning include diagnosis of Hodgkin's disease, lung cancer, non-Hodgkin's lymphoma, malignant melanoma, and leukemia.

Radiotracers can also be designed to target specific sites present during the cell cycle of the cancerous tissue. For example, somatostatin receptors are over-expressed in a number of human tumors. A radiotracer, 111In-DTPA-octreotide (where octreotide is a metabolically stable analog of somatostatin containing eight amino acids), has been designed to target these tumors, which include endocrine pancreatic tumors, carcinoids and paragangliomas, lymphomas, and breast cancer. The long half-life, 67 h, of 111In means that imaging is typically performed 24 h after injection to allow the fraction of the radiotracer

Figure 3.15 A whole-body scan, composed of 10 different scans, showing the uptake of 99mTc-methylenediphosphonate within the body.


dose which distributes nonselectively within tissue to be excreted. Two γ-rays with energies of 172 and 247 keV are emitted by this radiotracer, and both are detected in order to increase the SNR.

3.9.3 Cardiac Imaging

As covered in Section 3.6.4, dedicated cardiac SPECT scanners have recently been developed, and have increased the quality of myocardial scans significantly. The most common type of scan measures myocardial perfusion, and is referred to as a cardiac stress test. This procedure is usually carried out using radiotracers such as 99mTc-sestamibi (Cardiolite) or 99mTc-tetrofosmin (Myoview), the chemical structures of which were shown in Figure 3.4. These agents are uni-positively charged complexes, a feature which results in their concentration in heart muscle. The degree of lipophilicity of the agents is also an important aspect of their uptake into the heart: if the lipophilicity is too low, uptake is impeded, but if it is too high then the agent binds strongly to blood proteins, and the uptake is also low.

The first stage of the stress test involves injecting a relatively low dose, ∼8 mCi, of radiotracer while the patient is exercising. Exercise continues for about a minute after injection to ensure clearance of the tracer from the blood. Regional uptake of the radiotracer is proportional to local blood flow, with about 5% of the dose going to the heart. Exercise increases the oxygen demand of the heart, causing normal coronary arteries to dilate, with blood flow increasing to a value typically three to five times that at rest. If the coronary arteries are blocked (stenosis), however, they cannot dilate, and so the blood flow cannot increase sufficiently to satisfy the oxygen demand of the heart. This results in mild myocardial ischemia, which shows up as an area of low signal intensity on the SPECT scan. If the stress test gives abnormal results, then another SPECT scan is taken at rest, a few hours later, with a larger dose, typically 20–25 mCi. Healthy patients exhibit uniform uptake of the radiotracer throughout the left ventricular myocardium, but myocardial infarcts appear as "cold spots," i.e. as areas of low image intensity.
Figure 3.16 shows a series of short-axis slices from a myocardial SPECT scan, along with a schematic of the heart which shows the orientation of the images. The multislice data can be displayed as oblique-, long-, or short-axes views of the heart, greatly aiding diagnosis.

3.9.4 The Respiratory System

The roles of the lungs are to add oxygen to, and remove carbon dioxide from, the blood supply to the rest of the body. These processes occur at the blood–air interface in the alveoli of the lungs. Blood flows to the lungs through the


Figure 3.16 (a) Structure of the heart showing the right coronary artery (RCA) and left coronary artery (LCA). (b) SPECT scans acquired in the planes shown on the left. The data can be resliced to be shown in any direction, as shown in the two lower sets of images. Source: J. Heuser/Wikimedia Commons/CC BY SA.

pulmonary arteries and veins and via the bronchial arteries. Air enters the lungs via the pharynx into the trachea. Different respiratory diseases can cause disruptions to blood flow (perfusion), air flow (ventilation), or both.

Perfusion studies of the lung use 99mTc-labeled microspheres of macroaggregated albumin (MAA), with the particles typically being between 30 and 40 μm in diameter. These microspheres are injected into the bloodstream and travel to the right side of the heart and pulmonary artery. Within a few seconds of injection, 90–95% of the dose becomes trapped in the pulmonary capillaries and precapillary arterioles. If there is no occlusion in the pulmonary arterial system, then the particles are distributed uniformly within the lung. However, if the pulmonary artery or one of its branches is occluded, then radioactivity is absent from this region.

In ventilation studies, radioactive 133Xe gas is dissolved in saline and injected intravenously into the patient, who must hold their breath for approximately 30 s during the scan. When the gas reaches the pulmonary artery, since 133Xe is relatively insoluble in blood, it is expired into the lung. Only those areas of the lungs in which pulmonary artery circulation is intact give a signal on the nuclear medicine scan: any airway blockage results in a signal void.

Both perfusion (Q) and ventilation (V) scans are often carried out in the same examination. A so-called "V/Q mismatch," in which ventilation is normal but


Figure 3.17 (a) A perfusion scan from a healthy patient using 99m Tc-labeled MAA, which shows homogeneous perfusion throughout the lung. (b) A ventilation scan from a patient using 133 Xe, showing normal ventilation. (c) A perfusion scan using MAA which shows an inhomogeneous distribution of radioactivity, indicative of a pulmonary embolism.

perfusion is abnormal, is indicative of the presence of an obstruction such as a pulmonary embolism. The embolus blocks blood flow to the lungs but ventilation is normal, since there is no corresponding blockage in the airway. One example of planar scintigraphy images corresponding to such a case is shown in Figure 3.17. The opposite situation in which perfusion is normal but ventilation is abnormal suggests an obstructive airway disease. If both perfusion and ventilation are abnormal, referred to as a V/Q matched abnormality, this is indicative of diseases such as bronchitis, asthma, or pulmonary edema.

3.9.5 The Liver and Reticuloendothelial System

The functions of the liver include detoxification of the blood supply, formation of bile, and the metabolism and synthesis of a variety of proteins. The most common diseases of the reticuloendothelial system (RES) are cirrhosis and fatty infiltrations of the liver, the presence of tumors (hepatomas) and abscesses, obstructions to hepatobiliary clearance, and hemangiomas.

The radiotracer used most often to image the liver is a 99mTc-labeled sulfur colloid. This colloid consists of small particles with diameters (∼100 nm) less than the size of the capillary junctions in the liver. The particles in the colloid are phagocytized by the RES, and the radiotracer concentrates in the Kupffer cells of the liver, as well as in the spleen and bone marrow. In normal patients, between 80% and 90% of the colloid localizes in the liver, between 5% and 10% in the spleen, and the remainder in the bone marrow. When a disease such as cirrhosis of the liver is present, the liver is unable to phagocytize the particles fully, and an increased level of radioactivity is seen in the spleen


Figure 3.18 Transverse SPECT images of the liver showing a cavernous hemangioma. (a) In the 99m Tc-labeled sulfur colloid scan there is an area, indicated by the arrow, which has a reduced uptake of the agent. (b) In the 99m Tc-labeled RBC scan an increased vascularity, compared to the surrounding liver in the hemangioma, is indicated by a higher signal intensity.

and bone marrow. The colloids are preferentially localized in normal tissue, with almost no radioactivity found in abnormal lesions (focal nodular hyperplasia is an exception), and so diseases such as metastatic tumors, cysts, abscesses, and hematomas can be visualized by the lack of radioactivity in these areas, the so-called “cold-spots” in the image. A second radiotracer, 99m Tc-labeled red blood cells (RBCs), is often used to determine whether a lesion, detected using the 99m Tc-labeled sulfur-colloid, is vascular (blood flow present) or avascular (blood flow absent) in nature. The radiolabeled RBCs mimic the body’s own RBCs and circulate in the bloodstream for a long period of time. Therefore, areas of the nuclear medicine scan which show activity are indicative of the presence of blood flow. An example of a nuclear medicine study using both of these 99m Tc-labeled agents is shown in Figure 3.18.

3.10 Positron Emission Tomography

The second major modality used in a nuclear medicine department is PET, which is the most sensitive in vivo imaging technique for studying metabolism and physiology. The fundamental difference between PET and SPECT is that the injected radiotracer used in PET emits a positron (a positively charged electron) rather than a γ-ray. This positron travels a short distance (∼1 mm) in tissue in a random direction before annihilating with an electron. The annihilation results in the formation of two γ-rays, each with an energy of 511 keV, which travel in opposite directions at an angle of almost exactly 180° with respect to one another:

e+ + e− → γ + γ


Figure 3.19 (a) Two γ-rays produce signals in two detectors, thereby defining a line along which the source of radioactivity must be located. (b) By analyzing the time difference between the arrival of each γ-ray, time-of-flight (TOF) PET improves the event localization along this line. A higher timing resolution translates into higher effective sensitivity due to improved localization of each event.

Since two "anti-parallel" γ-rays are produced, and both are detected, a PET system consists of a complete ring of detectors (scintillation crystals) surrounding the patient, as shown in Figure 3.19. Since the two γ-rays are created simultaneously, both are detected within a certain time-window (the coincidence time), the value of which is determined by the diameter of the detector ring and the location of the radiotracer within the body. The location of the two crystals which detect the two anti-parallel γ-rays defines a line-of-response (LOR), along which the annihilation must have occurred, as shown in Figure 3.19a. This process of line-definition is referred to as annihilation coincidence detection (ACD). Further localization is performed using TOF principles, shown in Figure 3.19b, which constrain the estimated position of an annihilation to a subsection of the LOR by measuring the exact times at which the two γ-rays strike the detectors. For a given timing resolution, Δt, defined as the shortest time difference that can be measured by the detectors, the localization resolution, Δx, along a line between two detectors is given by:

$$\Delta x = \frac{c\,\Delta t}{2} \qquad (3.21)$$

where c is the speed of light. The variance of the intrinsic noise is proportional to the length of the LOR, and so the reduction in noise variance, the so-called multiplicative reduction factor, f, using TOF is given by:

$$f = \frac{D}{\Delta x} = \frac{2D}{c\,\Delta t} \qquad (3.22)$$

where D is the size of the patient.
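Equations (3.21) and (3.22) are easily evaluated for representative numbers; the 300 ps timing resolution and 40 cm patient size below are illustrative values:

```python
C = 2.998e8  # speed of light, m/s

def tof_localization_mm(dt_ps):
    """Localization uncertainty along the LOR, dx = c*dt/2 (Eq. 3.21), in mm."""
    return C * dt_ps * 1e-12 / 2 * 1e3

def noise_reduction_factor(patient_size_m, dt_ps):
    """TOF noise-variance reduction factor f = 2D/(c*dt) (Eq. 3.22)."""
    return 2 * patient_size_m / (C * dt_ps * 1e-12)

# A 300 ps timing resolution localizes each annihilation to ~45 mm of the LOR...
print(round(tof_localization_mm(300), 1))
# ...giving roughly a 9-fold variance reduction for a 40 cm patient.
print(round(noise_reduction_factor(0.40, 300), 1))
```

Note that TOF with current detectors does not localize the event to a single voxel; its benefit is statistical, through the factor f in Eq. (3.22).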


PET data are acquired in direct 3D mode without collimators, and image reconstruction is performed using iterative techniques. Since the γ-ray energy of 511 keV in PET is much higher than the 140 keV γ-rays in SPECT, different materials are used for the scintillation crystals. The higher γ-ray energy means that fewer γ-rays are attenuated in tissue, which is one factor that contributes to the higher sensitivity of PET compared to SPECT. The fact that no collimation is used, unlike in SPECT, is another enormous advantage of PET in terms of sensitivity. The spatial resolution in PET depends upon a number of factors, including the number and size of the individual crystal detectors: typical values of overall system spatial resolution are ∼3–5 mm.

Commercial PET systems have an integrated CT scanner, and so the modality is usually referred to as PET/CT. As for a SPECT/CT scanner, the CT scan provides the information necessary for attenuation correction of the PET data, as well as an anatomical overlay. A typical PET examination takes ∼20–30 min, including a quick scout PET scan to determine the required head-foot (axial) coverage, a 1-min CT scan which usually covers an axial FOV from the head to the bottom of the torso, and the actual PET scan itself, which takes ∼15–20 min for a whole-body acquisition.

A PET scanner is much more expensive than a SPECT scanner, costing approximately $2 million, in addition to the requirement for a nearby radio-labeling facility for producing the radiotracers. In 2022, there were over 2500 PET and PET/CT facilities in the United States. PET is used mainly in body oncology, with other applications in cardiology and neurology. The spatial distribution, extent of uptake, rate of uptake, and rate of washout of a particular radiotracer are all quantities which can be used to distinguish between diseased and healthy tissue.

3.11 Radiotracers Used for PET

Radiotracers used in PET are structural analogs of biologically active molecules, such as glucose, in which one or more of the atoms have been replaced by a radioactive atom. The most commonly used radioisotopes are 18F, 11C, 15O, 82Rb, and 13N, the properties of which are summarized in Table 3.3.

Table 3.3 Properties and applications of the most common PET radiotracers.

Radioisotope   Half-life (min)   Radiotracer      Clinical applications
18F            109.7             18FDG            Oncology, inflammation, cardiac viability
11C            20.4              11C-palmitate    Cardiac metabolism
15O            2.07              H2 15O           Cerebral blood flow
13N            9.96              13NH3            Cardiac blood flow
82Rb           1.27              82RbCl           Cardiac perfusion
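The half-lives in Table 3.3 dictate the logistics of PET: a simple exponential-decay calculation (the 30-minute delay below is an illustrative value) shows why 18F tracers can be shipped from a nearby facility while 15O requires an on-site cyclotron.

```python
import math

# Half-lives in minutes, taken from Table 3.3.
HALF_LIFE_MIN = {"18F": 109.7, "11C": 20.4, "15O": 2.07, "13N": 9.96, "82Rb": 1.27}

def fraction_remaining(isotope, t_min):
    """Fraction of the initial activity left after t_min minutes:
    exp(-ln2 * t / T_half)."""
    return math.exp(-math.log(2) * t_min / HALF_LIFE_MIN[isotope])

# After a 30-minute delay, most of an 18F dose remains, while 15O
# has decayed almost completely.
for iso in ("18F", "15O"):
    print(iso, round(fraction_remaining(iso, 30), 4))
```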


Figure 3.20 The most common synthetic pathway for 18 FDG.

With the exception of 82Rb (see next paragraph), all PET radiotracers are produced by an on-site or nearby cyclotron. The radioisotope is incorporated via rapid chemical synthesis into the corresponding radiotracer. For speed and safety considerations, the synthesis should ideally be carried out robotically, and such units are available commercially to synthesize 18F-fluorodeoxyglucose (FDG), 15O2, C15O2, C15O, and H215O. The most common radiotracer, used in ∼80% of studies, is FDG, the structure of which is shown in Figure 3.20 along with its basic synthetic pathway. Once injected into the bloodstream, FDG is actively transported across the BBB. Inside the cell, FDG is phosphorylated by glucose hexokinase to give FDG-6-phosphate. This chemical is trapped inside the cell, since it cannot react with G-6-phosphate dehydrogenase, the next step in the glycolytic cycle, because the hydroxyl group at the 2-carbon is a requirement for the process. Since a high glucose metabolic rate is characteristic of many types of tumors, these pathologies show up as areas of high signal intensity in oncological PET scans.

82Rb is the only PET radiotracer that can be produced from an on-site generator rather than a cyclotron. The process uses 82Sr as the parent isotope, which has a half-life of 600 h. The physical setup is quite similar to the technetium generator described in Section 3.4, with the 82Sr adsorbed on stannic oxide in a lead-shielded column. The column is eluted with NaCl solution: the eluent is in the form of rubidium chloride, which is injected intravenously. The 82Rb rapidly clears from the bloodstream and is extracted by myocardial tissue in a manner analogous to potassium. Areas of myocardial infarction are visualized within a few minutes post-injection as areas of low signal intensity, compared to high uptake by healthy tissue.

3.12 Instrumentation for PET

PET systems operate in 3D mode, essentially without collimation, so that every γ-ray that strikes the detector is recorded and analyzed. As in SPECT, the key instrumentation components are the detection crystals and the associated electronics, with most of the signal processing being done off-line in software after the data have been digitized.


3.12.1 Scintillation Crystals and Detector Electronics

Detection of the anti-parallel γ-rays uses a large number of scintillation crystals, which are usually formed from either cerium-doped lutetium oxyorthosilicate, Lu2SiO5:Ce (LSO), or lutetium yttrium oxyorthosilicate, Lu2(1−x)Y2xSiO5:Ce (LYSO) – in the text below the properties of LSO are discussed, but LYSO has very similar properties. The crystals are placed in a circular arrangement surrounding the patient, and are coupled to an equal or smaller number of silicon PMs (SiPMs). In general, the narrower the width of the crystal, the higher the intrinsic spatial resolution of the system. However, there is a lower limit on the size, below which the spatial resolution can actually worsen: a γ-ray that strikes a very thin crystal at a relatively large incident angle can be scattered and penetrate several adjacent crystals.

The desirable properties of LSO as a detector include:

(a) a high density (7.4 g cm−3), which results in a large effective cross-section for Compton scattering and a correspondingly high γ-ray detection efficiency,
(b) a large effective atomic number (65), which also results in a high γ-ray detection efficiency due to γ-ray absorption via photoelectric interactions,
(c) a short decay time (40 ns), which allows a short coincidence time to be used, with an associated reduction in the number of accidental coincidences (Section 3.13.1) and an increased SNR in the reconstructed PET image,
(d) a high emission intensity (0.75), which represents an efficient conversion of γ-ray energy into light,
(e) an emission wavelength (420 nm) which is very close to the maximum sensitivity of SiPMs,
(f) an index of refraction close to that of the SiPM, to ensure efficient transmission of light between the crystal and the SiPM (optical transparency at the emission wavelength is also important), and
(g) being nonhygroscopic, which simplifies the design and construction of the many thousands of crystals needed in the complete system.

Compared to conventional PMTs, SiPMs are more compact, use a lower operating voltage, provide up to 100% coverage of the crystal area, have fast timing resolution, and can also be used in a strong magnetic field for PET/MR hybrid systems. An SiPM consists of a matrix of single-photon avalanche diodes (SPADs), each of which acts as an independent photon counter. SPADs are essentially avalanche photodiodes which operate in Geiger mode, i.e. they are reverse-biased above their breakdown voltage so that a single photon creates a self-sustaining avalanche.

SiPMs can be either analog or digital. In an analog SiPM, the discharge currents from all the pixels are summed together via an application-specific integrated circuit (ASIC) to form an analog output current which is proportional to the number


Figure 3.21 (a) Photograph of the Philips DPC-3200-22 SiPM-based PET detector. (b) Layout and dimensions of the digital SiPM. (c) Layout and dimensions of a single die composed of four pixels. Each pixel consists of 3200 SPADs.

of photons detected. A digital SiPM integrates the signal from each individual SPAD with CMOS circuits on the same substrate. A one-bit memory can be used to enable or disable each detector, and an on-chip time-to-digital converter puts a digital timestamp on each event. In one of the most recent PET scanners, the Philips Vereos series, each sensor element consists of a 4 × 4 array of independent die sensors, each one consisting of a 2 × 2 pixel array as shown in Figure 3.21, with 3200 SPADs in each pixel. Each SPAD has an area of 59.4 × 64 μm2 [9, 10].
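Because each SPAD counts at most one photon per pulse, an SiPM's response saturates for bright scintillation pulses. A standard single-pulse model for this nonlinearity is N_fired = N_cells(1 − exp(−N_ph·PDE/N_cells)); the cell count matches the 3200 SPADs per pixel quoted above, but the photon detection efficiency (PDE) below is an assumed illustrative value, not a Philips specification.

```python
import math

def fired_cells(n_photons, n_cells=3200, pde=0.3):
    """Expected number of fired SPADs for a single scintillation pulse.

    Each cell fires at most once per pulse, so as photons begin to share
    cells the response compresses toward the total cell count.
    """
    return n_cells * (1.0 - math.exp(-n_photons * pde / n_cells))

# Nearly linear at low light, strongly compressed for bright pulses.
print(round(fired_cells(100)))     # ~30 fired cells for 100 photons
print(round(fired_cells(50_000)))  # far fewer than 15,000: saturation
```

This saturation is one reason energy calibration (the "energy correction" step in Figure 3.22) is applied per detector before reconstruction.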

3.13 Image Reconstruction

Figure 3.22 shows the steps involved in image reconstruction. After passing through a threshold detector so that the signals from γ-rays that have lost a large amount of energy via Compton scattering in the body are rejected, a timestamp is added to each count, the amount of energy deposited in the detectors is recorded,

[Figure 3.22 pipeline: SiPM → threshold detector → timestamp → ADC → time-sorted data → coincidence detection → energy/timestamp correction and accidental rejection → iterative reconstruction]
Figure 3.22 Schematic of the preprocessing steps for the PET data before image reconstruction.

3.13 Image Reconstruction

and the data are digitized by the ADC. Any required corrections to the energy and timestamp (due to differences in the performance of individual detectors and/or processors) are performed in software, and then the data from all the detectors are sorted in the order of their recorded times. Coincidence detection and accidental coincidence rejection are performed, as described in Sections 3.13.1 and 3.13.2. Image reconstruction in PET uses iterative algorithms very similar to those in CT and SPECT. Attenuation correction, scatter correction, dead-time correction, and a model of the scanner's overall PSF are built into the iterative reconstruction, which typically performs a small number of iterations based on an OSEM algorithm. An additional post-processing low-pass smoothing filter may be applied to the data after reconstruction.
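As a sketch of how such an iterative loop works, the following toy MLEM update (OSEM applies the same update to ordered subsets of the projections) reconstructs a two-pixel "image" from two LORs. The 2 × 2 system matrix and the count values are invented for illustration; real reconstructions use millions of LORs and fold the attenuation, scatter, and PSF models into the system matrix.

```python
def mlem(system, counts, n_iter=200):
    """Maximum-likelihood expectation-maximization for emission tomography."""
    n_pix = len(system[0])
    image = [1.0] * n_pix  # flat, positive initial estimate
    # Sensitivity: total system-matrix weight seen by each pixel.
    sens = [sum(row[j] for row in system) for j in range(n_pix)]
    for _ in range(n_iter):
        # Forward-project the current estimate along each LOR.
        proj = [sum(a * x for a, x in zip(row, image)) for row in system]
        # Back-project the measured/estimated count ratios and apply the
        # multiplicative MLEM update, which preserves non-negativity.
        for j in range(n_pix):
            backproj = sum(system[i][j] * counts[i] / proj[i]
                           for i in range(len(counts)))
            image[j] *= backproj / sens[j]
    return image

A = [[1.0, 0.5],    # LOR 1 sees pixel 1 fully, pixel 2 partially
     [0.5, 1.0]]    # LOR 2 the reverse
y = [200.0, 250.0]  # measured counts per LOR (noise-free, illustrative)
print([round(x, 1) for x in mlem(A, y)])
```

For these consistent, noise-free data the estimate converges toward the exact solution of A·x = y, namely x = (100, 200); with noisy data the iteration is usually stopped early (or the result smoothed), since later iterations amplify noise.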

3.13.1 Annihilation Coincidence Detection and Removal of Accidental Coincidences

In a PET scan, a large number of (annihilation) coincidences are detected. Some of these are true coincidences, but there are also many accidental coincidences, which refer to events in which the LOR formed from the two detected γ-rays is assigned incorrectly. Different ways in which this can occur are shown in Figure 3.23 [11]. The rate, $C_{ij}^{A}$, at which these accidental coincidences are recorded for a given detector pair (i, j) is given by:

$$C_{ij}^{A} = 2\tau R_i R_j \qquad (3.23)$$

where R_i and R_j are the single count rates in the individual detectors i and j, and τ is the coincidence resolving time. Equation (3.23) shows that the value of τ should


Figure 3.23 Schematic (not to scale) showing the concept of coincidence detection and the removal of accidental coincidences. Consider five events, shown in (a) axial and (b) transverse views, and (c) as a function of time. Pair 1–5 is rejected since it occurs outside the coincidence resolving time 𝜏. Pair 1–2 is rejected since the LOR is outside the patient. Pair 1–3 is also rejected since the time between the events is longer than 𝜏 13 , the maximum time difference at which two γ-rays from the same event could arrive at the respective detectors. Pair 1–4 is accepted since it is within the accepted time 𝜏 14 . The process is now repeated starting with time point 2, and considering events 2–3, 2–4, and 2–5. Source: Based on Leung et al. [11].


be made as small as possible to reduce the contribution of accidental coincidences, but obviously not so short as to reject true coincidences.
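The scaling in Eq. (3.23) is easy to verify numerically. The following sketch evaluates the accidental coincidence rate for illustrative singles rates and resolving time (not values from any particular scanner):

```python
def accidental_rate(r_i, r_j, tau):
    """Accidental (random) coincidence rate for detector pair (i, j),
    Eq. (3.23): C_ij^A = 2 * tau * R_i * R_j."""
    return 2.0 * tau * r_i * r_j

# Illustrative values: 100 kcps singles rate on each detector,
# 4 ns coincidence resolving time.
rate = accidental_rate(1e5, 1e5, 4e-9)
print(rate)  # ~80 accidental coincidences per second for this pair
```

Halving τ halves the accidentals rate, while the true coincidence rate is unaffected as long as τ remains longer than the maximum time-of-flight difference across the ring plus the timing uncertainty of the detectors.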

3.13.2 Attenuation Correction

The HVL for γ-rays with an energy of 511 keV is about 7 cm in soft tissue, which means that there is considerable attenuation of γ-rays originating from deep within the body or brain; this leads to incorrect quantification unless corrected for. As for SPECT, an attenuation correction map can be produced from the CT scan, and this information is fed into the iterative reconstruction algorithm.
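The magnitude of the correction can be estimated from the quoted HVL of ∼7 cm. For coincidence detection the two γ-ray path lengths always sum to the total tissue thickness D along the LOR, so the pair survival probability is exp(−μD) regardless of where along the LOR the decay occurred. A sketch:

```python
import math

HVL_CM = 7.0                  # half-value layer of 511 keV photons in soft tissue
MU = math.log(2.0) / HVL_CM   # linear attenuation coefficient, ~0.099 cm^-1

def pair_survival(total_path_cm):
    """Probability that both annihilation photons escape unattenuated for a
    total tissue path of length D along the LOR: exp(-MU * D). Because the
    two segments a and b satisfy a + b = D, this depends only on D, not on
    where along the LOR the decay occurred."""
    return math.exp(-MU * total_path_cm)

print(round(pair_survival(20.0), 3))     # ~0.138 for a 20 cm thick section
print(round(1.0 / pair_survival(20.0)))  # correction factor of ~7
```

This depth independence along each LOR is what makes CT-derived attenuation maps so effective for PET: a single multiplicative factor per LOR suffices.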

3.13.3 Scatter Correction

Scattered coincidences occur if one or both of the γ-rays from the positron annihilation undergo Compton scattering in the body. The net result is that a slightly erroneous LOR is recorded. Since the γ-rays lose only small amounts of energy in these Compton interactions, scatter rejection on the basis of the energy of the detected γ-rays is impractical. In 3D mode, up to 50% of the detected γ-rays represent radiation which has undergone Compton scattering, and these scattered events reduce the contrast in the image. The most common technique for scatter correction is the single scatter simulation (SSS) method [12], in which a scattered image is simulated and incorporated directly into the iterative reconstruction process. An accurate attenuation map from the CT images is required to generate the simulation data. Additional information such as TOF data can also be incorporated into the simulation, and there are many variants on the method in which, for example, scatter and attenuation can be jointly estimated in the iterative loop [13]. Monte Carlo scatter simulations (MCSS) can also be used [14], again requiring an accurate attenuation map from the CT data. Given recent improvements in the energy resolution of PET detectors, some work has been performed using double-energy window and triple-energy window discrimination (analogous to the approach in SPECT), incorporating this term into the iterative reconstruction [6].

3.13.4 Dead-Time Correction

At the beginning of a PET scan, when the radioactivity level is highest, there is a chance of dead-time losses and pile-up events, in which the system cannot accurately record the very high number of events. These losses, and the corresponding corrections, can be estimated for a particular PET scanner by using uniform phantoms with very high radioactivity and measuring the singles counts for each detector as a function of time. Any difference between the time evolution of the number of counts and the expected exponential decrease can be attributed to dead-time losses: these losses can then be incorporated into the model-based iterative reconstruction algorithm.
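The text does not specify a particular dead-time model; one common choice, shown here as a sketch, is the non-paralyzable model, in which a true event rate n is recorded as m = n/(1 + nτ):

```python
def measured_rate(true_rate, dead_time):
    """Non-paralyzable dead-time model: m = n / (1 + n*tau)."""
    return true_rate / (1.0 + true_rate * dead_time)

def dead_time_from_rates(true_rate, meas_rate):
    """Invert the model for the dead time: tau = (n - m) / (n * m)."""
    return (true_rate - meas_rate) / (true_rate * meas_rate)

# If a true rate of 10,000 cps is recorded as only 8,000 cps:
print(dead_time_from_rates(10_000.0, 8_000.0))  # 2.5e-05 s, i.e. 25 microseconds
```

Fitting measured phantom count rates against the known exponential decay, as described above, yields the effective τ for each detector.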

3.14 Image Characteristics

3.14.1 Spatial Resolution

In contrast to SPECT, the PSF in PET is essentially independent of the depth within the body at which the radioactive decay occurs, due to the inherent "double detection" of the two γ-rays. Other factors which influence the spatial resolution of the PET image include:

(i) The finite distance which the positron travels before annihilation with an electron. This distance effectively defines a "sphere of uncertainty" with regard to the position of the radiotracer, and it increases with the energy of the positron. Typical values of the maximum positron energy and corresponding FWHM distances traveled are: 18F (640 keV, 1 mm), 11C (960 keV, 1.1 mm), 13N (1.2 MeV, 1.4 mm), 15O (1.7 MeV, 1.5 mm), and 82Rb (3.15 MeV, 1.7 mm).

(ii) The slight deviation from 180° of the relative trajectories of the two γ-rays. Due to motion of the center-of-mass of the annihilation there is a statistical distribution of angles about a mean of 180°, with a FWHM of approximately 0.3°. The contribution to the image spatial resolution therefore depends on the diameter of the detector ring: from this effect alone, a 60 cm diameter ring has a contribution to the overall spatial resolution of 1.6 mm, whereas for a 100 cm diameter ring the value is 2.6 mm.

(iii) The dimensions of the detection crystal, with a larger number of smaller crystals giving a higher spatial resolution than a smaller number of larger crystals. A contribution to the overall spatial resolution of one-half the crystal diameter is a good approximation.

The overall spatial resolution is the combination of all three components, with typical FWHM values of 3–5 mm for a whole-body imager.
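These three contributions are commonly combined in quadrature. The sketch below does this, using a noncollinearity term of ≈0.0026 × (ring diameter) inferred from the 60 cm and 100 cm examples quoted above, and one-half the crystal width for the detector term:

```python
import math

def pet_fwhm(positron_range_mm, ring_diameter_mm, crystal_width_mm,
             noncollinearity_coeff=0.0026):
    """Combine the three resolution contributions in quadrature (a common
    approximation). The ~0.0026 mm-per-mm-of-ring-diameter noncollinearity
    coefficient is inferred from the 60 cm -> 1.6 mm and 100 cm -> 2.6 mm
    values quoted in the text."""
    r_range = positron_range_mm                       # positron range (FWHM)
    r_180 = noncollinearity_coeff * ring_diameter_mm  # photon noncollinearity
    r_det = crystal_width_mm / 2.0                    # detector (crystal) size
    return math.sqrt(r_range**2 + r_180**2 + r_det**2)

# 18F (1 mm positron range), 80 cm diameter ring, 4 mm crystals:
print(round(pet_fwhm(1.0, 800.0, 4.0), 2))  # ~3 mm overall FWHM
```

Since the terms add in quadrature, the largest single contribution dominates: shrinking the crystals helps little once the ring-diameter term takes over.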

3.14.2 Signal-to-Noise

The sensitivity of a PET scanner is defined as the number of photon pairs detected by the system per unit time per unit radioactivity injected, and has units of cps/μCi (counts per second per microcurie). The radiotracer dose, targeting efficiency, image acquisition time, γ-ray attenuation in the patient, system sensitivity, and nature of image post-processing are the major components which affect the image SNR, in exactly the same way as covered previously for SPECT. For a conventional PET scanner only about 3–5% of the γ-rays are detected, one of the main reasons being the limited axial length of the detector ring, typically 15–26 cm, which means that the trajectories of many γ-rays pass beyond this length and are not detected. One solution is to increase the length of the PET detector system, a topic covered in Section 3.16.


3.14.3 Contrast-to-Noise

In addition to the factors affecting SNR, the CNR is mainly influenced by nonspecific uptake of radiotracers in healthy tissue adjacent to the pathology under investigation. For example, 18FDG accumulates in healthy brain tissue as well as in pathological lesions and tumors. Radiotracers which have a greater specificity for tumor targeting are described in Section 3.17.

3.15 Acquisition Methods for PET

As mentioned above, the length of a standard commercial PET ring ranges between ∼15 and 26 cm. This is sufficient for acquiring brain or cardiac PET scans with the patient and patient bed at a single fixed position. However, oncology examinations often call for whole-body or at least “brain-to-thighs” coverage, and in order to achieve this coverage up to five different scans are acquired with the patient bed at five different positions. There is deliberately some overlap in the five axial FOVs, as shown in Figure 3.24, to aid with fusing of the different images using appropriate processing algorithms. An important assumption is that the distribution of the radiotracer does not change very much during the entire scanning time. For FDG studies, in which acquisition typically begins about 1 h after injection, this assumption is generally valid.

Figure 3.24 Different bed positions are used to acquire data from an axial FOV covering the brain-to-thighs. There is some overlap between different positions to help with fusion of the individual images to produce the image on the right.
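The number of bed positions follows directly from the required coverage, the axial FOV, and the chosen overlap. A sketch (the 25% overlap fraction is an assumed illustrative value; vendors differ):

```python
import math

def bed_positions(coverage_cm, axial_fov_cm, overlap_frac=0.25):
    """Number of bed positions needed to cover `coverage_cm` of the body
    with a detector of axial FOV `axial_fov_cm`, using a fractional
    overlap between adjacent positions."""
    if coverage_cm <= axial_fov_cm:
        return 1
    step = axial_fov_cm * (1.0 - overlap_frac)  # net new coverage per bed
    return 1 + math.ceil((coverage_cm - axial_fov_cm) / step)

# ~100 cm "brain-to-thighs" coverage with a 25 cm axial FOV:
print(bed_positions(100.0, 25.0))  # 5 bed positions
```

With five sequential acquisitions the total scan time is five times that of a single bed position, which is one motivation for the long axial FOV systems described in Section 3.16.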

3.16 Total Body PET Systems

The latest development in PET/CT systems is the concept of a total body PET system, as shown in Figure 3.25 [15–17]. There are many advantages to such a system,


Figure 3.25 Schematic of a total body PET system on the right with a conventional-length scanner on the left for comparison. Source: Based on Refs. [15–17].

including a much higher sensitivity and a shorter scanning time for whole-body applications. The obvious disadvantage is the increased cost due to the much larger number of detectors required. The limited axial length of conventional PET scanners means that only ∼15% of the total body length is within the FOV of the PET detectors. As outlined previously, since γ-rays are emitted randomly in all directions, only about 3–5% of the γ-rays that escape from the body intercept the detector ring. If the ring length is extended from 20 to 200 cm, then the effective sensitivity for imaging the entire body has been estimated to increase by up to a factor of forty [15, 16]. Imaging just the head and torso would see the sensitivity increase by a factor of 24, and even for a small organ such as the brain or heart there is a factor of four-to-five increase in sensitivity.

For whole-body imaging this factor-of-forty increase in sensitivity can be used for different purposes depending upon the particular application: (i) an increase in SNR by a factor of 6.3 (enabling better quantitation), (ii) an imaging time reduced by a factor of forty (which would also improve image quality in terms of reduced motion artifacts, since some body scans can even be performed during a breath-hold), or (iii) a reduced dose by the same factor of forty (from ∼6 to 0.15 mSv) for a typical 18FDG scan. In addition, the increased sensitivity enables PET scanning to be performed at much later time points after injection of a long-lived radioisotope such as 89Zr, allowing tumor tracer dynamics to be followed over longer periods. The fact that whole-body coverage occurs at a single time, rather than in stages, also allows interactions between different organs, e.g. the heart–brain or gut–brain axis, to be investigated. The first commercial total body PET system, with an axial length of almost 2 m, was released in 2020 by United Imaging. At the same time Siemens released a


∼1 m long system which can be used for improved single-organ imaging. The cost of such systems is currently much higher than a conventional PET/CT scanner due to the larger number of detector rings that need to be incorporated. The technical challenges of designing such a system include the fact that the electronics must be able to handle very high singles and coincidence rates.
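The different ways of "spending" the sensitivity gain follow from simple counting statistics, in which SNR scales approximately with the square root of the number of detected counts. A sketch of this idealized argument:

```python
import math

def sensitivity_tradeoffs(gain):
    """Ways to 'spend' a sensitivity gain, assuming SNR ~ sqrt(counts):
    at fixed time and dose the SNR improves by sqrt(gain); alternatively,
    scan time or injected dose can be cut by the full factor at fixed SNR."""
    return {
        "snr_gain_same_time_and_dose": math.sqrt(gain),
        "time_reduction_same_snr": gain,
        "dose_reduction_same_snr": gain,
    }

t = sensitivity_tradeoffs(40.0)
print(round(t["snr_gain_same_time_and_dose"], 1))  # 6.3, the factor quoted above
```

In practice the gain can also be split, e.g. a modest SNR improvement combined with a several-fold dose reduction.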

3.17 Clinical Applications of PET/CT

Over 90% of PET examinations are for oncology, the remainder being ∼3% neurological and ∼6% cardiac. In 2020, over 2 million PET scans were performed in the United States.

3.17.1 Body Oncology

PET is used mainly for the diagnosis, staging and treatment evaluation for tumors in the breast, lung, head and neck, prostate, cervix, and also colon/rectum. Malignant cells, in general, have higher rates of aerobic glucose metabolism than healthy cells. Therefore, in PET scans using FDG, the tumors show up as areas of increased signal intensity. For metastatic cancer, in which lesions spread from their primary focus to secondary areas, a whole-body PET scan is performed. Overlays on the CT images help to identify the exact location of any lesions: an example of such an overlay is shown in Figure 3.26.

Figure 3.26 (left) Whole-body 18 FDG PET scan. (center) Corresponding CT scan. (right) Fused PET/CT scan.


In the past few years, targeted radiotracers have been shown to be very effective in specific cancers. For example, in 2020 the FDA approved the first targeted agent for prostate cancer, [68 Ga]Ga-prostate specific membrane antigen (PSMA)-11.

3.17.2 Brain Imaging

Brain scans using PET can measure both regional cerebral blood flow (rCBF) and tissue metabolism. The former quantity is usually measured with 15O-labeled water, and the latter with FDG, as shown in Figure 3.27. Diseases in which brain perfusion in localized areas is either increased or reduced compared to healthy subjects can be diagnosed by mapping the distribution of 15O, the concentration of which is proportional to brain perfusion. In other applications, areas of the brain with reduced glucose metabolism can often be identified as focal centers for epilepsy. Another agent, 3,4-dihydroxy-6-fluoro-DL-phenylalanine, [18F]DOPA, measures L-DOPA uptake and the rate of dopamine synthesis in the brain. Alzheimer's and Parkinson's disease can be diagnosed at an early stage by a reduction in dopamine synthesis.

3.17.3 Cardiac Imaging

As covered in Section 3.9.3 one of the major uses of SPECT/CT is myocardial perfusion imaging in the assessment of patients with coronary artery disease. It has traditionally been favored over PET due to the greater availability, and the fact that PET studies had to use very short-lived radiotracers. The recent development of the 82 Rb generator has meant that PET/CT is now being used in many more cardiac examinations, since it has higher sensitivity and spatial resolution than SPECT/CT. 82 Rb is a potassium analog with a half-life of 75 s, and is formed in the generator from parent 82 Sr with a half-life of 26 days. After intravenous injection, 82 Rb crosses the capillary membrane and is taken up in the heart through active transport via the sodium/potassium adenosine triphosphate transporter, which is dependent on coronary blood flow.

Figure 3.27 18FDG PET scans of adjacent slices in the brain.


Exercises

Sections 3.2, 3.3

3.1 (a) In a sample of 1000 atoms, if 50 decay in 5 s, what is the radioactivity, measured in mCi, of the sample? (b) In order to produce a level of radioactivity of 1 mCi, how many nuclei of 99mTc (λ = 3.22 × 10⁻⁵ s⁻¹) must be present? What mass of the radiotracer does this correspond to? (Avogadro's number is 6.02 × 10²³.) (c) A radioactive sample of 99mTc contains 10 mCi of activity at 9 am. What is the radioactivity of the sample at 3 pm on the same day?

3.2 In a nuclear medicine scan using 99mTc, the image SNR for a 30-min scan was 50:1 for an injected radioactive dose of 1 mCi. Imaging began immediately after injection. (a) If the injected dose were doubled to 2 mCi, what would be the image SNR for a 30-min scan? (b) If the scan time were doubled to 60 min with an initial dose of 1 mCi, what would be the image SNR?

3.3 A dose of 1 mCi of 99mTc is administered to a patient. Calculate the total dose to the patient if the biological half-life of the radiotracer in the body is: (a) 2 y, (b) 6 h, (c) 2 min.

3.4 Two patients undergo nuclear medicine scans. One receives a dose of radiotracer A and the other radiotracer B. The radioactive half-life of A is 6 h and of B is 24 h. If the administered dose of radiotracer A is three times that of radiotracer B, and the biological half-lives of A and B are 6 and 24 h, respectively, at what time is the radioactivity in the body the same for the two patients?

Section 3.4

3.5 In the technetium generator, show mathematically that if λ2 ≫ λ1, the radioactivities of the parent and daughter nuclei become equal in value at long times.

3.6 Using the equations derived in the analysis of the technetium generator, plot graphs of the activity of parent and daughter nuclei for the following cases: (a) τ1/2(parent) = 600 h, τ1/2(daughter) = 6 h; (b) τ1/2(parent) = 6.02 h, τ1/2(daughter) = 6 h; (c) τ1/2(parent) = 0.6 h, τ1/2(daughter) = 6 h.


3.7 Calculate the exact time at which the first three "milkings" of the technetium cow should be performed.

3.8 Do the tops of the curves in Figure 3.4 lie at the same values that would have been obtained if the technetium cow were not milked at all?

3.9 Rather than waiting 24 h, only 6 h are left between milkings. Plot the graph of radioactivity over the first 2 days.

Section 3.6

3.10 The linear attenuation coefficients for a NaI(Tl) crystal for the radiotracer elements 99mTc, 68Ga, 131I, and 111In are 2.2, 1.7, 1.3, and 1.2 cm⁻¹, respectively. For a 6 mm and a 1 cm thick crystal, calculate the relative signal intensities for each radiotracer element.

3.11 Calculate the magnification factor for the pinhole camera.

3.12 The thickness of the lead septa is chosen to ensure that only 5% of the γ-rays penetrate from one collimator hole to the next one. (a) Using Figure 3.28 show that the thickness is given by: t = [6d/μ]/[L − 3/μ]. (b) Calculate the septal thickness required for γ-rays of 140 keV for lead collimators with hole diameters of 0.25, 1, and 2.5 cm. The attenuation coefficient for lead is 30 cm⁻¹ at 140 keV.

Figure 3.28 Illustration for Exercise 3.12, showing the scintillation crystal above a collimator with hole length L, hole diameter d, and septal thickness t.

3.13 If the acceptance window for a planar nuclear medicine scan is set to 15%, what is the maximum angle through which a γ-ray can be Compton scattered and still be accepted if it strikes the scintillation crystal?

3.14 What is the energy of a γ-ray which has been Compton scattered at an angle of 30° in the body?


3.15 In Figure 3.29, label which of the collimators are diverging, parallel-hole, pinhole, and converging. The source is assumed to be a point source of radioactivity. Explain your reasoning.

Figure 3.29 Illustration for Exercise 3.15. The two panels plot the system FWHM (cm) and the relative geometric efficiency as a function of source-to-collimator distance (cm) for the four collimators.

3.16 Assuming that the body is circular with a diameter of 30 cm, calculate the spatial resolution (FWHM) for a parallel-hole collimator with septal thickness 0.03 cm and length 2.5 cm, for two sources of radioactivity, one very close to the detector (z = 0 cm) and one at the other side of the body (z ∼ 30 cm).

3.17 Show that for the pinhole camera there is an approximate relationship between collimator efficiency and spatial resolution given by: g ∝ R_coll^2.

3.18 For a converging collimator, describe qualitatively: (a) whether the efficiency increases or decreases as a function of z and θ (the angle of the septa with respect to parallel septa), and (b) similarly for the spatial resolution.

3.19 Given the following resistor values for the Anger network in Figure 3.9, show that the output is linear in X.


PMT    RX+ (Ω)    RX− (Ω)    RY+ (Ω)    RY− (Ω)
8      Infinite   14.3       28.6       28.6
9      57.1       19.0       28.6       28.6
10     28.6       28.6       28.6       28.6
11     19.0       57.1       28.6       28.6
12     14.3       Infinite   28.6       28.6

3.20 Suppose that the true count rate in a gamma camera is 10,000 per second, but the measured rate is only 8000 per second. What is the dead time of the system?

Section 3.8

3.21 Three parameters which affect the image SNR in nuclear medicine are the thickness of the detector crystal, the length of the lead septa in the anti-scatter grid, and the FWHM of the energy window centered around 140 keV. For each parameter, does an increase in the value of the particular parameter increase or decrease the image SNR? In each case, name one other image characteristic (e.g. CNR, spatial resolution) that is affected, and explain whether this image characteristic is improved or degraded.

3.22 For a 128 × 128 data matrix, how many total counts are necessary for a 1% pixel-by-pixel uniformity level in a SPECT image?

3.23 Two ways of reducing the statistical noise in the image are to double the total imaging time, or to double the mass of tracer injected. Considered separately, by what factor would each of these reduce the noise in the image?

3.24 Using the data in Figure 3.30(a) as the image, calculate the image after convolution separately with the two kernels in (b) and (c). Which type of filters do these two kernels represent?

(a) Image:      (b) Kernel:    (c) Kernel:
1 1 1 1 1       1 1 1          -1 -1 -1
1 2 2 2 1       1 4 1          -1  9 -1
1 2 3 2 1       1 1 1          -1 -1 -1
1 2 2 2 1
1 1 1 1 1

Figure 3.30 Illustration for Exercise 3.24.

3.25 Suppose that two radiotracers could be given to a patient. Radiotracer A is taken up in a tumor at a level ten times higher than in the surrounding tissue, whereas radiotracer B suppresses uptake by a factor of ten. Assume that the tumor is 1 cm thick and the surrounding tissue is 10 cm thick. Ignoring attenuation effects, what is the CNR generated from each tumor? Assume that the uptake in normal tissue gives a rate of 10 counts per minute per square centimeter of tissue, and that the imaging time is 1 min.

3.26 Answer true or false, with a couple of sentences of explanation: if a uniform attenuation correction is applied to a SPECT scan, a tumor positioned close to bone appears to have a lower radioactivity than is actually the case.

3.27 Calculate the maximum angle and corresponding energy of Compton-scattered γ-rays accepted for energy resolutions of 5%, 15%, and 25%.

Sections 3.10–3.14

3.28 Using the rest mass of the electron, show that the energies of the two γ-rays produced by the annihilation of an electron with a positron are 511 keV.

3.29 Assume that the head is an ellipse with major dimensions 28 and 22 cm. The patient is placed within a 45 cm diameter head PET scanner with a circular arrangement of detectors. What is the maximum recorded time difference for an event between two detectors?

3.30 PET scans often show an artificially high level of radioactivity in the lungs. Suggest one mechanism by which this might occur.

3.31 What timing resolution would be necessary to obtain a position resolution of 5 mm in TOF PET based only upon time-of-flight considerations?

3.32 If the brain is assumed to be a sphere with diameter 20 cm, and the largest dimension of the body to be 40 cm, what are the respective values of the timing resolution necessary to reduce the noise in TOF PET compared to conventional PET?

3.33 Suggest why a curvilinear region of low signal intensity, which parallels the dome of the diaphragm, is often seen on PET/CT scans of the thorax and abdomen.

3.34 Plot the count rates versus time for a detector which has significant dead-time losses. Assume that a phantom with uniform radioactivity is used to generate the γ-rays. On the same plot, show the decay in radioactivity with time.

Section 3.16

3.35 Suppose you can achieve a 24-fold reduction in scan time, from 24 min to 1 min, for a whole-body PET scan using a total body PET system, but it takes 5 min to prepare each patient. Discuss possible combinations of scan time, dose, and SNR increase that would make sense with these constraints.

References

1. A. Boschi, L. Uccelli, and P. Martini, A picture of modern Tc-99m radiopharmaceuticals: production, chemistry, and applications in molecular imaging, Appl. Sci. Basel 9(12):2526 (2019).
2. M. Ljungberg and P. H. Pretorius, SPECT/CT: an update on technological developments and clinical applications, Brit. J. Radiol. 91(1081):20160402 (2018).
3. V. Picone, N. Makris, F. Boutevin, S. Roy, M. Playe, and M. Soussan, Clinical validation of time reduction strategy in continuous step-and-shoot mode during SPECT acquisition, EJNMMI Phys. 8(1):10 (2021).
4. K. Van Audenhaege, R. Van Holen, S. Vandenberghe, C. Vanhove, S. D. Metzler, and S. C. Moore, Review of SPECT collimator selection, optimization, and fabrication for clinical and preclinical imaging, Med. Phys. 42(8):4796–4813 (2015).
5. L. C. Martinez and A. Calzado, Evaluation of a bilinear model for attenuation correction using CT numbers generated from a parametric method, Appl. Radiat. Isot. 107:77–86 (2016).
6. J. M. Alvarez-Gomez, J. Santos-Blasco, L. M. Martinez, and M. J. Rodriguez-Alvarez, Fast energy dependent scatter correction for list-mode PET data, J. Imaging 7(10):199 (2021).


7. J. W. T. Heemskerk and M. Defrise, Gamma detector dead time correction using Lambert W function, EJNMMI Phys. 7(1):27 (2020).
8. H. M. Hudson and R. S. Larkin, Accelerated image-reconstruction using ordered subsets of projection data, IEEE T Med. Imaging 13(4):601–609 (1994).
9. Z. Liu, S. Gundacker, M. Pizzichemi, A. Ghezzi, E. Auffray, P. Lecoq, and M. Paganoni, In-depth study of single photon time resolution for the Philips digital silicon photomultiplier, J. Instrum. 11:P06006 (2016).
10. Z. Liu, M. Pizzichemi, E. Auffray, P. Lecoq, and M. Paganoni, Performance study of Philips digital silicon photomultiplier coupled to scintillating crystals, J. Instrum. 11:P01017 (2016).
11. E. K. Leung, M. S. Judenhofer, S. R. Cherry, and R. D. Badawi, Performance assessment of a software-based coincidence processor for the EXPLORER total-body PET scanner, Phys. Med. Biol. 63(18) (2018).
12. C. C. Watson, D. Newport, M. E. Casey, R. A. DeKemp, R. S. Beanlands, and M. Schmand, Evaluation of simulation-based scatter correction for 3-D PET cardiac imaging, IEEE T Nucl. Sci. 44(1):90–97 (1997).
13. Y. Berker, V. Schulz, and J. S. Karp, Algorithms for joint activity-attenuation estimation from positron emission tomography scatter, EJNMMI Phys. 6(1):18 (2019).
14. C. S. Levin, M. Dahlbom, and E. J. Hoffman, A Monte-Carlo correction for the effect of Compton-scattering in 3-D PET brain imaging, IEEE T Nucl. Sci. 42(4):1181–1185 (1995).
15. S. R. Cherry, T. Jones, J. S. Karp, J. Y. Qi, W. W. Moses, and R. D. Badawi, Total-body PET: maximizing sensitivity to create new opportunities for clinical research and patient care, J. Nucl. Med. 59(1):3–12 (2018).
16. S. Vandenberghe, P. Moskal, and J. S. Karp, State of the art in total body PET, EJNMMI Phys. 7(1):35 (2020).
17. V. Nadig, K. Herrmann, F. M. Mottaghy, and V. Schulz, Hybrid total-body PET scanners: current status and future perspectives, Eur. J. Nucl. Med. Mol. Imaging 49:445–459 (2021).

Further Reading

Original Papers

B. Cassen, L. Curtis, C. Reed, and R. Libby, Instrumentation for 131I used in medical studies, Nucleonics 9, 46 (1951).
H. O. Anger, Scintillation camera, Rev. Sci. Instrum. 29, 27 (1958).
D. E. Kuhl and R. Q. Edwards, Image separation radioisotope scanning, Radiology 80, 653 (1963).


H. O. Anger, Scintillation camera with multichannel collimators, J. Nucl. Med. 5, 515 (1964).
J. S. Robertson, R. B. Marr, M. Rosenblum, V. Radeka, and Y. L. Yamamoto, Thirty-two-crystal positron transverse section detector, in Tomographic Imaging in Nuclear Medicine, pp. 142–153, Society of Nuclear Medicine (1973).
D. E. Kuhl, R. Q. Edwards, A. R. Ricci, R. J. Yacob, T. J. Mich, and A. Alavi, The Mark IV system for radionuclide computed tomography of the brain, Radiology 121, 405 (1976).
R. J. Jaszczak, P. H. Murphy, D. Huard, and J. A. Burdine, Radionuclide emission computed tomography of the head with 99mTc and a scintillation camera, J. Nucl. Med. 18, 373 (1977).
Z. H. Cho and M. R. Farukhi, Bismuth germanate as a potential scintillation detector in positron cameras, J. Nucl. Med. 18, 840 (1977).
B. M. Gallagher, A. Ansari, H. Atkins, V. Casella, D. R. Christman, J. S. Fowler, T. Ido, R. R. MacGregor, P. Som, C. N. Wan, A. P. Wolf, D. E. Kuhl, and M. Reivich, Radiopharmaceuticals XXVII. 18F-labeled 2-deoxy-2-fluoro-D-glucose as a radiopharmaceutical for measuring regional myocardial glucose metabolism in vivo: tissue distribution and imaging studies in animals, J. Nucl. Med. 18, 990 (1977).
M. M. Ter-Pogossian, N. A. Mullani, J. Hood, C. S. Higgins, and M. Curie, A multi-slice positron emission computed tomograph (PETT IV) yielding transverse and longitudinal images, Radiology 128, 477 (1978).

Books

Physics and Instrumentation

M. Dahlbom, ed., Physics of PET and SPECT Imaging, CRC Press (2017).
M. M. Khalil, ed., Basic Science of PET Imaging, Springer (2017).
M. Ljungberg, ed., Handbook of Nuclear Medicine and Molecular Imaging for Physicists: Instrumentation and Imaging Procedures, CRC Press (2022).

Radionuclide Production

M. R. Kilbourn and P. J. H. Scott, eds., Handbook of Radiopharmaceuticals: Methodology and Applications, 2nd ed., Wiley (2021).

Image Reconstruction

D. Panetta and N. Camarlinghi, 3D Image Reconstruction for CT and PET, CRC Press (2020).

Review Articles

E. Berg and S. R. Cherry, Innovations in instrumentation for positron emission tomography, Semin. Nucl. Med. 48, 311 (2018).


A. Dash and R. Chakravarty, Radionuclide generators: the prospect of availing PET radiotracers to meet current clinical needs and future research demands, Am. J. Nucl. Med. Mol. Imaging 9, 30 (2019).
J. Wu and C. Liu, Recent advances in cardiac SPECT instrumentation and imaging methods, Phys. Med. Biol. 64, 06TR01 (2019).
H. Tan, Y. Gu, J. Yu et al., Total-body PET/CT: current applications and future perspectives, Am. J. Roentgenol. 215, 325 (2020).
S. Vandenberghe, P. Moskal, and J. S. Karp, State of the art in total body PET, EJNMMI Phys. 7, 35 (2020).
N. P. van der Meulen, K. Strobel, and T. V. M. Lima, New radionuclides and technological advances in SPECT and PET scanners, Cancers 13, 6183 (2021).
T. H. Witney and P. J. Blower, The chemical tool-kit for molecular imaging with radionuclides in the age of targeted and immune therapy, Cancer Imaging 21, 18 (2021).

Specialized Journals

Journal of Nuclear Medicine
European Journal of Nuclear Medicine and Molecular Imaging
Seminars in Nuclear Medicine


4 Ultrasound Imaging

Introduction to Biomedical Imaging, Second Edition. Andrew Webb. © 2023. Companion website: www.wiley.com/go/webb2e

4.1 General Principles of Ultrasound Imaging

Ultrasound imaging produces a spatial map of the boundaries between tissues, and structures within tissues, by transmitting pulses of mechanical energy into the body and detecting the signals backscattered to the receiver. It operates at frequencies between ∼1 and ∼15 MHz, with lower frequencies being used to image deeper-lying tissues and higher frequencies for better spatial resolution close to the surface. The intensity of the backscattered signals depends upon the difference in acoustic impedance between tissues. In addition to obtaining anatomical images, ultrasound is also used widely to measure blood flow in vessels via a Doppler shift in the frequency of the backscattered signal from red blood cells. The main uses of ultrasound are in obstetrics and gynecology, cardiovascular anatomy and function, musculoskeletal injuries, and assessment of blood flow in various vessels.

The main strengths of ultrasound include: (i) It is nonionizing, portable, and inexpensive compared to the other imaging techniques, (ii) It is patient-friendly, with minimal patient preparation necessary and requiring only a water-based gel to be placed on the patient to couple to the transducer, (iii) It is a real-time technique with frame rates of tens to hundreds per second, allowing visualization of moving organs, (iv) It can be easily integrated into surgical procedures, and (v) Doppler-based flow measurements can be interlaced with anatomical scans on a real-time basis to integrate morphology and function.

The main challenges of ultrasound include: (i) Soft-tissue contrast is limited compared to the other imaging techniques, (ii) Ultrasound waves cannot penetrate through gas or bone, meaning that certain organs such as the brain cannot be imaged,


(iii) Image quality is often dependent on the skill of the operator since most transducers are hand-held, and (iv) The signal-to-noise ratio (SNR) is relatively low, with significant image "speckle" caused by interference between backscattered waves.

The hardware for an ultrasound system consists of a hand-held transducer, and a multi-channel transmit and receive system. The transducer consists of many hundreds of individual piezoelectric crystals. These oscillate mechanically when a voltage pulse is applied: by placing the transducer in direct contact with the patient, these oscillations are converted into a pressure wave which travels through the patient, as shown in Figure 4.1. Subsets of the crystals are pulsed to produce a pressure wave corresponding to one line of the image. When the ultrasound wave encounters tissue surfaces, boundaries between tissues, or structures within organs, a certain fraction of the pressure wave is backscattered toward the transducer. The transducer also acts as the signal receiver, and converts the backscattered pressure waves into voltages which, after amplification and filtering, are digitized. Using the measured time delay between pulse transmission and echo reception, and an average propagation velocity through tissue of 1540 m s⁻¹, the depth of the feature producing the backscattered signal can be estimated. The beam is steered through the tissue by applying a voltage pulse to successive subsets of crystals in the transducer to acquire multiple image lines. Real-time imaging is possible: for example, for a 10 cm depth of view the time between transmitting the pulse and receiving the furthest echoes for a single line of the image is ∼130 μs, so

Piezoelectric element Isolating spacer Transducer

Tissue boundary Tissue boundary

Figure 4.1 Principles of ultrasound imaging. Pressure waves are transmitted from each piezoelectric element of a transducer into the patient. These waves are backscattered from tissue boundaries, and are detected by the same piezoelectric elements. By measuring the time between transmission and reception, the distance to the tissue boundary can be estimated. By exciting successive subsets of crystals in the transducer, successive lines of the image are acquired and can be displayed as a real-time two-dimensional “B-mode” image.

4.2 Wave Propagation and Acoustic Impedance

an image of 256 lines can be acquired in ∼30 ms, corresponding to a rate of ∼30 frames per second (fps).
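The timing arithmetic above can be sketched in a few lines of code (a sketch, not from the book, assuming only the stated average propagation velocity of 1540 m s−1; function names are illustrative):

```python
# Sketch: relating round-trip echo time to reflector depth, and imaging
# depth plus line count to the achievable frame rate.

C_TISSUE = 1540.0  # m/s, assumed average speed of sound in soft tissue

def echo_depth(round_trip_time_s: float, c: float = C_TISSUE) -> float:
    """Depth of a reflector from the measured round-trip echo time.
    The pulse travels to the boundary and back, hence the factor of 2."""
    return c * round_trip_time_s / 2.0

def frame_rate(depth_m: float, lines_per_image: int, c: float = C_TISSUE) -> float:
    """Maximum frames per second for a given imaging depth and line count."""
    time_per_line = 2.0 * depth_m / c          # round trip for the deepest echo
    return 1.0 / (time_per_line * lines_per_image)

# For a 10 cm depth of view, one line takes ~130 microseconds:
line_time_us = 2 * 0.10 / C_TISSUE * 1e6
print(f"time per line: {line_time_us:.0f} us")
print(f"256-line frame rate: {frame_rate(0.10, 256):.0f} fps")
```

This reproduces the figures quoted in the text: ∼130 μs per line and ∼30 fps for a 256-line image.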

4.2 Wave Propagation and Acoustic Impedance

The image contrast, signal intensity, and noise characteristics of an ultrasound image are all determined by how the ultrasound waves travel through tissue, and by the interactions which give rise to backscattered signals. Considerable insight into these processes can be gained by analyzing simple models of tissue structure and the properties of the ultrasound wave, as outlined in this section.

A useful model of tissue is a lattice of small particles held together by elastic forces. Energy is coupled into the tissue using a piezoelectric crystal driven by a sinusoidal voltage, which causes the crystal to expand and contract in thickness at the same frequency, as shown in Figure 4.2. As the ultrasound energy passes through the tissue, the particles move very short distances about a fixed mean position, whereas the ultrasound energy propagates over much larger distances. The direction of particle vibration and wave propagation is the same, meaning that ultrasound in tissue can be classified as a longitudinal wave.

Figure 4.2 A schematic showing the passage of an ultrasound wave through tissue, causing an oscillating displacement of the molecules. The maximum positive pressure of the wave corresponds to the maximum compressional force, pushing the particles together; the maximum negative pressure represents a rarefaction force, pulling the particles apart.

The following analysis assumes that the propagating wave has a planar wavefront, and that the tissue is perfectly homogeneous and does not attenuate the wave, i.e. no energy is lost as the wave propagates. The particle displacement (W), shown in Figure 4.2, is related to the ultrasound propagation velocity (c) by a second-order differential equation referred to as the one-dimensional, linearized, lossless wave equation:

∂²W/∂z² = (1/c²) ∂²W/∂t²   (4.1)

The value of W is typically a few tenths of a nanometer. The value of c depends on the tissue density (ρ) and compressibility (κ):

c = 1/√(κρ)   (4.2)

The more rigid the tissue, the smaller the value of κ, and therefore from equation (4.2) the higher the velocity of the ultrasound wave. This dependence of propagation velocity on tissue structural properties is potentially very useful for clinical diagnosis of, for example, a solid tumor mass surrounded by healthy, i.e. more compressible, tissue. Table 4.1 shows that the value of c in most soft tissues is ∼1540 m s−1, with the values in bone and air (e.g. the lungs) being the highest and lowest, respectively, due to their densities and compressibilities.

The particle velocity (uz) along the direction of energy propagation, denoted here as the z-direction, is given by the time derivative of the particle displacement:

uz = dW/dt   (4.3)

The value of uz is typically 1–10 cm s−1, much lower than the value of c. The pressure (p), measured in pascals (Pa), of the ultrasound wave at a particular point in the z-direction is given by:

p = ρc uz   (4.4)

Table 4.1 Acoustic properties of biological tissues.

Tissue    Z ×10^5 (g cm−2 s−1)   Speed of sound (m s−1)   Density (kg m−3)   Compressibility ×10^11 (cm g−1 s2)
Air       0.00043                330                      1.3                70000
Blood     1.59                   1570                     1060               4.0
Bone      7.8                    4000                     1908               0.3
Fat       1.38                   1450                     925                5.0
Brain     1.58                   1540                     1025               4.2
Muscle    1.7                    1590                     1075               3.7
Liver     1.65                   1570                     1050               3.9
Kidney    1.62                   1560                     1040               4.0


Since the source undergoes sinusoidal motion, p(t) and uz(t) can themselves be expressed as:

p(t) = p0 e^(jωt),  uz(t) = u0 e^(jωt)   (4.5)

where p0 and u0 are the peak pressure and peak particle velocity, respectively, and ω is the angular frequency of the applied voltage. The intensity of the ultrasound wave is defined as the amount of power carried by the wave per unit area, and can be expressed as the product of p(t) and uz(t). The average intensity (I), measured in watts m−2, can be calculated by integrating over the period (T) of one cycle of the ultrasound wave:

I = (1/T) ∫₀ᵀ p(t) uz(t) dt = (1/2) p0 u0   (4.6)

There are federal guidelines which limit the maximum ultrasound intensity during a clinical scan. This subject is covered more fully in Section 4.13.

A particularly important parameter in ultrasound imaging is the tissue acoustic impedance (Z), which is defined as the ratio of the pressure to the particle velocity:

Z = p/uz   (4.7)

This equation can be considered a direct analog of Ohm's law in electrical circuits, with the complementary parameters being voltage/pressure, current/particle velocity, and resistance/acoustic impedance. The value of Z can also be expressed in terms of the physical properties of the tissue:

Z = ρc = ρ/√(ρκ) = √(ρ/κ)   (4.8)

Table 4.1 also lists values of Z for tissues relevant to clinical ultrasound imaging. The values for many tissues are very similar, with the two extremes again being bone and air, with much higher and lower values, respectively.

4.3 Wave Reflection

The relevance of the relative values of Z for ultrasound imaging is outlined in Figure 4.3, which shows the interaction of an ultrasound wave with a boundary between two tissues with different acoustic impedances. For simplicity, the boundary is drawn as being flat, implying both that its dimensions are much greater than the ultrasound wavelength, and also that any surface irregularities are much smaller than this wavelength. A certain fraction of the energy of the wave is scattered back toward the transducer, with the remaining fraction being transmitted


Figure 4.3 When an ultrasound beam interacts with a tissue boundary, a certain fraction of the energy of the wave is backscattered (reflected), with the remainder passing through. The intensities, pressures, and angles of the backscattered and transmitted waves, relative to the incident wave, depend on the values of Z1, Z2, c1, and c2.

through the boundary. The equations governing the angles of reflection and transmission are given by:

θi = θr   (4.9)

sin θi / sin θt = c1/c2   (4.10)

If the values of c1 and c2 are not equal, the incident and transmitted angles are also not equal, and the transmitted signal is refracted. The pressure reflection coefficient (Rp) is defined as the ratio of the pressures of the reflected (pr) and incident (pi) waves:

Rp = pr/pi = (Z2 cos θi − Z1 cos θt)/(Z2 cos θi + Z1 cos θt)   (4.11)

The corresponding pressure transmission coefficient (Tp), defined as the ratio of the pressure of the transmitted wave (pt) to that of the incident wave (pi), can be calculated by applying two boundary conditions: first, the acoustic pressures on both sides of the boundary are equal; second, the particle velocities normal to the boundary are equal. These conditions result in the relationship:

Tp = pt/pi = Rp + 1   (4.12)

and therefore:

Tp = pt/pi = 2Z2 cos θi/(Z2 cos θi + Z1 cos θt)   (4.13)


Note that if the ultrasound wave moves from a high to a low acoustic impedance, i.e. Z1 > Z2, then the backscattered wave undergoes a 180° phase shift with respect to the incident wave. The corresponding intensity reflection coefficient (RI) and intensity transmission coefficient (TI) are defined in terms of the intensities of the reflected (Ir), incident (Ii), and transmitted (It) waves:

RI = Ir/Ii = [(Z2 cos θi − Z1 cos θt)/(Z2 cos θi + Z1 cos θt)]²   (4.14)

In this case, conservation of energy means that the two intensity coefficients are related by:

TI = 1 − RI   (4.15)

and therefore:

TI = It/Ii = 4Z2Z1 cos²θi/(Z2 cos θi + Z1 cos θt)²   (4.16)

Equation (4.14) shows that the reflected signal is maximum if the value of either Z1 or Z2 is zero. However, in this case the ultrasound beam would not reach structures that lie deeper in the body. At an interface between bone and soft tissue, for example, there is a very large reflected signal, or "echo," comprising approximately 40% of the incident intensity. This greatly attenuates the transmitted beam and makes imaging of structures behind bone extremely difficult. At a gas/soft-tissue interface more than 99% of the intensity is reflected, making it impossible to scan through the lungs or gas in the bowel. At the other extreme, if Z1 and Z2 are equal in value, then no signal is detected from the tissue boundary. Using the values of Z in Table 4.1, it can be seen that, at boundaries between soft tissues, the intensity of the reflected wave is typically less than 0.1% of that of the incident wave. In the case that the incident wave is at 90° to the boundary, the equations above simplify to:

Rp = pr/pi = (Z2 − Z1)/(Z2 + Z1)   (4.17)

Tp = pt/pi = 2Z2/(Z2 + Z1)   (4.18)

RI = Ir/Ii = Rp² = (Z2 − Z1)²/(Z2 + Z1)²   (4.19)

TI = It/Ii = 4Z1Z2/(Z2 + Z1)²   (4.20)


4.4 Energy Loss Mechanisms in Tissue

In addition to interacting with relatively large scale tissue boundaries, the ultrasound wave also encounters much smaller biological structures. These lead to scattering in all directions. The energy of the ultrasound wave is also reduced as it passes through tissue due to mechanical losses such as friction. The sum of these contributions to the loss in intensity/pressure of the propagating wave is termed attenuation. Scattering, absorption, and attenuation are covered in the next three sections.

4.4.1 Scattering

As outlined in Section 4.3, reflection and refraction of the ultrasound beam occur at boundaries whose dimensions are large compared to the wavelength of the ultrasound beam. In contrast, if the beam encounters tissue surface irregularities, or particles within tissue, that are the same size as or smaller than the ultrasound wavelength, then the wave is scattered in many directions. The angular dependence and magnitude of the scattering depend on the shape, size, and physical and acoustic properties (e.g. acoustic impedance, compressibility, and density) of the scatterer. The scattering process is characterized in terms of a scattering cross-section, σs, which is defined as the power scattered per unit incident intensity. Scattering is extremely complicated, and there is no exact mathematical expression for the value of σs for an arbitrary scatterer geometry. However, for particles which are much smaller than the ultrasound wavelength, an approximate expression is given by:

σs = (64π⁵/(9λ⁴)) r⁶ [ |(κs − κ)/κ|² + (1/3) |3(ρs − ρ)/(2ρs + ρ)|² ]   (4.21)

where κs is the adiabatic compressibility of the scatterer and κ that of the surrounding tissue, ρs is the density of the scatterer and ρ that of the tissue, and r is the radius of the scatterer.

If the size of the scattering body is small compared to the wavelength, then scattering is almost uniform in direction, with slightly more energy being scattered back toward the transducer than away from it. This case is termed Rayleigh scattering, and is exemplified by the interaction of ultrasound with red blood cells (RBCs), which have diameters on the order of 7 μm. This interaction forms the basis for ultrasound Doppler blood velocity measurements, which are covered in Section 4.10. In this size regime, the value of σs increases as the fourth power of frequency. Since the RBCs are very close to one another, the scattered waves add

4.4 Energy Loss Mechanisms in Tissue


Figure 4.4 (a) Scattering from several structures which are close together produces waves which add constructively. (b) Structures which are relatively far from one another produce scattering patterns which add constructively at certain locations and destructively at others, thus producing areas of high and low image intensity, as illustrated in the image in (c) which was obtained from a homogeneous tissue, but which shows “apparent” internal structures.

constructively, as shown in Figure 4.4a. If the scatterers are much further apart, as in many soft tissues, shown in Figure 4.4b, then scattering produces areas of high and low intensity termed "speckle," which is generally considered an unwanted noise contribution to the image, as illustrated in Figure 4.4c. Imaging methods to reduce the contribution from speckle, such as compound scanning and harmonic imaging, are covered later in this chapter.
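The fourth-power frequency dependence of equation (4.21) is easy to verify numerically. The sketch below uses rough, assumed values for a red blood cell in plasma (the parameter values are illustrative, not taken from the text):

```python
# Sketch: Rayleigh scattering cross-section of a small scatterer, eq. (4.21),
# illustrating the f^4 frequency dependence (CGS units throughout).
import math

def sigma_s(f_hz, r_cm, kappa_s, kappa, rho_s, rho, c_cm_s=154000.0):
    """Approximate cross-section for a scatterer much smaller than lambda."""
    lam = c_cm_s / f_hz  # wavelength in tissue
    contrast = (abs((kappa_s - kappa) / kappa) ** 2
                + (1.0 / 3.0) * abs(3 * (rho_s - rho) / (2 * rho_s + rho)) ** 2)
    return (64 * math.pi ** 5 / (9 * lam ** 4)) * r_cm ** 6 * contrast

# RBC radius ~3.5 um; compressibility/density contrasts are assumed values
args = dict(r_cm=3.5e-4, kappa_s=3.4e-11, kappa=4.0e-11, rho_s=1.09, rho=1.05)

s2 = sigma_s(2e6, **args)
s4 = sigma_s(4e6, **args)
print(f"sigma(4 MHz)/sigma(2 MHz) = {s4 / s2:.1f}")
```

Doubling the frequency increases the scattered power sixteen-fold, since σs ∝ 1/λ⁴ ∝ f⁴.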

4.4.2 Absorption

Absorption losses refer to the conversion of ultrasound mechanical energy into heat. There are two major mechanisms involved in this process: relaxation and classical absorption. At clinical operating frequencies, the relaxation mechanism in tissue is much more important. Tissues can be characterized by a relaxation time (𝜏), which is the time taken for a molecule to return to its original position after having been displaced by the ultrasound wave. In Figure 4.2, compression of the particles corresponds to the passage of the positive half of the pressure wave, which forces the particles together. If the relaxation time of the molecule is of the same order as the period of the ultrasound wave, then at the time of the next positive pressure maximum the relaxation mechanism has resulted in the particles returning to their equilibrium separation. Thus, the actions of the pressure wave and relaxation are exactly opposite to one another. Intuitively, the maximum absorption of energy from the wave occurs when these two motions are exactly out of phase. A useful analogy is that of pushing a swing: the minimum energy required



Figure 4.5 (a) The relaxation absorption coefficient, 𝛽 r , as a function of frequency for a completely homogeneous tissue. (b) In practice, tissue consists of many slightly different relaxation frequencies: the sum of these gives an overall absorption coefficient which increases approximately linearly with frequency.

occurs when the swing is stationary at its maximum height, and the maximum energy corresponds to pushing it in the opposite direction to its travel, when it is at its lowest point.

The relaxation process is characterized by a relaxation absorption coefficient, βr. The frequency dependence of βr, shown in Figure 4.5a, is given by:

βr ∼ f²/[1 + (f/fr)²]   (4.22)

Absorption due to relaxation has a maximum at a frequency fr = 1/τ. In reality, tissue is inhomogeneous and so consists of a broad range of values of τ and fr. The total absorption coefficient can be written as the sum of these components:

βr,tissue ∼ Σₙ f²/[1 + (f/fr,n)²]   (4.23)

Measured values of absorption in many tissues have shown that there is an almost linear relationship between the total absorption coefficient and the ultrasound operating frequency, as shown in Figure 4.5b.

A small amount of energy is also lost to "classical" absorption, which occurs due to friction between particles as they are displaced by the passage of the ultrasound wave. This loss is characterized by an absorption coefficient, βclass, whose value is proportional to the square of the frequency of the ultrasound wave:

βclass = A f²   (4.24)

where A is a constant containing a number of quantities including the coefficient of viscosity, the coefficient of shear viscosity, and the thermal conductivity.


4.4.3 Overall Wave Attenuation

The overall attenuation of the ultrasound beam as it propagates through tissue is the sum of the scattering and absorption processes, which in biological tissue can be characterized by an exponential decrease of both the intensity and the pressure of the ultrasound beam as a function of propagation distance z:

Iz = I0 e^(−μz),  pz = p0 e^(−αz)   (4.25)

where μ is the intensity attenuation coefficient and α the pressure attenuation coefficient, both measured in units of cm−1; the value of μ is equal to 2α. The value of μ is often stated in units of decibels (dB) per cm, where the conversion factor between the two units is given by:

μ(dB cm−1) = 4.343 μ(cm−1)   (4.26)

On the dB scale, the two coefficients μ and α have the same value:

α(dB cm−1) = −(10/z) log₁₀[I(z)/I(z = 0)] = 8.686 α(cm−1) = μ(dB cm−1)   (4.27)

As seen previously, the processes of absorption and scattering are strongly dependent on frequency, and therefore one would expect the values of μ and α to behave similarly. Somewhat surprisingly, given the complexity of the frequency dependence of the different mechanisms and the overall inhomogeneity of tissue, there is an approximately linear relationship between attenuation coefficient and frequency for most tissues. A typical attenuation coefficient for soft tissue is 1 dB cm−1 MHz−1, i.e. for an ultrasound beam at 3 MHz the attenuation coefficient is 3 dB cm−1. The values for air and bone are much higher: 45 dB cm−1 MHz−1 and 8.7 dB cm−1 MHz−1, respectively. For lipid there is a different frequency dependence, with the attenuation coefficient being ∼0.7 f^1.5 dB cm−1.
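The rule-of-thumb figures above translate directly into intensity fractions via equation (4.25); a short sketch (hypothetical function name, assuming the 1 dB cm−1 MHz−1 soft-tissue value):

```python
# Sketch: exponential attenuation expressed in dB, using the rule-of-thumb
# soft-tissue coefficient of 1 dB cm^-1 MHz^-1.

def intensity_fraction(path_cm: float, f_mhz: float,
                       mu_db_per_cm_mhz: float = 1.0) -> float:
    """Fraction of the incident intensity remaining after 'path_cm' of tissue."""
    total_db = mu_db_per_cm_mhz * f_mhz * path_cm
    return 10 ** (-total_db / 10.0)

# Round trip to a reflector at 5 cm depth (10 cm total path) at 3 MHz:
frac = intensity_fraction(10.0, 3.0)
print(f"attenuation: {1.0 * 3.0 * 10.0:.0f} dB, "
      f"intensity remaining: {100 * frac:.4f}%")
```

A 30 dB round-trip loss leaves only 0.1% of the transmitted intensity, which is why the deepest echoes need the most receive gain (see the TGC amplifier in Section 4.5).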

4.5 Instrumentation

A simplified block diagram of the instrumentation in an ultrasound scanner is shown in Figure 4.6. The transmitter electronics consist of a digital transmit beamformer which generates the signals for each element of the transducer with the proper timing and phase to produce a focused transmit signal. Digital-to-analog converters (DACs) are used to translate the digital waveform into an analog signal, which is then amplified by a linear high-voltage amplifier to drive the transducer elements. A transmit/receive switch ensures that in transmit mode all of the RF power goes to the transducer, whereas in receive mode the returning signal voltage passes



Figure 4.6 Block diagram of the different hardware modules in a typical high-end ultrasound imaging system. Additional modules for Doppler processing are covered later in this chapter. TGC – time gain compensation. T/R – transmit/receive.

into a low-noise preamplifier. A time gain compensation (TGC) amplifier provides a greater degree of amplification to the backscattered signals from deep in the body than to those from closer to the surface, in order to reduce the total dynamic range of the received signals. After low-pass filtering to remove high-frequency noise and any extraneous signals from outside the imaging bandwidth, the signal is digitized by a high-resolution (12–16 bit) ADC. The signals are then combined by applying different time delays/phase shifts in a digital receive beamformer to ensure that all of the signals from every depth within the imaging field of view (FOV) are in phase. Logarithmic compression is used so that the image can be accurately displayed on a monitor with a limited number of gray levels. Each of the elements shown in Figure 4.6 is explained in more detail in the following sections.
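Two of the receive-chain steps described above, TGC and logarithmic compression, can be sketched as follows (the gain law and the 60 dB display dynamic range are simplified assumptions for illustration, not a scanner's actual implementation):

```python
# Sketch: time gain compensation (boost late/deep echoes to undo attenuation)
# and logarithmic compression for display, with assumed parameter values.
import math

C = 154000.0          # cm/s, speed of sound
MU_DB = 1.0 * 3.0     # dB/cm at 3 MHz (1 dB cm^-1 MHz^-1 rule of thumb)

def tgc_gain_db(t_s: float) -> float:
    """Gain applied at echo time t, compensating the out-and-back attenuation."""
    depth_cm = C * t_s / 2.0
    return 2.0 * MU_DB * depth_cm     # round-trip path is twice the depth

def log_compress(amplitude: float, dynamic_range_db: float = 60.0) -> float:
    """Map a normalized amplitude (<= 1) to a 0-1 display value."""
    db = 20.0 * math.log10(max(amplitude, 1e-12))
    return min(max((db + dynamic_range_db) / dynamic_range_db, 0.0), 1.0)

print(f"TGC at 130 us (10 cm deep): {tgc_gain_db(130e-6):.0f} dB")
print(f"display value of 0.01 amplitude: {log_compress(0.01):.2f}")
```

An echo from 10 cm depth needs roughly 60 dB more gain than one from the surface, and log compression maps an amplitude 40 dB below full scale to a mid-gray value rather than to near-black.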

4.5.1 Transducer Construction

Recapping from previous sections, in ultrasound imaging the transducer acts both as a transmitter and receiver. In transmission mode, it converts an oscillating voltage applied from an electrical source into mechanical vibrations, which are transmitted into the body as a series of pressure waves. Signal reception is essentially exactly the reverse process of transmission, in which the backscattered pressure waves are converted into electrical signals. These are amplified, digitized, and processed to form the ultrasound image. A schematic diagram of a transducer is shown in Figure 4.7: this is an array transducer with many hundreds of individual elements. The critical component


Figure 4.7 Generic construction of an ultrasound transducer, showing the piezoelectric transducer elements, two matching layers and damping (backing) block.

in the process of transduction from an applied electrical voltage into mechanical vibration, and vice versa, is a piezoelectric crystal. When an alternating voltage is applied across opposite surfaces of a crystal of piezoelectric material, the thickness of the crystal oscillates at the same frequency as the driving voltage, with the change in thickness being proportional to the magnitude and polarity of this voltage. The most common piezoelectric material used for ultrasound transducers is lead zirconate titanate (PZT). Most transducers consist of composites of PZT and a non-piezoelectric polymer, which reduces the acoustic impedance of the material in order to improve coupling to the body. The two faces of the crystal are coated with a thin layer of silver and connected electrically. The distance between the centers of adjacent crystals in the array is called the pitch, and the distance between the edges of adjacent crystals is the kerf. Each crystal in an array has a natural resonant frequency (f0) given by:

f0 = ccrystal/(2d)   (4.28)

where ccrystal is the speed of sound in the crystal and d is its thickness. The speed of sound in a PZT crystal is roughly 4000 m s−1, a value which translates into a thickness of ∼1.3 mm for an f0 of 1.5 MHz. The crystal is sandwiched between a backing layer and one or more matching layers. The acoustic impedance of PZT is about 15 times that of skin or tissue. Even though the epoxy resin used to fill the spaces in between the PZT crystals has a lower acoustic impedance, there is still quite a strong mismatch between the acoustic impedance of the transducer and that of the skin, and so placing the piezoelectric crystal directly against the patient would result in a large amount of the energy being reflected back from the skin. In order to maximize energy transfer into the body, it can be shown mathematically (Exercise 4.8) that a matching layer


of material with an acoustic impedance ZML should be placed between the crystal (ZPZT) and skin (Zskin), where ZML is given by:

ZML = √(ZPZT Zskin)   (4.29)

The thickness of this matching layer should be one-quarter of the ultrasound wavelength, again to maximize energy transmission through the layer in both directions (Exercise 4.11). In practice, two matching layers are commonly used, since materials with the required acoustic impedances are then more readily available (Exercise 4.10). When the transducer is placed on the patient's skin, acoustic coupling gel is used to lubricate the interface and to reduce any reflections from air pockets.

The backing material dampens oscillations in the crystal after the voltage pulse has been transmitted, which allows short pulses to be transmitted, and also absorbs energy radiated backwards from the crystal. In some transducers a damping block is layered on the back of each of the elements and a separate absorbing layer covers the entire array. The damping/backing material is typically an epoxy substrate impregnated with metal powder, which absorbs energy from the vibrating transducer.

With the exception of continuous wave Doppler measurements (Section 4.10.4), ultrasound imaging uses a series of short pulses (1–2 cycles). At the end of each voltage pulse, the crystal returns to a resting state, but in doing so internal vibrational modes are set up. These modes result in a finite "ring-down time" before the crystal physically comes to rest. Since the mechanical vibrations of the crystal produce the ultrasound pressure wave, the pulse of ultrasound transmitted into the body is actually longer than the applied voltage pulse. If there is close coupling between an efficient damping material and the crystal, the length of the ultrasound pulse is minimized.
As will be covered in Section 4.8.2.1, the spatial resolution in the axial direction is proportional to the length of the ultrasound pulse, with a shorter pulse giving a better axial resolution. The number of "ring-down" cycles for a crystal is essentially independent of the resonant frequency of the crystal, and so higher ultrasound frequencies give shorter ring-down times and improved axial resolution. The scaling property of the Fourier transform, outlined in the Appendix to Chapter 1, shows that efficient mechanical damping results in ultrasound waves being transmitted into the body with a broad range of frequencies. The bandwidth (BW) of a transducer is usually stated at the 3 dB level, i.e. the value of the BW is the difference between the frequencies at which the amplitude of the beam drops to 50% of its peak value. Alternatively, the quality (Q) factor of the transducer can be specified, where:

Q = 2πf0/BW   (4.30)

Typical Q values for well-damped transducers are between 1 and 2. In addition to good axial spatial resolution, a large transducer bandwidth has advantages for harmonic and sub-harmonic imaging, which are covered in Section 4.11.1.
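Equations (4.28) and (4.29) give quick design numbers. The sketch below (illustrative helper names) assumes the rough values quoted above: a PZT speed of sound of ∼4000 m s−1 and a PZT acoustic impedance ∼15 times that of tissue:

```python
# Sketch: transducer design numbers from eqs. (4.28) and (4.29).
import math

def resonant_thickness(f0_hz: float, c_crystal: float = 4000.0) -> float:
    """Crystal thickness d = c/(2*f0), in meters (assumed PZT c ~ 4000 m/s)."""
    return c_crystal / (2.0 * f0_hz)

def matching_impedance(z_pzt: float, z_skin: float) -> float:
    """Single matching layer: Z_ML = sqrt(Z_PZT * Z_skin), eq. (4.29)."""
    return math.sqrt(z_pzt * z_skin)

d = resonant_thickness(1.5e6)
print(f"thickness for 1.5 MHz: {d * 1e3:.2f} mm")       # ~1.3 mm, as in the text

# Z in units of 1e5 g cm^-2 s^-1; PZT assumed ~15x a tissue value of 1.6
z_ml = matching_impedance(15 * 1.6, 1.6)
print(f"matching layer Z: {z_ml:.1f} (x1e5 g cm^-2 s^-1)")
```

The geometric mean of equation (4.29) places the matching layer roughly a quarter of the way (on a logarithmic impedance scale) between PZT and skin, which is why two layers give an even better stepped transition.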


4.5.2 Transducer Arrays

In clinical ultrasound imaging there are two basic types of transducer array, sequential and phased, each of which can have various sizes and geometries [1]. In a sequential array, a subset of the total number of elements is excited simultaneously to form a thin ultrasound beam; after all the signals have been received to form one line of the image, the next subset of elements is excited. The linear array and convex array, shown in Figure 4.8a,b, respectively, are the most common forms of sequential array. In contrast, in a phased array (also called a sector array) all of the elements are excited for each line of the image, with preset time delays between firing each element used to produce a focused beam. For each line of the image, the time delays are adjusted to sweep the beam across the FOV, as shown in Figure 4.8c. The phased array is typically physically much smaller than sequential arrays; two-dimensional phased arrays can also be used for 3D imaging (see Section 4.7.2).

Which type of array to use depends very much on the particular clinical application. The "acoustic window" refers to the aperture within the body through which the ultrasound can propagate. If, for example, one is trying to image through the ribs, then there is only a small aperture between the ribs, so imaging the heart through the chest wall requires a small sector array transducer. For applications such as the breast or carotid artery, where a relatively small FOV is needed but the acoustic


Figure 4.8 Three different types of transducer array used in clinical practice. (a) linear sequential array, (b) convex sequential array, and (c) phased array. Example images are shown below each transducer.


window is large, then a linear array can be used. For a larger FOV, with a large acoustic window available, the convex array is usually chosen. Linear arrays typically work between 5 and 20 MHz, and are used for high-resolution imaging of superficial structures, including ocular imaging, joints, and musculoskeletal scans. Convex arrays work at low-to-mid frequencies, 1–10 MHz, and are used to image organs such as the kidneys, bladder, and liver, which require a large FOV. Phased arrays typically operate between 1 and 5 MHz (although higher-frequency devices are also available).

The geometry of the ultrasound beam produced by an array is essentially a superposition of the beam patterns from each of the individual elements of the array. The wavefront very close to the transducer face is extremely complicated, as shown in Figure 4.9a. The last intensity maximum occurs at the "near-field boundary" (NFB). Axial positions closer to the transducer than the NFB are referred to as being in the near-field, or Fresnel, zone, while those beyond the NFB comprise the far-field, or Fraunhofer, zone. The position, ZNFB, of the NFB is given by:

ZNFB ≈ a²/λ   (4.31)

where a is half the effective dimension of the active transducer elements. Beyond the NFB, the beam diverges in the lateral direction, the axial intensity decreases smoothly, and the wavefront is well approximated as planar. There are also side lobes present, as shown in Figure 4.9b, which arise from the physical expansion of the crystals in directions orthogonal to the thickness mode. These can be minimized by apodizing the voltages applied to the active elements of the array, with a higher voltage being applied to the elements in the center and a lower voltage to those on the outside: this process is described further in Section 4.8.2.

Since the array consists of a regularly spaced matrix of individual elements, grating lobes similar to those from an optical diffraction grating can potentially be produced, as also shown in Figure 4.9b. These are undesirable since grating lobes can produce reflections from tissue boundaries outside the imaging FOV which alias back into the image. For a rectangular element, the angle (φg), with respect to the main beam, at which these grating lobes occur is given by:

φg = sin−1(nλ/g)   (4.32)

where g is the gap between the elements, and n = ±1, ±2, ±3, etc. For a linear array in which the beam is focused but not steered, if the spacing between elements is made less than the ultrasound wavelength, the grating lobes fall outside the edge of the FOV of the image. For a phased array, which does use beam steering, the condition is that the spacing must be less than half a wavelength.
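Equation (4.32) can be used to check whether grating lobes exist for a given element spacing; a short sketch (illustrative function name):

```python
# Sketch: grating-lobe angles from eq. (4.32), phi_g = asin(n*lambda/g).
# Lobes exist only for orders n where n*lambda/g <= 1.
import math

def grating_lobe_angles_deg(f_hz: float, gap_m: float, c: float = 1540.0):
    """Angles (degrees) of grating lobes; empty list if none can exist."""
    lam = c / f_hz
    angles = []
    n = 1
    while n * lam / gap_m <= 1.0:     # asin argument must stay <= 1
        angles.append(math.degrees(math.asin(n * lam / gap_m)))
        n += 1
    return angles

lam = 1540.0 / 3e6                     # ~0.51 mm at 3 MHz
print(grating_lobe_angles_deg(3e6, 1.5 * lam))   # spacing > lambda: a lobe appears
print(grating_lobe_angles_deg(3e6, 0.5 * lam))   # spacing = lambda/2: no lobes
```

With the element spacing below one wavelength (half a wavelength for a steered phased array), no real angle satisfies the grating-lobe condition, which is the design rule stated above.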



Figure 4.9 (a) Plot of the intensity (I) of an ultrasound beam as a function of distance from the transducer. A subset of the elements in the array are fired simultaneously to produce a wavefront which has a natural focus at the NFB. The intensity within the near-field fluctuates rapidly, but beyond the NFB it decays smoothly. (b) In addition to the main beam there are also side-lobes and grating-lobes, the magnitudes of which should be minimized as discussed in the main text.

4.5.2.1 Linear Sequential Array

For a linear sequential array, the aperture is essentially the length of the entire transducer, and so a large FOV can be imaged close to the surface of the transducer, with slightly better resolution deeper in the tissue than can be achieved using curved transducers. The array typically consists of between 64 and 512 rectangular piezoelectric crystals. As shown in Figure 4.8a, a voltage pulse is applied simultaneously to a small number of elements. The width of the ultrasound beam is determined by the number of elements that are excited. After the backscattered echo has been received, a second voltage pulse is applied to the next series of elements, producing a beam with a focal point displaced laterally with respect to the previous line. Sequential excitation is continued until all such groups have been excited. If an odd number of elements has been chosen, then the process can be repeated using simultaneous excitation of an even number of elements, which produces focal points at locations between those acquired previously. In this fashion, almost twice as many scan lines as there are transducer elements can be acquired. An image is produced with a rectangular FOV, as shown in Figure 4.8a.

4.5.2.2 Curvilinear/Convex Sequential Array

4 Ultrasound Imaging

A convex array is shown in Figure 4.9b. This operates in exactly the same way as the linear sequential array but, because the face of the array is convex rather than planar, it produces a sector image with a wide FOV, particularly at large depths.

4.5.2.3 Linear-Phased Array

In a phased array, shown in Figure 4.8c, all the elements are excited for each line of the image, rather than just a small subset as for a sequential array. The voltage pulses exciting each element of a phased array are delayed in time with respect to each other, to produce a curved wavefront. At the focal point of the beam, the ultrasound waves from each of the individual elements in the array are all in phase, and so add constructively. Figure 4.10 shows how a simple phased array can be used to focus and steer the ultrasound beam electronically: this process is called transmitter beam-forming. Steering through an angle of up to 45° is possible without producing grating lobes. Although all of the elements can be used for each line of the image, as shown in Figure 4.10, a process termed "dynamic aperture" can alternatively be used to optimize the lateral resolution (Section 4.8.2.2) over the entire axial extent of the imaging FOV. Using a small number of elements to transmit the beam produces a focal point close to the transducer surface. At larger depths, the number of elements necessary to achieve the best lateral resolution increases.


Figure 4.10 Illustration of beam-forming during ultrasound transmission using a linear phased array. (a)–(c) Applying voltage pulses to each individual element of the array at different times produces a focused beam. (b) Applying the voltages symmetrically with respect to the center elements, with the left and right elements excited first, causes the beam to focus at a point which is half-way along the array. (a) and (c) Asymmetric timing steers the focal point of the array to the left or the right side.


Figure 4.11 The process of dynamic aperture involves exciting an increasing number of elements in order to dynamically focus at larger depths within the tissue. First, a small number of elements are used to produce a focal point close to the transducer surface. The time required to acquire the backscattered echoes from such a shallow depth is very short, and so subsequent excitations using a larger number of elements for focusing at points deeper within tissue can be executed rapidly.

Therefore, the number of elements excited is increased dynamically during transmission of the ultrasound beam, as shown in Figure 4.11. The trade-off using this approach is that the imaging frame rate is reduced. Convex and linear scanners can also integrate dynamic focusing of the transmit field if required. This increases the spatial resolution, the SNR, and the penetration depth, but since the beam is narrower, the number of sequential firings needs to be increased to cover the same lateral FOV and so the frame rate decreases.
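The element timing sketched in Figure 4.10 follows from simple geometry: each element is delayed so that all wavefronts arrive at the focus simultaneously. The sketch below (Python; the 32-element array, 0.3 mm pitch, and focus positions are illustrative assumptions) computes such delays for both on-axis focusing and steering.

```python
import math

def focusing_delays(n_elements, pitch, focus_lateral, focus_depth, c=1540.0):
    """Per-element firing delays (seconds) that focus the transmitted beam at
    (focus_lateral, focus_depth). The element with the longest path to the
    focus fires first; all wavefronts then arrive at the focus in phase."""
    # Element x-positions, centered on the array
    xs = [(i - (n_elements - 1) / 2) * pitch for i in range(n_elements)]
    dists = [math.hypot(x - focus_lateral, focus_depth) for x in xs]
    d_max = max(dists)
    return [(d_max - d) / c for d in dists]

# On-axis focus at 40 mm: symmetric delays, outer elements fire first.
focus = focusing_delays(32, 0.3e-3, 0.0, 40e-3)
print(focus[0], focus[15])   # edge element fires first (zero delay)

# Shifting the focal point laterally (steering) makes the delays asymmetric.
steer = focusing_delays(32, 0.3e-3, 10e-3, 40e-3)
print(steer[0], steer[31])
```

For the symmetric case the delay profile is the curved "lens" of Figure 4.10b; the asymmetric timing corresponds to panels (a) and (c).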

4.6 Signal Detection and Processing

As shown earlier in Figure 4.6 the components of the receiving system after the transducer include TGC amplifiers, ADCs, receiver beam-forming, logarithmic compression, and finally data storage and display. Each of these is considered in detail in Sections 4.6.1 and 4.6.2.

4.6.1 Time Gain Compensation

The signal voltages corresponding to the backscattered echoes have a large range of amplitudes: very high signals are produced from strong reflectors close to the transducer, and very low signals from scatterers deep within the body. The total range of signal amplitudes may be as high as 100–110 dB, which cannot


Figure 4.12 Principle of time gain compensation. (a) The backscattered signal has a dynamic range of almost 100 dB, which cannot be accurately amplified and digitized. By applying an amplification factor which increases as a function of time after pulse transmission (corresponding to longer distances travelled through tissue) the dynamic range of the amplified signals is reduced. (b) B-mode images produced without and with TGC applied.

be accurately digitized by an ADC. The solution is to use TGC of the acquired signals, a process in which the amplification factor is increased as a function of time after the pulse has been transmitted. Signals arising from structures close to the transducer are amplified by a smaller factor than those from greater depths. Various linear or nonlinear functions of gain vs. time can be used, and these functions can be chosen on-line by the operator. The net effect of TGC is to compress the dynamic range of the backscattered echoes, as shown in Figure 4.12. After TGC, the signal is low-pass filtered and digitized.
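A minimal sketch of TGC follows (Python; the linear gain function, the attenuation coefficient of 0.5 dB cm⁻¹ MHz⁻¹, the 3 MHz frequency, and the echo levels are all illustrative assumptions): the applied gain grows with round-trip depth, flattening identical scatterers and compressing the dynamic range as in Figure 4.12.

```python
def apply_tgc(echo_db, depths_m, alpha=0.5, f_mhz=3.0):
    """Time gain compensation: add a gain (dB) equal to the estimated
    round-trip attenuation alpha * f * 2 * depth, so that identical
    scatterers at different depths produce similar output levels."""
    gains = [alpha * f_mhz * 2.0 * d * 100.0 for d in depths_m]  # dB per cm
    return [s + g for s, g in zip(echo_db, gains)]

# Identical scatterers at 1, 5, and 10 cm depth; raw echoes weaken with depth.
depths = [0.01, 0.05, 0.10]
raw = [-3.0, -15.0, -30.0]        # dB relative to the transmitted pulse
flat = apply_tgc(raw, depths)
print(flat)                                        # all ~0 dB after TGC
print(max(raw) - min(raw), max(flat) - min(flat))  # 27 dB range -> ~0 dB
```

In a scanner the gain-versus-time curve is set by the operator (the TGC sliders); a linear-in-depth dB gain like the one here is only one possible choice.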

4.6.2 Receive Beam Forming

Receive beam forming is essentially the reciprocal analog of dynamic focusing during signal transmission. Applying time delays and different weighting functions to the digitized data allows the effective focal length and aperture of the transducer to be optimized for each depth location for each line in the image. After receive beam forming, the signals pass through a logarithmic compression amplifier, which further reduces the dynamic range to 20–30 dB, so that the image can be displayed as a gray-scale image on a clinical viewing station. Images are typically displayed in magnitude mode.
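The delay step of receive beamforming can be illustrated with a toy delay-and-sum sketch (Python; the array geometry, sampling rate, and point-scatterer channel data are all invented for illustration): each channel is read at the sample matching its own round-trip time to the image point, so echoes from that point add coherently.

```python
import math

def delay_and_sum(channel_data, xs, z, c=1540.0, fs=40e6, weights=None):
    """Delay-and-sum beamforming for one on-axis image point at depth z:
    read each channel at its own round-trip delay, apply optional
    apodization weights, and sum."""
    if weights is None:
        weights = [1.0] * len(xs)
    total = 0.0
    for ch, x, w in zip(channel_data, xs, weights):
        t = (z + math.hypot(x, z)) / c      # transmit depth + receive path
        idx = int(round(t * fs))
        if 0 <= idx < len(ch):
            total += w * ch[idx]
    return total

# Synthetic echo from a point scatterer at 30 mm on the beam axis:
fs, c, z0 = 40e6, 1540.0, 30e-3
xs = [(i - 15.5) * 0.3e-3 for i in range(32)]
data = []
for x in xs:
    ch = [0.0] * 4096
    ch[int(round((z0 + math.hypot(x, z0)) / c * fs))] = 1.0
    data.append(ch)

print(delay_and_sum(data, xs, z0))     # coherent sum over all 32 channels
print(delay_and_sum(data, xs, 35e-3))  # focused at the wrong depth: ~0
```

Repeating this for every depth along every line, with depth-dependent weights, is the "dynamic focusing and aperture on receive" described above.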


4.7 Diagnostic Scanning Modes

There are three basic modes of diagnostic "anatomical imaging" using ultrasound: A-mode, M-mode, and B-mode. Depending upon the particular clinical application, one or more of these modes may be used. The use of compound imaging and three-dimensional imaging is also described in this section. The use of ultrasound to measure blood flow is covered in Section 4.10.

4.7.1 A-Mode, M-Mode, and B-Mode Scans

Amplitude (A)-mode scanning refers to the acquisition of a one-dimensional scan. An A-mode scan simply plots the amplitude of the backscattered echoes versus time after transmission of the ultrasound pulse. A-mode scans are mainly used as individual components of the M-mode scans described next. A motion (M)-mode scan provides information on tissue movement within the body, and essentially displays a continuous series of A-mode scans. The brightness of the displayed signal is proportional to the amplitude of the backscattered echo, with a continuous time-ramp applied to the horizontal axis of the display, as shown in Figure 4.13. The maximum time resolution of the M-mode scan is dictated by how long it takes for the echoes from the deepest tissue to return to the transducer. One of the main applications of M-mode ultrasound is in the emergency room (ER), where simple bedside ultrasound systems are used, and trained sonographers may not be available to perform technically challenging scans such as Doppler ultrasound. Clinical applications of M-mode ultrasound in the ER include the assessment of pneumothorax, cardiac tamponade, hypertrophic cardiomyopathy, and left ventricular systolic function [2]. Brightness (B)-mode scanning is by far the most commonly used mode and produces a two-dimensional image through a cross-section of tissue.


Figure 4.13 Use of M-mode ultrasound in the ER to investigate possible pneumothorax. (a) Placement of the M-mode scan for evaluation of the pleura and lung. (b) M-mode scan when the pleural layers are in normal contact. (c) M-mode scan when there is air in the intrapleural space, indicative of a pneumothorax.


Each line in the image consists of an A-mode scan, with the brightness of the signal proportional to the amplitude of the backscattered echoes. B-mode scanning can be used to study both stationary and moving structures, such as the heart, since complete images can be acquired very rapidly.

4.7.2 Three-Dimensional Imaging

Three-dimensional ultrasound is particularly useful in surgical procedures such as transcatheter heart interventions, where valves, septal defects, and left atrial appendages are much better visualized than in two dimensions. The two modes are typically performed in combination with each other, since the frame rate of 3D imaging is much lower than that of 2D. 3D imaging can be performed using two different types of array, as shown in Figure 4.14 [3]. In one configuration, shown in Figure 4.14a, the position of a conventional 1D curved array is controlled by a small motor which "wobbles" the transducer in the elevation dimension. An alternative setup is a transducer which consists of a two-dimensional array of crystals, producing a beam that can be electronically steered in both the elevation and azimuthal directions, as shown schematically in Figure 4.14b. Figure 4.15 shows two examples of three-dimensional ultrasound images.

4.7.3 Compound Imaging

Compound imaging, also called SonoCT, is a method which uses a phased array transducer to acquire multiple co-planar B-mode images at different angles,


Figure 4.14 Two different transducer array designs for 3D imaging. (a) A “wobbling” 1D array which rotates in the elevation direction. (b) A 2D crystal array, which can be electronically focused in both the lateral and elevation dimensions.


Figure 4.15 (a) A three-dimensional abdominal ultrasound scan showing individual gall stones. (b) A three-dimensional image of a fetal head in utero. Source: BSIP SA/Alamy Stock Photo.


Figure 4.16 (a) The principle of compound scanning. Comparison of a carotid artery bifurcation acquired using (b) a conventional B-mode scan with (c) a compound scan with nine overlapping beam orientations shows reduced speckle/clutter.

as shown in Figure 4.16a, and combines these multiple views into a single compound image. The effects of speckle (areas of constructive and destructive interference arising from small scatterers) and clutter (artifactual signals coming from side-lobes and/or grating lobes) are reduced considerably by the combination of views from different angles, since backscattered echoes from each scan add coherently, whereas speckle and clutter add only in a partially coherent manner. The improved SNR of the compound image results in improved visualization of internal structures, enabling, for example, the detection of small lesions and calcifications. The greatest improvement in image SNR occurs in the center of the image, where the greatest number of lines overlap.


The effects of image artifacts such as acoustic shadowing or enhancement (see Section 4.9) are also reduced. A comparison between conventional and compound B-mode scans is shown in Figure 4.16b and c.
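The benefit of averaging partially independent looks can be illustrated numerically (Python; the additive Gaussian speckle model and all numbers are illustrative assumptions, not the book's model): averaging N independent looks leaves the mean anatomical signal unchanged but reduces the speckle standard deviation by roughly √N.

```python
import random
import statistics

random.seed(0)

def speckle_image(n_pixels):
    """One B-mode look at a homogeneous region: a constant true echo level
    plus a fluctuating speckle term (toy model for illustration only)."""
    return [1.0 + random.gauss(0.0, 0.3) for _ in range(n_pixels)]

def compound(n_looks, n_pixels=10000):
    """Average n_looks images acquired at different angles: the anatomy adds
    coherently, while independent speckle averages down by ~sqrt(n_looks)."""
    looks = [speckle_image(n_pixels) for _ in range(n_looks)]
    return [sum(px) / n_looks for px in zip(*looks)]

single = compound(1)
nine = compound(9)          # e.g. nine beam orientations, as in Figure 4.16c
print(statistics.mean(nine))                               # anatomy preserved, ~1.0
print(statistics.stdev(single) / statistics.stdev(nine))   # speckle reduced ~3x
```

Real compound imaging gains somewhat less than √N because the speckle patterns at neighboring angles are only partially decorrelated, as the text notes.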

4.7.4 Other Transmit and Receive Beamforming Techniques

In addition to the conventional scanning techniques described in Sections 4.7.1–4.7.3, there are many other ways in which ultrasound data can be acquired using different types of transmit and receive beamforming [4, 5]. Such techniques are typically used when either very high spatial resolution or very high frame rates are required. Examples include: (i) multiline acquisition, in which a wide beam is transmitted from a subset of the array elements and the data received by all of the elements are phased and reformatted to form narrow A-lines; (ii) multiline transmission, in which the converse scheme is applied, i.e. multiple narrow transmission lines are created by exciting slightly different frequencies for each line; (iii) planar wave imaging, in which a wave as wide as the transducer itself is produced by exciting all of the elements with the same phase; and (iv) synthetic aperture beamforming, in which only one element in the array is excited at a time, and all of the resulting images are combined to produce a focus at each pixel within the image.

4.8 Image Characteristics

As outlined in the introduction, ultrasound imaging is characterized by a relatively low SNR and contrast-to-noise ratio (CNR), with spatial resolutions on the millimeter level, but very high image frame rates.

4.8.1 Signal-to-Noise

The signal intensity of the backscattered ultrasound signals is affected by: (i) the intensity of the ultrasound pulse transmitted by the transducer – the higher the intensity, the higher the amplitude of the detected signals; (ii) the operating frequency of the transducer – the higher the frequency, the greater the tissue attenuation, and therefore the lower the SNR at greater depths; and (iii) the type of focusing used – the stronger the focusing, the higher the energy per unit area of the ultrasound wave, the higher the SNR, and the deeper the penetration.


However, outside of the depth of focus (defined as the axial distance over which the energy of the beam remains within one-half of its maximum value), the energy per unit area is very low, as is the image SNR. Also, the narrower the beam, the more lines must be acquired to cover a given lateral FOV, and so the lower the frame rate. The noise in ultrasound images has two primary components: (i) speckle, which corresponds to coherent wave interference in tissue and gives a granular appearance to what should appear as a homogeneous tissue – the particles which give rise to scattered signals are too small to be visualized directly, but the pattern produced on the ultrasound image is characteristic of particular size distributions; and (ii) clutter, which refers to signals arising from side-lobes, grating lobes, multipath reverberation, tissue motion, and any other acoustic phenomena which add features that are not actually present in the body. Clutter can be reduced significantly by using harmonic imaging methods, covered in Section 4.11.1.

4.8.2 Spatial Resolution

The spatial resolution has two distinct components: axial resolution, along the axis of the beam, and lateral resolution, perpendicular to the beam axis.

4.8.2.1 Axial Resolution

The axial resolution is defined as the closest separation, in the direction of the propagating ultrasound wave, of two scatterers that results in resolvable backscattered signals. Figure 4.17a and b shows an ultrasound pulse encountering two reflecting boundaries separated by different axial distances. The criterion for two features to be distinguished is that the backscattered waves do not overlap. Therefore, the axial resolution is given by:

axial resolution = (1/2)(PD)c  (4.33)

where PD is the pulse duration in seconds. Figure 4.17a shows the case in which the returning waves from two closely spaced features overlap, and so the two features appear as one elongated feature in the image. Figure 4.17b shows the case for a shorter ultrasound pulse, in which the two features can now be resolved. For higher ultrasound frequencies, the pulse length decreases for a given number of cycles, and so the axial resolution improves. Typical values of axial resolution are 1.5 mm at a frequency of 1 MHz, and 0.3 mm at 5 MHz. There are clear trade-offs between axial resolution and other image characteristics: (i) the higher the frequency, the better the axial resolution, but the lower the penetration depth, and


Figure 4.17 Illustrations of (a, b) axial and (c) lateral spatial resolution. (a) Two points cannot be resolved in the axial direction, since the backscattered ultrasound waves overlap. (b) By using a higher frequency transducer, the shorter pulse means that the returning signals no longer overlap, so they can be resolved. (c) In the lateral direction the spatial resolution is given by the width of the beam, which is spatially dependent and has a minimum value at the focal point of the beam.

(ii) the greater the degree of transducer damping, the better the axial resolution, but the wider the transducer bandwidth, meaning less energy at f0 and a lower SNR.

4.8.2.2 Lateral Resolution

The lateral resolution at a particular axial distance from the transducer is related to the width of the beam at that point. As discussed in Section 1.3.2, two features are distinguishable when the separation between them in the lateral direction is greater than or equal to the full width at half maximum (FWHM) of the beam. If two backscatterers are positioned closer together than this, they produce echoes in the received signal that are superimposed on one another, as shown in Figure 4.17c. In general, for an unsteered array the lateral resolution at the focal point is approximately one-half the total dimension of the active transducer elements. For a phased array, the lateral resolution at the focal point is improved by using strong focusing, but is worse at locations away from the focal point, as also shown in Figure 4.17c. Dynamic aperture can improve this situation at the cost of a reduced frame rate, as outlined previously. In addition to the central transmitted beam there are also side-lobes, as shown in Figure 4.9b and also Figure 4.18a, which degrade the effective lateral resolution. Apodization of the transmit beam using, for example, a Hamming filter as shown in Figure 4.18b, reduces the effect of the side-lobes but also broadens the main peak [6].
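The typical axial-resolution values quoted above are consistent with equation (4.33) if the transmitted pulse is assumed to be roughly two cycles long (an assumption for illustration; the text does not state the cycle count), as this quick check shows (Python, c = 1540 m s⁻¹):

```python
def axial_resolution(n_cycles, f_hz, c=1540.0):
    """Axial resolution = (1/2) * PD * c, Eq. (4.33), with the pulse duration
    PD taken as n_cycles periods of the transmit frequency."""
    pd = n_cycles / f_hz          # pulse duration in seconds
    return 0.5 * pd * c           # meters

# A ~2-cycle pulse reproduces the book's typical values:
print(axial_resolution(2, 1e6) * 1e3)  # ~1.5 mm at 1 MHz
print(axial_resolution(2, 5e6) * 1e3)  # ~0.3 mm at 5 MHz
```

The 1/f scaling also makes the frequency trade-off explicit: halving the wavelength halves the axial resolution cell, at the cost of penetration depth.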


Figure 4.18 (a) Plot through the lateral dimension of the ultrasound beam, showing the main lobe but also the presence of a number of side-lobes with significant intensities (only one side is shown for clarity). (b) Apodization of the transmit signals reduces the side-lobes at the price of widening the main lobe [6].

The main trade-offs between spatial resolution and other image characteristics are: (i) the higher the degree of focusing, the better the lateral resolution, but the narrower the beam width, which requires more lines for a given lateral FOV and so gives a lower frame rate, as well as a smaller depth of focus; and (ii) the higher the transmit apodization, the fewer the beam side-lobes, but the wider the main beam itself. In practice, the FWHM for a particular transducer at a certain depth can be measured experimentally by moving a small reflector across the ultrasound beam and measuring the intensity of the reflected signal as a function of the lateral position of the scatterer.

4.8.3 Contrast-to-Noise

Factors which affect the SNR also contribute to the image CNR. Noise sources such as clutter and speckle reduce the image CNR, especially for small pathologies within tissue. Although compound imaging can reduce the contribution from speckle, the greatest improvements in the CNR are obtained by using ultrasound contrast agents, harmonic imaging, and pulse inversion techniques, all of which are covered in Section 4.11.

4.9 Artifacts in Ultrasound Imaging

Image artifacts, in which spatial features present in the image do not accurately represent the physical structure of the tissue, can arise from a number of sources [7]. Such artifacts must be recognized to avoid incorrect image


Figure 4.19 Examples of ultrasound image artifacts. (a) An image of the lung showing strong reverberation artifacts. (b) Acoustic shadowing shown by the arrow behind a strongly-backscattering gall stone. (c) Acoustic enhancement with a bright signal, highlighted by the arrow, beneath a cyst with high water content and low absorption.

interpretation, but once recognized they can in fact sometimes give useful diagnostic information. Image artifacts considered here include the effects of reverberation, refraction, acoustic enhancement or shadowing, speckle, and clutter. Reverberations occur if there is a very strong reflector such as bone or air close to the transducer surface. Multiple reflections occur between the surface of the transducer and the reflector, and these appear as a series of repeating lines in the image, as shown in Figure 4.19a. These artifacts are relatively simple to detect due to the equidistant nature of the lines. Acoustic shadowing occurs when either a very strong reflector such as a gas/tissue boundary or a highly attenuating medium "shadows" a deeper-lying organ. Acoustic shadowing results in a dark area or "hole" in the image, as shown in Figure 4.19b. The opposite phenomenon, known as acoustic enhancement, occurs when a region of low attenuation is present within an otherwise homogeneous medium, as shown in Figure 4.19c. Clinical examples of tissues with low attenuation coefficients are cysts, vessels, and fluid-filled organs such as the kidney. The areas behind such tissues have a higher than expected intensity, a feature which is a useful diagnostic tool for differentiating between fluid-filled cysts and solid tumors in breast examinations, for example. The refraction of ultrasound at a boundary between two tissues with different acoustic impedances has already been described (Section 4.3). Refraction is most troublesome at a bone/soft-tissue interface, where large angular deviations of up to 20° in the direction of the transmitted wave can occur. At interfaces between different soft tissues the refraction angle is only 1–2° and hence is not very important, except when extremely precise measurements of distance are required as, for example, in the eye.


4.10 Blood Velocity Measurements Using Ultrasound

Noninvasive, localized blood velocity measurements are extremely useful in the diagnosis of a number of diseases, including detecting areas of stenosis or narrowing of the arteries. Ultrasound-based blood velocity measurements are based on the Doppler effect, as outlined in Sections 4.10.1–4.10.4.

4.10.1 The Doppler Effect

An everyday example of the Doppler effect is the higher pitch of an ambulance siren as it approaches an observer compared to when it has passed. Similarly, blood flow, either toward or away from the transducer, alters the frequency of the backscattered echoes, as shown in Figure 4.20. Since blood contains a high proportion of RBCs, which have a diameter of 7–10 μm, the interaction between ultrasound and blood represents a scattering process as described in Section 4.4.1. Since the ultrasound wavelength is much greater than the dimensions of the scatterer, the wave is scattered in all directions. Using the parameters shown in Figure 4.20, the component of the velocity (v) of the RBCs toward the transducer is given by v cos𝜃. The effective frequency, fieff, of the transmitted ultrasound beam, as "seen" by the RBC, is higher than the actual

Figure 4.20 Schematic showing the origin of the Doppler shift in ultrasound imaging of blood flow. The ultrasound beam is scattered from the RBCs in a vessel oriented at an angle 𝜃 with respect to the transducer surface. The backscattered ultrasound beam is detected by the transducer at a slightly different frequency (f rec ) from that transmitted (f i ) into the body.


transmitted/incident frequency (fi), and is given by:

fieff = (c + v cos𝜃)/𝜆 = fi(c + v cos𝜃)/c  (4.34)

There is an equivalent frequency shift during the return path, and so the frequency (frec) of the ultrasound received by the transducer is given by:

frec = fieff(c + v cos𝜃)/c = fi(c + v cos𝜃)²/c² = fi + 2fiv cos𝜃/c + fiv²cos²𝜃/c²  (4.35)

Therefore, the frequency shift, fD, due to the Doppler effect can be calculated as:

fD = frec − fi = 2fiv cos𝜃/c + fiv²cos²𝜃/c² ≅ 2fiv cos𝜃/c  (4.36)

The blood velocity can be calculated from the Doppler shift by:

v = cfD/(2fi cos𝜃)  (4.37)

The Doppler shift is in the low kHz range: for example, if fi = 5 MHz, 𝜃 = 45°, and v = 50 cm s−1, then the Doppler shift is approximately 2.3 kHz. The fractional change in frequency (Δf/fi) is extremely small, in this case less than 0.05%. The Doppler shift can be increased by using higher ultrasound frequencies, but in this case the maximum depth at which vessels can be measured decreases, due to increased attenuation of the beam. Equation (4.37) also shows that an accurate measurement of blood velocity can only be achieved if the angle 𝜃 is known. This angle is usually estimated from simultaneously acquired B-mode scans using "duplex imaging," described in Section 4.10.3. A fixed error in the value of 𝜃 has the smallest effect when 𝜃 is small, and so in practice values of 𝜃 of less than 60° are ideally used. Doppler measurements can be performed either in continuous-wave (CW) or pulsed mode, depending upon the particular application. These methods are described in the next two sections.
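Equations (4.36) and (4.37) are easy to check numerically. The sketch below (Python, with c = 1540 m s⁻¹ assumed) reproduces the worked example; with this value of c the shift comes out at roughly 2.3 kHz, and the fractional shift is indeed well below 0.05%.

```python
import math

def doppler_shift(v, theta_deg, f_i, c=1540.0):
    """Approximate Doppler shift, Eq. (4.36): f_D ~= 2 f_i v cos(theta) / c."""
    return 2.0 * f_i * v * math.cos(math.radians(theta_deg)) / c

def velocity_from_shift(f_d, theta_deg, f_i, c=1540.0):
    """Invert the shift, Eq. (4.37): v = c f_D / (2 f_i cos(theta))."""
    return c * f_d / (2.0 * f_i * math.cos(math.radians(theta_deg)))

fd = doppler_shift(0.50, 45.0, 5.0e6)        # 50 cm/s, 45 degrees, 5 MHz
print(fd)                                    # ~2.3 kHz
print(100.0 * fd / 5.0e6)                    # fractional shift in %, < 0.05
print(velocity_from_shift(fd, 45.0, 5.0e6))  # recovers 0.5 m/s
```

Note how strongly the result depends on cos𝜃: the same velocity measured at 𝜃 = 60° would give a shift about 30% smaller, which is why the angle must be estimated from the B-mode image.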

4.10.2 Pulsed-Mode Doppler Measurements

The general scheme for acquiring pulsed-mode Doppler images is shown in Figure 4.21. An ROI in the body containing the blood vessel of interest is chosen based on a B-mode scan, and the system then automatically focuses the beam to this ROI by determining the parameters for dynamic beamforming, i.e. the time to wait after transmitting the pulse before starting to acquire the backscattered signal (defining the minimum depth), and the length of time for which the signal is acquired (defining the maximum depth). The minimum (dmin) and maximum (dmax) depths can be calculated from:

dmin = c(td − tp)/2, dmax = c(td + tg)/2  (4.38)


Figure 4.21 (a) General mode of operation of pulsed mode Doppler imaging. A series of signals S 1 , S 2 , S 3 , S 4 … S n are acquired, and processed as shown in Figure 4.22 to estimate the blood flow. (b) and (c) The parameters t p , t g , and t d (see main text) are chosen to localize the received signal to the desired ROI, defined by the focal region of the phased-array transducer and the minimum and maximum required depths: examples of obtaining information from a vessel close to the surface (b) and deeper within the body (c) are shown.

where tp is the length of the ultrasound pulse, td is the time delay between the end of the transmitted pulse and the receiver gate being opened, and tg is the time for which the receiver gate is open for each line of the image. A series of hundreds of RF pulses per B-mode line is transmitted, and the corresponding signals recorded, as shown in Figure 4.21. The time between each pulse is trep, with the pulse repetition rate (PRR) equal to 1/trep. Figure 4.22 shows a series of backscattered signals (S1–Sn) acquired at two successive times t1 and t2. Each signal S1–Sn has a different phase with respect to the initial transmitted pulse in the train. The signals are Fourier transformed to give a frequency spectrum (comprising all of the Doppler shifts within the vessel), which can be converted into a velocity spectrum using equation (4.37). Doppler information can be presented in different modes. The simplest is spectral Doppler, which shows information from one specific position. The Doppler spectrum is then plotted as a function of time at this location, with the width of the Doppler spectrum represented on the vertical axis and the amplitude encoded by the grayscale value. One of the disadvantages of pulsed-mode Doppler measurements is that there is a limit to the highest velocity, vmax, that can be measured. The Nyquist theorem


Figure 4.22 Steps in the derivation of the Doppler frequency distribution from a particular axial position within the ROI, which corresponds to a time point (represented by the dotted line) in each of the signals S 1 … S n . Fourier transform of the S n versus n plot gives the Doppler frequency spectrum shown at the bottom. The Doppler spectrum (bottom right) is then displayed as a function of time (t 1 , t 2 …) with the grayscale corresponding to the amplitude of the Doppler peak, and the vertical axis to its width.

states that the maximum frequency that can be detected is one-half of the sampling frequency. From Figure 4.21, the sampling frequency for each point in the ROI is given by the value of the PRR, and so the highest measurable Doppler frequency, fmax, is given by:

fmax = PRR/2  (4.39)

The corresponding value of vmax is given by:

vmax = (PRR)c/(4fi)  (4.40)

If the Doppler shift has a value greater than f max , then it will “alias,” i.e. appear as a low frequency. If aliasing is suspected, the machine can be switched to CW-mode (Section 4.10.4) which does not suffer from this limitation.


The value of the PRR also determines the maximum depth, dmax, from which flow information can be obtained, with dmax given by c/(2PRR). The relationship between dmax and vmax is therefore:

vmax = c²/(8fi dmax)  (4.41)
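The velocity/depth trade-off of equations (4.40) and (4.41) can be sketched as follows (Python; the 5 kHz PRR and 3 MHz frequency are illustrative assumptions, and c = 1540 m s⁻¹ is assumed):

```python
def v_max(prr, f_i, c=1540.0):
    """Highest unaliased velocity, Eq. (4.40): v_max = PRR * c / (4 f_i)."""
    return prr * c / (4.0 * f_i)

def d_max(prr, c=1540.0):
    """Deepest sampled location for a given PRR: d_max = c / (2 PRR)."""
    return c / (2.0 * prr)

prr, f_i, c = 5000.0, 3.0e6, 1540.0   # 5 kHz PRR, 3 MHz transducer
print(v_max(prr, f_i))                # ~0.64 m/s before aliasing
print(d_max(prr) * 100)               # ~15.4 cm maximum depth
# Consistency with Eq. (4.41): v_max = c^2 / (8 f_i d_max)
print(c**2 / (8.0 * f_i * d_max(prr)))
```

Increasing the PRR raises vmax but shrinks dmax in proportion, so deep, fast flows (e.g. in the heart) can exceed what pulsed mode can measure without aliasing.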

4.10.3 Color Doppler/B-mode Duplex and Triplex Imaging

Doppler flow measurements from the ROI can also be displayed as two-dimensional color maps by processing all the data points in the waveforms S1–Sn shown in Figure 4.22. Doppler acquisition is interlaced with regular B-mode imaging in order to be able to superimpose the flow maps onto higher-resolution anatomical images: this combination is called duplex imaging. Since the backscattered Doppler signal has a much lower intensity than the B-mode scan, much longer ultrasound pulses are used for Doppler scanning, which results in poorer axial resolution. Only the mean value of the velocity, and not the full velocity distribution (as in spectral Doppler), is determined at each pixel. The mean value of the velocity (v̄) is calculated from:

v̄ = (f̄/fi) · c/(2 cos𝜃)  (4.42)

where 𝜃 is the angle defined in Figure 4.20, and f̄ is the mean frequency shift given by:

f̄ = ∫ f [SI²(f) + SQ²(f)] df / ∫ [SI²(f) + SQ²(f)] df  (4.43)

where the integrals run from −∞ to ∞, and SI(f) and SQ(f) are the quadrature components of the signal. The mean velocity, its sign (positive or negative), and its variance are represented by the hue, saturation, and luminance, respectively, of the color plot. Examples of duplex images are shown in Figure 4.23. The frame rates for duplex imaging are much lower than those for B-mode scanning. Another mode is triplex imaging, in which the spectral Doppler plot is also calculated and displayed. This gives additional information but results in a further reduction in frame rate due to increased acquisition and processing times. One of the difficulties in measuring color Doppler shifts occurs when a vessel lies exactly parallel to the face of the phased array transducer. If flow is unidirectional, one half of the image shows flow toward the transducer, and the other shows flow away from the transducer. Directly below the center of the transducer there is a signal void. The angle dependence can be removed by using the so-called "power Doppler" mode. In this mode the area under the plot of Doppler frequency vs.


Figure 4.23 Duplex Doppler scans superimposed on B-mode images. (a) Flow in a recannulated umbilical cord. (b) Flow inside a carotid artery. A parallelogram-shaped region of interest is needed for vessels which are parallel to the transducer surface (©ATL Ultrasound).

amplitude is integrated to give the "Doppler power." The Doppler power depends only upon the number of RBC scatterers and is not angle-dependent. Aliasing artifacts at high flow rates are also eliminated, since the integral of an aliased signal is the same as that of an unaliased signal. The major disadvantage of power Doppler is the loss of directional information.
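A discrete version of equation (4.43), together with the integrated "Doppler power" just described, can be sketched as follows (Python; the four-bin spectrum is invented purely for illustration):

```python
def mean_doppler_frequency(freqs, s_i, s_q):
    """Discrete form of Eq. (4.43): power-weighted mean of the Doppler
    spectrum, with the power at each frequency bin given by S_I^2 + S_Q^2."""
    power = [i * i + q * q for i, q in zip(s_i, s_q)]
    return sum(f * p for f, p in zip(freqs, power)) / sum(power)

def doppler_power(s_i, s_q):
    """'Power Doppler': the integrated spectral power, which depends on the
    number of RBC scatterers but not on the beam-to-vessel angle."""
    return sum(i * i + q * q for i, q in zip(s_i, s_q))

freqs = [-1000.0, 0.0, 1000.0, 2000.0]    # Hz, toy Doppler spectrum bins
s_i = [0.0, 1.0, 3.0, 1.0]                # quadrature components per bin
s_q = [0.0, 0.0, 1.0, 0.0]
print(mean_doppler_frequency(freqs, s_i, s_q))  # power-weighted mean, in Hz
print(doppler_power(s_i, s_q))                  # angle-independent total power
```

The mean frequency keeps its sign (and hence the flow direction), whereas the integrated power discards it, mirroring the trade-off described in the text.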

4.10.4 Continuous Wave Doppler (CWD) Measurements

Continuous wave Doppler (CWD) is used to measure the very high velocity blood flows typically found in the heart. Half of the elements in the transducer array are used to produce a focused transmit beam; the other half act as receivers to produce a focused receive beam. The advantage of CW Doppler over pulsed Doppler methods is that it is limited neither to a maximum depth nor to a maximum measurable velocity. The major hardware challenge is that the transmit and receive channels must operate simultaneously, and the relatively weak Doppler signals are very close in frequency to the very large signals generated by reflections from stationary tissue at the fundamental frequency, so there is a very large background signal which must be eliminated. The Doppler shift is estimated using the hardware configuration shown in Figure 4.24. The received signal, s_in(t), can be represented as:

s_in(t) = A cos[2π(f_i + Δf)t]    (4.44)

where A is the amplitude of the backscattered signal, and Δf is the Doppler shift. In order to extract both the magnitude and sign of the Doppler shift, the signals are mixed down to baseband using a heterodyne system such as shown in basic form in Figure 4.24.
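The quadrature demodulation scheme of Figure 4.24 can be sketched numerically. The following simulation (with frequencies scaled down from the MHz range so that it runs quickly; all values are illustrative) recovers both the magnitude and the sign of Δf from the rotation direction of the (s_I, s_Q) phasor:

```python
import numpy as np

def quadrature_doppler(df, f_i=100e3, fs=1e6, duration=0.02):
    """Recover the magnitude AND sign of a Doppler shift df (Hz) from a
    CW signal using a quadrature (I/Q) heterodyne receiver."""
    t = np.arange(0.0, duration, 1.0 / fs)
    s_in = np.cos(2 * np.pi * (f_i + df) * t)   # received signal, Eq. (4.44)

    # Mix with two reference signals 90 degrees apart, then low-pass
    # filter (crude moving average) to remove the components near 2*f_i.
    kernel = np.ones(101) / 101
    s_i = np.convolve(s_in * np.cos(2 * np.pi * f_i * t), kernel, "same")
    s_q = np.convolve(s_in * np.sin(2 * np.pi * f_i * t), kernel, "same")

    # The direction of rotation of the (s_i, s_q) phasor gives the sign of
    # df; the sign convention depends on the phase of the second mixer.
    phase = np.unwrap(np.angle(s_i - 1j * s_q))[1000:-1000]  # trim filter edges
    return (phase[-1] - phase[0]) * fs / (2 * np.pi * (phase.size - 1))
```

Calling `quadrature_doppler(+2e3)` and `quadrature_doppler(-2e3)` returns estimates close to +2 kHz and −2 kHz respectively, whereas a single (non-quadrature) mixer would return the same magnitude for both and lose the direction of flow.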



Figure 4.24 Block diagram of a continuous-wave Doppler receiver. The two outputs of the heterodyne receiver enable the magnitude and the sign of the Doppler signal to be reconstructed.

In the case of a positive Doppler shift, the outputs of the heterodyne receiver are s_I(t) = A cos(2πΔft) and s_Q(t) = 0, and for a negative Doppler shift, s_I(t) = 0 and s_Q(t) = A cos(2πΔft).

4.11 Ultrasound Contrast Agents

Ultrasound contrast agents are used to increase the backscattered signal from small blood vessels in tissue. This can be used, for example, to evaluate the vascularity of a lesion or the degree of blood perfusion within an organ. These contrast agents consist of hollow microbubbles, filled with a perfluorinated gas, with diameters between ∼1 and 3 μm, so that when injected into the bloodstream, either as a small bolus or as a dilute continuous injection, they pass through the pulmonary, cardiac, and capillary systems into the microcirculation. They remain in the systemic vascular system for an amount of time which depends on the particular chemical formulation of the microbubble shell. “Heavy” perfluorinated gases are used, rather than the air used in the first contrast agents, since they are less water-soluble and so less likely to leak out of the microbubble (Figure 4.25a). Microbubbles act as contrast agents by increasing the power of the backscattered signal. The power, P_r, received by a transducer is given by:

P_r = (I_i N σ a²) / (4R²)    (4.45)


where R is the distance between the scatterer and transducer, a is the radius of the transducer, N is the number of scatterers, I_i is the intensity of the incident ultrasound beam, and σ is the scattering cross-section. Therefore, the larger the effective scattering cross-section of the microsphere, the larger the backscattered signal. There are two basic mechanisms by which such agents significantly increase the backscattered signal from blood. The first is the large difference in acoustic properties between gas-filled microbubbles and the surrounding blood and tissue. Equation (4.21) showed that the magnitude of the backscattered signal depends upon the differences in density and compressibility between the contrast agent and the surrounding medium. For a gas-filled microbubble, κ_s ≫ κ and ρ_s ≪ ρ, resulting in a high scattering cross-section. The second, and dominant, mechanism by which the effective scattering cross-section is increased is termed “resonance,” which refers to a microbubble absorbing energy very efficiently at a particular resonant frequency during compression, and reradiating this energy during expansion, resulting in a very strong echo signal, Figure 4.25b. The resonance frequency (f_0) of a bubble of radius r, with a spherical shell formed by lipid/protein which has a surface tension σ_st, is given by:

f_0 = (1/(2πr)) √[(3γ/ρ_0)(p_0 + 2σ_st/r)]    (4.46)

where γ is the adiabatic ideal gas constant, p_0 is the ambient pressure of the surrounding blood, and ρ_0 is the ambient blood density. Fortuitously, the value of f_0 for microbubbles which can enter the body’s microcirculation corresponds very well with the frequencies used for clinical ultrasound. Gas-filled microspheres, effectively acting as harmonic oscillators, produce increases in scattering cross-section three orders of magnitude greater than their actual geometric cross-section.
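Equation (4.46) can be evaluated for a typical microbubble. The parameter values below (adiabatic gas constant, shell surface tension, blood density and pressure) are representative assumptions, not values from the text:

```python
import numpy as np

def bubble_resonance(r, gamma=1.07, p0=101.325e3, rho0=1060.0, sigma_st=0.04):
    """Resonance frequency of a shelled microbubble, Eq. (4.46).

    r        : bubble radius (m)
    gamma    : adiabatic constant of the perfluorocarbon gas (assumed)
    p0       : ambient blood pressure (Pa)
    rho0     : ambient blood density (kg/m^3)
    sigma_st : effective shell surface tension (N/m, assumed)
    """
    return (1.0 / (2 * np.pi * r)) * np.sqrt(
        (3 * gamma / rho0) * (p0 + 2 * sigma_st / r)
    )

f0 = bubble_resonance(1.5e-6)   # a 3 um diameter bubble
```

With these assumptions a 3 μm bubble resonates at roughly 2–2.5 MHz, squarely within the clinical imaging band, consistent with the remark above about the fortuitous match between microbubble resonance and diagnostic frequencies.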
There are currently five FDA/CE approved contrast agents which differ in terms of microbubble shell composition, the particular fluorinated gas, and mean diameter. In general, a more hydrophilic shell reduces the time that the microbubble remains in circulation, and the more elastic the material, the more energy can be absorbed without bursting. The most widely used agent is SonoVue, which has a lipid shell, is filled with sulfur hexafluoride gas, and the microbubbles have a mean diameter of ∼3 μm. Optison has a smaller diameter, 1–2.25 μm, is filled with octafluoropropane and has a shell formed from human albumin. The most recently approved agent (currently only available in Asia) is Sonazoid, which has a diameter of 3.2 μm, contains perfluorobutane gas, and has a shell with a hydrogenated egg phosphatidylserine sodium coating which means that the agent is taken up by the Kupffer cells in the liver. Microbubbles are also extensively used in cardiovascular applications, since they increase the contrast between tissue and blood, and can therefore be used to more accurately see changes in the thickness of the myocardial wall.


Figure 4.25 (a) Schematic of a microbubble-based ultrasound contrast agent. The diameter of the microbubble is ∼1–3 μm. The shell is a few tens of nanometers thick. (b) Change in the dimensions of a microbubble as an ultrasound pressure wave passes through. (c) Frequency spectrum of the backscattered ultrasound wave produced if the incident pressure is high enough to cause nonlinear effects.

4.11.1 Harmonic and Pulse Inversion Techniques

If the transmitted ultrasound pulse has sufficiently high power, then there are significant nonlinearities in the compression/expansion cycle of the microbubbles, and the pressure wave radiated by the microbubble during expansion contains measurable components at higher harmonics, as shown in Figure 4.25c. These harmonic signals have lower intensities than those at the fundamental frequency, but also have much lower intensity artifacts from clutter, tissue motion, side-lobes, and grating-lobes. Harmonic imaging is a technique where ultrasound pulses are transmitted at f_0 but the signals are received at nf_0. The most common implementation of harmonic imaging uses the second harmonic (n = 2) of the fundamental frequency since it has the highest intensity. The bandwidth of the transducer, of course, must be sufficiently wide to have a high sensitivity at both f_0 and 2f_0. Although a high-pass filter can be applied to the received data, and can theoretically be used to detect the 2f_0 component only, in practice there is significant signal contamination from the backscattered signal at f_0.


A technique called pulse-inversion can be used in combination with harmonic imaging techniques and can almost entirely remove the contribution of the signal at f 0 . This approach uses two scans with the transmitted ultrasound pulses in the two scans differing in phase by 180∘ , i.e. one set of pulses is effectively inverted. The backscattered signals at the fundamental frequency from the two scans are equal in magnitude and opposite in phase, but the harmonic signal has the same magnitude and phase for both scans. Summation of these two signals, therefore, results in cancellation of the fundamental signals, producing a pure harmonic signal. The obvious disadvantage of the pulse-inversion technique is that it has half the frame rate of a single scan. A comparison of fundamental and second harmonic mode images of the heart, both acquired using contrast agents, is shown in Figure 4.26. Although most second harmonic imaging uses microbubble contrast agents, it is also possible to use second harmonic imaging without contrast agents.
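The cancellation at the heart of pulse inversion can be demonstrated with a minimal numerical sketch, modeling the microbubble echo as a weakly nonlinear response y = x + ax² (an illustrative model, not the full bubble dynamics):

```python
import numpy as np

# Model the microbubble echo as a weakly nonlinear response y = x + a*x^2:
# the quadratic term generates the second harmonic at 2*f0.
def echo(x, a=0.2):
    return x + a * x**2

f0 = 2e6                     # fundamental frequency (Hz)
dt = 1e-9                    # sampling interval (s)
t = np.arange(4000) * dt     # exactly 8 cycles of f0, so no spectral leakage
pulse = np.sin(2 * np.pi * f0 * t)

# Pulse inversion: transmit p and its 180-degree inverted copy -p, then sum
# the two echoes. Odd (fundamental) terms cancel; even (harmonic) terms add.
summed = echo(pulse) + echo(-pulse)

freqs = np.fft.rfftfreq(t.size, dt)
spectrum = np.abs(np.fft.rfft(summed))
fundamental = spectrum[np.argmin(np.abs(freqs - f0))]
harmonic = spectrum[np.argmin(np.abs(freqs - 2 * f0))]
```

In the summed spectrum the component at f_0 is zero to within numerical precision, while the component at 2f_0 has doubled, which is exactly the behavior the two-scan acquisition exploits (at the cost of the halved frame rate noted above).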

4.11.2 Super-Resolution in Ultrasound Imaging

Another technique which is made possible by microbubble contrast agents is super-resolution ultrasound imaging, which achieves spatial resolutions below the theoretical limit of one-half the wavelength described previously. The concept behind super-resolution is to track individual microbubbles in the bloodstream by acquiring a time-series of images. The raw images of the individual microbubbles


Figure 4.26 Cardiac image showing a 4-chamber view of the heart using standard B-mode imaging (a) and harmonic imaging (b). Reverberation artifacts can be seen throughout the image as clutter in the image acquired at the fundamental frequency, but these are absent in the harmonic image.


Figure 4.27 The principle of super-resolution ultrasound imaging. (a) Acquisition of a time-series of ultrasound images from vessels containing microbubbles. (b) Detection of the signals from the microbubbles via intensity thresholding. (c) Isolation of signals from single microbubbles; signals which show spatial overlap of multiple microbubbles are rejected. (d) Locating the center of mass of the microbubbles. (e) Tracking the microbubbles through consecutive frames to establish the velocity profiles. (f) Mapping of the center of mass information over the time-series produces an image of the vascular structure with a spatial resolution well below the half-wavelength limit.

are each limited by the intrinsic spatial resolution of the imaging system, but by tracking the passage of the microbubbles over time, an image of the vascular structure with far finer spatial resolution can be obtained. The principle is shown in Figure 4.27. An example of microbubble tracking analysis of a contrast-enhanced ultrasound scan of a patient with breast cancer is illustrated in Figure 4.28, which shows a

Figure 4.28 (a) B-mode image of a breast cancer patient revealing a hypo-intense, irregular lesion with unsharp margins. (b) The maximum intensity projection image after microbubble injection confirms the lesion is highly vascularized, but details in the vascular architecture are very limited. (c) Increased information on the architecture is revealed by super-resolution based microbubble tracking analysis.


highly vascularized rim from which vascular assemblies extend toward the center of the tumor.
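The localization step at the heart of super-resolution imaging — finding the center of mass of a diffraction-limited bubble image to sub-pixel precision, step (d) in Figure 4.27 — can be sketched as follows. The Gaussian PSF and all parameters are synthetic, illustrative choices:

```python
import numpy as np

def localize(image):
    """Sub-pixel centre of mass of a single microbubble's blurred image."""
    ys, xs = np.indices(image.shape)
    total = image.sum()
    return (ys * image).sum() / total, (xs * image).sum() / total

# A bubble at sub-pixel position (12.3, 20.7), blurred by the system PSF
# (modeled here as a Gaussian with sigma = 3 pixels).
ys, xs = np.indices((40, 40))
true_y, true_x = 12.3, 20.7
psf_image = np.exp(-((ys - true_y)**2 + (xs - true_x)**2) / (2 * 3.0**2))

est_y, est_x = localize(psf_image)
```

Even though the bubble's image is several pixels wide, its centroid can be located to a small fraction of a pixel; accumulating thousands of such localizations over the time-series is what builds the sub-wavelength vascular map.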

4.12 Safety and Bioeffects in Ultrasound Imaging

Under normal operating conditions, ultrasound imaging is extremely safe, with no limit having been set by the FDA on the number of examinations a patient can undergo over any given period of time. Increasingly sophisticated image acquisition processes such as compound scanning and power color Doppler have, however, increased the amount of energy that is deposited in the body, and a number of regulatory guidelines for recommended safety levels have been established. Several measures are used to estimate the safety of an ultrasound imaging protocol. The temporal-averaged ultrasound intensity is calculated by multiplying the average intensity during the ultrasound pulse by the duty cycle, where the duty cycle is given by the duration of the ultrasound pulse divided by the time between pulses. The Gaussian beam profile can be accounted for by calculating the spatially averaged intensity, I_SA. Common acronyms used for reporting ultrasound intensities for different procedures use a combination of these terms, for example spatial average temporal average (SATA), spatial peak temporal average (SPTA), spatial peak pulse average (SPPA), spatial peak temporal peak (SPTP), spatial peak (SP), and spatial average (SA). The American Institute of Ultrasound in Medicine sets guidelines for these values, based on estimates of the tissue heating produced. For example, there is a maximum limit of 720 mW cm−2 for the derated I_SPTA (where derated refers to in situ measurements taking tissue attenuation into account). Tissue heating and cavitation are the two mechanisms by which destructive bioeffects could potentially occur during an ultrasound scan. For tissue heating, the intensity of the ultrasound beam and the duration of the scan are both important parameters. For cavitation, the relevant parameter is the peak rarefactional pressure of the pulse: this is particularly relevant to gas-filled contrast agents, in which cavitation of the bubbles can occur.
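The intensity definitions above can be made concrete with a short worked example. The pulse parameters are representative numbers, not values from the text:

```python
# Temporal averaging of ultrasound intensity -- a worked example with
# representative (assumed) numbers.
pulse_duration = 1e-6             # seconds
pulse_repetition_period = 1e-3    # seconds between pulses
pulse_average_intensity = 200e3   # mW/cm^2 averaged over the pulse itself

# Duty cycle: fraction of time the transducer is actually transmitting.
duty_cycle = pulse_duration / pulse_repetition_period      # 0.001

# Temporal-average intensity: pulse-average intensity scaled by duty cycle.
i_ta = pulse_average_intensity * duty_cycle                # 200 mW/cm^2

# Compare with the 720 mW/cm^2 guideline for the derated I_SPTA
# (derating itself is ignored in this simple sketch).
within_guideline = i_ta <= 720.0
```

Even though the intensity during the pulse is very high, the low duty cycle brings the temporal average down to a level well inside the guideline, which is why pulsed imaging deposits far less energy than the peak figures might suggest.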
Most systems have a real-time display of safety indices such as the mechanical index (MI) and thermal index (TI). The TI is the ratio of the total acoustic power to that required to produce a maximum temperature increase of 1 °C. The mechanical index is mathematically defined as:

MI = max[p_r(z) e^(−0.0345 f_0 z)] / √f_0    (4.47)

where p_r is the axial value of the rarefactional pressure measured in water. The TI and MI are only rough estimates of the risk of inducing biological effects, with values less than one indicating safe operation.
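Equation (4.47) can be evaluated for a hypothetical axial pressure profile. The profile and numbers below are illustrative assumptions; by the usual convention p_r is expressed in MPa, z in cm, and f_0 in MHz:

```python
import numpy as np

def mechanical_index(p_r, z, f0_mhz):
    """Eq. (4.47): p_r in MPa (rarefactional pressure measured in water),
    z in cm, f0 in MHz. The exponential derates the water measurement by
    0.3 dB/cm/MHz to estimate the in situ pressure (e^-0.0345 = -0.3 dB)."""
    derated = p_r * np.exp(-0.0345 * f0_mhz * z)
    return derated.max() / np.sqrt(f0_mhz)

z = np.linspace(0.0, 10.0, 101)            # depth (cm)
p_r = 2.0 * np.sin(np.pi * z / 10.0)       # hypothetical axial profile (MPa)
mi = mechanical_index(p_r, z, f0_mhz=3.5)
```

For this profile the derated maximum no longer occurs at the depth of peak water-measured pressure, because the attenuation term shifts it toward the transducer; the resulting MI is below one, the nominal threshold for safe operation.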


4.13 Point-of-Care Ultrasound Systems

Since ∼2015 there has been a large increase in the number of point-of-care ultrasound (POCUS) systems. These are small, often handheld systems which can be used, for example, at the patient’s bedside. POCUS has found significant uptake in emergency medicine, critical care, internal medicine, and anesthesia [8]. Some POCUS systems even plug into a mobile phone for display and save data to local databases via a wireless network. These systems typically cost a few thousand dollars. Image quality is not as good as that of the much more expensive, less portable systems, and there is in general reduced functionality, but they are finding increasing use, particularly in resource-poor environments. Two such commercial systems are shown in Figure 4.29. There are several differences in the design of such systems compared to conventional ultrasound scanners. The transducer must be produced with much lower-cost manufacturing technologies than those required to produce and mechanically dice a piezoelectric crystal. One of the most promising technologies is capacitive micromachined ultrasound transducers (CMUTs). A CMUT has a thin metalized membrane that is suspended by insulating posts over a conductive silicon substrate [9]. Several thousand separate sensors can be microfabricated at one time using photolithography. In transmit mode, when an alternating voltage is applied between the membrane and substrate, the membrane is moved by Coulomb forces, which generates ultrasound waves. In receive mode, currents are generated by the change in capacitance when the electrically biased membrane is moved by the reflected waves. One of the great advantages of a CMUT is that the similarity of the fabrication technologies of complementary metal-oxide-semiconductor (CMOS) circuits and CMUTs allows direct integration of the

Figure 4.29 Examples of commercial point of care ultrasound systems. Source: FUJIFILM SonoSite, Inc.; Butterfly Network.


array with application-specific integrated circuits (ASICs), which can be used for all aspects of pulse transmission, data reception, and data processing. A related technology is the piezoelectric micromachined ultrasound transducer (PMUT), which has a structure similar to that of the CMUT except that a PMUT has a piezoelectric layer deposited on top of the silicon membrane [10]. POCUS systems can run off battery power, with a recharge time of a few hours.

4.14 Clinical Applications of Ultrasound

The noninvasive, nonionizing nature of ultrasound imaging, the lack of contraindications that prevent safe scanning, its ability to measure blood velocity, together with real-time image acquisition, easy patient access, and portability, mean that a very wide range of clinical protocols have ultrasound imaging as an integral component. As examples, applications in obstetrics, breast imaging, musculoskeletal, cardiological, and abdominal scans are outlined briefly below.

4.14.1 Obstetrics and Gynecology

Ultrasound is the only imaging technique which is routinely used for fetal studies. Parameters such as the size of the head and brain ventricles, and the condition of the spine are measured to assess the health of the fetus. If amniocentesis is necessary to detect disorders such as Down’s syndrome, then ultrasound is used for needle guidance. Doppler ultrasound is also used to measure fetal blood velocity. The high spatial resolution and excellent image contrast in fetal imaging are shown in Figure 4.30.

4.14.2 Breast Imaging

Ultrasound is used in conjunction with the primary technique of X-ray mammography in the diagnosis of breast cancer. If mammography suggests that a “lump” is present, then ultrasound can help to determine whether it is a fluid-filled cyst or a solid mass. Cysts typically have a round shape, anechoic interiors, and acoustic enhancement is often seen behind the cyst. Since cysts are fluid-filled, the presence of acoustic streaming (fluid motion arising from the ultrasound pressure, detected using Doppler techniques) is also a useful diagnostic. Ultrasound is particularly valuable in women with dense breast tissue or young women, since the tissue is relatively opaque to X-rays. If a needle-biopsy is needed in order to determine whether a solid mass is cancerous or not, then real-time B-mode ultrasound imaging can be used to guide the needle into the tumor. Ultrasound can also be used in the detection of microcalcifications, with spatial compound imaging being



Figure 4.30 (a) B-mode image of the fetal brain. (b) Image of the fetal lung. Source: (a) Meduniver.


Figure 4.31 (a) A compound image of a dark mass within the breast. (b) Tracking a needle biopsy of breast tissue using real-time compound B-mode scanning.

particularly useful due to the reduction in speckle. Figure 4.31 shows two images: the first of a breast carcinoma acquired using compound imaging, and the second shows visualization of a needle biopsy. These studies typically use a 1D linear array transducer.

4.14.3 Musculoskeletal Structure

Musculoskeletal damage can be quickly and effectively diagnosed using ultrasound. Figure 4.32 shows two examples of images acquired for such injuries.



Figure 4.32 (a) Image of the patellar tendon and (b) an image of the shoulder.

Typically a 1D linear or curvilinear array operating at high frequency is used to obtain the high spatial resolution necessary to distinguish small muscle and/or ligament and tendon tears.

4.14.4 Abdominal

Many different organs in the abdomen can be scanned using ultrasound. For smaller organs close to the surface, such as the appendix, a linear array operating at relatively high frequency is used. For deeper organs such as the prostate, lower frequencies are used with a curvilinear array: examples of both of these types of scan are shown in Figure 4.33. As covered previously, ultrasound contrast agents are often used in the assessment of liver disease. Since liver lesions do not take up the agent, they can be detected as areas which do not undergo “late phase” signal enhancement, i.e. >2 min post injection. In contrast, hepatocellular tumors are characterized by early phase hyperenhancement (i.e. increased uptake) which then decreases in the postvascular phase, indicating malignancy.

Figure 4.33 (a) Inflamed appendix and (b) prostate.


Exercises

Sections 4.1–4.4

4.1 Calculate the intensity transmission coefficient, T_I, for the following interfaces, assuming that the ultrasound beam is exactly perpendicular to the interface: muscle/kidney, air/muscle, and bone/muscle. Discuss briefly the implications of these values of T_I for ultrasound imaging.

4.2 Repeat the calculations in question 4.1 with the angle of incidence of the ultrasound beam now being 60°.

4.3 Given a 100 dB receiver dynamic range, and an operating frequency of 3 MHz, what is the maximum depth within tissue at which a perfect reflector, i.e. one that backscatters 100% of the incident energy, can be detected?

4.4 Calculate the distance at which the intensity of a 1 and 5 MHz ultrasound beam is reduced by half traveling through (a) bone, (b) air, and (c) muscle.

4.5 Plot the attenuation of the ultrasound beam for 1, 5, and 10 MHz at depths within tissue of 1, 5, and 10 cm. For each depth, calculate the fractional decrease in transmitted power, and the absolute power assuming an output power from the transducer of 100 mW cm−2.

4.6 Plot the transmitted frequency spectrum of an ultrasound beam from a transducer operating at a central frequency of 1.5 MHz. Assume that the transducer is damped. Repeat the plot for the beam returning to the transducer after having passed through tissue and been backscattered.

4.7 Explain why a very fast or very slow tissue relaxation time results in a very small amount of energy being lost due to absorption.

Section 4.5

4.8 In order to improve the efficiency of a given transducer, the amount of energy reflected by the skin directly under the transducer must be minimized. A layer of material with an acoustic impedance Z_ml is placed between the transducer and the skin. If the acoustic impedance of the skin is denoted by Z_skin, and that of the transducer crystal Z_PZT, show mathematically that the value of Z_ml which minimizes the energy of the reflected wave is given by Z_ml = √(Z_PZT Z_skin).


4.9 Given values of Z_PZT and Z_skin of 30 × 10^5 g cm−2 s−1 and 1.7 × 10^5 g cm−2 s−1, respectively, calculate what fraction of the energy from the transducer is actually transmitted into the patient.

4.10 If two matching layers are used instead of one, and the respective acoustic impedances are given by the analogues of the equation above, then calculate the increase in efficiency in transmitting power into the patient.

4.11 Consider a transducer which has a thickness of one quarter-wavelength. A matching layer (previous question) is used to maximize the energy transferred from the transducer to the body. Show that the thickness of this matching layer should be one-quarter of the ultrasound wavelength.

4.12 For a non-steered array, derive the equation for the maximum size of each element such that no grating lobes are produced.

4.13 A phased array has a central operating frequency of 3.5 MHz. If the aperture through the ribs is 14 mm, what is the maximum number of elements in the phased array?

4.14 For a concave lens to focus a beam, should the speed of sound in the lens be greater than or less than in tissue?

4.15 Show the required timing for simultaneous steering and dynamic focusing of a phased array. For simplicity, sketch the general scheme using a small number (for example five) of elements.

4.16 Sketch the corresponding delays required for dynamic beam-forming during signal reception.

Section 4.6

4.17 Use the following data to sketch the A-mode scan from Figure 4.34a. The amplitude axis should be on a dB scale, and the time axis in microseconds. Ignore any reflected signal from the transducer/fat interface, and assume that a signal of 0 dB enters the body. At a transducer frequency of 5 MHz, the linear attenuation coefficient for muscle and liver is 5 dB cm−1, and for fat is 7 dB cm−1. Relevant values of the acoustic impedance and speed of sound can be found in Table 4.1.


Figure 4.34 Illustrations for Exercises 4.17, 4.18, and 4.19: (a) a transducer over layers of fat (5 mm), muscle (3 cm), and liver; (b) a transducer over layers of fat (5 mm) and muscle, a tumour, and liver.

4.18 Determine and sketch the A-mode scan using the same parameters as above, but with a time gain compensation of 0.8 dB μs−1.

4.19 For the object shown in Figure 4.34b qualitatively sketch the B-mode ultrasound image. Ignore speckle or scatter and only consider signals backscattered from the tissue boundaries. Acoustic impedances: tissue 1.61, tumor 1.52 (×10^5 g cm−2 s−1). Attenuation coefficients: tissue 1.0, tumor 0.4 (dB cm−1 MHz−1). Speeds of sound: tissue 1540, tumor 750 m s−1.

4.20 In a particular real-time imaging application the transducer moves through a 90° sector with a frame rate of 30 frames per second, acquiring 128 lines of data per frame. If the image is acquired up to a depth of 20 cm, and the lateral resolution of the beam width at this depth is 5 mm, calculate the effect of transducer motion on overall image blurring, i.e. is it the dominant factor?

4.21 A B-mode scan is taken of the object in Figure 4.35 with a linear array. There are four tissue components, A and B with a boundary in-between and two spherical tumors C and D. Given the corresponding ultrasound image



Figure 4.35 Illustrations for Exercise 4.21.

shown on the right, what can you deduce about the acoustic characteristics of components A, B, C, and D?

4.22 Given the ultrasound data in the table below, sketch the B-mode scan that would be obtained from the linear sequential array in Figure 4.36 (a quantitative analysis is NOT needed; ignore any refraction of the ultrasound beam).

Tissue     Z (×10^5 g cm−2 s−1)   c (m s−1)   μ (dB cm−1)
Muscle     1.7                    1540        1
Fat        1.4                    1450        1.7
Liver      1.6                    1570        1.5
Tumor 1    1.7                    1540        1
Tumor 2    1.9                    3080        1
Tumor 3    10                     1540        5
Tumor 4    1.9                    1540        20
Tumor 5    1.9                    770         1
Tumor 6    1.9                    1540        1

4.23 The three ultrasound images in Figure 4.37 are of the same object. Explain which operating parameter changes from image (a) to image (b) to image (c).

4.24 In a particular real-time imaging application the transducer moves through a 90° sector with a frame rate of 30 frames per second, acquiring 128 lines


Figure 4.36 Illustration for Exercise 4.22: a linear sequential array over layers of muscle, fat, and liver (layer thicknesses of 1 and 1.5 cm) containing tumors 1–6.

Figure 4.37 Illustrations for Exercise 4.23.

of data per frame. If the image is acquired up to a depth of 20 cm, and the lateral resolution of the beam width at this depth is 5 mm, calculate the effect of transducer motion on overall image blurring, i.e. is it the dominant factor?

4.25 Look at Huang et al. [3], choose one technique, explain how it works, and what the advantages and disadvantages are compared to standard phased array B-mode scanning.

Section 4.7

4.26 How does the frequency profile of the ultrasound beam change as it passes through tissue? How does this affect the lateral resolution?

4.27 Consider a focused transducer with a radius of curvature of 10 cm and a diameter of 4 cm. This transducer operates at a frequency of 3.5 MHz, and


transmits a pulse of duration 0.857 μs. What is the axial and lateral resolution at the focal point of the transducer?

4.28 If the axial and lateral resolution were to be improved by a factor of two from those calculated in Exercise 4.27, what physical or operating parameters could be changed?

4.29 Would a high-Q transducer have higher or lower side lobes than a low-Q transducer? Explain why, and how this affects image quality.

4.30 Draw the corresponding beam pattern to that shown in Figure 4.10 for a transducer operating at double the frequency. Note all of the frequency-dependent changes in the beam pattern.

Section 4.8

4.31 Sketch the shape of the acoustic shadowing artifact produced from compound scanning.

4.32 The well-known reverberation artifact occurs when a strongly reflecting boundary within tissue is close to the transducer. Assume that there is a 2 cm thickness of muscle in front of the ribs. Given that Z_crystal is 33 × 10^5, Z_muscle is 1.7 × 10^5, and Z_bone is 7.8 × 10^5 g cm−2 s−1, the speed of sound in muscle is 1540 m s−1, and the attenuation coefficient of muscle is 1 dB cm−1, calculate the time gain compensation (units of dB per microsecond) that must be used to make the intensity of each of the reverberation signals the same.

Section 4.9

4.33 Sketch the Doppler spectral patterns at points 1, 2, and 3 in the stenotic artery shown in Figure 4.38a.

4.34 On the same scale as for Exercise 4.33, sketch the Doppler spectral plots for the situation in Figure 4.38b in which the angle between the artery and the phased array transducer is altered.

4.35 Show that the effects of a fixed error in the estimated angle between the transducer and the direction of flow in Doppler imaging are minimized by using a small value of the angle.


Figure 4.38 Illustrations for Exercises 4.33 and 4.34.

4.36 Show that the outputs of the quadrature detector covered in Section 4.10.4 allow differentiation between positive and negative blood velocity by deriving the equations for s_I(t) and s_Q(t) in the main text.

Section 4.10

4.37 In a pulsed Doppler scan an ROI had been determined to be between 7 and 9 cm deep in the tissue. At what time should the receiver be turned on after the end of the pulse, and for how long should the receiver be gated on? The ultrasound pulse consists of five cycles at a central frequency of 5 MHz.

References

1 W. Lee and Y. Roh, Ultrasonic transducers for medical diagnostic imaging, Biomed. Eng. Lett. 7(2):91–97 (2017).
2 T. Saul, S. D. Siadecki, R. Berkowitz, G. Rose, D. Matilsky, and A. Sauler, M-mode ultrasound applications for the emergency medicine physician, J. Emerg. Med. 49(5):686–692 (2015).
3 Q. Huang and Z. Zeng, A review on real-time 3D ultrasound imaging technology, Biomed. Res. Int. 2017:6027029 (2017).
4 L. Demi, Practical guide to ultrasound beam forming: beam pattern and image reconstruction analysis, Appl. Sci. 8(9):1544 (2018).


5 F. W. Kremkau, Your new paradigm for understanding and applying sonographic principles, J. Diagn. Med. Sonogr. 35(5):439–446 (2019).
6 S. J. Kwon and M. K. Jeong, Estimation and suppression of side lobes in medical ultrasound imaging systems, Biomed. Eng. Lett. 7(1):31–43 (2017).
7 M. Baad, Z. F. Lu, I. Reiser, and D. Paushter, Clinical significance of US artifacts, Radiographics 37(5):1408–1423 (2017).
8 J. L. Diaz-Gomez, P. H. Mayo, and S. J. Koenig, Point-of-care ultrasonography, New Engl. J. Med. 385(17):1593–1602 (2021).
9 K. Brenner, A. S. Ergun, K. Firouzi, M. F. Rasmussen, Q. Stedman, and B. Khuri-Yakub, Advances in capacitive micromachined ultrasonic transducers, Micromachines 10(2):192 (2019).
10 M. Z. Chen, Q. Z. Zhang, X. Y. Zhao, F. F. Wang, H. L. Liu, B. C. Li, X. F. Zhang, and H. S. Luo, Design and analysis of piezoelectric micromachined ultrasonic transducer using high coupling PMN-PT single crystal thin film for ultrasound imaging, Smart Mater. Struct. 30(5):055006 (2021).

Further Reading

Original Papers

J. J. Wild, The use of ultrasonic pulses for the measurement of biologic tissues, and the detection of tissue density changes, Surgery 27, 183 (1950).
D. H. Howry and W. R. Bliss, Ultrasonic visualization of soft-tissue structures of the body, J. Lab. Clin. Med. 40, 579 (1952).
I. Edler and C. H. Hertz, The use of an ultrasonic reflectoscope for the continuous recording of the movement of heart walls, Kungl. Fysiogr. Sällskap. i Lund Föhandl. 24, 40 (1954).
S. Satomura, Ultrasonic Doppler method for the inspection of cardiac functions, J. Acoust. Soc. Am. 29, 1181 (1957).
J. C. Somer, Electronic sector scanning for ultrasonic diagnosis, Ultrasonics 6, 153 (1968).
D. W. Baker, Pulsed ultrasonic Doppler blood-flow sensing, IEEE Trans. Son. Ultrason. SU-17, 170 (1970).
B. Schrope, V. L. Newhouse, and V. Uhlendorf, Simulated capillary blood flow measurement using a nonlinear ultrasonic contrast agent, Ultrason. Imaging 14, 134 (1992).
B. Schrope and V. L. Newhouse, Second harmonic ultrasound blood perfusion measurement, Ultrasound Med. Biol. 19, 567 (1993).


Books

Physical Principles of Acoustics and Ultrasound
H. Azhari, Basics of Biomedical Ultrasound for Engineers, Wiley-IEEE Press, Hoboken, USA (2010).
P. R. Hoskins, K. Martin, and A. Thrush (Eds), Diagnostic Ultrasound: Physics and Equipment, 3rd ed., CRC Press, Boca Raton, USA (2019).
F. W. Kremkau, Sonography Principles and Instruments, 10th ed., Saunders, Philadelphia, USA (2020).

Flow Measurements Using Ultrasound
D. H. Evans and W. N. McDicken, Doppler Ultrasound: Physics, Instrumentation and Clinical Applications, 2nd ed., John Wiley, Hoboken, USA (2000).
K. K. Shung, Diagnostic Ultrasound: Imaging and Blood Flow Measurements, 2nd ed., CRC Press, Boca Raton, USA (2015).

Safety
S. B. Barnett and G. Kossoff (Eds), Safety of Diagnostic Ultrasound, Parthenon Publishing Group, Nashville, USA (1998).

Point-of-Care Ultrasound
N. J. Soni, R. Arntfield, and P. Kory, Point of Care Ultrasound, 2nd ed., Elsevier, Amsterdam, NL (2019).

Review Articles
W. Lee and Y. Roh, Ultrasonic transducers for medical diagnostic imaging, Biomed. Eng. Lett. 7, 91–97 (2017).
Q. Huang and Z. Zeng, A review on real-time 3D ultrasound imaging technology, Biomed. Res. Int. 2017, 6027029 (2017).
J. Seo and Y.-S. Kim, Ultrasound imaging and beyond: recent advances in medical ultrasound, Biomed. Eng. Lett. 7, 57 (2017).
A. A. Oglat, M. Z. Matjafri, N. Suardi et al., A review of medical Doppler ultrasonography of blood flow in general and especially in common carotid artery, J. Med. Ultrasound 26, 3 (2018).
P. Frinking, T. Segers, Y. Luan, and F. Tranquart, Three decades of ultrasound contrast agents: a review of the past, present and future improvements, Ultrasound Med. Biol. 46, 892 (2020).
H. Yusefi and B. Helfield, Ultrasound contrast imaging: fundamentals and emerging technology, Front. Phys. 10, 791145 (2022).


Specialized Journals
Eur. J. Ultrasound
IEEE Trans. Ultrason. Ferroelectr. Freq. Control
J. Clin. Ultrasound
Ultrasound Med. Biol.
J. Acoust. Soc. Am.


5 Magnetic Resonance Imaging

5.1 General Principles of MRI Acquisition and Hardware

Magnetic resonance imaging (MRI) is a modality that produces a spatial map of the hydrogen nuclei in the particular body part being scanned. These hydrogen nuclei occur primarily in tissue water (∼70% of body weight) and lipid. The signal intensity in each voxel of the MR image depends on many different physical characteristics of the tissue, including the number of hydrogen nuclei within the voxel, the tissue viscosity, how fast water diffuses in the tissue, and whether there is blood flow or perfusion. This complicated signal intensity dependence has both advantages and disadvantages. On the one hand, it means that there is a wealth of information on tissue integrity and health that can be analyzed. On the other hand, it means that a single-contrast image is difficult to interpret on its own, and therefore a typical MRI scanning protocol might acquire between four and eight different contrasts, which makes the entire scanning session last on average ∼30 min. The major uses of MRI are in the areas of neurological disease, spinal disorders, and musculoskeletal damage. The main strengths of MRI as a clinical imaging modality include: (i) it uses nonionizing electromagnetic waves, (ii) images can be acquired in any three-dimensional orientation (including oblique scans), (iii) soft-tissue contrast is very high, and (iv) high spatial resolution images (